Ethical use of generative AI in internal communications: Best practices

In the first two blog posts in our series on the ethical use of AI in internal communications, the focus was on the regulatory landscape and how it informs when and how we should declare the use of AI. In this final blog post in the series, I’d like to take a step back and share some thoughts on the best practices I believe internal communicators should adopt as they embed the use of AI.

The thinking behind these best practices is based on the regulations and ethical guidelines I described in part one, our experience developing our own generative AI solution, Dr SWOOP, and everything we’ve learned from providing analytics to internal communicators for more than a decade:

  1. Keep human oversight front and centre: Always have a human in the loop. Use AI to assist, not to take over autonomously. A human communicator should review and take responsibility for any AI-generated content before it reaches employees. This means fact-checking, editing for tone, and ensuring the message aligns with company values. Human oversight is not just a safety net; it’s essential for accountability and credibility.

  2. Develop a clear AI usage policy: Establish guidelines on how your team will use AI and when to disclose it. For example, the policy might state: “AI may be used for initial drafting or content ideas, but all content is reviewed by a human. We will inform employees when content is largely AI-generated or when a message is authored by AI.” By setting these rules, you ensure consistency. Everyone on the team will then know, for instance, that if they use AI to write 80% of an article, they should include the agreed-upon disclosure note. This policy can be shared with employees at a high level, reinforcing trust that AI is used thoughtfully.

  3. Be transparent when AI plays a major role: If an AI tool generates significant portions of a communication or answers employees’ questions automatically (as with an AI chatbot), let the audience know. You can do this unobtrusively, such as with a parenthetical “(AI-assisted)” in the headline or a line at the end. The disclosure should be in plain language (e.g., “with AI assistance”). Transparency isn’t needed for every tiny AI suggestion, but for whole drafts or automated responses it’s the ethical choice. One approach is to create a small icon or label for AI-generated content on your intranet or internal blog, so it’s consistently marked.

  4. Preserve authentic voice and quality: Don’t let AI dilute the human touch. Even if AI drafts something, edit it to add the personal or contextual details that only a human would know. This maintains the authenticity employees expect. Content should feel genuine and tailored, not generic. Also, rigorously check AI content for errors or bias (AI can occasionally produce incorrect or biased outputs). Ensuring accuracy and appropriateness is non-negotiable; disclosed or not, the content reflects on your organisation. If the AI’s draft isn’t up to par, invest the time to fix it or don’t use it at all.

  5. Protect data and confidentiality: When using AI, especially third-party services, be mindful not to input sensitive company or personal data unless you’re using a secure, approved system. Ethical use includes data ethics. Many companies restrict the use of public AI tools (like ChatGPT) with internal documents. Follow your IT and legal guidance here. If you have an in-house AI solution for writing, even better. Being able to say, “We use AI internally with full data security”, can reassure any employees concerned about privacy.

  6. Educate your team and leaders: Provide training or briefings to the communications team and leadership about what generative AI can and cannot do. Make sure everyone understands the importance of not over-relying on it and of being transparent. Leaders whose names appear on AI-generated communications should be especially aware of the process and comfortable with any disclosure. It’s wise to brief executives, for example: “We sometimes use AI to draft your town hall summaries, but we always edit them, and we plan to note AI assistance if the draft was mostly AI.” Getting leadership buy-in on that approach prevents surprises.

  7. Monitor employee feedback: Keep an ear out for how employees are reacting to AI-influenced communications. Are they noticing or commenting on tone changes? Does transparency about AI usage raise questions? Ideally, solicit feedback with a simple poll, such as: “Would it concern you if some of our newsletters were drafted with AI (with human oversight)?” This data can guide how you adjust your practices. It also continues the transparent conversation about AI and shows that the company values employees’ opinions on new technology in the workplace.

Implementing these practices will help ensure AI is a positive addition to your internal communication strategy, not a source of mistrust or confusion. Essentially, treat AI as a powerful assistant, but one that you openly acknowledge and manage. By doing so, you not only respect ethical norms and emerging laws but also maintain the integrity of your communication function’s relationship with employees.


Final words

Generative AI can be a boon for internal communications, offering speed and support in content creation. However, “with great power comes great responsibility”: specifically, the responsibility to use AI ethically and transparently. The guidance from the EU AI Act, OECD, UNESCO, and various industry bodies converges on a clear principle: if AI is substantially involved in creating content, people should be informed. Within an organisation, this means internal communicators should declare AI’s involvement when it goes beyond trivial assistance.

There is indeed a difference in degrees of AI use. Using AI merely as a thesaurus or brainstorming buddy doesn’t need an announcement. Using it to draft an email that you then refine is a grey zone. But letting AI write most of a message (even in the CEO’s tone) and presenting it as purely human-crafted is not advised; it’s in those cases that a disclaimer or clear communication about AI usage is essential to uphold trust.

Ultimately, declaring the use of AI when appropriate isn’t a sign of weakness or an apology, but a sign of integrity. It says that the organisation values honesty over any short-term polish that secret AI help might provide. As the workplace and technology evolve, maintaining that ethical stance will keep internal communication credible. The tone, trust, and transparency of our communications must remain distinctly human, even as we leverage AI as a tool. By remembering that, communicators can ensure AI works for them while retaining the respect of their audience.

Cai Kjaer, CEO, SWOOP Analytics.


Disclaimer: I used AI to help me research this blog post series. I also used AI to test my thinking, to avoid missing key areas of importance and to review my draft text for clarity of thought.


Interested in reading more about AI? Download our guide How to get your intranet ready for AI.

 