Ethical use of generative AI in internal communications: When to declare AI use
This is part two in our series on the ethical use of AI for internal communicators. In the first blog post I described the regulatory environment; in this one we turn our focus to determining when and where to declare the use of AI.
Not every use of AI in writing needs a public declaration. The need for disclosure increases with the level of AI’s involvement and the risk of misperception. We can think of a spectrum:
Minimal AI assistance (inspiration, outlining, grammatical help)
If AI’s role is limited to suggesting ideas, improving wording, or correcting grammar (effectively an evolution of spell-check and thesaurus), then no disclosure to the audience is typically needed. In these cases, a human is the primary author, and the AI influence is minor. For example, an internal communication specialist might ask Copilot or ChatGPT for creative ways to start a difficult email, or to brainstorm titles for an intranet article. The final content, however, is written by the human, and the AI’s contribution is indistinguishable from normal editorial input. Ethically, this is akin to using any productivity tool or consulting a style guide; it doesn’t warrant a special call-out. (Of course, within the team, one should still vet that AI suggestions are correct and suitable, but employees reading the email don’t need a footnote about the writer having used AI for inspiration.)

Moderate AI assistance (AI-generated drafts with human editing)
This is a common scenario: an AI system like Copilot or ChatGPT generates a first draft or chunks of text, and a human communicator then reviews, edits, and customises the content. Here, the AI’s role is significant: it provides the raw material, while the human curates the output. Whether to disclose in this case can depend on context. If the communication is routine (say, a weekly team update) and the human editor ensures the tone and facts are correct, explicit disclosure each time might not be necessary. The human has essentially “co-authored” the content and stands behind it. Some guidelines (like the EU AI Act’s provisions) suggest that when a human has “editorial control,” the requirement to label AI-generated text might not apply.

However, a general transparency statement is still wise. For instance, an organisation could let employees know broadly: “We use AI tools to help draft some of our communications, under human review.” This proactive transparency covers the practice without needing to flag every single message. If the content is sensitive or leadership-facing (e.g., a CEO’s message crafted with AI help), it may be prudent to lean into more transparency, perhaps noting at the bottom of a long memo: “Prepared with assistance from an AI writing tool.” This signals honesty and can pre-empt any gossip if employees suspect AI involvement. The key is that the human editor takes responsibility for the final text, so the voice, accuracy, and intent are human-guaranteed. In these collaborative cases, transparency is a judgment call: it can enhance trust, but if overdone (labelling every minor AI tweak) it might unnecessarily alarm or confuse people.

Predominantly or fully AI-generated content
If an entire message or document is essentially written by AI with minimal human modification, disclosure is strongly advisable. At this end of the spectrum, passing off content as human-written when it isn’t crosses into ethically dubious territory. Employees could feel deceived if they later learn an “all-staff email from HR” or a “Q&A document” was mostly churned out by a bot. Furthermore, fully AI-generated content might contain subtle errors or off-tone phrases that alert savvy readers that something is off. It’s better to be upfront and write something like: “This FAQ was generated by an AI system and reviewed by our team for accuracy.” Such a statement can be placed discreetly (fine print or an asterisk), but it fulfils the duty of honesty. According to the IABC AI Principles, members must “not attempt to hide or disguise the use of AI in my professional output”. Even beyond ethics, consider practical accountability. If the AI-written content later turns out to have a mistake or causes confusion, having disclosed its AI origin makes it easier to discuss corrections (“We’ll adjust our AI tool settings and improve review” rather than betraying trust). On the other hand, not disclosing and then having an issue can lead to embarrassment and loss of credibility (“Did our leaders really understand what they sent us?”).
To gauge the need for disclosure, internal communicators can ask: “Would the average employee assume this content was written entirely by a person? If they found out it wasn’t, would they feel misled or think differently about it?” If the answer to these questions is “yes”, lean toward disclosure.
Another factor is whether the content is attributed to a specific leader or department. If an email is signed by the CEO but was 90% AI-produced, that’s a high-risk scenario for trust, and transparency is recommended (either refrain from heavy AI use there or clearly involve the CEO in editing and approving it so it’s genuinely their message). If the content is more generic (e.g., an IT helpdesk knowledge article), employees might care less, but it’s still good practice to label AI-generated knowledge base content so users know to double-check critical details.
In practical terms, disclosure can be as simple as a line in a preface or footer, for instance: “Note: This content was created with the assistance of AI”, or an icon indicating AI involvement. The point is not to scare or alienate the reader, but to respect their right to know the nature of the content.
Here at SWOOP Analytics we use AI within our product to help people interpret and take action on the analytics they are presented with. To help our audience know exactly how it is applied, we have published an article that outlines both how we use AI and the considerations and actions we take to ensure its ethical use.
Why transparency matters: Trust and employee reactions
The decision to declare AI use isn’t just about following rules or principles; it directly impacts employee trust and the credibility of communications. Internal communication thrives on a foundation of trust and authenticity. When employees read a message from leadership, they invest a level of trust that the words are sincere and representative of the leader’s views. If that turns out not to be the case (because an AI actually wrote most of it), that trust can be shaken.
Trust and authenticity: In communication, authenticity is paramount; it’s hard for an AI to replicate genuine human empathy or lived experience. Employees can often tell when language feels formulaic or impersonal. If communications start to read like generic AI output, employees may grow sceptical or disengaged. Conversely, if employees know a message is AI-assisted yet still find it relevant and authentic (because a human guided it), they may appreciate the honesty and the effort to keep content quality high. Being transparent can actually increase trust, because the organisation is seen as honest about its practices and not trying to pull the wool over anyone’s eyes.
Avoiding a “deception” backfire: Failing to disclose significant AI involvement can lead to feelings of deception if people later learn the truth. Imagine an employee finds out via an IT slip or an external news piece that the CEO’s inspirational monthly letters were mostly drafted by ChatGPT. They might feel the personal connection was fake, and this can breed cynicism (“Do they truly care, or am I just reading auto-generated words?”).
Such scenarios have played out in the media. For example, when a major publication was caught using AI-generated articles under fake author names, the backlash was severe and credibility suffered. Douglas Kahn pointed me to a news article from April 2024 where Netflix came under fire for including undisclosed AI imagery in a true crime documentary. More recently, Vogue ended up in controversy for featuring an AI-generated model in an advertisement, even though the AI use was disclosed (albeit easily overlooked).
Nadine Stokes also pointed me to a 2023 story published by CNN where Vanderbilt University’s Peabody School had used ChatGPT to write an email to students about a mass shooting at another university. According to the CNN article, the school’s email ended with: “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023”. Following an outcry from students about the use of AI to write a letter about a human tragedy, the associate dean of Peabody had to send an apology note the next day.
These examples show what happens when the audience feels misled. While internal comms might not face public scandal, the internal fallout of lost trust is just as real. It can damage morale and people’s willingness to read future messages with an open mind.

Employees’ desire for human contact: Internal messages often carry emotional weight, e.g. announcing changes, celebrating successes, addressing crises. Employees generally expect and value that those messages come from leadership’s heart and mind. If AI is used in these moments, it must be done carefully and with transparency. A heartfelt note about layoffs or a congratulatory email for a hard project will land very poorly if employees suspect it was auto-generated. Many organisations already commit to not using AI for sensitive communications. For less sensitive but still important communications, using AI in a supporting role is fine, but acknowledging it (when AI did a lot of the writing) shows respect for the audience. It tells employees: “We use modern tools, but we don’t intend to fool you; we stand by what we send.”
Accountability and openness: Transparency also signals that the organisation remains accountable. If an AI-written message contains a mistake or off-tone remark, owning up to AI involvement makes it easier to address (“We apologise. An AI tool introduced this error, and we missed it in review”). If the AI use was hidden, any error might lead employees to question the competence of the communicators or leader, until the AI cause is revealed, at which point the hidden use becomes the story. Openness heads off that worst-case scenario. It also invites employees to give feedback. Suppose an employee finds an AI-crafted FAQ answer unhelpful; if they know it was AI-crafted, their feedback might be: “Could the team review the AI’s answer on X? It wasn’t clear.” If they don’t know, they might just conclude the company isn’t good at providing answers. Thus, transparency keeps the channel open for constructive feedback and continuous improvement of AI use.
In summary, disclosing AI use when appropriate is part of treating employees as stakeholders who deserve honesty. Just as companies disclose other material information internally (such as changes in policy or who authored a report), disclosing that “this content was AI-assisted” in the right situations respects employees’ intelligence and agency. Most will appreciate the candour, and it pre-emptively builds trust rather than risking a breach of trust later. As one communications principle goes, “trust is hard to build, easy to lose.” Being honest about AI is a small step that can go a long way in preserving that trust.
In the next, and final, blog post in this series I’ll round things out by summarising what I have called “best practices” for the ethical use of AI in internal communications. Maybe they should be called “emerging practices”, as this is still a rapidly developing area. Stay tuned!
Cai Kjaer, CEO, SWOOP Analytics.
Disclaimer: I used AI to help me research this blog post series. I also used AI to test my thinking, to avoid missing key areas of importance and to review my draft text for clarity of thought.
Interested in reading more about AI? Download our guide How to get your intranet ready for AI.