Ethical use of generative AI in internal communications: Transparency and disclosure
Generative AI is increasingly used to help create internal communications, from intranet articles and leadership emails to presentation materials. While AI can boost efficiency and provide creative inspiration, it raises an important question: When should communicators disclose that content was AI-generated or AI-assisted?
This is a question I’ve spent a lot of time thinking about, because at SWOOP Analytics we provide data and insights for internal communicators and I’ve personally been deeply involved in building our own Gen-AI chatbot.
To help shed light on the ethical use of Gen-AI for internal communicators, I have written a series of three blog posts that attempt to answer that question. The series covers:
The regulations and ethical guidelines that outline key principles for the ethical use of AI in internal communications.
When to declare AI use: AI-assisted versus fully AI-generated content.
Best practices for ethical AI use in internal communications.
Let’s get started with part 1!
Regulations and ethical guidelines emphasising transparency
I think it is essential for internal communicators to know that AI use isn’t a free-for-all when it comes to disclosure: several leading institutions have developed regulations or guidelines to ensure ethical and transparent use.
In Australia, the UK and the US there is currently no legislation I could find that forces organisations to disclose the use of AI, but in Europe the new EU AI Act means that if your AI-generated content is viewed there, the transparency requirements kick in. If you run internal comms for a global company you’ll most likely have employees or contractors in the European Union, so it would be very prudent to be on top of this.
In addition to regulation, a number of guidelines and principles have been agreed on, and transparency is a common principle across core AI laws and ethics frameworks. Here is my attempt at outlining the key ones, but I’d recommend talking with your legal team to get guidance that applies to your specific context.
EU AI Act: This EU regulation explicitly requires labelling AI-generated content in certain cases. If an AI system produces text intended to inform people (for example, a news article), the content must be clearly marked as AI-generated unless a human editor with editorial responsibility has reviewed and accepted it. In practice, that means an AI-written piece published “as is” needs disclosure, whereas AI-assisted content that a human edits and takes responsibility for may not require a label by law. The goal is to ensure people “clearly and distinguishably” know when they’re seeing AI output. It also mandates that users be aware when they are interacting with an AI system (like a chatbot) rather than a human.
OECD AI Principles: These international guidelines, adopted by dozens of countries, highlight transparency as a cornerstone of trustworthy AI. They state that AI actors should “provide meaningful information, appropriate to the context” to make stakeholders aware when they are interacting with AI systems, including in the workplace. In other words, if employees are receiving information or engaging with a system heavily influenced by AI, they should be informed.
UNESCO’s Recommendation on the Ethics of AI: This global agreement also underscores transparency. It asserts that people should be fully informed when decisions or content affecting them are based on AI algorithms. The intent is to uphold human rights and autonomy, as hidden AI influence could undermine those rights. For internal communications, this implies that if AI played a major role in shaping a message that influences employees’ understanding or decisions, disclosing that use aligns with UNESCO’s ethical guidance.
Council of Europe AI Convention: The world’s first binding AI treaty similarly insists on transparency and human oversight. It includes provisions that a person should be notified if they are interacting with an AI (as opposed to a human) and promotes “transparency and oversight” for AI systems, especially where human rights are concerned. An internal email written entirely by AI and sent under a CEO’s name, for instance, touches on these themes. Ethically, employees have a right to know it wasn’t personally written by the CEO.
Australia’s AI Ethics Framework: One of its eight principles is “Transparency and explainability.” It calls for responsible disclosure so that people “can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.” In a workplace context, if AI was used to generate content that employees rely on (policy FAQs, HR announcements, etc.), this principle suggests employees should be able to know about that AI involvement.
G7 Hiroshima AI Process: The G7 nations have discussed a Code of Conduct for AI developers, which includes transparency commitments. While more focused on industry behaviour, it reinforces that leading democracies see transparency as vital for AI governance.
In summary, across multiple frameworks (EU, OECD, UNESCO, Council of Europe, G7, Australia), the message is consistent: if AI is used in a way that influences people, transparency is ethically (and increasingly legally) required. Hiding AI’s role is discouraged because it can mislead people and erode trust. These guidelines provide a strong foundation for internal communicators when making policies about AI-generated content.
Industry principles and professional guidelines
Beyond laws and high-level ethics, the communications and marketing industry has its own standards addressing AI. Several professional bodies have proactively issued guidelines to maintain trust:
IABC Guidelines for the ethical use of AI: The International Association of Business Communicators reinforces human oversight, accuracy verification, privacy protection, and bias mitigation in AI-generated content. Communicators are urged to transparently disclose AI involvement and maintain accountability for content integrity.
IoIC’s AI Ethics Charter: The Institute of Internal Communication offers practical guidance for ethical AI use, emphasising trust, inclusivity, safety, and lawful operation. Its principles encourage continuous human-led stewardship, active governance, and clear transparency in AI-generated communications.
ICCO’s “Warsaw Principles” for AI in PR: The global PR association (ICCO) ratified principles stating, for example, that “Advice is always human,” meaning AI should support, not replace, the human judgment of PR professionals. Critically, the principles call for “Transparency, Disclosure, and Authenticity.” Practitioners are expected to “transparently and proactively disclose when generative AI is used to create purely artificial content that might distort reality.”
FEDMA Ethical AI Charter: The Federation of European Data and Marketing’s charter likewise emphasises trust, transparency, and integrity. It advises marketers to operate AI systems with transparency and avoid any use of AI that would manipulate or mislead consumers. While this is aimed at customers, the same ethical logic applies to employees: communications driven by AI should never mislead the workforce. Being open about AI use is framed as a way to “build trust” with your audience.
Asilomar AI Principles: These influential AI guidelines (from the Future of Life Institute) include principles on responsibility and transparency. They imply that those deploying AI should be accountable for it and that AI’s design and impacts should not be a black box. For communicators, this translates to maintaining accountability: if you use AI to draft a message, you (the human) are responsible for the result. It also means not hiding AI’s role, especially if doing so would mislead; transparency is linked to integrity.
Other Communications Industry Insights: Leaders in internal comms I have spoken with have stressed maintaining a “human touch” even with AI. For instance, communications experts note that purely AI-generated text can lack the empathy and context that humans bring. There’s an emerging consensus that AI can be a powerful aid (for efficiency, writer’s block, personalisation, etc.), but final content should feel human and honest.
In aggregate, both the legal/regulatory angle and the professional ethics angle point to a key principle: don’t deceive your audience about AI involvement. Use AI as a tool, but ensure the audience knows when content is significantly AI-created. Authenticity and honesty are core to effective communication, and AI doesn’t change that.
In the next blog post I’ll outline where and when internal communicators should disclose the use of AI.
Cai Kjaer, CEO, SWOOP Analytics.
Disclaimer: I used AI to help me research this blog post series. I also used AI to test my thinking, to avoid missing key areas of importance, and to review my draft text for clarity of thought.
Interested in reading more about AI? Download our guide How to get your intranet ready for AI.