New Research Presented at Politecnico di Milano Reveals That Transparency About AI-Generated Content Triggers a Trust Penalty, Creating a Regulatory Dilemma at the Heart of Europe's Disclosure Mandates
In Brief: A forthcoming study, to be presented at Politecnico di Milano, demonstrates that when companies disclose AI authorship of corporate social responsibility messages, consumers perceive less effort behind the communication, which in turn reduces perceived sincerity and brand trust. The findings pose uncomfortable questions for policymakers implementing the EU AI Act's transparency requirements: mandated disclosure may inadvertently punish the very honesty regulators seek to encourage.
The tension between transparency and trust is precisely the kind of question that deserves sustained attention. For those shaping Europe's approach to AI governance, Human x AI Europe in Vienna on May 19 offers a space to work through these contradictions together.
The Effort Inference Problem
Open a corporate sustainability report and notice where attention goes. The eye scans for signals: specificity of commitments, quality of prose, evidence of genuine engagement with complexity. These signals function as proxies for something harder to measure directly. They indicate whether someone cared enough to think carefully about what they were saying.
This is the mechanism that new research from Kristina Klein, Professor of Marketing and Consumer Behaviour at the University of Bremen, illuminates with uncomfortable precision. Across two pretests and two experiments, Klein's work demonstrates that disclosing AI authorship of corporate social responsibility (CSR) communications triggers a specific cognitive response: consumers infer that less effort went into the message.
The chain reaction that follows is predictable but consequential. Lower perceived effort leads to lower perceived sincerity. Lower perceived sincerity leads to diminished brand trust. The transparency that regulators mandate becomes, in effect, a trust tax.
What Attribution Theory Reveals
Klein's research draws on attribution theory, the psychological framework explaining how people assign causes to observed behaviors. When consumers encounter a message, they automatically ask: what motivated this communication? The answer shapes their response.
A human-authored CSR message, even if consumers suspect some degree of strategic calculation, carries an implicit signal: someone invested time, thought, and care into crafting these words. The effort itself becomes evidence of commitment. Generative artificial intelligence (GenAI), the category of AI systems capable of producing text, images, and other content, disrupts this inference. If a machine can produce the message in seconds, what does that say about the organization's actual investment in the values being communicated?
The research identifies perceived communicative effort as the novel cognitive mechanism linking disclosure to trust outcomes. This is not about whether the content is accurate or well-written. The same words, disclosed as AI-authored, trigger different attributions than when their origin remains ambiguous.
The Regulatory Collision Course
The timing of this research matters. The EU AI Act establishes transparency obligations requiring disclosure when AI systems generate content that could be mistaken for human-created material. The intention is sound: people deserve to know when they are interacting with machine-generated content, particularly in contexts where authenticity matters.
But Klein's findings suggest the policy creates a paradox. Companies that comply with disclosure requirements may face competitive disadvantage against those who find ways to obscure AI involvement. The honest actors get punished; the evasive ones benefit.
This is not an argument against transparency. Rather, it is a diagnosis of what transparency alone cannot accomplish. Disclosure rules assume that information empowers consumers to make better decisions. What the research reveals is that disclosure also activates psychological mechanisms that may have nothing to do with the actual quality or sincerity of the communication.
The CSR Context Intensifies the Effect
The research focuses specifically on corporate social responsibility communication, and this choice is deliberate. CSR represents what Klein calls a moralized domain, a context where authenticity and genuine commitment are particularly salient to audience evaluation.
When a company communicates about environmental sustainability, labor practices, or community investment, consumers are already primed to detect insincerity. The greenwashing scandals of recent decades have trained audiences to approach such messages with skepticism. In this context, AI disclosure does not merely signal efficiency. It signals that the organization chose the fastest, cheapest path to producing words about values that supposedly matter deeply.
The contrast is almost too neat. A company claims to care about something important enough to communicate publicly. The same company delegates that communication to a system optimized for speed and scale rather than depth and care. The medium contradicts the message.
What This Means for Policymakers
For those drafting implementation guidance for the EU AI Act, Klein's research suggests several considerations.
First, disclosure requirements may need to be paired with frameworks that help organizations demonstrate genuine commitment through other channels. If the words themselves cannot carry the weight of authenticity when AI-authored, what other signals can? Investment in actual sustainability practices, third-party verification, stakeholder engagement processes: these become more important, not less, in an era of AI-generated communication.
Second, the research raises questions about whether all disclosure contexts are equivalent. The trust penalty may be more severe in moralized domains like CSR than in purely informational contexts. A weather report generated by AI may not trigger the same effort inference as a statement about corporate values. Regulatory frameworks might benefit from acknowledging these distinctions.
Third, there is a temporal dimension worth considering. Consumer responses to AI disclosure may shift as generative AI becomes more normalized. The current trust penalty may reflect a transitional moment where AI authorship still signals something unusual. Whether this effect persists, intensifies, or diminishes as AI-generated content becomes ubiquitous remains an open question.
The Deeper Question
Beneath the policy implications lies something more fundamental. The research reveals that consumers are not simply evaluating content for accuracy or persuasiveness. They are reading content for evidence of care.
This is worth sitting with. In an era of infinite content production, the scarcity that matters is not information but attention, not words but genuine engagement. When organizations can produce unlimited CSR messaging at near-zero marginal cost, what becomes valuable is precisely what cannot be automated: the willingness to invest limited resources in communication that reflects actual thought.
The paradox, then, is not really about AI disclosure. It is about what happens when efficiency gains collide with domains where efficiency was never the point. Corporate social responsibility communication is supposed to be costly. The cost is the signal.
Implications for the European AI Ecosystem
For startups building generative AI tools for corporate communication, the research suggests a market opportunity in the opposite direction: systems that help organizations demonstrate authentic engagement rather than merely producing content at scale. The value proposition shifts from "produce more content faster" to "produce content that carries credible signals of genuine investment."
For investors evaluating AI communication platforms, the findings indicate that pure efficiency plays may face headwinds as disclosure requirements take effect. The companies that thrive may be those that help clients navigate the sincerity paradox rather than ignore it.
For governance scholars, Klein's work offers a case study in how well-intentioned transparency mandates can produce unintended consequences. The lesson is not that transparency is wrong but that it is insufficient on its own: disclosure rules are a necessary but not a sufficient condition for the trust they aim to protect.
What Comes Next
The research will be presented at Politecnico di Milano on April 30, 2026, offering an opportunity for direct engagement with the findings. For those unable to attend, the implications are clear enough to act on now.
The question is not whether to disclose AI authorship. Regulation will increasingly require it. The question is what organizations do alongside disclosure to maintain the trust that transparency alone cannot guarantee.
The answer, almost certainly, involves demonstrating effort through channels that AI cannot replicate. Not because AI-generated content is inherently untrustworthy, but because trust in moralized domains has always depended on evidence of care. The technology changes. The psychology does not.
Frequently Asked Questions
Q: What is the main finding of the AI disclosure research being presented at Politecnico di Milano?
A: The research demonstrates that disclosing AI authorship of corporate social responsibility messages reduces perceived communicative effort, which in turn diminishes perceived sincerity and brand trust. This creates a "trust penalty" for transparent disclosure.
Q: How does the EU AI Act require disclosure of AI-generated content?
A: The EU AI Act establishes transparency obligations requiring organizations to disclose when AI systems generate content that could be mistaken for human-created material, particularly in contexts where authenticity matters to audience evaluation.
Q: What is "perceived communicative effort" and why does it matter?
A: Perceived communicative effort refers to the thought and care that audiences infer went into creating a message. The research identifies it as the cognitive mechanism linking AI disclosure to trust outcomes: when consumers learn content is AI-generated, they infer less effort was invested.
Q: Why is the trust penalty stronger for CSR communications specifically?
A: Corporate social responsibility represents a "moralized domain" where authenticity and genuine commitment are particularly important to audience evaluation. AI disclosure signals that the organization chose efficiency over care in communicating about values that supposedly matter deeply.
Q: What can companies do to maintain trust while complying with AI disclosure requirements?
A: Organizations can demonstrate genuine commitment through other channels: investment in actual sustainability practices, third-party verification, stakeholder engagement processes, and other signals of care that cannot be automated or produced at scale.
Q: When and where will this research be presented?
A: Professor Kristina Klein will present the research at a lunch seminar at Politecnico di Milano on April 30, 2026, at Campus Bovisa, Aula 0.19 Edificio BL26, via Lambruschini 4/B, Milano.