A Journalist Discovers Her Name Is Being Used to Sell AI Editing Advice She Never Gave. The Legal Case Is Straightforward. The Underlying Debate Is Not.
Julia Angwin has spent decades building a reputation as an investigative journalist. She founded The Markup, a nonprofit news organization focused on technology's impact on society. She writes opinion pieces for The New York Times. Her professional identity – the credibility attached to her name – represents years of accumulated expertise.
Last week, she discovered that Grammarly was selling that identity as a product feature, without ever asking her permission.
The writing software's Expert Review tool offered users AI-generated editing suggestions attributed to real writers and academics – Angwin among them, alongside Stephen King, Neil deGrasse Tyson, and hundreds of others. None had consented. None had been contacted. The feature simply appropriated their names and reputations to add credibility to machine-generated advice.
Angwin is now the lead plaintiff in a class action lawsuit filed in the Southern District of New York against Superhuman, the tech company that owns Grammarly. The legal claim is relatively narrow: New York and California law prohibit commercial use of a person's name and likeness without permission. As her attorney Peter Romer-Friedman told WIRED, "Legally, we think it's a pretty straightforward case."
But the questions this case surfaces are anything but straightforward. They touch on something that matters deeply to anyone whose professional value depends on expertise, reputation, or skill: What happens when AI companies can simulate your professional identity and sell it?
The Facts of the Case
The timeline deserves attention. Grammarly launched its Expert Review feature in August 2025 as part of a broader suite of AI-powered tools. According to Mashable, the feature was available on both free and paid subscription tiers, promoted as providing "feedback on the content of their writing" drawn from "insights from subject-matter experts and trusted publications."
Users could select specific authors whose "expertise" they wanted applied to their text. The tool would then generate editing suggestions attributed to those individuals.
The disclaimer buried in the user guide acknowledged that references to experts "are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities." But the same documentation also claimed the feature offered "insights from leading professionals, authors, and subject-matter experts" – a claim that appears difficult to reconcile with the fact that none of these experts had actually provided any insights.
When the feature came to public attention through WIRED's reporting, the backlash was swift. Grammarly's initial response was to offer an opt-out mechanism: affected writers could email the company to have their names removed.
This response generated its own criticism. Gaming journalist Wes Fenlon, whose persona was used in the tool, wrote on Bluesky: "Opt-out via email is a laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility."
Within days, Superhuman CEO Shishir Mehrotra apologized on LinkedIn, acknowledging the tool had "misrepresented" the voices of experts. The company disabled Expert Review entirely, stating it would "reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented – or not represented at all."
The lawsuit was filed the same day. It seeks damages in excess of $5 million, though that figure reflects a jurisdictional minimum; actual damages will be calculated based on the company's earnings from the tool.
What Kind of Disagreement Is This?
Here is where the conversation gets interesting – and where it tends to collapse into tribal positions if not handled carefully.
One framing treats this as a clear-cut intellectual property violation. Grammarly took something that belonged to these writers – their professional identities – and monetized it without consent. The legal remedy is obvious: stop doing that, and compensate those harmed.
Another framing treats this as an inevitable friction point in AI development. Large language models are trained on publicly available text. They learn patterns, styles, and approaches from that training data. Drawing a bright line around "identity" when the underlying technology works by synthesizing patterns from millions of sources may prove practically impossible.
A third framing focuses on the quality problem. Angwin described the AI's output as a "slopperganger" – a portmanteau of "doppelgänger" and "AI slop," the social-media shorthand for low-quality machine-generated content. The edits attributed to her were, in her assessment, "not good" and "actually making the sentences worse, more complex." The reputational harm isn't just that her name was used without permission; it's that her name was attached to advice she considers actively bad.
These three framings aren't mutually exclusive, but they point toward different policy responses. The first suggests strengthening existing personality rights and enforcement mechanisms. The second suggests that personality rights may need fundamental rethinking in an AI context. The third suggests that quality standards and accuracy requirements might matter as much as consent mechanisms.
The Structural Issue
What makes this case significant beyond its specific facts is what it reveals about a broader pattern.
Grammarly's Expert Review feature represents a particular business model: take the accumulated expertise and reputation of professionals, simulate it with AI, and sell access to that simulation. The professionals whose identities are being monetized receive nothing – not compensation, not credit, not even notification.
This model is not unique to Grammarly. It reflects a recurring dynamic in AI development: value created by human expertise over years or decades gets absorbed into training data, transformed into AI capabilities, and commercialized by companies that had no role in creating that original value.
The strongest version of the argument defending this practice would note that learning from publicly available information is how all knowledge transmission works. Writers learn from other writers. Editors develop their skills by studying how good editing works. Drawing a legal line that prevents AI systems from doing what humans do naturally could stifle beneficial innovation.
The strongest version of the argument opposing this practice would note that there's a meaningful difference between learning from someone's work and selling a product that claims to embody their expertise. The former is how knowledge spreads. The latter is commercial appropriation of identity.
Where do these positions break down? The learning-versus-appropriation distinction may be clear at the extremes but fuzzy in the middle. An AI writing assistant that has learned general editing principles from millions of texts is different from one that explicitly markets itself as channeling specific named individuals. But where exactly is the line?
What Comes Next
The lawsuit will proceed through the courts. According to the BBC, Angwin's legal team heard from more than 40 potential plaintiffs within 24 hours of filing. Superhuman has stated it will "strongly defend" against the claims, which it characterizes as "without merit."
Regardless of the legal outcome, the case has already achieved something: it has made visible a practice that was happening quietly and forced a public conversation about its legitimacy.
For policymakers and governance scholars, the case highlights gaps in existing frameworks. Personality rights laws were developed for a pre-AI context. They may need updating to address scenarios where AI systems can simulate professional identities at scale.
For startup leaders and investors, the case offers a cautionary example. The speed with which Grammarly moved from "we'll offer opt-out" to "we're disabling the feature entirely" suggests that the reputational costs of getting this wrong can materialize quickly.
For AI researchers, the case raises questions about how to build systems that benefit from human expertise without appropriating human identity. These are design questions as much as legal ones.
The question that lingers: If professional identity becomes something AI can simulate and companies can sell, what happens to the incentives that lead people to develop expertise in the first place?
That question doesn't have an obvious answer. But it's the right question to be asking.
These tensions between AI capability and professional identity aren't going away. They're becoming more acute as the technology advances. For those who want to engage with these questions seriously – not as tribal combat but as genuine problem-solving – Human x AI Europe convenes in Vienna on May 19. The room will be full of people who understand that productive disagreement is how good policy gets made.
Frequently Asked Questions
Q: What is the Grammarly Expert Review lawsuit about?
A: Investigative journalist Julia Angwin filed a class action lawsuit against Superhuman (Grammarly's parent company) for using her name and the names of hundreds of other writers to sell AI-generated editing advice without their consent. The suit was filed in the Southern District of New York on March 11, 2026.
Q: How much in damages does the Grammarly lawsuit seek?
A: The lawsuit states damages exceed $5 million, but this is a minimum jurisdictional requirement. According to Angwin's attorney Peter Romer-Friedman, actual damages will be calculated based on Superhuman's earnings from the Expert Review feature.
Q: What happened to Grammarly's Expert Review feature?
A: Superhuman disabled Expert Review entirely after public backlash. CEO Shishir Mehrotra apologized on LinkedIn, acknowledging the tool had "misrepresented" the voices of experts. The company stated it would reimagine the feature to give experts "real control" over their representation.
Q: What legal basis does the Grammarly lawsuit use?
A: The lawsuit relies on existing personality rights laws in New York and California that prohibit commercial use of a person's name and likeness without permission. Attorney Peter Romer-Friedman described it as "a pretty straightforward case" legally.
Q: Who are the named plaintiffs in the Grammarly class action?
A: Julia Angwin, founder of The Markup and New York Times contributing opinion writer, is the only named plaintiff. However, the lawsuit represents a class of "hundreds of journalists, authors, writers, and editors" whose identities were used. Over 40 potential plaintiffs contacted the legal team within 24 hours of filing.
Q: What was Grammarly's initial response before disabling Expert Review?
A: Grammarly initially offered an opt-out mechanism where affected writers could email the company to have their names removed. This was criticized as inadequate since experts were never notified or asked for permission in the first place, prompting the company to disable the feature entirely.