Grammarly Sued Over Expert Review Using Journalists’ Names

I opened my feed to a flurry of disbelief: Grammarly’s new “Expert Review” had been using names of living writers and famous authors without asking. Within a day the feature was pulled and a class-action suit landed in California court. The room felt smaller—suddenly the tools we trust were behaving like strangers in our inbox.

I’ve covered tech corridors where product teams move fast and apologies come later. You’ll want the timeline and the legal angles, but more than that, you’ll want to know what this says about authorship, consent, and the AI economy that pays the bills. I’ll walk you through what happened, who’s involved, and the question any writer or user should be asking next.

In my inbox, people were tagging colleagues — What Grammarly released and why writers complained

Grammarly rolled out an “Expert Review” feature that offered revision advice in the voices of named authors and journalists. The company said those figures served as “inspiration”; the writers say they were never consulted. Reporters at Wired, The Verge, and outlets like Gizmodo ran examples showing advice that ranged from harmless to actively poor—Julia Angwin told Wired she was surprised at how bad it was.

Grammarly’s CEO, Shishir Mehrotra, posted on LinkedIn that the feature aimed to help users “discover influential perspectives” and build ties between experts and fans, then acknowledged the company “fell short.” The company pulled the feature the same day the post appeared.

The newsroom reaction was immediate — Who’s named in the suit and what it alleges

Investigative journalist Julia Angwin has filed the class-action complaint that names Grammarly’s owner, Superhuman, and references prominent figures such as Stephen King and dozens more whose identities the feature allegedly echoed. The filing says Grammarly “misappropriated the names and identities of hundreds of journalists, authors, writers, and editors” to generate profit.

Did Grammarly misappropriate journalists’ names?

The complaint invokes California Civil Code § 3344(a)(1), which bars using another’s name, voice, signature, photograph, or likeness “in any manner…for purposes of advertising or selling…services, without that person’s prior consent.” The suit doesn’t demand a specific dollar award, but states the amount in controversy exceeds $5 million (roughly €4.6 million). That figure frames the claim as more than a grievance: it’s a commercial allegation.

Angwin is the only named plaintiff so far, but the class is described broadly. If the court allows the class to expand, the litigation could sweep in a large group of writers whose styles or names were used as prompts or labels for the AI suggestions.

In the timeline, the feature’s removal was swift — What the practical problems were

Writers complained that the Expert Review output sometimes degraded prose or suggested fixes that misrepresented the named author’s sensibility. Raymond Wong of Gizmodo, for instance, found himself flagged as an inspiration; others found the results surprising or actively harmful.

Beyond taste, there’s a reputational angle: publishing a “Stephen King” mode that produces clumsy horror tropes can feel like trading on a writer’s reputation without consent. The feature became a flashpoint for larger questions about consent and attribution in AI tools.

What is Grammarly’s Expert Review feature?

It presented editing recommendations styled after named figures, reportedly drawing on their public work as training material or prompts. Grammarly described those figures as “inspiration,” not as models being directly imitated. Users saw options to select voices; the writers saw their names attached without consultation. That gap is the legal and ethical fulcrum of the dispute.

At my desk I checked the filings — The legal mechanics and consequences

The complaint cites right-of-publicity law and alleges profit-driven misuse. California’s statute allows a person whose likeness or name is used commercially without consent to seek damages. The suit frames Grammarly’s labels and suggestions as marketing and commercial use rather than protected speech or fair commentary.

“Any person who knowingly uses another’s name, voice, signature, photograph, or likeness…for purposes of advertising or selling…services, without that person’s prior consent…shall be liable for any damages sustained by the person or persons injured as a result thereof.”

Courts will have to weigh whether an AI editing mode that offers “expert-style” feedback is commercial exploitation or a product feature. The distinction will determine whether platforms like Grammarly, or any AI tool that references public figures, need explicit consent before offering those persona-based options.

At coffee shops and conferences, people asked — Why this matters to users and creators

Creators care about control of reputation and revenue; users care about transparency and quality. If an app labels a mode “Stephen King” or “Julia Angwin,” you expect some fidelity to those voices and, ideally, permission from the people behind them. Instead, many writers saw their names used as marketing shorthand.

This isn’t just about a single feature; it’s about how AI products borrow cultural capital. When tools package personality as a selectable option, they risk turning authorship into a commodity you can license or mimic without negotiation. The AI outputs can become a hall of mirrors that reflect back simplified, and sometimes inaccurate, versions of creative voices.

Grammarly has responded publicly through Mehrotra’s LinkedIn apology and by temporarily disabling the agent. Legal counsel for the plaintiffs and the company will now test how the law applies to AI-driven persona features. Platforms named in reporting—Grammarly, Superhuman, Wired, Gizmodo, The Verge—are now part of a debate that spans ethics, IP, and product design.

On my beat, I watch how regulation meets product — What to watch next

Expect three things to matter: whether the class is certified, how courts interpret right-of-publicity claims for AI modes, and how companies change labeling and consent flows. If the suit proceeds and gains class members, product teams may be required to add consent dialogs or remove persona labels entirely.

As a user, ask whether a feature is using a real person’s name as marketing or simply describing a style. As a writer, consider whether you want to be a selectable voice for an algorithm—and if so, under what terms. The longer question is how much control creators retain over the cultural currency they generate.

Who gets to monetize imitation, and who pays for it, could reshape AI product roadmaps and the careers of writers. Will the courts side with names and reputations, or with the companies building on broad datasets?