
AI News

17 Mar 2026

10 min read

Grammarly AI persona lawsuit: How to protect your byline

The Grammarly AI persona lawsuit shows writers how to secure their bylines and block AI impersonation.

The Grammarly AI persona lawsuit spotlights a fast-growing risk for writers: AI agents that mimic your voice and credit it to you without consent. After backlash and a class action, Grammarly’s “Expert Review” personas were pulled. Here’s what happened, why it matters, and how to guard your byline.

Grammarly removed an AI feature that copied the styles of well-known writers and gave users edits “inspired by” those voices. The tool appeared in its Expert Review product. It attached names like Stephen King, Carl Sagan, and Julia Angwin to suggestions that the experts did not write. After public anger and legal threats, the company apologized and shut the feature down for a redesign.

Inside the Grammarly AI persona lawsuit

The Grammarly AI persona lawsuit centers on claims that the company used the names and reputations of “hundreds” of writers to drive paid subscriptions without permission. Investigative journalist Julia Angwin filed a class action in New York. Her lawyer, Peter Romer-Friedman, said more than 40 people reached out within a day of filing.

The complaint says the tool misled users by attaching real experts’ names to advice they never gave. It also argues that using someone’s name for commercial gain without consent breaks the law. The lawsuit lists damages above $5 million as a starting point; the final figure could tie to the tool’s earnings.

Grammarly’s chief executive, Shishir Mehrotra, apologized and admitted the agent “misrepresented” expert voices. He said the tool used public information from third-party AI models to generate suggestions “inspired by” published work. The company first floated an email opt-out for named writers, a move that drew heavy criticism, with reporters calling it an unfair burden on creators. The firm now says Expert Review had little use, is offline, and will be rebuilt with a better approach to expert participation. It plans to fight the legal claims.

Angwin called the outputs “slopperganger” edits, saying the tool used her name to suggest worse, more complex sentences. For many writers, the risk was not just legal but reputational: readers could believe the bad advice came from the real person.

Why this fight matters

Consent is the line

– Your name is your brand. Using it in a product suggests endorsement and authorship.
– Consent and compensation are basic rules. Many creators feel those rules were ignored.

Trust is at stake

– Readers trust bylines. If AI can fake a voice and attach a name, that trust erodes.
– Editors and educators may share mistaken credit, harming careers.

Quality and accountability

– When AI outputs are weak or wrong, the person named gets the blame.
– Clear labels and provenance help users know what they are reading.

Protect your byline: practical steps

1) Claim your public profiles and voice

– Keep updated profiles on major platforms that state how your name, likeness, and work may be used.
– Publish a short “use-of-name” policy on your site. Say that AI personas using your name require written consent.

2) Watch for misuse

– Set alerts for your name plus “AI,” “persona,” “agent,” or “style.”
– Ask readers and peers to flag impersonations or odd outputs in your name.
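For writers comfortable with a short script, the alert advice above can be sketched in Python. This is a minimal sketch, not tied to any alert service’s API; the query format, watch terms, and helper names are illustrative assumptions:

```python
import re

# Watch terms suggested in the article; extend as needed.
WATCH_TERMS = ["AI", "persona", "agent", "style"]

def build_alert_queries(name, terms=WATCH_TERMS):
    """Return search-alert query strings pairing a byline with each watch term.

    The quoted-name-plus-keyword format is a common convention for
    search alerts; adjust for whichever service you use.
    """
    return [f'"{name}" {term}' for term in terms]

def flag_possible_impersonation(name, text, terms=WATCH_TERMS):
    """Flag text that mentions the name alongside any watch term.

    Case-insensitive; watch terms must appear as whole words, so
    'said' does not trigger on 'AI'.
    """
    lowered = text.lower()
    if name.lower() not in lowered:
        return False
    return any(
        re.search(rf"\b{re.escape(t.lower())}\b", lowered) for t in terms
    )
```

The generated queries can be pasted into a search-alert service, and the flag function can screen incoming reader tips or scraped pages for a closer look.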

3) Respond fast in writing

– Send a clear cease-and-desist to the company:
  • State that use of your name and implied endorsement is unauthorized.
  • Demand removal, a record of where your name appears, and confirmation of changes.
  • Request a contact for future approvals.
– Keep screenshots and timestamps of the misuse.

4) Secure your workflow

– Add a standard rights clause to contracts: no AI persona or endorsement without your written consent and pay.
– Ask publishers to disclose any AI tools that touch your drafts or byline.

5) Use transparent credit

– When you use AI as a tool, label it in notes or acknowledgments to prevent confusion.
– Keep versioned drafts so you can prove authorship.
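One lightweight way to keep provable, versioned drafts is to fingerprint each one as you save it. The sketch below is a hypothetical helper, not a standard tool: the manifest filename and JSON fields are arbitrary choices, and it complements rather than replaces a real version-control system:

```python
import hashlib
import json
import time
from pathlib import Path

def record_draft(path, manifest="draft_manifest.jsonl"):
    """Append a timestamped SHA-256 fingerprint of a draft to a manifest.

    Each manifest line records which file was hashed, its digest, and
    when it was recorded (UTC), giving a simple authorship timeline:
    matching a later file against an earlier digest shows the text
    existed in that form at that time.
    """
    data = Path(path).read_bytes()
    entry = {
        "file": str(path),
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(manifest, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Run it after each significant revision; keeping the manifest in email or cloud storage adds an independent timestamp.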

6) Escalate when needed

– Talk to a media or IP lawyer if a platform ignores you.
– Coordinate with your union or professional group for shared action and public statements.

What AI companies should do next

– Default to opt-in, not opt-out, for any name, voice, or persona.
– Use plain labels: “AI-generated suggestion, not written by [Name].”
– Offer a visible registry of protected names with public status pages.
– Share revenue with contributors if real experts join as paid participants.
– Keep audit logs so users and experts can see why a suggestion appeared.
– Publish red-team results that test for impersonation, bias, and harm.
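The opt-in and labeling recommendations above can be illustrated with a toy registry. Everything here is hypothetical, a sketch of the policy rather than any vendor’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class PersonaRegistry:
    """Toy opt-in persona registry (illustrative, not a real product API).

    A name may be used only after explicit consent is recorded for a
    specific scope; absence of a record means denial by default.
    """
    consents: dict = field(default_factory=dict)

    def record_consent(self, name, scope):
        """Record written consent for one use scope, e.g. 'editing'."""
        self.consents[name] = scope

    def can_use_persona(self, name, scope):
        # Default deny: no entry means no use, regardless of how much
        # public text by the author is available.
        return self.consents.get(name) == scope

    def label_suggestion(self, name, text):
        # The plain label format recommended in the article.
        return f"AI-generated suggestion, not written by {name}: {text}"
```

The key design choice is the default-deny lookup: the burden sits with the platform to collect consent, not with the writer to opt out.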

The bigger picture for AI and consent

The Grammarly AI persona lawsuit highlights a wider problem: AI systems mimic style with ease, but laws and norms lag behind. Platforms argue they build on public text. Writers argue that a name implies endorsement and should never be used without consent. Both courts and customers will shape the rules.

As the Grammarly AI persona lawsuit moves forward, more creators may press for stronger protections, clearer consent flows, and real penalties for misuse. Expect new platform policies, contract language, and perhaps fresh laws that draw a hard line around identity, reputation, and commercial use.

In the end, your name is your promise to readers. Guard it with clear policies, quick responses, and strong contracts. And keep watching how the Grammarly AI persona lawsuit evolves, because its outcome may define how AI treats author identity and consent for years to come.

(Source: https://www.bbc.com/news/articles/cx28v08jpe7o)


FAQ

Q: What is the Grammarly AI persona lawsuit about?
A: The lawsuit concerns Grammarly’s Expert Review feature, which mimicked the styles of well-known writers and attached their names to AI-generated editing suggestions without consent, prompting public backlash and a class-action suit led by Julia Angwin. Superhuman, the firm that runs Grammarly, disabled the feature and apologized while saying it would redesign the tool.

Q: Who filed the class action and what are its main claims?
A: Investigative journalist Julia Angwin filed a class-action lawsuit in the Southern District of New York, alleging that Grammarly and Superhuman misappropriated the identities of “hundreds” of writers to drive paid subscriptions. The complaint argues that using names for commercial purposes without consent is unlawful, seeks to stop attribution of advice experts never gave, and lists damages above $5 million as a jurisdictional minimum.

Q: Which authors and experts were reportedly used as AI personas?
A: Reports and the legal filing say the Expert Review tool attached names including novelist Stephen King, scientist Carl Sagan, and journalist Julia Angwin to AI editing suggestions. Plaintiffs contend the feature relied on the identities of “hundreds” of writers without their permission.

Q: What did Julia Angwin say about the AI edits attributed to her?
A: Angwin said she was “stunned” to find her professional identity being marketed as a commercial product and described the AI output as a “slopperganger.” She told the BBC the edits attributed to her were poor, made sentences worse and more complex, and she found it appalling that her name would be used to give bad advice.

Q: How did Grammarly and its CEO respond to the backlash?
A: Chief executive Shishir Mehrotra apologized on LinkedIn, acknowledged the agent “misrepresented” expert voices, and said the AI drew on publicly available information from third-party LLMs. The company said it had taken Expert Review offline for a redesign, initially offered an opt-out route for named writers, claimed the feature saw very little usage, and said it will strongly defend against the legal claims.

Q: What practical steps can writers take to protect their bylines from AI impersonation?
A: The article advises writers to claim and keep public profiles current, publish a short “use-of-name” policy, and set alerts for mentions of their name with terms like “AI,” “persona,” or “agent.” It also recommends responding quickly in writing if misuse appears: sending a cease-and-desist that demands removal and records, keeping screenshots and timestamps, and adding contract clauses that forbid AI personas without written consent.

Q: What changes should AI companies adopt to prevent similar controversies?
A: The piece suggests platforms default to opt-in rather than opt-out for use of any name or persona, use clear labels such as “AI-generated suggestion, not written by [Name],” and publish visible registries of protected names and audit logs. It also recommends sharing revenue with contributors who opt in, running red-team tests for impersonation and harm, and improving transparency around expert participation.

Q: Could the Grammarly AI persona lawsuit influence future policies or laws affecting authorship and AI?
A: The article says the case highlights a wider problem where laws and norms lag behind AI capabilities, and that courts and customers will help shape new rules. It suggests the lawsuit could prompt clearer platform policies, revised contract language, and potentially new legislation defining how author identity and consent are handled.
