The David Greene NotebookLM voice lawsuit shows why creators should secure voice rights now and how to pursue legal remedies.
The David Greene NotebookLM voice lawsuit claims Google’s AI podcast tool used a voice that sounds like the former NPR host without his consent. Greene says NotebookLM’s male voice mirrors his cadence and fillers. Google denies it and says it hired a paid actor. The case highlights rising battles over voice rights in AI.
Veteran radio host David Greene says he was stunned when friends and colleagues asked if he had licensed his voice to Google’s NotebookLM. The AI tool can generate short “audio overview” podcasts with a male and a female voice in conversation. Greene insists the male voice sounds like him. Google says that is not true and that it used a professional actor. The suit, filed in Santa Clara County on Jan. 23, could test how the law treats voice likeness in AI.
David Greene NotebookLM voice lawsuit: What’s in dispute
Greene’s claim
Greene says the AI voice matches his tone, rhythm, and even his “uhs” and “likes.” He views his voice as central to his work and identity. His complaint cites an unnamed forensic firm that analyzed NotebookLM audio. The firm rated the likelihood that Greene’s voice was used to train the model at 53% to 60%, which the filing calls “relatively high.” The lawsuit does not present direct proof that Google recorded or copied Greene’s voice, but it argues the similarity is too close to ignore.
Google’s response
Google rejects the claim. A company spokesperson says the male voice in NotebookLM’s Audio Overviews comes from a paid professional actor. Google labels the allegations “baseless” and says it stands by its process.
Why this matters
Voice is personal. It is often protected by state “right of publicity” laws, which bar using a person’s likeness for commercial purposes without consent. AI complicates this. Models can mimic patterns without directly copying a single recording. In 2024, OpenAI pulled its “Sky” voice after Scarlett Johansson raised concerns that it sounded like her. Platforms have also struggled with AI “deepfakes,” like the viral fake images of Taylor Swift that drew harsh criticism and takedowns. The David Greene NotebookLM voice lawsuit sits at the center of these issues: consent, similarity versus copying, and the boundary between inspiration and impersonation.
How NotebookLM-style AI podcast voices work
Simple flow
Users feed the tool notes, links, or documents.
The AI summarizes the material and writes a dialogue.
Preset voices read the script as a short audio “overview.”
These systems can sound natural because they string together learned patterns of speech. They do not need to match a known person. But if a preset voice sounds too much like a public figure, questions arise about training data, selection, and intent.
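The three-step flow above can be sketched in code. This is a toy illustration, not Google’s implementation: the summarization and dialogue functions are simple stand-ins for the large language and text-to-speech models a real system would use, and the host names are placeholders.

```python
# Minimal sketch of a NotebookLM-style "audio overview" pipeline.
# All function logic here is a toy stand-in; real systems use LLMs and TTS.

def summarize(documents):
    """Toy summary step: take the first sentence of each document."""
    return [doc.split(".")[0].strip() for doc in documents]

def write_dialogue(points, hosts=("Host A", "Host B")):
    """Alternate the talking points between two preset voices."""
    return [(hosts[i % 2], point) for i, point in enumerate(points)]

def render_overview(documents):
    """Produce the script a TTS engine would read with preset voices."""
    script = write_dialogue(summarize(documents))
    return "\n".join(f"{speaker}: {line}" for speaker, line in script)

docs = [
    "AI audio tools are growing fast. Adoption is rising.",
    "Voice likeness disputes are rising too. Courts are watching.",
]
print(render_overview(docs))
```

The legal questions enter at the last step: which preset voices read the script, how they were sourced, and whether any of them resemble a real person.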
What creators can do to protect voice and likeness
Lock down agreements
Update contracts to ban AI training or cloning without your written consent.
Add clear terms on takedowns, data deletion, and damages if a partner breaks the rules.
Watch for copies
Set up alerts for your name, show titles, and unique phrases you often say.
Ask fans to flag content that uses a voice that sounds like you.
Keep time-stamped clips of your work to show how you sound and how it changed over time.
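The monitoring step above can be as simple as scanning transcripts or video titles from alert feeds for your watch terms. A minimal sketch, assuming you already fetch the text from somewhere; the terms below are examples, not a recommended list:

```python
# Sketch of keyword monitoring for transcripts or titles pulled from alerts.
# WATCH_TERMS is illustrative: your name, show titles, signature phrases.

WATCH_TERMS = ["david greene", "morning edition", "audio overview"]

def flag_matches(text, terms=WATCH_TERMS):
    """Return the watch terms that appear in a piece of text (case-insensitive)."""
    lowered = text.lower()
    return [term for term in terms if term in lowered]

transcript = "An AI host that sounds like David Greene opens the show."
hits = flag_matches(transcript)
if hits:
    print(f"Review needed, matched terms: {hits}")
```

Matches only flag content for human review; whether a voice actually sounds like you still takes ears, and sometimes forensic analysis.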
Use legal tools early
Send right-of-publicity and DMCA notices when platforms host likely clones or confusingly similar voices tied to your brand.
Consult an IP or media lawyer before joining text-to-speech marketplaces or licensing archives.
Adopt basic tech defenses
Consider audio watermarking or signatures that help prove origin. These tools are not perfect, but they can support a claim.
Work with reputable forensic services if you need a similarity analysis.
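One basic form of origin proof is cryptographically signing your recordings so you can later show a clip matches (or does not match) your published originals. A minimal sketch using only the Python standard library; the key handling is illustrative, and real provenance systems use public-key signatures and embedded watermarks rather than a shared secret:

```python
# Sketch of signing audio bytes to help prove origin. The secret key is a
# placeholder; production systems use public-key signatures and watermarks.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"  # hypothetical key

def sign_audio(audio_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag that ties the recording to your key."""
    return hmac.new(SECRET_KEY, audio_bytes, hashlib.sha256).hexdigest()

def verify_audio(audio_bytes: bytes, tag: str) -> bool:
    """Check a clip against a previously recorded tag."""
    return hmac.compare_digest(sign_audio(audio_bytes), tag)

clip = b"\x00\x01fake-pcm-samples\x02"
tag = sign_audio(clip)
print(verify_audio(clip, tag))         # True: matches the original
print(verify_audio(clip + b"x", tag))  # False: altered audio fails
```

A tag like this proves a file is your original; it cannot by itself prove that a different recording imitates you, which is where forensic similarity analysis comes in.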
Set a public policy
Publish an AI use policy on your site. State that cloning, training, or simulating your voice needs permission.
Join guilds or industry groups that negotiate AI protections and offer takedown help.
If the case moves forward: what to watch
Key questions a court may weigh
Similarity: Does the AI voice sound like Greene to the average listener, or only to trained ears?
Source: Is there evidence that Greene’s recordings trained the model, or is the similarity coincidental?
Commercial use: Did the voice help market or deliver a product in a way that exploits Greene’s identity?
Consent and disclosure: What records show how Google selected and produced the voice?
Possible outcomes include dismissal, settlement with terms, or a ruling that sets new limits on AI voice use. Even without a verdict, discovery could push platforms to adopt clearer consent logs, voice provenance records, and stricter review before shipping new presets.
For creators and platforms, a wake-up call
The David Greene NotebookLM voice lawsuit is a signal to lock in stronger consent standards. Creators can protect themselves with better contracts and monitoring. Companies can reduce risk by documenting training sources, hiring human actors under explicit voice-use agreements, auditing for “celebrity-adjacent” similarities, and offering quick appeal and takedown paths. Clear consent and traceability are now table stakes.
Best practices for responsible AI voice use
Hire and credit professional voice actors with explicit, limited licenses.
Avoid marketing language that hints a voice is “inspired by” a living person.
Test presets with listeners to catch unintended resemblances.
Keep a provenance log: who recorded, when, where, and under what license.
Offer a channel for people to report close resemblances and act fast on complaints.
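A provenance log from the list above can be as simple as an append-only file with one record per recording. A sketch, assuming a JSON-lines file; the field names and entries are illustrative, not an established schema:

```python
# Sketch of a voice provenance log: who recorded, when, where, what license.
# Field names and the example entry are illustrative, not a standard schema.
import json
from datetime import datetime, timezone

def log_recording(path, *, actor, studio, license_terms,
                  logfile="provenance.jsonl"):
    """Append a provenance record for one recording and return it."""
    entry = {
        "file": path,
        "actor": actor,
        "studio": studio,
        "license": license_terms,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_recording(
    "overview_male_v1.wav",
    actor="Jane Doe (professional voice actor)",  # hypothetical example
    studio="Studio A",
    license_terms="Limited license: product narration only, no cloning",
)
```

In a dispute like Greene’s, records of this kind are exactly what discovery would look for: evidence of how a voice was selected, recorded, and licensed.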
The bottom line
The David Greene NotebookLM voice lawsuit captures a simple truth: voices are personal, and consent matters. As AI audio grows, creators need clear rights and quick remedies, while platforms need transparent sourcing and fast response. If both sides commit to consent, records, and fair use, AI audio can grow without crossing the line.
(Source: https://nypost.com/2026/02/16/media/npr-host-david-greene-accuses-google-of-stealing-his-voice-for-ai-podcast-tool/)
FAQ
Q: What is the David Greene NotebookLM voice lawsuit about?
A: The suit, filed Jan. 23 in Santa Clara County, alleges that Google’s AI podcast tool used a male voice that sounds like Greene without his consent. Google denies the claim, saying the voice came from a paid professional actor.
Q: What does David Greene say about NotebookLM’s male podcaster voice?
A: Greene says the NotebookLM male voice mirrors his tone, rhythm, cadence and even fillers like “uhs” and “likes,” and that friends, family and colleagues reached out asking whether he had licensed his voice. He told the Washington Post the resemblance felt “eerie” and said his voice is central to his identity.
Q: How has Google responded to Greene’s allegations?
A: Google has called the allegations baseless and told The Post the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor, according to spokesperson José Castañeda. The company also noted it launched the automated podcast tool in 2024 and stands by its process.
Q: What evidence does the lawsuit cite to support Greene’s claim?
A: The complaint cites an unnamed AI forensic firm whose software rated the likelihood that Greene’s voice was used to train the NotebookLM voice at 53% to 60%, which the filing calls “relatively high.” The lawsuit does not, however, present direct proof that Google recorded or copied Greene’s voice.
Q: What legal issues does the case raise regarding voice and AI?
A: The case raises questions about consent, voice likeness versus copying, and state “right of publicity” laws that bar using a person’s likeness for commercial purposes without permission. It also highlights the difficulty of distinguishing similarity produced by AI models from direct copying, echoing earlier disputes over AI voice likeness.
Q: What steps can creators take to protect their voice and likeness?
A: Creators can update contracts to ban AI training or cloning without written consent, add clear takedown and data-deletion terms, and keep time-stamped clips of their work to document how they sound over time. They can also set up monitoring alerts, ask fans to flag suspicious content, send right-of-publicity or DMCA notices, and consult an IP or media lawyer early.
Q: If the case moves forward, what might courts consider?
A: Courts may weigh whether the AI voice sounds like Greene to an average listener, whether there is evidence his recordings trained the model, whether the voice was used commercially in a way that exploits his identity, and what records exist about how the voice was selected and produced. Possible outcomes include dismissal, settlement, or a ruling that could set new limits on AI voice use.
Q: How can platforms and companies reduce the risk of disputes like this?
A: Platforms can reduce risk by documenting training sources, hiring and licensing professional voice actors under explicit agreements, auditing presets for unintended “celebrity-adjacent” similarities, and offering quick appeal and takedown paths. The David Greene NotebookLM voice lawsuit is presented as a wake-up call that provenance logs, transparent sourcing and fast responses are now table stakes for responsible AI voice use.