AI News

04 Apr 2026

Read 11 min

Does AI waive attorney-client privilege? How to avoid waiver

Does AI waive attorney-client privilege? Learn how to protect client confidences and avoid waiver.

Does AI waive attorney-client privilege? In United States v. Heppner, a federal judge said documents made with an AI chatbot were not protected. The judge reached the right result but used shaky reasoning about privacy policies. You can lower the risk by using secure tools, involving counsel, and following clear steps to protect confidentiality.

This high-profile case puts a sharp question on the table: does AI waive attorney-client privilege when a client uses a chatbot to draft strategy or translate messages for a lawyer? In Heppner, the court said the AI was not a lawyer and the work was not done at counsel’s direction, so neither privilege nor work product applied. But the opinion also leaned on the AI company’s privacy policy to say confidentiality was waived, a move that could chill the everyday, low-cost tools clients rely on.

Does AI waive attorney-client privilege? What Heppner decided

The core holdings

  • The AI tool (Claude) is not a lawyer, so direct chats with it are not privileged.
  • Work product did not apply because the client used the AI on his own, not at the direction of counsel.

These points track long-standing law. If a client Googles defenses or asks a non-lawyer friend for ideas, those notes are not privileged. The same logic applied here.

    The controversial step

    Judge Rakoff also said the AI provider’s privacy policy broke confidentiality. Because the policy allowed data use and possible disclosure, the client could not reasonably expect secrecy. That step goes further than needed and creates risk for many cloud tools that lawyers and clients use every day.

    Terms of service are a weak test for confidentiality

    Courts often look at reasonable expectations, not fine-print settings. Lawyers store files in Google Workspace and Microsoft 365. Those services have policies that allow certain data processing, yet using them does not automatically waive privilege when firms take reasonable safeguards. ABA Formal Opinions 477R and 512 tell lawyers to vet vendors, use secure settings, and warn clients about risks. That is the right frame: risk-based, fact-specific, and tool-agnostic. If a platform is secure and used to serve legal work, it should not matter whether it runs AI in the background.

    The Heppner opinion also blurred the line between “talking to a third party” and “using software.” Many AI features live inside the same cloud systems firms already trust. The right question is not whether the tool chats back. It is whether the parties took reasonable steps to keep legal communications confidential.

    Real-world stakes for clients and lawyers

    Clients already use AI to:
  • Translate messages for a lawyer.
  • Summarize records before a meeting.
  • Draft timelines and questions after advice.
  • Write emails in tools that auto-suggest text with AI.

    When counsel directs these steps for a case, work product may protect some materials. For self-represented people, a court has even protected AI-assisted work product. But most client-lawyer work is not litigation; there, privilege is the main shield. If a court says a privacy policy kills confidentiality, many low-cost, helpful uses become risky.

    This cuts hardest against people with fewer resources. Wealthier clients can hire translators, accountants, and consultants whose help can be protected under privilege or Kovel. Many others cannot. If they use public AI instead, they may lose protection for reasons they cannot see or control.

    How to avoid waiver when using AI

    So, does AI waive attorney-client privilege in practice? It can—if parties use public tools without controls. The better path is to apply known safeguards to modern tools.

    Set direction and document it

  • Have counsel direct client AI tasks tied to the case (e.g., “summarize these bank records”).
  • Keep a note that the work is for legal advice or litigation preparation.

    Use enterprise-grade, no-training modes

  • Prefer firm-controlled AI or enterprise tools that disable data retention and model training.
  • Turn off chat history where possible. Use SSO, access controls, and encryption.

    Treat AI like other cloud tools

  • Keep privileged content inside approved platforms (Microsoft 365, Google Workspace) with DLP and audit logs.
  • Label documents “Attorney-Client Privileged/Work Product.” Store them in restricted folders.

    Limit what goes into public chatbots

  • Avoid posting names, dates, account numbers, or advice received.
  • Abstract facts (“Client A,” “Bank X”) or use synthetic examples if you must test an idea.
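    As a minimal illustration of the abstraction step above, a firm could substitute neutral labels before any text reaches a public chatbot. This sketch is hypothetical: the function, the mapping, and the example names are illustrative, not drawn from Heppner or any real tool.

```python
# Hypothetical sketch: swap real identifiers for neutral labels ("Client A",
# "Bank X") before a prompt is sent to a public chatbot.

def abstract_facts(text: str, replacements: dict) -> str:
    """Replace each real name/identifier in `text` with a neutral label."""
    for real, label in replacements.items():
        text = text.replace(real, label)
    return text

replacements = {
    "Jane Doe": "Client A",               # client name (hypothetical)
    "First National Bank": "Bank X",      # opposing party (hypothetical)
    "4415-0987": "[account number]",      # never paste real account numbers
}

prompt = abstract_facts(
    "Summarize the dispute between Jane Doe and First National Bank "
    "over account 4415-0987.",
    replacements,
)
print(prompt)
# → "Summarize the dispute between Client A and Bank X over account [account number]."
```

    A real workflow would still need human review before sending, since simple substitution misses indirect identifiers such as dates, locations, and unique facts.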

    Translate and analyze safely

  • Use offline or enterprise translation, or vetted vendors under NDA. Avoid public copy-paste of full filings.
  • For math or summaries, use tools that run locally or in a trusted tenant.

    Do vendor due diligence

  • Follow ABA 477R and 512: review security, data use, retention, and government access terms.
  • Record settings (no training, data isolation) and keep a log for later proof.
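    To make the record-keeping step concrete, a firm could append each tool's settings to a simple log whenever privileged material is processed. This is a hypothetical sketch; the field names and the JSON Lines format are assumptions, not requirements of ABA 477R or 512.

```python
# Hypothetical safeguards log: record which AI tool was used and the settings
# in force (training opt-out, retention, isolation), so the firm can later
# demonstrate reasonable steps to preserve confidentiality.
import json
from datetime import datetime, timezone

def log_ai_settings(path: str, tool: str, settings: dict) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "settings": settings,
    }
    with open(path, "a") as f:          # append-only JSON Lines log
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_ai_settings(
    "ai_vendor_log.jsonl",
    "Enterprise AI assistant",
    {"model_training": "disabled", "retention_days": 0, "data_isolation": True},
)
```

    An append-only log with timestamps makes it easier to show, after the fact, exactly which safeguards were in place when a given document was handled.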

    Educate clients early

  • Give a one-page guide: what tools to use, what to avoid, and how to label and store files.
  • Provide secure intake portals so clients do not rely on public AI during early fact-gathering.

    When firms use these steps, they reduce the chance that “does AI waive attorney-client privilege” becomes a yes for routine, helpful workflows.

    What courts should do next

    Courts can keep the law stable and fair by:
  • Deciding cases on settled grounds: an AI tool is not a lawyer; work product needs counsel direction.
  • Judging confidentiality by reasonable steps and purpose, not by dense terms of service.
  • Treating AI like other software: if the platform is secure and used to support legal advice, do not presume waiver.

    Bar groups can help too. Clear guidance on client-directed AI use (translation, organizing records, secure settings) would align practice with ABA 477R and 512 and reduce surprise waivers.

    Bottom line

    The right answer to “does AI waive attorney-client privilege” is: it depends on how you use it. Heppner warns that unsupervised use of public chatbots can risk waiver. But when counsel directs the work, parties use secure, no-training modes, and they document safeguards, AI can support legal service without losing protection. Use the tech, but protect the trust.

    (Source: https://www.lawfaremedia.org/article/ai-and-privilege-after-united-states-v.-heppner)

    FAQ

    Q: Does AI waive attorney-client privilege in light of United States v. Heppner?
    A: In United States v. Heppner, the court held that documents created using the AI tool Claude were not protected by attorney-client privilege or the work product doctrine because Claude is not a lawyer and the defendant used it without counsel’s direction. The opinion also relied on Anthropic’s privacy policy to find confidentiality waived, a step the article criticizes as unnecessary and problematic.

    Q: What were Judge Jed Rakoff’s core holdings in Heppner about AI and privilege?
    A: Judge Rakoff held that Claude is not a lawyer and therefore direct communications with it do not establish an attorney-client relationship, and he concluded the work product doctrine did not apply because counsel did not direct the defendant’s use of the AI. He also found that the AI provider’s privacy policy undercut any reasonable expectation of confidentiality, a point the article says goes beyond what was necessary to resolve the case.

    Q: Why is relying on an AI provider’s privacy policy to determine waiver controversial?
    A: It is controversial because courts typically assess a user’s reasonable expectation of confidentiality rather than treating vendor terms of service as dispositive. The article warns that elevating privacy policies risks sweeping ordinary cloud-based legal tools into waiver and conflicts with frameworks like ABA Formal Opinions 477R and 512.

    Q: Can the work product doctrine protect materials created with AI tools?
    A: The work product doctrine can protect materials prepared in anticipation of litigation or at counsel’s direction, but in Heppner the defendant’s undisputed, self-directed use of Claude meant the doctrine did not apply. The article also notes that a different magistrate judge in the Eastern District of Michigan held that a self-represented litigant’s ChatGPT exchanges could be protected as work product.

    Q: What practical steps can lawyers and clients take to reduce the risk that AI use leads to waiver?
    A: Counsel should direct client AI tasks tied to a legal matter, document that the work is for legal advice or litigation preparation, and keep a record of those directions to show purpose and control. They should also use enterprise-grade or firm-controlled AI with no-training modes, disable chat history where possible, store privileged material on approved platforms with DLP and audit logs, and perform vendor due diligence consistent with ABA guidance.

    Q: Should courts treat interactive AI chatbots differently from other cloud-based legal tools for confidentiality purposes?
    A: The article argues courts should not treat interactive AI chatbots differently from other cloud-based tools and should assess confidentiality based on reasonable steps taken to preserve secrecy, not on the form of the interaction. It criticizes Heppner’s framing that emphasized “communicating with” a chatbot and warns that such distinctions do not reflect the underlying technical realities of how services protect data.

    Q: How might Heppner affect clients with limited resources who use AI translators or summarizers?
    A: The decision could disadvantage clients who cannot afford human intermediaries like interpreters, translators, or paid consultants because they may lose privilege for low-cost AI assistance. The article warns this outcome could entrench resource disparities by making access to confidential legal support depend on a client’s ability to pay for privileged human services.

    Q: What should future courts and bar associations do to clarify how AI use affects privilege?
    A: Future courts should resolve AI-related privilege disputes on established grounds (recognizing that AI is not a lawyer and that work product requires counsel direction) while determining confidentiality by the facts, purpose, and reasonable steps taken to protect information rather than by vendor terms of service. Bar associations should issue guidance on how attorneys can safely direct clients’ use of AI and advise clients consistent with ABA Formal Opinions 477R and 512.
