
AI News

27 Feb 2026


How to use AI without waiving attorney-client privilege now


Courts are signaling risk: using public chatbots can waive privilege. This guide explains how to use AI without waiving attorney-client privilege, drawing on a recent SDNY ruling. Choose secure, attorney-directed tools, lock down data, and document workflows so your strategy stays confidential and defensible.

A recent federal case sent a clear warning. A defendant used a public AI tool to draft defense ideas, and the judge ruled the files were protected by neither privilege nor the work product doctrine. Why? He typed sensitive facts into a consumer AI whose terms allowed data sharing, and he acted without lawyer direction. Agents later seized the device and the prompts. The court said confidentiality was gone.

What the Heppner ruling teaches

The facts in simple terms

– The defendant wrote 31 AI-assisted documents about his case.
– He used a public chatbot with data retention and training.
– He did this on his own, not at his lawyer’s request.
– He later shared the files with his lawyers.
– The court said privilege did not apply and work product was waived.

Key takeaways

– Confidentiality is the core of privilege. Sharing facts with a third party can waive it.
– Kovel does not cover tools used outside attorney direction or true need.
– Work product is strongest when a lawyer shapes the work. Third-party disclosure can still waive it.
– This is one district court ruling, but others may follow its logic.

How to use AI without waiving attorney-client privilege

If you want to know how to use AI without waiving attorney-client privilege, start with two rules: keep data confidential and involve your lawyer. Build your AI workflow around those rules.

Pick the right tool

– Use an enterprise AI with written promises: no training on your data, zero retention by default, and strong encryption.
– Get a contract or DPA that states confidentiality, access limits, and breach notice.
– Ask for audits and certifications (for example, SOC 2, ISO 27001). Keep proof.

Make it attorney-directed

– Have your lawyer choose or approve the AI and the workflow.
– Put the AI use in the engagement letter or a memo. State purpose and scope.
– Treat the AI like a translator or analyst brought in by counsel when truly needed.

Control what you type

– Do not paste names, unique facts, or case IDs into a public bot.
– Mask details. Use placeholders until counsel approves full facts.
– Upload only what is necessary for the task. Less is safer.
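The masking step above can be sketched in code. This is a minimal, illustrative example, not a substitute for counsel-approved redaction: the term list, placeholder names, and docket-number pattern are assumptions for the sketch, and any real matter would need a far more complete mapping.

```python
import re

# Hypothetical mapping of sensitive terms to neutral placeholders,
# agreed with counsel before any prompt is sent.
REPLACEMENTS = {
    "Acme Corp": "[CLIENT]",
    "Jane Doe": "[WITNESS-1]",
}

# Assumed docket-number shape (e.g. 24-cv-01234); adjust per jurisdiction.
CASE_ID = re.compile(r"\b\d{2}-cv-\d{4,5}\b")

def mask_prompt(text: str) -> str:
    """Replace known names and case IDs with placeholders before prompting."""
    for term, placeholder in REPLACEMENTS.items():
        text = text.replace(term, placeholder)
    return CASE_ID.sub("[CASE-ID]", text)

masked = mask_prompt(
    "Summarize Acme Corp's exposure in 24-cv-01234 per Jane Doe's testimony."
)
# masked == "Summarize [CLIENT]'s exposure in [CASE-ID] per [WITNESS-1]'s testimony."
```

A simple list-and-replace like this only catches terms you thought to list; the safer habit the article describes is to start from placeholders and add facts only with counsel's approval.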

Lock down access

– Use a private tenant or virtual private cloud. Enable SSO and MFA.
– Limit who can see prompts and outputs. Use role-based access.
– Turn off public sharing, browser plug-ins, and third-party add-ons.
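The role-based access idea above can be sketched as a small permission map. The role names and permissions here are assumptions for illustration; a real deployment would use your identity provider's groups and the vendor's access controls.

```python
# Hypothetical role-to-permission map for prompts and outputs.
ROLE_PERMISSIONS = {
    "counsel":   {"read_prompts", "write_prompts", "read_outputs"},
    "paralegal": {"read_prompts", "read_outputs"},
    "client":    set(),  # clients route requests through counsel, not the tool
}

def can(role: str, action: str) -> bool:
    """Grant an action only if the role is known and explicitly permitted."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting: unknown roles and unlisted actions default to denial, which mirrors the article's "limit who can see prompts and outputs" rule.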

Set clear data rules

– Disable provider data training on inputs and outputs.
– Store logs inside your system, not the vendor’s, when possible.
– Keep prompt and output logs in a privileged folder with restricted access.
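Keeping prompt and output logs under your own control, as the last two points suggest, can be sketched like this. The folder path is hypothetical, and the permission call applies on POSIX systems; a firm would layer this with its document-management system's privilege controls.

```python
import json
import os
import time
from pathlib import Path

# Hypothetical privileged folder inside the firm's own systems.
LOG_DIR = Path("privileged/ai_logs")

def log_exchange(prompt: str, output: str) -> Path:
    """Append a prompt/output record to a locally stored, access-restricted log."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    os.chmod(LOG_DIR, 0o700)  # owner-only access (POSIX); no-op elsewhere
    path = LOG_DIR / f"{time.strftime('%Y%m%d')}.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(),
                            "prompt": prompt,
                            "output": output}) + "\n")
    return path
```

Logging locally in an append-only file keeps the record inside your system rather than the vendor's, which supports the discovery-preparation steps later in the article.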

Mark, review, and integrate

– Label drafts “Attorney Work Product – Draft” when counsel is involved.
– Make the lawyer the editor-in-chief. Counsel should review, revise, and adopt or reject the AI draft.
– Keep a short note of attorney direction and review to show involvement.

Train your team and clients

– Issue a simple one-page policy on safe AI use.
– Tell clients, in writing, not to use public chatbots for case facts.
– Add AI warnings to engagement letters and litigation hold notices.

Prepare for discovery

– Assume prompts may be requested. Keep them organized and privileged where possible.
– Know your retention rules. Hold what you must; delete what policy allows.
– If you used a consumer bot in the past, tell counsel early so they can plan.

Assess your current stack

– Review research tools, email, and document systems that now include AI features.
– Turn off data sharing settings that expand vendor training.
– Update vendor contracts to match your confidentiality needs.

What counts as “confidential enough”

Courts often accept cloud tools when you take reasonable steps to protect secrecy. That means you:

– pick a trusted provider with strong security,
– sign a contract that binds the provider to keep data private,
– limit access and sharing, and
– use the tool under attorney guidance for legal advice.

These steps help align modern AI use with long-standing privilege rules.

Common mistakes that risk waiver

– Using a free chatbot for sensitive facts.
– Letting AI providers train on your inputs.
– Working without attorney direction, then sending the AI output to counsel later.
– Sharing AI drafts widely inside the company or with vendors.
– Failing to label, secure, or review AI-assisted work.

Where the law may evolve next

The ruling focused on a public tool used without lawyer direction. Future cases may treat secure, enterprise AI differently, especially when counsel directs the work. Courts will look at need, contracts, settings, and your actual behavior. Good process will matter as much as good technology.

Quick checklist: do this before you prompt

– Get counsel’s approval for the tool and the task.
– Confirm no training on your data and zero retention.
– Limit facts and anonymize where possible.
– Label drafts and store them in privileged locations.
– Have counsel review and adopt edits before circulation.

Strong habits make it easier to show that your team uses AI without waiving attorney-client privilege. Good AI can speed legal work; bad AI habits can hand your playbook to the other side. Use secure tools. Involve your lawyer early. Share only what you must. Document your steps. Follow these rules to use AI without waiving attorney-client privilege and keep your strategy protected.

(Source: https://ogletree.com/insights-resources/blog-posts/the-intersection-of-ai-and-attorney-client-privilege-a-cautionary-tale/)

FAQ

Q: What did the Heppner case decide about AI-generated documents and privilege?
A: Judge Jed S. Rakoff ruled that documents generated using a publicly available consumer AI tool were not protected by attorney-client privilege or the work product doctrine because the defendant used a public platform that allowed data retention and did so without attorney direction. The opinion provided a fact-specific roadmap for analyzing privilege claims involving AI-generated materials but is a single district court decision with limited precedential weight.

Q: Why did the court find attorney-client privilege waived in that case?
A: The court found confidentiality was fatally compromised when the defendant input sensitive facts into a consumer AI platform whose terms allowed data collection, retention, and use for model training, constituting voluntary disclosure outside the attorney-client relationship. The court also rejected retroactive privilege and held that sharing AI-generated materials with counsel after disclosure could not cure the earlier waiver.

Q: What practical steps can attorneys and clients take to keep AI-assisted work confidential?
A: The article recommends using enterprise or secure AI platforms with contractual confidentiality protections, zero-retention or no-training provisions, and strong encryption, and having counsel choose or approve the tool and workflow. It also advises documenting AI use in engagement letters and treating the lawyer as editor-in-chief who reviews, revises, and adopts or rejects AI drafts.

Q: How should individuals control what they type or upload into AI tools to avoid waiver?
A: The guidance says not to paste names, unique facts, or case IDs into public bots, to mask details and use placeholders, and to upload only what is necessary for the task unless counsel approves otherwise. Individuals should assume prompts may be discoverable and consult their lawyer before sharing substantive case facts with any consumer-facing AI.

Q: Under what circumstances might Kovel or work-product protection apply to AI-assisted materials?
A: Courts may treat AI use differently when counsel directs the work, the third-party AI functions as a necessary aide, or a secure enterprise platform with contractual confidentiality is used, because privilege depends on confidentiality and attorney involvement. The Heppner court rejected Kovel and work product protections where the client used a consumer AI on his own initiative and the platform’s terms negated any reasonable expectation of confidentiality.

Q: What contractual and technical vendor features should you require to preserve confidentiality when using AI?
A: Require a written contract or DPA that forbids training on your inputs, promises zero or limited retention, provides breach notice, and binds the provider to confidentiality, and seek audits or certifications such as SOC 2 or ISO 27001 when available. Also use encrypted private tenants or virtual private clouds, enable SSO and MFA, and keep logs under your control to reduce the risk that provider retention or training practices will destroy privilege.

Q: How should law firms document AI workflows to defend privilege during discovery?
A: The article advises labeling drafts (for example, “Attorney Work Product – Draft”), keeping prompt and output logs in privileged folders with restricted access, and recording short notes of attorney direction, review, and adoption to show counsel involvement. It also recommends updating engagement letters and litigation hold notices to address permitted AI use and informing clients not to use public chatbots for case facts.

Q: What quick checklist should clients follow to use AI without waiving attorney-client privilege?
A: The article’s checklist recommends getting counsel approval for the tool and task, confirming the vendor will not train on your data and has zero retention, anonymizing or limiting the facts you enter, labeling drafts and storing them in privileged locations, and having counsel review and adopt any AI-assisted work before broader circulation. Following these steps and documenting attorney-directed workflows helps align AI use with privilege rules.
