
AI News

12 Oct 2025

14 min read

OpenAI chat log preservation order 2025 explained

The narrowed OpenAI chat log preservation order 2025 ends sweeping retention requirements, restoring user privacy protections.

A US court has narrowed the data hold in the New York Times v. OpenAI case. Judge Ona T. Wang ended the broad requirement to keep all ChatGPT output logs. The OpenAI chat log preservation order 2025 now applies only to earlier-saved data and to logs tied to NYT-flagged accounts, while routine deletions can resume.

A major shift just landed in the high-profile copyright fight between The New York Times and OpenAI. A federal judge ended the sweeping requirement that forced OpenAI to keep every ChatGPT output log indefinitely. OpenAI can now return to its normal data-deletion practices, with tight exceptions for accounts the Times has flagged and for data already preserved earlier in the case. The change lowers the risk to user privacy but keeps key evidence in play as the lawsuit continues.

What the OpenAI chat log preservation order 2025 actually says

The original hold: keep everything

In May 2025, the court told OpenAI to save all chat logs. This was part of discovery in the Times’ copyright lawsuit. The idea was simple: do not delete anything that might show how ChatGPT behaves with news content or how model outputs may relate to the Times’ claims.

OpenAI pushed back on scope and privacy

OpenAI appealed. The company argued the order was too broad and too risky. Keeping every output forever could expose sensitive user data. It could also disrupt normal operations and data hygiene. The company said the hold went well beyond what the case actually needs.

The October 9 decision: a narrower path

On October 9, Judge Ona T. Wang ended the broad hold. OpenAI no longer has to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” This means OpenAI can resume normal deletions and retention rules.

Effective date and key exceptions

The court set a clear cutoff. As of September 26, OpenAI does not need to preserve new logs under the old, sweeping order. But there are two big exceptions:

  • Logs and data already preserved under the earlier order remain accessible.
  • OpenAI must keep data linked to ChatGPT accounts that the New York Times has flagged.

The Times can also expand the list of flagged accounts as it reviews preserved records. That lets the plaintiff continue to build its case without forcing OpenAI to stockpile all new logs.

What remains preserved

Evidence captured under the earlier order stays. This includes chat logs that existed and were saved up to the cutoff, and any material tied to flagged accounts. The court struck a balance: it protects relevant evidence while limiting fresh, bulk collection of user conversations.

Why this matters: privacy, product operations, and legal discovery

User privacy risk goes down

A blanket hold on every output log can sweep in sensitive user data. It can include personal details typed into prompts, private business plans, or proprietary code. Narrowing the order reduces that risk. It limits the number of new user conversations stored just for litigation.

OpenAI gets operational relief

Saving everything costs money and slows systems. It makes routine deletion and minimization harder. The new order lets OpenAI return to standard retention practices. That means better data hygiene, lower storage costs, and fewer chances for accidental exposure of old logs.

Discovery stays on target

Courts want relevant evidence, not everything under the sun. This order keeps preserved data that might matter to the case and focuses on accounts the Times flags. It still gives the plaintiff a path to request more, but it ties new preservation to concrete needs.

Signals for future AI lawsuits

This decision shows courts are willing to limit broad data holds in AI cases. Judges may ask plaintiffs to define narrower scopes rooted in specific users, time windows, or content types. That approach aims to protect privacy while keeping discovery fair.

Who gains and who loses from the change

  • OpenAI: Gains flexibility, lowers storage and privacy risk, and keeps normal deletion flows. Still must preserve flagged-account data and earlier logs.
  • The New York Times: Keeps access to already saved logs and can expand flagged accounts. Loses the leverage of a universal ongoing hold.
  • Users: Benefit from less blanket retention. Sensitive conversations are less likely to sit in long-term legal storage, though flagged-account data can still be preserved.
  • Courts and regulators: See a workable model for narrowing discovery in data-heavy AI disputes without undermining evidence needs.

Flagged accounts: what that means and how expansion could work

What counts as “flagged”

The Times can identify specific ChatGPT accounts it believes are relevant. Once an account is flagged, OpenAI must preserve data linked to it. The order does not describe public criteria. It simply allows the plaintiff to mark accounts for preservation.

Expansion over time

As the Times reviews existing, preserved logs, it can add more accounts to the list. This lets the plaintiff follow leads. It also avoids a blanket hold on all users. The court can still step in if scope grows too wide or too vague.

Practical takeaways for companies using generative AI

Set clear retention policies

Write simple rules for how long you keep prompts, outputs, and logs. Limit retention by default. Delete data you do not need for security, quality, or legal reasons. Document your practices so you can defend them in court if needed.
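
As a rough illustration of what “limit retention by default” can look like in practice, the sketch below runs a scheduled purge that deletes chat logs older than an assumed 30-day window unless a row is under a legal hold. The table and column names (chat_logs, created_at, legal_hold) are hypothetical, not any vendor’s actual schema.

```python
# Illustrative sketch only: enforce a default retention window on chat logs.
# The schema (chat_logs, created_at, legal_hold) is hypothetical.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed default; use whatever your written policy documents

def purge_expired_logs(db_path: str) -> int:
    """Delete chat logs older than the retention window unless under legal hold."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "DELETE FROM chat_logs WHERE created_at < ? AND legal_hold = 0",
            (cutoff.isoformat(),),
        )
        return cur.rowcount  # rows purged, useful for audit records

if __name__ == "__main__":
    print(f"Purged {purge_expired_logs('logs.db')} expired log rows")
```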

Be ready for targeted litigation holds

When a dispute starts, you may need to pause deletion for certain data. Aim for holds that are specific to users, dates, and topics. Broad holds raise cost and risk. Work with counsel to make holds precise and justified.
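
The sketch below shows one way a targeted hold could be scoped to named accounts and a date range, so routine deletion keeps running for everyone else. It reuses the hypothetical schema from the retention example; the field names and matter label are illustrative only.

```python
# Illustrative sketch: apply a litigation hold scoped to specific accounts and dates,
# instead of pausing deletion for everyone. Schema and field names are hypothetical.
import sqlite3

def apply_targeted_hold(db_path: str, account_ids: list[str],
                        start: str, end: str, matter: str) -> int:
    """Mark only the in-scope rows as held so routine deletion skips them."""
    placeholders = ",".join("?" for _ in account_ids)
    query = (
        f"UPDATE chat_logs SET legal_hold = 1, hold_matter = ? "
        f"WHERE account_id IN ({placeholders}) AND created_at BETWEEN ? AND ?"
    )
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(query, (matter, *account_ids, start, end))
        return cur.rowcount

# Example: hold two hypothetical accounts for a defined window in a named matter.
# apply_targeted_hold("logs.db", ["acct_123", "acct_456"],
#                     "2025-01-01", "2025-09-26", "example-matter")
```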

Separate training data from user logs

Keep a clear boundary between model training corpora and user chat logs. Document sources and licenses. This helps you answer questions about copyright without exposing private user conversations.
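
One way to keep that boundary visible in code is to model training-corpus records and user chat logs as separate types stored in separate places, with provenance fields only on the training side. The classes below are a hypothetical sketch, not a description of any real pipeline.

```python
# Illustrative sketch: keep training-corpus records and user chat logs as
# separate types in separate stores, with provenance on the training side.
# All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class TrainingDocument:
    doc_id: str
    source_url: str      # where the text came from
    license: str         # e.g. "CC-BY-4.0", "licensed", "public-domain"
    ingested_at: str     # ISO timestamp, supports provenance questions

@dataclass
class ChatLogEntry:
    log_id: str
    account_id: str
    created_at: str
    retention_days: int  # governed by the user-data policy, not the corpus policy

# Storing these in separate databases (or at least separate tables with separate
# access controls) helps you answer copyright questions about the corpus without
# pulling private user conversations into discovery.
```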

Minimize sensitive inputs

Teach teams not to paste secrets into prompts. Use data-loss prevention tools. If you offer an AI product, provide controls for redaction and enterprise-grade privacy settings. Less sensitive input means less sensitive output to preserve.
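
A minimal sketch of pre-prompt redaction follows; real data-loss prevention tools cover far more patterns and contexts than these few regular expressions, which are assumptions chosen only to show the idea.

```python
# Illustrative sketch: strip obvious secrets from prompts before they leave
# your environment. A real DLP tool covers far more cases; patterns here are minimal.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(redact_prompt("Contact jane.doe@example.com, key sk-abcdefghijklmnop1234"))
# -> "Contact [REDACTED_EMAIL], key [REDACTED_API_KEY]"
```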

Audit and explain

Maintain records that show what data you collect, why you collect it, and how long you keep it. Be ready to explain retention to courts and customers. Transparent logs and policies build trust and reduce legal friction.
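
For example, a small machine-readable data inventory kept alongside the written policy makes retention easy to explain on request. The datasets and fields below are hypothetical.

```python
# Illustrative sketch: a machine-readable record of what you collect, why,
# and for how long, which you can hand to auditors or customers. Fields are hypothetical.
import json

DATA_INVENTORY = [
    {
        "dataset": "chat_logs",
        "purpose": "abuse detection and product quality",
        "retention_days": 30,
        "deletion_exceptions": ["legal_hold"],
    },
    {
        "dataset": "training_corpus_metadata",
        "purpose": "model training provenance",
        "retention_days": None,  # kept for the life of the model
        "deletion_exceptions": [],
    },
]

print(json.dumps(DATA_INVENTORY, indent=2))
```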

Timeline at a glance

  • Late 2023: The New York Times files a copyright lawsuit against OpenAI.
  • May 2025: Court orders OpenAI to preserve all ChatGPT output logs for discovery.
  • September 26, 2025: Cutoff date after which new blanket preservation is no longer required.
  • October 9, 2025: Judge Ona T. Wang ends the broad order; exceptions for earlier-preserved data and NYT-flagged accounts remain.

This sequence shows how the OpenAI chat log preservation order 2025 moved from a wide data hold to a focused approach that preserves what is likely relevant.

What everyday users can do now

Check your settings

Review how your AI tool handles chat history and data controls. If you can turn off history or opt out of certain uses, decide what works for you. Smaller data footprints lower risk.

Think before you paste

Do not share personal identifiers, contracts, credentials, or private source code unless you are comfortable with the retention policies and security controls. Keep sensitive work in secure, approved environments.

Use enterprise features where possible

If your company offers an enterprise plan with stronger privacy, use it. These plans often include data isolation and shorter retention windows designed for compliance.

Follow case updates

The court’s order can shape how long your chats live in legal storage if your account is ever flagged. Staying informed helps you make better choices about what you share with AI tools.

The bigger picture for AI training data and copyright

Core dispute remains

This order does not decide the copyright question. The Times says OpenAI used its content without proper compensation. OpenAI disputes the claims. The court will weigh evidence, including preserved logs, to decide if and how outputs relate to the Times’ works.

Discovery without overreach

The new order shows judges can cut back overly broad holds while protecting key evidence. That balance respects user privacy and reduces unnecessary data stockpiles, yet it still supports a fair process.

Precedent for future cases

Other AI cases will likely cite this approach: keep preserved material that matters, allow targeted flags, and avoid perpetual, universal retention. It is a model for handling data-rich products where discovery pressures can collide with privacy.

Open questions to watch

  • How many accounts will the Times flag, and over what timeframe?
  • Will the court further refine what data linked to flagged accounts must be preserved?
  • How will preserved logs influence arguments about model behavior and alleged copying?
  • Could future orders require different preservation scopes for other AI services?

The case will keep testing how courts manage large-scale chat data while protecting individuals and businesses. The narrowed order brings structure, but many legal issues are still ahead.

In short, the court’s decision trims a sweeping data hold down to a targeted one. Evidence already preserved stays in play. OpenAI can return to normal deletion for new logs, except for data tied to identified accounts. The OpenAI chat log preservation order 2025 is now a focused tool for discovery, not a blanket dragnet.

(Source: https://www.engadget.com/ai/openai-no-longer-has-to-preserve-all-of-its-chatgpt-data-with-some-exceptions-192422093.html)

FAQ

Q: What did the OpenAI chat log preservation order 2025 change?
A: On October 9, Judge Ona T. Wang filed a narrower order that ended the sweeping May 2025 requirement to preserve and segregate all ChatGPT output logs going forward. The OpenAI chat log preservation order 2025 allows OpenAI to resume normal deletions while keeping earlier-preserved data and logs tied to New York Times-flagged accounts.

Q: Which ChatGPT logs still have to be kept under the new order?
A: Any chat logs already saved under the earlier preservation order remain accessible, and OpenAI must preserve data linked to ChatGPT accounts that the New York Times has flagged. The Times is allowed to expand the number of flagged users as it reviews preserved records.

Q: When did the original preservation order start, and what are the key dates in the case?
A: The New York Times sued OpenAI in late 2023, and a court order in May 2025 required OpenAI to retain all ChatGPT output logs for discovery. The cutoff after which the broad hold no longer applies is September 26, 2025, and Judge Wang filed the narrower order on October 9, 2025.

Q: How does the narrowed order affect user privacy and OpenAI’s operations?
A: Narrowing the preservation order reduces the risk to user privacy by limiting new, blanket retention of conversations while preserving only earlier-saved logs and data tied to flagged accounts. It also gives OpenAI operational relief by allowing routine deletions and easing the storage and data-hygiene burdens that indefinite preservation created.

Q: What does it mean for an account to be “flagged” by the New York Times?
A: A “flagged” account is one the New York Times identifies as potentially relevant to its copyright claims, and once flagged, OpenAI must retain data linked to that account. The court order does not specify public criteria for flagging, though the Times can add accounts as it examines preserved records.

Q: Does this order decide whether OpenAI infringed the New York Times’ copyright?
A: No, the decision narrowed the scope of discovery but did not resolve the underlying copyright dispute between the parties. Preserved logs and any data tied to flagged accounts can still be used as evidence as the case proceeds.

Q: Could this ruling influence how other AI-related lawsuits handle data preservation?
A: Yes, the decision signals that courts may prefer targeted preservation tied to specific users, time windows, or content types rather than perpetual, universal holds in data-heavy AI disputes. That approach aims to protect user privacy while keeping relevant evidence available for litigation.

Q: What practical steps should users and companies take in response to the OpenAI chat log preservation order 2025?
A: Review privacy settings and retention controls, avoid pasting sensitive identifiers or secrets into prompts, and document data-retention policies so you can justify them if litigation arises. Companies should also plan for targeted litigation holds, separate user logs from training data, and consider enterprise features that offer stronger isolation and retention options.
