
AI News

30 Dec 2025

10 min read

HHS model cards repeal 2025 How to protect AI patient safety

The HHS model cards repeal 2025 threatens patient safety. Learn practical steps clinicians can take now.

The HHS model cards repeal 2025 proposal would remove a federal rule that asked health AI vendors to publish “model cards” showing how tools are built, tested, and what risks they carry. Here is what the change means for hospitals, developers, and patients — and how to keep AI safe if the rule goes away.

The U.S. Department of Health and Human Services has proposed rolling back a transparency rule for health AI. That rule, created under the previous administration, asked vendors to disclose basic facts about their models. It aimed to help buyers judge safety, fairness, and fit for use.

Supporters of the rollback say fewer rules will speed innovation. Critics worry the change could hide important details about AI performance, especially across different patient groups. Your choices now matter: you can still ask for proof, set guardrails, and build a safer path forward.

What the HHS model cards repeal 2025 would change

What model cards are

Model cards are simple documents that explain an AI system in plain language. They do not reveal source code. They list the model’s intended use, how it was tested, and known risks. They typically include:
  • Training data sources and limits
  • Performance metrics overall and by subgroup (age, sex, race, language)
  • Intended and not-intended uses
  • Clinical validation methods and settings
  • Known failure modes and risk controls
  • Update and monitoring plans

What would go away

If the HHS model cards repeal 2025 moves ahead, vendors would no longer have to submit these disclosures under federal health IT rules. EHR certification would not require model cards. Many buyers would have less standardized information when comparing tools.

Why that matters

Model cards make hidden risks visible. They help clinicians see where an AI tool works and where it may fail. They also help hospitals ask better questions and set clearer expectations in contracts.
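The fields listed above can also live as structured data that procurement and safety teams query directly. The sketch below is a minimal, illustrative model card in Python; the field names and example values are assumptions for this article, not a federal or vendor schema.

```python
from dataclasses import dataclass

# Illustrative model card as structured data. All field names and
# values are assumptions for this sketch, not a regulatory format.

@dataclass
class ModelCard:
    name: str
    intended_use: str
    not_intended_for: list[str]
    training_data: str                  # sources and known limits
    subgroup_metrics: dict[str, float]  # e.g. accuracy by subgroup
    validation: str                     # sites, sample size, period
    known_failure_modes: list[str]
    monitoring_plan: str

card = ModelCard(
    name="SepsisAlert v2.1",
    intended_use="Early sepsis risk flagging for adult inpatients",
    not_intended_for=["pediatric patients", "outpatient triage"],
    training_data="2019-2023 EHR data from three academic centers",
    subgroup_metrics={"overall": 0.87, "age_65_plus": 0.84, "non_english": 0.79},
    validation="Retrospective validation at 3 sites, n=42,000, 2023",
    known_failure_modes=["sparse vitals documentation", "transfer admissions"],
    monitoring_plan="Quarterly subgroup accuracy review against baseline",
)

# A buyer can immediately spot the subgroup gap worth questioning.
gap = card.subgroup_metrics["overall"] - min(card.subgroup_metrics.values())
print(f"Largest subgroup gap: {gap:.2f}")
```

Because the card is machine-readable, the same subgroup-gap check can run across every tool in a hospital's inventory rather than being repeated by hand per vendor.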

Why this matters for patient safety

  • Bias detection: Without subgroup performance data, hospitals may miss lower accuracy for certain populations.
  • Context mismatch: A tool tested in one care setting may not work the same in another. Model cards flag that risk.
  • Accountability gaps: Clear documentation supports audits, incident reviews, and fixes.
  • Trust and training: Clinicians need simple guides to know when to rely on AI — and when to pause.
  • Procurement quality: Standardized disclosures reduce hype and help evidence-based buying.

Steps hospitals can take now

Even without a federal mandate, hospitals can set strong expectations.

Make transparency a buying rule

Ask vendors to provide a model card or an equivalent technical brief. Require the following at minimum:
  • Intended use, input data needed, and known limits
  • Performance by subgroup relevant to your patient mix
  • Clinical validation design (sites, sample size, time period)
  • Human oversight plan and fail-safe steps
  • Post-deployment monitoring and update policy
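As a sketch, that minimum list can be turned into an automated completeness check run during procurement review, so an incomplete vendor brief is flagged before contracting. The field names below are illustrative assumptions mapped from the bullets above, not a standard.

```python
# Hedged sketch: a procurement-time completeness check for vendor
# disclosures. Required fields mirror the minimum list above and are
# assumptions for illustration, not a regulatory schema.

REQUIRED_FIELDS = {
    "intended_use",
    "input_data",
    "known_limits",
    "subgroup_performance",
    "validation_design",
    "oversight_plan",
    "monitoring_policy",
}

def missing_disclosures(vendor_brief: dict) -> set[str]:
    """Return required fields the vendor brief omits or leaves empty."""
    return {f for f in REQUIRED_FIELDS if not vendor_brief.get(f)}

brief = {
    "intended_use": "Radiology triage for chest X-rays",
    "input_data": "DICOM chest films, adult patients",
    "known_limits": "Not validated for portable films",
    "subgroup_performance": "",  # empty string: still a gap to flag
    "validation_design": "2 sites, n=12,500, 2022-2024",
}

gaps = missing_disclosures(brief)
print(sorted(gaps))  # fields to push back on before signing
```

Running the check before contract negotiation turns "ask better questions" into a repeatable step rather than a reviewer's memory exercise.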

Write it into contracts

Include clauses that require:
  • Access to audit logs and version histories
  • Notice before material model changes go live
  • Incident reporting within set time frames
  • Shared responsibility for harm due to undisclosed defects
  • Right to suspend the tool if safety thresholds are not met

Set internal guardrails

Create a cross-functional AI safety committee (clinicians, IT, quality, risk, compliance, patient reps). Give it clear duties:
  • Approve use cases and risk tiers
  • Review evidence before go-live
  • Define monitoring metrics and alert thresholds
  • Run regular bias and drift checks
  • Report findings to leadership and front-line staff

What developers should do to keep trust

Smart vendors will keep sharing, even if rules change.
  • Publish concise model cards or “AI fact sheets” on your website
  • Provide test results across key subgroups and settings
  • Offer model lineage, version notes, and change logs
  • Run external validation with independent partners
  • Stand up an incident reporting channel and commit to response times
  • Adopt secure, privacy-safe data practices and document them

This approach reduces sales friction, shortens due diligence, and builds long-term customer confidence if the HHS model cards repeal 2025 becomes final.

How clinicians can use AI safely

  • Know the indication: Use the tool only for its stated purpose.
  • Check the context: If your patients differ from the test group, be extra cautious.
  • Look for red flags: Sudden shifts in outputs, missing data, or unusual recommendations.
  • Keep human-in-the-loop: Confirm high-impact decisions with clinical judgment.
  • Report issues fast: Share errors or near-misses with your safety team.

Monitoring after go-live

Build a simple, steady process:
  • Track key metrics (accuracy, false alerts, time to action, subgroup gaps)
  • Compare current performance to baseline and vendor claims
  • Spot data drift (new populations, new documentation patterns)
  • Review adverse events and learnings in monthly safety huddles
  • Pause or roll back updates that reduce safety or performance
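The monitoring steps above can be sketched as a simple periodic check that compares live metrics to a baseline and flags anything past a threshold. The metric names and thresholds below are illustrative assumptions; each site would set its own from vendor claims and local baselines.

```python
# Hedged sketch of a post-deployment safety check: compare current
# metrics to a baseline and flag drift or subgroup gaps. Metric names
# and thresholds are illustrative assumptions, not vendor values.

BASELINE = {"accuracy": 0.87, "false_alert_rate": 0.10}
MAX_DROP = 0.05          # allowed accuracy drop vs baseline
MAX_SUBGROUP_GAP = 0.06  # allowed best-to-worst subgroup spread

def safety_flags(current: dict, subgroup_accuracy: dict) -> list[str]:
    """Return human-readable flags for the monthly safety huddle."""
    flags = []
    if BASELINE["accuracy"] - current["accuracy"] > MAX_DROP:
        flags.append("accuracy drifted below baseline tolerance")
    if current["false_alert_rate"] > 2 * BASELINE["false_alert_rate"]:
        flags.append("false alerts more than doubled vs baseline")
    gap = max(subgroup_accuracy.values()) - min(subgroup_accuracy.values())
    if gap > MAX_SUBGROUP_GAP:
        flags.append(f"subgroup gap {gap:.2f} exceeds threshold")
    return flags

flags = safety_flags(
    current={"accuracy": 0.80, "false_alert_rate": 0.12},
    subgroup_accuracy={"age_under_65": 0.84, "age_65_plus": 0.76},
)
for f in flags:
    print(f)  # each flag should trigger a review or rollback decision
```

A check like this is deliberately simple: the goal is a steady, auditable signal for the safety committee, not a replacement for formal revalidation.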

Data rights and privacy safeguards

Trust also depends on careful data use.
  • Use de-identified or limited datasets where possible
  • Set strict access controls and logging
  • Clarify secondary use rights in contracts
  • Apply encryption at rest and in transit
  • Align with HIPAA and state privacy laws

What to watch next

This is a proposal, not final policy. Federal rules typically go through a public comment period before a final rule. Health systems, patients, payers, and developers can submit comments that highlight safety needs, voluntary standards, and practical alternatives to one-size-fits-all mandates.

Preparing for the road ahead

Hospitals can prepare for the HHS model cards repeal 2025 by building their own transparency checklists, standard contract language, and internal safety processes. Developers can publish concise, useful documentation and support independent checks. Payers and states can set incentives for high-quality disclosure without slowing good tools.

Strong AI governance is not only about rules. It is about culture. Ask clear questions. Measure what matters. Share what you learn. That is how we keep patients safe while we adopt helpful technology.

In short, even if the HHS model cards repeal 2025 moves forward, the health sector can keep transparency alive through smart procurement, steady monitoring, and open communication. Patient safety improves when everyone can see how a tool works and agrees on how to use it well.

    (Source: https://www.statnews.com/2025/12/22/hhs-proposes-scrapping-ai-model-cards-transparency-rule/)


FAQ

Q: What is the HHS model cards repeal 2025 proposal?
A: The HHS model cards repeal 2025 proposal would remove a federal rule that asked health AI vendors to publish model cards showing how tools are built, tested, and what risks they carry. The U.S. Department of Health and Human Services has proposed rolling back the transparency requirement that was put in place under President Joe Biden.

Q: What are model cards and what information do they provide?
A: Model cards are simple documents that explain an AI system in plain language and do not reveal source code. They typically list intended use, training data sources and limits, performance metrics overall and by subgroup, clinical validation methods, known failure modes, and update and monitoring plans.

Q: Who would be affected if the model cards requirement is removed?
A: Vendors would no longer have to submit these disclosures under federal health IT rules, and EHR certification would not require model cards, reducing standardized information for buyers. Hospitals, clinicians, patients, and other purchasers could face less visibility into how tools perform across different populations and settings.

Q: Why do model cards matter for patient safety?
A: Model cards make hidden risks visible by showing where an AI tool works and where it may fail, helping with bias detection and context mismatch. They also support accountability for audits and incident reviews and help clinicians know when to rely on AI and when to pause.

Q: What steps can hospitals take now to maintain transparency if the rule goes away?
A: Hospitals can require vendors to provide a model card or equivalent technical brief that spells out intended use, input data needs, subgroup performance, validation design, human oversight plans, and monitoring policies. They can also write contract clauses for audit access, notice of material changes, incident reporting timelines, shared responsibility for undisclosed defects, and the right to suspend a tool that fails safety thresholds.

Q: How should developers respond to maintain trust if the HHS model cards repeal 2025 becomes final?
A: Developers can continue to publish concise model cards or “AI fact sheets” on their websites, provide test results across key subgroups and settings, maintain model lineage and change logs, run external validation, and set up incident reporting channels. These steps reduce sales friction, support due diligence, and help build long-term customer confidence if the federal disclosure requirement disappears.

Q: What practical checks should clinicians use to employ AI tools safely without guaranteed model cards?
A: Clinicians should use tools only for their stated indications, check whether their patient population matches the tool’s test group, and keep a human in the loop for high-impact decisions. They should also watch for red flags like sudden shifts in outputs, missing data, or unusual recommendations, and report errors or near-misses quickly to safety teams.

Q: What happens next in the rulemaking process and how can stakeholders weigh in?
A: The proposal is not final and typically goes through a public comment period before a final rule is issued, giving health systems, patients, payers, and developers a chance to submit comments. Stakeholders can use that period to highlight safety needs, suggest voluntary standards, and propose practical alternatives to one-size-fits-all mandates.
