
AI News

29 Mar 2026

11 min read

Health NZ AI clinical notes ban: How to avoid discipline

Health NZ has barred staff from using free AI tools for clinical notes, aiming to safeguard patient privacy; breaches can lead to discipline.

The Health NZ AI clinical notes ban warns staff not to use free AI chatbots for patient documentation, citing privacy, security, and accountability risks. Here’s what the rule means, how to keep your notes compliant, safer ways to use AI at work, and how to avoid discipline.

Health New Zealand has told staff not to use free AI tools such as ChatGPT, Claude, or Gemini to draft clinical notes. A recent memo to mental health and addiction teams set a firm line: do not paste patient details into public AI, and do not copy AI-drafted text back into records. Breaches can lead to formal discipline. At the same time, HNZ is rolling out approved tools, including an AI scribe in emergency departments, through a national registration process. Unions say staff are under huge pressure and need better support, not fear.

What the Health NZ AI clinical notes ban means for frontline staff

Under the Health NZ AI clinical notes ban, staff must not use public, free AI systems for any part of clinical documentation. That includes “just drafting,” “just summarizing,” or “just anonymized” text. If the tool is not approved and registered, it is off-limits for clinical content. Managers can consider exemptions case by case, but you need formal approval first. Using unapproved AI for notes may breach privacy rules and the code of conduct.

Why the rule exists: privacy, security, accountability

  • Patient privacy: Public AI tools may store or process data in ways you cannot control. That creates re-identification risk, even if you remove names.
  • Data security: Free tools often run overseas, with unclear data handling and retention.
  • Clinical accountability: AI can make errors or invent facts. If a note is wrong, the clinician is still responsible.

Approved AI vs free tools

HNZ requires AI tools to be assessed and registered by a national advisory group before clinical use. Some approved tools, like the Heidi AI scribe in emergency departments, are being introduced with safeguards, audits, and training. If a tool is not on the approved list, do not use it for clinical notes.

How to avoid discipline and still save time

You can protect patients and your job while reducing admin load. Focus on approved workflows and simple time-savers.

Use safe, allowed time-savers first

  • Smart EHR features: Use templates, smart phrases, and checklists already built into your system.
  • Approved dictation: Use enterprise-grade dictation tools that your IT team supports and has cleared.
  • Batch your notes: Block short windows after clinics to finish notes while details are fresh.
  • Structure early: Jot bullet points during consults to speed up final documentation.
  • Proof once, post once: Reduce rework by reviewing for clarity and completeness before signing.

If AI is available, confirm it is approved

  • Ask for the approved tools list and how to access them.
  • If Heidi or another scribe is in your area, request onboarding and training.
  • If a tool is being piloted, get written approval before you use it in care.

Data do’s and don’ts with AI

  • Don’t paste any patient information into public AI tools. This applies even if you try to “anonymize” it.
  • Don’t copy AI drafts into the clinical record unless the tool is approved and your role is authorized to use it.
  • Do check your local policy on grammar and spelling tools. Many on-device checkers are fine; cloud-based tools may not be.
  • Do keep an audit trail. If you use an approved AI, document how you verified the output.

Already used a public AI? Act now

  • Stop using the tool for clinical notes immediately.
  • Tell your line manager or privacy lead, and follow incident procedures. Early reporting shows good faith.
  • Update any affected notes after a fresh clinical review.

Talk with your manager and IT

Staff are trying to keep up under real pressure. Start an open, solution-focused conversation.
  • Clarify the policy: Ask for a plain-language summary with examples of what is in and out.
  • Request training: Short sessions on documentation best practices and any approved AI can lift speed and quality.
  • Escalate workload risks: If documentation time threatens safe care, log it and propose fixes (templates, staffing, scribe access).

Leadership steps to reduce risk and build trust

Managers can lower violations and lift quality by making the compliant path the easy path.
  • Communicate clearly: Share a one-page guide on the Health NZ AI clinical notes ban with “do this, not that” examples.
  • Publish the approved tools list: Keep it easy to find and up to date. Include a simple request form for trials.
  • Stand up safe AI options: Offer enterprise AI with strict privacy controls, no training on your data, and strong audit logs.
  • Train and coach: Run brief, practical sessions on fast, accurate documentation without public AI.
  • Track outcomes: Measure documentation time, error rates, and staff feedback. Improve the system, not just compliance.
  • Foster psychological safety: Encourage questions and early reporting. Use discipline for willful or repeated breaches, not honest mistakes.

Common scenarios and safe alternatives

“I just need a cleaner summary of today’s consult.”

  • Use an approved scribe or dictation tool, then review carefully.
  • Leverage EHR templates to structure Assessment and Plan.

“I want help with grammar so notes read better.”

  • Use on-device spellcheck or an enterprise tool cleared by IT.
  • Avoid cloud grammar tools that capture text.

“I’m drowning in backlog.”

  • Batch similar notes and use smart phrases.
  • Ask for temporary admin or scribe support and flag risks to care if delays persist.

What this means for patients

Patients expect privacy and accurate notes. Banning public AI for clinical documentation protects sensitive details and cuts the risk of AI errors in the record. When approved AI is used with guardrails, it can save time and let clinicians focus on care, not keystrokes.

The safest way forward is simple: follow the policy, use only registered tools, and keep patient data out of public AI systems. If in doubt, ask. To avoid discipline under the Health NZ AI clinical notes ban, stick to approved workflows, document your checks, and seek support when pressure rises. (Source: https://www.rnz.co.nz/news/national/590645/health-nz-staff-told-to-stop-using-chatgpt-to-write-clinical-notes)

FAQ

Q: What does the Health NZ AI clinical notes ban prohibit?
A: The Health NZ AI clinical notes ban prohibits staff from using public, free AI systems such as ChatGPT, Claude, or Gemini for any part of clinical documentation, including drafting, summarising, or anonymised text. Any AI tool must be registered with the Health NZ National Artificial Intelligence and Algorithm Expert Advisory Group before being used for clinical notes.

Q: Which AI tools are mentioned as banned in the memo?
A: The memo specifically names free public AI tools including ChatGPT, Claude, and Gemini as prohibited for clinical purposes. It also warns more broadly against using free drafting tools and transcribing AI-generated text into records even if patient information is anonymised.

Q: Why has Health NZ implemented this restriction on public AI for clinical documentation?
A: The Health NZ AI clinical notes ban cites data security, patient privacy, and clinical accountability concerns, noting public AI can retain or process data in ways staff cannot control. The guidance also warns of re-identification risks and that AI can make errors or invent facts, leaving clinicians responsible for incorrect notes.

Q: Are there any approved AI tools, and how can staff use them?
A: Health NZ requires AI tools to be assessed and registered with the National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG) before clinical use, and some approved options like the Heidi AI scribe are being rolled out in emergency departments. Staff should use only tools on the approved list and seek written approval for any piloted tools, with exemptions assessed case by case.

Q: What should I do if I have already used a public AI tool to draft clinical notes?
A: Stop using the public AI tool for clinical notes immediately and notify your line manager or privacy lead while following your incident reporting procedures. You should update any affected notes after a fresh clinical review to ensure accuracy, and document the steps you took.

Q: What safer alternatives can clinicians use to save time without breaching the ban?
A: Use built-in EHR features like templates, smart phrases, and checklists; approved enterprise dictation or on-device spellcheckers; and batch or schedule short windows to complete notes while details are fresh. If an approved AI scribe is available in your area, request onboarding and training before using it for clinical documentation.

Q: Could I face disciplinary action for using unapproved AI to write clinical notes?
A: Yes. The memo warns that using free AI tools for clinical purposes could result in formal disciplinary action under the code of conduct. Health NZ did not disclose how many instances had occurred or whether anyone had been disciplined in the reported cases.

Q: How can managers support staff while enforcing the Health NZ AI clinical notes ban?
A: Managers should clearly communicate the policy, publish the approved tools list, offer training and safe enterprise AI options, and encourage early, no-blame reporting of incidents. Leadership can also track documentation time and error rates and foster psychological safety so staff feel able to ask for help rather than improvise with public tools.
