
The biggest threat from AI and EHRs right now isn’t sci‑fi takeover. It’s you noticing something is dangerously wrong and everyone around you shrugging.
You’re post‑residency. You’ve got a license, maybe a hospital appointment, maybe a private group job. And you’ve just realized: the AI tool or EHR you’re using is generating errors that could hurt people. Not theoretical risk. Actual, “this could kill someone on Tuesday” level risk.
Here’s what you do. Step by step. No fluff, no hero fantasies. Just how to handle this without wrecking your career or abandoning your patients.
First: Get Clear On What You’re Dealing With
You cannot escalate “vibes.” You escalate specific, reproducible, documentable hazards.
You’re seeing one (or more) of these:
- AI clinical decision support recommending clearly wrong doses, diagnoses, or workups
- AI documentation tools fabricating exam elements or critical details (“hallucinations”) that end up in the legal medical record
- EHR defaults or order sets that lead to wrong meds, wrong routes, wrong frequencies, or missed tests
- Interoperability glitches (data not pulling correctly) that hide allergies, labs, imaging, or prior notes
- Risk scores or alerts (e.g., sepsis, PE, stroke) that are clearly wrong and are actively changing care
If you’re thinking, “Well, that’s just how the system is,” stop. Safety hazards from software are as real as a bad batch of heparin. You do not need to “get used to it.”
Your first task: turn a vague bad feeling into a concrete safety concern.
Ask yourself:
Can I describe the error in one sentence?
- “The AI note generator routinely fabricates normal neuro exams in unresponsive patients.”
- “The EHR opioid order set defaults to 4x the safe morphine equivalent for naive patients.”
Can I show at least one real case where this already happened or almost happened?
Can I explain the harm chain clearly:
- Error → How a normal clinician might miss it → Patient harm
If you can do those three, you have something you can escalate.
Step 1: Quietly Document What You’re Seeing
You’re not whistleblowing to Twitter. You’re creating a clean record that you saw a safety issue and you handled it professionally.
You want:
- Dates and times
- Screenshots or printouts with identifiers removed or obscured
- A brief description of what the system did and what should have happened
- Whether harm occurred, a near miss happened, or it’s a clear risk for future harm
Example entry in your private notes (NOT the chart):
10/15/26 – AI discharge summary tool auto‑generated “no medication allergies” despite active EHR allergy list showing “anaphylaxis – penicillin.” Appeared in final note until I manually corrected. Could easily be missed by other clinicians relying on AI summary.
Keep this offline and secure. Encrypted personal device or secure institutional drive that you control access to. No PHI if you can avoid it; if you must temporarily capture it (e.g., internal screenshot), de‑identify as much as possible or blur identifiers.
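If free‑text notes feel too loose, a structured file works just as well and makes the pattern easier to show later. Here's a minimal sketch, assuming Python 3 and a CSV kept on an encrypted drive you control; the file name, field names, and harm‑level labels are illustrative, not an institutional standard, and nothing in it should contain PHI:

```python
# Minimal sketch of a structured personal safety log (no PHI).
# File path, field names, and harm-level labels are illustrative only.
import csv
from datetime import datetime
from pathlib import Path

LOG_PATH = Path("ai_ehr_safety_log.csv")  # keep this on an encrypted drive you control
FIELDS = ["date", "tool", "what_it_did", "what_should_have_happened", "harm_level"]

def log_event(tool: str, what_it_did: str,
              what_should_have_happened: str, harm_level: str) -> None:
    """Append one de-identified event; harm_level is e.g. 'near miss', 'harm', 'future risk'."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header row the first time only
        writer.writerow({
            "date": datetime.now().isoformat(timespec="minutes"),
            "tool": tool,
            "what_it_did": what_it_did,
            "what_should_have_happened": what_should_have_happened,
            "harm_level": harm_level,
        })

if __name__ == "__main__":
    log_event(
        tool="AI discharge summary tool",
        what_it_did="Auto-generated 'no medication allergies' despite active penicillin anaphylaxis allergy in EHR",
        what_should_have_happened="Allergy list pulled into summary; clinician prompted to verify",
        harm_level="near miss",
    )
```

A plain spreadsheet does the same job; the point is consistent fields, dates, and zero identifiers.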
You’re building a pattern. Patterns are hard to ignore and hard to retaliate against.
Step 2: Protect Today’s Patients First
Before committees and ethics and vendors, you have one job: do not let this system hurt people on your watch.
That usually means three concrete actions:
Disable or bypass the tool for your own practice if you safely can.
Turn off the AI suggestion pane. Stop using the default order set. Un‑check the “auto‑generate” box.
Add human double‑checks where the tool is risky.
- Manually re‑calculate doses
- Re‑read AI‑generated notes before signing
- Cross‑check important data (allergies, vitals, meds) directly in the EHR instead of relying on AI summaries or dashboards
Quietly warn your immediate team.
Short and direct, not dramatic:
- “FYI, the sepsis alert is firing on obviously non‑septic patients lately. Don’t rely on it for decision making.”
- “The AI note generator is inserting normal findings that I did not document. Please read carefully before signing.”
You’re buying time. You don’t fix the system in a day. You block obvious harm while the bureaucracy grinds.
Step 3: Use the Internal Safety Pathway First
You’re a physician in a modern health system. There is almost certainly a formal safety channel. Use it. Correctly.
Typical options:
- Event reporting system (RL6, SafetyZone, or something similar)
- Patient Safety / Risk Management office
- Chief Medical Officer / Medical Director
- Department Quality/Safety Committee
You start with the safest one that still creates a trackable record.
How to file a safety report that actually gets attention
Don’t rant. Don’t write a novel. You want three things:
- Description of the error
- Concrete example
- Clear statement of risk
Something like:
“EHR AI note assistant is auto‑populating exam and history elements that were not performed or elicited. Example: for patient on 10/15/26, AI draft added ‘no focal deficits’ to neuro exam despite patient being minimally responsive and limited exam performed. This creates inaccurate medical records and could lead to missed diagnoses and legal risk.”
Or:
“Pre‑built CHF admission order set includes IV furosemide dosing of 80 mg q6h as default for all patients, including frail or diuretic‑naive patients. This has led to near‑miss hypotension in at least one case. Default dosing poses patient safety risk.”
Key rules:
- Name the specific tool/module (e.g., “Epic NoteWriter AI,” “Cerner sepsis alert,” “XYZ Vendor AI Triage Bot”)
- Use the words “patient safety risk,” “near miss,” or “potential for serious harm”
- Avoid blaming wording like “idiotic system” or “whoever built this is incompetent” – it distracts from the safety issue
Submit it. Save a copy or screenshot your submission confirmation.
Now the ball is in their court, and you have a timestamped record that you raised a concern.
Step 4: Talk to a Human With Actual Authority
Reporting systems are like black holes. Things go in; who knows when anything comes out.
So you also need a human conversation.
Ideal targets, in order:
- Your Department Chair or Section Chief
- The Chief Medical Information Officer (CMIO) or Physician Informatics Lead
- The Chief Quality Officer or Director of Patient Safety
You don’t need all three. You need one who listens and will move.
How to structure that conversation:
Ask for a short, focused meeting:
“I’m seeing some serious safety issues with [AI tool/EHR module] that are putting patients at risk. Can I get 20 minutes to walk you through concrete examples?”
In the meeting:
- Bring 2–3 anonymized examples
- Show how a normal clinician could easily be misled
- Explain what you’ve already done to protect patients
- Make a specific ask
Specific ask examples:
- “I’d like this AI note tool disabled until it can be validated and safety‑checked.”
- “We need to remove this default dose from the order set and require manual entry.”
- “We should issue a temporary practice alert that these AI summaries are not to be used for clinical decision making.”
You’re not asking them to “look into it.” You’re asking them to do something immediate and concrete, even if temporary.
Step 5: Separate Three Different Risks – Patient, Legal, Career
You’re sitting in the middle of three overlapping risk fields:
- Patient harm – obvious.
- Legal exposure – also obvious, but mostly for the institution.
- Your career – less obvious, very real.
You need to manage all three.
Patient risk
You’ve started that process: double‑check, bypass, warn, report. Good.
Legal risk
If the system is:
- Altering your notes after signing
- Auto‑signing things in your name
- Logging you as ordering things you did not order
- Corrupting or hiding critical data (e.g., allergies vanish when imported)
Then you’re in malpractice and medical board territory.
What you do:
- Save evidence of those behaviors (de‑identified screenshots, email replies, IT tickets).
- Emphasize in your reports: “Inaccurate legal medical record being created by system behavior.”
- Consider getting independent legal advice if the system is forcing you to practice in a way you consider below standard of care.
Career risk
Let me be blunt: some institutions punish people who poke their tech vendors. They’ll call it “disruptive,” “not a team player,” or “resistant to innovation.”
You protect yourself by:
- Staying factual and documented
- Keeping your communication calm, short, and focused on patient safety
- Avoiding social media rants or public accusations (at least until you’ve exhausted internal channels and talked to a lawyer if you’re going beyond)
Do not put “this AI tool is garbage and unsafe” on Twitter/X under your real name while you’re on staff. That’s how you become the story instead of the problem.
Step 6: Escalate Levels if You’re Ignored
Here’s the part no one teaches you: what to do when your safety report gets “thank you for your feedback” and nothing changes.
You escalate deliberately:
| Stage | Action |
|---|---|
| 1 | Identify the dangerous error |
| 2 | Document concrete examples |
| 3 | File an internal safety report |
| 4 | Meet with your department lead or CMIO |
| 5 | Monitor and follow up |
| 6 | If the response is inadequate, escalate to higher leadership |
| 7 | If risk is ongoing, seek external guidance or reporting |
Level 1 – Internal safety + departmental / CMIO
You’ve done this. Give them a little time, but not months.
Reasonable expectation:
- Acknowledgment within 1 week
- Some kind of action plan or mitigation within 2–4 weeks for serious issues
If it’s crickets or hand‑waving (“We trust the vendor that it’s safe”), move on.
Level 2 – Institutional leadership
Next options:
- Chief Medical Officer (CMO)
- Chief Quality Officer (CQO)
- IT leadership if they’re actually accountable to clinicians
Your message (email or brief meeting request):
“I have raised a serious safety concern about [AI/EHR tool] through our safety reporting system and with [Name, Role]. The tool continues to [describe hazard] and poses ongoing risk for significant patient harm and inaccurate medical records. I’d like to ensure this is visible at the executive level and discuss temporary risk mitigation while a full review is conducted.”
Again, you are calm, specific, and you tie it directly to patient safety and legal record integrity. Those are words executives hear.
Level 3 – External guidance (quietly)
If you’re still being stonewalled and patients are genuinely at risk, you start getting outside intelligence, not necessarily blowing whistles yet.
Real options:
- State or national specialty society – sometimes have informatics or ethics contacts who’ve seen this before.
- AMA or your country’s medical association – often have digital health and legal resources.
- Malpractice carrier risk management – this is huge and underused. They hate unsafe systems. They will often advise you how to protect yourself and sometimes lean on institutions behind the scenes.
You call or email anonymously or with minimal detail first:
“Hi, I’m a practicing [specialty] in [state]. Our institution is using an AI/EHR tool that is [brief description of hazard]. I’ve reported internally, but I’m not seeing action. I’m concerned about patient harm and my own liability. What are your recommendations for next steps and documentation?”
Do this before you consider reporting to regulators or media.
Step 7: When It’s Bad Enough to Go Outside the Walls
Some situations justify external reporting:
- Clear patient harm is already happening and institution refuses to change the system
- You’re being pressured to sign or use AI outputs you know are unsafe
- Medical records are being altered in a way that hides errors or misrepresents care
- You’re retaliated against for good‑faith safety reporting
Then you’re in whistleblower territory. That’s not a word to throw around lightly.
Three main avenues:
Regulators
- State medical board (if it impacts your ability to meet standard of care)
- CMS or accrediting bodies (e.g., Joint Commission) if systemic
- In some countries, data protection authorities if there’s improper data handling
Malpractice carrier
They’ve got leverage. If enough insured physicians say “this system makes safe practice impossible,” carriers pay attention.
Legal counsel
Particularly if you’re being threatened, disciplined, or your privileges/employment are at risk because you refused to use an unsafe system.
This is where you stop improvising and get actual legal advice. Do not send long, emotional emails to news outlets before you talk to someone who knows healthcare law.
Step 8: Understand Where AI/EHR Problems Typically Come From
You’re not just fighting “the algorithm.” You’re fighting a specific failure mode. Knowing which one helps you argue effectively.
Common patterns:
| Failure Mode | What You See Clinically |
|---|---|
| Unsafe defaults | Overdosing, wrong frequencies, pre-checked orders you didn't intend |
| Hidden automation | AI changes text or orders after you sign |
| Over-trusted alerts | Alerts override clinical reasoning |
| Data mapping errors | Allergies, labs, meds missing or wrong |
| Hallucinating AI text | Fabricated histories or exam findings |
You don’t need to be an informatics expert. You just need to be able to say:
- “This is a bad default problem.”
- “This is post‑signing alteration.”
- “This is an interoperability/mapping error; data looks fine in system A but wrong in system B.”
- “This is AI hallucinating details and creating false documentation.”
That language wakes up the right people.
Step 9: Make It Hard for Anyone to Blame Just You
A classic move: the institution pins it on “user error” or “poor documentation” by clinicians, not the system.
You defend yourself in three ways:
Consistent documentation habits
- Read AI‑generated text before signing
- Add short clarifications when you override or disagree with AI/EHR suggestions if they’re significant:
“AI summary suggested no CHF history; chart review confirms chronic HFrEF.”
- Avoid blindly accepting defaults in high‑risk orders (anticoagulants, insulin, opioids, chemo)
Written records of your concerns
- Keep copies of safety reports you’ve filed
- Save emails you sent raising specific concerns
- Brief, factual personal log of key events (dates, who you spoke with, what was said)
Shared awareness
Quietly normalizing the phrase “the tool is not reliable for X” among colleagues turns it into shared, system‑level awareness instead of your personal quirk.
If the worst happens and a case blows up, you want to be able to show: you recognized the problem, adjusted your practice, and notified the system appropriately.
Step 10: Decide If This Is a Fight You Stay For
One more uncomfortable truth: some institutions will not fix this stuff until a big lawsuit, a front‑page story, or a regulatory hammer forces them.
You are allowed to decide it is not your job to be the sacrificial lamb.
Signs you should consider leaving:
- Leadership minimizes or mocks safety concerns about AI/EHR
- You are explicitly told to “just sign” AI‑generated notes or orders you’re not comfortable with
- Retaliation (schedule changes, threats, bad evaluations) follows your good‑faith reporting
- The EHR/AI is so broken that you cannot reasonably practice standard‑of‑care medicine
Yes, moving jobs is painful post‑residency. But staying somewhere that forces you into unsafe practice will burn you out and put your license and conscience at risk.
If you do consider leaving, frame it this way in interviews elsewhere:
“At my prior institution, we had implemented an AI/EHR tool that introduced safety risks and documentation inaccuracies. After trying internally to address these, it became clear that leadership wasn’t prioritizing correction. I’m looking for a place where clinicians have a real voice in how technology is used in patient care.”
That’s honest, and any reasonable place will respect it.
A Quick Visual of Your Reality
Here’s how your time and attention actually get split once broken tech enters your clinical life:
| Category | Approximate share of your time (%) |
|---|---|
| Direct Patient Care | 45 |
| Manual Double-Checking AI/EHR | 25 |
| Fighting Tech-Related Errors | 20 |
| Administrative and Safety Reporting | 10 |
You’re not crazy for feeling like you’re doing QA for a software company instead of medicine. You are.
What You Can Do Today
Do not wait for a catastrophe to “prove” the system is dangerous.
Your next step today:
Think of the most concerning AI or EHR behavior you’ve seen in the last month. Open a blank document and write three things:
- One specific example (no identifiers)
- Exactly how it could harm a patient
- The simplest immediate action that would reduce that risk (turn it off, change a default, issue a warning)
Once you’ve got those three written, decide: will you file a safety report, schedule a 20‑minute meeting with your department lead or CMIO, or both?
Pick one and do it before your next shift starts.