
The clash between your AI-hating attending and AI-loving hospital policy is not philosophical. It’s a workplace minefield you have to walk through without blowing up your reputation.
You’re not stuck between “old-school vs innovation.” You’re stuck between “the person grading you” and “the system paying for your work.” Those are not equal. So you can’t treat them like they are.
Here’s how to handle this without tanking your evals, your sanity, or your future in a world where AI is only going to become more central.
Step 1: Get Very Clear On The Actual Rules
First move: separate vibes from policy.
Your attending saying “no AI on my team” is not the same as the hospital saying “AI is forbidden.” Hospitals are rolling out:
- Epic-embedded AI note generators
- Radiology decision support
- Sepsis prediction tools
- Discharge summary assistants
- Patient message drafting tools
Some of these are officially endorsed. Some are pilots. Some are “we’re testing this, please use it.” You need to know what’s what.
Do this:
Check official documents.
Open your institution’s:
- “Generative AI use policy” (many systems now have one)
- EHR announcements / tip sheets
- GME or resident handbook updates
Often there’s a PDF or intranet page that explicitly lists “approved AI tools” and “prohibited use cases.”
Ask one safe person in authority.
Try: “I’ve seen the hospital rolling out AI tools in the EHR. For trainees, what’s the current expectation on using them for documentation or clinical work?”
Ask:
- Chief resident
- Program director
- GME office
Not: random senior who loves ranting.
Document what you find.
Literally keep a short note (personal file, not hospital computer) that says:
- Hospital-approved tools
- Explicit no-go zones (e.g., “do not paste PHI into external tools”)
- Any emails or links you might need later
Why? Because if this ever becomes “why were you using AI?” you want to be able to calmly show: “I followed institutional policy and used only approved tools.”
Step 2: Understand Why Your Attending Hates AI (So You Can Work Around It)
Your attending’s “no AI” stance usually isn’t about you. It’s about one of these:
- They saw a resident copy a hallucinated fact into a note
- They’re scared of liability and chart audits
- They think it’s lazy and “eroding clinical reasoning”
- They don’t understand the tools but don’t want to admit it
- They resent anything that makes trainees “faster” when they suffered without it
You don’t need to fix their worldview. You do need to map it, so you know how to operate without triggering them.
Listen for key phrases:
- “I don’t trust bots to practice medicine.” → Safety/liability concern
- “You all don’t know how to think anymore.” → Education/competence worry
- “If I see AI notes in my patients’ charts…” → Documentation purity + control
Once you know the underlying fear, you can target your strategy:
- If it’s safety: emphasize double-checking and your own clinical reasoning
- If it’s learning: show you’re using AI as a tool after you think, not instead of thinking
- If it’s documentation control: respect their style in the shared chart, use AI elsewhere
Step 3: Draw the Line Between “Chart Work” and “Private Brain Space”
This is the key move most people miss.
You have two different worlds:
- The official medical record = shared space, under the attending’s name and license
- Your personal cognitive workspace = your own drafts, study, templates, thinking aids
Your attending can absolutely control what goes into the official record on patients under their care. That’s their license. They cannot reasonably police what you do in your own thought process as long as:
- You comply with PHI rules
- You don’t misrepresent AI outputs as your own independent clinical judgment
- The final documented product meets their standards
So split your AI use like this:
- In-chart (EHR notes, orders, messages): follow attending’s rule 100%. No AI assistance on anything they clearly forbid.
- Out-of-chart (your notes, study materials, generic templates): you can still use AI heavily, smartly, and quietly.
Example:
- Attending: “No AI-generated notes. I can smell them a mile away.”
Your move:
- Write the actual note yourself in the EHR
- Use AI on your own device for things like:
- “Explain high anion gap metabolic acidosis for a third-year med student.”
- “Create a generic template for post-op day 1 progress note for laparoscopic cholecystectomy (no PHI).”
- Then adapt that template manually in the chart, from your own brain
You’re respecting their boundary on the shared record while still leveraging AI to make you faster and sharper.
Step 4: Use AI Where It Doesn’t Trip Their Radar
There are several low-drama, high-yield places you can use AI that:
- Don’t touch PHI
- Don’t show up in the chart
- Don’t obviously broadcast “AI did this”
Here’s where to push usage even if your attending is anti-AI:
Knowledge refinement and studying
You can use AI to:
- Break down guidelines (“summarize 2023 GOLD COPD recommendations at resident level”)
- Generate board-style questions
- Compare differential diagnoses
Then cross-check with UpToDate or guidelines. Never accept AI as a primary source.
Presentation practice
Prompt: “Here’s my HPI, PE, labs (de-identified). Rewrite as a tight, 90-second oral presentation for internal medicine rounds, using this structure: One-liner, problem list with assessment and plan.”
Then read it out loud and adapt. You learn the rhythm of crisp presentations.
Patient education drafts without PHI
Example: “Draft a plain-language explanation of heart failure for a 7th-grade reading level, including why we use diuretics and ACE inhibitors.”
You paste or handwrite a modified version into your discharge instructions. No identifiers. No raw copy-paste.
Time-saving for non-clinical work
- Emails (to GME, research mentors, admin)
- Slide outlines for journal club
- Abstract structure drafts (you fill in the real data)
None of this touches the attending’s authority over patient care documentation.
Step 5: If You Want To Push Back (Without Getting Crushed)
Sometimes the hospital or residency leadership is explicitly encouraging AI in documentation (“We invested in this Epic ambient tool, please use it”) and your attending is saying the opposite.
You’re now in a policy conflict. The instinct to argue in real time on rounds? Terrible idea.
Instead, try this controlled escalation path.
5.1 Start with a neutral clarification
Pick a calm moment. Not during rounds. Not when they’re hungry.
Try something like:
“Dr. Lee, I wanted to clarify something. The hospital has been encouraging us to use the new Epic AI draft-notes tool for H&Ps and progress notes. You mentioned you don’t want AI-generated notes for your patients. For this rotation, would you prefer I avoid that tool entirely for your patients, even though it’s hospital-approved?”
You’re:
- Showing awareness of both sides
- Giving them clear control over their service
- Making it explicit that you’re complying despite conflicting signals
If they say “Yes, avoid it”: you do. For their patients, you’re done. Don’t martyr yourself. Your eval matters more than being right about policy.
If they say something muddled (“Use it but don’t let me see it”), your rule is simple: nothing obviously AI-ish in the final note, no raw AI text. You can use it as a draft, but heavily edit it into your own voice.
5.2 If the conflict is bigger than one attending
If several attendings are openly blocking hospital-mandated tools, now this becomes a systems issue.
Your move is not to fight them. Your move is to inform the right people safely.
Talk to:
Chief resident: “We’re being told by the hospital to use the ambient AI tool, but on X service we’re explicitly told never to touch it. What’s the residency’s stance? I want to follow expectations and not get in trouble either way.”
Program director: only if the chief says this is a program-wide problem or encourages you to. Approach as confusion, not complaint: “I’m getting conflicting instructions. How do you want residents to handle this?”
You’re not filing a grievance. You’re asking for alignment. Let leaders fight the policy battles.
Step 6: Protect Yourself Legally and Professionally
You’re practicing in a gray zone that’s going to look very different 5 years from now. You don’t want today’s “innovative shortcut” to become tomorrow’s problem in a lawsuit, chart audit, or professionalism review.
Use these guardrails:
No PHI into external tools without explicit approval.
“External tools” = anything not directly integrated into your hospital EHR and clearly approved. That includes public chatbots on your phone or the web.
Do not document ‘AI said’ as your rationale.
If AI contributed to your thinking, you still document the clinical reasoning and evidence basis. You do not say “We started heparin because ChatGPT suggested PE was likely.” Obviously.
Always be able to defend the content as your own clinical judgment.
If an attending or reviewer points at any line in your note and says “Why did you write this?” you need an answer that does not start with “Because the AI…”
It should be: “Because the patient had X, Y, Z, and based on [guideline/study/standard practice], this is appropriate.”
Do not hide AI usage by lying if asked directly.
If an attending says, “Did you use AI to write this note?” and you did, don’t say no. Instead say: “I used the hospital’s approved tool for a rough draft, then I edited it heavily to ensure accuracy and reflect my own assessment. If you’d prefer, I can stop using that for your patients.”
Take the hit once if needed. It’s better than being caught in a lie that goes on your record.
Step 7: Build Skills Now For the World You’re Actually Going To Practice In
Reality check: Your future job is more AI, not less.
Even if you’re stuck under an “absolutely no AI” attending today, you can quietly build skills so you’re not left behind.
Here’s how to train yourself:
Practice “AI + judgment” on de-identified cases.
Take yesterday’s interesting case. Strip all identifiers.
- Ask AI: “Given this de-identified case, generate a differential and workup plan as an internal medicine attending would.”
- Compare its output to what your actual team did.
- Identify: where is it dumb, where is it surprisingly good?
This teaches you where to never trust it and where it can help.
Learn to spot AI hallucinations fast.
Get in the habit of:
- Never trusting uncited facts
- Always cross-checking numbers, doses, and rare disease “facts” in standard references
- Assuming any confident-sounding but weird answer needs verification
Get comfortable editing AI text into human, concise, clinician-style language.
Most AI-generated stuff is bloated and stiff. Practice:
- “Rewrite this discharge summary in 50% fewer words, keep all relevant clinical info, remove fluff.”
Your actual attending will love your human-written concise note later.
Follow your institution’s AI evolution.
Subscribe to:
- IT / EHR bulletins
- GME announcements about tools
- Any “Clinical AI committee” outputs
You want to know where things are going. This will matter when you’re on job interviews and they ask: “How do you see AI fitting into your practice?”
| AI use case | Risk level (1 = low, 5 = high) |
|---|---|
| Board-style questions | 1 |
| Summarizing guidelines | 2 |
| Drafting emails/slides | 1 |
| Note drafting in EHR | 3 |
| Clinical decision suggestions with PHI | 5 |
Step 8: Scripts For Specific Situations You’re Probably Going To Hit
Let’s get concrete.
Situation A: Attending sees your note and says, “This looks AI-generated. Did you use AI?”
Response blueprint:
- Stay calm.
- Do not get defensive or snarky.
- Be honest but controlled.
Example:
“Yes, I used the hospital’s built-in draft function in Epic to generate an initial version, then I edited and verified all the content myself. I understand you don’t want AI involvement in notes under your name. I’m happy to stop using it for your patients and will write from scratch moving forward.”
If they continue to lecture, absorb it. You get a story, not a lawsuit.
Situation B: Hospital emails: “We expect residents to use the new AI scribe,” attending on day 1: “No one on my team is using that garbage.”
You do not make this your hill.
You say nothing in that moment. After rounds, you ask:
“I saw the hospital messaging about encouraging use of the AI scribe tool. For this rotation specifically, you’d prefer we not use it at all, correct?”
They say yes. You comply. Later, when your chiefs or PD ask why uptake is low, you can say:
“On some services, attendings strongly prefer we don’t use it on their patients, so we’ve been following their direction.”
Let leadership fight leadership. You’re not HR.
Situation C: Co-resident loudly brags about ignoring the attending and using AI anyway
Don’t imitate their risk tolerance.
If they say, “Dude I just paste everything into Bard, it’s amazing, Dr. X has no idea,” your mental response should be:
“That’s how people end up in case studies under ‘unprofessional conduct and privacy violation.’”
You keep your practice clean:
- No PHI off-system
- No AI in shared notes when explicitly forbidden
- No flexing about “beating the system”
If they crash and burn, you’re not collateral damage.

Step 9: When The Conflict Actually Helps You
There’s an upside to this tension if you’re smart.
You’re getting forced to:
- Practice “manual” skills (clean notes, tight presentations, sharp reasoning) under AI-skeptical attendings
- Practice “augmented” skills (AI-assisted thinking, drafting, learning) on your own time or with AI-friendly faculty
That combination will make you better than both extremes:
- Better than the pure traditionalist who wastes hours on clerical work and never learns to leverage tools
- Better than the AI-dependent trainee who can’t think when the system is down
The trick is not to pick a side. It’s to treat this like bilingualism:
- “I speak No-AI Attending Language” on their service
- “I speak AI-Augmented Clinician” in the larger system and my own development
The future of healthcare is going to reward the people who can code-switch between those worlds without drama.
| Situation | Power That Matters Most | Your Best Move |
|---|---|---|
| Attending forbids, hospital vague | Attending | Avoid AI in their patients' charts |
| Attending forbids, hospital pushes | Attending (short term) | Comply on service, use AI elsewhere |
| Attending neutral, hospital encourages | Hospital/system | Use approved tools thoughtfully |
| Attending encourages, hospital unclear | Risk/liability | Clarify PHI/AI policy before use |
A quick decision path before any AI use:
1. You want to use AI for a task.
2. Is the hospital’s AI policy clear? If not, ask your chief or PD first.
3. Is the tool hospital-approved? If not, use AI only for study and personal drafts, never with PHI.
4. What is the attending’s stance? If they forbid it, do not use it in the chart for their patients.
5. Otherwise, use it carefully, verify outputs, and document with your own judgment.
| AI use category | Share of total AI use (%) |
|---|---|
| Studying/learning | 35 |
| Non-clinical writing | 20 |
| Personal templates | 25 |
| Direct note drafting | 10 |
| Clinical decision support | 10 |

FAQs
1. What if my program director explicitly tells us to use the hospital’s AI tools, but my attending still bans them?
Your attending controls the care and documentation under their name. In the day-to-day, that wins. Comply with your attending on that service. Separately, tell your PD or chiefs, “On X service, we’ve been told not to use the AI tool at all, so we’ve respected that.” You’re not disobedient; you’re following the chain of command on the ground while making leadership aware of the mismatch.
2. Is it ever okay to paste de-identified patient info into a public AI tool?
Only if your institution explicitly allows it and you truly understand what “de-identified” means. Most hospitals currently say no, or they’re extremely conservative. In practice, assume: no PHI, no dates, no unique combinations that could re-identify. When in doubt, don’t do it. Use AI for generic patterns and teaching cases, not your live, real patients, unless your system has a sanctioned, integrated tool.
3. How do I get better at writing notes without AI if my attending bans it but I’m slow?
Use AI as a training partner outside the chart, not as a ghostwriter. Have it generate example notes for generic scenarios, then practice rewriting them from memory for your patients. Ask AI to critique your structure: “Make this SOAP note tighter and point out where I’m redundant.” Then, when you’re in the EHR, write from your own head using those improved mental templates. Over a few weeks, your speed and clarity will jump.
4. Could using AI now hurt me later when applying for jobs or fellowships?
Using AI itself won’t hurt you. Sloppy, uncritical, or dishonest use might. What program directors and employers will care about: Can you think independently? Are your notes accurate and readable? Do you respect privacy and policy? If you can say, “I’m familiar with AI tools, I use them within policy as an adjunct, but my clinical reasoning and documentation stand on their own,” you’re in excellent shape. The risk isn’t using AI—it’s outsourcing your brain to it.
Open your last three notes right now and ask yourself: “If I suddenly had to stop using AI entirely tomorrow, could I still produce notes at this level?” If the honest answer is no, you know exactly what to fix this month.