
The shiny marketing about AI in healthcare is mostly fantasy. What actually happens on rounds is far messier, quieter, and a lot more human than the brochures admit.
You want the truth? Most attendings are already using AI. They just aren’t calling it that. And they definitely aren’t stopping rounds to say, “Let’s consult the large language model now.”
They’re slipping it in between patients. On their phones in the stairwell. On the workroom desktop while you’re “updating the note.” Sometimes it’s brilliant. Sometimes it’s lazy. And sometimes it’s one typo away from being dangerous.
Let me walk you through how this really looks from the attending side.
The Myth vs. The Reality
The myth is the keynote talk: an integrated AI system, seamlessly embedded in the EHR, suggesting diagnoses, flagging sepsis, auto-writing notes, all perfectly curated.
The reality on rounds at most academic and community hospitals right now looks more like:
- A pulmonary attending copying an ABG into ChatGPT to see if the interpretation matches theirs.
- A hospitalist pasting yesterday’s note into an AI assistant to generate today’s “interval events” and assessment.
- A busy ID attending asking an AI model: “Summarize current guideline-based therapy for MSSA bacteremia in 5 bullets.”
- A chief resident secretly using AI to draft patient instructions, then editing the tone before attending sign-off.
No integration. No formal policy. Just ad hoc use — mostly in the margins.
| How often attendings use AI | Share of respondents (%) |
|---|---|
| Never | 20 |
| A few times a month | 35 |
| Weekly | 25 |
| Daily | 20 |
That’s from an internal poll one hospitalist group shared over coffee. Totally unscientific, but it matches what I’ve seen across three institutions.
Where AI Actually Shows Up During Rounds
Let’s go service by service. Because the pattern is surprisingly consistent.
1. The “Check My Thinking” Use
This is the most common, and the one nobody admits out loud.
Picture this: it’s 9:20 a.m., the team is outside a room with a 72-year-old with acute confusion, mild fever, CKD, on 10 meds, with a recent fall. Your attending has a differential in their head — infection, med effect, metabolic, subdural, delirium from hospitalization — but they’re not totally at peace with it.
They excuse themselves to “answer a page” in the hall.
On their phone, they type into an AI model:
“72-year-old man with CKD stage 3, acute confusion over 24 hours, T max 38.1, Na 131, Cr 1.9 (baseline 1.4), meds: sertraline, oxycodone PRN, gabapentin, HCTZ, lisinopril. Recent fall 4 days ago. CT head yesterday without bleed. Give a prioritized differential diagnosis and key next diagnostic steps.”
What they’re doing is not outsourcing their brain. They want to see if the AI surfaces anything non-obvious. Hypoglycemia? Already checked. Sepsis? Thinking about it. Drug-induced hyponatremia? On the list. Subdural missed? Maybe. They want confirmation that their mental map isn’t missing something big.
Most of the time, the model regurgitates a decent UpToDate-level differential. Occasionally it throws in a zebra that reminds them of a path they hadn’t fully considered.
Do they tell the team, “I used AI”? Almost never. They just say, “Let’s make sure we’ve ruled out infection, look again at meds for delirium/hyponatremia, and think about pain control and sleep.”
Behind the curtain, that’s “AI as a second-opinion whiteboard.” Not leading. Just cross-checking.
2. The Documentation Machine
This is where AI use is both most widespread and most abused.
I’ve watched hospitalists at two different systems do the same dance:
- End of rounds, they have 12 notes to write.
- They open yesterday’s note, paste it into an AI tool, and say: “Update this progress note for today’s date, keeping format, but shortening the assessment and adding today’s vitals and labs.”
- Then they paste in today’s vitals and key labs.
The AI spits out a prettified note with fluff like “The patient is resting comfortably” even though nobody actually went back after rounds to see.
The better attendings then edit hard:
- Remove fake statements about what was personally examined.
- Tighten the plan to match what was actually discussed.
- Strip generic lines like “Continue to monitor” that are legally useless.
The lazy ones? They barely read it. Add a line or two. Sign.
That’s the dark truth: AI is already creating notes that overstate what was done.
Ask any coder in a major system how often they can tell a note was AI-inflated. They’ll roll their eyes. The language is too clean, too polished, too “Internist of the Year” for a post-call hospitalist who’s 16 hours into a shift.
But from the attending perspective, the choice is not “write the perfect note manually” vs “use AI.” It’s “deliberately thin notes that barely meet billing requirements” vs “AI-boosted notes that are maybe slightly over-polished but complete.”
Is that good medicine? Sometimes. Sometimes not. But pretending this isn’t happening is delusional.
3. Bedside Explainers and Teaching Moments
Here’s where AI is weirdly useful and actually good for patient care when done right.
At one academic hospital, a cardiology attending I know keeps a chat window open on their workstation on wheels. After explaining heart failure to a patient, they’ll say:
“Let me print you something that explains this in plain language.”
And they’ll type:
“Explain systolic heart failure to a patient at a 10th-grade reading level, in 2 short paragraphs. Emphasize what it means for the heart to be weak, why we use water pills, and how salt affects things.”
They read it quickly, strip out any weird phrasing, and print. Takes 60 seconds. The patient walks away with something far better than the generic brochure from 2005.
Same attending will quietly use an AI model to generate a teaching table for the team:
| Class | Example | Main Benefit |
|---|---|---|
| ACE inhibitor | Lisinopril | Mortality benefit |
| Beta blocker | Metoprolol succinate | Mortality benefit |
| Diuretic | Furosemide | Symptom relief |
| MRA | Spironolactone | Mortality benefit |
They’ll paste a rough prompt, get a clean table, and pull it up on the screen during post-rounds teaching. You think they had time to format that themselves between consult calls?
This is AI as an accelerant for good teaching. The attending still decides what’s correct, what’s pitched at the right level for the learners, and what is or isn’t aligned with current guidelines.
But again, they almost never say, “This came from AI.” They say, “Here’s a quick table I put together.”
Because the culture is still “real clinicians don’t need help,” even as half their life is propped up by digital scaffolding.
4. Guidelines, Summaries, and “What Changed Since I Trained?”
This is where older attendings lean harder on AI than they’ll admit.
A 58-year-old nephrologist isn’t going to read the full new KDIGO guideline front to back the week it comes out. But they are going to encounter a patient where the old mental algorithm feels out of date.
What they’ll quietly do:
“Summarize major changes in 2024 KDIGO CKD guideline for proteinuric CKD compared to 2012. Focus on SGLT2 inhibitors, nonsteroidal MRAs, and BP targets.”
Is that perfect? No. If the model hallucinates a recommendation that doesn’t exist, that’s a problem. The careful ones will cross-check with UpToDate or the actual guideline PDF.
But for scoping what’s changed, AI is ridiculously efficient.
Same for rare diseases that pop up once a year on service. Autoinflammatory syndromes. Weird immunology. That attending who gave you a beautiful 5-minute summary of HLH last month? There’s a non-trivial chance they skimmed an AI-generated summary on their phone in the bathroom 10 minutes before that talk.
Does this make them less of an expert? Not really. The judgment in how they interpret, simplify, and communicate the information is still uniquely theirs. AI is just replacing that old habit of speed-reading a review article on PubMed.
5. Triaging Unfamiliar Medications and Interactions
Rounds on a busy medicine service: a patient is on some bizarre anti-epileptic from another country plus “some supplements” they can’t name. The intern is buried. Pharmacy is slammed. The EMR drug database is clunky.
I’ve watched attendings do this:
“What is brivaracetam? Summarize mechanism, major side effects, and key interactions in 4 bullet points.”
They get an answer faster than the EMR drug database loads. Then they’ll confirm critical parts with a more official source if anything drives management.
Same for herbal supplements:
“Major known interactions of St. John’s wort with cardiovascular meds.”
Is that reliance on non-validated AI? Yes. Is it better than shrugging and ignoring entire categories of interactions? Also yes.
The good attendings will not finalize a management decision off a single AI response. They’ll use it to know what to look up properly.
Where AI Use Quietly Crosses the Line
Now for the part nobody likes to talk about: misuses.
Copy-Paste AI Plans That Don’t Match the Patient
I’ve seen more than one note where the “assessment and plan” clearly came from an AI prompt like:
“Write an internal medicine progress note assessment and plan for a stable patient with COPD exacerbation improving on steroids and nebs, plan for discharge in 1–2 days.”
And what’s in the chart?
- “Continue duonebs q4h around the clock” (patient actually on PRN).
- “Monitor ABGs” (none ordered).
- “Plan for discharge in 1–2 days” (patient actually going home that day).
You can always tell: nice hierarchical headings, generic but polished language, and multiple recommendations that don’t match orders.
This is the inevitable byproduct of two pressures: documentation insanity and naive AI use. The physician signs a plan that doesn’t accurately reflect their real thinking or orders, because they skimmed, not read.
From a medicolegal standpoint, that’s a minefield. Plaintiffs’ lawyers will eat this alive in a few years, and hospital risk management knows it.
But right now, it’s happening quietly. Daily.
PHI and the “We All Know This Is Wrong” Problem
Everyone likes to pretend no clinician ever pastes PHI into public AI tools.
Reality: they do. Tons do. Some carefully remove identifiers. Some don’t.
A hospitalist I know literally typed:
“Summarize this HPI in 3–4 sentences: [full raw HPI with name and unique dates]”
Into a free web AI.
Is that allowed under hospital policy? Absolutely not. Did they know that? Yes. Did they care in the moment when they had 18 patients and a cross-cover admission? Not really.
Most systems are rushing to roll out “enterprise” AI tools that claim to be HIPAA-compliant. But deployment is inconsistent, the workflows are clunky, logins break, and docs fall back to whatever works fastest on their phone.
This is the ethical and regulatory sinkhole that administration is desperately trying to patch from behind.
What Attendings Won’t Admit to Trainees
Here’s the dynamic you’re living inside as a student or resident: you’re being evaluated by people who are themselves learning new tools in secret.
So they sometimes project.
You’ll hear an attending trash AI:
- “I don’t trust any of that stuff.”
- “If you don’t know it, you need to read, not ask a bot.”
- “I better not find AI-generated text in your notes.”
Then you’ll see them, 30 minutes later, in the workroom, clearly reading from the AI-generated summary they just pulled up to explain a pathophys concept.
The subtext is: “I’m allowed to use shortcuts because I’ve already built a foundation. You’re not allowed because you haven’t.”
Is that entirely fair? No. Is there a grain of truth? Yes.
The attendings who are actually good at using AI safely have a few things in common:
- They already know the field well enough to spot nonsense.
- They treat AI like a fast intern: can draft, can summarize, can suggest, but everything is reviewed.
- They correct it aggressively when it’s wrong, which sharpens their own reasoning.
The ones you should worry about are the ones who treat it like an oracle and barely look.
How You, as a Trainee, Should Actually Use AI on Rounds
You’re not going to change what attendings do. But you can control how you integrate this quietly into your own learning and care.
Here’s the pattern I see in the best residents and fellows right now:
They use AI before rounds, never during presentations.
They’ll:
- Drop a complex overnight admission into an AI model (stripped of identifiers) and ask for a differential, then compare it to their own.
- Ask it to “explain in simple language” a weird disease they’re about to present, so they can teach the patient and the medical student better.
- Use it to draft a teaching handout, then edit it to match current guidelines.
They do not:
- Read off AI text while presenting.
- Copy assessment and plans straight into the chart.
- Argue with the attending by saying, “Well, the AI said…”
The strongest move is: use AI to raise your floor, not replace your ceiling.
Meaning: use it so the baseline level of your knowledge and communication is higher and more consistent, but keep doing the real work — reading primary sources, guidelines, and learning pattern recognition at the bedside.
| Step | Description |
|---|---|
| Step 1 | Initial Patient Assessment |
| Step 2 | Your Own Differential |
| Step 3 | Targeted AI Query |
| Step 4 | New Useful Ideas? (if not, Stick With Your Plan) |
| Step 5 | Cross-check With Guidelines |
| Step 6 | Refine Diagnosis and Plan |
| Step 7 | Document With Careful Review |
That’s the ideal flow. What actually happens in reality often skips Step 2 and jumps straight from the initial assessment to the AI query. And that’s where people get into trouble.
Where This Is Headed (And What No One Says Out Loud)
Let me be blunt: AI isn’t going away. The question isn’t “will attendings use it on rounds?” The question is “how visibly and how safely?”
Three things are coming, fast:
1. EHR-embedded AI that writes notes from ambient listening. Several systems already have pilot programs where the attending just talks and the AI drafts the note. On rounds, this will mean the progress note is half-written before the team leaves the room. The role of the resident will shift from “note generator” to “note editor and fact-checker.”
2. System-level alerts powered by AI that quietly change decisions. Sepsis alerts are primitive now. The next wave will be predictive models quietly flagging patients at risk for decompensation or readmission. Attendings will override or follow those suggestions, but they’ll rarely walk you through the model’s logic, because they don’t fully understand it themselves.
3. Formal policies on AI-assisted documentation, with consequences. Hospital lawyers and compliance officers are already drafting language: what level of AI use is acceptable, what must be disclosed, and what constitutes fraud if AI inserts untrue statements. Right now it’s the Wild West. That will not last.
| Year | Predicted AI involvement (%) |
|---|---|
| 2023 | 20 |
| 2025 | 45 |
| 2027 | 70 |
| 2030 | 85 |
Those percentages are what CMIOs are predicting for “some AI involvement in routine documentation and decision support.” The direction is not subtle.
If you pretend this isn’t happening, you’ll be the physician in 5–10 years who is functionally illiterate in the language your entire system is built on. Not a great career strategy.

How to Stay Sharp in an AI-Heavy World
Two practical rules I’ve seen smart clinicians adopt:
1. Always generate your own answer first. Before you ask the model, write down your differential, plan, or explanation. Then compare. If you can’t articulate your own version, you’re not “using AI to learn,” you’re outsourcing thinking.
2. Never let AI be your only source on anything that changes management. Summaries? Fine. Teaching aids? Fine. But if a dose, a diagnostic step, or an escalation decision hinges on it, cross-check with guidelines, UpToDate, or a senior colleague.
AI is like a new kind of intern. Speaks confidently. Often right on bread-and-butter stuff. Occasionally wrong in catastrophic, creative ways. You would never sign an intern’s note or follow their plan blindly without reviewing. Same principle.

FAQ: What You’re Probably Afraid to Ask
Is it “cheating” if I use AI to help draft my H&P or progress note as a resident?
It depends entirely on your program and hospital policy. Some places flat-out ban it in documentation. Others are quietly fine with it as long as the content is accurate and you review thoroughly. What’s always cheating is pasting AI text you barely read and signing your name to statements that aren’t true (exams not performed, discussions not held). That’s not an AI issue; that’s a professionalism and legal issue.
Will attendings judge me if I admit I used AI to understand a topic before rounds?
The honest ones won’t. Many of them are doing the same thing themselves. If you say, “I read UpToDate and also used an AI tool to get a simple explanation,” that’s usually seen as efficient, not lazy. Where you’ll get side-eye is if your only source is “the bot” and you can’t reference a guideline, paper, or reputable resource.
Can I use AI live on rounds when I’m stuck on a question?
Right now, at most places, that will land badly. It feels like you didn’t prepare. Better approach: after rounds, look up what you didn’t know using a mix of AI and traditional resources, then circle back the next day and say, “I looked this up; here’s what I found.” Some progressive attendings might be open to a transparent “let’s ask the tool and critique it together,” but don’t assume that.
Are attendings really using AI for clinical decision-making, not just notes?
Yes, but usually as a secondary input. They’ll use it to broaden a differential, remind themselves of rare associations, or summarize guidelines. They are not (the smart ones, at least) letting it dictate management without independent verification. The main decision-making still relies on their experience, pattern recognition, and system-level tools like order sets and institutional pathways.
How do I learn to use AI “the right way” as a future attending?
Start now by treating AI as a force multiplier, not a crutch. Generate your own thinking first, then compare. Use AI to create teaching aids, patient education materials, and draft notes you meticulously edit. Get familiar with your institution’s approved tools and their limits. And pay close attention to the times when AI is confidently wrong: those are the moments that train your judgment, which is ultimately what you’re being paid for.
Two things to keep in your head as you watch attendings “not” use AI on rounds:
- A lot of them already are — quietly, in the background, mostly to survive documentation and stay current.
- The attendings you want to become are the ones who use AI like a sharp assistant, not a replacement: helpful, fast, never blindly trusted.
You do not need to fear this shift. You just need to be honest about it, sharper than the tool you’re using, and unwilling to sign your name to anything you did not actually think through.