
What Attendings Really Think About AI Clinical Decision Tools

January 8, 2026
16 minute read

[Image: Attending physician reviewing AI-generated clinical recommendations on a workstation in a busy hospital ward]

The dirty secret is this: most attendings don’t “trust” AI clinical decision tools—but they’re already letting them quietly steer patient care.

Let me walk you through what’s actually being said in workrooms, behind closed doors, and in committee meetings. Not the polished “innovation” narrative you get in glossy hospital newsletters. The real thing.

The Two Conversations: Public vs. Private

Publicly, attendings say things like:

“AI is a promising adjunct to clinical judgment.”
“We must ensure these tools are used ethically and carefully.”
“It’s a tool, not a replacement for the clinician.”

That’s the language for town halls, grand rounds, and vendor demos.

Privately, in the call room and the charting area at 11:30 p.m., what you hear is closer to:

“This sepsis alert fires every five minutes and is basically useless.”
“If I follow the AI recommendation and it goes bad, am I on the hook or is the hospital?”
“Honestly, I just click through half this stuff so I can get my notes done.”

There are roughly four tribes of attendings when it comes to AI tools. And you need to understand them, because your career—and your ethical reputation—will be shaped by how you operate around each group.

Typical Attending Attitudes Toward AI Tools (approximate share)

  • Skeptical but Curious: 35%
  • Quietly Dependent: 30%
  • Openly Resistant: 20%
  • Tech Evangelist: 15%

Tribe 1: Skeptical but Curious (largest group)

These are your mid-career folks. 8–20 years out of training. They roll their eyes at buzzwords, but they aren’t stupid; they see where the world is going.

What they really think:

  • “If this thing can keep me from missing the one weird PE, fine. But don’t slow me down.”
  • “I’ll use it, but I’m not telling the vendor it helped unless I’m sure.”
  • “If it contradicts my gut, it better have damn good reasons.”

They’re the ones who will ask you: “What did the AI say? And what do you think?” That’s a test. They’re not actually asking because they trust the tool. They’re checking whether you’ve turned your brain off.

Tribe 2: Quietly Dependent (growing fast)

They won’t admit this in front of leadership, but they are leaning heavily on AI—especially in radiology, ED, and hospital medicine.

You see it when:

  • They “just double-check” abnormal labs against an AI risk score every time.
  • They start reading the AI summary of a chart instead of the original notes.
  • They accept the suggested antibiotic regimen unless there’s an obvious reason not to.

Behind the scenes, these attendings will say things like:

“If the tool says low risk and my gut says low risk, I’m discharging. I’m not losing sleep over that.”
“If it flagged a PE and I didn’t order the CT, I’d never forgive myself.”

The dependency isn’t obvious. It creeps in. First as a safety net. Then as default.

Tribe 3: Openly Resistant

You know them. They still prefer paper rounding lists and will say “we did medicine fine without this nonsense.”

They’re not all wrong.

They’ve lived through EHR rollouts, CPOE disasters, and “smart” order sets that were neither. They’ve seen tools promised as time-savers that turned into click-heavy compliance traps.

Their real fear isn’t technology. Their fear is loss of control over clinical judgment. And liability.

They ask the sharpest question in meetings: “When this thing is wrong, who is legally responsible?” Vendors dodge that every time.

Tribe 4: Tech Evangelists

They’re on the AI committees. They present at conferences. Some of them have side gigs with startups. A few believe deeply; a few like the spotlight.

They’ll say:

“The model’s AUC is 0.89, that’s better than most residents.”
“We’re seeing great improvements in LOS and ED throughput.”

But if you grab them in the hallway and ask, “Do you personally trust it with your own family member?” you’ll get a pause. A little shrug. Then: “It depends on the case.”

That hesitation tells you everything.

What Attendings Actually Use AI Tools For

Forget vendor slides. Here’s where AI tools are already changing behavior.

[Image: Residents and attending reviewing AI risk scores on a shared workstation]

1. Triage and Risk Scoring

Sepsis risk scores, readmission prediction, PE probability, AKI alerts. These are the backbone of current AI deployment.

The attending perspective:

  • They mostly ignore generic “could be sepsis” alerts. Alert fatigue has gutted those.
  • They do pay attention when the risk score is very high and their mental model is “mildly concerned at best.”
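
That first pattern isn’t laziness; it’s arithmetic. Here is a minimal sketch of the base-rate math behind alert fatigue, in Python. Every number in it is hypothetical (a 2% sepsis rate on the monitored unit, a model with 85% sensitivity and 80% specificity), chosen only to show why a respectable-sounding model still buries clinicians in false alarms.

    # Why generic "could be sepsis" alerts get ignored: a base-rate sketch.
    # All numbers are hypothetical and chosen for illustration only.
    prevalence = 0.02      # assume 2% of monitored patients actually develop sepsis
    sensitivity = 0.85     # assume the model catches 85% of true cases
    specificity = 0.80     # assume it correctly clears 80% of patients without sepsis

    true_alerts = prevalence * sensitivity                # alerts that matter
    false_alerts = (1 - prevalence) * (1 - specificity)   # alerts that do not

    ppv = true_alerts / (true_alerts + false_alerts)
    print(f"Share of alerts pointing at real sepsis: {ppv:.1%}")   # roughly 8%

Roughly eight alerts in a hundred point at real sepsis; the rest are interruptions. That is the math underneath “this sepsis alert fires every five minutes and is basically useless.”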

I’ve watched this play out dozens of times:

Resident: “I think we can send her home with close follow-up.”
AI tool: “High risk of 72-hour ED return / decompensation.”
Attending: pauses, then says, “Alright, let’s observe overnight.”

Not because the attending believes the model is clairvoyant. But because now there’s a documented warning. Discharging looks riskier—medically and legally.

So yes, AI tools are quietly nudging disposition decisions. Especially at the margins.

2. Order Suggestions & Care Pathways

Antibiotic choice, imaging appropriateness, DVT prophylaxis, heart failure bundles. Many “AI tools” here are really glorified guideline engines plus pattern recognition.

What attendings say in meetings:
“Standardizing care improves quality.”

What they say after sign-out:
“This thing wants a CT on every borderline belly pain. It’s insane.”
Or: “Fine, I’ll accept the suggested regimen; it’s ID-approved and I don’t have time to argue with the order set.”

In practice, they use AI suggestions as default when:

  • They’re tired.
  • The case is routine.
  • They know the committee will review “variances” from the pathway.

So “clinical judgment” gets reserved for the tricky 10–20%. The rest flows down the AI-assisted rails.

3. Documentation and Coding

This is where many attendings secretly like the tools the most—and trust them the least.

AI that suggests diagnoses to capture, comorbidities to add, or phrases to drop into notes?

They know exactly what’s going on: revenue optimization in a lab coat.

I’ve heard:

“If this thing wants to upgrade every pneumonia to sepsis, I’m not doing that.”
“This is clearly tuned for DRG, not reality.”
But also:
“Sure, if the note generator saves me 15 minutes, I’ll edit it.”

Here’s the split:

  • Ethically conservative attendings will refuse to accept suggested diagnoses they don’t truly believe.
  • Burned-out attendings will hit accept, then skim for glaring nonsense.

You will be expected to “review and edit” these AI artifacts. In reality, a lot slips through untouched. Everyone knows this. Nobody says it on the record.

4. Image and Pattern Recognition

Radiology, derm, pathology, ophthalmology—this is where AI actually scares some attendings and impresses others.

Off the record, a senior radiologist once told me:

“The AI is quite good at catching the obvious nodules and pulmonary emboli. It’s mediocre at subtle patterns that actually require experience. But administration only cares about the false negatives it prevents, not the noise it produces.”

So you get:

  • Attendings who over-rely: “If the AI didn’t flag it, it’s probably fine.”
  • Attendings who under-rely: “I read the scan, then glance at the AI just to see if it missed what I saw.”

Your job, if you’re in training, is to mirror the second group. Read first. Check AI second. Speak that out loud on rounds. Faculty notice.

The Real Fear: Liability and Blame

Let’s talk about what attendings actually worry about at 3 a.m.

How AI Use Shifts Perceived Liability

  • Follow AI, bad outcome: “I ignored my clinical sense for a black box.”
  • Ignore AI, bad outcome: “Plaintiff lawyer will crucify that alert in court.”
  • No AI tool present: “At least it is the usual standard of care debate.”
  • AI contradicts guidelines: “I will always default to published guidelines.”

Three quiet realities:

  1. Most attendings have no idea how the model works beyond buzzwords like “machine learning” and “neural network.”
  2. They do understand that the medicolegal system will not accept “the computer told me so” as a defense.
  3. They suspect that administration is more excited about metrics than malpractice exposure.

So they develop a few survival rules:

  • If AI aligns with guidelines and my judgment, I’ll happily document that.
  • If AI contradicts my judgment, and I’m going against it, I’m documenting my rationale. In detail.
  • If AI is clearly wrong, I ignore it—but I’m silently angry that I had to waste time dealing with it.

You’ll see attendings ask the awkward questions in implementation meetings:

“Will your company provide malpractice coverage when your algorithm is directly implicated?”
“Can I see the training data? Were our patients represented?”

Vendors usually respond with word salad about “partnerships” and “continual learning.” That’s when the sharp attendings mentally downgrade the tool from “ally” to “risk.”

Ethical Tension: Patients vs. Product vs. System

The ethics here aren’t abstract. They show up at the bedside.

AI Influence on Clinical Decisions

  Patient data → AI tool output.
  • Output aligns with clinician judgment → the AI quietly reinforces the decision.
  • Output conflicts with clinician judgment → the clinician must justify the choice → extra documentation time → perceived legal risk → the clinician may default to AI next time.

Three conflicts attendings feel but rarely articulate fully to learners:

  1. Patient-centered care vs. system metrics

AI tools are often optimized for length of stay, readmissions, throughput, or cost. Those are system goals. Sometimes they’re aligned with patient interests. Sometimes they’re not.

Example: an AI nudging early discharge to hit LOS targets in a borderline safe patient. Is that good medicine? Depends whom you ask.

  2. Transparency vs. black box

Ethically, patients have a right to know when an opaque algorithm is influencing care. In practice, almost nobody is saying:
“Part of my decision is based on a predictive model you cannot see or understand.”

Attendings know this would open a can of worms. So they phrase it as:
“We use some advanced tools in the background to help us risk-stratify patients.”

Technically true. Ethically thin.

  3. Autonomy vs. pressure

Residents and juniors will feel subtle pressure to agree with AI recommendations—because attendings feel subtle pressure to not be the outlier who “ignores evidence-based tools.”

The ethical danger is obvious: you shift from asking “What’s right for this patient?” to “What will be easiest to defend and document?”

The brave attendings explicitly push back on this. They’ll say out loud:
“I know the AI says X. I think that’s wrong for this patient. Here’s why.”

You should pay attention to those people. They’re modeling the kind of physician you want to be.

What This Means for You as a Trainee

You are the generation that will be expected to be “good with the AI.” That’s not just tech literacy. It’s professional survival.

How Residents Report Using AI in Clinical Work (approximate share)

  • Ignore Mostly: 10%
  • Check but Rarely Follow: 20%
  • Use as Safety Net: 40%
  • Default to AI for Routine: 20%
  • Heavily Dependent: 10%

1. Never Lead with “The AI Said…”

On rounds, if you start with:
“I used the AI tool and it said low risk, so I think we can discharge”—
the better attendings will immediately ask: “And what do you think?”

Use this structure instead:

  • First: Your assessment, in your own words.
  • Second: Relevant clinical data.
  • Third: “For context, the risk tool estimated X%, which is consistent/inconsistent with my assessment.”

You’re signaling: AI is a tool, not your brain. Attendings respect that.

2. Learn Where the Tool Is Blind

Every AI tool has blind spots. Attendings might not know the statistics, but they know the patterns of failure:

  • Missing unusual presentations in atypical populations.
  • Overcalling common things in low-risk patients.
  • Underperforming in language, race, or age groups underrepresented in training data.

Ask the uncomfortable question early:
“Has this model been validated specifically in our patient population? Any known disparities?”

Most people won’t have an answer. That’s the point. You’re surfacing the ethical issue.
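
If anyone ever does hand you the data, the check itself is not exotic. Here is a minimal sketch of what “validated in our patient population” could look like in practice. The file name and column names (local_validation.csv, risk_score, outcome, group) are invented for illustration; the idea is simply to compute the same discrimination metric overall and per subgroup on local patients.

    # Minimal sketch of a local subgroup check; not a vendor-grade validation.
    # Assumes a hypothetical extract "local_validation.csv" with columns:
    #   risk_score (model output, 0-1), outcome (1 = event occurred), group (e.g. language or age band)
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("local_validation.csv")

    # Discrimination on our patients, not the vendor's development cohort.
    print(f"Overall AUC: {roc_auc_score(df['outcome'], df['risk_score']):.2f}")

    # Same metric per subgroup: a model can look fine overall and still
    # underperform in a group underrepresented in its training data.
    for group, sub in df.groupby("group"):
        if sub["outcome"].nunique() < 2:
            continue  # AUC is undefined when a subgroup has all, or no, events
        auc = roc_auc_score(sub["outcome"], sub["risk_score"])
        print(f"{group}: n={len(sub)}, event rate={sub['outcome'].mean():.1%}, AUC={auc:.2f}")

A meaningful AUC gap between groups, or a very different event rate at the same score, is exactly the kind of finding to bring to a quality or informatics meeting rather than argue about in the hallway.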

3. Document Like a Lawyer, Think Like a Clinician

If you go against an AI recommendation, document why. Not just “disagreed.” Spell out your reasoning. Future you—sitting in a deposition—will thank you.

If you go with the AI, still make your own reasoning explicit. Do not let the chart read like you outsourced judgment to a black box.

Ethically, the line is simple:

  • It’s fine to be informed by AI.
  • It’s not fine to be replaced by it while pretending otherwise.

4. Watch What Attendings Actually Do, Not What They Say

In any given rotation, pick one attending and quietly track:

  • How often do they reference AI tools?
  • Do they visibly change management based on them?
  • Do they ever explain to patients that an AI is involved?
  • Do they model skepticism, blind trust, or a balanced approach?

You’ll quickly see the gap between stated policy and lived practice. That gap is where your professional judgment has to live.

What Attendings Secretly Want from AI (But Rarely Admit)

Here’s the truth nobody in leadership says out loud: most attendings don’t care if AI is “state-of-the-art.” They care if it makes their day less miserable without putting their license at risk.

[Image: Tired attending physician alone at a workstation at night, surrounded by open charts and screens]

They want:

  • Fewer mindless clicks, not more.
  • Fewer irrelevant alerts, not “improved sensitivity.”
  • Help catching rare but catastrophic misses.
  • Honest explanations they can understand, not buzzword soup.

They do not want:

  • Administrators using AI outputs as retrospective cudgels: “The model predicted high risk; why didn’t you admit?”
  • Being on the hook for defending decisions made on top of proprietary logic they aren’t allowed to see.
  • Residents parroting AI outputs instead of learning to think.

The attendings you’ll respect most are the ones who say things like:

“I will happily use any tool that shows, in my patients, that it makes care safer and outcomes better. But I’m not outsourcing my clinical brain because a brochure says AUC 0.92.”

That’s the line you should adopt.

How This Ties Into Your Own Professional Identity

This isn’t just about gadgets. It’s about what kind of physician you’re becoming.

[Image: Medical trainee and attending discussing an AI recommendation on screen]

You’re going to be practicing in a world where black-box tools are everywhere, marketed under the language of “evidence-based support.” Patients won’t be able to see the difference between good and bad AI. Many administrators won’t either.

Your ethical responsibility is very simple and very hard:

Protect the patient’s interest, not the algorithm’s reputation, not the vendor’s contract, not the hospital’s metrics.

That means:

  • Being willing to say “no” to AI when it’s wrong—even if the system wants you to comply.
  • Being willing to say “yes” to AI when it genuinely outperforms your own pattern recognition—without ego.
  • Being transparent with patients, at least in meaningful ways, about how these tools are shaping their care.
  • Being the one in the room who actually understands the difference between correlation, prediction, and causation.

And one more thing no one tells you: your generation will end up teaching us how to use this stuff safely. Many mid-career attendings are barely keeping up. If you can speak both languages—clinical nuance and AI literacy—you’re going to have a lot more power than your PGY level suggests.

Use it wisely.


FAQ

1. Should I mention AI tools explicitly in my notes when they influence a decision?

Yes, but carefully. Document AI as supporting context, not as the primary rationale. For example: “Based on clinical assessment and risk factors, patient judged low risk for PE; AI-based risk tool also estimated low risk, consistent with clinical judgment.” Do not write “Discharging because AI score low”—that invites trouble and undermines your professional role.

2. Is it ever acceptable to ignore an AI alert without documenting why?

Clinically, people do it all the time. Legally and ethically, it is risky for high-stakes alerts. For low-value, obviously spurious alerts (e.g., generic best practice reminders), ignoring without detailed documentation is common and usually defensible. For anything tied to major adverse outcomes—sepsis, PE, stroke—you should either meaningfully address it or briefly document why it’s not applicable in that case.

3. How can I ethically push back if my institution pressures us to follow AI-driven pathways?

You do it in the language institutions understand: quality, safety, and equity. Ask for local validation data. Ask whether disparate performance across demographic groups has been evaluated. Document cases where strict adherence would have harmed a patient. Bring those to M&M or quality meetings. Frame your pushback as protecting patients and the integrity of evidence-based care, not as technophobia. That’s how you get listened to instead of dismissed.

Key points:

  1. Most attendings are using AI tools more than they admit, but trust them less than leadership claims.
  2. Ethically, you must treat AI as a tool that informs—not replaces—your clinical judgment, and document accordingly.
  3. Your value in this new environment will come from being able to think critically with AI in the room, not blindly for or against it.