Residency Advisor

What If AI Makes My Chosen Specialty Obsolete? How to Think About Risk

January 8, 2026
15-minute read

[Image: Medical student looking at a hospital of the future with AI technology]

A third-year I worked with once stood in the radiology reading room, staring at three massive monitors. On one screen: a CT abdomen. On the second: a stack of studies waiting. On the third: an AI tool auto-highlighting lung nodules in bright yellow. He whispered, almost under his breath, “What if this thing does my future job better than me?”

I’ve heard versions of that sentence in so many rooms now. In the OR with a surgery resident, in clinic with a pathologist, on rounds with a hospitalist. It’s the same fear hiding behind different jokes: What if I pick a specialty that AI kills?

Let me say the quiet part out loud: you are not crazy for worrying about this. The uncertainty is real. But a lot of the panic is badly aimed.

The Core Fear: “What If I Bet My Life on the Wrong Specialty?”

This is what it actually feels like, right?

You’re about to lock in a path that takes a decade: Step exams, applications, residency, maybe fellowship. You’re trying to pick between radiology vs IM, path vs derm, surgery vs anesthesia. In the background, your feed is full of headlines:

  • “AI beats radiologists at reading mammograms”
  • “AI can predict sepsis hours before clinicians”
  • “Robot does surgery with better precision than humans”
  • “ChatGPT passes medical licensing exams”

And your brain goes straight to the worst-case scenario:
I pick a specialty. By the time I’m an attending, the algorithms do 80% of it. I become… what? A button-pusher? A redundancy? The awkward human “just in case”?

You’re essentially asking:
Can AI make my chosen specialty obsolete by the time I’m actually practicing?

Blunt answer:
Completely obsolete? Extremely unlikely.
Significantly changed? Almost guaranteed.
Made miserable if you pick poorly and don’t adapt? Possible.

So the real problem isn’t “What if AI replaces my specialty?”
It’s “What if I choose a path that leaves me trapped in the part of medicine AI eats first?”

That’s something you can think through rationally.

Where AI Is Actually Strong (And Where It’s Weirdly Dumb)

Before panicking, you have to be honest about what AI actually does well.

AI is terrifyingly good at:

  • Pattern recognition at scale: imaging, ECGs, pathology slides, skin lesion photos
  • Repetitive, rules-based tasks: prior-auth paperwork, basic documentation, order sets
  • Triaging risk from huge datasets: predicting readmissions, sepsis risk, resource use
  • Summarizing and drafting: notes, letters, templated patient instructions

AI is still hilariously bad (or at least unreliable) at:

  • Nuanced interpersonal work: reading family dynamics, managing emotions in real time
  • Responsibility and liability: standing in court and saying “I own this decision”
  • Edge cases and weird presentations: when the patient “just doesn’t fit”
  • Real-world constraints: social determinants, insurance insanity, resource limitations
  • Ethics in messy gray zones: balancing autonomy, harm, family wishes, cultural context

So if your future job is:

  • 90% staring at screens, scanning for abnormalities
  • 90% doing the same variation of a technical task or report
  • 90% protocol-driven decisions with clear inputs and outputs

Then yes, that’s the area where AI will lean the hardest.

But that doesn’t automatically mean “no humans needed.” It means: fewer humans, doing more oversight, exceptions, complex calls, and human-facing work.

AI Disruption Risk by Task Type (relative, 0–100)

  • Repetitive Image Review: 90
  • High-Volume Documentation: 85
  • Complex Diagnosis: 55
  • Procedures: 40
  • Family Communication: 20

The fear: My whole specialty is that first category.
The reality: Almost no specialty is only that category—unless we insist on practicing that way and refuse to adapt.

Specialties People Are Most Afraid AI Will “Delete”

Let’s just say the names instead of dancing around them: radiology, pathology, dermatology, some parts of anesthesiology, even primary care (because of chatbots).

You’ve probably heard:
“Don’t go into radiology, AI will kill it.”
“Path is doomed; computers read slides now.”
“Derm is just computer vision with prescriptions.”

I’ve watched this conversation mutate over the last few years. It always sounds confident. It’s almost always shallow.

Let’s walk through a few.

Radiology

The nightmare scenario:
You spend a decade training. Then some hospital buys an algorithm that reads CTs and MRIs faster, cheaper, and more accurately. Admins say: “We only need 20% of the radiologists we used to. Enjoy unemployment.”

What’s actually happening right now in many places:

  • AI flags critical findings (like brain bleeds) so they’re read sooner
  • AI pre-screens normal studies so radiologists can focus on complex ones
  • AI measures stuff (volumes, lengths, densities) faster than a human with a mouse
  • Radiologists still sign the report, still carry legal responsibility

Is job structure changing? Definitely.
Is demand evaporating? Not remotely. Imaging volume is exploding.

The real risk in radiology isn’t “no jobs.” It’s this:
You end up in a high-volume, high-efficiency environment where your day is supervising AI outputs, churning through edge cases, and signing your name under time pressure.

So the question becomes:
Would you be okay being the human at the center of that?
Some people absolutely would. Some would hate it.

Pathology

Similar fear: slides + AI = fewer humans.

AI is already strong at:

  • Screening Pap smears
  • Flagging suspicious areas on digital slides
  • Quantifying things like mitotic counts, staining

But pathology is also:

  • Tumor boards
  • Complex classification and staging
  • Integrating molecular data, clinical presentation, radiology
  • Talking to surgeons about margins, to oncologists about next steps

Most of that isn’t going away. But the low-complexity, high-volume stuff? That will get filtered.

Again, you’re not choosing “obsolete or safe.”
You’re choosing: “Am I okay being the person doing the complex 20% with the full responsibility, while AI helps with the low-yield 80%?”

Dermatology, Anesthesia, Primary Care… and Honestly Everyone

Derm: computer vision will absolutely assist with triage—who needs biopsy, who can be reassured. But there’s still:

  • Managing complex chronic disease (psoriasis, autoimmune stuff)
  • Cosmetic work (injectables, procedures—robots are nowhere close)
  • Dealing with patients who are anxious, demanding, or have overlapping conditions

Anesthesia: Yes, closed-loop systems will titrate drips. But people forget airway disasters, unexpected reactions, weird anatomy, and the absolutely brutal responsibility of “no one dies on my watch.” AI doesn’t want that job.

Primary care: chatbots can do “Should I see a doctor?” or “What does my lab mean?” But chronic disease in the context of poverty, trauma, family dysfunction, and 15 other meds? That’s not getting fully automated.

So what’s the pattern?

AI will hollow out certain tasks within every specialty. It will not neatly delete entire fields. It’ll reward people who:

  • Gravitate toward complex decision-making
  • Are comfortable being accountable
  • Lean into communication and systems-level thinking
  • Are willing to re-skill as tools evolve

That’s the part you have to plan for.

How to Actually Think About Risk (Instead of Just Spiraling)

Here’s the uncomfortable truth: There is no “safe specialty” from change.

If it’s not AI, it’ll be reimbursement shifts, policy changes, corporatization, workforce oversupply, or something else you can’t predict.

So how do you make a decision that won’t feel stupid in 20 years?

You stop asking “Which specialty is AI-proof?”
You start asking “Which specialty gives me leverage even with AI?”

Let’s break that down.

1. Identify What Parts of the Work Are Most Automatable

When you shadow or rotate, pay attention to tasks, not titles.

Ask yourself:

  • How much of this day is repetitive pattern recognition or simple decisions?
  • How much is “if X then Y” logic that an algorithm could do at scale?
  • How much is real human interaction, negotiation, coaching?
  • How much is high-stakes judgment where someone must own the outcome?

If 90% of what excites you about a specialty is literally scrolling through images solo in a dark room… that’s a flag. Not “run away,” but “this is exactly where AI will squeeze.”

On the other hand, if what you love is:

  • Running a complex code
  • Managing uncertainty
  • Explaining tradeoffs to families
  • Balancing evidence, guidelines, and patient-specific chaos

That’s much harder to automate away.

2. Separate “Task Threat” from “Career Threat”

AI can take over a task without killing a career.

Example: scribing and templated notes. AI will eat a lot of that. Does that mean physicians are obsolete? No. It means your “note-writing” time might morph into “note-editing + patient time + coordination.”

You should be asking:

  • If AI took the bottom 30–50% of low-complexity tasks off this specialty, what would be left?
  • Do I actually want to do what’s left?

If the answer is “the leftover work is all high-pressure liability with no joy,” that’s a problem.
If the answer is “the leftover stuff is what I actually care about,” that’s… ideal.

3. Look at How Adaptable the Specialty Culture Is

Some fields are already experimenting hard with AI. Others are pretending nothing is happening.

Pay attention during rotations and when talking to residents and attendings:

  • Do they know what tools are coming?
  • Are they talking about how to integrate or oversee algorithms?
  • Are they bitter and dismissive (“this is all hype”) or curious and strategic?

You want to be in fields where people say things like:

  • “Yeah, we’re piloting this AI triage tool—my job is changing, but I’m excited.”
  • “We’re figuring out what parts of our workflow we can safely automate.”
  • “Residents now learn how to check when the AI is wrong, not just how to do everything from scratch.”

That’s not just about tech. It’s about survival instincts.

4. Think in Options, Not Traps

Medicine isn’t as rigid as it looks from the outside.

People pivot. They do:

  • Administrative roles (CMO, quality, informatics)
  • Industry (medtech, pharma, AI startups)
  • Subspecialty shifts (hospitalist to palliative, EM to urgent care, radiology to informatics-heavy roles)

You’re not nailing your feet to the floor forever. Yes, there’s a sunk cost. Yes, switching is painful. But it’s not impossible.

Possible Career Evolutions in an AI-Heavy Future

  • Residency in Chosen Specialty
  • Clinical Attending
  • Subspecialize Further
  • Clinical Informatics Role
  • Quality and Safety Leadership
  • Medical Education Focus
  • Industry or Startup Role
  • AI Tool Oversight

The skill that matters most long-term? Being the kind of person who can learn a new system, ask good questions, and not cling to “we’ve always done it this way.”

How to Choose a Specialty Knowing AI Is Coming

So, practically, what do you do? Because you still have to rank programs, pick electives, commit to stuff while your brain is quietly screaming “What if this is all a mistake?”

I’d use a few filters.

Filter 1: Do you like being the person in the room who owns hard decisions?

AI will be good at suggesting things. It will not stand in front of a crying family and say, “We need to talk about what your mom would have wanted.”

If you:

  • Avoid tough conversations
  • Hate conflict
  • Don’t want to be “the decider”

Then the areas where you hide behind screens may be exactly where AI crowds the hardest.

If you:

  • Weirdly like being the point person
  • Don’t mind saying, “This is my recommendation and why”
  • Can tolerate emotional intensity

You’ll have leverage in almost any field.

Filter 2: Do you enjoy complexity more than repetition?

AI thrives on repetition. It struggles with messy, nonstandard situations.

Ask yourself honestly: did you love that one neat, textbook case because it was clean and tidy? Or did you like the confusing one where the labs didn’t match the story and you had to think?

If your brain lights up at complexity, ambiguity, and “it depends,” you’re on safer ground.

Filter 3: Does the specialty give you optionality?

Look at the off-ramps built into each field.

A few examples:

Examples of Specialty Optionality

  • Internal Medicine: Hospitalist, subspecialist, palliative, informatics
  • Radiology: Interventional, informatics, AI oversight roles
  • Pathology: Molecular, lab management, diagnostics companies
  • Anesthesiology: Critical care, pain, perioperative medicine
  • Family Med: Primary care, urgent care, leadership, pop health

You don’t need a perfect plan. Just avoid paths where there’s only one narrow way to practice and that way is exactly what AI is targeting.

Filter 4: Can you see yourself as someone who uses AI, not fights it?

If the idea of working with AI tools makes you defensive or disgusted, that’s a warning sign. Because they’re coming. Everywhere.

The people who will do best aren’t the ones who deny or worship the tech. They’re the ones who treat it like a very smart but unreliable intern:

  • Helpful
  • Fast
  • Not fully trustworthy
  • Needs supervision

If you can be that attending for an algorithm, you’re going to be fine.

Normalizing the Anxiety (Because Yeah, This Is Scary)

You’re not just choosing a job. You’re trying to predict a future you don’t control. That’s a horrible feeling.

A few grounding thoughts:

  • Every generation of physicians has faced some “this will destroy the profession” tech: antibiotics, CT scanners, EMRs, robotic surgery, telehealth. The work changed. The need for humans didn’t vanish.
  • The most “at-risk” specialties are also the ones leaning hardest into AI governance, safety, and leadership. Radiology societies, for example, are obsessed with being at the table, not on the menu.
  • You will graduate into a world where AI is already present. That actually helps. You’ll train with it, not suddenly be blindsided five years into practice.

The hopeless version of this story is:
“I picked the wrong specialty, AI took it, and I’m stuck.”

The more realistic version is:
“I picked a specialty that changed, learned new tools with everyone else, and ended up doing a different version of the job than my attendings—but still very much needed.”

Years from now, you’re not going to be judged on whether you picked the mathematically optimal “AI-proof” niche. No one can even calculate that. You’ll be judged on whether you adapted.

You can do that from almost any starting point.


FAQ

1. Should I avoid radiology or pathology entirely because of AI?

No. Avoiding an entire field just because it’s AI-exposed is overreacting. Those specialties will change a lot, but they’re also building some of the strongest roles in AI oversight, safety, and implementation. If you love image-based or slide-based thinking, it’s smarter to go in with eyes open, learn the tools early, and aim for the high-complexity, decision-heavy side of the field than to force yourself into a “safer” specialty you resent.

2. Is primary care at risk because of chatbots and virtual care?

Pieces of low-level triage and basic education will absolutely be handled by AI assistants and online tools. But that’s the shallow end of primary care. The core—managing multiple chronic conditions, complex social situations, mental health, family dynamics—doesn’t go away. What changes is how you get information and how much routine stuff you can offload. If you like long-term relationships and big-picture care, you’ll still have a role. The job description just won’t look identical to 1995 family medicine.

3. Will there be fewer residency spots or jobs because of AI?

Some specialties might feel pressure if their workload per physician drops dramatically. But real-world healthcare is already understaffed, overburdened, and drowning in demand. Most realistic scenarios use AI to keep up with volume and reduce burnout, not slash headcount. Your bigger job-market risks are location, oversupply in certain popular subspecialties, and health system consolidation—not AI alone. Paying attention to workforce trends and being a bit flexible about geography will help more than trying to “outguess” AI.

4. How do I talk about AI in interviews or personal statements without sounding naive?

Be concrete. Don’t say, “AI will revolutionize medicine.” Everyone says that. Instead, mention specific use cases you’ve seen or read about (like AI sepsis prediction, radiology triage tools, or auto-generated notes), then explain two things: what excites you and what worries you. Then make it personal: how you see your future role as someone who uses these tools, checks their errors, and advocates for patients. That sounds thoughtful, not gimmicky—and shows programs you’re not sleepwalking into the future.

Years from now, you won’t remember every article you read about AI and medicine. You’ll remember how you chose to move forward anyway, with incomplete information, and still built a life that felt like it was yours.
