
I’m Not ‘Techy’—Will I Fall Behind in an AI-Driven Healthcare System?

January 8, 2026
14 minute read

[Image: Medical student anxious in front of multiple AI healthcare dashboards]

What happens when every residency website starts bragging about “AI-enabled workflows” and you still have to Google how to print double-sided?

Same.

The Fear Underneath the Buzzwords

Let me just say the quiet part out loud: it feels like medicine is quietly turning into a computer science degree with a stethoscope on top.

You see headlines like “Radiologists replaced by AI,” “GPT diagnosing rare diseases,” hospitals advertising “AI command centers,” and you’re sitting there thinking: I barely survive Epic. I click the wrong tab and lose my whole note. If this is the direction everything is going… am I screwed?

That’s exactly where my brain goes:

  • What if programs start preferring “techy” applicants?
  • What if attendings expect you to just magically “optimize workflows” with AI tools?
  • What if my lack of coding or data science literally makes me a worse doctor?
  • What if, ten years from now, I’m the dinosaur no one wants to work with?

Here’s the uncomfortable truth no one on Twitter/X says directly:
AI is going to be everywhere in healthcare. Not optional. Not a fad.

But here’s the other side of that, which people conveniently skip:
Being “techy” is not the same as being employable in an AI-heavy hospital. And the bar to “not fall behind” is way lower than your anxiety is telling you.

Let’s dissect this like a Step question.


What “AI-Driven Healthcare” Actually Looks Like (Not the Sci-Fi Version)

People talk about AI like it’s one giant sentient brain. It’s not. It’s a bunch of awkward, sometimes helpful, sometimes stupid tools duct-taped onto already clunky hospital systems.

Most of what’s coming for you as a med student / resident is more like:

  • Your EHR (Epic, Cerner, whatever) adds AI note suggestions, drafted assessments and plans, or auto-generated discharge summaries.
  • Decision support tools that pop up saying “This patient might meet sepsis criteria” or “Consider PE in this context.”
  • Radiology reports pre-drafted by AI, then reviewed by humans.
  • Triage systems that flag high-risk patients based on vitals and labs.
  • Chatbots for scheduling, simple follow-up instructions, maybe some patient messaging.

That’s the core reality for most frontline clinicians. Not that you’re suddenly coding neural networks between seeing patients.
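And to demystify that “might meet sepsis criteria” pop-up from the list above: a lot of those alerts are closer to threshold rules than to deep learning. Here’s a minimal Python sketch, loosely SIRS-flavored and purely illustrative (the function name and cutoff wiring are mine, not any vendor’s):

```python
# Minimal sketch of what a lot of "AI" decision support looks like under
# the hood: threshold rules over vitals and labs. Loosely based on the
# classic SIRS criteria; illustrative only, not a deployable sepsis screen.

def sirs_criteria_met(temp_c: float, heart_rate: int, resp_rate: int, wbc_k: float) -> int:
    """Count how many SIRS-style criteria this patient meets."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,  # fever or hypothermia
        heart_rate > 90,                 # tachycardia
        resp_rate > 20,                  # tachypnea
        wbc_k > 12.0 or wbc_k < 4.0,     # WBC (thousands/uL) high or low
    ]
    return sum(criteria)

# The "pop-up" logic: two or more criteria -> nudge the clinician.
if sirs_criteria_met(temp_c=38.6, heart_rate=104, resp_rate=22, wbc_k=13.1) >= 2:
    print("This patient might meet sepsis criteria. Consider review.")
```

Newer tools layer statistics on top of rules like these, but the takeaway stands: it’s pattern-matching on inputs you can reason about, not an oracle.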

So what do you actually have to do with this stuff? Three things:

  1. Know where it lives in your workflow (which buttons to click, what screens to check).
  2. Understand, at a high level, what it’s good at and where it fails.
  3. Decide when to trust it and when to ignore it.

That’s judgment. Not “being techy.”

Let me show you how this plays out.

[Bar chart] Where Clinicians Actually Interact With AI Daily (relative values)

  • EHR Notes: 70
  • Clinical Decision Support: 60
  • Imaging Tools: 45
  • Triage/Risk Scores: 50
  • Patient Messaging: 35

Most of your “AI interaction” will be invisible-ish. It’ll feel like “new features” more than “the robots are here.”


Your Worst-Case Scenarios vs Reality

1. “Programs will reject me because I’m not ‘AI fluent.’”

I’ve looked at a lot of program websites and talked to residents. You know what PDs still care about most? People who:

  • Show up
  • Don’t crumble on call
  • Can communicate like an adult with patients and nurses
  • Aren’t a nightmare to work with

Do some programs flex “AI research” and “informatics tracks”? Absolutely. They love that stuff for branding, donors, and attracting the small group of AI-obsessed applicants.

But are they actually filtering out applicants because you don’t have “machine learning” on your CV? No. That would be insane given the applicant pool.

What they might care about:

  • You’re not hostile to new tech.
  • You can learn new systems without melting down.
  • You won’t be the attending in 2035 who says, “We didn’t have this in my day, so I refuse.”

Being “open and trainable” beats “already an expert” for most clinical roles.

2. “If I don’t learn to code, I’ll be obsolete.”

This one annoys me the most.

Coding is a tool. Not a personality trait. Not a moral virtue.

There are three rough “lanes” you can exist in:

Clinician Lanes in an AI-Heavy Future

  Lane                      | Tech Depth Needed | Who This Fits
  Everyday Clinician        | Low               | Most residents, attendings
  Clinician-Builder         | Medium            | QI geeks, informatics-curious folks
  Full Tech Hybrid (MD+CS)  | High              | Niche, dual-degree, startup types

You only need hardcore coding if you want to:

  • Build tools
  • Publish AI-heavy research
  • Work in industry / startups / informatics

If your goal is “be a solid internist / pediatrician / surgeon who can use AI safely,” you do not need to read Python documentation at 2 a.m.

Being “AI literate” ≠ being a programmer.


What “Not Falling Behind” Actually Requires

Let me lower the bar for you, realistically.

You do not have to:

  • Understand how transformers work mathematically
  • Train your own models
  • Read every AI preprint people hype on social media
  • Become the “AI person” at your program

You do have to be able to:

  • Ask, “Where did this AI get its input data from?”
  • Recognize that AI is pattern-matching off past data, not magical future prediction
  • Catch when the AI’s suggestion clearly doesn’t fit your patient
  • Explain to a patient why you’re overriding or agreeing with a computer recommendation

That’s clinical reasoning plus a bit of skepticism. Which is already what you’re supposed to be doing.

Think “antibiotic stewardship,” but for AI outputs. Same vibe.


Concrete Skills That Actually Matter (Even If You’re “Not Techy”)

Here’s where your anxiety is useful: it’s telling you, “I don’t want to be clueless.” Good. You don’t have to be.

Let’s translate “I’m not techy” into a few specific, learnable things.

1. Get comfortable with using tools, not understanding their insides

You probably don’t know how your car’s engine works in detail. You still drive.

Same with AI systems in hospitals.

You should aim to:

  • Click around new interfaces without fear you’ll “break something.”
  • Learn shortcuts/basic workflows in your EHR rather than fighting it forever.
  • Be willing to try “new button → see what it does,” then ask questions.

If technology makes you freeze, that’s what can actually hurt you. Not your lack of deep knowledge.

2. Basic AI literacy (like, truly basic)

This is the level I’m talking about:

  • AI tools are trained on past data → bias happens if the data was skewed.
  • They’re good at pattern recognition, terrible at context and values.
  • They will sound confident even when wrong.
  • “Black box” doesn’t mean “mysterious god,” it just means you don’t see the internal steps.

You can get this level of understanding from a few good explainer videos or a short online course, not a second degree.
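If the “trained on past data → bias happens” bullet feels abstract, here’s a deliberately dumb toy in Python that makes it concrete. Every number here is made up; real systems are fancier, but they fail the same way:

```python
# Toy "model" that just memorizes base rates from its training data.
# All data is invented; the point is the failure mode, not the math.
from collections import defaultdict

# Hypothetical past records: (patient_group, was_flagged_for_disease).
# Group "B" was historically under-tested, so its labels skew low.
training_data = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training" = counting how often each group got flagged in the past.
history = defaultdict(list)
for group, flagged in training_data:
    history[group].append(flagged)

def predict(group: str) -> tuple[bool, float]:
    """Predict a flag, with 'confidence' = the historical flag rate."""
    rate = sum(history[group]) / len(history[group])
    return rate >= 0.5, max(rate, 1 - rate)

for g in ("A", "B"):
    flag, confidence = predict(g)
    print(f"Group {g}: flag={flag}, confidence={confidence:.0%}")
# Group A: flag=True, confidence=75%
# Group B: flag=False, confidence=75%
```

Same underlying patients, opposite outputs, identical confidence. The model isn’t evil or mysterious; it just replays the skew in its training data with a straight face. That’s the level of intuition you need. Not the linear algebra.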

3. Protect your core clinical skills like they’re sacred

Here’s the thing no AI evangelist likes to admit: the people who will really drown aren’t the ones who “aren’t techy.” They’re the ones whose actual medicine is weak and who rely on tools as a crutch.

If you:

  • Take a good history
  • Can present a case coherently
  • Build reasonable differentials
  • Communicate clearly with patients
  • Know your bread-and-butter management cold

You will be in demand. In every version of the future.

AI will make those people faster and more efficient. It will not magically elevate someone who can’t think clinically.


How to Quiet the Panic Without Becoming a Full-On Tech Bro

If you’re anything like me, your brain goes all-or-nothing:
Either I become a “doctor-coder-entrepreneur-visionary” or I get left behind.

No. That’s a trap.

Here’s a middle path that doesn’t require you to fake being someone you’re not:

Do one tiny thing in each of these buckets:

  1. Awareness
    Subscribe to exactly one solid newsletter or YouTube channel about AI in healthcare. Just one. Skim once a week. Know the general direction of things.

  2. Hands-on
    When your school/hospital rolls out any new “AI feature” (note assistants, decision support, whatever), be the person who at least tries it and forms an opinion, not the one who refuses on principle.

  3. Language
    Learn to say phrases like:

  • “What was this system trained on?”
  • “How often is this tool audited for bias?”
  • “How does this integrate with our existing workflow?”

Not because you’re presenting at a conference. Because sounding remotely competent shuts down a lot of condescension.

  4. Boundary
    Pick one thing you’re not going to do. For example:
    “I’m not doing side projects in AI research unless it actually interests me.”
    You’re allowed to treat this as a part of your job, not your personality.

The Ugly Truth: Institutions Will Move Slower Than the Hype

Here’s a calming fact no one shouts about: hospitals are bureaucratic, risk-averse monsters.

Even if some super-powerful AI tool exists tomorrow, your hospital adopting it will involve:

  • Legal reviews
  • IT integration
  • Vendor contracts
  • Union concerns
  • Privacy reviews
  • Training sessions
  • Ten committees

That takes years.

So no, you’re not going to wake up next July to R2-D2 doing morning rounds while you’re locked outside the hospital.

Typical AI Tool Adoption in a Hospital (flowchart, step by step):

  1. AI vendor pitches tool
  2. Leadership runs a pilot
  3. IT integration
  4. Policy and legal review
  5. Training sessions
  6. Limited rollout
  7. Full adoption, if it’s not a disaster

You will have time to adjust. To learn. To complain. To adapt gradually.

The people at actual risk are the ones who dig in their heels and say, “I refuse to learn anything new.” Not the ones quietly worried and trying.


If You’re Already Behind, Is It Too Late?

Let’s say:

  • You’re MS3 and barely surviving wards.
  • You’ve never done AI research.
  • Your CV has zero “tech” on it.
  • You’re applying into something moderately competitive.

Does that hurt you? Maybe in hyper-niche AI-heavy fellowships or specific informatics tracks, yes.

For the rest? Not really.

If you want to slightly “future-proof” without rebranding your whole life:

  • Add one small project: a QI effort involving EHR optimization, documentation efficiency, or something adjacent.
  • Learn to use one AI tool (even something like GPT) to help with studying or drafting patient education, so you at least know what it can and can’t do (there’s a quick sketch of this after the list).
  • Be ready with exactly one thoughtful sentence in interviews if they ask about tech/AI, like:
    “I see AI as a decision support layer that still requires good clinical judgment and awareness of bias, and I’m interested in learning to use it safely rather than replacing the physician role.”

That’s enough. Seriously.
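For the mildly curious, here’s roughly what “learn to use one AI tool” can look like as code. This sketch assumes the OpenAI Python SDK and an API key in your environment; the model name is a placeholder, and many institutions will want you on an approved internal tool instead:

```python
# Hedged sketch: drafting patient education with an LLM, then editing it.
# Assumes `pip install openai` and OPENAI_API_KEY set in your environment.
# The model name is a placeholder; use whatever your institution allows.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You draft patient education handouts at a 6th-grade reading "
                "level. A clinician will review and edit before use."
            ),
        },
        {
            "role": "user",
            "content": (
                "Explain what an HbA1c test measures and why it matters for "
                "someone newly diagnosed with type 2 diabetes."
            ),
        },
    ],
)

print(response.choices[0].message.content)
# You edit this draft; the win is starting in "editing mode" instead of a
# blank page. Never paste real patient identifiers into a tool like this.
```

Ten minutes of playing with something like this teaches you more about what these tools can and can’t do than a month of hot takes.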

[Pie chart] How Much AI Expertise Programs Actually Expect

  • None: 30%
  • Basic Awareness: 45%
  • Moderate (projects/QI): 20%
  • High (research/expert): 5%

Most programs are thrilled if you’re just not a Luddite obstructionist.


How to Tell If Your Anxiety Is Lying to You

Here’s a reality check I use on myself:

  • Is the fear “I might need to learn some new skills and that’s uncomfortable”?
    → Normal. Manageable.

  • Or is it “The entire system will turn against me and I will be unemployable”?
    → That’s catastrophizing. Not data-based.

Right now, AI literacy is a differentiator. Eventually, it’ll become like “can you use a computer” — assumed, but at a basic level.

You’re early. Not late.

You’re not expected to be a pioneer. Just not willfully clueless.


Quick Mental Reframe You Actually Need

Stop thinking: “I’m not techy, I’m going to fall behind.”

Start thinking: “I’m a clinician-in-training who’s going to have to work alongside algorithms. My job is to:

  • understand their limits,
  • protect my patients,
  • and use them when they genuinely help.”

That’s it.

You don’t have to love this. You just have to be willing to learn enough to be safe and competent. Your anxiety wants you to think this is all-or-nothing. It’s not.

[Image: Resident calmly using an AI-assisted EHR system overnight]


FAQ

1. Do I need to learn to code to survive in an AI-driven healthcare system?

No. Coding is optional, not mandatory. It’s useful if you want to build tools, do informatics, or work in tech-heavy research or startups. For day-to-day clinical work, you need to be able to use AI tools, understand their limitations, and apply judgment. That’s a completely different skill set from writing code.

2. Will programs start preferring applicants with AI or tech experience?

For certain niches (informatics, AI research labs, some academic tracks), yes, tech experience can help. For most residency programs? They care much more about your clinical potential, teamwork, reliability, and communication. Being open to new tools and not hostile to change matters more than having an AI-heavy CV.

3. What’s the minimum I should know about AI as a med student or resident?

You should know that AI tools are trained on past data, can be biased, are good at pattern recognition, and will still be wrong sometimes in confident ways. You should be able to explain to a patient or attending why you’re agreeing or disagreeing with an AI-generated suggestion. You don’t need math or algorithm details to reach that level.

4. What if I’m already uncomfortable with technology in general?

Then your priority isn’t “learn AI,” it’s “get more comfortable using digital tools in low-stakes ways.” Click around new interfaces, ask co-residents to show you shortcuts, play with AI note-drafting in practice notes rather than real ones first. The real risk is freezing or refusing to learn, not being slow at first.

5. Could AI actually make my life easier instead of harder?

Yes, annoyingly, it might. Drafting notes, discharge summaries, patient letters; surfacing guidelines quickly; flagging abnormal trends — all of that can reduce cognitive and documentation load if the tools are well-implemented. You’ll still need to edit and think, but it can move you from “blank page panic” to “editing mode,” which is usually easier.

6. How do I show I’m not anti-technology without faking being an AI enthusiast?

In applications and interviews, frame yourself as curious and cautious. Mention you’re interested in using tools that improve patient care and efficiency, but that you care about safety, bias, and clinical oversight. Give one specific example (even small) of adapting to a new system — like learning a new EHR, using AI scribes in clinic, or participating in a QI project involving workflows. That’s enough to signal you’re not going to be a problem in an AI-heavy environment.


Key points to keep in your head when the panic spikes:

  1. You don’t need to be “techy”; you need to be teachable and clinically solid.
  2. AI will be everywhere, but mostly as decision support and workflow tools you can learn gradually.
  3. The people who actually fall behind will be those who refuse to adapt, not those who start anxious and try anyway.