
Most radiologists are arguing about the wrong AI question.
The real divide is not “AI vs radiologists.” It is “triage algorithms vs second‑read algorithms” – and they affect your job, your risk, and your day‑to‑day work in completely different ways.
Let me break this down specifically, the way you wish someone had done during fellowship.
1. What “AI in Radiology” Actually Means in Practice
Most vendors throw around phrases like “AI decision support” or “augmented radiology” as if that explains anything. It does not. Practically, nearly every clinical AI tool you will see in 2026 falls into one (or both) of two operational buckets:
- Triage (a.k.a. worklist prioritization / alerting)
- Second‑read (a.k.a. concurrent reader / computer‑aided detection & quantification)
They may use similar models under the hood, but their impact on your workflow, liability, and job security is very different.
Quick, concrete definitions
Triage algorithm
An AI process that runs before you interpret the study and changes:
- The order in which you see cases
- The urgency flag or routing of a study
- Sometimes, whether you get real‑time alerts (e.g., text/page) for critical findings
You still read the case. But the AI decides which ones shout the loudest on your worklist.
Second‑read algorithm
An AI process that runs alongside or after your interpretation and:
- Highlights suspected findings (heatmaps, bounding boxes, measurement suggestions)
- Provides probability scores, classifications, or structured outputs
- Sometimes generates quantitative reports (volumes, scores, fracture risk)
You are still the final reader. But now you have an extra “opinion” or set of markings on the screen.
If you are post‑residency, interviewing, or already in practice, you need to know which flavor your group is adopting. Because they change three big things:
- Your malpractice exposure (different failure modes)
- Your RVU throughput and fatigue pattern
- Your negotiating leverage as AI penetration increases
2. Triage Algorithms: Impacting the Worklist, Not the Dictation
Triage AI is about who gets read first. Nothing more glamorous than that, but the downstream effect is massive.
Classic triage use‑cases you will actually see
These are not hypothetical. These are live at big systems today:
Stroke triage on CTA head/neck
- AI flags suspected LVOs, moves them to the top of your neuro list
- May auto‑notify stroke team if score exceeds threshold
- Target metric: door‑to‑puncture time, not “radiologist accuracy”
PE detection on CT pulmonary angiography
- Positive cases bubble up your worklist
- Techs or ED physicians may get notified in parallel
- Goal: faster initiation of anticoagulation / escalation
Intracranial hemorrhage on non‑contrast CT head
- Head CTs with suspected ICH get prioritized
- Can feed simultaneous alerts to ED or neurosurgery
Pneumothorax / tension PTX on chest radiographs
- Portable CXR in ICU flagged red priority
- Sometimes triggers direct phone/page alerts
| Use case | Relative adoption (%) |
|---|---|
| ICH on CT | 85 |
| LVO on CTA | 70 |
| PE on CTPA | 65 |
| PTX on CXR | 60 |
| Aortic dissection | 40 |
(Percentages here conceptually represent the relative adoption rates I keep seeing discussed in hospital AI committees and vendor presentations – hemorrhage and stroke way ahead of the rest.)
How triage AI plugs into your daily workflow
Operationally, it works like this:
- Study hits PACS
- A parallel feed goes to the AI server (DICOM push or broker)
- AI processes the exam in seconds
- Returns:
- A triage flag (e.g., “STAT‑AI” or “Critical‑AI”)
- Optional findings overlay (bounding box, heatmap)
- A routing directive (e.g., send to neurorad vs general pool)
You then log into your worklist and see something like:
- CT Head – ED – Priority: Critical‑AI – “ICH suspected”
- CT Head – ED – Routine
- CT Head – Inpatient – Routine
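To make the plumbing concrete, here is a minimal sketch of how a triage flag might be attached to a study and used to reorder a worklist. Every class, field name, and flag value here is a hypothetical illustration, not a PACS or vendor API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical triage payload: field names are illustrative, not a vendor schema.
@dataclass
class TriageResult:
    flag: str                  # e.g. "Critical-AI" vs "Routine"
    finding: Optional[str]     # e.g. "ICH suspected"
    route_to: str              # e.g. "neuro" vs "general pool"

@dataclass
class Study:
    accession: str
    description: str
    received: datetime
    triage: Optional[TriageResult] = None

def worklist_order(studies: list[Study]) -> list[Study]:
    """AI-flagged critical studies float to the top; everything else stays in arrival order."""
    def sort_key(s: Study):
        critical = s.triage is not None and s.triage.flag == "Critical-AI"
        return (0 if critical else 1, s.received)
    return sorted(studies, key=sort_key)

# The AI-flagged ED head CT jumps ahead of the earlier routine inpatient study.
studies = [
    Study("A1", "CT Head - Inpatient", datetime(2026, 1, 5, 8, 0)),
    Study("A2", "CT Head - ED", datetime(2026, 1, 5, 8, 15),
          TriageResult("Critical-AI", "ICH suspected", "neuro")),
]
print([s.accession for s in worklist_order(studies)])  # ['A2', 'A1']
```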
You will absolutely click the AI‑flagged case first. If you do not, your chair will ask why your stroke metrics suck.
What triage actually changes for you
Response time and clinical visibility
You look like a hero because:
- ED calls: “We just finished a head CT.”
- You: “Hemorrhage, right frontal, already in the note.”
AI bought you those minutes by pushing that case to the top while you were finishing another.
Liability shift: “failure to see” vs “failure to prioritize”
Traditional miss: you did not see a small ICH or subsegmental PE.
Triage miss:
- AI fails to flag a large ICH → system delay → neurologist asks why no alert
- Or AI incorrectly flags a false positive and your group gets alert fatigue
The key: triage AI affects timeliness of care metrics. That is where lawyers and quality committees will look.
Work pattern distortion
On paper, it sounds ideal: the sickest patients first.
In reality:
- Your cognitive load early in the shift spikes (everything urgent first)
- You back‑load the “easy” normals and outpatient studies
- Burnout risk shifts, not disappears
Perception of necessity
Hospital admin loves triage AI. They can show:
- Reduced door‑to‑needle times
- Fewer out‑of‑window thrombectomies
- KPI dashboards with shiny time‑to‑read graphs
That means triage AI is politically sticky. Once installed and perceived to improve stroke metrics, it will not be turned off. Your job is to understand how that sticks to you when something goes wrong.
3. Second‑Read Algorithms: Changing the Way You Actually Read
Second‑read AI is the one radiologists intuitively fear more. Because it looks like it is coming for the interpretation itself.
That fear is slightly misplaced. But not entirely.
What second‑read AI actually does
You know this pattern from mammography CAD and now from tools like:
- Chest CT nodule detection / volumetry
- Coronary calcium scoring
- Vertebral fracture detection on CT
- Liver lesion segmentation and volumetry
- MS plaque quantification on brain MRI
Second‑read systems usually:
Run on the images and generate:
- Markers (dots, bounding boxes)
- “Finding cards” (nodule list, size, location, type)
- Quantitative outputs (volume, density, scores)
Integrate into your viewer so you see:
- A side panel with AI findings
- Overlays when you hover or toggle them on
- Structured report templates pre‑populated with numbers
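For a concrete picture of what one of those “finding cards” carries, here is a minimal sketch as a plain data structure. The fields are illustrative assumptions, not any vendor’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical second-read "finding card"; fields are illustrative only.
@dataclass
class FindingCard:
    label: str          # e.g. "Solid pulmonary nodule"
    location: str       # e.g. "Right upper lobe"
    size_mm: float      # longest-axis diameter
    volume_mm3: float   # segmented volume
    confidence: float   # model probability, 0 to 1

card = FindingCard("Solid pulmonary nodule", "Right upper lobe", 6.4, 112.0, 0.87)
print(f"{card.label}, {card.location}: {card.size_mm} mm (AI confidence {card.confidence:.0%})")
```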

How you actually interact with it as a practicing radiologist
In real life:
- You scan the study as you normally would
- You glance at the AI overlay or list:
- “Did it find anything I might have missed?”
- “Is this measurement consistent with mine?”
- You decide whether to:
- Accept its quantification
- Ignore a low‑confidence small lesion
- Disagree and override, often documenting that choice
Second‑read AI here acts like an overly eager fellow who never sleeps. Occasionally right, occasionally annoying.
Second‑read: real impact areas
Detection sensitivity vs specificity
- These tools mostly drive sensitivity up (fewer misses)
- Often at the cost of more false positives
- You have to reconcile that with your own threshold to avoid overcalling
The lawsuits you prevent: obvious misses (e.g., small lung cancer that was visible a year prior).
The lawsuits you may create: incidentaloma cascade from chasing AI‑flagged benign lesions.
Standardization and quantification
This is where second‑read AI quietly becomes indispensable:
- Tracking nodule volume doubling time over multiple CTs (a quick worked example follows this list)
- Consistent liver lesion volumes for oncology
- Automated CAC scoring for every noncontrast chest CT (even when the study was not ordered for that)
Oncology and cardiology love this. It lets them:
- Compare across time and sites
- Plug numbers into risk calculators and clinical trials
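As a quick worked example of why consistent quantification matters, here is the standard volume doubling time calculation. This is a sketch; the threshold in the comment reflects common screening practice, not a specific guideline your site necessarily uses:

```python
from math import log

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days_between: float) -> float:
    """Exponential-growth estimate: VDT = dt * ln(2) / ln(V2 / V1), in days.

    Smaller VDT means faster growth; many screening workflows treat a VDT
    under roughly 400 days as suspicious. Assumes v2 > v1 > 0.
    """
    return days_between * log(2) / log(v2_mm3 / v1_mm3)

# 120 mm^3 -> 180 mm^3 over 90 days gives roughly a 154-day doubling time.
print(round(volume_doubling_time(120, 180, 90)))
```

The point is consistency: the same segmentation and the same arithmetic applied to every follow‑up CT, which is exactly what manual calipers do not give you.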
Time: does it slow you down or speed you up?
Honest answer: both, depending on where and how it is implemented.
- Focused tools (e.g., CAC scoring with one‑click report) can speed you up.
- “Kitchen sink” systems that mark every bump and shadow slow you down and fatigue you.
The best systems:
- Aggregate their findings into a concise, sortable list
- Provide triage within second‑read: “high‑risk” vs “low‑significance” findings
Cognitive framing and bias
If AI draws a big red circle around something:
- You are more likely to see it
- You are also more likely to overweight its importance
Conversely, if AI does not mark a lesion, weak readers might subconsciously think “probably nothing.” That is how second‑read AI can quietly erode independent vigilance if you are not careful.
4. Triage vs Second‑Read: Side‑by‑Side Comparison
Let me make the distinction painfully clear.
| Dimension | Triage Algorithms | Second‑Read Algorithms |
|---|---|---|
| Main purpose | Prioritize / route exams | Assist with detection and characterization |
| Workflow timing | Before you open the case | While or after you read the case |
| Primary hospital KPI | Time‑to‑diagnosis / intervention | Diagnostic accuracy / consistency |
| Typical use‑cases | Stroke, PE, ICH, PTX | Nodules, fractures, liver lesions, CAC |
| Liability focus | Delayed care / missed alert | Missed finding / mischaracterization |
There is overlap. Some vendors now bundle both:
- Triage: flag “suspicious for PE”
- Second‑read: show all segmental/subsegmental emboli, RV/LV ratio, clot burden
You need to ask, explicitly, for any system your group is buying:
- Is this changing priority?
- Or changing what I see and how I interpret?
- Or both?
Because the risk profile and your practice adaptation are not the same.
5. Regulatory and Legal Realities That Actually Matter to You
You do not need to know every FDA 510(k) number, but you do need to understand how regulators think about triage vs second‑read.
How the FDA has framed them
Triage tools are often cleared with language like:
- “Intended to flag and prioritize studies for review by a qualified physician.”
- They are explicitly not replacements for your read.
Second‑read tools:
- Are usually labeled as adjunctive.
- Sometimes compared against “standard of care radiologist performance” in their submissions.
There is a crucial nuance: triage products are framed as affecting workflow, whereas second‑read products are framed as supporting interpretation.
Where malpractice attorneys will go first
For triage AI:
- “Did the hospital know an AI flag was raised?”
- “Who received the alert? How fast did the radiologist respond?”
- “Was there a documented policy on handling AI triage alerts?”
For second‑read AI:
- “Was AI available and active on that workstation for that modality?”
- “Did the radiologist see and ignore an AI flag?”
- “Was it reasonable for the radiologist to treat the AI’s lack of a flag as meaning ‘no abnormality’?”
Your defense will hinge on:
- Clear policies: how AI outputs are used, handled, and documented
- Training logs: that you were oriented to the system’s capabilities and limitations
- Integration details: whether it behaved as designed
If your group adopts AI and does not rewrite its policies, it is asleep at the wheel.
6. Job Market Reality: How These Systems Affect Demand for Radiologists
Let me cut through the nonsense.
Triage AI and your job
Triage systems do not reduce the number of studies. They just reorder them.
Implications:
Volume problem remains
You still have a CT head to read. Just sooner.
Perception of “coverage” increases
Admin will say:
- “With AI triage and a slim overnight crew, we are still safe. Look at these metrics.”
That can justify keeping staffing leaner than you might like.
Skill mix may shift
As triage improves speed for acute cases:
- Hospitals may argue they need fewer in‑house overnight radiologists
- More can be remote, telerad, or cross‑site coverage
In short: triage AI is more about how many bodies you need on site than about whether radiologists are needed at all.
Second‑read AI and your job
This is the area people worry will “replace” radiologists. The reality:
Second‑read AI is good at narrow tasks:
Nodules, fractures, classification of common pathologies.
It is terrible at:
- Integrating comorbidities
- Interpreting clinical nuance
- Synthesizing across modalities and time
What second‑read AI will do:
Compress performance gaps
The worst readers get pulled up toward the mean.
Great readers are less “unique” for routine bread‑and‑butter studies.
Push you upward
The job market will value:
- Subspecialists who can handle complex cases and multidisciplinary work
- Radiologists who manage AI workflows, validate models, shape deployments
- People who can speak with IRBs, data science, and admin fluently
Increase throughput expectations
Admin sees AI second‑read as “you have a helper now, so you can read more.”
They will not automatically reduce your RVU targets because of AI. If anything, they will creep up.
The radiologists who get squeezed are:
- Those who only want to be “image readers” without clinical integration
- Those who refuse to engage with AI at all and look out‑of‑date during interviews
- Those in purely commoditized, high‑volume telerad positions where AI might genuinely automate a slice of work over the next decade
7. How to Critically Evaluate Any AI Tool a Practice Tries to Sell You
You are post‑residency. You are interviewing or negotiating. Here is the script I would actually use.
Key questions you ask about triage AI
What exams does it triage?
CT head? CTA? CTPA? All chest X‑rays? Just ICU films?
How is the alert delivered?
Worklist color? Popup? Pager? Text to ED?
What is the observed false positive rate here?
Not the vendor’s AUC. The actual live annoyance level in your practice.
What is the protocol when AI flags something and no radiologist sees it within X minutes?
Who is called? Is it in a policy?
Do you audit AI misses and AI‑detected but radiologist‑missed events?
You want to know whether they quietly track when the AI was “right and you were wrong,” and how they use that data. (A back‑of‑the‑envelope audit sketch follows this list.)
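If the group cannot answer the false‑positive and audit questions, you can get a first‑pass answer yourself from whatever flag log the system exports. A minimal sketch, assuming a hypothetical CSV export with one row per AI flag and a column recording whether the radiologist confirmed the finding (file and column names are made up):

```python
import csv
from collections import Counter

def flag_audit(path: str) -> None:
    """Tally AI flags vs radiologist-confirmed findings from a hypothetical export."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts["flags"] += 1
            if row["radiologist_confirmed"].strip().lower() == "yes":
                counts["confirmed"] += 1
    if counts["flags"] == 0:
        print("No AI flags in export.")
        return
    false_positives = counts["flags"] - counts["confirmed"]
    print(f"AI flags: {counts['flags']}, confirmed: {counts['confirmed']}, "
          f"false positives: {false_positives} ({false_positives / counts['flags']:.0%} of flags)")

flag_audit("ai_triage_audit.csv")  # hypothetical file name
```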
Key questions you ask about second‑read AI
Where in the viewer does it appear, and can I toggle it?
You want control over overlays.
How are its findings represented in the final report?
Are they:
- Auto‑inserted text you must delete or modify?
- Just reference data you manually type in?
What is the false‑positive burden per study?
If it flags 30 “nodules” per CT chest, you will hate your life.
Is there a feedback mechanism?
Can you mark AI suggestions as wrong, and does that feed back into QA?
What indemnification or responsibility does the vendor assume?
They usually assume very little. But ask. It signals you are not naive.
| Step | Description |
|---|---|
| Step 1 | New AI tool proposed |
| Step 2 | Determine whether it is triage, second‑read, or both |
| Step 3 | Review workflow impact |
| Step 4 | Review interpretation impact |
| Step 5 | Ask about alerts and policies |
| Step 6 | Ask about false positives and integration |
| Step 7 | Decide on adoption |
8. How to Actually Use These Systems Without Letting Them Use You
If you end up in a practice with both triage and second‑read AI, here is how a sane radiologist adapts.
With triage AI
Treat AI flags as “fast track,” not “truth.”
Yes, read flagged cases first.
No, do not assume non‑flagged cases are benign.
Document timing when relevant.
For major acute findings discovered on AI‑flagged studies, your note can mention:
- “Study prioritized by AI triage tool; interpreted at [time].”
That is a subtle way of reminding everyone: the system is a partnership.
Push for meaningful thresholds, not hypersensitive ones.
Overly sensitive triage → too many false alarms → people ignore them → quality metrics collapse.
With second‑read AI
Read first, then check AI.
Especially early on. Train your own eyes first. Then see what AI caught/added.
Use it aggressively for quantification.
For oncologic volumes, CAC scoring, nodule volume trends – AI is tireless and consistent.
Be explicit when you disagree.
For clinically significant disagreements:
- Dictate something like: “AI suggested additional finding at [location] which, on review, is consistent with [normal variant/artifact].”
- You are showing that you are in control.
Refuse “black box” blame.
If something goes wrong and admin tries to dump it on “radiologist ignored AI,” ask:
- Was there a policy?
- Was the AI’s sensitivity/specificity validated locally?
- Was there adequate training and logging?
You are not a passive consumer of AI. You are the final gatekeeper. Start acting like it now, not in your first deposition.
9. Strategic Positioning for the Next 5–10 Years
You are not going to be replaced by a triage system. And you are not going to be replaced this decade by a second‑read tool either.
But the job description of “radiologist” is changing under your feet.
Where the value will concentrate
- Radiologists who:
- Understand both triage and second‑read AI at a workflow level
- Help shape deployment, thresholds, and policies
- Liaise between IT, vendors, clinicians, and admin
…will be the ones with influence, better schedules, and better contracts.
| Time horizon | Triage influence (%) | Second‑read influence (%) |
|---|---|---|
| Now | 30 | 20 |
| 5 Years | 40 | 35 |
| 10 Years | 45 | 45 |
Again, these numbers are conceptual, but the trend is clear: both triage and second‑read grow, and second‑read becomes more dominant over time.
Practical career moves
- Get involved in your hospital’s AI or imaging informatics committee early.
- Learn basic performance metrics: sensitivity, specificity, AUC, PPV in low‑prevalence settings (a short worked example follows below). You will look more competent than 90% of the room.
- When choosing jobs, ask not just “Do you use AI?” but:
- “Who owns AI policy and quality?”
- “How has AI changed your staffing patterns and RVU expectations?”
You want to be on the side shaping those answers, not discovering them the hard way.
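On the low‑prevalence PPV point in particular, the arithmetic is worth having at your fingertips. A minimal worked example using Bayes’ rule (the numbers are illustrative, not from any specific product):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 90% sensitivity / 90% specificity sounds strong, but at 2% prevalence
# only about 16% of flags are real; even at 98% specificity it is under half.
print(f"{ppv(0.90, 0.90, 0.02):.0%}")  # ~16%
print(f"{ppv(0.90, 0.98, 0.02):.0%}")  # ~48%
```

This is the same arithmetic behind the vendor‑slide critique in the FAQ below.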
FAQ
1. Can triage AI miss a large, obvious finding and increase my legal risk?
Yes. If the system fails to flag a massive ICH or obvious PE and the hospital has leaned on AI as part of its safety net, plaintiffs will absolutely ask why that alert did not occur. You will be asked when you saw the study, how the worklist was configured, and whether any policies governed response times to AI flags. The key defense is that AI is adjunctive, not primary, and that your reading process does not depend on AI as the sole safety mechanism.
2. Should I trust AI second‑read more than a junior fellow?
For very narrow, well‑defined tasks (e.g., lung nodule volumetry, CAC score, hemorrhage volume), I would absolutely trust a validated AI more than a sleep‑deprived fellow. For nuanced pattern recognition (interstitial lung disease, subtle vasculitis), no. AI is a specialist in tiny slices of the job; it is not a replacement for a broadly trained human mind.
3. Will AI second‑read reduce my RVU targets or salary expectations?
Do not count on it. Most administrators see AI as enabling higher throughput at the same or lower staffing levels, not as a reason to pay radiologists more for less work. If anything, AI can be used to justify higher RVU expectations (“You have tools now”). The only way this shifts in your favor is if you negotiate around AI‑related responsibilities (governance, oversight, QA) as part of your role.
4. Are there situations where I should turn AI off for a specific case?
Yes. If the overlay is clearly malfunctioning (e.g., marking every rib as a nodule) or causing more confusion than clarity on a complex case, you should feel comfortable disabling it for that read and documenting why if relevant. You are responsible for the final interpretation. If AI is actively degrading your ability to interpret accurately, you are justified in ignoring or disabling it.
5. What one practical skill should I learn now to stay ahead of AI in radiology?
Learn how to read and critique an AI performance paper or vendor slide: understand prevalence, PPV/NPV, what the test set actually was, and whether performance numbers translate to your patient population. Radiologists who can say “Your sensitivity looks good, but in a 2% prevalence setting your PPV will be terrible, so our radiologists will drown in false positives” are the ones who shape deployments, not suffer them.
Key points, no fluff:
- Triage AI changes when you read; second‑read AI changes how you read.
- Triage affects workflow and time‑to‑care; second‑read affects detection, quantification, and standardization.
- Your job is not to fight AI, but to control it: demand sane thresholds, clear policies, and workflows where you stay the final authority.