
The most overused, least understood metric in residency program evaluation is the attrition rate. Everyone quotes it. Almost no one interprets it correctly.
If you treat “low attrition” as synonymous with “good program,” you are reading the data wrong. And you are setting yourself up for some ugly surprises on July 1.
Let’s fix that.
1. What “Attrition Rate” Actually Measures
Forget marketing slides for a moment. At its core, the residency attrition rate is:
Number of residents who leave the program before completion ÷ Number of residents who started, over a defined period.
Sounds simple. It is not.
Programs can define “leave” differently:
- Voluntary resignation (switching specialties, family relocation)
- Involuntary separation (fired, non-renewal, dismissal)
- Transfer to another program
- Long-term leave that is never reactivated
Some combine all of these. Some only track dismissals. Some blur the line intentionally.
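To see how much the definition matters, here is a minimal sketch in Python (the exit records and counts are hypothetical) showing the same roster producing very different headline numbers depending on which exit types get counted:

```python
# Hypothetical exit records for one program over a 5-year window.
exits = [
    {"resident": "A", "type": "voluntary_resignation"},
    {"resident": "B", "type": "transfer"},
    {"resident": "C", "type": "dismissal"},
    {"resident": "D", "type": "leave_never_reactivated"},
]
started = 60  # residents who entered over the same window

def attrition_rate(counted_types: set[str]) -> float:
    """Attrition under one particular definition of 'left the program'."""
    left = sum(1 for e in exits if e["type"] in counted_types)
    return left / started

# Count everything: 4/60 ≈ 6.7%
print(attrition_rate({"voluntary_resignation", "transfer",
                      "dismissal", "leave_never_reactivated"}))
# Count only dismissals: 1/60 ≈ 1.7%
print(attrition_rate({"dismissal"}))
```

Same program, same five years; the headline varies by a factor of four purely on definition.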
Real-world example I have seen in internal meetings:
A PGY-2 in surgery fails Step 3 twice, gets pushed to a “research year,” then quietly “transfers” to a prelim IM position at another hospital. The internal tracking calls it a “mutual separation.” The public-facing number? 0% attrition for that class.
So when you see “0% attrition,” you should immediately ask: over what time frame, for which cohorts, and how are you defining “left the program”?
2. Baseline Numbers: What Is Normal Attrition?
You cannot interpret a program’s attrition rate without a baseline. The literature is messy, but several patterns are consistent.
Across large U.S. datasets:
- Overall GME attrition rates hover roughly in the 3–5% range per year when averaged, but this hides big specialty-level variation.
- Some specialties are structurally low attrition (e.g., dermatology, radiation oncology).
- Others are historically higher because of workload, stress, or competitiveness (e.g., general surgery, neurosurgery, EM in recent cycles).
To make this less abstract, here is a stylized comparison using plausible ranges that mirror published trends:
| Specialty | 5-Year Cumulative Attrition |
|---|---|
| Internal Medicine | 5–8% |
| Family Medicine | 5–10% |
| General Surgery | 15–25% |
| Emergency Medicine | 8–15% |
| Psychiatry | 5–10% |
| Dermatology | 0–3% |
Read that table correctly: a general surgery program with a 5-year attrition of 15% might be completely average for surgery. That same number would be alarming for dermatology.
Context matters. A lot.
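If you want to make that context mechanical, a small sketch like this (bands lifted from the stylized table above; the function name and messages are my own) flags whether a quoted number sits inside its specialty’s expected range:

```python
# Stylized 5-year cumulative attrition bands from the table above (%).
BASELINES = {
    "internal_medicine": (5, 8),
    "general_surgery": (15, 25),
    "dermatology": (0, 3),
}

def flag(specialty: str, observed_pct: float) -> str:
    """Place an observed 5-year attrition % against its specialty band."""
    low, high = BASELINES[specialty]
    if observed_pct > high:
        return "above band: ask why residents are leaving"
    if observed_pct < low:
        return "below band: ask how exits are being counted"
    return "within band for this specialty"

print(flag("general_surgery", 15))  # within band: average for surgery
print(flag("dermatology", 15))      # above band: alarming for derm
```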
3. Why Raw Attrition Percentages Mislead Applicants
You are not comparing numbers in a vacuum. You are comparing systems:
- Program size
- Specialty norms
- Applicant competitiveness
- Local job market
- Institutional culture
Several systematic biases show up when people read attrition numbers.
3.1 Small Programs vs Large Programs
Attrition is noisy in small denominators.
If a four-resident-per-year program loses 1 resident in 4 years, that looks like this:
- One resident lost out of 16 total in the cohort window = 6.25% cumulative attrition.
Now take a twelve-resident-per-year program that loses 3 residents out of 48 over those same years:
- 3 / 48 = 6.25% cumulative attrition.
Same rate. Very different structure and risk profile. In the small program, it is one face that everyone remembers. In a large program, it may feel more diffuse.
But when you see marketing one-liners like “We have had only one resident leave in 5 years,” your brain does not automatically convert that into percent. It just thinks “small number ⇒ safe.” That is bad statistics.
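A quick simulation makes the noise concrete. Assume every resident in every program has the same true 6.25% chance of leaving over the window, and look at how the observed rates spread out:

```python
import random

random.seed(0)

def observed_rates(n_residents: int, true_p: float = 0.0625,
                   trials: int = 10_000) -> list[float]:
    """Observed cumulative attrition across many simulated programs."""
    rates = []
    for _ in range(trials):
        left = sum(1 for _ in range(n_residents) if random.random() < true_p)
        rates.append(left / n_residents)
    return rates

for n in (16, 48):  # the small and large cohort windows from above
    rates = observed_rates(n)
    p_zero = sum(r == 0 for r in rates) / len(rates)
    p_high = sum(r >= 0.125 for r in rates) / len(rates)
    print(f"n={n}: looks perfect {p_zero:.0%} of the time, "
          f"looks >=12.5% {p_high:.0%} of the time")
```

With identical underlying risk, roughly a third of the small programs look flawless and about a quarter look twice as bad as average, purely by chance.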
3.2 Time Horizon Games
Programs can cherry-pick time frames:
- “No attrition in the last 3 years”
- “…in categorical positions”
- “…since the new PD took over”
A 3-year window with no attrition after a huge exodus 4–6 years ago is not the same as long-term stability. You want at least 5–7 years of data, ideally by entering class.
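Here is the cherry-pick in miniature (hypothetical per-class exit counts):

```python
# Hypothetical exits per entering class (class size 10), oldest first.
exits_by_class = [3, 2, 0, 0, 0, 0]  # an exodus 5-6 years ago, quiet since

def windowed_rate(last_n_classes: int, class_size: int = 10) -> float:
    window = exits_by_class[-last_n_classes:]
    return sum(window) / (last_n_classes * class_size)

print(f"Last 3 classes: {windowed_rate(3):.0%}")   # 0% -> the marketing line
print(f"All 6 classes:  {windowed_rate(6):.1%}")   # 8.3% -> the full history
```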
3.3 Voluntary vs Involuntary Attrition
Lumping “I left to go into dermatology after a prelim year” with “I was dismissed for performance” makes the metric almost useless.
The interpretation is opposite:
- Voluntary attrition may reflect strong residents getting better offers or changing life circumstances.
- Involuntary attrition typically reflects either:
  - Weak selection / support systems, or
  - A highly demanding, less forgiving environment.
You will not always get the split cleanly, but you should try.
4. Breaking Attrition Down Into Interpretable Pieces
To read attrition like a data analyst, break it into components.
At minimum, try to estimate:
- Total cumulative attrition over a fixed window (e.g., last 5 entering classes).
- Composition:
  - Voluntary transfer to another program/specialty
  - Personal reasons / health / family
  - Academic or professionalism dismissal / non-renewal
- Timing:
  - PGY-1 attrition
  - Mid-program (PGY-2–3)
  - Senior (PGY-4+)
Early versus late attrition tells very different stories.
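If you get real answers, the tally is trivial to formalize. A minimal sketch, with hypothetical exit records and my own field names:

```python
from collections import Counter

# Hypothetical exit records for the last 5 entering classes.
exits = [
    {"pgy_at_exit": 1, "reason": "specialty_change"},
    {"pgy_at_exit": 1, "reason": "specialty_change"},
    {"pgy_at_exit": 2, "reason": "personal_family"},
    {"pgy_at_exit": 1, "reason": "dismissal_nonrenewal"},
]

def timing_bucket(pgy: int) -> str:
    if pgy == 1:
        return "PGY-1"
    if pgy <= 3:
        return "mid-program (PGY-2-3)"
    return "senior (PGY-4+)"

by_reason = Counter(e["reason"] for e in exits)
by_timing = Counter(timing_bucket(e["pgy_at_exit"]) for e in exits)
print(by_reason)  # composition
print(by_timing)  # timing
```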
4.1 Simple Example: Two IM Programs with “5% Attrition”
Program A (university IM, 20 residents per year, last 5 cohorts = 100 residents):
- 5 residents left → “5% attrition over 5 years”
  - 3 left PGY-1 for competitive specialties (neuro, derm, radiology)
  - 1 left PGY-2 for family reasons (spouse job out of state)
  - 1 non-renewal PGY-1 for professionalism
Program B (community IM, 10 residents per year, last 5 cohorts = 50 residents):
- 3 residents left → “6% attrition” (they may round to 5% in conversation)
  - 1 resigned PGY-1 citing workload, burnout, poor fit
  - 1 dismissed PGY-2 for repeated exam failures
  - 1 quietly transferred PGY-2 to another IM program after conflict with leadership
Raw numbers: 5% vs 6%. Functionally the same.
Risk signal: very different.
I have sat in applicant dinners where residents described Version B as “we almost never lose people,” because those three cases were painful and everyone wanted to minimize them. Do not rely on how it feels. Rely on how it adds up.
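Here is that same example with the voluntary/involuntary split written out (numbers taken from the two program descriptions above; how you classify Program B’s “voluntary” exits is the judgment call):

```python
def split_rates(started: int, voluntary: int, involuntary: int):
    """Return (total, voluntary, involuntary) attrition as fractions."""
    return ((voluntary + involuntary) / started,
            voluntary / started,
            involuntary / started)

# Program A: 100 started; 4 voluntary (3 specialty changes + 1 family), 1 non-renewal.
# Program B: 50 started; 2 "voluntary" (1 burnout resignation + 1 conflict
# transfer), 1 dismissal.
for name, args in [("A", (100, 4, 1)), ("B", (50, 2, 1))]:
    total, vol, invol = split_rates(*args)
    print(f"Program {name}: total={total:.0%}, "
          f"voluntary={vol:.0%}, involuntary={invol:.0%}")
```

Even this split undersells the gap: Program B’s “voluntary” exits were driven by workload and conflict, not better offers, which is exactly the nuance a single percentage erases.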
5. Using Attrition Alongside Other Program Health Metrics
Attrition is the canary in the coal mine, not the mine inspection report. You need other metrics around it.
Here is how attrition tends to correlate with common residency health indicators.
| Pattern | Interpretation Risk Level |
|---|---|
| Low attrition + strong board pass rates | Usually healthy |
| Low attrition + weak board pass rates | Possible tolerance / low standards |
| High attrition + strong board pass rates | High-pressure environment |
| High attrition + weak board pass rates | Serious systemic problems |
Now let’s compare three stylized, hypothetical internal medicine programs:

| Program | Attrition (%) | Board Pass Rate (%) |
|---|---|---|
| X | 4 | 98 |
| Y | 12 | 99 |
| Z | 3 | 85 |
- Program X: 4% attrition, 98% board pass. Likely stable, supportive, but not lax.
- Program Y: 12% attrition, 99% board pass. Very high expectations. Residents who cannot keep up are filtered out.
- Program Z: 3% attrition, 85% board pass. Almost nobody leaves, but many struggle quietly and then fail boards.
If you are risk-averse and want a predictable, steady environment, Program X is your target.
If you are highly competitive and prioritize maximal clinical rigor and outcomes, Program Y might still be attractive, but you should be honest with yourself about your stress tolerance.
Program Z is a red flag. Low attrition there is not a virtue; it suggests poor remediation or low accountability.
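That four-cell table is mechanical enough to write down. A sketch, with illustrative cutoffs of my own choosing (10% attrition, 90% board pass) rather than any published standard:

```python
def pattern(attrition_pct: float, board_pass_pct: float) -> str:
    """Map a program onto the four-cell interpretation table above."""
    high_attrition = attrition_pct >= 10   # illustrative cutoff
    weak_boards = board_pass_pct < 90      # illustrative cutoff
    if not high_attrition and not weak_boards:
        return "usually healthy"
    if not high_attrition and weak_boards:
        return "possible tolerance / low standards"
    if high_attrition and not weak_boards:
        return "high-pressure environment"
    return "serious systemic problems"

for name, attrition, boards in [("X", 4, 98), ("Y", 12, 99), ("Z", 3, 85)]:
    print(f"Program {name}: {pattern(attrition, boards)}")
```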
6. How Programs Can Manipulate Attrition Data (And How To See Through It)
I have watched the internal discussions around how to present attrition on institutional dashboards. The pressure to look good is real. Numbers get “interpreted.”
Common tactics:

- Redefining categories: shifting borderline cases from “dismissal” to “voluntary resignation” after “coaching.” To you as an applicant, it is still a serious negative signal when multiple people “choose” to leave after conflicts with leadership.
- Shortening the lookback window: highlighting the last 3 classes because older classes had a PD who is “no longer here,” even though the culture has not fully changed.
- Focusing on categorical positions only: excluding prelims or integrated spots that have high burnout or failure rates, then quoting zero attrition for “categoricals.”
- Not tracking transfers as attrition: a resident who “transfers to be closer to family” after repeated tension might be reported as a neutral transfer, not attrition.
How do you cut through this?
You ask specific, quantitative questions.
7. Concrete Questions To Ask About Attrition on Interview Day
You do not need to be confrontational. Just precise.
To program leadership, you can ask:
- “For the last 5 entering classes, how many residents started and how many did not complete training here?”
- “Of those who did not complete, how many changed specialties, how many transferred to other programs in the same specialty, and how many were non-renewed or dismissed?”
- “Where in training do you most commonly see people leave? PGY-1 versus later?”
- “How has that changed over the last 5–10 years?”
With residents, you ask a different version:
- “Have there been residents who left the program or were asked to leave in the last few years?”
- “How transparent is leadership when there is a struggling resident?”
- “Do people ever talk about wanting to leave? Do they actually leave, or just grind it out unhappy?”
Then you mentally build a simple frequency table. I am not exaggerating. You want a mental tally like:
- 5-year cohorts: ~60 residents started
- 3 left: 1 to another specialty, 1 to another IM program, 1 dismissed PGY-1.
That is 5% total, 1 involuntary. For IM, that is neither amazing nor disastrous. You then overlay what residents say: was that dismissal a clear professionalism problem or a borderline case that everyone whispers about?
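Written out, that tally is one line of arithmetic (same hypothetical numbers):

```python
started = 60
exits = {"other_specialty": 1, "other_im_program": 1, "dismissed_pgy1": 1}

total = sum(exits.values()) / started            # 3/60 = 5%
involuntary = exits["dismissed_pgy1"] / started  # 1/60 ≈ 1.7%
print(f"total: {total:.0%}, involuntary: {involuntary:.1%}")
```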
8. Specialty-Specific Attrition Patterns You Should Expect
You are not choosing in a vacuum. Different specialties generate different attrition structures.
8.1 Surgical Specialties
The data show higher attrition, especially early in training. Reasons:
- Workload intensity
- Physical and cognitive demands of the OR
- Realization that surgery is not the right career fit
For surgery, a 5-year cumulative attrition of 15–20% is not automatically a red flag. What matters more:
- Is it predominantly PGY-1/2 voluntary exits (“I realized I want anesthesia/IM”), or
- PGY-3+ dismissals and non-renewals?
A surgery program where 10% leave in PGY-1 by mutual choice, but seniors almost always graduate, is still structurally sound. A program where chiefs are being pushed out is not.
8.2 Lifestyle-Competitive Fields (Derm, Rad Onc, etc.)
These draw highly screened, highly motivated applicants. True attrition tends to be very low. So any appreciable non-voluntary attrition is concerning:
- 0–3%: expected, often due to exceptional family/health issues.
- 5–10%: you should be asking hard questions about culture and expectations.
8.3 Emergency Medicine and Psychiatry
Recent market shifts have affected EM specifically, but setting those aside, both EM and psychiatry historically show:
- Attrition often linked to burnout, shift work, and exposure to high-acuity, high-psychological-load cases.
- Programs with strong wellness and support can keep attrition at the low end of the ranges.
You should explicitly ask EM and psych residents: “Do people talk about leaving EM/psych entirely?” Patterned responses there matter more than a single quoted percent.
9. How To Compare Programs Side-by-Side Using Attrition
Let’s put structure around the decision process. You are comparing Program A and Program B in the same specialty.
Step 1: Normalize by entering classes
Ask or infer: over the last 5 years, how many residents started?
Step 2: Count the exits
How many did not finish there? List them by reason as best you can.
Step 3: Compute rough cumulative attrition
Exits ÷ total entrants over that window.
Step 4: Classify the exits
Group them into:
- Voluntary specialty changes
- Voluntary transfers (same specialty)
- Personal/health/family
- Involuntary dismissals / non-renewals
Step 5: Align with other metrics
Compare:
- Board pass rates
- Fellowship match (for specialties where that matters)
- Resident-reported support and culture
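Steps 1 through 4 reduce to a few lines of code once you have the counts. A sketch with hypothetical exit lists for two same-specialty programs:

```python
# Hypothetical 5-year exit lists for two same-specialty programs.
programs = {
    "A": {"started": 100, "exits": ["specialty_change"] * 3
          + ["personal_family"] + ["dismissal_nonrenewal"]},
    "B": {"started": 100, "exits": ["specialty_change"] * 2
          + ["same_specialty_transfer"] * 4 + ["dismissal_nonrenewal"] * 3},
}

INVOLUNTARY = {"dismissal_nonrenewal"}

for name, p in programs.items():
    exits = p["exits"]
    cumulative = len(exits) / p["started"]                             # Step 3
    involuntary = sum(e in INVOLUNTARY for e in exits) / p["started"]  # Step 4
    print(f"{name}: cumulative={cumulative:.0%}, involuntary={involuntary:.0%}")
# Step 5 is the overlay you do by hand: boards, fellowship match, culture.
```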
Here is a simplified breakdown (per 100 entrants) showing how a “high” attrition program can still be attractive depending on your risk appetite.

| Program | Graduated Without Issue | Graduated After Remediation | Voluntary Departure | Dismissal |
|---|---|---|---|---|
| Program A | 88 | 7 | 3 | 2 |
| Program B | 75 | 13 | 7 | 5 |
Program A has lower attrition and fewer remediation cases. Program B is more volatile, with more departures and dismissals, but may also have very high board pass and fellowship match rates for those who persist. You need to match that profile to who you are and how you handle adversity.
10. Red Flags and “Green Flags” in Attrition Data
You are not going to get perfect datasets. But even with partial information, there are patterns that should shift your rank list.
10.1 Red-Flag Patterns
- Residents give vague, tense answers when you ask about anyone leaving. You hear, “Well, there was a situation…” and then they change the subject.
- Leadership emphasizes “we are a family” but cannot tell you numbers on how many left in the last 5 years.
- Low attrition but board pass rates substantially below national averages.
- Reports of residents staying despite obvious burnout because they fear retaliation or cannot transfer.
- Multiple PGY-3+ dismissals in the last several years in a specialty where late dismissal is unusual.
10.2 Green-Flag Patterns
- PD/associate PD can quote approximate numbers and reasons without defensiveness: “We have lost 3 out of 60 in five years: 2 switched specialties, 1 non-renewal for repeated professionalism issues.”
- Residents can talk about struggling colleagues in a way that suggests real remediation: “We have had a few who struggled but they either improved with support or were helped to find a better fit elsewhere.”
- Early, voluntary attrition where people pivot to another field, with minimal late-stage dismissal.
These patterns tell you more than one glossy number on a website.
11. How To Use Attrition Data in Your Final Rank List
You should not rank purely on attrition. That would be lazy. But you would also be reckless to ignore it.
Here is a simple heuristic:
- Start with fit: clinical exposure, geography, culture, career goals.
- Check board pass and fellowship match.
- Use attrition as a tiebreaker and sanity check.
If two programs feel equivalent but one has:
- Transparent leadership
- Stable, low-to-moderate attrition consistent with specialty norms
- Strong resident support stories
and the other has:
- Vague answers
- Stories of “issues” without clarity
- Even modestly higher attrition without a good explanation
You move the first one up. Every time.
Do not let a slightly lower attrition number override your direct experience from interview day. The data are signal, not scripture.
12. Your Next Step: Building Your Own Program Comparison Sheet
If you want to treat this like an analyst rather than a tourist, do this:
- Create a simple spreadsheet with one row per program and columns for:
- Estimated total residents (5-year window)
- Estimated exits
- Voluntary vs involuntary
- Early (PGY-1) vs late
- Board pass rate
- Your subjective “culture score” (1–10)
- Fill it in after each interview day while details are fresh.
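A starter version of that sheet, using only Python’s standard library (the column names mirror the list above; the example row is a placeholder, not real data):

```python
import csv

COLUMNS = ["program", "residents_5yr", "exits", "voluntary", "involuntary",
           "pgy1_exits", "late_exits", "board_pass_pct", "culture_score"]

rows = [
    # Fill these in after each interview day; values here are placeholders.
    {"program": "Example IM", "residents_5yr": 60, "exits": 3, "voluntary": 2,
     "involuntary": 1, "pgy1_exits": 2, "late_exits": 1,
     "board_pass_pct": 96, "culture_score": 8},
]

with open("program_comparison.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

# A derived column worth eyeballing: involuntary exits per 100 residents.
for r in rows:
    print(r["program"], 100 * r["involuntary"] / r["residents_5yr"])
```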
Patterns will appear very quickly. The programs that “felt a little off” will usually have data that back that up when you lay it side by side.
Once you have that sheet, you will not be guessing based on scraps of anecdotes from Reddit or hallway gossip. You will be making a quantified decision about where you are most likely to succeed, stay, and graduate.
And once you have matched and landed, the question shifts from “which attrition rate looks safest” to “how do I become one of the residents who thrives here, not just survives.” But that is another analysis for another time.