
Match Position vs Resident Attrition: Is Being Low on the List Risky?

January 6, 2026
14 minute read

[Image: Residency program director reviewing rank list and attrition statistics]

The obsession with “how far down the list they went for you” is statistically misguided. The data show a different reality: match position is a terrible proxy for whether you are likely to quit or wash out.

The core question: does low match position = higher attrition?

Let me be direct. The common storyline goes like this:

  • “Top of the list” = the program’s favorites, likely stars, low risk.
  • “Way down the list” = leftovers, risky, more likely to struggle or leave.

That sounds intuitive. It just is not supported by actual outcomes in any robust, consistent way.

When you look at the limited data that exist on resident attrition and compare them to what we know about ranking behavior, one pattern stands out: attrition is far more tightly associated with fit, specialty choice, and program environment than with whether you were ranked #3 or #23.

Before we get into mechanisms, let’s anchor on what the numbers actually say.

What we know about resident attrition (big picture)

Attrition rates are not mysterious. Multiple national datasets have looked at this for over a decade.

Across specialties, approximate resident attrition rates look like this:

Approximate Resident Attrition by Specialty Category

Category                           | Typical Total Attrition Over Training | Notes
Primary Care IM/FM                 | 3–5%                                  | Mostly career change / burnout
Pediatrics                         | 3–4%                                  | Relatively stable
General Surgery                    | 15–25%                                | Highest, well documented
OB/GYN                             | 7–10%                                 | Moderate-high
Road specialties (Derm, Rad, Anes) | 3–6%                                  | Mostly career redirection

These are program-completion attrition rates over the duration of residency, not yearly. In other words, in a typical categorical internal medicine program, out of 30 residents who start an intern class, maybe 1 will not complete that program. In a general surgery class of 10, it might be 2 or more.

Now here is the key point: these numbers are driven heavily by specialty culture, workload, and career misalignment. There is no credible national dataset that says, “Residents matched below position X on the rank list leave at 3 times the rate.” Programs simply do not track or report attrition by match position in any standardized way.

What we do have are:

  • Data on how deep programs go on rank lists.
  • Data on what predicts resident performance and completion.
  • Small internal analyses at individual institutions.

And when you combine those, the “low rank = high attrition” story falls apart.

How often do programs go far down their lists?

Step one: understand what “low on the list” actually means in practice.

Most programs go much deeper than applicants think. The NRMP’s Charting Outcomes and Program Director Surveys repeatedly show that programs rank far more applicants than they expect to match.

A typical pattern (for categorical programs):

  • Aim to fill 10 positions.
  • Rank 80–160 applicants.
  • Actually match their 10 across a spread that might be anywhere from #5 to #120.

That sounds dramatic. So let’s quantify it:

Illustrative Distribution of Fill Position on Rank Lists

Rank List Segment | Share of Filled Positions (%)
Top 10            | 25
11–25             | 30
26–50             | 25
51–100            | 15
101–150           | 5

Interpretation of this hypothetical but realistic pattern:

  • Only about 25% of filled positions are from the “top 10” portion of rank lists.
  • The bulk are from 11–50.
  • A nontrivial fraction are from 51+.

Why? Because of how the algorithm works. Applicants are ranking dozens of programs, many highly competitive candidates cluster at the same “top” programs, and there is a lot of cross-competition. Program rank order is not a clean linear “quality” index; it is an interaction between perceived fit, interview timing, faculty lobbying, internal politics, and plain randomness (who interviewed on what day, which faculty were on service, etc.).

If a program routinely matches its #30–60 ranked applicants, it has already normalized that a large share of its incoming residents are not “top 10.” And yet:

  • Most programs do not have 30–40% attrition.
  • Their completion rates remain in line with national specialty norms.

If low rank position were a strong predictor of attrition, general program completion statistics would look wildly different. They do not.

What actually predicts resident attrition?

Now to the sharper part of the analysis. Several studies have looked at predictors of resident performance and completion. The most common variables evaluated:

  • Standardized test scores (USMLE/COMLEX).
  • Medical school performance (grades, AOA, class rank).
  • Interview scores and faculty assessments.
  • Demographic factors.
  • Program factors (size, support resources, workload).
  • Specialty-specific factors (operative volume, call burden, culture).

The consistent pattern: attrition correlates with:

  1. Specialty misalignment
    Residents who chose a specialty for external reasons (prestige, pressure, misunderstanding of lifestyle) are more likely to leave. Surgical programs report this explicitly. People realize 1–2 years in that they do not want that career.

  2. Program culture and support
    Toxic or unsupported environments have higher attrition regardless of applicant “strength.” I have watched programs lose very strong residents because the environment was dysfunctional.

  3. Academic or performance struggles (early)
    Low in-training exam scores, repeated remediation, professionalism issues in PGY1–2 are strong flags. Those often correlate with future attrition or non-renewal.

Notice what is missing: “lower down on the rank list.”

Most programs do not even record match position in any meaningful way after Match Day. Anecdotally, when I have seen internal reviews at large academic centers, the residents being disciplined, remediated, or counseled out are scattered across the original rank list. Some were top-5. Some were around 70. The index is weak.

To make this more concrete, imagine a mid-sized program, and suppose it tracked this obsessively for 5 years across 80 incoming residents. You might see something like:

Hypothetical 5-Year Cohort: Match Position vs Completion

Match Position Range | Residents (5 years) | Did Not Complete | Attrition Rate
1–10                 | 25                  | 2                | 8%
11–30                | 25                  | 3                | 12%
31–60                | 25                  | 3                | 12%
61+                  | 5                   | 1                | 20%

Even if you saw this sort of skewed pattern, the sample sizes are small, the confidence intervals wide, and the confounding factors enormous (late adds to the list, couples match constraints, people with unique backgrounds, etc.).

The data are noisy. But what stands out is that attrition does not jump from 5% to 50% when you cross some magical threshold on the rank list. At most, there might be a modest gradient; in many real datasets, there is no meaningful pattern at all.
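
To put numbers on that noise, here is a minimal Python sketch that computes 95% Wilson score intervals for each band of the hypothetical table above. Nothing here is real program data; the counts come straight from the made-up table.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Counts from the hypothetical 5-year cohort table above.
cohort = {
    "1–10":  (2, 25),   # (did not complete, total residents)
    "11–30": (3, 25),
    "31–60": (3, 25),
    "61+":   (1, 5),
}

for band, (left, n) in cohort.items():
    lo, hi = wilson_interval(left, n)
    print(f"rank {band:>5}: {left}/{n} = {left/n:6.1%}  (95% CI {lo:.1%} to {hi:.1%})")
```

For the 61+ band, the interval runs from under 5% to over 60%: with only 5 residents, a single departure either way rewrites the “pattern” completely.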

How match lists are built (and why “low” is often arbitrary)

To understand risk, you have to understand how lists form.

Residency rank lists are not pure “sorted by desirability” arrays. They are compromise artifacts. Typical ingredients:

  • Faculty interview scores or “global impression” ratings.
  • USMLE/COMLEX scores and transcript data.
  • Mentor advocacy (“I know this student personally, we must rank them high”).
  • Diversity and equity goals.
  • Couples Match and institutional needs (e.g., dual-physician hires).
  • Late applications or off-cycle interviews.
  • Subspecialty interest alignment (e.g., a cardiology-bound IM applicant, or a resident with strong research in the program’s niche).

Here is what happens in real rooms, in real meetings:

  • A highly scored applicant gets nudged down because they already have 12 Harvard-level interviews and are unlikely to rank this mid-tier program high.
  • A strong but non-traditional applicant with lower Step 1 is pushed up because a PD believes they will thrive clinically.
  • Someone with an outstanding away rotation experience is vaulted 30 spots overnight after a call from the service chief.
  • Someone who interviewed early in the season fades in memory; someone who interviewed last week is fresher and ends up higher.

By the time the list is submitted, the difference between #12 and #47 often has less to do with “resident quality” and more to do with noise and constraints.

So when people say “you were low on the list,” what they are often describing is:

  • You were in the middle band of many acceptable applicants.
  • Other people above you simply matched elsewhere (at places they ranked higher).
  • The algorithm did exactly what it is designed to do: place you in the highest program where the rankings mutually aligned.

That story does not predict whether you quit in PGY2.

The algorithm versus human psychology

The Match algorithm is applicant-proposing. That single fact destroys the emotional logic of “low on the list = unwanted.”

Here is the concise flow:

NRMP Match Flow (Simplified)

  1. Each applicant submits a rank list.
  2. Each program submits a rank list.
  3. The algorithm works through each applicant’s choices in order, starting at the top.
  4. Can the program accept? (an open position, or the applicant outranks someone tentatively held)
  5. If yes, the position is held temporarily; if no, try the next program on the applicant’s list.
  6. A tentatively matched applicant can still be displaced later by a higher-ranked applicant.
  7. The process repeats until lists are exhausted and programs ultimately fill.
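
To make those mechanics concrete, here is a minimal Python sketch of applicant-proposing deferred acceptance, the family of algorithm the NRMP Match is built on. The real implementation additionally handles couples and other constraints; all the names, preference lists, and quotas below are invented for illustration.

```python
from collections import deque

def applicant_proposing_match(applicant_prefs, program_prefs, quotas):
    """Toy applicant-proposing deferred acceptance (stable matching).

    applicant_prefs: {applicant: [programs, most preferred first]}
    program_prefs:   {program: [applicants, highest ranked first]}
    quotas:          {program: number of positions}
    """
    # Lower index = higher on the program's rank list.
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    tentative = {p: [] for p in program_prefs}       # applicants held so far
    next_choice = {a: 0 for a in applicant_prefs}    # pointer into each list
    free = deque(applicant_prefs)                    # applicants still proposing

    while free:
        a = free.popleft()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                                 # list exhausted: unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)                           # program never ranked them
            continue
        tentative[p].append(a)
        tentative[p].sort(key=lambda x: rank[p][x])  # best-ranked held first
        if len(tentative[p]) > quotas[p]:
            displaced = tentative[p].pop()           # lowest-ranked is bumped
            free.append(displaced)                   # ...and proposes again
    return tentative

# Invented example: three applicants, two programs, three total positions.
quotas = {"Mercy": 2, "City General": 1}
applicant_prefs = {
    "Ana": ["Mercy", "City General"],
    "Ben": ["Mercy", "City General"],
    "Cam": ["City General", "Mercy"],
}
program_prefs = {
    "Mercy":        ["Ben", "Ana", "Cam"],
    "City General": ["Ana", "Cam", "Ben"],
}
print(applicant_proposing_match(applicant_prefs, program_prefs, quotas))
# {'Mercy': ['Ben', 'Ana'], 'City General': ['Cam']}
```

Note what happens to City General: it ranked Ana first, but she matched at her own first choice, so the program fills with its #2. Scale that up to thousands of interacting lists and “we went to #40” is the expected outcome, not a verdict.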

Notice who is driving: applicants. The applicant-proposing algorithm produces the applicant-optimal stable matching: each applicant lands at the best program they could achieve under any stable outcome, which is precisely why it favors applicant preferences over program preferences.

That means:

  • Programs almost always have to go deeper than they emotionally expect.
  • Being matched at position #40 can simply reflect that the 39 above you either:
    • Ranked that program lower and matched at places they preferred.
    • Were displaced by higher-priority applicants at other programs.

It does not automatically reflect that a program viewed you as their 40th-choice human being. It reflects the equilibrium of thousands of preference lists trying to satisfy each other simultaneously.

From a risk-analysis standpoint, attributing your attrition risk to “my number on one program’s list” ignores the mechanics of the system.

So what does being low on the list actually predict?

Being honest: it predicts very little about your performance once you start residency.

There are a few mild interpretations that can sometimes hold:

  1. Relative competitiveness vs peers at that specific program
    If you were significantly lower on the list at a very competitive program, you may enter surrounded by residents with somewhat stronger traditional metrics (Step scores, research, letters). But those metrics are already known to you. “I was probably around the middle of the applicant pool here” is not new information.

  2. Fit vs first-impression alignment
    If you were ranked lower because the program was uncertain about fit, it might mean there is some misalignment in goals or personality. But again, that often comes from the same signals you saw on interview day. You likely felt that tension too.

  3. Randomness and tie-breaking
    In large cohorts, being lower is often a byproduct of tiebreakers: interview day, which faculty you met, minor details in your application. Statistically, these are noise. Noise does not predict systematic attrition outcomes.

When you examine actual attrition cases:

  • Residents who left due to mental health crises.
  • Residents dismissed for repeated unprofessional behavior.
  • Residents who requested transfer because they discovered a different specialty.
  • Residents who could not pass key exams.

You do not see “rank 1–10 are immune; rank 50–80 are collapsing.” It is scattered.

If anything, I have seen a non-trivial number of top-ranked residents leave because:

  • They were hyper-competitive, angling for something even “better,” and never truly committed to that training environment.
  • They overshot on lifestyle assumptions and burned out.
  • They were pushed as “must match” by faculty who knew them as students, not as residents.

None of that correlates cleanly with where you sat on a numeric list.

The one real risk: mindset contamination

There is one concrete way that being told you were “low on the list” can increase your attrition risk: by poisoning your mindset.

If you internalize:

  • “They did not really want me.”
  • “I am the backup plan, the leftover.”
  • “I am probably worse than my co-residents.”

…you are more likely to disengage, avoid asking for help, and interpret normal feedback as confirmation of your inferiority. That is a self-fulfilling path to burnout and quitting.

Psychologically, it pushes you toward attribution errors:

  • You miss a clinical detail → “Of course, I was low on their list. I am not good enough.”
  • Attending critiques your note → “They regret taking me.”

But those same events happen to everybody in residency. The PGY1 who was #2 on the list gets the same feedback; they just frame it as, “This is how I learn,” not “this proves I am a mistake.”

From a data perspective, framing matters. Perceived belonging and psychological safety are strongly associated with persistence in training programs. If “low on the list” becomes a narrative of non-belonging, that can drive attrition indirectly. Not because the number did anything, but because the belief did.

A cleaner way to think about your risk

If you actually want to quantify your own attrition risk, stop thinking about match position and start looking at variables that matter:

  • Specialty-level baseline attrition
    Surgery has higher dropout than family medicine. That is structural. If you chose a high-attrition field without truly knowing what the day-to-day looks like, your risk is inherently higher.

  • Your reasons for choosing the specialty
    If your primary drivers were prestige, income, or external pressure rather than real interest in the work, that is a risk factor. I have seen that pattern end badly more often than not.

  • Program support and culture
    Does your program have:

    • Mentorship.
    • Reasonable duty hours enforcement.
    • Wellness or mental health resources that people actually use.
    • A history of supporting struggling residents to successful completion.
  • Early performance signals
    Your first 6–12 months tell you more about your trajectory than any rank. If you respond to difficulty by seeking feedback and adjusting, your risk drops sharply.

If you want a mental “attrition risk estimate” for yourself, a crude conceptual model would weight variables like this:

Conceptual Contributors to Individual Attrition Risk

Factor                       | Weight (%)
Specialty choice fit         | 30
Program environment          | 25
Personal resilience/support  | 20
Academic/performance factors | 20
Match position               | 5

Match position gets the smallest slice. Frankly, 5% is generous.
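
If you want to see how little weight match position carries in a model like that, here is a sketch. The weights mirror the table above; the 0-to-1 scoring scale and the sample resident are entirely invented.

```python
# Conceptual sketch only: weights from the table above, everything else invented.
WEIGHTS = {
    "specialty_fit":  0.30,   # chose the field for the right reasons?
    "program_env":    0.25,   # culture, mentorship, support
    "resilience":     0.20,   # personal support system, coping, health
    "academics":      0.20,   # exam performance, remediation history
    "match_position": 0.05,   # deliberately the smallest slice
}

def attrition_risk(scores: dict[str, float]) -> float:
    """Weighted sum; each factor is scored 0 (low risk) to 1 (high risk)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical resident: strong fit, decent program, matched deep on the list.
resident = {
    "specialty_fit":  0.1,
    "program_env":    0.3,
    "resilience":     0.2,
    "academics":      0.2,
    "match_position": 0.9,   # "position #70" barely moves the total
}
print(f"risk score: {attrition_risk(resident):.2f}")   # 0.23
```

Max out match position and the total moves by at most 0.05; flip specialty fit from 0.1 to 0.9 and it jumps by 0.24. That asymmetry is the whole argument.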

Should programs worry about matching “too low” on their lists?

From the program director side, the anxiety mirrors the applicant’s: “If we had to go to #70 to fill, are we increasing our attrition risk?”

Empirically, most programs that have tracked this for themselves find no strong relationship between how deep they went on the list and their residents’ subsequent attrition or performance.

What does matter:

  • How consistently they apply their selection criteria.
  • Whether they panic-rank unknown applicants late in the season.
  • Whether they adjust after outlier years (e.g., examining why several residents left and whether their interview/selection process missed red flags).

The biggest statistical mistake I see programs make is overfitting on the “one bad resident.” They discover that a problematic PGY2 was ranked at position #82, and they generalize that “we should never go that low again.” They ignore the four excellent graduates from the same numeric neighborhood.

That is selection bias. It is terrible decision-making.
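
You can demonstrate the trap with a quick simulation: give every resident an identical 8% attrition probability regardless of rank, and count how often five years of data still produce a departure from the “deep” end of the list. The 8% rate, the 120-deep list, and the uniform rank assignment are all arbitrary assumptions for illustration.

```python
import random

random.seed(1)

TRUE_RISK = 0.08            # identical attrition probability at every rank
YEARS, CLASS_SIZE = 5, 10   # 50 residents per simulated 5-year window
TRIALS = 10_000

windows_with_deep_departure = 0
for _ in range(TRIALS):
    deep_departures = 0
    for _ in range(YEARS * CLASS_SIZE):
        rank_position = random.randint(1, 120)       # where they sat on the list
        left_program = random.random() < TRUE_RISK   # attrition ignores rank
        if left_program and rank_position > 60:
            deep_departures += 1
    if deep_departures >= 1:
        windows_with_deep_departure += 1

print(f"{windows_with_deep_departure / TRIALS:.0%} of simulated 5-year windows "
      "had at least one departure from rank 61+, with zero true rank effect.")
```

Analytically, each resident has a 0.08 × 0.5 = 4% chance of being both deep-ranked and a departure, so at least one such case appears in about 1 − 0.96^50 ≈ 87% of windows. A committee that reacts to that one case is fitting noise.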

If a program wants to reduce attrition, the data point them to:

  • Improve realistic preview of the specialty and program.
  • Enhance mentorship and support.
  • Intervene earlier when residents struggle.

Not obsess over whether an incoming intern was #14 or #58.

Bottom line: is being low on the list risky?

Condensed to the essentials:

  1. Match position is a noisy, indirect measure of anything important about your performance as a resident. It is influenced by algorithm mechanics, cross-competition, and program politics.

  2. The actual predictors of attrition are:

    • Specialty misfit.
    • Program environment.
    • Early academic and professionalism problems.
    • Personal mental health and support.
  3. Believing that being “low on the list” means you are unwanted or inferior is more dangerous than the reality. The mindset, not the number, can increase your risk.

If you matched, the data say this: the program saw you as rankable and trainable, and the algorithm did its job. Where you sat numerically on a spreadsheet does not decide whether you finish. How you adapt, seek support, and engage with the work does.
