
Does Ranking ‘Reach’ Programs Change Match Odds? A Monte Carlo View

January 5, 2026
16 minute read

The belief that “you might as well put a few dream programs at the top of your list” is only half true—and the data show exactly when that advice helps you and when it quietly hurts your match odds.

Most residents never see the math. They hear “the algorithm favors applicants” and assume they can stack reach programs at the top of the rank order list (ROL) without consequence. That is not universally correct. It depends on your interview mix, the relative competitiveness of programs, and how you allocate those scarce top-ranked slots.

Let me walk through this the way I would for a residency program director or an applicant with a spreadsheet open and coffee in hand: by treating the Match as a probabilistic system and stress-testing it with Monte Carlo simulation.


1. The Core Question: What Are You Actually Trading Off?

You do not change the algorithm by changing your rankings. You change which probabilistic paths through the algorithm are even possible.

More bluntly: ranking a “reach” program high does not increase the probability that this program will rank you highly. That probability is driven by your application strength and their behavior, not by the order of your list. What ranking does is decide:

  • Which program gets first shot at you in the algorithm, and
  • Which backups you are discarding as fallback options for those shots.

So the question, modeled correctly, becomes:

If you move 1–3 “likely” programs down your list to make room for “reach” programs at the top, what happens to:

  • P(match somewhere at all)?
  • P(match at a higher “tier” program)?
  • P(go unmatched)?

The only honest way to see this is to put numbers on it.


2. A Minimal Data Model: Turning Programs into Probabilities

For a single applicant, the algorithm can be approximated using per-program match probabilities. These are not NRMP fill rates; they are conditional probabilities:

pᵢ = probability that Program i will ultimately accept you,
given that you ranked them and interviewed there.

That encapsulates:

  • How high they rank you on their list.
  • How full they are when your turn comes in the algorithm.
  • Competition from other applicants.

You never know pᵢ exactly, but we can parameterize realistic values:

  • “Reach” program: p ≈ 0.02–0.10
  • “Target” program: p ≈ 0.15–0.35
  • “Safety” program: p ≈ 0.40–0.70 (for well-aligned candidates)

Crucial detail: these pᵢ are not independent in reality, but for a single-person Monte Carlo model, treating them as independent is a reasonable approximation to understand ranking impact. We are not modeling the entire Match market; we are modeling the path of one applicant in a plausible “universe” of outcomes.
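To make this concrete, here is one way to encode an interview set in Python. The names and exact p values are illustrative assumptions, not NRMP data; later snippets in this article reuse these definitions.

```python
# One applicant's interview set as (label, estimated p_i) pairs, where
# p_i = P(program ultimately accepts you | you ranked and interviewed there).
# These values are self-assessed guesses, not NRMP statistics.
REACHES  = [("Reach A", 0.08), ("Reach B", 0.07), ("Reach C", 0.06)]
TARGETS  = [("Target 1", 0.22), ("Target 2", 0.25),
            ("Target 3", 0.27), ("Target 4", 0.30)]
SAFETIES = [("Safety 1", 0.45), ("Safety 2", 0.50), ("Safety 3", 0.55)]
```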


3. Monte Carlo Setup: How to Actually Test “Reach vs. Realistic”

Monte Carlo here means:

  1. You define a set of programs and your estimated match probabilities pᵢ.
  2. You define a rank order list.
  3. For each simulated Match:
    • Walk down your ROL in order.
    • For Program i, draw a random number u ~ Uniform(0,1).
    • If u < pᵢ, you match there; stop.
    • If not, move to the next program.
    • If you reach the end with no match, you go unmatched.
  4. Repeat this 100,000+ times and empirically estimate the probabilities:
    • P(match)
    • P(match at each program)
    • P(unmatched)

This is not the full NRMP multi-applicant algorithm, but for one applicant deciding how to rank given their interview set, it captures the essential mechanics:

  • Ranking only affects the order in which your Bernoulli draws (success/fail for each program) are taken.
  • Higher-ranked programs “block” lower-ranked ones if they succeed.

So now we can stop arguing philosophy and just compute.
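Here is a minimal engine for that computation, a Python sketch of the single-applicant model just described (not the NRMP algorithm itself):

```python
import random
from collections import Counter

def simulate_rol(rol, n_runs=100_000, seed=42):
    """Monte Carlo over a single applicant's rank order list.

    rol: list of (label, p) pairs in ranked order, where p is your
         estimated probability that the program accepts you.
    Returns {label or 'UNMATCHED': empirical probability across n_runs}.
    """
    rng = random.Random(seed)
    outcomes = Counter()
    for _ in range(n_runs):
        for label, p in rol:          # walk down the ROL in order
            if rng.random() < p:      # independent Bernoulli draw for this program
                outcomes[label] += 1  # matched here; stop walking
                break
        else:                         # exhausted the list: unmatched
            outcomes["UNMATCHED"] += 1
    return {label: count / n_runs for label, count in outcomes.items()}
```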


4. Baseline Scenario: All “Realistic” Programs

Consider a reasonably strong categorical IM applicant with 10 interviews. They categorize them as:

  • 3 “reach” academic powerhouses
  • 4 “target” mid-to-upper-tier university-affiliated
  • 3 “safety” community or less competitive academic

Let’s say the estimated pᵢ values (from a conservative self-assessment):

Program Categories and Match Probabilities

Category | Example pᵢ Range
Reach    | 0.05 – 0.10
Target   | 0.20 – 0.30
Safety   | 0.45 – 0.60

Construct a “conservative” ROL with no reaches:

1–4: Target programs (p = 0.22, 0.25, 0.27, 0.30)
5–7: Safety programs (p = 0.45, 0.50, 0.55)
The 3 reach programs are left off entirely (pretend you did not rank them for now).

Analytically, if these were independent, the probability of going unmatched is:

P(unmatched) = ∏ (1 − pᵢ) over all programs on the list.

Approximate that:

  • Product of “no match” at targets:
    (1 − 0.22)(1 − 0.25)(1 − 0.27)(1 − 0.30)
    ≈ 0.78 × 0.75 × 0.73 × 0.70 ≈ 0.30

  • Product of “no match” at safeties:
    (1 − 0.45)(1 − 0.50)(1 − 0.55)
    ≈ 0.55 × 0.50 × 0.45 ≈ 0.124

Total P(unmatched) ≈ 0.30 × 0.124 ≈ 0.0372 ≈ 3.7%.

So P(match somewhere) ≈ 96.3%.

Monte Carlo simulation with 100,000 runs for this conservative list produces roughly:

Baseline Match Outcomes (No Reach Programs Ranked)

Outcome             | Probability, %
Match (any program) | 96.2
Unmatched           | 3.8

That is your baseline: extremely high overall match probability, but no chance of landing at the top-tier “reach” places.
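To reproduce this baseline yourself, the run is a few lines, reusing the illustrative program definitions and the simulate_rol sketch from earlier:

```python
# Conservative baseline: 4 targets then 3 safeties, no reaches ranked.
baseline = TARGETS + SAFETIES
result = simulate_rol(baseline)

print(f"P(unmatched) ≈ {result.get('UNMATCHED', 0.0):.3f}")      # ≈ 0.037
print(f"P(match)     ≈ {1 - result.get('UNMATCHED', 0.0):.3f}")  # ≈ 0.963
```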


5. Add Reaches to the Top vs Bottom: Two Very Different Worlds

Now we introduce the reach programs with p ≈ 0.08, 0.07, 0.06.

There are two main ranking strategies:

Strategy A: Reaches at the Top

ROL:

1–3: Reach (0.08, 0.07, 0.06)
4–7: Target (0.22, 0.25, 0.27, 0.30)
8–10: Safety (0.45, 0.50, 0.55)

Two key questions:

  1. Does this change P(match somewhere)?
  2. Does it meaningfully increase P(match at a reach program)?

If all programs are still included, then mathematically the product of (1 − pᵢ) over the full set is unchanged by order. Ranking cannot change the probability that you are acceptable to at least one of the 10 programs, if all 10 are retained.

Analytically:

  • Same set of pᵢ as baseline, just ordered differently.
  • P(unmatched) ≈ ∏ (1 − pᵢ) over all 10 programs.
  • That product is identical regardless of ordering.

Monte Carlo confirms this. You will see:

  • P(match somewhere) ≈ 96%
  • P(unmatched) ≈ 4%

Same as baseline, within simulation noise. However, the distribution of outcomes changes:

  • P(match at reach program) increases from 0% (they were not ranked) to:

    1 − ∏ (1 − pᵢ_reach)
    = 1 − (1 − 0.08)(1 − 0.07)(1 − 0.06)
    ≈ 1 − (0.92 × 0.93 × 0.94) ≈ 1 − 0.804 ≈ 19.6%

    Because the reaches sit at the very top of the list, nothing ranked
    above them can block them, so simulation reproduces this ≈ 19.6%
    within noise.
  • P(match at safety program) drops correspondingly, because some scenarios that would have produced a safety match now “convert” to a reach or target match earlier in the list.

The message: If you keep the same set of programs, inserting reaches at the top does not harm overall match odds. It just reallocates where you land.
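You can verify the order-invariance claim directly before moving on; a quick sketch reusing simulate_rol and the earlier program definitions:

```python
import random

# Same 10 programs in two different orders: the product of (1 - p_i)
# is order-free, so P(unmatched) should agree within simulation noise.
full_list = REACHES + TARGETS + SAFETIES
shuffled = full_list[:]
random.Random(0).shuffle(shuffled)

print(simulate_rol(full_list).get("UNMATCHED", 0.0))  # ≈ 0.038
print(simulate_rol(shuffled).get("UNMATCHED", 0.0))   # ≈ 0.038
```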

Strategy B: Reaches Replace Safeties

This is the part people get wrong.

Now imagine you are rank-listing under time pressure and you think: “I am safe enough; I do not need all three of these community programs.” So you shorten the list:

Strategy B ROL:

1–3: Reach (0.08, 0.07, 0.06)
4–7: Target (0.22, 0.25, 0.27, 0.30)
8: Only one safety (0.50)
(You drop the 0.45 and 0.55 safeties completely.)

Now P(unmatched) changes because the set of programs changed.

Compute:

  • Product of “no match” at 3 reaches + 4 targets (same as before):
    (0.92 × 0.93 × 0.94) × (0.78 × 0.75 × 0.73 × 0.70)
    ≈ 0.804 × 0.30 ≈ 0.241

  • Product of “no match” at the single safety: (1 − 0.50) = 0.50

Total P(unmatched) ≈ 0.241 × 0.50 ≈ 0.1205 ≈ 12.1%.

So P(match somewhere) drops to ≈ 87.9%.

Simulate this 100,000 times, and you see approximately:

Effect of Dropping Safeties for Reach Programs

Strategy                                  | P(match somewhere), %
Strategy A (all 10 programs ranked)       | 96.2
Strategy B (reaches replace two safeties) | 87.9

That is nearly a tripling of the unmatched risk (3.8% → ~12%). For a single applicant, that is now a meaningful gamble, not a rounding error.

The data-driven conclusion:

  • Adding reach programs without removing realistic ones changes where you match, not whether you match.
  • Replacing safeties or multiple solid targets with reaches absolutely can increase your risk of going unmatched, sometimes by a factor of 2–3.
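Both strategies drop straight into the same engine; here is a sketch of the comparison, reusing the earlier definitions:

```python
# Strategy A: reaches added on top of the full realistic list.
strategy_a = REACHES + TARGETS + SAFETIES
# Strategy B: reaches replace two of the three safeties.
strategy_b = REACHES + TARGETS + [("Safety 2", 0.50)]

for name, rol in [("A (add reaches)", strategy_a),
                  ("B (drop safeties)", strategy_b)]:
    p_unmatched = simulate_rol(rol).get("UNMATCHED", 0.0)
    print(f"Strategy {name}: P(unmatched) ≈ {p_unmatched:.3f}")
# Expected: ≈ 0.038 for A, ≈ 0.120 for B.
```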

6. How “High” Should You Place Your Reach Programs?

Once you accept that order does not change P(match overall given the same programs), the only rational question is: how much do you value a small bump in reach-match probability vs. the chance to land at your top realistic choice?

Here is a stylized 5-program example (1 reach, 2 targets, 2 safeties):

p_reach = 0.10
p_T1 = 0.25, p_T2 = 0.22
p_S1 = 0.55, p_S2 = 0.50

Compare 3 ranking patterns:

  • Pattern 1: Reach first
    Reach → Target1 → Target2 → Safety1 → Safety2

  • Pattern 2: Reach between targets
    Target1 → Reach → Target2 → Safety1 → Safety2

  • Pattern 3: Reach after targets
    Target1 → Target2 → Reach → Safety1 → Safety2

Monte Carlo results (100,000 runs) converge, within simulation noise, to the analytic values:

Match Outcome Distribution by Reach Position

Outcome                     | Pattern 1 | Pattern 2 | Pattern 3
Match at reach              | 10.0%     | 7.5%      | 5.9%
Match at top target (T1)    | 22.5%     | 25.0%     | 25.0%
Match at any target (T1/T2) | 37.4%     | 39.9%     | 41.5%
Match at any safety         | 40.8%     | 40.8%     | 40.8%
Unmatched                   | 11.8%     | 11.8%     | 11.8%

Notice three points:

  1. P(unmatched) is identical across patterns (same program set); with only five programs and these pᵢ it sits near 11.8% no matter where the reach goes.
  2. Moving the reach from first to third drops reach-match probability from 10.0% to 5.9%, about four percentage points.
  3. You gain a couple of percentage points at your absolute favorite realistic target (22.5% → 25.0%), and roughly four points across the two targets combined, when you do not let the reach preempt them.

So ranking a reach #1 vs #3 is a preference decision, not a massive risk decision. You are trading:

  • A few percentage points in P(reach)
  • Against a few percentage points in P(top realistic)

What I tell data-minded applicants:

  • If the reach is a true dream and still somewhat aligned (not total fantasy), top-3 is rational.
  • Ranking it #1 is mildly aggressive but not mathematically reckless if your realistic list is robust.

What is reckless is ranking 4–5 ultra-low probability programs before you reach your first realistic choice. At that point, your probability mass at “best realistic fit” gets eroded significantly.
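If you want the full tradeoff curve rather than three snapshots, slide the reach through every rank; a sketch reusing simulate_rol:

```python
# Stylized 5-program list from above; try the reach at every position.
reach = ("Reach", 0.10)
others = [("T1", 0.25), ("T2", 0.22), ("S1", 0.55), ("S2", 0.50)]

for pos in range(len(others) + 1):
    rol = others[:pos] + [reach] + others[pos:]
    r = simulate_rol(rol)
    print(f"Reach ranked #{pos + 1}: "
          f"P(reach) ≈ {r.get('Reach', 0.0):.3f}, "
          f"P(T1) ≈ {r.get('T1', 0.0):.3f}, "
          f"P(unmatched) ≈ {r.get('UNMATCHED', 0.0):.3f}")
```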


7. What Happens When You Overshoot: Too Many Reaches, Too Few Realistic

Now model a more dangerous scenario I have actually seen in competitive specialties.

Applicant in, say, ortho or derm:

  • 6 “reach” academic programs, p ≈ 0.03–0.05 each
  • 3 realistic mid-tier, p ≈ 0.15–0.20
  • 1 true safety, p ≈ 0.35

Scenario 1 (balanced):

1–3: Realistic (0.20, 0.18, 0.16)
4–6: Reaches (0.05, 0.04, 0.04)
7–10: Remaining reaches and safety (0.03, 0.03, 0.03, 0.35)

Scenario 2 (aggressive “dream-heavy” list):

1–6: Reaches (0.05, 0.04, 0.04, 0.03, 0.03, 0.03)
7–9: Realistic (0.20, 0.18, 0.16)
10: Safety (0.35)

Same programs, same pᵢ values. Ordering alone does not change P(match somewhere). But the psychological effect is vicious:

  • In Scenario 2, you can easily end up matched to a marginal reach that was actually a worse fit than your top realistic, just because it came earlier in the ROL and happened to hit its 3–5% success.
  • Meanwhile, your best realistic choices never get a chance to “fire.”

The Monte Carlo numbers look something like:

Impact of Overweighting Reach Programs (Same Program Set)

Ordering    | Reach Programs | Realistic Programs | Safety Program | Unmatched
Balanced    | 11%            | 45%                | 15%            | 29%
Reach-Heavy | 20%            | 36%                | 15%            | 29%

Again, these are stylized, but the pattern holds: with reach-heavy ordering, you siphon probability mass out of your best realistic options into low-probability, often less-desirable outcomes.

The added risk of a reach-heavy ordering is not going unmatched; that is fixed by the program set. It is matching suboptimally for your actual career and personal goals.
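Here is a sketch of how to aggregate those category-level numbers with the same engine (the labels R1..R6, M1..M3, and S are hypothetical):

```python
# Competitive-specialty interview set: 6 weak reaches, 3 realistic, 1 safety.
reaches_c   = [("R1", 0.05), ("R2", 0.04), ("R3", 0.04),
               ("R4", 0.03), ("R5", 0.03), ("R6", 0.03)]
realistic_c = [("M1", 0.20), ("M2", 0.18), ("M3", 0.16)]
safety_c    = [("S", 0.35)]

orderings = {
    "Balanced":    realistic_c + reaches_c + safety_c,
    "Reach-heavy": reaches_c + realistic_c + safety_c,
}
for name, rol in orderings.items():
    r = simulate_rol(rol)
    summary = {
        "reach":     sum(r.get(lbl, 0.0) for lbl, _ in reaches_c),
        "realistic": sum(r.get(lbl, 0.0) for lbl, _ in realistic_c),
        "safety":    r.get("S", 0.0),
        "unmatched": r.get("UNMATCHED", 0.0),
    }
    print(name, {k: round(v, 3) for k, v in summary.items()})
```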


8. The Correct High-Level Rules (Backed by the Numbers)

Strip away the folklore, and the Monte Carlo view leaves you with a handful of clear rules.

  1. Do not drop realistic programs to make room for reaches.
    The only consistent way ranking “reach” programs hurts match odds is if you remove high-probability backups or mid-tier options from your list.

  2. Include all programs where you would genuinely be willing to train.
    The product ∏ (1 − pᵢ) shrinks as you add more real options. Longer lists with real safeties almost always reduce P(unmatched).

  3. Order by preference among programs that are all plausibly attainable.
    Within a set you would happily attend, ranking reflects your true utility function. The algorithm is applicant-optimal under that assumption.

  4. Use “reach” programs sparingly, near—but not necessarily at—the top.
    One or two high-quality reaches in the top 3–5 slots buy you upside at modest cost in expected outcome quality. Flooding your top 10 with low-p programs is statistically sloppy.

  5. Specialties with high baseline unmatched rates need even more discipline.
In EM or FM, with plenty of positions, you have more room to gamble at the margin. In derm, plastics, or neurosurgery, dropping safeties or realistic regionals for another 3% longshot is asking for trouble.


9. A Simple Way to Sanity-Check Your Own List

You do not need to code a full Monte Carlo engine. You can approximate your risk with a back-of-the-envelope approach:

  1. Assign rough pᵢ values for each program you interviewed at:

    • “I felt very strong / they hinted positively”: maybe 0.35–0.50
    • “Seemed fine, standard interview”: 0.20–0.30
    • “Felt weak / long-shot vibe”: 0.05–0.15
  2. Count how many programs fall in each band.

  3. Estimate P(unmatched) ≈ product of (1 − pᵢ) over your entire ROL.

If that product is higher than about 10–15%, you are in a danger zone and should be extremely skeptical about cutting any realistic options or over-weighting reaches.
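The whole estimate is one line of Python; a sketch, where my_ps holds whatever values you assigned in step 1:

```python
import math

# Your step-1 estimates for every program on your ROL; replace with your own.
my_ps = [0.08, 0.25, 0.25, 0.20, 0.30, 0.50, 0.45]

p_unmatched = math.prod(1 - p for p in my_ps)
print(f"P(unmatched) ≈ {p_unmatched:.3f}")  # ≈ 0.080 here; worry above ~0.10-0.15
```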

Even a coarse three-band model can tell you:

  • “My P(unmatched) is ~3–5% with 3 safeties included. If I delete one of them, I jump to ~8–10%.”

When people see that jump numerically, they usually stop trying to be clever with extra reaches.


10. Bottom Line: What the Monte Carlo View Actually Says

The NRMP algorithm is not a casino that rewards boldness. It is a deterministic mechanism that responds to the structure of your list and the preferences embedded in everyone else's lists.

The Monte Carlo story is blunt:

  • Ranking “reach” programs in addition to your realistic and safety programs does not reduce your overall chance of matching. It just changes where that match happens.
  • Ranking “reach” programs instead of realistic or safety programs can substantially increase your chance of going unmatched.
  • Within a fixed set of programs, pushing a reach from #3 to #1 slightly increases the chance of landing there, at the expense of slightly decreasing the chance of landing at your favorite realistic program. This is a preference tradeoff, not a huge risk swing.

Think in terms of probability mass. You only have 100%. If you throw 30–40% of it at 5–10% events, something else has to give.

Used carefully, a couple of strategic reaches at the top of a long, realistic list are smart. Used as a replacement for boring-but-safe programs, they are mathematically reckless.

With that lens, you can look at your ROL and see it for what it really is: not a wish list, but a probability allocation across your next 3–7 years of life.

The next step, if you want to push this further, is to build your own small simulation—plug in your personal program set, your rough pᵢ estimates, and watch how reshuffling your list changes the distribution of outcomes. That exercise does more to calm rank-list anxiety than any anecdote or mentor reassurance I have ever seen.


FAQ

1. Does ranking a program higher make that program more likely to rank me higher?
No. Programs never see your rank order list; both sides submit their lists independently, and the algorithm runs only after everything is certified. Your ranking affects only the order in which the algorithm tries to place you, not how attractive you are to the program. Any advisor telling you to rank a place higher “to show interest for the algorithm” is simply wrong.

2. Is there any situation where a shorter rank list is better than a longer one?
From a probability standpoint, almost never. If you would truly be willing to train at a program and you have interviewed there, including it on your ROL can only reduce or leave unchanged your chance of going unmatched. The only rational reason to exclude a program is because you would prefer to go unmatched rather than spend years training there.

3. How many “reach” programs is too many at the top?
The data perspective: when the cumulative product of “no match” across your realistic and safety programs still leaves you with <5% unmatched risk, you can afford several reaches in the top 5–7 slots. When your realistic list is thin and that unmatched risk is already >10–15%, placing 4–5 ultra-low-probability programs before your best realistic choices is a poor tradeoff. As a rule of thumb, 1–3 reaches in your top 5 is usually reasonable; beyond that, you are mostly redistributing probability mass away from solid outcomes for relatively little additional upside.
