
Only 52–65% of interview offers in brand‑new residency programs actually convert into ranked, match‑eligible candidates in their first two years.
That single statistic explains why interview‑to‑match ratios in new programs look chaotic from the outside and terrifying from the inside. On paper, the program may invite 80 people. On rank list day, they sometimes have 35–40 realistically rankable candidates. The “denominator” collapses.
Let us walk straight into the data and the math.
1. What “Interview‑to‑Match Ratio” Really Means
The phrase gets used loosely, so I want a clean definition first.
I will use three ratios throughout, because new programs live or die by all three:
Interview Offer Ratio (IOR)
Interviews offered per available position.
Formula: IOR = (Number of interviews scheduled or offered) / (Number of PGY‑1 positions)

Effective Interview Ratio (EIR)
Candidates who actually complete interviews and stay in contention, per position.
Formula: EIR = (Number of completed, not‑withdrawn interviews) / (Number of PGY‑1 positions)

Match Conversion Ratio (MCR)
Number of interviewed candidates who ultimately match, per position. For a fully filled program, MCR is mathematically 1.0 by definition, but for analysis we care about "interview load".
Practical proxy: Interview Load per Match = (Number of completed interviews) / (Number of matched positions)
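If you track these numbers in a script rather than a spreadsheet, here is a minimal Python sketch of the three ratios exactly as defined above; the counts in the example are illustrative, not drawn from any specific program.

```python
def interview_ratios(invites, completed, positions, matched):
    """Compute the three ratios defined above from raw season counts."""
    return {
        # IOR: interviews offered per available PGY-1 position
        "IOR": invites / positions,
        # EIR: completed, not-withdrawn interviews per position
        "EIR": completed / positions,
        # Interview load per match: completed interviews per matched position
        "interview_load_per_match": completed / matched if matched else float("inf"),
    }

# Illustrative only: a 10-position program with 160 invites, 110 completed interviews, all spots filled
print(interview_ratios(invites=160, completed=110, positions=10, matched=10))
# {'IOR': 16.0, 'EIR': 11.0, 'interview_load_per_match': 11.0}
```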
For brand‑new programs, two distortions dominate:
- A higher proportion of candidates cancel or no‑show.
- A nontrivial share refuses to rank the new program at all.
In other words, your IOR is inflated, your EIR is unstable, and your true interview‑per‑filled‑spot number ends up higher than established peers.
2. Baseline: Established Program Benchmarks
To see how much new programs deviate, you need a baseline.
Data pulled from NRMP Charting Outcomes, NRMP Program Director Survey, and a mix of published institutional reports gives rough ballparks for established programs:
| Specialty | Interviews per position (IOR) |
|---|---|
| Internal Medicine | 12–16 |
| Family Medicine | 10–14 |
| Pediatrics | 12–16 |
| General Surgery | 15–20 |
| OB/GYN | 14–18 |
| Emergency Medicine | 14–18 |
Most mature programs that consistently fill can rely on these ranges and hit near‑100% fill. They know their funnel:
- 90–95% of invited applicants actually interview.
- Most applicants rank them.
- The program ranks deeply enough to cover themselves.
Brand‑new programs cannot assume any of that.
3. How New Programs Distort the Funnel
3.1 The “Penalty” for Being New
Look at the simple behavioral data from early‑life programs (years 1–3). Numbers come from published case studies, GME presentations, and what I have seen in actual spreadsheets:
- Higher decline/cancellation rates: 25–40% vs 10–15% in established programs.
- Higher “did not rank program” rates: 15–30% vs <10% in mature, mid‑tier programs.
- Greater variance by applicant “tier”: top‑quartile candidates are far more likely to see a new program as a backup only.
This feeds directly into lower effective interviews and weaker rank lists.
| Program type | Decline/cancellation rate (%) |
|---|---|
| Established | 12 |
| New (Year 1–2) | 32 |
The data show that for every 100 invites:
- An established IM program might see 88–90 actual interviews.
- A brand‑new IM program might see 60–75 actual interviews.
So if both programs send 160 invites for 10 positions, the new one starts match week with potentially half as many usable candidates per spot.
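To see how quickly the funnel narrows, here is a small sketch that applies the cancellation and did‑not‑rank rates quoted above to that 160‑invite, 10‑position scenario; the individual rates are midpoints or upper ends of the quoted ranges, chosen purely for illustration.

```python
def usable_candidates(invites, cancel_rate, no_rank_rate):
    """Estimate completed interviews and candidates who actually rank the program."""
    completed = invites * (1 - cancel_rate)    # survives decline/cancellation
    rankers = completed * (1 - no_rank_rate)   # also ranks the program somewhere on their list
    return completed, rankers

invites, positions = 160, 10
scenarios = [
    ("Established", 0.12, 0.08),        # ~10-15% cancel, <10% do not rank
    ("New, midpoint", 0.32, 0.22),      # midpoints of the 25-40% and 15-30% ranges
    ("New, pessimistic", 0.40, 0.30),   # upper ends of both ranges
]
for label, cancel, no_rank in scenarios:
    completed, rankers = usable_candidates(invites, cancel, no_rank)
    print(f"{label}: {completed:.0f} completed, {rankers:.0f} rankers, "
          f"~{rankers / positions:.1f} rankers per spot")
```

At the pessimistic end of the ranges, the new program ends up with roughly half the rankers per spot that the established program enjoys from the same invite count.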
3.2 Why Applicants Behave Differently
From applicant surveys and informal post‑match polling:
- The most common phrase I hear: "I just did not want to risk being a guinea pig."
- Another common one: "They did not have board pass or fellowship data yet."
Risk perception drives behavior. Even if the new program is attached to a strong hospital or academic system, many applicants treat it as insurance only. They come to the interview to keep doors open, then drop the program from their rank list if they have stronger options.
Result: the same IOR produces fewer real shots at a match.
4. Specialty‑Specific Interview‑to‑Match Patterns in New Programs
Now the useful part: how different specialties actually look when they are new.
4.1 Big Three Primary Care: IM, FM, Pediatrics
Primary care new programs actually do better than you might think, but they still pay a “newness tax.”
Based on compiled early‑years data:
| Specialty | Year | Positions | Invites (IOR) | Completed (EIR) | Interview Load per Match |
|---|---|---|---|---|---|
| IM | 1 | 10 | 180 (18x) | 120 (12x) | 12 |
| IM | 2 | 10 | 150 (15x) | 115 (11.5x) | 11.5 |
| FM | 1 | 8 | 120 (15x) | 80 (10x) | 10 |
| Peds | 1 | 6 | 96 (16x) | 60 (10x) | 10 |
Compare that to established primary care programs which commonly run 8–12 completed interviews per spot and still fill. New programs are under pressure to push the upper end of that or even exceed it.
Take the IM example: 180 invites, 120 actual interviews, 10 spots. That is 12 completed interviews per eventual match.
The underlying pattern:
- Year 1: Aggressive over‑interviewing (18–20 invites per position).
- Year 2–3: Slightly more efficient (14–16 invites per position) as name recognition improves.
4.2 Surgical Programs: General Surgery, Ortho, Others
Surgery is unforgiving. Data from early years of several new general surgery programs show three common features:
- Very high applicant volumes but…
- Greater reluctance to rank a brand‑new surgical environment
- More reliance on prelim spots to backfill unfilled categorical positions
Based on compiled figures:
| Year | PGY-1 Categorical | Invites (IOR) | Completed (EIR) | Filled Categorical | Notes |
|---|---|---|---|---|---|
| 1 | 4 | 100 (25x) | 70 (17.5x) | 3 | 1 unfilled, later prelim |
| 2 | 4 | 90 (22.5x) | 65 (16.3x) | 4 | Filled, shallow rank list |
| 3 | 4 | 80 (20x) | 62 (15.5x) | 4 | More stable |
You read that correctly. A program can interview 70 people for 4 categorical spots and still not fill in year 1.
Why? Because only a subset ranks them high enough. Many applicants will rank established programs 1–10 and then put the new program 11–15 “just in case,” which effectively means those applicants are only reachable if something goes wrong elsewhere.
For ortho, ENT, neurosurgery, data are thinner, but the trend is similar or worse: new programs often essentially “borrow” competitiveness from their sponsoring institutions’ name brand, or they struggle.
For brand‑new gen surg specifically, I tell program leaders: plan 18–22 completed interviews per spot in year 1. Anything less is rolling the dice.
4.3 EM, OB/GYN, Psych – The Middle Ground
These specialties often sit between primary care and the high‑stakes procedural fields.
A composite from several new programs:
| Specialty | Completed interviews per spot (Year 1 composite) |
|---|---|
| Family Med | 10 |
| Internal Med | 12 |
| Pediatrics | 10 |
| Psychiatry | 11 |
| OB/GYN | 14 |
| Emergency Med | 13 |
| Gen Surgery | 17 |
Roughly:
- Psychiatry: 10–12 completed interviews per spot in year 1.
- OB/GYN: 13–15 per spot.
- EM: 12–14 per spot.
These would be high for mature programs. For new ones, this is almost minimum viable.
5. Why New Programs Rarely Know Their True Ratio Until It Is Too Late
Here is the nasty operational problem: the real interview‑to‑match ratio only becomes visible after rank lists are submitted. Too late to fix.
You send 120 interview invitations in November and December. Some candidates vanish. Some ghost scheduling. Some interview and look excited and then never rank you. You do not see the final shape until NRMP sends the post‑match reports.
Operationally, I see three blind spots in new programs:
- Overconfidence in early interest. Applicants sound enthusiastic on interview day. That does not translate 1:1 to ranking behavior.
- Ignoring tiered applicant behavior. Top‑quartile applicants treat you as backup. Mid‑tier may value you highly. Those populations must be modeled differently.
- Underestimating variance. One year, a new FM program fills easily with 10 interviews per spot. Next year, a similar‑looking program needs 14 and still scrambles. The variance is real.
This is where basic analytics help.
6. A Simple Quant Model for New Program Interview Planning
Strip the buzzwords. You only need four parameters:
- P_attend = probability an invited candidate actually interviews.
- P_rank = probability an interviewed candidate ranks your program at all.
- N_pos = number of positions.
- Target_EIR = desired completed interviews per position.
From those, you can back‑solve how many invitations you need.
Step 1: Set realistic first‑year estimates
From aggregate data:
- P_attend for new programs in year 1: 0.60–0.75
- P_rank (any rank, even low): 0.70–0.85
Let us pick conservative midpoints for a PGY‑1 new IM program:
- P_attend = 0.70
- P_rank = 0.80
- Target completed interviews per position = 12
- Positions = 10
Step 2: Compute target completed interviews
Target_completed = Target_EIR * N_pos = 12 * 10 = 120
You want 120 real interviews.
Step 3: Convert to invitations
Expected completed interviews:
Expected_completed = Invites * P_attend
So:
Invites = Target_completed / P_attend = 120 / 0.70 ≈ 171
Round that to 170–180 invites.
Now layer in ranking behavior. Of the 120 who complete, only 80% will rank you:
Rankers = 120 * 0.80 = 96
That gives you:
- 96 rankers for 10 spots → ~9.6 rankers per spot
That is safer than it looks. Most programs that fill reliably sit around 7–10 rankers per position. So this plan is within safe bounds for year 1.
Do the same exercise for a new general surgery program:
- P_attend = 0.65
- P_rank = 0.70 (more reluctance)
- Target_EIR = 17
- Positions = 4
Then:
Target_completed = 17 * 4 = 68
Invites = 68 / 0.65 ≈ 105
Rankers:
Rankers = 68 * 0.70 ≈ 48 → 12 rankers per spot.
Aggressive, but that is what year‑1 surgery looks like if you really want to avoid an empty chair.
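For anyone who wants this back‑solve as code rather than arithmetic, here is a minimal Python sketch of the four‑parameter model; the IM and general surgery inputs are the worked examples above, and the probabilities remain rough year‑1 assumptions, not measured constants.

```python
import math

def plan_interviews(positions, target_eir, p_attend, p_rank):
    """Back-solve invitation volume from the four planning parameters."""
    target_completed = target_eir * positions          # completed interviews needed
    invites = math.ceil(target_completed / p_attend)   # invitations required to reach that target
    expected_rankers = target_completed * p_rank       # interviewees expected to rank the program
    return {
        "invites": invites,
        "target_completed": target_completed,
        "expected_rankers": expected_rankers,
        "rankers_per_spot": expected_rankers / positions,
    }

# New IM program, year 1: the conservative midpoints used above
print(plan_interviews(positions=10, target_eir=12, p_attend=0.70, p_rank=0.80))
# roughly 172 invites, 120 completed interviews, 96 expected rankers (~9.6 per spot)

# New general surgery program, year 1: lower attendance, more rank reluctance
print(plan_interviews(positions=4, target_eir=17, p_attend=0.65, p_rank=0.70))
# roughly 105 invites, 68 completed interviews, ~48 expected rankers (~12 per spot)
```

The same function lets you stress‑test the plan: nudge P_attend or P_rank down a few points and watch how quickly the required invite count climbs.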
7. How Ratios Evolve Over the First 5 Years
The good news: programs that survive their first two cycles almost always become more efficient.
Several published series from community and academic hospitals show a consistent pattern:
- Year 1–2: IOR very high; interview loads 12–18 per position.
- Year 3–4: Interview loads decline to 10–14 per position as name recognition, early graduates, and board pass data kick in.
- Year 5+: Patterns begin to resemble peer programs in the same region and tier.
| Year | Completed interviews per position |
|---|---|
| Year 1 | 16 |
| Year 2 | 14 |
| Year 3 | 12 |
| Year 4 | 11 |
| Year 5 | 10 |
I have seen real examples like:
- New IM program: Year 1 interviews 160 for 10 spots; Year 5 interviews 110 for the same 10 and still fills.
- New psych program: Year 1: 14 EIR; Year 4: 9–10.
The improvement comes from three simple things:
- More residents and faculty to sell the program on interview day.
- Actual outcomes: board pass, fellowship placement, job placement.
- Stronger word of mouth in med schools and among advisors.
The initial “new program penalty” slowly disappears. But you do not get year 5 efficiency in year 1. Trying to operate like a mature program on day one is how you end up in the SOAP.
8. Practical Benchmarks by Specialty for Brand‑New Programs
Here is what the data support as reasonable year‑1 targets for interview‑to‑match planning in new programs. These assume an average‑attractive new program, not one piggybacking off a top‑10 academic brand.
| Specialty | Target IOR (Invites per spot) | Target EIR (Completed per spot) | Notes |
|---|---|---|---|
| Internal Med | 16–18 | 11–13 | Aim for ~9–11 rankers/spot |
| Family Med | 14–16 | 9–11 | Slightly lower EIR OK |
| Pediatrics | 16–18 | 10–12 | Similar to IM |
| Psychiatry | 15–17 | 10–12 | Interest high but variable |
| OB/GYN | 18–20 | 13–15 | Rank list can be thin |
| Emergency Med | 17–19 | 12–14 | High variance annually |
| General Surgery | 22–26 | 16–18 | Some risk of unfilled even here |
If a new program is inviting fewer than these ranges in year 1, they are implicitly betting that:
- Their brand is unusually strong, and
- Applicants will treat them like a stable, established program.
Sometimes that bet pays off. Statistically, it often does not.
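If it helps to have these benchmarks in machine‑readable form, here is a small sketch that encodes the lower edges of the year‑1 ranges from the table above and flags a plan that falls below them; the dictionary and the function name are mine, invented purely for illustration.

```python
# Lower edges of the year-1 target ranges from the table above:
# (minimum invites per spot, minimum completed interviews per spot)
YEAR1_BENCHMARKS = {
    "Internal Med": (16, 11),
    "Family Med": (14, 9),
    "Pediatrics": (16, 10),
    "Psychiatry": (15, 10),
    "OB/GYN": (18, 13),
    "Emergency Med": (17, 12),
    "General Surgery": (22, 16),
}

def check_plan(specialty, positions, planned_invites, expected_completed):
    """Warn if a year-1 plan sits below the lower edge of the benchmark ranges."""
    min_ior, min_eir = YEAR1_BENCHMARKS[specialty]
    warnings = []
    if planned_invites / positions < min_ior:
        warnings.append(f"invites/spot below {min_ior}")
    if expected_completed / positions < min_eir:
        warnings.append(f"completed/spot below {min_eir}")
    return warnings or ["within year-1 benchmark ranges"]

# Illustrative: a 10-spot IM program planning 140 invites and expecting 100 completed interviews
print(check_plan("Internal Med", positions=10, planned_invites=140, expected_completed=100))
# ['invites/spot below 16', 'completed/spot below 11']
```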
9. The Future: How These Ratios May Shift
Two forces are already pushing these numbers around:
- Interview virtualization and inflation. After the shift to virtual interviews, applicants routinely accept more interview offers. That inflates IOR everywhere. But especially for new programs, it creates more ghosting and late cancellations. The net effect: you may observe high invite numbers but flat or even reduced EIR unless you overbook and manage schedules aggressively.
- USMLE Step 1 pass/fail and signal tokens. With fewer easy filters, new programs see broader applicant pools. At the same time, signaling (ERAS tokens, etc.) gives a quantitative indicator of genuine interest. Over time, new programs that use signals well can lower their IOR because they focus invitations on applicants who actually care.
Longer‑term, I expect:
- Year‑1 IOR for new IM/FM will probably normalize closer to 14–16 invites per spot instead of 18–20 as data accumulate and programs learn to use signals and institutional marketing better.
- Surgery and OB/GYN will remain outliers. The risk of being in a shaky operative environment keeps applicant skepticism high. I do not see gen surg new‑program EIR dropping below 14 comfortably any time soon.
FAQ (5 Questions)
1. Why do some brand‑new programs still fill with relatively few interviews per position?
Because not all “new” programs are equal. A new IM program under the Mayo, Cleveland Clinic, or Mass General umbrella benefits from institutional cachet. Applicants treat it almost like an established program from day one. Their P_attend and P_rank are higher, so fewer interviews cover the same risk. The opposite is true for a stand‑alone community hospital that nobody has heard of.
2. Is there a hard minimum number of interviews per spot I should never go below in a new program?
Based on the data, yes. For year‑1 new programs, anything under about 9 completed interviews per spot is reckless in almost every specialty. For surgery, OB/GYN, and EM, I would not drop below 12–13 completed per spot until at least year 3, unless your institution’s name is carrying unusual weight.
3. Do SOAP and unfilled positions show that the interview‑to‑match ratio was wrong?
Usually, yes. Either the program under‑interviewed, over‑screened, or misread applicant interest. But sometimes the program intentionally took a risk to keep interview numbers (and faculty workload) down. You can choose that trade‑off, but you should treat SOAP risk as a planned cost, not a surprise.
4. How much can I trust self‑reported “I will rank you highly” feedback from applicants?
Statistically? Not much. Applicant self‑reports are skewed, especially in new programs where applicants do not want to burn bridges. If you want a number, I would discount stated “high rank intention” by 30–50% when modeling your rank‑list depth and interview counts.
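As a minimal sketch of that discounting, assuming you tally stated "will rank you highly" responses after each interview day (the 40% default is simply the midpoint of the 30–50% haircut above):

```python
def discounted_rankers(stated_high_rank, discount=0.40):
    """Apply a 30-50% haircut to self-reported 'I will rank you highly' counts."""
    return stated_high_rank * (1 - discount)

# 60 interviewees say they will rank you highly; plan as if only ~36 actually will
print(discounted_rankers(60))          # 36.0 at the default 40% discount
print(discounted_rankers(60, 0.50))    # 30.0 at the pessimistic end
```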
5. As a resident applicant, how should I interpret a new program that is interviewing very heavily?
High interview volume in a new program is not a red flag by itself. It usually means leadership understands the math and is trying to avoid going unfilled. What matters more: the quality of clinical volume, faculty stability, and early structural support. From your perspective, a new program interviewing 18–20 per spot is just doing what the data say they must do to survive year 1.
Key points:
- Brand‑new programs experience lower attendance and ranking probabilities, so their effective interview‑to‑match ratios are significantly higher than established peers.
- Year‑1 programs that want to avoid unfilled positions usually need 10–13 completed interviews per spot in primary care and 15–18 in high‑risk specialties like general surgery.
- Over the first 3–5 years, those ratios drift down as outcomes data and reputation stabilize, but trying to behave like a mature program on day one is how you end up scrambling in SOAP.