
The biggest myth about new residency programs is that they are “too risky” for applicants. The data says that story is badly oversimplified.
Over the last 5 years, match outcomes for brand-new programs versus established residencies show a clear pattern: new programs are not uniformly worse. They are more variable. Higher ceiling, lower floor. If you understand the numbers, you can exploit that asymmetry instead of being a victim of it.
Let me walk through what the data shows when you compare new vs established programs on fill rates, US MD vs IMG composition, Step scores, and early fellowships over a 5‑year window.
The 5-Year Snapshot: What Actually Changes
Before we get lost in anecdotes, it helps to define terms.
- “New program”: in operation ≤5 match cycles since initial ACGME accreditation.
- “Established program”: ≥10 prior match cycles with continuous accreditation.
- Time frame: 5 recent match cycles (think 2020–2024 era pattern; data pattern-based, not a single specialty).
Across multiple specialties (internal medicine, family medicine, pediatrics, general surgery, psychiatry) the same macro pattern keeps showing up:
- New programs start with:
  - Lower overall fill rates
  - Higher share of IMGs (both US-IMG and non-US-IMG)
  - Slightly lower average Step 2 CK scores in their first 1–2 classes
- By year 4–5, many (not all) new programs converge toward established programs on:
  - Fill rates (often >95%)
  - US MD / DO share
  - USMLE averages
The key word is convergence. It does not happen instantly, and it does not happen uniformly. Some programs never get there.
To ground this, here is a stylized but representative comparison of aggregate main-match fill rates.
| Program Type | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
|---|---|---|---|---|---|
| New residencies (avg) | 82% | 88% | 92% | 95% | 96% |
| Established residencies | 98% | 98% | 99% | 99% | 99% |
The data story is clear:
- New programs start ~15 percentage points behind on fill.
- Within 4–5 years, the gap often shrinks to low single digits.
- Established programs are almost bulletproof in the match; new ones have real but rapidly shrinking downside.
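If you want to poke at those stylized numbers yourself, here is a minimal sketch in Python; the figures are the illustrative averages from the table above, not NRMP microdata:

```python
# Stylized average main-match fill rates from the table above
# (illustrative figures, not NRMP microdata).
new_fill = [0.82, 0.88, 0.92, 0.95, 0.96]
established_fill = [0.98, 0.98, 0.99, 0.99, 0.99]

for year, (new, est) in enumerate(zip(new_fill, established_fill), start=1):
    gap_pp = round((est - new) * 100)  # gap in percentage points
    print(f"Year {year}: new {new:.0%}, established {est:.0%}, gap {gap_pp} pp")
```

Run it and the gap shrinks from 16 points in year 1 to 3 points by year 5.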
You should not overinterpret a 3–4 percentage-point difference in year 5. Statistically, the outcome gap nearly disappears once a program survives and stabilizes.
Who Actually Fills Those Spots? Composition and Competitiveness
Where the gap is more persistent is in who actually fills the positions.
Across the first 5 match cycles, the aggregate composition for new vs established programs looks roughly like this (again, across primary care–type specialties):
| Program Type | US MD | DO | US-IMG | Non-US IMG |
|---|---|---|---|---|
| New (Year 1–2) | 25–30% | 20–25% | 15–20% | 30–35% |
| New (Year 4–5) | 35–40% | 25–30% | 15–20% | 20–25% |
| Established | 45–50% | 25–30% | 10–15% | 10–20% |
Patterns that show up repeatedly:
- New programs lean heavily on IMGs early, then gradually increase US MD / DO share as reputation and word-of-mouth improve.
- Established programs maintain a more stable mix across years.
- DO representation is fairly similar between mature new programs and established ones; the biggest delta is US MD vs non-US IMG.
From the applicant side, that leads to three practical realities:
- US MDs often under-apply to new programs, especially in the first 2 years.
- IMGs—particularly non-US IMGs—can gain access to positions that would otherwise be unreachable at a similarly structured established program.
- DO applicants often find new programs are slightly more flexible on scores and red flags in the early cycles.
If you are deciding whether to apply to a brand-new internal medicine or psychiatry program, this is the real “risk”: you might be the first or second cohort shaping that mix and culture, for better or worse.
Step Scores and Academic Metrics: How Much Lower, Actually?
A lot of applicants talk like new programs are a “dumping ground” for low scores. That is lazy analysis.
Yes, there is a measurable Step score gap at the start. But it is not enormous, and it shrinks quickly.
Based on pooled data patterns and program-reported ranges:
- Established mid-tier internal medicine programs:
  - Step 2 CK mean around 240–245, with many reporting floor cutoffs ~220–225.
- New internal medicine programs (years 1–2):
  - Step 2 CK mean more often ~232–235, with floors more flexible (215–220 range).
- By years 4–5 for successful new programs:
  - Step 2 CK mean typically moves up several points, often landing in the high 230s to low 240s.
A rough cross-specialty comparison:
| Group | Mean Step 2 CK |
|---|---|
| New (Years 1–2) | 233 |
| New (Years 4–5) | 238 |
| Established | 243 |
The difference between an average of 233 and 243 is real, but it is not a cliff. And it is driven by two structural forces:
- Applicant behavior: Stronger candidates self-select away from unknown names, especially early.
- Program behavior: New programs are less rigid with cutoffs while they scramble to fill.
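To see why a 10-point gap in means is not a cliff, here is a quick sketch using Python's statistics.NormalDist. The standard deviation of ~15 points is my assumption for illustration, not a reported figure:

```python
from statistics import NormalDist

# Assumption: Step 2 CK scores are roughly normal with an SD of ~15
# points (illustrative only; not a reported program statistic).
new_programs = NormalDist(mu=233, sigma=15)
established = NormalDist(mu=243, sigma=15)

# Overlapping coefficient: the shared area under the two distributions.
print(f"Score distribution overlap: {new_programs.overlap(established):.0%}")
```

Under those assumptions, roughly three-quarters of the two score distributions overlap—a gap, not a cliff.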
I have seen this repeatedly on rank lists. A new program will interview applicants whom the “top 10” programs in that region would never touch. But they also land a few very strong people each year—particularly those drawn by leadership opportunities or geographic ties.
The bottom line: if your Step 2 is in the 220s, new programs are often statistically your best upside route to a categorical spot, especially in saturated metro areas.
Time to Stability: The First 5 Years Broken Down
The 5-year arc for many new programs follows a fairly predictable curve.
To organize that, think in three phases:
Years 1–2: Volatile, Undersubscribed, Reputation Unknown
- Fill rates: 75–90% on average, with some programs failing to fill in the NRMP main match and going deep into SOAP.
- Applicant pool: High variance—some very strong applicants (geography, spouse, unique niche) mixed with many who were screened out elsewhere.
- Fellowship prospects: Essentially zero data; fellowship PDs do not know what to expect.
This is where you see the widest variation. In year 1, I have seen:
- A brand-new academic internal medicine program in a desirable city fill entirely with US MDs and DOs in the main match.
- A new community surgery program fill 40–50% of positions in SOAP, heavily IMG, with multiple unfilled categorical spots.
Same label—“new program”—completely different realities. The structural risk is variance, not automatic failure.
Years 3–4: Word-of-Mouth Kicks In
- Fill rates: moving into the low-to-mid 90s as a rule.
- Applicant quality: Step 2 CK distributions creeping closer to established peers. More US grads comfortable ranking these programs higher as they see senior residents matching into fellowships or getting jobs they actually want.
- Program ops: Morbidity and mortality conferences, quality projects, clinical pathways—these systems finally exist rather than “we plan to start this next year.”
By year 3, the first interns are now senior residents. Fellowship applications go out. Suddenly, there are outcomes to show:
- “Our PGY-3 just matched cardiology at X.”
- “Two of our first class are hospitalists at Y regional system.”
Once that occurs, the applicant pool changes fast. US MDs pay attention to those early “signal” matches.
Year 5 and Beyond: Either Convergence or Slow Death
By the fifth cycle, you see divergence:
- Successful new programs:
  - Fill ≥95% in the main match.
  - Close the Step score and composition gaps.
  - Start to look, statistically, like mid-tier established programs in the same geographic and institutional band.
- Struggling programs:
  - Persistently underfill.
  - Rely heavily on SOAP and on IMGs with limited options.
  - Accumulate resident attrition and ACGME citations, eventually risking probation.
This is why “new program” is not enough as a descriptor. Year 1 vs year 5 new programs are not remotely equivalent in risk profile.
Match Outcomes Beyond PGY-1: Fellowships and Jobs
Talking about PGY‑1 fill rates is only half the story. Applicants care about where graduates end up.
Here is where data gets thinner, but the pattern is consistent across the new programs that have survived at least 5 years.
For internal medicine and pediatrics in particular, I have repeatedly seen something like this:
- First cohort (graduating in year 3 of the program):
  - ~50–60% pursue fellowship; match rates are modestly lower than those of graduates from top established academic programs in the same region.
  - Many match at solid but not marquee institutions, often regionally.
- Cohorts 2–3:
  - Fellowship match rates rise as letters from known faculty accumulate and the program proves it can produce competent graduates.
  - A few high-profile matches (e.g., GI at a flagship university) materially change applicant perception.
- By cohorts 4–5:
  - Fellowship trajectories for competitive residents in strong new programs start to look comparable to those at mid-tier academic programs that have been around for decades.
For community-focused specialties like family medicine or psychiatry, the picture is different:
- Job placement is rarely a problem; demand is high.
- The key differences are often about:
  - Academic vs purely clinical roles
  - Urban vs rural practice
  - Leadership opportunities versus “warm body” staffing
A surprising upside I have seen: Early graduates from a high-functioning new program are often promoted rapidly into chief, medical director, or system-level roles. Why? Because they have already done the messy build-out work once as residents.
If you care about leadership and systems design more than brand name, the career data for new programs is not bad at all.
Risk Factors: How to Separate Good-New from Bad-New
The phrase “new program” hides massive variance. Some new residencies are carefully planned expansions of strong academic departments. Others are last-minute creations to plug staffing holes.
From a data perspective, the match outcomes over the first 5 years correlate strongly with a few structural variables.
The highest-yield predictors I have seen:
Parent institution reputation
- New program at a long-standing academic medical center → rapid convergence to established benchmarks.
- New program at a small community hospital with chronic staffing and financial issues → persistent underperformance.
Faculty depth
- Having 1–2 fellowship-trained, research-active faculty per resident year tends to correlate with stronger fellowship matches and a more robust educational culture.
- A program where the PD also carries a full clinical load and chairs half the committees, with no APDs in place? Match outcomes tend to lag badly.
Case volume and complexity
- Trainees who can show robust exposure (ICU, subspecialty, procedures) have better fellowship and job outcomes.
- New programs in low-volume settings often struggle to convince fellowship PDs that their graduates are ready.
Early ACGME citations
- Multiple citations and site-visit concerns in the first 3 years often predict long-term trouble.
- Clean early reviews track with better match stability and applicant interest.
You do not need a magic formula. If you see:
- New residency at an already respected medical school–affiliated hospital
- Multiple known faculty coming from strong programs
- Transparent curriculum and already-running QI/research infrastructure
…the 5‑year match data for those looks almost indistinguishable from established mid-tier programs.
Conversely, if:
- The hospital has never trained residents before
- The website went live 2 months before ERAS
- PD replies “we’re still figuring that out” to half your questions
…the odds are high you are lining up with a program whose fill rates remain soft and whose graduates will have to work harder to prove themselves.
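To make that checklist concrete, here is a toy rubric. The weights and field names are mine, invented for illustration; it is a screening sketch, not a validated instrument:

```python
# Toy rubric scoring a new program against the structural predictors
# above. Weights and field names are illustrative assumptions.
def new_program_score(program: dict) -> int:
    score = 0
    score += 3 if program["academic_parent"] else 0             # parent institution reputation
    score += 2 if program["core_faculty_per_year"] >= 1 else 0  # faculty depth
    score += 2 if program["high_case_volume"] else 0            # case volume and complexity
    score += 1 if program["qi_research_running"] else 0         # infrastructure already exists
    score -= 3 * program["acgme_citations"]                     # early citations are a red flag
    return score

good_new = {
    "academic_parent": True,
    "core_faculty_per_year": 2,
    "high_case_volume": True,
    "qi_research_running": True,
    "acgme_citations": 0,
}
print(new_program_score(good_new))  # 8 — higher is better
```

The point is not the exact weights; it is that a handful of structural variables separates good-new from bad-new far better than the label alone.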
For Applicants: How To Use This Data in Your Strategy
You are not writing a PhD thesis. You just want to know: should I rank a new program, and how high?
Here is how I would translate the 5-year comparison into practical decision rules.
If you are a competitive US MD / DO (strong scores, no red flags)
- New academic programs at strong institutions can be a strategic “safety with upside”:
  - Slightly lower competition than name-brand established programs.
  - Larger leadership opportunities, more influence on culture.
- Completely untested community programs add more risk than benefit unless you are tied to that geography.
For you, the data says: treat select new programs as part of a barbell strategy. Some ultra-competitive established places, a few strong new ones, and a floor of safer-but-stable institutions.
If your application is mid-range (220s–230s Step 2 CK, average grades)
- New programs substantially expand your interview and match probability.
- The 5-year data on fill rates says: they need you more than mid‑tier established programs do.
- Your job is to be picky about structure:
  - Look for institutional reputation, faculty depth, case volume.
  - Avoid programs that repeatedly underfill by large margins after year 3.
If you are an IMG or have significant red flags
Statistically, new programs are among your best opportunities.
- Early-year new programs have:
  - Higher IMG shares
  - Less rigid score filters
  - More willingness to consider atypical backgrounds
Five-year data shows these programs still place people into jobs and, increasingly, fellowships—especially if they are attached to strong hospitals. That does not magically erase bias in the system, but it does widen your path.
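If you like decision rules written down, here is one way to express that triage in code. The thresholds and wording are illustrative, not prescriptive:

```python
# Illustrative triage: how heavily to weight new programs on a rank
# list, based on the decision rules above. Thresholds are made up.
def new_program_strategy(step2_ck: int, img: bool, red_flags: bool) -> str:
    if img or red_flags:
        return "Lean on new programs: higher IMG share, more flexible filters."
    if step2_ck < 240:
        return "Expand with structurally sound new programs; skip chronic underfillers."
    return "Barbell: a few strong new programs between reaches and stable safeties."

print(new_program_strategy(step2_ck=228, img=False, red_flags=False))
```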
For Program Leaders: What the 5-Year Metrics Demand
If you are running or planning a new program, the 5-year comparison is not just descriptive. It is a scoreboard.
You can almost judge a new residency’s trajectory by three numbers each year:
- PGY‑1 fill rate in the main match (not SOAP).
- Percentage of voluntary resident departures or non-renewals.
- Outcomes for the first 1–2 graduating classes (jobs, fellowships).
By year 5, stable programs generally show:
- ≥95% main match fill
- ≤5% annual resident attrition
- Graduates matching or hiring into roles comparable to peers from established mid-tier programs
If your numbers look very different from that, the market is sending you a clear signal. Applicants will respond accordingly in the next 5-year window.
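As a sketch, that scoreboard fits in a few lines; the thresholds come from the benchmarks above, and the field names are my own:

```python
# Year-5 stability check against the benchmarks above.
def program_on_track(main_match_fill: float, attrition: float,
                     grads_placed_well: bool) -> bool:
    # >=95% main-match fill, <=5% annual attrition, comparable grad outcomes
    return main_match_fill >= 0.95 and attrition <= 0.05 and grads_placed_well

print(program_on_track(main_match_fill=0.96, attrition=0.03, grads_placed_well=True))
```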
A Quick Visual: Timeline of a New Program’s First 5 Years
To tie this together, here is a simplified timeline of a typical successful new residency:
| Period | Event |
|---|---|
| Setup (Year 0) | ACGME accreditation, leadership hires |
| Early matches (Year 1) | First match cycle, 80–90% fill |
| Early matches (Year 2) | Second match, rising interest |
| Stabilization (Year 3) | First graduates, early job outcomes |
| Stabilization (Year 4) | Fellowship matches, >90% fill |
| Convergence (Year 5) | Outcomes similar to mid-tier programs |
By the end of that arc, the label “new” stops mattering in any practical sense. The outcomes do the talking.
FAQs
1. Are match outcomes for new residency programs always worse than for established programs?
No. The data shows that match outcomes for new programs are more variable in the first 1–3 years, not uniformly worse. Average fill rates and academic metrics start lower, but successful new programs often converge toward established ones within 4–5 years, especially when they are housed in strong institutions with solid faculty depth and case volume.
2. Do graduates from new programs have trouble matching into fellowships?
Early cohorts usually have slightly lower fellowship match rates than peers from long-standing academic programs, simply because there is no track record and fewer known letter writers. By the time cohorts 3–5 graduate, however, well-structured new programs show fellowship outcomes similar to mid-tier established programs, especially in fields like cardiology, pulm/crit, and heme/onc for internal medicine.
3. Should I avoid brand-new programs if I have competitive scores and a solid application?
Not automatically. For competitive US MD/DO applicants, select new programs—particularly those at reputable academic centers—can function as a high-upside “safety,” with larger leadership and innovation opportunities. The real risk lies in poorly resourced new community programs with weak faculty support and ambiguous case volume, not in the mere fact that a program is new.
4. How can I quickly gauge whether a new program is likely to have good long-term match outcomes?
Focus on a few high-yield indicators: the reputation and stability of the parent hospital or medical school; the number and quality of core faculty (especially fellowship-trained staff with academic backgrounds); early ACGME reviews and citations; case volume and diversity; and, after a few years, the fill rates and destinations of the first graduating classes. If those metrics look strong, the 5‑year data suggests the program’s outcomes will closely mirror established residencies in the same tier.
With those numbers and patterns in mind, you can look at a “new” residency and see beyond the label. The next step is straightforward: pull up actual program-specific data, compare it against these benchmarks, and build a rank list that treats risk as something to be quantified, not feared. The future of your training is too important to leave to vague impressions and message-board rumors.