
Building a Smart Rank List When Half Your Interviews Are New Programs

January 8, 2026
16 minute read


The worst rank lists are built on vibes and fear. The best ones are built on structure and cold, boring criteria—especially when half your interviews are at brand‑new programs.

You are not just ranking hospitals. You are ranking risk. And new residency programs, by definition, are risk multipliers.

Here is how to take that chaos and turn it into a smart, defensible rank list when many of your options are unproven.


Step 1: Admit the Reality – New Programs Change the Game

If half your interviews are at new or very young programs, you are not in a standard cycle. The usual “just rank by gut feeling and location” advice is lazy and, for you, dangerous.

New programs bring three big unknowns:

  1. Educational quality – No track record. No word-of-mouth from senior residents.
  2. Board pass and fellowship outcomes – Unknown or very small sample sizes.
  3. Program stability – Leadership could change. Funding could shift. Clinical volume might not match projections.

This does not mean you should avoid them. I have seen residents thrive in brand-new programs and get excellent jobs and fellowships. But they did not walk in blindly.

You need to answer one question clearly:

“What is the price I am willing to pay for upside?”

The upside is: more early autonomy, leadership opportunities, often more individual attention, sometimes faster access to procedures or QI projects.

The price is: less structure, more chaos, weaker name recognition early on.

You are going to build a rank list that quantifies that tradeoff instead of hand‑waving it.


Step 2: Classify Every Program into Clear Buckets

Start by bucketing your interviews into three categories:

  • Category A – Established programs (≥ 10 years)
    Stable, known reputation, alumni network, board pass data, fellowship match lists.

  • Category B – Young but proven (3–9 years)
    Still new-ish but with multiple classes graduated, some board data, and at least one full cohort that has gone through.

  • Category C – New / unproven (0–2 years OR first match)
    No graduates yet, or just one class. Limited or no outcome data.

Write this down. Do not keep it in your head.

Program Age Buckets for Rank Planning

| Category | Age of Program | Typical Data Available |
| --- | --- | --- |
| A | ≥ 10 years | Boards, fellowships, alumni, reputation |
| B | 3–9 years | Early boards, early fellowship/job outcomes |
| C | 0–2 years | Little or no long-term outcome data |

Then count:

  • How many in A?
  • How many in B?
  • How many in C?

If half your interviews are new programs, you probably have:

  • Very few A’s
  • A handful of B’s
  • A lot of C’s

This matters because the strategy for ranking within each bucket is different from the strategy for ranking across buckets. You will do both.
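
If you want the bucketing to be mechanical rather than mental, here is a minimal Python sketch. The age thresholds come straight from the table above; the field names (`age_years`, `classes_graduated`) and the sample interview list are purely illustrative stand-ins for your own notes.

```python
def bucket(age_years: int, classes_graduated: int) -> str:
    """Assign a program to Category A, B, or C by age and track record."""
    if age_years >= 10:
        return "A"  # established: boards, fellowships, alumni, reputation
    if age_years >= 3 and classes_graduated >= 1:
        return "B"  # young but proven: at least one full cohort through
    return "C"      # new/unproven: no graduates yet, or just one class

# Hypothetical interview list: (program age in years, classes graduated)
interviews = {
    "Established University": (30, 27),
    "Young Community":        (4, 1),
    "Brand New Regional":     (0, 0),
}

for name, (age, grads) in interviews.items():
    print(f"{name}: Category {bucket(age, grads)}")
```

Counting each letter in the output gives you the A/B/C tallies described above.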


Step 3: Build a Risk‑Aware Scoring System (But Make It Brutally Simple)

Do not build a 50‑item spreadsheet with weighted z‑scores. You will drown in your own system and ignore it.

You need something you can actually use in one evening.

Use five core domains that matter across all specialties:

  1. Clinical training strength
  2. Program stability & leadership
  3. Resident support & culture
  4. Reputation / outcomes
  5. Personal fit (location, family, lifestyle)

Score each program from 1–5 in each domain:

  • 1 = significant concern
  • 3 = acceptable / neutral
  • 5 = clear strength

Then add a sixth modifier only for new/young programs:

  6. New Program Risk Modifier (−2 to +2)
    • −2 = red flags, high chaos risk
    • 0 = neutral / unclear
    • +2 = unusually strong for new program (backed by big name, strong leadership, clear structure)

Your total program score:

Total = D1 + D2 + D3 + D4 + D5 + Modifier

Where D1–D5 are the five domains.

Let’s be concrete. Suppose you interviewed at:

  • A well-known, mid‑tier university program (Category A)
  • A 3‑year‑old community program affiliated with a major academic center (Category B)
  • A brand‑new program starting its first class this year at a regional hospital (Category C)

You might score like this (made‑up numbers):

  • Established University (A)

    • Clinical: 4
    • Stability: 5
    • Culture: 3
    • Reputation: 4
    • Fit: 3
    • Modifier: 0
    • Total: 19
  • Young Community w/Strong Affiliation (B)

    • Clinical: 4
    • Stability: 3
    • Culture: 4
    • Reputation: 3
    • Fit: 4
    • Modifier: +1
    • Total: 19
  • Brand New Regional (C)

    • Clinical: 3 (on paper)
    • Stability: 2
    • Culture: 4 (small, close‑knit vibe)
    • Reputation: 1
    • Fit: 5 (close to home)
    • Modifier: −1
    • Total: 14
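
If you prefer code to a spreadsheet, here is a minimal Python sketch of the same arithmetic, using the made-up numbers above. The dictionary field names are illustrative, not a prescribed format.

```python
# Each program: five 1-5 domain scores plus a -2..+2 new-program modifier.
programs = {
    "Established University (A)": dict(clinical=4, stability=5, culture=3,
                                       reputation=4, fit=3, modifier=0),
    "Young Community (B)":        dict(clinical=4, stability=3, culture=4,
                                       reputation=3, fit=4, modifier=+1),
    "Brand New Regional (C)":     dict(clinical=3, stability=2, culture=4,
                                       reputation=1, fit=5, modifier=-1),
}

for name, scores in programs.items():
    total = sum(scores.values())  # D1 + D2 + D3 + D4 + D5 + Modifier
    print(f"{name}: {total}")
# Established University (A): 19
# Young Community (B): 19
# Brand New Regional (C): 14
```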

Already you see the problem: a program that “felt great” in person (like that brand-new regional one) can lag significantly when you actually score risk.

That is the point.


Step 4: Evaluate New Programs with a Different Lens

You cannot evaluate a first‑year program the way you evaluate a 30‑year one. There is no board data. No prior residents to call. The website is mostly promises.

So you change the questions.

Here is the checklist I use when I help people dissect new programs. Put this next to your notes and fill it in for every Category C program.

1. Who is actually backing this program?

You want to know how much muscle the parent institution has.

  • Is it part of a large, financially stable health system?
  • Is there a major academic affiliate (e.g., “University of X School of Medicine”)?
  • Are rotations at a flagship hospital or scattered across small sites?

A new program backed by a huge system with multiple existing residencies is much safer than a stand‑alone hospital jumping into GME for the first time.

2. Who is the program director and what have they done before?

Red flags and green flags live here.

Ask or research:

  • Have they run or held significant leadership in another residency or fellowship?
  • Do they have educational credentials (e.g., prior APD, GME committee work)?
  • Are they full‑time or splitting time between admin and heavy clinical work?

If the PD is brand‑new to education leadership and the hospital is new to GME, that is double risk. Not necessarily disqualifying, but it should drop their score on stability.

3. What is the actual clinical volume?

Promises are cheap. Numbers are not.

You want to see or ask:

  • Annual ED visits, inpatient admissions, births, procedures (depending on specialty)
  • Case mix – is it bread‑and‑butter or do they off‑load complex cases to another center?
  • Call structure for the first years – are you actually seeing enough?

If the answer is, “We are growing, we expect volume to increase,” translate that as “You are the test case.”

4. How intentional is their curriculum and evaluation system?

You are looking for:

  • A clear block schedule that already exists, not “we are still finalizing” in January.
  • A defined didactic schedule with named faculty, not generic talk.
  • A plan for resident feedback, remediation, and mentorship.

Programs that say “we will be very flexible” often mean “we do not have this built yet.”

5. How many other residencies or fellowships are at the institution?

More GME around you is usually good:

  • Means the hospital knows how to teach.
  • Means there is a GME office, policies, grievance processes, etc.
  • Means you are not the only trainee trying to fix systems from scratch.

If you are the first and only residency, assume you will be writing some of the policy yourself, sometimes the hard way.


Step 5: Put New Programs on a Controlled “Max Height” in Your List

Here is where people mess up: they fall in love with a new program’s location or friendly PD and then rank it #1 above all established options.

Could that be right for you? Possibly. But only after you impose structure.

Use a “max height” rule for Category C programs:

  1. If you have at least 3–4 Category A or B programs you could live with

    • Do not rank a Category C program above all of them unless:
      • It scores significantly higher (≥ 3 points) on your 5‑domain scale
      • And your risk modifier for that program is 0 or +1 (not negative)
  2. If your interview list is mostly Category C with 1–2 B’s and maybe 1 A

    • It is reasonable to have a new program in your top 3.
    • But still avoid stacking brand‑new, high‑risk programs as #1 and #2 if you have any decently scoring A/B options.

In other words, you are going to use your scoring sheet to constrain your emotions.

If a new program gives you butterflies but lands a total score of 14 while a “boring” university program is at 19, do not pretend those are equals on paper.
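
If you want to encode the Max Height Rule itself, here is a hedged sketch in Python. The 3-point margin and the non-negative-modifier requirement come straight from rule 1 above; the function and variable names are my own.

```python
def c_may_outrank_all_ab(c_total: int, c_modifier: int,
                         livable_ab_totals: list[int]) -> bool:
    """May a Category C program sit above ALL livable A/B programs?"""
    if len(livable_ab_totals) < 3:
        return True  # thin A/B list: the strict rule relaxes (case 2 above)
    beats_by_enough = c_total >= max(livable_ab_totals) + 3
    not_negative = c_modifier >= 0  # "0 or +1 (not negative)"
    return beats_by_enough and not_negative

# The brand-new regional program (total 14, modifier -1) against three
# A/B programs you could live with:
print(c_may_outrank_all_ab(14, -1, [19, 19, 17]))  # False
```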


Step 6: Separate “Where I Would Be Happy” from “Where I Could Survive”

You need two thresholds:

  1. Top tier – places you genuinely want to go
    These are programs you would be proud to match at and reasonably expect to thrive in.

  2. Floor tier – places you could tolerate / survive if you matched
    Not dream scenarios, but acceptable. Training would be safe and adequate.

Anything below your survival threshold should probably not be ranked. Yes, even in a competitive specialty. Matching into a truly unhealthy or unstable program can be worse than not matching and reapplying with a stronger strategy.

Be honest here:

  • Flag any program with:
    • Obvious toxicity
    • Serious instability (mass faculty resignations, financial turmoil)
    • Major under‑volume or clear gaps in case exposure
  • Those programs should fall below the floor. Score them if you want, but do not let a high “Fit” score trick you into ranking them if training quality is at risk.

Step 7: Convert Scores into a First‑Pass Rank List

Once you have scores for each program, do this:

  1. Sort by Total Score (including modifier)
    This gives you an objective draft order.

  2. Mark program type next to each
    Label each as A, B, or C.

  3. Circle your non‑negotiables

    • Programs near family you truly need to be near
    • Programs with must‑have features (unique track, visa sponsorship, couples match)
      You are allowed to bend rankings around these.
  4. Apply the Max Height Rule for new programs
    If a Category C program is sitting above multiple strong A/B programs with similar scores, force it down a notch or two unless there is an overwhelming reason not to.

You now have a rational starting point. Not final. But way better than “which PD smiled more.”
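
Here is a minimal sketch of that first pass in Python, assuming the buckets and totals from Steps 2–3. The single demotion pass is a crude stand-in for the Max Height Rule, not a full implementation, and the “New Suburban” program is invented purely to show a demotion happening.

```python
# (name, category, total score including modifier) from Steps 2-3
scored = [
    ("Established University", "A", 19),
    ("Young Community",        "B", 19),
    ("New Suburban",           "C", 20),  # hypothetical fourth interview
    ("Brand New Regional",     "C", 14),
]

# 1. Sort by total score for the objective draft order.
ranked = sorted(scored, key=lambda p: p[2], reverse=True)

# 4. Crude max-height pass: nudge a Category C program below any A/B
# program directly beneath it unless the C program leads by >= 3 points.
for i in range(len(ranked) - 1):
    _, cat, total = ranked[i]
    _, nxt_cat, nxt_total = ranked[i + 1]
    if cat == "C" and nxt_cat in ("A", "B") and total - nxt_total < 3:
        ranked[i], ranked[i + 1] = ranked[i + 1], ranked[i]

for pos, (name, cat, total) in enumerate(ranked, start=1):
    print(f"{pos}. [{cat}] {name} ({total})")
```

Note how the invented New Suburban program drops below two comparable A/B options despite the highest raw score; a genuinely overwhelming C program (3+ points clear) would stay on top.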


Step 8: Pressure‑Test Your List Against Worst‑Case Scenarios

This is the part almost nobody does. It is also the part that keeps you from making panicked, regretful decisions later.

Ask yourself:

  1. If I matched at my #1 program, how would I feel?
    If the answer is anything less than “relieved or happy,” rethink your #1.

  2. If I slid down and matched at #5, is that still acceptable?
    If #5 is a fragile new program you are secretly scared of, it should not be that high.

  3. If the only place I matched was my last ranked program, would I rather reapply?
    If you would rather reapply, do not rank that program. The rank list is a contract with yourself.

You can even sketch a quick mental “match outcome vs. satisfaction” graph:

Satisfaction by Rank Position

| Rank Position | Satisfaction (1–10) |
| --- | --- |
| Rank 1 | 9 |
| Rank 3 | 8 |
| Rank 5 | 7 |
| Rank 8 | 5 |
| Rank 10 | 3 |

If the satisfaction falls off a cliff beyond a certain point, move clearly unacceptable options below your ranking floor (or off the list entirely).
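
A tiny sketch of that check, if you prefer numbers to a mental graph. The 6/10 floor is an assumption; substitute your own survival threshold.

```python
# Expected satisfaction by rank position, from the table above
satisfaction = {1: 9, 3: 8, 5: 7, 8: 5, 10: 3}
FLOOR = 6  # assumed personal "could survive" threshold

for rank, score in sorted(satisfaction.items()):
    flag = "  <- below floor: demote or drop" if score < FLOOR else ""
    print(f"Rank {rank}: satisfaction {score}{flag}")
```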


Step 9: Use Residents and Fellows Strategically (Even When There Are None)

With established programs, you know the drill: contact current residents, ask about culture, hours, support, actual training quality.

With brand‑new programs, there might be:

  • 0 residents (first year starting with you)
  • Or 1–2 classes who are still sorting everything out

Here is what I have seen work:

For programs with no residents yet

You cannot talk to what does not exist. So you:

  • Ask to speak to:
    • GME office leadership
    • Residents from other programs at the same institution
    • Faculty who trained elsewhere and can compare

Questions to ask those other residents:

  • “When new programs start here, how has the hospital treated them?”
  • “Do residents generally feel supported by GME?”
  • “Do you see consistent patient volume for your specialty? Enough complex cases?”

You are trying to infer institutional culture around trainees.

For programs with 1–2 classes

Ask residents:

  • “What has changed for the better since you started?”
  • “What still feels like it is being built?”
  • “If a younger sibling wanted to come here, what would you warn them about?”

If they pause a long time, or give you the “off the record” vibe, listen carefully. That usually means there are real pain points.


Step 10: Special Considerations by Specialty

The “risk price” you can tolerate depends heavily on your specialty.

In some specialties, training volume and operative/case numbers are life‑or‑death for your career. In others, a slightly less structured environment is survivable.

Surgical fields (Gen Surg, Ortho, ENT, etc.)

You need:

  • High, documented operative volume
  • Real subspecialty exposure
  • Strong mentorship and letters for fellowship

New surgical programs can work if:

  • They are at very high‑volume centers with busy ORs
  • Faculty are fellowship‑trained and well‑connected
  • You are not competing with multiple other surgical residencies and fellows for cases

If those are missing, I recommend being extremely conservative about ranking them high.

Competitive fellowship‑driven fields (Cards, GI from IM, etc.)

You need:

  • Strong research infrastructure
  • Faculty with national presence and connections
  • A clear track record is ideal, but if missing, you want at least:
    • PD and department chair known in the field
    • Existing fellowships in related areas
    • Protected time for scholarly work

A brand‑new IM program with zero research support and no prior fellowships trying to promise you a future Cardiology pipeline? I would be skeptical.

Primary care–oriented fields (FM, Psych in many regions, etc.)

New programs may be less risky if:

  • The hospital has robust outpatient and community clinics
  • There is strong behavioral health or longitudinal care built in
  • You are not dependent on niche procedures for your future plans

Still: don’t ignore stability and leadership. A disorganized new FM or Psych program can burn you out faster than any surgical residency.


Step 11: Watch for Data‑Backed Red Flags vs. Normal New‑Program Noise

Every young program has noise:

  • Schedules changing from year to year
  • Policies being refined
  • Leadership adjusting expectations

That is normal.

The red flags that should meaningfully drop a program on your rank list:

  • Residents leaving mid‑year without clear personal reasons
  • Major faculty turnover (especially program director and core faculty)
  • Hospital financial distress, merger chaos, or closures of services
  • Chronic duty hour violations brushed off as “part of the culture”
  • No one can clearly explain how they monitor resident performance or wellness

If you hear more than one of these at a new program, that is a serious risk signal. I would move that program down the list, possibly off it entirely.


Step 12: Final Pass – Integrate Logic with Your Gut (In That Order)

Once your list is structured and scored, only then do you let your gut have a say.

Here is the order of operations:

  1. Start from your scored list

  2. Ask: “Where does my instinct disagree strongly?”

    • Maybe a program scored well but felt cold and transactional.
    • Maybe a slightly lower‑scoring program felt like home.
  3. For each disagreement, force yourself to answer:

    • “What specific facts am I overriding?”
    • “Is this a values‑based override (family, safety, mental health) or just vibes?”
  4. Adjust only when you can justify the change in a sentence:

    • “I am moving Program X above Program Y because X is closer to my support system, and my mental health and family needs matter more than slight differences in reputation.”

That is rational use of intuition, not blind emotion.


A Simple Visual to Keep You Honest

Create a very rough “risk vs. reward” map for your top 10–12 programs:

Perceived Program Risk vs Reward

| Program | Risk (1–10) | Reward (1–10) |
| --- | --- | --- |
| Prog A | 2 | 9 |
| Prog B | 3 | 8 |
| Prog C | 4 | 7 |
| Prog D | 6 | 9 |
| Prog E | 7 | 6 |
| Prog F | 8 | 5 |

  • X‑axis (left to right): Risk (lower is safer)
  • Y‑axis (bottom to top): Reward (training quality + fit)

New programs will usually sit further right (more risk). Ask yourself:

  • Am I ranking a far‑right, mid‑reward program above safer, high‑reward ones for no good reason?
  • Are there any far‑right, low‑reward programs still on my list at all?

Use this to sanity‑check final placement of your Category C interviews.
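
If you would rather see the actual picture than hold it in your head, here is a minimal matplotlib sketch using the example coordinates above. All program names and values are placeholders for your own estimates.

```python
import matplotlib.pyplot as plt

# (risk 1-10, reward 1-10) from the example table above
programs = {"Prog A": (2, 9), "Prog B": (3, 8), "Prog C": (4, 7),
            "Prog D": (6, 9), "Prog E": (7, 6), "Prog F": (8, 5)}

fig, ax = plt.subplots()
for name, (risk, reward) in programs.items():
    ax.scatter(risk, reward)
    ax.annotate(name, (risk, reward), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Perceived risk (lower is safer)")
ax.set_ylabel("Reward (training quality + fit)")
ax.set_title("Perceived Program Risk vs Reward")
plt.show()
```

Programs in the upper-left are the easy calls; anything drifting toward the lower-right deserves a hard second look.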


What To Do Today

Print your interview list. Or open the file on your laptop.

Then:

  1. Label each program A, B, or C based on age.
  2. Score each program quickly on the 5 domains plus the new‑program modifier. Do not overthink the numbers.
  3. Sort by total score and see where your “favorite” new programs actually land.
  4. Move any you now realize are high‑risk / low‑benefit down below safer alternatives.

If you can end tonight with a draft rank list that you could defend out loud to a skeptical attending—especially about why each new program is where it is—you are doing this right.
