Residency Advisor

Too Many ‘Middle Tier’ Programs? How to Create a Clear Rank Order

January 5, 2026
17 minute read

[Image: Medical resident reviewing a residency rank list late at night]

It is January. You have 15–25 interview stickers on your calendar. Most of them felt… fine. No obvious red flags. No obvious “this is my dream” either. When you try to sort your programs, you have 2–3 obvious top choices, 1–2 definite “no way” programs, and then this giant, undifferentiated blob of “middle tier” where everything feels the same.

You open your spreadsheet and stare at it. Again. You move a program up two spots. Down one. Swap two. None of it feels grounded. It feels like guessing.

You are not stuck because the programs are truly identical. You are stuck because your decision process is vague.

Let me fix that.


Step 1: Stop Using “Tier” As Your Main Lens

“Top tier.” “Mid tier.” “Low tier.”
People throw these labels around like they are currency. They are not.

Most of what applicants call “middle tier” is actually:

  • Solid training
  • Decent or good fellowship placement
  • Normal call
  • Mix of patients
  • Reasonable city

Translation: good enough to make you a competent attending.

The problem: tier is usually shorthand for:

  • Brand name / reputation
  • Research power
  • Prestige among other residents / attendings

Prestige is not useless. But if you let the vague idea of “tier” drive your list, you end up paralyzed, because 70% of programs sit in some big mushy middle from a reputation standpoint.

So first rule:

Do not rank programs by tier. Rank them by how well they fit your life and your next 10 years.

We will use tier only as one input among several. Never the backbone.


Step 2: Build a Brutal, Non‑Negotiable Criteria List

Right now your brain is mixing everything together:

  • “Nice PD”
  • “Good trauma exposure”
  • Partner can find a job
  • “Close to airport”
  • “People seemed chill”
  • “Better name”

All floating around with equal weight. That is why it feels messy.

You need two buckets:

  1. Non‑negotiables – a program must satisfy these or it drops near the bottom
  2. Preferences – strong pluses and minuses, but not automatic deal‑breakers

2.1. Define Your Non‑Negotiables

Non‑negotiables should be few. Five to seven max. If you have 20, you have none.

Common legitimate non‑negotiables:

  • Geography anchor

    • Must be within X hours of family
    • Must be in driving distance of co‑parent or children
    • Must be in specific region for visa/job reasons
  • Program type

    • University vs community vs hybrid (depending on your fellowship or career goals)
    • Required: Level 1 trauma (for EM / surgery / anesthesia)
    • Required: NICU size (for peds/OB)
  • Fellowship / career viability

    • Has in‑house fellowship(s) you realistically want
    • Track record of placing grads into specific competitive fellowships you are targeting
    • Or, if you are primary care‑focused, strong continuity clinic and outpatient exposure
  • Lifestyle hard stops

    • No malignant culture (bullying, retaliation, unsafe coverage – more on that later)
    • Call schedule within a certain threshold: you are ok with “busy,” not ok with “dangerous”
    • Reasonable cost of living given PGY‑1 salary + your debt

Pick your 5–7, write them down in plain English, and freeze them. Do not keep shifting the bar just to justify including a program you “liked.”

Any program that clearly fails a non‑negotiable goes into a lower tier bucket immediately. You can still rank them, and you probably should, but they are not competing with your true options.


Step 3: Convert Vibes Into Measurable Factors

Most of what you remember from interview day are vibes:

  • “Residents seemed happy”
  • “Faculty supportive”
  • “Good work‑life balance”
  • “Strong training”

Vibes are real. Vibes matter. But you cannot rank 12 “pretty good vibe” programs against each other with gut alone and expect clarity.

You need to translate vibes into scorable domains.

Here is a simple, effective scoring framework I have seen applicants use across multiple specialties.

Residency Program Scoring Domains

  Domain              | Score Range | Description
  Training Quality    | 1–5         | Case volume, complexity, supervision
  Culture / Wellness  | 1–5         | Resident happiness, support, burnout risk
  Location / Life     | 1–5         | City, cost of living, family/partner fit
  Career Support      | 1–5         | Fellowship, mentorship, networking
  Schedule / Workload | 1–5         | Call, nights, ancillary support

You can add or rename categories, but do not go beyond 7. If you have 12 categories, you will stop scoring.

3.1. Define What “1” and “5” Actually Mean

If you do not define your scale, you will end up giving every program 3 or 4 for everything. Useless.

Example for Culture / Wellness:

  • 5 = Residents clearly happy, laughed freely on tour; no hesitation about support; specific examples of wellness (schedule adjustments after life events, real back‑up coverage, flexible vacations)
  • 4 = Generally positive, some complaints but felt like normal residency; PD and chiefs talked concretely about well‑being
  • 3 = Neutral / mixed signals, some residents warned “it is busy but we manage,” did not feel unsafe but not enthusiastic
  • 2 = Noticeable tension; residents warned you off certain rotations; dodged questions; multiple stories of people leaving
  • 1 = Malignant: gaslighting about hours, retaliation stories, unsafe coverage, multiple residents explicitly dissatisfied

Do this briefly for each domain.

Then, for each program, you force yourself to pick numbers. Do not agonize over perfection. You are converting fuzzy impressions into something at least somewhat structured.


Step 4: Use a Weighted Score Instead of Raw Intuition

Not all domains are equal for you.

Someone gunning for academic cardiology will value research and fellowship support more than city nightlife. Someone with two kids and a partner in a specific job market might put location at the top.

So you assign weights to each domain.

Example weighting scheme:

  • Training Quality – 30%
  • Culture / Wellness – 25%
  • Location / Life – 20%
  • Career Support – 15%
  • Schedule / Workload – 10%

Convert that to weight multipliers (e.g., 0.30, 0.25, etc.), then calculate:

Weighted score = Σ (domain score × weight)

Do this in a basic spreadsheet.
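If you prefer code to a spreadsheet, the same calculation is a few lines of Python. The domain names and weights below mirror the example weighting above; the 1–5 scores are purely illustrative:

```python
# Example weights from above (must sum to 1.0)
WEIGHTS = {
    "Training Quality": 0.30,
    "Culture / Wellness": 0.25,
    "Location / Life": 0.20,
    "Career Support": 0.15,
    "Schedule / Workload": 0.10,
}

def weighted_score(domain_scores, weights=WEIGHTS):
    """Sum of (domain score x weight) across all domains."""
    return sum(domain_scores[d] * w for d, w in weights.items())

# Hypothetical 1-5 scores for one program
program = {
    "Training Quality": 4,
    "Culture / Wellness": 3,
    "Location / Life": 3,
    "Career Support": 4,
    "Schedule / Workload": 3,
}

print(round(weighted_score(program), 2))  # 3.45
```

Sorting your programs by this number gives you the first-pass ranking; the point is not the decimals, it is forcing every domain to count exactly as much as you decided it should.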

Here’s a concrete illustrative example.

[Bar chart] Weighted Scores for Three 'Middle Tier' Programs

  Program   | Weighted Score
  Program A | 4.1
  Program B | 3.7
  Program C | 3.9

That 0.4 gap between Program A and Program B can represent a real, meaningful difference when you unpack the underlying domains.

This gives you:

  • A first‑pass objective ranking (by weighted score)
  • A way to explain to yourself why one program is above another
  • A structure to challenge your own bias (“Why did I mentally rank B above A when A fits my criteria better?”)

Step 5: Break Ties With Scenario‑Based Thinking

You will still have ties or near‑ties. That is normal.

Now switch from numbers to real‑life thought experiments. This is where decisions get clearer.

Ask yourself three specific questions for any two “tied” programs:

  1. Bad week test
    You have just had a brutal week. You are exhausted, you feel incompetent, and you are questioned by an attending in front of the team.

    • Where would you rather be?
    • Which residency culture do you trust more to not break you?
  2. Future self test
    Five years from now, as a fellow or attending, you introduce yourself and your training background.

    • Which program’s name, city, and training style feel more aligned with who you want to be?
      Not prestige‑wise only. Identity‑wise.
  3. Exit strategy test
    If life goes sideways (family emergency, health issue, burnout, desire to change specialty or location):

    • Which program gives you more options?
    • Which PD is more likely to go to bat for you?

When I have watched applicants do this honestly, they almost always say something like, “Yeah, when I imagine those situations, I clearly prefer X over Y.” Even when their spreadsheet scores were almost identical.

Use that clarity. Adjust your rank list accordingly.


Step 6: Use a Forced‑Choice Tournament Instead of Linear Ranking

Your brain is bad at ranking 15 things at once. It is better at deciding between 2–3 options at a time.

Here is a simple “knockout bracket” method that works extremely well when you feel stuck with a wall of middle programs:

  1. Put all your “middle tier” programs in a list (exclude obvious top and bottom for now).
  2. Randomly order them (shuffle in Excel or just rearrange manually).
  3. Compare the first two:
    • Ask: “If I could only match at one of these, which would I pick?”
    • The winner moves on, the loser goes to a “lower” pile.
  4. Winner then faces the next program in the list. Repeat the question.
  5. Keep doing this until you have gone through the full list.

The last program standing is your top choice among the middle blob. That becomes the highest in that cluster on your rank list.

Then:

  • Take the “loser” pile.
  • Run the same forced‑choice tournament to sort them among themselves.

It is crude but extremely effective. You are leveraging head‑to‑head comparison instead of vague overall impression.

You can combine this with your weighted scores:

  • Start by seeding programs roughly according to their weighted score.
  • Then let the tournament refine borderline cases where your intuition pushes against the numbers.
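The knockout pass is simple enough to run in code too. This sketch assumes a `prefer(a, b)` function that answers the head-to-head question ("if I could only match at one of these, which would I pick?") — in real life that is you answering it; here, a comparison of hypothetical weighted scores stands in:

```python
def knockout(programs, prefer):
    """One pass: the survivor of successive head-to-head matchups, plus the loser pile."""
    champion, losers = programs[0], []
    for challenger in programs[1:]:
        if prefer(challenger, champion):
            losers.append(champion)
            champion = challenger
        else:
            losers.append(challenger)
    return champion, losers

def tournament_rank(programs, prefer):
    """Repeat the knockout on each loser pile until every program is placed."""
    ranked, pool = [], list(programs)
    while pool:
        winner, pool = knockout(pool, prefer)
        ranked.append(winner)
    return ranked

# Hypothetical middle-tier programs seeded with weighted scores
scores = {"A": 3.45, "B": 3.75, "C": 4.05, "D": 3.60}
order = tournament_rank(list(scores), prefer=lambda a, b: scores[a] > scores[b])
print(order)  # ['C', 'B', 'D', 'A']
```

Each full pass over the pool asks you at most N − 1 head-to-head questions, which is far easier on your brain than holding all N programs in mind at once.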

Step 7: Sober Reality Check – Malignancy, Safety, and Red Flags

Some of you are putting malignant programs in the middle tier because the name is shiny. Or because you liked the city. That is how people end up miserable by PGY‑2.

You must explicitly separate prestige from safety.

Hard red flags that should push a program down your rank list:

  • Multiple residents independently saying:
    • “We are not allowed to report hours accurately.”
    • “If you complain, you will be labeled not a team player.”
    • “We have had multiple people leave early / transfer.”
  • PD or faculty openly mocking other residents, other specialties, or applicants on interview day.
  • Dodgy answers about:
    • How they handle pregnancy, major illness, or family emergencies.
    • What happens when someone is struggling academically or clinically.

You do not ignore those because the program is “mid‑high tier” academically. You discount prestige when the environment is clearly toxic. You are not ranking fellowships; you are ranking the next 3–7 years of your physical and mental health.


Step 8: Use Actual Data, Not Rumors, for “Tier” and Outcomes

If you truly care about career outcomes, stop relying on Reddit program tiers and hallway gossip. Pull actual data where you can.

Here are objective indicators to use for “program power” instead of “this is mid tier I guess”:

  • Fellowship match lists

    • Where did graduates match in the last 3–5 years?
    • Do you see repeated matches at solid, recognizable programs in your field of interest?
  • Case volume and pathology

    • For surgical / procedural fields: number of key cases per resident.
    • For IM / EM / Peds: ICU months, ED visits, patient complexity.
  • Board pass rates

    • Consistently high pass rates = structured education and reasonable support.
    • Repeated low pass rates = problems.
  • Research infrastructure (if relevant)

    • Protected time.
    • Mentorship (actual, not promised).
    • Publication patterns from residents.

[Doughnut chart] Factors That Actually Predict Strong Outcomes

  Factor                   | Share (%)
  Case Volume & Complexity | 35
  Fellowship Match History | 30
  Mentorship & Culture     | 25
  Brand Name Alone         | 10

That last slice is the point: brand name alone is the least predictive. Plenty of “mid tier” programs by reputation quietly churn out excellent clinicians and strong fellowship applicants every year.


Step 9: Integrate Life Outside of Medicine Without Apology

You are not ranking abstract training environments. You are ranking your actual life.

Here are factors people minimize publicly but obsess over privately:

  • Partner’s job market and visa status
  • Proximity to aging parents or sick relatives
  • Cost of childcare
  • Safe neighborhoods at resident salary
  • Need for airport proximity for family or long‑distance partner
  • Climate issues (seasonal depression, chronic illness that flares in certain weather, etc.)

You are allowed to weigh these heavily.

If two programs are equivalent on training, but one lets your partner keep their job, offers affordable housing, and keeps you close to support, you should rank that one higher. That is not weakness. That is strategy.

You can even model this explicitly:

  • Add a “Life Stability” domain to your score.
  • Weight it appropriately (15–30% for many people is reasonable).
  • Be honest about the impact on your day‑to‑day.
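One practical wrinkle: if you bolt a new domain onto an existing rubric, your weights no longer sum to 1. A small helper (illustrative, reusing the example weights from Step 4) rescales the old weights to make room for the new one:

```python
def add_domain(weights, name, new_weight):
    """Scale existing weights down so that, with the new domain, totals return to 1.0."""
    scale = (1.0 - new_weight) / sum(weights.values())
    rescaled = {domain: w * scale for domain, w in weights.items()}
    rescaled[name] = new_weight
    return rescaled

weights = {"Training": 0.30, "Culture": 0.25, "Location": 0.20,
           "Career": 0.15, "Schedule": 0.10}
weights = add_domain(weights, "Life Stability", 0.20)
print(round(weights["Training"], 2))    # 0.24
print(round(sum(weights.values()), 2))  # 1.0
```

Giving Life Stability 20% here shrinks every other domain proportionally, which is exactly the honest trade-off: weighting your life heavily means something else counts a little less.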

Step 10: Use a Timeline and Lock Things In

Endless tweaking is a way to turn stress into a hobby.

Instead, set a personal deadline for your rank list that is at least 3–5 days before the official NRMP certification deadline.

Then structure your process like this:

Rank List Decision Timeline

  Week 1: Define non-negotiables (1 day), build scoring rubric (1 day), score all programs (2 days)
  Week 2: Run forced-choice comparisons (2 days), reorder based on scenarios (2 days)
  Week 3: Reality check with mentor (1 day), final tweaks and lock-in (1 day)

Once you hit your personal deadline:

  • Stop re‑ordering based on random comments online.
  • Only allow a change if new, concrete information appears (for example, you discover falsified board pass data, or you learn of major program leadership implosion).

You will never have a “perfect” list. You are aiming for a deliberate, defensible one.


Step 11: Talk to the Right People (And Ignore the Wrong Ones)

Some input helps. Too much input scrambles your internal signal.

Who to actually listen to:

  • Recent grads from your own medical school who matched to those programs
  • Fellows / attendings in your target specialty who know multiple residents in those programs
  • A small handful of mentors who:
    • Know your strengths and weaknesses
    • Actually listen to your priorities, not their ego

Who to discount heavily:

  • Anonymous internet tier lists
  • Classmates who base everything on brand prestige and nothing on reality
  • Attendings who trained decades ago at a place and assume nothing has changed

Use these conversations surgically:

  1. Prepare specific questions:
    • “How did your program respond when residents were struggling?”
    • “Would you choose the same program again?”
    • “What are the 2–3 most annoying things about your program?”
  2. After the call, adjust your scores if needed:
    • Culture, schedule, and training quality scores are usually where external intel is most useful.

Do not crowdsource your rank list. Use people to sharpen it, not to write it for you.


Step 12: Sanity Check With a Simple “Would I Be Ok Matching Here?” Pass

Once you have a draft ordered list, run one last quick pass:

For each program from top to bottom, ask:

“If I matched here tomorrow, would I feel:
(A) Excited,
(B) Accepting/OK, or
(C) Dread?”

  • A = Great. That program stays.
  • B = Still probably fine. Keep them ranked, but see if any B’s should be below other B’s based on your scores.
  • C = Dangerous. Why are you ranking a program that inspires dread?
    If the answer is “because I am scared of not matching,” fine — but push all C’s to the very bottom of your list.

No C‑feeling program should be above any A/B program. Fear of the SOAP is real, but living in dread for 3–7 years is worse than ending up at a slightly smaller name with a stable environment.


Quick Example: From “All Middle Tier” to Clear Order

Let me walk through a simplified example.

You have:

  • Program X – Mid‑sized university, Midwest city, solid fellowship match, residents seemed tired but not broken.
  • Program Y – Coastal, bigger name, great city, residents dodged questions about hours, chief joked “you will live here.”
  • Program Z – Community‑university hybrid, near your partner, good case volume, fewer big‑name fellowships but strong regional reputation.

You score them (1–5), with your weights in parentheses:

  • Training (0.30), Culture (0.25), Location/Life (0.20), Career Support (0.15), Schedule (0.10)

Program X:

  • Training 4, Culture 3, Location 3, Career 4, Schedule 3
    Weighted: 4×0.3 + 3×0.25 + 3×0.2 + 4×0.15 + 3×0.1 = 3.45

Program Y:

  • Training 5, Culture 2, Location 4, Career 5, Schedule 2
    Weighted: 5×0.3 + 2×0.25 + 4×0.2 + 5×0.15 + 2×0.1 = 3.75

Program Z:

  • Training 4, Culture 4, Location 5, Career 3, Schedule 4
    Weighted: 4×0.3 + 4×0.25 + 5×0.2 + 3×0.15 + 4×0.1 = 4.05

Raw order by score: Z > Y > X.
Now add red‑flag and scenario thinking:

  • Program Y has serious culture/schedule issues.
  • In a “bad week” scenario, you would clearly rather be at Z or X.
  • Your partner’s job sits near Z.

So you end up with:

  1. Program Z
  2. Program X
  3. Program Y

Even though Y has the “bigger name.” And that is the correct call for most people in that scenario.

You have converted “all middle tier” into a clear, justifiable order.


FAQs

1. Should I ever rank a “prestige” program higher even if the culture seemed worse?

Only if three conditions are all true:

  1. You are very committed to a hyper‑competitive fellowship or niche academic path where that specific program’s brand and connections clearly open doors that others cannot.
  2. The culture is not malignant, just tougher — higher expectations, busier service, more intensity, but still fundamentally supportive.
  3. You personally thrive in high‑pressure, high‑expectation environments and have done well in them before.

If what you saw felt unsafe, retaliatory, or dismissive of resident well‑being, do not “prestige” your way into misery. No fellowship is worth 3–7 years of burnout and resentment.

2. What if my gut feeling about a program conflicts with my scoring system?

Then you treat your gut as data, not as the only truth or something to ignore.

  • Re‑examine the domains where the program scored surprisingly high or low.
  • Ask: “Did I overrate this because of name/location?” or “Did I underrate it because the interview day was awkward?”
  • Use scenario tests: in your worst month, which place would you rather be?

If, after that, your gut still says “I want Program B over Program A,” even though the score is slightly lower, put B higher. The scoring system is a decision aid, not a dictator.

3. How many “middle tier” programs should I realistically rank?

For most applicants:

  • Rank every program where you could reasonably function and not be unsafe or utterly miserable.
  • Push true red‑flag programs to the bottom, but still rank them, unless you are 100% certain you would rather go unmatched than train there.

The match algorithm favors the applicant. Ranking more acceptable programs never hurts you. The whole point of this process is not to eliminate most of your list. It is to structure it so you are not leaving major fit advantages on the table because everything felt “middle tier” at first glance.


Key Takeaways:

  1. Replace vague “tier” thinking with a non‑negotiable list, structured domains, and weighted scores.
  2. Use forced head‑to‑head decisions and real‑life scenarios to break ties among similar programs.
  3. Do not let prestige override safety, culture, and life stability. You are ranking the next several years of your actual life, not just a line on your CV.
Related Articles