Residency Advisor

Creating a Decision Matrix to Compare Residency Programs Objectively

January 7, 2026
16 minute read

[Image: resident physician comparing residency programs using a decision matrix on a laptop]

The way most applicants “compare” residency programs is lazy and dangerously subjective.

“Vibes,” name recognition, whatever your favorite attending said on a random Thursday. That is how people end up miserable in “top” programs that were never a good fit in the first place.

You need an objective system. That system is a decision matrix. And it is not complicated. But you do have to treat it like part of your application, not an afterthought.

Here is how to build a decision matrix that actually works, step by step.


Step 1: Stop Pretending All Factors Are Equal

Before you open Excel or Google Sheets, you need to get brutally clear on what actually matters to you. Not your classmates. Not Reddit. You.

Common mistake: applicants say “Education, culture, location – everything is important.” That is how you end up paralyzed in February staring at a blank rank list.

You are going to:

  1. Brain dump all relevant factors.
  2. Cut the list to the essentials.
  3. Assign weights that reflect priority.

1. Brain dump the real decision factors

Think about:

  • What has made prior rotations great or miserable.
  • What you complain about when you are exhausted.
  • What you value long term (fellowship, academic career, lifestyle, geography).

Start a rough list like this (adapt to your specialty):

  • Clinical training breadth and autonomy
  • Procedural or operative volume
  • Fellow vs resident case distribution
  • Board pass rates
  • Fellowship match outcomes
  • Program reputation (national, regional)
  • Culture and resident happiness
  • Resident support (wellness, backup, admin help)
  • Call schedule and hours
  • Schedule flexibility (electives, research time)
  • Mentorship quality
  • Research opportunities and expectations
  • Location (region, city vs rural)
  • Cost of living
  • Proximity to family / support system
  • Compensation and benefits
  • Moonlighting opportunities (later years)
  • Program stability and leadership

Do not filter while you write. Just get it all out.

2. Compress to a usable list

Now you cut. A good matrix has 8–15 factors, not 40.

Merge overlapping items:

  • “Fellowship match” + “reputation” could become “Career outcomes” if you are not hyper-competitive for a niche fellowship.
  • “Location,” “cost of living,” and “proximity to family” might stay separate if they push in different directions for you.

Ask for each candidate factor:

  • Would I realistically choose Program A over Program B just because of this?
  • Have I actually cared about this in real life before, or does it just sound good?

If the answer is no, delete it.

3. Assign weights like a grown-up

This is where most people chicken out. They say everything is “important” and then put 10/10 on all the things.

You are going to assign a weight from 1–5 or 1–10 to each factor. Higher = more important to your final decision.

Concrete rule:
If everything is a 4 or 5, you did it wrong. You should have real variation.

Example (Internal Medicine applicant aiming for cards fellowship, cares a lot about teaching but also about family location):

  • Clinical training quality – 9
  • Fellowship matches (cards-heavy) – 9
  • Resident culture / happiness – 8
  • Location (region) – 8
  • Proximity to partner/family – 7
  • Research opportunities – 6
  • Work hours / call burden – 6
  • Program reputation – 5
  • Cost of living – 4
  • Moonlighting later in residency – 3

You can refine these weights after you start scoring programs, but commit to a version and stick to it across all programs.
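One way to keep yourself honest is to encode the weights and check that they actually vary. A minimal Python sketch using the example weights above (the spread threshold of 3 is an arbitrary illustration, not a rule):

```python
# Hypothetical weights for the IM applicant example above (1-10 scale).
weights = {
    "Clinical training quality": 9,
    "Fellowship matches (cards-heavy)": 9,
    "Resident culture / happiness": 8,
    "Location (region)": 8,
    "Proximity to partner/family": 7,
    "Research opportunities": 6,
    "Work hours / call burden": 6,
    "Program reputation": 5,
    "Cost of living": 4,
    "Moonlighting later in residency": 3,
}

# Sanity check: if every weight clusters at the top of the scale,
# the matrix cannot separate programs.
spread = max(weights.values()) - min(weights.values())
assert spread >= 3, "Weights are too uniform -- force real variation."
```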


Step 2: Build the Actual Decision Matrix

Now we convert your priorities into a structure that will force you to compare programs on the same scale.

You can do this in:

  • Excel
  • Google Sheets
  • Notion (with formulas)
  • Any spreadsheet you will actually open more than once

Basic matrix structure

Set it up like this:

  • Rows: Programs
  • Columns: Criteria, plus weights and total score

At minimum:

  • Column A: Program name
  • Columns B–J: Criteria (Clinical Training, Culture, Location, etc.)
  • Column K: Total weighted score

Then put the criterion names in row 1 and the weights in row 2, directly above the scores.

Example layout:

Residency Decision Matrix Structure

  • Column A – Program name
  • Columns B–J – Criterion scores (0–10)
  • Column K – Total weighted score
  • Row 1 – Criterion names
  • Row 2 – Criterion weights

Scoring scale: use 0–10.

  • 0–2: Terrible / unacceptable
  • 3–4: Weak
  • 5–6: Adequate
  • 7–8: Strong
  • 9–10: Outstanding

You need that spread so the matrix can actually separate programs.


Step 3: Define Each Criterion So You Can Score Objectively

If you do not define what a “10” vs “5” means, you will bend the scores to justify what you already want.

You are going to write concrete anchors for each criterion. Short bullet notes are enough.

Example for “Clinical Training Quality” (IM):

  • 9–10: High patient volume, broad pathology, consistent resident autonomy, strong subspecialty exposure, minimal scut. Residents routinely say, “We feel very prepared as attendings.”
  • 7–8: Good volume, occasional gaps in certain pathologies or autonomy, but overall strong. Residents say, “We are well-prepared, maybe could use more autonomy in X.”
  • 5–6: Adequate volume, some known training gaps, significant variation between services. Residents say, “We make it work, but you have to be proactive to get enough experience.”
  • 3–4: Low volume or overly protected to the point of undertraining. Multiple residents quietly admit they are worried about independence.
  • 0–2: Chronic undertraining, major red flags, or residents openly say they do not feel ready.

Example for “Culture / Resident Happiness”:

  • 9–10: Residents consistently describe the program as supportive, collegial, and humane. Minimal malignant behavior. You see real laughter and warmth on interview day and pre-interview socials.
  • 7–8: Mostly positive, some stress and normal complaints, but no major toxicity themes.
  • 5–6: Mixed. Clear red flags in certain departments or with specific attendings. People say “it depends who you work with.”
  • 3–4: Multiple residents use phrases like “sink or swim,” “tough environment,” “you develop thick skin.”
  • 0–2: Openly hostile, bullying, or exploitative. You leave with a pit in your stomach.

Yes, you are relying on subjective inputs (resident comments, your gut). The matrix does not remove subjectivity. It forces consistency in how you convert that into a score.


Step 4: Collect Real Data, Not Just Vibes

You cannot score what you do not measure. The matrix is only as good as the data you feed it.

Here is what to gather and where to get it.

Hard numbers (from websites, FREIDA, program handouts)

Examples:

  • Board pass rates (3-year rolling)
  • Number of residents per year
  • Number of fellows (by subspecialty)
  • Case numbers or procedure logs (for surgical fields)
  • ICU months, ward months, clinic time
  • Elective time and research blocks
  • Call schedule structure (q4, night float, etc.)
  • Salary and benefits
  • Location cost of living (approximate from external sites)

For surgical and procedural specialties, you should strongly consider a quick visual comparison of volume and operative exposure:

Operative Case Volume by Program

  • Program A – 1,050 cases
  • Program B – 900 cases
  • Program C – 1,250 cases
  • Program D – 800 cases

You are not trying to be perfect. You are trying to not be blind.

Soft data (from interviews, socials, residents, alumni)

This is where most of the matrix value sits.

Collect:

  • Phrases residents actually use (“We are really like a family” vs “You learn to survive”)
  • How often they mention being “tired but happy” vs “burnt out”
  • Whether senior residents seem excited or dead behind the eyes
  • How faculty talk about residents (partners vs workhorses)
  • How they respond when you ask, “What would you change about this program?”

You should have a one-page note per program immediately after interview day. Do this the same night. Do not trust memory.

Translate data into scores

Once you have your notes and numbers, you sit down and score each criterion, one program at a time.

Key rule: Score by column, not by row.

Meaning:

  • Pick one criterion (e.g., Culture).
  • Go down the list of programs and assign Culture scores for all of them at once, while your mental comparison is fresh.
  • Then move to the next criterion.

This prevents “halo effect” where you give a program across-the-board high scores because you liked one aspect.
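In loop terms, scoring by column simply means the criterion loop is on the outside. A sketch with hypothetical program and criterion names:

```python
programs = ["Program A", "Program B", "Program C"]        # hypothetical names
criteria = ["Clinical training", "Culture", "Location"]  # hypothetical subset

# Score by column: the criterion loop is on the OUTSIDE, so you rate every
# program on one criterion back-to-back before moving to the next criterion.
scoring_order = [(c, p) for c in criteria for p in programs]

for criterion, program in scoring_order:
    print(f"Rate {program} on {criterion} (0-10)")
```

The first three prompts all ask about Clinical training, across all three programs, which is exactly the ordering that defeats the halo effect.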


Step 5: Apply Weights and Calculate Total Scores

Once you have raw scores (0–10) for each program and each criterion, you multiply by the weight and sum.

Formula structure (in Google Sheets / Excel):

  • If row 3 is Program A, columns B–J are criteria, and row 2 has weights:

Total Score (K3) = SUMPRODUCT(B3:J3, $B$2:$J$2)

Lock the weights row with $ so the formula copies down correctly to K4, K5, and so on.

That is it. You now have an overall score that accounts for:

  • Your priorities (weights)
  • Each program’s performance on those priorities
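If you ever want to double-check the spreadsheet arithmetic, the weighted total is just a dot product: multiply each score by its weight, then sum. A minimal Python sketch (criterion order and numbers hypothetical):

```python
# Hypothetical weights, in a fixed column order: training, culture, location.
weights = [9, 8, 8]

# Hypothetical 0-10 scores, in the same column order as the weights.
program_scores = {
    "Program A": [8, 9, 6],
    "Program B": [9, 6, 9],
}

# Same arithmetic as SUMPRODUCT: multiply pairwise, then sum.
totals = {
    name: sum(w * s for w, s in zip(weights, scores))
    for name, scores in program_scores.items()
}
# totals == {"Program A": 192, "Program B": 201}
```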

You will likely see something surprising here. A program you assumed is top-tier for you may drop once you actually look at autonomy, location, and culture simultaneously.

Good. That is the point.


Step 6: Compare Programs Head-to-Head, Not Just by Rank

The total score is not the only output. The pattern of scores matters.

Use conditional formatting or simple visual cues:

  • Color-scale each criterion column from red (low) to green (high).
  • Bold or highlight the top 2–3 programs in each criterion.

You should quickly see:

  • Which programs are balanced (solid across the board).
  • Which programs are spiky (amazing for one thing, bad for others).

For key decisions (e.g., how to rank your top 5), do side-by-side comparisons.

Example: Head-to-head comparison

Let me give you a concrete example. Two hypothetical IM programs, you want cardiology eventually:

Head-to-Head Program Comparison Example

  Criterion                  Weight   Program X   Program Y
  Clinical training             9         8           9
  Cards fellowship matches      9         7           9
  Culture / happiness           8         9           6
  Location (region)             8         6           9
  Proximity to family           7         9           4
  Research opportunities        6         7           9
  Work hours / call             6         7           5
  Reputation                    5         6           9
  Cost of living                4         8           4

You do the math, and with these numbers the totals land nearly tied (Program X 464, Program Y 455): Program Y’s fellowship, research, and reputation advantages almost offset its weaker lifestyle scores. But look at the texture:

  • Program X: Better culture, better proximity to family, better hours, cheaper.
  • Program Y: Stronger academically, better cards pipeline, more prestigious, but worse lifestyle and support.

Now at least you are having an honest conversation with yourself: Am I willing to trade culture and family for a stronger cards pipeline? There is no algorithm for that. But the matrix clarifies what the real trade-off is.
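You can reproduce the head-to-head totals from the table above in a few lines of Python, using the same weights and 0–10 scores:

```python
# Weights and 0-10 scores from the head-to-head table above,
# in row order (clinical training first, cost of living last).
weights   = [9, 9, 8, 8, 7, 6, 6, 5, 4]
program_x = [8, 7, 9, 6, 9, 7, 7, 6, 8]
program_y = [9, 9, 6, 9, 4, 9, 5, 9, 4]

def weighted_total(scores, weights):
    """Weighted sum of scores: the spreadsheet SUMPRODUCT, in plain Python."""
    return sum(w * s for w, s in zip(weights, scores))

total_x = weighted_total(program_x, weights)  # 464
total_y = weighted_total(program_y, weights)  # 455
```

Nine points apart on totals near 460 is a rounding error; the decision lives in the per-criterion texture, not the sum.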


Step 7: Use a Sanity Check – Does the Matrix Match Your Gut?

You will have an initial gut rank list in your head. The matrix will spit out something else. They should not match perfectly, but if they are completely misaligned, two things might be happening:

  1. Your weights are wrong.
  2. Your gut is anchored to prestige or superficial impressions.

I do this sanity check with residents all the time:

  • Circle your top 3 programs by matrix score.
  • Separately, write your top 3 by gut.
  • Compare the lists.

If a program is high on your gut list but low on the matrix, ask:

  • Did I overweight name recognition or location?
  • Did I underweight culture, hours, or training quality?
  • Did I mis-score something because I liked the PD?

You are allowed to override the matrix. You are not allowed to ignore why you are overriding it.

Sometimes I tell people: “If you are going to choose Program B despite it scoring lower, at least say the reason out loud and write it next to your rank list.” That alone prevents a lot of regret.


Step 8: Adjust for Specialty-Specific Realities

Different specialties have different non-negotiables. You should tune your matrix accordingly.

Surgical specialties (Gen Surg, Ortho, ENT, etc.)

You must heavily weight:

  • Case volume and autonomy – low volume or chronic fellow-stealing is a reason to drop a program significantly.
  • Operative breadth – are you seeing bread-and-butter or just ultra-tertiary zebras?
  • Fellow presence – not automatically bad, but you need clarity on how residents get cases.
  • Technical teaching culture – are attendings invested in training your hands, not just having you retract?

A good quick visualization for 3–4 key criteria can help you see patterns at a glance:

Key Surgical Criteria Comparison

  • Program A – 85
  • Program B – 70
  • Program C – 95

(Think of that as a composite score of volume + autonomy; in your real sheet it would be separate columns.)

Medical specialties (IM, Peds, Neuro, etc.)

You usually prioritize:

  • Clinical training and autonomy – you want to function as an independent attending by graduation.
  • Fellowship match support – especially if you want cards, GI, heme/onc, etc.
  • Outpatient vs inpatient balance – depending on your eventual career.
  • Geographic network – where you want to work after residency.

Lifestyle-heavy specialties (Derm, Psych, PM&R, Radiology)

People get sloppy here because the programs all “seem good.” Do not do that.

You still need to differentiate on:

  • Mentorship and career development – academic vs private practice preparation.
  • Exposure to subspecialty clinics – for derm, for example: complex derm, cosmetics, peds derm, etc.
  • Job market connections – where alumni end up.

Exact same method, different weights

Do not reinvent the matrix. Just change the importance of criteria.


Step 9: Watch Out for These Classic Pitfalls

I have watched many smart people sabotage their own decision matrix. Here is what they typically do wrong.

  1. Changing weights mid-stream to “fix” the outcome
    They see a surprising result and then tweak weights so their favorite program climbs back to #1. That defeats the entire purpose. If you must change a weight, do it globally before you look at how it affects ranking.

  2. Using binary yes/no instead of a 0–10 scale
    “Has research? Yes/No.” That is useless. You need gradations. A program with heavy, structured research support should not look the same as a place where “you can find something if you really want to.”

  3. Letting one interaction dominate a whole criterion
    One charismatic resident or one awkward PD meeting should not hijack “culture” or “leadership.” Look for patterns across multiple conversations.

  4. Not updating scores when new information arrives
    Mid-season emails about schedule changes, leadership turnover, or new fellowship tracks matter. Adjust your matrix as you go, not just at the end.

  5. Overcomplicating the model
    Ten criteria is enough. 30 is a stall tactic. The goal is clarity, not a PhD thesis.


Step 10: Use the Matrix to Build Your Rank List

When Match rank list time comes, you are not starting from a blank page. You are starting with:

  • Matrix-generated order
  • Your gut list
  • Notes about each program’s strengths and weaknesses

The workflow is straightforward:

  1. Sort by total score, descending.
  2. Go down the list from top to bottom.
  3. For each adjacent pair, ask:
    “Given what I know, would I truly rather match at B than A?”
    If yes, swap them.
  4. Do a final pass: look at top 5 and bottom 5. Confirm there are no “never” programs sitting in the middle just because they scored OK.

Your matrix is a tool, not a dictator. But it is far superior to the “I liked the pre-interview dinner at that one place” method.
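Step 1 of that workflow is a single descending sort on total score. A sketch with hypothetical totals:

```python
# Hypothetical weighted totals pulled from your matrix.
totals = {
    "Program A": 464,
    "Program B": 455,
    "Program C": 432,
    "Program D": 471,
}

# Starting rank list: highest weighted total first. From here you walk
# adjacent pairs and swap only when your honest answer is "I would truly
# rather match at the lower-ranked one."
rank_list = sorted(totals, key=totals.get, reverse=True)
# rank_list == ["Program D", "Program A", "Program B", "Program C"]
```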


A Simple Visual: Your Decision Matrix Process

If you like seeing the whole process in one shot, this is it:

Residency Program Decision Matrix Workflow

  1. List your priorities.
  2. Select 8–15 criteria.
  3. Assign weights.
  4. Build the spreadsheet.
  5. Collect data for each program.
  6. Score each criterion 0–10.
  7. Apply weights and total the scores.
  8. Compare programs head-to-head.
  9. Adjust for gut checks and trade-offs.
  10. Create your final rank list.

Nothing magical. Just structured thinking.


Do Not Wait – Build the Skeleton Now

If you are still early in interview season, this is the best time to act.

Today, not “when interviews are done,” do this:

  1. Open a blank Google Sheet.
  2. Create columns for:
    • Program
    • At least 8 core criteria that actually matter to you.
  3. Assign preliminary weights in row 2. Imperfect is fine.
  4. Fill in data for the 2–3 programs you have already seen or researched, even roughly.

Then, after your very next interview, force yourself to:

  • Take 10 minutes that evening.
  • Fill in your notes and approximate scores for that program while your memory is fresh.

If you do that consistently, by the time you sit down to build your rank list, you will not be guessing. You will have a clear, defensible, honest comparison of your residency options.

So open a spreadsheet right now and label your first two columns: “Program” and “Clinical Training Quality.” That single act will pull your decision out of the realm of vibes and into something you can actually trust.
