
Your gut feeling is not enough to build a smart rank list.
You are trying to balance location, prestige, partner’s job, procedural volume, malignant rumors, fellowship prospects, and ten other things. Then programs start blurring together. Your brain does what all tired brains do: it latches onto one shiny thing (big name, flashy facilities, or the nicest free lunch) and quietly ignores everything else.
This is how people end up regretting their rank lists.
You need a system. Not vibes. Not last‑minute panic. A structured, weighted scoring tool that forces your priorities into the open and turns them into numbers you can actually compare.
That is what I am going to build with you.
Step 1: Define What Actually Matters (Not What You Think Should Matter)
Most applicants start backward. They try to fit themselves to what they think programs want, or what their classmates boast about. That is a fast way to end up miserable.
Start with you.
Grab a blank page (physical or digital) and list everything that has been bouncing around your head about programs. Do not sanitize it. Real examples I have seen scribbled down:
- “Must be within 1 hour of my partner.”
- “I hate snow.”
- “I want a trauma-heavy program; I get bored on slow services.”
- “I will burn out if call is brutal.”
- “I want to match heme/onc, need research.”
- “I am the primary caregiver for a parent, cannot move too far.”
- “I want a city with a decent dating pool.”
Now turn that big messy list into categories. Common buckets:
- Location and geography
- Family / relationships / life logistics
- Program culture and wellness
- Clinical training and volume
- Reputation and fellowship opportunities
- Schedule, call, and hours
- Compensation and financial factors
- Diversity, patient population, and mission fit
You will end up with 8–12 categories. That is fine. More than 15 is too many; you will get lost in the weeds.
Now the uncomfortable part: decide your top 5 non‑negotiable categories. Not “it would be nice.” I mean: if a program is weak here, you will quietly hate your life.
Example for a categorical IM applicant:
- Location near family / partner
- Culture and resident happiness
- Fellowship placement in cards/heme‑onc
- Clinical volume / acuity
- Cost of living
If you cannot narrow beyond “everything is important,” that is a red flag. You are trying to avoid trade‑offs. Residency does not care. Trade‑offs are coming either way. Better that you choose them deliberately.
Step 2: Assign Weights – Force Yourself to Choose
Here is where the “weighted” part comes in. Not all categories are equal. Stop pretending they are.
You are going to give each category a weight, reflecting how much it matters relative to the others.
The easiest system: percentages totaling 100.
Example for that same IM applicant:
| Category | Weight (%) |
|---|---|
| Location / Family | 30 |
| Culture / Wellness | 25 |
| Fellowship Prospects | 20 |
| Clinical Volume | 15 |
| Cost of Living / Salary | 10 |
Notice the sharp drop. That is deliberate. If every weight sits between 15% and 25%, you are still avoiding commitment.
How to set your own weights:
- Write your top 5–8 categories in a column.
- Next to each, write a gut weight (e.g., 10–40%).
- Add them up. Adjust until the total is exactly 100%.
- Ask yourself one brutal question:
“If Program A is perfect on this category and mediocre on everything else, would I still strongly consider it?”
If yes, that category should probably be ≥25%.
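If you would rather sanity-check your weights in code than by hand, here is a minimal sketch. The category names and numbers are the running IM-applicant example, not a prescription.

```python
# Sanity-check your gut weights: they must sum to exactly 100,
# and at least one category should clearly dominate.
weights = {
    "Location / Family": 30,
    "Culture / Wellness": 25,
    "Fellowship Prospects": 20,
    "Clinical Volume": 15,
    "Cost of Living": 10,
}

total = sum(weights.values())
assert total == 100, f"Weights sum to {total}, not 100 -- keep adjusting."

# Flag a "flat" profile: if no category reaches 25%, you are
# probably still dodging the trade-off.
if max(weights.values()) < 25:
    print("No category is >= 25% -- are you avoiding commitment?")
```

The assertion is the point: force the weights to close at 100 so every point you add somewhere must be taken from somewhere else.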
A quick check I use:
Cover the numbers and just look at the order. Does it honestly reflect what you rant about to your close friends? If you complain about location and your partner non‑stop, but "prestige" has the biggest weight, you are lying to yourself.
To visualize where your brain is focusing, a simple donut or pie chart of the weights helps. In table form:
| Category | Value |
|---|---|
| Location | 30 |
| Culture | 25 |
| Fellowships | 20 |
| Clinical Volume | 15 |
| Cost of Living | 10 |
If you see a flat, evenly sliced donut, you did it wrong. You want clear winners.
Step 3: Build a Simple Scoring Scale for Each Category
Now you have categories and their weights. You need a way to rate each program on each category.
Keep it simple:
- Use a 1–5 scale:
- 1 = terrible / unacceptable
- 2 = below average
- 3 = acceptable / fine
- 4 = strong
- 5 = outstanding / ideal
Resist the urge to go 1–10. It looks more "precise," but it is fake precision. You do not know the difference between an 8 and a 9 on "culture." You barely know the difference between a 3 and a 4.
Define a quick anchor for each category, so you stay consistent. Examples:
Location / Family (1–5)
- 5 = Same city as partner/family or ideal city you actively want to live in
- 4 = Same region or easy travel; you feel good about it
- 3 = Neutral / acceptable but not exciting
- 2 = Inconvenient or somewhat distressing but maybe tolerable
- 1 = Dealbreaker location
Culture / Wellness (1–5)
- 5 = Residents uniformly describe supportive leadership, protected time, low toxicity
- 4 = Mostly positive; minor gripes but no major red flags
- 3 = Mixed; some concerns but not outright malignant
- 2 = Multiple serious red flags or clear burnout vibes
- 1 = Absolutely malignant / unsafe
Write short notes for each scale if you need them. It sounds tedious, but it will save you from emotional whiplash after each interview.
Step 4: Build the Actual Weighted Scoring Tool
You can do this in Excel, Google Sheets, or Notion. I will describe it like a spreadsheet because that is what most people use.
4.1. Set up your template
Columns:
- Column A: Program Name
- Column B: Location Score (1–5)
- Column C: Culture Score (1–5)
- Column D: Fellowship Score (1–5)
- Column E: Volume Score (1–5)
- Column F: Cost of Living Score (1–5)
- Column G: Weighted Total Score
Then, above or to the side, store your weights:
- Location weight = 0.30
- Culture weight = 0.25
- Fellowship = 0.20
- Volume = 0.15
- Cost = 0.10
4.2. The formula
For each program’s row, your total score looks like this:
Total = (Location Score × 0.30)
      + (Culture Score × 0.25)
      + (Fellowship Score × 0.20)
      + (Volume Score × 0.15)
      + (Cost Score × 0.10)
You do this for every program. Then sort by Total Score from highest to lowest. That gives you an initial numeric rank list.
Is it perfect? No. Is it 10x better than pure vibes? Yes.
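If spreadsheets are not your thing, the same formula takes a dozen lines of Python. The program scores below are the worked example from the table; swap in your own.

```python
# The Step 4 spreadsheet formula in Python: weights are fractions
# summing to 1.0, scores are the 1-5 ratings per category.
weights = {"location": 0.30, "culture": 0.25,
           "fellowship": 0.20, "volume": 0.15, "cost": 0.10}

programs = {
    "Program A": {"location": 5, "culture": 3, "fellowship": 4, "volume": 4, "cost": 2},
    "Program B": {"location": 3, "culture": 5, "fellowship": 3, "volume": 3, "cost": 4},
    "Program C": {"location": 4, "culture": 4, "fellowship": 5, "volume": 5, "cost": 1},
}

def weighted_total(scores, weights):
    """Weighted sum of one program's category scores."""
    return sum(scores[cat] * w for cat, w in weights.items())

# Sort highest-to-lowest: this is your initial numeric rank list.
ranked = sorted(programs, key=lambda p: weighted_total(programs[p], weights),
                reverse=True)
for p in ranked:
    print(p, round(weighted_total(programs[p], weights), 2))
```

Run it and Program C tops the list despite its cost-of-living hit, exactly the kind of non-obvious result the tool exists to surface.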
To see how the math separates programs, a table helps:
| Program | Location (30%) | Culture (25%) | Fellowships (20%) | Volume (15%) | Cost (10%) | Total Score |
|---|---|---|---|---|---|---|
| Program A | 5 | 3 | 4 | 4 | 2 | 3.85 |
| Program B | 3 | 5 | 3 | 3 | 4 | 3.60 |
| Program C | 4 | 4 | 5 | 5 | 1 | 4.05 |
Without the tool, most people would probably pick Program A (best location) or B (happiest residents). The tool makes it clear: if you care a lot about fellowships and volume, Program C slightly edges out the others despite the cost hit.
Step 5: Collect Data Right After Each Interview
Your scores are only as good as the notes you feed them.
Here is the failure pattern I see every year:
- Week 1: You take beautiful notes on Program #1.
- Week 3: You are on interview #7. Your note-taking is now “Good people. Seemed nice. Free burrito.”
- January: You are trying to remember which program had the weird PD and which had the MICU dungeon.
Fix this with a tight process.
5.1. Immediately post‑interview routine (same day)
As soon as the day ends:
- Sit alone for 15–20 minutes.
- Rate each category (1–5) while the impressions are hot.
- Write 3 short bullets:
- Biggest pro
- Biggest con
- One “gut” reaction sentence (“I can see myself here” vs “Something felt off”)
You are not writing a novel. You are capturing raw data before memory gets airbrushed.
5.2. Use consistent questions for culture and fit
Culture is where people get fuzzy. Ask the same questions each time:
- How did residents talk about leadership? Defeated or respected?
- Did anyone quietly warn you about burnout, hidden call, or certain attendings?
- How often did residents talk about leaving early vs surviving?
- Would you trust these people at 2 a.m. when you are drowning?
Then translate that vibe into your 1–5 culture score. It is subjective, obviously. That is fine. The point is consistency across programs.
To keep your process organized, a simple step‑by‑step checklist helps you ritualize it.
| Step | Description |
|---|---|
| Step 1 | Finish Interview Day |
| Step 2 | Find Quiet Space |
| Step 3 | Rate Each Category 1-5 |
| Step 4 | Write 3 Bullet Notes |
| Step 5 | Update Spreadsheet Scores |
| Step 6 | Save Gut Rank Position |
The “Gut Rank Position” is simply: right now, where would I slot this program relative to the others? Above Program X, below Program Y. You will use this later.
Step 6: Use the Tool to Resolve Conflicting Priorities
This is where a weighted scoring tool pays off. It forces clarity when programs trade blows on different dimensions.
Scenario you will almost certainly face:
- Program 1: Great location, okay training.
- Program 2: Worse location, stronger training.
- Program 3: Best culture, mid‑tier otherwise.
Your brain starts spinning. Your group chat gives completely contradictory advice. You start refreshing Reddit, which only makes you more anxious.
Here is how to use the tool in practice:
- Look at the Total Scores across your top contenders.
- Identify programs that are:
- Clearly top tier (cluster within ~0.2–0.3 points of each other and higher than the rest).
- Clearly bottom tier (consistently 0.5–1.0 below your middle pack).
- Move bottom tier programs down. They are now background noise.
- For the top pack, ask:
- Is there a single category where one program utterly outperforms the others in a way that matters to me long term?
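The tiering step above can be automated: look for gaps in the sorted totals and start a new tier whenever the gap exceeds your threshold. The scores and the 0.3 threshold here are illustrative, matching the clustering rule of thumb above.

```python
# Group programs into tiers by gaps in their sorted weighted totals.
# A gap bigger than gap_threshold starts a new tier.
def tier_programs(totals, gap_threshold=0.3):
    """totals: dict of program name -> weighted total score."""
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    tiers, current = [], [ranked[0]]
    for prev, cur in zip(ranked, ranked[1:]):
        if prev[1] - cur[1] > gap_threshold:
            tiers.append(current)  # gap found: close out this tier
            current = []
        current.append(cur)
    tiers.append(current)
    return tiers

totals = {"A": 4.05, "B": 3.85, "C": 3.8, "D": 3.2, "E": 3.1}
for i, tier in enumerate(tier_programs(totals), start=1):
    print(f"Tier {i}:", [name for name, _ in tier])
```

With these numbers, A, B, and C cluster into a top tier and D and E fall into a clearly separated second tier, which is the "background noise" you can stop agonizing over.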
Sometimes the numbers will surprise you. I have seen applicants convinced they “have to” rank the big‑name program first—until they see that on their own weighted score sheet, it lands fourth because location, culture, and cost are all mediocre.
You can also visualize how your top few programs compare across dimensions.
| Program | Location | Culture | Fellowships | Volume | Cost |
|---|---|---|---|---|---|
| Program A | 5 | 3 | 4 | 4 | 2 |
| Program B | 3 | 5 | 3 | 3 | 4 |
| Program C | 4 | 4 | 5 | 5 | 1 |
This side‑by‑side view makes it explicit where each program is giving you value.
Step 7: Stress‑Test Your Weights (Sensitivity Analysis Lite)
Here is where we get a bit nerdy but it is worth it.
Your weights are guesses. Good guesses, but still guesses. So you stress‑test them.
The question:
If my priorities shifted slightly, would my top choice still hold?
Two ways to do this without turning into a statistician:
7.1. Best‑case vs worst‑case for location and culture
Since those two often dominate:
- Make a copy of your sheet.
- In copy #1, increase Location weight by 10–15 percentage points and decrease another category so the weights still total 100%.
- In copy #2, increase Culture weight similarly.
- See if your #1 program stays #1 in both scenarios.
If a program only wins in a very narrow weighting setup, you are taking a bigger gamble. If it consistently rises to the top even when weights shift, that is reassuring.
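Here is one way to sketch that stress test in code, reusing the example weights and scores from earlier. The 10-percentage-point shift and the choice to cut fellowship weight are assumptions; adjust them to your own profile.

```python
# Sensitivity check: move weight between two categories and see
# whether your #1 program survives the shift.
def rerank(programs, weights):
    total = lambda s: sum(s[c] * w for c, w in weights.items())
    return sorted(programs, key=lambda p: total(programs[p]), reverse=True)

def shift(weights, boost, cut, shift_pp=0.10):
    """Return a copy with shift_pp moved from `cut` to `boost`."""
    w = dict(weights)
    w[boost] += shift_pp
    w[cut] -= shift_pp
    return w

weights = {"location": 0.30, "culture": 0.25,
           "fellowship": 0.20, "volume": 0.15, "cost": 0.10}
programs = {
    "Program A": {"location": 5, "culture": 3, "fellowship": 4, "volume": 4, "cost": 2},
    "Program B": {"location": 3, "culture": 5, "fellowship": 3, "volume": 3, "cost": 4},
    "Program C": {"location": 4, "culture": 4, "fellowship": 5, "volume": 5, "cost": 1},
}

base_winner = rerank(programs, weights)[0]
for boost in ("location", "culture"):
    winner = rerank(programs, shift(weights, boost, cut="fellowship"))[0]
    flag = "" if winner == base_winner else "  <-- rank flipped!"
    print(f"+10pp {boost}: #1 is {winner}{flag}")
```

With these numbers, boosting location erodes Program C's lead, which is exactly the signal you want: its win depends on how much you really value fellowships.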
7.2. Long‑term vs short‑term weighting
Build two versions:
- Version A: “First two years happiness” heavy (location, culture, schedule).
- Version B: “10‑year career impact” heavy (fellowships, reputation, research).
Compare where programs land in each. If a program is:
- Top 3 for happiness, but mediocre for career → you are choosing quality of life over prestige. Own that choice consciously.
- Top 3 for career but bottom for happiness → you are signing up for a grind. Sometimes that is rational, but do not pretend it is not.
To see how this plays out numerically, imagine how different priority profiles shift scores.
| Category | Happiness-Weighted | Career-Weighted |
|---|---|---|
| Program A | 4.2 | 3.8 |
| Program B | 4 | 4.1 |
| Program C | 3.6 | 4.3 |
You might realize that the “career” optimal program makes you miserable, or that a slightly lower‑ranked program is the best compromise across both worlds.
Step 8: Integrate the Numbers With Your Gut (The Right Way)
Here is where most people misuse tools like this. They either:
- Worship the spreadsheet (“Spreadsheet says Program C is 4.12 vs A at 4.09, so C goes #1”), or
- Ignore it completely when their feelings get loud.
Do neither.
Use this rule:
The weighted score defines your default ranking.
Your gut is allowed to override it only when you can write a one‑sentence, specific reason.
Wrong override rationale:
- “I just liked them more.”
- “Big name.”
- “Seemed nice.”
Valid override rationale:
- “Program B is 0.1 points lower, but my partner’s job opportunities are dramatically better there; I under‑weighted that factor initially.”
- “Program A scored high on culture because of polished interview day, but three residents privately told me about serious burnout issues; my data were flawed.”
If your gut and the spreadsheet disagree, do this:
- Write down why.
- Check whether the mismatch is:
- A data problem (you misjudged a category).
- A weighting problem (you under‑ or over‑valued a category).
- Pure FOMO or prestige anxiety.
If it is FOMO or prestige alone—no concrete reason—trust the tool.
Step 9: Special Cases – Couples Match, Visa, and Red Flags
Some situations need extra rules baked into your scoring system.
9.1. Couples Match
You are essentially solving a constrained optimization problem. The weighted tool still helps, but you must integrate:
- Overlapping geography
- Both partners’ program strength and happiness
Approach:
- Each partner builds their own weighted score sheet.
- Identify high‑scoring programs for both within the same city/region.
- For each pair combination, create a “combined happiness score”:
- Example: 60% your score + 40% partner’s, if one of you is more location‑dependent.
- Any city where one partner would be miserable should be heavily penalized or eliminated, regardless of how great it is for the other.
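A minimal sketch of that combined score, with the misery veto built in. The 60/40 split, the 2.5 "misery floor," and the city scores are all illustrative assumptions.

```python
# Couples-match sketch: blend both partners' weighted totals per city,
# but veto any pairing where either partner falls below a misery floor.
MISERY_FLOOR = 2.5  # below this 1-5 score, the pairing is effectively dead

def combined_score(mine, partners, my_share=0.6):
    if mine < MISERY_FLOOR or partners < MISERY_FLOOR:
        return 0.0  # eliminate rather than average away real misery
    return my_share * mine + (1 - my_share) * partners

pairs = {
    # city: (my weighted total, partner's weighted total) -- illustrative
    "Boston": (4.1, 3.8),
    "Houston": (3.9, 4.2),
    "Denver": (4.4, 2.1),  # partner would be miserable here
}

for city, (mine, theirs) in sorted(
        pairs.items(), key=lambda kv: combined_score(*kv[1]), reverse=True):
    print(city, round(combined_score(mine, theirs), 2))
```

Note how Denver scores highest for one partner yet drops to the bottom: averaging alone would have hidden that, which is why the floor is a hard gate and not just another weight.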
9.2. Visa constraints / IMGs
If visa sponsorship is non‑negotiable, that is not a category. It is a filter.
- Any program that cannot sponsor your visa = auto‑exclude or placed at the bottom, no matter what the score says.
- For programs that do sponsor, you might add an extra category like “Historical support for IMGs / visa holders” and weight it.
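The filter-versus-category distinction is easy to encode: exclude non-sponsoring programs before scores are even compared. Program names and totals here are illustrative.

```python
# Visa sponsorship as a filter, not a category: non-sponsoring
# programs are removed before any ranking, regardless of score.
programs = {
    "Program A": {"sponsors_visa": True, "total": 3.85},
    "Program B": {"sponsors_visa": False, "total": 4.30},  # high score, still out
    "Program C": {"sponsors_visa": True, "total": 4.05},
}

rankable = {name: p for name, p in programs.items() if p["sponsors_visa"]}
excluded = [name for name in programs if name not in rankable]

ranked = sorted(rankable, key=lambda n: rankable[n]["total"], reverse=True)
print("Rankable:", ranked)
print("Excluded regardless of score:", excluded)
```

Program B has the best total and still never enters the ranking, which is the whole point of a hard constraint.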
9.3. True red flags
There are some things the scoring tool should not override:
- Systemic patient safety issues residents were afraid to discuss openly.
- Multiple residents warning you, “I wish I had not ranked this place so high.”
- Clear evidence of toxic, retaliatory leadership.
For any program with a serious red flag, I recommend:
- Either scoring culture as 1, which will nuke its total.
- Or tagging it “Hard No” and removing it from the ranking exercise entirely.
Do not rationalize your way into a bad environment with math.
Step 10: Lock in Your Final Rank List With Confidence
By the time you finish all interviews, your sheet should contain:
- Scores for each category for each program
- A weighted total
- Brief notes and gut reactions
- A few stress‑tested weight profiles
Final pass process:
- Sort by your primary weighted total.
- Mark natural tiers:
- Tier 1: “Would be genuinely happy”
- Tier 2: “Acceptable but not exciting”
- Tier 3: “Only if the Match gods force me”
- Order programs within each tier using:
- Weighted score as default
- Rare, well‑justified gut overrides
- Double‑check for:
- Any hard constraints (family, visa, medical needs) accidentally ignored
- Any program with red flags ranked too high because of prestige
At the end, you should be able to look at your #1, #2, and #3 and say:
- “Given what I value and the data I had, this is a rational order.”
- “If I end up at any of my top 5, I know exactly why I ranked them where I did.”
That is a very different feeling from “I hope I did not screw this up based on one good dinner.”
Quick Implementation Checklist
If you want a punch‑list, here it is:
- List what genuinely matters to you.
- Pick your top 5–8 categories.
- Assign weights that add to 100%, with clear priority separation.
- Define a 1–5 scoring scale for each category with simple anchors.
- Build a spreadsheet with:
- Program rows
- Category columns
- A weighted total formula
- After each interview:
- Score each category 1–5
- Write 3 bullets (pro, con, gut)
- Slot the program in your rough gut rank
- After all interviews:
- Sort by weighted totals
- Tier programs
- Stress‑test with adjusted weights
- Override only with specific, written reasons
- Transfer your final ordered list into the NRMP rank system.
If you follow that, you are not guessing. You are making a structured decision under uncertainty. That is the best you can do.
FAQ
1. What if I am late in the season and do not have time to build a full spreadsheet?
Then build a lean version. Pick only your top 4–5 categories, assign clear weights, and score just your top 8–10 programs. You do not need perfect data on every one of 30 interviews. You need clarity on the places you might actually match. A crude but honest weighted list for your top tier is far better than no structure at all.
2. Should I include program “prestige” or reputation as its own category?
Only if it has real downstream impact for your goals. For competitive subspecialty fellowships or academic careers, reputation and prior fellowship matches can matter. But label it honestly: “Fellowship / academic prospects,” not vague “prestige.” If you are planning a community outpatient career in a common specialty, giving a big weight to “prestige” is usually ego, not strategy.
3. How do I handle programs where I feel I interviewed badly, but the program itself is strong?
Do not let one awkward interview tank a program’s score. Your scoring tool is about the program, not your performance. Only factor your performance in if it meaningfully changes the odds the program ranks you highly (e.g., you had serious professionalism issues). Otherwise, score the program objectively. The Match algorithm already accounts for how both sides rank each other.
4. What if my partner/family strongly disagrees with my weights or rankings?
Then you have a values conflict, not a spreadsheet problem. Use the tool as a neutral starting point for negotiation: show them the categories, the weights, and the scores. Ask them which categories they think are under‑weighted, and why. You can adjust weights together—maybe location or support network becomes heavier. The goal is not to “win” the argument; it is to make the trade‑offs visible so you can agree on them as adults, not discover them after Match Day.