
The way most applicants talk about “big city vs small city” is statistically lazy.
People toss around lines like “I just want a big academic center in a cool city” as if that actually narrows the field. It does not. If you want better match odds, you need to think like a portfolio manager, not a tourist. City size, program type, and your own competitiveness interact in very predictable ways. The data shows repeatable patterns. If you ignore them, you leak match probability for no good reason.
Let me walk through how to treat location and program type as quantifiable variables, not vibes.
1. The Three Variables That Quietly Drive Your Match Odds
Strip away the personal statements and glossy program brochures. For probability of matching, three structural variables matter a lot:
- City size / market size
- Program type and tier
- Your objective competitiveness
You cannot control the third one right now. You can absolutely control how you expose yourself to risk with the first two.
City size is a proxy for demand, not just lifestyle
Applicants obsess over “NYC, Boston, SF, Chicago” as if each were one choice. They are not. They are hyper‑competitive demand clusters. A rough, but surprisingly predictive, way to think about markets:
- Mega metros (top ~10–15 MSAs): NY, LA, Chicago, Boston, SF Bay, DC, Houston, etc.
- Large cities / regions: mid‑to‑large MSAs with at least one major academic center (Cleveland, St. Louis, Denver, Pittsburgh, Minneapolis, etc.).
- Mid / small cities: standalone cities or regions with 1–3 residency programs in a specialty; often state or regional hubs (Des Moines, Omaha, Greenville, Rochester MN if you set aside the “brand” effect, etc.).
- Rural / distributed: community or hybrid programs in small towns or multi‑site across a region.
Demand is not evenly distributed. Applicants flock to mega metros at rates wildly disproportionate to the number of positions. You see this every year when people complain that “everyone applied to the same 15 coastal cities.”
Program type is a proxy for both training style and applicant congestion
At a high level, you are almost always comparing:
- Large university / academic medical centers
- University‑affiliated community programs
- Pure community programs (often single hospital)
- Hybrid / county‑based systems
The national data (NRMP Charting Outcomes, Program Director Surveys) make one thing painfully clear: academic programs in mega metros receive far more applications per position than community programs in mid‑size cities. Often 2–3x. That ratio alone should make you rethink where you place your rank list “risk.”
Competitiveness sets your safe vs stretch boundaries
USMLE / COMLEX scores, number and type of letters, research output, red flags, and home institution clout all anchor your realistic range.
The pattern I see every year:
- Strong applicants (top quartile) over‑concentrate their rank list in glamorous locations and academic brands. Many still match, but they increase the tail‑risk of sliding way down.
- Mid applicants (middle 50%) behave like they are top quartile, especially in popular specialties, and then act surprised when they match in their bottom 3 or do not match at all.
- Weaker applicants (bottom quartile) often either overreach or do the opposite: only apply locally, drastically shrinking the pool.
Ranking strategy is basically: align your “city size / program type” mix to your actual—not aspirational—percentile.
2. What City Size Actually Does to Your Odds
You cannot get precise program‑level application data for every specialty, but sampling from publicly shared numbers and NRMP surveys gives consistent patterns.
Here is a stylized example using internal medicine categorical positions, based on typical ranges programs report:
| Market Type | Typical Applications per Position | Fill Rate with US MD/DO | IMG Share of Matched |
|---|---|---|---|
| Mega Metro Academic | 120–200+ | 90–98% | 5–10% |
| Large City Academic | 80–140 | 80–95% | 10–20% |
| Mid/Small City Univ | 50–100 | 70–90% | 15–30% |
| Community Mid/Small | 30–70 | 50–80% | 20–40% |
This is not exact, but the ratios are real. A mega‑metro academic IM program can see triple the applications per position compared to a smaller‑city community program.
So, what does that mean for your rank list?
If you rank 12 mega‑metro academic programs first, you are essentially putting your early ranks into the densest part of the applicant funnel. Even if you “like” them more, the incremental difference in happiness between #3 and #13 is often far smaller than the increased probability of sliding to #13 or lower when those first twelve programs fill their rank lists with higher‑scoring applicants.
The silent benefit of mid‑size cities
Mid‑size cities and “unsexy” regions behave like mispriced assets. Good training. Less applicant congestion. Fewer people bragging about it on Instagram.
Programs in places like Milwaukee, Indianapolis, Kansas City, or Cincinnati often have:
- Strong clinical volume
- Reasonable cost of living
- Less brutal applicant‑to‑seat ratios
Yet year after year, people treat them as “safeties” and rank them low, even when those places would give them better case numbers and less burnout than the coastal brand name they fetishize.
Visualizing the risk–reward by market
Here is a rough visualization of how applicant competition scales across markets:
| Market Type | Competition Index (Mega Metro Academic = 100) |
|---|---|
| Mega Metro Academic | 100 |
| Large City Academic | 75 |
| Mid/Small Univ | 50 |
| Community Mid/Small | 35 |
Interpretation: if you call mega‑metro academic a “100” competition index, large city academic runs around 70–80, mid‑size univ 40–60, community programs ~30–40. This is why the “all big city academic” rank list is a high‑variance gamble.
3. Program Type: Prestige vs Match Probability
Let me be blunt. Applicants massively overvalue “prestige” and systematically undervalue:
- Fit
- Culture
- Daily work conditions
From a numbers standpoint, there are three big distortions.
1. Brand name inflation
Programs with big‑name universities or cancer centers attract applications from:
- Applicants who have no realistic chance
- Applicants who do have a realistic chance but apply “just in case” or for ego
- Applicants who already have multiple similar‑tier options
The result: top‑tier academic programs can end up with extremely long rank lists, and you sit somewhere around line 140 of theirs, hoping their 24 categorical spots take them deep enough to reach you.
2. Community program under‑evaluation
A lot of community programs receive fewer applications per position and have shorter rank lists. That does not mean weaker training. It just means:
- Less research
- Fewer fellows
- Less brand cachet
For many people going into primary care, hospitalist work, or noncompetitive subspecialties, a strong community or hybrid program is a smart choice. You get heavy clinical exposure, earlier responsibility, and less crowding around each case.
3. The mismatch between program needs and applicant goals
Data from Program Director Surveys consistently show that program directors weight:
- Letters of recommendation
- MSPE and transcript
- USMLE/COMLEX scores
- Interview performance
- Evidence of professionalism / no red flags
They are not ranking people based on “loves big cities” or “will be happy living near great brunch spots.” They rank to fill the service and training needs of the program.
Academic centers often want research‑friendly, fellowship‑bound residents. Many large community programs want reliable clinical workhorses. If you are clinically strong but research‑light, chasing a stack of “research‑heavy” academic programs as your top 10 ranks is not brave. It is misaligned with the demand.
4. How to Use Data to Shape Your Rank List
Let’s build a more quantitative way to think about your rank list instead of the usual “vibes and geography” method.
Step 1: Define your competitiveness band
Be honest. Use actual numbers.
For your specialty, pull:
- NRMP Charting Outcomes for your applicant type (US MD, US DO, US‑IMG, Non‑US IMG).
- Median and interquartile ranges for matched applicants’ Step 2 CK (now that Step 1 is pass/fail), research output, and number of ranks.
Rough categorization (varies by specialty, but as a heuristic):
- Top quartile: Step/COMLEX well above median; strong letters; relevant research for competitive fields.
- Middle 50%: within ±0.5 SD of median; adequate letters; decent but not standout extras.
- Bottom quartile: scores below median; weaker letters or institutional backing; red flags.
You are not looking for perfection. You are trying to understand whether you should bias more toward safety or more toward upside.
Step 2: Assign a “competition index” to programs
You will not get exact “applications per position” for every program, but you can approximate a competition index based on:
- City size / desirability
- Program type (big‑name academic, academic‑affiliated community, pure community)
- Reputation within the specialty (talk to seniors, look at where graduates match for fellowship)
You can literally give each program a 1–5 score, where:
- 1 = lower competition (smaller city, community/hybrid, fewer applicants per seat)
- 5 = extreme competition (mega‑metro, top academic / brand name)
Then check your list distribution.
If you are a mid applicant and half of your rank list has index 4–5 programs in mega metros, your probabilities are skewed. You are banking on winning in the densest part of the funnel.
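If you want this in a spreadsheet or script rather than your head, here is a minimal Python sketch. The point values and clamping are illustrative assumptions, not NRMP data, and a strong specialty reputation would reasonably bump a program up a point:

```python
# Heuristic competition index from city size and program type alone.
# The point values and clamping are illustrative assumptions, not NRMP data.

CITY_SCORE = {"mega": 3, "large": 2, "mid": 1, "small": 0}
TYPE_SCORE = {"academic": 2, "affiliated": 1, "community": 0, "hybrid": 0}

def competition_index(city: str, program_type: str) -> int:
    """Map city size + program type onto the 1-5 competition index.

    A strong specialty reputation (fellowship placement, word of mouth)
    would reasonably add +1 before clamping.
    """
    raw = CITY_SCORE[city] + TYPE_SCORE[program_type]  # 0-5
    return max(1, min(5, raw))                         # clamp into 1-5

print(competition_index("mega", "academic"))   # 5: densest part of the funnel
print(competition_index("mid", "community"))   # 1: far less congestion
```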
Step 3: Calibrate your market mix
Here is a simple target mix framework for many core specialties (IM, FM, Peds, Psych), assuming you have enough interviews. For hyper‑competitive specialties, shift the mix further toward the lower‑competition columns.
| Applicant Band | Mega Metro Academic | Large City Academic | Mid/Small Univ/Hybrid | Community Mid/Small |
|---|---|---|---|---|
| Top Quartile | 30–40% | 30–40% | 10–20% | 10–20% |
| Middle 50% | 10–25% | 25–35% | 25–35% | 20–30% |
| Bottom Quartile | 0–10% | 15–25% | 30–40% | 30–50% |
This is not prescriptive. It gives you a sanity check. If you are in the middle 50% but your rank list is 70% mega‑metro academic, you are taking on unnecessary risk.
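To make that check mechanical, here is a small sketch that compares a rank list against the middle‑50% row of the table above. The example list is invented and deliberately skewed:

```python
from collections import Counter

# Target mix for a middle-50% applicant, transcribed from the table above.
TARGET = {
    "mega_academic":  (0.10, 0.25),
    "large_academic": (0.25, 0.35),
    "mid_univ":       (0.25, 0.35),
    "community":      (0.20, 0.30),
}

# Hypothetical, deliberately skewed rank list: one market label per program.
rank_list = ["mega_academic"] * 7 + ["large_academic"] * 3 + ["community"] * 2

counts = Counter(rank_list)
n = len(rank_list)
for market, (lo, hi) in TARGET.items():
    share = counts.get(market, 0) / n
    flag = "ok" if lo <= share <= hi else "OUT OF BAND"
    print(f"{market:15} {share:4.0%}  target {lo:.0%}-{hi:.0%}  {flag}")
```

The 58% mega‑metro concentration gets flagged immediately, which is exactly the kind of skew a gut-built list hides.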
5. Common Ranking Mistakes Quantified
Let me run through the patterns I see every season and how they look from a probabilities perspective.
Error 1: Overconcentrating in one elite city
Scenario: Applicant with middle‑of‑the‑pack metrics in internal medicine ranks:
- Ranks 1–8: NYC academic/university‑affiliated
- Ranks 9–12: Boston academic
- Ranks 13–15: DC/Baltimore academic
- Ranks 16–20: scattered mid‑size city community/univ
Mathematically, they stacked the first 12 positions in some of the most application‑dense markets in the country. Even if they receive interviews, the rank lists at those programs are long, and many are filled with stronger applicants. The true match probability for ranks 1–12 is nowhere near “12× a single program’s probability.” Those events are highly correlated: if you are not competitive enough to be at the top of one similar‑tier list, you are likely not at the top of the others.
Effect: High probability you slide to ranks 13–20 or lower. Or fail to match if 16–20 are also competitive.
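To see why those twelve ranks do not behave like twelve independent shots, here is a toy Monte Carlo sketch. Every number in it is invented. Each program, with probability rho, simply echoes a shared verdict on the applicant and otherwise decides on its own, so the per‑program success rate stays the same while outcomes become correlated:

```python
import random

def match_in_block(n_programs: int, rho: float, p: float,
                   trials: int = 100_000) -> float:
    """Estimate P(match at >=1 of n similar-tier programs).

    With probability rho a program echoes a shared verdict on the
    applicant; otherwise it decides independently. The marginal success
    rate stays p either way, but outcomes correlate as rho rises.
    """
    hits = 0
    for _ in range(trials):
        shared_yes = random.random() < p   # one pooled verdict per trial
        for _ in range(n_programs):
            yes = shared_yes if random.random() < rho else (random.random() < p)
            if yes:
                hits += 1
                break
    return hits / trials

p = 0.15  # invented per-program success rate at a similar-tier program
print(match_in_block(12, rho=0.0, p=p))  # independent: ~0.86
print(match_in_block(12, rho=0.8, p=p))  # correlated:  ~0.41
```

At rho = 0, twelve ranks at a 15% per‑program success rate give roughly an 86% chance of matching somewhere in the block. At rho = 0.8, the same twelve ranks deliver closer to 41%, because they mostly succeed or fail together.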
Error 2: Over‑weighting local geographic comfort
Scenario: Applicant wants to stay “within 2 hours of home” and only interviews locally. They end up with 9 programs in a highly competitive region and rank all 9.
NRMP data consistently show that beyond a certain point, the number of ranks matters as much as, or more than, marginal “fit” at each program. An applicant who ranks 9 programs all in one hyper‑competitive corridor is mathematically more vulnerable than someone who ranks 14 programs spread across markets, even if some are less preferred cities.
You are effectively trading a modest increase in lifestyle comfort for a substantial increase in the risk of not matching. That trade often looks irrational on paper.
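For a rough sense of scale, treat each rank as an independent 15% shot (invented numbers, and generous ones, since correlation within one corridor only makes things worse): 9 ranks give 1 − 0.85^9 ≈ 77%, while 14 ranks give 1 − 0.85^14 ≈ 90%. That 13‑point gap is what the two‑hour radius is costing.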
Error 3: Under‑ranking less glamorous community programs
I see this constantly. Someone interviews at:
- 7 academic programs (3 mega metro, 4 large city)
- 6 community/university‑affiliated programs in mid‑size cities
They “feel better” about the places in big cities and rank them 1–7. Then, instead of sorting the community programs by actual training, culture, and risk hedge, they drop them to 8–13 as an afterthought.
From a probability standpoint, this is backwards. Given how congested the applicant pool is at the top 3–4 big city sites, many middle‑band applicants would be better off ranking at least one or two high‑quality, smaller‑market community programs higher, especially those offering strong training in the exact career they want.
6. Integrating City Size, Program Type, and Your Personal Utility
You are not a robot. I am not suggesting you ignore your own preferences. I am saying: quantify them.
Think in terms of a simple utility function. Something like:
Total value = 0.5 × Match Probability + 0.3 × Training Quality + 0.2 × Personal Fit/Location
You can change the weights, but keep “Match Probability” non‑trivial. Many people implicitly assign it something like 0.1 and give 0.9 to vibe and geography. That mispricing is how people end up scrambling.
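Run a quick example with those weights: a safe, solid program scored 4 on match probability, 4 on training quality, and 2 on fit totals 0.5 × 4 + 0.3 × 4 + 0.2 × 2 = 3.6, narrowly beating a dream program scored 2, 5, and 5, which totals 0.5 × 2 + 0.3 × 5 + 0.2 × 5 = 3.5. The weights, not the vibes, decide the close calls.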
Build a quick scoring sheet
For each program you interviewed at, assign:
- Match Probability (1–5): based on city size, program type, how interview went, and how competitive you are vs their typical match profile.
- Training Quality (1–5): case volume, fellowship placement if relevant, reputation from seniors and attendings.
- Personal Fit/Location (1–5): city size preference, support system, partner/job issues, cost of living.
Then compute a weighted score. You do not have to follow it rigidly, but you will immediately see where you are overvaluing “location fun” at the expense of obvious probability and training advantages.
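As a minimal sketch of that sheet in Python, reusing the example scores from the portfolio table below (program names are placeholders):

```python
# Weighted rank-list scoring sheet. Program names and scores are made up;
# the scores mirror the example portfolio table.
WEIGHTS = {"match": 0.5, "training": 0.3, "fit": 0.2}

programs = [
    # (name,          match, training, fit)  -- each scored 1-5
    ("Mega Metro U",      2,        5,   5),
    ("Large City U",      3,        4,   4),
    ("Mid City Univ",     4,        4,   3),
    ("Community A",       5,        3,   3),
    ("Community B",       4,        3,   2),
]

def total(match: int, training: int, fit: int) -> float:
    return (WEIGHTS["match"] * match
            + WEIGHTS["training"] * training
            + WEIGHTS["fit"] * fit)

for name, m, t, f in sorted(programs, key=lambda p: -total(*p[1:])):
    print(f"{name:14} {total(m, t, f):.1f}")
```

Note how the highest match‑probability rows float upward once that score carries half the weight.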
Visualizing your portfolio
A simple way to check balance:
| Program | Match Probability (1–5) | Training Quality (1–5) | Location/Fit (1–5) |
|---|---|---|---|
| Program 1 | 2 | 5 | 5 |
| Program 2 | 3 | 4 | 4 |
| Program 3 | 4 | 4 | 3 |
| Program 4 | 5 | 3 | 3 |
| Program 5 | 4 | 3 | 2 |
If your top 5 ranks all show low match probability scores but great city/fun scores, that is a red flag. You need more hedging with programs where your probability bar is higher, even if the brunch scene is worse.
7. Specialty Differences: Not All Markets Behave the Same
The city size and program type dynamics get amplified in competitive specialties.
- Dermatology, plastics, ortho, ENT, neurosurgery: prestige and location demand are so concentrated that a “mid‑size city” university program can be as competitive as a mega‑metro program is in a less competitive specialty. You cannot rely on city size alone as a signal of easier entry.
- Psychiatry, family medicine, pediatrics: community and mid‑size markets are often genuinely less competitive, and program type/city size mixing is a powerful tool to shift your odds.
- Internal medicine: giant range. Top‑tier academic in Boston vs a solid community program in a midwestern city are essentially different universes in terms of competition.
The principle holds: where there is excess demand—popular cities and big brands—the applicant‑to‑position ratio spikes. Your rank strategy should not ignore that.
8. A Process You Can Actually Use This Week
You have your interviews done or nearly done. You are staring at a blank rank list. Here is a concrete process, not just theory.
- List every program you interviewed at in a spreadsheet.
- Add columns:
  - City size category (Mega, Large, Mid, Small/Rural)
  - Program type (Academic, Univ‑affiliated, Community, Hybrid/County)
  - Rough competition index (1–5)
  - Personal fit score (1–5)
  - Training quality score (1–5)
- Calculate a “match utility” score, for example (see the sketch after this list):
  - 0.5 × (6 − competition index) + 0.3 × training quality + 0.2 × personal fit
  - (Lower competition index = higher probability, so invert it.)
- Sort by this utility and compare to your gut ranking.
- Identify programs where your gut rank is dramatically above or below the data‑weighted rank. Ask yourself whether that’s rational or just story‑driven.
- Adjust, but do not let yourself ignore match probability completely without a very clear reason (e.g., partner’s job, visa restrictions, strict geographic constraint).
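Here is a compact sketch of that pipeline in Python. The program rows are hypothetical, and the weights are the ones from the list above:

```python
from collections import Counter

# (name, city size, program type, competition index, fit, training)
# All rows are hypothetical examples, scored 1-5 where applicable.
rows = [
    ("Program A", "Mega",  "Academic",  5, 5, 5),
    ("Program B", "Large", "Academic",  4, 4, 4),
    ("Program C", "Mid",   "Univ-aff",  3, 3, 4),
    ("Program D", "Mid",   "Community", 2, 3, 3),
    ("Program E", "Small", "Community", 1, 2, 3),
]

def match_utility(comp: int, training: int, fit: int) -> float:
    # Lower competition index = higher probability, so invert it.
    return 0.5 * (6 - comp) + 0.3 * training + 0.2 * fit

for name, _, _, comp, fit, training in sorted(
        rows, key=lambda r: -match_utility(r[3], r[5], r[4])):
    print(f"{name}: {match_utility(comp, training, fit):.1f}")

# City size mix, for the pie-chart check below.
mix = Counter(r[1] for r in rows)
for city, count in mix.items():
    print(f"{city}: {count / len(rows):.0%}")
```

The ordering flips relative to a prestige‑first gut list: the lowest‑competition rows score highest because the inverted index carries half the weight.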
If you want to see the overall balance, chart your city size mix and program type mix.
| City Size Category | Share of Rank List (%) |
|---|---|
| Mega Metro | 25 |
| Large City | 35 |
| Mid City | 30 |
| Small/Rural | 10 |
If you are a mid‑band applicant in internal medicine and 60–70% of your ranks are mega‑metro academic, that pie chart should make you uncomfortable.
Key Takeaways
- City size and program type are not lifestyle footnotes. They are strong proxies for applicant congestion and match probability.
- Most applicants overconcentrate in mega‑metro academic programs, effectively betting their future on the noisiest, most competitive segment of the market.
- A simple, numbers‑driven ranking approach—scoring competition, training quality, and personal fit—produces a more balanced rank list and reduces your odds of an avoidable miss on Match Day.