
The mythology that “everyone wants to go back home” for residency is only half true. The data show a more nuanced — and frankly more interesting — story about geographic mobility, how much people actually move, and where second look visits fit into that equation.
If you are trying to decide whether to burn a weekend and a thousand dollars on a second look, you should not be guessing. You should be thinking in terms of probabilities, match patterns, and marginal impact. That is what this piece is about.
What the Data Actually Say About Staying Close
Let me start with the blunt numbers.
NRMP’s data on US MD seniors and DO seniors, plus several specialty-specific surveys, consistently converge on a simple pattern:
- Roughly 45–55% of residents train in the same census division where they went to medical school.
- About 30–40% stay within the same state as their medical school.
- A smaller but nontrivial fraction — around 15–25% depending on specialty — end up training within 50–100 miles of where they grew up.
The gradient is steep: the more broadly you define “home,” the higher the probability you “stayed.” And when people say “I want to stay close to family,” in practice they often mean a 1–3 hour drive, not “same ZIP code.”
To make it more concrete, here is a simplified view combining findings from multiple NRMP and AAMC summary tables (rounded for clarity):
| Metric | Primary Care (FM, IM, Peds) | Competitive Specialties (Derm, Ortho, Plastics) |
|---|---|---|
| Same state as medical school | ~45% | ~25% |
| Same census division as medical school | ~60% | ~40% |
| Same state as permanent residence (at med school entry) | ~35% | ~20% |
You see the gap immediately. Primary care has a much stronger “regional gravity” than the hyper-competitive fields. In competitive specialties, program prestige and case volume pull residents across the country much more aggressively.
Now add a second layer: applicants’ stated preferences vs. outcomes.
Survey data from graduating students (AAMC GQ-style surveys) show:
- Around 65–70% report that “proximity to family/significant other” is an important or very important factor in ranking programs.
- Yet only about half of them actually end up within a half‑day drive of those people.
Translation: geography is important, but the match algorithm, competitiveness, and program availability dilute that preference. People want to be close. The system is only partially responsive.
Gravity, Distance, and Reality: A Quantitative View
Think of geographic mobility in residency like a gravity model from economics: applicants are more likely to match where there is more “mass” (programs, prestige, connections) and less “distance” (literal and psychological).
You can frame it roughly like this:
- Big metro areas with many programs (New York, Boston, Chicago, LA, Houston) have huge gravitational pull.
- Home region adds its own gravity in the form of:
  - Existing networks
  - Local letters
  - “Fit” based on shared background
- Distance dampens the probability you will choose a place if all else is equal. But all else is rarely equal.
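To make the analogy concrete, here is a toy gravity score in Python. Every number in it (the distance exponent, the 50% home-region bonus, the "mass" values) is an illustrative assumption, not a parameter fitted to match data:

```python
# Toy gravity model for program "pull" on an applicant.
# All parameters are illustrative assumptions, not fitted to match data.

def gravity_score(program_mass: float, distance_hours: float,
                  home_region: bool = False, alpha: float = 1.0) -> float:
    """Higher score = stronger pull. 'Mass' proxies program count and
    prestige; distance is travel time from home; home_region stands in
    for networks, local letters, and shared-background fit."""
    base = program_mass / (1.0 + distance_hours) ** alpha
    return base * (1.5 if home_region else 1.0)  # assumed 50% home bonus

# A distant mega-metro vs a smaller nearby home-region city:
nyc_pull = gravity_score(program_mass=100, distance_hours=6)    # ~14.3
local_pull = gravity_score(program_mass=40, distance_hours=1,
                           home_region=True)                    # 30.0
```

Under these made-up weights, the smaller local cluster out-pulls the mega-metro; raise `alpha` and distance dominates even more.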
If you look at match data by school region, a few patterns stand out repeatedly:
Students from the Northeast show moderate mobility but a strong preference to remain along the I‑95 corridor. They might “move,” but from Boston to Philly to DC — still a few hours from home.
Midwest students often show high intra‑region retention in primary care, but much more national scattering in surgical subspecialties. There are fewer hyper‑competitive seats locally, so they travel.
West Coast students have some of the highest outbound mobility simply because there are fewer total residency positions per applicant, particularly in certain fields. The denominator matters.
To illustrate how “same state” vs “same region” diverge, consider this stylized example:
| Region | IM residents in same census division (%) |
|---|---|
| Northeast | 58 |
| South | 52 |
| Midwest | 49 |
| West | 44 |
Those values approximate the percentage of internal medicine residents who attended medical school in the same census division as their residency program. Not the same state, not the same city — just same broad region.
You are looking at a coin‑flip level probability in most regions. Which means that at the population level, “staying close” is as much a product of structural factors (program density, competitiveness, school reputation) as of individual desire.
Where Second Looks Fit in This Picture
Now to the second half of the title: second look visits.
There is a lot of superstition here. I have heard all versions:
- “If you do not do a second look, they will think you are not interested.”
- “Second looks are essential if you want to move into a different region.”
- “Programs use second looks to finalize their rank list.”
The data‑driven answer is more boring and more brutal:
For most programs, second look visits have minimal measurable impact on match outcomes once you control for the real drivers: board scores, letters, interview performance, and program-specific needs.
Program directors themselves have told you this. In NRMP’s Program Director Survey, when asked to rate factors influencing their rank lists, “second visit / second look” consistently ranks near the bottom of the list and is used by only a minority of programs.
You see patterns like:
- USMLE/COMLEX scores: used by 90–95% of programs; high importance.
- Interview performance: nearly universal, high importance.
- Letters of recommendation: 85–95%, high importance.
- Perceived interest in program: moderate importance across many programs.
- Second look visit: used by a small subset (often <30%), usually low to moderate importance.
Let’s make that concrete:
| Factor | Programs Using Factor | Mean Importance (1–5) |
|---|---|---|
| Board scores | ~90–95% | 3.5–4.5 |
| Interview | ~98–100% | 4.0–4.8 |
| Letters of recommendation | ~90–95% | 3.8–4.5 |
| Demonstrated interest (generic) | ~70–80% | 3.0–3.8 |
| Second look visit | ~20–30% | 1.5–2.2 |
That last line is the one people hand‑wave away when arguing that second looks are “crucial.”
No — most of the time they are not. They can matter for specific programs or edge cases, but the base rate is low.
However, here is the nuance that actually matters for your decision:
Second looks rarely move you from “not rankable” to “ranked.” But they can occasionally shift you a few positions within a program’s list when the field is tight and the program interprets the visit as a strong signal of interest.
Those shifts matter most when:
- You are genuinely in their mid‑tier: not an obvious match, not obviously out. Think “somewhere around the median interviewee.”
- The program has a strong local vs non‑local debate internally. Faculty disagree about whether to prioritize applicants who are more likely to stay in the region.
- You are trying to cross regions — say, Midwest med school to West Coast residency — where they cannot rely on your geographic history as a proxy for commitment to that location.
In that narrow band, a second look is not magic, but it is a signal. And medicine is full of decisions made on signals when the numbers are tied.
Geographic Mobility + Second Looks: When Do They Interact?
Now let’s combine the two dimensions: mobility and visits.
Here is the actual question you care about:
If I want to stay close (or move closer to a specific region), does a second look visit materially change my odds?
We can frame this using approximate probabilities from existing mobility data and realistic assumptions about second look influence.
Imagine this simplified scenario:
- Baseline probability that a given program ranks you high enough to match, conditional on an interview: 20%.
- For a “home region” applicant (med school and/or permanent address near program), that probability might be 25–30%, because faculty subconsciously see you as less likely to bolt, and you likely have stronger local ties.
- For an “out of region” applicant, baseline might be more like 15–20%.
Now add a second look:
- For the majority of programs (those that do not care much about second looks), effect size ≈ 0. Your 20% stays 20%.
- For the minority of programs that do treat them as a real signal, you might see a modest bump — say from 20% to 24–26% — if your file is already competitive.
Graphically, if we pretend these probabilities are typical:
| Scenario | Approx. match probability |
|---|---|
| Home region, no second look | 0.28 |
| Home region, with second look | 0.32 |
| Out of region, no second look | 0.18 |
| Out of region, with second look | 0.23 |
These are not exact NRMP numbers. They are structured to show a realistic pattern:
- Home region already has a higher base probability.
- A second look boosts both, but proportionally more for the out‑of‑region applicant, where the signal of commitment carries more informational value.
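The table's pattern reduces to a few lines of arithmetic. These are the same stylized probabilities as above, not NRMP figures:

```python
# Stylized match probabilities from the table above (not NRMP data).
probs = {
    ("home", False): 0.28,   # home region, no second look
    ("home", True): 0.32,    # home region, with second look
    ("away", False): 0.18,   # out of region, no second look
    ("away", True): 0.23,    # out of region, with second look
}

def second_look_gain(region):
    """Absolute and relative bump from a second look, per the table."""
    before, after = probs[(region, False)], probs[(region, True)]
    return after - before, (after - before) / before

home_abs, home_rel = second_look_gain("home")   # +0.04 absolute, ~14% relative
away_abs, away_rel = second_look_gain("away")   # +0.05 absolute, ~28% relative
```

The relative gain is roughly twice as large out of region, even though both absolute bumps are small.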
Here is the harsh part: even with the boost, your absolute probability gain at a single program is small. A 4–5 percentage point increase at best. Against the cost of flights, hotels, lost shifts, and fatigue.
Where second looks start to look better in expected‑value terms is when:
- The program is very high on your list (top 3).
- You strongly suspect you sit in that “maybe” tier for them.
- Your geography is misaligned in some way (you are trying to move regions, or you want to stay in a region where your med school is an outlier).
In other words, if a single 4–5 percentage point gain would be massively valuable for you at that specific program, the trip may be justified.
Cost, Equity, and the Future of Second Looks
The economics side of this cannot be ignored.
A realistic second look often costs:
- Round‑trip flight: $300–600 (or more, depending on region and timing).
- Hotel or Airbnb: $150–400 for 1–2 nights.
- Food and local transit: $50–150.
- Opportunity cost: lost income from moonlighting, lost study time, burnout.
You are looking at $500–1,000+ per visit for a marginal probability gain that is usually single‑digit percentages and sometimes zero.
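A quick way to sanity-check that trade-off is dollars per percentage point of probability gained. The figures below are midpoints of the ranges above, and the 4-point gain is the optimistic stylized bump, not a measured effect:

```python
# Back-of-envelope: cost per percentage point of match probability.
# Dollar figures are midpoints of the ranges listed above (assumptions).
flight, lodging, food = 450, 275, 100
trip_cost = flight + lodging + food        # $825, before opportunity cost
prob_gain_points = 4                       # optimistic absolute bump

cost_per_point = trip_cost / prob_gain_points   # $206.25 per point
```

And if the gain is actually zero, as it is at programs that ignore second looks, the cost per point is unbounded. That is the base case at most programs.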
From an equity standpoint, this is terrible. You are essentially asking students with fewer resources to either:
- Overextend themselves financially to keep up with wealthier peers who are “blanket second looking,” or
- Decline to do second looks and accept a small but real risk of being perceived as less interested.
Program leadership is not blind to this. That is why you see more and more programs taking formal positions like:
- “We do not offer or consider second looks.”
- “Second looks are optional and will not affect your position on the rank list.”
- “Any post‑interview visit is purely for your information and cannot be used in ranking.”
Are those statements always enforced perfectly? No. Faculty are human. If someone flies across the country to see your program, people talk about it. But the direction of policy is clear: downplay or abolish second looks as a selection tool.
The pandemic accidentally accelerated this. Once everyone was forced into virtual interviews, many programs realized they could run high‑quality matches without in‑person second visits at all. A lot of them are not eager to go back to the old, expensive, opaque system.
How to Use This Data for Your Own Decisions
Let me be practical here. You care less about national averages and more about your situation.
You are deciding: Do I stay close? Do I stretch to another region? Do I spend on second looks?
Here is a data‑driven way to think about it:
Know your specialty’s mobility pattern.
- If you are in a primary care field, odds are already decent you will stay within your region if you rank local programs reasonably high.
- If you are in derm, plastics, ortho, ENT, neurosurgery, etc., expect much higher mobility; you will very likely have to move, and programs expect that of you.
Quantify your preferences.
Instead of “I want to be close to family,” try to answer:
- The maximum distance (hours of travel) you will actually tolerate.
- How you weight geography vs training quality vs fellowship chances. If geography is truly 50% of your utility function, you should behave differently than someone who puts it at 10%.
Rank programs for yourself using expected value, not vibes.
For each program, consider:
- Realistic probability you could match there (based on your competitiveness, interview feel, and how many similar applicants they usually take).
- Utility if matched (training + geography + quality of life).

Multiply those mentally. A local mid‑tier program might beat a distant “name” if your personal utility for staying close is high enough.
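That mental multiplication can be sketched in a few lines. Every probability and utility below is invented for illustration, so plug in your own estimates:

```python
# Expected-value sketch: EV = P(match) * personal utility if matched.
# All numbers below are made-up illustrations, not real program data.
programs = [
    # (name, estimated P(match), utility 0-100 weighting training,
    #  geography, and quality of life however you actually value them)
    ("Local mid-tier", 0.35, 70),
    ("Distant name-brand", 0.15, 90),
    ("Regional safety", 0.60, 55),
]

ranked = sorted(programs, key=lambda p: p[1] * p[2], reverse=True)
# EVs: Local mid-tier 24.5, Distant name-brand 13.5, Regional safety 33.0.
# The local mid-tier beats the distant name once geography-weighted
# utility and a realistic match probability enter the calculation.
```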
Be selective with second looks.
Criteria where I have actually seen second looks make sense:
- The program is in your top 3.
- The program is out of your home region or is in your home region but you have weak geographic ties on paper (out‑of‑state med school, recent move).
- The program has explicitly said second looks are optional but welcome and will be considered as a sign of interest.
- You are financially able to absorb the cost without wrecking your budget.
If those boxes are not checked, your money is likely better spent elsewhere (or not spent at all).
Do not try to brute‑force geography with second looks alone.
Your best “stay close” levers are:
- Ranking local programs higher.
- Being realistic about competitiveness and not over‑ranking long‑shot distant programs at the expense of very likely local ones.
- Leveraging local mentors and letter writers who know the PDs and can vouch for your long‑term interest in the region.
Second looks are a weak auxiliary signal on top of that, not a primary driver.
Where This Is Likely Headed
The trend lines are pointing in one direction: more structure, less chaos.
I would expect over the next 5–10 years:
- More programs will move toward formal bans or strict neutralization of second looks in ranking decisions, under pressure for equity and transparency.
- Virtual or structured “open house” formats may replace ad hoc visits, with clear statements that no attendance signal will be used for ranking. (Whether faculty perfectly obey that will vary.)
- Geographic mobility may actually increase in some fields if virtual recruitment weakens the “local familiarity” effect that used to favor home region students.
- At the same time, workforce planning and state-level incentives may push primary care programs to favor in‑state and in‑region applicants explicitly, strengthening “stay close” forces in those areas.
In other words, the system will split: high‑mobility competitive specialties getting even less tied to geography, and regionally‑oriented primary care tracks doubling down on local retention.
Second looks, in that future, become more of a personal due diligence tool — “Can I actually live here?” — than a strategic weapon to move your place on a rank list.
Bottom Line: Numbers Behind Staying Close
Three takeaways, without the fluff:
About half of residents stay in their medical school region; fewer stay in the same state. Staying close is common but far from guaranteed, especially in competitive specialties where national mobility is the norm.
Second looks are a weak signal, not a primary driver. Program director data put them near the bottom of factors that influence ranking. At best, they provide a small bump in perceived interest for applicants already in contention.
Use second looks surgically, not reflexively. Save them for one or two top‑choice programs where a modest probability gain is genuinely worth hundreds of dollars and lost time. For everything else, let your rank list, not your travel receipts, do the heavy lifting.