
Residency Rank List Behavior Before and After Virtual Interview Adoption

January 6, 2026
14 minute read

[Image: Residency applicants reviewing rank lists on laptops during the virtual interview era]

The story everyone tells about rank lists since virtual interviews is too simple. The data shows something messier: costs collapsed, geography loosened, but applicant ranking behavior became more extreme and more polarized.

What Actually Changed With Virtual Interviews

Virtual interviews did not just replace flights with Zoom. They changed the economics of the entire ranking decision.

Before 2020, a typical categorical applicant might interview at 10–15 programs, occasionally 18–20 for more competitive specialties. After widespread virtual adoption, the number of interviews per applicant jumped sharply across specialties.

Average Number of Residency Interviews per Applicant (bar chart)
  Pre-Virtual (2018-2019): 11
  Transition (2020): 14
  Virtual Era (2021-2023): 17

Three immediate, measurable consequences for rank lists:

  1. Rank lists got longer.
  2. Rank-order curves got steeper at the top.
  3. Internal consistency between “fit” and rank order got weaker.

Applicants could afford to “hoard” interviews. Programs had to cast wider nets. Rank lists on both sides stretched further down, but the signal in the top third became more distorted.

Rank List Length: Before vs After Virtual Adoption

Let me start with something concrete: list length distributions.

Across multiple specialties (IM, Peds, FM, Psych) and data from NRMP plus program internal tracking, the pattern is clear: the median number of programs ranked increased meaningfully in the virtual era.

Median Programs Ranked by Applicant Type

  Applicant Type    2017-2019 Median    2021-2023 Median
  US MD Seniors     12                  15
  US DO Seniors     10                  13
  IMGs               7                  11

Those numbers line up with what residents tell me anecdotally. In 2018 you heard, “I ranked 11 core programs and a couple of safeties.” By 2022 it turned into, “I ranked 18 just in case; it cost me nothing to interview.”

Look at the shape of the distribution—this is where the behavior really shifted.

Distribution of Programs Ranked per Applicant (US MD Seniors, boxplot)

  Period       Min    Q1    Median    Q3    Max
  2017-2019      5     9        12    15     20
  2021-2023      7    11        15    19     25

You see:

  • Low end moved up a bit: fewer people ranking only 5–6 programs.
  • Median and upper quartile jumped several programs.
  • Maximums climbed into the mid-20s regularly.

This is entirely rational from the applicant side. Marginal cost of one more virtual interview is close to zero: a half day off rotations, maybe some prep, no plane ticket. Expected benefit of that “extra” safety program is non-trivial if you are anxious about matching.
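
To make that cost–benefit intuition concrete, here is a minimal back-of-the-envelope sketch in Python. It treats each ranked program as an independent chance of matching, which is a strong simplifying assumption (real match probabilities are correlated and depend on where you sit on each program's list), and the per-program probabilities are invented purely for illustration.

```python
# Back-of-the-envelope: probability of matching *somewhere* as a function of
# how many acceptable programs you rank. Assumes each program is an independent
# chance of matching, which is a strong simplification; the per-program
# probabilities below are invented for illustration, not real data.

def p_match_anywhere(per_program_probs):
    """P(at least one match) = 1 - P(missing every program), assuming independence."""
    p_miss_all = 1.0
    for p in per_program_probs:
        p_miss_all *= (1.0 - p)
    return 1.0 - p_miss_all

# A hypothetical applicant: a few realistic targets plus weaker reaches and safeties.
base_list = [0.30, 0.25, 0.20, 0.15, 0.15, 0.10, 0.10, 0.10]

print(f"8 programs ranked:  {p_match_anywhere(base_list):.3f}")
print(f"+1 extra safety:    {p_match_anywhere(base_list + [0.10]):.3f}")
print(f"+4 extra safeties:  {p_match_anywhere(base_list + [0.10] * 4):.3f}")
```

With these invented numbers, one extra safety adds about two percentage points and four add roughly eight: small, but not nothing, which is exactly why hoarding feels rational when the marginal cost is half a day on Zoom.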

From a ranking-behavior perspective, that created:

  • Longer tail of “low-commitment” programs tacked onto the bottom of lists.
  • More applicants ranking programs they had little true interest in, purely as insurance.
  • Rank lists that are less tightly aligned with actual preference strength past about rank 8–10.

Geographic Behavior: Distance and Regional Anchoring

The biggest myth is that virtual interviews completely broke geographic constraints. That is wrong. The data shows geography loosened but did not disappear.

Before virtual interviews, travel cost and time created natural friction. You had to decide whether a cross-country program was “worth a flight.” That filtered a lot of low-probability, far-distance options out at the interview step.

Post-virtual, that filter is gone. You can interview with a California program at 9 am and an East Coast program at 2 pm from the same chair.

Let us quantify the shift using approximate distributions of matched distances based on published NRMP and institutional data.

Approximate Distance Between Medical School and Residency (US MD Seniors, virtual era, pie chart)
  Same state: 40%
  Same region: 35%
  Different region: 25%

Pre-virtual era (2017–2019) numbers for many core specialties looked roughly like:

  • Same state: ~45–50%
  • Same region: ~30–35%
  • Different region: ~15–20%

In the virtual era (2021–2023), that mix shifted:

  • Same state: down ~5–10 percentage points
  • Different region: up ~5–8 percentage points

So, yes, people matched farther from home somewhat more often. But the majority still matched in-state or in-region. Geography weakened; it did not evaporate.

Rank list behavior mirrored this. What changed:

  1. More “geographic reach” programs entered the middle of lists (ranks 5–12).
  2. Applicants more often created bimodal lists: cluster of local programs plus cluster of distant “reach” or prestige programs.
  3. The correlation between program distance and rank position decreased but stayed negative (closer = usually higher rank, on average).
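
That third point is easy to check on your own list. The sketch below computes a Spearman rank correlation between preference strength and program distance; the rank positions and distances are fabricated for illustration, and scipy is assumed to be available.

```python
from scipy.stats import spearmanr

# One applicant's list, fabricated for illustration: rank 1 = most preferred.
rank_position = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
distance_miles = [40, 15, 600, 90, 1200, 30, 250, 800, 2000, 1500]

# Recode rank position as preference strength (higher = more preferred) so that
# "closer = usually higher rank" shows up as a negative correlation, matching
# the convention used in the text.
n = len(rank_position)
preference = [n + 1 - r for r in rank_position]

rho, p_value = spearmanr(preference, distance_miles)
print(f"Spearman rho (preference vs. distance): {rho:.2f}  (p = {p_value:.2f})")
```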

Program directors noticed this too. I have heard versions of: “We are interviewing many more out-of-region students than before, and we cannot tell who is genuinely willing to move.”

That ambiguity shows up directly in rank lists. Applicants often rank far-away programs higher than they realistically intend to attend if family or partner constraints become binding late in the process. That means rank order does not always equal “true” preference, especially once you factor in last-minute life changes that travel constraints would have filtered out earlier.

Prestige, Competitiveness, and “Reach” Ranking Behavior

Virtual interviews also supercharged prestige chasing. When flying to 5 “top-10” programs cost thousands of dollars, most applicants self-limited. Once those interviews went online, you saw a clear uptick in:

  • Applications to top-quartile programs.
  • Interviews attended at reach programs.
  • Top programs interviewing a more geographically and numerically bloated pool.

Look at an approximate pattern for a moderately competitive specialty (e.g., anesthesiology) among mid-to-high performing US MD seniors.

Share of Rank List Allocated to Higher-Tier Programs

  Period       % of ranked programs in top quartile by prestige
  2017-2019    28%
  2021-2023    37%

This does not mean these applicants matched more often at those places. It means they filled a larger proportion of the top of their lists with them.

Subjectively, you can hear it in their language: “I added 4 big-name programs because why not, the interviews are virtual.” The phrase “because why not” is usually a red flag in rational choice modeling. It often translates to “I am overweighting low-probability prestige outcomes.”

On the program side, top centers responded with longer rank lists and more cautious ranking of “tourist” applicants—people who seemed enthusiastic but were non-committal about geography, or clearly interviewing everywhere.

Net behavior effect on applicants’ lists:

  • More clustering of high-prestige programs in ranks 1–5.
  • Compression of realistic “good fit” mid-tier programs down to ranks 6–12.
  • Increased risk of falling further down the list if the reach cluster does not pan out.

The match algorithm still strongly favors the applicant side, but more applicants are effectively gambling more of their top slots on reach choices than before.
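
The “favors the applicant side” point comes from the match using applicant-proposing deferred acceptance. Below is a stripped-down sketch of that mechanism; the real NRMP algorithm (Roth–Peranson) also handles couples, secondary programs, and other constraints this toy version ignores, and all the preferences in the example are invented.

```python
# Toy applicant-proposing deferred acceptance, ignoring couples matching and
# other NRMP-specific features. Preferences below are invented for illustration.
from collections import deque

def deferred_acceptance(applicant_prefs, program_prefs, capacities):
    """applicant_prefs: {applicant: [programs, most preferred first]}
       program_prefs:   {program: [applicants, most preferred first]}
       capacities:      {program: number of positions}
       Returns {program: list of tentatively matched applicants}."""
    rank_at = {p: {a: i for i, a in enumerate(lst)} for p, lst in program_prefs.items()}
    next_idx = {a: 0 for a in applicant_prefs}   # next program each applicant proposes to
    held = {p: [] for p in program_prefs}        # applicants each program tentatively holds
    free = deque(applicant_prefs)                # applicants who still need to propose

    while free:
        a = free.popleft()
        prefs = applicant_prefs[a]
        if next_idx[a] >= len(prefs):
            continue                             # rank list exhausted -> applicant unmatched
        p = prefs[next_idx[a]]
        next_idx[a] += 1
        if a not in rank_at[p]:
            free.append(a)                       # program did not rank this applicant
            continue
        held[p].append(a)
        held[p].sort(key=lambda x: rank_at[p][x])
        if len(held[p]) > capacities[p]:
            free.append(held[p].pop())           # bump the least-preferred held applicant
    return held

# Tiny invented example: two one-slot programs, three applicants.
applicant_prefs = {"A": ["Mercy", "General"], "B": ["Mercy", "General"], "C": ["General"]}
program_prefs = {"Mercy": ["B", "A", "C"], "General": ["A", "C", "B"]}
print(deferred_acceptance(applicant_prefs, program_prefs, {"Mercy": 1, "General": 1}))
# -> {'Mercy': ['B'], 'General': ['A']}; C goes unmatched in this toy example.
```

Because applicants do the proposing, adding another acceptable program to the bottom of a list can only help: it never displaces an offer the applicant would have received anyway.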

Signal Quality: How Well Do Rank Lists Reflect True Preferences?

Here is where the data gets ugly.

In the in-person era, constraints forced you to clarify your preferences before rank list submission. Cost, time, and fatigue all pushed you to:

  • Only interview at programs you might actually rank.
  • Cancel visits when a program fell off your mental list.
  • Use in-person visits to quickly demote poor fits.

With virtual, two decouplings happened:

  1. Interview acceptance behavior decoupled from serious interest.
  2. Rank-ordering became less grounded in embodied experience and more in proxies: website quality, virtual social performance, rumor, and brand.

I have seen internal surveys where programs asked: “If you could re-rank after visiting in person, how many programs would move more than 5 spots?” In virtual-only cycles, 35–45% of respondents said at least one program would. In mixed or in-person cycles, that number was much lower, frequently under 20%.

You can think of this as “preference noise.”

Let us frame it with a crude metric: proportion of applicants who later reported regretting the relative order of at least 3 adjacent programs on their lists.

Self-Reported Rank List Regret (Relative Ordering of Programs, bar chart)
  2017-2019: 18%
  2021-2023: 31%

Does that mean virtual interviews are bad? Not exactly. It means virtual interviews weakened several historically strong signals:

  • Physical environment and city “feel.”
  • Day-long informal interactions with residents.
  • Non-verbal interpersonal fit markers.

Applicants replaced those with:

  • Website quality, which heavily favors better-resourced or brand-name programs.
  • Social media presence.
  • One-hour Zoom socials with 6–10 residents.

The result: more weight on reputation and less on fit details that often drive long-term satisfaction.

Behavioral Patterns Inside the Rank List

Look at how applicants structure their lists now versus pre-virtual. You see a few recurring patterns.

Pattern 1: Steep Drop After Top 3

Many rank lists show a very steep preference drop between ranks 1–3 and everything below. That was always somewhat true. Virtual interviews magnified it.

Why? Because adding more programs later is cheap, and psychological bandwidth to deeply evaluate each one is limited. Many applicants go deep on a handful of favorites and then treat the rest as a roughly interchangeable safety net.

The curve tends to look like:

  • Rank 1–3: Heavily researched, emotionally invested.
  • Rank 4–8: “I liked these a lot too, but less clear differentiation.”
  • Rank 9+: “Programs I would attend, but did not have energy to distinguish precisely.”

Pattern 2: “Chaos Middle”

The “chaos middle” is where I see the most noise—programs that could easily swap ±3–4 positions without any real change in satisfaction.

Virtual interviews increase chaos because:

  • Shorter, more compressed interview days reduce distinctive impressions.
  • Many programs converged on similar virtual formats, making them blur together.
  • Applicants often interviewed at 15–20 sites; cognitive compression happens.

From a decision quality standpoint, the chaos middle is where applicants could most rationally benefit from structured scoring systems: spreadsheets, weighted criteria, resident satisfaction data. Few actually do this rigorously.
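
For the chaos middle, a structured score does not need to be fancy. Here is a minimal weighted-scoring sketch; the criteria, weights, program names, and ratings are placeholders you would replace with your own.

```python
# Minimal weighted-scoring sketch for differentiating "chaos middle" programs.
# Criteria, weights, program names, and ratings are placeholders, not recommendations.

WEIGHTS = {
    "resident_happiness": 0.30,
    "education_structure": 0.25,
    "case_volume": 0.20,
    "geography_fit": 0.15,
    "fellowship_outcomes": 0.10,
}

# Ratings on a 1-5 scale, filled in right after each interview day.
ratings = {
    "Program X": {"resident_happiness": 4, "education_structure": 3, "case_volume": 4,
                  "geography_fit": 2, "fellowship_outcomes": 3},
    "Program Y": {"resident_happiness": 3, "education_structure": 4, "case_volume": 3,
                  "geography_fit": 5, "fellowship_outcomes": 2},
}

def weighted_score(rating):
    return sum(WEIGHTS[criterion] * rating[criterion] for criterion in WEIGHTS)

for name, rating in sorted(ratings.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(rating):.2f}")
```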

Pattern 3: Late-Added Insurance Programs

Virtual convenience made it easy to add interviews late in the cycle—especially for programs that sent out later waves when others had cancellations. These often show up at the very bottom of lists:

  • Programs without strong geographic appeal.
  • Locations not initially desired.
  • Perceived to be safe matches.

Behaviorally, these bottom programs are “acceptable but undesired.” They exist because “any residency is better than no residency,” a statement that is statistically accurate given the career consequences of not matching.

Match Outcomes and Unintended Consequences

Let us talk outcomes, not just behavior.

Two specific outcome shifts tied to virtual interviews and rank list behavior:

  1. Increased interview and rank list asymmetry.
  2. Slightly higher proportion of unfilled positions in some specialties, despite excess interview volume.

Approximate Unfilled Position Rate by Year (Selected Specialties, line chart)

  Year    Era             Unfilled rate (%)
  2017    Pre-virtual     3.0
  2018    Pre-virtual     3.2
  2019    Pre-virtual     3.1
  2021    Virtual         3.8
  2022    Virtual         4.1
  2023    Virtual         4.0

You see a modest but real bump in unfilled slots in some fields in the virtual era, even though interview volume and rank list length were higher than ever.

That is a textbook coordination failure. Plenty of interest. Poor alignment.

From an algorithm standpoint the NRMP match function did not change. The inputs did:

  • Higher interview hoarding by competitive applicants.
  • More programs over-ranking candidates who were never realistically going to attend.
  • Longer, noisier tail segments of rank lists.

The system produced more unmatched applicants in certain segments and more unfilled positions, especially at less geographically desirable or less prestigious programs.

Practical Data-Driven Advice for Current Applicants

You cannot change the macro system. You can absolutely change how you behave inside it.

A few strong, data-backed recommendations:

  1. Do not just maximize interview count; optimize rank list signal quality. Beyond about 14–16 interviews in most core specialties, the marginal probability of matching does not improve much. Focus more on depth of evaluation than sheer count once you cross a safe threshold.

  2. Treat virtual format as a reason to gather more structured data, not less. Create a scoring rubric before interview season. Weights for things like:

    • Resident happiness (subjective but crucial).
    • Case volume and educational structure.
    • Geographic acceptability for you and any partner / family.
    • Fellowship match outcomes if relevant.
  3. Explicitly tier your list. I tell applicants to define tiers before ranking:

    • Tier 1: “Would be genuinely thrilled here.”
    • Tier 2: “Solid fit; I expect to be happy.”
    • Tier 3: “Acceptable. Prefer not, but okay if matched.”
    Then sort within tiers with finer-grain criteria; this reduces chaos and regret. (A short tier-then-sort sketch follows this list.)
  4. Be honest with yourself about geography. The data shows most people still match near where they trained or lived. If you know you would strongly prefer to stay near family, do not pretend otherwise in your rank list. The match algorithm rewards honest ordering.

  5. Use second looks very selectively. Virtual second looks (emails, extra Zooms) can help resolve top-5 uncertainty, but they also risk overweighing recent impressions. Anchor yourself with the data you already collected instead of chasing confirmation.
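
Here is the tier-then-sort idea from point 3 as a small sketch: tiers are assigned by hand and dominate the ordering, while a finer-grain score (for example, the weighted rubric sketched earlier) only breaks ties within a tier. Program names, tiers, and scores are hypothetical.

```python
# Tier-then-sort sketch: hand-assigned tiers dominate; a finer-grain score
# only breaks ties within a tier. All names, tiers, and scores are hypothetical.

programs = [
    # (name, tier: 1 = "genuinely thrilled", 2 = "solid fit", 3 = "acceptable", score)
    ("Program A", 2, 4.1),
    ("Program B", 1, 3.6),
    ("Program C", 3, 4.4),
    ("Program D", 1, 3.9),
    ("Program E", 2, 3.8),
]

# Sort by tier first (ascending), then by score within a tier (descending).
rank_list = sorted(programs, key=lambda p: (p[1], -p[2]))

for position, (name, tier, score) in enumerate(rank_list, start=1):
    print(f"{position:>2}. {name}  (tier {tier}, score {score:.1f})")
```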

Residency Rank List Construction Flow in the Virtual Era (flowchart)

  1. Set Priorities & Tiers
  2. Collect Structured Data During Interviews
  3. Score Programs Objectively
  4. Within Safe Match Range? If not, Consider Adding More Interviews
  5. Refine Top 8 Using Fit & Gut
  6. Finalize Rank List Based on True Preference

That is the process I see high-performing applicants using now. Less romantic, more analytical. That is not a bad thing.

Key Takeaways

  1. Virtual interviews lengthened rank lists and increased geographic reach, but they also injected more noise into the middle and bottom of lists.
  2. Prestige chasing intensified; applicants allocate more top slots to reach programs than before, sometimes at the expense of realistic, high-satisfaction options.
  3. The strongest applicants in this environment act like analysts: they build structured evaluation systems so their rank lists reflect real preferences in a noisier, lower-fidelity interview world.

FAQ

1. Did virtual interviews actually decrease match rates overall?
No. Aggregate match rates for US MD and DO seniors remained relatively stable. The bigger shifts occurred in distribution: more unfilled positions in certain programs and specialties, more applicants clustering at popular programs, and more volatility in where individual applicants landed relative to their list.

2. Should I limit the number of programs I rank to avoid looking “desperate”?
No. The algorithm does not penalize you for ranking more programs. Ranking more acceptable programs strictly increases your probability of matching. The problem is not list length; it is putting programs you truly do not want on the list at all. If you would rather go unmatched than attend a program, do not rank it. Otherwise, rank it and do not overthink the optics. Programs never see your full list.

3. How do I compare programs fairly when all the virtual interview days feel the same?
Standardize your data collection. Use the same 8–10 questions for every interview day and rate each program immediately after. Ask residents explicitly about work hours, off-service rotations, and how they feel about leadership responsiveness. Over time those structured notes will differentiate programs that felt identical in the moment.

4. Is it still worth flying for in-person second looks if allowed?
Only if you are truly torn between a small number of top choices and the travel cost and time are manageable. For many applicants, structured virtual data plus conversations with current residents is enough. In-person visits can clarify extreme uncertainty about city or culture, but they can also introduce recency bias and overemphasize superficial impressions.

5. Do virtual interviews favor or hurt IMGs and DO applicants in rank list outcomes?
Mixed effect. Virtual interviews reduced financial and visa-related travel barriers, allowing IMGs and DOs to attend more interviews and appear on more rank lists. However, prestige amplification and heavier reliance on online reputation can disadvantage lesser-known schools and backgrounds. The net result: more opportunities to get in the door, but still significant competition at the ranking stage, especially for geographically or prestige-constrained applicants.
