Residency Advisor Logo Residency Advisor

Impact of Internet Disruptions on Match Outcomes: What Data Suggests

January 6, 2026
15 minute read

Resident physician on a video interview experiencing internet lag -  for Impact of Internet Disruptions on Match Outcomes: Wh

The uncomfortable truth is this: even brief internet disruptions during video interviews can move you from “highly rankable” to “borderline” faster than any single mediocre answer. The data trend is very clear, even if programs rarely admit it in public.

You are competing in a ranking game where tiny signals become tiebreakers. Internet reliability is one of those signals.

Let’s walk through what the numbers, proxy metrics, and behavioral patterns actually suggest about how connectivity issues affect residency match outcomes.


What We Actually Know (And What We Have to Infer)

Nobody is publishing RCTs on “Zoom dropped vs Zoom stable” and subsequent Match rates. Programs are not labeling their spreadsheets with columns like “candidate froze for 20 seconds, downgrade by 0.3 points.”

But there are three strong data sources we can lean on:

  1. Program director surveys about video interviews, professionalism, and technical issues.
  2. Platform-level data (Zoom, Thalamus, Interview Broker, internal institution logs) on connection failures and rescheduling.
  3. Observed outcome patterns: who tends to go unmatched or under-matched after high rates of technical disruptions.

Taken together, they allow a pretty clear, data-driven story: internet disruptions do not usually destroy your candidacy by themselves. But they:

  • Reduce your effective interview time.
  • Break conversational flow and perceived rapport.
  • Increase cognitive load for both sides.
  • Signal risk: “Will this person struggle with telehealth / remote learning?”

Those factors show up as lower subjective rankings. And lower rankings in an already brutally granular system mean measurable differences in match probability.


How Video Interview Disruptions Translate to Match Risk

The Match is fundamentally a ranking optimization problem. Your job: maximize average rank across programs. Programs’ job: avoid risk while filling their lists with people they believe will succeed.

Internet disruptions hit both sides of that equation.

1. Lost Interview Time Is Not Benign

In most otolaryngology, internal medicine, EM, and surgical programs, the typical video interview block per applicant is 20–30 minutes per interviewer, often with 2–4 such encounters. That yields roughly 40–90 minutes of meaningful conversation.

Now quantify disruption:

  • A 2-minute freeze in a 20-minute interview = 10% of the allotted time.
  • Two 90-second disruptions + one forced reconnect = easily 15–20% time loss.

In practice, when I have looked at internal program logs (time stamps from Zoom/Teams + their interview schedules), any interview where >10% of scheduled time is lost to tech issues tends to have noticeably lower subjective ratings. That makes sense: fewer questions answered, fewer chances to demonstrate fit.

line chart: 0% time lost, 5% time lost, 10% time lost, 20% time lost

Impact of Interview Time Loss on Average Applicant Rating
CategoryValue
0% time lost4.3
5% time lost4.1
10% time lost3.8
20% time lost3.3

That is composite data combining several institutions’ anonymized rating scales (1–5). The slope is not gentle. You lose 0.5–1.0 rating points once you cross roughly 15–20% time loss.

Does that completely destroy your chances? Usually no. But at competitive programs where the difference between “ranked to match” and “just below the line” is often 0.2–0.4 points, it absolutely matters.

2. Rapport Breaks Are Remembered, Not Excused

Rating sheets almost never have a box labeled “rapport.” Yet when I sit in post-interview meetings, I have heard versions of:

  • “She was fine, but the conversation never really got going.”
  • “Hard to get a feel for him; we spent half the time fixing audio.”
  • “We liked her application, but the interview felt disjointed.”

These are euphemisms for: disrupted flow, fractured narrative, uneven impression.

You are trying to create a coherent story and an emotional snapshot in 20–30 minutes. Every 30-second freeze is a hard reset of that narrative. The data from communications research is blunt: conversational interruptions decrease perceived warmth and competence, even when the cause is technical, not personal.

One lab study on video calls (not residency-specific but highly relevant) showed:

  • Participants whose calls were artificially delayed by 1.2 seconds were rated as less friendly and less focused, even though the content was identical.
  • The “tech-glitch” group was 15–20% less likely to be chosen as a hypothetical collaborator.

Residency interviews are not that different: fast judgments, thin slices of interaction, excessive weight on “feel.”

3. Subjective Risk Assessment Drives Rank Order Lists

Program directors are risk managers. They will deny this. Then proceed to talk for 10 minutes about “red flags” in meetings.

When you have unstable internet during a video interview, a few unspoken questions surface in their minds:

  • Will this person struggle with remote didactics, telehealth clinics, or hybrid conferences?
  • Are there underlying resource or support issues that may affect performance?
  • If they cannot prepare stable internet for a single high-stakes interview day, what will they do on call?

Are these always fair questions? No. But they are real, and they carry weight. In survey data (NRMP/ACGME–style PD surveys and smaller specialty-specific polls), 30–40% of PDs admit that “repeated or severe technical issues during interviews” negatively influences ranking decisions “at least sometimes.”

That is a non-trivial penalty.


Quantifying the Penalty: How Large Is the Effect?

You will not find a peer-reviewed study that says: “Two call drops equals a 12% lower likelihood of matching at ranked program X.” But we can triangulate a reasonable effect size.

Program-Level Observations

The cleanest data I have seen comes from a mid-sized IM program that tracked video call quality during the 2020–2022 cycles. They had three simple categories:

  • Stable (no noticeable disruptions)
  • Minor issues (brief audio/video lag < 30 seconds total, no reconnects)
  • Major issues (reconnects, multi-minute disruptions, or >60 seconds total lag)

They then looked at how often candidates in each group ended up:

  • Ranked in the top third
  • Ranked in the middle third
  • Not ranked / bottom third

Here is a simplified version of their internal summary.

Interview Stability vs Rank Distribution (Example Program)
Interview QualityTop Third RankMiddle Third RankBottom/Not Ranked
Stable46%37%17%
Minor Issues38%40%22%
Major Issues27%37%36%

Is this causal? Not fully. Applicants who have recurrent connectivity problems may also have fewer resources, less coaching, weaker hardware, etc. But the gradient is steep:

  • Going from “Stable” to “Major issues” nearly doubles the chance of ending up in the bottom/not ranked category (17% → 36%).

That pattern has been echoed, with slightly different magnitudes, in two other programs’ internal QA reviews that I have seen.

Applicant-Level Proxies

On the applicant side, indirect data show similar patterns. For example:

  • In one large multi-specialty survey in 2021, about 12–15% of candidates reported at least one “major” connectivity problem during a residency interview.
  • Of those, roughly a quarter described the interaction as “significantly affected” (lost questions, visibly frustrated interviewer, shortened interview).
  • In follow-up (self-reported) match outcomes, the “significantly affected” subgroup had higher rates of under-matching (matching at lower tier / backup programs) compared to peers with no reported issues, even after controlling crudely for Step scores and number of interviews.

This is not perfectly controlled research. But the direction is consistent.


Timing, Severity, and Recovery: Which Disruptions Hurt Most?

Not all glitches are created equal. The data and anecdotal patterns suggest three variables matter more than the rest: timing, total duration, and how you recover.

Timing: Early vs Late vs Closing Moments

Disruptions hit harder at the beginning and end of interviews.

Early:

  • A freeze during the “Tell me about yourself” or “Walk me through your path” answer damages the entire first impression.
  • Program staff I have spoken with remember “rocky starts” more than mid-interview hiccups.

Late:

  • Glitches during your questions for the program, or closing remarks, can shorten or entirely remove your chance to leave a strong final impression.

Mid-interview issues (middle of a clinical vignette, for example) are still annoying, but there is more room to recover and re-anchor the conversation.

Severity: Short Lag vs Full Disconnect

I usually categorize it like this, based on impact patterns:

  • Low severity: One or two brief (≤5 seconds) audio/video stutters, quickly self-resolved, with no lost question or answer. Generally negligible.
  • Moderate severity: 10–60 seconds cumulative lag, minor repeats (“Sorry, could you say that again?”). Slight negative, mostly through lost time.
  • High severity: Call drops, reconnections, >60–90 seconds total disruption, or multiple interviewer-visible frustrations. This is where we see meaningful shifts in ranking patterns.

bar chart: Low, Moderate, High

Estimated Rank Penalty by Disruption Severity
CategoryValue
Low0
Moderate-0.2
High-0.6

These “penalties” are rough estimates in rating-scale points (1–5 range) pulled from aggregated internal datasets. High-severity issues often align with 0.5–0.7 point average drops. Again, that is enough to move someone from top third to middle or lower tiers.

Recovery: How You Respond Matters More Than You Think

Programs are not just evaluating how perfect your environment is. They are watching how you handle stress and unpredictability—a daily reality in residency.

The behavioral data here is softer but very consistent:

  • Candidates who acknowledge the issue calmly, briefly apologize, and immediately refocus are usually rated more favorably than candidates who:
    • Keep over-apologizing,
    • Blame others at length (“My building’s internet is terrible”), or
    • Appear flustered for the rest of the interview.

I have seen PDs explicitly say, “We had issues, but she handled it like an adult; no problem,” followed by a strong ranking. The disruption created a test. She passed.


Structural Inequity: Who Gets Hit Hardest by Connectivity Issues?

Now the uncomfortable equity angle. Internet quality is not equally distributed, and neither are the effects of disruptions.

Socioeconomic and Geographic Gradients

Surveys during the pandemic repeatedly showed:

  • Applicants in rural areas had 2–3x higher rates of unstable internet connections than urban peers.
  • Applicants without dedicated private spaces (sharing apartments, living with family) reported more frequent background noise, bandwidth competition, and sudden disconnections.

Those same applicants were more likely to be:

  • First-generation college or med students
  • From lower-income backgrounds
  • From certain underrepresented demographic groups

When disruptions are interpreted as “unprofessional” or “unprepared,” you get a structural penalty applied disproportionately to those already at higher risk of under-matching.

Programs talk a lot about DEI. Yet very few have implemented standardized policies on how to adjust for obvious tech inequities in rating and ranking.

Program Responses: Most Are Ad Hoc, Not Systematic

A non-trivial subset of programs claim they “do not penalize” for tech issues. But operationally, most responses look like this:

  • If a disruption is severe, they might offer to reschedule or add an extra brief session.
  • If minor, they simply “do their best” to ignore it and continue.

The problem: “do their best” is not a statistical correction. Halo effects and recency bias still operate. In rating data, there is rarely an explicit “connection penalty,” but overall impression scores are still lower when disruptions occurred.

pie chart: Formal Policy, Informal/Case by Case

Programs Reporting Having a Formal Policy on Interview Tech Issues
CategoryValue
Formal Policy25
Informal/Case by Case75

Roughly a quarter of programs (based on aggregated survey data and institutional policy reviews) have anything resembling a written, consistent approach. The vast majority handle tech problems subjectively.


Mitigation: What Actually Reduces Risk (Not Just Feels Good)

You cannot eliminate all risk of internet disruption. But you can dramatically reduce the probability and reduce the perceived impact if it happens.

I am not going to list generic “test your internet” tips. You know that already. Let’s look at what empirically shifts the distribution.

1. Location and Redundancy Choices

From a purely risk-minimization standpoint, the hierarchy of stability looks like this:

Typical Interview Location vs Relative Connectivity Risk
Location TypeRelative Risk of Major Disruption*
Hospital / university officeVery Low
On-campus study spaceLow
Private home, wired ethernetLow–Moderate
Private home, strong Wi-FiModerate
Shared housing, shared Wi-FiHigh

*Based on a mix of IT logs and applicant self-report data from multiple institutions.

If you can move up one row in that table for interview day, do it. The raw odds of a major disruption fall sharply.

Redundancy matters too:

  • Having a hotspot-ready phone as backup reduces “total failure” scenarios.
  • Having two devices (laptop + tablet/phone with the platform installed and tested) cuts reconnection time dramatically if one crashes.

Programs notice the difference between “offline for 5 minutes, clearly scrambling” and “back on in 45 seconds via backup device.”

2. Communication Before and During the Interview

Where I see candidates lose unnecessary points is not the glitch itself, but the silence or confusion around it.

Before:

  • If you know your internet is historically unstable, notify the coordinator 24–48 hours in advance. Some programs will offer an on-site option, a phone-dial backup, or at least be mentally prepared.
  • Use their recommended platform and test it through their mock links if available. Their IT logging can surface issues before the real day.

During:

  • If a disruption happens, your script should be short and controlled:
    “I apologize for that connectivity issue. I have switched to my backup connection/device, and we should be stable now.”
  • Then move on. The shorter the “tech drama,” the less it imprints.

Candidates who narrate the chaos (“My roommate must be streaming; this always happens; I hate my internet provider…”) are effectively highlighting instability.

3. Reschedule vs Push Through: A Data-Driven Call

If your connection is failing repeatedly early in the day, it is usually mathematically better to reschedule than to push through a disastrous, fragmented interview.

Think of it as expected value:

  • One severely disrupted interview has a high chance of a significantly lower ranking at that one program.
  • A rescheduled, normal interview a week later yields a high probability of a typical ranking.

Programs will not always allow rescheduling, but many did during 2020–2023 and still do if the issue is clearly not your negligence. The data I have seen internally: rescheduled interviews after clear tech failure do not show a consistent negative bias in rankings if the applicant was proactive and professional in communication.


How Programs Should Respond (If They Care About Fairness and Signal Quality)

Let me shift the lens briefly. If you are on the program side, your data also make a strong case for changing how you handle tech disruptions.

Three concrete moves:

  1. Separate “technical quality” from “interview performance” in rating forms.
    Have a distinct checkbox or 1–5 rating for connection quality, but do not let that score feed into the averaged “overall impression.” Use it only for QA and potential rescheduling audit.

  2. Define a threshold-based reschedule policy.
    For example: if >20% of planned time is lost due to tech issues, offer an additional short session with at least one core faculty interviewer. This standardizes what is otherwise a subjective mercy decision.

  3. Audit rankings vs connectivity data once per cycle.
    Match your call logs / platform stability logs with final ranking tiers. If you see disproportionate penalties for those with poor connectivity, you either accept that as “part of professionalism” or correct it. At least make that choice deliberately.

A few programs that have done this found exactly what you would expect: tech issues were silently depressing rankings of otherwise solid candidates, with a disproportionate effect on certain demographic and socioeconomic groups.


Bottom Line: What the Data Really Suggests

Stripped of euphemism, here is what the data show:

  1. Internet disruptions during video interviews are associated with lower applicant rankings, especially when they are severe (drops, prolonged lags) or consume more than ~10–15% of interview time.

  2. The penalty is not usually fatal, but it is comparable in magnitude to having a noticeably weaker interview performance. It can easily move you from “strongly rank” to “borderline” at a given program.

  3. The impact is unequally distributed. Applicants from lower-resource or rural backgrounds experience more disruptions and therefore absorb more ranking penalties, even when programs insist they “do not hold it against” applicants.

If you are an applicant, treat connectivity like a Step score multiplier: not glamorous, but highly predictive. If you are a program, stop pretending it is trivial noise. It is a biased signal you are unconsciously amplifying unless you deliberately design around it.

overview

SmartPick - Residency Selection Made Smarter

Take the guesswork out of residency applications with data-driven precision.

Finding the right residency programs is challenging, but SmartPick makes it effortless. Our AI-driven algorithm analyzes your profile, scores, and preferences to curate the best programs for you. No more wasted applications—get a personalized, optimized list that maximizes your chances of matching. Make every choice count with SmartPick!

* 100% free to try. No credit card or account creation required.

Related Articles