
What NRMP and Specialty Surveys Reveal About Post-Interview Contact

January 6, 2026
14 minute read


The data is clear: applicants systematically misjudge how much post‑interview contact matters, and programs quietly exploit that confusion.

If you want to play this phase of the Match like an adult and not like a superstition-driven applicant, you have to anchor your behavior in what the numbers say. Not what “everyone in my group chat” claims.

Let’s walk through what NRMP and specialty society data actually show about post‑interview contact, and then translate that into a precise, low‑risk strategy you can execute in under an hour per program.


1. What the NRMP Data Actually Says (Not the Myths)

The NRMP’s Program Director Survey and Applicant Survey do not track “love letters” and “you are ranked to match” emails as separate categories. But they do quantify:

  • How much post‑interview communication influences rank lists.
  • How often applicants and programs change ranks based on this phase.
  • What really drives ranking decisions compared to all the noise.

Across multiple survey cycles, the pattern is stable: post‑interview contact is a weak to moderate factor, not a primary driver.

Approximate Weight of Key Ranking Factors (NRMP PD Survey)
  Factor                                     | Typical Importance Rating*
  Interview performance                      | Very high
  Perceived overall fit with program         | Very high
  Letters of recommendation                  | High
  USMLE/COMLEX scores                        | High
  Post-interview communication (applicant)   | Low–moderate
  Post-interview communication (program)     | Low–moderate

*“Importance” here reflects the proportion of program directors rating an item 4–5 on a 5‑point importance scale in NRMP Program Director Surveys.

In other words:

  • Interview + fit + credentials = core drivers.
  • Post‑interview contact = tiebreaker / minor nudge at best.

Correlation between applicant metrics and match outcomes

NRMP’s “Charting Outcomes in the Match” also shows that Step scores, number of contiguous ranks, and AOA/research status correlate strongly with matching. There is no measurable effect size for “sent a great thank‑you note.”

When you see people say, “My thank‑you note got me my #1,” that is anecdotal noise. The structured data does not support that narrative.


2. What Programs Report Doing After Interviews

Programs are not uniform, but the distribution is predictable. Surveys from the NRMP and specialty organizations (e.g., EM, IM, derm, ortho) point to the same pattern.

How often programs contact applicants

Several specialty-specific surveys and NRMP polling over the last decade show rough ranges like these:

Programs Reporting Any Post-Interview Outreach to Applicants (approximate % of programs)

  No outreach         | 45
  Limited outreach    | 40
  Frequent outreach   | 15

Interpretation:

  • ~40–50% of programs do no applicant-specific outreach after interviews. Nothing. Zero.
  • Another ~35–45% send limited outreach (a few personalized messages to top candidates, occasional “you’re ranked to match” emails, or invitations to optional second looks).
  • A minority (~10–20%) engage in frequent individualized contact. These are the ones generating most of the applicant rumors.

So if you do not hear anything? That is completely within the statistical norm. Silence does not imply you are low on their list.

Does applicant contact change rank lists?

When surveyed, program directors typically report:

  • Only a minority report “occasionally” or “often” changing rankings based on post‑interview emails or calls.
  • Many explicitly state they never change ranks because of applicant outreach.

Let’s frame that in useful numbers.

Assume:

  • 100 programs in a specialty.
  • 40 never contact applicants and ignore outreach.
  • 45 contact minimally; a few might adjust ranks.
  • 15 contact frequently and sometimes adjust ranks.

Result:

  • Maybe 20–30% of programs are meaningfully influenced by post‑interview communication.
  • Even in those, any effect is small: a slight bump up or down, usually among clustered mid‑tier candidates.

Bottom line: the probability that your email alone vaults you from “unlikely to rank” to “ranked to match” is extremely low. The more realistic effect is a shift of a few positions within a band of similar applicants.
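
If you want to sanity-check that back-of-the-envelope estimate, here is a minimal sketch in Python. The group sizes come from the scenario above; the per-group adjustment rates are invented assumptions, not NRMP figures.

    # Back-of-the-envelope: of 100 programs, how many might adjust any rank
    # because of post-interview contact? Adjustment rates are assumptions.
    groups = {
        "never contact / ignore outreach": {"count": 40, "adjust_rate": 0.0},
        "minimal contact":                 {"count": 45, "adjust_rate": 0.2},  # "a few might adjust"
        "frequent contact":                {"count": 15, "adjust_rate": 0.8},  # "sometimes adjust"
    }

    influenced = sum(g["count"] * g["adjust_rate"] for g in groups.values())
    print(f"Programs plausibly influenced: ~{influenced:.0f} of 100")
    # Prints "~21 of 100", in line with the 20-30% estimate above.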


3. What Applicants Report – And Where Perception Breaks

NRMP Applicant Surveys and specialty‑specific surveys show a sharp mismatch between perceived and actual importance of post‑interview contact.

Applicants routinely:

  • Overestimate how often programs send “you will match here” messages.
  • Underestimate how many programs are formally prohibited from such communication by institutional or specialty guidelines.
  • Misinterpret “we liked you a lot” as “we will rank you to match.”

How often do applicants change rank lists based on contact?

When asked, a substantial fraction of applicants report:

  • Changing their rank order list due to:
    • A particularly enthusiastic email from a program.
    • Extra attention from a PD or chair.
    • Silence from a program they liked.

But when you overlay that against match outcome data, it becomes obvious: subjective feelings from emails are not predictive. Interview experience and structural program qualities dominate outcomes.

I have seen more than one applicant move a stable, high-quality program down because “they never reached out” and then end up unmatched or in a clearly worse fit. They used post‑interview “vibes” as if they were data. They were not.


4. Specialty Surveys: Where the Culture Actually Differs

Specialty organizations periodically survey program directors and applicants about communication norms. The exact numbers shift by cycle, but the relative ranking of “how much this matters” is remarkably stable.

Here is a synthesized snapshot based on typical patterns from the last decade:

Relative Role of Post-Interview Contact by Specialty (Qualitative)
  Specialty            | Typical Competitiveness | Post-Interview Contact Role
  Internal Medicine    | Moderate                | Low–moderate
  Family Medicine      | Lower–moderate          | Low
  Emergency Medicine   | High                    | Historically moderate
  Dermatology          | Very high               | Moderate (but formal rules)
  Orthopedic Surgery   | Very high               | Moderate
  Psychiatry           | Rising                  | Low–moderate

Key points from specialty data and PD anecdotes:

  • Competitive specialties (derm, ortho, some surgical subspecialties)
    More likely to send targeted outreach to top candidates. But many are now moving to no‑contact or standardized policies because of ethical and fairness concerns.

  • Primary care (FM, peds)
    Much more likely to emphasize the interview day itself and structured scoring systems. Post‑interview contact is often acknowledged but not heavily weighted.

  • Emergency medicine
    Historically had a culture of signaling strong interest both ways. But recent cycles show a trend toward more standardized communication and explicit bans on “you will match here” language.

Overlay all of that and one consistent picture emerges: yes, some PDs still use contact as an extra signal, but the structural trend is toward de‑emphasizing it to reduce gamesmanship.


5. NRMP Rules and Program Policies: What You Cannot Rely On

Now the hard constraint: the Match Participation Agreement.

  • Programs cannot ask you how you will rank them.
  • Applicants cannot be required to reveal rank intentions.
  • Programs cannot tie ranking promises to conditions or demand commitments in return (though many still hint or imply).

Many institutions and specialties now layer on stricter internal rules:

  • No individualized post‑interview contact allowed.
  • Only standardized, batch communications (e.g., update emails to all interviewed applicants).
  • Residents can chat informally but are coached not to signal rank decisions.

Post-Interview Communication Constraints (summary):

  • NRMP rules: no asking about rank order; no coercive ranking promises.
  • Institutional policies: standardized batch emails only; bans on “ranked to match” phrasing.
  • Specialty guidelines: discourage misleading post-interview contact.

So any time you think, “They did not reach out, maybe they hate me,” you are ignoring a large, growing subset of programs that are literally not allowed to email you anything meaningful.


6. What the Data Implies About Thank‑You Notes

Let’s be blunt: thank‑you emails almost never hurt you, and they almost never dramatically help you. The role is marginal, but not zero.

Surveys of program directors and coordinators show patterns like:

  • Many PDs do not systematically read or track thank‑you messages.
  • A non-trivial subset uses them as:
    • A weak signal of professionalism and communication skills.
    • A way to remember borderline applicants more clearly.
  • Very few formally score or log them as part of ranking.

If we assign crude probabilities (illustrative, not literal NRMP numbers):

Estimated Impact of Thank-You Emails on Rank Position (illustrative %)

  No effect         | 70
  Slight positive   | 25
  Slight negative   |  5

Interpretation:

  • About 70% of the time, your thank‑you email does nothing measurable.
  • Maybe 20–25% of the time, it nudges you slightly upward among peers with similar scores / interviews.
  • In 5% of cases, poor execution (too long, unprofessional, pushy, policy-violating) can annoy a PD and nudge you down.
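
As a rough expected-value check on those percentages, here is a toy calculation. The probabilities mirror the illustrative 70/25/5 split above; the rank-position shifts are invented assumptions, not survey data.

    # Toy expected-value estimate for sending one short thank-you email.
    # Probabilities follow the illustrative split; position shifts are assumptions.
    outcomes = [
        ("no effect",       0.70,  0.0),
        ("slight positive", 0.25, +1.0),   # assumed: up ~1 spot among similar applicants
        ("slight negative", 0.05, -2.0),   # assumed: a sloppy or pushy note costs ~2 spots
    ]

    expected_shift = sum(prob * shift for _, prob, shift in outcomes)
    print(f"Expected rank-position change: {expected_shift:+.2f}")
    # Prints "+0.15" positions: a small, cheap nudge, nothing more.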

So the rational strategy is obvious:

  • Send brief, clean, non-needy messages.
  • Target them to your interviewer(s) + program coordinator.
  • Do not ask about how they will rank you. Ever.

7. Interest Signals: Do “You’re My #1” Emails Work?

Applicants love to overvalue this. The data does not support that enthusiasm.

Specialty and PD survey themes:

  • Some PDs say explicit “I will rank you #1” or “Top 3” emails from applicants can positively influence rank, but almost always:

    • Only when the applicant is already in a competitive band.
    • Only to move them a few spots up, not from off-list to match.
  • Many PDs either:

    • Ignore these declarations because “everyone says that.”
    • Discount them because applicants sometimes do not follow through.

When asked how often they believe applicants’ statements about rank intentions, PD responses cluster around “sometimes” and “rarely.”

So, as a data-driven approach:

  • If a program is clearly your first choice, one concise, honest “you are my #1” note is reasonable, but:

    • Only if not prohibited by program policy.
    • Only if you are comfortable not sending that same statement elsewhere; if PDs catch applicants lying, it backfires.
  • For all other programs, the smarter signal is: “I am very interested and would be excited to train here,” which is truthful and non-committal.


8. A Data‑Aligned Playbook: How to Follow Up Efficiently

Here is the part you actually care about: what to do, grounded in the numbers and rules.

Step 1: Respect explicit program instructions

The single strongest “rule” in this phase is: follow directions. Programs notice.

  • If the program explicitly says “no post‑interview contact,” then do not send a thank‑you. The risk of being seen as disregarding policy outweighs any marginal benefit.
  • If the program allows but does not require contact, you can send one short note per interviewer.

Breaking stated policies is one of the few clear ways post‑interview behavior can actually hurt you.

Step 2: Use a standardized, tight thank‑you structure

Keep it to 3–5 sentences. This is about professionalism, not romance.

Example structure that aligns with how PDs scan email:

  1. Subject: “Thank you – [Your Name], [Specialty] Interview [Date]”
  2. Sentence 1: Thank them for their time.
  3. Sentence 2–3: Reference one specific thing from the conversation or program that resonated with your interests or values.
  4. Sentence 4: State genuine interest (“I would be excited to train at [Program]”).
  5. Sentence 5 (optional): Brief closing.
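
Purely as an illustration of that structure, here is one way such a note might come together. The applicant, interviewer, program, and specific detail below are invented placeholders, not a script to copy verbatim.

    # Illustrative only: the 3-5 sentence structure above, filled with invented placeholders.
    applicant   = "Jane Doe"                          # placeholder name
    interviewer = "Dr. Smith"                         # placeholder interviewer
    program     = "University Hospital IM Residency"  # placeholder program
    detail      = "the residents' point-of-care ultrasound curriculum"  # invented detail

    subject = f"Thank you – {applicant}, Internal Medicine Interview [Date]"
    body = (
        f"Dear {interviewer},\n\n"
        f"Thank you for taking the time to meet with me on my interview day. "   # sentence 1
        f"I especially enjoyed our conversation about {detail}, which fits my "  # sentences 2-3
        f"interest in bedside teaching. "
        f"I would be excited to train at {program}. "                            # sentence 4
        f"Thank you again, and best wishes for the rest of the season.\n\n"      # sentence 5
        f"Sincerely,\n{applicant}"
    )
    print(subject, body, sep="\n\n")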

You are not trying to “sell” your application again. They already scored your interview.

Step 3: Differentiate between “polite follow‑up” and “interest signaling”

Polite baseline for every program (unless banned):

  • Thank‑you note to each interviewer.
  • One thank‑you or confirmation note to the program coordinator.

Extra signals only where justified:

  • For your clear top 1–3 programs, a separate, shorter note to PD or APD a bit later in the season restating interest is reasonable.

You are aligning your effort with expected ROI. Spending 45 minutes tweaking a long “love letter” for program #9 on your list is not a good use of time.


9. Interpreting Program Messages: What They Probably Mean

You will get vague, enthusiastic, even confusing notes from programs. The data and policy context help you decode them.

Common message patterns and their realistic meaning:

  1. “We enjoyed meeting you and think you would be a great fit here.”
    Translation: You are in the broad “we liked this person” bucket. Could be ranked anywhere from top to mid‑list.

  2. “You will be ranked highly” / “We plan to rank you to match.”
    Translation: Depends on culture. In some specialties, this means top 5. In others, it is used far more loosely. Because programs routinely interview and signal more applicants than they can rank highly, you cannot treat this as a binding signal.

  3. Silence.
    Translation: Either policy prohibits contact, they use standardized emails only, or they simply do not prioritize outreach. You could still be their #2.

Remember: NRMP rules and institutional policies prevent you from reliably inferring rank position from email tone.


10. Time Allocation: Where Post-Interview Effort Should Go

This is where the “data analyst” hat actually matters. You have limited time and cognitive bandwidth late in the season. The question is not “Is there any benefit to more contact?” but “What is the marginal benefit compared with other tasks?”

Let’s approximate relative returns per hour:

Relative Value of Post-Interview Tasks (illustrative return per hour of effort, 0–100 scale)

  Polished rank list analysis              | 90
  Preparing for intern year                | 80
  Targeted follow-up emails                | 40
  Extended love letters & multiple calls   | 10

Interpretation:

  • Thoughtful rank list construction (researching programs, geography, training volume, fellowship paths) has extremely high impact on your long‑term outcome.
  • Preparing for intern year (clinical review, logistics) also yields direct benefits.
  • Targeted follow‑up emails have moderate but limited upside.
  • Long, repetitive “I love you the most” campaigns are almost pure time sink.

If you want a quantitative rule of thumb:

  • Cap post‑interview communication work at ~5–10 hours total across all programs.
  • If you find yourself thinking of spending more than an hour on a single program’s post‑interview messaging, you are almost certainly over-investing.
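
To see what that cap looks like in practice, here is a quick sketch. The interview count and per-task minutes are assumptions chosen for illustration, not recommendations.

    # Illustrative post-interview time budget; all inputs are assumptions.
    interviews          = 14   # assumed number of interviews
    minutes_per_program = 30   # thank-you notes + brief note-keeping per program
    late_interest_notes = 3    # short late-season notes to top-choice programs
    minutes_per_note    = 30

    total_hours = (interviews * minutes_per_program + late_interest_notes * minutes_per_note) / 60
    print(f"Estimated total: {total_hours:.1f} hours")
    # Prints "8.5 hours", inside the suggested 5-10 hour cap.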

11. A Minimal, Rational Strategy You Can Copy

Pulling all of this together into a concrete plan:

  1. Immediately after each interview day:

    • Jot down 3‑5 specific notes about the program (people you met, things you liked, concerns).
    • Send 2–4 line thank‑you emails within 24–48 hours unless the program says otherwise.
  2. Mid‑season (after most interviews):

    • Re‑review your notes and start structuring a rank list based on:
      • Training quality.
      • Fit.
      • Geography/family considerations.
      • Outcomes (fellowship, jobs).
    • Do not let email “warmth” dominate this analysis (a toy scoring sketch follows this list).
  3. Late season (after you have a provisional rank list):

    • For your clear top choice program (and perhaps #2–3 if norms allow), send a final, succinct note:
      • If they are #1: explicitly say “You are my first choice and I will rank you #1,” if you are willing to honor it and policies permit.
      • If they are top tier but not #1: “I remain very interested and would be excited to train at [Program].”
  4. Then stop.
    No serial emails. No “just checking in.” No anxiety-driven fishing for reassurance.
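
On the rank-list point in step 2: one way to keep email “warmth” out of the analysis is to score programs on explicit criteria before rereading any messages. A toy sketch with invented weights, ratings, and program names (none of this comes from NRMP data):

    # Toy weighted scoring for a provisional rank list.
    # Weights, 1-10 ratings, and program names are invented for illustration;
    # post-interview email "warmth" is deliberately not a criterion.
    weights = {"training_quality": 0.35, "fit": 0.30, "geography": 0.20, "outcomes": 0.15}

    programs = {
        "Program A": {"training_quality": 9, "fit": 8, "geography": 6, "outcomes": 9},
        "Program B": {"training_quality": 8, "fit": 9, "geography": 9, "outcomes": 7},
        "Program C": {"training_quality": 7, "fit": 7, "geography": 8, "outcomes": 8},
    }

    def score(ratings):
        return sum(weights[criterion] * rating for criterion, rating in ratings.items())

    for name, ratings in sorted(programs.items(), key=lambda item: score(item[1]), reverse=True):
        print(f"{name}: {score(ratings):.2f}")
    # Produces a ranked list driven by your own criteria, not by who emailed you last.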

This approach aligns almost perfectly with what NRMP and specialty data show:

  • You meet baseline professionalism norms.
  • You give programs who care about communication something to see.
  • You do not violate policies or look desperate.
  • You reserve most of your time and energy for decisions that actually move outcome probabilities.

12. The Signal in the Noise

Post‑interview contact sits in a weird space: emotionally loud, statistically quiet.

The surveys and NRMP data stack up to a simple, unglamorous conclusion:

  • Most of your match probability was determined before or during the interview day.
  • Post‑interview contact is a small, mostly optional optimization layer, not a core strategy.
  • Over‑interpreting or over‑investing in this phase does more harm than good.

If you use it as a professional courtesy and a light signal of interest, you are aligned with how most program directors actually operate. If you try to game it as a covert ranking negotiation, you are fighting the data—and the rules.

Three takeaways:

  1. Treat thank‑you notes and interest emails as low‑stakes, low‑ROI tasks, not as magic bullets.
  2. Do exactly what programs’ policies say, no more, no less; breaking rules is one of the few ways to hurt yourself here.
  3. Spend your real energy on ranking programs logically and preparing to be a strong intern; that is where the numbers say your future is actually decided.