
Thank‑You Notes, Signaling, and Other Extras: Any Measurable Impact?

January 5, 2026
15-minute read

[Image: Resident applicant reviewing interview notes and data charts]

Most residency applicants are optimizing the wrong 5% of the process and ignoring the 95% that actually moves the needle.

Let’s put numbers on the “extras” everyone obsesses over: thank‑you notes, signaling, letters of interest, “rank to match” emails, away rotation politics, post‑interview communication rules. The data shows a sharp split: a few of these behaviors have measurable impact; most are pure anxiety theater.

You are in a resource‑limited environment: finite time, finite cognitive bandwidth, finite emotional energy. The only rational way to allocate those resources is to ask, for each tactic: “What is the effect size?” Not “Do people talk about it?” Not “Does my friend swear it worked?” What is the measurable impact on interview offers, ranking, and Match outcomes?

Let’s walk through it like a data problem, not a superstition problem.


1. Signaling: The Only “Extra” With Clear Quantitative Impact

Signaling is the exception. This is the “extra” that is not really extra anymore. It is core strategy.

ERAS preference signaling (and similar mechanisms in some specialties) is one of the few parts of this whole circus with actual, published data behind it. The effect sizes are not subtle.

What the numbers show

Data vary by specialty and year, but the pattern is consistent: a signal dramatically increases the probability of getting an interview at that program.

Typical ranges from specialty reports, pilot data, and program‑side analyses:

Approximate Interview Probability With vs Without Signaling

Specialty (example year)             | With signal | Without signal
Otolaryngology                       | ~55–65%     | ~10–20%
Dermatology                          | ~50–60%     | ~15–25%
Orthopedics                          | ~45–55%     | ~10–20%
Internal Medicine (selective tracks) | ~40–50%     | ~15–25%

Notice the pattern: signaling often multiplies the interview probability by 2–4x. Not raises it by 2–4 percentage points. Multiplies.

One program director in a surgical subspecialty put it bluntly at a webinar: “No signal? We probably did not even open your application unless you were an outlier on Step scores or research.” That is not an uncommon stance in competitive fields.

How programs actually use signals

From PD surveys and conference presentations, you see roughly three buckets of behavior:

  1. Programs that require a signal to seriously consider an application, except for obvious outliers.
  2. Programs that treat signals as a strong plus but still review non‑signaled applicants.
  3. Programs that claim they ignore signals. (Often smaller or less competitive ones, or those morally opposed to the idea.)

When PDs are asked, “Did signals change who you interviewed?”, the majority in signal‑heavy specialties say yes. Not a vague yes: they describe reordering their lists, rescuing borderline applicants who signaled them, and deprioritizing strong applicants who did not.

[Bar chart] Program Directors Reporting Signaling Changed Their Interview List (approximate % of PDs)

ENT               | ~80%
Derm              | ~70%
Ortho             | ~65%
IM subspecialties | ~55%

These percentages are approximate, aggregated from published specialty reports and PD surveys. The exact number is less important than the direction: a majority of programs in competitive specialties are altering decisions based on signals.

Common applicant mistakes with signaling

The main failure mode is not using data reasoning to allocate signals. I consistently see three bad patterns:

  1. Spraying signals at long‑shot dream programs
    Applicants with below‑median scores and zero home support “spend” half their signals on top‑5 programs. Statistically, their baseline probability of interview there is single‑digit percent. Doubling a 3% chance to 6% is still bad math if it means not signaling realistic mid‑tier programs where the baseline might be 20–30%.

  2. Wasting signals on safety programs that would interview anyway
    Community or lower‑tier academic programs that already interview broadly do not gain much from being signaled. Effect size is much smaller where interview thresholds are already low.

  3. Not aligning signals with geographic and narrative coherence
    Programs notice incoherent patterns. If you signal three cities where you have zero ties and ignore a region you claim is “home,” the data story you are telling them does not add up.

A rational signaling strategy looks like portfolio construction (see the sketch after this list). You want a distribution of:

  • A small number of aspirational signals to programs where you are at least within 1 standard deviation of their typical metrics.
  • A core of realistic programs that match your stats, research, and geography.
  • Few or no obvious safeties, unless you are extremely borderline overall.
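
To make the allocation logic concrete, here is a minimal expected‑value sketch in Python. Every probability and multiplier below is hypothetical, invented purely for illustration; substitute your specialty’s published signal data.

```python
# Hypothetical baseline interview probabilities and signal multipliers.
# None of these numbers come from real program data.
programs = {
    "Top-5 dream program":          {"p_base": 0.03, "multiplier": 2.0},
    "Mid-tier academic":            {"p_base": 0.25, "multiplier": 2.5},
    "Realistic regional program":   {"p_base": 0.30, "multiplier": 2.0},
    "Broad-interviewing community": {"p_base": 0.60, "multiplier": 1.2},
}

def marginal_gain(p_base: float, multiplier: float) -> float:
    """Expected percentage-point gain in interview probability from one signal."""
    return min(p_base * multiplier, 1.0) - p_base

# Rank programs by how much one signal actually buys you.
ranked = sorted(
    programs.items(),
    key=lambda kv: marginal_gain(kv[1]["p_base"], kv[1]["multiplier"]),
    reverse=True,
)

for name, stats in ranked:
    gain = marginal_gain(stats["p_base"], stats["multiplier"])
    print(f"{name}: +{gain:.1%} interview probability per signal")
```

On these toy numbers, the mid‑tier and realistic programs dominate: the dream program gains about 3 percentage points per signal, a realistic program gains 30 or more, and the broad‑interviewing safety gains little because its baseline is already high. That is the whole argument behind the three mistakes above, in a few lines of arithmetic.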

If you are going to obsess over any “extra,” obsess over this one. There is measurable ROI.


2. Thank‑You Notes: High Anxiety, Near‑Zero Measurable Effect

Thank‑you notes are the opposite of signaling: maximal emotional energy, minimal outcome effect.

Look at the data we have—mostly program director surveys and indirect evidence.

  • The National Resident Matching Program (NRMP) Program Director Survey repeatedly finds that “post‑interview contact” and “thank‑you notes” sit low on the list of factors in ranking decisions. Typically far below board scores, clerkship performance, letters, and interview performance.
  • Multiple specialties have conducted informal polls: the majority of PDs say thank‑you notes “rarely” or “never” change rank lists.

When PDs rank factors by importance, thank‑you notes fall into the noise:

Relative Importance of Common Factors in Rank Decisions

Factor                        | Relative importance (PD self‑report)
Interview performance         | Very high
Letters of recommendation     | High
USMLE/COMLEX scores           | High
Clerkship grades              | High
Research / scholarly work     | Moderate
Personal statement            | Moderate
Post‑interview communication  | Low
Thank‑you notes               | Very low

This is consistent across surveys. The details shift slightly by specialty, but the ordering is stable.

Why the effect is so small

From a decision‑science perspective, it is obvious why thank‑you notes barely move the needle:

  1. Ceiling effect: In most programs, >90% of interviewed applicants send something. When everyone does it, it becomes non‑differentiating background (a toy calculation after this list makes the point concrete).
  2. Policy constraints: Many programs have explicit rules not to adjust rank lists based on post‑interview contact to keep things fair and compliant.
  3. Timing: Formal rank meetings often occur after interviews conclude but before the flood of mailed/emailed thank‑you notes fully arrives. Even when they do arrive in time, they are rarely re‑surfaced when revising lists.
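
Here is the ceiling effect expressed as a likelihood ratio, with hypothetical send rates invented for illustration: when strong and weak candidates send notes at nearly the same rate, receiving one tells a program essentially nothing.

```python
# Toy likelihood-ratio calculation with hypothetical rates: when nearly
# everyone sends a thank-you note, receiving one barely discriminates
# between candidates a program wants and candidates it does not.
p_note_given_strong = 0.92  # assumed send rate among top-ranked candidates
p_note_given_weak = 0.90    # assumed send rate among everyone else

likelihood_ratio = p_note_given_strong / p_note_given_weak
print(f"Likelihood ratio: {likelihood_ratio:.2f}")  # ~1.02, nearly uninformative
```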

The PD comment I hear over and over: “I do not rank someone higher because they can write a polite email.”

Are there any measurable upsides at all?

Yes, but they are small, indirect, and hard to capture in macro‑level data.

  • A sharply specific note that references a genuine connection or shared interest may slightly enhance recall when a borderline candidate is discussed.
  • In small programs (say, 3–4 residents per class), a PD who is already positively inclined toward you may appreciate a thoughtful note and feel reassured about your interest.

But those are edge effects. If you need that last 0.5% nudge, you are already in the “coin flip among strong candidates” zone.

How to handle thank‑you notes rationally

If you write them, do it efficiently and stop pretending they are a core strategy:

  • Use a simple, honest template; personalize 1–2 sentences.
  • Do not write novels. 3–5 sentences are enough.
  • Time‑box it: for example, 30–45 minutes at the end of each interview day, then move on.

The opportunity cost of obsessing, revising, and over‑customizing these is high. You are far better off spending that cognitive bandwidth understanding a specialty’s signaling rules, refining your rank list, or preparing for upcoming interviews.


3. Letters of Intent and “Rank to Match” Emails: High Noise, Occasionally Useful Signal

Post‑interview letters of intent (“You are my #1 choice”) and “we will rank you highly” messages from programs live in a messy, data‑poor space. Most of the evidence here is observational and frankly polluted by biased recall and wishful thinking.

Still, patterns emerge.

Applicant‑side letters: Does telling a program they are #1 help?

Take a step back and think in statistical terms:

  • Each program has a roughly fixed ranking of applicants driven by interview performance, file strength, and internal politics.
  • At the tail end, they may debate borderline candidates, adjust ordering within a small cluster, or replace someone if there are serious professionalism concerns.
  • A letter of intent may function as a tie‑breaker for that marginal reordering.

Program directors routinely say some version of:

  • “If someone we liked is on the fence and sends a clear, believable letter that we are their top choice, it may push them slightly up.”

That is not huge. But it is not nothing.

The key is “believable.” PDs are not naïve. They know many applicants send multiple “you are #1” emails. Across years, they see almost all supposedly committed applicants match elsewhere. The base rate of dishonesty is high, so trust in these letters is low.
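
A toy Bayesian update shows both why believability matters and why trust stays low anyway. All rates here are invented for illustration: with a low prior that any given “#1” letter is honest, even a specific, consistent letter only modestly moves a PD’s confidence.

```python
# Hypothetical Bayes update: how much should a PD trust a "you are my #1"
# letter when, historically, most such letters were not honest?
p_honest = 0.20                 # assumed prior: share of #1 letters that are truthful
p_specific_if_honest = 0.80     # honest letters tend to be specific and consistent
p_specific_if_dishonest = 0.40  # boilerplate flattery is easy to mass-produce

posterior = (p_specific_if_honest * p_honest) / (
    p_specific_if_honest * p_honest + p_specific_if_dishonest * (1 - p_honest)
)
print(f"P(honest | specific letter) = {posterior:.0%}")  # ~33%: better, still low
```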

Limited, but rational guidance:

  • If you send a true letter of intent to one program, and you mean it, there is a small but real chance it helps you at the margin.
  • If you spam multiple programs with contradictory “you are my #1” letters, you are just adding to the noise. Programs pretty much assume it is fiction.

Program‑side “rank to match” messages: Can you trust them?

Short answer: not fully, but they are not random.

Why programs send these:

  • They want to increase the probability that strong candidates who fit their culture will rank them highly.
  • In competitive specialties and locations, they know applicants have options and are using similar data‑driven logic to interpret signals from programs.

The constraint: Programs cannot promise a match. The Match algorithm is applicant‑proposing, so it favors the applicant’s preferences. “We will rank you to match” means “based on our current intent, we plan to rank you high enough that if you rank us #1, there is a good chance we match.”
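
For intuition, here is a toy, single‑slot sketch of applicant‑proposing deferred acceptance, the family of algorithms behind the Match. All names and preference lists are invented. The practical takeaway: the algorithm makes ranking programs in your true preference order optimal for you, regardless of what any program promises.

```python
# Toy applicant-proposing deferred acceptance with invented preferences.
applicant_prefs = {
    "A": ["ProgX", "ProgY"],
    "B": ["ProgX", "ProgY"],
    "C": ["ProgY", "ProgX"],
}
program_prefs = {
    "ProgX": ["B", "A", "C"],  # each program's rank order of applicants
    "ProgY": ["A", "C", "B"],
}
capacity = {"ProgX": 1, "ProgY": 1}

def deferred_acceptance(applicant_prefs, program_prefs, capacity):
    rank = {p: {a: i for i, a in enumerate(order)} for p, order in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # next program each applicant proposes to
    held = {p: [] for p in program_prefs}          # applicants a program tentatively holds
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                               # list exhausted: applicant goes unmatched
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held[p].append(a)
        held[p].sort(key=lambda x: rank[p][x])     # program keeps only its favorites
        while len(held[p]) > capacity[p]:
            free.append(held[p].pop())             # displaced applicant proposes again
    return held

print(deferred_acceptance(applicant_prefs, program_prefs, capacity))
# {'ProgX': ['B'], 'ProgY': ['A']} -- C goes unmatched in this toy run
```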

But things change:

  • New information about other candidates.
  • Internal politics or faculty preferences.
  • Last‑minute concerns or new application data.

Statistically, what I have seen across cycles (in informal data shared by advising offices and specialty groups):

  • Many applicants who receive an honest‑sounding “we will rank you highly / to match” email do match at that program if they rank it #1.
  • A non‑trivial minority do not, even when they rank that program first.

Treat these messages as a positive signal, not a contract.

From an optimization standpoint:

  • If you receive strong, specific, personalized communication from a program that you already like, it is rational to move them up somewhat on your list.
  • It is not rational to rank a clearly worse fit program higher purely because they flattered you in an email.

4. Away Rotations, Home Bias, and “Extras” That Actually Move Numbers

Away rotations and home‑institution favoritism are not “extras” in the same category as thank‑you notes. They move real numbers, especially in competitive or procedurally heavy specialties.

Let’s talk effect sizes.

In fields like orthopedics, neurosurgery, dermatology, EM, and some surgical subspecialties, you see consistent patterns in match lists:

  • A large share of matched applicants are home students or did an away rotation at that program.
  • The percentage can easily run over 50% at some institutions.

[Bar chart] Approximate Proportion of Residents Who Were Home Students or Did an Away Rotation

Derm         | ~60%
Ortho        | ~55%
EM           | ~45%
Neurosurgery | ~50%

These are approximate numbers aggregated from multiple program‑reported data and advising‑office analyses. The exact values vary year to year, but the direction is consistent: rotation exposure matters.

Mechanically, why?

  • Faculty get extended, high‑granularity data on you: work ethic, teachability, how you function in a team, procedural skills.
  • You get more detailed letters, often stronger and more specific than generic “performed well in clerkships” notes.
  • You become a known quantity, which is risk‑minimizing for programs.

This is not soft, fuzzy “fit.” It is extended performance observation. For PDs, that is gold.

Mistakes applicants make with away rotations

Again, most errors come from ignoring base rates and opportunity costs.

  • Doing multiple aways at unrealistic top‑tier programs while skipping mid‑tier ones where you actually have a solid match probability.
  • Treating an away only as “try to impress” and ignoring the down‑side risk: a mediocre or poor showing can actively hurt you. A bad rotation at a place you love is worse than no rotation there.
  • Overestimating how much one away can override very low board scores or major red flags. It can help, but it does not magically reset your statistical profile.

The hidden cost: every away rotation is a massive time and energy sink. New environment, new expectations, and often a month of you being off‑balance. If a specialty’s data show that aways are strongly predictive of interviews and matches, they may be worth that cost. If not, you are better off investing that month elsewhere.


5. Other Extras: Social Media, Website Stalking, Pre‑Interview “Interest” Emails

There is endless lore around smaller extras: following programs on social media, liking their posts, sending generic “I’m very interested in your program” emails before interviews, reading every page of their website and then referencing it in the interview.

Let’s be blunt: the measurable impact of almost all of this is effectively zero on the macro level.

Program directors do not sit with a spreadsheet of “Twitter engagement scores” when building rank lists. They barely have time to read letters before back‑to‑back interviews.

Where these “extras” can have small, local, qualitative impact:

  • You pick up concrete program information from their website or social channels that lets you ask intelligent, specific questions on interview day. That can improve perceived engagement and fit.
  • Very rare case: a pre‑interview email that shows true, unusual alignment (for example, you have a very niche research interest that exactly matches a new program initiative) might get someone to take a closer look at your file.

But those are second‑order effects of being genuinely informed and aligned, not of the “extra” behavior itself.

From a data‑allocation perspective, here is the right mental model:

  • Spend 90% of your discretionary effort improving the factors PDs rank as high importance: clinical performance, letters, interview skills, coherent application story.
  • Use the remaining 10% for low‑cost extras that marginally improve information flow or signal seriousness, but do not expect them to rescue a weak core application.

6. Where Applicants Systematically Misallocate Effort

When I look at behavior patterns across cycles, there’s a clear misallocation problem.

Applicants tend to:

  • Underinvest in:
    – Strategic signaling
    – Thoughtful, data‑aware program list construction
    – Interview practice and honest feedback
    – Understanding each specialty’s real screening thresholds

  • Overinvest in:
    – Perfecting thank‑you note wording
    – Obsessing over whether a PD replied or did not reply to an email
    – Crafting elaborate “rank to match” letters to multiple programs
    – Micromanaging social media follow/unfollow decisions

Here is how I would weight marginal ROI on common “extras” from 0–10, based on available data and PD surveys:

Approximate ROI Scores for Common “Extra” Behaviors

Behavior                                   | ROI (0–10)
Preference signaling (done strategically)  | 8–9
Away rotation at a realistic target        | 7–8
True, single letter of intent to #1        | 3–4
Generic thank‑you notes                    | 1–2
Pre‑interview “interest” emails            | 1–2
Social media engagement with programs      | 0–1

These are not precise measurements, but they align with PD‑reported importance and observed match patterns. The message is obvious: you win by pushing your effort into the top two rows, not the bottom four.
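
If you want to operationalize that, here is a trivial proportional allocator in Python. The ROI scores are midpoints from the table above; the weekly hour budget is an invented example, not a recommendation.

```python
# Allocate a fixed weekly "extras" time budget proportionally to approximate ROI.
# ROI midpoints come from the table above; the 10 h/week budget is invented.
roi = {
    "Signal strategy & program list": 8.5,
    "Away-rotation planning":         7.5,
    "Single letter of intent":        3.5,
    "Thank-you notes":                1.5,
    "Social media engagement":        0.5,
}
hours_available = 10.0
total = sum(roi.values())

for task, score in roi.items():
    print(f"{task}: {hours_available * score / total:.1f} h/week")
```

On these numbers, the top two rows absorb roughly three quarters of the budget, which is exactly the point.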


7. A Simple, Data‑Sane Strategy for “Extras”

Let me cut through the noise and give you a practical, numbers‑respecting approach.

[Flowchart] Residency Extras Decision Flow

For each application “extra,” ask: does it change interview odds?
  • Yes (signaling, away rotations) → allocate time and strategy.
  • No → is it a low‑cost courtesy (brief thank‑you notes)? If yes, time‑box it and move on; if not, ignore or deprioritize.

Operationally:

  1. Maximize impact of signals
    Work with an advisor or mentor who understands your specialty’s historical data. Allocate signals across aspirational and realistic programs with an eye on your actual competitiveness, not your ego.

  2. Use away rotations where they are empirically powerful
    In specialties where home/away status strongly predicts matching, prioritize aways at realistic programs, not just celebrity institutions.

  3. Treat thank‑you notes as professional hygiene, not leverage
    Send short, sincere notes if you want. Do not expect them to fix a mediocre interview or weak letters. Do not spend hours on them.

  4. Use letters of intent sparingly and honestly
    If you have a clear #1, and it is a realistic one, a single, straightforward letter of intent is defensible. Anything beyond that is noise.

  5. Ignore vanity metrics
    Social media choreography and website deep‑dives only help insofar as they make your interview questions smarter. That’s it.


Key Takeaways

  1. Signaling and, in certain fields, away rotations are the only “extras” with consistently measurable, high effect sizes on interviews and match outcomes. Treat them as core strategy.
  2. Thank‑you notes, generic letters, and most post‑interview contact have, at best, tiny marginal effects. They are professional courtesies, not power tools.
  3. Your competitive edge comes from allocating energy toward the highest‑ROI levers—signals, rotations, interviews, and a coherent application story—rather than trying to game the process with low‑yield rituals.