Correlation Between Program Size and Responsiveness to LOIs

January 8, 2026
13 minute read


Program size is one of the strongest, most underrated predictors of how your letter of intent will be received. The data shows a pattern that most applicants sense anecdotally but rarely quantify: as program size increases, the marginal impact of any single LOI drops—fast.

Let me walk through this like a numbers problem, not a vibes problem.


1. What “Responsiveness to LOIs” Actually Means

First, define the dependent variable. “Responsiveness” is not just “they emailed me back.” From a data standpoint, I separate it into three levels:

  1. Acknowledgment response
    – Any explicit signal that your LOI was received (form email, coordinator reply, PD note at interview).

  2. Behavioral response
    – Observable change in how the program interacts with you after the LOI:

    • Late interview offer
    • Upgrade from waitlist/hold to interview
    • Additional communication from PD/APD after your message
  3. Outcome response
    – The one you actually care about:

    • Change in rank position
    • Match vs no match, conditional on interview

Across multiple applicant cycles I have reviewed (spreadsheet after spreadsheet), programs very clearly cluster by size on all three of these metrics.


2. The Core Size–Responsiveness Relationship

Let’s define three crude size brackets. Exact cutoffs vary by specialty, but this framework holds:

  • Small programs: ~4–8 categorical positions per year
  • Medium programs: ~9–18 positions per year
  • Large programs: ≥19 positions per year

Now layer in a few realistic numbers based on compiled applicant reports, coordinator comments, and PD-side anecdotes.

LOI Responsiveness by Program Size (Estimated)

  Program Size | Avg Categorical Spots/Year | Applicants per Spot | Est. Total Apps | High-Impact LOI Rate*
  Small        | 6                          | 80–120              | 480–720         | ~20–30%
  Medium       | 14                         | 100–150             | 1,400–2,100     | ~10–15%
  Large        | 28                         | 130–200             | 3,640–5,600     | ~3–8%

*High-impact LOI rate = probability that a strong, well-timed LOI measurably changes your interview/rank outcome, not just elicits a reply.

The pattern is brutal:

  • As program size increases, total volume of communication scales super-linearly.
  • Leadership bandwidth does not.
  • So the proportion of LOIs that produce any behavioral change falls, even if the raw count of replies is higher.

To visualize the drop in meaningful impact as size rises:

[Bar chart: Estimated High-Impact LOI Rate by Program Size (%)]
  Small: ~25   Medium: ~12   Large: ~5

You are not imagining it. A thoughtful, targeted LOI to a 6-resident program simply has more statistical leverage than the same effort directed at a 30-resident mega-program drowning in 5,000+ ERAS files.


3. Why Smaller Programs React Differently

Look at this like an operations problem.

3.1 Decision Structure

Small programs typically have:

  • 1 PD, 1 APD, a chief or two, and a single coordinator
  • Flat hierarchy, everyone knows every applicant they are truly serious about

What that looks like in real behavior:

  • PD reads a personal LOI, recognizes the name from the interview day, and can literally say in a committee meeting:
    “This applicant emailed me last week and said we are their top choice. I believe them.”

I have seen that comment move someone from the middle third of a rank list into the top 5–10. That is a direct LOI effect.

Medium programs can still do this, but less often. Large programs? Much rarer. They rely more on aggregate impressions, score cutoffs, interview eval averages—not single emails.

3.2 Marginal Value of Each Resident Slot

Suppose a small program has 6 slots and 18 “serious” interviewees (3:1 interview-to-slot). Every single individual matters:

  • If you credibly tell them “You are my #1” and they believe you, that single match represents 1/6 ≈ 16.7% of the incoming class composition.

Compare that to a 30-slot program. One resident = 3.3% of the class. You are less of a pivot point.

From a data perspective:

  • Small program: shift in probability of matching there from 30% → 60% can change the class composition visibly.
  • Large program: you are one data point among many; the variance per candidate is lower.

So PDs at small sites rationally pay more attention to any signal that increases the odds someone will actually show up if ranked high. That is what a strong LOI is: a noisy but useful probability signal.
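The class-share arithmetic above can be sketched in a few lines. The numbers are the stylized 6-slot and 30-slot figures from the text, not measured data:

```python
# Hypothetical illustration of why one match matters more at a small program.
# Slot counts are the stylized examples from the text, not measured data.

def class_share(spots: int) -> float:
    """Fraction of the incoming class that a single matched resident represents."""
    return 1 / spots

small = class_share(6)   # 6-slot program
large = class_share(30)  # 30-slot program

print(f"Small program: one resident = {small:.1%} of the class")
print(f"Large program: one resident = {large:.1%} of the class")
```

The same helper makes it easy to see how quickly leverage decays: at 12 slots you are already down to 8.3% of the class.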


4. Volume, Noise, and Signal Dilution in Large Programs

Now flip to the big end of the spectrum.

Imagine a large internal medicine program:

  • 30 categorical positions
  • 4 preliminary
  • 7,000+ applications
  • 500–800 interviews offered

Even if “only” 20% of interviewed applicants send LOIs or strong interest emails close to rank list time, you are dealing with:

  • 100–160 targeted messages
  • Over ~3–4 weeks, often clustered in the last 10 days

Human reality: PDs and APDs are still running a residency, staffing inpatient services, handling Milestones, ACGME surveys, and dealing with residents in crisis. There is a practical upper limit on how many individual LOIs can be read, much less integrated into ranking decisions.

You can quantify the attention scarcity:

  • Assume a PD has 3 extra hours in the last week before rank submission they can realistically dedicate to “reading and acting on LOIs.”
  • Reading + processing + possibly replying to 1 LOI thoroughly: say 3 minutes.
  • 3 hours = 180 minutes → 60 fully processed LOIs, max, if they literally do nothing else in that window.

If 150 come in, more than half inherently get triaged, skimmed, or ignored. Not because the PD is malicious. Because the math is unforgiving.
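The attention-budget arithmetic above can be written out as a tiny triage model. All inputs (3 hours, 3 minutes per LOI, 150 incoming) are the stylized figures from the text:

```python
# Back-of-envelope triage model from the text: a PD with a fixed attention
# budget in the final week before rank submission. All inputs are stylized.

def lois_processed(budget_hours: float, minutes_per_loi: float) -> int:
    """Maximum number of LOIs that can be read thoroughly within the budget."""
    return int(budget_hours * 60 // minutes_per_loi)

capacity = lois_processed(budget_hours=3, minutes_per_loi=3)  # 60 LOIs
incoming = 150

triaged = max(0, incoming - capacity)
print(f"Capacity: {capacity} LOIs; incoming: {incoming}")
print(f"Skimmed or ignored: {triaged} ({triaged / incoming:.0%})")
```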


5. Does Program Size Change What LOIs Achieve?

Yes. The data suggests a different “primary mode of effect” by size band.

Small programs (4–8 spots)

  • Highest likelihood of:
    • Late interview offers being generated after an LOI from a borderline applicant.
    • Rank movement based on sincere “you are my #1” messages.

In raw terms, I have repeatedly seen something like:

  • Applicant on the initial “alternate list” for interview.
  • Sends a detailed, specific LOI referencing faculty, cases, or rotation structure.
  • PD forwards email to coordinator: “Can we bring this person in if we get cancellations?”
  • Interview happens. Rank upgraded. Match occurs.

The chain is short. The friction is low.

Medium programs (9–18 spots)

  • More structured ranking process, but still some flexibility:
    • LOIs more commonly impact tie-breaking or middle-tier sorting.
    • Less likely to create an interview out of nothing, but they can pull someone from the “meh” pile into the “safe middle.”

LOIs here often function as a secondary variable: after board scores, interview-day performance, and LORs, they serve as an extra positive coefficient in the mental regression model the PD is running.

Large programs (≥19 spots)

  • Rank lists are often generated from:
    • Numerical interview evals
    • Composite scores (Step, clerkships, research, etc.)
    • Faculty comments from multiple interviewers

LOIs can still matter, but the effect tends to be:

  • Slightly improved perception of “interest” → maybe a nudge when comparing two statistically similar candidates.
  • More symbolic than determinative in most cases.

To make the point visually, assume equal-quality, well-timed LOIs across sizes:

[Horizontal bar chart: Relative LOI Influence on Rank Position by Program Size]
  Small (rank shift potential): 3   Medium: 1.5   Large: 0.5

Interpretation:

  • Small program: LOI can shift you several positions or even tiers.
  • Medium: limited but real impact.
  • Large: often less than one rank position on average—meaning many LOIs cause essentially no shift.

6. Specialty and Size Interactions

You cannot talk about size without layering in specialty competitiveness and culture.

  • Highly competitive specialties with mostly small programs (e.g., dermatology, radiation oncology):

    • Programs are small and flooded with hyper-competitive applicants.
    • Effect: LOIs are read, sometimes remembered, but rarely decisive unless there is a strong pre-existing connection (home student, rotator, research collaborator).
  • Large-volume core specialties (IM, FM, peds, psych):

    • Wide spectrum of program sizes.
    • For community-based or mid-tier academic programs with 6–12 spots, LOIs can be disproportionately powerful because:
      • They receive fewer “top prestige chaser” applicants.
      • They worry more about fill and fit.
      • One applicant who clearly wants to be there reduces risk.
  • Surgical fields:

    • Even when total program size is modest, interview pools are small and heavily screened.
    • Many PDs explicitly discount LOIs and rank “by the numbers plus faculty impressions.”
    • Size effect still exists, but cultural skepticism toward LOIs blunts it.

Bottom line: program size interacts with local culture. But it never becomes irrelevant. You just get different slopes.


7. Timing and Size: When the LOI Moves the Needle

The other major variable: when the LOI hits the system.

From the data side, impact tends to follow a rough curve:

  • Too early (before interview invites):

    • For large programs: almost zero effect; at that stage they are screening by scores and application filters, not prose.
    • For small programs: occasionally helpful at the margin, mainly if you have a regional or institutional connection.
  • Between interview invite and interview day:

    • Across sizes: rarely hurts, but often redundant. Your behavior on interview day matters more.
  • Post-interview, before rank meeting:

    • This is where the size gradient becomes most obvious:
      • Small: PDs sometimes literally print or pull up a few LOIs in the rank meeting.
      • Medium: LOIs may be summarized verbally (“This applicant says we are their top choice”).
      • Large: LOIs at this stage blend into a general perception of “interest”; a few memorable ones stick, most do not.

If you plotted “expected impact” vs program size at this specific time window, you would see a steep downward slope.


8. Strategic Implications: Where You Deploy LOIs

You have finite time and finite cognitive energy. Sending 80 customized LOIs is fantasy. The rational move is triage.

Here is the data-driven way to think about allocation:

  1. Segment your list by program size and personal outcome preference:

    • “Dream but realistic” small/medium programs: highest marginal LOI value.
    • Large prestige programs: lowest marginal value per LOI.
  2. For each program, ask three quantitative questions:

    • How many categorical positions do they offer?
    • How many residents total (gives a sense of infrastructure and message volume)?
    • How many total applications do they usually receive? (Often mentioned on websites or at interviews.)
  3. Then compute an approximate attention ratio:

    Attention ratio ≈ (Number of spots) / (Total applications)

    For a small program: 6 spots / 600 apps = 1:100
    For a huge program: 30 spots / 6,000 apps = 1:200

    The lower that ratio, the more crowded your signal is. Your LOI is one data point in a very noisy distribution.
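The attention ratio in step 3 can be computed directly. The example programs mirror the 6/600 and 30/6,000 figures from the text:

```python
# Attention ratio from step 3: spots per application. The two example
# programs use the stylized 6/600 and 30/6,000 figures from the text.

def attention_ratio(spots: int, total_apps: int) -> float:
    """Spots per application: higher means your signal is less crowded."""
    return spots / total_apps

small = attention_ratio(6, 600)    # 1:100
huge = attention_ratio(30, 6000)   # 1:200

print(f"Small program: 1:{round(1 / small)}")
print(f"Huge program:  1:{round(1 / huge)}")
```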

Take two real-world style scenarios:

  • Program A – 7 categorical IM positions, mid-sized city, ~2,000 applications:

    • Ratio ~1:285.
    • But only 7 residents; leadership often involved heavily. Smaller interview pool. LOI has a clear chance of being seen and acted on.
  • Program B – 35 categorical IM positions, coastal magnet, ~6,500 applications:

    • Ratio ~1:186, actually “better” superficially, but:
      • Interviews possibly 500+.
      • Multi-site hospital system.
      • Formalized rank process with less room for single-email influence.

The real predictor is not just the ratio—it is the governance structure. And that almost always trends with size.

So your LOI priority list should usually look like:

  1. Small to mid-sized programs where you would genuinely be happy to train and could see yourself ranking #1–3.
  2. Medium academic or strong community programs that explicitly talked about “fit” and “people who really want to be here” on interview day.
  3. Only then, as extra, a small number of very large or prestige programs, if you have a specific connection.

9. Modeling the Impact: A Simple Probability Frame

Let me put rough numbers on what we are discussing, conditional on you already having interviewed.

Assumptions (stylized but realistic):

  • Without LOI, your probability of matching at a program where you interviewed is roughly:
    • Small: 0.25
    • Medium: 0.22
    • Large: 0.20

Now, add a sincere, well-targeted LOI stating they are your top choice, sent before rank meeting:

  • Small program:
    • P(match) might plausibly rise to 0.40–0.50 if your performance was solid and they believed you.
  • Medium:
    • P(match) might tick up to 0.26–0.30.
  • Large:
    • P(match) might inch up to 0.21–0.23, maybe 0.25 in the best-case connection scenario.

So relative change:

  • Small: +60–100% relative increase
  • Medium: +18–36%
  • Large: +5–15%

These are not randomized trial numbers. But they match what I see when I track applicant lists, interview feedback, LOI timing, and final match results across multiple cycles.
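The stylized probabilities above can be turned into relative changes mechanically. The baseline and post-LOI ranges are the assumptions stated in this section, not measured effect sizes:

```python
# Stylized match-probability shifts from Section 9. These are the
# illustrative assumptions from the text, not measured effect sizes.

baseline = {"small": 0.25, "medium": 0.22, "large": 0.20}
with_loi = {"small": (0.40, 0.50), "medium": (0.26, 0.30), "large": (0.21, 0.23)}

for size, p0 in baseline.items():
    lo, hi = with_loi[size]
    print(f"{size:>6}: {p0:.2f} -> {lo:.2f}-{hi:.2f} "
          f"(relative +{(lo / p0 - 1):.0%} to +{(hi / p0 - 1):.0%})")
```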


10. A Hard Truth: LOIs to Very Large Programs Are Largely Signaling for You, Not Them

One last uncomfortable but accurate point.

For many applicants, especially to name-brand, very large academic centers, LOIs function more as anxiety management than as actual decision levers.

You press send. You feel like you “did something.” You can tell yourself you communicated your interest.

From the program’s side, here is what that same message often looks like at scale:

  • Your LOI becomes the 70th “you are my top choice” email that week.
  • Content is often templated, generic, and similar to others.
  • Any single one has low marginal informational value.

Result: most such LOIs have a near-zero effect size on rank position. The PD already knows they are a destination program. They expect many people to say they are #1.

Contrast that with a well-written LOI to a smaller, mid-tier program that constantly worries about being a “backup” on most lists. That message carries actual Bayesian information: the prior that you will rank them highly is lower; the LOI updates that prior meaningfully.

Probability theory, not romance.
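The Bayesian point can be made concrete with a minimal update. The prior and likelihood values below are hypothetical, chosen only to show the mechanism: at a destination program, "you are my #1" emails are near-universal, so the likelihood ratio is close to 1 and the update is small; at a mid-tier program, insincere claims are rarer, so the same email moves the posterior a lot.

```python
# A minimal Bayes update illustrating the signal-value point. All prior
# and likelihood values are hypothetical, chosen to show the mechanism.

def posterior(prior: float, p_loi_if_top: float, p_loi_if_not: float) -> float:
    """P(applicant truly ranks the program highly | 'top choice' LOI sent)."""
    num = p_loi_if_top * prior
    return num / (num + p_loi_if_not * (1 - prior))

# Destination program: nearly everyone claims "#1", so the LOI is weak evidence.
dest = posterior(prior=0.50, p_loi_if_top=0.90, p_loi_if_not=0.80)

# Mid-tier program: lower prior, but insincere "#1" claims are much rarer.
mid = posterior(prior=0.15, p_loi_if_top=0.80, p_loi_if_not=0.20)

print(f"Destination program posterior: {dest:.2f}")  # barely moves off 0.50
print(f"Mid-tier program posterior:    {mid:.2f}")   # large update from 0.15
```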


Key Takeaways

  1. Program size strongly correlates with how much a letter of intent can realistically move your outcome. Small programs give you the highest marginal return per LOI; very large programs, the lowest.
  2. The data favors a targeted strategy: concentrate your most thoughtful LOIs on small and mid-sized programs where you would genuinely be willing to match near the top of your list, rather than blasting every large academic center.
  3. LOIs are not magic. They are weak but sometimes meaningful probability signals that get diluted as program size, application volume, and committee structure scale up. Use them where the signal-to-noise ratio is actually in your favor.