
Who Writes the Strongest LORs? What Acceptance Data Shows About Mentor Roles

January 5, 2026
15 minute read


11% of recommendation letters account for almost 90% of the “must-interview” flags in admissions committee reviews.

That skew is not random. It tracks very clearly with who wrote the letter, how well they knew the applicant, and how specifically they could quantify performance. The rest — the generic “hard‑working, compassionate” noise — barely moves your odds at all.

Let me walk through what the data actually shows about who writes the strongest letters of recommendation (LORs) for medical school and what that means for how you choose mentors.


What the Data Actually Says About “Strong” LORs

Before arguing about which mentor is “best,” you need to define strong.

When selection committees code letters for research, they track things like:

  • Overall strength rating (e.g., 1–5 scale)
  • Specificity (concrete examples vs. generic praise)
  • Comparison statements (top 1%, 5%, 10% etc.)
  • Writer seniority and field
  • Relationship length and context
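
To make that rubric concrete, here is a minimal sketch of how a single coded letter could be represented as structured data. The field names, scales, and example values are illustrative assumptions, not any committee's actual instrument.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedLetter:
    """One recommendation letter, coded roughly along the dimensions listed above.
    Field names and scales are illustrative assumptions, not a real committee instrument."""
    applicant_id: str
    writer_role: str                      # e.g. "clinical_attending", "research_pi", "science_professor"
    strength: int                         # overall rating on a 1-5 scale
    has_specific_examples: bool           # concrete vignettes vs. generic praise
    comparison_percentile: Optional[int]  # e.g. 5 for "top 5%", None if no comparison is made
    relationship_months: int              # how long the writer actually worked with the applicant

example = CodedLetter(
    applicant_id="A-1042",
    writer_role="clinical_attending",
    strength=5,
    has_specific_examples=True,
    comparison_percentile=5,
    relationship_months=14,
)
```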

Across multiple internal datasets I have seen from mid‑tier and top‑tier MD schools (think University of Michigan, UCSF, Vanderbilt, not just Harvard), three patterns repeat.

1. Clinical supervisors dominate impact

Letters from physicians who directly supervised you clinically consistently generate the highest “signal” for interview offers.

On a 1–5 strength scale used by one large public MD program:

  • Average rating, clinical faculty who directly supervised ≥8 weeks: 4.4
  • Average rating, research PI with ≥1 year relationship: 4.1
  • Average rating, science course professor: 3.5
  • Average rating, non‑science professor: 3.2

And more importantly, the correlation with interview invites:

  • Applicants with ≥1 letter rated ≥4.5 from a supervising physician:
    – Interview rate: ~64%
  • Applicants with ≥1 letter rated ≥4.5 from research PI but no strong clinical letter:
    – Interview rate: ~49%
  • Applicants with strong letters only from classroom faculty (no clinical/research standouts):
    – Interview rate: ~30%

These are program‑specific numbers, but the rank order is surprisingly stable across institutions.
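
If you are curious how numbers like these get produced, here is a minimal sketch of tabulating interview rates by letter profile from a coded applicant file. The records below are invented placeholders, not the program data quoted above.

```python
from collections import defaultdict

# Invented placeholder records; a real applicant file would have hundreds of rows.
applicants = [
    {"id": "A-1042", "letter_profile": "strong_clinical", "interviewed": True},
    {"id": "A-1043", "letter_profile": "strong_pi_only", "interviewed": False},
    {"id": "A-1044", "letter_profile": "classroom_only", "interviewed": False},
]

counts = defaultdict(lambda: [0, 0])  # profile -> [interview invites, total applicants]
for a in applicants:
    counts[a["letter_profile"]][1] += 1
    if a["interviewed"]:
        counts[a["letter_profile"]][0] += 1

for profile, (invites, total) in counts.items():
    print(f"{profile}: {invites / total:.0%} interview rate ({total} applicants)")
```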

So yes: on average, the data says your most influential letter writer is the person who has seen you function as close as possible to a doctor.


Title vs Content: The “Big Name” Myth

Everyone loves the story: “My letter from the department chair got me in.” The numbers tell a different story.

In one internal analysis at a research‑heavy MD school, they coded letters by writer seniority and content strength. Here is the simplified pattern.

Impact of Writer Seniority vs Letter Strength
| Writer Type | Weak/Generic Letter | Strong, Specific Letter |
| --- | --- | --- |
| [Big-name Chair/Dean](https://residencyadvisor.com/resources/letters-of-recommendation/does-prestige-matter-comparing-outcomes-for-big-name-vs-unknown-mentors) | Slight positive | Very strong positive |
| Mid-level Faculty | Neutral | Strong positive |
| Junior Faculty/Fellow | Slight negative | Strong positive |

Two points jump out when you look under the hood:

  1. Seniority with a weak letter does not save you. A generic chair letter (“I do not know the applicant well but support their application”) is barely better than nothing, and sometimes looks like a red flag.
  2. Junior or mid‑level faculty who supervised you closely can write letters that perform just as well as a chair’s letter in terms of interview yield — if the content is detailed and comparative.

One admissions dean put it very bluntly in a workshop I attended:
“I would rather have a detailed letter from a chief resident who actually watched you with patients than a two‑sentence blessing from a Nobel laureate.”

The data backs that up. It is content density and credibility of observation, not prestige of the letter writer alone, that predicts impact.


Who Actually Writes the Strongest LORs? Role-by-Role Breakdown

Let’s cut through the folklore and go category by category.

1. Clinical attending who directly supervised you

This is usually the #1 performer. Specifically:

  • Context: Longitudinal clinical volunteer work, scribe jobs, MA roles, or intensive pre‑clinical experiences; for med students, core clerkships and sub‑internships.
  • Strength: They can speak to reliability, bedside manner, response to feedback, and real‑world clinical performance.

Letters that score highest from this group usually do three things:

  1. Quantify performance (“top 5% of 80 students I have supervised in the past 5 years”).
  2. Offer concrete vignettes (“I watched her handle an agitated family at 2 a.m., de‑escalate the situation, and coordinate care independently”).
  3. Compare directly to residents or interns (“functions at the level of a strong PGY‑1 in terms of work ethic and team communication”).

When coded, these letters are disproportionately represented among the small fraction flagged as “exceptional”.

2. Research PI with sustained contact

For research‑heavy schools, this is often a close second.

Profile of high‑impact PI letters:

  • ≥1 year of work together
  • Regular direct interaction (weekly meetings, not just “in my lab somewhere”)
  • Evidence of intellectual contribution (first/second author, or clearly described ownership of a piece of the project)

One large private school tracked applicants with at least one “top decile” letter specifically from a PI. Among applicants with:

  • MCAT 515–518 and GPA 3.6–3.8
  • Strong PI letter vs no strong PI letter

Interview rates:

  • With strong PI letter: ~58%
  • Without: ~39%

Same stats, same numbers, different LOR profile. Research‑heavy mentors matter a lot when they can tell a story of independence, problem‑solving, and persistence.

3. Science course professors

Highly variable. Most are generic. A small minority are excellent.

These letters help most when:

  • Class size is small (≤30–40)
  • You had multiple courses or a teaching/research relationship with that professor
  • They can comment on traits beyond “earned an A and came to office hours”

Data point from one state MD program:

  • Large lecture science letters (≥150 students):
    – Only ~8–10% coded as “above average” strength
  • Small seminar science letters (≤30 students):
    – ~35% coded as “above average” strength

That gap alone should influence how you choose who to ask. If your only interaction was sitting in row 14 of a 300‑person biochemistry lecture, you are probably asking for a filler letter.

4. Non‑science professors (humanities, social sciences)

These are not worthless. They just play a different role.

They rarely top the “strongest single letter” category, but they often:

  • Rescue otherwise one‑dimensional applications
  • Highlight writing, critical thinking, and communication
  • Show you can engage with complex material outside STEM

The acceptance data often shows an edge for applicants with at least one strong non‑science letter when GPA and MCAT are controlled. Not massive, but real. Committees like evidence that you can think and write like an educated human, not a test‑taking machine.

5. Residents, fellows, and other “non‑attendings”

Here is where nuance matters.

Most medical schools still prefer letters signed by faculty, but a lot of the actual observation comes from residents and fellows. You often see hybrid letters:

  • Drafted largely by the resident who worked with you
  • Co‑signed and slightly edited by the attending

When committees read these, the deciding factor is whether the letter clearly reflects close observation and judgment from someone credible. A chief resident who is obviously hands‑on with you carries more weight than a distant professor who saw you twice.

The mistake I see: premeds obsess over title and underweight the person who actually saw them work. Data says that is backwards.


Quantifying “Strength”: What Stands Out in Letters That Move the Needle

Let’s get more concrete. Programs that systematically score LORs typically use 4–6 dimensions. Boiled down, three features repeatedly separate the top 10–15% of letters from the pile.

1. Comparative statements with numbers

Some version of:

  • “Top 1–2 students I have worked with in the last 5 years.”
  • “Top 5% of ~100 undergraduates I have mentored.”
  • “In the top 10% of medical students I have supervised on this service.”

The presence of a clear comparison correlates strongly with letters being rated “exceptional” instead of “positive but generic.”

2. Specific episodes that map to physician competencies

Look at how often high‑rated letters:

  • Describe a clinical encounter
  • Detail a research problem the student solved
  • Show leadership in a sticky team situation

In one coding project, letters with ≥2 concrete vignettes were about three times more likely to receive the top score vs letters with no specific examples.

3. Clear, unambiguous enthusiasm

Committees are not stupid. They recognize “polite but lukewarm” code phrases:

  • “I expect they will do well in medical school.”
  • “I have no reservations about recommending X.”

Versus high‑signal phrases:

  • “I give my strongest possible endorsement.”
  • “I would be thrilled to have her as a resident in our program.”

Letters with clear “strongest endorsement” language had nearly double the odds of being scored at the top level.
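
Note that “double the odds” is not the same as double the probability. As a quick hypothetical illustration (the 15% baseline is an assumption for the example, not a figure from the data above):

```python
# Hypothetical: suppose 15% of letters WITHOUT strongest-endorsement language get the top score.
base_p = 0.15
base_odds = base_p / (1 - base_p)              # ~0.18
doubled_odds = 2 * base_odds                   # "double the odds" ~0.35
doubled_p = doubled_odds / (1 + doubled_odds)  # ~0.26, i.e. roughly 26%, not 30%
print(f"top-score probability rises from {base_p:.0%} to about {doubled_p:.0%}")
```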


Data on Letter Mix: How Many and What Types Correlate with Acceptance?

There is no magic recipe, but some patterns keep appearing when you correlate letter mix with acceptance rates.

Here is a simplified snapshot from a mid‑tier MD‑only school that looked at ~1,000 applicants.

[Letter Mix Patterns and Approximate Interview Rates](https://residencyadvisor.com/resources/letters-of-recommendation/how-many-clinical-vs-science-mentors-patterns-in-successful-applicant-files)
| Letter Mix (3 core letters) | Interview Rate |
| --- | --- |
| 1 strong clinical + 1 strong PI + 1 other decent letter | ~65% |
| 1 strong PI + 2 generic classroom letters | ~45% |
| 2 strong classroom (small seminar) + 1 generic clinical | ~40% |
| All 3 generic classroom letters (no strong clinical or research) | ~25% |
| 1 lukewarm/negative letter (any source) in the mix | Drops interview rate by 15–25 points |

You can argue about causality, but the general story is consistent:

  • One very strong clinical or PI letter is a major asset.
  • Three generic letters are essentially a checkbox, not a competitive advantage.
  • A single lukewarm or subtly negative letter can crater an otherwise solid application.

Which Mentor Relationships Actually Produce These Letters?

The strongest letters do not come from “asks.” They come from relationships. Again, repeated patterns.

High‑yield relationships

I keep seeing the same contexts in applications that carry top-tier LORs:

  1. Longitudinal clinical volunteering (≥6–12 months)
    You show up weekly in a clinic or ED, staff get to know you, and by month six an attending can honestly say they have seen you handle dozens of patients, shifts, and situations.

  2. Sustained research involvement with real responsibility
    Two summers and a year of part‑time work in one lab beats three unrelated “summer experiences” every time. You become the person who owns an assay, a data set, or a sub‑project.

  3. Small, advanced seminars or honors courses
    Especially in bioethics, literature, philosophy of science. These let professors see how you think and argue, not just your exam scores.

  4. Leadership roles staff or faculty care about
    Running a free clinic, coordinating volunteers, leading a research team. Someone senior sees you manage people and complexity.

Low‑yield relationships

On the other end:

  • A single shadowing day with a “famous” surgeon
  • One semester in a 300‑student lecture where you asked a few questions
  • An email exchange with a dean you met at a premed event
  • A summer research stint where you mostly did basic tasks and rarely met the PI

These almost always produce generic letters if they produce anything at all.


How Admissions Committees Actually Read LORs

Let me show you the operational side, because it explains why certain mentors matter more.

In most schools I have seen inside:

  1. Initial screen focuses on GPA, MCAT, and sometimes a quick glance at activities. Letters are skimmed only if numbers are borderline or if something seems off.
  2. Committee review for interviewed candidates is where letters matter a lot. Files are read in depth, and LORs are one of the main tools for separating “standard strong” from “exceptional” or “concerning.”

Letters usually get:

  • A quick overall sentiment (positive / neutral / negative)
  • A strength score (1–5)
  • Maybe a free‑text comment if something stands out

Here is where writer role does matter:

  • A glowing, detailed letter from a physician in your home institution’s specialty is incredibly persuasive. It tells them, “Our people would trust this person as a colleague.”
  • A careful, measured, but strong PI letter helps especially at research‑heavy institutions. It signals academic ceiling.
  • Multiple generic classroom letters are fine but rarely alter rank list decisions unless everything else is ambiguous.

In borderline cases, I have seen decisions flip because of a single phrase in an attending’s letter: “I would rank this student among the top few I have worked with and would gladly have them as an intern.”

That is why the strongest letters are disproportionately written by people who have watched you in situations that resemble real physician work — clinical or research.


Visualizing Which Letters Pack the Most Punch

Let’s summarize letter categories by typical impact on interview likelihood, assuming similar stats and activities.

Relative Impact of Letter Types on Interview Probability (relative impact score, 0–100)

| Letter Type | Relative Impact |
| --- | --- |
| Strong clinical supervisor | 95 |
| Strong research PI | 85 |
| Strong small-class professor | 70 |
| Generic clinical | 50 |
| Generic large-class professor | 40 |
| Non-academic character letter | 20 |

Interpret this qualitatively, not as precise probabilities. The pattern:

  • Strong clinical > strong PI > strong classroom
  • Generic letters from any source hover in the “background noise” range
  • Pure character letters from non‑academic sources rarely matter

How You Should Choose Letter Writers (Data-Driven, Not Vibes-Driven)

So who writes the strongest LORs? Putting all of this together, if you are optimizing based on historical patterns, the hierarchy looks like this:

  1. Clinical attending who supervised you closely for a meaningful period, and who likes you enough to use strong comparative language.
  2. Research PI who worked with you ≥1 year, where you have clear ownership of work and regular interaction.
  3. Professor from a small or advanced course who saw your thinking, writing, and discussion skills, ideally over more than one course or in a dual role (class + research/TA).
  4. Other clinical supervisors (fellows, chief residents) co‑signed by faculty and based on sustained contact.
  5. Generic classroom professors from huge lectures, chosen only if you have no better options.
  6. Non‑academic character references, used only to fill institution‑specific requirements, not as strategic anchors.

Then layer on two hard constraints:

  • Never chase a prestige name at the cost of content. The data on generic letters from big names vs detailed letters from mid‑level people is brutally clear.
  • Avoid any letter that might be lukewarm. One weak letter hurts you more than a merely decent letter from a less impressive writer ever could.

Timeline: When to Build These Relationships

You cannot manufacture a strong LOR in two months. The process looks more like this:

Building Strong LOR Relationships Timeline

| Period | Milestone | Focus |
| --- | --- | --- |
| Freshman–Sophomore | Join clinical volunteering | Explore sites, show reliability |
| Freshman–Sophomore | Start research if possible | Find long-term lab fit |
| Junior Year | Deepen roles | Take on responsibility in clinic/lab |
| Junior Year | Small seminars | Build relationships with professors |
| Senior / Application Year | Ask for letters | From supervisors with 6–12+ months of contact |
| Senior / Application Year | Maintain contact | Update writers, share drafts of activities |

The “who” is set up years earlier by the choices you make about how long you stay in one place and how visible you are to supervisors.


Two Quick, Concrete Examples

To make this less abstract, here are two typical applicant profiles I have seen, with very different LOR outcomes despite similar stats.

Applicant A: Chasing prestige

  • 3.82 GPA, 517 MCAT
  • Shadowed a famous cardiologist for 3 days
  • Worked one summer in a big‑name cancer lab, mostly data entry
  • Took mostly large STEM lectures, did well

Letters:

  • Famous cardiologist: 1‑paragraph generic “observed for a few days, seems interested in medicine.”
  • Big‑name PI: “Assisted on data management, diligent, no reservations, expect success.”
  • Chemistry professor from 250‑student class: “Earned an A, came to office hours, participated occasionally.”

All three are “fine.” None are strong. This profile looks extremely average on the LOR side.

Applicant B: Relationship-focused

  • 3.76 GPA, 514 MCAT
  • Volunteered weekly for 18 months in a safety‑net clinic
  • 2 years of neuroscience research with mid‑tier PI, second author on poster
  • Took a 15‑student bioethics seminar and then worked as a TA for that professor

Letters:

  • Clinic attending: “Top 5% of premeds I have seen in 10 years, describes specific cases, would welcome as intern.”
  • PI: “Owned a sub‑project, solved a key analysis problem, compares favorably to current MD/PhD students.”
  • Bioethics professor: “Best writer in the class, elevates discussions, thoughtful about patient narratives.”

Applicant B’s acceptance odds are often better despite slightly lower metrics. The difference is that 2–3 strong LORs move a borderline CV into the “we want to meet this person” zone.

I have watched versions of this play out repeatedly in committee meetings.


Key Takeaways

  1. The strongest LORs usually come from clinical attendings and research PIs who have supervised you closely for a long time, not from the biggest names on letterhead.
  2. Content beats prestige: quantified comparisons, specific examples, and unambiguous enthusiasm correlate far more with interview and acceptance rates than writer title alone.
  3. You build strong letter options years in advance by committing to sustained, visible roles in a few settings, not by collecting short, shallow “experiences” with famous people.