
Trend Analysis: How Adcom Expectations for LOR Writers Have Shifted Since 2015

January 5, 2026
14 minute read

[Image: Committee reviewing medical school recommendation letters around a conference table]

Admissions committees stopped reading letters of recommendation the way you think they do about five years ago. The surface rituals look the same—three to five letters, preferably from professors and physicians—but the underlying expectations for who writes them and what those writers can credibly say have changed sharply since around 2015.

You are not just collecting letters anymore. You are constructing a data set about your performance, professional identity, and risk profile. And adcoms read it exactly that way.

Let me walk you through how the expectations for letter writers have shifted, what the data and patterns show, and how you should respond if you are applying in 2025 and beyond.


1. The Macro Shift: From Prestige of Writer to Validity of Data

Around 2015, the dominant myth was simple: get the most famous person possible to write your letter. Department chairs. Big-name researchers. The neurosurgeon whose name is on the building.

The data from admissions outcomes—and from internal rubric changes—tell a different story.

Across multiple schools’ published evaluation guides and presentations from AAMC and AAMC GSA forums, you see the same pattern emerging:

  • Letters are now scored explicitly on:
    • Direct observation of the applicant
    • Specific behavioral examples
    • Comparison to a meaningful peer group
  • “Famous but vague” letters are increasingly treated as low-yield or even red flags.

Put differently: adcoms have shifted from authority-based interpretation (“this person is important, so their praise matters”) to evidence-based interpretation (“this person has seen you do X, Y, and Z, and can provide detailed behavioral data”).

That shift has concrete consequences for who is now considered an ideal letter writer.

Then vs Now: What the Committee Actually Values

Change in Adcom Priorities for LOR Writers (2015 vs 2025)
Factor | 2015 Weight (Est.) | 2025 Weight (Est.)
Writer prestige/title | High | Moderate-Low
Direct supervision of applicant | Moderate | Very High
Specific behavioral examples | Moderate | Very High
Length of relationship | Moderate | High
Alignment with competencies | Low | High

Those “weights” come from how rubrics are structured, how letters are discussed in committee, and how often specific letter features are cited when defending a candidate. I have sat in rooms where a short, concrete letter from a lab supervisor carried more weight than a glowing but generic page and a half from a department chair.

If you still think the game is title-chasing, you are playing by 2010 rules in a 2025 environment.
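
To see why that reframing matters, here is a minimal sketch of the weighting idea in code. The weights, feature names, and letter scores below are invented for illustration; no school publishes a numeric rubric like this. The point is only that the same pair of letters ranks very differently once observation and specificity carry more weight than prestige.

```python
# Toy model of the shift: the same two letters scored under
# prestige-heavy (2015-style) vs. evidence-heavy (2025-style) weights.
# All numbers are invented for illustration only.

WEIGHTS_2015 = {
    "prestige": 0.40,
    "direct_supervision": 0.20,
    "specific_examples": 0.20,
    "relationship_length": 0.15,
    "competency_alignment": 0.05,
}
WEIGHTS_2025 = {
    "prestige": 0.10,
    "direct_supervision": 0.30,
    "specific_examples": 0.30,
    "relationship_length": 0.15,
    "competency_alignment": 0.15,
}

# Hypothetical feature scores on a 0-10 scale.
chair_letter = {
    "prestige": 10, "direct_supervision": 2, "specific_examples": 2,
    "relationship_length": 3, "competency_alignment": 2,
}
lab_supervisor_letter = {
    "prestige": 4, "direct_supervision": 9, "specific_examples": 9,
    "relationship_length": 8, "competency_alignment": 8,
}


def weighted_score(letter: dict, weights: dict) -> float:
    """Weighted sum of feature scores under a given rubric."""
    return sum(letter[feature] * weight for feature, weight in weights.items())


for name, letter in [("Chair letter", chair_letter),
                     ("Lab supervisor letter", lab_supervisor_letter)]:
    print(f"{name}: 2015-style {weighted_score(letter, WEIGHTS_2015):.2f}, "
          f"2025-style {weighted_score(letter, WEIGHTS_2025):.2f}")
```

Under the 2015-style weights the chair letter stays roughly competitive (5.35 vs 6.80); under the 2025-style weights the close-supervision letter pulls far ahead (8.20 vs 2.95). That is the shift the table above describes.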


2. The Rise of Competency‑Driven Letters

Around 2014–2016, AAMC began rolling out the core competencies for entering medical students and updated guidance on holistic review. That did not immediately rewrite letter culture. But by 2018–2020, you see a real shift in what adcoms ask from letter writers.

Letters are increasingly treated as qualitative evidence to support specific competency domains:

  • Reliability and dependability
  • Teamwork and interpersonal skills
  • Cultural competence and service orientation
  • Resilience and adaptability
  • Ethical responsibility and integrity

And when schools adopt standardized letter forms—especially for premed committees or for residency applications—the change is even more explicit: checkboxes, rating scales, forced ranking.

That pushes adcoms to prefer letter writers who can actually observe these domains. A physician who knows you from two shadowing mornings has nothing useful to say about your resilience or your teamwork. A research PI who has seen you troubleshoot failed experiments for 18 months does.

So the expectation has shifted:

2015:
“Get a doctor, a science professor, and maybe your PI. More names = better.”

2025:
“Get people who can generate high-fidelity behavioral data in multiple competency domains, ideally over months to years.”

Evidence of Competency Focus

You see this in:

  • Published institutional guidelines to applicants that now explicitly ask for:
    • “Writers who can comment on specific examples of your professionalism and interpersonal skills”
    • “Writers who have supervised you closely in academic or clinical work”
  • Evaluation forms used internally that have:
    • Rating scales for “maturity,” “integrity,” “initiative,” etc.
    • Prompts like “Provide a specific example where this applicant demonstrated resilience under pressure.”

So adcoms now expect letter writers who can do three things well:

  1. Describe actual behavior they observed.
  2. Map that behavior to core competencies.
  3. Compare you meaningfully to a relevant peer group (“top 5% of students in 10 years of teaching”).

Letters that fail on those three now routinely get scored as weak, regardless of the writer’s prestige.


3. Shift in Preferred Writer Profiles

Let’s be precise about the types of writers that have gained or lost influence since 2015.

Relative Value of Common LOR Writer Types (2015 vs 2025)
Writer Type | 2015 Perceived Value | 2025 Perceived Value
Department chair (no direct contact) | High | Low
Course professor (large lecture, minimal contact) | Moderate | Low-Moderate
Course professor (small class, active engagement) | Moderate | High
Research PI (close supervision) | High | Very High
Research PI (name only, minimal contact) | Moderate | Low
Clinical physician (shadowing only) | Moderate-High | Low
Clinical supervisor (active role, volunteering or scribe) | Moderate | High
Non-traditional supervisor (full-time job manager) | Low | Moderate-High

The pattern is obvious: proximity and observation beat hierarchy.

The Decline of “Shadowing-Only” Physicians

One of the clearest trends is the down-ranking of shadowing-based clinician letters. Around 2015, many schools still explicitly suggested getting a physician letter. By 2020, more schools had shifted their language to “if possible, secure a letter from someone who has observed you in a clinical environment” and quietly stopped caring if that person had “MD” after their name.

Why? Because shadowing is a low-signal activity. The average physician seeing a premed shadow twice a week for a month can credibly say the applicant:

  • Showed up on time
  • Wore appropriate clothing
  • Did not harass patients

That is a very low bar. Adcoms know this. They have seen stunning shadowing letters written for students who later failed spectacularly in professionalism.

By contrast, supervisors in hands-on clinical roles, paid or volunteer (scribes, MAs, CNAs, EMTs, hospice volunteers), generate higher-fidelity data:

  • How you handle stress at 3 a.m.
  • How you communicate with families
  • Whether nurses and staff trust you

As clinical employment among applicants increased, adcoms updated their expectations. They now often prefer those supervisors over a passive-shadow MD, even if the MD is a subspecialty chief.

If your only physician contact is shadowing, that letter is likely a weak signal in 2025.


4. Quantifying the Tilt Toward “Concrete over Glowing”

You can see the cultural turn in how adcoms talk about “strong” vs “weak” letters in workshops and selection meetings.

Before ~2015, a “strong” letter was often equated with:

  • Very positive adjectives
  • General statements of support (“I recommend without reservation”)
  • Length (more pages implied stronger support)

Now, when you listen to faculty readers, you hear a different vocabulary:

  • “This letter is all adjectives and no data.”
  • “I do not see any specific example of leadership.”
  • “They say top 1%, but they never describe what that means.”

In other words: committees are penalizing letters that are positive but nonspecific. That is a major expectation shift for writers.

What Separates a High‑Impact Letter in 2025

Content analysis across high-scoring letters usually shows:

  • Multiple specific, time-anchored examples (“In March 2024, when our lab lost a technician…”).
  • Clear operationalization of traits (“She independently redesigned our data collection sheet, reducing errors by 30% across three projects.”).
  • Comparative statements with defined reference groups (“Among ~400 undergraduates I have taught, he is in the top 5 for analytical rigor.”).

Those patterns are now baked into many schools’ letter-reading training. Committees explicitly tell readers to favor these concrete, measurable elements.

The expectation for writers has shifted from “praise enthusiastically” to “present case-based evidence with plausible metrics.”

If your proposed writer cannot or will not do this, you are misaligned with current adcom expectations, even if they like you.
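
As a rough way to see what “all adjectives and no data” means operationally, the sketch below scans a letter for the three concrete markers described above: time anchors, quantified outcomes, and comparative statements. It is a toy heuristic invented for this illustration, not a tool any committee actually uses, and the patterns are deliberately crude.

```python
import re

# Toy heuristic: count concrete-evidence markers in a letter.
# Purely illustrative; real letter reading is done by humans, not regexes.

CONCRETE_PATTERNS = {
    "time_anchored": r"\b(?:January|February|March|April|May|June|July|August|"
                     r"September|October|November|December|20\d{2})\b",
    "quantified": r"\b\d+(?:\.\d+)?\s*(?:%|percent|hours|months|years|projects)",
    "comparative": r"\btop\s+\d+\s*(?:%|percent)?\b|\bamong\s+~?\d+",
}


def concreteness_report(letter_text: str) -> dict:
    """Number of matches for each concrete-evidence marker (case-insensitive)."""
    return {
        name: len(re.findall(pattern, letter_text, flags=re.IGNORECASE))
        for name, pattern in CONCRETE_PATTERNS.items()
    }


vague = "She is brilliant, compassionate, and I recommend her without reservation."
concrete = ("In March 2024 she redesigned our data collection sheet, cutting errors "
            "by 30% across three projects; among ~400 students I have taught, "
            "she is in the top 5 for analytical rigor.")

print(concreteness_report(vague))     # every marker count is zero
print(concreteness_report(concrete))  # hits in all three categories
```

A letter full of superlatives returns zeros across the board; a letter with dated, quantified, comparative examples registers in every category. That gap is what readers mean when they complain a letter has no data.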


5. Growth of Structured and Committee Letters

Another major shift since 2015: the growth and formalization of premed committee letters and structured forms.

Around the mid-2010s, many undergraduate advising offices still treated committee letters as narrative compilations—essentially long essays with snippets from other letters. Over the past decade, more of them have moved toward:

  • Standardized rating forms for competencies
  • Required minimum observation time before writing
  • Explicit policies on who counts as an acceptable evaluator

Adcoms have responded by:

  • Treating robust committee letters as a reliability boost
  • Expecting that if a school offers a committee letter, you use it (or explain why you did not)
  • Scrutinizing the underlying individual letters for consistency with the committee summary

This creates two explicit expectations for letter writers:

  1. They may be contributing to a structured, aggregated product (not just a standalone narrative).
  2. Their narrative will be read in the context of numeric ratings or comparative statements.

If a professor marks you as “top 10%” on a rating scale but their letter reads like “average student, nice person,” adcoms notice the mismatch. That mismatch is, in itself, data.


6. Data-Driven Profile: What an Optimal LOR Set Looks Like Now

Let us convert all this into a practical profile. If you look at admitted applicants from 2020–2024 at mid- to high-selectivity schools, a “high-signal” LOR set often follows a pattern like this:

  • 1–2 science faculty who:
    • Taught you in small or mid-sized courses
    • Supervised you in labs, office hours, or projects
    • Can comment on sustained academic behavior and problem-solving
  • 1 research PI or senior lab supervisor who:
    • Oversaw you for ≥1 year
    • Can give detailed examples of initiative, independence, and resilience
  • 1 clinical or service supervisor who:
    • Saw you in direct patient or community interaction
    • Can speak to communication, empathy, professionalism under stress

If a school caps letters at 3, applicants now often optimize for density of data, not for role diversity. They prioritize the three writers who know them best in day-to-day performance, even if that means skipping the big-name MD they met twice.

Here is a stylized comparison of two applicants and how adcoms actually read their letter sets:

Comparison of Two Hypothetical LOR Portfolios
Portfolio Feature | Applicant A (Old Strategy) | Applicant B (Data-Optimized Strategy)
Letter 1 | Department chair, never taught applicant | Science prof, small class, multiple projects
Letter 2 | Shadowing cardiologist (10 hours observed) | Research PI (18 months, multiple papers)
Letter 3 | Volunteer coordinator who saw applicant 3 times | Clinic supervisor (scribe, 800+ hours)
Overall specificity of examples | Low | High
Consistency across letters | Low | High
Competencies strongly evidenced | Few | Many

Applicant A thinks they built an impressive network. Applicant B built a robust, high-signal dataset. Committees increasingly admit Applicant B.


7. Timelines and Early Engagement: Another Hidden Trend

One more subtle but consistent shift: adcoms now implicitly expect that strong writers have known you for longer.

This is partly driven by how competitive applicants behave. More of them:

  • Work 1–2 years in the same lab or clinical job
  • Take gap years that extend relationships with supervisors
  • Engage in longitudinal service projects rather than semester-long dabbling

That means the “comparison set” has changed. A letter based on a single semester of moderate engagement competes against letters based on 18–36 months of intensive collaboration.

From an analysis standpoint, committees see:

  • High-performing applicants with multi-year continuity in 2–3 settings
  • Rich letters that naturally emerge from that continuity

So their mental baseline shifts. They unconsciously (and often consciously) expect solid letters to be grounded in:

  • Sustained observation over a year or more, not a single semester
  • Repeated interaction in real working conditions, not a handful of scheduled meetings

Writers who can only say, "I had this student for a single semester and they performed well on exams" are now providing relatively thin evidence.

You cannot change this expectation in April of your application year. The only fix is adjusting your behavior 12–24 months earlier: pick fewer environments, stay longer, and work closely enough that someone can later write a detailed, competency-rich letter.


8. What This Means for You: Concrete Adjustments

Let me translate these trends into decisions you should make as you plan.

1. Prioritize Proximity Over Prestige

If you have to choose between:

  • A Nobel-adjacent PI who barely knows you, and
  • A staff scientist or postdoc who supervised you directly for two years

Pick the supervisor who actually saw your work. Every time.

2. Build Longitudinal Relationships Early

Aim for:

  • At least one science faculty relationship spanning more than one course, or course + research
  • At least one supervisor (lab, clinic, job) who has seen you for 12+ months

Stop sampling a new lab or clinic every semester. The data show that deeper engagement produces better letters and better outcomes.

3. Give Writers the Right Inputs

Most faculty are not tracking AAMC competency trends day to day. You are responsible for aligning them to current expectations without being obnoxious. Practically:

  • Provide a 1–2 page “LOR packet”:
    • Brief reminder of projects and interactions
    • Concrete bullet points for achievements (with numbers if possible)
    • A short note on the traits or competencies you hope they can address
  • Politely emphasize that schools value specific examples and comparison to peers

You are not writing the letter for them. You are giving them high-resolution memory prompts so they can produce what adcoms now want.


9. Visualizing the Shift: From Name‑Dropping to Evidence‑Building

Here is the trend in what drives letter impact, simplified into an index. Values are relative, but the direction is real.

[Line chart: relative impact of LOR features, 2015–2025; data below]

Shift in Relative Impact of LOR Features (2015–2025)
Year | Writer Prestige | Observed Behaviors & Specificity
2015 | 80 | 50
2017 | 70 | 60
2019 | 60 | 70
2021 | 50 | 80
2023 | 40 | 90
2025 | 35 | 95

The line for “Writer Prestige” slides steadily downward; the line for “Observed Behaviors & Specificity” climbs just as steadily. Adcom behavior in committee tracks those lines.


10. The Bottom Line

The data from the last decade of admissions behavior, guideline changes, and committee practice point in one clear direction:

  • Adcoms have shifted from valuing who writes your letters to valuing how well they can document your real behavior over time.
  • Shadowing-only physicians, distant chairs, and generic praise letters are now low-yield signals in a process increasingly driven by competency frameworks and concrete evidence.
  • Applicants who deliberately cultivate long-term, close relationships with supervisors and faculty—and then help those writers generate specific, comparative, behavior-rich letters—match modern expectations and consistently outperform those who chase titles.

Design your letter strategy like a data problem. Maximize the number of high-quality observations per writer, not the number of honors on their CV. That is how admissions committees are reading LORs in 2025.

[Image: Student meeting with mentor to plan medical school recommendation letters]

[Image: Medical school admissions reader scoring letters of recommendation using a rubric]
