
Charting First‑Author vs Middle‑Author Papers Among Matched Residents

January 6, 2026
14 minute read


The mythology around “you must have first‑author papers to match” is statistically wrong. The data show a much messier – and more interesting – reality: most matched residents have a mix of middle‑author work, a smaller fraction have true first‑author publications, and the distribution shifts dramatically by specialty and program tier.

Let’s quantify that.

What programs are actually counting

Program directors do not sit there tallying your h‑index. They look at three blunt metrics first:

  1. Total number of peer‑reviewed publications
  2. Presence or absence of first‑author work
  3. Perceived fit of your research with their specialty or subspecialty

The NRMP’s Program Director Survey gives us a starting point. Across specialties, “demonstrated scholarly activity” is routinely rated in the mid‑tier of importance (usually in the 3.0–4.0 range on a 5‑point scale), but in competitive fields like dermatology, radiation oncology, plastic surgery, and neurosurgery, it jumps close to the top.

When I look at applicant spreadsheets and matched‑resident CVs across the last 5–7 application cycles, the pattern is consistent:

  • Middle‑author papers are more common than first‑author papers. By a lot.
  • The ratio of middle‑ to first‑author papers for matched residents clusters roughly between 2:1 and 4:1 in most competitive, research‑heavy specialties.
  • A non‑trivial fraction of matched residents in community or less research‑intense programs have zero first‑author papers and still match without drama.

Programs know exactly how research gets produced in big labs. They expect to see applicants buried in the middle of a 14‑author list. What they use first‑author status for is a signal: “Did this person drive at least one project from idea to paper?”

That is the core tension: volume vs ownership.

How many publications do matched residents actually have?

You cannot talk about first‑ vs middle‑author papers without anchoring in raw counts, so let’s start with order‑of‑magnitude numbers.

Based on:

  • NRMP data on “mean number of abstracts, presentations, and publications”
  • Patterns from published applicant reports and public CVs of residents across several specialties
  • Typical authorship structures in academic departments

you get something like this for matched U.S. MD seniors:

Estimated Publication Counts Among Matched Residents (US MD)
| Specialty (Matched) | Median Total Pubs | Median 1st‑Author | Median Middle‑Author |
| --- | --- | --- | --- |
| Internal Medicine (academic track) | 3–5 | 1 | 2–3 |
| General Surgery | 4–7 | 1 | 3–5 |
| Orthopedic Surgery | 6–10 | 1 | 5–9 |
| Dermatology | 8–15 | 1–2 | 7–13 |
| Neurosurgery | 10–20 | 2 | 8–18 |

These are aggregated ranges, not single‑point estimates, but the pattern is consistent:

  • First‑author count rises very slowly with competitiveness
  • Middle‑author count explodes as you move into neurosurgery / ortho / derm
  • The “extra” research in hyper‑competitive specialties is mostly middle‑author padding, not extra first‑authorships

Visualizing that skew:

Proportion of First‑ vs Middle‑Author Papers by Specialty (Matched Residents)

| Specialty | First‑Author Share (%) | Middle‑Author Share (%) |
| --- | --- | --- |
| IM (academic) | 35 | 65 |
| Gen Surg | 25 | 75 |
| Ortho | 20 | 80 |
| Derm | 18 | 82 |
| Neuro | 15 | 85 |

Even in research‑heavy fields, only about 15–35% of publications on a typical matched resident’s CV are first‑author.
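If you want to sanity‑check that skew yourself, here is a minimal Python sketch that recomputes first‑author shares from the median counts in the first table. Collapsing ranges like “2–3” to their midpoints is my simplification, not the article’s; the exact percentages will not match the stylized chart values, but the direction of the skew does.

```python
# Minimal sanity check: recompute first-author share from the median
# first/middle counts in the table above. Ranges like "2-3" are
# collapsed to midpoints - a simplification for illustration only.
medians = {
    "IM (academic)": (1.0, 2.5),   # 1 first, 2-3 middle
    "Gen Surg":      (1.0, 4.0),   # 1 first, 3-5 middle
    "Ortho":         (1.0, 7.0),   # 1 first, 5-9 middle
    "Derm":          (1.5, 10.0),  # 1-2 first, 7-13 middle
    "Neuro":         (2.0, 13.0),  # 2 first, 8-18 middle
}

for specialty, (first, middle) in medians.items():
    share = first / (first + middle)
    print(f"{specialty:14s} first-author share ~ {share:.0%}")
```

The midpoint shares come out somewhat lower than the chart’s stylized values, but the shape is the same: as total volume grows, almost all of the growth is middle‑author.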

So no, you do not need five first‑author papers to match. You need a believable mix that fits your story.

First‑author vs middle‑author: what the signals actually mean

Authorship order is a proxy. Programs use it to infer three things:

  1. Initiative – Did you help move an idea from A to Z? (first‑author signal)
  2. Team integration – Are you someone faculty want to keep working with? (repeat middle‑author signal)
  3. Sustained engagement – Is this a one‑summer fling or multi‑year bandwidth? (spread of years and projects)

In practice, I see four common “profiles” among matched residents:

  1. The soloist: 1–2 first‑author papers, low overall volume
  2. The workhorse: 0–1 first‑author, many middle‑author papers (roughly 6–8 or more)
  3. The blended: 1–3 first‑author, 4–10 middle‑author; this is the classic academic resident profile
  4. The opportunist: 0 first‑author, 2–4 middle‑author; enough to tick the box

The blended profile is what faculty like to see for academic tracks: at least one genuine first‑author, backed by a pile of collaborative work.
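If it helps to see those four buckets as explicit cutoffs, here is a toy classifier. The thresholds are lifted from the profile descriptions above, but the exact boundaries and the function itself are my own illustration, not any program’s actual rubric.

```python
# Toy classifier for the four authorship profiles described above.
# Cutoffs mirror the listed ranges; they are illustrative, not official.
def classify_profile(first: int, middle: int) -> str:
    total = first + middle
    if first >= 1 and total <= 3:
        return "soloist"       # 1-2 first-author, low overall volume
    if first == 0 and middle >= 6:
        return "workhorse"     # no first-author, many middle-author
    if first >= 1 and middle >= 4:
        return "blended"       # 1-3 first + 4-10 middle; the buckets
                               # overlap at 1 first + many middle, and
                               # blended wins that tie here
    if first == 0:
        return "opportunist"   # only middle-author, low volume
    return "unclassified"      # falls between the archetypes

# Example CVs:
print(classify_profile(2, 6))  # -> blended
print(classify_profile(0, 9))  # -> workhorse
print(classify_profile(2, 1))  # -> soloist
print(classify_profile(0, 3))  # -> opportunist
```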

If we strip away the mythology and look at proportions among matched residents across research‑oriented programs, the split looks roughly like this:

Distribution of Research Authorship Profiles Among Matched Residents (Academically Oriented Programs)

| Profile | Share (%) |
| --- | --- |
| Blended (1+ first + multiple middle) | 50 |
| Workhorse (0–1 first, many middle) | 25 |
| Soloist (few papers, mostly first) | 10 |
| Opportunist (only middle, low volume) | 15 |

Interpretation:

  • Around 50%: at least one first‑author plus meaningful middle‑author volume
  • Roughly 25%: high‑volume middle‑author with 0–1 shaky first‑author or “in prep”
  • About 10%: small N but skewed to first‑author, usually from small institutions or niche projects
  • The rest: applicants who did some research but clearly did not build an academic identity around it

The harsh part: the blended and workhorse buckets dominate at top‑20 programs. Community programs tolerate more opportunists.

Specialty tiers: where first‑author truly matters

Let’s split specialties into three tiers from a residency‑research standpoint and look at how often matched residents have at least one first‑author paper.

This is built from a combination of published PD expectations, applicant data, and actual resident CVs from well‑known programs.

Estimated Share of Matched Residents with ≥1 First‑Author Paper

| Specialty Tier | Examples | % Matched with ≥1 First‑Author |
| --- | --- | --- |
| High research intensity, high competitiveness | Derm, Neurosurgery, Plastics, Rad Onc, Ortho (top programs) | 70–85% |
| Moderate research emphasis | General Surgery, academic IM, ENT, competitive EM, later GI fellowships | 45–65% |
| Lower explicit research emphasis | Community IM/FM, non‑academic Psychiatry, many prelim programs | 20–40% |

Visualized crudely:

Residents with ≥1 First‑Author Paper by Research Intensity Tier

| Tier | % with ≥1 First‑Author |
| --- | --- |
| High‑intensity | 78 |
| Moderate | 55 |
| Lower | 30 |

Three blunt conclusions:

  • In top‑tier research‑heavy specialties, having zero first‑author publications puts you at a disadvantage for the best programs, but not necessarily for matching somewhere.
  • In moderate research fields, a single well‑constructed first‑author paper plus 2–5 middle‑author papers is already above the median.
  • In low‑emphasis fields, total publication count and first‑authorship become minor differentiators. Being an outlier helps if you want academics, but it is not an entry ticket.

People obsess about neurosurgery. Rightly so; the bar is high. If you look at PGY‑1/PGY‑2 neurosurgery residents at top 10 programs, it is common to see:

  • 10–25 total publications
  • 2–5 first‑author papers
  • The rest almost entirely middle‑ or last‑author from busy labs

But that is the top 10%. The long tail of neurosurgery programs matches people with fewer publications, often with more middle‑author weight and a research year wedged in.

Productivity curves: when papers actually appear

Another common misunderstanding: applicants assume papers must appear early, but programs know that medical student research matures late. Most “first‑author” work on matched residents’ CVs is accepted or published during the application year or just before residency, not years earlier.

If you plot cumulative publications over time for a typical research‑oriented matched resident, the curve looks like this:

Cumulative Publications Over Training Timeline (Typical Research‑Oriented Resident)

| Stage | Cumulative Publications |
| --- | --- |
| Pre‑med | 0 |
| M1 | 1 |
| M2 | 2 |
| M3 | 4 |
| M4/App Year | 7 |
| PGY1 | 10 |

Roughly:

  • Pre‑med: Few applicants have peer‑reviewed papers, and fewer still have ones that programs actually weight.
  • M1–M2: Mostly data collection and middle‑author additions. Maybe a review article as first‑author.
  • M3: First raw manuscripts start moving. Posters show up.
  • M4 / application year: A big chunk of first‑author manuscripts finally clear peer review. CVs get updated embarrassingly close to ERAS deadlines.
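As a quick illustration, the cumulative curve above can be rebuilt from per‑stage increments in a few lines of Python; the increments (0, +1, +1, +2, +3, +3) are simply read off the chart data.

```python
from itertools import accumulate

# Per-stage publication increments, read off the timeline data above.
stages = ["Pre-med", "M1", "M2", "M3", "M4/App Year", "PGY1"]
new_pubs = [0, 1, 1, 2, 3, 3]  # papers added at each stage

# Running total: 0, 1, 2, 4, 7, 10 - matching the cumulative curve.
for stage, total in zip(stages, accumulate(new_pubs)):
    print(f"{stage:12s} cumulative pubs: {total}")
```

The shape is the point: the +3 increments in the application year and PGY1 account for 60% of the final total.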

This timing is why programs do not insist that every first‑author paper be “published” by application. “Accepted” and even “submitted” get attention when the story is coherent and backed by prior middle‑author work with the same group.

Middle‑author papers: filler or signal?

The lazier take is “middle‑author papers do not count.” The data do not support that.

What I see faculty doing with your middle‑author section is pattern recognition:

  • Single middle‑author paper with an unfamiliar PI: “One‑off summer project. Low signal.”
  • Series of 4–6 middle‑author papers with the same two or three PIs over 2–3 years: “Clearly integrated into the lab. Reliable. Probably did a mix of data work and writing.”
  • Middle‑author on multi‑center RCTs or big‑name consortiums: “Got into a serious machine. Good sign they can function on teams.”

The ratio matters more than the presence:

  • If you have 12 middle‑author papers and zero first‑author, the obvious question is “Why did this person never lead?”
  • If you have 3 first‑author and 5 middle‑author, the inference is “Solid balance of leadership and collaboration,” even if the total count is lower.

Think of middle‑author work as your base rate of research engagement. First‑author papers are your signal spikes of leadership.

A crude rule of thumb that matches what I see:

  • For research‑heavy specialties, a 1:3 to 1:5 ratio of first‑author to middle‑author is common among matched residents.
  • Ratios of 1:1 or higher (at least as many first‑author as middle‑author papers) usually reflect smaller‑institution research or single‑project trajectories, not necessarily “better” research; programs interpret them case by case.

Top‑20 academic vs community‑focused programs

You are not applying to an abstract “specialty.” You are applying to a set of programs that differ radically in how they read your CV.

If you compare residents at a top‑20 research institution vs a mid‑tier, clinically‑heavy program in the same specialty, the gap in total publication count is obvious. The gap in first‑author probability is narrower than applicants think.

Typical Research Profiles: Top-20 vs Community-Focused Programs
| Program Type | Median Total Pubs | Median 1st‑Author | Median Middle‑Author | % with ≥1 First‑Author |
| --- | --- | --- | --- | --- |
| Top‑20 academic (IM, Surgery, Derm) | 8–15 | 1–2 | 7–13 | 70–80% |
| Mid‑tier academic / hybrid | 4–8 | 1 | 3–7 | 50–60% |
| Community‑focused | 1–4 | 0–1 | 1–3 | 25–40% |

The key point: first‑author work is still present in a significant share of residents even at community‑focused programs, but it is not a de facto requirement. A single solid first‑author project can push you above the local median, even if your overall pub count is modest.

What the data say you should prioritize

If you strip this down to probability of payoff per unit of effort, the strategy for most applicants is not mysterious.

From watching who matches where and how their CVs look, this is the pragmatic ordering:

  1. Secure at least one plausibly publishable first‑author project
    Ideally prospective, or at least with original data. A review can work in less competitive fields but has less signaling power.

  2. Accumulate multiple middle‑author roles with the same PI or group
    3–6 middle‑author papers with a consistent mentor looks stronger than 3 papers with 3 different labs.

  3. Align your projects with your intended specialty
    A single first‑author paper in your target field plus several middle‑author works in the same area is more compelling than a scattershot of unrelated topics.

  4. Aim for clarity, not just raw count, on the CV
    Programs skim. If they cannot tell at a glance which work you led vs assisted, they default to skepticism.

If I reduce it further, the expected “return” from a typical cycle of work looks something like this (conceptual values, not literal utilities):

Relative Signaling Value of Common Research Outputs (conceptual scale, 0–100)

| Output | Relative Value |
| --- | --- |
| First‑author original research in target specialty | 100 |
| First‑author review/case in target specialty | 70 |
| Middle‑author original research in target specialty | 60 |
| First‑author outside specialty | 50 |
| Middle‑author outside specialty | 30 |
| [Poster-only with no manuscript](https://residencyadvisor.com/resources/research-residency-applications/poster-vs-oral-presentation-rates-among-residents-in-competitive-tracks) | 10 |

Notice two things:

  • Middle‑author original research in your chosen field is not that far below first‑author reviews in terms of signal.
  • Poster‑only work with no manuscript is at the bottom. Programs have seen too many posters that never turned into anything.

This is why piling poster after poster without pushing at least one project to manuscript status is statistically a poor use of time.
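To make the weighting concrete, here is a toy scoring pass over a hypothetical CV using the conceptual values from the table above. Both the weights and the example CV are illustrative; no program computes a literal score like this.

```python
# Toy scoring pass using the conceptual weights from the table above.
# The weights are the article's illustrative values; the CV is invented.
SIGNAL_WEIGHTS = {
    "first_original_in_field": 100,
    "first_review_in_field": 70,
    "middle_original_in_field": 60,
    "first_outside_field": 50,
    "middle_outside_field": 30,
    "poster_only": 10,
}

# Hypothetical CV: one led project, steady in-field collaboration,
# one outside-field paper, two posters that never became manuscripts.
cv = {
    "first_original_in_field": 1,
    "middle_original_in_field": 4,
    "middle_outside_field": 1,
    "poster_only": 2,
}

score = sum(SIGNAL_WEIGHTS[kind] * count for kind, count in cv.items())
print(f"Total signal score: {score}")  # 100 + 240 + 30 + 20 = 390
```

Note that swapping the two posters for one more in‑field middle‑author paper would add 40 points on this scale: the numeric version of the “push one project to manuscript” argument.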

How to frame first‑ vs middle‑author work in applications

Numbers are one thing. How you present them is another.

When I look at personal statements and ERAS experiences that “look like” they belong to high‑match‑rate applicants, they do three concrete things with authorship:

  1. They explicitly name the project they led
    “I led a retrospective cohort study of 350 patients with X…” rather than “I participated in multiple research projects.”

  2. They position middle‑author work as evidence of reliability and skill
    “I continued collaborating with Dr. Y’s group, contributing to data abstraction and manuscript revisions on three subsequent projects.”

  3. They chronologically align their story with the publication timeline
    “Data collection M2, analysis in a dedicated research year, manuscript submitted M4.”

Program directors do not have time to manually decode which PubMed entry corresponds to which experience. You connect the dots for them. Or they assume you were a passenger.

If you are light on first‑author work, you can partially compensate by:

  • Highlighting your specific contributions on middle‑author works (e.g., statistical analysis, study design, building REDCap databases).
  • Securing strong letters from PIs who can explicitly say “This student functioned at the level of a first‑author on Project X even though the final order was Y.”

That last point matters more than most applicants realize. I have seen letters that bluntly state “The authorship order does not reflect the degree of contribution; the student was the intellectual driver.” Faculty read that and adjust.

What “good enough” looks like in real numbers

Let’s make this tangible for three different trajectories, assuming you want a realistic but solid profile that matches somewhere in your intended field.

Scenario 1: Competitive research‑heavy specialty (e.g., Dermatology, Ortho, Neurosurg)

A statistically reasonable “good but not unicorn” target by the time ERAS opens:

  • Total publications: 8–12
  • First‑author: 1–3 (ideally at least one in the target specialty)
  • Middle‑author: 6–10
  • Mix of retrospective clinical projects, possibly one basic science or translational if your institution is strong there.

Scenario 2: Moderate research emphasis (e.g., General Surgery, academic IM, ENT)

Target:

  • Total publications: 4–8
  • First‑author: 1–2
  • Middle‑author: 3–6
  • A couple of solid specialty‑aligned projects, plus a few side collaborations.

Scenario 3: Lower explicit research emphasis (e.g., community IM/FM, many Psychiatry programs)

Target for someone who wants a solid but not insane research signal:

  • Total publications: 1–4
  • First‑author: 0–1
  • Middle‑author: 1–3
  • Even a single first‑author paper in the specialty can push you into a higher‑than‑average research profile.

These are targets, not cutoffs. People match above and below these numbers every year. But they are far closer to reality than the “5 first‑author papers or bust” myth that circulates on Reddit.
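If you want to screen your own numbers against these bands, a few lines suffice. The ranges below are copied from the three scenarios; treating the lower bound as a floor is my simplification, consistent with these being targets rather than cutoffs.

```python
# Rough screen of a draft CV against the three scenario targets above.
# Ranges are copied from the scenarios; treating the lower bounds as
# floors is a simplification - these are targets, not cutoffs.
TARGETS = {
    "research-heavy": {"total": (8, 12), "first": (1, 3)},
    "moderate":       {"total": (4, 8),  "first": (1, 2)},
    "lower-emphasis": {"total": (1, 4),  "first": (0, 1)},
}

def at_target_floor(tier: str, first: int, middle: int) -> bool:
    """True if the profile reaches the lower bound of the tier's bands."""
    t = TARGETS[tier]
    return (first + middle) >= t["total"][0] and first >= t["first"][0]

# Example: 1 first-author + 4 middle-author papers.
for tier in TARGETS:
    status = "at/above" if at_target_floor(tier, 1, 4) else "below"
    print(f"{tier:15s} {status} the target floor")
```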

The real takeaway: shape, not just count

If you forget the folklore and look at CVs of matched residents like a data set, one pattern keeps repeating:

Programs are not selecting for maximal publication counts. They are selecting for coherence.

Your first‑ vs middle‑author distribution should tell a clear story:

  • “I joined this field.” (middle‑author)
  • “I became reliable within this team.” (repeated middle‑author)
  • “I eventually led something to completion.” (first‑author)

You can reach that narrative with 3 papers or with 20 papers. Obviously, more helps at the extremes. But the inflection point is not where most students think it is.

So your next move is not to chase a random extra first‑author just to hit some imaginary threshold. It is to look at your current curve – how many projects, what authorship mix, how aligned with your intended specialty – and ask a simple question:

If a program director sees only this one‑page snapshot, does the trajectory look intentional and upward?

If the honest answer is “not yet,” you know where to push next: one project that you truly own, with middle‑author reliability as the foundation. Once that structure is in place, you can start thinking about the next step in the research ladder – fellowships, K‑track attendings, real academic careers – but that is a different statistical story for another day.
