Residency Advisor

Multiple Attempts vs First Pass: Quantifying the IMG Score Penalty

January 6, 2026
14 minute read


The residency market does not treat a 245 the same way if you got it on your first try versus your third. Programs know the numbers. They use them. And the data are blunt: there is a measurable “multiple-attempts penalty” for IMGs that goes beyond raw score alone.

Let me walk through what the data actually show, and how big that penalty is in practical terms.


1. What Programs Really See When They Look at Your Attempts

Program directors are not just glancing at a single score column. They see:

  • Each attempt
  • Each score
  • The pattern: flat, improving, erratic, or failing

The NRMP Program Director Surveys (the few documents where people say what they really use) make something very clear: USMLE attempts matter more for IMGs than for U.S. MDs.

You see statements like:

  • “Failed Step 1 is a dealbreaker in our initial screen”
  • “Multiple attempts suggests difficulty with standardized testing and knowledge acquisition”

And the filter logic is not subtle. Many programs hard-filter IMGs if:

  • Any Step 1 attempt = Fail
  • Any Step 2 CK attempt = Fail
  • More than 1 total attempt on any USMLE step

The result is simple. Two IMGs with the same ultimate score do not have the same probability of an interview or a match if one has a clean first-pass record and the other has multiple attempts.


2. The Size of the Penalty: Translating Attempts into “Score Loss”

You want numbers. So let’s quantify it.

We have to triangulate from several NRMP and ECFMG-related data streams, because no one publishes a neat table that says "this is your penalty." But the pattern is consistent:

  • IMGs with passing Step scores on first attempt have meaningfully higher match rates than those with any failures.
  • The drop from 0 failures to ≥1 failure is not marginal. It is large.

A reasonable way to think about it, based on the available data, is to convert “attempt history” into an effective score adjustment. Not official. But very useful for strategy.

Approximate “Effective Score” Penalties for IMGs

What the compiled data and program behavior strongly suggest:

  • One failed attempt on Step 1: equivalent to ~10–15 point Step 2 CK penalty in many IM-focused programs
  • One failed attempt on Step 2 CK: more severe—often ~15–20 “point” penalty in competitiveness
  • Multiple failed attempts (≥2 on any step): effectively knocks you out of serious contention for many mid-to-high tier programs, regardless of eventual score

Let’s put some of this in a comparison table. Assume Step 1 is now Pass/Fail, but programs still see attempts and failures, and Step 2 CK is the primary numeric filter.

Approximate Equivalency of Attempts vs Step 2 CK Score for IMGs (Internal Medicine)
Profile Type | Real Step 2 CK | Program-Perceived Competitiveness (Approx. Equivalent)
First-pass, no failures | 250 | 250
First-pass, no failures | 238 | 238
One Step 1 fail, Step 2 = 250 | 250 | 235–240
One Step 2 CK fail, final Step 2 = 250 | 250 | 230–235
Two failures across any Steps, final Step 2 = 250 | 250 | 220 or lower / often filtered out

This is not exact, but the behavior of interview offers and match outcomes maps surprisingly well to these “effective” score bands.
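To make the bands above concrete, they can be folded into a toy "effective score" function. This is a sketch of the article's heuristic, not an official formula: the specific deductions (13 and 18 points) are my own midpoint choices for the quoted ranges.

```python
def effective_step2_score(step2_score, step1_fails=0, step2_fails=0):
    """Toy 'program-perceived' Step 2 CK score from the penalty bands above.

    The deductions (13 and 18 points) are illustrative midpoints of the
    quoted ranges, not a published formula. Returns None when the profile
    is likely to be hard-filtered out entirely.
    """
    if step1_fails + step2_fails >= 2:
        return None  # >=2 failures: often filtered out regardless of score
    adjusted = step2_score
    adjusted -= 13 * step1_fails  # midpoint of the ~10-15 point Step 1 band
    adjusted -= 18 * step2_fails  # midpoint of the ~15-20 point Step 2 CK band
    return adjusted

print(effective_step2_score(250, step1_fails=1))  # 237: a 250 that "reads" like high 230s
print(effective_step2_score(250, step2_fails=1))  # 232: the Step 2 fail costs more
```

Use it for calibration only, when building your program list, not as a prediction of any specific program's behavior.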


3. First Pass vs Multiple Attempts: Match Rate Data for IMGs

Now to what the data actually say.

For IMGs (U.S. and non-U.S.), NRMP and ECFMG reports have repeatedly documented:

  • Step 1 and Step 2 CK failures dramatically lower match probability.
  • The effect is larger for IMGs than for U.S. MDs at the same score band.

You’ll see stats of this flavor (numbers approximate, rounded from recent reports):

  • Non–U.S. IMGs with no exam failures and a strong Step 2 CK score (say ≥240) in internal medicine often have match rates in the 60–70%+ range.
  • Similar IMGs with a failure on any USMLE step often drop into the 30–40% range even if they later score well.

Let’s visualize it.


Approximate Match Rates for Non–US IMGs by Step History (All Specialties)
Category | Approx. Match Rate (%)
No failures, Step 2 ≥ 240 | 70
No failures, Step 2 225–239 | 55
≥1 failure, Step 2 ≥ 240 | 40
≥1 failure, Step 2 225–239 | 25

Read that carefully. Two candidates with a Step 2 CK ≥240:

  • First-pass IMG: ~70% match probability
  • IMG with at least one prior failure: ~40% or less

Same final score band. A ~30 percentage point gap. That is the penalty in raw probability terms.
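The gap holds within each score band, not just at the top. A few lines of Python make the within-band comparison explicit (values copied from the table above; they are approximations, like the table itself):

```python
# Approximate match rates (%) copied from the table above.
match_rates = {
    ("no failures", "Step2 >= 240"): 70,
    ("no failures", "Step2 225-239"): 55,
    (">=1 failure", "Step2 >= 240"): 40,
    (">=1 failure", "Step2 225-239"): 25,
}

# Within each score band, the attempt penalty is the same ~30 points.
for band in ("Step2 >= 240", "Step2 225-239"):
    gap = match_rates[("no failures", band)] - match_rates[(">=1 failure", band)]
    print(band, "attempt penalty:", gap, "percentage points")
```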


4. Specialty Differences: Where Attempts Hurt the Most

The data penalty is not uniform across specialties.

Competitive specialties behave harshly. Community-heavy, service-heavy specialties are more forgiving, but even there, attempts matter.

Broad trend for IMGs:

  • Highly competitive specialties (Derm, Plastics, Ortho, ENT, Rad Onc): Multiple failures = near-zero realistic chance. One failure + IMG status is usually fatal except in extremely unusual contexts.
  • Moderately competitive (Radiology, Anesthesiology, EM, General Surgery categorical): Single failure plus IMG status pushes you to the very bottom of the applicant pool. Only outliers with exceptional research, high scores, or connections survive.
  • Less competitive but still selective (Internal Medicine university programs, Neurology, Pathology): One failure hurts but can be offset by a high Step 2 CK and strong application. Multiple failures are very difficult to overcome.
  • More forgiving (Family Medicine, Psychiatry, Pediatrics in some community programs, prelim/TY spots): One failure is survivable if Step 2 CK is strong and the rest of the file is solid. Multiple failures still knock you out of many programs.

Here is a schematic overview for non–U.S. IMGs with a final Step 2 CK in the 240–249 band.

Impact of Any USMLE Failure on Non–US IMG Competitiveness by Specialty Tier
Specialty Tier | No Failures: Approx. Match Probability | ≥1 Failure: Approx. Match Probability
Highly competitive | <5% | ~0%
Moderately competitive | 30–45% | 5–15%
Less competitive (university-affiliated IM, Neuro, Path) | 50–65% | 25–40%
More forgiving (FM, Psych, some Peds) | 65–80% | 40–55%

Again, these are approximations, but the direction is accurate. The “IMG attempt penalty” behaves like a universal tax across almost every specialty, then gets magnified as competitiveness rises.


5. First Pass vs Multiple Attempts: Internal Medicine Case Study

Internal Medicine is the default target for many IMGs, so let’s anchor there.

Imagine four non–U.S. IMG applicants:

  • A: Step 1 Pass (first attempt), Step 2 CK 250 (first attempt)
  • B: Step 1 Pass (first attempt), Step 2 CK 238 (first attempt)
  • C: Step 1 Fail once, then Pass; Step 2 CK 250 (first attempt)
  • D: Step 2 CK Fail once, then 250

How do programs treat them? Based on PD comments, historical match rates, and the way interview invites cluster, the data-driven expectation looks like this:


Estimated IM Match Probabilities for Non–US IMGs by Attempts and Score
Applicant | Estimated Match Probability (%)
A: 250, no fails | 75
B: 238, no fails | 60
C: 250, Step 1 fail | 50
D: 250, Step 2 fail | 40

Notice the key points:

  • A (250, first pass): Clearly strong candidate.
  • B (238, first pass): Drops a bit but still solid.
  • C (250 with Step 1 fail): Ends up closer to B (238, clean) than to A (250, clean). That’s your ~10–15 “effective point” penalty.
  • D (250 with Step 2 fail): Worse than C. Step 2 CK is seen as the more current and predictive metric; failing it raises more red flags.

So, if you want a rough heuristic:

  • Step 1 fail for IMGs = your excellent Step 2 CK “acts” like it is 10–15 points lower in many IM programs.
  • Step 2 CK fail = your final score often “acts” like it is 15–20 points lower, plus you get filtered by more programs.

6. Multiple Attempts and Automatic Filters: The Hidden Killer

The brutal part is not actually the human judgment of PDs. It is the automated filters.

Programs receive thousands of applications. Many never touch a human hand before being rejected. Filters often include:

  • USMLE Step 1 status: Must be Pass, no failures
  • USMLE Step 2 CK status: Must be Pass, no failures
  • Maximum number of attempts: 1 or occasionally 2

For IMGs, these filters are applied more aggressively. Charting Outcomes and PD surveys show a recurring pattern:

  • A large fraction of programs explicitly consider failure on a USMLE attempt a reason not to grant an interview to IMGs.
  • For many, “multiple attempts” is a separate hard screen.

So the penalty is binary at many programs: you are either seen or never seen.

Two IMGs with identical final scores:

  • First-pass candidate: Passes the automated filter, appears in the stack.
  • Multi-attempt candidate: Never makes it out of the spreadsheet.

From the outside, it feels like “nobody responded to my application.” But structurally, you were knocked out before any holistic review.
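A screen of this kind is mechanically trivial, which is exactly why it is so unforgiving. The sketch below is a hypothetical version of such a filter: the field names are mine, and real ERAS/program screening criteria vary widely by program.

```python
def passes_automated_screen(app):
    """Hypothetical hard filter of the kind described above.

    'app' uses made-up field names; actual program filter criteria
    and thresholds differ from program to program.
    """
    if app["step1_fails"] > 0 or app["step2ck_fails"] > 0:
        return False  # any failure on either step -> rejected unseen
    if app["step1_attempts"] > 1 or app["step2ck_attempts"] > 1:
        return False  # more than one attempt on any step -> rejected unseen
    return True

first_pass = {"step1_fails": 0, "step2ck_fails": 0,
              "step1_attempts": 1, "step2ck_attempts": 1}
multi_attempt = {"step1_fails": 1, "step2ck_fails": 0,
                 "step1_attempts": 2, "step2ck_attempts": 1}

print(passes_automated_screen(first_pass))     # True: reaches a human reviewer
print(passes_automated_screen(multi_attempt))  # False: same final score, never seen
```

Note that nothing in the filter looks at the final score: the multi-attempt candidate is rejected before their 250 is ever compared to anyone else's.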


7. Does Improvement Over Multiple Attempts Help?

People love to ask: “If I failed with 210 and then improved to 250, does that offset the failure?” The data-driven answer: marginally, but not fully.

Programs like improvement, but they do not ignore the failure.

Think of it this way:

  • Going from 210 (fail or low pass) to 250 shows that you can respond to adversity and master material. Good.
  • But the existence of that original failure still triggers filters and concerns about consistency, test-taking, and knowledge retention.

If you had:

  • First attempt: Fail with 198
  • Second attempt: Pass with 228
  • Step 2 CK: 245

You are clearly stronger than someone who failed and barely passed. But you are not equivalent to:

  • First attempt Step 1: Pass
  • Step 2 CK: 245

The improvement is a modifier, not a reset.

I have seen programs informally “discount” the penalty a bit when the trajectory is sharply upward and the candidate’s later performance (clinical grades, clerkship comments, LORs) is excellent. But the original failure never disappears in the numbers.


8. First Pass vs Multiple Attempts: U.S. vs Non–U.S. IMGs

Another uncomfortable reality from the data: the attempt penalty hits non–U.S. IMGs harder than U.S. IMGs and U.S. DOs.

When you look at match rates at similar scores:

  • U.S. MD with one USMLE failure, strong Step 2 CK, and strong home institution support can still match Internal Medicine or Family Medicine at reasonable rates.
  • Non–U.S. IMG with identical scores and a failure sees a much steeper drop.

Why?

Programs layer risk:

  • Accreditation familiarity
  • Perceived training quality
  • Communication skills
  • Visa needs
  • Exam performance

A U.S. MD with one failure may still be seen as “known quantity with a stumble.” A non–U.S. IMG with one failure can be seen as “unknown training environment + exam inconsistency + visa complexity.”

The result is effectively multiplicative risk, so PDs and filters cut harder. That is why for IMGs the score penalty for multiple attempts is so pronounced.


9. Strategic Implications if You Are an IMG with Multiple Attempts

Data are only useful if they change your behavior. Here is how I would act if I were an IMG facing multiple attempts.

1. Accept the effective penalty

If you had any USMLE failure, stop pretending the final numeric score tells the full story. Assume:

  • Your “market value score” is 10–20 points lower than your printed Step 2 CK, especially if the failure was on Step 2.

So a 250 with a Step 1 fail? Act like you are a 235–240 applicant when building your list. A 238 with a Step 2 fail? Act like you are in the low 220s.

This is not pessimism. It is calibration.

2. Over-apply and over-diversify

Multiple attempts mean less predictable yield. You fight variance with volume and diversity:

  • Apply to more programs than a first-pass IMG with the same score (often 100–150+ for non–U.S. IMGs in IM/FM).
  • Include a heavy proportion of community, less academic, and IMG-heavy programs.
  • Include specialties and tracks that are historically more forgiving (FM, Psych, Preliminary IM) if your primary pathway is risky.

3. Exploit the few places that truly “read everything”

Some community and mid-tier university programs explicitly say they consider the “whole story.” In my experience, these often:

  • Accept many IMGs
  • Are located in less saturated geographies
  • Have full-time faculty who actually review personal statements and MSPEs

You target these with:

  • Clear explanation of the failure (short, factual, no excuses)
  • Emphasis on sustained high performance after the failure
  • Strong letters from U.S. clinical experience showing reliability and clinical competence

4. Avoid unforced additional risks

If you already have a fail, you cannot afford another:

  • Do not take Step 3 lightly if you plan to use it as “proof of improvement.” A fail there is catastrophic.
  • Do not aim for a super-competitive specialty “because it’s my dream” unless you have truly exceptional backing and a realistic parallel plan.

Think probabilistically. Each extra risky decision multiplies your chances of ending up unmatched.


10. Where a First-Pass IMG Should Be Ruthlessly Protective

Flipping the perspective: if you are an IMG who has not yet taken your exams, the data scream one thing:

Do not sit for an exam unprepared just to “see what happens.”

Every additional attempt is not just a temporary problem. It is a permanent structural handicap. The probability loss is real and large.

I’ve seen this play out:

  • Student insists on taking Step 2 CK with practice scores around 215–220 “because my visa timeline is tight.”
  • Scores a 212 or even fails.
  • Retakes, gets to 238. That 238 is now behaving in the market like a low 220s score. A first-pass 238 in the same profile could have been competitive for solid IM programs; the multi-attempt 238 is fighting for the bottom rung of the ladder.

From a pure expected-value perspective, delaying the exam to convert a borderline attempt into a strong first pass is almost always the correct move for IMGs.
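To make the expected-value argument concrete, here is a toy comparison. Every probability below is a made-up illustration (the 60%/40% figures loosely echo the case-study bands earlier); the right numbers depend on the individual applicant, and the timeline cost of delaying is deliberately not modeled.

```python
# All probabilities are illustrative assumptions, not published data.
p_fail_if_rushed = 0.35      # assumed risk of failing with practice scores of 215-220
match_first_pass_238 = 0.60  # assumed match rate for a clean first-pass 238
match_retake_238 = 0.40      # a 238 carrying a failure behaves like a low-220s score

# Option 1: sit now; on a fail, retake and carry the penalty permanently.
ev_take_now = ((1 - p_fail_if_rushed) * match_first_pass_238
               + p_fail_if_rushed * match_retake_238)

# Option 2: delay until a first pass is likely (visa/timeline costs ignored here).
ev_delay = match_first_pass_238

print(round(ev_take_now, 2), ev_delay)  # delaying wins on match probability alone
```

Under these assumptions, rushing the exam costs about 7 percentage points of expected match probability before any timeline benefit is counted against it.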


11. Visualizing the Cumulative Effect: Attempts + Score + IMG Status

Let me pull the threads together with a simplified risk view.

Imagine a “match strength index” where a first-pass non–U.S. IMG with Step 2 CK 250 and no other major red flags in Internal Medicine is set to 100.

Now adjust:

  • Drop ~15 points for a Step 1 failure
  • Drop ~20 points for a Step 2 CK failure
  • Drop ~10–15 points as you move from 250 to the 230s, assuming clean attempts
  • Drop an additional 20–30 points if you aim for a more competitive specialty

You quickly see what happens when you stack disadvantages.


Conceptual Match Strength Index for Non–US IMGs (Internal Medicine Baseline)
Profile | Match Strength Index
250, first pass, IM target | 100
238, first pass, IM target | 85
250, Step 1 fail, IM target | 70
250, Step 2 fail, IM target | 60
238, Step 2 fail, IM target | 50
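The stacking logic can be written as a toy additive function using the listed drops (midpoints chosen by me). Note the hand-set table values apply somewhat steeper combined penalties than plain addition, so the two will not line up exactly; the code is for the shape of the argument, not the exact numbers.

```python
def match_strength_index(step2_in_230s=False, step1_fail=False,
                         step2_fail=False, competitive_specialty=False):
    """Toy additive index; baseline 100 = first-pass 250, IM target.

    Drop sizes are illustrative midpoints of the ranges listed above;
    the hand-set table applies steeper combined penalties than this.
    """
    index = 100
    if step1_fail:
        index -= 15
    if step2_fail:
        index -= 20
    if step2_in_230s:
        index -= 13  # midpoint of the ~10-15 point drop from 250 to the 230s
    if competitive_specialty:
        index -= 25  # midpoint of the extra 20-30 point specialty penalty
    return index

print(match_strength_index())                                    # 100: the baseline
print(match_strength_index(step2_in_230s=True, step2_fail=True)) # 67: stacked penalties
```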

This is why some IMGs with objectively “good” scores still do not match. On paper, “250” sounds strong. In the real market, “250 with Step 2 failure, IMG, visa needed, moderate research, average letters” can easily fall below the match threshold at many programs.


12. The Bottom Line

Three points to walk away with.

  1. The first pass is worth more than you think. For IMGs, a clean first-pass record on all USMLEs is effectively worth 10–20 points of Step 2 CK score in perceived competitiveness. You cannot treat multiple attempts as a minor blemish.

  2. One failure does not end your chances, but it cuts your odds. Data from NRMP, PD surveys, and outcome patterns show roughly a 20–30 percentage point drop in match probability for IMGs with any USMLE failure, even if the final scores are strong. The penalty is harsher for Step 2 CK and in more competitive specialties.

  3. Strategy must adjust to the penalty. If you already have multiple attempts, you need to act as if your “market score” is significantly lower: over-apply, target more forgiving programs and specialties, avoid further risky exams, and highlight upward trajectory. If you have not yet tested, you protect that first attempt like your career depends on it—because statistically, it does.
