
Only about 18% of couples in the NRMP Match actually end up in the same program and same specialty.
That single number shocks most people, because all you usually hear is, "We couples matched, it worked out." What you do not hear is the statistical reality of how it worked out: same hospital vs. same city, first choices vs. compromise, and exactly what your result on Match Day says about your odds, your planning, and your next moves.
You are not just in the Match. You are in a two-dimensional optimization problem with non‑independent rank lists. So let’s treat it like one.
1. The baseline: what the data actually shows for couples
| Group | Match rate (%) |
|---|---|
| Individual applicants | 81 |
| Couples (at least one partner matched) | 95 |
| Couples (both partners matched) | 81 |
The NRMP’s own data over the last decade is remarkably consistent:
- ~95% of couples have at least one partner match.
- ~81–83% of couples have both partners match somewhere.
- But only a minority match to their “ideal” configuration.
Now slice that further. Using NRMP Charting Outcomes data and program-level fill patterns, what I actually see in the numbers (and in real couples) is roughly:
- 15–20%: both partners in same program + same specialty
- 25–35%: same institution, different program or specialty
- 25–30%: same city / metro area, different institutions
- 10–20%: different cities, but both matched
- 5–10%: one unmatched (partial outcome)
- 3–5%: both unmatched
No, NRMP does not publish that exact breakdown in a neat table. You have to infer it from couples participation, program‑level fill data, and reported pairing types. But it tracks very closely to what I have seen in real couples over multiple cycles.
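If you want to check whether those inferred bands even reconcile arithmetically with the published aggregates, it takes a few lines of Python. To be explicit: the band edges below are just the rough estimates from this section, not NRMP data.

```python
# Sanity check: are the inferred outcome bands above arithmetically
# compatible with the published aggregates (~95% at least one partner
# matched, ~81-83% both matched)? Band edges copied from the estimates
# in this section; none of this is official NRMP data.
bands = {
    "same program + specialty":       (0.15, 0.20),
    "same institution, diff program": (0.25, 0.35),
    "same metro, diff institutions":  (0.25, 0.30),
    "different cities, both placed":  (0.10, 0.20),
    "one unmatched":                  (0.05, 0.10),
    "both unmatched":                 (0.03, 0.05),
}
matched = [k for k in bands if "unmatched" not in k]
lo = sum(bands[k][0] for k in matched)
hi = sum(bands[k][1] for k in matched)
print(f"implied 'both matched' range: {lo:.0%} to {hi:.0%}")
# -> implied 'both matched' range: 75% to 105%. Wide, and the upper
#    edges sum past 100% (the bands are rough, not a partition), but
#    the range does bracket the published ~81-83%.
```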
Key point:
“Couples matched” does not automatically mean “same hospital, dream programs, everyone happy.” It means the algorithm found some stable pair outcome for your combined rank list. Your job on Match Day is to decode what that outcome actually implies.
2. How the couples algorithm actually constrains your result
The steps, roughly:
1. Start with the couple's joint pair rank list.
2. Try the highest-ranked pair row.
3. Do both programs have open spots (or applicants they would displace for you)? If not, reject that entire pair row and try the next pair down the list.
4. If yes, tentatively assign both partners.
5. Any displacements? If so, re-run the displaced applicants through their own rank lists.
6. When every displacement resolves, you have a stable pair match.
You are not in two separate matches that “try” to keep you together. You are in one joint match with pairwise ranks.
A pair entry on your list might look like:
- Row 1: (Program A – IM, Program A – Pediatrics)
- Row 2: (Program A – IM, Program B – Pediatrics)
- Row 3: (Program C – IM, Program C – IM, advanced + prelim)
- …
- Last row: (No rank, Program D – SOAP only backup)
The algorithm processes couples this way (a runnable sketch follows this list):
- It looks at your top pair row.
- It checks if both of you can tentatively be placed in those programs.
- If not, that entire pair combination is rejected, and it moves down to the next pair row.
- It keeps going until it finds a stable pair where:
- Both programs have spots,
- Both programs rank you high enough,
- And placing you does not destabilize already tentatively matched applicants beyond what can be resolved.
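Here is a minimal, self-contained sketch of that pair-by-pair search. This is an illustration of the logic above, not the NRMP's actual Roth-Peranson implementation: every program name, capacity, and rank order is hypothetical, and the real algorithm fully re-processes displaced applicants rather than just checking displacement once.

```python
from dataclasses import dataclass, field

# Self-contained sketch of the couples pass in a deferred-acceptance
# match. This version only checks whether each program has room or
# would bump its weakest tentatively held applicant.

@dataclass
class Program:
    name: str
    capacity: int
    rank_order: list                       # applicants ranked, best first
    tentative: list = field(default_factory=list)

    def would_accept(self, applicant: str) -> bool:
        if applicant not in self.rank_order:
            return False                   # the program did not rank you
        if len(self.tentative) < self.capacity:
            return True                    # open spot
        # Full: would you displace the weakest tentatively held applicant?
        weakest = max(self.tentative, key=self.rank_order.index)
        return self.rank_order.index(applicant) < self.rank_order.index(weakest)

def find_stable_pair(pair_rank_list, programs):
    """Walk the couple's joint list top to bottom; return the first row
    where BOTH programs would tentatively take their partner."""
    for row, ((a, prog_a), (b, prog_b)) in enumerate(pair_rank_list, start=1):
        pa, pb = programs[prog_a], programs[prog_b]
        if pa.would_accept(a) and pb.would_accept(b):
            pa.tentative.append(a)
            pb.tentative.append(b)
            return row                     # the row that produced the match
    return None                            # no feasible row: both unmatched

# Hypothetical couple: "Avery" (IM) and "Blake" (Peds).
programs = {
    "A-IM":   Program("A-IM", 1, ["Casey", "Avery"], ["Casey"]),  # already full
    "A-Peds": Program("A-Peds", 1, ["Blake"]),
    "B-IM":   Program("B-IM", 2, ["Avery"]),
}
pairs = [
    (("Avery", "A-IM"), ("Blake", "A-Peds")),  # row 1: same institution
    (("Avery", "B-IM"), ("Blake", "A-Peds")),  # row 2: same city, split
]
print(find_stable_pair(pairs, programs))
# -> 2: row 1 fails because A-IM is full and prefers Casey over Avery
```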
Translation: your flexibility is your strongest statistical lever. Couples who only rank “same program, both categorical, both top‑tier” massively compress their possible outcomes and push themselves into the 10–20% risk band of “geographically split or one unmatched” if their profiles do not justify that level of selectivity.
3. Interpreting your result on Match Day: categories that matter
You open your email. You see where you both matched. The raw question you should be asking is:
“Relative to what our numbers justified, is this outcome above, at, or below expectation?”
Let’s break the most common scenarios.
Scenario A: Both matched, same program, preferred specialties
This is the statistically best tier of outcome. For many couples, this sits in the top 20–30% of what was realistically possible given their scores, specialties, and geography.
What this usually signals:
- Your joint rank list had adequate breadth.
- Both partners had competitive profiles for their fields at that program level.
- The program likely had:
- Multiple positions per specialty,
- A history of taking couples,
- Or at least no resistance to couples (some PDs quietly hate the complication).
You should interpret this as: you played the probabilities correctly. Unless you were wildly overqualified (think two 260+ USMLEs going to a mid‑tier community program), this is not a “you could have done far better” situation. It is basically the Pareto‑optimal point of couples outcomes.
Scenario B: Both matched, same institution, different programs
This is the workhorse couples outcome. Same hospital, different departments, maybe even different schedules—but logistically, it is a win.
From the data side, this tends to happen when:
- One partner is in a more competitive specialty (e.g., Derm, Ortho, ENT).
- The other is in a relatively less competitive one (e.g., IM, Peds, FM).
- The stronger program “anchor” is happy to take both, but not always into the same exact training track.
Interpreting this:
- If your partner is in Derm at a major academic center and you are in IM at the same institution, that outcome is actually near the top of the distribution.
- If you both are in mid‑range competitiveness fields and still ended up in different programs despite ranking “same program” high, that suggests:
- The same-program pairing was not strongly favored by at least one department,
- Or your list was short on same-program combinations, forcing the algorithm down to “same institution / different program” rows.
It is still a success, statistically. But it can signal you were closer to your ceiling than you might want to admit.
Scenario C: Both matched, same city, different institutions

This is where emotions and data usually diverge. Many couples feel this is a “near miss” or a disappointment. The data says: this is the median reality for a lot of couples who insisted on a particular metro area.
Dump enough constraints into the system—location, programs you actually like, specialties—and this is often the stable solution that survives.
The signal here:
- You likely overspecified geography.
- You may have under-ranked solid same-program or same-institution options in less “desirable” cities.
- Programs in your dream city were willing to take you individually but not bind themselves to guarantee both, especially if one partner’s specialty was much more competitive.
On Match Day, interpret this as:
- A solid result if your stats were average or slightly below average for your target programs.
- A mild underperformance if both of you were statistically above the median for those institutions and still split between them.
Scenario D: Both matched, different cities
This is where people start asking if couples matching “actually worked.” Mathematically, yes. Emotionally, harder.
Here’s what this usually indicates:
- Your joint list started with same‑program and same‑city rows, then gradually added “geographically split” combinations down the list.
- Programs were willing to take you as individuals, but the constraints of taking you as a pair proved too heavy until far down the list.
- One or both of you were in competitive specialties and did not have enough geographic or program‑tier flexibility.
Look at your partner’s specialty and the relative program names. Often what happened is:
- One partner matched pretty close to the top of their solo viability curve.
- The other partner “went where the data allowed” once you introduced the couples constraint.
From a data perspective, this is not a “failure of the algorithm.” It’s the logical output of a constrained optimization where your joint preferences did not overlap enough with program preferences.
Scenario E: One matched, one unmatched (or SOAP only)
| Couples outcome | Approx. share (%) |
|---|---|
| Both matched, same program/institution | 50 |
| Both matched, same city | 25 |
| Both matched, different cities | 10 |
| One unmatched | 8 |
| Both unmatched | 3 |
The harshest outcome. But it is not random.
Most one-unmatched couples have one or more of these features:
- One partner in a very competitive specialty with a modest or weak application.
- Aggressive geographic restriction (“only West Coast,” “only within 2 hours of family”) with limited ranks.
- Very short rank list, often <10 true viable pairings.
- Underestimation of just how much the weaker application drags down the pair’s viable combinations.
If this is you, the data story is blunt: your joint risk tolerance was miscalibrated. You ranked as if you were both statistically strong, but at least one partner was not.
How to interpret it on Match Day:
- The matched partner: usually landed roughly where their numbers would suggest if they had gone solo, but often with slightly fewer options.
- The unmatched partner: effectively bore the risk of the couple’s constraints. Their probability of matching solo was probably 10–20 percentage points higher than it was inside the couple structure.
4. Were your expectations realistic? A quick sanity check
| Specialty | Approx Step 2 Median (Matched US MD) | Relative Competitiveness |
|---|---|---|
| Internal Med | 245–248 | Low–Moderate |
| Pediatrics | 243–246 | Low–Moderate |
| Family Med | 239–242 | Low |
| General Surgery | 248–252 | Moderate–High |
| Anesthesia | 247–250 | Moderate |
| Dermatology | 255–260+ | Very High |
Before you judge your outcome, calibrate against three anchor points:
- Specialty competitiveness. Derm vs Peds is not a fair fight. If one of you is in a specialty where the median matched Step 2 is 255+ and your score is 240, you were targeting the tail of the distribution. The couples constraint magnifies that.
- Geographic scope. Couples who say "anywhere in the US" have dramatically smoother match curves than couples who demand "NYC or bust." That is not romantic. It is statistical reality.
- Program tier vs CV. Two mid‑tier applicants ranking almost exclusively top‑20 academic powerhouses are functionally choosing a riskier distribution. With couples, the "tails" of that distribution get fatter. That is where split‑city or one‑unmatched outcomes live.
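One crude way to quantify the first and third anchors: compute each partner's gap from their specialty's matched median, with midpoints approximated from the table above. The couple below is invented; the point is that the pair is only as flexible as its weaker gap.

```python
# Rough pair-level calibration: each partner's gap from the specialty's
# matched Step 2 median (midpoints approximated from the table above).
# A large negative gap means you are targeting the tail of that
# specialty's distribution, and the couples constraint magnifies it.
APPROX_MEDIAN = {
    "Internal Med": 246, "Pediatrics": 244, "Family Med": 240,
    "General Surgery": 250, "Anesthesia": 248, "Dermatology": 257,
}

def pair_calibration(partner_a, partner_b):
    """Each partner is (specialty, step2_score). Returns per-specialty
    gaps; the pair is only as flexible as its weaker (minimum) gap."""
    gaps = {spec: score - APPROX_MEDIAN[spec]
            for spec, score in (partner_a, partner_b)}
    return gaps, min(gaps.values())

# Invented couple: a Derm applicant at 240 paired with an IM applicant at 252.
gaps, limiting = pair_calibration(("Dermatology", 240), ("Internal Med", 252))
print(gaps, "| limiting gap:", limiting)
# -> {'Dermatology': -17, 'Internal Med': 6} | limiting gap: -17
```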
If you look at your outcome and your gut says "we did worse than expected," but your CVs and the table above say you were never competitive for your top 5 pairings, trust the numbers, not the feelings.
5. Translating your couples outcome into next steps

Once you interpret the data, you have to actually act like adults and plan. Different outcomes call for different strategies.
If you both matched in the same program or institution
This is the relatively easy mode.
- Use your insider status: attend joint orientations, meet PDs, make yourselves visible as a couple who is invested in the institution. Programs that know you as a stable pair are more accommodating with schedules and planning.
- Data angle: couples at the same institution usually have lower attrition and better retention as faculty. Many institutions know this and will quietly invest in keeping both of you happy.
If you both matched in the same city, different institutions
Focus on logistics and minimizing friction:
- Commute analysis: quantify commute times, call schedules, and likely fatigue. A 30‑minute difference each way multiplies quickly.
- Off‑cycle requests: use data, not emotion, when asking chiefs for specific rotations or call setups. “Our combined call nights are 11/30 this month; is there a way to cross‑cover to avoid 2 shared 28‑hour days?” is a lot more persuasive than “we want more time together.”
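If you want the numbers behind that kind of request, a few lines will generate them. The call dates below are invented for illustration.

```python
# Quantify shared call burden before making the ask; the call dates
# below are invented for illustration (day-of-month for each partner).
partner_a_call = {1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 30}
partner_b_call = {2, 4, 8, 10, 14, 19, 23, 27}

shared = sorted(partner_a_call & partner_b_call)
combined = len(partner_a_call | partner_b_call)
print(f"combined call nights: {combined}/30; overlapping nights: {shared}")
# -> combined call nights: 16/30; overlapping nights: [4, 10, 19]
# Now the request writes itself: "can we cross-cover nights 4, 10, 19?"
```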
If you both matched in different cities
You are in a two‑ or three‑year optimization problem now, not a one‑day disaster.
- Look at fellowship patterns: where do graduates from each program usually go? You want to target fellowships that statistically align geographically.
- Track board pass and fellowship match rates. Programs with strong placement to your preferred joint city give you a higher‑probability reunion vector.
- Start early: at the beginning of PGY2, start mapping where you can realistically converge in 2–3 years.
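One way to make that targeting concrete, sketched below: score program pairs by how often their graduates' fellowship destinations overlap. The placement lists are invented; in practice you would compile them from program websites or alumni pages.

```python
from collections import Counter

def shared_fellowship_cities(placements_a, placements_b, top_n=3):
    """placements_*: fellowship cities of recent graduates, one entry
    per graduate. Returns cities in both programs' top-N destinations."""
    top_a = {city for city, _ in Counter(placements_a).most_common(top_n)}
    top_b = {city for city, _ in Counter(placements_b).most_common(top_n)}
    return top_a & top_b

# Invented placement histories for each partner's program.
program_a = ["Chicago", "Boston", "Chicago", "NYC", "Chicago", "Boston"]
program_b = ["Houston", "Chicago", "Chicago", "NYC", "NYC"]
print(shared_fellowship_cities(program_a, program_b))
# -> {'Chicago', 'NYC'}: the highest-probability reunion targets
```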
I have seen couples go from Boston + Houston as interns to both in Chicago for fellowship. It looked lucky from the outside. It was not. They targeted programs where fellowship pipelines overlapped.
If one of you is unmatched
| Time since Match Week | Relative availability of good options (Match Week = 100) |
|---|---|
| Match Week | 100 |
| 1 Month | 80 |
| 3 Months | 55 |
| 6 Months | 35 |
| 12 Months | 20 |
The window of best options decays fast.
Typical viable paths:
- SOAP position into a prelim year (IM, surgery) or a less competitive categorical slot.
- Research year with a department that has historically converted research fellows to residents.
- Reapply next cycle with a revised strategy: broader geography, slightly less competitive specialty, or switching from surgical to medical specialty.
Do not both throw away good matches just to be “together” geographically for one year. Statistically, discarding a solid categorical position multiplies risk without a guaranteed better joint outcome next year.
Occasionally, the best move is:
- Matched partner proceeds with residency.
- Unmatched partner does a structured gap year (research, MPH, etc.) in the same city to preserve the relationship and strengthen the file.
- Reapply with clearer specialty targeting and broader lists.
That choice is emotional, but the data frame helps: you are trading 1 year of misalignment for a much higher multi‑year stability later.
6. Four quiet variables that strongly affected your outcome
There are a few things almost no one counts explicitly, but they show up in the distributions:
- Program couples culture. Some institutions are "couples institutions." They quietly favor them and have internal memory of good outcomes. Others view couples as administrative headaches. If your list did not overweight the former, your probability of tight geographic matches dropped.
- Capacity asymmetry between specialties. A large IM program with 30+ interns has huge flexibility; a Derm program with 3 interns has almost none. Couples where the highly constrained partner is in a tiny program inherently face tighter probability funnels.
- Visa status. If one of you is an IMG requiring a visa and the other is a US MD, the joint probability surface warps. Many programs quietly will not touch certain visa categories, which slashes the feasible pair combinations.
- How far down your list your outcome was. Most couples do not remember or track exactly which row produced their match. They should.
If you matched on row 2 out of 150, you were essentially in the top ~1.3% of your own probability space. If you matched on row 86, you were in the lower half. That says more about strategy than Match “luck.”
7. For M3s and M4s reading this before they rank: what you should actually optimize

Since this is a Match Day article, I will keep this short, but if you are reading early, this is where the data screams:
- Do not oversaturate your list with “same program, same city, top‑tier only” combinations if one or both of you are statistically average for that tier. Add them, but overweight realistic ones.
- Explicitly include:
- Same program, any city you can tolerate.
- Same institution, different program.
- Same city, different institution.
- Only then geographically split combinations.
- Build a simple spreadsheet (a minimal script version follows this list) with:
- Columns: City, Program A name, Program B name, Tier (rough), Distance apart, Known couples‑friendly, Historical fill rate.
- Score each pairing 1–5 on realism. If most of your top 20 pairs are 1 or 2 on realism, you are asking for trouble.
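If you would rather script it than spreadsheet it, here is the same idea in a few lines. Every row is a hypothetical example; the payoff is the brittleness check at the end.

```python
import csv
from io import StringIO

# The pairing sheet as code; every row below is a hypothetical example.
SHEET = """city,program_a,program_b,tier,miles_apart,couples_friendly,realism
Chicago,NW-IM,NW-Peds,top,0,yes,2
Chicago,NW-IM,UIC-Peds,mid,12,yes,4
Columbus,OSU-IM,OSU-Peds,mid,0,yes,5
Boston,MGH-IM,MGH-Peds,top,0,unknown,1
"""

rows = list(csv.DictReader(StringIO(SHEET)))
top = rows[:20]  # a real joint list should have far more than 20 rows
risky = sum(1 for r in top if int(r["realism"]) <= 2)
print(f"{risky}/{len(top)} of your top pairings score 1-2 on realism")
# -> 2/4 of your top pairings score 1-2 on realism
# If most of your top 20 land at 1 or 2, your solution space is brittle.
```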
The couples algorithm is not sentimental. It is a constrained optimization engine. If you build a sparse, brittle solution space, you will feel that on Match Day.
FAQ
1. We “couples matched” but ended up in different cities. Did the algorithm fail us?
No. The algorithm did exactly what your rank lists told it to do. It searched down your pair combinations until it found the highest-ranked stable option that both programs would accept. If that ended up being a split‑city pair, that means all same‑city, same‑institution, or same‑program combinations above it were not simultaneously acceptable to the programs on both sides. This is almost always a function of list construction and competitiveness, not algorithm malfunction.
2. How can we tell if our outcome was “good” or “bad” for our stats?
Compare your actual programs to where people with similar numbers from your school and specialty typically match. If you are at or slightly above that historical pattern, your outcome is good, even if it is not your dream city. If you landed well below that historical trend, you probably over‑constrained geography or under‑ranked realistic options. In couples, “good or bad” has to be measured against the joint feasibility space, not just individual prestige.
3. Would we have matched better if we did not couples match?
For many pairs, one partner would likely have done slightly better solo (more geographic or program prestige upside), and the other would have done slightly worse or stayed similar. On average, the couples constraint narrows the upside tails for both of you but protects against extreme geographic separation if you rank carefully. If one of you is significantly weaker on paper, that partner usually bears more of the risk when you couple, and the stronger partner gives up some upside.
4. We are M3s planning to couples match. What is the single biggest mistake to avoid?
The most damaging mistake is building a short, top‑heavy, geographically rigid rank list that assumes both of you are stronger applicants than you actually are. Statistically, that is how you slide into the higher‑risk bands of split‑city or one‑unmatched. You avoid that by: soberly assessing your competitiveness, diversifying cities, including same‑institution and same‑city alternatives, and not treating “NYC/LA/Boston only” as a plan. That is not a plan. That is a high‑variance gamble.
Key points to walk away with:
- Couples outcomes are not binary “together or not.” They fall into structured patterns—same program, same institution, same city, split city—that line up very predictably with competitiveness and list construction.
- Your Match Day result is a direct reflection of how generous or constrained your joint probability space was. The algorithm did not improvise; it followed your instructions.