
The biggest lie in post-bacc marketing is “95% of our students get into medical school.”
You cannot trust that number by itself. Not at all.
If you want to know whether a post-bacc program’s outcomes are actually strong, you need to dissect what’s behind those percentages: who they’re counting, what schools they’re getting into, and how much the program is really contributing vs. the student’s own profile.
Here’s how to evaluate outcomes like someone who has seen the tricks before—and doesn’t fall for them.
1. Start With This: “95% of who, exactly?”
Whenever a program advertises an “acceptance rate,” your first question should be: 95% of who?
Most programs do at least one of these things (sometimes all three):
- Only count “linkage” or committee letter applicants
- Exclude students who dropped out, switched paths, or applied later
- Only consider MD or only DO, depending on what makes the number look better
You want numbers that include the whole story, or as close as you can get.
Ask the program directly:
“Is your stated acceptance rate based on:
- all students who start the program,
- all who finish,
- or only those who receive a committee letter or linkage?”
“Over what time frame is that acceptance rate measured (1 cycle, 2 cycles, 3+ years)?”
If they can’t or won’t answer clearly, I’d treat their outcomes with suspicion.
Here’s how those differences change the reality:
| Denominator used | Acceptance rate |
|---|---|
| All starters | 55% |
| All completers | 70% |
| Committee-letter applicants only | 92% |
Same program. Three very different stories.
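To make the trick concrete, here is a minimal Python sketch showing how one set of raw counts produces all three headline numbers. Every count is invented for illustration; the point is only that the denominator, not the program, changes the story.

```python
# Hypothetical cohort; all counts are invented for illustration.
starters = 100          # everyone who enrolled
completers = 79         # everyone who finished the program
letter_holders = 60     # students granted a committee letter
accepted = 55           # ultimately accepted to any MD/DO school

# Assume (generously for the program) that every acceptance
# came from a committee-letter student.
accepted_with_letter = 55

def rate(accepted_count, denominator):
    """Acceptance rate as a whole-number percentage of the chosen denominator."""
    return round(100 * accepted_count / denominator)

print(rate(accepted, starters))                    # 55  -> "all starters"
print(rate(accepted, completers))                  # 70  -> "all completers"
print(rate(accepted_with_letter, letter_holders))  # 92  -> "committee letter only"
```

Same 55 accepted students every time; only the denominator moves.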
A truly transparent post-bacc will tell you:
- Percent of all entrants who eventually get into ANY MD/DO school
- Percent of completers who eventually get in
- Percent of committee letter or linkage participants who get in
If they only advertise the last one, that’s cherry-picking.
2. Look at Where Graduates Are Actually Getting In
“Strong outcomes” are not just about whether people get in, but where they end up.
You should ask for (and care about):
- The breakdown of MD vs DO vs other (dental, PA, etc.)
- The types of medical schools:
  - US MD
  - US DO
  - Caribbean or international schools
Here’s the pattern I’ve seen in real conversations with former students:
- Program A bragged “92% acceptance!”
- When pressed: 40% US MD, 35% DO, 17% Caribbean, the rest “other health professions.”
- Program B listed “70% acceptance” with no fanfare.
- But all were US MD or DO, with multiple acceptances per student and several at competitive schools.
If a program won’t share a school list or at least a distribution (even anonymized), that’s a red flag.
Ask this explicitly:
“In the last 3–5 years, what percentage of your accepted students matriculated at:
- US MD schools
- US DO schools
- Caribbean/international programs?”
“Can you share some examples of schools that students frequently attend?”
You’re not trying to chase “Top 10 only” nonsense. You’re trying to see if people end up at solid, accredited US programs or if the outcomes are being padded with less desirable options that med school advisors quietly warn about.
3. Separate “Career Changer” From “Record Enhancer” Outcomes
Lumping all students together hides reality.
Career-changer post-baccs (no prior prereqs) and record-enhancer programs (repairing a weak GPA) have very different baselines and outcome expectations.
If a program takes:
- mostly 3.7+ GPA students who just need prereqs → outcomes should be excellent
- mostly 2.7–3.1 GPA students trying to recover → outcomes will naturally be lower, even if the advising is stellar
What matters is not the raw percentage, but whether students like you are succeeding.
Ask:
“What are the typical stats (GPA, background) of:
- your career-changer cohort?
- your record-enhancer cohort?”
“Do you track outcomes separately for those groups? If so, what are they?”
If you have a 2.9 GPA and they proudly present outcomes based mainly on career changers who started at 3.8, that data is not relevant to you.
4. Look for MCAT and GPA Impact, Not Just Final Acceptances
You’re not just buying an acceptance rate. You’re buying:
- Stronger academic record
- A better MCAT outcome
- Better positioning overall
A good program should be able to show that students actually improved, not just that some percentage eventually got in.
What to look for:
- Average science GPA in the program (not just cumulative GPA)
- Change in GPA trend (e.g., “most students show 3.6+ in post-bacc science coursework”)
- MCAT score distributions for those who take it during/after the program
Something like this is credible:
| Stage | Science GPA |
|---|---|
| Before program | 2.9 |
| After 1 year | 3.4 |
| After completion | 3.6 |
If a program never mentions MCAT support, doesn’t track MCAT data, or has no idea how students perform after their classes, that tells you they’re focused on enrollments, not outcomes.
Ask:
- “What is the typical science GPA in your program among those who complete it?”
- “Do you offer structured MCAT prep or just ‘recommended resources’?”
- “Do you track MCAT score ranges for students who go through your program?”
If the answer is vague, you already know how much they’re really investing in outcomes.
5. Understand the Committee Letter and Linkage Game
Committee letters and linkages are often the backbone of “strong outcomes” claims.
But here’s the trick:
Programs often only give committee letters or linkage eligibility to students who already look competitive.
So when you hear “100% of students with a committee letter were accepted,” ask yourself: how many students qualified for that letter?
The funnel usually works like this:

- All students enter the program
- Meet the GPA/MCAT threshold?
  - Yes: eligible for a committee letter → apply to med school → counted in the outcome stats
  - No: no letter, or advised to delay applying → often not counted in the outcome stats at all
You need the actual funnel numbers:
- How many students started?
- How many:
  - Finished the program?
  - Qualified for a committee letter or linkage?
  - Applied without the committee letter?
  - Delayed application?
You should ask:
“What percentage of students who start the program eventually:
- receive a committee letter,
- apply with that letter,
- and are accepted?”
“What criteria do you use to decide who gets a committee letter or can use a linkage agreement?”
If linkages are a big selling point, demand clarity:
- “How many students actually use the linkage each year?”
- “How many successfully matriculate through the linkage vs regular applications?”
Big, prestigious linkages that only 2–3 people per year can realistically use are not a program-wide outcome. They’re a perk for a tiny subset.
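If a program will share even rough counts, you can reconstruct the funnel yourself and compare the marketed rate with the one that matters. A small sketch with invented numbers:

```python
# Hypothetical funnel; every count is invented for illustration.
funnel = {
    "started": 50,
    "completed": 42,
    "committee_letter": 30,
    "applied_with_letter": 28,
    "accepted": 27,
}

# The marketed number: acceptances / committee-letter applicants.
marketed = 100 * funnel["accepted"] / funnel["applied_with_letter"]

# The number you actually care about: acceptances / everyone who started.
honest = 100 * funnel["accepted"] / funnel["started"]

print(f"marketed: {marketed:.0f}%")  # marketed: 96%
print(f"honest:   {honest:.0f}%")    # honest:   54%
```

A "96% with committee letter" claim and a 54% chance for the average entrant can describe the exact same cohort.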
6. Ask For Cohort-Level, Not Just “Success Story” Anecdotes
Every program has a handful of star students they’ll happily parade in front of you—multiple top-20 acceptances, scholarships, the whole thing.
You’re not evaluating their top 5%. You’re evaluating the entire cohort.
Ask for cohort-level info over multiple years:
“What’s the size of each entering class?”
“How many of those typically:
- complete the program,
- go on to apply to medical school,
- and eventually matriculate?”
“Can you share a multi-year summary—e.g., over the last 5 years—of:
- total students,
- total applicants,
- total matriculants?”
That data might look something like this:
| Cohort | Entering class size |
|---|---|
| Year 1 | 40 |
| Year 2 | 45 |
| Year 3 | 50 |
| Year 4 | 52 |
| Year 5 | 55 |
From there you'd want to see that, of those 40–55 students per year, maybe 60–70% actually ended up in med school. That's far more honest and useful than "Our linkage to X Top School!" on the brochure.
7. Talk to Recent Alumni (Not Just the Ones They Hand-Pick)
You learn quickly how real a program’s outcomes are when you talk to people who finished in the last 1–3 years.
Ask them:
- Did the advising help you build a realistic school list?
- Did the committee letter process feel supportive or like gatekeeping?
- How honest were they about your chances based on your actual stats?
- How much of your success do you attribute to:
- structure and coursework,
- advising and letters,
- vs. things you did on your own (MCAT prep, extra clinical work, gap year)?
Pay attention to:
- Whether alumni felt “sorted” early into likely-success vs “don’t apply yet” groups
- Whether the program was open about outcomes or defensive when questioned
- Whether anyone quietly mentions lots of classmates who never applied or who went Caribbean and weren’t mentioned in info sessions
Use LinkedIn, school-specific Facebook groups, Reddit (with caution), and premed forums to find and message people. Short, respectful outreach like:
“Hi [Name], I’m considering [Program] and saw you recently completed it. Would you be open to sharing your honest experience with the program and how it affected your med school application?”
You’ll get the uncensored version fast.
8. How to Compare Programs Side-by-Side (Quick Framework)
Here’s a simple comparison approach I use when helping students choose:
Score each program 1–5 in these areas:
Transparency of Outcomes
- 1 = vague claims, no numbers
- 5 = multi-year data, MD/DO breakdown, clear denominators
Fit for Your Profile (career-changer vs record-enhancer)
- 1 = outcomes based on a totally different group than you
- 5 = strong outcomes specifically for your category
Academic Lift
- 1 = no clear GPA/MCAT support or tracking
- 5 = consistent 3.5–3.7+ post-bacc GPA and strong MCAT outcomes with support
Selectivity and Support Structure
- 1 = anyone admitted, little guidance, outcomes weak
- 5 = selective intake, mandatory advising, structured prep
True Med School Pipeline (not just marketing)
- 1 = big claims, few concrete med school destinations shared
- 5 = clear track record to multiple US MD/DO schools, realistic expectations
Do this for 3–5 programs you’re serious about. Add up the scores. Then ask yourself: which program is best for someone with my numbers, my story, my timeline?
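The scoring step is easy to tabulate. A quick sketch with invented scores for three hypothetical programs, one 1–5 score per criterion in the order listed above:

```python
# Invented 1-5 scores for three hypothetical programs,
# in order: transparency, fit, academic lift, support, pipeline.
programs = {
    "Program A": [2, 3, 3, 2, 2],
    "Program B": [5, 4, 4, 4, 5],
    "Program C": [4, 2, 3, 4, 3],
}

# Total each program out of 25 and rank from best to worst.
totals = {name: sum(scores) for name, scores in programs.items()}
best = max(totals, key=totals.get)

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}/25")
print("Best fit on paper:", best)
```

The total is a tiebreaker, not a verdict: a program that scores 5 on "fit for your profile" but 3 overall may still beat a famous name that scores 1 on fit.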
The “highest ranked” or most famous name is not always the one with the strongest outcomes for you.
FAQ
1. Is a post-bacc “80%+ acceptance rate” actually good?
It can be—but only if the denominator is honest. If 80% refers to all students who start or at least all who complete the program and eventually apply, that’s strong. If 80% is only among a filtered subset with committee letters or linkages, it tells you almost nothing. You want: “Of the X students who finished and applied within 1–3 years, Y% ultimately matriculated to US MD/DO schools.”
2. How much weight should I put on linkage agreements?
Less than the marketing implies. Linkages are usually restricted to students with very strong performance (and sometimes high MCATs) and may have strict timelines. Many students either do not qualify or choose not to use them. Strong linkages are a bonus, not the core reason to pick a program. Focus on overall outcomes and support; if you later qualify for a linkage, great.
3. Should I avoid programs that send students to Caribbean schools?
Not automatically, but be skeptical if Caribbean placements are quietly lumped into “medical school acceptance” stats. A credible program will tell you clearly: X% US MD, Y% DO, Z% Caribbean. If a large chunk are Caribbean and you’re aiming for US MD/DO, that’s a sign the program doesn’t consistently elevate students to where you probably want to be.
4. How do I know if a program is right for a low GPA (e.g., 2.7–3.0)?
You need data specifically for record-enhancers. Ask what their incoming GPA range looks like, what post-bacc GPA students usually achieve, and what fraction of students starting with sub-3.0 ultimately get into US MD/DO. If they brag about outcomes while quietly revealing that most students start at 3.5+, that’s not your program.
5. Are DIY (community college/university extension) post-baccs worse for outcomes than formal programs?
Not inherently. Formal programs tend to have better advising, committee letters, and sometimes stronger school reputations. But I’ve seen plenty of students with excellent DIY coursework, strong MCATs, and thoughtful advising (even paid private advising) do very well. Formal programs usually have cleaner data to show; with DIY, your “outcomes” depend more heavily on your own discipline and strategy.
6. What’s a realistic expectation for a solid post-bacc’s med school outcomes?
For a reasonably selective, well-run program with advising and a committee letter: 60–80% of students who complete the program and apply within a few years getting into US MD/DO is solid. Above 80% is possible, especially for career-changer-heavy programs with strong applicants. Once numbers creep above 90–95%, you should assume heavy filtering of who they count.
7. If programs won’t share detailed outcome data, should I still consider them?
I’d be cautious. Sometimes newer programs do not have enough years of data yet, which is fair. But if a program has existed for years and still only has vague claims with no breakdown by MD/DO, no clarification of denominators, and no willingness to answer specific questions—that usually means the numbers are not as impressive as the marketing suggests. There are enough transparent programs out there that you don’t need to gamble on opaque ones.
The bottom line:
- Don’t trust a single acceptance percentage without dissecting who’s counted and where they went.
- Focus on outcomes for students like you—career-changer vs record-enhancer, your GPA range, your goals.
- Prioritize programs that share multi-year, detailed, MD/DO-focused data and whose alumni can describe real, consistent support—not just hype.