
How Many Clinics Do I Need for a Meaningful Pilot Study?

January 7, 2026
15 minute read


The usual advice about “start small and just get one clinic to try it” is wrong for meaningful pilot studies.

If you’re serious about building a medical startup after residency, one-clinic pilots usually give you just enough data to fool yourself—and not enough to convince anyone else.

Let me walk you through how many clinics you actually need, by scenario, with numbers you can show to a skeptical medical director, a CTO, or a VC who’s seen this movie 100 times.


The Real Question: What Are You Trying to Prove?

You don’t pick the number of clinics out of thin air. You pick it based on what you need the pilot to prove.

Broadly, pilots in medical startups usually aim at one (or more) of these:

  1. Does it work at all? (clinical or operational effect)
  2. Will real clinicians actually use it? (adoption and usability)
  3. Does it work outside of your “friend’s clinic”? (generalizability)
  4. Can it integrate with real-world workflows and systems? (EHR, billing, staffing)
  5. Is there a business case strong enough for a larger contract or seed/Series A round?

The right number of clinics depends mainly on:

  • How variable your outcome is (e.g., blood pressure vs no‑show rates)
  • How complex the workflow is
  • How much statistical vs directional evidence you need
  • Who you need to convince next (hospital execs, payers, investors, regulators)

Fast Answer: Typical Clinic Counts by Goal

Here’s the short version before we unpack it.

Recommended Clinics by Pilot Goal

  Pilot Goal                                | Typical Clinic Count
  Basic feasibility / proof of concept      | 1 clinic
  Usability + workflow validation           | 2–3 clinics
  Early clinical/operational signal         | 3–5 clinics
  Strong business case to sell systemwide   | 5–10 clinics
  Multi-site, payer‑level evidence          | 10+ clinics

But these are ballpark numbers. Let's get specific.


Scenario 1: You Just Need to Know if This Thing Works At All

Example: You built an AI triage assistant for urgent care that drafts HPI notes and suggests ICD‑10 codes. You’re post‑residency, working part‑time in that same urgent care chain. You want to know: “Is this useful or annoying?”

For this kind of feasibility / POC pilot, one clinic can be enough if:

  • You’re mainly measuring:
    • Does the system run without falling over?
    • Does it fit into a real shift?
    • Are there glaring safety issues?
  • You’re not making outcomes claims yet, just usability and feasibility.

But here’s the catch: one clinic is trustworthy only for you, not for a serious external partner. The physicians may be unusually tech‑friendly. The clinic manager may be your friend. The documentation style may be atypical.

Use 1 clinic when:

  • You’re still iterating every week.
  • You expect to break things and restart.
  • The goal is quick learning, not external validation.

Do not pretend this is “meaningful evidence”. It’s not. It’s a glorified alpha test with real patients.


Scenario 2: You Want Real-World Usability and Workflow Data

Once it works without catching fire, the next question is: will normal clinicians use it outside your personal sandbox?

Now you care about:

  • Variability in clinician habits
  • Different clinic managers and staffing patterns
  • Slightly different patient populations

For this, 2–3 clinics is the sweet spot.

Why not one?

Because with just one clinic, half your “insights” are actually just personality effects. I’ve watched early products look amazing in a clinic where the lead doc was a champion, then utterly flop two miles away under a skeptical medical director.

At 2–3 clinics, you can:

  • See if adoption holds up when you are not on site constantly.
  • Compare metrics like:
    • % of clinicians using the tool regularly
    • Average time added or saved per visit
    • Number and type of support tickets
  • Identify workflow differences that would kill scale later (e.g., “we don’t allow phones in rooms,” or “front desk controls this process, not nurses”).

[Bar chart] Clinician Adoption Rates Across 3 Pilot Clinics

  Clinic A: 85% adoption
  Clinic B: 62% adoption
  Clinic C: 40% adoption

If Clinic A is at 85% adoption and Clinic C is at 40%, now you’ve got a real conversation: is that training, culture, product design, or leadership?
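
You can only have that conversation if every clinic computes the numbers the same way. Here's a minimal sketch in Python (pandas), assuming a shared usage log with the same schema at every site; the file name, column names, and the "regular user" threshold (tool used in at least half of shifts) are illustrative assumptions, not a standard.

```python
import pandas as pd

# Assumed shared usage log, one row per clinician-shift, identical columns at every site:
# clinic_id, clinician_id, shift_date, used_tool (0/1), minutes_delta (per-visit time added, negative = saved)
log = pd.read_csv("pilot_usage_log.csv", parse_dates=["shift_date"])

# Share of shifts in which each clinician used the tool
per_clinician = (
    log.groupby(["clinic_id", "clinician_id"])["used_tool"]
       .mean()
       .rename("share_of_shifts_used")
       .reset_index()
)

# "Regular user" = used the tool in at least half of their shifts (arbitrary threshold)
per_clinician["regular_user"] = per_clinician["share_of_shifts_used"] >= 0.5

# Per-clinic rollup: adoption rate (%) and average minutes added or saved per visit
adoption = per_clinician.groupby("clinic_id")["regular_user"].mean().mul(100).round(1)
time_impact = log.groupby("clinic_id")["minutes_delta"].mean().round(2)

print(pd.DataFrame({"adoption_pct": adoption, "avg_minutes_delta": time_impact}))
```

Whatever definitions you pick, lock them down before the pilot starts, so Clinic A's 85% and Clinic C's 40% are actually comparable.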

For a “meaningful” pilot on usability and workflow, I’d call 2–3 clinics the minimum.


Scenario 3: You Want Early Outcome Signals (Clinical or Operational)

This is where most founders dramatically underestimate what they need.

Let’s say you want to show:

  • 15–20% reduction in no‑shows
  • 10–15% decrease in average HbA1c in poorly controlled diabetics
  • 30% reduction in time to close open charts
  • 10% fewer ED visits among a complex care population

Outcomes like this are noisy. They bounce around month to month based on staffing, local policies, flu season, one new NP who hates your tool, etc.

With only one clinic, your outcome signal is basically anecdote dressed up as a bar chart.

For directional outcome evidence that people will take somewhat seriously:

  • Aim for 3–5 clinics
  • Make sure you have:
    • Pre‑period vs post‑period data (at least 3–6 months baseline if you can)
    • Clear inclusion criteria (e.g., all adult primary care visits, or all CHF patients)
    • Consistent core metric definitions across sites

This is often enough to:

  • Convince a regional medical director to expand to more sites
  • Get early-stage investors to stop asking, “But does it actually move the needle?”
  • Frame a believable effect size (even if the stats aren’t bulletproof yet)

Very Rough Statistical Reality Check

You do not need a PhD in biostats, but you do need to respect sample size and variation.

Rules of thumb for clinic-level pilots:

  • If you’re measuring something that happens at almost every visit (e.g., documentation time, screening completion), 3–5 clinics with a few hundred visits each can give you a usable signal.
  • If you’re measuring relatively rare events (e.g., hospitalizations, ED visits), one or two clinics will give you garbage for a while. You’ll need more clinics or much more time.

[Line chart] Visits Needed vs Event Rate for Early Signal

  5% event rate:  ~2,000 visits
  10% event rate: ~1,000 visits
  20% event rate: ~500 visits

Interpretation: if the baseline event rate is 10% and you want to see a directional drop, you need on the order of ~1000 eligible encounters to see if your 15–20% improvement is plausible—not definitive, but not fantasy.
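
If you want to go one step past that rule of thumb, the standard two-proportion sample-size formula is a quick sanity check. Here's a minimal sketch (plain Python, assuming scipy is available for the normal quantiles); the baseline rate and relative reductions are just the example numbers from above, and the output is per arm (pre vs post, or control vs intervention), not total.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p_baseline: float, relative_reduction: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate encounters per arm to detect a drop from p_baseline to
    p_baseline * (1 - relative_reduction), two-sided normal approximation."""
    p1 = p_baseline
    p2 = p_baseline * (1 - relative_reduction)
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Example: 10% baseline event rate, hoping for a 15-20% relative reduction
print(n_per_group(0.10, 0.15))  # ~5,900 encounters per arm
print(n_per_group(0.10, 0.20))  # ~3,200 encounters per arm
```

The gap between ~1,000 encounters for a directional read and several thousand per arm for a properly powered comparison is the difference between "promising signal" and "proof."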

Bottom line: 3–5 clinics gives you a foothold for outcome claims, but you must be honest about limitations.


Scenario 4: You Want to Convince a Health System or Payer to Roll Out

Now you’re playing a different game. You want a system CMO, CIO, or payer medical director to:

  • Approve a larger contract
  • Carve out budget
  • Tolerate disruption across dozens of clinics

For that, one pretty case study clinic does not cut it.

You usually need:

  • 5–10 clinics in the same organization or across a few comparable orgs
  • Demonstrated:
    • Consistent adoption
    • Replicable implementation playbook
    • Stable or improving outcomes over several months
    • Clear ROI or at least a rational path to ROI

This is where you stop saying “we did a pilot” and start saying “we did a multi‑site implementation study.”

Clinic Expansion Path for a Medical Startup

  Step 1: 1-clinic POC
  Step 2: 3-clinic usability pilot
  Step 3: 5-clinic outcome pilot
  Step 4: 10-clinic system rollout
  Step 5: Regional or payer contract

You don’t always need 10 clinics to close a deal, but if you’re selling into a serious integrated system (think Kaiser, Optum, large FQHC networks), having data from at least 5 diverse clinics is almost mandatory to be taken seriously.


Scenario 5: You’re Thinking Research-Grade Evidence

Sometimes you’re not just doing a pilot for commercial reasons. You want:

  • Publishable data in a peer‑reviewed journal
  • Evidence for a payer coverage decision
  • Support for regulatory submissions (esp. for certain SaMD, digital therapeutics, etc.)

Now you’re talking:

  • 10+ clinics or a smaller number of very large clinics
  • More formal design:
    • Cluster randomized trial
    • Stepped‑wedge rollout
    • Clear control vs intervention periods
  • Biostatistician involvement from day 1

This is beyond “post‑residency side project pilot” territory and looks more like a real clinical study. But it’s where serious digital health is headed.


How to Decide: A Simple Framework

If you want a blunt rule set, here you go.

Ask three questions:

  1. Who do I need to convince next?
  2. Am I mostly testing feasibility, usability, outcomes, or ROI?
  3. How variable is what I’m measuring?

Then use this:

  • If the next person is you and your small team → 1 clinic is fine.
  • If the next person is a clinic owner or small group practice → 2–3 clinics is usually enough.
  • If the next person is a regional director or midsize system → 3–5 clinics with decent data.
  • If the next person is a large system, payer, or serious investor → 5–10 clinics and a more mature data story.

Clinic Count by Stakeholder Type

  Stakeholder You Need to Convince       | Recommended Clinics
  Yourself / cofounder                   | 1
  Single clinic owner                    | 1–2
  Small group (3–10 clinics)             | 2–3
  Regional director / midsize system     | 3–5
  Large system / payer / VC              | 5–10+

If you’re stuck, default to asking: “What’s the smallest number of clinics that still makes my results not obviously a fluke?” That’s almost never just one.


(See also: How Hospital Execs Actually Evaluate Your Health Tech Pilot for more details.)

Design Matters More Than the Exact Number

Here’s where founders waste months: obsessing about sample size while ignoring study design basics.

You can do a terrible 10‑clinic pilot that tells you nothing, or a sharp 3‑clinic pilot that changes your trajectory.

Minimum you should lock down:

  • Clear primary outcome
    Not “improve care,” but “reduce no‑show rate from 18% to 14% within 6 months.”

  • Specific eligibility criteria
    Which patients or visits count? New vs established? Specific diagnoses?

  • Defined measurement window
    Pre‑period, intervention period, any wash‑in time.

  • A comparison (a minimal before/after sketch follows this checklist)
    Could be:

    • Before vs after in the same clinics
    • Some clinics as controls (delayed rollout)
    • Matched clinics not using your tool
  • Implementation details
    Who trains clinicians? How long? What happens when it breaks? What’s your support SLA?
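
To make the simplest of those comparisons concrete, here's a minimal before-vs-after sketch in Python (pandas), assuming a visit-level extract with identical columns from every clinic; the file name, column names, and go-live date are illustrative only.

```python
import pandas as pd

# Assumed visit-level extract, identical schema at every site:
# clinic_id, visit_date, no_show (0/1), eligible (0/1 per your inclusion criteria)
visits = pd.read_csv("pilot_visits.csv", parse_dates=["visit_date"])
visits = visits[visits["eligible"] == 1]

GO_LIVE = pd.Timestamp("2026-01-01")   # illustrative cutover date
visits["period"] = visits["visit_date"].lt(GO_LIVE).map({True: "pre", False: "post"})

# No-show rate (%) per clinic, before vs after go-live
rates = (
    visits.pivot_table(index="clinic_id", columns="period",
                       values="no_show", aggfunc="mean")
          .mul(100).round(1)
)
rates["change_pct_points"] = (rates["post"] - rates["pre"]).round(1)
print(rates)
```

If some clinics serve as delayed-rollout controls, the same extract works; they simply get later go-live dates. The point is that definitions and cutover dates are fixed before anyone looks at results.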


With that in place, 3 clinics with good design beats 8 clinics with chaos every time.


Common Failure Patterns I See Over and Over

Let me be blunt about a few traps:

  1. The Hero Clinic Trap
    You pilot in one friendly clinic. The champion doc loves you. Outcomes look great. Then expansion stalls because the rest of the system says, “That clinic is weird. Their numbers are always off anyway.”
    Fix: Get at least 2–3 clinics that are meaningfully different from each other.

  2. The Vanity Pilot
    A large health system gives you “a pilot in 1 clinic” as a way to say no without saying no. You kill yourself over 9 months, they nod politely, and nothing expands because there was never an internal sponsor with budget.
    Fix: Before starting, ask explicitly: “If this pilot hits X metric improvement across Y clinics, what’s the next step and who signs that check?”

  3. Underpowered Outcome Claims
    You run a 1‑clinic pilot for 2 months, see a 10% trend, and put “reduces hospitalizations by 10%” in your pitch deck. Any serious investor or clinical exec will roll their eyes.
    Fix: For claims that sound like real medicine, run a real enough study—3–5 clinics minimum, with adequate duration.

  4. Too Big, Too Soon
    The opposite problem. You sign a 20‑clinic pilot, your product and support team are not ready, the implementation is a mess, and now you’re labeled “bad vendor” systemwide.
    Fix: Earn your way up. 1–3 clinics → clean success → then ask for 5–10.



Practical Steps: How to Scope Your Pilot Now

You’re post‑residency, maybe working part-time, and you’ve got a product (or at least an MVP). What should you do next?

  1. Decide your primary goal for this phase:

    • “We need to prove clinicians use it” → aim for 2–3 clinics.
    • “We need an early outcomes story” → aim for 3–5 clinics.
    • “We just need it not to break” → start with 1, but plan the next step upfront.
  2. Write a one-page pilot spec you can show a clinic leader:

    • Objective
    • Number and type of clinics
    • Duration
    • Primary metric
    • What you provide (training, support, hardware if any)
    • What they provide (staff time, data access)
  3. Negotiate phased expansion baked into the agreement:

    • Example: “If after 4 months, metric X improves by Y% in at least 2 of 3 clinics, we expand to 5 additional clinics under pre‑agreed pricing.”
  4. Keep a simple, consistent measurement dashboard across sites (see the sketch after this list):

    • Even a basic monthly spreadsheet with the same metrics across all clinics is better than 10 different EHR reports that can’t be compared.
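
A minimal version of that shared dashboard, plus the kind of pre-agreed expansion check from step 3, might look like the sketch below; the file name, metric names, and the "improves by 15% in at least 2 of 3 clinics" rule are just the illustrative terms from this article, not a standard.

```python
import pandas as pd

# Assumed monthly dashboard: one row per clinic per month, same columns everywhere
# clinic_id, month, no_show_rate (%), adoption_pct (%), support_tickets
dash = pd.read_csv("pilot_dashboard.csv")

# One comparable view across sites: no-show rate by clinic and month
print(dash.pivot_table(index="clinic_id", columns="month", values="no_show_rate"))

def expansion_triggered(baseline: dict, month4: dict,
                        min_relative_improvement: float = 0.15,
                        min_clinics: int = 2) -> bool:
    """Example check for a phased-expansion clause: the metric improved by at
    least 15% (relative) in at least 2 of the pilot clinics by month 4."""
    improved = [
        (baseline[c] - month4[c]) / baseline[c] >= min_relative_improvement
        for c in baseline
    ]
    return sum(improved) >= min_clinics

# Illustrative numbers only: pre-pilot vs month-4 no-show rates (%)
baseline = {"A": 18.0, "B": 17.5, "C": 19.0}
month4   = {"A": 14.0, "B": 16.9, "C": 15.5}
print(expansion_triggered(baseline, month4))   # True: clinics A and C clear the bar
```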

[Area chart] Example Pilot Timeline for 5 Clinics (clinics live by month)

  Month 1: 1 clinic
  Month 2: 2 clinics
  Month 3: 3 clinics
  Month 4: 3 clinics
  Month 5: 5 clinics
  Month 6: 5 clinics

That rough curve—1 clinic → 3 clinics → 5 clinics over 6 months—is a sane trajectory for a post‑residency startup that’s still maturing the product.



FAQ: Pilots and Clinic Counts

  1. Is one clinic ever enough for a “meaningful” pilot?
    It can be meaningful for internal learning and early feasibility. It’s rarely enough to convince a skeptical third party about outcomes or ROI. If your claims go beyond “people can use it and it doesn’t crash,” you almost certainly need more than one clinic.

  2. What if I can only get access to one clinic to start?
    Start there, but be explicit that this is Phase 1. Design it to optimize learning, not PR. While running it, use your early wins to pitch 1–2 additional clinics, ideally with different staff or patient mix, as Phase 2.

  3. Should I prioritize more clinics or longer time in fewer clinics?
    For adoption and workflow, more clinics is better. For outcomes that change slowly (e.g., chronic disease control), longer time at fewer clinics can be okay initially. For serious outcome claims, you eventually need both: multiple clinics and adequate duration.

  4. How many patients do I need per clinic?
    Depends on the metric. For high-frequency metrics (screening completion, visit time), hundreds of visits per clinic can give you a signal. For rarer events (hospitalizations), you may need thousands of patients and/or more clinics. If you’re making big outcome claims, talk to a biostatistician.

  5. Do investors actually care how many clinics I used in the pilot?
    Yes—savvy ones do. They ask where, how many, how different they were, how results varied by site, and how tightly you controlled the implementation. A 3–5 clinic pilot with honest, nuanced data beats a flashy one-clinic case study every time with serious investors.

  6. Should I mix clinic types in my pilot (e.g., primary care + urgent care)?
    Not in your first serious outcome pilot. Keep settings comparable so noise stays manageable. Once you have a playbook that works in one setting, then test generalizability in a second type of clinic and label it clearly as such.

  7. What’s the biggest red flag in a pilot from an exec’s perspective?
    Results that look “too clean” from a single hand‑picked clinic, with no discussion of variability or challenges. Leaders know real-world implementation is messy. If you ignore that, you look naïve—or worse, like you’re hiding something.


Key points:

  • One clinic is fine for feasibility; “meaningful” evidence usually starts around 2–3 clinics for usability and 3–5 clinics for outcomes.
  • Match your clinic count to your goal and your next stakeholder. More is not always better—but one is almost never enough for serious decisions.

(Related: Should I Build for Patients, Doctors, or Hospitals First?)
