
Does Being a Great Clinician Make You a Great Educator? What Data Says

January 8, 2026
12-minute read

[Image: Senior physician teaching residents at the bedside]

Being a brilliant clinician does not automatically make you a great educator

The worship of the “star clinician” as the ideal teacher is one of the most persistent myths in medical education. You’ve seen it: the legendary surgeon who “everyone wants to rotate with,” or the diagnostician whose name gets whispered like a spell. The assumption is quiet but constant: if they’re outstanding in the OR or on the wards, they must be outstanding teachers.

The data says otherwise. Repeatedly.

There is overlap between clinical excellence and teaching excellence, yes. But it’s smaller than most people think, and the correlation is nowhere near 1. In some settings, it is disturbingly close to zero. In a few, it may even be negative.

Let’s walk through what we actually know, not what people like to believe.


What the research actually shows about “good doctor = good teacher”

For decades, medical schools and teaching hospitals have tried to answer one straightforward question: do the best clinicians get the best teaching evaluations and the best learner outcomes?

You’d expect, if the myth were true, a strong, consistent correlation. We don’t see that.

Multiple studies—across internal medicine, surgery, pediatrics, and emergency medicine—looked at links between clinical quality metrics (complication rates, adherence to guidelines, peer-rated clinical excellence, patient outcomes) and teaching metrics (student/resident evaluations, teaching awards, learner performance). The punchline: weak or no association.

A few examples, summarized:

Clinician vs Educator Performance – What Studies Find

  Study Area        | Relationship Found
  ------------------|--------------------------------
  Internal Medicine | Weak or no correlation
  Surgery           | Inconsistent, often minimal
  Pediatrics        | No significant association
  EM / ICU          | Mixed, mostly weak positive
  Overall Trend     | “Good doctor” ≠ “good teacher”

You’ll see individual papers that show a small positive correlation. Those exist. But zoom out across specialties, institutions, and time, and a pattern emerges: being a top clinician is neither necessary nor sufficient for being a top educator.
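
To make “weak correlation” concrete, here is a minimal simulation sketch in Python. Everything in it is an assumption for illustration: the skill scores are synthetic, and the r = 0.3 correlation is a hypothetical stand-in for “weak positive,” not a figure from any particular study.

```python
import numpy as np

# Illustration only: synthetic skill scores with an ASSUMED correlation
# of r = 0.3 between clinical and teaching ability (hypothetical value).
rng = np.random.default_rng(0)
n = 100_000
r = 0.3
cov = [[1.0, r], [r, 1.0]]
clinical, teaching = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Of the top-decile clinicians, what share are also top-decile teachers?
top_clin = clinical >= np.quantile(clinical, 0.9)
top_teach = teaching >= np.quantile(teaching, 0.9)
share = (top_clin & top_teach).sum() / top_clin.sum()
print(f"Top-decile clinicians who are also top-decile teachers: {share:.0%}")
# With r = 0.3 this lands near 20%; with r = 0 it would be ~10% (chance).
```

Under that assumption, roughly four out of five of your “best” clinicians would not rank among your best teachers, which is exactly the pattern the literature keeps turning up.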

I’ve watched programs chase “rockstar” clinicians to “fix teaching” on a service, only to find the teaching scores flat or worse. The attending’s outcomes were spectacular. The residents’ learning? Not so much.

Why? Because the skillsets are related but distinct. And we keep acting like they’re the same job.


Why clinical excellence and teaching excellence only partially overlap

Being a strong clinician demands very specific abilities: rapid pattern recognition, procedural skill, risk assessment, prioritization under pressure, efficient documentation, communication with patients and teams.

Being a strong educator uses some of that, but also demands different muscles: making thinking visible, calibrating explanations to the learner’s level, structuring progressive responsibility, giving actionable feedback, resisting the urge to just “do it myself,” and, crucially, tolerating short-term inefficiency in exchange for long-term learner growth.

Those last ones clash directly with how many “elite” clinicians operate day to day.

On a busy ward or in a high-volume OR, the star clinician’s instincts are often: go faster, tighten control, minimize variability, anticipate three steps ahead and quietly fix problems before they surface. That’s superb for patient flow and outcomes. Teaching, though, often requires you to slow down, expose your reasoning, let the learner make (safe) mistakes, and then debrief.

The overlap looks something like this:

[Doughnut chart: Overlap Between Clinical and Teaching Skills]

  Category                  | Share (%)
  --------------------------|----------
  Primarily Clinical Skills | 40
  Overlap                   | 30
  Primarily Teaching Skills | 30

Those “overlap” skills are things like communication, professionalism, reliability. But notice how many critical teaching behaviors sit in that “primarily teaching” bucket:

  • Structured explanation instead of intuitive shortcut.
  • Intentional questioning instead of rapid directives.
  • Feedback that is specific and behavior-focused, not a vague “good job” or “you’re not ready.”

I’ve seen residents stunned when the most technically gifted surgeon they know absolutely tanks a teaching evaluation. Comments like: “amazing surgeon, but I learned more about the case from the PGY-3 than from the attending.” That isn’t a contradiction. It’s the predictable result of conflating two different competencies.


Students’ favorite attendings are often not the “best” clinicians by traditional metrics

Here’s another uncomfortable truth: when you look at learner evaluations, the people who students and residents rate as “excellent teachers” are often… solid but not superstar clinicians, at least on paper.

They’re clinically safe, dependable, up-to-date. But they’re not the “send your family member to them” legend of the department everyone name-drops.

Why do learners love them?

Because they:

  • Take time.
  • Explain their reasoning.
  • Ask questions that are hard but fair.
  • Protect learners from getting humiliated in front of patients.
  • Give feedback that you can actually use the next day.

And they are usually willing to sacrifice a little throughput or personal efficiency to do those things.

From the institution’s perspective, this creates tension. Hospital leadership loves the high-volume, low-complication powerhouse. Education leadership loves the person who consistently produces competent, confident graduates. They are not always—sometimes not even usually—the same individual.


The particularly messy issue of teaching evaluations

Now, someone’s going to object: “But teaching evaluations are biased, noisy, and popularity contests.” Correct. They are. They’re also, unfortunately, what most places use to label someone a “great educator.”

So we need to be careful here.

Learners’ evaluations tend to overvalue:

  • Charisma and friendliness.
  • Leniency in grading and workload.
  • Not “pimping too hard.”
  • Protecting them from stress.

They undervalue:

  • Accurate, blunt feedback.
  • High expectations with accountability.
  • Long-term learning that feels painful in the moment.

So yes, it’s possible that a clinically excellent, demanding attending pushes residents hard, gives piercing feedback, and gets punished for it in evaluations. There are data backing that up, too: more rigorous rotations often receive slightly lower “satisfaction” scores.

But here’s the kicker: when studies try to separate pure popularity from actual educational effectiveness (using exam scores, clinical performance, objective structured clinical examinations [OSCEs], and later practice outcomes), the pattern still holds. Some of the very best teachers, as measured by learner performance, are not the ones with the flashiest clinical profiles.

In other words, there’s noise, but the signal underneath doesn’t rescue the “great clinician = great educator” narrative.


What about patient outcomes vs learner outcomes?

Let’s talk outcomes, not vibes.

People love to say, “You learn best from the physicians with the best patient outcomes.” That sounds obvious. It’s also mostly unsupported.

Studies that link attendings’ patient outcomes to their trainees’ performance on exams or clinical assessments find—again—a weak association at best. Residents do not necessarily perform better on boards, in OSCEs, or in independent practice simply because they trained under the “highest quality” clinicians by institutional metrics.

On the flip side, when you look at interventions that consistently improve learner performance, they’re almost never about pairing students with “star clinicians.” They’re about:

  • Faculty development in feedback and bedside teaching.
  • Structured curricula on rounds instead of ad hoc teaching.
  • Clear expectations and assessment rubrics.
  • Intentional debriefings, case discussions, and reflection.

In short: systematized educational design beats relying on osmosis from clinical brilliance.

I’ve watched graduating residents rave about how much an “average” hospitalist taught them because that person ran every admission as a mini-case conference, forced them to commit to a plan, and then dissected it. Meanwhile, the consult service everyone worshipped clinically produced very little learner growth. Cases were outstanding. Teaching was an afterthought.


The scary part: sometimes being “too good” clinically can hurt teaching

Here’s where it gets uncomfortable.

Some clinicians have reached a level of automaticity that makes them terrible at explaining what they do. They skip steps when they talk. They can’t remember what it felt like not to see the pattern.

You ask them to explain a complex decision, and you get: “It was obvious this was X.” That’s not reasoning. That’s a black box.

There’s decent cognitive science behind this. As expertise increases, conscious access to the underlying mental steps decreases. The expert’s brain compresses information into chunks. Great teachers fight that tendency. They slow down, unpack, and reconstruct their own thinking explicitly.

Not everyone is willing or able to do that work.

So yes, there are situations where the very thing that makes someone formidable in the ICU—rapid, unconscious decision-making—makes them weak as a teacher. Trainees get exposed to excellence, but they don’t get tools to build toward it.


How programs keep getting this wrong in hiring and promotion

Despite all this data, academic medicine still runs a depressingly predictable playbook:

  • Promote the top biller to “Director of Education” because “they’re such a strong clinician.”
  • Load the most complex teaching rotations with the most clinically in-demand specialists.
  • Give teaching awards based almost entirely on student satisfaction forms.

You know this. You’ve seen the CVs. “Named Top Doctor in [City]” translates directly into, “Let’s make them course director.” There’s often no serious look at whether that person can give feedback, design a rotation, or mentor struggling residents.

The result:

  • Burnout for the few true educators, who end up informally coaching and teaching behind the scenes.
  • Frustration for learners, who rotate with legends and leave with surprisingly thin practical skills.
  • Confusion for administration, who cannot figure out why their “elite” faculty do not translate into elite training outcomes.

If you actually care about building a strong teaching program, you have to stop hiring for clinical prestige and hoping teaching excellence appears by magic.


What actually predicts strong teaching?

Three things show up over and over in the data and in real-world experience.

First, explicit training in education. Faculty who go through genuine faculty development—courses on feedback, bedside teaching, assessment design—tend to improve. Not just in evaluations, but in learner outcomes.

Second, deliberate practice as a teacher. People who seek feedback on their teaching, adjust based on that feedback, and treat education as a skill to be honed, get better. Just like procedures. Just like clinical diagnosis.

Third, institutional signals. When departments:

  • Protect time for teaching instead of treating it as free labor.
  • Reward teaching in promotion and compensation decisions.
  • Fund faculty development as an ongoing program, not a one-off lecture.

…you start to see a cadre of faculty who are good clinicians and good teachers. Not by accident. By design.

Here’s how programs that understand this reality tend to structure roles:

Different Faculty Profiles in Teaching Hospitals

  Profile                      | Clinical Strength | Teaching Strength | Best Use Case
  -----------------------------|-------------------|-------------------|-----------------------------------------
  Star Clinician               | Very High         | Variable          | Complex cases, modeling excellence
  Clinician-Educator           | High              | High              | Core rotations, curriculum design
  Solid Workhorse              | Moderate-High     | Moderate          | Service coverage, supplemental teaching
  Pure Educator (Sim/Didactic) | Low-Moderate      | Very High         | Skills labs, OSCEs, remediation

Notice only one of those—the clinician-educator—is strong on both axes. That’s the role most systems underinvest in.


How this should change your own career decisions

If you’re in medical school or residency and you care about teaching, stop assuming your path must be “become the top clinician, then the teaching will follow.” It usually doesn’t.

You need to make a conscious decision: do you want to be primarily a clinician who occasionally teaches, or a clinician-educator with parallel expertise in education?

Those paths look different:

  • The pure-clinician path: more RVUs, more complex clinical referrals, leadership roles anchored in service lines, occasionally giving grand rounds or letting a student trail you.
  • The clinician-educator path: protected teaching time, involvement in curricula, advising students/residents, maybe formal training (e.g., Masters in Education, medical education fellowships), promotion dossiers that highlight teaching portfolios.

One is not morally superior. But pretending they’re the same thing is how people end up bitter at 45, doing two jobs badly and getting credit for neither.

If you want to be an excellent educator, you’ll eventually have to accept trade-offs. Less clinical volume. Some lost income compared to your most RVU-obsessed colleagues. But you’ll have actual impact on how the next generation practices.

And if you want to be an elite clinician and mostly do that? Fine. Own it. Just don’t delude yourself that excellence at the bedside automatically confers excellence as a teacher.


Why this myth refuses to die

Medicine has a hero problem. We like singular figures. The legendary diagnostician. The cowboy surgeon. The rainmaker proceduralist.

That mythology is emotionally satisfying. It’s also terrible for thinking clearly about how people learn.

We’re drawn to the idea that simply being near greatness will rub off. “Watch what they do, pick it up, and you’ll become like them.” That’s not how expertise transfers. Expertise transfers when someone can teach—consciously structure, scaffold, and coach.

Programs cling to the myth because it’s administratively easy. You already know who your “great clinicians” are. Slapping a “teacher” label on them saves you from the harder project of building an education infrastructure with its own standards, training, and rewards.

Learners cling to the myth because it feels good to say “I trained with X” instead of “I spent three months with someone who systematically taught me how to think.”

But mythology does not care about outcomes. Data does.


The bottom line

Three points, without the sugar-coating:

  1. Clinical excellence and teaching excellence overlap, but the correlation is weaker than the profession pretends. Being a great clinician does not automatically make you a great educator.

  2. The real predictors of strong teaching are explicit training in education, deliberate practice as a teacher, and institutional support—not just star-level clinical performance.

  3. If you care about medical education as a career, treat teaching as its own discipline. Stop assuming that simply becoming a brilliant clinician will make you a brilliant educator. It won’t. You have to build that second skillset on purpose.

Everything else is just wishful thinking dressed up as tradition.
