
Competency-Based Medical Education: Practical Implementation for Faculty

January 8, 2026
18 minute read

[Image: Medical faculty discussing a competency-based medical education framework during a curriculum meeting]

Competency-based medical education is only “confusing” if faculty are guessing. When you stop guessing and build a system, it becomes brutally clear—and extremely powerful.

Let me break this down specifically for you as faculty. Not theory. Implementation.


1. What CBME Actually Changes For You (And What It Doesn’t)

Most faculty are told, “We’re moving to CBME,” and then handed a 60-page PDF of competencies, milestones, and EPAs. No one explains what this changes in their daily work. That is why people hate CBME.

Here is the blunt version.

CBME changes:

  • How you define “good enough” performance
  • How you judge progression and readiness for responsibility
  • How you document, aggregate, and act on assessment data

CBME does not inherently change:

  • Your content expertise
  • The need to teach on rounds, in clinic, in the OR
  • The fact that some learners will still struggle and need remediation

In a traditional time-based system, the hidden rule is: “If you survive the rotation and I do not see anything catastrophic, you pass.” In CBME the rule becomes: “If you consistently demonstrate these clearly defined behaviors to this standard, you progress. If not, you do not.”

That sounds simple until you realize your entire clinical teaching culture is built on “gestalt competence” and single global ratings.

So step one as faculty is accepting the mental shift:

  • From: “Did they meet my personal standard?”
  • To: “Did they demonstrate the agreed-upon behaviors in this domain, at this level, with enough consistency and independence?”

You are no longer grading vibes. You are collecting and interpreting evidence.


2. The Skeleton: Competencies, Milestones, and EPAs Without the Jargon

If you do not get this part structurally, implementation will always feel like paperwork.

Core components you actually use

Think of CBME as a 3-layer structure.

  1. Competencies – broad domains (e.g., patient care, medical knowledge, communication, professionalism, systems-based practice, practice-based learning). You already know these. They are your “buckets”.

  2. Milestones – developmental levels within each competency. Basically: what does a novice, advanced beginner, competent, and near-independent learner look like in that domain?

  3. Entrustable Professional Activities (EPAs) – real clinical tasks or responsibilities you would either entrust or not entrust a trainee to perform with a certain level of supervision (e.g., “admit and manage an adult inpatient with common conditions,” “handover a patient to another provider,” “perform focused history and physical in an urgent care setting”).

The mistake I see over and over: faculty try to use the entire national milestone document as a daily checklist. That is insanity.

Instead, you map:

  • A manageable set of EPAs to your rotation or course
  • Specific milestones that are actually observable in your context

[Image: Faculty reviewing EPA and milestone mapping for a clinical rotation]

Here is a simple mental model:

  • EPAs = “What” the trainee must be able to do independently.
  • Milestones = “How well” they must perform the component skills and behaviors to get there.
  • Competencies = “Which domain” we are talking about (knowledge, communication, etc.).

As faculty you mostly interact with EPAs and milestones. The competency labels are for program-level accounting and accreditation.


3. Translating CBME Into Your Daily Teaching

You do not implement CBME in a retreat. You implement it on Monday at 7:30 a.m. before rounds.

So let’s walk through how this looks in real life.

Step 1: Know the 3–5 EPAs that matter on your rotation

If you teach on an inpatient internal medicine service, your core EPAs might be:

  • Admit and manage an adult inpatient with common diagnoses
  • Present a patient on rounds in a clear, prioritized way
  • Write safe, accurate admission and progress notes
  • Develop and communicate a discharge plan

If you teach in an ambulatory pediatrics clinic, it might be:

  • Conduct focused, age-appropriate outpatient visits
  • Counsel caregivers on diagnosis, treatment, and prevention
  • Coordinate follow-up and referrals

Your program director or CBME lead should give you these. If they do not, push them. You cannot apply CBME vaguely.

Step 2: Break each EPA into observable behaviors at your level of training

Take “Present a patient on rounds.” For a graduating medical student, you might expect:

  • Organizes the presentation in a logical, conventional structure without frequent prompting
  • Prioritizes active problems and relevant data rather than reading the entire chart
  • Synthesizes an assessment that demonstrates understanding, not just recitation
  • Proposes a basic plan, even if incomplete

Now you have something you can actually observe, coach, and rate.

Step 3: Turn every patient encounter into a micro-assessment opportunity

This is where you either make CBME work or you drown in it.

You cannot do a 20-item assessment for every patient. You should not. But you can:

  • Watch the learner present one patient in full each day with attention
  • Give 2 minutes of specific feedback tied to the EPA behaviors
  • Log a very short workplace-based assessment (WBA) in your phone or computer

For example, a 5-question form:

  1. Which EPA did you observe? (dropdown)
  2. Level of supervision required (e.g., “direct,” “reactive but quickly available,” “oversight only”)
  3. Overall entrustment judgment for this context (scale)
  4. One strength
  5. One suggestion for growth

That is it. Frequent, brief, behaviorally anchored.
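If your program logs these observations electronically, the record behind that form can stay just as small. Here is a minimal sketch in Python of what one entry might hold; the field names, the example EPA, and the learner identifier are illustrative assumptions, not any particular platform's schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class WorkplaceObservation:
    """One brief workplace-based assessment (WBA) entry, mirroring the 5-question form."""
    epa: str                  # 1. which EPA was observed
    supervision: str          # 2. level of supervision required for this encounter
    entrustment: int          # 3. overall entrustment judgment in this context (e.g., 1-5)
    strength: str             # 4. one strength
    growth_target: str        # 5. one suggestion for growth
    learner_id: str = ""      # who was observed (not a form question, but needed for aggregation)
    observed_on: date = field(default_factory=date.today)

# A two-minute entry after watching one rounds presentation
obs = WorkplaceObservation(
    epa="Present a patient on rounds",
    supervision="reactive but quickly available",
    entrustment=3,
    strength="Clear, prioritized problem list",
    growth_target="Lead with a one-sentence problem representation",
    learner_id="alex-pgy1",
)
```

The tooling matters less than the constraint: one observation should take a minute or two to capture, not a form-filling session.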

[Bar chart] Common Workplace-Based Assessment Tools Used in CBME (relative use):

  • Mini-CEX – 40
  • Direct Observation – 55
  • DOPS – 25
  • Field Notes – 35
  • Multisource – 20

Step 4: Use the language of progression, not “good” or “bad”

Learners drown in vague comments. “Good job.” “Needs more confidence.” Useless.

With CBME, your language shifts to developmental language:

  • “You are consistently at the level where you can present patients with only indirect supervision. Next step is refining your problem prioritization so you can lead the first part of the team’s discussion.”
  • “Right now, you require direct supervision for complex discharges because you miss key social and follow-up details. Let us focus this week on watching how the senior resident structures those conversations.”

This frames performance as a trajectory along milestones, not a personal judgment.


4. Assessment in CBME: How To Stop Making This Miserable

Let me be blunt: assessment is where CBME implementations usually fail. Not because the concept is bad, but because the execution is clumsy and overbuilt.

You need to think like this:

  • The unit of assessment is the observation, not the rotation.
  • One observation should be quick to record (1–3 minutes).
  • Many low-stakes data points are better than one high-stakes global rating.

The 4 assessment tools you should actually care about

There are dozens of tools in the literature. In practice, most programs that function well lean heavily on a small core set:

  1. Direct observation checklists or global ratings

    • Used for: history, physical exam, counseling, procedures
    • Short, anchored descriptors tied to milestones
  2. Mini-CEX (Clinical Evaluation Exercise)

    • One focused encounter (e.g., “assess chest pain,” “explain lab results”) followed by rapid feedback
    • Ideal for CBME because it is context-specific and repeatable
  3. Procedural assessments (like DOPS – Direct Observation of Procedural Skills)

    • For procedures where entrustment is critical (LP, central lines, intubations, joint injections)
    • Must include not only the technical steps but consent, sterile technique, complication management
  4. Narrative field notes

    • Free-text, but structured prompts (e.g., “What did the learner do that increased or decreased your trust?”)
    • Powerful when aggregated across many raters and settings

Core Workplace-Based Assessment Tools in CBME

  • Mini-CEX – focused clinical encounters – typically 5–10 minutes to complete
  • Direct Observation – routine daily tasks – 3–5 minutes
  • DOPS – procedures – 5–10 minutes
  • Field Notes – qualitative entrustment data – 2–3 minutes

Building credibility into your assessments

Faculty often worry: “I do not see them enough to make a big judgment.” Good. CBME agrees with you.

Your job is to contribute high-quality, limited-scope data, clearly framed:

  • Specify the context: acuity, complexity, level of support.
  • Make the supervision level explicit: “I had to step in,” “I was in the room but silent,” “I supervised via chart review only.”
  • Tie your judgment to behaviors, not personality.

For example, your note should look like:

“PGY1 presented a new CHF admission. Required prompting to articulate a unifying diagnosis. Missed key element of recent weight trend. Able to discuss diuretic options when asked. At this stage still requires direct supervision for complex admissions. Suggest deliberate practice on problem representation.”

That single observation, surrounded by dozens of others over time, becomes powerful.


5. Entrustment: The Real Decision You Are Making

CBME without explicit entrustment language is performative. The entire point is aligning assessment with what we actually care about: “Can I trust this person with this responsibility?”

Entrustment decisions always have 3 parts:

  1. The task (EPA)
  2. The level of supervision
  3. The context (patient complexity, setting, supports)

Many programs use a 5-point entrustment scale that actually matches clinical reality better than “below expectations / meets / exceeds” nonsense.

Typical entrustment scale:

  1. Not allowed to practice the activity, even under direct supervision
  2. Allowed to practice only with direct, proactive supervision in the room
  3. Indirect, reactive supervision (supervisor quickly available)
  4. Distant supervision (supervisor available but not immediately)
  5. Full unsupervised practice (graduate-level)

[Boxplot] Sample Distribution of Entrustment Levels for an EPA (min / Q1 / median / Q3 / max):

  • Start of year: 1 / 1 / 2 / 2 / 3
  • Mid-year: 2 / 2 / 3 / 3 / 4
  • End of year: 3 / 3 / 4 / 4 / 5

Your daily job:

  • Observe the learner doing the EPA
  • Decide what supervision level you actually needed to feel safe
  • Record that specific entrustment judgment

Notice: this is not “what level do I think they should be at next year.” It is “what level did I actually use today in this context.”

Then, at the program level (competency committee, CCC), those entrustment data get aggregated across time and settings to decide progression.
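To make that aggregation step concrete, here is a rough sketch, with invented observations and an arbitrary two-week grouping, of how entrustment levels for one learner's EPAs might be summarized into a trajectory for committee review. It illustrates the idea, not any program's actual software.

```python
from collections import defaultdict
from statistics import median

# Hypothetical observations for one learner over a 4-week rotation.
# Each tuple: (week observed, supervision level actually used, on the 1-5 scale above)
observations = {
    "Admit and manage an adult inpatient": [
        (1, 2), (1, 2), (2, 2), (2, 3), (3, 3), (4, 3), (4, 4),
    ],
    "Develop and communicate a discharge plan": [
        (1, 2), (2, 2), (3, 2), (4, 2),   # flat trajectory: a pattern worth discussing
    ],
}

def trajectory(points, weeks_per_block=2):
    """Median supervision level per block of weeks, to show direction over time."""
    by_block = defaultdict(list)
    for week, level in points:
        by_block[(week - 1) // weeks_per_block].append(level)
    return {block: median(levels) for block, levels in sorted(by_block.items())}

for epa, points in observations.items():
    print(epa, trajectory(points))
# Rising medians suggest progression toward less supervision; a flat line is
# something for the competence committee to examine, not a verdict by one rater.
```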

The danger: inflated entrustment

Most faculty overestimate what they entrusted, because they forget all the scaffolding they quietly provided.

Example: you co-wrote the note, told them exactly what imaging to order, and then say “they basically managed the case independently.” No, they did not. You did.

Be ruthless with yourself:

  • Did you correct their plan?
  • Did you anticipate and prevent mistakes before they happened?
  • Did the patient’s safety actually depend on your unseen work?

If yes, your entrustment level should be lower. That is how CBME stays honest.


6. Programmatic Assessment: How Your “Little” Comments Turn Into Big Decisions

One of the selling points of CBME is “programmatic assessment.” Translation: we stop pretending that a single high-stakes exam or a single rotation evaluation should decide everything.

Instead, we build a system where:

  • Many low-stakes assessments are collected across contexts and raters.
  • Each data point has clearly defined purpose and limitations.
  • Decisions are made by a group (competence committee), not a single supervisor.

[Flowchart] Programmatic Assessment Flow in CBME:

Clinical Encounter → Faculty Observation → Workplace Assessment Form → Assessment Database → Periodic Data Review → Competence Committee Meeting → Progression Decision → Feedback to Learner → Individual Learning Plan

Your role here is narrower than you probably think:

You are not making promotion decisions. You are contributing valid data.

Good programmatic assessment systems make it:

  • Easy for you to submit observations
  • Easy for committees to see patterns (strengths, gaps, trajectories)
  • Hard for idiosyncratic bias to dominate

The only way this works in reality is if the tools are short, mobile-friendly, and aligned with how you actually work. If your institution makes you log into a clunky desktop system after clinic, no one will do it consistently. That is a system problem, not a CBME problem.
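As one illustration of how a system can keep idiosyncratic bias from dominating, here is a toy sketch that compares each rater's average entrustment rating with the pool; the ratings and the flag threshold are made up, but this is the same kind of rater feedback described in section 9.

```python
from statistics import mean

# Hypothetical entrustment ratings (1-5 scale) submitted by each faculty rater
ratings_by_rater = {
    "Dr. A": [3, 4, 4, 3, 4],
    "Dr. B": [2, 2, 3, 2, 3],
    "Dr. C": [5, 5, 4, 5, 5],   # consistently above the pool
}

pool_mean = mean(r for ratings in ratings_by_rater.values() for r in ratings)

for rater, ratings in ratings_by_rater.items():
    gap = mean(ratings) - pool_mean
    if abs(gap) > 0.75:   # arbitrary threshold, purely for illustration
        print(f"{rater}: mean differs from the pool by {gap:+.2f} - worth a calibration conversation")
```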


7. Feedback and Coaching in a CBME World

You cannot do CBME without feedback. Authentic feedback. Not the sugar-coated nonsense that keeps everyone “above expectations” until they fail a summative barrier exam.

What changes in your feedback with CBME

  1. You anchor feedback to observable behaviors tied to EPAs / milestones.
  2. You talk explicitly about readiness for responsibility.
  3. You co-create specific next steps, not generic “work on that.”

Example before CBME:

“You need to be more organized with your presentations.”

CBME-style feedback:

“On this patient, your data gathering was thorough, but you listed every lab chronologically, and the team struggled to see the main issues. At your current stage, we expect you to cluster by problem and lead with your assessment. For the next two days, I want you to write down a one-sentence problem representation before you present, and we will review it quickly before rounds.”

That is specific, linked to expectations, developmental, and coachable.

[Image: Clinical preceptor giving targeted feedback to a resident after a patient encounter]

The minimum CBME-aligned feedback behaviors you should adopt

  • Ask the learner what EPA they want feedback on for that encounter.
  • Observe them for that task with full attention (even if brief).
  • Give 1–2 concrete strengths, 1–2 concrete areas for growth.
  • Tie it explicitly to milestones: “This is typical of someone early in the year; to progress, you will need to…”
  • Document a short note that mirrors what you told them.

If the written comments and the spoken feedback do not match, learners lose trust in the system. And once that happens, CBME becomes another bureaucratic game.


8. Remediation and the “Not Yet Competent” Reality

CBME surfaces performance problems earlier and more clearly. That is the point. But most institutions are still emotionally stuck in the “everyone passes eventually” mindset.

As faculty, you will be in the uncomfortable position of identifying learners who are “not yet competent” for a given EPA at a given stage.

Here is how to do that responsibly.

Distinguish between:

  • “Slow but progressing” – learner is behind peers but shows clear improvement with coaching and repetition.
  • “Plateaued” – repeated similar feedback across time and settings with minimal change.
  • “Unsafe” – performance exposes patients to risk even with supervision, or severe professionalism issues.

CBME gives you language and data to support these distinctions, rather than vague impressions.

Your responsibilities:

  • Document concerns in specific, behaviorally anchored terms (“missed repeated opportunities to identify red flags in chest pain,” “fails to complete assigned follow-up despite reminders”).
  • Avoid global character judgments (“lazy,” “not motivated,” “not smart enough”).
  • Escalate concerns early to program leadership with examples.

Program leadership, in turn, must use the CBME framework to build:

  • Individualized learning plans
  • Targeted experiences (more supervised practice, specific clinics or simulations)
  • Clear timelines and criteria for demonstrating competence

If programs instead use CBME terms to obscure or delay hard decisions, faculty will lose faith quickly. I have seen this happen. “Let us gather more data” becomes an excuse not to act.


9. Faculty Development: What You Should Demand (And What You Can Do Yourself)

If your institution “implements CBME” by emailing a PDF and offering a one-time lunch talk, they are not serious. For CBME to work, faculty need repeated, concrete training and support.

At minimum, you should demand:

  • Rotation-specific EPA and milestone summaries in plain language, not accreditation-speak.
  • Live workshops where you practice:
    • Writing short, high-quality narrative comments
    • Using the entrustment scale on sample videos or cases
    • Conducting 5-minute feedback conversations tied to EPAs
  • Simple, mobile-friendly assessment tools with clear examples of what each rating means.
  • Periodic feedback to you as a rater (e.g., “Your ratings are consistently higher than peers; here is how that affects decisions.”)

[Line chart] Faculty Confidence Before and After CBME Training:

  • Pre-training: understanding CBME 35, using entrustment scales 25
  • 1 month after training: understanding CBME 65, using entrustment scales 60
  • 6 months after training: understanding CBME 80, using entrustment scales 75

What you can do yourself even if the system is weak:

  • Start explicitly thinking in EPAs and supervision levels on rounds: “What am I entrusting this learner with today?”
  • Build your own short observation-feedback routine: observe, name EPA, give 2 strengths + 1 target, document a line or two.
  • Talk openly with learners about milestones and where you see them on that trajectory.

It is better to do a small number of CBME behaviors consistently than to mimic every element of a “perfect” system poorly.


10. A Concrete Example: Implementing CBME on an Internal Medicine Ward

Let me pull this all together with a realistic scenario.

You are an attending on a 4-week internal medicine ward rotation with residents and students. Your program is “doing CBME,” but no one has explained what that means for you beyond yet another form.

Here is a practical implementation plan.

Week 1: Set expectations and establish EPAs

Day 1 huddle:

  • “On this rotation, the big things I care about and will assess you on are:

    1. admitting and managing inpatients with common problems,
    2. presenting concisely on rounds, and
    3. preparing safe discharge plans.
      These are our core EPAs.”
  • “I will watch you do these, give you feedback, and log brief assessments. By the end of the month, I want to be able to say what level of supervision you need for each of these tasks.”

Already, CBME is not abstract. It is the structure of the month.

Daily routine (15 extra minutes, used well)

  • Before rounds, you pick one learner and one EPA you will intentionally focus on that morning (e.g., “I will closely observe Alex’s new admission presentation”).

  • You give that learner a heads up: “Alex, I am going to focus on your CHF admission presentation as an EPA today.”

  • During the encounter, you listen with full attention and jot 2–3 quick notes.

  • After rounds, you spend 3–4 minutes with Alex:

    • “You organized the data clearly and your problem list made sense. Where you need to grow is in moving from listing findings to offering a synthesis and initial plan. At this stage, you still need indirect supervision for complex admissions. Over this week, I want you to commit to saying, ‘My main concern is…’ for each new patient.”
  • You then log a quick WBA on your phone with supervision level, 1–2 strengths, 1 target.

By the end of the week, you have 10–15 high-quality, targeted observations across your learners. Not heroic. Just deliberate.

Mid-rotation and end-rotation

At midpoint:

  • You pull up the accumulated CBME data (if your system is halfway decent).
  • You check patterns: Is one resident consistently at low entrustment on discharges? Is another excelling in presentations?
  • You have a mid-point talk framed explicitly around EPAs and supervision levels.

At the end:

  • Your formal evaluation is no longer guesswork or “gut.”
  • You reference specific EPAs and their trajectory: “You started the month needing direct supervision for discharges; by week four you were reliably structuring them and anticipating social needs with only oversight review. That is the progression we expect.”

This is CBME in reality. Not perfect. But coherent.


11. The Bottom Line For Faculty

CBME is not magic. It will not fix a toxic culture or weak leadership. But used properly, it does three things very well:

  1. It forces us to define clearly what competence looks like in real clinical work.
  2. It aligns assessment with what we actually care about: trust, responsibility, and patient safety.
  3. It gives you, as faculty, a structure to coach learners more honestly and effectively.

If you remember nothing else:

  • Think in EPAs and supervision levels, not vague “strong / weak” labels.
  • Turn everyday clinical work into brief, focused observations with explicit feedback.
  • Document specific behaviors tied to progression, not your overall impression of “goodness.”

Do those three things consistently, and you are not just “participating” in CBME. You are actually making it work.
