
How New Programs Handle Feedback, CCCs, and Milestones in Year One

January 8, 2026
17 minute read


New residency programs either build brilliant feedback systems from day one—or they spend five years cleaning up chaos they created in the first six months.

Let me break down exactly how the smart ones handle feedback, Clinical Competency Committees (CCCs), and milestones in that fragile first year.


The Reality Check: Year One Is Not “Business as Usual”

New programs love to say, “We’re doing what everyone else is doing, just with smaller numbers.” That is almost always wrong.

A new residency program in year one faces three structural problems that established programs do not:

  1. No historical data
  2. No culture of feedback
  3. No calibrated sense of “what PGY-1 performance should look like here”

If you pretend those do not exist and import a generic evaluation system from someone else’s institution, your CCC will be useless and your residents will not trust the process.

The programs that do this well are obsessive about three things from day one:

  • How often and how specifically residents receive feedback
  • How their CCC actually uses data rather than vibes
  • How milestones are interpreted, not just checked off

Let us go piece by piece.


Building a Feedback Culture from Zero

Year one is when you hardwire expectations. If you are casual about feedback in the first cohort, you will be fighting that culture for a decade.

1. They Specify What “Feedback” Means

Good new programs do not assume faculty know the difference between:

  • “Good job today”
  • “You should read more about sepsis”
  • “Let us sit down for 10 minutes, walk through that code, and identify two things to keep doing and two things to fix”

Only the last one is actual, actionable feedback.

Strong programs define feedback for faculty and residents, explicitly and early:

  • Feedback is timely (within the same shift if possible)
  • Feedback is behavior-based (“your admission presentations are too long—try this structure”)
  • Feedback includes a next step (“practice focusing your summary on the 3 active problems”)

They say this in faculty development. They say it on day one of orientation. They repeat it in CCC.

Weak programs assume “our faculty are experienced” and do none of this. Huge mistake.

2. They Overweight Formative, Not Just Summative

New programs that scale up too quickly on formal evaluations end up with garbage input to the CCC: vague, delayed, copy-paste comments.

The better approach in year one:

  • Frequent, informal, micro-feedback on the floor or in the OR
  • Monthly or bi-monthly structured sit-downs between resident and faculty mentor
  • Fewer but higher-quality formal evaluations tied to actual observed work

You want a culture where a PGY-1 expects:

  • “My attending will debrief that tough case with me today.”
  • “My mentor will look at my milestone profile this month and actually talk about it.”

You do not want residents relying on a semiannual CCC letter as their first real signal that something is off. That is the pattern that produces angry meetings and probation.

3. They Force Direct Observation Early

Here is the issue in year one: there are no senior residents. No established norms. Faculty are still learning how the rotation flows with learners. Without intention, you will get almost no true direct observation.

Strong new programs design it in:

  • Structured “direct observation days” for new interns:
    Faculty must directly observe basic tasks—H&Ps, consent, handoff, code response, procedures—then document it.

  • Simple tools:
    One-page checklists or brief EPA-based forms that can be filled in under a minute (see the sketch after this list).

  • Protected time:
    The first 4–6 weeks have specific sessions where faculty are not overloaded with patients so they can watch, not just sign notes.
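
Even a lightweight home-grown tool can enforce that brevity. Here is a minimal sketch in Python of what such a form might capture, assuming a hypothetical program builds its own; every field name is illustrative, not a standard instrument:

```python
# Minimal sketch of a sub-minute direct-observation form, assuming a
# hypothetical home-grown tool. All field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class ObservationForm:
    resident: str
    observer: str
    observed_on: date
    task: str          # e.g., "H&P", "consent", "handoff", "code response"
    entrustment: int   # 1 = observer did it for them ... 5 = ready for indirect supervision
    keep_doing: str    # one specific behavior to continue
    fix_next: str      # one specific behavior to change

# Example entry, completable right after the observation:
obs = ObservationForm(
    resident="PGY-1 intern",
    observer="Attending on service",
    observed_on=date(2026, 7, 15),
    task="handoff",
    entrustment=3,
    keep_doing="Consistent illness-severity opener for each patient",
    fix_next="State explicit if-then contingencies for overnight events",
)
print(obs.task, obs.entrustment)
```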

If you skip this, your first CCC meeting is just opinions. “I feel like she is fine.” That is not defensible if someone is struggling.


Designing the CCC in a New Program: Who’s in the Room and What They Actually Do

A lot of new programs treat the CCC like a compliance checkbox. They pick three faculty, put a meeting on the calendar, and that is it.

The programs that get this right treat the CCC as the engine of their educational quality.

1. Membership: Small, Consistent, and Actually Engaged

Good year-one CCCs are:

  • Small: usually 3–5 members
  • Stable: the same core group for at least the first 2–3 years
  • Mixed but intentional:
    • PD or APD (sometimes both, if PD is non-voting)
    • At least one core faculty who works closely with interns
    • Someone who understands assessment and milestones (this might be an APD with that interest)

Common mistake: inviting every “interested” faculty and ending up with 8–10 people, half of whom have not read the evaluations before the meeting. That kills the signal.

2. Data Infrastructure: They Do Not Rely on Memory

First‑year programs lack historical benchmarks, so they overcompensate with structure.

A good setup for each resident includes:

  • Aggregated numerical data:

    • Mini-CEX / direct observation scores
    • Procedure logs
    • 360s from nurses, allied health, and sometimes patients
    • In-training exam or OSCE scores if available
  • Key narrative sources:

    • Free-text comments from rotations
    • Remediation or coaching notes (if applicable)
    • Resident’s own self-assessment

These are not vague: each source is labeled clearly by rotation and time period, so CCC members can see trajectory, not just snapshots.

The best programs pull everything into one dashboard, even if early on that is just a carefully structured spreadsheet rather than a fancy analytics platform.
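
Even the spreadsheet version benefits from one predictable structure. Here is a minimal sketch of that aggregation in Python with pandas, assuming a hypothetical CSV export with one row per completed evaluation; all file and column names are illustrative:

```python
# Minimal sketch: aggregate per-resident evaluation data into a "dashboard"
# view by source and time period. Assumes a hypothetical CSV export with one
# row per completed evaluation; all column and file names are illustrative.
import pandas as pd

evals = pd.read_csv("evaluations_export.csv")  # resident, source, rotation, period, score

dashboard = (
    evals
    .groupby(["resident", "source", "period"])
    .agg(n_evals=("score", "size"), mean_score=("score", "mean"))
    .reset_index()
    .pivot(index=["resident", "source"], columns="period", values="mean_score")
)

# Periods now read left to right in time, so trajectory (not just a
# snapshot) is visible for each resident and evaluation source.
print(dashboard)
```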

Typical Evaluation Sources for CCC in Year-One Programs:

  • Faculty evals: 95
  • 360 evals: 70
  • Direct observation: 80
  • In-training exam: 40
  • Self-assessment: 60

3. Meeting Structure: They Avoid the “Anecdote Olympics”

In weak CCCs, the meeting devolves into:

  • “I had him on nights, seemed fine to me.”
  • “I heard from someone that she struggled once on cross-cover.”
  • One loud faculty dominates; others nod.

Good programs explicitly structure the discussion.

A very functional pattern in year one looks like this:

  1. Pre-meeting:
    Each CCC member reviews residents assigned to them and prepares preliminary milestone “anchor” ratings.

  2. In the meeting, for each resident:

    • Pull up the dashboard
    • 1–2 minute factual summary from the assigned reviewer
    • Quick look at trajectory: early vs more recent evals
    • Discuss specific concerns (with reference to actual comments or episodes)
    • Adjust milestone levels by consensus
  3. Clear documentation:

    • Reasoning for any rating that suggests concern
    • Plan: simple (monitor) versus targeted (coaching, remediation, program change)

The PD’s job in year one is to squash anecdotal commentary unless it is concrete and tied to specific behaviors.


Milestones: How New Programs Avoid the “Checkbox Trap”

Milestones are not intuitive. That is why so many established programs misuse them. New programs have the advantage of a clean slate—if they use it.

1. They Treat Milestones as a Shared Language, Not a Report Card

New programs that do this well start by saying:

“Milestones are how we talk about performance consistently, not your ‘grade’ as a resident.”

They train both faculty and residents:

  • Faculty: how to map specific behaviors to milestone levels, ideally using specialty-specific examples.
  • Residents: how to read their milestone profiles and what “at expected level for PGY-1” actually looks like.

Good programs give concrete, local examples. For an internal medicine PGY-1:

  • PC1 (History and Physical Exam):
    • Level 1: Needs frequent prompting, misses key elements, disorganized H&P
    • Level 2: Usually complete, needs some guidance, still inefficient
    • Level 3: Consistently complete, focused, and reasonably efficient on common problems

They do not just hand people the ACGME document and hope for the best.

2. They Use Milestones to Drive Coaching, Not Punishment

In the first year, almost everyone will be somewhere between Level 1 and Level 3 in most domains. That is normal.

The smart programs use this to normalize growth:

  • “On systems-based practice, you are closer to Level 1. On patient care, you are closer to Level 3. Over the next six months, we will focus intentionally on SBP with specific experiences.”

Then they tether development plans to actual activities:

  • Struggling with handoffs (Prof/ICS)? → Shadow senior resident sign-outs, do observed handoffs with feedback 1–2 times per week.
  • Struggling with clinical reasoning (PC/MK)? → Structured case conferences with attending, write out differential and problem lists, debrief weekly.

The CCC reviews whether those interventions actually moved the needle, not just whether “more time” passed.


How Feedback, CCC, and Milestones Interlock in Year One

If you build each component in isolation, you get noise. The strongest new programs design the system so each part feeds the other.

Here is the rough architecture when it is done well:

  1. At the front line:

    • Faculty give frequent, brief, behavior-based feedback
    • Some of those are captured in short forms tied to specific EPAs or milestones
  2. Monthly or quarterly:

    • Residents meet with assigned advisors, review specific incidents and early data
    • Advisor helps the resident frame a self-assessment linked to milestones
  3. Semi-annually:

    • CCC reviews patterns, not individual blips
    • CCC produces narrative-based milestone decisions with concrete examples
  4. After CCC:

    • PD/Advisor meets with resident
    • Discusses CCC’s view, not as a verdict but as a structured reflection: “Here are 2 areas that are ahead of where we expect, and 1–2 that we want to push.”

Year-One Feedback and CCC Workflow in New Programs:

Clinical work → real-time feedback → brief documented observations → advisor meetings → resident self-assessment → CCC review → milestone decisions → individualized development plan

That loop is the entire game in year one. If any link is weak, you end up with CCC decisions that residents do not recognize as matching reality.


Specific Year-One Challenges and How Strong Programs Handle Them

Let me walk through some problems I have seen repeatedly in new programs—and what the better ones do differently.

Challenge 1: Inflated Early Evaluations (“Everyone Is Great!”)

Faculty in new programs often over-rate first-year residents because:

  • They want to be “supportive”
  • They are nervous about being “too harsh” with the first class
  • They do not know how interns in this setting should look at baseline

Fix in good programs:

  • Explicit calibration sessions:
    Faculty sit down with sample vignettes or videos of resident performance and assign milestone levels, then compare and discuss.

  • PD/APD quietly lowers the ceiling in early months:
    “For the first 3 months, if you think someone is ‘above expected,’ write me a specific example. Otherwise, use ‘at expected’ as your default for solid but typical work.”

  • Direct education about grade inflation:
    “If we say everyone is Level 4 in PGY-1, our CCC will have no way to distinguish who needs support later. That actually harms residents.”

Challenge 2: No Negative Feedback until the CCC Meets

Classic pattern in a new program: the first time an intern hears they are struggling is at the 6‑month CCC summary. Residents feel blindsided, furious, and distrustful.

Better programs mandate:

  • “No surprises” rule:
    If something is concerning enough to bring up in CCC, it must have been communicated to the resident explicitly beforehand.

  • Documentation discipline:
    Faculty and advisors are expected to document “difficult conversations” briefly and factually, not to build a case against the resident but to give the CCC context.

  • Timely interventions:
    If an intern is clearly having trouble with, say, time management or documentation, the program does not wait for the CCC. They start coaching, and the CCC later evaluates the effect.

Challenge 3: Too Little Data in Small Programs

Year-one programs may have 4–8 residents. You might only have a handful of evaluations per resident per rotation. The statistics are thin.

So smart programs adjust their expectations:

  • They accept that pattern recognition will be slower. One bad rotation does not define the resident.

  • They supplement with structured events: OSCEs, simulation sessions, direct observation days, chart audits, and 360s (see the table below).

Those give you additional snapshots outside of routine ward or clinic work. You are not relying on just a couple of busy attendings on nights.

Typical Additional Assessments Used by Strong Year-One Programs:

  • OSCE: standardized check of core clinical skills
  • Simulation: teamwork, crisis management, communication
  • Direct observation days: real-world patient care behaviors
  • Chart audits: documentation quality, clinical reasoning
  • 360s: professionalism and teamwork from nursing/allied staff

Challenge 4: Faculty Inexperience with Milestones

New programs often recruit excellent clinicians who have never touched a milestone form in their life. If you hand them the full ACGME milestone document, they will ignore it.

Better programs simplify:

  • Use core examples: For each high-yield subcompetency (e.g., PC1, ICS, Prof), they produce a 1-page “translation” with:

    • Level 1: concrete local behaviors
    • Level 2: concrete behaviors
    • Level 3: and so on
  • Use “anchors” during faculty dev: Review real resident scenarios: “This intern presents efficiently, but misses social factors and safety planning. Where do you place them on these two subcompetencies?”

  • Emphasize relative calibration: “We are not comparing our interns to a national superstar; we are describing where they are on a developmental continuum.”

This is tedious for the first year or two. But programs that do it early avoid years of erratic ratings later.


Transparency with Residents: How Much Do You Show in Year One?

Here is where programs make very different choices—and you can tell who understands adult learners.

Strong programs do not hide the guts of the system from their first cohort.

They share:

  • The CCC process in detail
  • The types of data CCC sees
  • Sample milestone reports (de-identified)
  • How CCC decisions feed into promotion decisions

They also teach residents how to use feedback:

  • How to read an evaluation for patterns, not just emotional tone
  • How to ask for specific feedback when they get generic “keep reading” comments
  • How to do self-assessment that does not sound like either self-loathing or self-promotion


When residents can see the map, they are far more likely to accept course corrections. Hiding the system behind closed doors breeds paranoia and rumor.


Year One vs Year Three: How Systems Mature

Let us be honest: year-one CCCs and feedback systems are prototypes. The best programs treat them like that deliberately.

They build in structured revision:

  • End of year one:

    • Survey residents: Were you surprised by CCC feedback? Did feedback match your lived experience?
    • Survey faculty: Which tools were impossible to use in real life? Where did we over-complicate?
  • Year two adjustments:

    • Drop evaluation forms that no one completed meaningfully
    • Add or refine direct observation events
    • Tighten or expand CCC membership based on who shows up prepared
  • By year three:

    • The system is stable.
    • There is enough historical data to know what “normal” growth looks like in that environment.
    • The first cohort is now senior and can participate as near-peers in feedback and coaching.

Maturity of Feedback Systems Over First 3 Years:

  • Year 1: 40
  • Year 2: 70
  • Year 3: 90

Year-one programs that think they will “get it perfect out of the gate” usually paralyze themselves. The ones that accept iteration—but still demand rigor and transparency—end up with systems residents trust.


Future-Facing Moves: Where New Programs Are Experimenting

Some of the more forward-thinking new programs are using year one as a sandbox for ideas more established programs are too rigid to try.

Here are a few patterns I am seeing:

1. EPA-Focused Assessment

Instead of swimming in 30+ milestone subcompetencies, some programs organize feedback and CCC decisions around a limited set of Entrustable Professional Activities (EPAs):

  • Admit and manage a common inpatient
  • Perform pre-op evaluation and post-op management
  • Lead a rapid response or code for common scenarios

They then map each EPA to milestones behind the scenes. Faculty think in EPAs, CCC translates that into milestone language for the ACGME.

Residents find this far easier to understand: “How trusted am I to do X?” instead of “Are you a Level 2.5 in SBP2?”
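
Behind the scenes, that translation can start as a simple lookup table. A minimal sketch, with hypothetical EPA names and subcompetency codes chosen only for illustration:

```python
# Minimal sketch of an EPA-to-milestone crosswalk. The EPA names and the
# subcompetency codes mapped to each are illustrative, not an official list.
EPA_TO_MILESTONES = {
    "Admit and manage a common inpatient": ["PC1", "PC2", "MK1", "SBP2"],
    "Perform pre-op evaluation and post-op management": ["PC1", "PC3", "ICS1"],
    "Lead a rapid response or code": ["PC4", "ICS2", "Prof1"],
}

def milestones_for(epa: str) -> list[str]:
    """Translate an EPA-based entrustment decision into the milestone
    subcompetencies the CCC reports on."""
    return EPA_TO_MILESTONES.get(epa, [])

# Faculty record entrustment per EPA; the CCC rolls those up per subcompetency.
print(milestones_for("Lead a rapid response or code"))  # ['PC4', 'ICS2', 'Prof1']
```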

2. Longitudinal Coaching Roles

Instead of assigning an “advisor” in name only, some new programs are serious about coaching:

  • Each intern gets a coach who:
    • Reviews all evaluations
    • Attends CCC (or receives a summary)
    • Meets at least quarterly with the resident
    • Documents a simple learning plan for the next 3–6 months

The CCC reviews the coach’s notes and can check whether the resident is actually working on what they said they would.


3. Data Visualization for Residents

A handful of newer programs are building or buying tools that show residents their progress visually:

  • Trend graphs of milestone levels over time
  • Heatmaps of strengths vs growth areas
  • Comparison against anonymous cohort averages (carefully de-identified)

This reduces defensiveness during feedback. Residents can literally see: “Your communication ratings trend up nicely; your clinical reasoning has plateaued since month 4.” That is a much more focused conversation than “You need to read more.”
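
Here is a minimal sketch of that trend view using matplotlib, with invented ratings purely for illustration:

```python
# Minimal sketch of a resident-facing trend graph: one resident's milestone
# level over time against an anonymized cohort average. All values invented
# for illustration.
import matplotlib.pyplot as plt

months = [1, 4, 7, 10]
resident_ics = [1.0, 1.5, 2.0, 2.5]  # this resident, e.g., an ICS subcompetency
cohort_avg = [1.0, 1.5, 1.5, 2.0]    # de-identified cohort average

plt.plot(months, resident_ics, marker="o", label="You")
plt.plot(months, cohort_avg, marker="s", linestyle="--", label="Cohort average")
plt.xlabel("Month of training")
plt.ylabel("Milestone level")
plt.title("ICS: trajectory vs cohort")
plt.legend()
plt.show()
```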


What Residents Should Expect (and Demand) from a New Program

If you are a resident in the first or second class of a new program, you are not just training—you are shaping the culture. You should reasonably expect:

  • Regular, specific, face-to-face feedback tied to real cases
  • A CCC that is not a mysterious black box
  • Milestone reports that match your lived experience, not surprise verdicts
  • A clear path for addressing concerns if you feel an evaluation is inaccurate or biased

And yes, you will see some clunkiness in year one. Forms that do not quite fit. Meetings that run long. But if leadership is open, transparent, and willing to revise based on data and your experience, that is a good sign.

If instead you see:

  • Vague or non-existent feedback
  • Sudden negative CCC outcomes with no prior warning
  • Milestone ratings that are wildly inflated or deflated with no clear basis

—that is not “just how new programs are.” That is sloppy design.


Bottom Line

Three points matter most for how new programs handle feedback, CCCs, and milestones in year one:

  1. The first year locks in culture. If a program builds clear, behavior-based feedback and transparent CCC processes early, residents will trust the system. If they do not, you will feel that mistrust for years.

  2. Milestones are only useful when interpreted, not just recorded. Strong programs localize and explain them, use them to drive coaching, and avoid the temptation to inflate or hide behind numbers.

  3. The best new programs treat their first feedback and CCC systems as structured experiments. They iterate deliberately, involve residents in that evolution, and use data—real observations, not anecdotes—to guide promotion and remediation decisions.

If you are building or joining a new program, do not underestimate year one. That is when you decide whether your CCC will actually help people grow—or just generate PDFs for the ACGME.
