Evaluating Teaching Quality: Conferences, Feedback Systems, and Prep

January 6, 2026
19 minute read

Residents engaged in a small-group teaching conference on the wards.

Most applicants completely misjudge teaching quality because they look at the wrong signals.

You see “protected didactics,” “robust feedback,” “strong board prep” on every website and think programs are similar. They are not. The gap between the top 20% and the bottom 20% in actual teaching quality is enormous, and you can absolutely detect it if you know where to look.

Let me break this down very specifically: you are evaluating three pillars of a program’s educational culture:

  1. How they run conferences and didactics
  2. How their feedback systems actually function
  3. How intentional and effective their exam/prep infrastructure is

If any one of these is weak, you will feel it by PGY‑2. If all three are weak, you will be teaching yourself while doing a full‑time service job.

1. Conferences: The Most Overhyped, Most Misunderstood Signal

Everyone advertises conferences. Very few run them well.

You are not just counting “hours of didactics.” You are interrogating the quality, structure, and protection of that time, and whether it survives contact with reality (high census, staffing shortages, demanding attendings).

A. Core Questions To Ask About Conferences

On interview day or second look, you should be able to get precise answers to these:

  • How many hours of scheduled conference per week?
  • How much of that is actually protected (with coverage)?
  • What percentage of residents realistically attend on a typical ward month?
  • Who is in the room teaching (faculty vs residents vs pharma vs no one)?
  • How interactive is it? (polling, cases, chalk talks vs 60‑minute PowerPoint monologue)
  • Is there a longitudinal curriculum or just random talks slotted in?

Well‑run programs can answer these in detail. Mediocre ones give fluff: “Oh, we have lots of great teaching; it’s very robust.”

You walk away with nothing.

B. Spotting Real “Protected Time” vs Fake Protected Time

Real protected time means:

  • Pagers and phones are physically handed off to a designated coverage person (float, hospitalist, night team).
  • There is a clear expectation that attendings do not round through conference.
  • Nurses and consultants know the block and do not schedule elective tasks during it.
  • Chiefs or program leadership will call out attendings who routinely violate it.

Fake protected time:

  • “You’re technically supposed to go, but if something comes up you stay on the floor.”
  • Senior “covers” but still gets crushed, so interns keep stepping out every 5 minutes.
  • Nurses page right through because nobody ever told them otherwise.
  • ICU/ED “does not really count” for protected time, so you miss half the curriculum.

Ask residents: “On your last wards month, what percentage of noon conferences did you actually attend from start to finish?” If they say 20–40%, you know exactly where teaching sits on the priority list.

Reported Noon Conference Attendance on Busy Wards (percent of noon conferences attended start to finish):

| Program | Attendance |
| --- | --- |
| Program A | 80% |
| Program B | 55% |
| Program C | 30% |
| Program D | 20% |

Program A is where you will actually learn. Program D is where you will become very “efficient” and very under‑taught.

C. Types of Conferences That Actually Matter

Ignore the grand labels. Look for these specific formats and how they are executed:

  1. Morning Report (for IM/FM/Med‑peds types)

    • Good: Case‑based, resident‑led with strong faculty discussant, 45–60 minutes, focused on diagnostic reasoning and management frameworks. Chiefs actually moderate.
    • Bad: Attending reading their own interesting case from 5 years ago, zero participation, ends with “Any questions?” and silence.
  2. Noon Conference / Core Curriculum

    • Good: Mapped to ACGME/board blueprint, repeated on a 12‑ or 18‑month cycle, high faculty attendance, use of audience response, handouts or digital notes.
    • Bad: Random pharma lunches, disorganized topics (“Interesting EKGs” three times in a month, zero coverage of bread‑and‑butter CHF).
  3. Subspecialty Conferences

    • Good: Residents actually invited and free to go; sessions are pitched at resident level (not just for fellows).
    • Bad: “You can come if you want, but we are too busy,” and they are essentially fellow‑only echo chambers.
  4. M&M / QI Conferences

    • Good: Structured around systems improvement and education, residents are engaged, not just publicly shamed.
    • Bad: Blame‑the‑intern ceremonies dressed up as “learning opportunities.”

If a program cannot show you a structured, recurring curriculum, you will be at the mercy of whichever faculty like giving talks.

D. What To Look For On Interview Day Tours

You will not get a 3‑month ethnography of their teaching culture. You get a few hours. Use them intelligently:

  • Ask for a sample didactic schedule (last month or last block), not a generic “curriculum slide.”
  • Glance at conference rooms: Are there residents’ notes on whiteboards, timestamps of recent conferences, or is it mostly empty corporate meeting space?
  • Ask: “Who runs morning report?” If the answer is “It depends, sometimes…” with no clear structure, that is weak.
  • Ask a simple reality‑check question: “How often do conferences get cancelled?” Programs with strong culture will almost brag: “Basically never. People get grumpy if we cancel.”

E. Red Flags Hidden in Plain Sight

  • “We do most of our teaching on rounds.” Translation: no structured curriculum. Luck of the draw.
  • “We’re moving to a more resident‑led conference model.” Translation, 50% of the time: we have no faculty time or interest, so residents are now responsible for everything.
  • “We are piloting a new didactic schedule.” Translation: what you see today may not exist when you start; they are improvising.

You want stability plus evolution, not chaos disguised as innovation.


2. Feedback Systems: Do You Actually Get Better, Or Just Survive?

ACGME requires feedback. That does not mean you will receive actionable, specific, timely feedback.

Teaching quality is not just about the lectures. It is about whether the program systematically helps you identify weaknesses and improve.

A. Anatomy of a Real Feedback System

A functional system has three layers:

  1. Formative, real‑time feedback

    • End‑of‑shift comments. Attending pulls you aside and gives 5–10 minutes of concrete feedback.
    • Mid‑rotation check‑ins that actually happen, not just boxes ticked in New Innovations/MedHub.
  2. Summative, written evaluations

    • Completed on time (within 2–3 weeks of the end of the rotation).
    • Contain written comments, not just “meets expectations” on 20 identical items.
  3. Synthesis and follow‑through

    • Semiannual review with PD or APD that references specific evals and creates an actual plan: “Your notes need work; here is what we are going to do about it next block.”
    • Use of milestones for guidance, not just accreditation paperwork.

Ask residents: “When was the last time you got specific, useful feedback that changed something about how you work?” If they have to think hard, that is your answer.

B. Questions That Cut Through the Sales Pitch

You are not asking, “Do you receive feedback?” Everyone will say yes. Ask these instead:

  • “How often do attendings give you feedback before the last day?”
  • “Are written evals usually completed? Or do you need to chase them?”
  • “Do you see what attendings write about you? Or is it just PD‑only?”
  • “Can you give an example of negative feedback and what happened next?”
  • “Have you ever had a faculty member you could not get feedback from? What did the program do?”

Good programs:

  • Residents can tell you specific stories of actionable feedback.
  • Chiefs or coordinators chase delinquent evaluations aggressively.
  • There is a formal mid‑rotation touchpoint.

Weak programs:

  • Residents say, “Honestly, you only hear if something goes really wrong.”
  • “We get evals…eventually.”
  • PD “check‑ins” are 10 minutes of generic reassurance, no data.

Feedback System Strength Comparison:

| Feature | Strong Program | Weak Program |
| --- | --- | --- |
| Mid-rotation feedback | Scheduled | Rare/informal |
| Eval completion time | < 2–3 weeks | 1–3 months |
| Written comments on evals | Common | Seldom |
| Semiannual reviews | Data-driven | Vague/general |
| Faculty coaching availability | Structured | Ad hoc |

C. The Hidden Curriculum Around Feedback

Listen for tone.

Residents in strong programs talk about feedback even when you do not prompt them:

  • “My MICU attending sat me down week one and said, ‘Here are two specific goals for you this month.’ It was useful.”
  • “Our APDs are pretty blunt but fair. You know where you stand.”

In weaker programs, there is a strange avoidance:

  • “Everyone here is really nice; no one will yell at you.” That is fine, but niceness is not feedback.
  • “You will be fine as long as you show up and work hard.” Translation: no one is systematically supporting your growth; you are self‑directed or you stagnate.

Pay attention to how residents talk about struggling colleagues. In a healthy culture:

  • “Yeah, we have people who needed help; the program put them on a structured plan and most improved.”

In a toxic or checked‑out culture:

  • “You do not want to be on the PD’s radar.”
  • “If you mess up, they just…well, word gets around.”

You are not just evaluating the existence of a feedback system. You are evaluating whether it is safe to be less than perfect and still be developed rather than discarded.


3. Prep: Boards, In‑Training Exams, and Real Career Preparation

Teaching quality is not just “what happens 12–1 pm.” It is whether the program takes ownership of your outcome: passing boards, performing on in‑training exams, and being ready for fellowship or practice.

A. Board Prep: Look Past the Buzzwords

Every slide deck says: “Our board pass rate is excellent.” You need numbers and structure.

Ask:

  • “What is your 5‑year rolling board pass rate for first‑time takers?”
  • “Do you provide any paid resources (question banks, review courses, flashcards)?”
  • “How do you respond if a resident fails the in‑training exam or boards?”

For internal medicine, for example, anything consistently below ~90% first‑time pass is concerning, and below 85% is a serious red flag unless there is a very clear explanation and recent sustained improvement.
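
If it helps to keep those cutoffs straight while comparing programs, you can even encode them. A minimal sketch in Python, using the rough ~90%/85% internal medicine thresholds above; the function name and output strings are my own illustration, not any official tool:

```python
def flag_pass_rate(five_year_rate: float) -> str:
    """Classify a rolling 5-year first-time board pass rate (in percent)
    using the rough internal-medicine cutoffs discussed above."""
    if five_year_rate >= 90:
        return "acceptable"
    if five_year_rate >= 85:
        return "concerning -- ask about trends and explanations"
    return "serious red flag -- demand evidence of sustained improvement"

print(flag_pass_rate(94))  # acceptable
print(flag_pass_rate(86))  # concerning -- ask about trends and explanations
print(flag_pass_rate(80))  # serious red flag -- demand evidence of sustained improvement
```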

5-Year First-Time Board Pass Rates by Program (first-time pass rate, %):

| Program | Min | Q1 | Median | Q3 | Max |
| --- | --- | --- | --- | --- | --- |
| Program X | 90 | 92 | 94 | 96 | 98 |
| Program Y | 80 | 84 | 86 | 88 | 90 |
| Program Z | 70 | 76 | 80 | 83 | 86 |

Program Z might still be “nice.” It is not where you go if you care about structured education.

Concrete board‑prep signals of quality:

  • Program purchases a question bank license for all residents (MKSAP, TrueLearn, Rosh, etc.).
  • Scheduled board‑review conferences integrated into the year, not just random sessions in PGY‑3.
  • Review sessions that track resident performance and adapt topics accordingly.
  • Dedicated board‑prep time or lighter elective near graduation for residents who want it.

Programs that just “remind you to study on your own time” are shifting all responsibility onto you. That is acceptable in a top‑tier academic environment with very strong residents; it is less acceptable in a community program that also has weaker pass rates.

B. In‑Training Exams (ITEs): Tool or Paperwork?

ITE approach tells you how seriously a program takes academic outcomes.

Good programs:

  • Share scores individually and benchmark against national percentiles.
  • Meet with residents after ITEs to set goals and plan remediation where needed.
  • Adjust conference topics based on patterns of weakness (e.g., everyone bombing endocrine).

Weak programs:

  • “We take the ITE in October” and then…nothing. Maybe an e‑mail saying “Study more.”
  • No one can tell you average scores or trends.

Ask explicitly: “What happens if a resident scores below, say, the 25th percentile on the ITE?” You want to hear a structured answer, not “We tell them to work harder.”

C. Fellowship / Career Prep As a Teaching Metric

Strong educational environments produce graduates who are competitive for what they want next. That might be:

  • Subspecialty fellowship
  • Hospitalist jobs
  • Community practice
  • Academic careers

How they prepare you:

  • CV/ERAS review sessions, mock fellowship interviews.
  • Faculty mentors who actually meet with you more than once.
  • Letter writers who know you well because they have worked with you beyond service coverage.
  • Guidance around scholarly activity that is realistic (not “Do a randomized trial in your spare time”).

Ask senior residents:

  • “If you decided on a new subspecialty late in PGY‑2, would it be possible to get the right exposure and letters?”
  • “How many residents matched into your top fellowship choices last year? Any examples?”
  • “If you wanted a job locally vs nationally, did the program help with connections?”

Teaching quality is not just internal. It is about whether the program is outward‑facing enough to help you land where you want to be.


4. How To Actually Compare Programs On Teaching: A Simple Framework

You will interview at 10–20 programs. They will blur together. You need a concrete way to rate what matters.

Here is a straightforward, low‑BS scoring system I have seen residents use effectively.

A. Build a Simple Scorecard

After each interview, take 5–10 minutes and fill this out before the programs blend together in your memory.

Residency Teaching Quality Scorecard:

| Domain | 1 (Poor) | 3 (Average) | 5 (Excellent) |
| --- | --- | --- | --- |
| Conferences | Disorganized; low attendance | Some structure; spotty protection | Structured curriculum; truly protected |
| Bedside Teaching | Rare; task-focused | Variable by attending | Frequent, expected, modeled |
| Feedback System | Late, generic | Inconsistent, some useful | Timely, specific, culture of coaching |
| Board/ITE Prep | Minimal support | Some resources, little follow-up | Provided resources, data-driven plans |
| Career/Fellowship Prep | Residents on their own | Informal help | Formal mentorship, strong outcomes |

You are aiming for mostly 4s and 5s across those domains. If a program keeps landing at 2s and 3s, think hard before ranking it high.
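
If you interview at 15+ programs, even a few lines of code (or a spreadsheet) will keep your ratings consistent. Here is a minimal sketch of the tally in Python; the domains come straight from the scorecard above, while the example ratings and equal weighting are hypothetical choices you should adjust:

```python
from statistics import mean

# Domains from the scorecard above, each rated 1-5 after the interview.
DOMAINS = [
    "Conferences",
    "Bedside Teaching",
    "Feedback System",
    "Board/ITE Prep",
    "Career/Fellowship Prep",
]

def teaching_score(ratings: dict[str, int]) -> float:
    """Average the 1-5 domain ratings into a single teaching score."""
    return mean(ratings[d] for d in DOMAINS)

# Hypothetical post-interview ratings for two programs.
program_a = {"Conferences": 5, "Bedside Teaching": 4, "Feedback System": 5,
             "Board/ITE Prep": 4, "Career/Fellowship Prep": 4}
program_c = {"Conferences": 2, "Bedside Teaching": 3, "Feedback System": 2,
             "Board/ITE Prep": 3, "Career/Fellowship Prep": 2}

for name, ratings in [("Program A", program_a), ("Program C", program_c)]:
    print(f"{name}: {teaching_score(ratings):.1f} / 5")
```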


5. Concrete Scripts And Moves You Can Use On Interview Day

Let me hand you phrases. You can copy‑paste them into your brain and deploy.

A. Questions To Ask Residents (Not Faculty)

  • “On a typical ward month, how many noon conferences do you actually attend start to finish?”
  • “Who covers the floor when you are in conference?”
  • “Can you remember a time when an attending gave you feedback that stung a little but helped a lot?”
  • “How long after a rotation do you usually see written evaluations?”
  • “If someone is struggling academically, what does the program actually do?”
  • “What board resources does the program pay for, versus what you buy yourself?”
  • “How did the program help you with fellowship or job applications?”

Residents will be more honest, and you can ask these in small groups or 1:1.

B. Questions To Ask Faculty / PDs

  • “Can you walk me through your core conference curriculum and how it repeats over 3 years?”
  • “How do you ensure protected teaching time is respected on high‑census days?”
  • “What changes have you made to teaching or board prep based on recent ITE or board pass data?”
  • “How do you train faculty to give effective feedback?”
  • “What is your first‑time board pass rate over the last 5 years, and how do you respond when that dips?”

Pay attention to whether they answer with specifics or slide into vague generalities.

C. What To Look For In Written Materials

When you get home, skim:

  • Sample block schedules: Is there actually time carved out for didactics?
  • Curriculum maps: Are they mapped to ACGME competencies and board blueprints or just a list of topics?
  • Board pass stats: Are they easy to find, or buried? Programs proud of their outcomes usually display them.

If information is conspicuously missing (no board pass data, no clear curriculum structure), ask yourself why.


6. Pulling It All Together For Your Rank List

You are not choosing a spa. You are choosing where you will become the physician you are for the rest of your career. Educational quality is not a “nice to have.” It is the core product.

Here is a simple three‑column exercise you should do before certifying your rank list:

  1. Column 1: List your top 5–7 programs.
  2. Column 2: For each, write a one‑sentence summary of their teaching environment based on conferences, feedback, and prep.
  3. Column 3: Give each a teaching score out of 10.

Example:

  • “Program A – Highly structured core curriculum, true protected time, aggressive board support, residents talk about frequent feedback.” Score: 9/10
  • “Program B – Very heavy service, strong bedside teachers but weak formal didactics, board support modest but outcomes good.” Score: 7/10
  • “Program C – Advertises a lot of teaching but residents report missing most conferences, vague about board pass rates.” Score: 5/10

Now compare that to your “gut feel” about location, lifestyle, prestige. Decide consciously where you are willing to trade teaching quality for other factors.
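
If it helps to make that trade explicit, you can weight teaching against your other factors. A toy sketch with entirely hypothetical weights and scores; the point is to force yourself to state the weights, not to outsource the decision:

```python
# Hypothetical weights reflecting your priorities (they sum to 1.0).
WEIGHTS = {"teaching": 0.40, "location": 0.25, "lifestyle": 0.20, "prestige": 0.15}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of 0-10 factor scores."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

programs = {
    "Program A": {"teaching": 9, "location": 6, "lifestyle": 7, "prestige": 7},
    "Program B": {"teaching": 7, "location": 9, "lifestyle": 8, "prestige": 8},
}

for name in sorted(programs, key=lambda p: -overall_score(programs[p])):
    print(f"{name}: {overall_score(programs[name]):.1f} / 10")
```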

You do not need perfection. You do need alignment: a program whose teaching culture matches how you learn and what you need.


FAQ

1. How much weight should I give teaching quality versus program reputation when ranking?
If you are serious about long‑term competence and board performance, teaching quality should be near the top, alongside culture and location. A big‑name program with chaotic teaching and weak feedback will not magically educate you. Reputation can help with fellowship and jobs, but poor preparation will cap how far you can leverage that name. I tell applicants: if two programs are similar on location and fit, pick the one with clearly better teaching infrastructure, even if the name is slightly “less fancy.”

2. Is a weak formal didactic schedule a deal‑breaker if residents seem clinically strong?
Not always. Some high‑volume, high‑acuity programs produce excellent clinicians with relatively sparse formal conferences because the bedside and case‑based teaching is superb. The key distinction: are residents strong because of the environment, or in spite of it? If they are thriving due to strong mentorship, active teaching on rounds, and excellent outcomes (board pass rates, fellowships), you can live with fewer PowerPoints. If they are just surviving and self‑studying at home, that is a different story.

3. How can I assess feedback culture if people are afraid to be honest?
You read between the lines. Ask for specific examples: “Tell me about a time you got tough feedback that helped you.” If people cannot produce any real cases, that suggests either no feedback or a fear‑based culture. Also, ask multiple residents the same question and look for consistency. If junior residents say “We get a lot of coaching” and seniors say “No one tells you anything unless there is a problem,” that inconsistency is instructive. Consistent, concrete stories usually mean the system is real.

4. What if a program recently revamped its curriculum and claims ‘big improvements’?
New initiatives are not automatically bad, but they are unproven. Ask: “What changes have you made, and what outcomes have you seen so far?” If they can show early improvements in ITE scores, better conference attendance, or resident satisfaction and can describe specific steps (not just buzzwords), that is somewhat reassuring. Still, there is execution risk. As an applicant, you are safer betting on programs with a track record of stable, high‑quality teaching rather than those mid‑overhaul—unless you are very attracted to other strengths and are comfortable with some uncertainty.

With this framework in your pocket, you are not just reacting to glossy brochures and friendly smiles. You are dissecting how a program actually teaches, supports, and prepares its residents. Once you can do that, you are a lot closer to building a rank list that will age well three years from now—when you are the one standing at the front of the room, teaching the next class.
