
The biggest mistake new medical educators make is pretending curriculum development can be “fit in” around everything else. It cannot. You need a structured, ruthless, academic-year timeline or the course will own you.
You’ve got one academic year to develop and study a new curriculum. Here’s how to run it like a project, not a vague aspiration.
Big-Picture Year Plan (Month 0 Snapshot)
Before we go month by month, here's the skeleton. This is roughly what your academic year should look like if you're developing and studying (evaluating) a new med-ed curriculum while still teaching, doing clinics, and staying sane.
| Phase | Primary Focus |
|---|---|
| May–June (Pre-Year) | Needs assessment, approvals |
| July–August | Blueprint, assessments, schedule |
| September–October | Content creation, pilot elements |
| November–December | Finalize materials, train faculty |
| January–February | Launch and early data collection |
| March–April | Iterate, deepen evaluation |
Now we drill down.
May–June: Groundwork and Sanity Checks (Pre-Year)
If your “year” starts in July, your real work starts in May. At this point you should be answering one blunt question: Is this curriculum even worth doing, and can we pull it off this cycle?
Week 1–2: Define the Scope and Non-Negotiables
At this point you should:
- State the purpose in one sentence.
- E.g., “Implement a systems-based, case-driven pre-clinical cardiology block that improves Step 2 CK performance and clinical reasoning.”
- Define your learner group:
- MS1 vs MS2 vs residents? Required vs elective? Class size?
- Lock the timeframe and credit:
- How many weeks? Contact hours per week? Required assessments?
Do not keep this in your head. Write a one-page concept brief. Dean-level people don’t read 20 pages.
Week 2–3: Needs Assessment (Rapid but Real)
You don’t have time for a two-year Delphi process. You do have time for a lean, focused scan.
At this point you should:
- Review:
- Recent exam performance (NBME subject exams, Step scores, OSCEs) related to your area.
- Existing course evaluations: scan comments, not just Likert scores.
- Talk (briefly) to:
- 2–3 clinical faculty: “Where are students weak when they hit the wards in this domain?”
- 3–5 students or residents: “What current teaching on this topic actually helps you? What’s useless?”
- Check standards:
- LCME/ACGME requirements.
- Specialty society guidelines (e.g., AHA for cardiology).
Summarize in a half-page: gaps, priorities, and the “why now.”
Week 3–4: Stakeholder Buy-In and Feasibility
Curriculum without buy-in dies slowly and painfully.
At this point you should:
- Identify:
- Course director (maybe you).
- Co-directors or track leads.
- Core teaching faculty.
- Run a 30–45 minute meeting with:
- Proposed aims.
- Time and resource ask.
- High-level timeline (use a simple Gantt-ish slide).
You’re looking for three things:
- Verbal support from leadership.
- Realistic estimate of faculty availability.
- Agreement on which year this will launch. If they’re vague (“maybe next year”), beware.
July–August: Blueprint and Assessment First
This is where you win or lose. If you skip the blueprint and jump into making slides, you’ll be rewriting everything at 2 a.m. in January.
Early July: Backward Design, Properly Done
At this point you should be designing from the assessment backward. Not vibes. Outcomes.
- Write program-level goals (3–5).
- Example: “Students will correctly manage acute chest pain in a simulated setting.”
- Translate into specific, observable learning outcomes.
- “List the components of initial chest pain evaluation.”
- “Interpret ECGs with common ischemic patterns.”
- “Formulate initial management for NSTEMI vs STEMI.”
Keep outcomes measurable and limited. If you’ve got 40 outcomes for a 2-week module, you’re lying to yourself.
Mid July: Build the Assessment Plan
Design assessment now, not in November.
At this point you should:
- Decide assessment types and weighting:
- Written: NBME-style MCQs, SAQs.
- Performance: OSCE stations, simulation checklists.
- Workplace: mini‑CEX, direct observation forms (for residents).
- Map each assessment to outcomes. Literally.
| Outcome | Assessment Type |
|---|---|
| Interpret ischemic ECG changes | MCQs + OSCE station |
| Manage acute chest pain | Simulation checklist |
| Communicate findings to team | OSCE communication |
- Choose your evaluation metrics for studying the curriculum:
- Pre/post test scores.
- Satisfaction ratings.
- Objective performance (OSCE, clerkship eval changes).
- Longitudinal metrics (board performance, if feasible).
You’re building not just a course, but a small research project. Treat it that way.
Late July–Early August: Content and Modality Blueprint
Now outline how material will be delivered.
At this point you should:
- Decide teaching modalities:
- Large-group interactive.
- TBL/CBL.
- Simulation.
- Asynchronous modules.
- Build a session-by-session schedule:
- Date, theme, outcomes, modality, faculty, assessment linkage.
Example modality mix:
| Modality | Sessions |
|---|---|
| Large-group | 6 |
| Small-group | 4 |
| Simulation | 2 |
| Self-study | 3 |
If your calendar is blank more than 4 weeks out, you’re already behind.
September–October: Content Creation and Micro-Pilots
This is the grind phase. At this point you should be creating, but in a controlled, prioritized way.
Early September: Prioritize High-Stakes Components
You cannot build everything at once. Start with what will hurt most if it’s bad.
Week 1–2 you should:
- Finalize:
- High-stakes exams (blueprint, number of items, cognitive level).
- OSCE/sim scenarios and checklists.
- Draft:
- Session objectives and outlines for the first half of the curriculum.
- Standard slide templates and case templates so faculty aren’t freelancing formats.
Lock in a version control system (simple shared drive with date-stamped folders is fine). I’ve seen teams lose entire OSCE stations because someone overwrote a file the night before.
Late September: Faculty Development & Early Testing
At this point you should stop pretending everyone knows how to teach in your format.
You should:
- Run 1–2 faculty development workshops:
- How to run TBL or CBL.
- How to use the assessment tools, especially OSCE checklists and rating scales.
- Micro-pilot:
- Test 1–2 new cases with a small group of students or residents.
- Trial assessment items on a non-graded quiz to check difficulty and discrimination.
Collect real-time feedback:
- “That ECG was unreadable.”
- “Time for small group discussion was too short.”
- “Instructions for the SP were confusing.”
Fix those now, not after launch.

October: Lock Logistics, Keep Building Content
Now it’s about discipline.
At this point you should:
- Finalize:
- Room bookings and simulation center time.
- Standardized patient contracts/schedules if applicable.
- Technical needs (LMS setup, recording, clickers/polling tools).
- Push content creation on a rolling 4–6 week horizon:
- All materials for weeks 1–3 of the curriculum finished.
- Weeks 4–6 drafted.
- Weeks 7+ outlined.
If you’re still deciding core cases in October for a January launch, you’re in trouble.
November–December: Finalization and Study Design Tightening
These months are about switching from “building” to “polishing and studying.”
Early November: Evaluation and Research Design
You said you’re studying this curriculum. That means method, not vibes.
At this point you should:
- Clarify your research/evaluation questions:
- “Does the new case-based curriculum improve OSCE performance compared to last year?”
- “How do students rate workload and perceived learning compared to the lecture-based format?”
- Choose your design:
- Pre/post comparison within the same cohort.
- Historical control vs new cohort.
- Mixed-methods (quant + focus groups).
- Draft your data collection plan:
- What data?
- When collected?
- By whom?
- Where stored?
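As a concrete sketch of the pre/post comparison design, here is a minimal Python example (all scores are hypothetical) computing the mean gain and a paired effect size:

```python
import statistics

# Hypothetical pre/post test scores (percent correct) for one cohort.
pre  = [62, 58, 71, 66, 60, 74, 69, 63, 57, 70]
post = [71, 66, 78, 70, 69, 80, 75, 72, 64, 77]

# Paired differences: each student serves as their own control.
diffs = [b - a for a, b in zip(pre, post)]
mean_gain = statistics.mean(diffs)

# Cohen's d for paired data: mean difference / SD of the differences.
d = mean_gain / statistics.stdev(diffs)
print(f"Mean gain: {mean_gain:.1f} points; paired d = {d:.2f}")
```

For a historical-control design you would compare independent cohorts instead (unpaired), and for anything publication-bound, run the formal inferential test (e.g., a paired t-test) in your stats package of choice.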
If needed, submit IRB/ethics application now. Do not assume “program evaluation” automatically exempts you. I’ve watched strong projects die on that assumption.
Late November: Final Material Review and Pilot Run-Throughs
You’re entering rehearsal mode.
At this point you should:
- Run at least one full run-through of:
- An OSCE station with SPs and raters.
- A complex small-group session or TBL day.
- Have a colleague review:
- A sample of exam questions for clarity, bias, and alignment.
- Cases for representation and realism.
Fix:
- Typos and broken images (yes, they matter).
- Overlong cases.
- Impossible learning objectives for 50-minute blocks.
December: Contingency Planning and Communication
Curricula fail at the edges: sick faculty, tech meltdown, unclear instructions.
By end of December you should:
- Create backup plans:
- Alternate faculty for each key session.
- Offline version of essential materials (PDFs, printouts).
- Backup assessment if a session is canceled.
- Communicate to students:
- Clear syllabus with expectations, grading, schedule.
- How they’ll access materials.
- Who to contact for logistics vs content questions.
This is also when you set expectations with faculty:
- Response time to student emails.
- Timing for posting materials.
- Grading and feedback deadlines.
January–February: Launch and Early Data Collection
Now you’re live. At this point you should be in tight feedback loop mode, not autopilot.
Week 1: Launch Week
You only get one first impression.
During the first week you should:
- Hold a brief orientation for students:
- Goals of the new curriculum.
- How learning will look different.
- How feedback will be used.
- Run your first assessment low-stakes:
- A short quiz or formative OSCE element.
- This gives you baseline data and exposes logistical problems early.
Track:
- Attendance patterns.
- LMS access issues.
- Timing problems (sessions consistently running over, etc.).
Weeks 2–4: Weekly Micro-Evaluation and Adjustment
Here’s where most course directors either overreact (rewrite everything) or underreact (ignore clear red flags). You want a middle path.
At this point you should:
- Hold a 15–20 minute weekly huddle with key faculty:
- What worked last week?
- What broke?
- What do we tweak for next week (not next year)?
- Collect short pulse-checks from students:
- 3–4 items max:
- Perceived clarity.
- Workload.
- Perceived learning.
- Open comment.
Track the trend week to week, e.g. the average overall rating:
| Week | Avg rating (1–5) |
|---|---|
| Week 1 | 3.8 |
| Week 2 | 4.1 |
| Week 3 | 3.9 |
| Week 4 | 4.2 |
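If you want to automate that trend-watching, a few lines suffice; the ratings and the 3.5 action threshold below are assumptions for illustration:

```python
# Hypothetical weekly pulse-check ratings (1-5 Likert), one list per week.
pulse = {
    "Week 1": [4, 3, 4, 4, 4],
    "Week 2": [4, 4, 5, 4, 4],
    "Week 3": [4, 3, 4, 4, 5],
    "Week 4": [5, 4, 4, 4, 4],
}

# Average rating per week.
averages = {week: sum(r) / len(r) for week, r in pulse.items()}

# Flag weeks below an action threshold (3.5 is an assumed cut-point).
flagged = [week for week, avg in averages.items() if avg < 3.5]
for week, avg in averages.items():
    print(f"{week}: {avg:.1f}" + ("  <-- review" if week in flagged else ""))
```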
Only change what:
- Is clearly broken (tech, timing, logistics).
- Creates safety or equity issues.
- Has zero alignment with your stated outcomes.
Don’t rewrite the philosophy mid-course because three students want more lecture.
Late February: First Summative Data Review
By end of February you’ve likely had at least one substantial assessment (exam, OSCE, or major assignment).
At this point you should:
- Compare student performance to:
- Your prior cohort (if you have that data).
- Your expected standard (e.g., “70% of students should meet this level”).
- Analyze:
- Item-level stats on exams (difficulty, discrimination).
- Checklist scores and inter-rater reliability for OSCE.
- Triangulate with:
- Weekly pulse-checks.
- Anecdotal faculty impressions.
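For the item-level stats, classical test theory gives you two quick numbers per item: difficulty (proportion correct) and a discrimination index. A minimal sketch, using the upper-lower method on a made-up 0/1 response matrix:

```python
# Hypothetical 0/1 response matrix: rows = students, columns = exam items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

n_students = len(responses)
totals = [sum(row) for row in responses]  # total score per student

def difficulty(item):
    """Proportion of students answering the item correctly (p-value)."""
    return sum(row[item] for row in responses) / n_students

def discrimination(item, frac=1/3):
    """Upper-lower discrimination index: p(top group) - p(bottom group)."""
    k = max(1, round(n_students * frac))
    ranked = sorted(range(n_students), key=lambda s: totals[s])
    low, high = ranked[:k], ranked[-k:]
    p_high = sum(responses[s][item] for s in high) / k
    p_low = sum(responses[s][item] for s in low) / k
    return p_high - p_low

stats = {i: (difficulty(i), discrimination(i)) for i in range(4)}
```

In this toy matrix, the last item discriminates negatively (weaker students outperformed stronger ones), which is exactly the kind of item you rewrite or drop before the next administration.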
This is your first reality check: is this curriculum directionally right, or are there foundational issues?
March–April: Consolidation, Full Evaluation, and Next-Cycle Planning
Now you’re finishing the first run and planning version 2.0.
March: End-of-Course Evaluation and Deeper Analysis
At this point you should be moving from “how did it feel” to “what did it do.”
You should:
- Administer end-of-course evaluations:
- Standard institution form.
- Plus 3–5 curriculum-specific questions (about cases, simulations, sequencing).
- If IRB-approved for research:
- Conduct focus groups or semi-structured interviews with:
- A small sample of students.
- A few faculty and SPs.
- Pull all your quantitative data:
- Pre/post tests.
- Summative exam/OSCE scores.
- Comparison with historical performance, where valid.
Then block 2–3 concentrated half-days to analyze:
- Obvious wins (e.g., OSCE scores up, engagement high).
- Clear failures (e.g., one module consistently poorly rated, specific sessions tanking).
Do this now, while details are fresh. Six months later, everyone forgets.

Early April: Curriculum Debrief and Redesign Decisions
At this point you should gather the right people and be brutally honest.
Run a 2–3 hour debrief meeting with:
- Course leadership.
- Representative faculty.
- Maybe one or two student reps, depending on culture.
Structure the discussion:
- What are we definitely keeping?
- E.g., “Sim day on decompensated heart failure worked extremely well.”
- What needs modification, not destruction?
- Timing tweaks, clearer instructions, better preparatory material.
- What are we cutting or replacing?
- Sessions that consistently bombed or are redundant with other parts of the curriculum.
Document:
- Specific changes.
- The rationale (tie to data, not feelings).
- The responsible person and timeline for revisions.
Late April: Lock Next-Year Plan and Scholarship Products
By the end of the academic year, you should be ready for both version 2.0 and scholarship.
At this point you should:
- Freeze the revised outline for next year:
- Updated session list.
- Revised assessments.
- Adjusted schedule (if needed).
- Plan your faculty development needs for the next cycle based on observed gaps.
- Outline at least one scholarly product:
- Conference abstract (IAMSE, AAMC, ASME, etc.).
- Med-ed journal article (e.g., MedEdPORTAL, Academic Medicine).
- Internal presentation for promotion dossier.
If you wait until summer to “get around” to scholarship, you won’t. Do the outline now.
Micro-Timeline: Example 16-Week Curriculum Run (January–April)
To tie it together, here’s a high-level week-by-week view for a 16-week semester where the new curriculum runs across the whole term.
| Week | Milestone |
|---|---|
| Pre-launch - Week 0 | Final checks, faculty briefing |
| Early phase - Week 1 | Launch, orientation, baseline assessment |
| Early phase - Week 2 | First full teaching week, pulse-check |
| Early phase - Week 3 | Weekly review, minor tweaks |
| Early phase - Week 4 | First graded quiz, data review |
| Mid phase - Week 5 | OSCE or major assignment |
| Mid phase - Week 6 | Faculty huddle, adjust case sequence |
| Mid phase - Week 7 | Mid-course evaluation, student focus group |
| Mid phase - Week 8 | Midterm exam, performance analysis |
| Mid phase - Week 9 | Targeted revisions to weaker sessions |
| Late phase - Week 10 | Advanced topics, continued pulse-checks |
| Late phase - Week 11 | Simulation-intensive week |
| Late phase - Week 12 | Prep for final assessments |
| Late phase - Week 13 | Final OSCE, exit exam |
| Late phase - Week 14 | End-of-course evaluations |
| Late phase - Week 15 | Data consolidation, initial analysis |
| Late phase - Week 16 | Debrief, plan revisions and scholarship |
Final Checkpoints
If you remember nothing else from this year-long plan, keep these three anchors:
- Front-load design and assessment. By end of August, your outcomes, assessment plan, and session blueprint should be real, not hypothetical.
- Build in weekly feedback loops during delivery. Short, ruthless, and consistent. Adjust what’s broken; protect what’s working.
- Treat evaluation like scholarship from day one. IRB, data plan, and analysis time need to be on the calendar early, not bolted on at the end.
Follow this timeline with discipline, and you will end the year with not just a functioning curriculum, but a defensible, improvable, and publishable one.