
The biggest mistake medical educators make is trying to “do a project” instead of building scholarship from day one.
You do not need a PhD in education. You do need a clear, ruthless process. I am going to give you that process.
This is a stepwise guide to taking your messy teaching idea and turning it into a scholarship‑ready medical education project that can withstand IRB review, peer review, and promotion committees.
Step 1: Stop “Having an Idea” and Start Framing a Scholarship Question
Most education projects die as vague ideas:
- “We should improve our intern orientation.”
- “Let’s create a point‑of‑care ultrasound curriculum.”
- “Residents hate morning report; we should fix that.”
These are not scholarship questions. They are complaints with aspirations.
You need a focused, researchable question. Start with this three-part filter, then add a comparator and a time frame:
- Learner group
- Intervention
- Measurable outcome
Convert fuzzy idea → scholarship question like this:
- Fuzzy: “Better intern orientation.”
- Scholarship-ready:
“Among incoming internal medicine interns at our institution, does a structured two-day simulation-based orientation, compared with the usual single-day lecture-based orientation, improve Objective Structured Clinical Examination (OSCE) performance and reduce patient safety event reports in the first three months?”
Now you have:
- A population (incoming IM interns)
- An intervention (two-day sim-based orientation)
- A comparator (usual single-day lecture)
- Outcomes (OSCE scores, patient safety events)
- Time frame (first three months)
Your action today:
Write down your current idea. Force yourself to rewrite it into a question with all five elements above. If you cannot, your project is not ready.
Step 2: Choose the Right Project Type (and Stop Fighting Reality)
Not every good idea is a randomized trial. You need to match your ambition to your time, data access, and experience.
Here are the four main buckets for beginner-to-intermediate med ed projects:
| Project Type | Typical Design | Best For |
|---|---|---|
| Curriculum Development | Pre/post or cohort study | New teaching programs |
| Assessment Innovation | Validation/correlation study | New exams, OSCEs, tools |
| Program Evaluation | Before/after, mixed-methods | Orientation, rotations |
| Faculty/Resident Development | Workshop evaluation | Teaching skills, feedback |
How to pick quickly
Ask yourself:
Am I creating a new curriculum?
→ Curriculum development study with learner outcomes.

Am I creating or revising an assessment tool (checklist, rating scale, exam)?
→ Assessment study with reliability/validity evidence.

Am I changing how a program works (new schedule, new clinic format)?
→ Program evaluation (often mixed methods: surveys + outcomes + interviews).

Am I training faculty/residents to teach better?
→ Faculty development study with behavior or learner outcome measures.
If this is your first serious project, avoid:
- Multi-center designs
- True randomization
- Complex qualitative methods without a collaborator who actually knows them
Start with a single-site, quasi-experimental design (pre/post, cohort comparison, or interrupted time series). Reviewers do not punish feasibility. They punish sloppy design.
Step 3: Do a Targeted, Ruthless Literature Sweep (90 Minutes, Not 9 Months)
You are not writing a thesis. You are building enough background to:
- Prove your idea is not obviously redundant.
- Borrow what already works.
- Justify your methods.
Set a 90‑minute timer and do this:
Search terms
Combine your topic + learner level + setting.
Example:
“simulation intern orientation internal medicine patient safety”

Databases
- PubMed
- MedEdPORTAL
- Your specialty’s main education journal (e.g., JGIM, Academic Medicine, Anesthesia & Analgesia)
Grab only what matters
You need:
- 3–5 papers on similar interventions or populations
- 1–2 on theories or frameworks (Kirkpatrick, self-determination, deliberate practice, etc.)
- 1–2 on measures you might reuse (validated scales, OSCE checklists, etc.)
Extract instead of reading everything
For each key paper, jot down:
- What they did (design, setting)
- Outcomes and instruments used
- Effect sizes or key results
- Weaknesses you can improve on
You are not trying to know everything. You are trying to anchor your project in the existing literature and avoid reinventing the wheel.
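If you like scripting, the PubMed half of this sweep can be automated. Here is a minimal sketch using Biopython's Entrez module — it assumes you have Biopython installed and an email address to identify yourself to NCBI; MedEdPORTAL and journal sites still need manual searching:

```python
# pip install biopython
from Bio import Entrez

Entrez.email = "you@institution.edu"  # NCBI asks for a contact email

# The example query from this step, verbatim
query = "simulation intern orientation internal medicine patient safety"

# Pull up to 20 matching PubMed IDs
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
ids = Entrez.read(handle)["IdList"]
handle.close()

# Fetch titles so you can triage within your 90-minute budget
if ids:
    handle = Entrez.esummary(db="pubmed", id=",".join(ids))
    for record in Entrez.read(handle):
        print(record["Title"])
    handle.close()
```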
Step 4: Anchor Your Project in a Simple Framework (Without Getting Lost in Theory)
Educational theory scares clinicians because they have seen it misused as jargon wallpaper. You need one framework that fits what you are doing.
Here is a brutally practical menu:
| Framework | Approximate share (%) |
|---|---|
| Kirkpatrick | 40 |
| Competency/EPAs | 25 |
| Deliberate Practice | 15 |
| Self-Determination | 10 |
| None/Other | 10 |
Quick framework matches
You are primarily measuring learning outcomes, behavior, or system impact
→ Use Kirkpatrick’s Levels (reaction, learning, behavior, results).
Example:
- Level 1: Learner satisfaction
- Level 2: Knowledge/skills test
- Level 3: Behavior change (chart audits, observed behavior)
- Level 4: Patient or system outcomes
You are aligning with ACGME competencies, EPAs, or milestones
→ Use competency-based medical education (CBME) frameworks.

You are doing simulation, skills training, or repetition-heavy practice
→ Use deliberate practice / mastery learning concepts.

You are shifting autonomy, motivation, or coaching
→ Use self-determination theory or coaching models.
Pick one. Two at most. Explicitly state:
- Why this framework fits.
- How it guides your choice of outcomes and intervention design.
That alone elevates your project from “we tried a new workshop” to “we designed and evaluated an intervention grounded in X framework.”
Step 5: Define Outcomes and Measures That Promotion Committees Actually Respect
If your only outcome is “participants liked it,” you have a teaching innovation, not education scholarship.
You need at least one objective outcome beyond satisfaction.
Think in layers, roughly aligned with Kirkpatrick:
Reaction (Level 1)
- Session evaluations
- Net promoter–style items (“Would you recommend…?”)
Learning (Level 2)
- Pre/post knowledge tests (MCQs, SAQs)
- Skills checklists (simulation, OSCE stations)
- Script concordance tests, if you know what you are doing
Behavior (Level 3)
- Chart review (documentation changes, ordering patterns)
- Direct observation with validated tools (mini-CEX, teaching observation scales)
- Patient interaction recordings coded with a rubric
Results (Level 4)
- Patient outcomes (LOS, readmissions, complication rates)
- System metrics (incident reports, throughput, adherence to pathways)
For a scholarship-ready early‑career project, aim for:
- 1–2 Level 2 outcomes
- Optional but desirable: 1 Level 3 outcome
Reaction data are fine as supporting material, not the main event.
Key point: Whenever possible, steal (adapt) existing validated instruments. Promotion committees and reviewers trust reliability and validity; they do not care that you wrote your multiple-choice questions from scratch at midnight.
Step 6: Design the Study Like a Clinician, Not a Hobbyist
You already know research design. You have read clinical trials and observational studies. Use that brain.
Most education projects can be:
- Pre/post cohort (before and after curriculum launch)
- Controlled cohort (your institution vs another rotation, service, or year)
- Clustered roll-out (one site or group gets it this year, another next year)
Here is a simple design decision tree:
1. Do you have a natural control group (another rotation, service, or year)?
   - Yes → use a controlled cohort design.
   - No → use a pre/post design.
2. Collect baseline and follow-up data.
3. Define your primary outcome.
4. Plan your sample size (see the sketch below).
5. Prepare the IRB submission.
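For the sample-size step, here is a minimal sketch using Python's statsmodels. The Cohen's d of 0.5 and the 25% nonresponse inflation are illustrative assumptions, not recommendations; confirm the targets with your statistician:

```python
import math
from statsmodels.stats.power import TTestPower, TTestIndPower

# Pre/post in the same learners: n to detect a moderate effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05
n_prepost = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)

# Controlled cohort: n per group for the same between-group effect
n_cohort = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)

# Inflate recruitment for the 20-30% nonresponse Step 9 warns about
nonresponse = 0.25  # illustrative assumption
n_recruit = math.ceil(n_prepost / (1 - nonresponse))

print(f"Pre/post: {math.ceil(n_prepost)} analyzable learners")
print(f"Controlled cohort: {math.ceil(n_cohort)} per group")
print(f"Recruit about {n_recruit} to end up with enough paired data")
```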
Non-negotiables
Define a primary outcome in advance
Not “we will see what changes.” That is fishing.

Specify the exposure clearly
Hours of curriculum, number of sessions, facilitators’ training, modality (in-person, online, hybrid).

Think about attrition now
Residents are busy. Build in:
- Automated reminders
- Testing embedded in required sessions
- Back-up data collection dates
Timeline sanity check
A 12-month scholarship-ready project timeline:

| Phase | Months | Milestones |
|---|---|---|
| Planning | 1–2 | Question, team, lit review |
| Planning | 3 | IRB and protocol finalization |
| Implementation | 4–6 | Baseline data and curriculum launch |
| Implementation | 7–9 | Ongoing sessions and follow-up data |
| Analysis & dissemination | 10 | Data cleaning and analysis |
| Analysis & dissemination | 11 | Abstract and poster preparation |
| Analysis & dissemination | 12 | Manuscript draft submission |
If your residency schedule cannot support this timeline, resize the project. Better a smaller, clean study than a grand one that collapses.
Step 7: Build a Minimal but Serious Project Team
Lone wolves burn out and produce half-baked data. You want a lean, complementary team:
At minimum:
Content expert
You or a colleague who knows the clinical content (e.g., sepsis, ultrasound, handoffs).

Methods/education person
Someone who has published in medical education. Does not have to be world-famous. Needs to know what “construct validity evidence” actually means.

Statistician or data-savvy collaborator
Could be a biostatistician, epidemiologist, or VERY experienced researcher.
Nice to have:
Frontline sponsor (PD, clerkship director, simulation director)
They control protected time, access to learners, and political cover.

Learner representative (resident, student)
They sanity-check feasibility and help with buy-in.
Clarify roles early using something as simple as this:
| Role | Person | Key Responsibilities |
|---|---|---|
| PI | You | Overall design, IRB, manuscript lead |
| Co-PI (Education) | Med ed faculty | Framework, instruments, analysis help |
| Statistician | Biostats partner | Sample size, analysis plan |
| Program Director | Sponsor | Access to residents, schedule integration |
| Resident Champion | PGY-2 | Recruitment, feedback, pilot testing |
Do not wait until authorship fights start. Agree now:
- Who is first author (usually the workhorse).
- Who is senior (usually the education mentor).
- Criteria for co-authorship (not just “showed up once”).
Step 8: Handle IRB and Ethics Without Losing Months
Educational research is still human subjects research. You cannot skip ethics because “it’s just a survey.”
Most med ed projects qualify for:
- Exempt or Expedited review.
- Minimal risk categories.
To speed this up:
Label your project honestly
Use language like “educational research,” “program evaluation,” or “curriculum study.”

Clarify risks
- Risks are usually confidentiality and perceived coercion.
- Solutions: anonymous surveys, non‑faculty data collectors, de-identified performance data, clear statements that participation will not affect grades or evaluations.
Decide on consent strategy
- Often, implied consent via survey completion is enough.
- If using performance data, you may need written consent or an opt‑out.
Prepare documents tightly
- 2–3 page protocol
- Instruments (surveys, checklists, tests) attached
- Recruitment scripts
I have seen many projects delayed not by the IRB itself but by sloppy, inconsistent submissions. Treat the IRB like a peer reviewer: clean, consistent, brief.
Step 9: Collect Data Like a Professional, Not Like “Whoever Remembers”
Data sloppiness is why many good education projects never become publishable.
Design data collection systems, not last-minute chaos.
Core principles
Automate whenever possible
- Use REDCap, Qualtrics, or your institution’s survey system.
- Schedule email reminders in advance.
- Embed surveys in required rotation tasks or LMS modules.
Standardize data entry
- Use drop-downs, numeric fields, and clear coding (0/1, Yes/No).
- Avoid free text except for qualitative comments.
Train raters (if you are using checklists or rating scales)
- Short rater training session
- Use sample videos or standardized scripts
- Check inter-rater reliability on a small sample (see the sketch after this list)
Plan for missing data
- Decide in advance:
  - Will you analyze only complete cases?
  - Will you impute? (If you do not know how, do not pretend.)
- Build your sample size assuming 20–30% nonresponse.
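For the inter-rater reliability check in the rater-training item above, here is a minimal sketch using scikit-learn's Cohen's kappa; the two raters and their scores are hypothetical:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: two trained raters score the same 10
# encounters on a 3-point checklist item
# (0 = not done, 1 = partial, 2 = done well)
rater_a = [2, 1, 2, 0, 1, 2, 2, 1, 0, 2]
rater_b = [2, 1, 1, 0, 1, 2, 2, 2, 0, 2]

# Weighted kappa credits near-misses on ordinal scales;
# values above ~0.6 are conventionally read as substantial agreement
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Linearly weighted kappa: {kappa:.2f}")
```

If agreement comes back poor on the sample, retrain your raters before collecting the real data.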
Concrete tactic:
Tie data collection to something residents cannot ignore. For example, OSCE feedback only released after they complete the post-test survey. Still ethical, highly effective.
Step 10: Analyze for Decisions, Not Just for P-values
You do not need exotic statistics. You do need planned, interpretable analysis.
For most early projects:
Descriptive stats
- Means, medians, standard deviations, IQRs
- Participation rates, completion percentages
Comparative tests
- Pre/post in same learners: paired t-test or Wilcoxon signed-rank
- Two different groups: independent t-test or Mann–Whitney
- Multiple groups: ANOVA or Kruskal–Wallis (if needed)
- Categorical outcomes: chi-square or Fisher’s exact
Effect sizes
- Cohen’s d for continuous outcomes
- Risk difference or ratio for categorical outcomes
For scholarship impact, effect sizes and confidence intervals impress more than “P=0.049.”
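To make the pre/post path concrete, here is a minimal sketch with scipy. The scores are invented for illustration, and the paired effect size shown (mean change divided by the SD of the change scores, sometimes called d_z) is one of several conventions your statistician may prefer to swap:

```python
import numpy as np
from scipy import stats

# Invented pre/post knowledge scores (%) for the same 12 interns
pre  = np.array([55, 60, 48, 70, 62, 58, 65, 50, 72, 61, 57, 66])
post = np.array([68, 71, 60, 78, 70, 66, 75, 59, 80, 70, 64, 77])

# Paired t-test: pre/post in the same learners
t_stat, p_value = stats.ttest_rel(pre, post)

# Wilcoxon signed-rank: the nonparametric fallback
w_stat, w_p = stats.wilcoxon(pre, post)

# Paired effect size (d_z): mean change / SD of the change scores
diff = post - pre
d_z = diff.mean() / diff.std(ddof=1)

# 95% CI for the mean change -- the number reviewers actually want
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))

print(f"Mean change {diff.mean():.1f} points, d_z = {d_z:.2f}")
print(f"95% CI {ci_low:.1f} to {ci_high:.1f}, paired t p = {p_value:.3f}")
```

For a controlled cohort, the analogues are stats.ttest_ind or stats.mannwhitneyu plus an independent-groups Cohen's d.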
Work with your statistician early:
- Define the primary analysis (which outcome, which comparison).
- Plan secondary analyses (subgroups, exploratory items).
- Decide how to handle missing data and outliers.
Step 11: Turn the Project into Actual Scholarship Outputs
A “great project” that never leaves your hard drive does nothing for your career.
Think from the start in terms of dissemination formats:
| Dissemination format | Approximate share (%) |
|---|---|
| Conference poster | 40 |
| Oral presentation | 20 |
| Journal article | 25 |
| Curriculum repository (MedEdPORTAL) | 15 |
Minimal viable output set
From one solid project, you should be able to produce:
Local presentation
- Department grand rounds, GME retreat, teaching conference.
- Purpose: refine your story and get feedback.
Conference abstract and poster
- Target: specialty education meetings or big ones (AAMC, APDIM, SGIM, etc.).
- Deadline-driven; forces you to analyze early data.
Manuscript
- Choose a journal that actually publishes projects like yours.
- Format:
- Introduction (problem, gap, objective)
- Methods (framework, design, setting, participants, instruments, analysis)
- Results (with tables/figures)
- Discussion (interpretation, limitations, implications)
Optional: Curricular product
- If your curriculum is replicable, submit to MedEdPORTAL or specialty repositories.
- Requires full materials: instructions, slides, cases, evaluation tools.
A quick heuristic:
If your project cannot produce at least one poster and one manuscript, it is not “scholarship-ready.” Re-scope or redesign.
Step 12: Align the Project With Your Career and Promotion
You are not doing this just for fun. You are building a teaching career.
Connect your project explicitly to:
Your educational niche
Examples: handoffs, ultrasound, patient safety, feedback skills, simulation, assessment.

Your promotion criteria
Promotion committees want:
- Peer-reviewed publications
- Regional/national presentations
- Evidence of impact beyond your institution
- Leadership roles that emerged from your scholarship
Translate your project into those terms:
- Instead of: “I redesigned intern orientation.”
- Say: “I led a multi-year, outcomes-based, simulation-enhanced orientation program, with peer-reviewed publications, national presentations, and adoption by two other residency programs.”
If you do this 2–3 times in a coherent area, you stop being “that person who likes teaching” and become an education scholar in X domain. Completely different career trajectory.
Example: Putting It All Together (Concrete Scenario)
Let me walk you through a realistic example with all steps visible.
Idea:
“Residents give terrible feedback to students. We should teach them to be better.”
Scholarship-ready version:
Among internal medicine residents at our academic medical center, does a three-session, practice-based feedback skills curriculum, compared with no formal feedback training, improve direct observation ratings of feedback quality and increase medical student satisfaction with feedback over six months?
Project type: Faculty/resident development project with behavioral and learner outcomes.
Framework:
- Use Feedback models plus Deliberate Practice: repeated practice, immediate feedback, increasing complexity.
Outcomes:
- Level 1: Resident satisfaction with the curriculum.
- Level 2: Knowledge test on feedback principles.
- Level 3:
- Direct observation of real feedback encounters using a validated rubric.
- Medical student end-of-rotation ratings of feedback quality.
Design:
- Controlled cohort:
- PGY-1 and PGY-2 residents get the curriculum this year.
- PGY-3 residents serve as a comparison group (usual practice).
Team:
- You (hospitalist, feedback interest) – PI
- Med ed faculty with feedback research experience – co-PI
- Biostatistician – analysis
- Clerkship director – sponsor
- Two resident champions – recruitment and pilot testers
IRB:
- Educational research, minimal risk, de-identified rating data.
- Written consent from residents; students’ evaluations used in aggregate.
Data collection:
- REDCap pre/post knowledge tests.
- Standardized observation forms used by trained faculty observers during ward rotations.
- Student evaluations pulled and linked by resident ID, de-identified before analysis.
Analysis:
- Pre/post within intervention group (PGY-1/2).
- Between-groups comparison (intervention vs PGY-3).
- Effect sizes for change in feedback behavior scores and student ratings.
Outputs:
- Local: GME teaching conference talk.
- National: Poster at Clerkship Directors in Internal Medicine (CDIM).
- Manuscript: Journal of Graduate Medical Education.
- Potential toolkit: MedEdPORTAL submission with slides, cases, rating forms.
This is how you convert “we should teach feedback better” into a scholarship-ready project that promotion committees respect.
FAQs
1. How small is “too small” for a medical education project to be publishable?
Small samples are not fatal; sloppy samples are. A single residency program with 30–60 residents can absolutely produce publishable work, especially for pilot or feasibility studies. The key is to:
- Define your primary outcome and analysis before collecting data.
- Achieve high completion rates for that primary outcome (≥70% is a reasonable target).
- Report effect sizes and confidence intervals honestly instead of over-selling P-values.
Journals understand that med ed often has constrained samples. They do not accept unclear questions, shifting outcomes, or missing key methods details.
2. Do I really need a theoretical framework, or can I just describe what I did?
If you are aiming for scholarship that counts for promotion and gets into decent journals, yes, you need a framework. Not a dissertation, not a wall of jargon—one or two clear frameworks that explain:
- Why you designed the intervention this way.
- Why these outcomes are the right outcomes to measure.
A paper that reads “we tried this, here is what happened” without any conceptual basis is much easier for reviewers to dismiss as a local quality improvement project rather than generalizable scholarship.
3. What if my institution has almost no med ed research support?
Then you need to be more intentional, not less. Practical moves:
- Look for an education-minded ally at your medical school or another department. Many universities have centralized medical education offices or academies of medical educators.
- Join a professional society’s education arm (e.g., SGIM, APDIM, COMSEP, SAEM Education). Many have mentoring programs and methods workshops.
- Start with a smaller, tightly designed project with simple outcomes you can collect yourself. Use free or low-cost tools like REDCap (often provided by your institution), Google Forms, or data exported from your LMS.
You can also co-author with colleagues at other institutions who have methods skills if you bring the clinical problem, access to learners, and the implementation energy.
4. How do I know which journal to target for my project?
Reverse-engineer it. Spend 30 minutes:
- Listing 3–5 potential journals (e.g., Academic Medicine, Medical Education, JGME, specialty-specific education journals).
- Searching each for articles in the last 2–3 years that look similar to your project type (same learner level, similar intervention scope).
- Noting word limits, formatting, and how rigorous their methods tend to be.
Then, pick the lowest “altitude” journal where similar work routinely appears. You can always aim higher for later projects. For an early-career med ed scholar, getting something solid into a reputable but not hyper-elite journal is far more valuable than sitting three years on a “perfect” manuscript that keeps getting rejected.
Open a document right now and write one sentence: your scholarship-ready question with a clear population, intervention, comparator, outcome, and time frame. That single sentence is the backbone. Once it is sharp, every other step in this guide becomes 10 times easier.