
Month-by-Month Plan to Build an AI-in-Medicine Portfolio Before Residency

January 8, 2026
17 minute read

[Image: Medical student working on a laptop with clinical data visualizations on screen]

It is July 1st. You just started your MS3 year (or PGY1 prelim) and every other person you meet claims they are “doing something with AI.” You have nothing to point to. No GitHub. No projects. No poster with the words “machine learning” in the title.

You are not late. But if you treat this casually, you will be invisible when residency program directors start asking, “So tell me about your interest in AI and medicine.”

Here is a concrete, month-by-month plan to build a real AI-in-medicine portfolio over the 12 months before you apply for residency. Real outputs, not vibes. For each month: what you should be doing, what you should have finished, and what to ruthlessly ignore.


Big Picture: What You Are Building Over 12 Months

By the end of 12 months, you want at least:

  • 1–2 tangible AI‑in‑medicine projects with:
    • clear clinical question
    • documented methods
    • ethical/clinical reflection
  • 1 public-facing artifact:
    • small web app, dashboard, or simple decision-support prototype or
    • well‑written technical blog / white paper
  • 1 CV‑worthy line:
    • poster, talk, workshop, or accepted manuscript
  • A tight story:
    • “I am not just AI‑curious. Here is exactly what I built, what went wrong, and what I learned about ethics, bias, and patient safety.”

You do not need to become a data scientist. You need to become a clinically literate collaborator who has actually shipped something.


12-Month AI-in-Medicine Portfolio Timeline

  • Foundation (Month 1): Orientation and skills audit
  • Foundation (Month 2): Technical basics and ethics
  • Practice (Month 3): Mini-project 1
  • Practice (Months 4–5): Core portfolio project build
  • Validation (Month 6): Evaluation, ethics, documentation
  • Validation (Months 7–8): Second project or extension
  • Translation (Month 9): Presentation and submission
  • Translation (Month 10): Clinical integration and shadow testing
  • Polish (Month 11): Public profile and narrative
  • Polish (Month 12): Final refinement before applications

Month 1 – Orientation and Skills Audit

At this point you should:

  • Stop randomly watching AI hype videos.
  • Decide exactly what niche you are playing in.

Week 1–2: Reality Check and Direction

  1. Pick a domain where you already speak the language:
    • Example lanes:
      • Clinical decision support for your favorite specialty
      • Workflow tools (triage, documentation, task management)
      • Patient education (chatbots, tailored instructions)
      • Quality and safety (predicting falls, readmissions)
  2. Do a brutally honest skills audit:
    • Coding:
      • None / basic Python / comfortable scripting?
    • Data:
      • Ever touched a CSV? Used pandas? SQL?
    • Stats:
      • Understand sensitivity/specificity, AUC, calibration?
    • Ethics:
      • Ever read an AI ethics framework?

Write this down. One page. No fluff.

Week 3–4: Minimal Exposure to the Field

Your goal this month is orientation, not mastery.

  • Read 3–5 high-impact, AI‑in‑medicine papers in your domain. Examples:
    • Sepsis prediction (e.g., Epic Sepsis Model critiques)
    • Radiology image classification
    • Large language models for note drafting / summarization
  • For each paper, answer:
    • What clinical problem?
    • What data?
    • What model?
    • How did they evaluate?
    • What went wrong or could go wrong ethically?

Deliverables by end of Month 1:

  • 1-page skills audit
  • 3–5 structured article notes
  • Short list of 2–3 concrete project ideas, each in one sentence

Month 2 – Technical and Ethical Baseline

At this point you should:

  • Get just enough technical skill to be dangerous, not paralyzed.
  • Build a serious ethics spine so your portfolio is not naive.

Week 1–2: Technical Bootstrapping

If your Python is weak or nonexistent, you do this first.

Recommended path (you can compress this into 2–3 weeks if focused):

  • Learn:
    • Basic Python syntax
    • Jupyter notebooks
    • pandas for tabular data
    • scikit‑learn basics (train/test split, logistic regression, random forest)
  • Do 1–2 tiny exercises using health-related open data (a minimal sketch follows this list):
    • Example: Predict diabetes from the Pima Indians Diabetes Dataset
    • Focus on:
      • defining target variable
      • handling missing data
      • basic evaluation (accuracy, ROC AUC)
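
To make the exercise above concrete, here is a minimal sketch of what that first notebook might look like. It assumes you have downloaded the Pima dataset as a local diabetes.csv with an Outcome column; the file name and column names depend on your copy, so treat them as placeholders.

```python
# Minimal Month 2 exercise sketch: baseline diabetes model on the Pima dataset.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

df = pd.read_csv("diabetes.csv")  # placeholder path to your downloaded copy

# In this dataset, impossible zeros (e.g., glucose = 0) really mean "missing".
for col in ["Glucose", "BloodPressure", "BMI"]:
    df[col] = df[col].replace(0, np.nan)
    df[col] = df[col].fillna(df[col].median())

X = df.drop(columns="Outcome")   # features
y = df["Outcome"]                # target variable: diabetes yes/no

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Being able to explain why you report ROC AUC alongside accuracy matters more than the exact numbers you get.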

If you already code comfortably, tighten your stats/ML understanding:

  • Regularization, overfitting, cross‑validation
  • Calibration vs discrimination
  • Imbalanced datasets

Week 3–4: Ethics and Governance Deep Dive

You will be asked about ethics. Program directors are bored of tech‑only stories.

Read and summarize:

  • WHO: “Ethics and Governance of Artificial Intelligence for Health” (sections on equity and accountability)
  • AMA or national body policy on augmented intelligence in medicine
  • One case study of AI failure or bias:
    • Example: The algorithm that underestimated care needs of Black patients due to cost‑based proxies.

For each, write a paragraph on:

  • What went wrong / what is the risk?
  • Where in the workflow the failure occurred (data, model, deployment, governance)?
  • How your projects will explicitly address this.

Deliverables by end of Month 2:

  • One Jupyter notebook where you train and evaluate a simple model on health data
  • 2–3 pages of structured notes on ethics frameworks and a concrete failure story

Month 3 – Mini-Project 1: From Idea to Tiny Prototype

Now you build something small and finish it. No “still working on it” excuses.

At this point you should:

  • Pick the most feasible of your earlier project ideas.
  • Aim to complete a scoped mini‑project in 4 weeks.

Week 1: Clear Problem Statement

Define:

  • Population
  • Problem
  • Outcome
  • Setting

Example:

“Predict 30-day readmission risk in heart failure patients using basic EHR features to support discharge planning—retrospective model only, no deployment.”

Lock this in. Do not pivot mid‑month.

Week 2–3: Data + Baseline Model

You likely do not have direct EHR access. That is fine. Use:

  • Public/open datasets (e.g., MIMIC if you can get credentialed, or similar tabular sets)
  • Synthetic or de‑identified institutional datasets, if available and IRB‑approved

You must (a minimal sketch follows this list):

  • Clean data: handle missing values, encode categorical variables
  • Create a baseline model:
    • Logistic regression or simple tree‑based model
  • Evaluate with:
    • ROC AUC
    • Sensitivity/specificity at clinically relevant thresholds
    • Confusion matrix
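
A minimal sketch of that workflow, assuming a hypothetical de-identified extract named hf_cohort.csv with a binary readmit_30d outcome and a mix of numeric and categorical columns; every name and the 0.3 cutoff below are illustrative, not recommendations.

```python
# Minimal Month 3 sketch: data prep, baseline logistic regression, threshold-based evaluation.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("hf_cohort.csv")        # hypothetical de-identified extract
y = df["readmit_30d"]                    # 1 = readmitted within 30 days
X = df.drop(columns="readmit_30d")

numeric = X.select_dtypes(include="number").columns
categorical = X.select_dtypes(exclude="number").columns

prep = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])
model = Pipeline([("prep", prep), ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)
model.fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, risk))

threshold = 0.3                          # illustrative; choose a clinically defensible cutoff
pred = (risk >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
print(f"At threshold {threshold}: sensitivity {tp / (tp + fn):.2f}, specificity {tn / (tn + fp):.2f}")
```

Report sensitivity and specificity at a threshold you can defend clinically, not just the one that maximizes a metric.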

Week 4: Ethics Lens + Documentation

Write 2 short sections:

  1. Risk and bias analysis

    • Who might be harmed if this model is wrong?
    • Are there subgroups with worse performance (age, sex, race if available and appropriate)?
    • Is the outcome a good proxy for what we care about?
  2. Deployment sanity check

    • Where in the workflow could this live (triage, discharge, follow‑up)?
    • What safety checks would be needed (human override, conservative thresholds)?

Deliverables by end of Month 3:

  • One well‑commented notebook:
    • data prep, model training, evaluation
  • A 1–2 page project summary:
    • question, data, methods, results, limitations, ethics

This is your first portfolio piece. Imperfect but finished.


Months 4–5 – Core Portfolio Project: Build Something You Can Show

Now you scale up. This is the one you will talk about on interviews.

At this point you should:

  • Commit to one larger project that is clinically interesting and feasible in 8 weeks.

Some good project archetypes:

  • Triage / risk scores:
    • e.g., simple ED triage risk scoring tool using structured data
  • Documentation support:
    • extract key problems / meds from synthetic notes using NLP
  • Patient education:
    • prototype a system that generates tailored discharge instructions, with strict guardrails
  • Quality/safety:
    • early flagging of patients at risk for missing follow‑up

[Image: Medical resident reviewing an AI-generated risk dashboard]

Month 4 – Design and First Version

Week 1: Spec the project like an engineer, not a dreamer

Write a 1–2 page mini‑proposal:

  • Clinical question and stakeholders (ED residents, ward nurses, clinic staff)
  • Data you will use (be specific: tables, variables, time frames)
  • Output format (risk score, text summary, classification flag)
  • Constraints:
    • No PHI leaving institutional servers
    • No black‑box deployment without supervision
    • Time: 8 weeks

Get quick feedback from:

  • One clinically-minded mentor
  • One tech/AI mentor (faculty, data scientist, or an advanced resident)

Week 2–3: Build v0.1 model or pipeline

Depending on your project type:

  • Risk model: refine features, try 2–3 algorithms, cross-validate (see the sketch after this list)
  • NLP/documentation:
    • Start with rule‑based / keyword + simple ML (do not jump straight to giant LLMs unless you have infra and guidance)
  • Patient education:
    • Start with templates + rule‑based personalization, then add AI assistance if safe

Focus on:

  • Reproducibility (clear code, versioned data)
  • Simple, interpretable baseline before you bolt on complexity
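
The cross-validated comparison mentioned above can stay very small. The sketch below assumes X and y are your already-encoded, fully numeric feature matrix and outcome from the data-prep step; the two candidate models are just a reasonable starting pair.

```python
# Compare a couple of candidate algorithms with stratified cross-validation.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC {scores.mean():.3f} (std {scores.std():.3f})")
```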

Week 4: Quick and dirty user-facing layer

You need something visual:

  • Simple web dashboard:
    • Streamlit, Dash, or even a structured PDF report
  • For NLP:
    • Side‑by‑side original vs AI‑augmented output

You are not building production software. You are building a demo that makes your work understandable in 3 minutes.
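
As one example of that demo layer, here is a minimal Streamlit sketch. It assumes you saved your trained scikit-learn pipeline to model.joblib and that it expects the three example features shown; the file name and feature names are hypothetical, so adapt them to your own project.

```python
# demo_app.py: minimal Streamlit sketch of a risk-score demo (not production software).
import joblib
import pandas as pd
import streamlit as st

st.title("30-Day Readmission Risk (research demo only)")
st.caption("Retrospective prototype. Not validated for clinical use.")

model = joblib.load("model.joblib")  # hypothetical saved scikit-learn pipeline

age = st.number_input("Age", min_value=18, max_value=110, value=70)
ef = st.number_input("Ejection fraction (%)", min_value=5, max_value=80, value=40)
prior_adm = st.number_input("Admissions in past year", min_value=0, max_value=20, value=1)

if st.button("Estimate risk"):
    patient = pd.DataFrame([{"age": age, "ef": ef, "prior_admissions": prior_adm}])
    risk = model.predict_proba(patient)[0, 1]
    st.metric("Predicted 30-day readmission risk", f"{risk:.0%}")
    st.write("Interpret alongside clinical judgment; this demo has no safety checks.")
```

Run it with streamlit run demo_app.py. The disclaimers are not decoration; they are part of the demo.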

Month 5 – Iterate, Evaluate, Stress-Test

Week 1–2: Quantitative evaluation

Do this properly (a minimal sketch follows this list):

  • Train/validation/test split with no leakage
  • Report:
    • ROC AUC, PR AUC if data are imbalanced
    • Calibration plot (is predicted risk close to observed?)
  • If text/NLP:
    • Use simple metrics like precision/recall for labeled concepts
    • But also do qualitative review with clinicians
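
A minimal sketch of that evaluation, assuming y_test and risk (predicted probabilities on the held-out test set) already exist from your modeling notebook:

```python
# Minimal evaluation sketch: discrimination (ROC AUC, PR AUC) plus a calibration plot.
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.metrics import average_precision_score, roc_auc_score

print("ROC AUC:", roc_auc_score(y_test, risk))
print("PR AUC:", average_precision_score(y_test, risk))  # more informative when outcomes are rare

# Calibration: do predicted risks match observed event rates?
obs_rate, mean_pred = calibration_curve(y_test, risk, n_bins=10)
plt.plot(mean_pred, obs_rate, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfect calibration")
plt.xlabel("Mean predicted risk (per bin)")
plt.ylabel("Observed event rate (per bin)")
plt.legend()
plt.show()
```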

Week 3: Ethical and workflow review

Sit down (even informally) with 1–2 real clinicians:

  • Walk them through your prototype
  • Ask:
    • “Where would this fit in your day?”
    • “What would make you ignore this?”
    • “What could go wrong if this is wrong 10% of the time?”

Document their feedback, and explicitly address it in your limitations and risk write-up.

Week 4: Tighten documentation

Prepare:

  • 4–6 slide mini‑deck:
    • Problem, data, model, results, limitations, ethics, demo screenshots
  • Clean, final notebook or repo
  • One‑page “risk and governance” note:
    • Failure modes, monitoring needs, and why this should not be used clinically yet

Deliverables by end of Month 5:

  • A clearly defined flagship project with:
    • Working prototype
    • Evaluation results
    • Ethics/workflow commentary
    • Slides and code that you can send to a mentor without embarrassment

Month 6 – Validation, IRB, and Academic Output Planning

At this point you should:

  • Turn “cool project” into “academic output.”

Week 1–2: Decide Your Output Path

Pick at least one:

  • Abstract/poster at:
    • Specialty-specific meetings (e.g., SAEM, SHM, RSNA)
    • Clinical informatics or quality conferences
  • Short methods paper or case study
  • Educational workshop (teach others about your project and its ethical challenges)

Common Output Options for AI-in-Medicine Projects

  • Poster: 2–4 weeks to prepare; low difficulty; moderate CV impact
  • Oral abstract: 4–6 weeks to prepare; medium difficulty; high CV impact
  • Short paper: 2–6 months to prepare; high difficulty; high CV impact
  • Workshop: 4–8 weeks to prepare; medium difficulty; moderate CV impact

Week 3–4: IRB and Regulatory Check (If Needed)

If you used real patient data from your institution, clarify:

  • Is this QI, research, or purely educational?
  • Do you need IRB approval for publication?
  • Any data use statements or institutional review language required?

Deliverables by end of Month 6:

  • Clear plan for at least one scientific output
  • Draft abstract or outline
  • IRB status clarified (approved, exempt, or not needed, with justification)

Months 7–8 – Second Project or Strategic Extension

At this point you should:

  • Either build a complementary second project
  • Or deepen your main project in a new direction (fairness, interpretability, or deployment simulation).

Option A: Second, Smaller Project

Pick something that:

  • Uses a different modality (if first was tabular, now text; if text, now images or workflow)
  • Emphasizes ethics and safety explicitly in the design

Examples:

  • A simple tool that quantifies potential bias (e.g., performance by race/sex/age on your original model)
  • A documentation assistant that flags unsafe discharge instructions

Scope: 4–6 weeks to MVP, 2 weeks to evaluate.

Option B: Extension of Flagship Project

Smart choice if time is tight.

Possible extensions:

  • Add fairness metrics (a subgroup-performance sketch follows this list):
    • performance across subgroups
    • calibration drift
  • Add interpretability:
    • SHAP plots, feature importance
    • natural language explanations (with strong warnings on hallucinations)
  • Run a retrospective “silent trial”:
    • Compare model predictions against what actually happened, without influencing care.
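
The subgroup-performance check can be a few lines. The sketch below assumes test_df holds your held-out rows with a subgroup column (here sex), the true outcome, and the model's predicted risk already attached; all column names are illustrative.

```python
# Subgroup check: discrimination and average risk by subgroup (here, sex).
from sklearn.metrics import roc_auc_score

for group, sub in test_df.groupby("sex"):
    # Note: a tiny subgroup with only one outcome class will make roc_auc_score fail;
    # report n and flag such groups rather than silently skipping them.
    auc = roc_auc_score(sub["readmit_30d"], sub["risk"])
    print(f"{group}: n={len(sub)}, AUC={auc:.3f}, "
          f"observed rate={sub['readmit_30d'].mean():.1%}, "
          f"mean predicted risk={sub['risk'].mean():.1%}")
```

Gaps between observed and mean predicted risk across groups are often more revealing than AUC differences alone.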

Time Allocation Across 12-Month AI-in-Medicine Plan (approximate % of effort)

  • Months 1–2: learning 60%, building 20%, publishing/presenting 0%, polish & narrative 20%
  • Months 3–5: learning 20%, building 60%, publishing/presenting 10%, polish & narrative 10%
  • Months 6–8: learning 20%, building 40%, publishing/presenting 30%, polish & narrative 10%
  • Months 9–12: learning 10%, building 30%, publishing/presenting 40%, polish & narrative 20%

Deliverables by end of Month 8:

  • Either:
    • Completed second mini-project
    • Or substantially upgraded flagship project with fairness/interpretability or silent trial results

Month 9 – Submit, Present, and Get Feedback

At this point you should:

  • Stop tinkering long enough to actually show your work to the world.

Week 1–2: Finalize Abstracts and Submissions

  • Polish:
    • Title that is specific, not buzzword salad
    • Clear methods and metrics
    • One explicit ethics or safety line in the conclusion

Submit to:

  • At least one conference or meeting
  • Optionally: a journal or preprint (if you drafted the short methods paper) or an educational workshop at your institution

Week 3–4: Practice Your Story

You need a 2–3 minute verbal summary ready for:

  • PDs
  • Faculty
  • Random person in the elevator

Structure:

  1. Clinical problem
  2. What you built
  3. How you evaluated it
  4. What worried you ethically, and how you addressed it
  5. What you would need before real-world deployment

Deliverables by end of Month 9:

  • At least one submission sent
  • A rehearsed, concise project “pitch”

Month 10 – Clinical Integration and Shadow Testing

Now you prove you understand that AI is not just code. It is workflow and responsibility.

At this point you should:

  • Map how your tool would actually live in a clinic or hospital.

Week 1–2: Workflow Mapping

Build a step-by-step map:

  • Where in the clinical day:
    • Admission, triage, discharge, rounds, clinic intake, etc.
  • Who touches it:
    • Attending, resident, nurse, MA, patient
  • What they see:
    • Risk score, summary text, recommendation, alert

AI Tool Integration Into Clinical Workflow (example flow)

  1. Patient arrives
  2. Data entered in EHR
  3. AI model computes risk
  4. If the risk threshold is met, an alert goes to the clinician; otherwise no alert is shown
  5. Clinician reviews and decides
  6. Override or acceptance is documented

Week 3–4: Shadow Testing and Counterfactuals

You probably cannot deploy. But you can simulate (a minimal sketch follows this list):

  • Take a retrospective cohort (already discharged, already had outcomes)
  • Run your model or tool on their data
  • Compare:
    • Would your predictions have changed anything?
    • Any obviously dangerous recommendations?
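
A minimal sketch of that simulation, assuming you have exported the retrospective cohort with the model's predictions already attached; the file name, column names, and 0.3 cutoff are illustrative.

```python
# Minimal "silent trial" sketch: compare retrospective predictions with actual outcomes.
import pandas as pd

cohort = pd.read_csv("retro_cohort_with_predictions.csv")  # hypothetical export
threshold = 0.3                                            # illustrative cutoff
cohort["flagged"] = cohort["risk"] >= threshold

# Confusion-style counts for the shadow period
print(pd.crosstab(cohort["flagged"], cohort["readmit_30d"],
                  rownames=["model flagged"], colnames=["actually readmitted"]))

# Cases to review by hand for the shadow-deployment memo
missed = cohort[(~cohort["flagged"]) & (cohort["readmit_30d"] == 1)]      # false reassurance
overflagged = cohort[(cohort["flagged"]) & (cohort["readmit_30d"] == 0)]  # alert fatigue risk
print("Missed readmissions to review:", len(missed))
print("Unnecessary flags to review:", len(overflagged))
missed.head(10).to_csv("cases_to_review_missed.csv", index=False)
```

The hand review of the missed and over-flagged cases is what turns this into a useful memo.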

Document:

  • Cases where the model was helpful
  • Cases where the model was very wrong
  • What safety net you would want clinically

Deliverables by end of Month 10:

  • A workflow diagram or written map
  • A short “shadow deployment” memo with concrete examples (good and bad)

Month 11 – Public Profile and Portfolio Packaging

Now you assemble the pieces so program directors, mentors, and collaborators can actually see what you did.

At this point you should:

  • Make your work findable, understandable, and clearly ethical.

Week 1–2: Portfolio Assembly

Create:

  • A simple portfolio page (personal site, Notion, or even a well‑structured GitHub README) with:
    • Project titles and 2–3 sentence summaries
    • Links to code (de‑identified, safe) or synthetic examples
    • Slides/posters, if allowed to share
    • A brief “AI in medicine philosophy” section: 2–3 paragraphs on safety, bias, and clinician responsibility

Week 3–4: Refine Your Ethical Narrative

This is the personal-development and medical-ethics part of the plan, and you need to show growth here.

Write a 1–2 page reflection:

  • What you believed about AI and medicine 12 months ago vs now
  • The most alarming ethical risk you ran into in your own work
  • How that changed your view of:
    • clinical judgment
    • consent and transparency
    • equity and access

Deliverables by end of Month 11:

  • Public‑facing portfolio page
  • Written ethical reflection you can pull from for personal statements and interviews

Month 12 – Final Polish Before Residency Applications

This is clean‑up and alignment with your target specialty.

At this point you should:

  • Make sure your AI story supports, not distracts from, your specialty choice.

Week 1–2: Specialty Alignment

For your chosen specialty (IM, EM, radiology, surgery, etc.):

  • Read 2–3 recent AI‑in‑that‑specialty review papers
  • Identify:
    • Where your projects fit
    • Where they are naive or not yet clinically realistic

Tune your narrative:

  • If you are applying to IM:
    • Emphasize risk stratification, QI, population health
  • If EM:
    • Triage, throughput, early risk prediction
  • If radiology:
    • Workflow integration, error reduction, not “replace the radiologist”

Week 3–4: CV, Personal Statement, and Talking Points

Integrate your AI work:

  • CV:
    • Put projects under Research, QI, or Technology Development
    • Use specific verbs: “developed,” “evaluated,” “implemented in shadow mode,” “quantified bias”
  • Personal statement:
    • 1–2 paragraphs max on AI
    • Always tie back to patient care and ethics, not shiny tech
  • Interviews:
    • Prepare 2–3 stories:
      • A technical challenge you solved
      • An ethical dilemma you faced and how you approached it
      • A time clinicians pushed back and why they were right

Deliverables by end of Month 12:

  • CV with AI‑in‑medicine section clearly visible
  • Personal statement draft that uses your portfolio as evidence, not decoration
  • Confidence that you can answer, without flinching:
    • “So, what do you think the limits of AI are in your specialty?”

Two Very Common Failure Modes (Avoid These)

  1. The GitHub graveyard.
    Ten half‑done notebooks, nothing finished, no coherent story. Fix: Prioritize completion and documentation over endless model tweaking.

  2. The ethics paragraph you clearly wrote at 2 a.m.
    Tacked on at the end, generic “we must be careful of bias” nonsense. Fix: Build ethics into the design, evaluation, and workflow discussion from Month 2 onward.


FAQ

1. Do I really need to code to build an AI‑in‑medicine portfolio that residency programs respect?
You should have at least one project where you personally touched code or structured analysis. Low‑code tools and prebuilt platforms are fine as components, but if every part of your story depends on “someone else handled the technical side,” it is weaker. That said, you do not need to be a full‑fledged ML engineer. A couple of solid, reproducible Python notebooks with clear evaluation and ethical reflection are enough for residency.

2. How do I talk about AI in interviews without sounding like I want to replace doctors?
Anchor everything in patient safety, equity, and augmentation. Use phrases like “decision support,” “workflow improvement,” and “error reduction,” not “automation” and “replacement.” Emphasize the ways your projects revealed the limits of AI—cases where clinician judgment, context, or values were indispensable. Programs are looking for residents who understand both the power and the boundaries of these tools.


Key points, briefly:

  1. Structure your year: 2 months learning, 3–4 building, 3–4 validating and sharing, last 2 packaging and aligning with your specialty.
  2. Finish real projects with evaluation and ethics built in; half‑done experiments do not count.
  3. Turn your work into outputs—posters, code, and a clear narrative—that show you are a clinically grounded, ethically serious future physician who happens to understand AI.