
The worst way to bring AI into medicine is to “just turn it on and see what happens.”
If you treat AI decision support like a fancy Google search during rounds, you will create noise, erode trust, and eventually get burned. If you treat it like an intern whose work you rigorously check, log, and learn from, you can get very real benefits without compromising ethics or patient safety.
This guide is about the second option.
1. Ground Rules: What AI On Rounds Is (And Is Not)
Let me be direct: AI on rounds is a tool, not a clinician. It should behave like a well-read but inexperienced sub-intern: fast with information, terrible with judgment unless you supervise it aggressively.
Here is the mindset you need before you even open an AI app at the bedside:
- You are the final decision-maker.
- The AI is a consult, not an order.
- Everything must be traceable, explainable, and overridable.
- If you would be embarrassed to say “I did this because the AI told me,” you are using it wrong.
Define Your Use Cases Up Front
Do not “try AI for everything.” That is how you get sloppy. Start with narrow, low-risk, high-friction tasks where AI can shine without controlling care.
Good starting uses during rounds:
- Turning long chart reviews into structured, time-limited summaries
- Drafting problem lists and assessment/plan templates
- Generating differential diagnoses you might have missed (to review, not to accept blindly)
- Creating patient-friendly explanations of diagnoses, procedures, or treatments
- Organizing follow-up questions for consultants (ID, nephrology, etc.)
Bad starting uses:
- Choosing medications or doses without your own independent check
- Overruling a human consultant
- Making disposition decisions based primarily on AI
- Using it as your only source for rare disease management
If you do not explicitly know what problem you are solving with AI before you open it, do not use it.
2. A Safe Workflow: Step-by-Step Integration Into Daily Rounds
Let us build a concrete, repeatable workflow that you can actually use on a medicine ward tomorrow. This is the part most hospitals skip. They turn on a tool and hope habits form on their own. They do not; you must script them.
Step 1: Decide When AI Is Allowed In The Workflow
Pick specific points in your morning:
- Pre-rounding – chart review and note drafting
- Team rounds – refining differential, plans, and communication
- After rounds – cleaning up notes, discharge instructions, letters
Avoid “live” bedside use until your team is comfortable and your workflow is stable. Bedside is where privacy, consent, and trust issues explode.
| Workflow point | AI-assisted activity |
|---|---|
| Pre-rounding | Summaries and problem list |
| Team rounds | Refine differential and plans |
| After rounds | Notes and patient education |
Step 2: Pre-Rounding – Use AI as a Chart Distiller, Not a Brain
Typical pre-rounding: you open Epic/Cerner, scroll a decade of notes, and drown. This is where AI can quietly save you time without touching orders.
Protocol: Pre-Round AI Use (10–15 minutes per complex patient)
- De-identify before sharing (if using any external tool):
- Strip name, MRN, DOB, exact dates, addresses
- Use relative timing (“hospital day 3”, “two weeks ago”) instead of exact dates
- No free-text copying of entire notes from EMR into external tools
- Feed only the essential facts:
- Age range, sex
- Key comorbidities (just the list)
- Reason for admission and major events
- Current vitals trends (qualitative)
- Current meds (generic names, rough doses)
- Ask the model for:
- A one-paragraph summary of current course
- A bulleted problem list prioritized by acuity
- For each active problem: 2–3 standard management considerations with guideline-level thinking
- You then:
- Cross-check each bullet with the EMR
- Delete anything that is wrong or inapplicable
- Add your own assessment in your own words
What you do not do: screenshot and paste the AI output directly into your progress note as if it were chart review. That is how you end up documenting hallucinations.
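If your team wants to standardize what actually gets sent, a small prompt builder keeps everyone inside these guardrails. The sketch below is illustrative only and assumes you are pasting into an institution-approved tool; the field names, wording, and example case are assumptions of mine, not any vendor's API.

```python
# Illustrative sketch: assemble a pre-round prompt from already de-identified
# facts. Field names, wording, and the example case are assumptions.

def build_preround_prompt(case: dict) -> str:
    """Turn a small dict of de-identified facts into a structured prompt."""
    facts = "\n".join(
        f"- {label}: {value}"
        for label, value in case.items()
        if value  # skip anything you chose not to share
    )
    asks = (
        "1. A one-paragraph summary of the current hospital course.\n"
        "2. A bulleted problem list prioritized by acuity.\n"
        "3. For each active problem, 2-3 standard, guideline-level "
        "management considerations."
    )
    return (
        "You are assisting with pre-rounding on a de-identified case.\n"
        f"Facts:\n{facts}\n\n"
        f"Please provide:\n{asks}\n"
        "Flag anything you are uncertain about rather than guessing."
    )

example_case = {
    "Age range / sex": "70s, male",
    "Key comorbidities": "HFrEF, CKD stage 3, type 2 diabetes",
    "Reason for admission": "dyspnea, presumed CHF exacerbation",
    "Vitals trend (qualitative)": "improving, weaning oxygen",
    "Current meds (generic, rough doses)": "furosemide IV, metoprolol, insulin",
}

print(build_preround_prompt(example_case))
```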
Step 3: During Team Rounds – Use AI as a Second Brain, Not a Second Attending
On rounds, speed and clarity matter. Your AI should either be helping you think or staying out of the way.
When it is allowed:
- You have a specific question, like:
- “What are less common causes of nephrotic-range proteinuria in adults besides diabetes, minimal change, and FSGS?”
- “What is a structured way to present this multi-problem ICU patient?”
- You have already heard the intern’s assessment and plan
- You are at a workstation, not discussing patient identifiers in public areas
Live rounds protocol:
- Resident or attending states their differential and plan first.
- If there is diagnostic uncertainty, the team frames a precise clinical question.
- A single designated person (usually the resident) queries the AI with:
- De-identified case summary (high level only)
- Clear, narrow question
- Team listens to the AI’s output as:
- Possible additional items for the differential
- Possible tests or red flags they may have missed
- Any AI-suggested change must:
- Be explicitly stated as “AI suggested X; we agree/disagree because Y”
- Be cross-checked with at least one other source (guidelines, UpToDate, attending experience)
The AI is not allowed to break ties between residents and attendings. Ever.
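One way to keep the single designated querier disciplined is a fixed template that forces a high-level summary, the team's existing differential, and one narrow question. A minimal sketch, with the wording entirely my own assumption:

```python
# Illustrative sketch: a fixed query template for the designated querier on
# rounds. Structure mirrors the protocol above; wording is an assumption.

ROUNDS_QUERY_TEMPLATE = """\
De-identified summary (high level only):
{summary}

Our current differential: {differential}

Narrow question: {question}

List only diagnoses, tests, or red flags we have NOT already mentioned,
and say when you are uncertain."""

print(
    ROUNDS_QUERY_TEMPLATE.format(
        summary="Adult with nephrotic-range proteinuria, normal glucose, no diabetes",
        differential="minimal change disease, FSGS, membranous nephropathy",
        question="What less common causes should we still consider?",
    )
)
```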
Step 4: After Rounds – Documentation and Patient Education
Here AI can be very useful if you keep the guardrails.
For documentation:
- Use AI to:
- Turn a set of bulleted thoughts into a coherent paragraph
- Generate a structured plan template (e.g., “CHF exacerbation – assessment and plan outline”)
- Do not:
- Paste raw AI text into the chart without editing
- Document anything you did not verify from the EMR or patient
- Let AI write your reasoning. That is the part that makes you a clinician.
For patient education:
This is a high-yield but underused application.
- You give the AI:
- The diagnosis (e.g., “new diagnosis of heart failure with reduced EF”)
- The approximate age range
- The key teaching goals (e.g., “salt restriction, daily weights, why we use these meds”)
- Ask it for:
- A 5–8 sentence explainer tailored to an 8th-grade reading level
- You then:
- Read it yourself first
- Edit any phrasing that is off
- Discuss it verbally with the patient
- Optionally provide a printed copy or portal message
You are still the teacher. The AI is your copy editor.
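A reusable prompt makes it easier to hit the same teaching goals and reading level every time. The helper below is a sketch; the parameter names and phrasing are assumptions for illustration, not a requirement of any particular tool.

```python
# Illustrative sketch: a patient-education prompt with an explicit reading
# level and teaching goals. Parameter names and wording are assumptions.

def education_prompt(diagnosis: str, age_range: str, goals: list[str]) -> str:
    goal_lines = "\n".join(f"- {g}" for g in goals)
    return (
        f"Write a 5-8 sentence explanation of {diagnosis} for a patient in "
        f"their {age_range}, at an 8th-grade reading level.\n"
        f"Cover these teaching goals:\n{goal_lines}\n"
        "Use plain, non-judgmental language and avoid drug brand names."
    )

print(education_prompt(
    "heart failure with reduced ejection fraction",
    "60s",
    ["salt restriction", "daily weights", "why we use these medications"],
))
```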
3. Safety, Privacy, and Ethics: Non-Negotiable Boundaries
This is where people get nervous – and they are right. A sloppy AI workflow can violate HIPAA, blow up trust, and cause real harm.
Let us make the rules painfully clear.
Rule 1: You Do Not Upload PHI to Non-Compliant Tools
If your hospital has not approved a specific AI tool as HIPAA-compliant, it is off-limits for any identifiable data. Period.
That includes:
- Names, MRNs, phone numbers
- Dates of birth, admission, procedure dates
- Addresses, employer, or other clear identifiers
- Free-text copy-pastes that obviously originated from your EMR
If you must use a consumer tool for conceptual help (and many residents do, quietly):
- Convert everything to a generic case vignette
- Change ages, timing, and any unique combination of features
- Treat it like asking “What is the general approach to…” in a textbook, not “Help me with Ms. Smith in 5B.”
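If you want a mechanical backstop before any text leaves your hands, a crude pattern check can catch the most obvious identifiers. This is a sketch, not a safeguard: passing it does not make text safe to share, and all of the manual rules above still apply. The patterns and threshold below are assumptions.

```python
# Illustrative backstop only: a crude check for obvious identifiers before any
# text leaves your hands. It catches patterns, not people; passing this check
# does NOT make text safe. Patterns below are assumptions.

import re

OBVIOUS_PHI_PATTERNS = {
    "date": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "phone": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "mrn_like_number": r"\b\d{6,}\b",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
}

def obvious_phi_hits(text: str) -> list[str]:
    """Return the names of any obvious-identifier patterns found in the text."""
    return [name for name, pattern in OBVIOUS_PHI_PATTERNS.items()
            if re.search(pattern, text)]

draft = "72M admitted 3/14/2024, MRN 00123456, with CHF exacerbation."
hits = obvious_phi_hits(draft)
if hits:
    print("Do not send. Obvious identifiers found:", ", ".join(hits))
```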
Rule 2: AI Cannot Be the Only Source For Critical Decisions
If the model is the sole reason you changed anything major, that is unsafe.
For decisions that can harm if wrong (med changes, procedures, disposition, code status), your minimum standard:
- AI may suggest →
- You cross-check with at least one of:
- Attending/consultant
- Authoritative reference (guidelines, UpToDate, drug database)
- Your own strong prior knowledge and reasoning
If you cannot justify the choice without mentioning the AI, you are not ready to act.
Rule 3: Be Honest With Patients and Supervisors
Trying to hide AI use is a recipe for distrust and mistakes.
A practical standard:
- With patients:
- You do not need to say “we used an AI.” You do need to ensure what you tell them is accurate, human-reviewed, and empathetic.
- With colleagues and supervisors:
- You should be transparent when AI significantly influenced your differential or plan:
- “I used an AI support tool that reminded me of X and Y in the differential. I checked those against UpToDate; here is why I think X is more likely.”
That kind of transparency forces you to think – and it keeps the power dynamics honest.
4. Evaluating and Choosing AI Tools for Rounds
Not all AI tools are created equal. Some are shiny garbage. Some are decent for education but poor for clinical decision support.
Here is a short, unforgiving checklist for anything you want to use on rounds.
The Five Tests of a Clinically Useful AI Tool
Data security test
- Is it explicitly HIPAA-compliant?
- Is it officially approved by your institution?
- Does it log data for training? (Often unacceptable for PHI.)
Provenance test
- Can it show sources for its recommendations?
- Does it reference guidelines, trials, or textbooks that you can verify?
- If it cannot cite anything, treat it like a junior medical student’s guess.
Control test
- Can you adjust the level of detail?
- Can you restrict it to a specific knowledge base (e.g., local guidelines)?
- Does it allow you to turn off features (like auto-writing notes) you do not trust?
Transparency test
- Does the interface clearly show what input was used and what assumptions it made?
- Can you easily review its reasoning chain (if provided)?
Failure behavior test
- How does it respond when uncertain?
- Does it say “I’m not sure” or does it hallucinate confidently?
- Does it encourage you to cross-check, or does it sound like gospel?
If a tool fails 2 or more of these, it does not belong in your clinical workflow.
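If you want to make that verdict explicit, the five tests translate directly into a simple score sheet. A minimal sketch, assuming you record each test as a pass/fail; the field names are mine, and only the "fails 2 or more" threshold comes from the checklist above.

```python
# Illustrative sketch: scoring a candidate tool against the five tests.
# The 2-failure threshold comes from the checklist above; everything else
# is an assumption about how a team might record its assessment.

from dataclasses import dataclass, fields

@dataclass
class ToolAssessment:
    passes_data_security: bool
    passes_provenance: bool
    passes_control: bool
    passes_transparency: bool
    passes_failure_behavior: bool

    def failed_tests(self) -> list[str]:
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def allowed_on_rounds(self) -> bool:
        # Fails 2 or more tests -> does not belong in the clinical workflow.
        return len(self.failed_tests()) < 2

generic_chatbot = ToolAssessment(
    passes_data_security=False,
    passes_provenance=False,
    passes_control=False,
    passes_transparency=True,
    passes_failure_behavior=False,
)
print(generic_chatbot.allowed_on_rounds())  # False
```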
| Criterion | Generic Chatbot | Hospital-Approved CDS | Specialty-Specific App |
|---|---|---|---|
| HIPAA-Compliant | No | Yes | Often Yes |
| Shows Sources | Rarely | Often | Often |
| Customization | Low | High | Medium |
| Best Use Case | Education only | Daily rounds support | Narrow clinical tasks |
5. Training Yourself and Your Team: A 4-Week Integration Plan
Most teams fail because they have no plan beyond “try it.” You need a short, structured ramp-up where everyone learns together and nobody hides mistakes.
Here is a concrete 4-week protocol you can pitch to your attending or chief.
Week 1: Dry Runs (No Patient Data, No Real Decisions)
Goal: Get comfortable with the tool’s behavior.
- Use old, already-resolved cases, recalled from memory with anonymized details.
- Ask the AI:
- For differentials you already know
- For workup of classic presentations
- To summarize textbook-like scenarios
- Compare:
- Where does it miss obvious diagnoses?
- Where does it overcomplicate?
- Where does it hallucinate contraindicated drugs?
End of week: Brief team debrief – 10 minutes. Decide if the tool is even worth using with live cases.
Week 2: Pre-Round Summaries Only
Goal: Safely reduce cognitive load without altering care.
- Residents and interns:
- Use AI strictly for pre-round organization:
- Summaries, problem lists, checklist of management points
- No direct EMR copy-paste into non-compliant systems
- Attending:
- Reviews one AI-supported case per day explicitly with the team:
- “What did AI add?”
- “What was wrong or misleading?”
No diagnostic or therapeutic decision may be made this week based solely on AI suggestions.
Week 3: Differential Diagnosis Aid (With Guardrails)
Goal: Use AI as a structured check on your thinking.
- For selected complex patients:
- Team builds their own differential and plan first.
- Then queries AI:
- “Given this de-identified summary, what additional diagnoses should we consider that we have not mentioned?”
- Team explicitly marks AI contributions as:
- Useful additions
- Noise
- Dangerous suggestions
- Any AI-suggested change that would alter care:
- Must be verified with guidelines or consultant opinion.
Week 4: Integrate Documentation and Patient Education
Goal: Use AI to accelerate communication, not decision-making.
- Allow:
- Drafting of sections of notes (HPI summary, assessment structure)
- Drafting patient handouts and explanations
- Require:
- Human editing of every sentence that enters the chart
- Explicit labeling in your own mind: “AI-assisted wording,” not “AI thinking”
By the end of week 4, you should have a very clear sense of:
- What the AI is good for on your service
- Where it tends to fail
- Which parts of the workflow actually benefit from it
If you do not, either your tool is bad or your team is not using it systematically.
6. Logging, Feedback, and Continuous Improvement
The last piece nobody does but should: track the impact.
You do not need a randomized trial. You need a simple, honest log.
| Week | AI-assisted cases logged |
|---|---|
| Week 1 | 5 |
| Week 2 | 15 |
| Week 3 | 25 |
| Week 4 | 30 |
Simple Case Log Template
For any case where AI meaningfully influenced your thinking, jot down 5 things (a logging sketch follows this list):
- Patient type (anonymized): “Older adult with sepsis,” “young patient with new onset psychosis”
- Where you used AI:
- Pre-round summary
- Differential generation
- Management checklist
- Patient education
- What AI contributed:
- “Suggested adding HLH to differential”
- “Reminded us of sodium correction limits”
- Result:
- No change
- Confirmed our plan
- Led us to order X or consult Y
- Safety check:
- Was this cross-checked? With what?
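A shared spreadsheet or a tiny script is enough to keep this log honest. The sketch below appends one anonymized entry per AI-influenced case to a CSV; the file name and field names are assumptions for illustration.

```python
# Illustrative sketch: the five-field case log as a tiny CSV appender.
# Keep entries anonymized; file name and fields are assumptions.

import csv
from pathlib import Path

LOG_FIELDS = ["patient_type", "where_used", "ai_contribution", "result", "safety_check"]

def log_ai_case(path: str, entry: dict) -> None:
    """Append one anonymized, AI-influenced case to a shared team log."""
    file = Path(path)
    new_file = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

log_ai_case("team_ai_log.csv", {
    "patient_type": "Older adult with sepsis",
    "where_used": "Differential generation",
    "ai_contribution": "Suggested adding HLH to differential",
    "result": "Confirmed our plan; no change",
    "safety_check": "Cross-checked with ID consult and UpToDate",
})
```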
Review this log every 2–4 weeks as a team. Look for patterns:
- Repeated dangerous suggestions → reconsider or retire the tool.
- Consistent time savings without safety concerns → formalize that use case in your team’s routines.
Over time, you want a written micro-policy for your service, even if your hospital has not issued one:
- “On our team, AI is allowed for A, B, C. It is not allowed for X, Y, Z.”
- “We always cross-check AI suggestions for meds and diagnoses against [resources].”
- “We record any major AI-influenced decision in our internal log.”

FAQs
1. Should I tell my attending I am using AI for my notes and plans?
Yes. Hiding it is amateur behavior. A simple, professional way to say it: “I have been using an AI tool to help me structure my problem lists and make sure I am not missing standard management steps. I always verify content against the chart and guidelines. If you prefer I do not use it on your service, I will stop.”
If they are open to it, ask them to spot-check a case or two where AI helped so you can calibrate together.
2. What is the ethical bottom line for AI use on rounds?
Three points.
First, patient welfare and safety come before convenience; AI cannot drive care independent of human judgment.
Second, privacy and confidentiality matter; do not send identifiable data to unapproved tools.
Third, honesty with colleagues about how you arrived at your decisions is non-negotiable; AI should augment your reasoning, not replace it, and you should always be able to defend any decision without saying, “Because the AI said so.”