
Your First 90 Days as an Attending: Safely Adopting New Technologies

January 8, 2026
14 minute read

New attending physician reviewing medical technology in a modern hospital workroom

The fastest way to damage your credibility as a new attending is to chase shiny technology without a safety net.

You are stepping into the most dangerous period of your career: competent enough to order almost anything, inexperienced enough in your new role to misjudge risk, surrounded by vendors and administrators who want “early adopters.” The first 90 days will set your pattern. Either you become the attending who uses innovation wisely—or the one everyone quietly blames when the “smart” tool backfires.

Here is how your first three months should look, week by week, if you want to safely adopt new technologies without becoming a cautionary tale.


Weeks 1–2: Stabilize Before You Innovate

At this point you should not be “implementing” anything. You are not here to fix the hospital in your first fortnight. You are here to prove you can practice safely in a new role.

Clinical baseline: lock down your fundamentals

During Days 1–14, your priorities are:

  • Learn the local way things actually get done
  • Identify where tech is already embedded in patient care
  • Understand which tools are optional vs. mandated

Concrete steps for your first two weeks:

  1. Shadow your own workflow with a tech lens

    Do 2–3 full shifts where your sole “extra” task is to notice technology:

    • EHR order sets, macros, decision support alerts
    • AI triage tools, sepsis prediction, imaging prioritization
    • Smart pumps, closed-loop insulin systems, advanced monitors
    • Telehealth platforms, remote monitoring dashboards

    Literally make a private list titled: “Tools I touched today but do not fully understand.” You will use this later.

  2. Find your “tech safety triangle”

    Before you touch anything new, you need three humans:

    • A senior attending known for being cautious, not flashy
    • A nurse or charge nurse who has survived multiple rollouts
    • An IT/informatics or clinical engineer contact who actually understands the plumbing

    At this point you should:

    • Ask the senior attending: “What tech here causes problems?”
    • Ask the nurse: “Which devices or orders from attendings are hardest to manage?”
    • Ask the IT/informatics contact: “Which tools are considered high-risk or still in pilot?”

  3. Clarify your ethical boundaries early

    In Weeks 1–2, sketch your personal red lines:

    • No adopting patient-facing AI tools without clear documentation and consent language
    • No relying on AI or automation for decisions you cannot defend independently
    • No letting vendor reps into patient rooms without explicit patient consent and purpose

    Write this down somewhere. You will be tempted to bend it when you are tired and a tool “looks helpful.”

First 2 Weeks Focus

  • Week 1, Days 1–2: Observe workflows
  • Week 1, Days 3–4: Map existing tech
  • Week 1, Days 5–7: Meet your tech safety triangle
  • Week 2, Days 8–10: Clarify policies and mandates
  • Week 2, Days 11–14: Define personal ethical boundaries

Weeks 3–4: Audit the Technology You Already Use

By now, you are functioning. At this point you should not be adding new platforms. You should understand the tools you are already using on autopilot.

Week 3: Map risk in your current tools

Pick 2–3 tools that clearly influence patient care. For example:

  • Sepsis prediction model embedded in the EHR
  • AI imaging triage flagging pneumothorax on chest x‑ray or PE on CT
  • Insulin dosing algorithm or smart infusion pump
  • Procedure guidance system (e.g., navigation, robotic platform)

For each tool, do a 30–45 minute micro‑audit:

  • What does it actually do? Prediction? Triage? Dose suggestion?
  • Where in the workflow does it enter? Before your decision, after, or in parallel?
  • Who can override it? You? Nursing? Pharmacy? No one?
  • What happens when it fails? Does anything fail safe?

Then ask the hard question: “If this vanished tonight, could I still practice safely?” If the answer is no, you are already over‑dependent on a tool you barely understand. That is not innovation. That is risk.

Week 4: Establish your documentation habits

You will not remember these first months in detail when something goes wrong. Documentation is how you show you used technology ethically.

At this point you should standardize:

  • How you document tech use when it impacts a decision
    Example: “AI CT triage flagged possible PE; personally reviewed images, did not see embolus; ordered D‑dimer and repeat study if symptoms progress.”

  • How you explain tech to patients
    Two sentences, maximum. No sales pitch.
    “We use a software tool that helps flag concerning findings on x‑rays. I still review all the images myself, but this can point us toward things to double‑check.”

  • When you explicitly state you overrode a suggestion
    This matters ethically. It shows you are the decision‑maker, not the algorithm.

You are building muscle memory here. On Day 25 you will still be doing this consciously; by Day 90 you will write this way automatically.

Attending documenting technology-influenced decisions in the EHR


Month 2 (Weeks 5–8): Select and Pilot One New Technology

You are stable. You know the landmines. Now you can cautiously adopt something new. One thing. Not five.

Week 5: Choose your first technology deliberately

Pick a tool that:

  • Solves a problem you actually feel daily
  • Has at least some outcome or safety data behind it
  • Is supported by your institution (not just “this cool app I found”)

Good “first adoption” candidates in the first 90 days:

  • A vetted AI documentation assistant integrated into your EHR
  • A decision support module for antibiotic stewardship with pharmacy backing
  • Telehealth workflow enhancements for follow‑ups in an outpatient clinic
  • Structured imaging AI for simple, high‑volume studies (e.g., bone age, basic chest x‑ray triage) with radiology sign‑off

Bad candidates:

  • Anything that auto‑approves or auto‑orders treatment without your direct confirmation
  • “Black box” vendor tools with no local champion and no clear oversight
  • Tools where you cannot see or explain how false positives/negatives are handled

Good vs Risky First Technologies

  • AI note assistant (EHR‑integrated): Good
  • AI sepsis early warning (already live): Evaluate only
  • New robotic surgical platform: Risky to adopt now
  • External patient‑facing app: Usually risky
  • Telehealth visit templates: Good

Week 6: Set your safety rules before first use

Before you use the tool on a live patient, you define your sandbox:

  • Scope: For which types of patients or scenarios will you use it?
  • Stop criteria: When will you not use it? (e.g., complex multi‑morbid patients, end of life, pediatrics)
  • Double‑check plan: How will you validate it early on—shadow use, parallel charting, second opinions?

Example for an AI documentation tool:

  • Scope: “I will use it only for straightforward admissions and follow‑ups, not for complex ICU or ethics cases during the first month.”
  • Stop criteria: “If it fabricates or misrepresents clinical findings twice in a week, I stop and report.”
  • Double‑check: “For the first 10 patients, I will compare AI‑generated notes line‑by‑line with my manual notes before signing.”

Write this plan in an email to yourself and your “tech safety triangle” contacts. The paper trail matters.

Week 7: Start a controlled pilot (patient‑by‑patient)

Now you go live, slowly.

Daily routine:

  1. Morning: Decide whether you will use the tool on 1–3 specific patients.

  2. Mid‑day: Note (privately) where the tool helped, hindered, or confused you.

  3. End of day: Spend 10–15 minutes writing a very short log:

    • Patient/context
    • What the tool did
    • What you decided
    • Any near‑miss or concerning behavior

This is tedious. Do it anyway—for 2 weeks. You are teaching your brain how this technology behaves in your hands, with your patients, not in a marketing slide.

Week 8: Review, recalibrate, or shut it down

At this point you should decide if this tool stays, changes, or goes.

Ask yourself:

  • Did it actually save cognitive load or time, or did it just “feel cool”?
  • Did it ever push you toward a riskier decision than you would have made alone?
  • Can you explain its role clearly to a patient, a colleague, and a lawyer?

Three options:

  1. Adopt with guardrails
    It works, you understand it, no pattern of concerning behavior. You keep it, with your written scope/limits.

  2. Restrict and re‑pilot
    It helps in narrow situations only. You shrink its allowed use and continue cautiously.

  3. Abandon
    It adds confusion, risk, or extra work. You document why and move on. You are not obligated to keep bad tech.

Perceived Benefit vs Risk of New Tool During Pilot (scale 1–10)

  • Day 1: Benefit 4, Risk 7
  • Day 5: Benefit 6, Risk 5
  • Day 10: Benefit 7, Risk 4
  • Day 14: Benefit 7, Risk 3


Month 3 (Weeks 9–12): Scale Safely and Protect Your Reputation

By month three, people are watching you differently. You are not “the new attending” anymore. You are just an attending. Your tech habits are becoming your brand.

Week 9: Integrate technology into team communication

At this point you should not be a solo early adopter. You bring your team into the loop.

On rounds or in huddles, start using explicit, disciplined language:

  • “The AI note drafted this; I have confirmed the key findings and assessment.”
  • “The sepsis alert fired here. I am not following it because…”
  • “The imaging AI flagged this study; I still want a human radiology read before acting.”

This does three things:

  • Trains juniors to see tech as assistive, not authoritative
  • Makes your ethical reasoning visible
  • Protects you if something goes sideways—people heard you thinking out loud

Week 10: Solidify your personal “tech template”

You need a mental checklist you run through every time you consider a new tool. By now you have piloted one technology; use that experience to build your template.

A simple 5‑question template that takes 20 seconds:

  1. What exact decision or task is this affecting?
  2. What is my plan if this tech is wrong?
  3. Who else is involved if there is an error (nursing, radiology, pharmacy)?
  4. How will I document that I made the final decision?
  5. Would I be comfortable explaining this choice to a family member at bedside?

If you cannot answer in 20–30 seconds, you do not use the new tool on that patient.

Multidisciplinary team discussing AI tools during rounds

Week 11: Revisit your ethical boundary conditions

Here is where people get lazy. The tool has not burned them yet, so boundaries slide. Do not do this.

Block 30–45 minutes one evening and ask:

  • Have I started using this tool on more complex patients than I originally intended?
  • Have I ever skipped my usual clinical reasoning because the tool agreed with me?
  • Have I disclosed tech use to patients consistently when it mattered?

If the answer to any of those is “yes, I slipped,” tighten back up:

  • Re‑narrow your scope of use
  • Re‑commit to manual double‑checks where you cut corners
  • Report any concerning patterns through your institutional channels (ethics, quality, or informatics)

Ethically, you are responsible for noticing drift. Tech does not “just creep.” You permit it.

Week 12: Decide your 6‑month technology strategy

At this point you should stop thinking in days and start thinking in quarters.

Look back at the 90 days and ask three blunt questions:

  1. What single technology actually made patients safer or care more humane?
    Keep it. Refine it. Consider teaching others how to use it safely.

  2. What technology mostly helped the hospital or billing, not patient care?
    Be honest. If it is just an administrative requirement, treat it like that. Do not oversell its clinical value to yourself or your trainees.

  3. Where did I feel uneasy but moved ahead anyway?
    That is your danger zone. Flag those situations:

    • New algorithm on a vulnerable population (peds, geriatrics, language barriers)
    • High‑stakes decisions where tech “pushed you over the edge” to act or not act
    • Cases where you could not explain the underlying logic but trusted it anyway

From this review, set a 6‑month plan. For example:

  • Deepen use of your successful tool and present your pilot data at a department meeting
  • Stop one marginal technology and inform your leadership why
  • Choose one additional low‑risk tool to pilot using the same framework

Time Allocation for Tech Adoption in First 90 Days

  • Month 1: Observation/Learning 60, Pilot/Evaluation 10, Stable Use/Teaching 0
  • Month 2: Observation/Learning 25, Pilot/Evaluation 45, Stable Use/Teaching 10
  • Month 3: Observation/Learning 15, Pilot/Evaluation 30, Stable Use/Teaching 35


Day‑by‑Day Micro‑Checklist for High‑Risk Decisions

Some days will be quiet. Others will throw you straight into a tech‑heavy, high‑stakes call: AI alerts, radiology triage, smart pumps, the works. In those moments, do not improvise your ethics.

Here is the 60‑second “tech safety pause” you should run that day, before you act:

  1. Name the tool out loud (to yourself or the team)
    “This is the AI PE triage flagging this CT as high risk.”

  2. State its role (assistant vs. driver)
    “This is an assistant. I am the decision‑maker.”

  3. Identify your independent data

    • Vitals, exam, history
    • Lab trends and imaging you have personally reviewed

  4. Decide your action without the tool
    “If this AI did not exist, what would I do?”
    If your answer is identical, good. If your answer changes only because of the tool, you need a strong justification.

  5. Document the relationship
    One line is enough:
    “AI alert suggested X; decision based primarily on Y, with Z as supporting information.”

This is how you keep innovation from eroding your clinical judgment.

Attending physician pausing during a critical decision in front of monitors


Common Pitfalls in the First 90 Days (And When They Typically Appear)

You are not special here; everyone falls into the same traps on the same timeline. You can either be surprised by them or plan for them.

Pitfalls by Timeline in First 90 Days

  • Weeks 1–2: Overtrusting local “this is how we do it” tech use without questioning
  • Weeks 3–4: Ignoring documentation of tech‑influenced decisions
  • Weeks 5–6: Piloting too many tools at once
  • Weeks 7–8: Expanding tech use to complex cases too fast
  • Weeks 9–10: Teaching trainees to rely on tools instead of reasoning
  • Weeks 11–12: Normalizing small unsafe shortcuts with tech

Watch for each pitfall in its likely window. If you see it creeping in, you know exactly where you are in the “early adopter” failure script—and you can step off.


The Bottom Line

Your first 90 days as an attending are not about proving you are cutting‑edge. They are about proving you are safe.

Three points to remember:

  1. Stabilize first, innovate second. The first month is for understanding existing tech and your own limits—not adding new toys.
  2. Pilot deliberately, document ruthlessly. One new tool, clear scope, written guardrails, and visible reasoning in the chart and with your team.
  3. Keep clinical judgment primary. Technology can assist, but you must always be able to defend a decision as if the tool never existed. If you cannot, you are not using innovation—you are outsourcing your ethics.
Related Articles