
What is the fastest way to make your attending permanently distrust your judgment? Pitch a shiny new tech idea the wrong way.
You are in the “innovation” phase of your career. They are in the “this will land us in a lawsuit” phase of theirs. If you forget that, you will get burned.
Let me walk through the biggest red flags I see when trainees push new devices, apps, AI tools, or workflows on a service. These are the things that make attendings quietly think: “This person does not understand risk, systems, or ethics.”
You do not want to be that person.
1. Confusing “Cool” With “Clinically Safe and Necessary”
The classic mistake: you saw a demo at a conference, read a Twitter thread, or tried something on your own phone and decided, “We should use this on our patients tomorrow.”
That is how you lose credibility.
Red flags your attending will notice immediately:
- “This app is really cool, it uses AI to…” (no mention of evidence)
- “Everyone at other hospitals is using this.” (no specifics)
- “It will make our lives easier.” (but what about safety and accuracy?)
What they are thinking:
“Where is the data? What problem does this actually solve? What is the failure mode?”
You must separate three things very clearly:
- Novelty – Is it new or clever?
- Utility – Does it solve a real clinical or operational problem?
- Safety/Validity – Is there evidence it works and does not harm patients?
If you only talk about #1 and #2, you look like a gadget-chaser. You need #3 front and center.
Better framing:
- “This ECG AI tool has a prospective validation study in JAMA Cardiology, AUC 0.93 for LV dysfunction, and a multi-center implementation study showing reduced echo utilization by 15% without missed pathology. I think it might help in our heart failure population. Here is the paper.”
Now you sound like a clinician who happens to be tech-savvy. Not a tech bro who accidentally completed medical school.
| Pitch element | Typical trainee emphasis (relative) |
|---|---|
| Cool factor | 80 |
| Saves time | 70 |
| Evidence quality | 40 |
| Risk management | 30 |
| Patient benefit | 50 |
The mistake: leading with “cool factor” and convenience.
What they care about: evidence, risk, and actual patient benefit.
2. Ignoring Privacy, Security, and Hospital Policy
If your pitch includes any version of:
- “We could just text the images in a group chat…”
- “I can upload the data to this cloud tool to analyze it…”
- “It is not technically approved here, but we could try it on one or two patients…”
You just triggered every red flag a seasoned attending has.
Security and compliance are not afterthoughts. They are gatekeepers. Violating them can get:
- You in trouble with GME or the medical board
- Your attending in trouble with the department chair
- Your hospital in trouble with regulators
And yes, people get fired over this.
Danger phrases that should never be in your pitch:
- “Off the record”
- “We do not have to tell IT yet”
- “We can just work around Epic/Cerner for now”
- “We can use my personal account/device”
Instead, assume that if it touches PHI, it must be:
- HIPAA-compliant (or your country’s equivalent)
- Approved by hospital IT / information security
- Covered by a BAA (business associate agreement) if an external vendor is involved
If you do not know whether the tool is compliant, say:
“I do not know if this is HIPAA-compliant or approved here yet. I wanted to ask you first if it is worth exploring formally with IT / compliance.”
That sentence alone will save you from sounding reckless.

3. Trying to Pilot on Patients Without a Safety Net
Nothing makes an attending’s blood pressure spike like hearing, “We already tried this on a few patients and it seemed fine.”
That is how careers end.
Red flags:
You have already:
- Used a non-approved app for medication dosing
- Stored PHI in a consumer cloud app to “analyze patterns”
- Given patients device-based advice not in guidelines (“This wearable said your A-fib is controlled, so you are probably fine.”)
You propose:
- Using an algorithm’s recommendation without standard-of-care confirmation
- Letting a device replace a validated workflow without backup checks
Remember:
As a trainee, you are not allowed to change the standard of care on your own. Even your attending cannot unilaterally change it if it deviates from institutional policy or guidelines, unless they can justify it thoroughly and document it explicitly.
If your tech idea involves any of the following, you must slow down:
- Changing who is responsible for interpreting critical data
- Bypassing existing safety steps (two-nurse checks, pharmacist review, radiologist reads)
- Replacing a validated test with an unvalidated or lightly validated one
Safer framing:
- “I am not suggesting we change our management solely based on the tool. I was thinking we could compare its output to our usual process for a defined period, purely observationally, and if you think it might be valuable, we could consider talking to QI/IRB.”
If the tool affects diagnosis or treatment—or could influence it, even implicitly—expect your attending to worry about:
- Standard of care
- Documentation
- Liability
- IRB requirements if it smells like research
Do not minimize those concerns. That is a quick way to be labeled naive and dangerous.
4. Overpromising and Hand-Waving the Limitations
“I think this will basically solve our throughput issues.”
“This AI tool is better than radiologists now.”
“This will eliminate human error.”
I have heard all three. From very smart people who had read exactly one press release and half a preprint.
Overpromising kills your credibility. Fast.
Red flags attendings listen for:
- Absolutes: “always,” “never,” “eliminate,” “perfect”
- Unsupported claims: “better than experts” without strong, peer-reviewed evidence
- No discussion of:
  - False positives / false negatives
  - Edge cases
  - Failure modes
  - Generalizability (single-center vs multi-center, population differences)
You must walk in knowing—and openly stating—limitations:
- “This algorithm was validated in a relatively homogeneous population at a single quaternary center. Our patient mix is very different.”
- “Sensitivity is high but specificity is modest. It might be more useful as a screening tool with human confirmation, not as a final decision-maker.”
- “The study excluded unstable ICU patients, so I would not trust it in our sickest cases.”
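The “high sensitivity but modest specificity” point is worth quantifying, because the arithmetic is what convinces a skeptical attending. A minimal sketch, with invented numbers (95% sensitivity, 80% specificity, 5% prevalence) chosen purely for illustration:

```python
# Hypothetical numbers: a tool with 95% sensitivity and 80% specificity
# sounds strong, but at low disease prevalence most of its positive
# flags are false alarms.

def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """P(disease | positive flag), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At 5% prevalence, only about 1 in 5 flags is a true positive.
print(f"{positive_predictive_value(0.95, 0.80, 0.05):.2f}")  # prints 0.20
```

That is exactly why the safer framing above proposes the tool as a screen with human confirmation rather than a final decision-maker.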
If you sound like a salesperson, your attending will treat you like one.
If you sound like a cautious, skeptical scientist, they might actually listen.
| What they want to see | Red flag they see instead |
|---|---|
| Peer-reviewed studies | Only company white paper |
| Multi-center data | Single site, N=30 “pilot” |
| Clear performance metrics | Vague claims, no numbers |
| Limitations discussed | One-sided, “no downside” pitch |
5. Disrespecting Hierarchy, Workflow, and Politics
You are not just pitching technology. You are proposing to change what people do, how they document, and what they are accountable for.
If you do not understand that, you look like a bull in a china shop.
Red flags:
You pitch something that obviously increases work for:
- Nurses
- Unit clerks
- Techs
- Pharmacy
- Radiology
…without ever talking to them.
You frame the idea as:
- “Nurses could just…”
- “Pharmacy can probably…”
- “We can get radiology to…”
People remember who tried to hand them extra burdens without involving them early. That reputation spreads much faster than your “innovative” reputation.
Before you pitch to an attending:
Map who is affected. Literally write down which roles will:
- Enter data
- Monitor alerts
- Respond to notifications
- Document changes
Ask a few of them informally.
“If we had an app that did X, would that actually help or just add more clicks?”
Acknowledge this explicitly.
“I know this would affect nursing documentation and maybe add one extra step for respiratory. I talked with two of the senior nurses, and they raised these concerns…”
This shows you understand reality: tech changes are political and cultural, not just “feature updates.”

6. Skipping Ethics and Equity Concerns
If your pitch touches any of these:
- AI triage tools
- Risk prediction models
- Algorithms for resource allocation (ICU beds, imaging, consults)
- Screening tools with imperfect sensitivity/specificity
…then skipping the ethical implications is a big red flag.
You must think about:
Bias:
Does the model underperform in certain racial, socioeconomic, or language groups? Was it trained on a population nothing like yours?
Transparency:
Can you explain to a patient why the tool flagged—or did not flag—them?
Consent:
Is this clearly part of standard care? Or are you effectively enrolling patients into an experiment without telling them?
Fairness:
Does it risk deprioritizing patients who already have worse access or outcomes?
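Checking the bias question is not exotic if you can get a validation set or deidentified output. A minimal sketch of tabulating sensitivity per subgroup; the group labels and records below are invented for illustration:

```python
# Hypothetical sketch: tabulate a model's sensitivity separately per
# patient subgroup before trusting it. All data below is invented.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, has_condition, model_flagged) tuples.
    Returns {group: fraction of true cases the model caught}."""
    caught = defaultdict(int)
    cases = defaultdict(int)
    for group, has_condition, flagged in records:
        if has_condition:          # sensitivity only counts true cases
            cases[group] += 1
            if flagged:
                caught[group] += 1
    return {g: caught[g] / cases[g] for g in cases}

example = [
    ("english", True, True), ("english", True, True),
    ("english", True, False), ("english", False, False),
    ("spanish", True, True), ("spanish", True, False),
    ("spanish", True, False), ("spanish", False, True),
]
# english ~0.67 vs spanish ~0.33: a gap like this is exactly what an
# under-described validation should make you go looking for.
print(sensitivity_by_group(example))
```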
Example of a dangerous blind spot:
“I found this risk prediction app online that can tell us which patients are likely to bounce back after discharge. We could prioritize follow-up for the ones it flags.”
No mention of how it was trained. What variables it uses. How it might penalize people with poor documentation or inconsistent access.
Your attending might not have all the answers either. But if you never raise the questions, you signal that you have not internalized core ethical responsibilities.
A better move:
- “The model’s performance by race and language was not well described. That worries me. If we ever considered something like this, I would want equity and ethics folks involved.”
Now you sound like a physician, not a venture capitalist.
| Ethical risk | Relative weight |
|---|---|
| Data bias | 35 |
| Lack of transparency | 25 |
| Inadequate consent | 20 |
| Access inequity | 20 |
7. Presenting Chaos Instead of a Thoughtful Plan
Another red flag: your pitch is just vibes.
You show them a website or app and then vaguely say, “We could start using this on rounds.”
What they want to see instead:
A specific use case
“I am thinking about this only for low-risk chest pain patients in CDU, as an adjunct to our current protocol, not in the ED or ICU.”
A minimal, reversible pilot
“Two weeks, limited set of users, data collected manually, no changes to order sets yet.”
A measurement plan
“Track: time to result, discordance between tool and clinician, any breakdowns, user satisfaction.”
A contingency plan
“If we see X warning sign, we stop the pilot immediately.”
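The measurement plan above is nothing more than logging the tool’s output next to the clinician’s usual call, changing nothing about care, and counting disagreements. A minimal sketch; the field names and cases are invented for illustration:

```python
# Hypothetical sketch of the observational pilot: record tool output
# alongside the clinician's usual call and count disagreements.
# Field names and example cases are invented.

def discordance_rate(rows):
    """Fraction of logged cases where tool and clinician disagreed."""
    disagreements = sum(1 for r in rows
                        if r["tool_output"] != r["clinician_call"])
    return disagreements / len(rows)

pilot_log = [
    {"case_id": 1, "tool_output": "low risk",
     "clinician_call": "low risk", "minutes_to_result": 12},
    {"case_id": 2, "tool_output": "high risk",
     "clinician_call": "low risk", "minutes_to_result": 9},
]
print(f"Discordance: {discordance_rate(pilot_log):.0%}")  # prints Discordance: 50%
```

A spreadsheet does the same job; the point is that the pilot produces numbers you can bring back to your attending, not impressions.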
If you walk in with a rough but clear structure, you look like someone who understands systems. If you walk in with, “We could just start using it and see,” you look inexperienced and risky.
Consider drafting a one-page memo:
- Problem: (concrete, measurable)
- Proposed tech solution: (specific tool, limited scope)
- Supporting evidence: (citations, brief summary)
- Risks and limitations: (clear, honest)
- Proposed pilot: (who, where, how long, what measured)
- Approvals needed: (IT, compliance, QI, service chief, etc.)
You do not need a 50-slide deck. You do need to show you have thought it through beyond the shiny interface.
| Step | Description |
|---|---|
| Step 1 | Identify real problem |
| Step 2 | Review evidence |
| Step 3 | Check policy and security |
| Step 4 | Talk with frontline staff |
| Step 5 | Draft limited pilot plan |
| Step 6 | Discuss with attending |
| Step 7 | If there is interest, engage IT / QI |
8. Making It About You, Not the Service or Patients
Final big red flag: the pitch smells like a personal branding exercise.
Signals:
You talk a lot about:
- Your startup
- Your idea being “first in the country”
- How this will look on your CV
- Getting a paper out of it
You talk very little about:
- How it fits the service’s current pain points
- What your attending is accountable for
- What happens if it fails
Most attendings are not anti-innovation. They are anti-being-used.
If they sense you are primarily trying to:
- Impress a future employer
- Sell a product
- Build your LinkedIn story
…they will pull back hard. Or worse, they will nod politely and then never trust your judgment.
Re-center your pitch:
- “If this is not aligned with the service priorities or feels too risky, I would rather not push it. I am interested in innovation long-term, but not at the cost of safety or trust.”
This does two things:
- It shows you understand their responsibility and risk.
- It separates your identity from this one idea. So if the idea is rejected, your reputation is not.

FAQs
1. How do I know if a tech idea is “worth” bringing to my attending?
Ask yourself three questions:
- Does it solve a real problem on our service that people complain about regularly?
- Is there at least some credible evidence or experience from comparable institutions?
- Can I describe a limited, low-risk way to explore it without changing the standard of care?
If you cannot say yes to all three, you are probably too early. Keep reading, researching, and talking informally with peers and nurses before escalating.
2. What if my attending seems totally anti-technology?
Do not push head-on. Instead:
- Ask what they see as the biggest inefficiencies or risks on the service.
- Listen for specific pain points (handoffs, medication errors, delays).
- Quietly look for tools that address their expressed concerns, not your pet interests.
- When you pitch, start from their problem: “You mentioned missed follow-ups as a big issue. I found something that might help us track those. Here is the data and the limitations.”
If even that is shut down, accept that you may need a different mentor for innovation work, and keep your experimentation strictly within policy and outside clinical care.
3. Can I work on unapproved tech ideas in a research or quality improvement context?
Yes, but with guardrails:
Route it through proper channels:
- QI committee
- IRB (if applicable)
- IT / security
- Department or division leadership
Be absolutely clear in proposals and consent (if required) that:
- This is experimental.
- It does not replace standard care.
- There are oversight and stopping rules.
Pitching it as QI or research without actually using the structures and protections of QI/research is a red flag and can get you and your mentors in trouble.
4. How do I recover if I already made one of these mistakes?
Own it. Fast and clearly.
- Tell your attending or program director what you did, without minimizing.
- State what you have learned:
- About privacy
- About standard of care
- About chain of command
- Propose safeguards you will follow next time:
- Always check policy before using external tools with PHI.
- Never pilot anything on patients without attending approval and appropriate oversight.
- Involve frontline staff early before proposing workflow changes.
Most faculty will forgive a first offense if they believe you actually learned. What they will not forgive is defensiveness, hiding, or repeating the same stunt.
Key points to remember:
- Do not confuse enthusiasm for judgment. Bring evidence, limits, and risk awareness.
- Never bypass privacy, policy, or standard of care to “try” tech on patients.
- Make your pitch about service problems and patient safety, not your personal brand.