Residency Advisor

Step-by-Step Guide: Turning a Tech Idea into a Publishable QI Project

January 8, 2026
17 minute read



It is 2:17 a.m. on night float. You are watching a chaotic dance of pages, secure messages, and a clunky EHR that takes twelve clicks to order Tylenol. In your head, you are thinking what every frustrated resident has thought:

“If we just had a simple app / script / dashboard to do X, this whole mess would be 10x better.”

You sketch a rough idea on the back of a progress note. Maybe it is a quick script, a smarter template, or a simple dashboard.

By morning, the idea still seems good. Maybe actually good. You start telling people. A few nods. One attending says, “That would be fantastic… someone should do that.”

Here is the problem.

Most tech ideas in hospitals die in exactly this gap:

  • Too complex to do “on the side”
  • Too vague to turn into a formal QI project
  • Too unstructured to ever become publishable

You are here because you do not want that. You want a concrete path from “this is annoying, we could fix it with tech” to “we ran a QI project” to “here is the published paper.”

This is that path.


Step 1: Convert the Vague Tech Idea into a Tight QI Question

Your first job is not to code anything.

Your first job is to translate “cool tech thing” into a specific, measurable problem.

1.1. Define the clinical or operational pain point

Ask yourself three ruthless questions:

  1. Who is suffering from this problem? (patients, nurses, residents, schedulers, etc.)
  2. What exactly goes wrong? (delays, errors, missed follow-up, burnout, duplicated work)
  3. How does it show up in data? (time, counts, percentages, errors, costs)

Bad version: “The discharge process is inefficient.”
Good version: “Discharge summaries for general medicine patients are often completed after the patient has already left, which delays pharmacy reconciliation and follow-up scheduling.”

Now make it even more concrete:

  • Unit or population: “general medicine inpatients on 5E and 5W”
  • Time frame: “weekday discharges”
  • Outcome: “time from discharge order to completed discharge summary”

Once you can state the problem like that, you are in QI territory.

1.2. Use a QI-friendly question format (SMART + tech)

You need a question that sounds like it belongs in a QI abstract, not a start-up pitch deck.

Use this template:

“In [population / setting], does implementing [tech intervention] reduce/improve [specific outcome] by [target amount] over [time period]?”

Example:

“In adult general medicine inpatients discharged from 5E/5W, does implementing an EHR-integrated discharge summary template that auto-populates key data reduce median time from discharge order to completed discharge summary by 30% over 6 months?”

That is now:

  • Specific
  • Measurable
  • QI-aligned
  • Tech-enabled

You are halfway to a Methods section already.
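The 30% target in that question can be checked mechanically once data arrive. A minimal sketch, with numbers invented purely for illustration:

```python
from statistics import median

def target_met(baseline_minutes, post_minutes, target_reduction=0.30):
    """Compare median times (discharge order -> completed summary).

    Returns the relative reduction and whether it hit the target.
    """
    base = median(baseline_minutes)
    post = median(post_minutes)
    reduction = (base - post) / base
    return round(reduction, 3), reduction >= target_reduction

# Invented per-discharge times in minutes, not real data.
print(target_met([40, 45, 50, 60, 90], [28, 30, 33, 35, 60]))  # -> (0.34, True)
```

Defining the comparison this precisely now saves arguments later about what “reduce by 30%” actually means.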


Step 2: Map Your Stakeholders and Get Quick Buy-In (Before You Build)

You cannot tech your way out of hospital politics.

If this thing touches workflows, you need allies.

2.1. Identify the minimum viable stakeholder set

At minimum, you need:

  • A clinical sponsor
  • A data/IT contact
  • Someone who owns the workflow you are changing

Core Stakeholders for a Tech-Enabled QI Project

  Role                                         | What You Need From Them
  ---------------------------------------------|----------------------------------
  Clinical sponsor                             | Credibility, guidance, protection
  IT / EHR analyst                             | Data access, build feasibility
  Unit leader (nurse manager / chief resident) | Operational buy-in
  QI mentor                                    | Study design, publication framing
  IRB/QI reviewer                              | Oversight classification

You do not need a 15-person steering committee. You do need at least one person in each of those boxes.

2.2. Have a 15-minute pitch ready

You are not pitching an app. You are pitching a problem + outcome.

Your mini-pitch:

  1. “Here is the specific problem.”
  2. “Here is how big it is (rough estimate or anecdotal but concrete).”
  3. “Here is a low-burden tech solution that fits existing workflows.”
  4. “Here is what we would measure before and after.”
  5. “Here is why it is low risk and reversible.”

If, after that, your stakeholder says, “This sounds good, but how much IT work will it take?” you are on the right track. Curiosity about feasibility is a good sign. Indifference is not.


Step 3: Lock the Project into a QI Framework Early

If you want this to be publishable, it must look like planned QI, not a random tech hack that you later wrapped in buzzwords.

Use a standard QI framework from the start. PDSA is the workhorse.

3.1. Build a simple driver diagram

You do not need artwork. A one-slide sketch is enough.

  • Aim: “Reduce discharge summary completion time by 30% in 6 months.”
  • Primary drivers:
    • Discharge summaries started late
    • Redundant manual data entry
    • No visibility on pending discharges
  • Secondary drivers:
    • Residents do not know which discharges are ‘today’
    • EHR templates are cumbersome
    • No performance feedback
  • Intervention:
    • EHR-based smart discharge template + daily discharge list dashboard

This is your “storyboard” for the Methods section.

3.2. Predefine your measures like an adult

You need three categories. Every serious QI tech project has all three:

  • Outcome measures – What you actually care about
  • Process measures – Are people using the tech as intended?
  • Balancing measures – What might you be accidentally worsening?

Example:

  • Outcome:
    • Median time from discharge order to completed discharge summary
    • Percentage of summaries completed before patient physically leaves
  • Process:
    • Percentage of discharges using the new smart template
    • Mean time spent in discharge summary note (from EHR logs)
  • Balancing:
    • Coder queries or documentation error rates
    • Resident-reported time burden of the new workflow

Document these in a one-page project charter. This document becomes 30–40% of your eventual manuscript.
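If it helps to keep the charter machine-checkable, the measures can live in a small data structure alongside the document. A sketch with hypothetical field names:

```python
# Illustrative charter skeleton; the field names are hypothetical.
charter = {
    "aim": "Reduce median discharge-order-to-summary time by 30% in 6 months",
    "measures": {
        "outcome": [
            "Median minutes from discharge order to completed summary",
            "% of summaries completed before the patient physically leaves",
        ],
        "process": [
            "% of discharges using the new smart template",
            "Mean minutes spent in the discharge summary note (EHR logs)",
        ],
        "balancing": [
            "Coder queries or documentation error rates",
        ],
    },
}

# Sanity check: all three measure categories are present and non-empty.
assert set(charter["measures"]) == {"outcome", "process", "balancing"}
assert all(charter["measures"].values())
```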


Step 4: Decide the Tech Scope: Prototype vs Production

This is where most clinician-innovators overreach. They try to build a startup when they only need a minimum viable tool to support a QI cycle.

You have two main paths: embedded EHR build or external tool.

4.1. Path A: EHR-embedded (preferred for publishable QI)

This is things like:

  • SmartText / SmartPhrase / dot-phrase templates
  • Order sets
  • Flowsheet row + dashboard
  • BPA (best practice alert)
  • Custom report or registry

Pros:

  • Lives inside the workflow clinicians already use (no new logins)
  • Usage is often logged automatically, which simplifies measurement
  • Easier to sustain and scale, and journals trust it more
Cons:

  • You are at the mercy of IT queues
  • Governance committees may delay or water it down

If your intervention changes ordering, documentation, or communication, push hard to keep it inside the EHR or official hospital tools. Journals trust that more.

4.2. Path B: External tech (script, app, bot)

This is:

  • A standalone script or scheduled report
  • A lightweight app or dashboard outside the EHR
  • A notification bot on an approved messaging platform
Pros:

  • Faster to prototype
  • More control over design
  • No need to fight EHR build queues for simple things

Cons:

  • Adoption can be fragile (extra logins, new habits)
  • Harder to scale or sustain
  • More scrutiny about privacy and security

If you do this path:

  • Avoid storing PHI if at all possible
  • Use hospital-approved tools (e.g., internal Power BI, REDCap, SharePoint, internal web hosting)
  • Clear it with IT/security before you start using real data

4.3. Choose the minimum viable tech

Ask: “What is the smallest thing we can build that lets us test the core idea in a PDSA cycle?”

Not: “What would the ideal product look like?”

If you think you need an app, you probably need:

  • A better template
  • A digestible report
  • A dashboard
  • A notification system that rides on something people already use

Step 5: Get Oversight Right: QI vs Research vs “Nope”

You want this to be publishable. That means you must handle oversight the right way from the beginning.

5.1. Classify the work properly

Use your institution’s QI/IRB decision pathway. Many hospitals have a QI review committee or a short form: “Is this QI or research?”

General rule (not universal, but common):

  • If you are implementing a change to improve local care and you are not adding extra risk, burden, or randomization, it is usually QI.
  • If you are randomizing, withholding standard of care, or systematically testing multiple conditions, it leans toward research and may need IRB.

Do not decide this yourself. Send a 1–2 page summary with:

  • Problem statement
  • Proposed intervention
  • Measures
  • Data sources
  • Whether patient consent will be needed (usually not for pure QI)

Get an email or letter that says “This is QI” or “This requires IRB.” Save it. Journals ask.

5.2. Lock in your data plan

You need:

  • Which data fields
  • From which system
  • Over what time windows (baseline vs post)
  • How they will be extracted and stored

This is not bureaucracy. This is your Methods.

Example Data Collection Timeline for QI Project

  Period            | Median Minutes
  ------------------|---------------
  Baseline Month 1  | 45
  Baseline Month 2  | 44
  Baseline Month 3  | 46
  Post Month 1      | 35
  Post Month 2      | 32
  Post Month 3      | 30

That chart could be your primary outcome (median discharge summary time in minutes) plotted over 6 months. Plan it now, not 2 days before a conference deadline.
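The baseline and post windows can be enforced in the extract itself, so stray records never contaminate either period. A sketch with hypothetical dates and invented records:

```python
from datetime import date

def in_window(d, start, end):
    """True if date d falls in the closed interval [start, end]."""
    return start <= d <= end

# Hypothetical calendar windows -- lock these in the charter.
BASELINE = (date(2025, 1, 1), date(2025, 3, 31))
POST = (date(2025, 4, 1), date(2025, 6, 30))

# (discharge_date, minutes to completed summary) -- invented examples.
records = [
    (date(2025, 2, 10), 44),
    (date(2025, 5, 3), 31),
    (date(2025, 7, 9), 29),  # outside both windows -> excluded
]

baseline = [m for d, m in records if in_window(d, *BASELINE)]
post = [m for d, m in records if in_window(d, *POST)]
print(baseline, post)  # -> [44] [31]
```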


Step 6: Design the QI Cycles Around the Tech, Not the Other Way Around

You are not building a product and then “seeing what happens.” You are running structured tests of change.

6.1. Start with a very small PDSA

Plan:

  • One unit
  • One week
  • One shift team

Do not roll out hospital-wide on day one. That is how things implode.

Example PDSA #1:

  • Unit: 5E
  • People: 3 senior residents on the medicine service
  • Change: Use the smart discharge template for all discharges for 1 week
  • Data: Manual log of use + quick survey of users + process measure from EHR (template used or not)

Study:

  • Did they use it?
  • What broke?
  • Any obvious unintended harms?

Act:

  • Fix the most annoying barrier
  • Decide whether to expand, modify, or kill it

Document this PDSA in a simple table as you go. Not retrospectively.

6.2. Layer in more sophisticated measurement in later cycles

PDSA #2 and onward can:

  • Add another unit or team
  • Add automatic data pulls instead of manual logging
  • Add a balancing measure (e.g., coder feedback, error rates)

By PDSA #3 or #4, you want:

  • Stable usage of the tech in the defined population
  • Automated or semi-automated data extraction
  • A preliminary idea of impact

That sequence is literally your Results section in narrative form.


Step 7: Instrument the Tech for Measurement from Day 1

If your tech does not log its own usage, you are making life harder than necessary.

7.1. Basic instrumentation checklist

At minimum, track:

  • Who used it (de-identified if needed, or role-based)
  • When they used it (timestamp)
  • On which patient/unit (can be aggregated, no need to store identifiers forever)
  • Completion vs partial use (e.g., template opened vs signed note)

For EHR builds, this often already exists:

  • Template usage reports
  • Order set utilization logs
  • Report/dashboards access logs

Work with an analyst to define the log fields early. Put them into the project charter.
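Once those log fields exist, the process measure falls out of a few lines of aggregation. A sketch over a hypothetical event log (the field names are illustrative):

```python
# Hypothetical usage log: one row per discharge.
events = [
    {"unit": "5E", "template_used": True},
    {"unit": "5E", "template_used": True},
    {"unit": "5W", "template_used": False},
    {"unit": "5E", "template_used": True},
]

def adoption_rate(events):
    """Process measure: % of discharges using the new smart template."""
    used = sum(e["template_used"] for e in events)
    return round(100 * used / len(events), 1)

print(adoption_rate(events))  # -> 75.0
```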

7.2. Predefine your run charts

Do not wait until after to “see what the data show.” You want run charts ready:

  • X-axis: time (weeks or months)
  • Y-axis: outcome measure (e.g., median minutes, rate %)
  • Annotate when each PDSA / tech change occurred

Run Chart Example - Outcome Measure Over Time

  Week    | Value
  --------|------
  Week 1  | 48
  Week 2  | 46
  Week 3  | 45
  Week 4  | 40
  Week 5  | 36
  Week 6  | 34

That figure alone, with proper annotation, makes a QI paper feel serious.
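The y-values for a run chart like that are just per-period medians of the raw extract. A sketch with invented observations:

```python
from collections import defaultdict
from statistics import median

# (week, minutes) observations -- invented; in practice these come
# from the automated extract defined in your data plan.
obs = [(1, 50), (1, 46), (2, 47), (2, 45), (3, 44), (3, 46)]

def weekly_medians(obs):
    """Collapse raw observations into the run chart's weekly y-values."""
    by_week = defaultdict(list)
    for week, minutes in obs:
        by_week[week].append(minutes)
    return {w: median(v) for w, v in sorted(by_week.items())}

print(weekly_medians(obs))  # -> {1: 48.0, 2: 46.0, 3: 45.0}
```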


Step 8: Operationalize Adoption: Training, Nudges, and “Default Wins”

Tech that depends on heroics fails. Tech that rides on defaults survives.

8.1. Make the right thing the easy thing

Use these levers:

  • Default templates – new smart template appears first in the list
  • Pre-checked options – within order sets or flowsheets
  • Link in existing workflows – embed links in discharge checklist, sign-out tool, or existing shared drives

Do not ask users to remember. Bury the change where they already live.

8.2. Quick and dirty training that works

Skip the 1-hour lecture nobody will attend. Do this instead:

  • 5-minute demo at morning report or sign-out
  • 1-page screenshot handout emailed and posted
  • 1–2 local champions available for questions on rounds

And yes, you should measure training exposure as a process variable:

  • “Percentage of residents who attended a live demo”
  • “Number of views of the tip sheet page”

If adoption is poor, you will want to show whether it is a training failure or a tech design failure.


Step 9: Collect Qualitative Data While the Project Is Live

Publishable QI papers often rise or fall on how well they capture frontline experience.

Do not wait until the end to “get a few quotes.”

9.1. Simple ways to collect real feedback

  • Short, 3–5 question surveys with both Likert and open-ended items
  • Brief semi-structured interviews with 5–10 frequent users
  • Field notes: keep a simple log of comments you hear on rounds

Example survey items:

  • “Using the new discharge tool makes my workflow: much worse – worse – neutral – better – much better”
  • “What is the most frustrating aspect of the new tool?” (free text)
  • “Have you noticed any errors introduced or prevented by this tool?” (yes/no + comment)

You will not run thematic analysis like a social science thesis, but you will absolutely want a few well-chosen quotes that illustrate barriers and facilitators.
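Tabulating the Likert item above takes only a few lines. A sketch with invented responses:

```python
from collections import Counter

# Invented responses to the workflow item; not real survey data.
responses = ["better", "neutral", "better", "much better", "worse", "better"]

counts = Counter(responses)
positive = counts["better"] + counts["much better"]
pct_positive = round(100 * positive / len(responses))

print(counts["better"], pct_positive)  # -> 3 67
```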


Step 10: Prepare While You Go for a Clean, Fast Write-Up

The worst thing you can do is finish the project, then try to reconstruct everything from memory.

10.1. Maintain a living “manuscript backbone” document

One shared document, with sections:

  • Background: 3–4 bullet points with key literature and local problem stats
  • Problem statement: your original QI question
  • Methods:
    • Setting, population
    • Intervention (version history)
    • PDSA cycles summarized
    • Measures and data sources
  • Results:
    • Paste preliminary numbers as you get them
    • Keep versions of your run charts
  • Lessons and limitations:
    • Add bullets when things go wrong or surprise you

Update this during each PDSA review meeting. Not later.

10.2. Think ahead about target journals and standards

Most QI journals will expect alignment with SQUIRE guidelines.

Common homes for tech-enabled QI:

  • BMJ Open Quality
  • BMJ Quality and Safety
  • American Journal of Medical Quality
  • Joint Commission Journal on Quality and Patient Safety
  • Specialty-specific journals with QI sections (e.g., pediatrics, surgery)

Example Target Journals for Tech-Enabled QI

  Journal                                                | Typical Word Limit | QI Focus Level
  -------------------------------------------------------|--------------------|---------------
  BMJ Open Quality                                       | ~3000              | High
  BMJ Quality and Safety                                 | ~3500–4000         | Very High
  American Journal of Medical Quality                    | ~2500–3000         | Moderate
  Joint Commission Journal on Quality and Patient Safety | ~3500              | High

Download the SQUIRE checklist once and shape your backbone document to match it.


Step 11: Turn Your Project into a Convincing Story, Not Just a Data Dump

When it is time to write, you are not just proving that your tech worked. You are showing:

  • You identified a real, important problem
  • You designed a reasonable intervention embedded in context
  • You evaluated it systematically
  • You learned something others can reuse

11.1. Shape the narrative

Structure it like this:

  1. Background – Two paragraphs max. “Here is the gap. Here is why it matters. Here is why tech is a rational approach.”
  2. Local problem – Numbers from your baseline. Show that the issue was real.
  3. Intervention – Describe the tech in plain, operational terms. Include enough detail that another hospital could copy the concept, even if they do not use your exact tools.
  4. Study of the intervention – How you measured impact. Mention run charts, sampling, pre/post window definitions.
  5. Measures – Outcome, process, balancing. Clear definitions.
  6. Results – Tables + charts + brief qualitative findings.
  7. Interpretation – Why you think it worked or did not. How it fits with prior literature.
  8. Limitations – Single site, limited follow-up, EHR-specific, etc.
  9. Conclusions – Focus on practice implications, not product hype.

Pre vs Post Comparison of Primary Outcome

  Period            | Value
  ------------------|------
  Pre-Intervention  | 45
  Post-Intervention | 32

That pre/post bar, plus a run chart, plus a table of adoption rates, is usually enough for the quantitative spine.
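The headline number from that comparison is simple arithmetic, and worth stating precisely, since 45 down to 32 lands just under the 30% target:

```python
# Relative reduction for the pre/post medians above (illustrative values).
pre, post = 45, 32
reduction_pct = round(100 * (pre - post) / pre, 1)
print(reduction_pct)  # -> 28.9
```

Reporting 28.9% rather than rounding up to “about 30%” is exactly the sober framing Step 13 asks for.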


Step 12: Avoid the Classic Failure Modes of Tech QI Projects

I have watched dozens of these die in the same ways. You do not have to repeat them.

12.1. Failure mode: Tech in search of a problem

Symptom: Slides full of UI screenshots, almost no baseline data or clearly defined outcome.

Fix:

  • Always start with the measured problem and the target improvement.
  • If you cannot state the primary outcome in one sentence, you are not ready to build.

12.2. Failure mode: Adoption collapse after initial enthusiasm

Symptom: Great first two weeks, then usage drops to near-zero.

Fix:

  • Check: is it in the default workflow or does it require extra effort?
  • Add gentle enforcement: e.g., attending expectation, checklist integration.
  • Tighten the UX: remove steps, clicks, fields that are not essential.

12.3. Failure mode: Unpublishable “because we did not collect X”

Symptom: You realize at the end that you have no baseline, no balancing measures, or no consistent time windows.

Fix (right now, at the beginning):

  • Define 1–2 essential outcome measures
  • Define at least one balancing measure
  • Lock calendar windows for baseline and follow-up
  • Set up automatic logging or reports

Step 13: Generalize Without Over-Claiming

Editors are allergic to overhyped tech claims. You need a sober final step.

Focus your “future of healthcare” angle on:

  • How similar hospitals could implement the same type of intervention (even with different tools)
  • What ingredients made it work: tight problem definition, integration into existing workflows, measurement discipline
  • How this approach could expand: e.g., from one unit to multiple campuses, from one specialty to others

Acknowledge dependency on local factors:

  • EHR platform
  • Governance culture
  • Resident structure
  • Data infrastructure

That honesty actually makes your paper more persuasive, not less.


Your Next Action Today

Do not “think about” this. Do one concrete thing.

Today, open a blank page and write three sentences:

  1. One sentence defining the specific problem you want to fix with tech.
  2. One sentence describing the clinical setting and population.
  3. One sentence in the format: “Does implementing [tech change] reduce/improve [outcome] by [amount] over [time]?”

Then show those three sentences to one potential stakeholder (attending, QI lead, nurse manager, or IT analyst) and ask a direct question:

“Is this a problem worth solving here, and would you support me testing a small tech-based QI project around it?”

That is how ideas stop living in your head and start turning into publishable, real-world change.
