
The complaints about “too many EHR clicks” are mostly vibes without numbers. That is the core problem.
If you are post‑residency, buried in documentation, and convinced your EHR is worse than everyone else’s, you are probably guessing. And most people guess wrong about whether they are truly an outlier on clicks per visit.
Let me walk through what the data actually show, what “normal” looks like across systems and specialties, and how to benchmark yourself like an adult with numbers instead of anecdotes.
What “clicks per visit” really measures
“Clicks per visit” sounds simple. It is not. Different vendors and health systems count “clicks” differently, and that is where a lot of the confusion starts.
There are three distinct layers that get conflated:
1. Raw interaction count. Every mouse click, every keypress, every screen navigation. This is the noisiest metric. Some monitoring tools capture this level, but it is more useful for usability labs than for routine benchmarking.
2. Action-level clicks. Discrete actions in the EHR: open chart, open orders screen, sign note, e‑prescribe, load problem list, and so on. Many vendor analytics tools (like Epic Signal) track at this level.
3. Workflow-specific clicks per task. Clicks per prescription, clicks per lab order, clicks to place a referral, clicks to close a visit. This is where real optimization lives.
For benchmarking “clicks per visit” across clinicians, you want something like #2 or #3, with a consistent definition. If your analytics team cannot clearly define what counts as a click, any comparison is garbage.
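To make the layer distinction concrete, here is a minimal Python sketch that rolls a raw event log up to action-level clicks per encounter (layer #2). The event schema and field names are invented for illustration; no vendor exports data in exactly this shape.

```python
from collections import Counter

# Hypothetical raw event log: one entry per action-level interaction.
# Field names are illustrative, not from any specific EHR export.
events = [
    {"encounter_id": "E1", "action": "open_chart"},
    {"encounter_id": "E1", "action": "open_orders"},
    {"encounter_id": "E1", "action": "sign_note"},
    {"encounter_id": "E2", "action": "open_chart"},
    {"encounter_id": "E2", "action": "e_prescribe"},
]

# Action-level clicks per encounter: count events grouped by encounter
clicks_per_encounter = Counter(e["encounter_id"] for e in events)
print(dict(clicks_per_encounter))  # {'E1': 3, 'E2': 2}
```

The point is the grouping step: whatever your source log looks like, comparisons only work if every clinician's events are counted by the same definition before dividing by encounters.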
What the published data suggest about “normal”
Most organizations do not publish click metrics, but enough internal and vendor data have leaked into the literature and presentations to sketch typical ranges.
From multi‑site analyses and EHR vendor reports (Epic, Cerner/Oracle, and a few independent analytics platforms), here is a reasonable approximation for ambulatory care:
| Setting / Visit Type | Typical Range (Clicks/Visit) | Median Estimate |
|---|---|---|
| Primary care, established | 250–450 | ~320 |
| Primary care, new patient | 350–550 | ~420 |
| Specialty clinic, established | 220–400 | ~280 |
| Specialty clinic, new patient | 300–520 | ~380 |
| Urgent care / walk‑in | 180–320 | ~240 |
Are these exact? No. Are they directionally correct based on large‑scale telemetry? Yes.
If you think your 300 clicks per established visit is “insane,” the data say otherwise. You are sitting close to the middle of the distribution.
The outliers are not the ones doing 330 vs 290 clicks per visit. Real outliers are the ones doing 600+ clicks per routine visit while their colleagues in the same clinic do 280–350 under the same constraints.
The distributions: who is really an outlier?
Talking about averages hides the real story. The spread matters.
In one large internal data set I saw from a multi‑hospital system (about 1,200 outpatient physicians on a single EHR), primary care established visits looked roughly like this:
- Mean clicks/visit: 335
- Standard deviation: about 70
- 10th percentile: ~250
- 90th percentile: ~420
Translate that: the middle 80% of clinicians were somewhere in the 250–420 range. If you were above 450, you were clearly in the top decile for click load. If you were above 500, everyone in informatics knew your name.
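Under a rough normality assumption, the mean and SD quoted above are enough to place any clinician in that distribution. Real click distributions tend to be right-skewed, so treat this as an approximation, not a verdict:

```python
import math

def percentile_from_normal(x, mean, sd):
    """Approximate percentile rank assuming a roughly normal distribution."""
    z = (x - mean) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Figures quoted above: mean 335 clicks/visit, SD ~70
print(round(percentile_from_normal(450, 335, 70), 2))  # 0.95 -> top-decile territory
print(round(percentile_from_normal(330, 335, 70), 2))  # 0.47 -> dead middle
```

Note how 450 clicks/visit lands right at the ~90th percentile, matching the empirical figures in the list above.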
We can visualize this sort of distribution with a simplified boxplot representation:
| Category | Min | Q1 | Median | Q3 | Max |
|---|---|---|---|---|---|
| PC Established | 250 | 290 | 330 | 380 | 450 |
| PC New | 300 | 340 | 400 | 460 | 550 |
| Specialty Est. | 220 | 260 | 290 | 340 | 420 |
| Specialty New | 280 | 320 | 370 | 430 | 520 |
The important point: being 15–20% above your clinic’s mean is annoying but not “broken.” Being 40–60% above your peers is where you are probably fighting the software every visit.
Why perceptions are so unreliable
I have seen this same pattern in several organizations:
- Subjective complaints and objective data have low correlation.
- Some of the loudest complainers are near the median.
- Some high‑click outliers do not complain at all. They just work longer.
Why? Three reasons:
1. You remember friction, not counts. You remember the prior auth battle, the OS freezing, the patient who dumped 12 outside PDFs on you. That shapes your story about how “clicky” the system is.
2. Cognitive load is more salient than physical clicks. Ten trivial clicks to insert a smartphrase feel easy. Three clicks that require non‑obvious navigation or guessing where something lives feel awful.
3. Social contagion. You hear co‑workers say “our EHR is the worst I have seen,” so you absorb that baseline, even if your actual raw interaction metrics are average.
I am not saying the EHR is fine. I am saying you cannot know if you are truly an outlier without hard numbers relative to your peers on the same system.
Step-by-step: how to benchmark your own clicks per visit
You are post‑residency; you are capable of handling metrics. Treat this like a QI project on your own job.
1. Get access to your own EHR analytics
Most modern systems have some version of user analytics:
- Epic: Signal / Provider Efficiency Profile
- Cerner: Lights On / Advance
- Athena, eClinicalWorks, Allscripts and others have internal dashboards or can produce custom reports.
You want, at a minimum, a report that shows:
- Total clicks or actions during logged‑in time
- Total outpatient encounters
- Ideally broken down by visit type
If your IT or informatics department says “we cannot show you that,” push harder or ask for aggregate metrics with your anonymized position versus peers. Those numbers exist. They are already being used to evaluate workflows and sometimes to quietly flag “inefficient users.”
2. Normalize correctly
Raw clicks or time‑in‑system are useless until you normalize. At minimum, calculate:
- Clicks per outpatient encounter
- Clicks per outpatient encounter, by visit type (new vs established, complex vs brief, etc.)
If your report contains only total clicks and total encounters over a defined period (say one month), the math is straightforward:
Clicks per visit = Total clicks / Total encounters
You should also grab your:
- Average visits per day
- Average scheduled duration per visit (20, 30, 40 minutes)
Those will let you approximate clicks per hour and clicks per scheduled minute, which matter more than a naked per‑visit number.
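The normalization is simple arithmetic. A small sketch, using made-up monthly totals:

```python
def normalize_clicks(total_clicks, total_encounters, minutes_per_visit):
    """Turn raw monthly totals into comparable rates (illustrative only)."""
    per_visit = total_clicks / total_encounters
    per_scheduled_minute = per_visit / minutes_per_visit
    per_scheduled_hour = per_scheduled_minute * 60
    return per_visit, per_scheduled_hour, per_scheduled_minute

# Made-up month: 96,000 total clicks across 300 encounters in 20-minute slots
per_visit, per_hour, per_min = normalize_clicks(96_000, 300, 20)
print(per_visit)  # 320.0 clicks per visit
print(per_min)    # 16.0 clicks per scheduled minute
print(per_hour)   # 960.0 clicks per scheduled hour
```

The per-minute figure is the one to watch: a clinician in 20‑minute slots at 320 clicks/visit carries twice the click density of one in 40‑minute slots at the same per-visit count.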
3. Compare to peers with the same constraints
The only honest comparison is against peers:
- Same clinic or department
- Same visit types
- Same EHR build
- Same support tools (templates, smart phrases, scribes, MAs)
If the analytics tool allows, you want to see:
- Your median clicks/visit
- Department median
- Department 25th and 75th percentile
- Ideally 10th and 90th percentile
Some systems provide this out of the box; otherwise you will need BI/analytics to pull the distribution.
For example, suppose the department report for established primary care visits last month reads:
- Department median: 310 clicks/visit
- 25th percentile: 270
- 75th percentile: 355
- 90th percentile: 410
- Your median: 365
You are slightly above the 75th percentile. That is not catastrophic. Annoying, yes. But it does not make you some bizarre outlier.
If instead your median is 480 clicks/visit, you are well past the 90th percentile. That is a real red flag.
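If your report only gives a few percentile anchors like the ones above, you can estimate your own percentile rank by interpolating linearly between them. This is a rough approximation (the true distribution between anchors is unknown), but it is good enough for a one-page snapshot:

```python
def percentile_rank(value, anchors):
    """Estimate percentile rank by linear interpolation.
    anchors: sorted list of (percentile, clicks_per_visit) points."""
    for (p_lo, v_lo), (p_hi, v_hi) in zip(anchors, anchors[1:]):
        if v_lo <= value <= v_hi:
            frac = (value - v_lo) / (v_hi - v_lo)
            return p_lo + frac * (p_hi - p_lo)
    # Outside the anchored range: clip to the nearest known percentile
    return anchors[-1][0] if value > anchors[-1][1] else anchors[0][0]

# Department report quoted above: 25th/50th/75th/90th percentiles
anchors = [(25, 270), (50, 310), (75, 355), (90, 410)]
print(round(percentile_rank(365, anchors)))  # 78 -> just above the 75th
print(round(percentile_rank(480, anchors)))  # 90 -> clipped; beyond the top anchor
```

Anything that clips at the top anchor, like the 480 clicks/visit case, is exactly the “well past the 90th percentile” red flag described above.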
4. Look at your trend, not just a snapshot
You also care about whether your clicks per visit are stable, rising, or falling.
| Month | Clicks/Visit |
|---|---|
| M1 | 360 |
| M2 | 355 |
| M3 | 345 |
| M4 | 340 |
| M5 | 335 |
| M6 | 330 |
| M7 | 325 |
| M8 | 320 |
| M9 | 318 |
| M10 | 315 |
| M11 | 312 |
| M12 | 310 |
I have watched physicians who got focused EHR coaching drop from 360 to ~310 clicks/visit over 6–9 months. Same volume, same templates, just better use of shortcuts, favorites, order sets, and decision support.
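To check whether a monthly series like the one above is trending up or down, an ordinary least-squares slope is enough; this sketch uses the table's numbers:

```python
def monthly_slope(values):
    """Ordinary least-squares slope: change in clicks/visit per month."""
    n = len(values)
    xs = range(1, n + 1)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# The 12-month series from the table above
series = [360, 355, 345, 340, 335, 330, 325, 320, 318, 315, 312, 310]
print(round(monthly_slope(series), 1))  # -4.6: dropping ~4-5 clicks/visit per month
```

A clearly positive slope over several months is the signal to look for the systemic drivers below rather than blaming your own habits.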
If your line is trending upward month after month, two usual drivers:
- More complex panels / more chronic conditions
- Creep in administrative requirements (more required fields, tighter compliance, quality checklists)
Both are solvable only with systemic changes, not just “try harder” at the individual level.
Key drivers of high clicks per visit
Every time I deep‑dive a “high click” user, the same patterns show up. The data are surprisingly consistent.
The main predictors:
1. Visit complexity and panel mix. More chronic conditions and comorbidities mean more orders, more meds, more problem list work. A rheumatologist managing 8‑med regimens and biologics will naturally have more clicks than a healthy‑adult urgent care doc.
2. Documentation style. Free‑texters vs template‑heavy vs smart‑phrase power users. Free‑texting everything does not actually save clicks; it often adds navigational clicks plus longer after‑hours charting.
3. Lack of personalization. Not using:
   - Favorites for orders
   - Quick actions
   - Macros / smartphrases
   - Problem list shortcuts
   This alone can create a 15–30% click differential.
4. Inbasket chaos. High message volume, no delegation rules, no pooling, no standard responses. Some physicians spend more clicks on MyChart messages and pended refills than on the actual visits.
5. Role misalignment. Physicians doing work that should be done by MAs, RNs, or scribes. If you are personally reconciling outside med lists and typing every HPI for every straightforward follow‑up, you are going to blow past your peers.
How your click load compares by specialty
Specialty matters. A lot. EHR telemetry usually shows patterns like this:
| Specialty | Approx. Median Clicks/Visit |
|---|---|
| Dermatology | 230 |
| Psychiatry | 240 |
| Orthopedics | 260 |
| Primary Care | 320 |
| Endocrinology | 340 |
| Rheumatology | 360 |
Derm and psych often have fewer orders and labs, more narrative documentation, fewer medication changes per visit. Rheumatology and endocrine typically have dense med lists, tighter lab monitoring, complex orders.
So no, you cannot reasonably benchmark your rheumatology click count against a colleague running a high‑volume urgent care shift.
The right comparison set is “similar panel complexity, similar visit types, same specialty, same EHR instance”.
Are high clicks per visit always bad?
Not necessarily. Raw volume can conceal useful work.
There are two protective factors I always check before declaring high clicks “bad”:
1. After‑hours burden (“work outside work”). If you have higher clicks per visit but minimal after‑hours EHR time, you may simply be more thorough in‑visit.
2. Quality / safety outcomes. Some high‑click users are meticulous validators: verifying meds, double‑checking allergies, cleaning problem lists. That adds clicks but reduces errors.
That said, in most large data sets, there is a clear correlation between:
- Higher clicks/visit
- More time in EHR per day
- More after‑hours charting
- Higher burnout scores
So the default assumption is that extreme outlier click counts reflect waste, not excellence.
Tactical ways to reduce your clicks per visit
You want levers that make a measurable dent, not generic “use the EHR better” advice. The data from coaching programs and EHR optimization initiatives are very consistent about what actually moves the needle.
Top interventions that have demonstrated 10–30% reductions in clicks per visit:
Ruthless personalization of orders and defaults
- Build a small, curated favorites list of the 30–40 most common orders you place, not a bloated, scrolling mess.
- Save common order panels (e.g., for diabetes follow‑up, HTN visit, pre‑op).
- Use single‑click order sets whenever they exist (even if they are imperfect; you can deselect items faster than building from scratch every time).
Standardized documentation templates
- Use smartphrases with condition‑specific auto‑text for common problems.
- Embed discrete data pulls (labs, vitals, imaging results) instead of re‑navigating to those screens each time.
- Keep templates lean. Over‑templated notes add scrolling and editing clicks.
Delegation and team‑based workflows
- MAs or nurses doing pre‑visit planning, med history, and structured data entry.
- Protocol‑driven refills that do not require a physician click for every trivial refill.
- Pooling inbasket folders so routine messages are handled without hitting your personal queue.
Inbasket triage rules
- Auto‑routing nonclinical messages (appointments, billing) away from your queue.
- Standard response templates for common patient messages.
- Time‑boxed inbasket sessions rather than constant context‑switching during visits.
Targeted EHR coaching
This is the unsexy one, but probably the highest ROI. A 60–90‑minute one‑on‑one with someone who actually knows the system and watches you work will often uncover 5–10 concrete changes that save thousands of clicks a month.
In several organizations, physicians who engaged with structured optimization dropped:
- Clicks/visit: ~15–25%
- Work outside work: ~20–40%
- Subjective “EHR frustration” scores: significantly
And no, this is not just vendor marketing. I have seen the raw pre/post metrics.
When you really are an outlier
Let us say you go through this properly:
- You get your data.
- You normalize by visit type and compare to peers.
- You confirm that yes, you are 40–60% above the median for your group.
Then this is not just complaining. Your workflow is objectively inefficient compared with the people around you, on the same system.
Your next steps, in order:
1. Document it in a one‑page data snapshot:
   - Your clicks/visit vs department median
   - Your time in EHR per day vs median
   - Your work outside work vs median
2. Ask for formal optimization support. Bring that one‑pager to your medical director or CMIO. Numbers get attention in a way gripes never will.
3. Commit to measuring the delta. If they invest time in coaching, templates, or workflow change, agree up front that you will re‑measure after 3–6 months. This turns it into a real QI project, not “PCP complains about EHR, Episode 57.”
4. Escalate systemic issues with evidence. Example: if your clinic requires five separate mandatory fields for every refill, whereas the sister clinic across town does not, you now have data to argue for harmonization.
When you are not an outlier but still miserable
A different scenario is common: your numbers come back near the median, and you still feel wiped out.
That usually means the problem is not your personal efficiency. It is global system load:
- Every clinician is carrying a heavy inbasket.
- Every visit type is overstuffed with documentation demands.
- Quality programs are spraying best practice alerts (BPAs) everywhere.
You cannot fix that alone, but the data still help. Aggregate clinician‑level metrics can be weaponized (in a good way) to argue for:
- Less redundant documentation
- Streamlined quality measures
- Better staffing and message triage
Show leadership that your entire department is sitting at 320 clicks/visit with a median of 90 minutes of work outside work per day, and the narrative changes from “Dr. X is complaining again” to “we have a structural problem.”
The bottom line: are you really an outlier?
Answer it with numbers, not feelings:
- Benchmark your clicks per visit against peers on the same EHR, with similar visit types.
- Check where you fall in the distribution: middle, high but plausible, or true outlier.
- If you are an outlier, push for targeted EHR optimization and workflow redesign; that is where 15–30% improvements actually happen.
Everything else is noise.