Good Doctors Make Mistakes
Patient safety exists to catch inevitable human error before patients pay the price.
The Most Dangerous Mistake I Ever Made
The most dangerous mistake I ever made in medicine didn’t come from ignorance.
It came at 3:12 a.m., halfway through a night shift, when my brain skipped a step it had performed a thousand times before.
I was trained to be careful. Thorough. Vigilant. To double-check. To stay sharp.
But fatigue is quiet. It doesn’t announce itself. It erodes.
And systems that rely on individual perfection eventually meet individual limitation.
Patient safety doesn’t exist because clinicians are careless.
It exists because even good clinicians, working in good faith, will eventually fail inside a complex system.
Patient safety was created to catch us before patients pay the price.
What Patient Safety Really Is
Patient safety didn’t emerge from policy meetings or theory.
It emerged from harm.
As healthcare grew more complex, adverse events rose alongside it — not because clinicians stopped caring, but because no individual can reliably outperform a system operating under constant pressure.
The World Health Organization recognizes patient safety as a global health priority for a simple reason:
Complexity without a failsafe is dangerous.
Burnout isn’t just a workforce issue.
It’s a patient safety issue.
Patient safety allows healthcare to evolve without pretending humans are infallible.
How Harm Actually Happens
Most medical errors don’t come from a single catastrophic decision.
They come from alignment.
The Swiss Cheese Model explains this clearly. Every layer of healthcare — protocols, technology, staffing, communication, environment — acts as a barrier against harm. Each layer also has weaknesses.
An adverse event occurs when those weaknesses briefly line up.
Every medical error starts long before the moment we notice it.
The insight isn’t that failure happens.
It’s that every weak point represents an opportunity to intervene before harm reaches the patient.
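The arithmetic behind the Swiss Cheese Model can be made concrete with a toy calculation. In this sketch, the layer names and failure rates are illustrative assumptions, not real data, and layers are assumed to fail independently:

```python
import math

def p_harm(layer_failure_probs):
    """Toy Swiss Cheese Model: harm reaches the patient only when
    every defensive layer fails at once. With independent layers,
    that probability is the product of each layer's failure rate."""
    return math.prod(layer_failure_probs)

# Hypothetical 10% failure rate for each of five defensive layers:
# protocols, technology, staffing, communication, environment.
five_layers = [0.10] * 5
two_layers = [0.10] * 2

print(p_harm(five_layers))  # five leaky layers: harm needs all five holes to align
print(p_harm(two_layers))   # strip away three layers and risk rises a thousandfold
```

The numbers are invented, but the structure is the point: each added layer, however imperfect, multiplies down the chance that every hole lines up at once.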
Making Sense of Patient Safety in the ED
In real life, patient safety shows up in four places:
• In who gets missed
• In how humans break under pressure
• In what patients see that we don’t
• In whether we actually fix what keeps hurting people
This framework doesn’t live in textbooks.
It lives on the floor — between triage, resuscitation bays, hallway beds, and hurried handovers.
Identifying Patient Safety Issues
The most powerful safety tool in any hospital isn’t software.
It’s culture.
A strong safety culture recognizes that mistakes are inevitable in high-stress environments and responds with learning, not punishment.
When clinicians feel safe reporting near misses, systems improve.
When they fear blame, hazards stay hidden.
Most hospitals use Safety Learning Systems where staff can report incidents, near misses, or unsafe conditions. These reports are reviewed by quality teams looking for patterns — not scapegoats.
Electronic medical records increasingly act as early warning systems, flagging abnormal labs, medication mismatches, or delayed follow-ups before harm occurs.
The challenge isn’t technology.
It’s engagement.
Human Factors: Designing for Reality
Human factors engineering studies how people actually work — under fatigue, interruption, noise, and cognitive load.
Instead of expecting perfection, it designs systems that anticipate limitation.
Human-factors principles help:
• Reduce medication errors through smarter ordering systems
• Improve handovers so critical information isn’t lost
• Streamline workflows so clinicians spend less energy navigating systems and more time caring for patients
Good design doesn’t make clinicians smarter.
It makes the safest action the easiest one.
Environmental Factors: Space Is Not Neutral
The physical environment shapes behaviour more than we admit.
Small design choices can reduce harm:
• Strategic placement of hand sanitizer reduces infection rates
• Reduced noise improves diagnostic accuracy
• Better lighting supports visual tasks
• Acuity-adaptable rooms prevent dangerous patient transfers
Patient safety isn’t just about what we do.
It’s about where we do it.
The Patient’s Perspective
For a field centered on patient care, healthcare has been slow to meaningfully involve patients in safety design.
That’s a mistake.
Patients and families see things clinicians miss: unclear communication, unsafe transitions, delays that feel invisible from inside the system.
Complaints and stories often surface system failures long before metrics do.
True patient engagement means partnership — listening not to assign blame, but to diagnose weakness.
The Part We Don’t Talk About
I still remember the patient involved in my own mistake.
I don’t write about it publicly.
But it permanently changed how I think about fatigue, systems, and responsibility.
The mistake wasn’t dramatic. It didn’t make headlines.
It was the kind that almost happens every day.
And that’s exactly why patient safety matters.
Because good clinicians will have bad nights.
And resilient systems protect patients anyway.
The Bottom Line
Patient safety isn’t about perfect doctors.
It’s about resilient systems.
It’s the quiet work of designing healthcare that assumes good people will have bad days — and protecting patients anyway.
And in emergency medicine, where chaos is constant, that work matters more than we like to admit.


No matter how good a clinician you are, you can't beat the law of large numbers.
The biggest error we can make is believing that humans can learn not to make errors. That is the heart of systems thinking.
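The law of large numbers here can be illustrated with a toy calculation. The per-decision error rate and decision counts below are assumptions chosen for illustration, not measured figures:

```python
def p_at_least_one_error(p_per_decision, n_decisions):
    """Probability of at least one error across n independent decisions."""
    return 1 - (1 - p_per_decision) ** n_decisions

# Hypothetical clinician who is right 99.9% of the time,
# making roughly 200 decisions per shift, ~250 shifts per year.
per_shift = p_at_least_one_error(0.001, 200)
per_year = p_at_least_one_error(0.001, 200 * 250)

print(per_shift)  # meaningful chance of an error in a single shift
print(per_year)   # over a year, an error is a near-certainty
```

Even a 99.9% accuracy rate, sustained over tens of thousands of decisions, guarantees errors eventually. That is the mathematical case for systems that catch them.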