

How mistakes can save lives: one man’s mission to revolutionise the NHS - wormold
http://www.newstatesman.com/2014/05/how-mistakes-can-save-lives

======
elemeno
A very interesting article, although I can't help but feel that it tars
doctors with slightly too broad a brush.

For the sake of context: I'm in the UK (so med school is a five- or six-year
degree, not undergrad plus postgrad as in the US). My fiancée is an Obstetric
Surgeon (aka an Ob/Gyn) who's halfway through specialist training (six years
out of med school, three years training in her specialty). My father is an
Oncologist (clinical, not surgical, though with a lot of interaction with
surgeons, as would be expected) in his mid-sixties, so med school was forty-odd
years ago and he's been a consultant for over thirty years. He is, however,
both research-focused and head of his department, so he's far more aware of
current trends in the NHS than a pure clinician, who wouldn't have much reason
to keep up to date with the current literature or thinking in the NHS.

It's interesting to compare and contrast the mindsets and training they both
have. A large number of the points raised in the article have been, from what
I can tell, incorporated in the training that doctors receive these days -
there have been changes to how surgical teams work to help break down barriers
between the surgeon(s) and everyone else in the room. In the current NHS
thinking the idea of surgeon-as-god is long gone, and even little measures
like everyone in the operating theatre introducing themselves before the
operation starts go a long way to breaking down the formal barriers between the
staff.

However, medicine is a very traditional discipline, and it can be hard to
change the way people practise when you're trying to fundamentally alter something
they've been doing for the last twenty years. From the stories I hear from my
father, there are more than a few surgeons of his era who do take the view
that they can do no wrong and errors happen as a result. Thankfully, people of
that mindset are mostly approaching retirement age and the generation of
doctors below them seem to be more open to new ideas - something I suspect is
an artefact of being trained during the eighties onwards when lots of fields
of medicine became a lot more research focused and less interested in doing
things because "that's how it's always been done". It's quite likely also down
to increased collaboration with colleagues around the globe suddenly becoming
possible, so new ideas spread faster.

It's also difficult to talk about the NHS as a whole. Every NHS trust (every
geographical area in the UK falls under a particular trust that encompasses
one or more hospitals) is slightly different, as there's a degree of autonomy
within each trust. Each deanery (the group of hospitals that a doctor does
their specialist training in) is different too - the quality of training you'd
get in a deanery which is recognised as delivering world-class training in
your specialty is entirely different from what you'd get if you end up at a
random deanery that has little connection to your specialty. Like any other
area of academia, the best people tend to clump in certain places.

This means that, to the detriment of the NHS as a whole, just because some
hospitals and trusts change the status quo and improve matters, it doesn't
mean that it will spread immediately to everywhere else. It's sad, but perhaps
not too surprising when you consider the size of an organisation like the NHS
- were you to add them all together, it's an organisation with something in
the order of 1.4 million employees, so it perhaps shouldn't be a surprise that
change is often slow. Doubly so when you consider that it's a profession that
likes to see an idea proved before it's implemented.

On the upside, my fiancee will be spending the next year (from Sept) as an
education and simulation fellow where she'll be designing and running
simulations for junior doctors. Making sure that they can recognise when the
blinders start to slip on and empowering them to speak up when they think that
something is going wrong, even if it means telling someone several decades
their senior to stop what they're doing, is surely one of the most useful
things we can do to ensure that problems like the one in the article are kept
to a minimum.

~~~
vacri
On the topic of The Way Things Are Done: a little over a decade ago I
remember talking to a surgeon about Weary Dunlop, an Australian folk hero. A
surgeon credited with keeping a great many men alive during internment in a
Japanese PoW camp in WWII, he rightfully gets a lot of accolades.

The surgeon I was talking to said that it was probably a good thing when
Dunlop died when he did, as there were increasing murmurs in the surgical
field about the extremely poor quality of his work, some floating the
possibility of him being deregistered. The words my friend used were "his
techniques were fine for the jungle, but his skills haven't advanced since
then" ('this is the way things are done'). The two examples he gave were that
when doing an amputation, he wouldn't leave a flap of skin to cover the stump;
and that his sutures were "almost thick enough to be twine", leading to huge,
ropey scars where other surgeons were routinely making their work invisible.
"Old soldiers love those scars, because they can point to them and say 'Dunlop
worked on me', but it's simply bad practice". Basically my surgical friend was
saying 'can you imagine being the person who stripped a folk hero of the
license to do the exact thing he's a folk hero for doing?'

------
lbarrow
Great article. The checklist bit speaks to me in particular. I once worked on
a team that was responsible for maintaining a medium-sized legacy application.
After one-too-many failed deploys, we instituted a pre-deployment checklist.
The checklist consisted of things like:

* Have all the tests run and passed? Is CI green?

* In addition to the tests passing, has a third party (not the devs themselves) clicked around and kicked the tires on any new features?

* What is the rollback plan? Can we roll back cleanly?

* What is in this set of changes? Should all of these changes go out?

These checks seem stupid, but the simple act of taking a breath and slowly
going down the list really seemed to make a big difference. After instituting
the list we had almost no failed deployments.

I don't know if this is an approach that generalizes. I tend to think that for
clean, well-maintained and well-tested web applications, a continuous
deployment approach is safer and faster in the long run. But when you're
working on legacy code, a checklist works wonders.
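A list like the one above is easy to turn into a tiny gate script so the
checklist can't be skipped in a hurry. A minimal sketch - the questions are my
paraphrase and the names are invented, not the original team's tooling:

```python
# Minimal pre-deployment checklist gate: the deploy proceeds only when
# every item on the list has been explicitly confirmed.

CHECKLIST = [
    "Have all the tests run and passed? Is CI green?",
    "Has a third party (not the devs) kicked the tires on new features?",
    "Is there a rollback plan? Can we roll back cleanly?",
    "Should all of the changes in this set go out?",
]

def may_deploy(answers):
    """Given one True/False per checklist item, return (ok, failed_items)."""
    failed = [q for q, confirmed in zip(CHECKLIST, answers) if not confirmed]
    return (not failed, failed)

if __name__ == "__main__":
    ok, failed = may_deploy([True, True, False, True])
    if not ok:
        print("deploy blocked:")
        for q in failed:
            print(" -", q)
```

The point isn't the code, it's the forced pause: every item has to be answered
before anything ships.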

~~~
masklinn
And software has an even bigger advantage (which makes not using checklists
inexcusable): the entire process _can be encoded in software_.

You can restrict deploys to only CI-checked revisions, you can mandate some
sort of functional testing (with the tester stamping the revision), you can
check that the rollback procedure works and works cleanly, you can put the
changes list in front of somebody and ask that its items be validated.

The field seems to be moving very slightly forward, but it's not always easy
to get people to relinquish useless control to automated systems. I'm
currently trying to get an integration bot[0] set up in my company and the
pushback surprised me. But fundamentally, you can get something similar for
deploys, ask for a deploy and it kicks off the checklist automatically,
mailing people and updating statuses on dashboards as needed.

And then you can even start collecting stats and charting time to deploy or
amounts of rollbacks.

[0] by which I mean something similar to the Rust project's "bors": when
changes are proposed, the bot requires validation by a "core developer" (who
may be able to self-validate; it's only software, so the validation process for
a change is flexible), runs the linter, runs the tests, sends the changes to
functional testing for validation[1], and merges the changes to "mainline".
Humans only get involved where they make sense and _can not_ be automated
away: proposing and reviewing (technically and functionally) the changes.

[1] optional, and requires a significant functional validation/testing team,
but definitely possible
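To make the idea concrete, the deploy gate described above can be sketched as
plain code. Everything here is hypothetical illustration - the `Revision`
fields and the tester "stamp" are invented, not any real CI system's API:

```python
# Sketch of an automated deploy checklist: the tool, not a human,
# refuses to deploy a revision that hasn't cleared every check.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Revision:
    sha: str
    ci_green: bool = False               # set by the CI system
    tester_stamp: Optional[str] = None   # who functionally validated it
    rollback_verified: bool = False      # rollback rehearsed against staging

def deploy_gate(rev):
    """Return unmet checklist items; an empty list means deploy may proceed."""
    problems = []
    if not rev.ci_green:
        problems.append("CI has not passed for " + rev.sha)
    if rev.tester_stamp is None:
        problems.append("no functional-testing stamp on " + rev.sha)
    if not rev.rollback_verified:
        problems.append("rollback procedure not verified for " + rev.sha)
    return problems
```

From there, "ask for a deploy and it kicks off the checklist automatically" is
just wiring this check into whatever triggers the deploy, plus the mails and
dashboard updates.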

------
pling
Absolute hero. The NHS tried their hardest to be as negligent as possible with
my wife. It's almost a copycat situation to be honest but she was lucky. Three
days on ICU and the surgery failed, plus she had an internal bleed where they
did a "stupidly bad job" according to our current surgical reg. The surgeons
were show-boating in this case.

She's been through 4 years of shit getting a diagnosis to start with, thanks
to several GPs being uninterested or just incompetent, then a year of
immobility after a fucked-up surgery, and now faces at least 2 years of
recovery and rehabilitation.

When it's over we're rolling out the legal eagles but based on past experience
I expect to have to do a few weeks of forensics first and possibly get a court
order to hand medical records over. Historically we've requested them and
they've either been too expensive to prepare (ha) or completely disappeared.
It's like chasing cockroaches with a torch.

And this is a minor surgical issue (hernia).

~~~
matthewmacleod
_Absolute hero. The NHS tried their hardest to be as negligent as possible
with my wife._

I'm sorry that your wife had poor care - nobody should have to go through
that.

But isn't the approach you've described exactly what we want to avoid in a
healthcare system? Blame is almost poisonous in its ability to encourage
people to cover up the truth, "misplace" documents, make excuses, and so on -
this often actively hinders improvement.

Applying the principles of air accident investigation to the NHS is absolutely
something I applaud, because the focus _should_ be on why accidents happen,
and how they can be prevented — not on who is responsible for them, outside of
the most egregious cases of professional failure. Most medical professionals
are competent and skilled, and really try rather hard to avoid causing harm to
their patients. But mistakes and accidents will always happen regardless of
this.

I'm actually very surprised that such a system isn't already in place in the
UK. We've already got an independent body which investigates rail accidents,
for example - the RAIB looks into all major irregularities or failures on the
UK rail network, and rather than finding fault, identifies root causes for
failures. That often includes things like the design of systems to reduce the
possibility of human error, or even such things as shift patterns which risk
causing sleep deprivation. They then issue recommendations for improvement to
different bodies as required, and I think they have some legal clout to follow
up if they're not implemented.

It seems like a system similar to that would be effective applied to the NHS,
but I'm _very_ cautious of heading down the same litigious route as the US.

~~~
paulhauggis
"Most medical professionals are competent and skilled, and really try rather
hard to avoid causing harm to their patients. But mistakes and accidents will
always happen regardless of this."

I don't know if I believe this. I have found many medical professionals (not
all of course) to be arrogant and cocky and even if you pointed directly to
the cause of the problem, would most likely not think that those rules apply
to them.

"not on who is responsible for them"

Mistakes happen because of people. If there isn't some sort of punishment or
blame, the majority of people will just continue on with what they are doing
(and the mistakes will continue to happen).

We need to blame people, so they change their behavior and have fewer accidents
in the future. We also need to know why it happened.

"Blame is almost poisonous in its ability to encourage people to cover up the
truth"

If someone covers up the truth, they should be fired immediately. I honestly
don't want someone like this working anywhere near the medical field... and
neither should you.

Children cover up their mistakes and lie; adults own up to their problems and
admit when they made a mistake.

Also, the reason pilots make fewer mistakes is that a computer pretty much
does 99% of the work. This is not the case with surgery and medicine. Humans
still do most of the work.

~~~
ISL
Mistakes happen because of people's errors, not 'because of people'. To blame
a person, rather than their actions, is to alienate them.

You want everyone on your team to be focused on quality outcomes, not on
avoiding blame.

~~~
pling
The NHS currently runs on avoiding blame which is the problem.

When someone is made accountable and the first thing they do is establish a
process to avoid blame then you need to ask the question: do these people have
my best interests at heart or are they avoiding being accountable?

On many occasions the outcome is to avoid a breach rather than to fix a
problem.

~~~
ZoFreX
The NHS also runs on overworking its employees and assuming they will still be
effective at the end of a 20 hour shift with no sleep. If anyone making a
completely understandable mistake due to the terrible conditions they have to
work in is blamed and fired for it, would that really help? I think it's
understandable for people to cover their own ass if they are expected to be
infallible even under poor conditions - everyone else is doing it, if you
don't the only difference it makes to the NHS is that it's down one employee.
This is just one of many ways that what might first appear to be
a single human failing is down to a larger institutional problem.

~~~
pling
No they don't. That's the nursing unions and the press talking. It's total
bollocks.

The decision makers and the senior staff work considerably shorter shifts.
With respect to the surgical and anaesthesiologist staffing, most of them
(80%+) are contracted out and are private staff working on contract to the NHS
now. A surgeon for example will probably spend two days a week on NHS cases,
one day being surgical and the other being consultant and that's all you see
of them. The remainder of the time is lucrative private care and occasionally
teaching (which is well paid as well).

There are a few exceptions to this i.e. StR's and consultants who seem to
migrate into a speciality then leave after a few years. They all take the
extra shifts because they're paid for the privilege of up to a 50% bonus.
They're overworked because they're chasing the cash.

The REAL overworked staff are mainly the support staff, i.e. nursing, supply,
and logistics. Most of them are contracted out now as well in the larger PCTs.

Even with all that, no one should be allowed to cover their arses as it turns
into a vicious circle of unaccountability. It's a poor excuse which must not
be allowed to continue in the profession (or ANY profession to be honest).

------
ColinWright
I wrote about this a few months ago:
[https://news.ycombinator.com/item?id=7655018](https://news.ycombinator.com/item?id=7655018)

The top comment there starts like this:

    
    
        What I object to here is the core principle that
        there's something to fix in software development
        terms here.
    

This is something I feel strongly about, and yet I don't seem to be able to
get existing software practitioners to take it seriously. They're happy that
everyone else needs it, but convinced that software doesn't.

A bit like many of the doctors.

~~~
tormeh
The core difference is that programmers do new things every time. Pilots can
write checklists for things to do when they fly the plane, because landing
and taking off are usually pretty similar wherever they may be. Doctors can do
the same. Software is a bit more like engineering in that checklists are largely
useless - it's not that we think we're infallible, it's just that we have no
idea what should be on the checklists since we don't have standard procedures
to write checklists for.

If you look at critical software development they're big on analysis and
testing and those kinds of things. Not the same.

~~~
rodgerd
> The core difference is that programmers do new things every time.

Boy, it's just as well human biology is so perfectly deterministic and well-
understood it can be trivially mapped out, unlike the vastly more complicated
field of software development.

~~~
tormeh
You've been watching too much House MD. Most operations are pretty standard.

------
pja
See also: Atul Gawande's books on medical practice in the US:
[http://atulgawande.com/books/](http://atulgawande.com/books/) including "The
Checklist Manifesto".

You can have the best people in the world, but without a safety culture that
abstains from blame and works to eliminate sources of error over time, your
hospital will kill people that didn't have to die.

~~~
carbocation
Yes, checklists do seem important, but keep in mind the counterpoint data (new
from just this year):
[http://www.nejm.org/doi/full/10.1056/NEJMsa1308261](http://www.nejm.org/doi/full/10.1056/NEJMsa1308261)

Now, I've seen studies with positive results and neutral results from
checklists, but never one with a negative (harmful) result, so on balance the
checklist approach seems useful or at least not harmful.

------
robbiep
When I started work as a doctor at the start of this year we watched a
training video on this exact case, narrated by the husband. The parallels
between aviation and medicine are very interesting, and we would do well to
incorporate aviation's hard-won lessons.

I think the major reason medicine lags here is that airline disasters are
highly visible, so airlines and pilots are forced to respond, whereas medical
disasters are usually seen as 'one-offs'. Even with root cause analysis of
failures, injuries sustained by one patient at a time don't become as visible,
so systemic failures are harder to identify.

------
enscr
Atul Gawande's book, The Checklist Manifesto, is an excellent source on the
topic at hand. Surprisingly, the author of the New Statesman article didn't
mention it at all. The New Yorker article that was probably the inspiration
for Gawande's book is worth reading. Interestingly, it too draws an analogy
with the perils of not having a checklist when flying airplanes:

[http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_...](http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande?currentPage=all)

------
graeham
If you find this interesting, check out the Design for Patient Safety Handbook
- [http://www-edc.eng.cam.ac.uk/medical/downloads/report.pdf](http://www-
edc.eng.cam.ac.uk/medical/downloads/report.pdf)

It's the result of a multi-institution look at applying design thinking to
healthcare, in the context of patient safety and risk management. In my
opinion it's quite well put together and has great graphics.

------
hershel
Stanford did some research on using large screen displays and google glass to
increase safety in medicine, using smart adaptive checklists and good UI
design:

[http://hci.stanford.edu/research/icogaid/](http://hci.stanford.edu/research/icogaid/)

This might also be interesting for creators of user interfaces for rapid
decision making under stress.

~~~
merkury7
Really interesting paper, thanks for posting

------
Jemaclus
I really, really enjoyed this article, which is something I don't say too
often. I spent about an hour talking over this with my girlfriend last night.
Her father is also a pilot, and we were discussing the difference in
methodology between both of our professions (programmer and researcher,
respectively), piloting, and surgery.

I'm mostly intrigued that someone who has no background in medicine was able
to cut through some bureaucracy and effect major change. It's tragic that
something like this had to happen in order for someone to do that, but it
makes me wonder how many other tragedies have to happen before something like
this changes.

This guy experienced a tragedy. Afterward, he insisted upon an investigation.
When he received the results, he saw a very fixable flaw in the NHS, assembled
experts on that issue, and managed to kickstart a revolution within the system
to address the very fixable flaws that he and his experts had identified.

The Flight 173 case is another example of this. A tragedy happens, experts are
assembled, and very fixable flaws are, well, fixed.

Contrast that with other tragedies that occur monthly (if not more often) in
America. Mass shootings in Aurora, Santa Barbara, Fort Hood. Investigations
occur, and maybe experts are assembled, but nothing changes. What's up with
that?

Then we have the other side of the coin. Take 9/11, for example.
Investigations were done, experts were assembled, and changes were made. Have
they helped? Probably not as much as we'd like.

I'm not really sure what the takeaway is, but I'm very interested in seeing if
we can really, truly prevent some of these tragedies from happening with a
little foresight, a little communication, and a global recognition that things
need to change for the greater good.

------
nextos
The next step should be to make this systematic like in the aviation industry.
Build a database, and make it compulsory to register all incidents and
accidents. Use that data to update procedures and checklists. Hope he
succeeds. NHS researcher here.
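A hypothetical sketch of what the register part could look like - the field
names and threshold are invented, but the idea is that recorded incidents feed
directly back into which checklist items get added or updated:

```python
# Toy incident register: every incident is recorded, and recurring
# contributing factors are surfaced so procedures/checklists can be
# updated where errors actually cluster.

from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class Incident:
    procedure: str
    contributing_factors: List[str]  # e.g. ["fatigue", "no pre-op briefing"]
    harm_caused: bool

def factors_to_review(incidents, threshold=2):
    """Contributing factors seen at least `threshold` times, most common first."""
    counts = Counter(f for i in incidents for f in i.contributing_factors)
    return [(f, n) for f, n in counts.most_common() if n >= threshold]
```

The aviation version of this (compulsory, blame-free reporting into a shared
database) is exactly what makes the feedback loop work.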

------
emmelaich
Also read about how using checklists to reduce simpler errors reduced
infection rates at major hospitals:
[http://en.wikipedia.org/wiki/Peter_Pronovost](http://en.wikipedia.org/wiki/Peter_Pronovost)

Pronovost's "work has already saved more lives than that of any laboratory
scientist in the past decade", according to this article in the _New Yorker_.

The article also mentions the airline industry - specifically the use of
checklists to reduce pilot errors in the new, complex bombers of WWII.

A great read:
[http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_...](http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_gawande?printable=true&currentPage=all)

It has also been discussed here on HN.

------
ArenaSource
The UK's healthcare system has real, serious issues. Sorry, but the Victorian
era is over; stop underrating all the protocols, procedures and
recommendations from your American and European counterparts.

And don't say it's because the UK population has different needs. One example:
two hospitals in London, same medical specialty, recommend different
medication doses for exactly the same treatment in their protocols.

No specialty chiefs, no teams, no supervisors, only solo consultants.

~~~
arethuza
From what I've heard, the fact that every single hospital insisted that its
requirements were special and that they could never standardize was one of the
contributing factors in the great NHS systems disaster.

Of course, the other part was that the consultants doing the analysis were
quite happy to work with this complexity, as it meant more billable time for
them - they had no incentive to make anything simpler.

NB I'm not implying that medics should be forced to standardize when it isn't
relevant - getting the right level of standardization of processes is why
implementing these kinds of systems is so difficult.

------
RyanMcGreal
There is a similar movement in the USA to apply the practices of high-
reliability industries to medicine to reduce error:

[http://www.slate.com/blogs/thewrongstuff/2010/06/28/risky_bu...](http://www.slate.com/blogs/thewrongstuff/2010/06/28/risky_business_james_bagian_nasa_astronaut_turned_patient_safety_expert_on_being_wrong.html)

------
lazyant
The ironic thing is that if that guy weren't a pilot (high in the hierarchy),
nobody would have bothered listening to him.

------
topbanana
This stuff is gaining some traction in the UK. There's a textbook available,
and plenty of courses.

[http://www.amazon.co.uk/Human-Factors-Healthcare-Level-
One/d...](http://www.amazon.co.uk/Human-Factors-Healthcare-Level-
One/dp/0199670609/)

------
msandford
This guy is a hero. Can we kickstart having him come here to the US?

