
Predicting Patient Deterioration - stenlix
https://deepmind.com/blog/predicting-patient-deterioration/
======
arkades
I don’t have access to the full Nature article. Their intro has only a tiny
bit of meat on its bones:

> comprising 703,782 adult patients across 172 inpatient and 1,062 outpatient
> sites. Our model predicts 55.8% of all inpatient episodes of acute kidney
> injury, and 90.2% of all acute kidney injuries that required subsequent
> administration of dialysis, with a lead time of up to 48 h and a ratio of 2
> false alerts for every true alert.

1\. Studies have shown that normal, healthy people regularly suffer AKI just
walking around - and it resolves spontaneously. Most commonly the cause is
transient dehydration. Catching 50% of these is so simple that even an amateur
nurse can do it. Unimpressive.

2\. Requiring HD is generally something that develops over days (still
technically an AKI), and it’s almost never a surprise. Catching 90% of these
isn’t impressive either.

3\. A “lead time of up to 2 days” is a lot different from the “lead time of 2
days” the linked site suggests. A lead time of “up to” 2 days, including the
cases trending towards dialysis, is very unimpressive.

I realize that, stripped of clinical context, this sounds like they pulled
something off, but AKI is usually not a meaningful problem, is one of the
easiest things to catch, and basically always gets caught. This algorithm, at
least relative to my experience at my (semi-prestigious, regionally semi-known)
institution, underperforms what I would expect of a fourth-year med student /
bright third-year med student / experienced nurse.

If this wasn’t attached to an AI buzzword, I can’t imagine it being
publishable or noteworthy.

~~~
mktmkr
They seem to be claiming to have reduced "missed" cases of AKI from 12% to 3%,
but it is not clear if this has anything to do with "AI" or is purely a result
of improved process/UX/human factors. I'm skeptical of your analysis because
there are a lot of practicing doctors who are not anywhere near as effective
as a bright third-year med student. If this software and these processes can
improve the performance of the doctors we actually have in the field, that
could be good.

~~~
scott00
The stat about reducing missed cases of AKI from 12% to 3% was a result of
introducing the Streams app into the AKI workflow. The model used there was an
existing NHS AKI model, not the new model discussed in the first part of the
article.

------
pgcudahy
With only 58% sensitivity and 2 false positives for every true positive, this
is going to be like the automated drug-interaction checks that we all click
through because there are too many false positives.
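The alert-burden arithmetic behind this objection is worth making explicit. A minimal sketch (the `alert_stats` helper and the 1,000-episode figure are illustrative; the 55.8% sensitivity and 2:1 false-to-true alert ratio come from the quoted abstract):

```python
def alert_stats(sensitivity, false_per_true, episodes):
    """Return (true_alerts, false_alerts, ppv) implied by the headline numbers."""
    true_alerts = sensitivity * episodes          # detected real AKI episodes
    false_alerts = false_per_true * true_alerts   # 2 false alerts per true alert
    ppv = true_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, ppv

true_a, false_a, ppv = alert_stats(0.558, 2.0, episodes=1000)
print(f"true alerts: {true_a:.0f}, false alerts: {false_a:.0f}, PPV: {ppv:.1%}")
```

Note that a fixed 2:1 false-to-true ratio pins the positive predictive value at 1/3 regardless of how many episodes you plug in, which is exactly the kind of ratio that produces click-through alert fatigue.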

------
sheeshkebab
Having recently observed an AKI requiring hospitalization - I think spotting it
is only a small piece and IMO not very useful. In fact, even spotting a cause
of AKI (e.g. heart failure) is not particularly eye-opening either.

A better system (perhaps not even using much AI, whatever that is, just basic
rules and ML techniques) would help physicians monitor and come up with
treatment suggestions and outcome predictions for a specific patient based on
their particular vitals/history. E.g. provide a suggestion to the physician
that heart failure causing AKI could be treated with a CRT pacemaker because
the HF rate is within a certain range for this particular patient - so they
should talk to a cardiologist/surgeon immediately.

