I like this analysis, although I come to a different conclusion: if AI can give nursing staff an early warning, telling them "look closer", and it's right over 1/3 of the time, that seems great. Right now in a 30-bed unit, nurses have to keep track of 30 sets of data. With this, they could focus in on 3 sets when an alarm goes off. I believe these systems will get better over time as well. But as a patient, I'd 100% take a ward with that early AI warning, even at a ~66% false positive rate, over one with no such tech. Wouldn't you?
I would not. High false alarm rates are a problem in all sorts of industries when it comes to warnings and alerts. Too many alerts, or too many false positives, cause operators (or nurses, in this example) to start ignoring the warnings altogether.
This is the real problem. In a perfect world, everyone pays attention to alarms with the same attentiveness all the time, but that just isn't reality. Before going into building software, I was in the Navy, and after that I worked as a chemical systems tech. In the Navy, I worked in JP-5 pumprooms. In both environments we had alarms, and in both we learned which were nuisance alarms and which weren't, or just took alarms with a grain of salt and therefore never paid proper attention to them.
That is always the issue with alarms. You have a fine line to walk. Too many alarms and people become complacent and learn to ignore alarms. Too few alarms and you don't draw the attention that is needed.
More data with appropriate confidence intervals can always be leveraged for good. I hear about this kind of application often in medical systems, and I recognize the practical impact. The problem is incorrect use of that knowledge (e.g., to overtreat), not having the knowledge in the first place.
No, the problem is information overload. Even without these errors, nurses are often overburdened with work and paperwork. Adding another alarm with a >50% false positive rate is going to make that situation worse, and the nurses will start ignoring the unreliable warning.
I suspect we are on the same page. My point is about using information like that described in the article to improve the system. I do not think an on/off "alarm" is the way to do it. The key is to use ideas from signal processing theory (e.g., how a Kalman filter updates its estimate as measurements arrive) as input into which medical action to take. The backlash against more diagnostics is due to how they are applied, like a brute-force alarm, leading to worse outcomes through, for example, unnecessary surgeries.
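To make that concrete, here's a minimal sketch in Python of what I mean. All the names, numbers, and the threshold are made up for illustration: a scalar Kalman-style update turns noisy vital-sign readings into a continuously updated probability of crossing a danger line, rather than a binary buzzer.

    from math import erf, sqrt

    def kalman_update(mean, var, reading, meas_var, process_var=1.0):
        # Predict: the patient's true state can drift between readings.
        var += process_var
        # Update: blend the prior belief with the noisy reading.
        k = var / (var + meas_var)  # Kalman gain
        return mean + k * (reading - mean), (1.0 - k) * var

    def prob_above(mean, var, threshold):
        # P(true value > threshold) under the Gaussian posterior.
        z = (threshold - mean) / sqrt(var)
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))

    mean, var = 80.0, 25.0               # prior belief about heart rate
    for reading in (82, 88, 95, 104):    # noisy measurements over time
        mean, var = kalman_update(mean, var, reading, meas_var=16.0)
        print(f"estimate {mean:5.1f}  P(HR > 110) = {prob_above(mean, var, 110.0):.2f}")

The point is that a number like P = 0.12 versus P = 0.71 lets a nurse rank 30 beds by urgency, instead of reacting (or not) to an on/off alarm.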
The reduction I am arguing against is: "Historically, extra information and diagnostics that have an error margin result in worse outcomes because we misapply them; therefore, don't build these systems."
At work, we had an appliance that went into failsafe on average 8 times per day. The failsafe is meant to cut power to a device under test (DUT) in case of something like a fire in the DUT. The few actual critical failures were not detected by the appliance.
Instead, the failsafe merely invalidates the current test and leaves the appliance unable to run tests correctly until it is either power cycled or the appliance's developer executes a secret series of commands that are not shared with us.
So of course an operator of the appliance found a way to feed in a false "I'm here!" signal in a loop, to trick the appliance into never going into failsafe…
That works out to ~6.8% of all tests being false positives, ~93.2% true negatives, and ~3 tests that should have triggered the failsafe but did not.
Sorry, I meant to say that even with only 6.8% of all tests triggering a false alarm (and 0% triggering a true alarm), a test operator still found a way to prevent the alarm from occurring rather than being kept on their toes.
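Back-of-the-envelope on those figures, assuming the 8-per-day and 6.8% numbers cover the same period (everything below is derived from the comment above, not measured):

    trips_per_day = 8
    false_positive_rate = 0.068
    tests_per_day = trips_per_day / false_positive_rate  # ~118 tests/day
    # Every one of the ~8 daily trips was a false alarm, and the ~3 real
    # failures were all missed: precision and recall were both zero.
    print(round(tests_per_day))  # -> 118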
I'm not. If you have three alerts a day and a 33% chance of a true positive per alert, you'll get an alert pointing to a real problem roughly once a day on average.
That's enough to anchor "alert == I might find a problem" in the user's mind.
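A quick sanity check on that arithmetic (a sketch; the three-a-day and 33% figures are from the comment above):

    p_true = 1 / 3                  # chance a given alert is a true positive
    alerts_per_day = 3
    expected_true = alerts_per_day * p_true               # ~1 true alert/day
    p_at_least_one = 1 - (1 - p_true) ** alerts_per_day   # ~0.70
    print(expected_true, round(p_at_least_one, 2))        # -> 1.0 0.7

So roughly one real catch per day on average, and a real catch on about 70% of days, which is frequent enough to keep the alert meaningful.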
No, many people working in clinical units wouldn't, because of what happens on false alarms. As the GP said: more meds, more interventions. It's not at all clear that such systems would help with current workflows and current technology. One of the most famous books about medicine says that good medicine is doing nothing as much as possible. That's still very true in 2024, and probably will be for a long time.
I like this analysis, although I come to a different conclusion: if AI can allow nurses to manage 10x as many beds (30 vs 3), a hospital can now let go 90% of its nursing staff. Wouldn’t you?
Generally speaking, they aren't short staffed because there aren't enough nurses, but because they can't or won't pay them enough. Those same hospitals hire large numbers of travel nurses to supplement their "short staff" at pay rates double or triple a local nurse's.
And the nurses who want decent pay and can do travel nursing, do travel nursing.