
Predicting Death for Better End-Of-Life Care - jonbaer
https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/stanfords-ai-predicts-death-for-better-end-of-life-care
======
Herodotus38
I feel fortunate that I happened to check into HN today because this is an
area where I have a lot of experience and I have made something similar
(albeit much less advanced) that is currently in use by some physicians.

Quick background: I am an internal medicine physician, and my focus is
hospital medicine, so I see a lot of cancer patients suffering from
complications of their disease's progression and its treatment, and I have
lots of end-of-life discussions.

A couple years ago I had a case where I wished I had more data to inform a
prediction for a family who wanted to know when their loved one was going to
die, and I felt like I didn't do a good job. I ended up contacting an
oncologist, Dr. David Hui, who has done a lot of research into end-of-life
prognostication.

There are dozens if not hundreds of "prognostic calculators" out there that
different groups have made over the years to try and find which variables X,
Y, Z, can be used to better predict how a patient with cancer A, B or C will
do.

Most of them are not great, and many are very specific. Some of them are
general to all cancers, some are only for "advanced cancer", etc. So this
isn't really a new idea; it's just that this is the first time someone has
applied AI to it, as far as I'm aware.

The problem is these calculators are all buried in journals and have different
inputs. What we did was collate some of the best validated ones and make it
easier to check multiple prognostic calculators at once, with the idea that
you could get more of a range or gestalt on the patient and use that info to
help guide further decisions.

In case anyone is interested the site is www.predictsurvival.com but it isn't
for lay use so may not be of much interest to most here. I built it using
Python and Flask and it's what I'm most proud of in my programming.
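The collation idea described above can be sketched in a few lines. This is a hypothetical illustration, not the actual predictsurvival.com code: the `Patient` fields, the `calc_a`/`calc_b` formulas, and their coefficients are all made up, whereas real tools use validated coefficients published in journals.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Patient:
    age: int
    albumin_g_dl: float
    ecog_score: int  # performance status, 0 (fully active) to 4 (bedbound)

# Each "calculator" maps a patient to an estimated median survival in days.
# These formulas are invented for illustration only.
def calc_a(p: Patient) -> float:
    return max(30.0, 400 - 3 * p.age - 60 * p.ecog_score)

def calc_b(p: Patient) -> float:
    return max(30.0, 120 * p.albumin_g_dl - 40 * p.ecog_score)

CALCULATORS: dict[str, Callable[[Patient], float]] = {
    "calc_a": calc_a,
    "calc_b": calc_b,
}

def survival_range(p: Patient) -> dict[str, float]:
    """Run every calculator on one set of inputs and report the spread,
    giving a range or gestalt rather than a single point estimate."""
    estimates = {name: f(p) for name, f in CALCULATORS.items()}
    spread = {"min_days": min(estimates.values()),
              "max_days": max(estimates.values())}
    return {**estimates, **spread}

print(survival_range(Patient(age=72, albumin_g_dl=2.8, ecog_score=3)))
```

The point of the single `survival_range` entry point is exactly what the comment describes: one set of patient inputs feeds every validated calculator at once, instead of hunting each one down in a separate journal.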

Anyways, I knew someone would eventually use AI to try and tackle this
question. I'll read their paper later tonight and I wouldn't be surprised if
it ends up being far better than what I've done. I hope they make it open
source.

I'd be happy to try and answer any questions people might have about medicine
and end-of-life care.

------
evanlivingston
Better end-of-life care isn't a technological problem.

When people have terminal cancer they are still usually advised or persuaded
to take chemo and/or radiation, which royally destroy their bodies, as anyone
who had a loved one go through treatment knows.

In the case of many late stage cancer cases it's generally known that the
person is at the end of their life, yet the medical establishment still tends
to focus on treatment rather than actual care. How does machine learning help
the cultural problem?

~~~
viraptor
> yet the medical establishment still tends to focus on treatment rather than
> actual care

Have you actually talked to doctors about it? As in those who get the initial
talk, not those who get patients already decided to try something? I know
quite a few GPs and they often do a serious talk with people with cancer who
want "everything possible done". Or with families who want that for their
grandparents.

They try to discourage useless treatment as much as possible, often by
listing the side effects that would only worsen the patient's quality of life
even further.

(Just in case: yes, there will be different doctors and different opinions.
What I'm trying to say is generalising to medical establishment is not
accurate and not helpful. Also I'm glad that if someone makes the decision to
get treatment they can get it - it may be a really bad decision, but it's
their decision.)

~~~
DanielBMarkham
When my mom had terminal cancer, her GP told her she was out of options.

Then a nearby "cancer center" got hold of her, I believe through one of the
diagnostic partners she was using. Two weeks later she was in heavy chemo --
as an elderly person who had almost zero chance of surviving.

She ended up with MRSA and in intensive care in isolation. I lived far away
and traveled to see her when I heard her condition had gotten worse.

I'll never forget her GP walking in. My mom couldn't hear well. Her GP was
both sad and angry. My mom had no idea how she had ended up in the ICU.

He explained to her again that all we had left was palliative care, perhaps
hospice. I could tell this wasn't the first time he had had this talk. There
had been many other patients.

When he left my mom looked at me and said "What'd he say?" So I sat next to
her, took her hand, and explained that the doctors had done all they could for
her and she would die soon.

I'll never forget that, either the talk or the look on that GP's face. We have
a terribly broken system.

~~~
bluetwo
Sorry to hear your family had this horrible experience.

------
anigbrowl
Everyone from doctors to philosophers has been saying this for ages
(literally). No healthcare provider, public or private, is likely to pick this
up, because people who are afraid of death will freak out about 'computer
decides when you should die' and will whip up a big enough moral panic to
prevent any further discussion of the idea. Given that, I think it's literally
a waste of time and resources to program a computer to tell us what we already
know.

Edit: by 'program a computer' I don't mean to put down the work of the
Stanford team, which I'm sure is great, but rather to highlight the
reductionist way it will play in public debate.

~~~
chx
> people who are afraid of death

I am afraid to _live_ past a certain point in time. When you have no direct
descendants or significant other, when your bucket list is done, what's the
point of keeping a slowly failing mind alive in a more rapidly failing body? I
don't want to grow old, I fail to see why I should, and there's no law, not
even in Canada, that would allow me to choose a peaceful ending date of my
own. And they call this suicidal, which I am not. I have my life planned and I
know when I want to end it, more than a decade from now. Some people plan to
travel the world when they retire; I just want to not live, but I can't.

~~~
rdiddly
There is no law anywhere that can stop you from killing yourself, much less
punish you afterward!

~~~
chx
Nor is there one allowing assistance :/ I want to end in peace and dignity.
Is that too much to ask? Apparently.

There is a lot of assistance available that leads you to extend your life
beyond the point where you can keep your dignity but none where you can just
... bow out.

Whenever a trainer has asked what my goal is, my only answer has been: I don't
want to end up in a wheelchair in twenty years. My back already hurts a
little, pretty much constantly; I don't want to live long enough for it to
become unbearable and/or until I can't even walk any more. Truly, what's the
point? And voicing these -- in my opinion entirely rational -- thoughts is
almost taboo.

~~~
rdiddly
https://www.deathwithdignity.org/learn/death-with-dignity-acts/

~~~
chx
Every single assisted dying statute at this point requires you to be
terminally ill. The Netherlands is on the path to waive this
https://www.theguardian.com/world/2016/oct/13/netherlands-may-allow-assisted-dying-for-those-who-feel-life-is-complete
but only for the elderly.
We will see what age they set but I feel like it'll be like 70 or something.

------
aaavl2821
this would be a really good application of AI in healthcare, but it needs to
be considered in light of other predictive tools. a sr exec at a large health
system said earlier this year that they evaluated a ton of AI tools for
predicting death to inform end of life care, but none did materially better
than simply asking physicians which of their patients they thought would die
in 12-18 months. this AI may be better than the ones this exec studied of
course

even if AI performs better, it would need to offer improvements significant
enough to justify the cost of implementing it, which depending on what data it
uses could be non-trivial

another unfortunate issue is that patient preference is only one consideration
in determining how end of life care is managed. profitability is another
concern. the same exec said that the health system only agreed to implement
their improved end of life plan once they realized that it would be profitable
to the system

------
eismcc
Having worked in this space - dialysis - sometimes it’s better to let the
patient decline gracefully than to degrade their quality of life with
treatments that will not lengthen it and make what time they have left
miserable. There are a few companies already using ML to predict when a
patient is a candidate for hospice / palliative care. It’s often cheaper,
which is why it’s kind of a taboo topic.

~~~
phkahler
>> It’s often cheaper, which is why it’s kind of a taboo topic.

That's a very strange point. Sometimes people think the doctors want to do
everything they can "to save you" just to run up the bill. With that thinking,
patients and their families would want to "save money". OTOH one may also
interpret saving money as giving up, and be angry that the doctor would put a
price on life. You just can't win. Ultimately patients and physicians need to
put all of that aside and be able to objectively assess the situation. That's
really hard to do, particularly for the patient and family - I have no idea
what it's like from the doctor's side, but we had one who had his eyes wide
open yet couldn't drive the point home enough for the soon-to-be widow to get
it.

~~~
hughes
I feel like this AI is an attempt at making death marketable.

And I mean that seriously. I think you and eismcc are right, that it is taboo
even when it is arguably the best option for the patient.

It would be difficult for a doctor to say "this multivariate optimization
found a global maximum by stopping treatment" in a way that sounds ethical.
But by attributing it to an AI that is mysterious enough to sound divine, the
message might have a chance at being received.

------
Asdfbla
Add deep learning and even boring old survival analysis can make headlines as
"AI predicts Death". (Not bashing the paper here, just the reporting.)

Though I suppose making use of the large amount of unstructured health data is
a good thing, and something that classical statistical models, which require
relatively clean study data, could struggle with.
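For readers unfamiliar with the "boring old survival analysis" the comment alludes to, a Kaplan-Meier estimator is a typical example and fits in a few lines. The data here are invented for illustration: `(time, event)` pairs where `event=True` means a death was observed and `False` means the patient was censored (lost to follow-up).

```python
# Minimal Kaplan-Meier survival curve on made-up data.
data = [(3, True), (5, False), (7, True), (7, True), (11, False), (14, True)]

def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each observed death time."""
    obs = sorted(observations)
    n_at_risk = len(obs)
    surv = 1.0
    curve = []
    i = 0
    while i < len(obs):
        t = obs[i][0]
        deaths = sum(1 for time, event in obs if time == t and event)
        at_this_time = sum(1 for time, _ in obs if time == t)
        if deaths:
            # Survival drops only at death times; censored patients
            # merely shrink the risk set for later times.
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= at_this_time
        i += at_this_time
    return curve

print(kaplan_meier(data))
```

This is the sort of clean, structured input (explicit times and censoring flags) that classical methods expect, which is exactly where free-text clinical notes become a problem.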

------
jasim
I'm not quite sure that doctors need AI to see the state of a terminally ill
patient for what it is, but if the authority of a mystical-sounding blackbox
helps them accept and act on that tough reality, that's fine by me.

~~~
cheez
There is a movie called Idiocracy which has some version of this. Pretty
hilarious.

------
pwdisswordfish2
If I were in the position of the patient, I would be horrified to hear an
analysis tell me how much time I have left to live and how likely death is.
No matter what the system tells me, I will still opt for whatever treatment
is possible, in case the prognosis is wrong or I fall into the, say,
10%-chance-of-survival category.

I just can't picture the doctor saying "the AI says there is a 90% chance of
dying in the next month" and me choosing to die in peace at home. I'll keep
fighting and take my chances.

~~~
inimino
Good! Let's talk again when you're 70...

Regardless of what decisions you make, having more information (basically a
statistical aggregate of similar cases) should be useful, no?

