
IBM's Watson recommended 'unsafe and incorrect' cancer treatments - airstrike
https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882
======
scarejunba
That's okay, though, right? We're not talking about fully automating diagnosis
and treatment. We're talking about using a tool to help doctors diagnose and
treat more effectively.

i.e. it's more like predictive text input than anything. If it makes it faster
for you to diagnose, makes harder-to-diagnose stuff easier, and recommends the
right treatment most of the time, then it saves the doctor some time and
energy.

The only question is whether it does that.

~~~
wlesieutre
Having systems like autopilot in planes can make pilots worse at flying
because they don't spend as much time practicing at the controls and
monitoring what's happening. Then when something goes wrong, they risk not
paying close enough attention to catch it, or not being practiced well enough
to correct it.

If you get complacent and just assume the computer knows what it's doing
(because it usually does) this can and will end very badly.

The person whose Tesla drove into a concrete barrier at 65 mph earlier this
year was perfectly capable of driving a car, but they mistakenly believed that
the computer had it under control.

99% Invisible has a pair of podcast episodes on the subject (2015):

https://99percentinvisible.org/episode/children-of-the-magenta-automation-paradox-pt-1/

https://99percentinvisible.org/episode/johnnycab-automation-paradox-pt-2/

~~~
ballenf
Instead of autopilot, think of this (or what it should be) as more like the
fly-by-wire systems in _every_ modern fighter jet. The computer reads the
pilot's inputs and translates them into the most effective possible control
adjustments, making many more changes than a pilot would be able to make even
given the controls.

Or maybe the tool should be limited to retrospective evaluation of doctors'
decisions. Basically an automated peer review.

~~~
FussyZeus
I would argue against this one, simply because of the Airbus incident from
some time ago where, to avoid pushing feedback into the controls, the plane
was designed to average out the inputs from the two pilots. The older, more
experienced pilot was attempting to maneuver the plane in a way that would
correct their problem (I believe it was a stall), while the younger pilot was
panicking and attempting the opposite maneuver; the plane averaged the two to
roughly no course adjustment, and they hit a mountainside.
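
As a toy sketch (hypothetical code, not Airbus's actual control law), here's
roughly why averaging two opposing stick inputs cancels them out:

```python
# Toy model of the input-averaging scheme described above.
# Stick deflection: -1.0 = full nose-down, +1.0 = full nose-up.
def combined_input(pilot_a: float, pilot_b: float) -> float:
    """Average the two sidestick inputs, clamped to the valid range."""
    avg = (pilot_a + pilot_b) / 2.0
    return max(-1.0, min(1.0, avg))

# One pilot commands full nose-down to break the stall while the
# other pulls full nose-up in a panic:
print(combined_input(-1.0, +1.0))  # -> 0.0, i.e. no net correction
```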

I'm all for automating driving and other such tasks once the computers are
ready, but until we know they're actually ready, I want the ability to turn it
off and do it myself.

(And frankly, I want that ability anyway because I genuinely love driving and
it makes me sad to think someday I won't be able to do it.)

~~~
wlesieutre
For reference, I believe this is the crash you're referring to:
[https://en.wikipedia.org/wiki/Air_France_Flight_447](https://en.wikipedia.org/wiki/Air_France_Flight_447)

The fly-by-wire control system ordinarily prevents stalls, but it had
disengaged due to an iced over sensor and was operating without stall
protection. The plane stalled at 38,000 feet and fell into the ocean.

They should have had plenty of time to correct the stall, but one pilot was
pulling back his control stick (the opposite of what they needed to do), and
since the two sticks aren't physically linked the other pilot didn't know he
was doing that.

It's one of the topics discussed in the podcast that I linked above.

~~~
FussyZeus
Yeah that's the one.

------
deviationblue
We're putting a lot of faith in tools that can best be described as immature.
I don't think it's out of the question to get A.I. (not speaking of Watson,
which is A.I.-adjacent) to the point where it can perform perfectly at
human-intelligence tasks. The point is to have these systems perform better
than their human counterparts (at least, I think that should be the aim for
something like Watson for Health). The people who built these things are still
learning themselves. As we make more progress in technology and AI, I don't
doubt that we could break that barrier. There may be an upper limit, because
maybe the human brain is incapable of solving some problems, but even there I
think it's not difficult to imagine workarounds. Simply put, I don't think the
answer is to assume these problems could never be solved.

Additionally, human-assisted A.I. is not the solution; it's a non-answer to
the problem of creating systems that can think and perform at human levels of
intelligence. It's okay to admit that we don't have the ability to make these
things, but it's disingenuous to believe that human involvement in helping
computers get to the right answer is itself the right answer. That said, we do
need it right now to move things along where they otherwise might stand still.

~~~
binarymax
Totally agree. Applying existing ML/"AI" to treating cancer is extremely
misguided, and IMO dangerous and unethical.

ML performs very well for specific well defined tasks that have an obvious
outcome and are highly narrow in scope.

Cancer is a disease that we can't even treat ourselves in many cases. It
requires a great deal of creativity and critical thinking to reach solutions
on a case-by-case basis. Who is so arrogant as to think this could be replaced
by a bunch of overhyped software?

~~~
ballenf
There has long been a calculus of using riskier or long-shot treatments for
the more deadly or hopeless diseases. Certain types of cancer are essentially
death sentences.

Maybe it's counterintuitive, but it makes a lot more sense to use ML for
cancer than, say, a broken arm. There are so many systems interacting in
cancer that affect its progression that humans really are at their limits in
trying to understand them.

And as others have said, no one is letting Baymax loose in the oncology ward
and firing all the doctors. This is just one more tool in a doc's tool belt --
and far from the only one that will give misleading results.

~~~
ntsplnkv2
> There are so many systems interacting in cancer that affect its progression
> that humans really are at their limits in trying to understand them.

Bacterial infections also involve a ridiculous number of systems, and yet that
was figured out by humans.

I don't see how ML helps with cancer at all right now. The problem isn't the
amount of data. It's the quality of it.

------
fatjokes
Honest question: how much top-tier ML research comes out of IBM Research? I
feel like they get what they paid for---and they pay shit from what I hear.

~~~
ma2rten
I don't think that top-tier research necessarily requires top pay. Otherwise
universities would not stand a chance.

That said, I don't think IBM Watson has published much top-tier research.

~~~
vorpalhex
University work tends to pay in different ways. Lots of opportunity for your
own projects, cool toys, small teams that report directly to the owner.

------
otakucode
The headline, and conclusion of the article, feel incomplete. A complete line
would read "IBM Watson Reportedly Recommended 'Unsafe and Incorrect' Cancer
Treatments At A Rate X% Higher Than Doctors." Doctors make mistakes. Machine-
based intelligences will make mistakes too. If you throw out a system that
errs in 1 in every 100,000 cases so that you can stick with a system that errs
in 1 in every 250 cases, you're not going to get anywhere.
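
The arithmetic behind that comparison, with both error rates taken as
hypothetical:

```python
# Expected errors per one million cases for the two hypothetical
# error rates mentioned above.
cases = 1_000_000
machine_errors = cases // 100_000  # system that errs 1 in 100,000 cases
human_errors = cases // 250        # system that errs 1 in 250 cases

print(machine_errors)  # -> 10
print(human_errors)    # -> 4000
```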

------
ummonk
Sounds like they prepared limited and bad training data for it. As usual,
garbage in, garbage out.

------
xyhopguy
I work in this space. The breakthrough will most likely come from
computational biologists, not an external ML group. Liquid biopsy is close and
that's the domain that reeeallly needs ML/AI in addition to strong
mechanistic models.

~~~
azinman2
What’s liquid biopsy?

~~~
xyhopguy
Here's a nice review:
[https://www.ncbi.nlm.nih.gov/m/pubmed/28233803/](https://www.ncbi.nlm.nih.gov/m/pubmed/28233803/)

It's a biopsy from your peripheral plasma, as opposed to from a solid tumor. A
big issue in cancer medicine is that it's super fucking hard to get any kind
of noninvasive measurement. Typically it's done with surgery or with CAT
scans, which have extremely low precision.

Research in the last few years has been pointing to the idea that we can
detect tumor-derived DNA fragments in plasma. The challenge is that healthy
DNA makes up 99.9% of it, which means the current methods only work when the
tumor burden (metastatic, for instance) is high. Not ideal for early detection
or treatment monitoring. But if the computational tools improve (right now
none assume such a low mixture), you could see sensitivity and precision
increase to the point that it's useful for predicting therapeutic outcomes and
for early detection.
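
A back-of-the-envelope sketch (my own toy numbers, not from the linked review)
of why low tumor fraction is the hard part: at a fixed number of sequenced
fragments, the expected count of tumor-derived fragments, and the chance of
seeing any at all, drops off quickly as the fraction falls.

```python
# Toy model: each sequenced fragment is tumor-derived with
# probability `tumor_fraction`, independently.
def expected_tumor_fragments(depth: int, tumor_fraction: float) -> float:
    """Mean number of tumor-derived fragments among `depth` fragments."""
    return depth * tumor_fraction

def p_no_tumor_fragments(depth: int, tumor_fraction: float) -> float:
    """Probability that all `depth` fragments are healthy DNA."""
    return (1.0 - tumor_fraction) ** depth

# High tumor burden (1% tumor DNA) vs. early disease (0.01%),
# at 10,000 sequenced fragments:
for frac in (0.01, 0.0001):
    print(frac,
          expected_tumor_fragments(10_000, frac),
          p_no_tumor_fragments(10_000, frac))
```

With these toy numbers: at 1% you expect ~100 tumor fragments and essentially
never miss them all, while at 0.01% you expect only 1 and see none at all
about 37% of the time, which is the sensitivity wall described above.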

------
skynetv2
Remember there was an article about how Watson was recommending treatments
that doctors were never considering, and that those treatments were more
effective?

I guess not: they weren't even relevant, which is why doctors weren't
recommending them.

~~~
rosser
Both that story and this one can be true.

It can recommend unconsidered, more effective treatments just as readily as it
can recommend unconsidered, dangerous ones — particularly if, as The Fine
Article suggests, the system was extensively trained on MSK's physicians'
_preferred_ treatments instead of (or even in addition to) actual clinical
data.

------
Gatsky
Watson is medical AI's Theranos. That IBM would actually try to sell something
like this doesn't bode well for their future.

------
spike021
Like something out of science fiction... would an "issue" like this ever be
the type of problem where the AI can "understand" the cancer better than we
can, and see where a certain treatment might be more beneficial than anything
we've thought of?

Or is this just a plain old failure?

~~~
framebit
Not to mention that even if the AI technology here were decent (which it
doesn't appear to be), it suffered greatly from a dearth of samples. Machine
learning requires big data, and cancer treatment data is, in many instances,
just not big enough. An AI solution would need millions of well-labeled,
perfectly formatted cases to learn from; there may be thousands of those for a
particular subtype of cancer, but there aren't millions.

~~~
xyhopguy
Not if you understand mechanism. In reality the breakthrough will be
biologists applying ML/AI, not the other way around. Domain and mechanistic
knowledge are king.

------
m0rphy99
Anyone who has done their research would have known IBM Watson is a scam. IBM
became a completely different company the moment they discovered they could
make far more money just by providing extended consulting services to many
governments.

------
ricanontherun
Really makes me wonder if AI will ever be able to model the heuristics that
domain experts use on a daily basis.

------
billwill
To err is human. Thus IBM Watson, with all its random clockwork, has to work
around human error.

------
transfire
Watson was probably right but it didn't fall in line with BigPharma's $$$
plans.

------
dang
Url changed from https://gizmodo.com/ibm-watson-reportedly-recommended-cancer-treatments-tha-1827868882, which points to this.

~~~
ummonk
The new url is paywalled though...

~~~
dang
Oops. Changed back. Thanks!

