
AI can predict heart attacks more accurately than doctors - denzil_correa
https://www.engadget.com/2017/04/16/ai-can-predict-heart-attacks-more-accurately-than-doctors/
======
brandonb
As others pointed out, it's an overreach to call this artificial intelligence.

What the authors showed is that by training standard machine learning
algorithms (random forests, logistic regression, gradient boosting, and a
shallow neural net) on readily-available signals from the medical record
(e.g., prior diagnoses), you can increase the c-statistic from 0.728 to 0.764.
These machine learning techniques are well-suited to the data set and the
empirical evaluation is strong, so this work should really stand on its own
without trying to brand it AI.
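For anyone unfamiliar, the c-statistic is just the area under the ROC curve. A
minimal sketch of that kind of model comparison, using synthetic data and
scikit-learn defaults rather than the paper's actual cohort or setup:

```python
# Sketch: compare a baseline logistic model against gradient boosting by
# c-statistic (ROC AUC). The data here is synthetic, not the study's cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

aucs = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("gbm", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    # c-statistic = probability the model ranks a random positive case
    # above a random negative one
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: c-statistic = {aucs[name]:.3f}")
```

The gap the paper reports (0.728 vs 0.764) is the same kind of delta this
loop would print, just measured on real patient records.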

There is some very high quality work on artificial intelligence in medicine
being done today. Google Brain published a validation study of a convolutional
net to diagnose diabetic retinopathy in December, and Stanford published
similar work but applied to skin cancer. NIPS 2016 had an excellent Machine
Learning for Health workshop with several works-in-progress:
[http://www.nipsml4hc.ws/](http://www.nipsml4hc.ws/)

~~~
Kenji
What's the difference between AI and 'standard machine learning algorithms'?
Seems like they are used interchangeably today. Especially if you count neural
nets as a standard algorithm too - which, let's be frank, they are in today's
world.

~~~
snackai
Machine Learning is just a tool belt. AI is the idea of intelligent machines
being able to solve problems (and recognize them) on their own. Therefore of
course AI can utilize the tool belt that is Machine Learning.

~~~
gnaritas
The timeless problem of AI: anything that starts out as AI ends up being
rebranded as not-AI once it's understood. It's the no true Scotsman fallacy.
Machine learning is AI; it's just not human-level AI.

~~~
felippee
I'd disagree; there is more here than the no true Scotsman fallacy. It is
equivalent to saying that any magic trick ceases to be magic once exposed as a
trick. We don't have a good definition of the "I" in AI, therefore such
arguments go on forever, yet somehow we know that the "I" of a human being is
different from the "I" in "AI"'s statistical smoke and mirrors. My opinion is
that a good definition could be established that would clear such BS once and
for all; some attempt here: [http://blog.piekniewski.info/2017/04/13/ai-confuses-
intellig...](http://blog.piekniewski.info/2017/04/13/ai-confuses-intelligent/)

~~~
gnaritas
You're conflating human-level intelligence with artificial intelligence, for
one; AI doesn't have to be human-level to be AI. Secondly, "statistical smoke
and mirrors" is the Chinese room fallacy: it doesn't matter what it is under
the covers; if it behaves intelligently then it's intelligent, regardless of
whether or not it can be boiled down to some math. Your link covers that near
the beginning when he discusses duck typing, but he comes to the wrong
conclusion. The article should have ended there: if it behaves intelligently,
then it's intelligent. That simply is the truth, and looking at the
implementation is cheating because we don't know that brains aren't doing the
same thing.

What's really going on is you are branding "human intelligence" special
because we don't understand its implementation and labeling everything else
not intelligent because we do, for all we know the human mind itself could be
nothing more than statistical smoke and mirrors. The only problem here is
human ego.

A car that can drive me somewhere on its own simply by being given a
destination is AI, no matter how it's implemented, as long as it's the
computer doing the driving, operating locally with sensors that actually see
the road. It doesn't have to be able to ponder its own existence to be AI.

Neural nets were an attempt to model how the brain works; they are by
definition AI regardless of whether they boil down to some maths. Everything a
computer does boils down eventually to some maths, that is not an escape hatch
to claim something isn't AI.

Machine learning is AI. It is not AGI, but it most certainly is AI.

~~~
mythbuster2001
You are dogmatically defending the Turing test, which I think is the primary
source of this confusion. The Turing test says: if it fools humans into
thinking it's intelligent, it is intelligent. That is fair. But once some
other humans understand the inner workings of some simple "AI" mechanism, it
no longer fools humans, since they now know what adversarial questions to ask
to uncover it. It consequently fails the Turing test, and we have the AI
effect. This test is just a bad idea and it impairs research (for a number of
reasons stated in the post, which you prematurely dismiss).

The coffee criterion for AGI
([https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence))
is much better, since it requires the ability to creatively interact with
unpredictable reality as a test for intelligence. It avoids all the
philosophical bullshit and all the smoke and mirrors, since you cannot fool
physics. Somehow the so-called "AI researchers" avoid robotics like the
plague, since their stuff actually needs to work (not just statistically) and
outrageous BS claims cannot be made.

And yes, ultimately the human brain may be smoke and mirrors. But frankly,
quite sophisticated smoke and mirrors, not anywhere close to the crap that is
being put forward right now.

~~~
gnaritas
No, I'm defending the Chinese room thought experiment. It's cheating to look
at the implementation and then claim it isn't AI; you can't look at the human
implementation which could very well also be based on simple math we simply
haven't figured out yet. It's only fair to judge by inputs and outputs. And
you are confusing AI with AGI; something does not have to have human level
intelligence to be AI. The Turing test is about AGI, not AI.

Useful and relevant world-changing AI will happen long before AGI, which
could very well be a pipe dream. A car that drives far better than humans is
useful AI and yet would fall short of AGI; a robot that can clean my house is
useful AI but could fall way short of AGI. There are vast world-changing
things to be done by AI long before AGI ever becomes a reality, and the fact
that we understand how something works DOES NOT disqualify it from being AI,
even if it boils down to little more than some statistical inference.

Saying it isn't AI because you understand how it works is like saying
submarines can't swim; it doesn't have to work like nature to be valid, nor
does it have to be like human intelligence to be intelligent, and any
intelligence we build is by definition artificial intelligence. Machine
learning that can diagnose better than a doctor is AI; no matter how well you
understand that it's just math, it's still AI. Those who conflate AI with
consciousness are the ones in error. AI does not and has never meant
artificial self-aware consciousness; while such a thing would be AI, it would
be the pinnacle of AI: AGI.

------
carbocation
What a silly headline. Here is a brief description of what was done, from the
original article (helpfully linked by `henrypray):

> Four machine-learning algorithms (random forest, logistic regression,
> gradient boosting machines, neural networks) were compared to an established
> algorithm (American College of Cardiology guidelines) to predict first
> cardiovascular event over 10-years

So, routine learning algorithms were applied to a task and performed better
than the results of a prior learning algorithm that had been simplified for
easy computation.

~~~
gpsx
Based on my experience with several doctors and other medical professionals
here in the US, it appears as if they always diagnose problems with a
simplified test or some kind of decision tree like the standard test they did
here (the "established algorithm"). Doctors I have seen only seem to get the
diagnosis right if it is a problem most people have.

The most blatant example, though not the only one, is when I spent months and
thousands of dollars (with the help of my insurance company) chasing a problem
in my SI joint. Four different physical therapists kept telling me I had
referred pain from my spine, based on the fact that I felt the pain if I
raised my leg while lying down. That seems like a pretty poor test. Every
other doctor or physical therapist who actually looked at my SI joint said I
had an inflamed SI joint. The problem turned out to be that my hip muscles
would get tight when I slept on my side.

I would guess the main problem is not the doctors themselves but an
administration that doesn't trust the doctors to use their brains. If that is
true, it is not silly that the AI result beat the simple test. Hopefully they
will either start using better tests or trust doctors a little more to think
on their own.

------
henrypray
Academic article PDF:
[http://journals.plos.org/plosone/article/file?id=10.1371/jou...](http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0174944&type=printable)

~~~
epalmer
Thanks for this url. I am studying the paper now.

------
epalmer
How do I interpret "BMI missing" as a risk factor? In the ML: Neural Networks
run, BMI Missing was considered a top-10 risk factor. The body text said this
about BMI Missing: "This study suggests that missing values, in particular,
for routine biometric variables such as BMI, are independent predictors of
CVD."

I'm having a hard time wrapping my brain around this concept.

Thanks

~~~
brandonb
This could be a signal that people who don't visit the doctor often are more
likely to develop CVD. It makes sense—if caught early, risk factors like high
BMI, high blood pressure, and high LDL cholesterol can be treated and thereby
prevent heart attacks and other cardiovascular events.
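Mechanically, a model can only pick this up if missingness is encoded as its
own feature. A minimal sketch of that encoding with scikit-learn (synthetic
data and a made-up BMI column; the study's actual pipeline may differ):

```python
# Sketch: turn "BMI was never measured" into an explicit 0/1 feature so a
# model can learn from the missingness pattern itself. Synthetic data only.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
bmi = rng.normal(27, 4, size=500)
bmi[rng.random(500) < 0.2] = np.nan          # some patients never measured
X = bmi.reshape(-1, 1)
y = (rng.random(500) < 0.1).astype(int)      # fake outcome labels

# add_indicator=True appends a binary "was missing" column after imputing,
# so the downstream model sees missingness as a predictor in its own right
model = make_pipeline(
    SimpleImputer(strategy="mean", add_indicator=True),
    LogisticRegression(),
)
model.fit(X, y)
print(model.named_steps["simpleimputer"].transform(X).shape)  # (500, 2)
```

With that indicator column in place, "BMI missing" can surface in a feature
importance ranking exactly the way the paper describes.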

~~~
qbrass
Or it's selection bias from people who don't normally go to the doctor mostly
going when there's a problem.

The people who don't go to the doctor because they feel fine aren't included
in the study.

