
Google DeepMind and healthcare in an age of algorithms - DyslexicAtheist
https://link.springer.com/article/10.1007/s12553-017-0179-1
======
ghufran_syed
I used to be a doctor in the NHS, and have also worked on a medical startup,
so I’d like to think I have some relevant perspective and experience with
respect to patient safety and confidentiality.

There is a lot of hand-wringing in this article, with no reasonable attempt to
discuss the balance between possible benefit and possible harm to the patient.

A lot of the paper argues that there was no direct care relationship between
DeepMind and each patient, and that the data transfer was therefore
inappropriate. I disagree, and think in the future we will see many
algorithmic systems involved in direct patient care. After all, you could pay
an army of clerks to review the notes and results (i.e. patient data) of every
patient at the Royal Free Hospital, run it through an algorithm (paper
flowsheet), then notify the doctor that there might be impending acute kidney
injury, leaving it to the doctor to make the clinical decision about whether
there was in fact AKI (true positive) or not (false positive), like we do with
EVERY piece of information we get when assessing a patient. That could have
been done any time in the last 100 years, without anyone being concerned about
the appropriateness of it. DeepMind was essentially doing the same thing on
behalf of the doctors at the hospital. The risk of harm and the regulatory
framework should certainly be considered and developed, but it has to be
weighed against the potential benefits, same as with every other medical
advance (and setback) in human history. Listing a whole bunch of potential
dangers is an easy way to write an article that gets attention, but I’m more
worried about articles like this unreasonably impeding progress in the field
of “algorithmic medicine” than I am about DeepMind and the Royal Free’s
approach to this, at this exploratory stage.
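The clerk-and-flowsheet analogy can be made concrete with a minimal rules-based sketch. The thresholds below follow the widely used KDIGO serum-creatinine criteria for AKI, but the function name, units, and exact cut-offs here are illustrative assumptions, not DeepMind's actual algorithm:

```python
# Sketch of a rules-based AKI flag (the "paper flowsheet"), using the
# KDIGO creatinine criteria as assumed thresholds.

def flag_possible_aki(baseline_umol_l: float, recent_umol_l: float) -> bool:
    """Return True if serum creatinine suggests possible AKI.

    baseline_umol_l: baseline creatinine (umol/L, e.g. within the last 7 days)
    recent_umol_l:   most recent creatinine (umol/L, e.g. within 48 hours)
    """
    absolute_rise = recent_umol_l - baseline_umol_l >= 26.5   # >= 26.5 umol/L rise
    relative_rise = recent_umol_l >= 1.5 * baseline_umol_l    # >= 1.5x baseline
    return absolute_rise or relative_rise

print(flag_possible_aki(80, 130))  # True: 50 umol/L rise, and >1.5x baseline
print(flag_possible_aki(80, 90))   # False: below both thresholds
```

As in the flowsheet scenario, such a flag would only prompt clinician review; the doctor still decides whether it is a true or false positive.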

I would feel differently if there was an identifiable business model that
conflicted with the interests of the patient, but I think at this super early
stage we should err on the side of exploration, not restriction.

~~~
quickben
I like your optimism.

Personally, having sat in on 'Watson for Healthcare' presentations by top
architects, and having had conversations with them, I'm far less optimistic
about the whole AI-in-healthcare approach.

~~~
killjoywashere
As someone working in this space, there is a massive disconnect between the
engineers and the medical providers. One has never been in clinic. The other
never took linear algebra. To stand with a foot in both worlds is utterly
fascinating.

~~~
ghufran_syed
Funny you say that, I just started studying for an MS in math and stats, while
working as an ER doc :) Thinking about maybe going on to do a PhD - I was
interested in learning how machine learning might safely improve medical care,
and began to suspect that to do so without understanding the math would be
like diagnosing disease without knowing anatomy, physiology, biochemistry,
pathology, pharmacology,... It turns out that math is at least as fun as
emergency medicine... :)

~~~
killjoywashere
We are a small minority. I want to get the AAMC to add stronger math
requirements to the medical school pre-reqs.

------
Hallucinaut
While some cynicism over Tory deals regarding NHS data is understandable, and
it reads like the agreement with DeepMind was not up to the expected
standards, this seems to me less about a political party trying to "monetize"
patients than about an NHS trust trying to maximize what it can deliver in the
context of limited resources.

I had a conversation with a representative from a medical analytics company
just before Christmas who portrayed radiographers and others who would have
their jobs threatened by advances in medical imaging analytics as being
obstructionist. It's also a common trope to hear that the Trusts do endless
proof-of-concepts and redevelop solutions other Trusts have already created
due to political power dynamics.

While entering a situation where Google has monopolistic power over data or
analytics would be undesirable, I think it's unreasonable to say that the only
solution for the NHS is to feed more money into the present dysfunctional
system. I personally find it quite distasteful that we can't bring in cheaper,
more accurate solutions with fewer barriers just because they come from the
private sector.

~~~
QAPereo
A very brief history

[http://www.bbc.com/news/uk-politics-22528719](http://www.bbc.com/news/uk-politics-22528719)

[https://www.theguardian.com/society/2015/mar/12/nhs-agrees-l...](https://www.theguardian.com/society/2015/mar/12/nhs-agrees-largest-ever-privatisation-deal-to-tackle-backlog)

[https://www.theguardian.com/society/2016/aug/15/creeping-pri...](https://www.theguardian.com/society/2016/aug/15/creeping-privatisation-nhs-official-data-owen-smith-outsourcing)

[http://www.independent.co.uk/news/uk/politics/jeremy-hunt-he...](http://www.independent.co.uk/news/uk/politics/jeremy-hunt-health-department-nhs-legal-action-americanise-privatisation-customers-id-pay-a8033986.html)

[http://www.independent.co.uk/news/uk/politics/nhs-privatisat...](http://www.independent.co.uk/news/uk/politics/nhs-privatisation-health-service-exposed-private-cancer-patients-hospitals-treatment-work-government-a7974096.html)

------
dm319
> DeepMind did not have the requisite approvals for research from the Health
> Research Authority (HRA) and, in the case of identifiable data in
> particular, the Confidentiality Advisory Group (CAG)

This is kinda the crux of the matter. All the exciting, and likely beneficial,
machine learning that we would expect from a relationship between an
organisation that is good at analytics and another organisation with a lot of
data can happen only under the banner of _research_.

Any kind of medical research requires ethical approval, and in the UK this is
not an easy process. It requires a responsible organisation (usually academic,
like a university, but it can be commercial) and extensive documentation on
how the data will be collected, handled, and processed. Consent forms,
information leaflets, the duration of the study, etc., are all part of this,
and it all needs to be forwarded to the regional ethics committee - a
terrifying panel of around 8 people who are there to ensure that you've
thought of everything surrounding the ethical aspects of your research. After
this, the hospital's R&D department needs to check through it and make sure
that the things they are responsible for - i.e. ensuring patient data does not
leave the premises/country without good reason - are all ok.

There's good reason for all of this, too - it's not that long ago that major
breaches of medical ethics were part of research [1], and medical
confidentiality is taken seriously in the UK.

As an example of research done right, here is an observational study of
nearly 1 million patients, tracking their admission to and discharge from
hospital and subsequent complications [2]. Every single patient consented and
agreed to their information being used by the researchers for these purposes.

Personally, I'm excited to see machine/deep learning applied to medical data -
it's clearly going to be of huge benefit (and it's happening anyway within
academic institutions), but no one should be suggesting it be done without the
usual ethical approval. Probably what needs to happen here is for Google or
similar to pair up with organisations used to obtaining ethical approval for
research - universities - and work collaboratively with them.

[1] [http://www.nbcnews.com/id/41811750/ns/health-health_care/t/u...](http://www.nbcnews.com/id/41811750/ns/health-health_care/t/ugly-past-us-human-experiments-uncovered/)

[2] [http://www.bmj.com/content/339/bmj.b4583](http://www.bmj.com/content/339/bmj.b4583)

------
bawana
If Google wants data, why don't they just ask the public for it? You know, the
old-fashioned way. Open a store, staff it with real humans who can talk to
real humans. My guess is that they are trying a 'heuristic' solution - trying
to 'add value' to data that already exists. If that's the case, why don't they
just ask 23andMe for their DNA database? But I'll tell you right now, I am
tired of being the 'product' for Facebook, Google, etc. If they want my data,
they need to ASK me for it.

~~~
londons_explore
It turns out that the value of this data in aggregate is pretty high - there
is the possibility of saving perhaps 100 million person-years per year through
algorithmic medicine (i.e. each person on earth lives 1 year longer), and the
raw records could form a substantial cut of that value.
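As a back-of-the-envelope check on that figure (assuming 2017-era round numbers for world population and life expectancy, not figures from the article): one extra year of life per person, amortized over an average lifespan, accrues at roughly population divided by life expectancy person-years per year:

```python
# Rough check of the "100 million person-years per year" estimate.
world_population = 7.5e9       # people (circa 2017, assumed round number)
life_expectancy = 75.0         # years, rough global average
extra_years_per_person = 1.0   # the hypothesized gain from algorithmic medicine

# One extra year per person accrues over one lifespan, so the annual
# rate of gain is roughly population / life_expectancy person-years.
annual_gain = world_population * extra_years_per_person / life_expectancy
print(annual_gain / 1e6)  # 100.0 (million person-years per year)
```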

The flip side is that researchers don't need _everybody's_ data. A few million
records would be very beneficial, and they could likely offer something worth
a few cents to persuade enough people to hand over their data.

------
zaroth
Lots of hand-wringing in the article, and in the comments section too. My
highly limited understanding of the domain is that the data suck, the
algorithms suck, the process sucks, and the patient is completely disregarded
at best.

We don't have the time, money, or discipline to collect, code, and scrub the
data to the degree necessary for an algorithm to pick up where we left off and
actually make a diagnosis that wasn't blatantly obvious from the outset. But
there's so much money sloshing around in the whole system that damned if we
don't have the IBMs and the Googles of the world trying, even though they both
know better. Solving the real problems just isn't sexy.

The 5/10/25% efficiency improvement isn't in an "AI" flagging charts with
diagnostic codes. It's in somehow restraining the bureaucracy from chasing
these damn projects with their billions of dollars and getting back to the
very hard and boring work of properly running their hospitals.

I once had dinner with the CFO of a large hospital system. Hospitals are like
airlines: $25 billion to open the doors each year even if no one is in the
rooms. Hospitals, like airlines, are all about ASM (available seat miles) and
RASM (revenue per available seat mile), except with hospitals I'm not sure
what the standardized billing unit is, but we might as well denote it in
pints of blood.

These massive systems are run with exactly the efficiency and precision you
would expect of a large, modern, American system. That is to say, they are
more likely than not to crash and derail on their inaugural voyage. They are
not, as we like to say in SV, a meritocracy. And they most certainly will not
benefit from more algorithms telling them what they already know they need to
be doing but don't have nearly the time to do in the first place.

When I can show up for an appointment and trust that a) it hasn't been
rescheduled without my knowing, b) the doctor isn't actually at a conference
in Madrid this week with no one having cared to clear his schedule, c) they
didn't book patients in 20-minute blocks for the whole day when they know
full well the average time to clear is 45 minutes, and d) the doctor actually
knows my name and has some way to read a summary of my chart that might
actually inform him of why I'm there today... then maybe we can start
thinking about how adding some algebraic analysis can help.

~~~
visarga
Are you saying that AI should wait for hospital management and doctors to
step up their game before being involved in diagnosis and medical monitoring?

Medicine is a fast-evolving field. If a doctor takes the time to read the
latest papers, she doesn't have enough time left to treat patients; if she
spends all her time treating patients, she's got no time left for keeping up
with the latest discoveries. In the end, you are going to see an outdated
doctor, because that's how it works. I'd prefer that doctors rely on AI and
be up to date while treating patients.

AI might have a huge impact in poor countries where doctors are scarce. The
alternative to AI is "no doctors, no help" for many. Just like self-driving
cars - they don't need to be perfect, just less deadly than human drivers -
and in this case, access to medicine = life.

------
659087
I don't want advertising companies anywhere near my medical records. If I was
in a position where the NHS handed my data to Google, I'd be beyond pissed
off.

------
killjoywashere
It's worth noting that DeepMind and Royal Free have revised their agreements
to address the initial complaints, such as those this paper addresses (note,
this paper came out in March).

------
peter303
A.I. expert systems like MYCIN were supposed to revolutionize medicine in the
1980s. They were sort of like the symptom handbooks you might see on WebMD.

------
amenod
If this is true, it paints a rather ugly picture of the future. "1984" comes
to mind - not in the sense that it's cruel, just in the sense that humans
don't hold their destiny in their own hands. It looks like even a powerful
government like the UK's can't protect its citizens against large
corporations in a field that is deeply personal - health.

I must say I never thought about it before, but DeepMind really doesn't seem
like a trustworthy entity to me. Owned by an advertising agency and willing
to pull deals like this? Not good.

~~~
chiefalchemist
Allow me to play devil's advocate for a second, please. I think it's worth
asking: is health really personal? That is, given the influence of the
environment, as well as of society (i.e., you conform to the behaviours
around you), is it time to apply a more holistic lens?

Furthermore, wouldn't prevention also benefit from supplementing the micro
(individual) with the macro? The awareness of, and "counterattack" against,
obesity comes to mind.

~~~
nvarsj
> I think it's worth asking: Is health really personal?

No one asks that because the answer is obvious.

I don't mind the government, or a non profit, using data (in an anonymous and
transparent way) to benefit the health of everyone. But giving such personal
data to a for-profit mega-corp, whose own processes are opaque and unaudited,
is criminal.

~~~
DennisAleynikov
What is the obvious answer? It's obvious to me that my ailments and sicknesses
aren't unique to me, and there is no point in claiming they are. While bad
actors can cause social damage by leaking healthcare details, that points
more to the issue that healthcare is taboo in society. Security through
obscurity is no way to live.

Healthcare is obviously not personal; it's something we're all in together.
Turning healthcare into a me-vs-them survival game isn't healthy, but it is
the current capitalist solution.

