
Google accused of 'trust demolition' over health app - 0xmohit
https://www.bbc.com/news/technology-46206677
======
crazygringo
The article doesn't seem to have any substance, it's just an article about one
person's opinionated tweet.

The team is joining DeepMind/Alphabet, and it appears DeepMind promised that
health data would never be joined to your Google account.

And there's zero evidence that they are joined or that they would be joined.
As far as I can tell, fearmongering like this would suggest that Alphabet
couldn't ever deal with health care info, which feels silly... There are
plenty of very strong laws enforcing health care privacy (e.g. HIPAA in the
US), so this really just feels like FUD.

~~~
Lapalata
Google is taking over everything and has the mindset of the most evil Big
Brother you could imagine. And yet you hope they will just nicely make sure no
correlation is drawn between you and your health data?

I don't get how people still try to defend Google. When you see what Google is
doing with our data, they should be prohibited from accessing any health
information on us. Google has just proven itself to be one of the most
data-hungry and unethical companies of the moment.

~~~
bun_at_work
And what exactly is Google doing with our data?

And how does that apply in this context? Are you conflating Google and their
parent: Alphabet?

Also, comparing Google to Big Brother is disingenuous at best. Google isn't
trying to control or limit what people think. They are merely trying to allow
advertisers to target audiences precisely.

I hate advertising, for sure. But the anti-Google fear-mongering and hate,
based on nothing substantial and leveraging an inaccurate view of what Google
does, is harmful for everyone involved.

~~~
math_and_stuff
> Google isn't trying to control or limit what people think.

I think proactive censorship for authoritarian governments (e.g., China) falls
into this category.

~~~
wafflesraccoon
Is that really Google's fault or China's for creating the policy? Google is
forced to follow the local laws in the countries that it operates in.

~~~
Drakim
While this isn't something I disagree with in principle, I can't help but
feel uneasy about it. Should companies have helped Nazi Germany identify
Jews, since that was the local law?

If not, then clearly the principle isn't so solid after all. But I don't know
what to replace it with.

~~~
bduerst
Even if Google didn't change search results, people who clicked on the
censored website links still wouldn't be able to load the sites. You have to
ask which is better - serving information in the framework provided or serving
no information at all.

~~~
Drakim
Yeah you are probably right about that.

------
ChrisSD
The core issue:

> "DeepMind repeatedly, unconditionally promised to 'never connect people's
> intimate, identifiable health data to Google'. Now it's announced... exactly
> that. This isn't transparency, it's trust demolition," [Lawyer and privacy
> expert Julia Powles] added.

I remember this from when DeepMind was first given access to UK patient data.
The firewall between them and Google proper was a major point at the time.

The article is scant on details but while this move might not "demolish" trust
it does seem to erode it.

~~~
mtgx
This is why Google's so-called "AI ethics board" is nothing but a sham. The AI
ethics board should have already stopped this, or the fact that Google
intended to use its AI in China for censorship, or use it for military drones
to find targets to kill.

But it didn't. It's just another PR thing Google did to get people off their
backs while they continue with their original plans for AI in advertising/user
tracking.

~~~
netcan
Do you happen to know what, more specifically than "ai ethics," the board
considers part of its job?

Are privacy issues around data sufficiently "AI" to be part of this?

I mean, I can easily picture a board of hyper-intelligent academic types who
are only interested in skynet or brain-in-a-vat situations.

~~~
burkaman
Nobody knows anything about the board.

> DeepMind has consistently refused to say who is on the board, what it
> discusses, or publicly confirm whether or not it has even officially met.

https://www.theguardian.com/technology/2017/jan/26/google-deepmind-ai-ethics-board

------
CaptainZapp
When being admitted to hospital, or even for an MRI, one of the questions on
the form is usually

 _Are we allowed to share your data in anonymized form for research_ ?

I never thought much of that and, of course, always answered Yes.

Nowadays, after the shit that Facebook[1] and Google are trying to pull off,
my answer is a resounding:

 _Hell, No!_

Do those guys actually consider how much they hurt science and by extension
patients?

Scum!

[1] http://fortune.com/2018/04/06/facebook-medical-data-sharing-hospitals/

~~~
brlewis
Do you think the hospital is lying about "anonymized form"?

~~~
TeMPOraL
Yes and no.

As I'm fond of saying, there ain't no such thing as "anonymized"; there's only
"anonymized until combined with other data sets".

Plenty of non-obvious things can deanonymize you: an accurate enough
timestamp, a rare enough medication or treatment you received, the combination
of treatments you received. It's all fine until a chain of data sets forms
that can identify you with high probability.
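A toy sketch of that kind of linkage attack (all names and records below are
made up for illustration): joining an "anonymized" record set against a public
roster on just a few quasi-identifiers is enough to re-identify someone.

```python
# Toy linkage attack: an "anonymized" medical data set still carries
# quasi-identifiers (zip code, date of birth, sex) that can be joined
# against a public roster, e.g. a voter registration list, to
# re-identify patients. All records here are fabricated.

anonymized_records = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "rare-condition-X"},
    {"zip": "90210", "dob": "1980-01-01", "sex": "M", "diagnosis": "flu"},
]

public_roster = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "90210", "dob": "1990-05-05", "sex": "M"},
]

def reidentify(records, roster):
    """Join the two data sets on the shared quasi-identifiers."""
    hits = []
    for rec in records:
        for person in roster:
            if all(rec[k] == person[k] for k in ("zip", "dob", "sex")):
                hits.append((person["name"], rec["diagnosis"]))
    return hits

print(reidentify(anonymized_records, public_roster))
# Jane Doe ends up linked to her "anonymous" diagnosis.
```

This is essentially the attack Latanya Sweeney demonstrated against "de-identified"
hospital records in the 1990s: zip code, birth date, and sex alone uniquely
identify a large fraction of the population.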

Unrelated: I too used to be all for "share my data with whoever you need for
medical research". These days, I worry that "medical research" doesn't mean
actual research, but random startups getting deals with hospitals - startups
that I don't trust not to play fast and loose with the data, and don't trust
to share the results with the wider community. I think there was even an HN
story about that some time ago.

~~~
ikiris
Having worked in healthcare, I'd say the startups probably have much better
data security. Medical is a horror show of bad data practice. I'd trust Uber with
my data before I thought any major health org had any chance of doing it
right.

------
drcode
The health+ML space is in a real ethical quandary right now, because at some
level it's in all of our interests to have machine learning algos run against
large patient data sets. I'm guessing that 100 years from now people will look
down on us for not solving this coordination problem faster, and using all of
the medical data we've collected to accelerate progress on disease research...

...but how do we do it without also transforming into a dystopia of "radical
transparency"?

~~~
013a
I tend to believe that it isn't a problem with the core idea or philosophy of
privacy (and I'm an intensely staunch privacy advocate). I believe it's a
problem with Google. Or more generally, any company which has any financial
incentive to use data in ways that the customer does not expect. It's about
the human responsibility on the receiving end of the data, and Google is a Bad
Seed.

With the Apple Watch, Apple partnered with Stanford for a heart study to
identify a-fib using smartwatch-sourced heart rate data. They attracted
400,000 participants. [1]

I would guess that exactly zero of these participants feel cheated by their
participation in this study (so far). Why? It's the same general concept as
this DeepMind stuff. I'd argue it's three things: clear, precise, opt-in
consent for the data shared; a clear and constrained explanation of its usage;
and trust in the receiving parties.

Apple and Stanford have these three things in spades. Google is incapable of
all of them, and you can't Design or Buy your way through corporate culture
and customer trust. They gather and correlate so much data, their consent
process is far from precise or opt-in. Once they have the data, they have a
history of using it for Whatever they want. Which all ties back to trust;
Google has permanently ruined any trust customers would have toward them for
things that Actually Matter.

Point being, other companies can pick up this mantle. It won't be Google. And
if you're a health-focused company, joining Google is a literal death sentence
for your product. You'll end up prototyping something amazing, then be
completely incapable of deploying it for any public good because no one will
give you data.

[1] http://fortune.com/2018/11/02/stanford-apple-watch-heart-study/

~~~
gowld
Who _doesn't_ have an incentive to misuse data?

A company can sell it to advertisers. A government can use it to run a
genocide. An individual can use it for blackmail. Who's left?

~~~
013a
Who doesn't have an incentive to mug someone on the street? So, what stops
them? Morality. Law. Responsibility. Reputation.

All of this applies to companies. Morality comes from shared culture and
values. Laws like GDPR help. Reputation is a huge one; if companies want any
sort of meaningful enterprise contracts, especially with PII/PHI, data privacy
and security is paramount.

------
scrooched_moose
This gets to the core reason I've basically given up on all new services.

Eventually, no matter how virtuous a company claims to be, they'll sell the
company to FAANG who I try to avoid as much as possible. When the true end
goal of most startups is an exit payday to Google I can't trust them with my
data.

It's frustrating that avoiding Google isn't even as simple as "Don't use
Google". It's become "Don't use anyone who might be an acquisition target for
Google".

~~~
Cyclone_
Apple hasn't proven to be untrustworthy with data, they have a much different
business model than the others. I've never heard of Netflix doing anything
that bad with data either.

~~~
scrooched_moose
That's true, I just grabbed the acronym as a shortcut for "the huge technology
companies who suck up user data at all costs". It probably wasn't fair to
include them, and Microsoft might be worth tossing in. FAAM?

~~~
FabHK
Did you mean FAMG?

~~~
scrooched_moose
Yeah, I did. Oops.

------
denzil_correa
At this point in time, the only way to ensure privacy is to have it embedded
in a legal contract. Otherwise, all claims of "never use" or "will not be
shared" mean nothing.

------
petilon
Speaking of 'trust demolition', here's how Google has demolished my trust: In
Chrome they have added a new option: "Allow Chrome sign-in". It appears to be
a placebo. Even if it is turned off, if you sign in to Gmail then your browser
is also signed in, enabling Google to surveil you even more. There appears to
be a pattern here. Google maps tracks your location even when you turn off
location history. I used to be an unabashed Google fan, now I am in the
process of de-googling my life wherever I can.

~~~
FabHK
Welcome to the club. I'm on DDG for search, Apple Maps where possible, Zoho
for shared spreadsheets etc., Youtube in a browser in incognito mode, or
downloading via youtube-dl in the terminal, and my old Gmail is set to
forward to other accounts, which I primarily use. Anything else one can do
to degoogle? (How nice it would be if that word also entered the language...)

------
Jmcdd
Surely anyone bothered by this can just request that their data be nuked?
Thanks GDPR.

~~~
LeoPanthera
Will the GDPR still apply in the UK after it leaves the EU?

~~~
pdpi
AIUI, the GDPR by itself doesn't apply to the UK (or any other EU member state
in particular). Instead, the GDPR forces member states to enact laws that
implement those rules.

This means that, after Brexit, the GDPR implementation laws will still be law
in the UK. Depending on the outcome of the Brexit negotiations, the UK might
or might not be in a position to repeal those laws at their own discretion.

~~~
detaro
The key word is General Data Protection _Regulation_. Regulations are law and
apply directly; no local implementation is needed (except for "interfaces",
e.g. in the case of the GDPR, changes to existing laws to clarify how they
interact with it and to make the exceptions it explicitly allows the states to
make).

 _Directives_ are the ones that only direct the states to enact laws
implementing them.

~~~
DanBC
But in the case of the GDPR, the UK has also enacted the DPA 2018, which
implements it.

------
throw2016
This is the perfect example of why unconstrained self interest and greed are
destructive for the ecosystems that sustain them.

There are no limits to greed and that's why there are serious constraints on
all sorts of things that can damage the commons and why the only legitimate
force is the common good.

And it's a lesson for the tech community, who have seen first hand the rapid
transformation of seemingly well-meaning ethical actors into self-obsessed,
exploitative bad actors completely divorced from ethics.

------
orbifold
What I really hate about this is how once-great nations like Great Britain,
which at one point ruled 1/5th of the planet, surrender more and more of their
competencies to corporations. It is high time to fight this and stem the tide.
The rulers of these corporations should tremble before the might of
(realistically only some) nation states; instead it is the other way around.

~~~
lawlessone
>once great nations like Great Britain, which at one point ruled 1/5th

Nah screw that empire. 29 Million killed in India and 1 million killed in
Ireland.

~~~
orbifold
Well, that is kind of my point: as a company you would think twice about
screwing over a country that is capable of such brutality. If you look at
Britain's leaders today, they are old, really poor, and hold second-class BA
degrees in geography; compared to the highly skilled and wealthy adversaries
they are up against, such as Eric Schmidt or Jeff Bezos, they are basically
outcompeted in every respect. It is not a level playing field, on either an
individual or even an organisational level.

------
altfredd
The most alarming part about this story is that DeepMind's involvement is
largely unneeded and their "accomplishments" are superficial to say the least:

> DeepMind Health went on to work with Moorfields Eye Hospital, with machine-
> learning algorithms scouring images of eyes for signs of conditions such as
> macular degeneration

Sorry, but "scouring images of eyes for signs of conditions" on the scale of a
single hospital is a task for two CS graduates, easily accomplished with
freely available machine learning tools. The hospital in question could have
done that themselves at minuscule cost. Are UK hospitals legally prohibited
from hiring non-medical staff or something? Instead they are _partnering_
(conspiring) with international companies to... do what again? Write Android
apps and feed images to neural networks? In exchange for their entire medical
data??

Is the UK becoming another India or something?

------
naaymoo
Whoever wrote this goes into detail about how Google should not be trusted
with this app, and how they will now have the power to post your personal
information online. This app could prove helpful to doctors and nurses. I do
agree with the author that it is scary that a multi-million internet company
will have access to medical information, but the thing is that hospitals have
been using internet-based devices for storing patient information for years.
Technically, Google could have already had access to all of this information
(they wouldn't, because that is illegal). We also have no idea how this app
works; it could use end-to-end encryption, which means that Google could not
get the information if they tried. We have no idea what Google's plans are,
but I am fairly sure they will not be breaking any HIPAA laws.

~~~
KaiserPro
_could_ help doctors

They are not bound by HIPAA, as it's the UK.

------
ocdtrekkie
I fail to understand how this is legal. It was found that the NHS gave
DeepMind the data illegally, and the inquiry was only closed because of the
assurance that Google would never get the data. Now that the inquiry is
closed, they are giving Google the data. The ICO needs to reopen its
investigation.

------
growlist
I'll be contacting Moorfields to request my eye scans are not passed to
Google.

------
carapace
Compare and contrast this story about a staunchly Western nation and a
capitalist corporation with the story that just appeared [1] about Venezuela
and China's ZTE.

In China the tech/data hegemony is part of the central government, while in
the West it's separate; FAANG et al. are expected to keep their distance from
the government and _vice versa_.

I was trying to imagine a scenario for a science-fiction story set around
2040, and my brain conjured an image of the people of China chipped and
managed by computer... It was chilling. As for the West, I imagine we're going
to have to nationalize the data and infrastructure of the tech companies OR
acknowledge them as the new technocratic form of government. Either that or
bifurcate into Morlocks and Eloi, in which case it doesn't matter what form
the control system takes.

I guess what I'm asking is, which system do you think will be stable in the
long term ("long" meaning 20 to 70 years, the time it takes for the weather to
get really hard to ignore), and why? Or will something else happen?

[1] "How ZTE helps Venezuela create China-style social control"

https://www.reuters.com/investigates/special-report/venezuela-zte/

"A new Venezuelan ID, created with China's ZTE, tracks citizen behavior"
(reuters.com)

[https://news.ycombinator.com/item?id=18451109](https://news.ycombinator.com/item?id=18451109)

------
duxup
What does this AI do medically?

Is it just playing the odds: "oh, this and this are probably this or this or
this"?

~~~
EpicEng
> "oh, this and this are probably this or this or this"

What do you imagine a doctor does? They use their education and experience to
make an educated guess as to a course of testing/treatment.

ML models are developed under the supervision of doctors (often leaders in
their field) and engineers and are validated against large/statistically
significant cohorts.

Source: Spent five years working for a company which released an IF/ML
prognostic model for late stage prostate cancer.

~~~
duxup
I don't doubt a doctor does the same thing with varying levels of success.

I was just curious what the end game was with it.

I am a bit skeptical of just playing the averages but no more / less than
individual doctors.

~~~
EpicEng
>I am a bit skeptical of just playing the averages but no more / less than
individual doctors

That would be the minimum standard, but you gain efficiency and lower costs
(well, theoretically... companies throw crap in just to hit a higher
reimbursement tier.) The models often do better than doctors (where
applicable) because oftentimes doctors won't agree with each other. Like in
any profession, you have varying levels of competence. My work has always been
in the realm of pathology, so I don't know much about other areas. In
pathology you will often see five different doctors give five different
interpretations when looking at the same sample.

------
ChrisSD
Btw, can the link be changed to the non-AMP url[0]? I was wondering why the
page was taking so long to load until I noticed.

[0]:
[https://www.bbc.co.uk/news/technology-46206677](https://www.bbc.co.uk/news/technology-46206677)

~~~
sp332
I'm not really pro-AMP, but your link takes twice as long to load for me as
the AMP one. And I'm on Firefox.

~~~
ChrisSD
The link above loads more or less instantly for me. The AMP version shows
nothing but white for a while before showing the page.

~~~
josefx
I get a slow redirect from the co.uk to the .com version.

~~~
ChrisSD
Ha, now the link has been changed to the .com site I get a slow redirect to
.uk. In fairness it's still faster than AMP and it's only the first redirect
that was slow. But still, that's one slow redirect.

------
sandworm101
The next wave of medical advancement may require us to give up some privacy.
It has worked in the past.

Once upon a time childhood cancer was a death sentence. Cancer in young kids
moves very fast. Doctors in the 50s/60s did their best but studied the problem
as individuals, writing and presenting papers based on patients at their
hospitals. But nobody had enough data to discover the incremental
improvements. Then docs started getting together and adding patients from
multiple hospitals into larger and larger studies. From this larger pool of
patients came trends and treatment advice that, today, means childhood cancer
is largely survivable. (It is still horrible, but today many childhood cancers
are very treatable.) That movement required that patient data leave the hands of
their individual doctors. Today EVERY kid with cancer is part of multiple
studies and it is normal for their information to be shared far and wide. AI
may be the next great thing, but it needs data. It may be necessary for
patients to again give up a little privacy to enable progress.

~~~
Spooky23
I have no problem with data sharing for clinical or research purposes,
especially since ethical standards for medical research provide meaningful
privacy protection.

I do have a problem with insurance organizations and pharmacies abusing data
sharing agreements intended for subrogation and similar procedures to manage
pharmaceutical sales quotas and conduct outbound marketing.

Case in point: my wife was admitted to the hospital due to complications from
an early miscarriage. The health insurer sells data that allows
an advertiser to surmise that there was a hospital admission to the OB
department. The PBM provides prescription information to anyone paying, before
my insurer even gets the claim.

Outcome: An advertiser (infant formula company) determines that my wife is
likely pregnant and likely to deliver on Month/Day/Year. Guess what arrives on
that day? A Fedex care package of formula.

That was a very hurtful event for us, and similar violations happen thousands
of times every day.

~~~
zaroth
There’s a “funny” Target story where a family starts getting ads and coupons
for baby products in their Target mailers after the teen daughter becomes
pregnant (unbeknownst to the parents).

In that case it had nothing to do with mining medical records for advertising
purposes. The daughter’s browsing and shopping habits sent a strong enough
signal to trigger the ad targeting.

I don’t know anything about your case, and am very sorry to hear about your
family’s loss. I don’t know if you can draw a line to the insurance company
selling you out. But now I’m very curious to learn more about what data
insurance companies are allowed to sell, and to whom.

~~~
Spooky23
I appreciate that. I've posted this a few times on these matters because it
was very impactful to us and really illustrates the farce of medical privacy.
It is my small way of perhaps inspiring a positive outcome from an awful
event.

I know the insurance companies, hospital and PBM sold pieces of the data
because the formula company, immediately upon request, disclosed the list from
which they obtained my name, and I identified who had the relevant information
by process of elimination. I don't know specifically all of the ways this is
done.

Basically, claims data is sold, but not the diagnosis. There is other context
(type of admission, source of claim) that can identify the reason with
confidence (e.g. ER admission, hospital admission, claim from an OB/GYN). The
prescription can strengthen the assumed condition, and your pharmacy provides
that data in near real-time to pharmaceutical companies, brokers and others.
That script is tied to the DEA number of the doctor and can be cross-
referenced to the admission.

The formula company takes that data and mashes against people who have used
their coupons in the past.

