
Mental-health information 'sold to advertisers' - abhi3
https://www.bbc.co.uk/news/technology-49578500
======
dangrover
I can't separate what actually happened from the sensationalizing in the
article.

It says the websites had Google cookies and others. OK, anybody using Google
Analytics has Google cookies. Anyone using FB plugins has FB cookies. Yes,
websites certainly have a lot of cookies these days, and the information
inferred from their use can be applied in a generic fashion to target and rank ads.
That's worthy of scrutiny -- but this isn't what the article is about. What is
the evidence of "mental health" being treated in some scary specific way?

The only substantial thing in the article supporting the headline is one
charge that a site sent quiz answers to Player.qualifo.com -- which appears to
be an unregistered domain.

OK, so what information was sold? When did money change hands? Is there any
hard evidence that mental health information was "sold to advertisers", as is
claimed in the headline? Or is this just bullshit they made up to get clicks?

For that matter, even if you wanted to do some evil psychographic ad thing,
why would anyone sell that info to advertisers? What are they gonna do with
it? Like, why hand them the literal data, when you could allow them to just
bid on ads that may be targeted/ranked using your painstakingly collected
psychographic data (a la Cambridge Analytica)? Why sell the cow when you can
sell the milk?

The fearmongering and disinformation around adtech and privacy numbs us to
legitimately scary things being done that should be covered more.

Makes me think of this Blake Ross rant: [https://medium.com/@blakeross/don-t-outsource-your-thinking-ad825a9b4653](https://medium.com/@blakeross/don-t-outsource-your-thinking-ad825a9b4653)

~~~
msbarnett
> It says the websites had Google cookies and others. OK, anybody using Google
> Analytics has Google cookies. Anyone using FB plugins has FB cookies.

It's not that complicated: certain websites, dealing in certain kinds of
sensitive information, may be legally required to seek explicit user consent
_before_ even using Google Analytics or Facebook Plugins.

Yes, Google Analytics use is damn near ubiquitous right now. That does not
mean it is unambiguously OK to slap it on every website under the sun.

> What is the evidence of "mental health" being treated in some scary specific
> way?

No evidence is required. The law is clear. That someone subject to GDPR
regulations had the fact that they were seeking information related to
depression transmitted to Google and Facebook, via Google Analytics and
Facebook scripts on the page, without their explicit consent to that, is _in
and of itself_ a violation of the law.

~~~
kodablah
I'm not sure it's clear that generic page analytics necessarily means that
one's presence on certain pages is any more sensitive than Google Maps knowing
that you zoomed in on a depression clinic. I can understand this is easy with
third-party analytics on specific-subject pages, but the waters may appear
muddier for web server logs, DNS lookups/caches, browser history, search
results, etc., where generic approaches may only be illegal depending on
what's visited or looked up. What if the analytics scripts promised not to
collect URL or page content (not saying that's the case here)?

If not already clarified somewhere (pardon my ignorance of the bodies/law
involved), I think these common cases should be spelled out as clearly
illegal, not as formal legal canon but as a helper to these services. Keeping
a list containing things like "it's illegal for sites remotely related to
health to use third-party analytics of any kind sans prior consent" (or "it's
illegal for sites remotely related to health to let the URL or content of the
page be known to third parties sans prior consent") could help.

I too struggled w/ some of the sensationalizing in the article, because I
read:

> Sensitive personal information about mental health is routinely being traded
> to advertisers around the web

as a bold headline, only to read:

> This was problematic because sensitive information could be broadcast to all
> of those bidding, PI said

I couldn't tell what was actually being sold routinely, personal info or ad
space. And I couldn't separate what "could be broadcast" from what was.

~~~
msbarnett
> I couldn't really tell what was really being sold routinely, personal info
> or ad space

It’s the same thing.

Look, the website deals in mental health information, right? And health
information is part of a class of information that’s protected under the GDPR.
Under that protection, explicit user consent is required before that info can
be shared with anyone.

When the website invokes the Google ad APIs, that results in Google learning
that this user, who they build an identity around via tracking cookies, is
reading a website with information about a health condition. Google isn’t the
website the user visited, it’s a third-party the website just informed without
explicitly asking for the user’s permission. That violates the GDPR. The
website makes money off of informing Google about this. That’s “selling user
health information” to an advertiser — Google.

Google turns around to its ad bid network and solicits bids to show an ad to
google push_id abcdefghihk123, who is visiting a website about mental health.
That, again, is a GDPR violation, because disclosing information about a
pseudonymous ID still counts under the GDPR (and the info can be de-anonymized
easily by aggregation networks). That, again, is Google selling personal info
to advertisers.

What it boils down to is you're thinking about this all wrong; the way
Google sells ads, _there’s no difference between selling personal info and
selling ad space_ because you’re always bidding to show an individual you’re
given data about an ad. The bids are based on how much the person’s profile
matches the bidder’s interests, and anything new learned about the person that
can be gleaned from the current website visit is added to that profile.
Advertisers are literally being told “here: this person with id 12345 and
these characteristics matching your target profile is currently reading about
depression. How much to show them an ad?” The bid request data they’re given
in turn is enough to identify the user in any number of data aggregation
services — most of which require “share and share alike” participation — so
the bidder looks up the bid request data and discovers probable email address,
location, income, marital status, interests, etc, and in return tells the
aggregation service “this user is currently visiting doihavedepression.com”.
Again, GDPR violation.

That’s the nature of modern online advertising. You’re never bidding to just
show N ads on site Y. There is no such thing, in using Google’s advertising
system, as selling an ad space on your site to Google that isn’t also a sale
of personal information to Google and by Google to the advertisers bidding to
fill that spot. As such there are entire classes of websites that probably
should not be showing Google ads.
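The bid-request flow described above can be sketched concretely. This is an illustrative toy in the spirit of the OpenRTB protocol used by real-time-bidding exchanges, not Google's actual wire format; the field names loosely follow OpenRTB conventions, and every value (IDs, URL, category code) is invented:

```python
# A simplified real-time-bidding request. The point: the page URL and content
# category are part of the payload broadcast to every bidder in the auction.
bid_request = {
    "id": "auction-001",
    "site": {
        # The page URL itself discloses the health context to every bidder.
        "page": "https://mental-health-site.example/depression-quiz",
        "cat": ["IAB7"],  # an IAB content-category code for health
    },
    "user": {
        "id": "abcdefghihk123",  # pseudonymous tracking ID
    },
    "device": {"geo": {"country": "GBR"}},
}

# Every bidder that receives this request learns that this pseudonymous user
# is currently on a depression page -- whether or not it wins the auction.
leaked_context = (bid_request["user"]["id"], bid_request["site"]["page"])
```

Losing bidders keep the request data; nothing in the protocol claws it back after the auction.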

------
RL_Quine
I have a rather specific condition that I’ve seen YouTube advertisements for
the drugs for, despite not even searching for the details of it except through
an anonymizer. I’ve always wondered what the cost per click for an
advertisement for a $100,000/yr drug is.

~~~
globuous
That’s so messed up. What do you think of advertisements for drugs in general?
Have they been helpful finding treatment for your condition ? They are illegal
where i’m from i believe (i’ve never seen one myself except in the us).

Btw, this reminds of this absolutely hilarious story i think i first heard of
here on HN: [https://ghostinfluence.com/the-ultimate-retaliation-
pranking...](https://ghostinfluence.com/the-ultimate-retaliation-pranking-my-
roommate-with-targeted-facebook-ads/)

~~~
thfuran
There's basically some segment of the population who will demand
$SOME_DRUG_THEY_SAW_ADS_FOR and will change doctors repeatedly until they can
get a prescription for it, even if there are better alternatives that were
recommended by the previous doctors. Eliminating direct advertisement won't
completely avoid that since there will always be blogs full of crazy nonsense
or whatever, but I don't think there's any reason that consumers should be
getting targeted for ads for things they can't even legally buy.

~~~
noir_lord
It's illegal to do that in the UK and therefore we have no (major) issues like
you describe.

One of those regulations that is a net good to society.

Interesting video
[https://www.youtube.com/watch?v=ic_FpRG7Z_k](https://www.youtube.com/watch?v=ic_FpRG7Z_k)

------
pimterry
One of the team behind this research posted an interesting walk-through of the
traffic from one of the worst offenders:
[https://twitter.com/Bendineliot/status/1169259912184115206](https://twitter.com/Bendineliot/status/1169259912184115206)

------
rhcom2
Personally, it really sucks to get ads targeted at my mental health diagnoses.

~~~
soared
Ads are not targeted at your diagnosis - that is absurd. Ads are targeted at
your browsing habits, which may reflect your diagnosis. One is illegal, one is
legal. If we want to have an open discussion about advertising, making claims
like that only hurts privacy advocates.

~~~
Ensorceled
It's absurd that you think advertisers can't generate this information:

1. Get addresses of mental care clinics and offices.

2. Geofence the addresses.

3. Correlate devices that visited the geofenced addresses (using LiveRamp data)
with devices that saw your ads on mental health sites.

4. Bonus: look at the path on the pages to figure out what disease they were
viewing when your ad was displayed, if you weren't already targeting specific
page content.

~~~
sambe
How would they access location data unless authorised by the user?

("absurd" is a strange word to use; it might be nicer to educate without
condescension).

~~~
Ensorceled
Sorry, I used "absurd" to echo the condescension of the parent comment, which
you didn't respond to....

Location data is so easily available that it is largely a commodity now;
sources include GPS from "always on" apps like the Weather Network, but also
apps that collect this data without your permission. Apple and Google are
constantly kicking apps that do this out of their stores.

Also, tricks like "local wifi" devices are being used; Apple just reduced
apps' ability to sniff networks for this reason. There is another response
that lists some other data sources, your cell phone company being the worst
offender for many reasons.

------
i_am_nomad
This kind of thing makes me want to start a small independent ISP that
automatically blocks trackers, a la Pi-Hole, unless the customer specifically
opts in for them. Though, I’m not sure if there are legal hurdles in the US
for doing this.
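The Pi-Hole approach is essentially a DNS resolver that refuses to answer queries for known tracker domains. A rough sketch of the opt-in variant described above; the domain list, customer store, and resolver interface are all made up for illustration:

```python
# Known tracker domains to block by default (hypothetical examples).
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

# Customers who explicitly opted in to allow trackers.
OPTED_IN = {"customer-42"}

def resolve(customer_id: str, domain: str, upstream):
    """Answer a DNS query: block tracker domains unless the customer opted in.

    `upstream` is a callable standing in for the real recursive resolver.
    Returning None models an NXDOMAIN-style refusal.
    """
    if domain in TRACKER_DOMAINS and customer_id not in OPTED_IN:
        return None  # block: behave as if the domain does not exist
    return upstream(domain)

# Example with a stubbed upstream resolver:
addr = resolve("customer-7", "news.example", lambda d: "93.184.216.34")
```

One caveat this sketch glosses over: DNS-over-HTTPS traffic bypasses the ISP's resolver entirely, so this only covers clients using the ISP's DNS.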

~~~
fredley
An ISP won't really cut it these days, since more and more internet use,
particularly in developing countries, is mobile. You need to start a small
independent _mobile carrier_.

~~~
jschwartzi
Someone needs to call John Legere and get T-Mobile working on this.

------
ropiwqefjnpoa
When I read things like this, any qualms I have about using ad/tracking
blockers melt away.

------
Spooky23
Companies have practices for opioid abuse surveillance and prevention that
highlight how available this data is. Example:

[https://www2.deloitte.com/us/en/pages/public-sector/solutions/solving-the-countrys-opioid-crisis.html](https://www2.deloitte.com/us/en/pages/public-sector/solutions/solving-the-countrys-opioid-crisis.html)

They use insurer datasets to model at-risk populations, especially populations
with high costs, to identify intervention opportunities. I saw one model where
they could identify all pregnant women in a state with 98% accuracy and score
them for risk of opioid dependency.

You can layer that type of data with online advertising from vendors like
Google and target behavioral factors that, combined with medical risk
factors, signal elevated risk. For example, a blue-collar worker with back
pain being treated with opioids has a baseline risk of abuse. If her address
changes or behaviors like online gambling appear, that risk increases.

Similar tech has been developed to combat extremist behavior.

[http://chicagopolicyreview.org/2019/04/18/can-online-ads-help-prevent-violent-extremism/](http://chicagopolicyreview.org/2019/04/18/can-online-ads-help-prevent-violent-extremism/)

In short, your health data is not meaningfully private.

------
inflatableDodo
Unfortunately, if you have the clout and money and a facile excuse, you can
also get data on patients straight from the NHS itself.

'Revealed: Google AI has access to huge haul of NHS patient data' -
[https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data/](https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data/)

'Data deadlines loom large for the NHS' -
[https://www.bmj.com/content/360/bmj.k1215](https://www.bmj.com/content/360/bmj.k1215)

------
cryoshon
this is exactly why i view surveillance and advertising as attacks on my
personal autonomy. advertisers will use anything and everything against me,
whether or not i ever consented to anyone knowing or using any of my
information.

if i'm surveilled, anything i do can be monetized and that monetization
directly and seriously harms me and exposes me to risk that i did not opt
into. if an advertiser knows that i am mentally ill on the basis of my search
queries or other data which they illegitimately procure without my consent and
against my active objections, they can target me for exploitation with an
arsenal of dirty psychological tricks designed to get me to buy their
products. if i am like most people, in the long run, they will win because i
will buy at least one product which they have forced onto me.

in other words, if i am forced against my will to view a targeted
advertisement, it is an inexcusable and unprovoked attack on my right to
refrain from economic activity that i do not wish to undertake. it is an
attempt at coercion using weaponized persuasion. it is not an attempt in good
faith to improve my life or to help me.

this remains true whether that advertisement targets an area where people are
explicitly exceptionally vulnerable, such as in the case of mental health, or
something more mundane, like my love of fast cars and nice wine. however, in
the case of mental health, it may be such that the act of targeting someone
with a relevant advertisement genuinely makes their condition worse. so, as we
all knew all along, these advertisements are actively, maliciously, and
viciously harming people for the sake of a few clicks.

the bottom line here is that advertisers and advertiser-enablers are long
overdue for their comeuppance. i'd support a ban on targeted advertisements,
but that won't happen legislatively. GDPR and similar laws are a start, but
they don't go nearly far enough to punish transgressors. i'd be more satisfied
with criminal liability for advertisements exploiting protected classes of
PII, but we'll see how things evolve.

------
jimbob45
Oh awesome. So does that mean we can finally do away with the gigantic money
sink that is HIPAA? Because if we're gonna have our health information leaked,
then there's no reason to keep up this charade.

~~~
djsumdog
I get the feeling these are just general websites with mental health
information, articles and little tests, but not actual health providers. If
it's just a directory and not your health insurance company or doctor, they
don't fall under HIPAA. HIPAA only applies in the US. EU laws and the GDPR,
which this article is discussing, are very different.

------
droithomme
Always use a VPN and incognito mode when accessing any sites for which your
interest has value to others that could harm you.

It's not surprising at all that web sites covering medical issues are also
tracking interest in those issues, correlating them with identities, and
selling them to third parties, perhaps including life and health insurance
companies, potential employers, etc.

------
amelius
> And many used Hotjar, a company that provides software that allows
> everything users type or click on to be logged and played back.

Er, what?

~~~
tiborsaas
It even records mouse movements, generates heatmaps, etc. You have to make an
extra effort to filter out certain input fields, like passwords or credit
card details.

[https://help.hotjar.com/hc/en-us/articles/115012439167-How-to-Manually-Suppress-Specific-Elements-and-Images](https://help.hotjar.com/hc/en-us/articles/115012439167-How-to-Manually-Suppress-Specific-Elements-and-Images)
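That record-by-default, suppress-by-exception model is the worrying part. A toy sketch (not Hotjar's actual code; the field names and payload shape are invented) showing that anything the site forgets to suppress ships verbatim:

```python
# Fields the site operator remembered to mark for suppression.
SUPPRESSED_FIELDS = {"password", "card_number"}

def record_event(field_name: str, value: str) -> dict:
    """Build the event payload a session-replay script would ship home.

    Suppressed fields are masked; everything else -- including quiz answers
    typed into a mental-health questionnaire -- is sent as-is.
    """
    masked = "*" * len(value) if field_name in SUPPRESSED_FIELDS else value
    return {"field": field_name, "value": masked}

events = [
    record_event("quiz_answer", "i feel hopeless most days"),
    record_event("password", "hunter2"),
]
```

The password is masked, but the quiz answer goes out in the clear, because suppression is opt-out per field rather than opt-in.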

~~~
coldcode
IBM had something like this once; I forget the name, something like TeaLeaf.
We were forced to use it in our mobile apps until it became obvious it was
terribly broken.

------
mediathrowaway
I worked for a major DSP / advertising platform and I was asked to set up a
campaign to advertise some particular drug to bi-polar people in a manner that
would specifically target that population.

I declined to do so and I was later fired. They were within their rights to
fire me and I was within my rights to decline. That is all.

------
body12
We need to start thinking ahead about who is going to run the federal agencies
and "tobacco truth" type organizations funded by the billions of damages from
the inevitable settlements, to avoid regulatory capture.

------
SCAQTony
The question that should be asked is what technology companies bought that
information and was that info sold to human resources departments?

------
seamyb88
Show HN: A boat for servers to keep your tech company on international waters
post-digital civil rights.

~~~
jetrink
Patent US7525207B2 - Water-based data center (Google LLC)

[https://patents.google.com/patent/US7525207B2/en](https://patents.google.com/patent/US7525207B2/en)

Google's [...] floating data centers off US coasts (2013)

[https://www.theguardian.com/technology/2013/oct/30/google-secret-floating-data-centers-california-maine](https://www.theguardian.com/technology/2013/oct/30/google-secret-floating-data-centers-california-maine)

~~~
bduerst
Why waste money on floating the servers when you can sink them?

Microsoft is already piloting it:

[https://www.bbc.com/news/technology-44368813](https://www.bbc.com/news/technology-44368813)

~~~
uoaei
Passive cooling with no rent costs is hard to beat. Just gotta get to the
bottom of the ocean first.

------
maxk42
As much as I value privacy and think America needs a new constitutional
amendment enshrining it in our rights, this title is clickbait and the entire
article is designed to whip you into a panic. Here's the crux:

"Privacy International (PI) investigated more than 100 mental health websites
in France, Germany and the UK.

It found many shared user data with third parties, including advertisers and
large technology companies."

Yes. Mental health websites use third-party ads just like everyone else. Case
closed.

They're not "selling mental-health information" as if they're violating HIPAA
or something. They're just ordinary websites with ads and other tracking
cookies.

This is more of the same silly half-truths that were used to put cookie
warnings on nearly every site on the internet. Don't fall for it.

~~~
msbarnett
> They're not "selling mental-health information" as if they're violating
> HIPAA or something. They're just ordinary websites with ads and other
> tracking cookies

> This is more of the silly kind of half-truths that were used to put cookie
> warnings on nearly every site on the internet. Don't fall for it.

I’d suggest instead that your post is the kind of “silly half-truth” that
people shouldn’t fall for.

As the article points out (and which you neglected to mention):

> And in the case of particularly sensitive data, such as health information,
> this consent must be explicit.

> But the PI investigation found many cookies were installed on people's
> devices before any consent had been given.

Contrary to your “it’s just an ordinary website, and ordinary websites use
cookies, nothing to see here” narrative, these are websites that specialize in
giving health advice. The knowledge that a person is seeking advice and tests
for indicators re: depression is _absolutely_ sensitive health information
(insurance companies and potential employers would happily weaponize that
information given half a chance), and the law therefore required _explicit
consent_ before using those cookies _and forwarding those depression indicator
survey answers to third-party marketing data-aggregation affiliates_, which
these websites did not seek.

This is a clear-cut violation.

~~~
TeMPOraL
Indeed.

I'd also add that:

> _half-truths that were used to put cookie warnings on nearly every site on
> the internet_

The half-truth here is that half-truths were used. Cookie warnings are "this
site spies on you" warnings (you don't need them if you only use cookies for
making the site work), which were introduced as a strong suggestion for the
industry to get its act together. It didn't, so now we have GDPR.

