
With ePrivacy looming, German publishers scramble to get users logged in - okket
https://digiday.com/media/eprivacy-looming-german-publishers-scramble-get-users-logged/
======
guy98238710
> _lose upward of €300 million ($372 million) in yearly ad sales_

I really like these euphemisms: talking about lost ad sales when it's really
about lost personal-data sales.

------
driverdan
I don't understand the outcry. Cookies are not required to show users ads or
track clickthrough rates. Why will there be a loss of ad revenue? We had ads
before privacy invasive techniques like retargeting.
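The revenue link is the third-party cookie. A minimal sketch of the mechanism (all names and the ad-selection rule here are illustrative, not any real ad network's API): because the same tracker cookie ID arrives from every publisher page that embeds the tracker, the tracker can correlate visits across unrelated sites and retarget accordingly.

```python
# Hypothetical sketch: how a third-party tracking cookie lets one ad
# network correlate a user's visits across unrelated publisher sites.
profile_db = {}  # tracker-side store: cookie ID -> pages seen

def serve_ad(cookie_id, publisher_page):
    """Called each time the tracker's ad iframe loads on a publisher page.
    The same cookie ID arrives from every site embedding the tracker,
    so the tracker accumulates the user's cross-site history."""
    profile_db.setdefault(cookie_id, []).append(publisher_page)
    history = profile_db[cookie_id]
    # Retargeting: pick an ad based on pages seen on *other* sites.
    if any("fishing" in page for page in history):
        return "fishing-gear ad"
    return "generic ad"

serve_ad("uid-42", "newssite.example/politics")       # no profile yet
serve_ad("uid-42", "hobbyshop.example/fishing-rods")  # interest recorded
# Back on the news site, the ad is now targeted by the hobby-shop visit:
serve_ad("uid-42", "newssite.example/weather")
```

Without the cross-site cookie, each publisher only sees its own pages, which is why publishers expect the untargeted inventory to sell for less.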

~~~
esrauch
> Why will there be a loss of ad revenue?

Presumably the privacy invasive techniques are done specifically because they
increase ad revenue, not because the newspapers want to be nefarious.

~~~
ahultgren
But it can easily be argued that it's been a zero-sum game, where one actor
has been forced to use more targeted ads because others have been more
efficient (cheaper) by doing so. It's not like ad spending has increased;
unless you're arguing that consumption has increased due to targeted/invasive
ads?

~~~
esrauch
It is difficult for me to see how it would be zero sum. Instead it seems very
likely to me that the total amount spent by users online is higher with
invasive ads than non-invasive ones: if no ad you see is relevant to you then
you are less likely to buy anything online at all.

~~~
strken
There's an argument I've seen - no idea whether it's correct - that users
acclimatise to new advertising technology and eventually regress to the
mean. I suppose this would mean that newer, more invasive ads cause a spike
in consumer spending, which drops over time until the next innovation.

~~~
daveFNbuck
I've never seen someone make that argument for better targeting, which is what
they're saying cookies enable. You're not going to acclimate to being more
interested in the ads you're seeing.

~~~
TeMPOraL
The argument for better targeting is simple: targeting works (whether directly
or just through being a novelty), and those who engage in it get ahead of
those who don't.

The argument for getting used to ads is mostly psychology/neuroscience.

------
guy98238710
If Wikipedia is right that _"no consent is needed for non-privacy intrusive
cookies improving internet experience"_, then that's a good sign the EU is
finally adopting ethics-based regulation instead of the old capability-based
regulation that threatened to cripple the Internet. It also means they are now
specifically targeting the advertising business instead of legislating
universally technophobic nonsense.

~~~
pjc50
That was always the case (consent only required for _third party_ cookies),
but nobody understood it. Or was willing to give up Google Analytics. The EU
has a bit of an issue with communicating to the public.

~~~
guy98238710
Where is it written that it applies only to third-party cookies? AFAIK the old
cookie law only allowed cookies without permission, first party or third
party, where they were _required_ for the website to function, e.g. shopping-
cart cookies. I interpret this as banning even those cookies that improved
user experience but weren't strictly required for the website to function.

~~~
pjc50
See
[http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm](http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm)
which lists various examples. I've got to admit that I'm struggling to come up
with an example that "improves user experience" that would genuinely not be
allowed.

~~~
guy98238710
Actually, coming up with an example is quite easy. Consider a website with a
huge catalog that customizes recommendations based on your previous activity
on the website. This makes the user experience better, but it is not strictly
required for the site to function. None of the exceptions on the page you
linked apply to this case.

~~~
pjc50
If the user has an account, that cookie would be subsumed in there, along with
the consent notice that the site was storing your activity.

If there is no other cookie, then it gets interesting - you're storing a chunk
of their browser history, which if you can identify it to a particular user
becomes _personal data_. Then you need their permission to store this - and
that's the intent of the GDPR.

------
zerostar07
Browsers should have taken that space long ago. Mozilla tried with Persona,
but inexplicably it was abandoned. We could have granular control of our real
and pseudo identities.

~~~
glenstein
Persona is exactly where my mind went with this as well. While Yahoo was
flailing, trying to find some new vision, one of the few good things they did
was partner with a company that really has a vision: Mozilla. Yahoo could
have moved to align the significant portion of the web under their control
into compatibility with Mozilla's various efforts, such as Persona.

To me that's going to go down in internet history as one of the ultimate
what-ifs. I feel like it was a missed opportunity for both companies.

~~~
nathanaldensr
We don't need companies controlling the internet, and we _certainly_ don't
want them controlling our identity systems. The current status quo is already
bad enough. Where is the implementation that allows _me_ to own my identity?
Why are my user records locked away in some (many) corporation's database?

~~~
kbenson
You're right that we don't need companies controlling the internet, but it
sure would have been nice to have a good login system backed by a company with
the intentions and motivations of Mozilla (a non-profit).

Businesses aren't going to trade their content out of the goodness of their
heart, they want something in exchange. I would much rather trade ad views
through Mozilla using a list of my interests that I control than to allow
excessive tracking and profile building by some untrusted third party.

What we apparently have now is login collectives starting up, but who's to say
how good or bad each one's practices are, or whether they can adequately
secure their systems.

------
TekMol
What is this about? What is this ePrivacy thing? How is it related to logins?

~~~
zerostar07
I think they're talking about this:
[https://en.wikipedia.org/wiki/General_Data_Protection_Regula...](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation)

~~~
marknadal
If that is true, why not throw things into some P2P cryptographically secure
user account system like this[1] (disclaimer: ours).

From a technology perspective, it is non-trivial for corps/companies to do
this (since they don't value that). But the technology to do it already
exists, so if laws are forcing people this direction, what is the best way to
market the availability of systems like this?

Any thoughts/clues? Thanks!

[1] [https://hackernoon.com/so-you-want-to-build-a-p2p-twitter-wi...](https://hackernoon.com/so-you-want-to-build-a-p2p-twitter-with-e2e-encryption-f90505b2ff8)

~~~
anomie31
That's a good idea, but if I were a German citizen I'd rather they use
something tried and true like Shibboleth. I can't speak for the rest of
the world, but Shibboleth is what most of America's colleges and universities
use to allow students to access research material the institution has paid
for. Something that isn't obvious, though, is that anyone can set up a
Shibboleth identity provider; the only difference is that it's not on the
whitelists that academic identity providers are on.

~~~
closeparen
The general term for this property is “federated identity” and academia is
probably the most mature and widespread implementation - academic institutions
regularly allow visitors from other institutions to use their home
institution’s credentials, for example in the worldwide “eduroam” WiFi
network.

------
erikrothoff
I really don't understand what the EU's goal is here. Someone said that it's
to model the "walled garden of regulation" approach that China has: make
regulations so tough that they block outside actors, creating an internal
market that can thrive, while also breaking the dominance of American tech
companies within the EU and fostering innovation.

However, I have yet to see a single argument that these regulations (both
GDPR and ePrivacy) don't hurt EU businesses while favoring companies outside
that don't have to abide by these stricter regulations.

So why has EU adopted this extremely hard line approach? What is the best case
scenario?

~~~
PeterisP
The reason for this regulation is the privacy of EU citizens. Businesses (both
in the EU and outside) don't "behave well" by themselves, since there's an
inherent profit motive in violating users' privacy and interests, so this
regulation forces them to.

The best case is just that: businesses serving EU citizens (this includes
most large multinational businesses as well; companies hosted outside _do_
have to abide by these stricter regulations at least for EU users, since
almost always they also have EU customers or an EU presence) have to obey
these rules. They don't get to (ab)use users' data that much, and are unhappy
about this, but figure out how to live without that data.

~~~
erikrothoff
I absolutely agree that privacy is important. However, what I'd like to
understand more is: what are the regulators actually expecting to happen?
Looking at the first version of the cookie law, the end result was basically a
worse user experience across EU websites: large warnings that take up screen
real estate, and people learning to just blindly accept without reading. I
was trying to explain to my SO that I'd been to a presentation about GDPR and
ePrivacy, and said, "It's the new version of the cookie warning." "Oh, is that
those awful warnings? I hate those; I don't even understand what they're
trying to say."

From a pragmatic standpoint the result of these new regulations seems to be
not that publishers and other ad-financed businesses are saying "OK, ads are
bad, let's do less of those". They are finding workarounds that abide by the
law, but still allow them to target ads as optimally as they can.

The true winners in this are obviously the platforms with the largest
databases of users and user interests: Google and Facebook. The two companies
are already dominant, and in essence killing smaller local (EU) publishers. So
from that perspective, are the regulators hoping that publishers and local
companies band together, creating cookie pools and sharing user information
amongst themselves to strengthen their position against FB and GOOG? Or are
they hoping that EU companies come up with the holy grail of new revenue
streams that nobody has thought of yet?

TL;DR: In my mind, to stop bad things regulators make laws against them, and
also promote the "right way". Example: people get hit by cars crossing the
road anywhere. So we invent the crosswalk and make it illegal to cross the
road anywhere other than the crosswalk. What is the ePrivacy version of the
crosswalk?

~~~
PeterisP
The main problem with the "cookie law" is that the law as written failed to
implement the intent; the intent was to reduce tracking by cookies, but the
actual requirement could be satisfied by those obnoxious warnings while
tracking continued as before, and so it did. If the concept of "consent to
cookies" had been worded more strongly, as in the current laws, then many
websites would not have shown these warnings and would simply have made do
without the cookies, since they wouldn't have been able to obtain consent from
most people anyway.

The new laws seem to avoid this mistake (of course, only time will tell for
sure): they are harsh enough to ensure the strategy of "figure out what
legalese we need to add to keep doing the same things" is not viable, so
companies have to actually stop gathering and using that data.

In my reading, GDPR seems like it will _not_ enable workarounds that still
allow targeting ads as optimally as they could in 2017. They'll still try to
target them as optimally as they can, they'll probably figure out _some_
workarounds for parts of it but that will by necessity be ads that are less
targeted and use less private information (at least for EU residents) than
last year. Also, the intent of lawmakers is clear - whatever workarounds the
ad industry figures out to circumvent that intent are likely to be closed in
3-5 years with the next iteration of this directive.

The regulators are _not_ "hoping that publishers and local companies band
together, creating cookie pools and sharing user information amongst
themselves" - they are explicitly creating conditions to ensure that it's
almost impossible to get consent for that sharing of user information, and
also that Facebook and Google will be able to get much less information than
they could before. For example, the GDPR has a big impact on information
gathered by all the Facebook Like buttons and Google Analytics snippets on
third-party websites, and the "4% of global turnover" clause is included
explicitly so that regulators have leverage to force Google and FB to change
their behavior (at
least for EU residents). There's no such thing as a "service-wide" opt-in for
everything anymore; I will be able to deny Google consent to use my gmail data
for targeting ads on google.com, and Google will have to obey; I will be able
to ask Facebook for the data they've gathered about me on third party websites
and get them to remove that data.

Since it's expected that most EU consumers will not give that consent (why
would they?) it will pull Google and FB down to the level of those ordinary
publishers; they will lose the data that historically gave them this
advantage. They're not hoping that EU companies will magically grow new
revenue streams, they are taking away those revenue streams (regarding EU
customers) for all companies, EU and foreign.

This seems to be what the regulators actually expect to happen, and from
what I've been seeing in the internal activities of local businesses, it's going to
happen - pretty much everyone here has major ongoing IT projects that will
reduce the amount of private data that they are gathering and storing. _That_
is the ePrivacy version of the crosswalk - stop gathering private data about
users, anonymize/cleanse the data you already have, and do your advertising in
a more coarse-grained manner.

~~~
ec109685
I think you are expecting too much. Folks are going to click consent buttons
in order to use the websites they want to use, and never think to go to a
privacy dashboard to view/modify/delete their data.

~~~
PeterisP
No, GDPR quite explicitly plans for that.

First, there's no such thing as "click consent buttons in order to use the
websites they want to use" - if clicking that button is needed to use the
website, then it doesn't count as freely given consent. I.e. either the
choices are "Agree and proceed" and "Disagree, but still proceed", or you
might as well not have bothered asking, since that click is not consent. If
the data is not inherently needed to provide the service (e.g. a delivery
address if you want something delivered), then users must have the option to
refuse.

Second, there can be no single "consent to do everything". You have to explain
every single use case _separately_, with _details_ and _specifics_, and get
consent for each use case _separately_. This is a big issue. If a small
company has something particular in mind, it's not a big deal. But if a
megacorp has (had) a habit to use your data everywhere for everything, that's
not going to work. As GDPR states, “A purpose that is vague or general, such
as for instance ‘Improving users’ experience’, ‘marketing purposes’, or
‘future research’ will – without further detail – usually not meet the
criteria of being ‘specific’”. If the list of use cases and their details
doesn't fit on a single screen, users simply won't accept them. For an
organization like Facebook, it'll be a major challenge to simply list the
places and ways where they are _already_ using my data - and they'll either
have to give that full list to me and somehow convince me that I should agree
(as the default option is to refuse with no consequence to me) or stop doing
it.

In essence, you have to bring them to the privacy dashboard first. And
everything has to be opt-in (default off). If you have a random consent button
like the classic "Agree" under terms and conditions - well, that doesn't count
as informed consent, so if that was your system then you don't have consent
from _any_ of your users, and your use of their data was prohibited. Even if
most users won't care, a few activists can force you to change the behavior
for everyone or risk major fines - i.e. if I see that you're using my data
without my consent, then it doesn't stop at you removing my data; the
regulator can ask you to prove that you got informed, specific, freely given
consent from all your other users.

Third, previous consent doesn't imply future consent - if you want to use the
data for something new, previously this was done by quietly amending the terms
and conditions, and adding a new option to the privacy dashboard (often pre-
selected). Under GDPR, every new use requires new consent.

Fourth, a lot of the tracking has historically been done by third parties. I
can see myself giving consent to use my data on some forum or blog that I
frequent; however, the consent that I give _them_ doesn't count as consent for
all the ad networks running the ads on their site. The ad networks would have
to ask permission separately (which they won't realistically get, ever) and
they also can't count on getting the site owner to give them my data, since
site owner needs my (separate! opt-in! specific instead of "may share with
third parties"!) consent for that.

~~~
ec109685
What if your website doesn’t work without cookies?

“We are collecting this data in order to provide relevant stories and
advertisements on this website and partner websites. Please click Yes to
enable your personalized feed”.

~~~
PeterisP
Each purpose must be listed separately; the fact that user gave you the data
and consent to use it for one purpose doesn't imply that you're allowed to use
that data for another purpose.

In your example:

"This data is absolutely required for the service you requested" >> no consent
needed, you're just informing the user;

"Also, we'd like to use that data to personalize your feed" >> opt-in;

"Also, we'd like to use that data to personalize your advertisements on
partner websites" >> a separate, different opt-in. The user can enable the
personalized feed while denying you consent to share that data with
advertisers.
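This per-purpose structure can be sketched as data (an illustrative model only, not legal advice; the purpose names and the `ConsentRecord` class are hypothetical): service-required data needs only disclosure, while every other purpose is a separate, default-off opt-in that can be withdrawn.

```python
# Illustrative sketch of GDPR-style per-purpose consent, as described
# above: separate opt-ins, default off, withdrawable at any time.
REQUIRED = {"fulfil_delivery"}  # needed to provide the requested service
OPTIONAL_PURPOSES = {"personalized_feed", "share_with_ad_partners"}

class ConsentRecord:
    def __init__(self):
        # Every optional purpose starts as refused (opt-in, default off).
        self.granted = {p: False for p in OPTIONAL_PURPOSES}

    def opt_in(self, purpose):
        self.granted[purpose] = True

    def withdraw(self, purpose):
        # Consent can be withdrawn at any time.
        self.granted[purpose] = False

    def may_use(self, purpose):
        if purpose in REQUIRED:
            return True  # no consent needed, only disclosure
        return self.granted.get(purpose, False)

user = ConsentRecord()
user.opt_in("personalized_feed")
# The user enabled the feed but never consented to ad-partner sharing:
assert user.may_use("personalized_feed")
assert not user.may_use("share_with_ad_partners")
```

The point of the separation is visible in the last two lines: one opt-in never implies the other.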

~~~
ec109685
And if they say no along the way, what prevents the company from blocking
access? Also, how does the law differentiate between something that has to be
separate or AND'd?

~~~
PeterisP
If the company does block access when you decline, then this means that the
regulator is exceedingly likely to consider that any "consent" the company got
from those who agreed was not freely given, and so it does not have the right
to use any user's data for this purpose; which would expose the company to
substantial fines.

The law intentionally doesn't set out explicit conditions to differentiate
between what needs to be separate (or other such specifics), precisely to
prevent gaming for workarounds. Instead, it lays out general principles and
rights of the users, and gives a lot of leeway to the regulator(s) to evaluate
whether the usage of data was in accordance with the user's wishes.

In essence, the question is quite simple - did the user _want_ you to use that
data for whatever you did? And, can you demonstrate to the regulator that they
_intentionally_ gave you _informed_ consent by freely opting in to that use?

If they genuinely wish you to have and use that data, then it's OK (and
they're not going to complain); and if they did not actually want you to use
that data but your process somehow led them to click all the required boxes,
then your process is wrong, since whatever it measures is not informed,
freely given opt-in consent. E.g. if I'm the company, then it's my duty to
show why I have the right to use others' private data. If the only
justification I can provide is that they clicked on "We are collecting this
data in order to provide relevant stories and advertisements on this website
and partner websites. Please click Yes to enable your personalized feed", then
that's a lousy justification: it demonstrates that users wanted me to enable
their personalized feed (so I have permission to do that), but it doesn't
really show that those who clicked 'Yes' _wanted_ me to use their data for
advertisements on partner websites, which is the criterion that matters. Any
regulator will reasonably assume that it's likely some of those users wanted
the feed but did not want their data to be given to advertisers; in fact, they
might have evidence in the form of complaints from such users. Maybe some of
them really did want me to do just that (e.g. they freely wished to support my
site this way by donating access to their data), but that question did not
give them the ability to clearly express this intent. If I had separated those
things, I might have gotten consent from some users to share data with
advertisers, but with the combined question I did not; so it'd be in my
interest to separate those questions.

All the consent-related articles of the law and comments to them pretty much
amount to repeatedly stating that if the user doesn't wish you to use that
data, then the only way out is not to use it (or use it under some of the non-
consensual use exemptions) - the law doesn't provide any option to claim
consent if the customer doesn't actually want you to do what you did with
their data.

For a crude comparison, the consent criteria seem to be comparable with those
of sexual assault - no means no, a lack of answer is not consent, consent must
be freely given and not coerced with some threats, consent can be withdrawn at
any time, etc, etc; essentially it's only consent if they actually want to do
that, no tricks or technicalities. The answer to "what prevents the company
from blocking access if they say no along the way" is somewhat analogous to
"what prevents your boss from firing you if you say no to requests for sexual
favors" - they _can_ do that, but the (implied) threat of it means that the
act was not consensual, and is unacceptable even if you didn't say no.

~~~
ec109685
I am not sure your interpretation is correct, but I really appreciated your
detailed answers and the reasoning behind them.

------
alkonaut
Is the article switching halfway through from talking about _login_ cookies
to ad sales (which presumably is about tracking cookies of other kinds)?

I wish the law would just attack the problem at the root: remove _all_
incentive to track users and gather their personal data. E.g. don’t allow
advertisers to show ads to people based on anything they haven’t given that
site. If I haven’t told the newspaper I’m visiting that I like fishing, they
can’t show me fishing ads because I liked it on Facebook.

And if they can’t risk showing targeted ads, then the value for anyone to
trade in this information vanishes.

This would also solve the problem of more and more advertising being harder to
block because it’s “native”.

------
jbernardo95
Portugal is heading down the same route, unfortunately
([https://digiday.com/media/portugals-media-companies-alliance...](https://digiday.com/media/portugals-media-companies-alliance/)).

There is already a website warning about the privacy issues of the platform
([https://nonio.pt/](https://nonio.pt/)).

------
CandidlyFake
Logins for news sites were tried in the 2000s and failed miserably. The only
ones who could survive such a strategy are news institutions with a large
subscriber base, like the WSJ or the NYTimes. But even they are struggling.

Would a login collective be any better? I highly doubt it.

------
majormajor
I'm pretty ad-friendly compared to most on HN (people have shown countless
times that they don't want to pay if they don't have to, and
microtransaction/favor/tipping systems are a giant mess and likely always will
be), but the "people have to track to keep stuff running!" view strikes me as
naive.

a) Right now we have "content" created and tailored more to pull in ad revenue
than to do anything user-beneficial. That's not ideal; most pure clickbait
going away would not be a net negative. Consider news: was the internet of 15
years ago meaningfully devoid of news content compared to today, or was it
better off because it wasn't flooded with as much polarized crap?

b) There's an assumption here that ad spending will decrease. One alternative
possibility is that ad companies have been stuck in arms races for years now,
without actually increasing yield. Look at the reasons why companies are
"pivoting to video" - even with all this tracking and "AI" garbage, online
display ad effectiveness still sucks. As long as every network and publisher
has to follow the same rules, I don't believe any advertiser is truly worse
off. And from a principle-based view, I think any sort of smart-enough
tracking to produce truly well-targeted ads (vs today's "here's that thing you
already bought, let me show you it on every website now" crap) is too far on
the creepy side to be something we want anyway.

c) The other side of the "sites won't make money anymore" argument is that
with a more even playing field there's a chance for some of the money to shift
away from the people playing tech games and _to_ the people producing the content.
There's not a ton of effective competition in ad networks right now, look
where all the revenue goes. A Facebook and Google duopoly also only makes the
implications of the tracking worse.

d) Finally, from the advertiser's perspective: where else are they going to
send that budget? TV? Newspapers? Please. And even the worst case here - a net
decrease in ad spending - is hard to paint as such a huge loss, since there
are a _lot_ of more productive things out there than showing ads.

There are a few other trends that will need to be ultimately addressed
separately, though. One is the shift to video even for stuff that's rather
stupid to force into the video format, just because the audience is more
captive to ads, and this is a big problem IMO. Not really relevant to what
we're discussing here, though. A more interesting debate may be around fraud
prevention, I could see a plausible claim that if you have less history on the
user you have fewer heuristics you can apply for fraud-fighting. I'm mostly
meh on that, though, and willing to let fraud be the cost of business with
internet display ads and hope it just washes out in the long run. (I'd prefer
if it caused people to shift to paid models, but I'm not optimistic.)

------
freeone3000
Oh no, those poor advertisers! Won't someone think of the advertisers?

~~~
graeme
That's not what the article is about. Advertisers will always find a way to
advertise.

The article is about what _publishers_ will do, and speculates that smaller
publishers may not be able to create a business model, but larger publishers
will be able to adapt their ad systems.

~~~
puzzle
The paradox is that part of this was brought about by large German publishers,
or at least one of them, including through the election of Juncker.

~~~
oh_sigh
Why a paradox? It seems like classic rent seeking.

~~~
puzzle
Yeah, but all these years the target of the publishers' ire was and has
been US tech companies. That includes an op-ed claiming that the publishers
are afraid of tech. The same publishers that bragged about swaying elections
and decision making in Brussels.

