GDPR enforcer rules that IAB Europe’s consent popups are unlawful (iccl.ie)
492 points by bajtos on Feb 2, 2022 | 421 comments



It was obvious to anyone technical that these popups didn't work the way they claimed to, but it takes time for the courts to deal with such things.

They are also totally annoying, and I suspect their primary purpose was to annoy users rather than actually comply with the GDPR. It was a way for these companies to fight the GDPR with a war of attrition. I'm glad to see that this round hasn't worked... yet.

I suspect that, based on this ruling, things will not get better (as in a less annoying user experience and more compliance with the GDPR). Instead I predict another round of pseudo-compliance and an even more annoying user experience. Eventually they'll start a policy campaign in earnest arguing that the GDPR is unworkable.


You are right. I also believe many publishers knew this too but the IAB provides a shield of sorts and buys time when it is inevitably (as in now) ruled illegal.

Most ad-tech, and programmatic advertising, is not compatible with the GDPR. I think that is intentional on the part of the EU, and something I am personally a fan of.

The industry needs to shift to contextual ads or other innovations; others have done this. They refused to self-regulate all these years and had ample opportunity to move away from their invasive practices.


It's hard to start doing this when your competitors keep playing by the old rules.

My hope is that ever more aggressive enforcement will finally lead us to the point where the dams break and everyone scrambles to get compliant at once.

The sooner, the better. But I realize that the legal system needs to ramp up the pressure, they cannot start with company-destroying fines on day one.

These rulings and fines keep me in good spirits, because I think we're actually getting there. Slowly, but still.


True, but the months ahead of the GDPR coming into force were one of those potential moments.

Google kept promising us their own framework and consent system. They kept pushing the date to unveil it, and as we ran out of time I had to build my own; many others jumped onto the IAB framework because there were so few options (and it came down to the wire there too, despite everyone knowing for years this was in the works).

>> they cannot start with company-destroying fines on day one

I think they can - and the GDPR fines are linked to revenue - and I think they have no choice. Companies need to take this seriously.

>> Slowly, but still

I'll take slowly over backwards.


> I think they can

Certainly not. The courts would take a very dim view of that. You need to show through a series of regulatory interventions and escalating fines that you afforded those companies due process and that the fines are reasonable.

The maximum allowed under a law is virtually never reasonable in a first-time enforcement.


> Instead I predict another round of pseudo compliance and a more annoying user experience. Eventually they'll start a policy campaign in earnest stating that the GDPR is unworkable.

I predict all of this to fail, at considerable expense for the IAB and its clients. The GDPR is popular amongst us EU residents.


I hope it does fail. Although I'm not in the EU I like the ideas the GDPR puts forward.

My fear is that if legislation works in the EU anything like it does in the US, then when the people like something but the corporations do not... well, corporate interests win out. I suspect the whole reason the GDPR was allowed to pass was that the corporations figured they could ignore it. Now, finding out they can't, they will fight in earnest.

I do hope I'm just being old and cynical and I'm ultimately wrong.


My not-yet-completely-cynical take is that the EU still puts people before corporations, so we still have a chance to win. Fingers crossed.


This headline and article are a gross misrepresentation of the ruling. The ruling is that the TCF consent string contains personal data and that the IAB is the data controller for this bit of data. This ruling has no impact whatsoever on consent popups. It basically "just" trashes the industry standard that is used to pass consent signals. There are plenty of custom or non-TCF implementations (all equally awful) of consent dialogs.

This ruling puts Google and FB in a much more powerful position - because they do not have to rely on standards like TCF to pass consent signals.

Instead of going after publishers and website owners who integrate these popups in the first place - they went after the inventor of the spec.


Not quite. It does base some of its ruling on the consent string (it's the only personal data the IAB manages), but it does also conclude that the IAB is just as responsible as any complying participants. From what I understand, it argues that the IAB sets minimum requirements for the consent screens and ad serving, and those are not good enough.

See also page 126 for a summary of the ruling. A selection of my favourites:

> order the defendant to

> a. prohibit, via the terms of use of the TCF, the reliance on legitimate interests as a legal ground for the processing of personal data by organisations participating in the TCF

> d. take technical and organisational measures to prevent consent from being ticked by default in the consent interfaces

> e. force consent management platforms to adopt a uniform and GDPR-compliant approach to the information they submit to users


The whole thing is based on them declaring the IAB the controller of PII data (in this case the consent string). If upheld all the things you list will apply because these are the responsibilities of data controllers as per GDPR. If the TCF string was not deemed PII data then there would not be a controller because the GDPR would not apply.

IMHO, if they were really serious about this, they would have to go after the actual controllers (not the inventor of the spec) - mainly the actual websites that implement these (misleading) banners in the first place. It's beyond me how they can qualify the IAB as a controller when it never collects, processes or stores any of the TCF data.

If this wasn't so politically charged I'd say the IAB has a solid shot of getting this overturned in court.


This is just the first step. If the consent string weren't PII, all the other data tied to the consent string would not be PII either, because this is the cookie that brings all the data together.

So now that we have confirmed that they do indeed process PII and use the consent string as the unique identifier that ties the whole profile together we can start doing what you want. Going after the companies that attach other datasets to the consent string.

Before this ruling, the companies/controllers would have said that we process no personal data, thus GDPR doesn't apply. Now we have a ruling, saying that this is not a valid excuse.


"Before this ruling, the companies/controllers would have said that we process no personal data, thus GDPR doesn't apply."

That is not correct. These companies use TCF because the GDPR applies. If it did not - they would not have to use it. The GDPR automatically applies as soon as cookies come into play - regardless of what is in the TCF string.

The main thing here is not that PII data comes into play but that the IAB is the controller. Until now the controller was/is the website that actually controls (and passes to 3rd parties) user data. That is why you have to agree to joint controller agreements if you want to integrate the TCF frameworks on larger web sites.

Some background on IPs: the ruling says the TCF string is PII because it can be combined with IP addresses. No one challenges IP addresses as PII anymore; there were many rulings classifying IPs as PII, specifically in Germany (even pre-GDPR).
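The reasoning here (a "non-identifying" consent string becomes personal data once it can be joined with other identifiers like an IP address) can be sketched as a simple record linkage. Everything below is illustrative: the field names, TC-string values and IP addresses are hypothetical, not taken from the TCF spec.

```python
# Illustrative only: shows how a consent string that "contains no personal
# data" still ties a browsing profile to an identifiable person once it
# travels alongside an IP address. All values are made up.

consent_log = [
    {"tc_string": "CPc8...AAA", "ip": "203.0.113.7"},
    {"tc_string": "CPd1...BBB", "ip": "198.51.100.9"},
]

bid_requests = [
    {"tc_string": "CPc8...AAA", "page": "news.example/article", "ts": 1643760000},
    {"tc_string": "CPc8...AAA", "page": "shop.example/cart",    "ts": 1643763600},
]

# Joining on the consent string links browsing history to an IP address,
# i.e. to an identifiable person -- the crux of the DPA's reasoning.
profiles = {}
for row in consent_log:
    profiles[row["tc_string"]] = {"ip": row["ip"], "pages": []}
for req in bid_requests:
    if req["tc_string"] in profiles:
        profiles[req["tc_string"]]["pages"].append(req["page"])

print(profiles["CPc8...AAA"])
# -> {'ip': '203.0.113.7', 'pages': ['news.example/article', 'shop.example/cart']}
```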



I could apply as a data deletion specialist.


I'll do it for cheaper by dousing the servers in gasoline and throwing a match. Nuking it from orbit is the only way to be sure as they say.


> The Belgian Data Protection Authority said IAB Europe “was aware of risks linked to non-compliance” and “was negligent”. It also found that IAB Europe had failed to honour its data protection obligations to maintain records of data processing (Article 30 GDPR), to conduct a data protection impact assessment (DPIA) (Article 35 GDPR), and to appoint a Data Protection Officer (Article 37 GDPR).

Even if you were to give IAB the greatest possible benefit of the doubt, the fact that they didn't appoint a data protection officer makes it clear just how little they care(d).


Even with good salary, who in their right mind would possibly accept the DPO job at IAB? That's pretty much guaranteed legal trouble, because IAB will always try to point their finger at you.

Unless you're fresh in the job market and still believe in the good of people, maybe.


Arguably, in a company whose primary purpose is legal compliance, there'd be teams of legal experts (lawyers, attorneys, etc.) deciding features and wording, not the devs themselves (assuming you mean "who" as in "which developers", since we are on HN).


Normally, you could outsource that function.


Unsurprisingly, the Interactive Advertising Bureau has a slightly different spin on the ruling [1]: "APD Ruling Clears Way For Work on Developing TCF into a Formal GDPR Code of Conduct".

I'm surprised that the ICCL very assertively states that all data collected through the TCF must be deleted. The Belgian DPA only mentions a €250,000 fine and gives IAB two months to present an action plan [2]. Interesting to see how this plays out. :)

[1] https://iabeurope.eu/all-news/apd-ruling-clears-way-for-work... [2] https://www.dataprotectionauthority.be/citizen/iab-europe-he...


The PDF[0] linked to from the original article says this, in "Sanctions" C.533:

2) In application of Article 100, §1, 10° DPA, order IAB Europe to permanently delete all TC Strings and other personal data already processed in the TCF from all its IT systems, files and data carriers, and from the IT systems, files and data carriers of processors contracted by IAB Europe;

Page 114.

[0] https://www.gegevensbeschermingsautoriteit.be/publications/b...


Thanks for the pointer! Do we have any idea why the Belgian DPA's press release would skip this part?


Some crazy figures here:

The maximum fine for such a breach is 4% of the company's global revenue.

Microsoft, in 2021, turned over $168Bn. Google turned over $181.69Bn. Amazon turned over a staggering $457.96Bn.

Between them they had a combined turnover of $807.65Bn, making them liable for a fine of up to $32.3Bn per year (assuming revenue is flat and they all get hit for the maximum penalty and don't do any kind of damage limitation).

The EU general budget in 2019 was only €148.2Bn. So such a fine would actually cover over 20% of the running cost of a 27-member multilateral trading entity with a population larger than the United States.


IAB Europe is the entity being fined, not their participating partners. The linked PDF says their fine is 250k euros, which is "proportionate to the infringement" and less than the maximum.


Seems like the IAB is essentially volunteering to be the scapegoat to shield everyone else. 250k is peanuts compared to how much the industry has made breaching the GDPR over the last 4 years.


IAB's business model is broken at this point. The EU's point is that you can't rules-lawyer your way around GDPR violations by outsourcing the lawbreaking to a paid scapegoat company: having ruled the practices to be essentially illegal, this is going to end up being kicked up a level to the big advertising corporations themselves (by which I mean: Google, Amazon, Microsoft).


> this is going to end up being kicked up a level to the big advertising corporations themselves

When? In a century?

It took them years to reach a conclusion that even a layman skim-reading the GDPR would reach in an hour.


> making them liable for a fine of up to $32.3Bn per year

> their fine is 250k euros

massive disconnect between reality and imaginary worlds.


When the GDPR was first becoming law/being talked about a lot, I recall there being a lot of posts from people in Europe explaining to us Americans one of the major differences between the European system of regulations and ours, which I will paraphrase to the best of my understanding:

When the EU sets a maximum fine level, that's there to give their courts discretion to drop the hammer on companies that have clearly been abusive. Expected practice there is more generally to lead with something that's more of a warning. Then, if they do it again, they can escalate toward the maximum.

The 32.3 billion figure there was the maximum possible fine for the combination of Microsoft, Google, and Amazon. Personally, I'm unclear on whether anyone besides IAB is currently being fined, but in either case, the point here appears to be to send the message "what you're doing isn't OK, clean it up now" rather than "all your revenue are belong to us".

For now.


That's pretty much it. Intent matters. A lot. Unless there's evidence suggesting they deliberately broke the law, the fine is going to be rather low.

If they keep doing it, they can't say they intended to follow the law, and they'll be punished more severely.


System is very broken if they can avoid liability with this.


We designers must reasonably but seriously convey the user-hostility of these patterns to higher-ups at every available opportunity. Sure, you'll get overruled by the dollar-focused Jr. Marketing Exec. On the other hand, the folks who say things like "Refuse! It's a designer's job to say no!" probably have much bigger savings accounts than I and most others do... but not saying anything implies consent, and that's when behavior that's bad for your users and bad for the world becomes silently absorbed into your corporate praxis.


Most large companies have ethics hotlines you are expected to call when there is something questionable ethically or legally going on that might be difficult to bring up to a superior.

Personally I'd refuse to add a dark pattern cookie dialog, but I'm in the privileged position of being able to switch jobs.

But regardless, I'd probably send the ethics hotline an email saying that regulations are violated. Perhaps I'd send an email to the relevant regulator too, just in case.


Weird— I haven't worked for a big software development organization in some time but I've never heard of that. That's a positive thing if the company has good corporate culture and uses the information well instead of just calling your boss and asking them to consider being more ethical because you complained.


These things are there so that investors' money is safe. In my case the ethics hotline is above the company, in the owning company (Berkshire). They don't want massive lawsuits or tarnished brands because of things like the VW dieselgate. And I think a lot of the time even minor things can have pretty big consequences if it's actually illegal and not just "well, it's a matter of interpretation".


Unfortunately the CEO does not report to the ethics hotline.


> We designers must [...]

This isn't something that's inflicted on us by web developers (on the whole); it's done by accountants. So fines are the most appropriate remedy.

No judge wants to impose a fine that bankrupts a company; but fines that start gently, but double after each offence, are much more likely to cause the accountants to smell the coffee.


Absolutely. Ideally it's a policy problem that's all on management which should be too expensive to dissolve into some megacorp's operating budget.

Discussing how we can make achievable improvements now is also important.


It's not really done by accountants either. It's done by executives.


>but not saying anything implies consent...

Hey, let's have sex.

<Silence>

WARNING: DO NOT ATTEMPT

You seem to have gotten confused as to the fundamental nature of consent.

See the problem is it isn't put in writing. Putting things in writing gets people to pay attention.


There's a big difference between groups gauging each others' attitudes and individuals communicating specific decisions. Humans instinctively try to be harmonious with their group, and the group not responding to something negative communicates that this group doesn't care about it enough to respond to it. Furthermore, group attitude influencing individual behavior is a well-known facet of humanity. This concept is the foundation of television and other types of advertising.


This is amazing news.

I implemented GDPR consent management for some US publishers with EU exposure. As part of this I evaluated vendors and various systems like the IAB framework.

IMHO it was clear it was not compliant. It could never know the potential adtech it was going to load in advance (and therefore could not ask someone to consent), and it still allowed ads/adtech/trackers to load in page before asking for consent.

They ignored anyone who pointed this out.


But don't the adtech vendors have to declare what they do with the data? (Purposes and Special Features)?


IAB Europe had a shared list of vendors and their purposes for the ad industry, and everyone's popups using the TCF framework just prompted with the same list because those vendors _might_ be in the ads, not because they'd actually be on the page. Many of the vendors claimed every purpose, often as legitimate interest, regardless of what they actually planned to do and whether it _did_ count as legitimate interest.


Also, in loading ads from these vendors, many often included external JS to whatever flavor-of-the-month adtech vendors or trackers they were using.

These were often not even listed in the framework. There was little-to-no compliance checking or auditing that I am aware of. It was business as usual for many ad networks.


A former employer in the adtech space did audit that the ads were only including vendors from the list, but I don't know how many of our competitors did the same.


How were you auditing where the data went for advertisers that lost the auction though?


We were on the advertiser side of the equation - we just wouldn't bid on european IPs with ads that hadn't yet been audited or failed the audit.


Ah okay, that makes more sense that way.


Am glad to hear some folks were doing that properly.


The list is here [0] if anyone is interested.

[0] https://vendor-list.consensu.org/v2/vendor-list.json
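As a sketch of what one can do with that list: scan it for vendors that claim purposes under "legitimate interest". The structure below is a rough recollection of the v2 Global Vendor List shape (vendors keyed by id, with `purposes` and `legIntPurposes` arrays of purpose ids); verify against the real file before relying on it. The sample entries are made up.

```python
import json

# Tiny inline sample with (approximately) the Global Vendor List's shape.
# The real list lives at the URL above; fetch and parse that instead.
sample = json.loads("""
{
  "vendorListVersion": 123,
  "vendors": {
    "1": {"id": 1, "name": "ExampleAds", "purposes": [1], "legIntPurposes": [2, 7]},
    "2": {"id": 2, "name": "TrackCo",    "purposes": [1, 3, 4], "legIntPurposes": []},
    "3": {"id": 3, "name": "MeasureInc", "purposes": [], "legIntPurposes": [7, 8, 9]}
  }
}
""")

# Vendors claiming any purpose under "legitimate interest" (i.e. processing
# without asking for opt-in consent) -- the practice the ruling objects to.
leg_int_vendors = [
    v["name"] for v in sample["vendors"].values() if v["legIntPurposes"]
]
print(leg_int_vendors)  # -> ['ExampleAds', 'MeasureInc']
```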


This ruling should not be a surprise.

The writing has been on the wall for a long time that GDPR informed consent is to be interpreted in a narrow sense (i.e. actually being informed, not just clicking). And we know EU legal measures often take a long time but can bite hard. So here we are now!

[Edit]: Note that the decision can be appealed - so it's going to be a long while before we get a final verdict.


Good, but I'd like to see someone going after the root perpetrators of this racket, the advertisers themselves. That industry is surprisingly immune from scrutiny despite the fact that they've wholesale sold their soul to Google, which is now both the buyer and seller of billions in advertiser money. They're just enabling the monopoly.


Advertisers need content to advertise on. The GDPR basically forces content providers to get rid of tracking. That will require Google to provide an ad platform without tracking. And then the fun is over for the advertisers.

Of course with underfunded government privacy enforcement bodies, that process takes a long time. And then there is Ireland.


No, advertisers want attention. And they are lazy; they are not going to look for content that fits their product. Google tells them "I have X users interested in your product" and that's what they buy. What's going to happen is they will move all their ad inventory into Google search advertising.


Google will still tell them 'I have X users interested in your product'. It is just that Google will compute that from the contents of the website instead of from tracking users.

It would be amazing if there were no ads outside Google search. But that will not happen. That is a void that will be filled very quickly.


> no ads outside google search

There will be no content then, so the inside of google will be equally empty


Good. Now pick a random one of the companies that used this particular product/service and make an example of them.

The problem I think until now has basically been that sites that rely on tracking ads know they are in violation. They don't want to comply, because it would be too costly.

Basically, a meeting at one of these businesses (I'm imagining) has a conversation where people say "OK, what do we do about the cookies? Unless we at least write the X and Y and Z tracking cookies, we can't keep the lights on, so we can't risk users just clicking 'Reject all' and getting dumb ads. What should we do? I think we should use that dark-pattern dialog which leaves X, Y and Z on for the 75% of visitors who just click the biggest button. That at least buys us some time. If regulators complain we can always change it".

A regulation that was scary enough would see sites prefer shutting down over using a dark pattern. For that to happen, the fines not only need to be big enough to be fatal to the business, they have to actually go further and be personal fines to key employees.


> Now pick a random one of the companies that used this particular product/service and make an example of them.

In particular, the random number should be a point on an interval that is split into regions proportional to the size of the companies, so bigger companies are more likely to be selected.

Is there a name for such a weighted random system? It seems like it could be used in some non-deterministic electoral systems too (which isn't as bad an idea as it sounds).
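For reference, the scheme described does have a standard name: weighted random sampling, with weights proportional to size (in genetic algorithms it's known as roulette-wheel selection; in electoral contexts, a weighted lottery). Python's standard library supports it directly; the companies and revenue figures below are made up for illustration.

```python
import random

# Hypothetical companies with revenues in $Bn -- the sampling weights.
companies = {"BigCorp": 180.0, "MidCorp": 45.0, "SmallCorp": 5.0}

def pick_target(rng: random.Random) -> str:
    """Pick a company with probability proportional to its revenue."""
    names = list(companies)
    weights = [companies[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Larger revenue -> proportionally higher chance of being selected.
rng = random.Random(0)
picks = [pick_target(rng) for _ in range(10_000)]
print(picks.count("BigCorp") / len(picks))  # ≈ 180 / 230 ≈ 0.78
```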


If you want to solve the problem, roll up on Google and Facebook headquarters, throw Sundar and Zuckerberg in jail for a year.

Companies will think twice about their approach of "claim compliance until proven otherwise and then take the wrist slap".

Put CEOs in prison and you'll see lasting change. As long as they can harm billions of people and only pay a modest fine in return, they will not change.


You are confused about what the "lasting change" would be.

What would happen is that most of these major tech companies would simply ban all EU users.

If the EU wants to be shut out of most of the tech world, fine. Because that would absolutely be the result if all "tracking" were effectively blocked or stopped.


That would be a great outcome for the EU. It means that EU companies have a home market that is shielded from their biggest competitors, while being free to compete on the world wide market.

If the EU would do that intentionally there would quickly be a complaint at the WTO.

In reality, as long as Google, Facebook, etc can make money in the EU, they are not going to leave.


I think if locking up crooks cause them to stop doing crooked things in your country, that's an absolute success, and one of the ideal outcomes of putting people in prison. ;)

But also, I would strongly encourage the United States to do this. Extradition is obviously far more complicated, and as you say, could just lead them to excluding the geographic region that holds them responsible.


That would be the biggest business opportunity in human history if major tech companies simultaneously decided to surrender access to one-sixth of the world's economy.


Whoopsie daisy! I'm sure IAB's err was a total storm-of-the-century, couldn't-ever-have-been-expected failure of their otherwise ironclad commitment to honoring and respecting digital user privacy.


Anybody want to shed some light on what exactly was illegal about the consent popups? I think Google, Microsoft and others each use different, branded popups, so I'd like to know what the problem is there.


From a UI point of view, the failures I typically see are that agreeing to everything is easy but declining is difficult, despite both options needing to be equally prominent.

From a technical point of view, the tracking scripts are often loaded regardless (at which point your IP address and browser fingerprint have already leaked), and declining tracking merely "asks them nicely", with no guarantee they'll obey the signal or that the already-collected data (from just loading the script) will be deleted.


The legitimate-interest abuse was also ruled against. Many of these claim legitimate interest, so there is a second set of options you need to untick or click "object" to individually. This isn't valid consent, since not giving consent is harder than giving it; and per the ruling it isn't valid legitimate interest either, because the companies did not conduct an adequate balancing of the user's interests against the business's, and when the DPAs did so they found the companies' interests did not outweigh the users' privacy interests.


I can't find any English language news about it, but Yahoo Japan are going to withdraw a bunch of services from Europe in April including webmail and news. They're citing GDPR costs.

Edit: Apparently it's been picked up since last time I looked: https://www.theverge.com/2022/2/1/22911965/yahoo-japan-europ...


What does Yahoo Japan have to do with Europe?


Not sure exactly what your question means but I'll attempt an answer: They currently offer services in the EEA and UK, such as webmail and news alerts (all in Japanese) and they will withdraw those services (presumably by geoblocking) in April.


Hopefully that will encourage Japanese expats living in the EEA and UK to push for equally good data protection laws back in Japan.


>EU data protection authorities find that the consent popups that plagued Europeans for years are illegal. All data collected through them must be deleted. This decision impacts Google’s, Amazon’s and Microsoft’s online advertising businesses.

Laughable really. How the hell do you reconcile all this data and make the bean counters happy that yes: this is the data we collected through the popups over the years.


This comment is being downvoted but I’m also wondering: how will this be enforced? Will authorities go and audit the data? How will they know where to look? Etc. “Hey did you delete the data?” “Yes, we deleted it” would, indeed, be laughable. This is not to mention the problem of identifying “the data”, which has certainly now been processed ad nauseam. I think the reason companies don’t take these things seriously is because they know they’ll get away with it, one way or another. You can’t expect to enforce any of this if you don’t also legislate the technical specifics of how data must be collected, stored and processed so that its provenance is maintained.


> how will this be enforced? Will authorities go and audit the data? How will they know where to look? Etc. “Hey did you delete the data?” “Yes, we deleted it” would, indeed, be laughable.

If you're not familiar with Northern European culture, I'm quite sure the companies can expect literal inspectors in their offices expecting clear answers to where the data is and what was done with it. They will be pleasant but firm, focused and unswerving. Infractions and evasions will be carefully noted. These notes will then form the basis of further lawsuits. These people are not fucking around.


> If you're not familiar with Northern European culture, I'm quite sure the companies can expect literal inspectors in their offices expecting clear answers to where the data is and what was done with it.

zero chance of this ever happening.


What happens when, say, a restaurant does not allow inspectors to look at their operations? They get shut down or fined. Same thing.

And, no, it's not different because the tech companies are serving up bits and bytes. Same mechanism.


Restaurants provide food. Serving bad food gets clients killed.

No comparison whatsoever with some websites. What you're writing about has to do with state overreach.


I'm fine with the state as my representative physically shutting down and fining companies that without consent collect data on me and my loved ones. So, no overreach.


How well-versed are they in file systems, database schemas? Will they look at source code? Will they understand it? How will they determine what data came from where?


All EU countries have a Data Protection Authority org, so yes the inspectors will have the capabilities to carry out these things.

Also this is not criminal law where someone is innocent until proven otherwise. Companies have to prove themselves that they comply with the law. Like food companies have to log cleaning to show they follow the food regulations, as one example.


Ah, I see that you expect a US / Southern European style revolving-door wink-and-a-nudge quid pro quo! Yep. Nope.

These people will be frighteningly competent.


How is anything enforced? I don't see this as much different from anything else that companies have to comply with. You can never reach 100% certainty that anyone complies with the law, be it the GDPR, work-environment law, product health requirements, etc.

You do inspections. You demand proof of compliance, and when said proof is deemed inadequate you sanction them until something adequate is provided.

Like everything else with law, it's fuzzy and ongoing.


If you run a company that violates the GDPR, you might get sued and have to pay some fines. This is a calculated risk taken by many executives.

If you then get a letter from the regulator stating that you were in violation and have to delete some data, and you answer that you did, and signed it -- then you're likely facing criminal charges if that was a lie.

This is not a line most executives are comfortable with crossing.

If any subsequent GDPR shenanigans come up, and they found you intentionally lied to the regulators, you're in some deep shit.

There might or might not be auditors visiting you after the first letter. If you lie and are found out, your career is over, and you might wind up in prison.

It's not perfect for enforcing privacy, but it's much better than not having such a ruling.


> There might or might not be auditors visiting you after the first letter.

The ICO in the UK doesn't work like that, AFAIAA. You first get a polite letter; then a firmer letter containing helpful advice on how to come into compliance.

After that, you join a huge queue of companies awaiting legal enforcement action. The ICO is deliberately underfunded; it always has been. The government passed data protection laws, but they reserved the power of enforcement to an agency that was crippled from the start.

I welcome this court decision, obviously.

[Edit] Most of the penalties levied by the UK ICO used to be against local governments and government agencies. They were rarely against commercial operations. I see that there are some companies (that I've never heard of) now appearing in the list.

https://ico.org.uk/action-weve-taken/enforcement/


You can enforce it by feeding a system with data, then checking if the data is in the system (e.g. by trying to buy the data, or pretending to be an advertiser).
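The "feed it data, then check" idea above is essentially canary (or honeytoken) data. A minimal sketch, with entirely made-up names and structure:

```python
import uuid

# Sketch of canary-based enforcement: seed a uniquely identifiable
# synthetic record into the system under test, then later check whether
# it surfaces somewhere it shouldn't (e.g. in data bought back from a
# broker, or shown to you while posing as an advertiser).

def make_canary() -> dict:
    """A synthetic user whose email can only have come from our seed."""
    token = uuid.uuid4().hex[:12]
    return {"email": f"canary-{token}@audit.example", "token": token}

def contains_canary(dataset: list, canary: dict) -> bool:
    """Did our seeded record leak into this dataset?"""
    return any(rec.get("email") == canary["email"] for rec in dataset)

canary = make_canary()
seeded_system = [{"email": "real@user.test"}, canary]  # data we fed in
purchased_dump = list(seeded_system)                   # data bought back later

# If the canary shows up in the purchased data, retention/sale is proven.
print(contains_canary(purchased_dump, canary))  # -> True
```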


Businesses cooking the books and lying to auditors is a tradition as old as time.

Enforcement isn't the real crux of the issue, it's that for some reason it's uncouth to come out and say: this regulation is targeting known liars that we should expect to ratfuck the system as hard as possible.

If that was the commonly accepted understanding of those conmen, enforcement methodology would get solved quickly. Which is why they work so hard to not be seen as ratfuckers.


Nobody's going to check that all the collected data has been deleted. But if it turns out that someone has retained data about me (or any other individual) that they claimed to have deleted, then they're in violation of a clear court order, and are eligible to be clobbered with a fine.


Well, that's their problem. They must delete the data or face legal consequences. That should act as a deterrent to future "too smart for their own good" ad people.


I agree! I'm just curious how you would do it. Look at when you deployed the popup to production and then delete all data from that timestamp forward?

Engineering leaders now have ammo to push back against illegal roadmaps foisted on them.


I guess that if you can't delineate the unlawful data from the rest, you'll just have to delete all of it.


If they cannot prove that the data was not gained by illegal means, the only option would be to delete all of it.


When GDPR was introduced we flagged every datapoint with its point of origin. I believe big tech did the same.
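A minimal sketch of that kind of origin-tagging (the field names and origins here are hypothetical, not from any particular system):

```python
from dataclasses import dataclass

# Hypothetical provenance tagging: every datapoint records where it came
# from and under which legal basis it was collected, so data gathered via
# a mechanism later ruled unlawful can be identified and purged in bulk.
@dataclass
class Datapoint:
    value: str
    origin: str       # e.g. "signup_form", "iab_tcf_popup"
    legal_basis: str  # e.g. "contract", "consent", "legitimate_interest"

def purge_by_origin(datapoints, unlawful_origins):
    """Keep only datapoints not collected through an unlawful mechanism."""
    return [d for d in datapoints if d.origin not in unlawful_origins]

data = [
    Datapoint("alice@example.com", "signup_form", "contract"),
    Datapoint("interest:cars", "iab_tcf_popup", "consent"),
]
kept = purge_by_origin(data, {"iab_tcf_popup"})
print([d.value for d in kept])  # ['alice@example.com']
```

With that tag in place, "delete everything collected via the popup" becomes a single bulk filter rather than a forensic exercise.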


So, how long until at least one online media giant realizes that not tracking their users and good old display ads are the easy way out?


Unfortunately, this is a Prisoners' Dilemma. If there were no personalized ads, regular ads would soak up all the ad budget and therefore be sustainable. But as soon as there are personalized ads, they quickly outcompete regular ads. Hence regulation is required.


Never, as long as their core business model is based on privacy invasive tracking? They have every incentive to fight this back and none to actually comply (unless fines start getting higher, I suppose).


Well, me they've lost; I'm ad-blocked to the hilt. But back in the day when tracking became pervasive, the only thing all that presumably smart coding did was irritate me, especially because I never saw a single ad that really appealed to me. This may well be because I'm weird, but even then: that's what tracking is for, right? To personalize the experience.


Not gonna happen. (I don't know if it's related, but Twitter just showed me a cookie dialog out of the blue.) Google is big enough to set its own consent standards; the IAB was a ruse anyway.


I don't understand the findings. The TCF system doesn't collect personal information. The spec is at [0]. CMPs are the popups responsible for creating the TCF string. The IAB provides a spec for how these should operate, but does not supply one of its own. These can absolutely misbehave, and the IAB has previously notified the adtech industry about known misbehaving CMPs.

[0] https://github.com/InteractiveAdvertisingBureau/GDPR-Transpa...
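For reference, the TC String itself is just a base64url-encoded bitfield; per the spec linked above, the first 6 bits of the core segment hold the version number. A toy decoder for just that field (real consumers should use the IAB's reference libraries; the sample string is illustrative):

```python
import base64

# Toy decoder for the Version field of a TC String.
# The core segment is base64url-encoded (no padding) and
# its first 6 bits hold the spec version.
def tc_string_version(tc_string: str) -> int:
    core = tc_string.split(".")[0]  # segments are dot-separated
    raw = base64.urlsafe_b64decode(core + "=" * (-len(core) % 4))
    return raw[0] >> 2              # top 6 bits of the first byte

# All v2 strings start with 'C', the base64url character for 000010 (= 2).
print(tc_string_version("COtybn4PA_zT4KjACBENAPCIAEBAAECAAIAAAAAAAAAA"))  # 2
```

Everything downstream (purposes, vendor consents, legitimate-interest signals) is packed into the remaining bits the same way, which is why a CMP can emit a perfectly well-formed string that nobody downstream can verify was honestly obtained.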


My understanding so far is that the TCF allows providers to accept 'legitimate interest' (instead of direct user consent) as a valid legal basis to store or process user data. This is commonly used for user tracking and advertisement / profiling, meaning you'll get tracked even if you clicked the 'Reject All' button.


My understanding is that Legitimate Interest is something defined by the GDPR lawmakers, not the IAB. If so, and now it appears that LI is not a valid legal basis, then every business operating in Europe needs to be concerned with this ruling, not just adtech.

For example, HN probably collects my IP address under LI. Now it may be illegal for it to do that.


It's defined in GDPR as something that 'can be reasonably expected for the business' and has 'little risk of infringing on privacy'. They specifically list fraud prevention, information security, dealing with employee data, as valid use cases. Marketing most definitely is not.

This move is basically clarifying that you can't simply claim legitimate interest for most advertising purposes, which the TCF was encouraging/facilitating.


To be frank, the practical result of GDPR is that it made my browsing experience worse.

Nearly every website opens with an annoying cookie popup, often blocking the content (or reducing it to a fraction of my screen on mobile).

I've never once clicked "Yes, track everything", except by accident when tricked into it by deceptive UI (eg. a button designed to look more inviting than its less invasive counterpart).

I get that wasn't the intent, and there are less intrusive ways for companies to comply. But the result we ended up with is a mess.


This is those companies successfully instrumenting you to lobby on their behalf. It is purposely and spitefully made to be annoying. Let's not reward that.


Don't mistake my comment as an endorsement for data collection.

It was about the practical effects that came about after the legislation was introduced. I hardly believe webmasters around the world coordinated a premeditated, mass conspiracy to annoy their visitors. I rather think the mess results from a misunderstanding on the part of businesses about what is actually required by the various legislation, complacence by the poor chap who's just trying to publish a site, and, yes, dark patterns on the part of platforms providing elements of the stack.

e.g. Those annoying banners aren't needed if you construct your site to not use cookies at all, until they're actually required for functions a user explicitly requests. Platforms have no business asking for my consent in the first place to cookies they know darn well do not serve any bona fide interest for the user.


Well, all those popups are at least showing how much you've been fucked up before by tracking and analytics and other systems in place.

While the outcome isn't optimal (for the moment) we now at least see what's happening.


> the practical result of GDPR is that it made my browsing experience worse

Actually it's the website operators that did that. The GDPR doesn't mandate all these cookie popups.

GDPR declared war on trackers. The popups are the trackers fighting back. We are civilians caught in a warzone. I for one hope that GDPR wins; but there's a way to go yet.


Nope.

> there are less intrusive ways for companies to comply.

These intrusive ways are companies not complying. This is what is currently being litigated, an industry pulling out all the stops to not comply with the GDPR.

This ruling is a major victory along the way.


There should be some separate law that would oblige companies to accept "do-not-track" HTTP header.

Then we could just set it in our browser settings.


Do we even need a law, or would another case by Max Schrems suffice? The intent of Do Not Track is quite clear.


DNT compliance is voluntary. I don't think Schrems would have a legal leg to stand on.


DNT compliance is only voluntary in that it can be ignored when there is no law requiring consent to track.

If someone says "Do not track me", it's a bit disingenuous to interrupt them with a dialog asking them all the ways they might want to be tracked. It's either an attempt at coercion (we'll keep wasting your time until you give in) or an attempt to gain fraudulent consent through trickery/mistakes.


I always click the "Yes, track everything" because it gives me better ads.


Incredible.

On one hand our Data Protection Authority gets that done and on the other hand the European commission is about to start legal action against Belgium for GDPR infringements https://www.brusselstimes.com/news/belgium-all-news/173086/e...

And we just passed a law that permits our IRS to access our bank account data.

And there is an ongoing project to store and register citizens' health data in one single database, available to insurers and government agencies.

Over the last year there's been drama and real concern around the DPA https://iapp.org/news/a/belgian-dpa-director-resigns/ with a director resigning and claiming pressure from the authorities post-resignation (such as PIs rummaging through their trash bins).

We have a guy who single-handedly decides if database projects are OK with GDPR and privacy laws, and he's the one providing the software solutions.

Belgian surrealism at its finest.

I know there are people from the north on HN; I wonder what their views on these matters are?


Americans think they can ignore the GDPR because it doesn't apply to them. Guess again. Moreover, other countries outside the EU are modeling their own, new legislation on the GDPR. Eventually, the US private sector will be forced to implement the GDPR for convenience's sake. The only issue will be the finding that because of built-in NSA/FBI backdoors, data sent to the US cannot be secured under any circumstances.


I think you are confusing "Americans" with mega-corporations.


Plenty of American HN commenters agree with the mega-corporations in these threads, including at least some not employed by them


Google, Amazon, and the entire tracking industry relies on IAB Europe’s consent system, which has now been found to be illegal following complaints coordinated by ICCL. EU data protection authorities find that the consent popups that plagued Europeans for years are illegal. All data collected through them must be deleted. This decision impacts Google’s, Amazon’s and Microsoft’s online advertising businesses.


Ok, but I don't get how this consent system ran for years. How can one get pre-approved? The issue here isn't that they collected data (its own problem), but that they didn't use the right language! Does this mean it will be a long terms-and-conditions screen, like Apple shows, every time we use a website? ICCL might have made the internet worse with this, not better.


> Does this mean it will be a long term of conditions like apple does every time we use a website?

No. Freely and unambiguously given informed consent means that users need to actually be able to understand what they consent to. Burying the information in a 500-page novel, obfuscated beyond human ability to understand or interpret, is not informed consent.

ToS are not currently under the same requirement of freely and unambiguously given informed consent. They just require consent, which for now has been interpreted to mean basically anything that a lawyer wants it to mean. People have given away their spiritual souls and first-born children in ToS, though the ability to enforce such contracts is open to debate.


The issue here is larger than using the right language. I'm browsing through the full ruling [0], but C.1. Breaches, pages 115-117 is a good summary.

- "First, the consent of the data subjects is currently not given in a sufficiently specific, informed and granular manner"

- "Second, the legitimate interest of the organisations participating in the TCF is outweighed by the interests of the data subjects, in view of the large-scale processing of the users’ preferences (collected under the TCF) in the context of the OpenRTB protocol and the impact this can have on them."

- "In the absence of systematic and automated monitoring systems of the participating CMPs and adtech vendors by the defendant, the integrity of the TC String is not sufficiently ensured, since it is possible for the CMPs to falsify the signal in order to generate an euconsent-v2 cookie and thus reproduce a "false consent" of the users for all purposes and for all types of partners. As indicated above248, this hypothesis is also specifically foreseen in the terms and conditions of the TCF" - no way to verify consent

- "The Litigation Chamber also finds that the current version of the TCF does not facilitate the exercise of the data subject rights, especially taking into consideration the joint-controllership relation between the publisher, the implemented CMP and the defendant." - no way to revoke consent, or request your data

As to why the system ran for so long: yes, enforcement is (too) slow.

- Many complaints were made to several European DPAs in 2019.

- Litigation commenced 13 October 2020

- Interim Decision 8 January 2021, amended 23 February 2021

It looks like IAB made a lot of procedural complaints when it became clear their arguments were rejected.

[0] https://www.gegevensbeschermingsautoriteit.be/publications/b...


Thanks! This is an informed take.


To this day Twitter is not even trying to comply with the GDPR. They have a banner saying "we track you, deal with it" and that's it. So far nothing has happened.

I hope that they get fined billions for keeping it up illegally for so long, but I doubt it.


The DPAs are not in the business of pre-approving, much like your local court won't pre-approve your pre-nup and so you might have to fight over it in court in an acrimonious divorce.

You can of course retain outside help to advise you, but there's no guarantee that they are right, and many of the consultancies and providers were incentivized to compete on maximum opt-ins. Maybe the CMPs and the adtech companies can fight it out in court over whether the CMPs misled the adtech companies, or just gave them options which they then misused.

The ruling is not just "fix your language", though that's what the industry will be incentivized to try, again. They all bandwagoned on hiding secondary opt-out checkboxes under "legitimate interest", and this wrist slap tells them it's not ok:

> Fails to properly request consent, and relies on a lawful basis (legitimate interest) that is not permissible because of the severe risk posed by the online advertising tracking (Article 5(1)a, and Article 6 GDPR)

> Fails to respect the requirement for “data protection by design” (Article 25 GDPR)

The route to complying is clear. Don't track without opt in. Know where the user data is going, not just "whichever vendor happens to be in the winning ad". Don't use dark patterns to encourage the opt in. It's the industry's attempts to bury its head in the sand because it hurts their bottom line and their search for increasingly convoluted workarounds that is making this complicated.


> Does this mean it will be a long term of conditions like apple does every time we use a website?

I guess it is the opposite. GDPR requires clear and understandable text in privacy policies.


Ironically, nothing about GDPR itself is clear and understandable, as is evidenced by the fact that everyone keeps discovering years after implementation that some random country disagrees on their interpretation of it.


Let's be real here, IAB Europe knew exactly that what they were doing was borderline illegal. Now it's officially illegal.


The only people who misunderstand GDPR are people whose salaries depend on misunderstanding GDPR. The requirements are quite clear, advertiser just don't like them and are trying to avoid complying with them.


Yeah? So nobody in the EU is using Google Fonts, AWS, GCP, Azure, CloudFlare, Akamai or any other US provider then, given that this ruling is based on the fact that loading the consent settings screen from the shared domain requires "sharing" an IP address? Nobody in the EU runs an online business reliant on advertising? Of course they are.

I'm convinced pro-GDPR views are always ideological in nature. It's impossible to read GDPR or related case law from the perspective of trying to comply with it and not be disgusted. Every single requirement is vague and subjective - words like "appropriate", "necessary", "reasonable", "proportionate" etc aren't just a part of this law, they are the entire essence of it. And even the occasional term that looks precise often has totally unintuitive definitions, like the way they define large random numbers as "personally identifiable" even though there's no database that links these numbers to any actual personal identity.

Even this announcement about a new ruling is a fog of confusion. Why is asking users for consent, a key piece of GDPR compliance previously, suddenly not OK? Why is this being phrased as "freeing users from consent spam"?

This sort of thing wrecks the EU in the eyes of people actually building things. It makes it seem that this is a part of the world without rule of law of any kind. You can invest hundreds of millions into GDPR compliance and years later discover it was all in vain, without any warning whatsoever. You're being constantly trolled in courts by random academics and "civil liberties" organizations who don't seem to care about actual civil liberties issues like mandatory medical interventions but who define advertising cookies as a grave threat. Dealing with the EU gets ever more painful and if this keeps up, people there are gonna discover they're being denied services or simply charged more as a "GDPR litigation premium". And then they'll be stuck, because the home grown EU software industry is stillborn.


> Every single requirement is vague and subjective - words like "appropriate", "necessary", "reasonable", "proportionate"

This is how laws work and why the "law as code" people are not going to succeed. The US leaves this to the enforcement stage, e.g. many tests in US law for ascertaining enforcement include things like the reasonable person test (https://en.wikipedia.org/wiki/Reasonable_person). Proportionality is a well enshrined standard in EU law in particular, and cuts both ways - it's why this ruling is not the maximum fine out the gate.

Or let's take this clause from the DMCA (regarding what is considered obsolete and therefore the library may format shift): "For purposes of this subsection, a format shall be considered obsolete if the machine or device necessary to render perceptible a work stored in that format is no longer manufactured or is no longer reasonably available in the commercial marketplace."


>Why is asking users for consent, a key piece of GDPR compliance previously, suddenly not OK?

Asking for consent is still OK. Just the way how IAB has been doing it is not OK as it was found to not constitute explicit consent.

And before you say that explicit consent is not defined there are easily accessible guidelines from the European Data Protection Board. https://edpb.europa.eu/our-work-tools/our-documents/guidelin...


> Does this mean it will be a long term of conditions like apple does every time we use a website

We call that a privacy agreement. But having a proper privacy agreement that lists what data is collected and what happens with it is far from the only part of the ruling.


GDPR enforcement is completely arbitrary (in both senses of the word). People might cheer for the downfall of the tech giants but it's really just a way for the EU to control US companies, extending their power beyond their jurisdiction.


If those companies extend their business beyond the US' jurisdiction, why do you feel they shouldn't be subject to some form of control where they operate? I'm legitimately asking. This is about something that was done within the EU to EU citizens. Why shouldn't the EU have a say?


I don’t feel that, actually. I’m not sure where you got that impression - maybe straw men are easier to debate?

There are laws and then are how laws are enacted. Hint: pay attention to how homegrown EU companies are treated.

EDIT: https://www.enforcementtracker.com/ Look here specifically. Sort by fine amount. Look at the companies that are being fined the hardest. It's not just the US that is being targeted. There's this island nation that recently decided they didn't want to be part of the EU...


The largest fines are to US tech companies, which is expected due to (a) the fines being proportionate to revenue and these being the largest companies in the world and (b) these businesses having a significant involvement in large scale tracking of users.

I think the argument of like "well the law was passed to harm US companies specifically because US companies specifically do this" ignores that this is a undesirable behaviour with significant negative externalities, so this feels a bit like complaining that encouraging green energy at the expense of fossil fuels is discriminating against Russia and the middle east.

Once we get past the tech companies the next biggest fine is for H&M, for surveillance of call center employees, not just at workstations (which is probably also not allowed), but in their private lives, disclosure of that detail with managers, and targeted harassment from that information. This seems pretty egregious, and not political retribution against the UK.

Next up are some Italian companies fined in Italy, UK companies getting fined _by the UK_, and Vodafone subsidiaries getting fined everywhere. You could argue Vodafone is a UK company being unfairly targeted, but from what I remember of coverage of the (Spanish, I think?) ruling, they're a repeat offender in this regard.


Sorry, it was not my intention to construct a strawman: maybe I misunderstood what you were saying.

> a way for the EU to control US companies, extending their power beyond their jurisdiction

How are they extending their power beyond their jurisdiction, considering that this is something done in the EU to EU citizens?


Because judgements are arbitrary and in practice unfairly hurt foreign companies.

There's an analogue that has happened in the U.S. Let's say that my little white town passes a law that forbids jaywalking. Protects pedestrians... makes it easier to drive... sensible law, right? But in practice, it's the 1940s and the cops ONLY ticket black people. In practice, it's not a law against jaywalking - it's a law to drive out all the black people and make the white town inhospitable to anybody with darker skin.

GDPR claims to protect the people but is used as an economic weapon.


Or maybe the problem is that the US and UK also happen to be places that foster an attitude in their people that everyone else should just bow to them and do things the way they want...?


> extending their power beyond their jurisdiction

US companies inject all sort of trackers and spyware into browsers of EU citizens and you talk about jurisdiction?


Or just a way to keep people's data inside the EU and not allow it to leak to for-profit companies.


That is one purpose yes and that’s why it has support of the people. The PATRIOT Act is similar. Its purpose is to protect Americans from terrorism.


I wish my government looked out for me like this.


The scary thing is that it's the EU doing this. Our national elected governments are not interested in actually fixing things like this because it doesn't immediately win votes, and there is only a limited number of national civil servants so nobody is working on this kind of thing on a national scale.

But put those civil servants in a committee in Brussels with not as much short term pressure, and they can work out regulations that achieve the right thing.


The "EU doing things" is not detached from your national government. In fact all EU legislation is being approved by your national government in the EU Council and the EU commission has to report there. (As well as the EU parliament, however the EU parliament is weak ...)

Edit: maybe as an addition to the last point in parentheses: the EU parliament is purposely weak, as the EU is a union of states and the member state governments want the power in the council and don't want to give it up.


Well said. I'd add that the EU has for decades been a convenient scapegoat for member governments to point to, when "forced" by the EU to do things that needed doing but are politically difficult. Think of all the national champions forced to live by market rules, like flag carriers, telecom monopolies etc.


I know that. What I consider "scary" is that the EU can only do this because they aren't directly elected and so not as subject to the typical democratic pressures.

It points at a clear weakness of democracy.


Pressure your government to vote "no" on policies you don't like or pressure your government to initiate other legislation. They have the power and responsibility.

And yes, I personally would like to have a stronger EU Parliament relative to the Commission and Council. However there is no reason to let the national government escape with "it's EU law" after they approved it. (And yes, Council doesn't require unanimous vote for most items anymore since the Lisbon treaty, thus it is possible your government voted "no", but that then is democracy and they have to convince other governments ...)

(Just a side note: I like GDPR and think it is to large parts good and push my government to support it)


I think you misunderstand.

Almost all law coming out of the EU is really beneficial for the people, in my experience. Making a law like the GDPR and implementing it is hard work that doesn't grab headlines and first gives us a few years of annoying popups, but in the end it will actually improve privacy for EU citizens.

And national politicians can't do this anymore, because they have to be in the news each day and be in constant campaign mode because the next election may come sooner than expected. They need big words and shiny results.

If we make the EU more democratic, will it become less effective too?


> If we make the EU more democratic, will it become less effective too?

This is probably the first time I'm hearing somebody claiming EU was effective ;)

However you are right - the fact that there is less attention on EU legislation enables different dynamics.

However, I think how well they do differs quite a bit between countries. Here in Germany I am quite optimistic that the new government will do quite a few good things ... maybe I'm too optimistic, but there are lots of good signals from my pov.


No it doesn't. Democracies don't do this because it's posturing designed to appeal to a particular kind of person (e.g. your kind of person).

Normal people don't care about cookies or consent popups and merely find them annoying/frustrating. I've never, ever heard anyone praise these popups outside of Europeans posting on Hacker News. That's a small community and it's a bubble convinced of its own purity.

Here's why democracies don't do this kind of thing: democratically elected governments are expected by voters to generate economic growth and jobs. Constantly levying massive fines on companies who aren't actually upsetting most citizens, via ultra-vague laws that create "tails we win, heads we also win" outcomes for the bureaucracy, is something that most mature democracies realized doesn't work out well in the long run. So they don't do it.

The EU has no such concerns because it's not accountable to anyone, for anything, despite what sometimes people like to try and claim. Result: a stagnant economy with an ever shrinking proportion of global GDP that tries to cover up its damningly consistent failure to produce successful tech firms by pretending it's too morally righteous to do so.

Signed,

A European. But not an "EU citizen".


> In fact all EU legislation is being approved by your national government in the EU Council

it's via QMV, not unanimity

so no need for "your" national government to approve it


Considering that almost all governments voted "yes" and only Austria voted "no" as they considered it too weak, I think it is fair to say their governments supported it.

https://web.archive.org/web/20171125221345/http://www.votewa...

In general you have somewhat of a point, but then it is democracy that the government would be responsible to argue for their point and convince others.

"EU did it" is a cheap excuse.


There's a bit of that. But I think a big part of the reason is that national governments cannot address international issues.

The EU represents 300M people, and has the economic and political weight to make a dent.

The same goes for other international issues, such as climate change, corporate tax evasion, cyber crime, etc.


445 million people.


> Our national elected governments are not interested in actually fixing things like this

Data protection laws existed before GDPR. GDPR itself is not that different from Swedish data protection laws, for example.

Everyone ignored them for years (in case of French laws, for decades, apparently). So, the next step is to pass and enforce the law through the EU.


This is not really accurate.

The enforcement of GDPR is still up to national civil services/judiciaries, in this case it was a cooperation of multiple national protection authorities.

Even the legislation itself necessarily involved national governments and national civil servants in national ministries

GDPR being an EU level legislation has more to do with the absolute nightmare it would be for the internal market to have 27 different standards and the drastically lower leverage available for enforcement than disinterest in the subject


There's this:

• Austria: Datenschutz-Grundverordnung (DSGVO) • Belgium: algemene verordening gegevensbescherming / règlement général sur la protection des données (RGPD) • Bulgaria: Общ регламент относно защитата на данните • Croatia: Opća uredba o zaštiti podataka • Cyprus: Γενικός Κανονισμός για την Προστασία Δεδομένων • Czech Republic: obecné nařízení o ochraně osobních údajů • Denmark: generel forordning om databeskyttelse • Estonia: isikuandmete kaitse üldmäärus • Finland: yleinen tietosuoja-asetus • France: règlement général sur la protection des données (RGPD) • Germany: Datenschutz-Grundverordnung (DSGVO) • Greece: Γενικός Κανονισμός για την Προστασία Δεδομένων • Hungary: általános adatvédelmi rendelet • Ireland: An Rialachán Ginearálta maidir le Cosaint Sonraí / General Data Protection Regulation (GDPR) • Italy: regolamento generale sulla protezione dei dati (RGPD) • Latvia: Vispārīgā datu aizsardzības regula • Lithuania: Bendrasis duomenų apsaugos reglamentas (BDAR) • Luxembourg: règlement général sur la protection des données (RGPD) / Datenschutz-Grundverordnung (DSGVO) • Malta: Regolament Ġenerali dwar il-Protezzjoni tad-Data • The Netherlands: algemene verordening gegevensbescherming • Poland: ogólne rozporządzenie o ochronie danych • Portugal: Regulamento Geral sobre a Proteção de Dados (RGPD) • Romania: Regulamentul general privind protecția datelor • Slovakia: všeobecné nariadenie o ochrane údajov • Slovenia: Splošna uredba o varstvu podatkov • Spain: Reglamento general de protección de datos (RGPD) • Sweden: Dataskyddsförordning • The United Kingdom: General Data Protection Regulation (GDPR)


This is just the translation of GDPR in the EU languages, did you mean to reply under a different comment?


If there's one thing you can rely on, it's an Irish enforcement body doing sweet feckall. No wonder they like to use the Irish DPC!

The EU is our saving grace far too often.


I would argue that many national governments (and local data protection agencies) are doing things; this was the Belgian national data protection agency. The issue is really Ireland, whose data protection agency has been thwarting enforcement efforts. The reason why they are important is that they are technically responsible for enforcement against many of the big guys, because these have their HQs in Ireland, which was also the reason why Ireland didn't want to enforce: economic interests.


Remember you have MEPs representing you as well. European elections too often play a distant second fiddle to domestic ones but this really should not be the case.


Domestic politicians have the ability to instigate changes to legislation. MEPs lack that power - all they can do is block bad legislation from getting passed.


MEPs sit in the European Parliament, which is a talking shop, with very limited powers. It's hardly surprising that few Europeans know who their MEP is.


I don't really have an MEP that's "mine"; it's proportional representation, not a district system.


No districts? Is this a national "party list" system?

Where is that (pardon my inquisitiveness, and feel free not to answer)?

Instinctively it feels wrong not to be able to vote for a representative you can identify; but I can't formulate a coherent reason why it's wrong.


In the Netherlands, in my case. Party lists, yes. Most European countries have that as far as I know, I thought the UK was an outlier.


I see. I wasn't aware that the UK EU constituencies were an idiosyncratic deviation.

Of course, we no longer have MEPs! I often forget this - that's how much difference the MEPs made to my life.


Europe is obviously walking very slowly towards federalism without saying it too loudly.


Germany and France are saying it fairly loudly - they've never hidden their intent to make the EU a federal, unified, state.

And with Brexit, the biggest obstacle to that has been removed - the UK never wanted to be part of a Federal EU (because we always considered ourselves part of the British Empire/Commonwealth). There are other EU countries who aren't wildly enthusiastic about a Federal EU too, but it was always the UK being the most loudly opposed to it.


It's true that the UK was always the biggest opponent, but don't kid yourself that the rest of the EU is on board with federalizing. There is no popular mandate for that whatsoever.

Just look at what happens whenever some EU treaty needs ratifying by national referendum.


> There is no popular mandate for that whatsoever.

that never stopped it before, just look at the "Constitution for Europe"

rejected by the French and Dutch electorates

it was then rejigged slightly and then pushed through as the Treaty of Lisbon (without pesky referendums)


The Lisbon Treaty did at least need a referendum in Ireland. It was rejected initially, partially as a warning shot to the then unpopular government between elections, and partially because of genuine concerns about the impact it would have on Ireland's military neutrality and for the concerns that the EU could then impose a minimum corporate tax rate on the country.

As a result, the EU agreed a set of guarantees [1] that the Lisbon treaty would not be used to do either of these things (to Ireland specifically), and only then did it pass in Ireland.

An EU army has more widespread opposition these days, so hasn't been raised since. Minimum corporate tax rates did not pass through the EU, though this year the US led an effort that is going to result in them globally via other avenues.

[1]: https://www.iiea.com/images/uploads/resources/230535195500_L...


> And with Brexit, the biggest obstacle to that has been removed

this is false. very few european countries want a federal, unified state.

so nothing of meaning will happen until a lot of things change.


Yes, of course, that's been stated as the end goal since the very first treaties in the 50s.

"Ever closer union"


Then vote for it!

Just kidding.


Then riot for it!

Maybe kidding? Seems the only way to get a single-issue topic on the agenda these days.


I do, as near as I can anyway. My government has been captured by the capital class and will be difficult to recover.


Why kidding?


Say you live in a two-party first past the post system. If what you want to express is "I like privacy regulations", the single bit of information that your vote conveys does a very limited job of communicating what issues you actually care about.

The signal in traditional voting is very diluted.

You vote for a person who you think supports some of the things you care about. You are not allowed to weigh in on individual issues in a way that matters.

The person works for several years, and the only feedback you have on that process, the only tether that holds that person accountable, is whether you vote for them the second time.


How many EU countries run a "two-party first past the post system" nowadays?

If you are in a first past the post system, and in a safe seat, vote for one of the no-chance-of-winning candidates who best represents your views. Although they won't win, the fact that they are getting votes will be noticed and the main 2 parties will respond by adopting some of their policies. E.g. in the UK as more people vote for the Green party, other parties will become more Green to get those votes back, even though the Green party has only ever got a single MP.


Even in a decent PR multi-party system. Our green party for example is environment first, left wing economics second, public transit, pro-agriculture, anti-nuclear, somewhere down the list is internet privacy.

Or maybe I could vote for the labour party, which is centre-left on economics, pro-EU, pro-housing expansion, pro-healthcare investment, pro-environment, and somewhere down the list is internet privacy

The idea that there's a party that (a) both has the same views on all issues as you do, (b) has sufficient votes to get seats and (c) orders issues in the same importance you do, for everyone, is clearly not valid. More parties = more choices, and this is often better, but ultimately we'd end up with de facto direct democracy to have a party with the exact views for every person.

Similarly, even for myself, I consider internet privacy important. Maybe I should vote the for the pirate party then? Except I consider the environment more important and our pirate party is so small that it hasn't even considered a position on non-privacy related issues, never mind have an adequate plan for how we're going to make a transition from a heavily fossil fuel based power supply. Even on that environmental issue, I think the green party's anti-nuclear stance has historically been a mistake, but if the others are just going to build more gas plants, I'll deal with it.


Because calling for a riot is likely sedition? (Depending on jurisdiction)


Because voting doesn’t matter when your choices are corporate stooge A and corporate stooge B.


I always think of South Park. It's always a choice between a giant douche or a turd sandwich.


More like a choice between two shitty giant douches, one painted orange and the other painted with a rainbow. They're both the same thing with a different color of paint.


Here is what I don't understand. They clearly mean to ban online tracking. They make the laws. But instead of making a law that makes tracking illegal, they make a law that says you must consent, and leave blank what consent means. Then they make rulings about what consent means that amount to "it is illegal to collect data for tracking." Why not just ban tracking and be done with it?


> They clearly mean to ban online tracking.

There's your error. GDPR is not about online advertising.

Things regulated by GDPR:

* CCTV in public spaces.

* Medical records.

* Employment records that businesses keep about their employees.

* Credit reports.

* Government records like voter databases and housing information.

* Trawling public business filings to send direct-mail spam.

* The loyalty card issued by your grocery store which tracks your purchases.

* The CRM database used by the sales guys in your SaaS company to keep track of hot leads.

GDPR regulates a wide array of data collection, and outright banning is not the correct solution for most of them. So it's about what obligations are attached to data collection and processing. Online advertising is only a small part of what's being regulated.

Even online, there are modes of data collection which are permissible. E.g. collecting anonymous site statistics for your own internal use. The obligations get harder and harder to satisfy when your business practice is to spread data hither and yon to whomever will pay a nickel for it.
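To make the "anonymous site statistics" case concrete, here is a minimal hypothetical sketch (none of these names are a real library API): count daily pageviews and unique visitors while storing only a salted, truncated hash, never the raw IP or user agent, with a salt that rotates daily so hashes can't be linked across days.

```python
import hashlib
import datetime

def daily_visitor_key(ip: str, user_agent: str, salt: str) -> str:
    """Derive a non-reversible per-day visitor key; the raw IP is never stored."""
    raw = f"{salt}|{ip}|{user_agent}".encode()
    return hashlib.sha256(raw).hexdigest()[:16]  # truncated to reduce linkability

class DailyStats:
    def __init__(self):
        # In practice the salt would rotate daily and be discarded afterwards.
        self.salt = datetime.date.today().isoformat()
        self.seen: set[str] = set()
        self.pageviews = 0

    def record(self, ip: str, user_agent: str) -> None:
        self.pageviews += 1
        self.seen.add(daily_visitor_key(ip, user_agent, self.salt))

    @property
    def unique_visitors(self) -> int:
        return len(self.seen)

stats = DailyStats()
stats.record("203.0.113.7", "Mozilla/5.0")
stats.record("203.0.113.7", "Mozilla/5.0")  # same visitor, counted once
stats.record("198.51.100.4", "Mozilla/5.0")
```

Whether a given scheme like this actually counts as anonymous under GDPR depends on the details (salt handling, retention, linkability); the point is only that the obligations scale with how identifying and how widely shared the data is.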


> and leave blank what consent means

Actually, this is not left blank at all...

--------------------

Consent means offering individuals real choice and control. Genuine consent should put individuals in charge, build trust and engagement, and enhance your reputation.

Consent requires a positive opt-in. Don’t use pre-ticked boxes or any other method of default consent.

Keep your consent requests separate from other terms and conditions.

Be specific and ‘granular’ so that you get separate consent for separate things. Vague or blanket consent is not enough.

Be clear and concise.

Make it easy for people to withdraw consent and tell them how.

Avoid making consent to processing a precondition of a service.

https://ico.org.uk/for-organisations/guide-to-data-protectio...


This is a good question. I think the answer is that it's difficult to define up-front what is illegitimate "online tracking" and what is legitimate tracking of users necessary for things like accounts and saving of preferences (without drowning in special cases and loopholes).

The idea was to let users decide for themselves, case by case, whether they wanted the tradeoff of being tracked for the rewards (including things like saving your preferences).

The tracking industry didn't want to be banned and wouldn't give up without a fight, so they looked for a loophole in this fake consent spam.


The issue is that GDPR isn't fundamentally about online data collection or tracking, it's about all data collection in general. You can't ban everyone from collecting data about anyone in all situations, because there are many cases where people legitimately want or need their personal information to be collected and processed by someone else. For example medical records, magazine subscriptions, bank accounts, etc. These are all covered by the GDPR, in addition to illegitimate data collection for the purpose of ad tracking.

So how do you define, in law, when a person legitimately wants a company to process their personal information, and when it should count as illegal tracking? The GDPR actually makes an attempt at defining this (doesn't just leave it blank), but many adtech companies just ignore this and break that law. See the article for an example.


> EU data protection authorities find that the consent popups that plagued Europeans for years are illegal. All data collected through them must be deleted. This decision impacts Google’s, Amazon’s and Microsoft’s online advertising businesses.

How much data is being collected through these pop-ups?


The data being collected is a random identifier and a consent opt-in for tracking/cookies. These businesses use this framework as their source-of-truth declaring whether an internet user consented to cookies or tracking, and their ad systems behave according to the user's preferences.

So now they're being asked to delete their records of who opted in or out, because that data was illegitimately acquired.

[edit] This could also have implications regarding data collected through other systems based on the assumption that an opt-in was valid.
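A hypothetical sketch of what such a consent record conceptually amounts to (the names and purpose IDs here are illustrative, not the real IAB TCF API): a random identifier plus per-purpose consent flags that vendors treat as the source of truth, and "deletion" means discarding those records, including the opt-ins they claimed to prove.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # A random identifier plus per-purpose opt-in flags.
    user_id: str = field(default_factory=lambda: secrets.token_hex(8))
    created: float = field(default_factory=time.time)
    purposes: dict[int, bool] = field(default_factory=dict)  # purpose id -> consent

    def has_consent(self, purpose: int) -> bool:
        return self.purposes.get(purpose, False)

# A vendor consulting the record before acting (purpose ids are made up):
record = ConsentRecord(purposes={1: True, 3: False})
can_store_cookies = record.has_consent(1)
can_build_ad_profile = record.has_consent(3)

# The deletion order, in these terms: the consent records themselves go.
consent_db = {record.user_id: record}
consent_db.clear()
```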


All the data they were collecting before that GDPR said they had to stop collecting (without freely given consent).


How much data is that? How do we know? It's not clear to me how the ICCL will know that "all data collected" is deleted. Even if the IAB is sanctioned or you storm their datacenters, the ICCL said that the tracking industry collected data through the IAB. How is the ICCL going to ensure that the tracking industry deletes the collected data?


It doesn't really matter "how much" data. What matters is the type of data and whether or not it's strictly necessary to deliver the content.


They will ask the companies to delete the data and take action if there is evidence they didn't, just like how all of GDPR is enforced.


How would they know if they did or didn't, though?


They don't know. But if evidence comes up showing a company didn't, they will take legal action against that company, in which case intent to break the law would be crystal clear, so the company would get the maximum fines, which are huge under GDPR.

It isn't as if laws prevent all crimes; the goal is to reduce illegitimate data usage, and nobody thinks it can ever be completely stamped out.


I'm asking what kind of evidence can exist that proves a negative? Without knowing what was collected how can they prove it was deleted? Doesn't make any sense.


> Without knowing what was collected how can they prove it was deleted?

They don't need to know what data was collected. GDPR requires companies to track all data and record where it came from, so the companies are legally required to keep this bookkeeping already; they should already have a switch to delete this data at a user's request, so they should have no problem honouring the same request from the government.

The government doesn't know if the data was deleted, but a user will know if a company holds data the user didn't agree to give it, in which case that company is violating GDPR regardless of how it got the data. That won't always come up, but when it does the government will go after those companies.
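The bookkeeping this describes can be sketched as a provenance-tagged store, where every item records which source it was obtained through, so a deletion order like this one becomes a filter over sources. The names below are hypothetical, not a real library:

```python
from dataclasses import dataclass

@dataclass
class StoredItem:
    subject: str   # whose data it is
    source: str    # provenance, e.g. "tcf_popup", "account_signup"
    payload: dict

class DataStore:
    def __init__(self):
        self.items: list[StoredItem] = []

    def add(self, item: StoredItem) -> None:
        self.items.append(item)

    def purge_source(self, source: str) -> int:
        """Delete everything obtained via a given source; return the count removed."""
        before = len(self.items)
        self.items = [i for i in self.items if i.source != source]
        return before - len(self.items)

store = DataStore()
store.add(StoredItem("alice", "tcf_popup", {"segment": "travel"}))
store.add(StoredItem("alice", "account_signup", {"email": "a@example.com"}))
store.add(StoredItem("bob", "tcf_popup", {"segment": "autos"}))

# "Delete all data collected through the unlawful popups" becomes:
removed = store.purge_source("tcf_popup")
```

Data collected on another lawful basis (the account signup, in this sketch) survives the purge, which is the distinction the deletion order turns on.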


What you're saying is literally illogical in the case of IAB acting as an intermediary... Not sure you know what you're talking about in this case. The entire point of the original article is that the user's data is being fed through via IAB to tracking companies. This isn't a normal GDPR situation where the user's data directly is being stored in a way that's accessible to the user as well. Obviously in that scenario the user themselves could just request their data be deleted as that's what GDPR allows. IAB in this case has been acting as an intermediary, allowing tracking companies to collect metadata on users through them. Even if IAB deletes their data, the question is how will the Council know if the end-tracking companies deleted their data?


If you keep data about a person in the EU that data is protected by GDPR regardless where or how you got it, having an intermediary doesn't matter.

> how will the Council know if the end-tracking companies deleted their data?

That doesn't matter: all they need to do is ask the companies, and for the companies to say that they deleted the data. That is how everything else works with GDPR. When you ask a company to delete your data, you don't know that the company deleted it; they could still store it but keep it hidden. The government asking is exactly the same.

If it later comes up that companies have a lot of data about users that they can't explain how they got, or that traces back to this case where they said they deleted it, then those companies will get huge fines. Open violations of the law, where there is no question that the company knew it was breaking it, are a very different case from companies toeing the line; the fines would be much higher.


yeah you're not understanding what I'm saying. cheers.


> This isn't a normal GDPR situation where the user's data directly is being stored in a way that's accessible to the user as well.

It's you that fails to understand the GDPR: that situation is not possible. In this case, the IAB is acting as the data controller for this data. As per GDPR requirements, when they share this data (for whatever purpose) with third-party processors, they must ensure through their contracts that the processor can comply with data deletion requests coming from users through the IAB.

If they cannot comply with that, both the controller and the processor are in violation of the GDPR, the controller doubly so because the GDPR requires them to audit their chosen data processors for GDPR compliance.


> how will the Council know if the end-tracking companies deleted their data?

There could be a tipoff, for example, from an employee. And if that whistleblower is right, then the company will suffer huge fines.

Or any other numerous ways that someone might be caught for a crime.. it lets go with whistleblower, as that is easy to understand.


Nothing can prove the negative. But if any shred of evidence comes out that they didn't comply, there will be severe consequences for them, which makes it at least reasonably safe to assume they will comply. It's hard to keep a secret like that.


Yes, it's possible for companies to act in secret to deliberately not comply with the law.

There have been highly public cases of that blowing up spectacularly for those companies; cases where it becomes public and nothing really happens; and - I'm sure - many many more where nobody outside the company ever found out.

Is there some aspect of this situation in particular where you're trying to ask something more specific than that?


I seem to be the only HN user who really does not care at all if I am tracked. Judging from the horrible quality of ads I get, they're infinitely far away from reaching an accurate model of my behavior.


No, I'm European and use a pixel phone with all data sharing enabled. I also enabled facial recognition in Google photos last time I was in the USA. I also share all my exercise data with Google, including heart rate via Google fit. I block most ads, except for Google ads and analytics. I always click on "accept all" when I get cookie and GDPR forms. My Google Drive is full of documents like scans of my passport, ESTA requests and some financial documents. I also have zero of the Google account privacy options enabled.

I'm also a local guide on Google Maps with a real photo, and my real name on the profile.


Just because you get bad ads doesn’t mean you’re not getting tracked well. In fact, it might mean you’re tracked really well and the only ads you’re getting served are those that are by one bidder. Everyone else decided you weren’t worth advertising to - so you get generic mass appeal ads that are very low cost to the company.

No different than getting spam snail mail that gets delivered to every house. Sure - you toss it in the recycling every week but someone will read it eventually and it’s basically nothing for the company to send out.


It's cute that people still think that all of the data that Google and Metabook are amassing is used to sell them toothpaste.



Collecting and selling digital data is not a legitimate business enterprise. It’s spyware.

If no one wants to pay for your product, the market has spoken. Too bad.

We must correct the insanity and digital economic imbalance that spyware businesses have created.


> Collecting and selling digital data is not a legitimate business enterprise.

According to who, you?

> It’s spyware.

How is it spying when the people are freely giving away their data?

> If no one wants to pay for your product, the market has spoken. Too bad.

Very true, however it's not clear how a truism about something else relates to the topic? Was this supposed to be persuasive about collecting digital data?

> We must correct the insanity and digital economic imbalance that spyware businesses have created.

Fair enough, but that entails not creating or fostering an imbalance by constantly providing the internet with your personal information.


>According to who, you?

Spyware is illegal. So it’s just a matter of defining the data collection practices of internet companies as spyware.

>How is it spying when the people are freely giving away their data?

It’s not “freely given away” when you need a team of attorneys to understand what you’ve agreed to and you have no audit rights. Point me to the public FB page where they clearly and easily define all points of data they collect.

> Fair enough, but that entails not creating or fostering an imbalance by constantly providing the internet with your personal information.

Quite absurd to take this position after big tech companies ruined the internet economy with their spyware model. Is it your position that these companies were just responding to consumer demand to unknowingly give up their data in exchange for free services?

>Very true, however it's not clear how a truism about something else relates to the topic?

The only reason we have this spyware economy is because tech companies thought it easier to grow their enterprise off spyware than selling a legitimate product at a price.


> Spyware is illegal. So it’s just a matter of defining the data collection practices of internet companies as spyware.

I’ve seen this phenomenon before but never so explicitly. When you can’t convince someone that something is bad, you re-define it as something they do consider bad.

Some examples I’ve seen:

- Some speech is so hateful and racist that its opponents wish to define it as “violence”.

- Facebook offers advertisers the ability to target the demographics their ads reach. Some have tried to term this as “selling your data”.

In this case, it’s clear the average person doesn’t hold data collection in such low esteem as yourself, so you must redefine it as “spyware” in order to convince them.

This subtle shift is interesting to me, but it leaves me unconvinced. Words are not violence. Facebook does not sell data. Data collection is not the same as spyware.


"Free speech" is not a good example in my opinion when we already have so many exceptions to it:

https://en.wikipedia.org/wiki/United_States_free_speech_exce...

I'll reconsider not defending free speech from getting a "hate speech" exception when so called "free speech" proponents start talking about getting rid of the copyright exception instead of just wanting to say racist stuff.

It makes complete sense to want to expand the scope of terms that are associated with laws if you don't believe the law is accurate enough. Language evolves through social changes, and so do laws.


I made no mention of free speech. I'm Canadian and support the significant mechanisms we have in place to combat hate speech!

My point is only that speech is not violence. One does not need to change the meaning of the word violence in order to place sensible restrictions on speech. It is a cheap rhetorical trick.


It's intrinsically linked to "free speech" exemptions through being violent, because violence doesn't have to be physical, here's an excerpt from Wikipedia's opening paragraph on violence[1]:

> Other definitions are also used, such as the World Health Organization's definition of violence as "the intentional use of physical force or power, threatened[4] or actual, against oneself, another person, or against a group or community, which either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment, or deprivation."[5]

There's no doubt that hate speech _does_ commit psychological harm, for example, but the article contains way more nuance than I have time for in this post so I implore you to read the article -- "violence" is just not as simple and limited as physical harm.

[1]: https://en.wikipedia.org/wiki/Violence


Violence huh?

That's what this conversation is devolving into, a fluid interpretation of violence? Seems like a strawman argument: change the topic to violence, then argue the truism that violence is bad, all the while maintaining a pretend causal link between privacy and violence?

Sorry. Not. Persuasive.

That said, for the sake of civility and moving past this distraction, I will concede the point you seem so adamant to make: violence is not so simple. But again, it is not on-topic here, and it adds nothing to the conversation.


Don't act like the term "spyware" is nebulous and undefined. Spyware is collection of information without consent of users. It's well defined, well understood, and illegal.

The only thing people are hiding behind here is that users agree to it in some novel-length TOS that they don't read and don't understand.


> How is it spying when the people are freely giving away their data?

The ruling has proved that no, people are not freely giving away their data. One of the infringing issues is that the system "Fails to properly request consent."


When consent is enforced by a system that can't be bypassed via dark patterns such as Apple's App Tracking Transparency, the actual opt-in rate is around 4%, suggesting that when given a proper choice users don't actually want to give away their data.


> around 4%

Which, coincidentally, happens to be Lizardman’s Constant.

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and...


> Very true, however it's not clear how a truism about something else relates to the topic? Was this supposed to be persuasive about collecting digital data?

If a business model depends on spying on users, it's not sustainable, and moreover, it's illegal in the EU. People are not giving their data away freely if they have a) no way of understanding the consequences of clicking a single button, b) get tricked into consenting using dark patterns, and c) their refusal to consent isn't even obeyed (TCF loads tracking scripts before users can consent).

In general, one of the requirements of the GDPR is that all information on usage of provided data has to be written in simple, comprehensible terms. Please tell me how you knew the implications of giving consent on a IAB site, namely your data being shared and sold across thousands of companies. If even techies fail to understand that, how can anyone expect that of ordinary people, our parents, kids?

It should be clear that with a law like the GDPR in effect, the IAB is acting unlawfully.


"freely giving away their data" is a misrepresentation. People are either putting up with their data being collected, or unaware that it is happening, because that is the only way many services can be accessed nowadays. I bet you couldn't find a single person off the street who would answer "yes" if asked whether they go on the internet specifically to give personal information to ad companies.

Or, to put it more briefly: "How is it spying when the White House employees freely hung up our gift painting with the bug in it on their wall?"


How does anyone have a choice in this?


Isn't the point you're making already possible with "if no one visits your spyware ridden site, the market has spoken, too bad?


Transparency. People are overwhelmingly unaware of the volume and types of data that is collected on them.

And at this point the free market can’t resolve this. The spyware model has absolutely ruined the internet economy. There is no way to compete against a spyware company with a paid product.


If people are choosing "spyware" over a "paid" product, perhaps they just don't care about data collection that much. Isn't that the logical conclusion?

Objectively ads have added more money into the internet economy than ever before. Curious where you're getting your numbers. 20 years ago "YouTuber" wasn't even a profession. The idea some rando with a microphone and a camera could make millions was unheard of. It's only possible with ads.


People don't care about a lot of things. Mainly because they don't understand them, or don't know about them: climate change, cancerous substances, plastic waste, homeless people, illegal whaling, domestic cats killing singing birds, sewing winter clothing or properly managing their savings.

That is why we have subject matter experts providing guidance for people in a world too complex to grasp or even care about everything they have to deal with. People _shouldn't have to care_, as long as they can trust on those experts to do the right things. We're the experts. Advertising companies are ruining the internet for everyone, some people are just too unaware to realise it.


> Advertising companies are ruining the internet for everyone, some people are just too unaware to realise it.

Sounds like projection. Though I'd agree that most internet users don't like ads, what's true is that most internet users don't like paying for things. Using YouTube as an example, the most popular site on the internet, the vast majority of people do not pay for YouTube premium even though it's available.

At the end of the day no one is stopping you from going back to circa-2000s internet, using IRC, going on plain text websites, using BBS, etc.


I think the point is that megacorps' business structure is way too complex for anyone to actually understand. Why is Gmail free? It makes money for Google, that part is clear, but I have no idea how exactly. I'm not even sure they can measure it exactly.

Those companies have transformed the internet, and its users, by offering services for free, in exchange for user data. We have raised an entire generation (two, maybe?) of people taking that model for granted, nicely illustrated by completely ad-dependent YouTube superstars.

Of course, we cannot simply ban all advertising and start charging for everything. But I'm of the firm opinion that, in order to go forward, we have to leave this business model behind, as humanity. The only way to achieve this is by making it unattractive via legislation.


> Of course, we cannot simply ban all advertising and start charging for everything. But I'm of the firm opinion that, in order to go forward, we have to leave this business model behind, as humanity. The only way to achieve this is by making it unattractive via legislation.

What exactly are you proposing? Anyone who wants to pay for email can do so already.

It's very trivial to not use any of Google's services or be tracked. Install uBlock, and don't go to any Google or subsidiary service. Done.

What exactly is the issue?


The issue isn't that I'm personally inconvenienced, this is not about me. It's that the internet is dominated by ad-based companies, and that is not a good thing?

I propose that we try to move humanity past this way of generating revenue, because it's wasting productivity and resources, encouraging shady behaviour, and leads to less freedom overall.

We have social networks manipulating users into staying on the platforms as long as possible, scrolling down infinite feeds, to expose them to as much advertising as possible, thus wasting productivity.

We have big players such as the IAB and its members tracking and spying on users to obtain more data to sell, as an alternative revenue stream to charging for their services.

We have quality journalism disappearing in favour of whatever generates the most clicks, in order to expose more readers to advertisements. YouTube stars pushing hidden ads on children. Advertisers crossing ever more boundaries of privacy by aggregating data from thousands of companies, with basically no oversight by anyone.

That is the issue. I think we should do something against that, and I think "something" may be to nudge the market into another direction, from ad-based to transaction-based revenue sources.


Here's a job. You need a job. It pays in company scrip [1]. People took those jobs despite the negative consequences. The government eventually made such schemes illegal.

Your argument is "People have free choice, so anything that they do is legal."

The excuse of the perpetrators is "I'm not the one (directly) responsible for your poor economic situation, or your lack of education, so it's fair and moral for me to offer you a terrible proposition that you absolutely would not make if you were in a better economic situation." This is just where extreme capitalism gets you.

The list of examples is endless. Scrip. Children working in mines or cleaning chimneys. Click-through TOS. "Free" email. Indentured servitude.

At the end of the day it's no different than "You need to get on this boat to America, or this gentleman here is going to cut your wife's throat. Hey, it's not me doing the cutting. I'm the good guy. I'm trying to keep you safe. But it's your choice."

It really depends on what we mean by "choice".

[1] https://en.wikipedia.org/wiki/Company_scrip


If a site offers the choice between either funding via "good" (Non tracking, no malware etc) ads, and a fee, then I'm completely happy with their business model. A lot of sites however, don't do this.

Of course, the reality is that if they _did_ use "good" ads, then the free version wouldn't make enough money (at least not in today's ad market). So either the free version couldn't exist, OR it would need to be subsidized by the paid version being even more expensive.

But this problem could go away if "bad" ads weren't allowed or possible. Because then the price sites get per impression on those ads could go up, as advertisers can't simply pay more for precisely targeted ads.

Now, there are a few risks with this: 1) There is every risk that money on the regular web dries up, as targeting is more effective in apps and other siloed environments. We have already seen this to some extent 2) If online advertising is less efficient because of worse targeting, then traditional advertising will again be relatively more attractive, so some of the money would leave the internet economy that way, returning to traditional advertising.

1 and 2 taken together might mean that a lot of "free" content (and I use scare quotes) will simply disappear. And I think that's a risk we should be willing to take. And not only that: I'd go so far as saying that even if 90% of internet users answered in a survey that "I don't care about tracking ads, I just want free content", that's not something regulators should care about at all.


I disagree with what you’re saying. Basically your point is that most content on the internet should go away because you don’t like ads.

Good luck with that. Instead, you should treat all the sites that have ads as inaccessible and personally use the small percentage that fit your needs.

Everyone wins.


> Basically your point is that most content on the internet should go away because you don’t like ads.

No. I'm completely fine with ads. This isn't about ads vs. no ads. This is about "bad" ads: the wholesale trading in people's information. It's a transaction where the price (their PII being sold somewhere) isn't visible to the buyer.

The reason we ended up where we are now, where a site MUST use horrible adtech, is this: because there exist ways of displaying pinpoint-targeted tracking ads through unscrupulous adtech companies, that's what sets the baseline revenue for ads. Show ads that are 1/10th as efficient? You'll get just 1/10th the revenue. It's what a website has to do.

So if I'm a site that wants to show "ethical" ads, I can't. Because the ad market is such that ethical ads don't make money. If, however, bad ads don't exist - then ethical advertising could be able to make more money again. The endgame of all this isn't forcing all sites to either die or become paid services. To me the important outcome is to level the playing field between those that display (or want to display) "better" ads.


And, to go a step further:

At least according to what I have read, before "bad" ads existed, the overall advertising budget of the corporate sector was roughly the same as it is now. This means that ad-supported business models were just as viable without all this crap.

The problem is that the tracking and whatnot is perceived to increase value, so the ad spending shifted to prefer the more invasive and "targeted" types of ads. But if we outlawed invasive, targeted ads and the tracking required to generate them...yes, there would be a certain amount of redistribution of ad spend, but overall, it doesn't seem like it would actually dry up and blow away.

So there's no good reason to think that getting rid of the really bad stuff would reduce the overall amount of ad-supported content out there.


If the budgets stay the same then some content might go where the ad money is. For example, the internet strangled the free metro newspaper business because having an ad-supported print media in the 2000's was difficult. If internet ads became dumber, then print ads don't look so dumb. And some money might flow back into things like this: https://en.wikipedia.org/wiki/Metro_(Swedish_newspaper) So if anything, the ad-supported content might shift to other places such as print.


>Though I'd agree that most internet users don't like ads, what's true is that most internet users don't like paying for things. Using YouTube as an example, the most popular site on the internet, the vast majority of people do not pay for YouTube premium even though it's available.

This is a bit circular. People would rather use a free service than a paid service. So long as free services exist it will be hard or impossible for paid services to exist or thrive.

>At the end of the day no one is stopping you from going back to circa-2000s internet, using IRC, going on plain text websites, using BBS, etc.

There's no reason that we should have to make this choice. We don't have to live in a spyware dystopia so that we can have cheaper internet services. This spyware economy is less than 20 years old, and we should throw it out.


> This is a bit circular. People would rather use a free service than a paid service. So long as free services exist it will be hard or impossible for paid services to exist or thrive.

Of course. If a paid product wants to thrive it needs to be better. People do use paid search engines, email, maps, etc. Most people don’t because most people don’t value it that much.

> There's no reason that we should have to make this choice. We don't have to live in a spyware dystopia so that we can have cheaper internet services. This spyware economy is less than 20 years old, and we should throw it out.

I don’t think there’s a dystopia. If you want to regress you can do so alone. We have irc and bbs that won’t track you. I’m sure there are also some plain text sites you can peruse.

Not really understanding why you want to change things for others. Just change it for yourself and then you’re good.


> paid search engines, [...], maps

Can you find me one? The only one I know about is Kagi which is in beta and invite-only.

The problem with the current status-quo is that as long as advertising powered by illicit data collection is possible in practice, it's not viable for a paid service to compete.

> Just change it for yourself and then you’re good.

It doesn't matter what you do if ad-tech scum will track you anyway and create a shadow profile by tricking your friends into giving out information about you such as how Facebook infers social graphs (including non-users) by sneaking into people's contacts lists.


Try out Neeva in addition to Kagi. If you pay and get your friends to, it’ll compete.

> It doesn't matter what you do if ad-tech scum will track you anyway

Stay away from sites that use trackers and you won’t be tracked. I recommend turning off JavaScript and sticking to plain text sites.


> Stay away from sites that use trackers and you won’t be tracked.

That's not enough. If your friends give Instagram and the likes access to their contacts to "find friends", then they unintentionally leak your social circle too, and data warehouses sell this info to the highest bidder, lowest bidder, and everyone inbetween, and government agencies also tap into this for mass surveillance. Even the goddamn Mastercard sells transaction histories to Google. Everything's scraped and sold, doesn't matter if you use the internet at all.

Any notion of user consent to this is ridiculous, because barely anyone understands how much is truly collected, shared and linked together from various sources, and then used and abused. That's why Google, Facebook et al fight so furiously against legislation like the GDPR that mandates informed (!) consent.
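The contact-upload leak described above can be sketched in a few lines. Everything here (the names, the phone numbers, the helper function) is made up for illustration; it only shows the inference step, not any real service's pipeline:

```python
# Sketch: how a service can infer a non-user's social circle purely
# from contact lists its *users* chose to upload.
from collections import defaultdict

def build_shadow_profiles(uploads):
    """uploads: {user_id: set of phone numbers in their address book}.
    Returns {phone_number: set of user_ids holding that number}."""
    holders = defaultdict(set)
    for user, contacts in uploads.items():
        for number in contacts:
            holders[number].add(user)
    return holders

uploads = {
    "alice": {"+100", "+200"},
    "bob":   {"+100", "+300"},
    "carol": {"+100"},
}
profiles = build_shadow_profiles(uploads)

# "+100" never signed up, never consented -- yet the service now knows
# alice, bob and carol all know them: an inferred social circle.
print(sorted(profiles["+100"]))  # -> ['alice', 'bob', 'carol']
```

The person behind "+100" had no say in any of this, which is the point: their "consent" was never even on the table.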


Let's flip it. How about you continue with the spyware dystopia and the rest of us regulate away spyware businesses?


Why stop at the circa-2000s internet when we've got a whole past to go back to? We don't need to go far back before much of current-day market regulation didn't exist and people could freely choose to do things and not do things as much as they wanted.

Coal mines that used indentured servitude outcompeted mines that did not, and if people didn't want to go into indentured servitude they could always choose not to sign the contract. The market spoke, and the customers chose of their own free will to go to the company store, paying more than they earned and increasing their debt year after year.


A lot of content on the internet isn't worth paying for. Taking your YT Premium example, I'd say over half the content on there is garbage clickbait that is only viable because of ads. It doesn't actually provide enough value to the viewer (and often completely wastes their time), but the current system rewards that, because by the time the viewer realizes it, it's already too late and the ad impression has already been paid for.

The problem with YT Premium specifically is that it still requires a Google account, agreeing to their "privacy" policy and provides no guarantee that Google isn't still going to stalk you.


I highly doubt you've personally watched even 1% of what's on YouTube, so you can hardly determine what's clickbait. That being said, who are we to say what's valuable?


I'm extrapolating based on what I see on the front page and in suggestions as well as other social networks. Most of this content is only there to solicit an ad impression, not enough people would pay money for it.

> That being said, who are we to say what's valuable?

We can infer this based on whether enough people pay for the content. There's a reason you don't see a Patreon or other way of paying for the vast majority of clickbait content.


> Most of this content is only there to solicit an ad impression, not enough people would pay money for it.

Exactly. There’s no issue.

> There's a reason you don't see a Patreon or other way of paying for the vast majority of clickbait content.

The vast majority of content, clickbait or not, doesn’t have a Patreon to begin with. Most Patreon creators have a social media presence, which includes ads. Sounds like the worst of both worlds.


There is an issue.

The content is clearly not valuable enough for people to pay for it, and in fact it's called "clickbait" for a reason because people clearly feel cheated by what they got as opposed to what they were led to believe they were clicking on.

In a market where consumers of the content pay for it, this wouldn't fly. In a market corrupted by advertising, this flies and the by-product is wasted time, computing resources, pricing out good content from the market (as you can't compete with free) and the risks associated with advertising (the ads aren't properly reviewed, scams, spam and malware can and does fall through the cracks) and data collection.

This also subverts the entire market and is the reason you can't even buy a good TV or appliance anymore without going for niche, commercial-grade products. Do you want to live in a society where you literally can't buy a TV that doesn't spy on you or show ads?

Advertising in its current form is absolutely out of control and ends up being a tax that we all pay for both in time (whether watching the ads or playing cat & mouse with countermeasures such as AdBlock, Pi-Hole, etc) as well as money (as it's ultimately part of the price of the goods we all buy).


> > Most of this content is only there to solicit an ad impression, not enough people would pay money for it.

> Exactly. There’s no issue.

Except all the work and energy wasted on creating trash content only intended for ad impressions? The world would be better off without it.


Nope. Trash content is just your opinion.

Just because you don’t like it doesn’t mean it shouldn’t exist.


The fact that nobody wants to pay their hard-earned dollars for it suggests it's more than just his opinion.

In fact, let's imagine a system where one side provides ads that you can watch to accumulate a monetary balance and the other provides content (including the aforementioned trash).

Do you think people will still watch and choose to pay for said trash content? Or will they choose to spend that money on better content, or even cash it out and buy a meal or drink?


That’s not the situation so it’s irrelevant.


There is not much that would be lost if YouTube ceased to exist, either. People are wasting hours on this platform that could be spent better in many ways.


That's only your opinion. There are billions who disagree. At the end of the day no one is forcing you to use YouTube.


There should be enough people in those billions to sustain a paid alternative.


There are paid alternatives.


> If people are choosing "spyware" over a "paid" product, perhaps they just don't care about data collection that much. Isn't that the logical conclusion?

I think the idea that people would be able to even make this decision themselves is too optimistic.

> Objectively ads have added more money into the internet economy than ever before

Is that good though? Is there some actual value being created by this or is it simply that ad money has been flowing out of other places like print and TV, and into online advertising?


Suppose there are two products on the market.

One, a children’s toy that sells for $9.99. The other, a children’s toy that is otherwise very similar but uses lead paint instead of a safer paint; it is not clearly labeled as containing lead paint. The lead-paint toy sells for $4.99.

If people buy the cheaper toy, does that mean people are “choosing” lead paint over the more expensive toy? No! It means people are unaware of the lead paint, or are unaware of the dangers of lead to children.


You're comparing seeing ads with children consuming lead paint?

Lead paint has very obvious, bodily harm to children. Do ads harm children? Perhaps, but even if they did, there's no cost to visit a free-site without ads, or pay for a site without ads.


> You're comparing seeing ads with children consuming lead paint?

No, nobody said it's the same thing. It is an example of misleading business practices.


> You're comparing seeing ads with children consuming lead paint?

No, I’m not. You can tell, because I never made a comparison between the two.

What I did was make a hypothetical that was more extreme, with an analogy of the underlying reasoning, to make my objections to that reasoning more apparent.

But, that’s absolutely not a comparison, so no.


People are rarely, if ever, given that choice.

Because it spells out the nature of the non-paying option as spyware.


You're conflating ads with spyware. Ads have existed as long as commerce has existed. Ads don't need spyware.


I think it needs repeating every time this discussion comes up: the discussion isn't about ads, it's about "bad" ads (tracking, trading PII, ads with malware...).

The group that is against all ads, even images on the sides of buses (the argument usually being something along the lines that they encourage unnecessary consumption), is so small as to be irrelevant. You should consider "ads" in the context of this discussion to be "bad ads", where the limit for what constitutes "bad" is of course different from person to person, but for the sake of discussion assume it is "bad enough".


How do you propose you track ad spend without "spyware"? Or should the people spending money on ads just "trust" that their ads are actually being displayed to the types of people they want to show them to?


The same way ads work in every other industry. Have you ever been to Times Square?


You mean the same ad industry that was easily supplanted by internet ads?

So you want a regression, why exactly? If you don’t want to be tracked stop using sites that track you and install ad block.


Is it a regression? I'd argue an ad in Times Square (or a reputable print newspaper) is a major upgrade from the cesspool that is internet advertising. Seeing an ad there signals to me that the brand has enough money to clear the huge barrier to entry (thus is unlikely to be a fly-by-night scam) and doesn't mind being seen by everyone. This gives me more confidence as a consumer to purchase their product.

> If you don’t want to be tracked stop using sites that track you

Can you even tell that before being tracked? The GDPR attempts to make tracking opt-in so that you have a way to consider the downsides before agreeing. There's technically no problem with targeted ads and data collection as long as users are given a clear description of what data they're sharing and how it will be processed.
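What "opt-in" means in practice can be sketched in a few lines. This is a minimal illustration with hypothetical function and field names, not any real consent-management API:

```python
# Sketch of consent-gated tracking: nothing is collected unless the
# user has explicitly said yes for that specific purpose.

def should_track(consent_record, purpose):
    """Default is no tracking: silence, a dismissed banner, or an
    unlisted purpose all mean 'no'."""
    if consent_record is None:          # user never answered
        return False
    return consent_record.get(purpose, False) is True

assert should_track(None, "ad_targeting") is False            # no answer
assert should_track({}, "ad_targeting") is False              # dismissed
assert should_track({"analytics": True}, "ad_targeting") is False
assert should_track({"ad_targeting": True}, "ad_targeting") is True
```

The point is the default: absent a recorded, purpose-specific "yes", the answer is "no". Dark-pattern popups invert this, which is exactly what the ruling objects to.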

> install ad block

The same people behind all this illicit data collection would rather not have you do that.


If it’s an upgrade then you should do it and stop trying to force things for others.

I’m really not even sure what your point is. You just want others to do what you want?


My point is that I want everyone to be provided a clear choice - that's what the GDPR attempts to do. The GDPR doesn't actually outlaw targeted ads, it just mandates that the user is given a clear breakdown of the data being collected and how it will be used and then they can choose whether they're willing to opt-in.

By your reasoning, malware should also be legal, and it would be up to people to learn its ramifications and how to protect themselves. We shouldn't force others to miss out on the "benefits" of malware, should we?


Malware is already legal to begin with. And yes, people should learn to trust trusted entities. Those entities will not allow malware. Government intervention is unnecessary.


> Malware is already legal to begin with.

Source?

> people should learn to trust trusted entities

When advertising corrupts the market there is no such thing as trusted entities. Find me a modern, 4K HDR TV that doesn't have advertising or advertising-related data collection in a big-box store without going for niche options such as professional digital-signage displays.

> Government intervention is unnecessary.

Companies who had their computers ransomwared would disagree, and so will consumers who had their payment details compromised or sensitive pictures disclosed. I wonder, in your mind, where do you draw the line? Violence? You could argue violence also doesn't need to be outlawed and it's up to everyone to build their houses like bunkers, always wear armor, drive a tank and carry guns to defend themselves.


I don't know where you're from but malware is definitely illegal in my country.


Regression for... who? The ad industry? Woe is me. They'll be fine just as they were fine for literally the inception of commerce until about 20 years ago.


> So you want a regression, why exactly?

Yes, if you get market advantages from harming people, you will eventually be required to at a minimum give it up again.


Is it "spyware" if I install it myself to track my own activities?

Is it "spyware" if I get someone to install it so that I can track my own activities?

Is it "spyware" if it is someone else's idea to install it and get data related to me but I know about it and I am OK with it?


this looks like a classic "flame bait" comment.

> Collecting and selling digital data is not a legitimate business enterprise.

a whole international industry, legislators across the planet, entrepreneurs, employees, voters, users and clients disagree.

> If no one wants to pay for your product

who doesn't want to pay for the product?


> a whole international industry, legislators across the planet, entrepreneurs, employees, voters, users and clients disagree.

It's not like the GDPR was one guy's idea that got formalized into law overnight. It has its roots in data protection legislation that is decades old, as well as previous, failed attempts (the ePrivacy directive, aka the cookie law), so there's equally a significant number of people who disagree with nonconsensual data collection.


These GDPR banners have made the internet a worse place for most users IMO, there needs to be an easy way to consent to all tracking and skip the banners across all sites. I'm fine with this being opt in, but it should be easy to do on a "normal" browser (like chrome or edge including mobile) without the need for an extension. Forcing everyone to deal with these things is bad.


You could just, not track. That's always the option and it's what the final intention of these rules are.

Just browsing a website shouldn't be grounds to start tracking users.


> EU data protection authorities find that the consent popups that plagued Europeans for years are illegal.

Plagued Europeans? Are they seeing additional consent pop ups beyond the ones all the rest of us are tortured with?


If you live in the US as I do: yes, they are. I traveled to Germany and Belgium shortly before COVID, and the pop-ups were everywhere, even on sites that I know didn't have them back home.

Anyway, I'd prefer if we had privacy laws like this in the US too.


California does. There is no comparable federal legislation. No other State comes close to California.


Without knowing what country you're in, I suggest that your comment indicates that Europe has more strict laws about tracking than your own country.


I'm under the impression that some sites have created an especially annoying cookie wall for the EU, while serving their baseline annoying cookie wall to the rest of the world. Most of the sites can't be bothered to make the distinction though.


Yes, there are more popups when in Europe and they're often a lot bigger. You'll see it if you use a VPN to a server in Europe.


As we're discovering: plagues are universal


No, we're probably seeing the same as you.


Can someone explain to me what the actual ruling is? Is the agency in question out of compliance, their specific implementation of a consent pop up, or the entire concept of a consent popup?

We use a consent pop up for non-advertising related cookies. And I'm trying to figure out if we are no longer in compliance.


Those popups did teach one good thing: when you see "legitimate interest" you know you're about to get scammed.


I'd love to know how often a 'reject all' button actually objected to all 'legitimate interest' crap too.

I expect the answer is site- and consent-management-system dependent, so where I really couldn't avoid one of these sites, I'd manually object to all legitimate interest first before pressing it. Such a PITA and probably pointless ultimately, but hey..


Never, as far as I could tell. That was the whole point of the "reject all" button: to trick you into implicitly "agreeing" to the "legitimate interest" section.


"reject all", then go to "legitimate interest" and click "object all". Or, you know, just disable JS.


Except "reject all" closes the popup. Gotcha!

So you have to first go to "legitimate interest" and uncheck all the individual "purposes", because usually there is no "object all". Once you've done that (with "object all" if you're lucky), you then have to go to individual vendors, because objecting to all the purposes does not cover all the vendors. Yeah. Again, if you're lucky there's an "object all", but usually there isn't. So you've got to uncheck all those. There's lots. And often there isn't even a good scrollbar indicator to show how far you've gotten. If there is, it's just depressing.

Then you can hit "Reject All". And it's not entirely clear if "Reject All" doesn't turn the LIs back on, because, once again, that dismisses the dialog.


Indeed. Then again, I'm pleasantly surprised by those rare websites that, even when using some standard "consent" dialog, default the legitimate interest bit to objected. Thanks for not scamming me, I guess...


Oh no, I carefully trained google and Facebook to only show me ads about home renovation products by accepting cookies on specific webshops :/

Only half joking here.


My favorite part is:

> All data collected through the TCF must now be deleted by the more than 1,000 companies that pay IAB Europe to use the TCF. This includes Google’s, Amazon’s and Microsoft’s online advertising businesses.

It's not just that they need to find new ways to screw users. It's that since they screwed users, they also must lose their ill-gained data. Which will probably be a nice deterrent against them pulling the same shit again.

Edit: loose -> lose


It's also extremely important that companies can't insulate themselves from consequences by outsourcing compliance functions to a "designated villain".


It seems like that's what's happening here though. The IAB appears to take all the blame while everyone else gets away.


They lose all the data though. They may have avoided some name smearing but it's the data that they really want.


I'd argue that the data has already been integrated into ML models or mixed in such a way that there's no way to even tell where the data originated from. While the logical conclusion would be to just delete any data they can't prove a legitimate origin for, I very much doubt this is going to happen.

Most importantly, tens of billions have already been made using this ill-gotten data.


Or just force all models to be deleted that had any input of that data in the first place. If they don't do that in practice let the whistleblowers do their job in exposing the companies.


Good luck identifying these models. By now, what caused what is so muddled that it could take a small army of lawyers even to start untangling it.


According to the GDPR, the burden of proving compliance is on the controller, by keeping paper trails and documentation. So technically they would already need to be able to prove where all the data has come from, or else they can't have it. So either they start untangling or they delete it. :)


The models aren't the data and aren't regulated under GDPR. It would be crazy to try to do so tbh.


The models aren't subject to GDPR, but which data went to which model is: every data treatment must be documented.


The GDPR doesn't apply to data that can't be related to a natural person. Those models would therefore no longer be under the scope.

Another example: You get consent from me, count your distinct visitors for January and I revoke my consent tomorrow. You do not have to change your visitor count retroactively.
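The visitor-count example can be made concrete. This is a toy sketch of why an aggregate can fall out of scope: once the raw identifiers are discarded, revoking consent leaves nothing to delete from the number (the IDs here are invented):

```python
# Toy example: distinct-visitor counting where only the aggregate is
# retained. A set literal deduplicates, so "user-2" counts once.
january_visitors = {"user-1", "user-2", "user-3", "user-2"}
january_count = len(january_visitors)   # aggregate: 3

del january_visitors                    # raw identifiers discarded

# "user-2" revokes consent in February. The stored number can no
# longer be related to any natural person, so the count stands.
print(january_count)  # -> 3
```

Whether a given ML model is as cleanly severed from its inputs as this count is from the visitor IDs is exactly what the rest of this thread disputes.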


> The GDPR doesn't apply to data that can't be related to a natural person. Those models would therefore no longer be under the scope.

Give me the models and a week, and I'll dox some people with them.


I don't think that's a fair example, because the issue is not about inaccurate data (the view count), but illegally gathered data.

An analog example would be stealing paint and painting your car with it. Should the paint be stripped off the car and given back? I don't know, but the victims are entitled to compensation, which isn't happening in the Google/Amazon case.


With your paint analogy, it feels like a "you wouldn't download car paint" situation.


I wonder if these models would become inaccurate over time without the data inflow.


I'm not sure to what extent this should be classified as a data breach, since the data was in effect illegally harvested and processed.

In case of a data breach, the controllers (i.e. the 1000+ companies) would be required to provide notification to the respective supervisory authorities of the affected users [33] -- although due to the one-stop-shop mechanism, that notification will be considered already done. But on top of that they would also be obligated to inform the affected users themselves [34].

Article 34 also includes this stipulation: The communication to the data subject [..] shall not be required if [..] it would involve disproportionate effort. In such a case, there shall instead be a public communication or similar measure whereby the data subjects are informed in an equally effective manner.

Note that these requirements are on the controllers, not the processor. IAB in this case is the processor. So if the data authority were to consider this a data breach, the controllers would not get away scot-free.

[33] https://gdpr-info.eu/art-33-gdpr/

[34] https://gdpr-info.eu/art-34-gdpr/


> All data collected through the TCF must now be deleted

At best the companies will have to delete months of data, the rest being stale or already fed through some ML loop that extracted any useful value from it.


Would they need to delete derived ML data as well?


My understanding is that as long as there is no PII in the ML model (there’s not), then any existing models do not need to be deleted.


How do you prove there is no PII in the ML model?

It has been proven countless times that it's possible to extract training data from models. I can't see how you can prove the opposite, except, maybe, with federated learning (and even then, you need a good "ratio" of noise).
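A degenerate but honest illustration of "the model contains the training data": a 1-nearest-neighbour classifier stores every record verbatim, so whoever holds the model holds the (possibly personal) inputs. Real models memorize less literally, but extraction attacks show the same risk in degree. All data here is invented:

```python
# Toy 1-NN "model" that memorizes its training set outright.
class OneNN:
    def __init__(self, records):
        # records: list of (feature_tuple, label) -- a verbatim copy,
        # i.e. the training data lives inside the model object.
        self.records = list(records)

    def predict(self, x):
        # Return the label of the closest stored training point.
        return min(self.records,
                   key=lambda r: sum((a - b) ** 2
                                     for a, b in zip(r[0], x)))[1]

train = [((1.0, 2.0), "alice@example.com"),   # label stands in for PII
         ((5.0, 5.0), "bob@example.com")]
model = OneNN(train)

# Probing near a training point recovers the memorized label.
print(model.predict((1.1, 2.1)))  # -> alice@example.com
```

For large neural models the memorization is statistical rather than literal, which is why "prove there's no PII inside" is so hard in either direction.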


I suppose the models might theoretically be at risk if the training data can be extracted, but I don’t think this will practically happen because it’s so far removed from current GDPR practice. Someone would have to prove their protected information is inside a model before they might have a chance. After that, I am not a lawyer.


Of course you can’t prove that some data cannot be de-anonymized unless there are duplicate entries. However, GDPR explicitly encourages anonymization, or “pseudonymization”, which therefore suggests that reasonable attempts to keep data generic are considered legal by this particular law. People have already pointed out that GDPR’s language here is too vague and makes bad assumptions about how identifying multiple quasi-identifiers can be.


GDPR encourages pseudonymization as a best practice, but also draws a sharp distinction between anonymous and pseudonymous data. Pseudonymous data is still personal data and subject to all other obligations under GDPR. Any data that's pseudonymous would still be subject to the deletion order.


I shouldn’t have mentioned pseudonymization, that wasn’t my point. It doesn’t change the fact that the law is vague and to some degree contradicts itself, suggesting that data can be anonymous. There is a real and actual overlap between anonymized data and personally identifiable data. The way the GDPR is written, it would be extremely difficult to prosecute someone for breach of data they had taken best practice steps to anonymize. The law wasn’t written to handle ML based de-anonymization. It also doesn’t help here that if you Google PII, the hundreds and hundreds of examples are things like name and address, nothing remotely close to anonymous yet identifiable.


> How do you prove there is no PII in the ML model?

Is "innocent until proven guilty" not a maxim in European justice?


They have just been found guilty, that's what the ruling is, and the outcome of the ruling is they should delete data derived from the related data. The ML models took the data as input, I think it's fair to say that if they want to argue the ML models do not derive from it despite that, they should maintain the burden of proof.


It's not possible to discuss the legality of something until a judge says we are allowed to discuss it? What?

So I can murder someone, and say "Innocent until proven guilty", and forbid anyone from discussing whether I'm a murderer, until I'm actually judged guilty?

But ok, sounds like you're nitpicking on my words, so let me rephrase the comment you're replying to.

"Considering that we have dozens of research papers showing that public models contain PII, how can we trust that FAANG's private models don't without auditing? It sounds safe to assume they do contain PII"


> So I can murder someone, and say "Innocent until proven guilty", and forbid anyone from discussing whether I'm a murderer, until I'm actually judged guilty?

In many European countries it's in fact against the law to publish the name of a suspect until a court has found them guilty. And in some it's even illegal to publish the name at all.


That's not what I said. I said that it would presumably require a trial to prove that the ML models contain PII, as opposed to the government being able to assume they do and demanding the company prove they don't to some arbitrary standard.


Generally not in administrative law. Executive authorities (e.g. tax office) make some decision and you can appeal to administrative court, but you have to prove why the decision was bad.


OK, that's interesting. Thank you.


If you choose to handle PII, you have to keep track of where it ends up. Feeding PII into a black box and pretending it isn't there anymore, without taking reasonable precautions, especially with something like ML that is known to leak its input, doesn't seem like it should be an option. If you don't know, the safe assumption should be that the ML model can leak PII, and it should be destroyed along with the training data.
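The paper trail this implies can be sketched as a lineage registry: every derived artifact records its direct inputs, so when a dataset is ruled unlawful, everything built on it can be found. Dataset and model names here are entirely hypothetical:

```python
# Sketch of a data-lineage registry: artifact -> list of direct inputs.
lineage = {
    "raw_tcf_consent": [],                        # source dataset
    "raw_weblogs": [],                            # source dataset
    "audience_segments": ["raw_tcf_consent"],
    "ctr_model_v3": ["audience_segments", "raw_weblogs"],
}

def tainted_by(source, lineage):
    """All artifacts that transitively depend on `source`,
    i.e. everything that must go if `source` was unlawfully collected."""
    out = {source}
    changed = True
    while changed:                     # fixed-point over the dependency graph
        changed = False
        for artifact, inputs in lineage.items():
            if artifact not in out and out & set(inputs):
                out.add(artifact)
                changed = True
    return out

print(sorted(tainted_by("raw_tcf_consent", lineage)))
# -> ['audience_segments', 'ctr_model_v3', 'raw_tcf_consent']
```

Without a record like this, a controller simply cannot answer "which models touched the TCF data?", which under the GDPR's accountability principle is itself a problem.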


It is, however in this case innocence means having a complete paper trail of your data processing as defined under GDPR. Not having such a paper trail is one of the things the IAB was found guilty of in this ruling.


Not in money laundering.


I don't think it's much of a deterrent, because there's no clawback of the ill gotten gains from the use of that data. That's something done routinely in, say, fraud cases.


Precisely. Until all the profits + substantial deterrent fines occur, nothing will change. This will have been worth it to the violators.

In effect this just encourages them to keep this practise going. This has to be treated like fraud.

Why isn’t anyone going to prison for this? Happens regularly with fraud.


They did get a 250,000 eur fine, however that is based just on IAB membership fees.

This ruling should make it a lot harder for advertisers to hide behind the IAB though. One would hope that opens members up to more substantial fines in the future.


> Which will probably be a nice deterrent against them pulling the same shit again.

Unfortunately, there are reasons they want these cookies on there so badly that justify the cost to figure out how to comply with the policy and try again.


I wonder if it's that or if it's also a case of marketing departments going wild with GTM and Segment and the like, literally throwing the kitchen sink in front of the user's experience in a desperate attempt to measure and drive 'engagement'.

I mean, if you take a news website like The Independent, there's not a chance in hell that a competent design and engineering team would sign off on all the bullshit that is dumped on top of the page. It's always added on at runtime.


Aren't the marketing departments being driven by the same industry-wide focus on OKRs and measurement as the engineers? They're just even less likely to get a default assumption of being valuable.


> that justify the cost to figure out how to comply with the policy and try again

I wonder if this judgment opens them up to civil suits.


My understanding is that they did not comply to begin with. However, enforcement is still lacking as I don't see a monetary fine, so effectively they got away with it.


I’m not sure how true this is; many types of data have diminishing returns after a few months. I’d be surprised if they lost money compared to not using these methods; they’ve just lost the tail of incremental value.


Millions of Google Analytics customers with suddenly blank histories going back (years?) will notice quite a bit, whole classes of employees basically just make charts about historical performance.

Attribution in advertising is something which can last months for some products, and doubtless a large proportion of companies import from GA and will lose their ability to gauge current performance against the past.


I agree with your position that people will notice and that some attributions will be affected.

It seems like we’ve still given cake to the glutton, though, just without a cherry on top.


Cynically, I expect this will go the way of the cookie popups themselves: the law will be blamed, not the transgressor.


In the end it should be mandated that all user data is stored locally and cannot be processed outside of its local jurisdiction.

The U.S. is never going to concede that its intelligence agencies cannot access data gathered by its Tech Giants. All claims and soothing words to the contrary are a false belief.


That's a very likely outcome. Saudi Arabia passed its own GDPR (the PDPL) which does not permit the transfer of Saudi PII outside the Kingdom except in "extreme" circumstances.


I think you meant to use the word lose not loose. I suspect you mean lose the data as in delete it, not loose as in releasing the data to others.


There is a popular anti-drunk-driving campaign in the US with the slogan "Booze it and Lose It!" ("it" being your license)

My town messed up on one of the billboards, though, and for a while commuters got to see "Booze it and Loose It!", which conveys a somewhat more carefree message.


Pics! or it didn't happen. j/k

These are the kind of classic "Spell checking. It's impotent!" situations. For long bits of text, I can see how some things might slip through. When it's only 4 friggin' words, and it's a campaign being slapped up on multiple billboards for everyone to see, one might think letting someone else review/approve it would be a good idea. Thinking they might have actually done that and still nobody caught it is even more funny/sad.


thanks


> All data collected through the TCF

there is no data collected via TCF:

https://github.com/InteractiveAdvertisingBureau/GDPR-Transpa...

CMPs are the popups that save the preferences and thus enable the collection of the data.

IAB only provides a spec.


This kind of "but technically" is not going to go well for them if that's what they try. Technically the CMPs don't collect the data either, the ads do and the website controls how the CMP relates to ad loading.

The TCF is a spec, the industry agreed on this spec, built implementations and used it as justification of tracking. I think it's fair to call data collected by ads loaded under the idea that a valid implementation of the spec was proof of GDPR consent as "data collected through the TCF".

Anyway, this is the press release, not the ruling. See C.2 of the ruling if you want to nitpick the way this is actually being ordered.

https://www.gegevensbeschermingsautoriteit.be/publications/b...


> but technically

well, the difference is quite important.


So now they've destroyed user security in an attempt to destroy user privacy? That's actually genius.


How are they screwing users? By showing them relevant ads?


I gave them two decades and they never figured out how to show ads that are relevant. I don't believe they ever will. I'm done sharing data with them and I'm done watching ads.


Countless articles and media pieces over the years on the plethora of unethical ways our data gets sold, resold and abused and still you ask what's wrong...


Yet no alternative to ads has arrived. People are used to free software; how do you get around that?


Ads aren't the problem, the surveillance is. That you can't have the former without the latter is a myth FB and Google peddle to justify their existence. They don't even need your data all that much - the duopoly the myth perpetuates is what matters. There's no conclusive proof that personalized ads are more efficient than old banner networks, much less that FB's or Google's services are worth the huge share of profits they take as intermediaries.


Who said this ruling forbids ads? It only forbids user tracking (actually not even that - it just requires meaningful consent to be obtained before tracking users).


It amazes me how people--even technical people--have been tricked into believing that ads require pervasive tracking.

Ads have been around for as long as there has been trade. So, thousands of years. Pervasive tracking has been around for less than thirty years. But yeah, "how in the world will we ever be able to show ads to people and pay for software?"


>How are they screwing users? By showing them relevant ads?

If the true purpose of ads is just an innocent venture in creating beneficial user experiences with helpful suggestions, then we can improve that system by orders of magnitude by getting rid of distortions associated with paid placement.

Then we can reap all of the benefits without having to worry about the experience being compromised by the distorting effects of self-interest, associated with privileged placement in exchange for payment.


Breaching privacy, having a detailed dossier of my online behavior, is itself a harm done onto me.


> By showing them relevant ads?

If only. Generally they just manage to show me ads to buy more of the stuff I just bought. Or something I looked at and decided not to buy.


I think the elephant in the room are political ads, and in some regions personal civil rights.

So the screwing is not done by the advertisers but by the kind of ads and the third party access to data.

Also, companies like Google seem to have a very clear stance with regard to both, while companies like FB have in the past been pivotal in political landslides and screwed-over levels of personalized political influencing...


Under GDPR, users are perfectly free to want relevant ads.

Users are getting screwed by the IAB, because the IAB does its best to take that freedom away from them.


By not complying with users' lawful rights under the GDPR.


Hopefully the deletion includes both backups and any ML models trained on that data.


Coming up next: a full page with mandatory reading (via eye tracking, which will require camera access, with a consent popup for the camera access). Followed by 10 quizzes to test your understanding of what you consented to. Then an email/ID verification to confirm your identity and consent.

This is going to be fun.


But the good part is you can just decline to consent. Because under GDPR if they need consent at all (that is they really don't need the data), then you can decline.


> then you can decline.

Not if the consent form looks like this:

[Register] [Accept]


For it to be informed consent declining must, by law, be as easy as accepting. Yes, this is being widely violated, but that's just a problem of enforcement.


Can the site deny your access then?


Much like the few US-based news sites that already decided to just not bother and show me the "you're coming from the EU and we can't be bothered to not collect your data" blank page instead.

At which point I'm free to decide I wasn't interested in their content anyway.


No. They consider that "coercion". So there really is no point in even asking, as the only correct answer is to decline. Anyone who accepts can be presumed to have been tricked into falsely thinking they would get something in exchange for granting permission.


They cannot coerce you to sign away your fundamental rights in exchange for service. If they cannot offer service without violating your rights, then the service is illegal in Europe.


Then the result will be that those services simply stop serving EU customers.


If legal offering for a market with half a billion people is not worth it for them, then by all means they can and probably should pull their criminal enterprise out of Europe. That just creates an opportunity for Europeans to say good riddance and perhaps start something new and worthwhile that isn't funded by exploitation.


Not legally. Personal data isn’t transactional.


afaik no, but you might lose some features


Finally! Some people keep arguing that GDPR is toothless and unenforced, but I think it's just that it takes time to tame the wild west. It's work in progress, and that progress is looking ok.

I really hope they also pass at least the part of the DSA that makes browser signals for opting out of tracking legally binding.


> Some people keep arguing that GDPR is toothless and unenforced, but I think it's just that it takes time to tame the wild west.

Yes, the logic is frustrating: the big advertising companies have been trying malicious compliance for political reasons. It’s not like they couldn’t build better systems if they were trying to honor the intention of the law.


Yep, overall I'm really happy with the GDPR. The main thing I'd like to see changed is that consent dialogs should be a built-in browser feature with a standardized interface that all websites were required to use instead of coming up with their own. That way we could finally end this farce of the ad-industry's attempts at weaseling their way around the word of the law (and the latest rulings) by designing dark pattern consent boxes.


In general, I agree that it would be nice. Not sure what the right way to legislate that would be, but I'm sure there are ways.

However, if DNT/GPC (which can signal opt out but not much else) becomes legally binding (as they very well might, with DSA), that'd be a huge win for me personally, because I don't see my self ever consenting, and reading consent dialogs isn't worth my time.

As I understand it, GPC is already legally binding in California thanks to CCPA.


No need for that if they just complied with GDPR.

Consent must be given consciously in informed way - therefore NOTHING can be pre-checked by any dialog to make it comply with GDPR.

They just need to somehow ban dark patterns, or standardize the dialog. To be honest, just one high-profile case that interprets a dark pattern as "uninformed consent" (and therefore not legal under the GDPR) would be enough.


This is what the DoNotTrack header was designed for, originally.

I guess the big corporations didn't like it and lobbied for the next-worst thing, the cookie popups, hoping that it would become a big failure.


Well, the DNT header was kneecapped from the beginning, as it was required to be off by default, whereas the GDPR rightly requires users to explicitly request to be tracked.


>The main thing I'd like to see changed is that consent dialogs should be a built-in browser feature with a standardized interface that all websites were required to use instead of coming up with their own.

I love that idea. Something like Apple's nutrition labels but with check boxes next to data uses. However this is only good if it's legally enforceable since there is no API that would prove/verify data is used the way it's been given permission to.


[flagged]


For cookies at least that will be a thing in the new ePrivacy Regulation.

https://digital-strategy.ec.europa.eu/en/policies/eprivacy-r...

One of the parts of it is that the Do Not Track setting becomes actually mandatory to follow.


I understand the frustration and would probably have said similar till recently. I'm now starting to think these companies can't be trusted not to keep pushing things beyond the spirit of the law, and that we should simply outlaw certain forms of data collection so that even asking for consent isn't an option.

I have no problem with a Web site owner monitoring my progress around their site, timing my interactions, recording what things I was interested in, and then using that data to "optimize" my experience. But do I think having Facebook track me around hundreds of non-Facebook sites is OK? Or an ad network doing the same? Not really. I would be quite happy if they fully legalised first party data collection and outlawed third party collection entirely (including proxying first party data to a third party automatically - to close that loophole), to be honest, and then we wouldn't need consent buttons or banners, perhaps.


The industry should really get together and set up something like P3P but good. These settings should be set in the browser, not on each individual website.

Of course the ad and web-stalking people don't want that, because that means users can easily opt out. With Google's misguided attempt to force FLoC down everyone's throats, we may see them join forces with Apple, Microsoft and Mozilla at some point to develop a consent protocol that can be configured easily without the stupid popups.

For example, the browser could hide all the requested consent in a little button in the top right that opens into a menu to let the user pick what they do or do not consent to for what parties (with UI to show the necessary reasons for processing), with defaults configurable in the settings. The defaults would differ per browser of course (probably opt-in on Firefox and Safari, opt-out in Chrome and Edge) but it'd still work out for users because they could change the defaults.

Hell, with the rate HTTP is evolving (bodies in GET requests, QUERY, etc.), I can see an HTTP CONSENT verb arriving eventually.

There are definitely other concerns with such a protocol, like the ability for malicious actors to use it for fingerprinting, but I think it's the only way forward for browsers. Big tech has ignored legislation for a while now, but if they don't show initiative the law will only get worse for them.

I bet the EU would happily list such a protocol as a requirement for most websites. People like you could just blanket allow everything, people like me could blanket block everything, and we'd all get rid of these stupid popups forever.
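To make the idea concrete, here's a toy sketch of how a browser might answer a site's consent request from configured defaults. Everything here is hypothetical: there is no standard CONSENT verb or purpose vocabulary, and these names are invented purely for illustration.

```python
# Defaults the user configured once in the browser settings (hypothetical).
USER_DEFAULTS = {
    "necessary": True,    # strictly required for the site to function
    "analytics": False,
    "advertising": False,
}

def answer_consent_request(requested_purposes):
    """Answer a site's consent request from the user's defaults.

    Unknown purposes are denied, so consent stays strictly opt-in.
    """
    return {p: USER_DEFAULTS.get(p, False) for p in requested_purposes}

# A site asks for three purposes; the browser answers without any popup.
print(answer_consent_request(["necessary", "analytics", "advertising"]))
# {'necessary': True, 'analytics': False, 'advertising': False}
```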


How would the browser be able to enforce what the actual server does with the data? This would only work for the binary track-everything/track-nothing scenarios, and those are rare. What the majority of us want, and the whole purpose of the GDPR, is the "informed consent" part: a detailed list of what information is gathered and how it is going to be used. A browser cannot really enforce "I consent to you using my data for this use case on this site, but don't use it for advertising or sell it". And since the potential uses number in the thousands, a generic form cannot serve as a real consent form. The only way is the planned way: every site/app declares what data it collects and how it uses it, and takes legal responsibility in case of infraction.


It wouldn't be able to control anything on the backend, but neither can it control the tracker behaviour behind today's cookie popups. That's where the border between technical and legal issues is crossed.

My idea for consent would be a sort of challenge/response protocol, where the sending party sends a request for consent with all the details they need and the browser approves or denies it. Preferably, this would be done automatically based on the user's settings. It could even be part of the CORS system, leveraging the browser's "firewall" to ensure no data gets leaked to misconfigured trackers and forcing companies to comply.

The thing about consent is that it must be freely given. Therefore, it should always be opt-in. The user can opt into certain stuff from some kind of simple control after reviewing the requests the other party sends, but that stuff should be hidden and denied by default.

A general declarative method would probably lack some finesse. For example, when your user account has a certain country set, a server might load in payment providers on the fly, and the manifest should reflect that. The manifests we have today would get cached way too quickly, I think.
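As a rough illustration of the CORS-style gate described above (the table structure and origin names are invented; no browser implements this today), the browser could keep a per-origin consent list and consult it before letting a third-party request carry identifying data:

```python
# Hypothetical per-origin consent table, configured by the user.
# Origins that are not listed get no consent at all, i.e. opt-in by default.
CONSENT_BY_ORIGIN = {
    "https://analytics.example": {"analytics"},
}

def may_send_data(origin: str, purpose: str) -> bool:
    """True only if the user explicitly opted this origin in for this purpose."""
    return purpose in CONSENT_BY_ORIGIN.get(origin, set())

print(may_send_data("https://analytics.example", "analytics"))  # True
print(may_send_data("https://ads.example", "advertising"))      # False (default-deny)
```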


> The industry should really get together and

The industry already got together and decided they are going to ignore GDPR in particular and people's privacy in general.


I wish there was HTTP header that meant "I want to give you the minimum amount of data, to make your site work".


Good news: no special header is necessary, this should be the default as per the GDPR.


Business people will just say that name, email address, etc. are the minimum in that case... and if you don't provide them, the site won't work.


that can be challenged in court though.


I want one for "If your business model is advertisement, get off my Internet".


Wouldn’t this be easier to implement as a server side header that said “if you don’t like my business model get off my internet”?


I'm fine with that. If you don't have something worth paying for, show me the warning instead of your website, and I'm not coming back again.


Isn't this already possible with uBlock and just configuring it to not allow you to go to sites that have any trackers at all?


Does it also scrub those sites from search results?


Depending on the search engine you use, you can configure that manually, yes.


This is a popup I'd be happy to see.


https://globalprivacycontrol.org/ goes kind of in that direction. It's a rebranded Do Not Track header, but referencing specific privacy rights under GDPR/CCPA. That hopefully makes it enforceable, whereas advertisers could just ignore Do Not Track.
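For what it's worth, the GPC signal is trivially simple on the wire: browsers that enable it send the request header `Sec-GPC: 1`. A minimal server-side check (a sketch of the wire format only, not of the spec's legal semantics) could look like:

```python
def gpc_opt_out(headers: dict) -> bool:
    """True if the request carries the Global Privacy Control signal.

    Per the GPC proposal, the only defined value is "1"; anything else
    (or the header's absence) means no signal was expressed.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

print(gpc_opt_out({"Sec-GPC": "1"}))  # True: honor the opt-out
print(gpc_opt_out({}))                # False: no signal sent
```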


I like the idea, but that protocol is too simple. For example, I don't have too much of a problem with Matomo tracking cookies, but I don't want Google Analytics to follow me around the web.

This header doesn't specify any of that, and I'd still need to give some kind of consent through a cookie pop-up to websites that want me to use that stuff.

I'd rather see a modern version of P3P (https://en.wikipedia.org/wiki/P3P) with UI designed in this decade.


I see your point, but one of the main problems of P3P was its complexity. There's more than two decades of privacy-enhancing technology research showing that privacy controls need to be fundamentally simple.

I think DNT/GPC can be more fine-grained than you make it out to be. The spec is simple, but there's nothing in there that stops you from developing a browser extension that only sends DNT/GPC signals to a curated list of known bad trackers. That would give you as an advanced user some configurability while it's a simple checkbox for most folks.


I agree that P3P was way too complex, but so are the cookie popups that plague us today. P3P was built around legalese and privacy statements rather than simple consent, I think a modern take can do much better.

The extension you propose would be my vision of a modern P3P, but with categories you can set up with defaults. You don't want to force a NoScript/uMatrix style screen onto users, so the browser should simplify a bit, but a header that says "yes for necessary services, yes for analytics, no for tracking, no for advertising" (or something like that) would fit my requirements.

I think websites should also have a way to show _why_ and _how_ they process data, because that's part of the informed consent users give. A simple text field with a maximum size to force short descriptions, maybe with a "more details" button next to the selected purpose could be enough.

I don't think just sending a header would suffice, because you'd still get consent popups if there's no other way to get consent. A boolean "sell my data" just doesn't encompass the consent you're giving websites when you allow/deny.

It's a challenge to keep simple, for sure, but the UI and server-side API can be simpler than the underlying protocol. Consider the browser language list that nobody uses: to the user it's just an ordered list of languages, but in the Accept-Language header each language gets a numeric weight. Or Firefox's "block trackers" button, which substitutes JavaScript when you enable it and applies all kinds of weird rules and detections to work.
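The language list really does work that way: the browser serializes the user's ordered list into an `Accept-Language` header with q-weights, which servers then parse. A simplified parser (ignoring edge cases like extra whitespace variants and `q=0`):

```python
def parse_accept_language(value: str):
    """Parse an Accept-Language header into (tag, weight) pairs,
    highest weight first. A missing q-value defaults to 1.0."""
    entries = []
    for part in value.split(","):
        piece = part.strip()
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            entries.append((tag.strip(), float(q)))
        else:
            entries.append((piece, 1.0))
    return sorted(entries, key=lambda e: e[1], reverse=True)

# What a browser configured for US English, then English, then German sends:
print(parse_accept_language("en-US,en;q=0.9,de;q=0.8"))
# [('en-US', 1.0), ('en', 0.9), ('de', 0.8)]
```

A consent header could hide similar machinery behind an equally simple settings UI.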


> I wish there was an HTTP header that meant "I don't give a shit about what you do with my data, just let me get the information I want from this website".

I'm OK with that as long as there is an equivalent HTTP header which means "NO! Do not track anything, do not profile, do not collect any information besides the bare minimum PROVEN to be essential for the site to function at all. Either something's truly essential or it isn't, there is NO Legitimate Interest category".

Unlike the failed Do Not Track header, this one should actually have legal teeth (well, at least in EU) and sites which refuse any service to visitors carrying this header should be fined (after a grace period to implement any needed changes). And why not, add provisions to pierce the corporate veil so they can't set up a hollow company to take the fall for noncompliance.

Remember, you can still show ads and profit from them, you can't just violate my privacy and vacuum all my data to Feed The Beast.


> I'm OK with that as long as there is an equivalent HTTP header which means "NO! Do not track anything".

Why is there a condition attached to this? If I communicate clearly to Google that they should track me as much as they want and hide all popups from me, say by sending them a notarized letter, what legitimate interest do you have at this point to interfere?

Given how much of my time and wellbeing has been wasted the last couple of years with those popups, my instinctive reaction to this phrasing is honestly that the GDPR-fanboy faction would be well punished if they had to continue to deal with them for the rest of their earthly lives.


> Why is there a condition attached to this?

Because the law explicitly wants to avoid companies being able to annoy people into doing this, and thus requires to make the opposite action equally possible and easy.


> HTTP header which means "NO! Do not track anything, do not profile, do not collect any information [...]"

I think it's called uBlock Origin. No other way. Yet.


I wish there was one I could set that just said "Fine, send me your cookies just don't expect them back".


Cookies are already pretty much irrelevant. IP addresses, user agents and browser fingerprinting are where it's at.


I know, I just wish they would stop asking me about it.



