I can't even read this article without using firefox reader mode to skip the cookie warning / prompt (this works for a lot of sites!).
That's a choice that techdirt made. Framing it as an unavoidable consequence of the cookie legislation or the GDPR moves the focus to the wrong place.
Just track everything and I don't want to think about it!!
Would facebook have 1 billion users if it cost $5.99 a month?
Data collection is not going anywhere, so long as people are willing (even unknowingly) to give up info for a perceived discount.
Now here - fill out my form with your address, email, phone, photo id, and passport number for a chance to win a brand new 2019 Honda!
"Accept data collection or pay for it" is a false dichotomy. Ad-supported websites don't need data collection to be profitable, any more than NBC does.
Furthermore, there are many paid services that collect data as well. Last time I flew with KLM, the online check-in didn't work because some JS errored out when its data-collection script was blocked. It turned out the online check-in page was sending data to 17 domains.
And KLM isn't even a budget airline like RyanAir. I paid good money for my flight.
People are willing in large part because they have no clue what their data is actually worth and/or what pieces of their data are actually out there. Data collection survives because people don't realize they're already paying $5.99 a month (or whatever the real break-even number is).
I think it's a pretty good rule of markets that people should know what it is they're exchanging. To that end I should be able to see what these companies gather on me when I use their service.
Companies should only collect what they need, keep it only for as long as they need it, and store it safely while they have it.
All companies get hacked. GDPR compliant companies will have less personal data than other companies who see personal data as something to be gathered in huge amounts and stored for as long as possible, or even sold off.
For a public company that's just not possible. They'd be throwing money out the window just for kicks. The only way we'll ever get there is through law.
Disable cookies? See annoying useless cookie warnings all the time everywhere!
You have to enable cookies so that it can remember not to spam you with the warnings.
EDIT: and the article makes a good point: one may wonder how making it easier to request data helps improve privacy.
No one’s going to fine me for blanket adding the feature so that’s fine.
If users don't bounce, is that based on short-term or long-term research?
Fears of bounces over cookie warnings are overstated. Users do not care.
It's a lot easier for a government to fine a foreign corp than to fine a local one whose workers are all voters and tax payers.
This is in violation of the GDPR, is a pain to turn off, and indicates that no, techdirt does not care about users' privacy.
But that's not really the point, I think. The article claims that they care "very much" about your privacy while at the same time sending your data to dozens of different companies and making it hard to get them to stop. That's not caring "very much" about privacy.
The GDPR is an attack on memory, starting by conditioning people to accept regulation on what experiences can be legally remembered* (like who visited your property and their attributes). It starts with a subclass to make it acceptable.
Once that is 'OK' the power centers can expand memory restrictions and go back to adding rules on what can be said (transmitted or acknowledged).
*I'm deliberately trying to mix 'remembered' (like wetware does) with 'saved', or 'written' (like wetware creations do).
That has been tried before (SOPA, PIPA).
Anyway, what I was alluding to already happened: Article 13
It will get worse before the EU breaks up completely. I bet POLEXIT next.
(Yes, ideally services would collect less data, but if your account gets hacked they can access some data regardless)
The web does not require email for most things.
A better analogy would be catalytic converters, which increase the loss when a car is stolen, since there's a significant quantity of precious metal up the tailpipe.
Perhaps an even better analogy (since it provides a direct safety benefit to the purchaser) is that of airbags. For a while, they created an attractive break-in/theft target on their own, due to their very high value to size/weight ratio. I'm pretty sure that was an unintended consequence, too.
Presumably it's only less of an issue because the cost of the part has come down from $1k or more (in '92 dollars, no less).
The GDPR also requires companies to provide another means of access that is distinct from the right to data portability: 'the right of access by the data subject', which has much more stringent requirements. It can apply to your workplace or previous employers, to health providers, to a security consultancy you hired 15 years ago to install alarms in your house, and so on. The purpose of this article is to provide the 'checks' part of checks and balances: it lets a person verify whether a company is holding information on them, what data is held and why, and exercise rights of rectification or erasure (again, separate from the standalone 'right to erasure' article), among other things. This may look similar to the right to data portability at first glance, but it covers different niches and is much broader, with a bigger bite: it can apply to companies that do not have a website, and to companies you do not have an account with (but which may still be holding data on you).
Techdirt, however, confuses the purpose of these two articles, transposes the rationale behind Article 15 onto Article 20, and calls it a failing of the GDPR. Quoted here:
>That's because, under the GDPR, platforms are supposed to make all of the data they have on you easily downloadable. The theory is that this will help you understand what a company has on you (and, potentially, to request certain data be deleted). But, it also means that should anyone else get access to your account, they could access an awful lot of important and/or personal data.
Let's be clear: this is not a failing of the GDPR. It is arguably a reason why the GDPR needs to exist in the first place, especially its requirements for clear and informed consent and for clear explanations of what data is kept and why. The last part of the quote rings true: if someone has access to your account, they can collect the data that is on that account. That should almost go without saying, and it is an embarrassment that it needs to be explained to a tech blog masquerading as tech journalism. Others in the thread have given the example that someone with access to your email account can download all of your emails. Someone with access to your Facebook account can read all your messages and posts, private or otherwise (hopefully you haven't sent anyone any private pictures). Someone with access to your Google account likely has access to:

1) your emails,
2) your full search history for however long you have had the account,
3) your full YouTube search history,
4) any private or unlisted YouTube videos you may have uploaded,
5) any files you have uploaded to Google Drive,
6) any spreadsheets or documents you may have uploaded (if you have flown before and opened your e-ticket in Google Docs, this will include your passport number),
7) your full payment history through Google Play or Google Wallet (now defunct),
8) your full location/GPS history if you have location enabled on your mobile device,

and the list goes on. More importantly, with nothing more than the password, a black hat can crawl all of this using public scripts found on GitHub, and can do all of it without the right to data portability.
This is one area where black hats as well as technically inclined people have been more aware of the risks of using services like Google than the average person has, and it should remind anybody of the adage 'convenience is the enemy of security'.
The article goes on:
>As Jean notes in a later tweet, this kind of thing could really come back to bite other services, such as Lyft or Uber. She jokes: "Would be pretty bad to get hacked and kidnapped in the same day."
Yes, that would be unfortunate. What is more unfortunate is that companies have trained users to accept that there is no compromise, that it's all or nothing: store your full location and travel history, or none at all. I understand the convenience of being able to rebook frequently travelled taxi routes, and of a fitness tracker that logs GPS data. But it is a convenience that needs to come with clear and informed consent, with an explanation of the implications of keeping data that can be accessed and updated in real time, and with the option of choosing selectively where and how much you would like to opt out. I struggle to see how this is a failing of the GDPR rather than a failing of the companies to provide these features and opt-outs without formal legislation. As a thought experiment: what would happen if Uber or Lyft had a data breach that leaked all of their booking history? What would happen if Google had an authentication failure and allowed anybody to view your location history, or to use 'Find your phone'?
The final insult to injury in the article is this quote:
>There are possible technological solutions that could help (again, as Jean suggests), such as using multi-factor authentication to access your own data (one-time passwords, Yubikey, etc), but it's telling that few companies (or regulators!) have really thought about that, because that vector of attack probably hasn't occurred to many people. But, it probably will now.
This is not a new attack vector by any stretch of the imagination, and to suggest it is due to the GDPR is, quite frankly, horribly misinformed. There was a technique popular around 2004-2006 (if Google Trends is anything to go by) known as 'fusking'. The gist is that incremental or predictable file names can easily be guessed and crawled by scripts and utilities. It was most often used to extract all URLs from an image gallery (usually pornographic), but it caused real difficulties for personal image hosting websites: a filename like "2004-07-22-0035.jpg" meant an attacker could put "2004-07-22-[0000-0100].jpg" into their fusker utility and accidentally crawl private images. Hosting companies had to add UUIDs to filenames, and the attack was somewhat mitigated once mobile phones started naming images with much finer granularity or even adding a salt so that filenames could not be guessed. This is why websites like Facebook use long and unwieldy URLs: so they cannot be guessed. While this attack is an old one, it still pops up from time to time; in 2016 both Microsoft and Google had a vulnerability where links from their URL shortening services could be guessed, leading to accidental exposure for users who had generated short links to private folders. You may be thinking this is only tangentially related to being able to download user profiles, and I'll admit that it is, but I want to reinforce the point that black hats and other attackers, and even just technically inclined people, are far better equipped to think about the possibility of crawling and downloading large amounts of data than a regular user, who may be oblivious to it or not even realise it exists.
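For what it's worth, the bracket-range guessing described above takes only a few lines. A minimal sketch (the function name is mine, but real fusker utilities worked on the same principle):

```python
import re

def expand_fusk(pattern: str) -> list[str]:
    """Expand a fusker-style range such as "2004-07-22-[0000-0100].jpg"
    into the concrete filenames it covers, preserving zero-padding."""
    m = re.search(r"\[(\d+)-(\d+)\]", pattern)
    if m is None:
        return [pattern]  # no range to expand
    lo, hi = m.group(1), m.group(2)
    width = len(lo)  # keep the same zero-padded width as the pattern
    return [
        pattern[:m.start()] + str(n).zfill(width) + pattern[m.end():]
        for n in range(int(lo), int(hi) + 1)
    ]

print(expand_fusk("2004-07-22-[0000-0002].jpg"))
# → ['2004-07-22-0000.jpg', '2004-07-22-0001.jpg', '2004-07-22-0002.jpg']
```

Feed the output to any HTTP client and you have a crawler, which is exactly why hosts moved to unguessable names.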
To give the article a tiny bit of credit, the GDPR does not stipulate that the right to data portability requires additional authentication such as multi-factor (which can be as simple as an emailed link with a one-time token). That is certainly a shortcoming that should be addressed, but it is also one that a company that cares about your privacy should be able to address of its own accord.
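To illustrate how cheap that mitigation is, here is a sketch of what an emailed one-time download token could look like server-side. The names and the in-memory store are my own assumptions; a real service would persist tokens and send the link by email:

```python
import hmac
import secrets
import time

TOKEN_TTL = 15 * 60  # hypothetical policy: link valid for 15 minutes
_pending: dict = {}  # token -> (user_id, issued_at); in-memory for the sketch

def issue_download_token(user_id: str) -> str:
    """Create an unguessable single-use token and remember who it is for."""
    token = secrets.token_urlsafe(32)
    _pending[token] = (user_id, time.time())
    return token  # would be emailed to the account holder as a link

def redeem_download_token(token: str, user_id: str) -> bool:
    """Allow the download only once, only for the right user, only in time."""
    entry = _pending.pop(token, None)  # pop: a token can never be reused
    if entry is None:
        return False
    owner, issued = entry
    fresh = (time.time() - issued) < TOKEN_TTL
    return hmac.compare_digest(owner, user_id) and fresh
```

The point is just that the safeguard the GDPR fails to mandate is a few dozen lines, not a research project.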
EDIT: on reflection, it is a novel point that anybody with access to your account can download your full profile, but at that point the damage has arguably already been done. A site like Facebook makes you wait a while before a download link is generated, and ones like Google require a password before you can change any account settings. Crawling the profile is probably less intrusive and noticeable than using the download link, as no emails get sent.
It's like complaining that someone who "hacks" into your email account can download all your email.
With client-side encryption, the archive remains accessible, but actually reading it requires a key only the client holds. This neutralises most email-account phishing attacks.
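A toy sketch of that client-held-key idea. The ad-hoc SHA-256 keystream below is for illustration only (a real implementation would use a vetted authenticated cipher such as NaCl's secretbox); the point is just that the server stores blobs it cannot read:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the client's key (toy counter mode)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """What the client uploads: nonce + XOR-ed message. The server never sees the key."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

An attacker who phishes the account credentials can still download the archive, but without the key that never left the client's machine, the blobs are unreadable.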
Email history is a useful feature. You're at risk of unauthorized access to it in exchange for that useful feature. Anyone could theoretically offer a service without that feature, though there might not be demand for it. (then again, see mailinator.com)
Everything-history is a compliance feature. You're at risk of unauthorized access to it in exchange for compliance with the law. Offering a service without that feature would be illegal.
It's entirely fair to blame the increased risk on the law. The law's benefit might outweigh those costs, but pretending that the costs do not exist serves no one.
But the GDPR itself: yes, of course it has negative effects, like most centralized Bessermachen from the EU commissars does. I've never understood all the starry-eyed hosannas for this slide into supranational control of things no governing body ought to concern itself with.
It has its moments, if you're into Dadaism. The other day I had a phone call from my vet. Because of the GDPR they felt obliged, belatedly, to mail all customers some info about why and what. They didn't have my email address; could I please give it to them?