It is pretty simple, only 3 levels (strikes for the fellow Americans):
Strike 1 - Stern warning letter
Strike 2 - 2% of your TOTAL GLOBAL REVENUE (or 10mil EUR, whichever is higher)
Strike 3 - 4% of your TOTAL GLOBAL REVENUE (or 20mil EUR, whichever is higher)
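For scale, the two fine tiers in Article 83 can be sketched as a max() of a percentage and a flat floor. This is a rough sketch only; the actual amounts are discretionary upper bounds, not automatic penalties:

```python
def max_fine_eur(annual_global_revenue_eur: float, tier: int) -> float:
    """Upper bound on a GDPR administrative fine under Article 83.

    tier 1 ("strike 2" above): up to 2% of global revenue or EUR 10M,
    tier 2 ("strike 3" above): up to 4% of global revenue or EUR 20M,
    whichever is higher in each case.
    """
    if tier == 1:
        return max(0.02 * annual_global_revenue_eur, 10_000_000)
    return max(0.04 * annual_global_revenue_eur, 20_000_000)

# A company with EUR 2bn in global revenue: the percentage dominates.
print(max_fine_eur(2_000_000_000, tier=2))  # 80000000.0
# A tiny company: the flat floor dominates.
print(max_fine_eur(1_000_000, tier=2))      # 20000000
```

The point of the flat floor is exactly the second case: a small company can't shrink its exposure by shrinking its revenue.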
And now you know why GDPR is a board level topic. Keep in mind that the EU/US Safe Harbor agreement got axed due to a lawsuit by a single student from Vienna against Facebook. So all you need is a single pissed-off German customer whose data access request you ignored, and you're fucked.
For startups - GDPR is like Y2K at the time, a GOLDMINE. So much opportunity to sell solutions, from real to snake oil. GDPR compliance is already and will continue to trigger a massive wave of investment.
I have national sales responsibilities for one of the majors. Think IBM/Microsoft/Oracle/etc leading a sales team of 74 reps.
You'd be surprised at how FEW sales we've generated from GDPR. We've been providing free GDPR assessments for the past 1.5 years to over 200 accounts as a lead-gen opportunity, and very few sales have resulted.
It all boils down to this: companies simply don't believe the fines will be enforced, given just how large they are.
And since GDPR doesn't go into effect until May 2018, companies are just waiting and seeing what happens.
It's really hard to sell GDPR because it's essentially an insurance policy. Why spend $5m on software and another $5m in services ($10m combined) if your total fine is only $20m? Do you as a company have a 50% chance of getting fined? If not, then roll the dice and don't buy a solution.
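The back-of-the-envelope math behind that argument, using the comment's hypothetical numbers (the probability of being fined is the unknown):

```python
# Hypothetical figures from the comment above; p_fine is the unknown.
compliance_cost = 10_000_000   # $5m software + $5m services
max_fine        = 20_000_000   # assumed worst-case fine

def worth_complying(p_fine: float) -> bool:
    """Naive expected-value view: comply only if the expected
    fine is at least the cost of compliance."""
    return p_fine * max_fine >= compliance_cost

print(worth_complying(0.5))   # True  (exactly the break-even point)
print(worth_complying(0.2))   # False (roll the dice)
```

Of course this treats the EUR 20M floor as the ceiling; as the next comment notes, for any company that can afford $10m in software and services, 4% of global revenue is almost certainly the larger number.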
For a company that has the means to spend $10m on software and services I'd think that 4% of global revenue would be clearly the larger sum.
If you will, it's the difference between the VW approach and that of (as it appears, anyhow) all the other carmakers. They're all cheating; most were simply wise enough to avoid doing so explicitly.
Data protection is also harder to enforce than emissions; just look at how laughably incompetent emissions enforcement is to get an idea of how likely you are to get caught if you happen to collect too much private information.
I expect the same here as in emissions: no real compliance for years (if not decades), and when enforcement comes, it won't be the regulator that actually catches even egregious wrongdoing. I mean, the high-profile players will pay lip service of course, but that's it.
There's no need to theorise about how the EU might enforce such laws, we've got actual examples of them enforcing laws like this already and they do not mess around.
Of course they'll try to avoid that in the future, but the fine is mild enough that it's not going to cause firms to err on the side of caution. They're going to look for the absolute edge of the law.
Frankly, if google had not leveraged their search "monopoly" (not quite a monopoly), I suspect their market cap would have been more than 2.4bn lower; so this was a pure win - especially since conviction and detection aren't a slam dunk.
The lump sum was just for backdated non-compliance.
I'm not saying it's nothing: it's that it's a risk worth taking given the gains. If you're building a trillion dollar company (i.e. google), then eliminating competition or accepting some judicial friction as a way to establish dominance in your (data-mining) field is perhaps acceptable or even wise.
In that, these fines simply aren't punitive enough, especially since they come so late. And again - it's not black and white. The existence of such rules will alter behavior; it's just a question of whether the reaction will be legal mitigation tactics, a company-wide change in approach, or something in between.
Put it this way: if you can corner a market worth trillions, risking how much loss is acceptable to reduce or eliminate competition? I'd venture that these fines are at least one order of magnitude too small to be really frightening (which isn't to say that the behavior google was convicted for deserves that amount, simply that anything less than that means that law can't really be enforced)
Most of it is done by our internal development team or the core system providers we use. No need for external consultants.
From what I can tell, the whole banking industry is busy implementing this, at least here in Norway.
The fines are real.
What kind of companies do you sell to? Maybe they are actually trying to handle it internally too? What do they say?
It sounds like a goldmine for the EU government. If they install a group of people chasing noncompliant companies, it will pay for itself.
No war except economic war. ;)
The European Parliament wanted a higher percentage but that got negotiated down during discussions with the Council of the European Union.
For companies with lower profit margins, or companies that are hit by the lower limit of €20M, the threat is much greater. Small companies may go immediately bankrupt from a €20M fine, while companies with slimmer margins may be taken from black to red results by a 4% fine.
From my personal experience, many such companies do not fully grasp to what extent these regulations will affect their business models if actually enforced.
Plus, they need the cooperation of governments. Their patents are just pieces of paper without governments enforcing them, for example, and their data centers need special power solutions. Governments don't trust companies that ignore government regulation.
And, even if they hope that the US government is more lax with their restrictions and will turn a blind eye, the US is very heavily dependent on the OECD working, and not helping the EU prosecute blatant criminals would be a big problem for American credibility.
This is not even getting into the fact that the EU is a much larger market than the American market is, too.
In practice, just pretending regulations don't exist (depending on the regulations in question, of course) can be stupidly expensive for the company in question. Shareholders don't like stuff that's risky and expensive, and would replace a CEO that mad.
The key point is: it's not the CEO's company. He works for a paycheck; the shareholders invest for long-term profits. Big difference.
If by substantially smaller you mean 'pretty much the same size' then yes you're right.
https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP) is pretty clear:
2016 GDP (PPP) figures: US $18.6T, EU $19.7T (6% higher than the US), UK $2.8T, EU minus UK $16.9T (9% lower than the US)
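A quick sanity check of those percentages, using the rounded figures from the list above:

```python
# 2016 GDP (PPP) in trillions of USD, per the linked Wikipedia table.
us, eu, uk = 18.6, 19.7, 2.8

print(round((eu / us - 1) * 100))          # EU vs US, percent: 6
print(round(((eu - uk) / us - 1) * 100))   # EU-minus-UK vs US, percent: -9
```

So the EU is about 6% larger, and the EU minus the UK about 9% smaller, than the US on these figures.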
A very real possibility is that Google/Facebook will have these choices:
- Comply with GDPR.
- Shut down business in Europe entirely and eventually be forced to repatriate offshore assets and incur the relevant taxes.
- Continue business in Europe, ignore the GDPR and pay a fine of 4% worldwide revenue.
- Continue business in Europe, ignore the GDPR, don't pay the fine, have all European assets frozen.
That's ignoring the possibility that the EU may be able to reach their US assets as well.
They could shut down all their European operations, but that would cost them a lot more than 4% of revenue.
There's an interesting assumption in there, and I'm not entirely sure it's a correct one. Facebook and its subsidiaries play a large part in many people's everyday lives now, in particular forming the main way a lot of people stay in touch with their friends and family. It is not at all clear to me what would happen if Facebook decided to call the EU's bluff here and literally switched off its service to everyone in the EU for a day or two, replacing it with a single page explaining that until the law was changed they would not be able to provide their service to EU customers.
If that happens, how long do you think it will take a bunch of companies to spin up replacements for Facebook? It's not exactly rocket science to create a social network application; most of the value of Facebook also does not lie in the platform's code, but in the network effects that it managed to create. Thus, there would be a timeframe in which a huge number of new social networks would try to win over a critical amount of users, with one of them eventually emerging as the dominant one. As soon as that happened, Facebook would have quite a big problem in case they ever wanted to re-enter the European market, as it would suddenly have to compete with a big network with serious network effects keeping their users from re-joining Facebook, even if that was suddenly possible again. They would probably decide to shell out the largest sum of money ever to simply buy up this competitor, because otherwise there would be a certain risk of the new competitor eventually winning the global race for the dominant network (I assume that due to network effects there will always be a clear gravitation towards a single global general-purpose social network, as long as access to this network is not purposely blocked), and no matter how low this risk is, Facebook would most likely try to eliminate it (remember the large sums they paid to buy up possible dangers in the past).
Other commenters already compared the situation with China; I think that is a pretty good comparison, just that the whole development/transition phase would happen much faster, now that it is pretty well known what the final product would have to look like to be accepted by users.
True enough, but assuming they decided to pull the stunt before the new laws came into effect, which isn't until the middle of next year, the real question is whether public opinion could be shifted in so little time.
In most cases, I'd say that was extremely optimistic. However, in this case we're talking about a service used by probably a large majority of voters across Europe, often for considerable time every day and for communications and arrangements that matter to them personally. "If they don't change this, you'll lose Facebook, Instagram and WhatsApp" is sure to get some people's attention, and all of those services going dark would probably make the front page of major news outlets. There would certainly be a lot of discussion; in fact, it might be a doubly effective move, because a lot of people would immediately realise that without those services their main ways of telling their friends about something weren't working.
If that happens, how long do you think it will take a bunch of companies to spin up replacements for Facebook? [...] Thus, there would be a timeframe in which a huge number of new social networks would try to win over a critical amount of users, with one of them eventually emerging as the dominant one.
I'm not sure that's how things would play out, if Facebook really did go offline permanently in Europe because of this. It would take vast resources to operate a social network on the scale of Facebook, but more importantly, before you could even start, the first thing you'd have to do is figure out how to comply with the same EU data protection rules and presumably then you'd also have to convince some serious investors that you could do it. If Facebook had already failed in that -- because obviously Facebook isn't just going to surrender the entire EU market and all its advertising revenue without a very good reason -- then why would we expect any new social network without all of Facebook's advantages to be able to comply if it was otherwise operating on the same basis, i.e., free to use but ad-supported?
Interesting idea. I did not think about this, mostly probably because I would never do it if I were Facebook: there is a very real risk of it backfiring. Many people here (I'm living in Germany), especially among the politically active, are in a kind of love-hate relationship with Facebook, and a clear attempt to blackmail an entire continent's population in order to force political action in favor of an absurdly rich, multinational corporation could very well kill off whatever positive attitude there is towards Facebook in particular.
As for competition having to comply with the privacy law: of course it would. But I also assume that this is not impossible at all; it is just inconvenient and costly, especially if you have a huge legacy system built under the assumption that you can do practically everything with your users' data. If you design your system in compliance with the data protection law in the first place, this gets considerably easier. Money would not be a problem at all: there are more than enough investors in Europe who would love to throw money at an attempt to create a second multi-billion-dollar money-printing machine in a market that is currently assumed to have a very high barrier to entry (but exactly that would change if Facebook gave up on Europe).
I also assume that it will not be impossible at all for Facebook to comply with these regulations, just pretty inconvenient. Under this assumption, any refusal to comply must automatically be motivated by a desire to maximize profits and minimize political influence on the platform, not by a sheer struggle for survival. Facebook's PR department might try to spin this into a different story, however...
This is a common assumption, and it might prove to be correct, but I'm unwilling to accept it as axiomatic.
Fundamentally, just looking at the right of a data subject to withdraw consent, it would mean Facebook needed to track every piece of data that could conceivably be tied back to an identifiable person throughout its entire organisation. That's not just their status updates or that time a friend tagged them in a photo. It's every photo in Facebook's entire database that ever included a recognisable image of them, tagged or not. It's every line in a log file that was saved by an engineer investigating a server glitch that relates to any activity that user took. It's everyone who uploads their contacts to find friends and has one of those data subjects in their contact list.
Now, I'm not saying I think Facebook should necessarily be able to do all of the above. In particular, I have often questioned their hoarding of data from things like contact details and photos that will inevitably include other people who may not have chosen to use Facebook or give their consent.
But I am questioning whether it is practically viable, even for an organisation with Facebook's scale and resources, to follow the letter of this law and still operate at all while continuing to provide similar services, if a few people decide to make a point and explicitly deny consent to hold any data about them, even if such data was supplied by other people. There are practical, ethical and legal issues here about third parties and automated systems that we have barely begun to explore, and we're talking about them at a scale where businesses like Facebook and Google have already had to invent new techniques and strategies for organising data just to cope with what they already do.
None of this has even touched yet on whether Facebook would still have a viable commercial model if users have a right to opt out of processing their data for purposes such as advertising but Facebook isn't allowed to deny them service in return, which is another interpretation I've seen talked about a lot (on the basis that an opt-out that stops you using something independent as well isn't a true opt-out and so wouldn't count). So far, I haven't studied the GDPR and informed reviews of it enough to reach any firm conclusions or opinions on that side of things, but again there are surely issues about the obligations of an organisation that offers a useful service but relies on advertising to fund it that go far deeper than just Facebook and the GDPR that haven't really been explored up to this point.
As surprising as it may seem given my comments in this discussion, I'm actually a pretty firm believer in stronger privacy rights and a confirmed sceptic when it comes to the big data hoarders like Facebook and Google. But I'm also someone who runs businesses and has first-hand experience of what happens when the EU's non-technical legislators meddle in technical issues they don't fully understand, often missing even the blindingly obvious consequences, never mind the more subtle and/or long-term implications. So I don't think we should dive into changes like this without considerable thought, and contrary to what various officials from the EU and the national data protection authorities like to say, I don't believe for a moment that this sort of change is a small, incremental development of the existing privacy frameworks we already operate under.
From my understanding, that is wrong. In fact, Facebook would not be permitted to establish the link. You merely appearing in a picture that is not your own does not qualify as personal data.
In short, if you're in a photo and it's recognisably you, it's personal data.
However, the wording I used before was actually taken directly from the GDPR itself.
Moreover, the interpretation that an otherwise unidentified image of someone may become personal data even if collected incidentally such as in a photo taken for other purposes or on CCTV is supported by among others the ICO (the UK's data protection regulator). There are a few specific examples in some of their published guidance [2,3,4].
Unless your argument is that Facebook isn't processing those photos in any way that would cause that interpretation to apply? In that case, I can maybe see how your argument works, but given what we know of Facebook processing uploaded photos just from the features they offer publicly, it's hard to imagine how their processing could possibly not be sufficient for all photos they hold to be treated as personal data.
 http://data.consilium.europa.eu/doc/document/ST-5419-2016-IN..., Article 4, clause (1), where "personal data" is defined
Suppose you have a social network with a user A and a user B. Each of them privately uploads a picture showing both users. If user A deletes his or her account, the picture in user B's account is not to be deleted, even though it contains an image of user A.
As I mentioned before, modern technologies raise complex issues about third parties that we have barely begun to explore. SOP at social networks is very much to get people to provide information about not only themselves but also other people they know, and that's a minefield if those other people aren't happy about it. Obviously you can't just say social networks can do what they want if someone else provided the personal data, because that undermines the entire principle of data protection and privacy. But equally, if you require explicit consent from everyone for everything, you create a huge burden that might make the whole idea unworkable or at least remove a lot of the value these services offer to their users when maybe a lot of people wouldn't have a problem with, say, a friend tagging them in a photo anyway.
As things stand, taking the GDPR at face value, I don't see how it would be legal for a social network to retain any photo in which someone is identifiable if that person doesn't consent, unless that social network also took rather dramatic steps like avoiding any sort of automated processing and analysis of photos that might identify people in them, as well as removing features like letting a user tag someone who isn't a member of the social network.
Except that is not included in this. In fact, you are not even supposed to retain data that could identify a user, after their account is gone, in order to match them against other data.
I don't know the law by heart right now, but even a year ago I had discussions with people consulting on this about this very topic: what to do in such cases.
This law was not drafted in a vacuum where nobody looked at real world situations.
Sadly, given that we're talking about an EU law in a technical field, I suspect that what you just wrote is actually quite close to what did happen. That would be consistent with other recent EU rules affecting creative and technical businesses. In some cases, even senior EU and national government figures have admitted that those involved hadn't seen major unintended consequences coming at all, at least not until it was too late in the process to avoid them.
Essentially, the EU often exhibits good intentions and its laws might be made with laudable overall goals, but it frequently produces poor implementations that haven't been thought through in enough detail before legislating. So far the GDPR is shaping up to be another textbook example, with perhaps a side order of political football so the EU can beat up big US tech businesses because the EU's business environment hasn't resulted in creating equivalent services of its own.
This doesn't seem healthy for either our tech industry or our society as a whole to me. I'm actually a rather strong advocate of privacy online, but rules intended to protect it do need to be reasonably clear and practical or they're not going to be worth very much.
We will see soon enough how this plays out. From where I'm standing I'm very welcoming of this development because it's the first time I see an actual attempt of companies doing something that is in the interest of the customer when it comes to data.
People stop using it as much.
That seems particularly bad for Facebook, given the network effects it relies on.
If everyone in Europe woke up one morning next year to find Facebook saying that as a result of new EU law they had been required to turn off their service and please would everyone write to their representatives (using the form conveniently provided below) to ask why they'd done this, what do you think would happen?
SOPA felt like a gift to Evil Media Corporations at the expense of The Internet People.
If Facebook goes dark as a protest against GDPR, people are far more likely to side with the EU than with the Evil Corporation That Avoids Taxes.
Facebook would disappear from the EU
I bet customers would rally against the law rather than pay microtransactions.
Sadly, I think it's quite clear that most people are not that worried about it, even in Europe. Or at least, if they are, they're willing to put up with it for the convenience of the services they get in return.
The real problems, IMHO, are a lack of awareness particularly among non-technical people of what is really happening and its implications, and a lack of competition so that those who do value their privacy more highly can choose to protect it without giving up normal parts of modern life.
The advantage of having armies and police forces is that you can lock people up who don't adhere to your rules. Good luck having any employees in Europe if you decide to ignore their regulations!
The Irish and Dutch subsidiaries are an essential part of US corporate tax evasion (the "Double Irish with a Dutch Sandwich").
Whenever you sell to a European company, you must be compliant, as otherwise the European company is liable.
Ultimately, if the fines or compliance costs get too high, that's exactly what happens. A company that does no business in the EU is not subject to EU rules. But because it is such a large and wealthy market, the threshold is very high. If Kazakhstan passed a similar law, it'd be a very different story.
The ensuing outages would impact their US customers, who would sue under US jurisdiction.
Art 83 covers the different triggers.
Having said that there are various schools of thought around levels of potential fines, including from different data protection authorities in the EU.
That the ICO in the UK has yet to levy a maximum fine, despite egregious violations, is at least one factor suggesting fines will not increase dramatically.
Also there are not only increased fines to consider but an increased focus on compensation to data subjects in the event of a violation of their rights (together with an evolving case law to support that in the UK at least).
In reality a bunch of small shops are going to go bankrupt because they don't have a "GDPR implementation" position filled and they didn't do some report properly.
Go get the salt.
Unless you're a start-up that handles personal data, in which case it's another bureaucratic overhead that also carries a risk of draconian penalties if you make a mistake, even if you have perfectly sensible reasons for working with that data and you're not doing anything at all surprising or dubious with it.
Of course, the EU has form for this, given its similar approach to both consumer protection and VAT rules in recent years. It does seem to have an unhelpful habit of imposing regulations at big business scale to deal with big business scale problems, but not considering that both of these may be wildly disproportionate for smaller businesses.
GDPR explicitly mentions that "warnings" and "periodic data audits" should be considered measures to take before the fine is applied.
It also says that when deciding on the fine, due regard shall be given to nature, gravity, and duration of the infringement; degree of cooperation, intention or negligence, actions previously taken by the authorities, previous infringements, nature of data etc.
It seems unlikely that anyone in good faith would get screwed by this.
Most small businesses don't have a lot of outside investment, if any. If you're running a bootstrapped business funded with your own savings, the last thing you need is to have to spend significant time and money figuring out where you stand on GDPR compliance and what you have to do with regards to any other organisations that your business in turn depends on. The theoretical fines aren't the most immediate problem for small businesses; the overheads are.
Almost every US Supreme Court decision comes from a single person challenging something. Are you suggesting that, under a certain size, one shouldn't be allowed to sue in court?
It takes a single person who was wronged. That's it.
I predict lawyers are going to make a lot of money on this.
For startups - GDPR is like Y2K at the time, a GOLDMINE. So much opportunity to sell solutions, from real to snake oil. GDPR compliance is already and will continue to trigger a massive wave of investment.
I take issue with the quoted statement because it ignores the downside for startups. Companies like Google, Facebook, et al. have the money and time to hire teams of lawyers to find a way to work around these regulations in a manner that maximises their ability to continue tracking while minimising their risk of getting smacked by the hand of the law. Getting hauled off to court won't bankrupt them, and they have the legal teams to probably win regularly enough. Even barring that, they have the finances to adjust their business to be fully compliant (to whatever degree business adjustments require) without going bankrupt.
Joe's Advertising Supported Free Service does not. Joe's not going to start his own ad network and start mining personal data for it -- it's way too expensive to try to compete with Google/Facebook (and it was already way too expensive to do it, before). If he does, he's the one who's going to get hit with the second and third strike; probably from that "single pissed off German customer".
Investing in firms that touch this space will be met with far more skepticism. I wouldn't be surprised if any company that simply asks for a user ID and password faces a little scrutiny from investors, at least until the regulatory atmosphere is understood (I doubt it'll be that extreme for terribly long, but one bad court ruling/fine laid down where it wasn't expected could change that). The cost of establishing many, many kinds of companies will now increase, because the risks are high enough that you can't go to market without having your legal bases covered on this one. That money has now been shifted to a business that -- potentially -- is selling snake oil (and startups are going to be more likely to do business with that snake-oil salesman, since they'll probably also be the least expensive).
Then there's the "unintended consequences". Here's a crazy hypothetical, but a lesser variation of it is plausible if this were a US law: Some individual exercises his free-speech rights and chucks something up on the Internet that has a bunch of horrible things on it, say, like 'a guide on how to slaughter and prepare kittens for healthy and inexpensive dinners'. Some kid reads it and kills/eats his neighbor's cat. The guy didn't do anything illegal, really, but his web host knocks him off the web and people are calling for blood. He happens to use an ad-network, but doesn't, himself, collect personal information. However, this ad-network does, and at one point was nailed under this law. He uses the ad-network, so some overzealous prosecutor figures out a way to bring it in front of a judge that he's responsible for what this third-party did and should also be prosecuted. At the height of outrage, a jury isn't hard to find to connect the dots.
 And hey, that's fine, you're an internet commenting individual just like me -- we don't have to present both sides of the story -- that's the replier's job.
Yeah, I took that a little far, but I think back to when "The Columbine Massacre" happened and everyone believed it was FPS video games that warped those evil children's "precious little minds". It took all of two seconds to call for banning violent video games (constitution be damned), many idiotic and ultimately overturned laws were passed, and if there had been some way to haul the developers who wrote the game off to jail (I think they were blaming DOOM at the time), it would have happened within those first few weeks.
Since it was already too expensive to roll his own, Joe's site will simply include content from whatever ad network he chooses. All he has to do is make sure that the network is GDPR compliant. If he didn't think to do that, the first warning should give him the necessary time to find a different ad network or to specially handle EU-based customers.
Your "kitten meal" example also wouldn't work, since the first violation doesn't carry a fine, and if the website has been taken down, there is no possibility of a second violation.
It also gives a REALLY good pathway for any big company to take down a smaller company that is competing on data.
20mil EUR even for zero revenue?!
That's not a startup, that's a hobby.
So there is no setting up an offshored tiny company to wear the liability - your parent company is on the hook, wherever that money may reside.
Why would they impose a €20M floor and not stick to 4% of revenue regardless of size?
edit: I'm not sure what the downvoting is about. It's still a limit, one that means a company turning over €0-500M will pay up to a €20M fine. Not so bad the closer you get to €500M, but not so great if you're a small company, especially as the regulation is so open to the "law of unintended consequences" right now and only larger companies will have the funds/manpower to navigate it.
I've just started a business myself, and this regulation affects my company too. It makes development costlier, and it'll eat into the precious little time we have by forcing us to spend it on compliance paperwork rather than on our core business. In the short run, it does hurt our chances of success.
Yet, none of the trouble is even comparable to what's to be gained here. And it bothers me (though doesn't surprise me) that some people don't see that.
It also bothers me that such vocal opposition barely comes up when the discussion is just about bigger companies such as Google and Facebook. How can we expect "un-evilness" from bigger companies when we're barely willing to do anything in that regard ourselves?
The spirit of the law is nonsensical. It makes all commercial activity illegal, to the extent that all businesses keep records of their sales, inventory, etc. which reflect the activities of their customers, employees, and suppliers.
Google's current ecosystem of data-sharing means that Assistant can make educated context guesses on what I mean when I talk to it based on my browser history and map navigation history. If the new privacy constraints damage that passive interconnection, that's not a net good for me.
I agree, but that's a different topic, really. The comments here aren't about the (un)/importance of privacy. The main debate seems to be either about the technicality of the law and its possible unintended consequences, which are legitimate concerns, or they're about how "this is gonna make my job much harder," which is not really a legitimate concern in this context, and those comments were the ones I was talking about.
> Google's current ecosystem of data-sharing means that Assistant can make educated context guesses on what I mean when I talk to it based on my browser history and map navigation history. If the new privacy constraints damage that passive interconnection, that's not a net good for me.
I think we're overestimating a technical difficulty here, and downplaying a moral principle.
Providing a personalised service without storing large amounts of personal information in a central location is not impossible. It's just technically harder to do.
And even if it was impossible, then still, we need to sort out the moral consequences first. Not by banning technological progress of course, but perhaps by bringing more oversight to corporations. Or by making sure that people of lower socioeconomic background aren't hit harder than the wealthy.
Take the right to be forgotten. First of all, it should be common sense that no one has the right to force legitimate news articles to disappear because they don't like the content, but that is what the EU has ruled should happen.
I get the desire to have a company forget about you, and remove all the personal information they have. It makes sense from a personal standpoint. But how do you do it technically?
If you follow GDPR strictly you would need to be able to purge the data from your backups. Now most backups are considered immutable, so you aren't going to do that, meaning you need a way to ensure that "forgotten" users never get restored.
But how do you even delete the live data? Does the tech company you work for have the ability to delete all traces of a user from their system, cleanly severing all relationships with other objects in your system? Do you have the ability to retrieve everything you know about a specific user, and provide it to them? You will need to write the code to do this.
There is a good chance your little startup that isn't cash flow positive will have to spend $1 million of its VC money on becoming GDPR compliant.
Do you sell a SaaS service to businesses, and those businesses send you their customers' data? Then you are the processor and they are the controller. Cool, less for you to do, sort of. Except that controller must agree to every sub-processor you use. Want to switch from AWS to GCP? You can only do it if all your customers agree. Want to try out a new metrics or logging service? If it will have any PII you can't do it without customer (controller) permission.
You will basically need to hire full-time compliance officers to deal with this. The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
No, it is about deleting personal data attached to your user account, not "news articles". This thing intends to make the "delete my account" button to actually, you know, "delete my account", instead of fake-deleting it by setting a "deleted" flag and telling me that everything is gone now while still keeping gigabytes of data associated with me in your database.
> [...] meaning you need a way to ensure that "forgotten" users never get restored.
If this is considered to be a hard problem, then I assume storing a list of deleted users in a separate place and immediately purging those users from the backup after a restore must be some kind of rocket science.
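The "list of deleted users" idea above can be sketched in a few lines. This is a minimal illustration, not a real backup API: the tombstone set, the user dict, and the function name are all hypothetical; the point is just that the deletions are persisted outside the immutable backups and re-applied after any restore.

```python
# Tombstone approach: persist the ids of deleted users separately from
# the backups, and re-apply the deletions immediately after a restore.

# Kept in a small hot store, NOT inside the immutable backup set.
deleted_user_ids = {"alice"}

def purge_after_restore(restored_users):
    """Drop any user who was deleted after the backup was taken."""
    return {uid: data for uid, data in restored_users.items()
            if uid not in deleted_user_ids}

# A restored backup still contains the "forgotten" user...
backup = {
    "alice": {"email": "alice@example.com"},
    "carol": {"email": "carol@example.com"},
}

# ...but the purge runs before the data goes live again.
live = purge_after_restore(backup)
```

The only hard requirement is that the tombstone store itself survives independently of the backups being restored.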
> There is a good chance your little startup that isn't cash flow positive will have to spend $1 million of its VC money on becoming GDPR compliant.
I wouldn't call it "to become GDPR compliant", I would call it "to build a sound database structure". Because if you are unable to purge all data associated with one of your users' accounts from your system without destroying the integrity of the rest of your data, then you obviously have a half-baked system at your hands that lacks a core feature - to actually delete accounts. And you surely should spend some of your money to refactor this crap into a long-term viable solution while you are still small and agile enough to do that. Because it's only going to be way more expensive later on...
You've gone from: backup.bak to backup-encrypted.bak and backup-keys.bak
This does make your backups slightly less reliable, because it's one more thing that touches them, but if you do a sane implementation and exhaustively test it, the risk is manageable.
I'm also not sure you really need to keep that many backups of this file. Replicate it and make sure you can roll back when your replication is borked, but if you really need to restore your database from months ago, using a newer list of encryption keys shouldn't be a problem.
Does your data not have a lifetime anyways? Do you really need to store everything forever?
If you have a system that just tracks changes and one that occasionally records full state, then after you delete someone from prod you could simply overwrite old full-state backups with your new, post-deletion backup and update your change-only backups to replace data about that user with `deleted`.
In your backup, encrypt each user's data using a per-user key (AES or something). The keys will be tiny, so you can store the keys in a hot database. When a user deletes their account, simply purge the user's key.
Tada - like magic all of that user's data on your tape backups has turned into unreadable noise.
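The per-user-key scheme described above is often called crypto-shredding. Here is a toy sketch of the key-management pattern; the XOR "cipher" below is for illustration only (a real system would use an authenticated cipher such as AES-GCM), and all names are made up:

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher -- illustration only, NOT for production.
    Symmetric: applying it twice with the same key recovers the data."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Hot key store: one tiny key per user, kept OUTSIDE the immutable backups.
keys = {"alice": b"alice-key", "carol": b"carol-key"}

# The tape/offline backup holds only per-user ciphertext.
backup = {u: toy_cipher(k, f"profile of {u}".encode())
          for u, k in keys.items()}

# "Forget" alice: purge her key. Her backup rows are now unreadable noise,
# with no need to touch the backups themselves.
del keys["alice"]

# A restore can only decrypt users whose keys still exist.
restored = {u: toy_cipher(keys[u], blob)
            for u, blob in backup.items() if u in keys}
```

The appeal of this design is that the immutable backups stay immutable; "deletion" is reduced to deleting a few bytes from a hot database.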
Complying with this requirement will require us as an industry to make some changes to how we store user data. But the amount of work each company needs to do is proportional to the complexity of our existing backup system. If you're a tiny startup and don't keep offline backups at all, you can just delete the user from your database. The more complex & rigorous your backup system is, the more complex your user deletion system will need to be.
It's a hassle, but no more so than any other requirements we deal with on a daily basis.
A fundamental axiom of data recovery is that you don't really ever "delete" anything, because accidentally losing data is considered such a horrible problem. So virtually all backup systems consider backups fundamentally immutable, because everyone knows how just one small bug in something designed to modify a backup could fuck the whole thing.
I don't really disagree with the spirit of the law, but I think the backups problem to honestly be pretty much a technical impossibility for large companies, it will basically just get ignored (the backups part at least).
A month would be better than a week, but it still allows a situation where the wrong user account is accidentally deleted and the mistake is not noticed for a more than month.
I think the best approach is for the EU regulation to give the company 90 days to delete the user's data. The company can then just have a cron job to delete older backups. Simple.
No, it is exactly that! The test case came from a man, Mario Costeja González, wishing for his past financial embarrassments to be erased.
Doesn't it just affect companies which rely heavily on lack of privacy for monetisation? I think that's sort of the point - that your business should not rely on tracking individuals and selling that information without their consent to gov/private bodies. It's obviously a huge change, since so many big tech players rely on this to make profits. But the internet will be a much nicer place for everyone else if right to privacy is protected.
This seems incredibly broad from the article and would touch nearly every startup. Maybe there are limits on the businesses affected? Otherwise I'm not sure how one could formally define "rely heavily on lack of privacy for monetisation."
https://unroll.me/ is a good example. They provide a free service to users but make money by leveraging their total access to your inbox to sell ad analytics and competitive intelligence.
It might hurt the bad players but it really depends on how readable the text will be to the average user. If it's going to be a checkbox it will likely not do much.
No, unfortunately not.
HN itself is illegal under EU regulations because you can't delete old comments, and we know that the admins know how many RPS they are getting, but I haven't specifically opted in to the use of records of my HTTP requests for traffic monitoring.
And as far as this stuff being difficult to do, sure, but isn't it worth doing? Why shouldn't a customer have a say in which cloud provider hosts their data? Why shouldn't we be able to make sure no data is kept about us after we stop using a service? Like with anything novel in software, it only seems hard to do because we haven't done it, but in a ground-up design it's not that hard to add GDPR compliance, and a few years down the line this stuff will be business as usual.
You don't need a compliance officer, but you do need a security officer, and their job now also involves data lineage, not just data security. You already should have that person if you're building a SaaS solution.
I was referring to Google vs. Costeja, which I realize isn't GDPR, but an EU court did rule that way.
> You don't need a compliance officer, but you do need a security officer
I strongly believe that compliance and security are two very different things that are only slightly related. They come at it from very different perspectives. A security engineer should be doing threat modeling and protecting against threat vectors. A compliance officer may consult a security engineer, but ultimately their job is to check boxes to make sure regulations are followed. I think compliance staff are more appropriately part of a legal team than an engineering team.
That isn't to say compliance officers aren't useful. Having a strong compliance voice can be great. I've seen companies without a compliance officer reduce security because an auditor told them regulations required something. A good compliance officer would have been able to push back against the auditors, pointing out what regulations actually require, and working with the security engineers to come up with a solution that meets regulations and actually improves security.
That is not correct. The right to privacy is not an absolute right. It has to be balanced against other rights, such as the right to free press. In a normal news article case, free press would prevail.
I advise a lot of small customers to implement manual procedures to retrieve or delete data in case a request for it might be done. And to set up a basic privacy and security policy which they should have had already. This doesn't cost much.
> Except that controller must agree to every sub-processor you use.
This can be a generic agreement under which the processor notifies the controller.
> Want to switch from AWS to GCP? You can only do it if all your customers agree.
Not true, you do however need to be able to tell customers what companies receive their data. Which can be quite a challenge with sub-sub-subcontractors.
> Want to try out a new metrics or logging service? If it will have any PII you can't do it without customer (controller) permission.
Not true if the processing agreement contains a clause that instructs the processor to perform metrics or logging. Customer consent is often not needed unless it has a big impact on their privacy. Consent is only one of the legal grounds.
> You will basically need to hire full-time compliance officers to deal with this. The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
If this were true I'd be a lot busier. It would be wise for companies to assign responsibility for privacy and security, but it doesn't always need to be a full-time job with a legal background.
Or they could, I don't know, just not collect that data in the first place.
The company I work for has no ads. It does no analytics on personal info. It does nothing you would care about. What it is is a SaaS product for businesses. We aren't the controller, so we don't need permission from end users; our customers need to get permission from their customers. But end users can ask our customer (the controller) to delete data, and our customer can ask us (the processor) to delete it.
Fine, we now have an engineer building GDPR features instead of features that benefit our customers. Oh well. But it isn't as clear a win for end users as people make it out to be when they only come at it from an anti-Google and anti-Facebook perspective.
It sounds good in theory, but things like Article 28 certainly make it harder to move quickly.
> The processor shall not engage another processor without prior specific or general written authorisation of the controller. In the case of general written authorisation, the processor shall inform the controller of any intended changes concerning the addition or replacement of other processors, thereby giving the controller the opportunity to object to such changes.
I am not finding any shred of sympathy for your story. To me this sounds approximately as evil as saying you are a pipeline company having to comply with all of those pesky environmental and occupational regulations by spending money on worthless safety features for people working and living on or around the pipe, and that none of this benefits your customers: the oil companies.
Yes: you built a bunch of software around a specific set of assumptions about what you were allowed to do, and in the process you took advantage of cost savings by ignoring externalities such as information privacy, and now that this law exists you will be negatively affected. However, the point of this law is to say what you were doing was NOT OK and that future companies should not do this and existing ones had better figure out a way to stop doing this.
In a perfect world, everyone would have built these features into their systems without this law, but they didn't, so now you all are going to get punished. If your business is still possible (and I have no particular care if it isn't) and any of your competitors spent the effort you consider wasted on making sure this was possible in the past, then I am not just OK with but extremely delighted that they will now have a competitive advantage over you as you scramble to retool.
You are essentially asking for sympathy here without first taking a step back and showing that any of what you were doing was not just expedient for you, and not just beneficial to you, but that it was also simultaneously what people other than you deserved: the presumption here is that you are the villain, and it is really hard to ask for sympathy from that position, and I can tell you all you are doing from my reading is digging yourself a deeper pit.
It's easy to post bold privacy advocacy from the cheap seats, but I suspect you wouldn't like a world where these new rules really were enforced to the letter. Many of the organisations whose products and services make your life better in some way would most likely cease to exist, and the economy on which your personal quality of life depends would surely take a huge hit.
GP's company isn't doing the tracking and analytics, but it is pulling data from companies that do. Therefore, regulations that affect GP's customers affect GP. This is right and proper, and I don't see what the problem is.
The problem is that it will be almost impossible to comply with the letter of the law in this case without either imposing prohibitive levels of overhead or disregarding other good practices like logging diagnostics and keeping robust backups in case things go wrong.
There's a saying about babies and bathwater, but this is more like requiring the entire house to be rebuilt in order to throw out the bathwater. Sure, you can do it, but it's much easier to say that when it's someone else's manual labour being paid for by someone else's money that will make it happen.
If the business requires this much overhead in order to internalize the data-externalities that it's generating, the business does not deserve to exist. Privacy violations are an externality, just like pollution, climate change, or deforestation. The way we deal with these externalities is through regulations and taxes that force businesses to internalize the costs they're imposing upon the rest of us. OP's business is like a chemical plant that gets its feedstock from polluting suppliers. If pollution regulations make the feedstock prohibitively expensive, then it's a signal that the existing process for making the product wasn't providing a net economic benefit to society, and that the process needs to be either reengineered or shut down. By the same token, if privacy regulations make your product unprofitable, then your business model either needs to be reengineered, or you need to shut down.
There's no rule saying that cities have to be covered in smog. Likewise there is no rule saying that online media has to be funded through advertising. In the case of pollution, externalities that appeared to be inevitable turned out to be the result of choices resulting from economic incentives. When regulation changed the incentives, the externalities were massively reduced (as evidenced by the fact that Pittsburgh today has some of the best air quality in the US). I'm confident that the same is true of online media. The only reason that it's funded by privacy-violating advertising is because privacy-violating advertising is the cheapest and easiest business model. But if you take that off the table, businesses will be forced to innovate and come up with new payment structures that better align their interests with those of their customers.
I think most people are okay with that since this at best will be a temporary inconvenience.
How about a data breach? Can they also guarantee that the data is stored safely?
Now, there's plenty of scope for debate about that, for example in what uses should be accepted as reasonable by default, what should require explicit consent, and what should be subject to someone opting out even if it's allowed by default. Much of the data protection framework in Europe, both past and near future, exists in this space.
But there also has to be a balance, because if you start assuming ill intent and trying to prevent anyone from doing anything with personal data just in case it might be leaked or abused in some hypothetical future, you stop being able to work with other people effectively at all. In this context, paranoia is no more helpful than complacency.
GDPR benefits customers. What you say is similar to justifying not providing good security, or not providing safety features in cars, on the grounds of benefiting customers in other ways instead. After all, it happens not that often.
So you do no analytics on personal info, but if someone wants their personal info deleted, you have to delete some of your data.
How does that make any sense? Doesn't that imply that you are, in fact, using their personal info? Or do you subscribe to a moral theory where e.g. browser history is not "personal info" and this is a gripe about how regulators disagree?
(I don't think it absolves them of any responsibility to implement privacy measures, but it does at least make sense.)
That companies haven't even thought of being able to delete user data makes the law even more important. It should be common business practice to delete data if a user asks, not something technically impossible.
Do you think it's reasonable to have to get individual user approval to move their data from one database vendor to another?
Then you should already be treating your data in this way. If you're not, then I'm glad you're being forced to now.
As for your other point, sure. Customers should absolutely have a say in what happens to their data.
I think a lot of EU people (I'm not one) would disagree with you here. The notions of privacy and of the goals of the criminal-justice system in several parts of Europe are radically different from the notions your "common sense" position is based on, which means that what seems "common sense" to them seems ludicrous to you, and vice-versa.
Overall, this will be yet another additional challenge for EU-based startups in comparison with their US peers and keep armies of lawyers busy.
And if you're complying because you want users in the EU, then you might as well design your system to comply for everybody. And that's a Good Thing as more privacy is better.
This is just another opportunity for easy money if you're in the states and enjoy/don't mind compliance work.
Perhaps, but it's a design problem that approximately 100% of otherwise reasonable backup systems will have, and working around it comprehensively will be extraordinarily expensive.
Do we really want to impose rules that incentivize businesses storing personal data on behalf of their customers not to back that data up properly, in order to avoid any potential liability under the GDPR? Because that's exactly what this law does, as it stands.
Yeah, if you want the data, you need to be able to handle the data in a compliant way. The other solution is to not collect the data. Keep in mind that this is targeted at user tracking. Don't expect me to be sympathetic to the troubles of backing up all of that tracking data.
But since that data will include things like routine server logs, back-ups of customer records necessary for statutory financial record-keeping purposes, and so on, it's never that easy. With such a broadly written law, you could spend a small fortune on legal advice just to find out what your real, practical obligations are to make a good faith attempt to comply.
Keep in mind that this is targeted at user tracking.
The intent might have been to go after user tracking, but unfortunately, that's not what the law they made actually says.
If you follow GDPR strictly you would need to be
able to purge the data from your backups.
Now most backups are considered immutable, so
you aren't going to do that
"Then don't do that"
Sure, but now you actually have an imposition, because this kind of thing can't be done on any commercially available backup system.
"purge the data from your backups"
Again you should already be doing this if you do business with the EU or you are breaking the law.
"Do you have the ability to retrieve everything you know about a specific user"
Again, this is already in the EU data protection law; you should already be able to do this, or you have been breaking the law.
I'm not a lawyer, so take this all with a pinch of salt; this is just stuff I need to know as an EU developer. Sure, it might be slightly more work for US tech companies, but "I can't be arsed" is not a valid reason to break the law. If you think it will cost too much then don't do it; there are plenty of EU companies that do.
This isn't actually that complicated if their software is designed from scratch with GDPR in mind. The current approach is to collect all the data you can with the intention of selling it to data brokers. GDPR discourages this. It shouldn't be that complicated or that expensive if you store the minimum amount of information you can to cut costs.
> The big tech companies already have compliance officers, but GDPR is so massively invasive to businesses that even small companies now need compliance officers.
This can be handled the way small businesses hire contract lawyers and accountants at an hourly rate. If you are small, you shouldn't do anything that might run up high fees from them.
So here is how to avoid the GDPR penalties.
1. Get compliant - it is pretty much ISO27001 and it will cost you money
2. Don't collect excessive PII data and if you do, store it securely - after all it is a very basic ask
3. Avoid collecting PII data at all cost - think of it as another form of PCI
Frankly, there is no need to panic.
Yes, users will click yes on basically anything. Facebook could put up a message that says "In order to proceed, click yes to give us half the money in your checking account" and the majority of Facebook users will still click through. Look at EU cookie warnings. Did any of those warnings noticeably impact anybody's traffic after the first week?
The actual original cookie law, that was decided on EU level, requires users actually to be able to opt out.
But it was a directive, and so countries could interpret it for local implementations.
The GDPR is a regulation, which means its text is directly law, and it also means it can be a lot stricter.
And the law required prior informed consent to cookies with opt-out not generally being considered to be valid consent.
No, they won't. When the EU imposed new consumer protection rules not so long ago, it resulted in having to put some scary-looking legalese directly on your sales funnel pages if you were supplying digital content, even if said legalese was of no practical value to anyone including your customer. That alone was enough to hurt conversions, even if you didn't require something like a token checkbox to be ticked before continuing. The GDPR compliance requirements are potentially on an entirely different scale.
The talk listed all the possible ways the law allows you to store/manipulate user data without requiring explicit consent... There are a shocking number and iirc they apply basically whenever you have a direct consumer relationship with some company.
As I see it the most relevant processing conditions for companies offering a service and storing / processing data without gaining explicit consent are likely to be
6(1)(b) - Processing is necessary for the performance of a contract with the data subject or to take steps to enter into a contract
6(1)(c) - Processing is necessary for compliance with a legal obligation
My understanding is that these are far from a blank cheque to store / manipulate arbitrary personal information. Specifically, the storage and use of data in question must be provably fundamental to either provision of the relevant service in (b), or meeting legal obligations in (c).
So yes, a company providing you a service will gain the right to store certain customer details demonstrably necessary to provide that service - say hosting your email. It won't however allow arbitrary use of such data to e.g. provide targeted advertising, since such use is not fundamentally required for performance of the service. This would require a specific opt-in (and from what I recall, a failure to opt-in cannot interfere with the provision of said service - not so clear on this however).
In other words I know that clicking on a Facebook dialog box saying "you agree to give us 50% of your income" is meaningless and so I will click on it and use the website.
GDPR prevents companies from discontinuing service for users who wish not to be tracked.
This prohibition of freely using all available data will create great arbitrage opportunity for the shadow economy, and will have a net negative effect on innovation.
I think prohibition has very bad side effects, and that MORE transparency is the way forward in politics, the economy, and also society. This includes allowing businesses to use all the data they can get their hands on. People can produce infinitely more data than any Google can realistically process.
I cannot understand why people who are otherwise for transparency and against prohibition are celebrating this as a big win against FB/AMZ/GOOG, as those players can easily shell out another $10M here and there to be compliant with this regulatory monster.
There are all kinds of 'innovations' that don't involve collecting data about me, and the companies creating these kinds of products don't have to care one lick about the GDPR.
Transparency is when powerful entities (companies, governmental bodies, elected officials…) disclose stuff about themselves. Allowing a business as big as Google or Facebook to use all their user's data as they see fit is not transparency, it's mass surveillance.
Leaked info to governments, especially in places in the world where it can mean imprisonment or death is a real issue.
Processing of PII may give people the willies, but being annoyed by targeted ads seems like #firstworldproblems compared to people who've experienced real attacks via PII leaks.
It also creates opportunity costs for improving human society. How many human diseases could be cured if "processed" PII health data, anonymized statistics or case studies, were used by researchers freely? How much additional burden does it incur if each time this data is transferred to a sub-processor everyone must re-opt-in again?
Would a world of perfect privacy be a utopia, or a nightmare?
Take a look at this link, for example: https://iconewsblog.org.uk/2017/08/25/gdpr-is-an-evolution-i...
If you don't care about ethics then expect an unethical economy.
I think this is the wrong approach, EU should try to make it easier for European businesses to compete with US-based ones.
But in the end we have another layer of bureaucracy on top of all the things a US startup has to worry about, and those mostly non-technical/non-innovative people want to be a part of the picture.
Tracking people around the internet. To follow them everywhere they go and save their personal information, their political ideology, etc. It is not innovation, that's just stepping over personal rights.
> I don't think it is about ethics, it is about control.
Yes. About giving back control of citizen privacy to the citizens themselves. It is not the government that decides who can own your data or when to delete it. It is European citizens that decide individually who should have their data and whom can not.
> EU should try to make it easier for European businesses to compete with US-based ones.
If you give away freedom for economic gain, you don't deserve either one.
What gives you the right to deny others the right to trade their personal information as they see fit?
What gives you the right to force me to trade in my personal information?
Just look at how many people were behind the facebook lawsuits in Europe to see that those are not developers.
What gives you the right to decide that corporations are free to use my personal information for whatever they please?
Well the EU seem to think that not all of this innovation is good innovation and the law specifically targets this.
I don't want US innovation. Thanks for the offer.
No. Can't. I can't do my job without Google and my social life wouldn't survive without Facebook (messenger, groups, events). Using these services is not voluntary at this point.
Of course it is. You just don't want to because it's not convenient.
If you never get on Facebook again, what? You have to write snail-mail letters or use group MMS?
How has HN become an argument for why eating food is not in fact mandatory?
Like the fucking plague. You should too.
Plenty of us do.
This also affects American companies and the degree it affects you primarily depends on how much of your business model was depending on you doing nefarious things with customer data.
Yeah, yeah, "giving up lots of customers," but if you're a small startup it might be attractive to target a smaller problem space to start with. Also, everyone does this already with, "I am older than 13" boxes since it's broadly illegal to collect childrens' data; you probably wouldn't even have to do any verification as long as you don't wilfully stick your fingers in your ears.
That works for B2C situations but it won't work if you have European companies as customers.
What's the EU going to do? They have no jurisdiction over me.
A lot of facebook's money comes from selling ads to European companies.
The big problem for the EU is that consumers actually choose the best product in a free market (the internet of free services), and they overwhelmingly decided to use the US-based options.
All the framing as "nefarious" is propaganda, consumers choose freely the option they value the most. If someone else comes along providing more value than google or facebook everyone would switch in an instant.
> All the framing as "nefarious" is propaganda, consumers choose freely the option they value the most.
Customers cannot choose freely. I'm a customer and I cannot choose certain products because they do not exist. Companies I never engage with are tracking my activities through tracking pixels and other things, and because I never establish a business relationship with them, I cannot avoid that. This bill now forces a company I might do business with not to do business with companies that do not permit me to get rid of my data.
I think this is a good development because it finally makes certain backroom deals visible.
You can easily avoid being tracked by using an adblocker. Other websites only track you because they are business partners of the tracking companies, which provide a lot of value in terms of analytics for the business - free of charge.
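For readers unfamiliar with how blocking works: an adblocker intercepts outgoing requests and drops any whose URL matches a filter list. Here is a deliberately simplified sketch of that matching step; the host name and patterns are made-up assumptions, and real filter lists (e.g. EasyList) use a much richer syntax than plain regexes.

```python
import re

# Toy blocklist. Patterns are illustrative assumptions, not real
# entries from any actual filter list.
BLOCKLIST = [
    r"https?://[^/]*tracker\.example\.com/",  # hypothetical third-party tracking host
    r"/pixel\.gif(\?|$)",                     # classic 1x1 tracking-pixel path
]

def is_blocked(url: str) -> bool:
    """Return True if the request URL matches any blocklist pattern."""
    return any(re.search(pattern, url) for pattern in BLOCKLIST)

print(is_blocked("https://tracker.example.com/pixel.gif?uid=42"))  # True
print(is_blocked("https://news.ycombinator.com/item?id=1"))        # False
```

The point of the sketch: the blocking decision is purely URL-based, which is also why it fails against first-party or fingerprinting-based tracking that doesn't need a recognizable tracker URL.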
> I think this is a good development because it finally makes certain backroom deals visible.
As a German, I'd like to have more transparency into the backroom deals that are done in Berlin and Brussels.
But this won't happen unfortunately, and they'll try to regulate IT to death to the benefit of local corporations who have failed again and again to provide consumers with products as valuable as their US counterparts'.
Except not really. Plenty of tracking happens regardless, based on fingerprinting. And even ignoring ads, there are plenty of free services that after a while turn out to be so shoddy that they lose the data I left on their services and provide no way for me to demand deletion.
I get a mail every other month that my email address and password were found in a data leak.
This regulation is a good first step of forcing companies to think about the consequences of having data.
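The fingerprinting mentioned above works roughly like this: a site reads a handful of attributes the browser exposes anyway and hashes them into a stable identifier, no cookie required. A minimal sketch, where the attribute names and values are illustrative assumptions (real fingerprinters use many more signals such as canvas rendering, fonts, and WebGL):

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Derive a stable ID from browser attributes by hashing a
    canonical (sorted) serialization of them."""
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical visitor attributes a page can read without consent UI.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/57.0",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
}

# Same attributes -> same ID on every visit, with no cookie to clear.
print(fingerprint(visitor))
```

Because the ID is recomputed from the device itself, clearing cookies or blocking tracker URLs doesn't help, which is the sense in which tracking "happens regardless."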
> As a German, I'd like to have more transparency into the backroom deals that are done in Berlin and Brussels
Same. I want a lot of transparency including from my own government. I'm however going to accept any positive development and won't demand them to be in a certain order :P
There are a lot of people who are trying to shape the EU into a better institution. It's not perfect but it's a pretty good start.
It really isn't.
The EU are global leaders in data protection, and it comes from a belief that the right to a private life is a fundamental human right.
From what's written in the article it seems it impacts anyone who is doing anything with data, which is basically every startup.
Don't collect data you don't need. Don't collect data you don't have explicit consent (or a legitimate need) for. Don't use data collected for one purpose for another purpose.
That'll get you almost all of the way to complying.
If they are the middleman with no dependence on private information, then yes, it will cost them to be compliant, but it won't break their business model.
Sounds like you missed the part where the fines are based on a percentage of your global revenue.