There have been so many articles about Facebook and the recent privacy catastrophe that I'm finding it hard to keep up. Does anybody actually know what their response will be to the GDPR? Are the privacy benefits from the GDPR going to be exclusive to EU citizens? This seems problematic.
Whatever happens, Facebook has irreparably damaged my trust in their handling of user data and I think many on here would agree. My wife and I have switched to Riot [0] for now for personal communication and are considering other decentralized alternatives.
GDPR requires you to handle personally identifiable information in a way that makes sense to the users and that is auditable. Facebook overall does that far better than anyone.
The situation with Cambridge Analytica was that they let users export the information about their friends, information that users had access to; not allowing that export at all would probably be met with legally-binding criticism. What the API allowed at the time was to conveniently do that through a third-party application that users were free to use. The main use-case that was discussed then was to empower services like Riot, to encourage competition — something that, surprisingly, Facebook was very supportive of at the time.
There was no expectation of fiduciary duties at the time, so it was not clear whether they should treat users as grown-ups and obey their request to export their social graph, or whether they had a duty to prevent that from happening. It has since become clear that people were not reading the permissions they were granting, and that more applications were abusing those permissions than trying to build an alternative to Facebook. With the benefit of being the central platform, Facebook was the first to notice and started cutting off access, to the great anger of many third-party services that had grown dependent on the feature. Some services that needed the social graph for legitimate reasons well understood by the users (e.g. Tinder) have kept their now non-public access.
As much as people want to blame the only site that they can see in the process and the one that appears the most powerful, namely Facebook, the company acted with far more awareness than the law would, even with something as progressive as GDPR in action.
Facebook not only has been very effective at showing me and letting me edit the data they have about me in https://www.facebook.com/ads/preferences/ but they are also the first service that lets me see whether my data has been sold by data brokers to (often unsuspecting) brands. Check the “With my personal info” section. Do you see companies there that you have never heard of?
Don’t be misinformed and attack Facebook for selling your data — they did not. They are the ones revealing to you that those companies purchased it from elsewhere, and allowing you to know who to pursue, who to ask to remove your info, or who to ask how they got it.
The amount of blaming the nurse for your fever on those issues is getting really concerning.
> the company acted with far more awareness than the law would
Their response to the criticism, externally, was to deflect, and internally, was to ignore it [1]. Zuckerberg's response to the Android call and text scraping endeavor was more equivocation [2]. Then he decided to pipe up again about not applying GDPR globally [3].
One has to squint to see any sense of awareness in Facebook.
> Don’t be misinformed and attack Facebook for selling your data — they did not
They sold ads to an entity that flagrantly broke their rules. Said rule breaking may have had deleterious, and possibly illegal, effects in multiple countries.
Did Facebook know what CA was up to? Probably not. Did they incentivise themselves not to? Absolutely. Complicity comes in shades of grey.
My point was that in 2014, when Facebook understood that one of its tens of thousands of partners could abuse the data being collected, they acted swiftly and asked their partners to justify their use cases; they asked the accused parties (Kogan, GSR and SCL) to delete the data and obtained legally binding documents. I’m not sure how they could legally have forced themselves into a business in a different jurisdiction when that business had not broken the law — only a platform user agreement.
Facebook did that four years before the law, namely GDPR, came into force. That’s not even accounting for the fact that outside of Europe — in the US and elsewhere — what CA did appears legal. Who the US Congress should be judging is probably whoever is in charge of writing laws.
Unless Facebook had a way to let the programmatic ad platform know, four years later (an eternity by Facebook standards), that the three entities (Kogan, GSR and SCL) the compliance team had interacted with four years earlier were related to CA — or even that CA, working for the official Romney campaign, was related to the pro-Trump SuperPACs buying ads (which would be coordination, and illegal) — blocking them without the revelations of Christopher Wylie would have been prescient.
I really don’t think you are making yourself actually smarter by judging Facebook in hindsight.
Has Facebook done shady stuff? Absolutely: the Android contacts thing is certainly representative of the “gather first, ask questions later” early attitude, which also explains why Messenger is so bloated. Anyone familiar with the company will confirm this is laziness rather than mischief. I worked there: you don’t need to make things up to find issues with Facebook.
On the particular problem in question, GSR, Facebook was outwitted and made misinformed decisions, but they would absolutely not have been helped at the time by either the public opinion of developers (we wanted more sharing) or the law (inapplicable).
The company learned to be more careful; its critics should too, because we absolutely will need those critics to be smart.
> I really don’t think you are making yourself actually smarter by judging Facebook in hindsight.
I've been blocking every Facebook domain I can find in my browser since 2010. Why? Because I knew they could connect up referer headers on Like buttons to my Facebook cookie, and create a complete profile of me and the kind of articles I read.
(I did the same thing with Google's Plus stuff, as best I could.)
Facebook didn't need to implement things that way. But they've been acting in a sinister way for years, their game plan has been clear as day. And I for one don't consent.
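The blocking approach described above is mechanically simple: a Like button embed makes the browser request a facebook.com resource, sending the embedding page's URL in the Referer header along with any Facebook cookie, so refusing requests to those domains breaks the profile-building. A minimal sketch of the blocklist check (the domain list here is hypothetical and deliberately short; real blocklists are far longer):

```python
from urllib.parse import urlparse

# Hypothetical, deliberately short blocklist for illustration only.
BLOCKED_DOMAINS = {"facebook.com", "facebook.net", "fbcdn.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

# A Like button on a news article triggers a request like the first one below;
# the Referer header on that request would reveal which article you were reading.
print(is_blocked("https://www.facebook.com/plugins/like.php"))  # True
print(is_blocked("https://example.org/article"))                # False
```

Browser extensions and /etc/hosts entries implement essentially this check, just earlier in the request pipeline.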
> not allowing that export at all would probably be met with legally-binding criticism
Yeah? Can you point me towards any law that says I have the right to export data I did not enter?
> The main use-case that was discussed then was to empower services like Riot, to encourage competition — something that, surprisingly, Facebook was very supportive of at the time.
Do you have any evidence that encouraging competing social networks was the "main use-case" for this feature? This is a pretty strong claim to make with no support.
> It has since become clear that people were not reading the permissions that they were granting and more applications were abusing them than trying to build an alternative to Facebook
The vast majority of people whose data was stolen did not grant ANY access. You appear to be deliberately misrepresenting events.
> Some services that needed the social graph for legitimate reasons well understood by the users (e.g. Tinder) have kept their now non-public access.
Access to the social graph, and full access to the activity of all your friends are not the same thing. Again you seem to be deliberately misleading people.
> Facebook not only has been very effective at showing me and letting me edit
So how can I monitor and delete the shadow profile they have built for me?
> Don’t be misinformed and attack Facebook for selling your data — they did not.
Nobody has said they sold the data. They gave it away for free.
> The amount of blaming the nurse for your fever on those issues is getting really concerning.
I didn't have a fever before I reached Facebook; something Facebook did got me sick, so why shouldn't I blame them for the fever?
> Yeah? Can you point me towards any law that says I have the right to export data I did not enter?
In the case in point, you did approve, i.e. enter, all the relations on your graph, so I’m not sure how your question is related.
To your question: GDPR allows you to access any information associated with identifiable information; there are explicitly no limits based on whether you entered it, or whether it was scraped, logged or inferred.
>> main use-case that was discussed
I have the notes from my PhD, yeah. Statistics on blog posts mainly. They are on another computer, but I can dig them up if you really think this is important. The main use case was obviously to have your friends as a feature in social games and music sharing, but that was not really discussed — unless, as with Apple Music, the intention appeared to be to build a competing graph.
I probably should have phrased it better, “the most discussed case”.
> The vast majority of people whose data was stolen did not grant ANY access.
Every one of them granted access to their likes and social graph to their friends. Their friends then overlooked how the platform granted them the ability to share that further. That’s the thing about a social graph that hardly anyone seems to notice now: it’s shared personal data. That’s why calling it “ownership” makes it confusing: information isn’t an excludable good.
> So how can I monitor and delete the shadow profile they have built for me?
I don’t think you have a shadow profile. Your friends shared information about you with Facebook, namely that they socially know the person controlling your email address. What you are asking is to be able to tell Facebook that the company should not accept, or store, the information that your friends want to connect with whoever ends up connecting using a certain address. But if you change your mind, Facebook needs to be able to change that too. Storing your intention, or controlling your ability to change it — that would be a profile, missing most features: a ghost or shadow profile, if you wish.
If you want to prevent your friends from sharing your personal information with programmatic agents, I’d love to get your take on how to do that. I use Facebook for most of my social life because I know that, as the central point of control, they enable me to prevent my friends from abusing my trust (as email would allow) and they monitor other programmatic agents.
> Nobody has said they sold the data.
This is alas a commonly repeated story (like the shadow profile). If you go through the paragraph, it should be fairly clear that I was less trying to debunk that and more trying to contrast Facebook with data brokers.
> something Facebook did got me sick
If you don’t have a Facebook account, I’m not sure how Facebook or Cambridge Analytica would have been able to hurt you personally.
> In the case in point, you did approve, i.e. enter, all the relations on your graph, so I’m not sure how your question is related. To your question: GDPR allows you to access any information associated with identifiable information; there are explicitly no limits based on whether you entered it, or whether it was scraped, logged or inferred.
I did not enter my friends' birthdays, or other "extended profile properties". In what world does GDPR legally require Facebook to allow me to export my friends' birthdays and their extended profile information?
> I probably should have phrased it better, “the most discussed case”.
Yeah, I'd totally buy that they tried to sell the feature externally as supporting social competitors. That is very distinct from what you claimed.
> Every one of them granted access to their likes and social graph to their friends.
You were distinctly talking about the permissions people were granting facebook applications and blaming the problem on people not reading those permissions carefully. Now you seem to be blaming people for using Facebook at all?
> What you are asking is for you to be able to tell Facebook that the company should not accept, or store, the information that your friends want
I'm not asking for it (though GDPR will provide that), but you were claiming it existed.
> In what world does GDPR legally require Facebook to allow me to export my friends' birthdays and their extended profile information?
It doesn’t and that’s not what the API allows today. At the time this was a feature, there were arguments that allowing that would help new competing services to emerge, but they never became law.
Because they did not, competing services now rely on a handful of people claiming they switched, rather than having more effective (or invasive) ways to remind people to switch. That means it’s extremely unlikely that any project competing with Facebook, many of which have recently felt a gust of interest, will actually take off meaningfully. So the reaction you are asking for from Facebook now, thinking you are being critical and provocative, happened six years ago and locked them in as a monopoly. I guess that’s hindsight.
> That is very distinct from what you claimed.
Yes, because what I claimed is that external activists, developers who set up OpenSocial (OAuth and OAuth 2.0, Activity Streams, and Portable Contacts) were the ones asking for it.
> Now you seem to be blaming people for using Facebook at all?
I’m not blaming anyone (except you): I’m just stating that having a social service means sharing access to personal information. Once information is shared, you have to trust not only the person the information is about but also their friends with said information. Facebook empowers that trust: you can learn how long-lost friends are doing, which is a great way to leverage that trust; or you can sell their details for a dollar, which is less great.
> At the time this was a feature, there were arguments that allowing that would help new competing services to emerge, but they never became law.
>"not allowing that export at all would probably be met with legally-binding criticism"
What legally binding criticism were you talking about? Why did you bring up the GDPR to defend this statement?
> Because they did not, competing services now rely on a handful of people claiming they switched, rather than having more effective (or invasive) ways to remind people to switch. That means it’s extremely unlikely that any project competing with Facebook, many of which have recently felt a gust of interest, will actually take off meaningfully. So the reaction you are asking for from Facebook now, thinking you are being critical and provocative, happened six years ago and locked them in as a monopoly. I guess that’s hindsight.
Oh please tell me, what reaction am I asking for? Are you saying we should have legally forced Facebook to continue letting CA strip-mine users' data? WTF are you talking about?
> The main use-case that was discussed then was to empower services like Riot, to encourage competition — something that, surprisingly, Facebook was very supportive of at the time.
> Yes, because what I claimed is that external activists, developers who set up OpenSocial (OAuth and OAuth 2.0, Activity Streams, and Portable Contacts) were the ones asking for it.
No, you never claimed that at all. You claimed that facebook was discussing this API mainly as a means of fostering competition.
> Once information is shared, you have to trust people who are not the person the information is about.
And you have to trust the platform to respect your privacy and not give any random quiz app full access. Obviously Facebook is not trustworthy and should not be given this information.
> Facebook empowers that trust: you can learn about how long lost friends are doing
Facebook doesn't empower trust at all: it abuses it to make money off of our information.
GDPR also governs consent for holding personal data. Facebook is well known for using fishnet-trawler techniques to gather whatever personal data they can, with little regard for consent. Wouldn't be surprised if everything is passed on to Palantir anyway.
The fact that several aspects of their business model will have to change to accommodate GDPR should be telling.
I was; I clearly mention it when relevant. I left to work on Deliveroo; I’m now at Booking.
I also wrote a PhD on how to apply monopoly enforcement to the company, and I published my critical understanding of the company’s position for more than ten years prior to joining it, at academic conferences, on my blogs, on Quora, and occasionally here. I was the first person to write scary things about Facebook, probably in 2005.
As I wrote repeatedly, the company has a ton of issues and is generally extremely open about that (that issues exist, less so what they are specifically). The trawler approach to data gathering was one of them. Thank you for pointing that out: focusing on real problems is important. Facebook has been fixing aspects of that repeatedly, but it is hard: some data gathering or sharing is actually relevant and expected, so you can’t cut things without understanding what you would break -- a clear strategy change in the last three years.
Why not do more, faster? Because the company is already trying to respond as fast as it can. Employees and ex-employees are indeed more tolerant of this because we know personally the insane amount of work there is to do; prioritisation, i.e. the arbitrage between your most and second-most important task, what you do now versus later, is insane. I left the company to work in a more balanced environment that happened to be the fastest-growing start-up ever, where I got woken up every night at 4am because scale had killed our database again, migrating to a more scalable technology every four months.
Facebook had to reconsider offering services like targeting based on what Experian knows about their users, who I believe are almost exclusively American residents. I don’t think that has to do with GDPR, because it’s not on the same continent, but I’m not privy to the details. EU citizens living in the US and Americans who moved to the EU are both large enough demographics to warrant caution.
They were not hiding the feature because there was an expectation from Americans that their credit card companies sold economic data; Facebook just made that integration easier -- I’m assuming as a reaction to how common a source of Custom Audience that was. If you didn’t like it before, you could hide it on https://www.facebook.com/ads/preferences/ with a click.
After the CA scandal, Americans became more sensitive to those approaches and Facebook responded almost instantly -- so fast that advertisers are a little confused. That pro-user balance is also something that anyone familiar with the company -- investors, board members, but also employees -- can confirm: there is a strict hierarchy when the objectives of the four “orgs” disagree. Security is always right; User Engagement takes precedence over Advertising.
Every business in Europe has to adapt to GDPR, mainly its processes, but advertising-based companies massively so; one would be in denial to think that’s not the case -- the text is still widely open to interpretation. The fact that Facebook had the least to change is indeed telling. What you notice is the scale of the company, the prejudice and the attention. What you are missing is in front of you: Facebook is very willing to admit its wrongs and fix them.
I’m sorry: what problem do you think Facebook is ignoring? I have heard interesting arguments elsewhere — but not on the company ignoring anything.
All I’ve heard in this thread is people judging the company in hindsight, and based on a rather convoluted speculation (that happens to be false: Cambridge Analytica used credit card data and voter records, not Facebook data, to assess psychological profiles).
Facebook has made difficult decisions with partial information that ended up proving to be suboptimal — but I don’t see when they have ignored either symptoms or criticism.
> I’m sorry: what problem do you think Facebook is ignoring? I have heard interesting arguments elsewhere — but not on the company ignoring anything.
I think they've been willfully ignoring the likelihood that this sort of data exfiltration has been happening on a very widespread level for years. Many of those 800,000 "quiz" apps are likely designed for this purpose, and their proliferation should've set off warning bells inside Facebook - it did outside.
Zuckerberg's being out there acting like this was all an unforeseeable, shocking, limited-scope issue is disturbing to me.
Staying with this analogy: the nurse also has a near-monopoly on providing targeted care, and it sure appears that (s)he knowingly or negligently assisted the spread of the fever-causing pathogen. Then, leveraging her unique familiarity with the illness, (s)he made ~$500B selling a targeted care plan.
No. It only covers residents ("data subjects") who are inside the EU or dealing with companies in the EU. People who aren't in the EU (including EU citizens currently in foreign countries) and are dealing with companies outside the EU aren't covered.
Ok, what if an EU resident goes on a holiday in the US? Will all their data now be open to malicious treatment for the duration of the trip? Or only the data they enter/view during the trip?
This is a good answer. The question has been raised and answered (by tzs and others) on HN so often recently. It's interesting to watch how the answers get streamlined to the essential information over time.
That pseudocode is inaccurate - if a company (including its parent's subsidiaries) is not in the EU and does not provide services to companies which operate in the EU, then the GDPR has no inherent jurisdiction.
From my understanding it is correct. It applies to companies outside the EU if they collect data about people inside the EU. Whether this is enforceable is another question.
> This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to:
> (a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
> (b) the monitoring of their behaviour as far as their behaviour takes place within the Union.
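Read literally, the quoted scope rules reduce to a small decision function. A rough sketch under stated assumptions (this simplifies Article 3, ignores enforceability entirely, and is not legal advice; all parameter names are mine):

```python
def gdpr_applies(controller_established_in_eu: bool,
                 subject_in_eu: bool,
                 offers_goods_or_services_in_eu: bool,
                 monitors_behaviour_in_eu: bool) -> bool:
    """Rough sketch of GDPR territorial scope (Article 3)."""
    # Art. 3(1): processing in the context of an EU establishment is covered
    # regardless of where the data subject is.
    if controller_established_in_eu:
        return True
    # Art. 3(2): a non-EU controller is covered only for data subjects who are
    # in the Union, and only via the two quoted activities.
    return subject_in_eu and (offers_goods_or_services_in_eu
                              or monitors_behaviour_in_eu)

# An EU resident on holiday in the US, dealing with a purely US company that
# neither targets nor monitors people in the Union:
print(gdpr_applies(False, False, False, False))  # False
# The same person back home, monitored by that same US company:
print(gdpr_applies(False, True, False, True))    # True
```

On this reading, what matters is where the data subject and the processing activity are, not the subject's citizenship.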
I think this is a bit of an oversimplification. How do Facebook's EU subsidiaries fit into this? Can Facebook US simply divest themselves of responsibility in this case?
“offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
the monitoring of their behaviour as far as their behaviour takes place within the Union.”
So Facebook US cannot divest itself as long as it serves customers in the EU or exchanges data about data subjects in the EU with its EU subsidiary.
Is anyone talking about the harmful effects on startup companies that may want to create new social platforms to compete against the incumbent players? All this talk about regulating Facebook, Twitter, etc. is actually great for those companies because they can afford compliance. But it raises the bar to entry so high that new companies wouldn't be able to compete: with limited resources they wouldn't be able to focus on the critical period of acquiring users, and instead would be forced into building compliance features.
I firmly believe that the majority of people still don't care about their privacy in the first place or they wouldn't use such platforms. IMO this is government overreach and anti-competitive.
The GDPR makes some things easier for start-ups. Users now have a right to their personal data in a "commonly used" digital format. Now a start-up can have an "Import your Facebook data" feature.
Currently a privacy-conscious start-up is competing with those that aren't, making it harder. But with this law, you won't have as many shady companies like Facebook.
Storing less private data makes you less liable to get hacked and get bad PR.
Would this data file include the user's friend connections? In other words, if I exported my data, and my "Facebook friend" also exported their data, would it be possible to determine from the two files that the two users are friends?
I don't think GDPR compliance is as onerous as you seem to think it is, but even if it were, would it matter? We don't give special provisions to start ups writing safety critical code or developing new health care technology, why would this be any different?
There's nothing inherently wrong with a high bar to entry if that bar exists for a very good reason. If it were hard to break into this space due to regulation (I don't believe it is or will be) then yes, competition will be less, but the alternative is worse.
> We don't give special provisions to start ups writing safety critical code or developing new health care technology, why would this be any different?
Safety critical code and health care technology are life and death situations.
It's also important to understand that the regulations in those sectors have destroyed (or deterred) an incredibly large number of startups, and the net lives saved as a result is quite likely negative because the value of life-saving technological advances generally exceeds the cost of mistakes in developing them.
People have severe emotional reactions to this. A doctor's experiment may kill fifty already-terminal patients but uncover a cure that goes on to save five million. But the families of the fifty dead patients can blame a specific person for their deaths while the five million aren't even aware what they lost, so the regulations are biased against progress.
This is obviously not a good template for making decisions in other industries where emotions don't run so high.
> People's personal info can be a matter of life or death too.
That's the point. If we pass regulations that result in continued and increased centralization because only large organizations can afford compliance, that is no advantage to the people whose lives are at risk.
If you're a homosexual in Russia, a democracy activist in China, an advocate for women's education in parts of the Middle East, or a Jew in WWII Germany, "privacy laws" can't save you. A company's fear of the state can't protect anyone from a corrupt state. But structural and technological privacy protections might. Which are the things ham-fisted regulations inhibit.
> I don't think GDPR compliance is as onerous as you seem to think it is, but even if it were, would it matter?
The answer is yes, it is onerous. And yes, it does matter.
Regulations always start as an idea that sounds good. The companies most impacted are then motivated to gain control of the regulations. Once they do, then they happily add on to regulations because that becomes a barrier to entry for new competitors, but do so in a way that ceases to be a problem for themselves. In the end the regulatory framework stops working and we have the very disaster that we were trying to block.
This is called regulatory capture. It is very, very common.
In the case of Facebook, here is the problem. The regulators are controlled by politicians who wish to remain in power. If Facebook breaks the rules in favor of those politicians, it becomes easier for the politicians to remain in power. The incentive is therefore for the politicians to become complicit in letting Facebook break the rules. However no new startup can provide the politicians with an incentive that matters - only Facebook, Google, and other similarly large players can bribe politicians in back room deals.
The payback for Facebook is that they get to solve their biggest existential crisis. The barriers to entry for a new social network just aren't as big as they seem. They can keep milking more from their users and buying up the Instagrams for only so long until something like Snapchat or Discord or someone not yet thought of succeeds. If Facebook is to avoid being replaced in the way that they replaced MySpace, and MySpace replaced Friendster, they need a new barrier to entry.
Regulation provides that for them. In public they will get chastised. You'll get speeches that you love. In private, they will happily become part of an effective surveillance state for those already in power in return for a blind eye being turned to their ongoing transgressions.
The result? The regulation that you are cheering won't accomplish the causes that you want. And if history is a guide, the very politicians whose speeches are the most to your taste will tend to be the ones who behind closed doors are selling you out. With their public speeches being nothing more than bargaining chips for private deals.
And for the record, I grew up in Canada. I am not opposed to the idea of regulation in principle. However every approach has failure modes. And regulation works a lot better in practice when you exercise skepticism about the actual aim as opposed to the stated one.
If you wish to build your skills at skepticism, I highly recommend watching the series Yes, Minister. It is from the UK in the 1980s. However the lessons about how bureaucrats manage to get their way while pretending to listen to politicians are timeless. It also came out much later that it is less fiction than it first appears - most episodes were based on actual incidents. And some were downright prophetic - compare https://www.youtube.com/watch?v=37iHSwA1SwE with actual British policy towards the EU since.
I have no reason to believe that the picture painted then of the bureaucracy in Whitehall is significantly better than the bureaucracy that has sprung up in the EU.
Based on the first link, that letter scares me a lot. I have a feeling that this level of regulation will destroy any social startup. You'd need a compliance department larger than engineering just to remain legal. This is clearly a win for Facebook.
Or you just build your permissions and opt-in platform as a base for the social app.
We wouldn't let a self driving startup ignore traffic laws because it's "too hard". Likewise we shouldn't let a social startup ignore privacy laws and auditing.
At least on the surface it doesn't seem that bad. You just have an opt-in data collection with (type-of-data, purpose-of-data) tuples and let users actually delete data on request.
Allow Socially to collect the following information for the purposes of providing you service:
- Minimal Account Information: email address and password
To prevent spam if you don't provide additional profile information you will be required to verify your account with a valid government ID. Only the expiration date will be stored.
- Information posted to your timeline.
Without this you will be unable to post updates.
- Messages sent to others.
Without this you will be unable to send messages.
- Profile Information: Name, Address ...
Allow Socially to collect the following information for the purposes of protecting your account:
- Network Addresses used to access the service.
- Login location
- Login times
After a short time using the service if we see a login that doesn't match the information on record we will notify the primary email for approval.
- Links to other sites you click.
We will check links you click against our list of known phishing sites and scams and warn you before redirecting you.
Allow Socially to collect the following information for running internal studies and improving our service.
- Features you use.
- Posts you read.
- Links to other sites you click.
Allow Socially to collect the following information to help make ads more relevant to you:
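The dialog sketched above boils down to a set of (type-of-data, purpose-of-data) pairs that the user has opted into, checked before any processing. A minimal sketch of such a consent ledger, with all names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Permission:
    data_type: str  # e.g. "login_location", "posts_read"
    purpose: str    # e.g. "account_security", "internal_studies", "ads"

@dataclass
class ConsentLedger:
    granted: set = field(default_factory=set)

    def grant(self, data_type: str, purpose: str) -> None:
        self.granted.add(Permission(data_type, purpose))

    def revoke(self, data_type: str, purpose: str) -> None:
        self.granted.discard(Permission(data_type, purpose))

    def allows(self, data_type: str, purpose: str) -> bool:
        # Processing is allowed only for the exact pair the user opted into:
        # consenting to login-location data for security does not consent to
        # the same data being used for ads.
        return Permission(data_type, purpose) in self.granted

ledger = ConsentLedger()
ledger.grant("login_location", "account_security")
print(ledger.allows("login_location", "account_security"))  # True
print(ledger.allows("login_location", "ads"))               # False
```

The key design choice is that purpose is part of the key, not an attribute of the data, which is what makes "delete data on request" and purpose-limited processing auditable.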
I would strongly recommend reading what Pagefair has been putting out. They have been one of the few sources I've found that takes GDPR literally. It isn't even clear what Google's level of compliance will be - https://pagefair.com/blog/2018/googles-nonpersonal-ads/
There are a lot of extremely serious questions that arise regarding network security, anti-fraud, and anti-abuse measures. Just looking at basic bot detection measures, all of the sophisticated methods are now illegal. It certainly requires a major re-think of how websites serve content as well as the sustainability of advertising as a revenue channel. I can't even wrap my head around how someone would run a GDPR-compliant dating website/app.
If you think Pagefair's interpretations of the GDPR are correct then Google and others are calling the EU's bluff. They are implementing part of the GDPR strictly but the parts which invalidate their business models are being interpreted more liberally or ignored altogether.
I'm not saying that the GDPR is a good idea, bad idea, morally right or wrong. Rather, a lot of things we have come to view as a given -- such as how we detect bots, fraud, and abuse -- are no longer valid. Infrastructure, both technical and business, will need to be re-designed either to comply with the GDPR or evade it.
I kind of feel like every question in the first link is entirely reasonable and people _should_ be able to get those answers, though. Nothing in there is onerous if you're following good practices anyway.
I really feel like the answers to all of those questions are going to be basically identical between people, and all you really need to do is be able to export whatever data you have on somebody quickly in order to be able to respond to that email in under a quarter of an hour.
I guess it could make a decent DoS tactic against a small company, but lots of other things would too.
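The "export whatever data you have on somebody quickly" part can be automated from day one. A minimal sketch, where the store names and lookup callables are assumptions standing in for real databases and SaaS tools:

```python
import json

def sar_export(user_id, stores):
    """Aggregate everything held on a user into one reviewable document.

    `stores` maps a store name (e.g. "profile_db") to a callable that
    returns that store's records for `user_id`.
    """
    report = {name: fetch(user_id) for name, fetch in stores.items()}
    return json.dumps({"user_id": user_id, "data": report},
                      indent=2, default=str)

# Toy in-memory stores:
stores = {
    "profile_db": lambda uid: {"email": "a@example.com"},
    "audit_log": lambda uid: [{"event": "login", "ip": "203.0.113.7"}],
}
report = json.loads(sar_export("user-42", stores))
assert set(report["data"]) == {"profile_db", "audit_log"}
```

Once each new data store registers a lookup function here, responding to a subject access request is one function call rather than a manual hunt.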
> respond to that email in under a quarter of an hour.
Let's take an app like Instagram as an example. Instagram had over 1 million users within two months and 10 million within a year, and no profits. You're running on a shoestring trying to keep servers online without any serious budget to speak of. It's probably you and a few friends/associates working closely together.
All of a sudden with GDPR, you have to pay a lawyer to help you understand what you need to do to comply with the regulations. You also have to spend engineering time developing solutions to enable the queries in that letter, enable purging records from long-term backups, etc. And people have to spend the 15 minutes responding to each request.
Now, let's say each request does only take 15 minutes like you suggest (which I find highly unlikely). If a small fraction like 0.5% of your customer base sends such a letter, then that's 50,000 letters. At 15 minutes each, that's 12,500 hours which is over 6 full-time employees. Many small businesses don't even have 6 employees to conduct the entirety of their business right now!
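The arithmetic above checks out; a quick sanity check in code (the ~2,000-hour work year is an assumption):

```python
users = 10_000_000
request_rate = 0.005        # 0.5% of the user base per year
minutes_each = 15
work_year_hours = 2_000     # assumed full-time work year

letters = users * request_rate
hours = letters * minutes_each / 60
ftes = hours / work_year_hours

assert letters == 50_000
assert hours == 12_500
assert ftes == 6.25         # "over 6 full-time employees"
```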
If the concern is that business owners can no longer cut costs by being lax with people's data... isn't that the whole point of the GDPR? That we've collectively decided that letting people cut those costs is having too many negative consequences too often and that we need to stop?
Thanks. Wow, responding to a letter like your first link could significantly bog down resources for a young company. You can imagine: if you launched and saw even moderate early user growth, but then started receiving such letters, your productivity could go down the tubes.
I disagree. Here's an outline of what a response to the letter in that first link should look like for a small, well-meaning* startup:
The letter is nicely formatted into 9 bullets. All are optional for small companies, and all can be automated - the answer should be the same for all users.
1. This is a "yes" or "no" question. If the answer is "no", you can ignore the rest of the letter. If yes, the answer is the same for all users.
2. Simple, short, same for all users.
3. You can avoid doing this if you want. If you are doing it, you're signing up for the additional burden of informing your users. Consider this when making the decision. This is the only bullet in the list that is in any way burdensome, as you will need to update this text in your automated response whenever you take on 3rd parties (if at all).
4. Simple, short, same for all users.
5. and 6. are "if" conditionals that you shouldn't be doing. The answer should be "No".
7. Amounts to "has my data been hacked". If yes, that's unfortunate, but obviously you have a moral obligation to respond here regardless. Presuming you're hacked once, you provide full details once and send automatically to any users who ask.
8. and 9. are out of place. GDPR doesn't require you to respond to these questions within this quoted 1 month time limit (you do have to have what's detailed within them in place to comply with GDPR but that's tangential to info requests). These seem to have been put into this blog post as extra scaremongering.
* by "well-meaning" I basically mean "not selling all of your users personal data to myriad nefarious 3rd-parties"
> 3. You can avoid doing this if you want. If you are doing it, you're signing up for the additional burden of informing your users. Consider this when making the decision. This is the only bullet in the list that is in any way burdensome, as you will need to update this text in your automated response whenever you take on 3rd parties (if at all).
Pretty much everyone is going to. Google Analytics, Zendesk, Salesforce, and more all qualify. Hell, even AWS qualifies...
> 5. and 6. are "if" conditionals that you shouldn't be doing. The answer should be "No".
Why do you say that? Given that we're discussing technical companies, I fully expect that automated decisions will be made.
> 7. Amounts to "has my data been hacked". If yes, that's unfortunate, but obviously you have a moral obligation to respond here regardless. Presuming you're hacked once, you provide full details once and send automatically to any users who ask.
And "detail all your security measures". Which, for a small company that doesn't have an InfoSec group, probably means next to nothing. An admission that feels a lot like liability...
> 8. and 9. are out of place. GDPR doesn't require you to respond to these questions within this quoted 1 month time limit (you do have to have what's detailed within them in place to comply with GDPR but that's tangential to info requests). These seem to have been put into this blog post as extra scaremongering.
It's the sort of thing an angry consumer might do, and most startup founders subject to GDPR are not deeply knowledgeable about it.
> Pretty much everyone is going to [...] even AWS qualifies...
I worded this badly. This is optional on a case by case basis, i.e. there's a cost-benefit to using each 3rd-party, and this burden is worth considering for each. It's still not a massively onerous burden tbh if you do use a lot of 3rd parties.
> And "detail all your security measures". Which, for a small company that doesn't have an InfoSec group, probably means next to nothing. An admission that feels a lot like liability...
I'm sorry but if you're really defending companies with no competent security measures in place, regardless of size, I think you're in the wrong forum here. If you are a commercial entity of any size there should be moral hazard in ignoring security of your users' personal data.
> It's the sort of thing an angry consumer might do, and most startup founders subject to GDPR are not deeply knowledgeable about it.
Exactly. And unlikely to be more knowledgeable if they're reading misleading scaremongering articles like this on LinkedIn!
> I worded this badly. This is optional on a case by case basis, i.e. there's a cost-benefit to using each 3rd-party, and this burden is worth considering for each. It's still not a massively onerous burden tbh if you do use a lot of 3rd parties.
I'm up close and personal with a vendor assurance process right now. It's often a non-trivial amount of time for any given vendor.
> I'm sorry but if you're really defending companies with no competent security measures in place, regardless of size, I think you're in the wrong forum here. If you are a commercial entity of any size there should be moral hazard in ignoring security of your users' personal data.
I'm sorry, I worded this badly. I'm saying that small startups have a tendency to prioritize getting a product working and seeing if it's worth investing heavily in before standing up a strong information security unit. You're absolutely, completely, 100% correct that there should be incentives to be very careful with user data.
I think it's possible to see where some people might find the level of expense and expertise required to be appropriately careful somewhat scary. I can even see where some people might decide to not create a social media startup to challenge Facebook because of this fear.
Honestly, those questions should be pretty easy to answer especially if your company is small. If as a business you can’t answer these basic questions about the data you want to collect from me, I’m going to be hesitant to share it.
People keep sharing that “nightmare letter” link but won’t point out which question gives them nightmares and why.
A couple of things stand out to me as potentially scary. First, the hard one-month timeline. For a brand new baby startup, a month is a lot of time and any distraction potentially killer.
Second, a list of everything across all types of storage in any and all systems stands out. Even large companies often lack the ability to search ZenDesk, Salesforce, email, AWS S3, and Slack logs all at once.
Third, there's a clause that asks quite specifically for a thorough list of any and all potential future plans. That's a lot, especially given how startups are subject to pivoting.
Fourth, the section about third parties is essentially asking for the outcome of a vendor assurance process. A lot of small companies can't pass a reasonable vendor assurance process. They often can't afford the time and assurance specialists to manage one for their vendors. Even large companies often have trouble maintaining the level of control required for thorough vendor assurance. The bit about legal reasoning implies the involvement of a lawyer as well.
Fifth, there's a strong implication that no matter what you might say in response, it's not going to be good enough. There's always something that can be pointed to as not enough.
With all of the above combined, I can see where some might view GDPR as intimidating and favoring big companies over small ones through sheer costs.
> People keep sharing that “nightmare letter” link but won’t point out which question gives them nightmares and why.
There is a standard way in which "reasonable" regulations kill small companies. It works like this. You impose some small burden, something like an hour of labor a week. That won't destroy a small company, but that is not the only rule in the world. That rule takes an hour, another rule an hour and a half, a third rule a half hour. By the 60th rule, a two-person company is sunk. Even if every individual rule is nominally reasonable, the combination is hopelessly destructive.
The problem with tech companies is the rules don't just add together, they get multiplied by the user base, and it's entirely common for a very small company to have ten million users.
So you take a letter like that. The first time you get one it will take you a week to figure it out, but over time you get the response time down to an hour. Only with 10 million users, if 0.1% of the users make such a request per year, you're looking at 27 of those every day. That's more than three full time employees doing nothing but that. For this one "reasonable" regulation.
I'll point out which questions give me nightmares, as the founder of an EU startup:
- the requirement to have a DPO. Based on the requirements for the DPO, no one in the company can fill the role (conflict of interest), so we must hire an employee or consultant (expensive either way for a small startup)
- one month to respond. That's a lot of information to collect the first time, and I might have other fires to put out (or I have to be proactive and have a prepared response, which has to take the place of something else important to do)
- the sheer amount of information to collect. In the age of plug-and-play solutions, that's a LOT of things to audit (Mailchimp, AWS, GA, Heroku, various Wordpress plugins, a logging solution whose name I don't even remember, just to name a few)
- tracking every single piece of PI about a user. If your systems are not built for this, it's going to be lengthy. If they were created before the GDPR, they probably are not.
- tracking down the usage of that PI may be complicated depending on the expected scope and usage (fortunately for me, there are no ads and no data reselling, so really only the scope is the problem)
- some of the processes asked for imply that you already have certain structures and procedures in place. This is not feasible for a small startup.
It boils down to: it takes time, and time is something I'd rather use for something else, and it also requires doing things with huge fixed costs that a company this small can't absorb (at least not until there is a ready-made solution).
I define a small startup as one with fewer than 20 employees that might have received Seed funding but not more. These points might not all be applicable to a new startup created with GDPR in mind.
Simply build a secure and private platform and don't be reckless with user data. Health startups already deal with this through HIPAA and it isn't really a big deal, just common-sense practices for security and privacy.
I'm going to be honest: I have no clue about social. I operate in the socio-medical domain; we don't share by default.
We are mostly fine with the spirit of the GDPR, it's the work we have to do to follow it to the letter which is a problem (and the lack of process internally).
I equate the events right now surrounding Facebook to Upton Sinclair's book "The Jungle", and GDPR being the privacy-analogue to the creation of the FDA.
The FDA makes the medical field hard to break into for startups, but for good reason. New medical devices need to go through rigorous verification and validation to show that they work as intended. If a company making pacemakers had the same "move fast and break things" attitude as most of SV seems to have, I might never trust medical companies again. As a consumer, I'm extremely content with the quality of pharmaceuticals and devices, and I wish I could trust Facebook or Google as much as I trust Medtronic or Philips Healthcare.
As someone who works in a startup in the healthcare space, I will point out that nobody lets health startups off the hook for HIPAA. You don’t get to be sloppy with people’s protected health information just because it makes your life easier.
I'm sorry but I don't see a comparison between what people *willingly* post to public online forums and their personal health ledger... it's not apples to apples.
The content of the data is not the issue. The point is that society has decided to pass a law stating that certain data needs to be treated a certain way or there are serious penalties because of past abuses. We in the US take for granted that this law exists, but there was much complaining in the medical establishment about how burdensome it is to them conducting their work because of all the extra protections it required. This was especially true in biomedical research where patient data was pretty carelessly treated in many cases. Not because the people involved were bad people, but because society as a whole had not thought through the consequences of walking around with an unencrypted list of cancer patients on a floppy disk.
I don't see a comparison between what people *willingly* post to public online forums and their personal health ledger
There is, or at least there was a Facebook project for exactly that [1]
The thing is that none of those "anonymized" subjects would have ever been asked for consent if they really knew about the consequences.
Such behavior has really, really bad real-world implications: when I had a knee operation, one of the questions on the questionnaire you need to fill out is whether you agree that your data can be shared in anonymous form for research. At that point (and given that this was a fairly benign condition) I didn't see a problem with consenting.
After that revelation about what Facebook was up to my answer in the future is a clear NO!
Facebook handling medical data. What could ever go wrong with that?
>> with limited resources they wouldn't be able to focus on the critical period of acquiring users and instead would be forced into building compliance features
If you cannot comply with privacy rules you should not do social media, whatever your growth phase.
It depends on what part of the GDPR you're against. I'm generally in favor of a lot of the GDPR's goals, but the execution is pretty clumsy and a few of the provisions are at best useless and impose unnecessary costs.
I wonder which ones specifically? I am reading into it because I am onto implementing it in our small company.
Everything hinges on this citation from the GDPR: "Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes ... implement appropriate technical and organisational measures ..."
1. Most things fall into this category: Lack of clarity in the law (and a remaining lack of clarity from WP29 and the Commission) about dozens of issues. The Privacy Professional community has been proactive about trying to get info on a lot of these items, but there's just not much coming, and in a few cases what has come out has either departed from what seemed like more obvious meanings or in some cases has muddied the waters further.
2. The essential ban on offering services, downloads, etc. in exchange for consent to use data reduces consumer autonomy and will decrease the availability of free resources.
3. It will be extremely easy to use SARs maliciously, and the law includes NO check whatsoever on this. All it would take to cripple many SMBs is for some jerk to spin up a website that provides a nasty SAR template (that the users don't even realize is such a burden) that random people on the Internet can auto-send to every business they've ever used under some innocuous-sounding reason like "See what information businesses have on you!" 99% aren't using data against subjects' interests, so the net effect of this alone (in the way it is designed) is potentially-immense costs for small benefits.
As a recommendation, the $250 my company spent on buying me a membership to the IAPP has been one of the highest ROI decisions in recent memory. It has saved me a ton of time and effort (and the company quite a bit of money) from the member resources available, and the members listserv is essentially free light consulting from people who have already dug into everything.
The more noise I hear those who work in "ad tech" and other fields that have been marching towards the destruction of privacy making about GDPR, the more confident I become that it might actually help.
You're right! All the stuff about right to be forgotten, right to view, right to make corrections, and so on should be very straightforward and easy for any company of any size interested in being honest. Especially for new players, who don't have ugly legacy systems to wrangle.
Yet... I've read through GDPR. All ninety-nine articles are chock full of "reasonable measures" and similar verbiage. Unless you can afford a compliance specialist - which isn't automatic for a new player - it's intimidating as all hell. What are reasonable security measures, as seen by a careerist somewhere in Brussels? The text is silent on what exactly that means.
It's possible that respecting users and having good intentions may not be enough...
I think "reasonable measures" is pretty typical language when talking about compliance. I don't know GDPR regulation very well but I know FDA regulation reasonably well, and I imagine compliance will be similar, and likely easier for the new GDPR.
Most important is to document everything. Have a design history file that you can show in case you get audited. When you design your software, save your designs in the DHF. When you update or make changes to the design, put that in your DHF too.
For each GDPR article where it makes sense, have it written somewhere how you are compliant with what it asks for (you probably don't need to demonstrate compliance with Article 4 [1], but you should have it written somewhere how you are compliant with all the points in Article 5 [2]). When it says "Personal data shall be: (b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes", you should be able to produce a document that lists the various kinds of personal data collected and how each is used; e.g. "Username: The username serves to associate a person's login id to their profile. [... other details] Profile Picture: The profile picture serves to display an image of the user. [... other details]."
When it tells you to have reasonable security measures, then document what your security measures are. "This data is encrypted" or "This data is saved on an external server disconnected from the internet and only accessible by someone with a dongle". If you're still worried that your user data could be insecure, then it might be worth hiring a security specialist to check it out.
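The documentation discipline described above can live in code next to the system it describes. A sketch of such a data inventory, with hypothetical entries; a real one would cover every field you actually collect:

```python
# Hypothetical data inventory: every kind of personal data collected,
# mapped to its declared purpose and security measure, so the Article 5
# documentation and automated info-request answers share one source of truth.
DATA_INVENTORY = {
    "username": {
        "purpose": "Associates a person's login id with their profile.",
        "security": "Stored in the primary database; access-controlled.",
    },
    "profile_picture": {
        "purpose": "Displays an image of the user on their profile.",
        "security": "Encrypted at rest in object storage.",
    },
}

def inventory_report():
    """Render the inventory as text for an audit or a user request."""
    return "\n".join(
        f"{field}: {meta['purpose']} ({meta['security']})"
        for field, meta in sorted(DATA_INVENTORY.items())
    )

assert "username" in inventory_report()
```

Keeping the inventory in the codebase means it can be reviewed in the same pull request that adds a new piece of collected data.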
With all that said, my point was that it's not obvious what is and isn't reasonable. Hiring a security specialist won't necessarily help you understand what bureaucrats will or won't deem reasonable, especially when there's no history to provide context.
You're right about that. I guess for a rule of thumb, imagine what the reaction on HN will be if your system gets hacked, and assume bureaucrats will say the same thing. Will they be criticizing "Everyones' credit card information was saved in plain text" or will it be "Even though this disgruntled employee uploaded everyone's usernames to [untrustworthysite], most of that information is still encrypted and the company made a public announcement about it hours later".
It's your best guess what is and isn't reasonable. As long as you've documented what you did and why you did it, then you've satisfied that requirement. If an auditor finds what you've done to be insufficient, you'll probably get a warning but you'll still be considered compliant for having done something.
I know it's not a satisfying answer and I'm sorry that I don't have a better one, but complying with regulation is not as definite a "yes/no" as programming.
My knowledge is with the FDA so I'll give an example I'm familiar with. I worked with CT scanners and we needed to do verification/validation. The FDA requirement was to the effect of "must define reasonable requirements for the device" and "must set up testing procedures that reasonably demonstrate that a device can meet its requirements" and so the team I worked with set requirements like "radiation dose: <20rad when run on [x] setting" and then tested it at [x] setting 5-20 times, then documented "passes radiation dose test with 99% certainty, which exceeds our cutoff for passing which is 95%".
CT is an old industry so there was a bit more to it than that, but we were still following requirements that we had written, and testing them in with procedures we had made. The point is the requirements even in the health industry can be vague, so you really just have to do your best to come up with something reasonable.
And because it's vague, that's also why it's so important to document everything.
What about deleting data in backups for an EU resident who submitted a request for data deletion? If a company is using mysqldump or equivalent it seems difficult to just drop certain records from those .sql files.
Have a reasonable retention policy on these backups. Backups are a "legitimate business interest" and you don't need to purge "right to be forgotten" requests from your backups if you stick to a reasonable and publicly-documented retention policy. This is advice that I've received from counsel. However, I am not a lawyer, and this in no way should be taken as legal advice.
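One way to operationalise that advice in code; the 35-day window is an assumption, substitute whatever your documented retention policy says:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=35)   # assumed documented retention window

def backups_to_purge(backup_dates, today):
    """Backups older than the documented retention window must be deleted."""
    return [d for d in backup_dates if today - d > RETENTION]

def erasure_complete(request_date, today):
    """True once the retention window has elapsed since the erasure
    request, i.e. no surviving backup can still contain the purged rows."""
    return today - request_date > RETENTION

assert erasure_complete(date(2018, 4, 1), date(2018, 5, 10))       # 39 days
assert not erasure_complete(date(2018, 4, 20), date(2018, 5, 10))  # 20 days
```

The point is that you never edit `.sql` dump files; you delete the live record, let old backups age out on schedule, and can prove from the dates when an erasure became complete.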
Of course. For us, it came down to cost. EU customers make up <1% of our revenue. Implementing this non-trivial feature didn't make sense for us financially. So we decided to drop all EU customers entirely.
But how can you have enough user-growth to get vast amount of money from investor if you can't play fast and loose with the data you collect on your users?
Well the world has changed clearly. When facebook/myspace started out, would they have been able to achieve success if they were bogged down with data privacy compliance?
A lot of people don’t care about fire safety either (until their house is burning down) which is why we have regulations, building codes, mandatory sprinklers in offices, etc.
I am starting to look at privacy like it should be treated as a public safety concern, since it’s invisible to people until it’s not.
can social media kill you though? I mean all this talk of regulating social networks is under the assumption that it's something you need to have. I would argue that safe shelter is a true human need, but posting cat gifs or pictures of drunken escapades or political musings does not seem equally comparable and thus I do not see how regulation does anything other than hamper competition.
Yeah, it can and it did. Not only did the Ashley Madison leak lead to a few deaths; check out what happens in countries where homosexuality is punished by death when private information goes public...
It can be both. Responsibility is not a conserved quantity. Entities who take on private data should be considering the effects of that data becoming released, including what others may use it for.
“It could kill” is a pretty high bar for accepting that something needs to be regulated. We accept regulations on other data collection / storage activities, like financial data and health data. We also have licensure for occupations where public safety or wellbeing is a concern but lives are not necessarily on the line.
I don't think it's an equal comparison... the effects of social media on societies is a somewhat subjective matter. It's more likely that social media is just another tool that exposes the underlying human nature.
That said, if a fire occurs in my shelter and I don't have sprinklers installed, I could die.
> A lot of people don’t care about fire safety either (until their house is burning down) which is why we have regulations, building codes, mandatory sprinklers in offices, etc.
We don't have mandatory sprinklers in home offices and undeveloped land and buildings that are still under construction.
The problem with the equivalent distinction in software is that there is no clear point that software is "finished" like a building is. The architect doesn't come back and make changes a year after the occupants move into a building.
If there is no exception for new code still under testing then there is no way to test new code. But if there is, everyone will live their lives inside of it.
A big part of the problem is we have a brand new set of very vaguely written rules with no case law. Given time, I expect we should see case law and software change to be more GDPR friendly.
I am very curious to see what happens to EU ad revenue after GDPR. If it doesn't drop (outside of Google & Facebook's internal platforms), I'm guessing there isn't much GDPR compliance going on.
It’s not really as bad as that. Practically, the EU lawyers are not going to prosecute some dumbass no-revenue “Tinder for cocktails” or similar. They are out for money and only going after the guys who are big enough to pay but haven’t complied yet.
"[any big US tech companies] Urged to Adopt GDPR Globally as a Standard"
As a European living abroad, I still have no idea if I will be protected by the GDPR. I have read at least two opposite answers on HN in the last few days: "yes, because you are an EU citizen", and "no, because it's where you are when the data is collected that matters".
GDPR doesn't have a citizen requirement, only residency. The word "citizen" doesn't appear in the law at all!
Art 3 of the GDPR ( https://eur-lex.europa.eu/eli/reg/2016/679/oj ) explains it all. It applies only when either the person or the company is in the EU (or maybe the EEA). So EU citizens in the USA aren't covered.
Does "in the Union" mean within the geographic borders of EU states?
Does "established" mean having a physical presence? Having been incorporated? Registered with a regulatory body? Having remote employees who live there?
“Urging” Facebook to do anything not in its commercial interest isn’t worth squat. Best case: another vague promise to be broken as soon as we forget.
Facebook needs to be broken up and an American GDPR codified into law. If you care about this, pick up the phone and call your Congressperson and Senators.
With all the comments about not wanting EU law to be the international standard, it’s not really that unusual for regulations in one country to do this. For example, U.S. fuel efficiency standards have a significant impact on the entire market. If these are reduced by the current administration, but California maintains the same standards, even California’s regulations might be enough to keep the same or nearly the same impact on the market.
Could GDPR be the end of Facebook? GDPR mandates that user data be portable. Users can now download their data and upload it to a new social network. What is stopping a new startup from creating a social network where users upload their Facebook data?
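Mechanically, the import half of that idea is straightforward. A sketch assuming a hypothetical JSON export schema; real exports differ per platform, so a real importer needs one adapter per source:

```python
import json

def import_profile(export_json):
    """Map a downloaded data-export file into a new network's user
    record. The field names here ("name", "posts", "friends") are
    assumptions, not any platform's actual export format."""
    data = json.loads(export_json)
    return {
        "display_name": data.get("name"),
        "posts": [p.get("text") for p in data.get("posts", [])],
        "contacts": data.get("friends", []),
    }

# Toy export file:
export = json.dumps({"name": "Alice",
                     "posts": [{"text": "hello"}],
                     "friends": ["Bob"]})
profile = import_profile(export)
assert profile["display_name"] == "Alice"
assert profile["contacts"] == ["Bob"]
```

The hard part isn't parsing; it's that the imported contacts are just names until those people also join, which is the network effect the question runs into.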
Why would people trust another social network by uploading all of the data they want to keep private (presumably the reason they took it off FB in the first place)? I think this is the beginning of the death of social networks, and the rise of messaging apps as the primary method of keeping in touch with your friends.
> What is stopping a new startup to come and create a social network [...]
Nothing. But Google Plus already had your name and a lot of information on you, yet still failed. Ello had a ton of hype around it and people signed up, yet no-one really stuck around.
One thing that concerns me is the security concerns with the export function in GDPR. So now a hacker can get in and just export all my data in an few minutes so that changing my password won’t lock them out?
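Platforms commonly mitigate exactly this by gating bulk export behind fresh re-authentication plus an out-of-band confirmation; a sketch of that check, where the window and parameter names are assumptions:

```python
from datetime import datetime, timedelta

REAUTH_WINDOW = timedelta(minutes=5)   # assumed policy

def export_allowed(last_password_check, email_confirmed, now):
    """Gate bulk export behind a fresh password re-entry AND an
    out-of-band email confirmation, so a hijacked session alone
    can't exfiltrate the whole account."""
    fresh = now - last_password_check <= REAUTH_WINDOW
    return fresh and email_confirmed

now = datetime(2018, 5, 1, 12, 0)
assert export_allowed(now - timedelta(minutes=2), True, now)
assert not export_allowed(now - timedelta(minutes=30), True, now)   # stale session
assert not export_allowed(now - timedelta(minutes=2), False, now)   # unconfirmed
```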
I'm curious, would anyone surprised by FB actions mind describing how you expected FB to act with what specific data, and how FB actions deviated from that expectation?
I was under the impression that most people fully expect (even if they disdain) free web services vacuuming any and all user data for advertising profit.
Is this data selling/ad targeting a surprise, or rather is it just finally enough to make you leave or get upset even though you knew that was the business model all along?
Also, are you quitting other web services that operate ad based, data driven revenue models like Google, Reddit, Twitter, etc?
This is a genuine question not a sarcastic comment.
It is completely obvious that many people did not understand the extent to which Facebook accumulates data. I don't think anyone in this thread has actual knowledge of the full extent.
It is even harder for most people to understand the implications of even small amounts of data collection.
As an Indian citizen I would like to oppose this pseudo colonising attempt. EU is anyways a basket case bureaucracy and most member nations are considering leaving EU, Britain having left it already.
I see no reason or logic in subjecting nations that have not opted in to laws that are essentially created by no-skin-in-the-game bureaucrats.
Such attempts should be opposed at all costs.
(I know this "urging" is supposed to be a "voluntary action" by Facebook, but it nevertheless stinks of the same white man's burden the colonizers talked about.)
Having worked on GDPR, it is unnecessarily harsh, and in no way, shape or form would I support this standard going global. There are plenty of ways to make users' data completely private without overreaching the way GDPR does. And having the sword of infinite lawsuits hanging over your head never has worked and never will; look at how ambulance chasers in the US have taken the most mundane laws and turned them into free money.
If the industry didn't want what they're trying to spin as an "overreaching" standard like GDPR, maybe they shouldn't have spent so much time and effort seeing how far they could push their abuse of users' privacy.
Towards what end? GDPR is not going to (or meant to) eliminate targeted online advertising. Which means it is not going to thwart Cambridge Analytica and the likes. We need to address the root of the problem, which is money/advertising in politics.
I'm not the one who commented above, but I can see some problems with freedom of speech related to GDPR.
The main problem is that EU legislation is complex and subject to interpretation for which we have no precedents. Such legislation is easily exploited by authorities to silence opposition. As Napoleon never said, "A Constitution should be short and obscure." GDPR is long and obscure. That leaves even more power to the executive.
This situation benefits large corporations - such as Facebook - who can afford an army of lawyers and can deploy resources to legal fights in any country. Small actors with dissenting views are hopelessly disadvantaged in this kind of setup, and I forecast that we'll see authorities shutting down blogs and websites using GDPR as their tool.
> The main problem is that EU legislation is complex and subject to interpretation for which we have no precedents. Such legislation is easily exploited by authorities to silence opposition. As Napoleon never said, "A Constitution should be short and obscure." GDPR is long and obscure. That leaves even more power to the executive.
I'm not sure what this has to do with free speech, though. Many laws (in any country / federation / commission / union) are long and complex. Not all are related to speech and/or freedom thereof. GDPR is not.
I take your point that complex laws favour the legal establishment and large corporations that can afford them, but again... what does that have to do with free speech in the context of the GDPR?
There seems to be no argument here... Is there something in GDPR I'm missing?
> we'll see authorities shutting down blogs and websites using GDPR as their tool
If they're shutting down blogs, there are two possible reasons for it:
1. The blog is using a non-compliant commenting system. This may be hand-rolled or 3rd-party: in either case, disabling comments is a common-sense measure to stay up. No legal complexity of any document should obscure the simplicity of this solution.
2. The hosting company hosting the blogging platform is non-compliant and gets shut down completely. In this case, your argument re: the company being small and not understanding legalese hopefully shouldn't apply.
If they're shutting down websites, those websites are offering a user-oriented service of some kind, and should get their act together w.r.t. understanding the legal implications of doing this, no matter how small they are.
If you're not processing user data, you're not a target. Exercising free speech does not require processing user data.
> If they're shutting down blogs, there's 2 possible reasons for it:
There are other possible reasons. Like "we don't like it".
I live in a country whose corruption index (as calculated by Transparency International) is the lowest in the world. Still, we have a "black list" of websites that the police distributes to Internet access providers to block users from accessing those sites. The legal basis for this is supposed to be stopping child pornography, but the mechanism is also used to block sites that criticize the police and contain no pornography at all. And there is no legal mechanism to challenge the police and stop them from doing this.
GDPR gives many additional tools for authorities to perform censorship like this.
I have seen a lot of GDPR critics on HN saying that the right to be forgotten is censorship. That news articles about criminals should not be deleted. The GP is probably reiterating such sentiments.
Strangely, none of these critics ever seem to consider the opposite cases: what if a person was wrongly accused of murder, but was later found innocent? Old articles about his "suspicion for murder" should either be rectified or deleted. What is more important: to prevent an innocent person from being punished, or to be able to punish a legit criminal?
Or let's take something more mundane. If you posted embarrassing party photos while you were a teenager, and some site made a copy of those photos, shouldn't you be able to have them removed?
(Mind you: this particular critique on GDPR wasn't valid in the first place. GDPR article 17 states that the right to be forgotten does not apply "for archiving purposes in the public interest", among others.)
See also my other comment in this thread about the plastic surgery meme, "The only thing you’ll ever have to worry about is how to tell the kids". The meme is false but it ruined the woman's career.
Who said anything about the police? The GDPR does not say that the police has the right to remove stuff. It allows courts to rule that a company must comply with a removal order.
Interesting you mention that. In many places in Europe, it is forbidden to release the name of a suspect until they are found guilty or there's heavy evidence weighing against them. Specifically for that reason.
But what if a person was in fact found guilty, but counter-evidence later came up and he was once again ruled innocent? His name has been long published by then.
Yes, those articles are correct, but do you think that's what most people would conclude when they read them? Most likely the articles about the murder suspicion rank at the top of the search results while the articles about being cleared are hard to find. Many people tend to be judgemental, or at least err on the side of caution. In this case, potential employers could feel "where there is smoke, there is fire" and would rather close the "suspicion for murder" news article tab immediately and decide not to hire this person, rather than searching further.
Let me give you a real world example. You know that meme about plastic surgery, "The only thing you’ll ever have to worry about is how to tell the kids"? The woman in that meme in fact did not have plastic surgery, but most people thought the meme was true and was about her, without researching the truth. It ruined her career. https://nextshark.com/heidi-yeh-chinese-family-plastic-surge...
You can't treat the Internet as an append-only database where you can rectify things by publishing more stuff. The human mind is boundedly rational and most people only look at the first page of Google search results.
No. Google should rank the "Suspect found NOT GUILTY of murder" as the top, most relevant page, because that is truly the most relevant. If Google refuses to rank this the top, it's out of sheer laziness, just like how they refused to fix the subversive content in Youtube Kids "because algorithms". If they did this in the first place, I think most people wouldn't care so much.
You say "should". Who guarantees that? News outlets are for-profit organizations. If they judge "not guilty" as something that doesn't sell as well as "guilty", then they won't publish as much content about it, and because of Google's algorithm the latter won't be ranked very high.
Even if it's ranked high, a lot of readers would still end up with this feeling that "yeah the latest news article say that but MAYBE that guy DID do something wrong... let's not hire him just to be sure", i.e. "where there is smoke there is fire".
I just gave you a practical example about the plastic surgery meme. News articles about how the woman was ruined rank nowhere near as high as the meme itself.
I think we are in agreement. Google needs to change their AI/algorithms so that the more important, relevant aspect of every story increases in rank. Not the flashy click-baity articles.
GDPR is a threat to freedom of speech while not changing much in terms of privacy, as the worst actors are governments themselves. Edward Snowden's revelations are 100x worse than whatever worst-case FB scenario you are picking.
GDPR sets a bad precedent, with local laws impacting foreign businesses. By this logic, why shouldn't Chinese speech laws apply to EU and US companies if GDPR applies globally?
Nothing wrong with taking inspiration from other countries' laws. My original point is that GDPR is fundamentally wrong. It infringes on freedom of speech by making new social networks harder to create, for dubious privacy gains. If you think FB is wrong, the best way to take it down is to replace it, not to try to make it compliant.
GDPR isn’t so much a freedom of speech regulation insofar it’s more of setting out the rules on how corporations need to handle personal data. The ideas behind GDPR are good, but I’m unsure of the burden of compliance. Users will enjoy, presumably, an increased level of data control. Companies may need to re-engineer their offerings to fit into this new model, the result of which may mean unforseeable inconveniences and cost to the end user.
Not wanting European laws is likely the core of your American DNA.
America's national identity is wrapped up in the founding of its government, and throwing off "oppressive" European laws is at the center of that narrative.
I don't think it's about adopting European laws; it's about following a set of principles on how they handle personal data. This set of principles happens to be inspired by a European law, and all they're asking is to use it as a baseline.
> How is the country/continent of origin of a regulation that is entirely in your best interest of any relevance?
Laws carry their culture with them. GDPR is, from an American perspective, an overworked mess designed to support a big bureaucracy. On this side of the Atlantic, we'd do something slimmer, more reliant on privately funded cases (and regulatory complaints) than on public ombudsmen, and better attuned to start-ups' needs.
Indeed. A law is just a text in a hierarchy of norms, and so is this one. Accordingly, its weight may vary from country to country in the EU, first because the relationship between a country's Constitution and EU norms is not the same everywhere. Moreover, one should not forget that enforcing a law requires a whole judicial system, and once again, this judicial system varies from country to country in the EU. Think of the GDPR as a program: it runs with certain privileges in a given software context and requires certain hardware resources to run.
IMHO, as an EU citizen, an American perspective would be welcome. A text must fit the local hierarchy of norms and the local judicial system.
I don't fully understand what you mean. Do you want Europe to be a lawless continent? Or do you not want a private, multi-national company to voluntarily apply a data protection framework uniformly to all of its users simply because the EU requires them to apply it to EU residents?
Both of these things are out of scope for you, no? I suppose you could vote with your shares if you own stock in Facebook...
I generally agree with your statement of not wanting European laws due to cultural differences, but America is in sore need of privacy laws for citizens. People will be rallied to oppose said laws for the exact reason of not wanting Euro-style laws passed, which IMO is a huge shame.
It's as absurd as Google/Facebook/et al. adopting US cultural norms as the global standard for banning ads, YouTube content, social media content and everything else for the whole world.
While I don't agree with some of Google/Facebook/et al. standards, I think that's not a fair comparison, since they are US based companies following US cultural norms, not EU based companies following US cultural norms.
They may be founded in the US, and primarily headquartered there, but the large sprawling EU headquarters of the same companies do apply US cultural norms locally. I don't see the unfairness in the comparison.
Google and Facebook both have significant European and Asian businesses (complete with offices stuffed full of engineers, marketing and management), so calling them US companies is wrong: they are _global_ companies which happen to be headquartered in the US.
It would be absurd if they could be _forced_ to globally adopt an EU regulation, but I think not as much if they _do_ globally adopt this EU regulation, for two reasons:
1. There might be significant internal hassle involved in maintaining more than one standard of integrity within a single company whose currency is integrity and whose pillar is its user base. It may be easier for them to just use a common one, even if it is more restrictive, at least when it is backed by a population of 500 million people.
2. Facebook are going through a PR crisis and need to dig out of that hole somehow. If they only adopt EU regulations in the EU, it could be seen as Facebook doing the bare minimum to protect the integrity of their users; adopting them globally looks like a better move.
It's funny how GDPR coincides with the Facebook scandals as of late, though... Their lawyers and engineers have got to be buried with work. I can't even begin to imagine what a company of Facebook's scale and business model need to do to support GDPR globally.
In the automotive industry, California emissions and efficiency standards became the de facto standards in the US, mainly because it wouldn't make economic sense to maintain both a California model and a more polluting model. Also, better fuel efficiency is generally seen as a good thing by consumers.
The EU has a population of 510m, and FB is blemished by privacy concerns. If Facebook makes a GDPR-compliant version, it wouldn't be unreasonable of them to roll that same version out to the rest of the world.
I think, on the contrary, we should urge US companies to fight against GDPR. Foreign local laws shouldn't dictate how our companies behave. Why not respect speech laws in China or in Russia if we follow this precedent?
GDPR is more overreaching than that. You don't need a physical presence in the EU to be subject to it. In theory, just having a web server that stores access logs (the default for Apache and Nginx) puts you in breach of it, as EU IPs are now considered personal data.
Here's the part of it that covers your webserver: "Whereas the mere accessibility of the controller's, processor's or an intermediary's website in the Union, of an email address or of other contact details, or the use of a language generally used in the third country where the controller is established, is insufficient to ascertain such intention, [...]".
> envisages offering services to data subjects in one or more Member States of the Union
They just have to prove you are considering the EU in your app. It can be anything: having EU timezones, or a country input listing EU countries, is enough to prove intent to serve EU residents. If you collect IPs via your web server, you are infringing.
> is enough to prove intent to serve EU residents
Given that it's still April, there's literally no way for you to know that. Also, the sentence you're quoting starts with "may make it apparent" not "does make it apparent".
Having said that, if you're building a service that lets people select EU timezones, countries, currencies and so on, you're probably going to have a hard time proving that you're not providing goods or services to Europeans (because you probably are). If you're providing goods or services to Europeans, GDPR applies.
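On the access-log point specifically: operators worried about raw IPs in logs often truncate them before storage, which is roughly what the common nginx/Apache log-anonymization modules do. Not legal advice, and the helper below (name, prefix lengths) is my own illustration rather than anything the GDPR prescribes, but a minimal Python sketch looks like this:

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Truncate an IP so it no longer pinpoints a single user.

    Zeroes the last octet of an IPv4 address (keeps a /24), and keeps
    only the first 48 bits of an IPv6 address -- the typical defaults
    of log-anonymization modules.
    """
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize_ip("203.0.113.42"))  # -> 203.0.113.0
```

Whether truncated IPs still count as personal data is itself debated, but it at least removes the most directly identifying part before anything hits disk.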
Yeah, and unless you do business in the EU as an EU entity, you can pay just as much attention to that as EU companies that don't do business in the US pay to the trainwreck that is American software patents.
You are making the wrong assumptions. I am an EU citizen, but live in the US. I would vote in a heartbeat for a 1st-Amendment-like law in Europe. You just don't see how restricted speech in Europe is, and how laws like GDPR contribute to it.
You're moving the goalposts. Your statement suggested you want a US-style "1st amendment" in Europe. I don't. That has nothing to do with "free speech" as a concept.
Under the US interpretation of free speech political donations are protected as "speech" and politicians can go on TV and say they want someone to be murdered and not face any consequences.
I'm German so you can imagine why I fundamentally disagree with that notion, even if our laws are sometimes a bit too strict (though that often has more to do with post-WW2 denazification than free speech in particular -- e.g. not being allowed to put nazi symbology in video games, not even as enemies).
UK libel laws and their advertising code are another example of European laws being a bit too strict. But even that is something I'd prefer over the "law of the strongest" in the US.
EDIT: Free speech is obviously a great idea and an important right, but the problem with freedoms and rights is that they can't be absolutes when you live in a society with other people you want to share those rights and freedoms ("your liberty to swing your fist ends where my nose begins"). Additionally, some of those freedoms and rights are mutually exclusive, so you need to define an order of precedence. Even free speech absolutists generally draw the line somewhere (e.g. generally violence isn't considered speech even if it is a form of expression, and few people would defend the right to shout "fire" in a crowded building without facing the consequences of the resulting mayhem).
In other words "being willing to defend free speech" is a meaningless platitude unless you first define what you consider the acceptable limits of that freedom.
The First Amendment protects you from the government. Facebook censoring you is not prohibited by the First Amendment. More broadly, I don’t see how GDPR interferes with one’s right to lawful political speech.
It says nothing in the text of the first amendment that direct threats of violence are not covered. If that restriction is compatible with the first amendment I don't see why a future right to be forgotten can't be.
Blockchains storing social data are a good example. They inherently infringe GDPR. A decentralised Facebook-like social network on a blockchain is no longer possible; each node can be sued. It has happened with Tor exit nodes.
IANAL, but crypto-shredding seems to be a viable way to meet GDPR deletion requirements, making it possible to implement compliant blockchains. Of course you'd have to make the nodes comply, but that has nothing to do with blockchains.
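To make the crypto-shredding idea concrete, here's a toy Python sketch, entirely my own illustration: each user's records go onto an append-only log encrypted under a per-user key kept off-chain, and "erasure" deletes only the key. (The SHA-256-based keystream is for demonstration only; a real system would use an authenticated cipher such as AES-GCM.)

```python
import hashlib
import secrets

class CryptoShredStore:
    """Toy crypto-shredding store: per-user keys live off-chain;
    deleting a key renders that user's on-chain records unreadable."""

    def __init__(self):
        self.keys = {}   # user_id -> key: the only deletable state
        self.chain = []  # append-only "blockchain" of encrypted records

    def _keystream(self, key: bytes, nonce: bytes, length: int) -> bytes:
        # Derive a pseudo-random stream from (key, nonce, counter).
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def append(self, user_id: str, plaintext: bytes) -> int:
        key = self.keys.setdefault(user_id, secrets.token_bytes(32))
        nonce = secrets.token_bytes(16)
        stream = self._keystream(key, nonce, len(plaintext))
        ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
        self.chain.append((user_id, nonce, ciphertext))
        return len(self.chain) - 1

    def read(self, index: int) -> bytes:
        user_id, nonce, ciphertext = self.chain[index]
        key = self.keys.get(user_id)
        if key is None:
            raise KeyError("key shredded: record is permanently unreadable")
        stream = self._keystream(key, nonce, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, stream))

    def forget(self, user_id: str) -> None:
        """GDPR-style erasure: delete the key, never the immutable chain."""
        self.keys.pop(user_id, None)
```

Note that the chain itself is never mutated: after `forget`, the ciphertext is still there but is computationally unreadable, which is exactly the property crypto-shredding relies on.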
But I still don't see the connection with the first amendment.
Yes, protects, it doesn't require them to. Facebook and its likes volunteering to destroy personal data on request really doesn't have anything to do with the first amendment at all.
Edit: come to think of it I'm not even sure it protects them, but again, it certainly doesn't require them to store or transmit anything.
If you want to run a social media platform, GDPR infringes your freedom of speech. You can argue that it's worth it, for the illusion of more privacy. I just don't think it is.
If someone on your social network wants their posts removed, you have to comply under GDPR, or else. HN, for example, doesn't let you remove your comments after some time.
If it's part of a conversation - like most social media posts - I would assume it's fair use to keep it. If I interview you and publish the video, you can't retract what you said. Why should social media be different?
It's obviously not suggesting using the GDPR whatever that may turn into in the future, but the contents of the GDPR.
Though I think you're right that they would fight it. It's one thing to have your competitors voluntarily adopt a privacy policy that makes them less profitable, but it's another matter if users flock to them and end up eroding the profit margins of the whole privacy violation industry.
[0] https://about.riot.im/