Whatever happens, Facebook has irreparably damaged my trust in their handling of user data and I think many on here would agree. My wife and I have switched to Riot for now for personal communication and are considering other decentralized alternatives.
The situation with Cambridge Analytica was that they let users export the information about their friends, information that users had access to; not allowing that export at all would probably be met with legally-binding criticism. What the API allowed at the time was to conveniently do that through a third-party application that users were free to use. The main use-case that was discussed then was to empower services like Riot, to encourage competition — something that, surprisingly, Facebook was very supportive of at the time.
There was no expectation of fiduciary duties at the time, so it was not clear whether they should treat users as grown-ups and comply with their request to export their social graph, or whether they had a duty to prevent that from happening. It has since become clear that people were not reading the permissions they were granting, and more applications were abusing them than trying to build an alternative to Facebook. With the benefit of being the central platform, Facebook was the first to notice and started cutting access, to the great anger of many third-party services that had grown dependent on the feature. Some services that needed the social graph for legitimate reasons well understood by users (e.g. Tinder) have kept their no-longer-public access.
As much as people want to blame the only site they can see in the process, and the one that appears the most powerful, namely Facebook, the company acted with far more awareness than the law would have required, even with something as progressive as the GDPR in force.
Facebook not only has been very effective at showing me and letting me edit the data they have about me in https://www.facebook.com/ads/preferences/ but they are also the first service that let me see whether my data has been sold by data brokers to (often unsuspecting) brands. Check the “With my personal info” section. Do you see companies you have never heard of there?
Don’t be misinformed and attack Facebook for selling your data — they did not. They are the ones revealing to you that those companies purchased it from elsewhere, allowing you to know whom to pursue, whom to ask to remove your info, or whom to ask how they got it.
The amount of blaming the nurse for your fever on those issues is getting really concerning.
Their response to the criticism, externally, was to deflect, and internally, was to ignore it. Zuckerberg's response to the Android call and text scraping endeavor was more equivocation. Then he decided to pipe up again about not applying GDPR globally.
One has to squint to see any sense of awareness in Facebook.
> Don’t be misinformed and attack Facebook for selling your data — they did not
They sold ads to an entity that flagrantly broke their rules. Said rule breaking may have had deleterious, and possibly illegal, effects in multiple countries.
Did Facebook know what CA was up to? Probably not. Did they incentivise themselves not to? Absolutely. Complicity comes in shades of grey.
Facebook did that four years before the law, namely the GDPR, came into force. That’s not even accounting for the fact that outside of Europe, in the US and elsewhere, what CA did appears legal. Who the US Congress should be judging is probably whoever is in charge of writing laws.
Unless Facebook had a way to let the programmatic ad platform know, four years later (an eternity by Facebook standards), that the three entities the compliance team had interacted with four years earlier (Kogan, GSR and SCL) were related to CA, or even that CA, working for the official Romney campaign, was related to the pro-Trump SuperPACs buying ads (which would be coordination, and illegal), blocking them without the revelations of Christopher Wylie would have required prescience.
I really don’t think you are making yourself any smarter by judging Facebook in hindsight.
Has Facebook done shady stuff? Absolutely: the Android contacts thing is certainly representative of the “gather first, ask questions later” early attitude, which explains why Messenger is so bloated. Anyone familiar will confirm this is laziness over mischief. I worked there: you don’t need to make things up to find issues with Facebook.
On the particular problem at hand, GSR, Facebook was outwitted and made misinformed decisions, but it would absolutely not have been helped by either the public opinion of developers (we wanted more sharing) or the law (inapplicable) at the time.
The company learned to be more careful; its critics should too, because we absolutely will need those critics to be smart.
I've been blocking every Facebook domain I can find in my browser since 2010. Why? Because I knew they could connect up referer headers on Like buttons to my Facebook cookie, and create a complete profile of me and the kind of articles I read.
(I did the same thing with Google's Plus stuff, as best I could.)
Facebook didn't need to implement things that way. But they've been acting in a sinister way for years, their game plan has been clear as day. And I for one don't consent.
Yeah? Can you point me towards any law that says I have the right to export data I did not enter?
> The main use-case that was discussed then was to empower services like Riot, to encourage competition — something that, surprisingly, Facebook was very supportive of at the time.
Do you have any evidence that encouraging competing social networks was the "main use-case" for this feature? This is a pretty strong claim to make with no support.
> It has since become clear that people were not reading the permissions they were granting, and more applications were abusing them than trying to build an alternative to Facebook
The vast majority of people whose data was stolen did not grant ANY access. You appear to be deliberately misrepresenting events.
> Some services that needed the social graph for legitimate reasons well understood by users (e.g. Tinder) have kept their no-longer-public access.
Access to the social graph, and full access to the activity of all your friends are not the same thing. Again you seem to be deliberately misleading people.
> Facebook not only has been very effective at showing me and letting me edit
So how can I monitor and delete the shadow profile they have built for me?
> Don’t be misinformed and attack Facebook for selling your data — they did not.
Nobody has said they sold the data. They gave it away for free
> The amount of blaming the nurse for your fever on those issues is getting really concerning.
I didn't have a fever before I reached Facebook, something Facebook did got me sick, so why shouldn't I blame them for the fever?
In the case at hand, you did approve, i.e. enter, all the relations in your graph, so I’m not sure how your question is related.
To your question: the GDPR allows you to access any information associated with identifiable information; there are explicitly no limits on whether you entered it, whether it was scraped or logged, or whether it was inferred.
>> main use-case that was discussed
I have the notes from my PhD, yeah. Statistics on blog posts mainly. They are on another computer, but let me know if you really think this is important. The main use case was obviously to have your friends as a feature in social games or music sharing, but that was not really discussed — unless, like for Apple Music, the intention appeared to be to build a competing graph.
I probably should have phrased it better, “the most discussed case”.
> The vast majority of people whose data was stolen did not grant ANY access.
Every one of them granted access to their likes and social graph to their friends. Their friends then overlooked how the platform granted them the ability to share that further. That’s the thing about a social graph that hardly anyone seems to notice now: it’s shared personal data. That’s why calling it “ownership” makes it confusing: information isn’t an excludable good.
> So how can I monitor and delete the shadow profile they have built for me?
I don’t think you have a shadow profile. Your friends shared information about you with Facebook, namely that they socially know the person controlling your email address. What you are asking is for you to be able to tell Facebook that the company should not accept, or store, the information that your friends want to connect with whoever ends up using a certain address. But, if you change your mind, Facebook needs to be able to change that too. Storing your intention, or controlling your ability to change it, would itself be a profile, missing most features — a ghost or shadow profile, if you wish.
If you want to prevent your friends from sharing your personal information with programmatic agents, I’d love to get your take on how to do that. I use Facebook for most of my social life because I know that, as the central point of control, they enable me to prevent my friends from abusing my trust (as email would allow) and they monitor other programmatic agents.
> Nobody has said they sold the data.
This is alas a commonly repeated story (like the shadow profile). If you go through the paragraph, it should be fairly clear that I was actually less trying to debunk that and more trying to contrast Facebook with data brokers.
> something Facebook did got me sick
If you don’t have a Facebook account, I’m not sure how Facebook or Cambridge Analytica would have been able to hurt you personally.
I did not enter my friend's birthday, or other "extended profile properties". In what world does GDPR legally require facebook to allow me to export my friend's birthdays and their extended profile information?
> I probably should have phrased it better, “the most discussed case”.
Yeah, I'd totally buy that they tried to sell the feature externally as supporting social competitors. That is very distinct from what you claimed.
> Every one of them granted access to their likes and social graph to their friends.
You were distinctly talking about the permissions people were granting facebook applications and blaming the problem on people not reading those permissions carefully. Now you seem to be blaming people for using Facebook at all?
> What you are asking is for you to be able to tell Facebook that the company should not accept, or store, the information that your friends want
I'm not asking for it, (though GDPR will provide that), but you were claiming it existed.
It doesn’t and that’s not what the API allows today. At the time this was a feature, there were arguments that allowing that would help new competing services to emerge, but they never became law.
Because they did not, competing services now rely on a handful of people claiming they switched, rather than having more effective (or invasive) ways to remind people to switch. That means it’s extremely unlikely that any project competing with Facebook, many of which have recently felt a gust of interest, will actually take off meaningfully. So the reaction you are asking for now from Facebook, thinking you are being critical and provocative, happened six years ago and locked them in as a monopoly. I guess that’s hindsight.
> That is very distinct from what you claimed.
Yes, because what I claimed is that external activists, developers who set up OpenSocial (OAuth and OAuth 2.0, Activity Streams, and Portable Contacts) were the ones asking for it.
> Now you seem to be blaming people for using Facebook at all?
I’m not blaming anyone (except you): I’m just stating that having a social service means sharing access to personal information. Once information is shared, you have to trust people who are not the person who the information is about, but their friends, with said information. Facebook empowers that trust: you can learn about how long lost friends are doing, which is a great way to leverage that trust; or you can sell their details for a dollar, which is less great.
>"not allowing that export at all would probably be met with legally-binding criticism"
What legally binding criticism were you talking about? Why did you bring up the GDPR to defend this statement?
> Because they did not, competing services now rely on a handful of people claiming they switched, rather than having more effective (or invasive) ways to remind people to switch. That means it’s extremely unlikely that any project competing with Facebook, many of which have recently felt a gust of interest, will actually take off meaningfully. So the reaction you are asking for now from Facebook, thinking you are being critical and provocative, happened six years ago and locked them in as a monopoly. I guess that’s hindsight.
Oh please tell me, what reaction am I asking for? Are you saying we should have legally forced Facebook to continue letting CA strip-mine users' data? WTF are you talking about?
> The main use-case that was discussed then was to empower services like Riot, to encourage competition — something that, surprisingly, Facebook was very supportive of at the time.
> Yes, because what I claimed is that external activists, developers who set up OpenSocial (OAuth and OAuth 2.0, Activity Streams, and Portable Contacts) were the ones asking for it.
No, you never claimed that at all. You claimed that facebook was discussing this API mainly as a means of fostering competition.
> Once information is shared, you have to trust people who are not the person who the information is about.
And you have to trust the platform to respect your privacy and not give any random quiz app full access. Obviously Facebook is not trustworthy and should not be given this information.
> Facebook empowers that trust: you can learn about how long lost friends are doing
Facebook doesn't empower trust at all: it abuses it to make money off of our information.
GDPR also governs consent to holding personal data. Facebook is well known for using fishnet-trawler techniques to gather whatever personal data it can, with little regard for consent. I wouldn't be surprised if everything is passed on to Palantir anyway.
The fact that several aspects of their business model will have to change to accommodate GDPR should be telling.
I also wrote a PhD on how to apply monopoly enforcement to the company, and I’ve published my critical understanding of the company’s position for more than ten years prior to joining it: at academic conferences, on my blogs, on Quora, occasionally here. I was the first person to write scary things about Facebook, probably in 2005.
As I wrote repeatedly, the company has a ton of issues and is generally extremely open about that (open that issues exist, less so about what they are specifically). The trawler approach to data gathering was one of them. Thank you for pointing that out: focusing on real problems is important. Facebook has been fixing aspects of that repeatedly, but it is hard: some data gathering or sharing is actually relevant and expected, so you can’t cut things without understanding what you would break -- a clear strategy change in the last three years.
Why not do more, faster? Because the company is already trying to respond as fast as it can. Employees and ex-employees are indeed more tolerant of this because we know personally the insane amount of work there is to do; prioritisation (the arbitrage between your most and second-most important task, what you do now versus later) is brutal. I left the company to work in a more balanced environment that happened to be the fastest-growing start-up ever, where I got woken up every night at 4am because scale had killed our database again, migrating to a more scalable technology every four months.
Facebook had to reconsider offering services like targeting based on what Experian knows about their users, who I believe are almost exclusively American residents. I don’t think that has to do with GDPR, because it’s not on the same continent, but I’m not privy to details. EU citizens living in the US and Americans who moved to the EU are both large enough demographics to warrant caution.
They were not hiding the feature, because there was an expectation from Americans that their credit card companies sold economic data; Facebook just made that integration easier -- I’m assuming as a reaction to how common a source of Custom Audiences that was. If you didn’t like it before, you could hide it on https://www.facebook.com/ads/preferences/ with a click.
After the CA scandal, Americans became more sensitive to those approaches and Facebook responded almost instantly -- so fast that advertisers are a little confused. That pro-user balance is also something that anyone familiar with the company (investors, board members, but also employees) can confirm: there is a strict hierarchy when the objectives of the four “orgs” disagree. Security is always right; User Engagement takes over Advertising.
Every business in Europe has to accommodate the GDPR, mainly in its processes, but all advertising-based companies massively so; one would be in denial to think that’s not the case -- the text is still wide open to interpretation. The fact that Facebook had the least amount to change is indeed telling. What you notice is the scale of the company, the prejudice and the attention. What you are missing is in front of you: Facebook is very willing to admit its wrongs and fix them.
The nurse is being blamed because they've ignored clear, worsening symptoms for years.
All I’ve heard in this thread is people judging the company in hindsight, based on rather convoluted speculation (that happens to be false: Cambridge Analytica used credit card data and voter records, not Facebook data, to assess psychological profiles).
Facebook has made difficult decisions with partial information that ended up proving to be suboptimal — but I don’t see when they have ignored either symptoms or criticism.
I think they've been willfully ignoring the likelihood that this sort of data exfiltration has been happening on a very widespread level for years. Many of those 800,000 "quiz" apps are likely designed for this purpose, and their proliferation should've set off warning bells inside Facebook - it did outside.
Zuckerberg's being out there acting like this was all an unforeseeable, shocking, limited-scope issue is disturbing to me.
"Study Suggests Medical Errors Now Third Leading Cause of Death in the U.S." (https://www.hopkinsmedicine.org/news/media/releases/study_su...)
Note that becoming one involves going to your embassy in person.
Sorry if this is obvious. I haven't been following the details of this.
Does that mean there is no protection for EU citizens while they are outside the EU?
    def GDPR_applies(company, person):
        ...

You can read the actual text of the territorial scope rule here.

Edit: slightly less rough, but still quite rough:

    def GDPR_applies(company, person):
        if offering_goods_services_in_EU(company, person):
            return True
        if monitoring_behavior_in_EU(company, person):
            return True
        return False
> This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to:
the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
the monitoring of their behaviour as far as their behaviour takes place within the Union.
“offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
the monitoring of their behaviour as far as their behaviour takes place within the Union.”
So Facebook US cannot divest itself as long as it serves customers in the EU or exchanges data about data subjects in the EU with its EU subsidiary.
- The data is processed by a company established in the EU (it does not matter where the person / “data subject” is).
- The person is in the EU (whether or not they are an EU national).
“Data subjects” are not protected outside the EU against companies established outside the EU.
From https://cybercounsel.co.uk/data-subjects/ under “Territorial expansion and applicability of EU law”.
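Putting the two conditions together (plus the rule for companies established in the EU), the territorial test can be sketched as a small predicate. This is a rough sketch assuming boolean inputs; the function and parameter names are illustrative, not from the regulation:

```python
def gdpr_applies(company_established_in_eu: bool,
                 person_in_eu: bool,
                 offers_goods_or_services_in_eu: bool,
                 monitors_behaviour_in_eu: bool) -> bool:
    # Processing by a controller/processor established in the EU is covered
    # regardless of where the data subject is.
    if company_established_in_eu:
        return True
    # A controller outside the EU is covered only when the data subject is
    # in the EU AND the processing relates to offering them goods/services
    # or monitoring behaviour that takes place in the EU.
    if person_in_eu and (offers_goods_or_services_in_eu
                         or monitors_behaviour_in_eu):
        return True
    return False

# A non-EU company with no EU offering, and a person outside the EU:
print(gdpr_applies(False, False, False, False))  # False
```

This matches the summary above: data subjects outside the EU get no protection against companies established outside the EU.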
I firmly believe that the majority of people still don't care about their privacy in the first place or they wouldn't use such platforms. IMO this is government overreach and anti-competitive.
Currently a privacy-conscious startup is competing with those that aren't, which makes it harder. But with this law, you won't have as many shady companies like Facebook.
Storing less private data makes you less liable to get hacked and get bad PR.
The GDPR states that it must be accessible in a common digital format, which is new.
There's nothing inherently wrong with a high bar to entry if that bar exists for a very good reason. If it were hard to break into this space due to regulation (I don't believe it is or will be) then yes, competition will be less, but the alternative is worse.
Safety critical code and health care technology are life and death situations.
It's also important to understand that the regulations in those sectors have destroyed (or deterred) an incredibly large number of startups, and the net lives saved as a result is quite likely negative because the value of life-saving technological advances generally exceeds the cost of mistakes in developing them.
People have severe emotional reactions to this. A doctor's experiment may kill fifty already-terminal patients but uncover a cure that goes on to save five million. But the families of the fifty dead patients can blame a specific person for their deaths while the five million aren't even aware what they lost, so the regulations are biased against progress.
This is obviously not a good template for making decisions in other industries where emotions don't run so high.
That's the point. If we pass regulations that result in continued and increased centralization because only large organizations can afford compliance, that is no advantage to the people whose lives are at risk.
If you're a homosexual in Russia or a democracy activist in China or an advocate for womens' education in parts of the middle east or a Jew in WWII Germany, "privacy laws" can't save you. A company's fear of the state can't protect anyone from a corrupt state. But structural and technological privacy protections might. Which are the things hamfisted regulations inhibit.
Debian is better at this than AT&T.
It's fairly clear that giving away people's data without any care is unsafe.
The answer is yes, it is onerous. And yes, it does matter.
Regulations always start as an idea that sounds good. The companies most impacted are then motivated to gain control of the regulations. Once they do, then they happily add on to regulations because that becomes a barrier to entry for new competitors, but do so in a way that ceases to be a problem for themselves. In the end the regulatory framework stops working and we have the very disaster that we were trying to block.
This is called regulatory capture. It is very, very common.
In the case of Facebook, here is the problem. The regulators are controlled by politicians who wish to remain in power. If Facebook breaks the rules in favor of those politicians, it becomes easier for the politicians to remain in power. The incentive is therefore for the politicians to become complicit in letting Facebook break the rules. However no new startup can provide the politicians with an incentive that matters - only Facebook, Google, and other similarly large players can bribe politicians in back room deals.
The payback for Facebook is that they get to solve their biggest existential crisis. The barrier to entry for a new social network just isn't as big as it seems. They can keep milking more from their users and buying up the Instagrams for only so long, until something like Snapchat or Discord or someone not yet thought of succeeds. If Facebook is to avoid being replaced in the way that they replaced MySpace, and MySpace replaced Friendster, they need a new barrier to entry.
Regulation provides that for them. In public they will get chastised. You'll get speeches that you love. In private, they will happily become part of an effective surveillance state for those already in power in return for a blind eye being turned to their ongoing transgressions.
The result? The regulation that you are cheering won't accomplish the causes that you want. And if history is a guide, the very politicians whose speeches are the most to your taste will tend to be the ones who behind closed doors are selling you out. With their public speeches being nothing more than bargaining chips for private deals.
The issue you are talking about is rampant in the USA. The problem is not regulations but your politicians and your filthy-rich businessmen.
And for the record, I grew up in Canada. I am not opposed to the idea of regulation in principle. However every approach has failure modes. And regulation works a lot better in practice when you exercise skepticism about the actual aim as opposed to the stated one.
If you wish to build your skills at skepticism, I highly recommend watching the series Yes, Minister. It is from the UK in the 1980s. However the lessons about how bureaucrats manage to get their way while pretending to listen to politicians are timeless. It also came out much later that it is less fiction than it first appears - most episodes were based on actual incidents. And some were downright prophetic - compare https://www.youtube.com/watch?v=37iHSwA1SwE with actual British policy towards the EU since.
I have no reason to believe that the picture painted then of the bureaucracy in Whitehall is significantly better than the bureaucracy that has sprung up in the EU.
We wouldn't let a self driving startup ignore traffic laws because it's "too hard". Likewise we shouldn't let a social startup ignore privacy laws and auditing.
Allow Socially to collect the following information for the purposes of providing you service:
- Minimal Account Information: email address and password
To prevent spam if you don't provide additional profile information you will be required to verify your account with a valid government ID. Only the expiration date will be stored.
- Information posted to your timeline.
Without this you will be unable to post updates.
- Messages sent to others.
Without this you will be unable to send messages.
- Profile Information: Name, Address ...
Allow Socially to collect the following information for the purposes of protecting your account:
- Network Addresses used to access the service.
- Login location
- Login times
After a short time using the service if we see a login that doesn't match the information on record we will notify the primary email for approval.
- Links to other sites you click.
We will check links you click against our list of known phishing sites and scams and warn you before redirecting you.
Allow Socially to collect the following information for running internal studies and improving our service.
- Features you use.
- Posts you read.
Allow Socially to collect the following information to help make ads more relevant to you:
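A tiered dialog like the one above maps naturally onto per-purpose consent flags that can be granted and revoked independently. A minimal sketch; the purpose names follow the mock dialog, and the storage shape is invented for illustration:

```python
# One flag per purpose, defaulting to "not granted"; the mock dialog's
# four sections become four independently revocable consents.
DEFAULT_CONSENT = {
    "provide_service": False,   # account info, posts, messages
    "protect_account": False,   # network addresses, login location/times
    "internal_studies": False,  # features used, posts read
    "relevant_ads": False,      # ad personalisation
}

def grant(consent: dict, purpose: str) -> dict:
    """Return a new consent record with one purpose switched on."""
    if purpose not in consent:
        raise KeyError(f"unknown purpose: {purpose}")
    return {**consent, purpose: True}

c = grant(DEFAULT_CONSENT, "provide_service")
print(c["provide_service"], c["relevant_ads"])  # True False
```

Keeping the purposes separate is what lets a user accept the service tier while refusing the ads tier.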
There are a lot of extremely serious questions that arise regarding network security, anti-fraud, and anti-abuse measures. Just looking at basic bot detection measures, all of the sophisticated methods are now illegal. It certainly requires a major re-think of how websites serve content as well as the sustainability of advertising as a revenue channel. I can't even wrap my head around how someone would run a GDPR-compliant dating website/app.
If you think Pagefair's interpretations of the GDPR are correct then Google and others are calling the EU's bluff. They are implementing part of the GDPR strictly but the parts which invalidate their business models are being interpreted more liberally or ignored altogether.
I'm not saying that the GDPR is a good idea, bad idea, morally right or wrong. Rather, a lot of things we have come to view as a given -- such as how we detect bots, fraud, and abuse -- are no longer valid. Infrastructure, both technical and business, will need to be re-designed either to comply with the GDPR or evade it.
I really feel like the answers to all of those questions are going to be basically identical between people, and all you really need to do is be able to export whatever data you have on somebody quickly in order to be able to respond to that email in under a quarter of an hour.
I guess it could make a decent DoS tactic against a small company, but lots of other things would too.
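If the data you hold per user is already structured, the export half of this really can be cheap. A minimal sketch, assuming the app can assemble one dict per user; the record layout here is invented for illustration:

```python
import json

def export_user_data(user_record: dict) -> str:
    # GDPR's portability right asks for a commonly used, machine-readable
    # format; pretty-printed JSON satisfies that for most data.
    return json.dumps(user_record, indent=2, default=str)

# Hypothetical record assembled from the app's own tables:
record = {
    "profile": {"email": "alice@example.com", "name": "Alice"},
    "posts": [{"id": 1, "text": "hello"}],
    "login_history": ["2018-04-01T12:00:00Z"],
    "third_parties_shared_with": [],
}
print(export_user_data(record))
```

The hard part is assembling the record from every table that mentions the user, not the serialising, which is the argument for designing the schema with exports in mind early.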
Let's take an app like Instagram as an example. Instagram had over 1 million users within two months and 10 million within a year, and no profits. You're running on a shoestring trying to keep servers online without any serious budget to speak of. It's probably you and a few friends/associates working closely together.
All of a sudden with GDPR, you have to pay a lawyer to help you understand what you need to do to comply with the regulations. You also have to spend engineering time developing solutions to enable the queries in that letter, enable purging records from long-term backups, etc. And people have to spend the 15 minutes responding to each request.
Now, let's say each request does only take 15 minutes like you suggest (which I find highly unlikely). If a small fraction like 0.5% of your customer base sends such a letter, that's 50,000 letters. At 15 minutes each, that's 12,500 hours, which is over 6 full-time employees. Many small businesses don't even have 6 employees to conduct the entirety of their business right now!
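For what it's worth, the arithmetic above checks out (taking the 10 million users from the Instagram example and a roughly 2,000-hour working year):

```python
users = 10_000_000
requests = int(users * 0.005)   # 0.5% of the user base -> 50,000 letters
hours = requests * 15 / 60      # 15 minutes per letter -> 12,500 hours
fte = hours / 2000              # ~2,000 working hours/year -> 6.25 FTE
print(requests, hours, fte)     # 50000 12500.0 6.25
```

Even halving the per-request time still leaves roughly three full-time people doing nothing but answering these letters.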
The letter is nicely formatted into 9 bullets. All are optional for small companies, and all can be automated - the answer should be the same for all users.
1. This is a "yes" or "no" question. If the answer is "no", you can ignore the rest of the letter. If yes, the answer is the same for all users.
2. Simple, short, same for all users.
3. You can avoid doing this if you want. If you are doing it, you're signing up to take on the additional burden of informing your users; consider that when making the decision. This is the only bullet in the list that is in any way burdensome, as you will need to update this text in your automated response whenever you take on 3rd-parties (if at all).
4. Simple, short, same for all users.
5. and 6. are "if" conditionals that you shouldn't be doing. The answer should be "No".
7. Amounts to "has my data been hacked". If yes, that's unfortunate, but obviously you have a moral obligation to respond here regardless. Presuming you're hacked once, you provide full details once and send automatically to any users who ask.
8. and 9. are out of place. GDPR doesn't require you to respond to these questions within this quoted 1 month time limit (you do have to have what's detailed within them in place to comply with GDPR but that's tangential to info requests). These seem to have been put into this blog post as extra scaremongering.
* by "well-meaning" I basically mean "not selling all of your users' personal data to myriad nefarious 3rd-parties"
Pretty much everyone is going to. Google Analytics, Zendesk, Salesforce, and more all qualify. Hell, even AWS qualifies...
> 5. and 6. are "if" conditionals that you shouldn't be doing. The answer should be "No".
Why do you say that? Given that we're discussing technical companies, I fully expect that automated decisions will be made.
> 7. Amounts to "has my data been hacked". If yes, that's unfortunate, but obviously you have a moral obligation to respond here regardless. Presuming you're hacked once, you provide full details once and send automatically to any users who ask.
And "detail all your security measures". Which, for a small company that doesn't have an InfoSec group, probably means next to nothing. An admission that feels a lot like liability...
> 8. and 9. are out of place. GDPR doesn't require you to respond to these questions within this quoted 1 month time limit (you do have to have what's detailed within them in place to comply with GDPR but that's tangential to info requests). These seem to have been put into this blog post as extra scaremongering.
It's the sort of thing an angry consumer might do, and most startup founders subject to GDPR are not deeply knowledgeable about it.
I worded this badly. This is optional on a case by case basis, i.e. there's a cost-benefit to using each 3rd-party, and this burden is worth considering for each. It's still not a massively onerous burden tbh if you do use a lot of 3rd parties.
> And "detail all your security measures". Which, for a small company that doesn't have an InfoSec group, probably means next to nothing. An admission that feels a lot like liability...
I'm sorry but if you're really defending companies with no competent security measures in place, regardless of size, I think you're in the wrong forum here. If you are a commercial entity of any size there should be moral hazard in ignoring security of your users' personal data.
> It's the sort of thing an angry consumer might do, and most startup founders subject to GDPR are not deeply knowledgeable about it.
Exactly. And unlikely to be more knowledgeable if they're reading misleading scaremongering articles like this on LinkedIn!
I'm up close and personal with a vendor assurance process right now. It's often a non-trivial amount of time for any given vendor.
> I'm sorry but if you're really defending companies with no competent security measures in place, regardless of size, I think you're in the wrong forum here. If you are a commercial entity of any size there should be moral hazard in ignoring security of your users' personal data.
I'm sorry, I worded this badly. I'm saying that small startups have a tendency to prioritize getting a product working and seeing if it's worth investing heavily in before standing up a strong information security unit. You're absolutely, completely, 100% correct that there should be incentives to be very careful with user data.
I think it's possible to see where some people might find the level of expense and expertise required to be appropriately careful somewhat scary. I can even see where some people might decide to not create a social media startup to challenge Facebook because of this fear.
People keep sharing that “nightmare letter” link but won’t point out which question gives them nightmares and why.
Second, a list of everything across all types of storage in any and all systems stands out. Even large companies often lack the ability to search ZenDesk, Salesforce, email, AWS S3, and Slack logs all at once.
Third, there's a clause that asks quite specifically for a thorough list of any and all potential future plans. That's a lot, especially given how startups are subject to pivoting.
Fourth, the section about third parties is essentially asking for the outcome of a vendor assurance process. A lot of small companies can't pass a reasonable vendor assurance process. They often can't afford the time and assurance specialists to manage one for their vendors. Even large companies often have trouble maintaining the level of control required for thorough vendor assurance. The bit about legal reasoning implies the involvement of a lawyer as well.
Fifth, there's a strong implication that no matter what you might say in response, it's not going to be good enough. There's always something that can be pointed to as not enough.
With all of the above combined, I can see where some might view GDPR as intimidating and favoring big companies over small ones through sheer costs.
There is a standard way in which "reasonable" regulations kill small companies. It works like this. You impose some small burden, say an hour of labor a week. That alone won't destroy a small company, but it is not the only rule in the world. That rule takes an hour, another rule an hour and a half, a third rule half an hour. By the 60th rule, a two-person company is already sunk. Even if every individual rule is nominally reasonable, the combination is hopelessly destructive.
The problem with tech companies is the rules don't just add together, they get multiplied by the user base, and it's entirely common for a very small company to have ten million users.
So you take a letter like that. The first time you get one it will take you a week to figure it out, but over time you get the response time down to an hour. Only with 10 million users, if 0.1% of the users make such a request per year, you're looking at 27 of those every day. That's more than three full time employees doing nothing but that. For this one "reasonable" regulation.
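To make that arithmetic explicit, here is a back-of-envelope sketch using the same (hypothetical) numbers as above: ten million users, 0.1% of them sending such a letter per year, one hour of handling per letter once the process is streamlined.

```python
# Back-of-envelope cost of handling subject-access letters,
# using the hypothetical numbers from the comment above.
users = 10_000_000
request_rate = 0.001        # 0.1% of users per year
hours_per_request = 1.0     # once the response process is streamlined
work_hours_per_day = 8.0

requests_per_day = users * request_rate / 365
fte_needed = requests_per_day * hours_per_request / work_hours_per_day

print(round(requests_per_day, 1))  # ~27.4 letters every day
print(round(fte_needed, 1))        # ~3.4 full-time employees
```

The point isn't the exact figures (all of them are assumptions) but how linearly the burden scales with user count rather than with headcount.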
- the requirement to have a DPO. Based on the requirements for the DPO, no one in the company can fill the role (conflict of interest), so we must hire an employee or consultant (expensive either way for a small startup)
- one month to respond. That's a lot of information to collect the first time, and I might have other fires to put out (or I have to be proactive and keep a prepared response, which has to take the place of something else important to do)
- the sheer amount of information to collect. In the age of plug-and-play solutions, that's a LOT of things to audit (Mailchimp, AWS, GA, Heroku, various Wordpress plugins, a logging solution whose name I don't even remember, just to name a few)
- tracking every single piece of personal information (PI) about a user. If your systems are not built for this, it's going to be lengthy. If they were created before the GDPR, they probably aren't.
- tracking down the usage of that PI may be complicated depending on the expected scope and what you do with it (fortunately for me, there are no ads and no data reselling, so really only the scope is the problem)
- some of the processes asked for carry the serious implication that you already have certain teams and practices in place. This is not feasible for a small startup.
It boils down to: it takes time, and time is something I'd rather use for something else, and it also requires doing things with huge fixed costs that a small company can't absorb (at least not until there is a ready-made solution).
I define a small startup as one with fewer than 20 employees that might have received seed funding but not more. These points might not all apply to a new startup created with GDPR in mind.
We are mostly fine with the spirit of the GDPR, it's the work we have to do to follow it to the letter which is a problem (and the lack of process internally).
The FDA makes the medical field hard to break into for startups, but for good reason. New medical devices need to go through rigorous verification and validation to show that they work as intended. If a company making pacemakers had the same "move fast and break things" attitude as most of SV seems to have, I might never trust medical companies again. As a consumer, I'm extremely content with the quality of pharmaceuticals and devices, and I wish I could trust Facebook or Google as much as I trust Medtronic or Philips Healthcare.
I don't see a valid comparison between what people *willingly* post online to public forums and their personal health ledger.
The thing is that none of those "anonymized" subjects would have ever been asked for consent if they really knew about the consequences.
Such behavior has really, really bad real-world implications: when I had a knee operated on, one of the questions on the questionnaire you need to fill in is whether you agree that your data can be shared in anonymous form for research. At that point (and given that this was a fairly benign condition) I didn't see a problem with consenting.
After that revelation about what Facebook was up to, my answer in the future is a clear NO!
Facebook handling medical data. What could ever go wrong with that?
If you cannot comply with privacy rules you should not do social media, whatever your growth phase.
Everything is as in this citation from the GDPR:
"Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes
... implement appropriate technical and organisational measures ..."
2. The essential ban on offering services, downloads, etc. in exchange for consent to use data reduces consumer autonomy and will decrease the availability of free resources.
3. It will be extremely easy to use SARs maliciously, and the law includes NO check whatsoever on this. All it would take to cripple many SMBs is for some jerk to spin up a website that provides a nasty SAR template (that the users don't even realize is such a burden) that random people on the Internet can auto-send to every business they've ever used under some innocuous-sounding reason like "See what information businesses have on you!" 99% aren't using data against subjects' interests, so the net effect of this alone (in the way it is designed) is potentially-immense costs for small benefits.
As a recommendation, the $250 my company spent on buying me a membership to the IAPP has been one of the highest ROI decisions in recent memory. It has saved me a ton of time and effort (and the company quite a bit of money) from the member resources available, and the members listserv is essentially free light consulting from people who have already dug into everything.
Yet... I've read through GDPR. All ninety-nine articles are chock full of "reasonable measures" and similar verbiage. Unless you can afford a compliance specialist - which isn't automatic for a new player - it's intimidating as all hell. What are reasonable security measures, as seen by a careerist somewhere in Brussels? The text is silent on what exactly that means.
It's possible that respecting users and having good intentions may not be enough...
Most important is to document everything. Have a design history file that you can show in case you get audited. When you design your software, save your designs in the DHF. When you update or make changes to the design, put that in your DHF too.
For each GDPR article where it makes sense, have it written down somewhere how you are compliant with what it asks for (you probably don't need to demonstrate compliance with Article 4, but you should have it written somewhere how you are compliant with all the points in Article 5). When it says "Personal data shall be: (b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes", you should be able to produce a document that lists the various kinds of personal data collected and how each is used; e.g. "Username: The username serves to associate a person's login id to their profile. [... other details] Profile Picture: The profile picture serves to display an image of the user. [... other details]."
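One cheap way to keep that kind of data-purpose document maintainable is to store it as structured data and generate the prose from it. A minimal sketch (the field names, storage locations, and retention values here are all hypothetical examples, not anything the GDPR prescribes):

```python
# Hypothetical Article 5 data inventory: one entry per kind of personal
# data collected, with its purpose, where it lives, and how long it's kept.
data_inventory = [
    {
        "field": "username",
        "purpose": "associate a person's login id with their profile",
        "stored_in": ["primary database"],
        "retention": "until account deletion",
    },
    {
        "field": "profile_picture",
        "purpose": "display an image of the user",
        "stored_in": ["object storage", "CDN cache"],
        "retention": "until replaced or account deletion",
    },
]

# The human-readable compliance document is then generated, so it can
# never drift out of sync with the inventory itself.
for entry in data_inventory:
    print(f"{entry['field'].replace('_', ' ').title()}: "
          f"collected to {entry['purpose']}; "
          f"stored in {', '.join(entry['stored_in'])}; "
          f"retained {entry['retention']}.")
```

The same inventory can then double as the source for answering subject-access requests, since it already enumerates every kind of personal data you hold.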
When it tells you to have reasonable security measures, then document what your security measures are. "This data is encrypted" or "This data is saved on an external server disconnected from the internet and only accessible by someone with a dongle". If you're still worried that your user data could be insecure, then it might be worth hiring a security specialist to check it out.
With all that said, my point was that it's not obvious what is and isn't reasonable. Hiring a security specialist won't necessarily help you understand what bureaucrats will or won't deem reasonable, especially when there's no history to provide context.
It's your best guess what is and isn't reasonable. As long as you've documented what you did and why you did it, then you've satisfied that requirement. If an auditor finds what you've done to be insufficient, you'll probably get a warning but you'll still be considered compliant for having done something.
I know it's not a satisfying answer and I'm sorry that I don't have a better one, but complying with regulation is not as definite a "yes/no" as programming.
My knowledge is with the FDA so I'll give an example I'm familiar with. I worked with CT scanners and we needed to do verification/validation. The FDA requirement was to the effect of "must define reasonable requirements for the device" and "must set up testing procedures that reasonably demonstrate that a device can meet its requirements" and so the team I worked with set requirements like "radiation dose: <20rad when run on [x] setting" and then tested it at [x] setting 5-20 times, then documented "passes radiation dose test with 99% certainty, which exceeds our cutoff for passing which is 95%".
CT is an old industry so there was a bit more to it than that, but we were still following requirements that we had written, and testing them with procedures we had made. The point is that requirements even in the health industry can be vague, so you really just have to do your best to come up with something reasonable.
And because it's vague, that's also why it's so important to document everything.
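To make the CT example concrete: a "passes with 99% certainty" style of claim can be sketched by fitting a normal model to repeated measurements and computing the probability a run stays under the limit. All the numbers below (the dose readings, the 20 rad limit, the 95% cutoff) are made up for illustration; a real submission would justify the model and sample size too.

```python
import math
import statistics

# Hypothetical dose measurements (rad) from repeated runs at setting [x]
doses = [18.2, 18.9, 18.5, 18.7, 18.4, 18.6, 18.8, 18.3, 18.5, 18.6]
limit = 20.0       # requirement: dose < 20 rad on this setting
cutoff = 0.95      # our own written pass criterion

mu = statistics.mean(doses)
sigma = statistics.stdev(doses)

# Under a normal model, P(dose < limit) via the standard normal CDF
z = (limit - mu) / sigma
p_pass = 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(p_pass > cutoff)  # True: exceeds our documented 95% cutoff
```

Note that both the requirement and the pass criterion are things the team wrote itself, which is exactly why documenting them matters.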
Fuck the users! They're not the clients.
I am starting to look at privacy like it should be treated as a public safety concern, since it’s invisible to people until it’s not.
Yeah, it can and it did. Not only did the Ashley Madison leak lead to a few deaths; check out what happens in countries where homosexuality is punished by death when private information goes public...
I assume you're aware of this:
That said, if a fire occurs in my shelter and I don't have sprinklers installed, I could die.
We don't have mandatory sprinklers in home offices and undeveloped land and buildings that are still under construction.
The problem with the equivalent distinction in software is that there is no clear point that software is "finished" like a building is. The architect doesn't come back and make changes a year after the occupants move into a building.
If there is no exception for new code still under testing then there is no way to test new code. But if there is, everyone will live their lives inside of it.
I am very curious to see what happens to EU ad revenue after GDPR. If it doesn't drop (outside of Google & Facebook's internal platforms), I'm guessing there isn't much GDPR compliance going on.
Facebook simply needs to give way to a more evolved and humane way of "social networking".
As a European living abroad, I still have no idea if I will be protected by the GDPR. I have read at least two opposite answers on HN in the last few days: "yes, because you are an EU citizen"; "no, because it's where you are when the data is collected that matters".
Art. 3 of the GDPR ( https://eur-lex.europa.eu/eli/reg/2016/679/oj ) explains it all. It only applies when either the person or the company is in the EU (or maybe the EEA). So EU citizens in the USA aren't covered.
Does "in the Union" mean within the geographic borders of EU states?
Does "established" mean having a physical presence? Having been incorporated? Registered with a regulatory body? Having remote employees who live there?
If either of those, or both, you're covered. If neither, you're not.
Facebook needs to be broken up and an American GDPR codified into law. If you care about this, pick up the phone and call your Congressperson and Senators.
Available here: https://www.facebook.com/help/131112897028467
Scale and trust mostly
Nothing. But Google Plus already had your name and a lot of information on you, yet still failed. Ello had a ton of hype around it and people signed up, yet no-one really stuck around.
I was under the impression that most people fully expect (even if they disdain) free web services vacuuming any and all user data for advertising profit.
Is this data selling/ad targeting a surprise, or rather is it just finally enough to make you leave or get upset even though you knew that was the business model all along?
Also, are you quitting other web services that operate ad based, data driven revenue models like Google, Reddit, Twitter, etc?
This is a genuine question not a sarcastic comment.
It is even harder for most people to understand the implications of even small amounts of data collection.
I see no reason or logic in nations that have not opted into this being subjected to laws that are essentially created by no-skin-in-the-game bureaucrats.
Such attempts should be opposed at all costs.
(I know this "urging" is supposed to be "voluntary action" by Facebook, but it nevertheless stinks of the same white man's burden colonizers talked about.)
And I presume Facebook has so much data on you that a simple trick like changing your profile won't matter.
The main problem is that EU legislation is complex and subject to interpretation for which we have no precedents. Such legislation is easily exploited by authorities to silence opposition. As Napoleon never said, "A Constitution should be short and obscure." GDPR is long and obscure. That leaves even more power to the executive.
This situation benefits large corporations - such as Facebook - who can afford an army of lawyers and can deploy resources to legal fights in any country. Small actors with dissenting views are hopelessly disadvantaged in this kind of a setup, and I forecast that we'll see authorities shutting down blogs and websites using GDPR as their tool.
I'm not sure what this has to do with free speech though. Many laws (in any country / federation / commission / union) are long and complex. Not all are related to speech and/or freedom thereof. GDPR is not.
I take your point that complex laws favour the legal establishment and large corporations that can afford them, but again... what does that have to do with free speech in the context of the GDPR?
There seems to be no argument here... Is there something in GDPR I'm missing?
If they're shutting down blogs, there's 2 possible reasons for it:
1. The blog is using a non-compliant commenting system. This may be hand-rolled or 3rd-party: in either case, disabling comments is a common-sense measure to stay up. No legal complexity of any document should obscure the simplicity of this solution.
2. The hosting company hosting the blogging platform is non-compliant and gets shut down completely. In this case, your argument re: the company being small and not understanding legalese hopefully shouldn't apply.
If they're shutting down websites, those websites are offering a user-oriented service of some kind, and should get their act together w.r.t. understanding the legal implications of doing this, no matter how small they are.
If you're not processing user data, you're not a target. Exercising free speech does not require processing user data.
There are other possible reasons. Like "we don't like it".
I live in the country ranked least corrupt in the world on Transparency International's corruption perceptions index. Still, we have a "black list" of web sites that the police distributes to Internet access providers to block users from accessing. The supposed legal basis is stopping child pornography, but the mechanism is also used to block sites that criticize the police and contain no pornography at all. And there is no legal mechanism to challenge the police and stop them from doing this.
GDPR gives many additional tools for authorities to perform censorship like this.
Strangely, none of these critics ever seem to consider the opposite cases: what if a person was wrongly accused of murder, but was later found innocent? Old articles about his "suspicion for murder" should either be rectified or deleted. What is more important: to prevent an innocent person from being punished, or to be able to punish a legit criminal?
Or let's take something more mundane. If you posted embarrassing party photos while you were a teenager, and some site made a copy of those photos, shouldn't you be able to have them removed?
(Mind you: this particular critique on GDPR wasn't valid in the first place. GDPR article 17 states that the right to be forgotten does not apply "for archiving purposes in the public interest", among others.)
See also my other comment in this thread about the plastic surgery meme, "The only thing you’ll ever have to worry about is how to tell the kids". The meme is false but it ruined the woman's career.
The legitimate cases are rare.
88.7% of requests were by private persons. And even excluding those, only 20.9% of the remaining requests were made by government institutions or politicians.
Please don't spread lies, thank you.
Let me give you a real world example. You know that meme about plastic surgery, "The only thing you’ll ever have to worry about is how to tell the kids"? The woman in that meme in fact did not have plastic surgery, but most people thought the meme was true and was about her, without researching the truth. It ruined her career. https://nextshark.com/heidi-yeh-chinese-family-plastic-surge...
You can't treat the Internet as an append only database where you can rectify things by publishing more stuff. The human mind is bounded rational and most people only look at the first Google search results page.
Even if it's ranked high, a lot of readers would still end up with this feeling that "yeah the latest news article say that but MAYBE that guy DID do something wrong... let's not hire him just to be sure", i.e. "where there is smoke there is fire".
I just gave you a practical example about the plastic surgery meme. News articles about how the woman was ruined rank nowhere near as high as the meme itself.
GDPR is a threat to freedom of speech while not changing much in terms of privacy, as the worst actors are governments themselves. Edward Snowden's revelations are 100x worse than whatever worst-case FB scenario you are picking.
GDPR sets a bad precedent with local laws impacting foreign businesses. By this logic, why shouldn't Chinese speech laws apply to EU and US companies if GDPR applies globally?
The U.S. set that precedent 2 decades ago with the DMCA.
Don't like it? Then don't deal with EU citizens and residents. Don't like that? Fine, just don't go to the EU, or have any assets in the EU.
Americans' national identity is wrapped up in the founding of their government, and throwing off "oppressive" European laws is at the center of that narrative.
Laws carry their culture. The GDPR is, from an American perspective, an overworked mess designed to support a big bureaucracy. On this side of the Atlantic, we'd do something slimmer, more reliant on privately funded cases (and regulatory complaints) than on public ombudsmen, and better attuned to start-ups' needs.
IMHO, as an EU citizen, an American perspective would be welcome. A text must fit the local hierarchy of norms and the local judicial system.
You may read this article related to this point of view: https://www.economist.com/news/leaders/21739961-gdprs-premis...
Both of these things are out of scope for you, no? I suppose you could vote with your shares if you own stock in Facebook...
Perhaps a global data protection framework can largely conform to GDPR, but it clearly has to be decided in the US.
Meaning: not absurd at all.
1. There might be significant internal hassle involved in working with more than one standard of privacy within a single company whose currency is user trust and whose pillar is its user base. It may be easier for them to just use a common standard, even if it is more restrictive, at least if it is backed by a population of 500 million people.
2. Facebook is going through a PR crisis and needs to dig out of that hole somehow. If they only adopted EU regulations in the EU, it could be seen as Facebook doing the least they can to protect the privacy of their users; rolling it out globally is seen as the better move.
It's funny how GDPR coincides with the Facebook scandals as of late, though... Their lawyers and engineers have got to be buried with work. I can't even begin to imagine what a company of Facebook's scale and business model need to do to support GDPR globally.
The GDPR has been on the table since 2012. That's a very loose coincidence.
In the automotive industry, California emissions and efficiency standards became the de facto standards in the US mainly because it wouldn't make economic sense to maintain both a California model and a more polluting model. Also, better fuel efficiency is generally seen as a good thing by consumers.
The EU has a population of 510m. FB is blemished by privacy concerns. If Facebook makes a GDPR-compliant version, it wouldn't be unreasonable of them to roll that same version out to the rest of the world.
Facebook isn't only a US company. Facebook Ltd is a UK company and they have many more companies around the world.
If you want to operate in UK and generate revenues there then Facebook Ltd must follow UK laws.
That's not true. Read the 23rd point right at the top: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CEL...
Here's the part of it that covers your webserver: "Whereas the mere accessibility of the controller's, processor's or an intermediary's website in the Union, of an email address or of other contact details, or the use of a language generally used in the third country where the controller is established, is insufficient to ascertain such intention, [...]".
> envisages offering services to data subjects in one or more Member States in the Union
They just have to prove you are considering the EU in your app. It can be anything: having EU timezones, or a country input listing EU countries, is enough to prove intent to serve EU residents. If you collect IPs via your web server, you are infringing.
Given that it's still April, there's literally no way for you to know that. Also, the sentence you're quoting starts with "may make it apparent" not "does make it apparent".
Having said that, if you're building a service that lets people select EU timezones, countries, currencies and so on, you're probably going to have a hard time proving that you're not providing goods or services to Europeans (because you probably are). If you're providing goods or services to Europeans, GDPR applies.
Under the US interpretation of free speech political donations are protected as "speech" and politicians can go on TV and say they want someone to be murdered and not face any consequences.
I'm German so you can imagine why I fundamentally disagree with that notion, even if our laws are sometimes a bit too strict (though that often has more to do with post-WW2 denazification than free speech in particular -- e.g. not being allowed to put nazi symbology in video games, not even as enemies).
UK libel laws and their advertising code are another example of European laws being a bit too strict. But even that is something I'd prefer over the "law of the strongest" in the US.
EDIT: Free speech is obviously a great idea and an important right, but the problem with freedoms and rights is that they can't be absolutes when you live in a society with other people you want to share those rights and freedoms with ("your liberty to swing your fist ends where my nose begins"). Additionally, some of those freedoms and rights are mutually exclusive, so you need to define an order of precedence. Even free speech absolutists generally draw the line somewhere (e.g. violence generally isn't considered speech even if it is a form of expression, and few people would defend the right to shout "fire" in a crowded building without facing the consequences of the resulting mayhem).
In other words "being willing to defend free speech" is a meaningless platitude unless you first define what you consider the acceptable limits of that freedom.
*Terms and conditions may apply.
The First Amendment protects you from the government. Facebook censoring you is not prohibited by the First Amendment. More broadly, I don’t see how GDPR interferes with one’s right to lawful political speech.
But I still don't see the connection with the first amendment.
Edit: come to think of it I'm not even sure it protects them, but again, it certainly doesn't require them to store or transmit anything.