This is simply the extent to which we've permitted these Internet giants to collect information about us. It's business as usual.
Edit: To clarify, this is indeed worse than if the data were taken from Facebook without consent. What it means is that not only does Facebook have access to vast troves of personal information, but so does everyone tangentially connected to someone with a Facebook developer account.
> Can we not let this become framed as a "breach"? No
> systems were compromised. Nothing of Facebook's was
> accessed that wasn't supposed to be accessed. This was
> data intentionally exposed by Facebook, just exfiltrated
> and given to an entity whom Facebook hadn't authorized.
As an aside, a HIPAA-style law that protects and enforces portability for this type of personal data might be a good first step to reforming our industry here, which is currently completely unregulated in this regard.
HIPAA data is accessed by researchers, sometimes anonymized, but not in all cases. These are not considered breaches. In addition, as others indicate, FB posts are not, at least at this time, protected data.
So, while illustrative, the analogy is not apt.
We're seeing a divide between the technical and popular interpretations of the term "breach". When an industry drops the ball and responds pedantically, that's a strong sign that further regulation is needed. If only to force a common language.
Facebook insists they were not "breached" because many states require notification in the event of "security breaches of information involving personally identifiable information". Each body of law defines "breach" differently. Most do not limit it to technical security malfunctions.
We already have plenty of regulation here that Facebook is unambiguously subject to; the question is whether the relevant authorities will actually follow through on that.
For what it's worth, it's been two days, and we're already seeing an FTC investigation and a Congressional investigation, so it's a little premature to conclude that existing regulation is insufficient.
In order to receive data protected under HIPAA by a covered entity, you have to go through an extraordinarily elaborate and complex legal process. In addition to signing an agreement that (in effect) binds you to all of the same restrictions on the data that the original covered entity (e.g. hospital/insurer) was, if you're accessing the data for research purposes, you'll have to go through an institutional review of your intended purpose and methods for the research.
Facebook does none of these, which is why they have been (rightfully) criticized for conducting unbelievably unethical studies without either user consent or institutional approval, even though both of those are typically required by all reputable universities and publishers for research.
Facebook posts are not protected under HIPAA, but they're not entirely unprotected either, and it's totally valid to refer to that breach of responsibility and trust as a breach.
 e.g. https://www.washingtonpost.com/news/morning-mix/wp/2014/07/0...
It's not Russians hacking in, it's not part of some effort to destabilize democracy, etc. That characterization and demonization is indicative of the mindset of those people, and that may even pose more danger than the breach of trust by Facebook.
True! Mostly it was information about users and their social graph collected by people voluntarily. It's distressing that people were not informed, "We're going to use this to target political propaganda at you," when they took personality quizzes/etc, but all the data was shared by users. FB's security wasn't breached, merely their users' trust.
> it's not part of some effort to destabilize democracy, etc
I'm not sure we all agree on that. ;) The whole point was that one can use the intelligence gleaned from these users' social graphs to target memes/advertising/messaging to specific subgroups whose political responses you are hoping to influence.
I'll avoid the word "hacking" since it's used to mean a lot of different things to different people, but it absolutely could be part of an effort to destabilize or undermine (US) democracy.
What we've seen is definitely a breach of responsibility and a breach of trust. It's also probably a breach of the law, since the data Facebook collects is still subject to some protections (and it's hard to imagine how Facebook could have done all this while adhering to those). And while we don't yet know the motivation or intentions of the people involved in these actions, it could very well be motivated by an effort to destabilize or undermine US democracy. I don't see why you think those are mutually exclusive.
Do we know what data was harvested? Because if it's data that's supposed to be private, then yeah, that's some murky business. If it's public info, or info that can be accessed if you give an app permission to log in, then is that really a "breach"?
I mean, it's terrible and CA was definitely misusing it, but if I install an app and it asks for permission to use my location and my contacts, and I grant them, is that a breach of trust and a breach of the law on the Apple/Google front? What should Apple/Google be doing to protect my privacy?
Legit questions here; I do hope something is figured out and fewer people fall into this kind of trap. I've heard of Android games whose actual purpose is to harvest a ton of personal info. Apple seems to vet its apps better, and maybe that's the solution-- Facebook should vet third parties better (Google should too, before something like this hits the fan).
What data was being protected? The data was created when the user chose to engage with the Facebook apps. CA pays Facebook to put something in front of users' faces and then CA gets back information on user engagement. How is that different from any other kind of advertising on the web?
We can argue that there needs to be more transparency on facebook but a breach? That's torturing the word.
Personally-identifiable information. Many states require notification in the event this data is found to have been accessed improperly. The definition of a "breach" is not limited to technical malfunctions.
We might say that you can't sign away the secrecy of your PII, so user consent is irrelevant. Then we had better get on YCombinator, Stack Overflow, Medium, etc. for allowing prominent community members to use their real names on their posts. Someone could use them to train statistical models for who-knows-what purpose, after all.
Whether you believe them is another matter.
> This is similar to a HIPAA "breach" where the word doesn't imply that a security system was compromised, but that protected data was accessed by folks who shouldn't have had it.
Protected data, in the context of HIPAA, would refer to Protected Health Information (PHI).
One of the big weaknesses of HIPAA is that the privacy requirements technically apply to the data custodians, not the data. That allows for some loopholes through which private information can fall out of HIPAA protection, and also creates some unnecessary hassles for health care providers.
Ontario's PHIPA is one example of a better model for patient privacy.
Facebook handed over the data. They need to understand that they don't have control over it once it leaves Facebook. Is a violation of ToS a data breach? Do we really want to conflate those things?
That Facebook would rather not call that a breach so much as "business as usual" is all the more reason legislators may be inclined to define "breach" the way that voters do.
The point I'm trying to make is that there's a difference between an isolated attack (e.g. Equifax) and what Facebook has going on here. To the person who reads about a "data breach at Facebook", it does sound like this was an aberrant event that happened suddenly — rather than systemically, by a machine built on doing this every day.
Cambridge Analytica's actions may illuminate how far this can go, but we should treat it as the norm — and regulate accordingly.
The distinction may be very subtle, but it's important to know that following the 25th of May, businesses can no longer claim to be "in the process" of implementing the GDPR -- they have already had two years to prepare.
Data breach is a compound noun with a very specific meaning in information security. It means that the data was protected, and a malicious entity defeated the protections.
Breach of contract, breach of trust, physical breaching of the hull of a ship, etc. are all different usages of the word breach, but it's not a data breach unless someone accessed a protected system without authorization, or in excess of authorization, as defined by the CFAA.
It's not, at all. The FB API was designed to give out this information before it was changed. That means the friend data was not need-to-know like healthcare data.
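To make the mechanism concrete: below is a rough sketch of the kind of request a third-party app could make under the old, v1.0-era Graph API once a single user had granted it friend-data permissions. The endpoint shape is as I recall it, and the specific field list and token are illustrative assumptions, not a reproduction of Facebook's actual contract.

```python
# Hedged sketch: how one consenting user's token could pull friend data
# under the pre-2015 Graph API design. Field names are illustrative.
from urllib.parse import urlencode

def friends_request_url(access_token: str) -> str:
    """Build the kind of request an app could make with a single user's
    token once that user granted friend-data permissions (e.g. friends_likes)."""
    params = {
        "fields": "id,name,likes,location",  # illustrative field list
        "access_token": access_token,        # hypothetical token
    }
    return "https://graph.facebook.com/v1.0/me/friends?" + urlencode(params)

print(friends_request_url("HYPOTHETICAL_TOKEN"))
```

The point is that the friends themselves never appear in this flow: one user's consent was the only gate on everyone in their graph.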
An academic who has done some great work on this is Evgeny Morozov. Highly recommend his books, articles and lectures.
The massive industry that has been built around advertising and personal data trading needs to be regulated.
I specifically want to avoid the Equifax comparison because it looms large in people's minds as an example of an intrusion and forceful removal of data, which is not what occurred with Facebook and Cambridge Analytica. We should have better laws around protecting sensitive data from intruders, too, but they won't be the same laws prohibiting companies from selling data they've collected on us. Conflating these problems will not help us solve them.
Was this a breach in trust to Facebook users? I think undoubtedly yes.
And was there a breach of the Terms of Service by companies taking all this data and using it for non-academic purposes? Yes there was.
So the type of breach seems to be a worthwhile distinction to make.
What's interesting about this is the fact that the same data is shared with many third parties, with proper "consent", and users not understanding what's really happening. Calling this a "breach" has the unintended side-effect of promoting the idea that this company received a different dataset than other partners, which is not the case.
There's a legal concept of 'waiver': even if something is prohibited in a contract, if the parties don't enforce that provision, it can later become unenforceable. Facebook was fully aware of this behavior, chose not to enforce the ToS, and therefore waived that clause. Therefore no breach.
How naive is the average person? The purpose of Facebook is to gather this information, hence why it's offered as a "free service".
Frankly, I don't understand why the stock is going down; Facebook is fulfilling its core mission: get private information on millions of people and package that information for sale to its clients. If anything, the CA situation shows how well FB is fulfilling its core mission.
The fact that the public is now waking up to this is not a breach, it's simply casting a light on what has always existed.
The public waking up to this breach and the costs being exposed are probably a huge part of why the stock is dropping. Facebook's continued profitability and success is dependent on its users not understanding how their data is being used. And now "everyone" knows, so the secret is out and hopefully Facebook can't get away with this going forward.
Metaphorically, somebody had a gun, and someone else took that gun and used it to rob a bank. Equifax left the gun sitting visible in an unlocked car, and people are angry about the predictable results. Facebook was running a "borrow my gun" program for strangers, but had a clause saying "no using my gun for crimes, no lending my gun to any third parties". One of those strangers lent the gun to the robber, and Facebook is saying this isn't their problem because they said not to do that.
So yes, they're both bad outcomes. But "breach" usually means "this was stolen without our knowledge", and that's a very misleading impression to create here.
The only difference is that instead of the baddies having to sneak in carefully at night to nick stuff, Facebook said 'welcome, come on in, help yourself – here's a sack'.
The end result – millions of people having their personal data used against them without their knowledge or consent - is the same.
This is far worse than if the data were taken from them unwillingly, because it vastly increases the number of entities with unfettered access to it.
It’s time to update the definition. “Breach” means you lost my shit. I thought I gave it you in confidence and then you lost it. Facebook arguing “this isn’t technically a breach” comes across as their yet again talking down to users to slip problems under the rug.
This isn't like the Equifax breach. It's not a result of Facebook's security practices. It's a result of Facebook's entire business model.
This can be a 'breach' by many of these definitions.
You're basically saying, "Words only mean the things that I want them to mean, and if you try to use them a different way than I approve, then I will use this meme to try to shut you down."
Words fluctuate in meaning all the time. This may very well be the beginning of a new definition for breach, i.e., a social data breach, for example.
But we don't even have to go so far as to claim that this is a new meaning for breach. Any of these old definitions contains sufficient meaningfulness to make "Facebook loses control of data to unauthorized breach" perfectly intelligible.
Sure, but the point being made by the "it's not a breach" people is that Facebook didn't lose control of data to an unauthorized breach. They gave up data according to their own documented and expected procedures to people who were supposed to have it. "Facebook voluntarily and purposefully gives away data in an authorized breach" is not so intelligible.
The fact that "Facebook loses control of data to unauthorized breach" would be a sensible, understandable sentence isn't really relevant when nothing of the kind has happened. Who'd be using that sentence?
Did Facebook have control over its (my? your?) data at Cambridge Analytica or not? I thought the extra 50 to 250 million profiles scraped were unauthorized access?
I could be entirely mistaken.
checkyoursudo, I don't want to shut anybody down. I get your point.
And I am sure that in a world of haveibeenpwned.com and Equifax you get mine.
Let's focus on the real issue here. Facebook has data that:
- Can harm everyone
- Is not protected well enough
Facebook's responsibilities and Cambridge Analytica's responsibilities towards data protection have been breached.
There's no other useful word for that. It might not be a hack and it might not be a security vulnerability, but it is surely a breach.
> A personal data breach means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This includes breaches that are the result of both accidental and deliberate causes.
Most of the breaches I'm familiar with are accidental - people putting their research on thumb drives and losing them, etc etc.
Whether the fox gets into the chicken shed, or you let the chicken out of the safety of the shed, it's a breach of the chicken's security.
This appears to have been systemic and profitable for them because companies would turn around and pay them for highly targeted ads. They ignored it because of greed.
Let's say it like it is: Facebook betrays users' expectations by giving their data to other businesses.
Same for hacking: some people invaded system such and such and took private information.
It doesn't matter if it was a breach, a floodgate, a window, what matters is what happened, and what happened is that player X did Y. Let's just state that first and foremost.
Once in a while I reread http://www.derailingfordummies.com and review the definition of “horizontal aggression”. Sometimes it saves me from engaging with people who are derailing the conversation. Accidentally or willfully.
As I wrote previously, don't you think that it can be a breach in the same sense of a breach by phishing? After all, both of the cases are about people giving their "secrets" for one reason but the info being used for something else.
I mean, in the case of traditional phishing the user is tricked into providing their password to a site impersonating their bank, getting their funds stolen; in the case in question, the users are tricked into providing personal information by being promised some kind of personality analysis, but their data is used for political propaganda they didn't ask for, resulting in life-changing consequences due to politics.
It wasn’t a mistake. It was by design.
Anyway, the idea here is that CA breached Facebook users' personal data by methods quite similar to phishing, and FB looked the other way. Not necessarily by design, but maybe out of a desire to exploit the platform as much as possible, so they did not get in the way of people who were doing interesting things.
Look at all the examples of a data breach in this wiki. The CA/Facebook incident looks nothing like them.
CA either paid Facebook to collect data through apps or scraped data from public profiles. Maybe the CA/Facebook incident will change what we consider "breach" to mean, but right now "unauthorized collection of public data to create a political profile of users" is not a data breach.
Sounds like exactly what happened with CA and FB. People came for friends and fun personality tests, their information got into the hands of a propaganda machine. Definitely a breach.
As for the examples, do you want me to edit the Wikipedia article and add the CA/FB incident?
And as for your glib comment on editing the wiki article, you should read more carefully what I said. My argument was that the numerous examples of a breach in that wiki do not fit the CA/FB incident. Adding the incident to the list would do nothing to dispute that point.
The comments on this thread aren't generally dealing with the question of the applicability of that definition, so bringing that up doesn't help you.
I guess what you're really trying to get at is that you disagree with that definition. That's fine. But it's a very weak argument to appeal to an authority and then disregard the authority where it contradicts your position.
Maybe you need to edit the Wikipedia article ;)
BTW, not sure if this is the part you don't like, but the distinction between intentional and unintentional is tricky. For one, we'd have to pin down whose intentions we're talking about (the people controlling the data store that has been breached, or the people whose private information has been taken). Then, peer into the minds of people we don't know or, worse, try to determine intention for a corporate entity. If intent is part of the definition of a breach, then applying it would demand a lot of assumptions (or some kind of long, expensive process like an investigation and trial).
In the end, the impact on the people whose private information was taken is the same: their private information has been taken, en masse, without their permission, by someone they don't know, for purposes they don't know.
Did the sensitive data end up someplace it shouldn't? Yes? Then your data security was breached. The end.
But hey let's argue over the technical definition of breach rather than how evil facebook are and how much power they have - both of which are vastly more interesting to consider. I'd like to see some support of the not very, not much school of thought.
The problem is that Facebook just made its partners pinky swear to only use the data for research, which is obviously not an adequate data security measure.
Just because it wasn't a hack does not mean it wasn't a breach. To wit - a breach of data governance, breach of trust, breach of moral responsibility.
>the platform operations manager at Facebook responsible for policing data breaches [...] warned senior executives at the company that its lax approach to data protection risked a major breach
>One Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?”
>They felt that it was better not to know. I found that utterly shocking and horrifying.
The only thing shocking and horrifying about this whole thing is how naive the American public must be to find any of this shocking and horrifying.
It's a simple cost-benefit analysis.
Implementing effective security is difficult, time-consuming, and expensive. Ignoring problems costs nothing. Unless it's clear the cost of a breach is higher than the cost of security, corporations will risk a breach every single time.
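That cost-benefit framing can be made concrete with a toy expected-value calculation. Every number below is invented purely to illustrate the incentive structure, not drawn from any real company's figures:

```python
# Toy expected-cost model; all figures are hypothetical, for illustration only.
def expected_breach_cost(p_breach: float, cost_if_breached: float) -> float:
    """Expected annual loss from ignoring the security problem."""
    return p_breach * cost_if_breached

security_cost = 5_000_000   # hypothetical annual cost of doing security well
breach_cost = 50_000_000    # hypothetical fine + cleanup + reputational hit
p_breach = 0.05             # hypothetical chance of a breach in a given year

risk = expected_breach_cost(p_breach, breach_cost)
# risk == 2,500,000: half the cost of real security, so "ignore it" wins
print(risk < security_cost)
```

Under numbers like these, a purely profit-driven actor rationally skips the security spend; only larger penalties or a higher chance of being caught flip the inequality.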
The ultimate losers here are users, who bear the burden of having their data appropriated and misused. Unless the government steps in and imposes penalties on corporations on behalf of users, they'll continue merrily offloading the risks of poor data security onto the general population.
It's not even as simple as this. Sometimes, ignoring problems can actually be cheaper. Public perception, as well as government fines, will often treat companies more kindly if they were ignorant of the full breadth of their security issues than if they knew about them but did nothing.
It's a failing of our system to be sure. I've been asked to stop doing a security assessment halfway through, because once the client realized that the assessment wasn't going to just be "everything is 100% A-OK!", they didn't want it to be on record. If they were breached, they didn't want any paper trail of the executives knowing about the security vulnerabilities that could increase their liability in court. They preferred to be able to claim ignorance.
So that means it's not bad?
Why do “people love to pretend” that genuine outrage and the sincere desire to stop immoral practices doesn’t exist?
Many people sincerely care about what’s right, even if they fall prey to human flaws and cognitive biases from
time to time.
Perhaps those who talk about how “everyone” just “loves to act like” x and “virtue signal” y are merely projecting their own values on to the rest of us?
Take Congress for example. Approval ratings are what, 20%? They're generally seen to be corrupt, and they don't get anything done, right? So why aren't they voted out of office? Why are people surprised when they end up having low morals or corrupt? If people honestly cared, wouldn't they immediately demand change? But the status quo remains.
So either the people have no power to change things, or they collectively forget these things every day, or the real reason: they don't really care that much, but like to seem like they do.
Because the average approval rating of individual members of Congress in their own district (for the House) or state (for the Senate) is much higher. For most people, it's (some large subset of) the 532 members of the Congress that they don't get to vote for that are the problem.
For your congress problem it’s actually none of the reasons you listed. The cause of the discrepancy is the 20% approval rating is for congress as a whole, but people don’t vote for congress as a whole, they vote for individual representatives.
People do like their own representatives, and those approval ratings are
often very good in their own district. It’s the rest of congress they don’t like.
"Shocking" fact is revealed. "No!" "Yes!" "Whoa!"
It's a sarcastic way to say "what else is new?".
There's your list. Seriously.
I obviously can't share with you the list of specific clients I work for, but this attitude is pervasive enough that you should assume that any and all major corporations have this same mindset. All of them.
This is true for basically everything, even stuff that is typically acknowledged as sensitive. I've consulted for big financial groups whose customer service reps had completely unfettered access to SSNs, birthdays, and everything else they had on millions of customers. I would not have been surprised in the least to learn that some programmers in the company, either acting on their own behalf or acting at the request of a superior, were taking samples of this data for "unofficial" use.
Maybe the takeaway is that the SV brogrammer is not quite as special as he/she thought, and not exempt from the temptations that afflict the rest of us.
Was that not the case with AOL in the US in decades past?
Facebook might be one of the few organizations with the motivation and ability to set up two different regimes to contain the effects of GDPR on their practices.
In that case, I would love to know what their selection criteria are.
>  I want to see dancing monkeys and for that, I agree to have all my data shared with unnamed third and fourth parties indefinitely.
See? Everyone clicked that :).
It would be different if you had to pay for something, in which case you would have to agree to share your name, credit card etc. However, they still would not be allowed to share it with unrelated (!) third parties.
Seriously though, go buy that stock...it’s ridiculously under valued!
Now is not the time to scream "DUH!" or "I told you so!" to people who in the past have not grasped just what they were agreeing to. Now is the time to help your less tech-savvy friends understand the impact their data has in aggregate (like swaying elections!), and how they have been used by this system. Take advantage of all this bad press and help people you care about stop contributing to this machine. If ever you were going to get someone to stop using these services, these are the moments you capitalize on.
I'm going to be using this thread as a perfect example when people think I'm crazy for saying that SV often exists in its own bubble. The disconnect here, and the failure of so many people to realize something so obvious, is appalling.
Open a software job board. 50% of offers are by companies trying to optimize some data harvesting and analysis to better target some ads.
Ads, the direct child of propaganda. So to everyone working in those kind of companies: you're not better ethically than people working on missile software. I'd say you're worse because you can argue missiles can be used as deterrent.
I agree with your points, don't get me wrong! Spot on-- this optimized data harvesting is widespread and terrible, and ads are dangerous.
Yet, I think your analogy is a little bit much and takes away from your argument. Missiles' purpose is to kill people, they tear apart families, bring chaos to countries-- they are built with the explicit purpose of terrorizing at best, and ending anyone not terrorized at worst.
Ads are meant to sell things. Sure, they are terrible when used as propaganda, but they're still just meant to be an efficient way to deliver feelings+ideas, and one that can be escaped with skepticism and critical thinking.
I personally don't think that a Google engineer working on Google Maps, a YouTube intern helping with creator tools, or even a Facebook employee making face filters for Instagram is on nearly the same ethical level.
Ads at their best (aka, furthest removed from propaganda) are about informing people of things they would otherwise not know about. Think mom and pop shops, some new organization, or a science fair.
It's easy to paint things black and white, and there's a line that can be crossed in terms of tracking, optimization, and attempts to control the population/public opinion. IMHO though, I really do think engineers working on companies in the ad space are not as ethically removed as those working on machines meant to kill.
If you agree with those tropes, missiles are less dangerous than propaganda.
My comment may come across as exaggerated, but I think you need some shock value if you want people to start really thinking. Cognitive dissonance is hard to break, and rare are those who don't consider themselves good people.
Really? You expect every user to understand the extent of FB's reach into their personal information? You expect every user to understand the extent of how companies obtain that data, whether through purchases or covert harvesting? You expect every user to then understand the myriad of ways their data can be used?
We work in tech, but we're quick to forget that most users at best have a simple understanding of, "I mean, they have some of my data, I guess some of the ads match some things I've searched for on Google before." Joe Schmo never reads a ToS, and we can't expect people not involved in this industry to not be surprised when something like this happens.
10 years ago they dismissed people warning them about things like Google Analytics or other external scripts on websites. They used the "nothing to hide, nothing to fear" phrase. "You're paranoid, no one wants ordinary people's data."
Even better: doing it one week after complaining about how the GDPR and the Right to be Forgotten are bad EU laws.
However, Software development is not a Profession, in the proper use of the term. It is not self-regulating the way Medicine, Engineering, Law, and a few others are.
There is no formal standard of ethical conduct in software for practitioners to use as a baseline for their own behaviour.
>1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.
edit: this version is from 1992. And I should point out my courses at least had a discussion or two regarding programming ethics in college in the mid-2000's.
For instance, not following those guidelines would conceivably end one's membership of the ACM, and many companies have their own ethical guidelines (I would argue there is not much difference between professions for what is truly considered "ethical") which when breached would result in disciplinary action. Theoretically?
Perhaps not in the case of FB...
Let's say I'm a structural engineer or a lawyer and I act legally but unethically: I can be censured by my professional association/college, because law and engineering are professions and thus are self-regulating.
Can the same be said of software development? Certainly not. The cult of the amateur, self-taught basement coder and the entirety of startup culture are antithetical to professional ethics.
The problem is that it's very easy (and socially acceptable, even desirable) to build elaborate towers of logic on an unexamined premise.
Might be more than a modicum. If a lawyer or a doctor violates medical ethics, they could get their licenses revoked and be unable to practice their profession legally.
If it did, we'd have heard from it again in the last 26 years.
Such a baseline standard _must_ exist, and _must_ be created. Every applied technology has started out with dreams to "change the world", only to have those dreams shattered by those obsessed with power.
Biology? Biological weapons, nerve agents.
Chemistry? Mustard gas, TNT.
Physics? Nuclear weapons.
Michigan doesn't have a degree requirement for the Fundamentals of Engineering exam to work toward being a licensed Professional Engineer. In general, in the past, the NCEES, which runs the FE and PE exams has made degree exceptions for people with appropriate work experience.
It's absolutely feasible to have accrediting standards and bootstrap in all/most of the self-taught programmers today.
The flip side is admitting defeat and proclaiming software development truly is the new blue collar and has no hopes of truly being a profession.
If you've ever seen your Google activity log, you know it's very scary. The accuracy with which your phone can track your movements and where you are at every point in time is unprecedented. I'm very careful with what I allow third parties to access, but I can see a lot of users blindly accepting (like they did for this personality quiz that leaked all this Facebook info in the first place).
Actually, as of posting, the "Apple/Google/Microsoft are just as bad" version has not yet put in an appearance.
Sure, those of us on HN know to expect this from data-mining companies, but spend an extended amount of time with people who don't work in tech and you'll quickly learn that, yes, they know that FB uses your data, but most people have almost zero idea around just how much of your data is captured and sold/harvested/whatnot, nor what is done with the data after that point.
Um, well yeah. This is the case any time you give data to a third party. They now have a copy, and you can't control what they do with it.
Even doing an audit wouldn't necessarily reveal anything. If somebody has data that they want to hide I'm not sure how much can really be done to force them to reveal it.
If the price is high enough, bad actors will be willing to breach NDAs/CDAs/licensing agreements/etc, but at least then you can be seen as having done more than zero.
Might have been prudent here.
This doesn't stop external attacks, of course, but it can reduce internal risks.
Facebook could have had more than zero control, if it had wanted.
For example, we have a CNN article in 2013:
vs now in 2018:
Maybe that would explain why so many got surprised by this while others have seen it for a long time and just got used to everyone not caring.
BUT it’s important to note that GDPR would probably not have had an effect on the specific situation with Cambridge Analytica. CA is obviously toast if not by law then by the attention alone. Facebook, however, is likely allowed to share data under GDPR as they did with CA: they got the users’ permission initially, and there isn’t much you can do to protect yourself against malicious actors.
The EU is clearly moving against that blatant circumvention. I don't know exactly what they are going to do, but the whole "just sign all your rights to privacy away with one click" model is something they want to change.
I think the most likely outcome is one where each specific use of your data would need explicit approval. Moreover, the prompt cannot be disingenuous legalese; it needs to be clear and concise. I fear it might just become another Cookie Law, but it might still be useful. For example, imagine if you got something like:
"Facebook discovered that you have Chronic Illness 1. Facebook requests permission to share this information with Insurance Company in your State. Do you approve?"
I think people would suddenly care about that.
Facebook's big data is getting to the point where it can predict things like pregnancies and illnesses by parsing minor changes in behavior and correlating them against the big data set. This is of course super interesting, but it also gives you results like 'suddenly this guy is 42% more likely to die in the next 6 months and doesn't know it'. There are no certainties, but to an actuarial entity like an insurance company?
That's more than worth getting your lobbyists to repeal any shred of requirement that you have to keep faith with such a person. Insurance combined with big data and stripped regulations makes such an industry purely a financial play: handled properly they can, for a time, collect money and never pay any of it out, until it becomes obvious that's what they're doing.
Those are the entities most interested in having Facebook tell them you're probably getting sick. And why would Facebook ever tell you? That's their inference. You never said a thing about it, and indeed they could be wrong. But don't bet on it.
Next, as I understand it, the consent was for research purposes, not for the CA targeting. So under GDPR, Cambridge Analytica could be fined 4% of global revenue or €20M, whichever is HIGHER.
In the end, it’s about using data as it’s intended and nothing more.
Laws against murder are good too, but I think we all agree that a law against murder which defined saying mean things as murder would be over-broad.
> He said one Facebook executive advised him against looking too deeply at how the data was being used, warning him: “Do you really want to see what you’ll find?” Parakilas said he interpreted the comment to mean that “Facebook was in a stronger legal position if it didn’t know about the abuse that was happening”.
If this is true – would this constitute willful blindness, and is that not illegal?
I mean you can download your data here and see what endpoints/interests you can be targeted on:
Sadly, the world will continue to use Facebook and users will continue to be exploited.
And yes, everyone on HN and RMS will say “why is everyone surprised?” Well, it's because normal people don't have that perspective and think Facebook is the internet for them.
Data is the new oil, but now everyone knows this, not just us on HN.
Note that I'm not saying that any of this is ok just because there was no illegality.
Some sketchy apps harvested this data (which was against Facebook's terms and conditions for those apps). So the apps may have broken the law. I guess there is the question "should Facebook have protected the data better" but I doubt they broke the law exactly.
Anyway the stupid thing about this is that it was obvious that's what all these sketchy apps were doing at the time. Facebook app developers knew they could get this data, and the only thing stopping its exploitation was Facebook's app T&C's - i.e. "please don't do bad things".
There was even a setting to prevent third party apps accessing your data when given permission by friends. That's how obvious this issue was. (I doubt anyone used this option).
Facebook removed the friends API in 2014 so this is all about historical data "breaches".
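To make the fan-out concrete: one quiz-taker's consent exposed their entire friend list. The sketch below is purely illustrative; the endpoint shape, field names, and helper functions are my assumptions, loosely modeled on the pre-2014 Graph API style, not Facebook's actual interface.

```python
# Hypothetical sketch of the pre-2014 "friends API" harvesting pattern:
# a single user token, granted to one consenting app user, is replayed
# against every profile in that user's friend list. All names/URLs here
# are illustrative assumptions, not a real client.

GRAPH = "https://graph.facebook.com"

def friend_profile_url(friend_id, token, fields="name,likes,location"):
    """URL an app would fetch for a friend who never installed the app."""
    return f"{GRAPH}/{friend_id}?fields={fields}&access_token={token}"

def harvest_urls(friend_ids, token):
    """One install fans out to one request per friend in the user's list."""
    return [friend_profile_url(fid, token) for fid in friend_ids]

# One user's consent reaches their whole friend list:
urls = harvest_urls(["1001", "1002", "1003"], "quiz-app-user-token")
```

The point is structural: nothing in this flow requires the friends' consent, only the quiz-taker's, which is exactly why the opt-out setting mentioned above existed at all.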
The interest in FB and privacy, while gratifying, also seems focused through a particularly yellow lens.
People on HN have also pointed out that it’s very likely that CA’s analytical prowess may well be overstated as part of submarine marketing efforts.
I suppose many people are just surprised that this is taking off now, without any truly new or novel fuel driving it - when the same articles and worse, had no effect earlier.
Lots of new fuel.
There's good reason for the media to be tense against Facebook right now, since Facebook has changed the news feed algorithm:
"traffic in the news category, which includes major news publishers The New York Times, Washington Post, CNN and BuzzFeed, was down 14 percent after a sharper drop in the months prior"
I do think Facebook should audit its 3rd party developers more closely and that this leak of data is terrible. Yet, imagine CA instead had built an app for a personality quiz and asked for a ton of permissions from your phone to track your location, harvest your contacts, etc. What else could Google/Apple have done?
A company known for their respect of privacy and exemplary business ethics. Good that he left Facebook.
I am dismayed at the state of journalism, that it took a Trump connection before they seriously reported on this.
This is a national security issue. That seems like the most pertinent issue, and yet there is no mention of it in the discussions here. Facebook has amassed huge amounts of data about all citizens, and adversary nations are leveraging this data to manipulate the nation, including by helping elect a president who will be friendlier towards them.
Facebook is Russia's biggest cyberweapon. Just as private companies would not be allowed to stockpile WMDs, private companies should not be allowed to stockpile so much digital information either. This is a national security issue.
I know some people who work in SEO and marketing and the stuff CA was doing was a more sophisticated version of what every free Facebook game or 'survey' vendor is doing. This is literally the business model of the free/popular Internet. Of course it's going to be used for political campaigns-- why not? It's used for every other kind of marketing.
I'm not saying it's good. I think it's terrible. It's a plague. I'm just shocked that people are shocked by what's been happening in the open now for years.
Excuse me if I'm not really horrified by political groups using personal data to craft strategies.
I believe you are making a false equivalency.
The article acts like this is unprecedented. No one reads terms and conditions. It's not as if people fork over intimate details of their personal lives to Facebook with the defense that "oh the terms and conditions say they can't use this in a way I don't like"
People fork over intimate details of their personal life to companies like FB because they haven't thought about it very hard.
I use google maps a lot. I search a place, it provides me lots of useful information. Yes, I find it helpful, but also terrifying, especially with the "Popular times" section, which is "based on visits to this place."
Where are the eyes? :/
I'm sure there are all kinds of metrics and analytics available but I'm interested:
What are the "rawest" forms of user interaction that Facebook makes available to these companies?
This has been common knowledge for years. Is the general population not aware of Facebook’s business model?
1. I barely add stuff on it
2. 99% of my news feed is irrelevant and I really don't care (there's maybe 1 post/day from a friend that is interesting to me)
3. Starting to be more and more concerned about all this data
But I'm scared for multiple reasons:
1. connections needed from a lot of friends, family etc. that I would not keep contact with otherwise
2. It does a great job of keeping my contact list updated (just yesterday I searched for friends/coworkers to make sure I didn't forget anyone on my farewell email)
3. Messenger. I use it a lot (almost as much as iMessage) and again, a lot of people I talk to on facebook I don't have their info for telegram/whatsapp/etc.
> But I'm scared for multiple reasons:
I went through a similar thought process. I decided to keep my profile active, but unlike everything and delete all my posts. I also changed my profile pic to make it clear I wouldn't be using Facebook anymore. That way I can still get event invites, etc. but not be too burdened by the whole thing.
Eventually I'll delete my profile, but not until Facebook becomes far less ubiquitous (and I do what small things I can to hasten its decline).
The scary part is the cat is already out of the bag if you authorised any app. They could have a wealth of your data that is being sold.
'If you authorized any app'? I'm sure there are workarounds for that. If you touch them or pages where their invisible Facebook gif is present, they've probably got all your data that's gettable.
"If those figures were extrapolated, tens of thousands of apps, if not more, were likely to have systematically culled “private and personally identifiable” data belonging to hundreds of millions of users," Parakilas said.
So it's quite possible that there are more than a few third parties holders of FB user data who have now been alerted to the potential profitability of their old "research data."
A lot of the discussion revolves around friends data -- was all friends data accessible regardless of the friends' own privacy setting (this would be deeply troubling), or was it the data that friends shared with the app users (a bit less troubling, but still very questionable), or was it friends' data that was openly available on their public profiles open to any internet user?
What is the deal with those videos in news stories these days that are just moving* pull quotes with music and maybe some pictures? It's like a really short article in video form.
*I mean literally, not emotionally
In the upcoming weeks users will be more 'careful' but since they have 'nothing to hide' it's 'Okay' to use FB in a 'responsible' way.
'Everyone' is shocked but no one feels threatened individually, so the circus and clown show can continue.
The cynic in me wonders if this is all lighting up intentionally before GDPR is enacted to reduce potential financial liability. Too conspiratorial?
Like all things in life... people cry foul when things come back to bite them in the ass.
Like I said yesterday in a comment... delete ALL forms of social media, because this goes on with ALL platforms out there; it isn't just limited to FB.
Where do you draw the line for Social Media?
I think some people just naturally like rules and would prefer to live in a more orderly, rule based society, and some people don't like the idea of being constrained. Both groups act quite sanctimoniously though, as if their personal preference is somehow the holy truth.
What I do like is honesty.
Look at this very robust list of data breaches and tell me how the CA/Facebook incident this week looks anything like any of them.
>The release was intentional and intended for research purposes;
Sounds pretty damn close to this event with Facebook
In the CA/FB case the information was either public (and could be scraped as such) or was collected in the form of facebook apps.
Oh lordy, didn't expect this comment to blow up this much. Do forgive me if it sounded a bit smug; that was not my intention. But the fact of the matter is this was something we were all warned about. We were shown countless examples of exactly this, not just us nerds, everyone. People like Edward Snowden risked their lives telling us how all this data was being used against all of us, and yet everyone kept giving more and more. You were looked at like a tin-foil-hat-wearing nutter when you told people not to give away so much information about themselves so easily.
At the end of the day, this is not really 100% facebook's fault, this is our fault, the fault of everyone who so readily made their information available without giving much thought to who sees it and what happens to it. And no just because you are not a techie you are not off the hook for not caring enough about your own privacy. I mean what level of technical knowledge is needed to know that once you post something online others can see it?
Funny thing is, this will all blow over after a few months, and everyone will go back to their usual habits.
In general it's pretty amazing how trusting the average human being seems to be as soon as computers are involved. I suppose that it's mostly out of ignorance and complacency. People seem a lot more careful when physical mail is involved than emails, for instance. They also don't hesitate to share extremely intimate details about their private lives with a faceless corporation. Some of my friends willingly opt into streaming their position in real time continuously through their smartphones. That's terrifying to me but apparently very convenient for them. I think Zuckerberg agrees with my sentiment, since that's the source of his "dumb fucks" comment.
I hope these articles will help change that mentality but I'm not overly optimistic. I read a comment on a forum earlier today that basically said "screw Facebook, I'll close my account and do everything from WhatsApp instead". I don't think it was sarcastic.
1990s: your signature needs to match exactly so we know it's really you!
2000s: you must enter a PIN that hopefully only you know?
2010s: fuck it just tap the card near the reader
“No one company should have the power to pick and choose which content reaches consumers and which doesn’t,” said Franken. “And Facebook, Google and Amazon, like ISPs, should be neutral in their treatment of the flow of lawful information and commerce on their platform.”
And then one week later, his political career was suddenly over. Politicians got the message loud and clear; Don't F* with Facebook.
How do you avoid this? I have a GPS in my car with stored routing information, but if I need to navigate for someone else or get walking/biking directions, I am forced to do this. Printing out directions beforehand is something I did only a few years ago, but these days I don't always have a chance to do that.
I remember installing a dating app one evening and thinking "I'll look at it tomorrow." Next evening, I opened it and it said "This stranger and you were both at this subway station around noon!". Geezus Christ! I didn't even open the app the whole day! Uninstalled it straight away.
I usually leave Location services off. I'll enable them for 5-10 seconds, get the directions from Maps, then disable the Location service again. Of course, they can still estimate my location with cell towers (or WiFi, but I usually have that disabled as well), so it's not a perfect solution. Saves a lot of battery life, though.
Most people don’t think “the data used to sell me milk could be used by politicians.” And those that do didn’t think “political ads today could be replaced by surreptitious foreigners tomorrow.”
If your reaction is “they should have known” you are in a Silicon Valley thought bubble. (I was until recently, too.) What you find “horrifying” is that bubble’s edges fraying.
“Of all the news crises Facebook has faced during the past year, the Cambridge Analytica scandal is playing out to be the worst and most damaging.
Why it matters: It's not that the reports reveal anything particularly new about how Facebook's back end works — developers have understood the vulnerabilities of Facebook's interface for years. But stakeholders crucial to the company's success — as well as the public — seem less willing to listen to its side of the story this time around.”
What if your reaction isn't "they should have known" but rather "they should have listened when I told them this!"?
Then you, like me, are still figuring out how to message privacy as a priority to non-technical folks. Maybe it’s an issue of timing. My “delete Facebook from your phone and log out, by default, on your desktop” pitch was more productive yesterday than ever before.
The real discussion to be had is how you know the person is actually aware of giving consent, similar to how a reCAPTCHA verifies whether or not you are a human. I can see, in the future, some sort of test that verifies users actually read the terms of service, as a form of consent to the user agreement.
Edit: Fixed all URLs. All work except CNN, where you have to copy-paste.
This will only happen if terms of service get vastly shorter, or if a law is passed that forces it. I would bet that any such measure would absolutely destroy user signup metrics, which means that not only do companies have no financial incentive to take such measures, but they also have an active financial disincentive to make the "I read the TOS, let me sign up now" process any more complicated than they absolutely must.
I'm also pretty sure that the everyday user would be pissed about that additional barrier to entry.
(2010) - http://www.zdnet.com/article/fbi-feds-collect-facebook-socia...
(2010) - https://www.technologyreview.com/s/418971/facebook-personal-...
(2011) - https://www.independent.co.uk/life-style/gadgets-and-tech/ne...
(2011) - https://blogs.wsj.com/digits/2011/09/26/facebook-defends-get...
(2011) - https://techcrunch.com/2011/11/01/researchers-flood-facebook...
(2012) - http://www.nytimes.com/2012/02/05/opinion/sunday/facebook-is...
(2012) - http://money.cnn.com/2012/03/22/technology/facebook-privacy-...
Also, I tried unsuccessfully to convert all those URLs to use HTTPS, but it either failed to connect or the server forced me back to HTTP. That's rather sad.
It will become interesting with GDPR, when customers start to revoke their consent to exchange data with credit scoring companies.
I was only referring to the remark about credit scoring companies which I believe to be wrong
In what way is the law used arbitrarily? I would like some sources for this claim.
The thing that may feel arbitrary is simply the fact that the laws in Europe actually enforce privacy, whereas companies, and people, from the US expect these laws to be toothless.
Across international boundaries where those laws may be difficult to enforce because other countries are not in sync with them? Hell yes. Call me cynical, but...
In Germany where data-leaks (which are a symptom of insufficient data protection) at telecommunication providers seem to happen on the regular, with no (reported) punishment as a result, yes I think that is a bit naive.
Every company tracks you. From what you purchase at Target to broad pattern-behavior tracking on the web via ad companies, I think most people know they're being tracked at various stages for various reasons.
However, is it bad that Target knows I like to buy grass-fed beef? Probably not. It reveals some things about me, but I am far less concerned, as are most, I imagine. This same mindset is what fuels people when they don't care what FB/etc is doing. Not that it's right/wrong, but I think people don't care who knows about their lunch or catpics, thinking that's all that FB could gain out of it.
Humans in general are really bad at thinking long term. Nothing bad happens immediately when you sign up to FB, when you post personal information, when they sell your data, etc. For a lot of FB users, it might be 20 years before they regret their actions. That's just a hard feedback cycle for people.
For example, if you ride a bicycle and eat beef, you most likely have a certain income and a certain family type (you use the same IP!), which means you might have certain political views and concerns. And this is where targeted manipulation comes in: they can drive you in a certain direction. Psychology at its best.
This is how you win an election.
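The correlational scoring described above can be sketched as a toy model. The attributes, weights, and threshold below are entirely made up for illustration; real targeting systems are vastly more elaborate, but the shape is the same: observed behaviors nudge a profile toward a predicted segment, and the segment picks the message.

```python
# Toy illustration of attribute-based segment scoring. Every weight and
# attribute name here is a made-up assumption, not real data.

def score_segment(attributes, weights):
    """Sum the weights of observed attributes into a single segment score."""
    return sum(weights.get(a, 0.0) for a in attributes)

# Hypothetical weights for one political-leaning segment
WEIGHTS = {
    "rides_bicycle": 0.4,
    "buys_grass_fed_beef": 0.3,
    "shares_ip_with_family": 0.2,
    "urban_checkins": 0.5,
}

user = ["rides_bicycle", "buys_grass_fed_beef", "shares_ip_with_family"]
score = score_segment(user, WEIGHTS)  # roughly 0.9
targeted = score > 0.75  # above the threshold, serve the tailored message
```

None of the individual signals is sensitive on its own; the inference only emerges from the combination, which is why people underestimate what they're giving away.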
Giving my information to FB/etc though? That's another story.
It's really not that they "don't care" about privacy, even if they themselves think that's what it is. They usually say that because they don't understand the 1,000 horrific ways in which that data about them could be exploited, from personal blackmail situations, to identity fraud, to manipulating elections, to using it against them in court in a possible future conflict with law enforcement, and in many other situations.
I've seen people who are typically quite "anti-privacy" because "they want to benefit from Alexa, Google Assistant" and other such gimmicks, and "aren't scared" if Google or Amazon holds their data, because after all it's not the government holding it (ha! good one).
But now they've deleted their Facebook accounts, because they're finally beginning to understand the implications of these companies holding all of this data about them and how it could be abused. And it's still early days. It's only going to get worse from here, as we see more such abuses using Facebook, Google, Amazon's data, carriers', and other data hoarders' data.
People that don’t have jobs working with data, who are not technical or mathematical, aren’t going to know.
Look, I'm a developer, I'm somewhat privacy-conscious, and I quit Facebook years ago because they're slimy.
But "doesn't keep up with technology and privacy news" is not the same as "dumb". For any product as big as Facebook, there are people of all kinds using it, including many who are brilliant.
Is it wise to trust Facebook with your data? No. But not having come to that conclusion doesn't make someone dumb. Please don't be so condescending. I'm sure many of those "dumb" people could be condescending about some of your life decisions based on their own expertise. But it's not helpful.
If you want cursing or other low content, there's always Reddit.
Zuckerberg: I have over 4,000 emails, pictures, addresses, SNS
[Friend]: What? How'd you manage that one?
Zuckerberg: People just submitted it.
Zuckerberg: I don't know why.
Zuckerberg: They "trust me"
Zuckerberg: Dumb fucks
Teenagers and college students say a lot of condescending, dumb, immature stuff in group discussions. It's not news and no evidence at all.
I don't need to. Facebook just illustrated it for me.
IMO it’s very apropos because it sums up the core attitude of the company. That 19-year-old grew up to become one of the richest and most powerful men on the planet, with unchecked power.
Please. 19 is not 5.
Makes it even creepier. He was willing to dox people for social cred, before he realised the data had financial value.
On Reddit the normal person can be forgiven for not knowing the full context of the quote, but this quote has come up on HN many times over many years.
I've never seen an "explanation." It seems self-explanatory. I haven't seen an apology either, but this was in the New Yorker:
When I asked Zuckerberg about the IMs that have already been published online, and that I have also obtained and confirmed, he said that he “absolutely” regretted them ... Zuckerberg’s sophomoric former self, he insists, shouldn’t define who he is now.
These stories are newsworthy because they represent the break from generalized scepticism to specific examples of harm. If the New York Times had waged a nebulous campaign against Facebook without clear evidence it would have rightly been accused of getting ahead of the facts.
In any case, if this should have been known by everyone already, I guess Facebook has no reason to panic if it's all over the news now. Just a bit of publicity for them, right?
It was fairly well known at the time I thought.
People still have an expectation of privacy, even when, from an HN perspective they should be extremely skeptical about having such an expectation.
So from one reckless company that doesn't give a damn about the law to the next. Who teaches developers that it's okay to work for anyone as long as the tech is cool and the salary is great?
Who teaches them otherwise?
Absent parental/primary-school-instilled ethics, rather a lot of engineers operate in a bubble of like-minded (and similarly-employed) people, making large amounts of money, and are often insulated (voluntarily, deliberately, or accidentally) from the impact of their work.
What could be changed to improve on that situation? I've heard simplistic suggestions to "sue the C-class until they learn/abandon the incredibly lucrative profit motive", "fire/imprison engineers whose changes harm people", and "make the bridge-builder stand under bridge they built" (whatever that means in a software context). Those seem utopian. What tangible, plausible changes can be made to improve on developer accountability (for their work) and discernment (about prospective employers)?
What about if you're making a social media app, and the ethics are less clear-cut? It's not like you can show every new hire footage of Trump and drive home the negative impact of data mining/sharing--the causal link is tenuous, the viewer might sympathize politically, or they just might not care about politics.
Ethics in the abstract is very hard to teach; object lessons are easy.
It’s blinders. Plain and simple. I’ve worked with too many developers who will pander for money. A few that tried to shame me for not being on board (my life skills tell me calling someone a whore in a team meeting is a bad career move but it doesn’t stop me from staring at them and thinking it). When enough money is on the line principles get set aside. We like to think our cohort are above this sort of thing but the evidence clearly doesn’t support it.
Then the corporate koolaid comes in and tells you you're doing the most important thing in the world, and you just eat it up.
I wish a little philosophy and ethics were part of the curriculum. This would not be to inculcate normative values, but to help eng students clarify what they believe, and what the implications are.
That said, most engineers I've met who work on sketchy stuff are either naive, apathetic, or suffer from massive cognitive dissonance.
The latter will too often regurgitate the self-justifying language of the business people in their companies.
Ever listen to ad tech people spew absurdities about people wanting to be engaged with "their" brands? How about the justifications for massive data collection and analysis - targeted ads are so much better for people. Pfft.
Then there are, say, NSA engineers who convince themselves that what they do is necessary, if illegal. That said, I saw a lot of NSA LinkedIn profiles that swapped out NSA for DoD a few years back.
Company leaders tend to hand employees ideas and the slogans to repeat to themselves and others. The internal spin is huge and insidious.
Uber doesn't appear to have historically given a damn about the law, but AFAIK it has historically given a damn about its users. Facebook, OTOH, doesn't appear to be giving a damn about its users.
As for the law: there are plenty of unjust laws out there; I respect someone who fights unjust laws such as the taxi monopolies. I don't respect someone who fights just laws.
The tech wizards who build things and run these companies:
1. Are not smarter than you
2. Do not have your best interests in mind
3. Will lie to you repeatedly
4. Will do everything to avoid negative attention or consequences
Stop worshipping anyone. Not Jobs, not Zuckerberg, not Gates, not Musk, not anyone. They aren't on your team. I don't care if they look like you or represent something you are really passionate about, you still need to be skeptical.
Edit for interesting link to the 2012 Facebook election data story.
If those little radio buttons in privacy settings do literally nothing on the backend, then FB could have a massive legal/financial battle if they knowingly ignored user preferences and sold off unaggregated data for profit.
I'd expect it to use it for advertising targeting. The privacy settings I've put on there exclude other uses.
I even remember such articles from TheGuardian before decrying that people are "going dark" - no, it wasn't about using Tor or VPNs. It was simply about using tracking protection.
And I am not in support of the mass surveillance exercised by those companies, just noticing the timing. When Obama won, his data scientists were hailed as geniuses. What do you think they were doing?
It's a bit like comparing withdrawing money from your bank account to robbing a bank.
> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
An Obama Campaign data scientist from the 2012 campaign explaining how they did the exact same thing but Facebook were ok with it "because they were on our side".
WikiLeaks also covered this earlier in their spyfiles warning. People don't care that a political opinion survey being spammed around by their friends is actually harvesting their details in a campaign dossier to manipulate them directly later.
"A more productive answer to someone saying something you agree with is “I agree”, not mistakenly berating them for not agreeing sooner." (https://news.ycombinator.com/item?id=16627766)
It's not like huge numbers of people didn't know about global warming before society started caring about fixing it.
First, it absolves the perpetrators, who are definitely in the wrong. I include both FB and CA in this category.
Second, it is becoming clear that there can be no such thing as "informed consent" in a networked world with respect to data privacy. Zeynep Tufekci, whose writings I heartily commend, had a good article on it a few weeks back. She argues both that the actual uses known of that data are not fully described in consent waivers, and also that it is not possible to know ahead of time how that data will be combined, recombined, projected, analysed, and used in the future to fully consent to all those things. Even if you could do so for yourself as an individual it's not possible to consent to the effects of the combination of an entire society's data as a whole, on others.
Again, it's not possible to obtain informed consent in today's privacy environment, so let's stop blaming the victims.
It can't blow over in the UK or the EU, because it is seriously f-ing illegal in those places.
Yes, we "knew" it was happening before this (hence all the regulatory steps taken that were dismissed as anti-American protectionism), but we were lacking hard evidence, so all we could do was reinforce regulations and regulatory authorities.
Now that shit has leaked, it's simply not an option for those authorities to not act. Not to mention the fact that they really, really want to act.
So no, this will not blow over. Maybe in the US and/or the media, but not where it matters.
I agree. A couple of generations ago the media was much more combative and willing to take on the powers that be (and each other). By this time most media outlets would have been saying "told you so (many times)". Unfortunately, media now mostly follows trends and competes on the beauty of its talking heads, which is a lot safer than, say, investigating slavery or organized crime.
- This was part of an open API, you just needed to sign up for free. There is no data breach.
- EVERYONE was using it - this consisted mostly of games like Farmville. This is how they could show your friends' progress and their profile pics.
- It was shut down more than a year ago.
Actually there is nothing newsworthy on Facebook's side at all; the new thing is that companies built games just to harvest this data and use it for something else.
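To make concrete what "open API" meant here: under the pre-2015 Graph API (v1.0), an app with one consenting user's access token could list that user's friends and then read those friends' profile fields via the `friends_*` permissions, even though the friends never touched the app. The sketch below only builds the request URLs involved; the version prefix and field names reflect that historical API, and the token is a placeholder, not a real credential.

```python
from urllib.parse import urlencode

# v1.0 is the pre-2015 Graph API version under discussion above.
GRAPH = "https://graph.facebook.com/v1.0"

def friends_request(user_id: str, access_token: str) -> str:
    # With a single consenting user's token, an app could enumerate
    # that user's friend list.
    params = {"access_token": access_token, "fields": "id,name"}
    return f"{GRAPH}/{user_id}/friends?{urlencode(params)}"

def friend_profile_request(friend_id: str, access_token: str) -> str:
    # If the app had also requested friends_* permissions (e.g.
    # friends_likes), it could then fetch each friend's profile
    # fields -- people who never installed or consented to the app.
    params = {"access_token": access_token, "fields": "id,name,likes"}
    return f"{GRAPH}/{friend_id}?{urlencode(params)}"
```

A quiz app that a million users installed could fan out from those million tokens to tens of millions of friend profiles, which is exactly the multiplier effect being reported on.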
Facebook has reached this size but has not prepared itself for it, and what's happening is that they took a bit of a stumble, and absolutely nobody is rushing in to defend them because they've burned all their bridges. The media hates them for taking over the media industry's ad revenue and making it dependent on them. Conservatives feel they have a solid case that they are being systematically censored by the platform; even if it isn't true, they feel it is true, so no friends there. It's pretty obvious that Facebook can expect no help from Republicans in general. The Democrats may not hate Facebook, but there's no positive reason to burn much political capital on helping them. (After all, they didn't deliver this time, did they?) And increasingly, the chickens are coming home to roost with their customer base, as fears about surveillance, power, and abuse of power go from vague worries to metastasized, realized problems that appear to affect Facebook down to its very core.
It's not just the media narrative, though that's true enough... everybody is now at best neutral towards Facebook, and they're accruing enemies fast, not least of which is an ever-increasing portion of their own customer base(s).
How will they get out of this one? It's possible this will just die down this time. But these forces aren't going anywhere, and if it isn't already too late for Facebook to change course on this, the clock is definitely reaching midnight fast.
Facebook has been aggressively monetizing user data for years. They are just one player in an entire industry built around this business model.
The existence of this industry is most obvious to the technically literate, whom you can generally identify by their use of strong ad blockers and password managers. But it's been reported on before. Online privacy is not a new concern... just look at the "Facebook is listening to me" meme.
So what I want to know is: why now? Why Cambridge Analytica? Why Facebook?
Here’s my best take so far.
1) Facebook’s user base is so big that it’s a relevant political constituency, and thus democratic governments have a reason to care.
2) Facebook creates an expectation of privacy and an illusion of control that doesn't exist with public-first platforms like Twitter.
3) Cambridge Analytica is a scummy company in many respects, not just their work with Facebook data. They got lots of Facebook data from a third party who probably wasn’t authorized to sell it to them. This makes them a good candidate for regulators to make an example of.
4) Cambridge Analytica is closely tied to the Trump and Brexit campaigns, both of which are regarded as “dangerous perversions of democracy using lies to exploit vulnerable people” by exactly the kinds of political and media organizations that are driving this story.
Overall, I think this is a “Pigs get fat, Hogs get slaughtered” situation. The industry’s toxic practices are finally causing enough damage that institutions responsible for protecting the public (government, real media) are responding.
Bravo, I say.
WaPo has an article explaining this on the front page today: https://www.washingtonpost.com/business/economy/facebooks-ru...
For lack of a better word: sad!
I know what's happening in Yemen. I've read the facts, and now I don't care anymore. I don't want to see it in the news every day, because it wasn't relevant to me when I read about it and it has practically zero chance of ever being relevant to me.
What Trump said about xxx person at yyy place is varying degrees of relevant to my life, all of those degrees more so than Yemen.
If Fox had a large Middle Eastern demographic that its advertisers cared about, you would see Yemen Nightly at 7:30 without question.
Yes, because on the other side of that conflict are the Saudis, who are one of our biggest "allies". And of course, the other side of the coin is the general hypocrisy of caring what goes on in other parts of the world and not this one, because it doesn't fit a specific narrative.
> What Trump said about xxx person at yyy place is varying degrees of relevant to my life, all of those degrees more so than Yemen.
I don't know how to not say this in a disrespectful way, but I really feel sad for you on a personal level if that's truly what you think. I have a feeling that you are just attempting to be a contrarian in this instance.
Anything Donald Trump says is more relevant to the west than Yemen.
I keep seeing comments equating cheating with cleverness. If I win a chess game by making illegal moves, this is not a sign of my brilliance. If you can't distinguish brilliant play from cheating, perhaps you don't understand the game.
See this thread: https://twitter.com/cld276/status/975564499297226752
Here's Time describing exactly the same tactic of friend-mining and using the data for targeting, and praising it as a game-changer: http://swampland.time.com/2012/11/20/friended-how-the-obama-... ctrl+f privacy -> no results
When we do it it's awesome, when they do it it's a data breach, it's a privacy violation, it's a breach of trust, and it requires government regulation.
When Obama's campaign did it, it was heralded as the future of democracy. Even the social media director for Obama's 2012 campaign acknowledges that they did the exact same thing that CA is being blasted for now. I'm not sure why you're getting downvotes, other than people just wanting to suppress the truth.
... the campaign literally knew every single wavering voter in the country that it needed to persuade to vote for Obama, by name, address, race, sex and income.
...the digital-analytics team, led by Rayid Ghani, a 35-year-old research scientist from Accenture Labs, developed an idea: Why not try sifting through self-described supporters’ Facebook pages in search of friends who might be on the campaign’s list of the most persuadable voters? Then the campaign could ask the self-identified supporters to bring their undecided friends along.
...They started with a list that grew to a million people who had signed into the campaign Web site through Facebook. When people opted to do so, they were met with a prompt asking to grant the campaign permission to scan their Facebook friends lists, their photos and other personal information.
So, they used Facebook data, including "Friends" lists and personal information that those "Friends" had never directly consented to providing to the campaign.
How did Facebook react to the much larger data harvesting of the Obama campaign? The New York Times reported it out, in a feature hailing Obama's digital masterminds:
The campaign’s exhaustive use of Facebook triggered the site’s internal safeguards. “It was more like we blew through an alarm that their engineers hadn’t planned for or knew about,” said [Will] St. Clair, who had been working at a small firm in Chicago and joined the campaign at the suggestion of a friend. “They’d sigh and say, ‘You can do this as long as you stop doing it on Nov. 7.’ "
In other words, Silicon Valley is just making up the rules as they go along. Some large-scale data harvesting and social manipulation is okay until the election. Some of it becomes not okay in retrospect. They sigh and say okay so long as Obama wins. When Clinton loses, they effectively call a code red.