> BREAKING: Facebook WAS inside Cambridge Analytica's office but have now "stood down" following dramatic intervention by UK Information Commissioner's Office.
> To be clear, @facebook was trying to "secure evidence" ahead of the UK authorities. Nice try, @facebook. The UK Information Commissioner's Office cracking whip...British legal investigation MUST take precedence over US multibillion $ company.....
Something VERY wrong is going on at Facebook.
edit, with another account:
> Facebook have confirmed that auditors and legal counsel acting on behalf of the company were in the offices of Cambridge Analytica this evening until they were told to stand down by the Information Commissioner. These investigations need to be undertaken by the proper authorities
What did most people think Facebook was doing... Keeping all the data locked away and never letting anyone make use of it? The interest in that data by political entities should have been especially obvious.
Additionally, this information about Cambridge Analytica came out months ago. I remember first hearing about it on a podcast that was mostly focused on the personality profiles. [1?] However, it was open about CA obtaining the data via the survey. (I'll provide a link as soon as I dig it up again). Suddenly, months later, this story is exploding in media and political rhetoric.
It makes me wonder whether this is more about a concerted effort to force regulations that would give someone control over these platforms.
After seeing what happened in the 2016 elections around the world, I'd imagine many people became interested in ensuring social media would work for them next time around.
 https://www.stitcher.com/podcast/wnyc/note-to-self/e/5231252... (not -the- podcast I was trying to find, but still has the info as far back as November)
Minor nit: Years ago.
The name Cambridge Analytica has been circulating in hacker circles since at least mid-2016. Thus far, the only new revelation is the Channel 4 reporting.
Security nerds have been complaining about Facebook's business model for years, and it has fallen on deaf ears.
Suddenly, the public gives a shit and I don't understand what changed.
"On 8 February 2018 Mr Matheson implied that Cambridge Analytica "gathers data from users on Facebook." Cambridge Analytica does not gather such data." — Letter from Alexander Nix, CEO, Cambridge Analytica, to the Chair of the Committee, 23 February 2018 (PDF, linked from below page)
— And now, it's basically been 'proven' by FB that they do. Of course the ICO is gonna get involved now.
Edit: plus there's also this Channel4 investigation which went public today:
"An undercover investigation by Channel 4 News reveals how Cambridge Analytica secretly campaigns in elections across the world. Bosses were filmed talking about using bribes, ex-spies, fake IDs and sex workers."
— I think that most certainly puts CA into all kinds of shit they weren't in just 24hrs ago.
Why would Russian intelligence use an oil company as a shell? Does Russia sell oil to the US? If so, do they market directly to individuals in the US?
My only point (if there is one) is that, despite the massive proliferation of blogs and amateur media, it often takes a professional, salaried reporter to bring an important story into the public eye.
No. It was Ronan Farrow that originally broke the story. The NY Times took their sweet time before going after someone that was an ultra-major Democrat party donor. The Weinstein story was rejected by multiple "professional" news outlets.
The "professional" journalists working for the NY Times KNEW about the Weinstein story as far back as 2004, but spiked it under pressure from various Hollywood interests. Professional journalists aren't supposed to spike stories for political reasons and that's exactly that the Times did in 2004.
Weinstein's office in Tribeca was right downstairs from the Tribeca Film offices. It was on the 3rd floor. Spend a few hours in that building and you could probably have heard a dozen stories in whispered tones about Harvey. Some professional journalist from the NY Times should have had this story years ago. A high school journalist could have written this! And Kevin Spacey? Anyone in Hollywood would have known about Spacey as far back as 2004, or perhaps earlier. It was an "open" secret. So open that it was a joke. It started becoming more "known" when Spacey was working with Sam Mendes on American Beauty in 1999.
Give me a fucking break. Professional journalists sat on this story, ignored it or conspired to crush it. It took a rookie, Ronan Farrow, trying to make a name for himself while on a personal mission against that horrible abuser Woody Allen for this to all break.
I understand your reluctance to credit the Times given how they sat on the news for years before finally publishing their expose. Nevertheless, the Times article came out on October 5, 2017 — five days before Farrow’s story hit the wire on newyorker.com. So technically the Times still gets the scoop — and most of the credit :-).
It is commendable that Farrow investigated his story independently, but that does not mean the NYT “took their sweet time” — certainly, I haven’t seen Farrow demean their work. The reporters and editors who worked on the NYT’s coverage were not on staff 13 years ago. Story ideas aren’t passed down from generation to generation.
If not, he would have been in a substantially better position to sue for libel.
Would you care to stand that up? Because it seems unsubstantiated.
Bring on the smug downvotes boys, but until all media, social networks, and the overall internet can be brought completely under government control, it will continue because there is simply too much criminal activity going on everywhere.
This article is probably the biggest one right now.
Yup. It's not a surprise why people are interested in the role of data in modern public debate and democracy.
Edit: Someone linked this below - http://adage.com/article/moy-2008/obama-wins-ad-age-s-market...
"So I was talking to a senior government official of this government (2012) about that outcome and he said well you know we've come to realize that we need a robust social graph of the United States. That's how we're going to connect new information to old information."
I suspect that the only reason we are hearing about the Trump campaign being the buyer is that the Democrats already had the information. It is also why Facebook cannot put its cards on the table, since then it would not have any political allies left. Just like with the Snowden leak, privacy is not an issue on which the parties differ. One can only hope that will change in the next election.
There's also evidence that Facebook's advertising models were a massive help -- the Trump campaign and affiliated organizations were paying much lower advertising rates than the Clinton campaign + affiliates, largely because Facebook's model prioritized the kind of controversy-and-outrage-generating stuff Trump was putting out there (since controversy and outrage drive engagement, and engagement is the metric Facebook cares about).
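The pricing dynamic described above can be sketched as a toy engagement-weighted auction. The formula and all numbers below are illustrative assumptions for the sake of the example, not Facebook's actual ad model:

```python
# Toy model: in an engagement-weighted auction, an ad's rank is roughly
# bid * predicted engagement, so outrage-driven creative with higher
# predicted engagement wins the same slot at a lower bid.
# All numbers are invented for illustration.

def bid_needed_to_win(target_rank: float, predicted_engagement: float) -> float:
    """Bid required to reach a given rank when rank = bid * engagement."""
    return target_rank / predicted_engagement

# Two campaigns competing for the same slot (rank 10):
outrage_ad = bid_needed_to_win(10.0, predicted_engagement=0.05)   # ~200.0
measured_ad = bid_needed_to_win(10.0, predicted_engagement=0.02)  # ~500.0
print(outrage_ad, measured_ad)
```

Under this toy model, the higher-engagement ad pays less than half as much for the same placement, which is the effect the comment describes.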
The story was linked to Trump?
Also, security nerds are attuned to how personal data might be misused. The public needs a concrete example.
From Joshua Green's book:
"Yiannopoulos devoted much of Bretibart's tech coverage to cultural issues, particularly Gamergate, a long-running online argument over gaming culture that peaked in 2014. And that helped fuel an online alt-right movement sparked by Breitbart News.
"I realized Milo could connect with these kids right away," Bannon told Green. "You can activate that army. They come in through Gamergate or whatever and then get turned onto politics and Trump.""
Mercer spends around $10M per year funding Breitbart to spread that propaganda.
Sorry to burst your bubble, but everyone is in on it.
Not the same.
There are major power players at work trying to a) exert control over the internet as a speech platform; and b) ensure that their political opponents can't win without establishment support. I discussed this some at https://news.ycombinator.com/item?id=16616227 .
As for my grandma, my mom, and my sisters, like most of the public, they don't care about any of this, because this data was mostly already public-enough (friends list restricted) and they just want a platform where they know they'll be able to engage with friends and family.
Not the public per se but the media suddenly cares.
My guess it's the runup to the US midterm elections, it's never too early to start the mudslinging...
How about all of the services using Facebook OAuth that subtly leak social graph information? For exactly this reason, Facebook OAuth is always my absolute last resort, and I'll almost always skip a service entirely if it's the only option.
Still, I always gave Facebook the benefit of the doubt and figured that these things were handled by Facebook directly as a "plugin" to the app/website (I'm not a web developer, I don't know the details of how this would work), and the various services didn't actually see the data. It's pretty mind-blowing to me that this is not actually the case. I always felt I was being absurdly paranoid about Facebook compared to most people I knew; now it turns out I was not being even remotely paranoid enough.
Does anyone know if Google is similarly aggressive with user data sharing? I've never noticed information leakages similar to the above coming from Google (so I don't hesitate much to use Google OAuth) but me not noticing something during casual browsing is not a very high bar to clear.
Everyone who worked at Facebook or on 3rd party integrations knew this for years.
CA's CEO is seen presenting this clearly in 2016, specifically how well it helped the Cruz campaign: https://www.youtube.com/watch?v=n8Dd5aVXLCc
Worldview Stanford has a podcast called Raw Data; episode 1 of season 1, from 2015, discussed how a few likes on Facebook would let you know someone better than their own family does, based on research from 2013: http://worldview.stanford.edu/raw-data/episode-1-uploaded
Obama's campaign used heavy data analytics for both runs, explained as early as 2012: https://www.technologyreview.com/s/509026/how-obamas-team-us...
None of this is new. It was either ignored or accepted before, and it has finally reached critical mass due to the amount of controversy and conspiracy today, along with general social media fatigue and the now-evident effects it has on most people's lives.
"But Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates. The campaign stopped using Cambridge’s data entirely after the South Carolina primary.
“When they were hired, from the outset it didn’t strike me that they had a wide breadth of experience in the American political landscape,” Mr. Tyler said."
People use Myers-Briggs typing at work because it helps you work with (read: persuade) people.
So you can make an argument this is at least 100 years old.
As for why we've reached critical mass - it seems likely the ability to influence democratic elections, and the efforts by enemies of the Western world to use it to undermine democracy, are getting people to notice.
What a bizarre statement to make. In this sense anything that is in any way scientifically based “grew out of data analysis.”
Myers/Briggs/Jung identified 4-5 personality "axes" on which people vary. How did they do that? They basically gave a bunch of people surveys with hundreds of questions and did PCA on the data. They found that 4-5 dimensions explained a lot of the variance. That was a new finding based on data, at the same time that the field of statistics was developing. And it gave insight into how people behave. It's important enough that we frequently use it as a heuristic in workplaces today.
In modern times, we can do the same on much larger data sets. Given Facebook "like" data for 50 million people, you can do dimensionality reduction on the data and extract personality "types". There's no question that this gives you information about people. The question is how well it can be weaponized - that's the debate around CA now.
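The dimensionality-reduction step described above can be sketched in a few lines of NumPy. The data here is random and the code only illustrates the general technique (SVD/PCA on a user-by-item "like" matrix), not CA's actual pipeline:

```python
import numpy as np

# Random binary user x page "like" matrix: 200 users, 50 pages.
# Real analyses use far larger matrices; this is only a sketch.
rng = np.random.default_rng(0)
likes = (rng.random((200, 50)) < 0.1).astype(float)

# Center each column and take the top-k singular vectors
# (equivalent to PCA on the like data).
centered = likes - likes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 5
user_traits = U[:, :k] * S[:k]  # each user summarized by 5 latent scores
print(user_traits.shape)        # (200, 5)
```

Each row of `user_traits` is a user's coordinates along the top latent dimensions; with real like data, those dimensions are what get interpreted as personality "axes".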
American politicians of both stripes have realized that the private sector now wields propaganda power comparable to the government's; hence the post-election "Russia" propaganda burst and the full-court press on the story by friendly newspapers to rein everyone in before they are neutered.
But the Russian state does spend billions on their secret services, and they consider America their "Main Enemy". Billions buys you something. I'd be shocked if there weren't more shoes to drop about Russia. But for now, we should be focused on domestic disinformation.
"The Data That Turned the World Upside Down
How Cambridge Analytica used your Facebook data to help the Donald Trump campaign in the 2016 election.
People understand and accept the concept and execution of advertisement. Propaganda is not received in the same way.
I think we want to believe that, but it hasn't been true for many years. Presidents sell a brand, unfortunately, just like large companies do in their commercials, with the same psychological and rhetorical tricks.
One of my favorite examples I always bring up is this: http://adage.com/article/moy-2008/obama-wins-ad-age-s-market... Notice how, with much fanfare, everyone happily handed his campaign the marketing award. Normally that is not awarded to political candidates; it goes to Coke, Pepsi, Apple, etc.
"I honestly look at [Obama's] campaign and I look at it as something that we can all learn from as marketers," said Angus Macaulay, VP-Rodale marketing solutions "To see what he's done, to be able to create a social network and do it in a way where it's created the tools to let people get engaged very easily. It's very easy for people to participate."
Social network, they say? They couldn't mean using Facebook, could they? But I think they do. And unsurprisingly, Obama's campaign used the same methods as CA did:
Any time people used Facebook’s log-in button to sign on to the campaign’s website, the Obama data scientists were able to access their profile as well as their friends’ information. That allowed them to chart the closeness of people’s relationships and make estimates about which people would be most likely to influence other people in their network to vote.
> Propaganda is not received in the same way.
That's exactly why it is disguised so as not to be perceived as blatant propaganda. It works best when it sneaks in via a seemingly unbiased publication, a news story, a comedy skit, etc.
I think you're probably right. There's a different emotional impact between being manipulated to consume versus being manipulated in what you think. Of course at the end of the day it's exploiting a similar vulnerability in our wetware.
If the propaganda was fundamentally truthful and respectful (e.g. sharing additional accurate factual analysis that people just didn’t know about), it wouldn’t have quite the same odious smell.
There’s not that much distance between some of the ads and fake news stories flashed in front of people before the election and e.g. ISIS recruiting materials or Nazi propaganda from the 1930s.
The problem is when huge amounts of money get mixed up in it. In the US, money doesn't buy you political power directly, but it does buy you a voice (in the form of advertisements using mass media). It's still up to the listeners to listen to your voice one way or the other, but the disproportionate loudness of people's voices ensures that arguments backed by money are amplified much more loudly than those without money (this is the thesis of Manufacturing Consent). OK, this is less than ideal, but things probably aren't skewed that much regarding things like social issues.
Political advertisement, in my opinion, doesn't veer into the realm of propaganda until one of two things happens: either the source is dishonest about their intentions and true beliefs (e.g. a person fully aware of climate change publicly denying it for financial reasons), or their arguments are veiled in a way such that they do not appear to be advertisement at all. For example, suppose out of 100 homicides in the US, 10 were committed by Green people against Purple people, but a news organization decides to cover 5 homicides this week and focuses solely on the ones between Green and Purple people. That doesn't look like an ad, even though it is one. The problem is that there's big money in this type of propaganda; these days political power is all about controlling narratives. It allows for a type of "inception" of beliefs and values - for example, making Green people think they're on the brink of a race war with Purple people - by letting people come to conclusions themselves after being presented with a highly slanted distribution of input.
This type of belief-inception is precisely what Cambridge Analytica specialized in. By knowing demographic information, they could target individuals based on issues they knew they would be sensitive to, and slowly indoctrinate them with desired views. I'll use my race example again, because Robert Mercer is essentially an unapologetic racist: https://en.wikipedia.org/wiki/Robert_Mercer_(businessman)#Ra.... You start by building a narrative, highlighting race-related conflicts and painting a picture of deteriorating race relations. Like a self-fulfilling prophecy, this stokes racial tensions, creating more incidents for you to curate. By misrepresenting the relative frequency of these types of occurrences, people gradually come to the conclusion that you want them to: in this case, that black people are becoming more racist towards white people. You can use this to bring over working-class white people to the Republicans. Another good example of this type of indoctrination was Gamergate (which essentially birthed Breitbart / the alt-right) being used to galvanize frustration with the social justice sphere into creating a community of young male "race-realists".
A principle we could rely on is openness. Just disclose who is paying for what. And disclose the ads.
If Trump/CA/Russia targeted an ad at you distorting HRC's record, we should know who paid for it.
Up until now, political ads - TV, billboards, even direct mail - were discoverable to the American public, so big distortions could be called out (even if they sometimes were not, as with GWB's racist attacks in SC on McCain's adopted kid.)
But the current setup, where Facebook ads are effectively secret, is a big, big problem. How do we know the ads were all honest? Let's just have FB release the 2016 ads so America has time to figure out what to do before 2018.
Then we require that all money spent on politics (AND related political influencing, like money to Jud Watch and Cit U and Cato and AEI and Bradley Fdn and Am First and Koch Found and Americans for "Prosp" and the NRA) requires public disclosure of who's behind it.
We already do much of this for direct campaign donations and in real estate. It's just a matter of political will. And one side has spent 4 decades and hundreds of billions on creating this money-first system, so they are very invested in not changing it. If you care, the first thing to do is get Congress to pass a law rescinding most of Cit U decision.
The DNC and their friends in the media are no slouches at it either. Or was it someone else hinting that a Nazi revival was underway, not to mention some sort of equivalent movement of misogynists who were determined to put women back in the kitchen where they belonged?
> I'll use my race example again, because Robert Mercer is essentially an unapologetic racist
Is there more to the "unapologetic racist" charge than the 2 sentences in your link? If not, most any Libertarian is probably guilty of "unapologetic racist" level crimes as well. Idiot may be a more appropriate label, but each to his own.
> By misrepresenting the relative frequency of these types of occurrences, people gradually come to the conclusion that you want them to: in this case, that black people are becoming more racist towards white people.
The far more common narrative in this election was that white people are becoming more racist towards black people. And not just mildly racist, but full-on Nazi racist. The disparity between what you see on TV and read in the newspaper vs. what you see when you actually get off the couch and go look around makes it pretty clear that the media is not lying, but selectively choosing stories, and the frequency of stories. Selective and deceptive reporting is shamelessly obvious in right-wing media (let's not kid ourselves, the viewers are not too bright), but there is plenty in liberal media as well; it's just extremely well done.
> You can use this to bring over working-class white people to the Republicans. Another good example of this type of indoctrination was Gamergate (which essentially birthed Breitbart / the alt-right) being used to galvanize frustration with the social justice sphere into creating a community of young male "race-realists"
Here there is some substance, except hardly anyone knows about Gamergate; I've heard of it but have no idea what it is. But I do know that there is a non-imaginary new social justice movement who hold many utterly delusional beliefs that they love to shout at the top of their lungs given any opportunity, and I think that had a MAJOR effect on pushing a lot of people to the right.
I think you're mostly bang on with your ideas, but I think you have a filter on and don't realize it. I'm sure I do as well, but I'm perfectly comfortable to acknowledge and discuss it, unlike most of my ideological opponents on the other side of the fence.
Oops.....look like the censors finally caught up to me so it will be a while before I can submit this comment. No hard feelings, all's fair in the political propaganda war, gotta control that narrative after all!
There is a huge huge difference between the sides. And that difference gives a huge advantage to one group: billionaires who can use their money to lobby to retain more wealth from the economy.
Nothing new here, seriously. Propaganda from both sides before elections has existed for as long as there have been political debates and political campaigns. The fact that we now have systems to make propaganda more targeted may make it more effective than before, but that's all. In the end, believing or not believing propaganda is the individual's responsibility.
I agree that we need better education or something like it to harden people against propaganda, but I don't see these different approaches to the same problem as mutually exclusive. And while I would want more funding and different methods to be explored in education independently of this, and believe it could yield amazing benefits for society as a whole, I recognize that the first option might be more cost-effective.
In the meantime, it's easy and possible today to require that all political spending MUST come with disclosure of funding. No more secret donations to Super PACs or Heritage or Hillsdale.
It's literally the first amendment.
Platforms like Facebook would be well within their rights to try to prevent politically targeted advertising, even if it would be a fool's errand. Outlawing it would be unconstitutional.
If they try to prevent "spreading of false information" by political advertisers I've no doubt they will simply be harsher on the propagandists who have a political aim at odds with Facebook's interests, one of which is stopping these hit pieces by those angry that Trump won.
Nobody would be talking about this if Cambridge Analytica worked with Hillary. They simply want to stop their opponents from using the useful tool that is targeted advertising.
Outlawing political advertising is not what I propose. I believe propaganda is different in its intent: in my opinion, direct disinformation or dishonesty (with intent) would be a sufficiently high bar, since it would require a high standard of proof that the supporters were seeking to manipulate opinion with lies. This would ensure nobody would be prosecuted except in the most egregious of cases. I also believe this could be a valid exception to the first amendment in the same vein as libel or slander: spreading false information, with intent, possibly for personal gain. The parallels certainly exist.
Furthermore, I believe I would be just as outraged if Hillary did this, and I think this is a pointless distraction. I didn't vote for her, and I know that she also had her own shady internet propagandists working too. I think we should do our best to make sure political discussions happen organically, from real people.
No, there's new Cambridge Analytica information, as of today:
The fact that computers are placing digital picture ads (which are cheap to produce) allows the extreme cases we've seen to happen.
People don't think.
They assume protections are in place that aren't there, because things like the Bill of Rights do not apply.
However, the business model of companies like Facebook is easy to understand if you aren't paying for the service. That's what Facebook users should realize by themselves.
Most people think Facebook is a convenient way to keep in touch with family and friends, and most of my friends and family are bemused that I don't have a FB account, and think that any concerns I have about privacy are overblown.
Using the data internally to target ads. Once you let it out the door, you no longer have an exclusive asset on which to charge rent.
I’d like some explanation of why $1.2 billion spent on Clinton, some of which came from Saudi Arabia, Canada, the UK, Australia, and Norway, is all well and good, but $500,000 spent against her is “influencing elections”.
Is Facebook advertising that effective!?
I’d really appreciate if people stopped using the term “influencing elections” that’s the whole point of campaigning. In related news, you don’t have to like him, but Trump won fairly.
Restricting political advertising is not such an unprecedented concept. The risks are just too fundamental.
“The people whose job is to protect the user are always fighting an uphill battle against the people whose job is to make money for the company,” said Sandy Parakilas.
If your company hasn't figured out user privacy yet (Facebook hasn't), you might want to look for the exit.
If your company treats you badly, look for the exit.
If your company treats you like you are expendable, you are.
If your company treats users like they are replaceable, they are - and when they have burned out all the users, the company will catch fire and sink.
If you are not aware, here's where most of your Equifax data that's been leaked online comes from; send in a request:
You can confirm employer participation → Login → Find Employer Code, if someone wants to scrape the DB list.
One's market value doesn't crash by tens of billions when investors learn everything is going as intended. This is a side effect of Facebook's business model which Facebook ignored. Chickens are coming home to roost.
I don’t think we have fully enough information yet, but if a political campaign is using analytics to clearly advertise their campaign, fine, that’s being straightforward.
If a political campaign is posting in ways that do not clearly label it as a political campaign, and is lying to people viewing the data it is paying to show, would you agree that’s kind of a different situation?
There’s not enough information yet I think to claim what was shown, but if political campaigns are not labeling their ads clearly, that is in violation of a variety of state - and some federal - laws.
Also, rationalizing cheating, because they're certain everyone's doing it, so it's only proper when the better cheater wins.
> Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.
> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
Whataboutism taints conversations when it's an excuse; other kinds of excuses also shut down conversations that should be had.
More plainly, the CA approach to starting the graph was nauseatingly scammy, but how many friends of Obama supporters (and perhaps Clinton - the API changed before the campaign, but maybe some data persisted with the DNC) were aware that their data was being processed by political parties?
Hypocrisy is what I care about, and there's enough of it to repave the entire Interstate system. When someone criticized Obama or Democrats, the first words in response were some variation of "Bush..." Blame Bush was a competitive sport. What about that?
But it is disingenuous to use it to disregard others who point out hypocrisy. If you want others not to use a useful strategy, you can't use it yourself and then whine when they respond in kind, telling them they should stop without making any assurances that you yourself will. It's like telling someone they should only fight with fists while you're wearing brass knuckles.
Say targeted advertising is like a nuke. If you complain when your enemy drops a nuke on you, but not when you drop a nuke on them, your problem is obviously not with nukes, just with your enemies dropping them on you.
This whole media campaign against Facebook is aimed to prevent something like Trump 2016 from ever happening again by denying the people who *shouldn't win* modern tools. It has nothing to do with privacy.
See also, "consistency."
1. How Obama’s Team Used Big Data to Rally Voters (MIT Technology Review, 2012)
2. How Trump Consultants Exploited the Facebook Data of Millions (NYT, 2018)
No bias here.
It is a rationally induced bias.
And that's ignoring the fact that Cambridge Analytica was apparently breaking laws.
Either way, it is ok for the media to 'influence people'. If you're going to be vague, then we may as well say that is their whole reason for being. And if I wanted them to advocate one message over another, what difference is it to you? That's politics.
Voter suppression is a term of art that means something. Democrats generally don’t engage in it because more people voting usually translates to more people voting democrat.
It's also not news when it's some story about a town in <state> adopting Sharia law. At least the drug pricing thing is halfway true in some convoluted form.
This person asserts that people from Facebook gave them their blessing because FB was "on our side". However, she says that from what she knew, FB was on the other team's side too. Kind of need more specifics about who from FB said what, and what "suck out the whole social graph" means. But it's still a different situation than what CA is being accused of, which is using the guise of a quiz app to mine the social data of the quiz participants' friends.
In contrast, the Obama campaign Facebook app/outreach was explicitly connected to the Obama campaign efforts, i.e. people who signed up for the app knew they would be explicitly allowing this Obama-connected app access to info/friend data.
edit: Here's a tweet by someone on the Obama campaign, angrily responding to a tweet by Cambridge Analytica:
> I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work.
Of course, we shouldn't take Obama's team at their word that absolutely everything they did was on the up-and-up. But it's important to acknowledge that there are distinct differences between what we know of their work so far compared to what has been revealed with CA.
In other words, it's fair to say that the Obama team was lauded for their "innovation" at mass usage of FB data, which they talked about publicly. It is unfair to say that what they talked about publicly is anything like what CA is currently being accused of.
edit: I more or less agree with u/makomk that @mbsimon (the staffer who tweeted angrily at CA) is not giving the most complete description of how Obama's campaign harvested FB data: https://news.ycombinator.com/item?id=16624794
But isn't the bigger problem sucking up the entire social graph from a small seed of users, not how those users signed up in the first place? If I'm getting spammed via a friends-of-friends connection, I'm not particularly worried about the pretense under which that initial vector signed up.
> Once permission was granted, the campaign had access to millions of names and faces they could match against their lists of persuadable voters, potential donors, unregistered voters and so on. “It would take us 5 to 10 seconds to get a friends list and match it against the voter list,” St. Clair said. They found matches about 50 percent of the time, he said. But the campaign’s ultimate goal was to deputize the closest Obama-supporting friends of voters who were wavering in their affections for the president. “We would grab the top 50 you were most active with and then crawl their wall”
In the next paragraph, FB said it was "satisfied" that this met their data and privacy standards. Which is a bit curious because IIRC, it was not kosher to store data scraped from FB for any reason beyond reasonable caching (to prevent unneeded API requests), never mind for independent data collation and analysis. I would bet that the users who did knowingly sign up for the Obama app did not think the app would be scraping the walls and photo albums of their friends and attempting to do friendship-strength analyses.
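The matching step the article describes (take a user's friends list and look each name up in the voter file) is conceptually simple. Here's a minimal sketch of that kind of name matching; the normalization scheme, field names, and sample data are assumptions for illustration, not the campaign's actual code:

```python
# Hypothetical sketch of friends-list/voter-file matching as described above.
# Real voter-file matching would also use address, birthdate, etc.; this only
# illustrates the basic lookup.

def normalize(name: str) -> str:
    """Crude normalisation so 'John  Q. Smith' and 'john q smith' compare equal."""
    return " ".join(name.lower().replace(".", "").split())

def match_friends_to_voters(friends: list[str], voter_file: list[str]) -> list[str]:
    """Return the friends that appear in the voter file after normalisation."""
    voters = {normalize(v) for v in voter_file}
    return [f for f in friends if normalize(f) in voters]

friends = ["Alice Johnson", "bob  smith", "Carol Danvers"]
voter_file = ["ALICE JOHNSON", "Bob Smith", "Dave Grohl"]
print(match_friends_to_voters(friends, voter_file))
# → ['Alice Johnson', 'bob  smith']
```

Even this naive approach can plausibly hit the ~50% match rate quoted, since the hard part is record quality, not the lookup itself.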
CA still has an extra level of subterfuge, but I agree, what the Obama campaign is reported to have done is definitely not as innocent as the Obama campaign staffer claims in the aforementioned tweet.
You don’t see “Sign up for this quiz to find out your true personality” as different than “sign up to support change and spread the word about Barack Obama”?
Facebook is a cesspool, but shady onboarding tactics make for a far more dangerous cesspool.
Rephrased: Both campaigns spammed non-signees, and it looks like the CA people spammed signees as well.
> people who signed up for the app knew they would be explicitly allowing this Obama-connected app access to info/friend data.
What on earth makes you think they "targeted friends"?
It's all well established: get enough people together and you can nuke any story you want and take over subs with time. Facebook's crime was getting caught helping the wrong people.
But because of how things played out, we slowly became more introspective and started questioning. Scandals involving sexual impropriety with actors, culminating in the #metoo movement, is part of this introspection. And there will be increased scrutiny in social media and its role in enabling the current situation.
There's still the fact that FB _did_ close off the info that CA was holding onto, and they saw it as something they no longer wanted to offer their ad clients.
The simplest explanation is that FB is trying to do damage control for being way too liberal about its data sharing in the past, because it will generate more scrutiny for their present policies (even if they are "better" than before). Even if they're improving, many might not think they've improved enough.
Who knows, if it gets any worse, we might finally be convinced to pay for our things.
Simple: stock price. Even now, FB stock is down nearly 7%. So Facebook will try to limit the damage as much as possible until it is no longer possible to do so. After that, they will "fully cooperate" with the authorities.
Uhh....that's not good.
In effect, this is a sanctioned data breach. Facebook opened the firehose of user data by knowingly keeping access to their developer APIs very lax while doing nothing to prevent developers from storing the data they accessed.
That's a very serious breach of consumer trust. A terms of service is only as good as your users' ability to understand its implications. Just because users check a box doesn't mean Facebook is any less liable.
have now "stood down"
Let's not pretend the ICO has unlimited funds, people and legal resources to instill the fear of God into companies. Like many other departments and organisations, it's been badly hit by "austerity" measures.
It's mostly funded by organisations that process data, plus some grants from the Ministry of Justice for Freedom of Information work, the latter affected by "significant reductions ... for our current levels of FOI work".
Their current full year budget forecast is £25M (and costs of £26M)
> Elizabeth Denham [the Information Commissioner] says it's nonsense to suggest her office will be handing out huge fines routinely once the General Data Protection Regulation comes into force. "Predictions of massive fines under the GDPR that simply scale up penalties we’ve issued under the Data Protection Act are nonsense."
So the ICO might do nothing. But Max Schrems's new org NOYB might.
See for example: http://www.bbc.co.uk/news/technology-43465700
Facebook's product is not selling that data, it's selling ads using the data. You can only sell the data once, you can sell the ads forever.
It's worth rewatching this video: specifically, Project Alamo (Trump's digital campaign) had Cambridge Analytica inhouse, and they had their Facebook/Google staff in the same building. It's easy to imagine that people at Facebook knew what data CA had, and have knowingly lied since.
Of course, when the ICO gets involved then whether CA breached Facebook's EULA or not is moot, and Facebook become relevant only inasmuch as the question of whether their own executives breached or encouraged breaches of data protection laws.
And if the collaboration is doing something illegal, obviously personnel from both companies should be charged. And FB can do whatever investigation it wants (as long as it's legal) and the authorities are free to ignore their findings.
I wonder how FB views public scraping. Shades of webcrawlers.
> Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica.
Facebook being at CA isn’t illegal in and of itself. The important factor will be what they were doing there.
Do we have a reputable source corroborating this claim?
Guardian Profile: https://www.theguardian.com/profile/carolecadwalladr
I'm sure we'll see an article published by the Guardian on it by the morning in the UK
Hopefully the people responsible will go to prison for interfering with the investigation.
Between Facebook’s political issues and the happiness-depressing effects of its use, I think it is pretty easy to draw the conclusion that Facebook is a net negative for society. This is without even taking into account the amount of PII that has been concentrated into a single entity (who monetizes it), or the effects of algorithmically appealing to people’s desires.
A hundred years from now, Equifax, YouTube, and Facebook will be lumped into the same pile: companies who profit off of information about consumers. The algorithmic veneer that protects YouTube and Facebook will be gone by then.
I’m not trying to condemn anyone, and I’m not in the position of having to weigh providing for my family with making ethical choices.
But, I think it is clear that change for Facebook will not come from the top. It will only come as people leave.
a) We're often small cogs ...
b) ... working on often interesting technical problems that require much detail ("think down here" I was once told by a manager, who put his hand to the ground, "not up here" he said putting his hand up and waving it) ...
c) ... and we don't always get to choose. Not everyone is a superstar who can leisurely pick and choose among exciting opportunities. And yes, most of us have rent/mortgages/children/other obligations to concern ourselves with.
...even if we aren't necessarily all amoral.
1 Luckily I outlasted him in that company. :-)
While it may not affect current employees, I do think vivid stories like this make the allure of joining Facebook less compelling for the next generation of programmers. It also may influence just a few people in hot fields who have many opportunities to choose from (such as the top researchers in AI).
You don't even need to look at meta-effects of Facebook. Look at how it operates, in effect. It splits people into mutually exclusive echo chambers that are falling increasingly far away from reality in terms of median ideological view. Far from connecting people, social media has become, arguably, the single biggest factor in societal division in modern history. People even speak of this casually without realizing the implications of what they're saying - 'I can't believe what [non echo chamber approved views] my [friend/family member/acquaintance/etc] has. Unfriending!' Of course these views and differences always existed, but in typical social interaction agreeing to disagree on issues is fine. In the social media era, people have started to condemn people over any failure to abide group ideology. It's cult-like behavior without the formality.
There's no way in the world you can possibly spin this into a positive or unifying force for society. You've even had founders and executives of the company speak out against the social harm it is causing. The point of this is that there's no 'algorithmic veneer' protecting YouTube and Facebook, and I strongly doubt Zuckerberg himself has any delusions about what he's doing. Even most users themselves could easily reason that Facebook is a net negative. But they enjoy and/or are addicted to the services, so they keep using it. It's slot machines on a global scale, where instead of inserting coins - you insert your personal information and get that dopamine rush when somebody likes or otherwise interacts with you.
As for employees - you'll never make a company change from the bottom up. Most people don't work for ideologies - they work for money. And Facebook has deep enough pockets to ensure that they'll never suffer for a lack of employees.
>YouTube, the Great Radicalizer
>James Damore, Google, and the YouTube radicalization of angry white men
>PragerU doesn’t disguise the fact that it is waging a war for young minds. Though the site’s videos are clinical, their cumulative function is to proselytize, and the language PragerU uses to describe its mission is religious.
Over the past year my view of Facebook has shifted from "I kinda don't like the privacy implications, but it is very useful for following what my extended family (most of whom live literally across the country) and friends are up to" to "why is this fucking thing sending me all these useless notifications all the time? (rhetorical question, I know why it is...) am I getting enough benefit out of it to be worth all of this or should I delete it?"
> I have deleted my Tweets on Cambridge Analytica, not because they were factually incorrect but because I should have done a better job weighing in.
Archive of those deleted tweets: https://twitter.com/aprilaser/status/975078309930311680
EDIT: Stamos responds to news:
> Despite the rumors, I'm still fully engaged with my work at Facebook. It's true that my role did change. I'm currently spending more time exploring emerging security risks and working on election security.
Also, I think the real problem here is that the media is attempting to politicize the term "breach," and security professionals are rightly offended.
Is it fair to use the term breach from the perspective of the user whose data has been acquired? Or is breach only in reference to what the company that collected the data intended to do with it?
There’s also seemingly two types of breaches at play:
1. The idea of a security breach, where a company gets “hacked”
2. The idea of a breach of trust, where people had given a company data in good faith that it would not be abused, and then had it abused, even going against that company’s TOS
The case of a "breach of trust" is a different story, and the problem emerges when you realize that what defines "private data" (the plunder from a breach) is nothing more than an arbitrary set of restrictions, set forth by the platform producing the data itself. Without Facebook, none of this data would exist. Without the Facebook API, no app would be able to collect this data within a sanctioned platform.
Because Facebook exists, and because Facebook offers an API to its data, Cambridge Analytica was able to collect "private data" on users. But it never needed to circumvent any technical barriers to collecting the data it extracted. The Facebook API and platform willingly supplied the data to Cambridge Analytica, as it did and does to thousands of other apps.
If it constitutes a breach that Facebook supplied that data to Cambridge Analytica, then there must exist some "bug," technical or not, that Cambridge Analytica exploited to gain access to the data. What is the bug? Can Facebook identify it, document it, and rectify it? If not, can Facebook really classify it as a breach?
The fact is, there was no bug. The Facebook API and platform worked as designed and documented, and supplied all data as expected to Cambridge Analytica, along with user authorization to supply that data.
If Facebook were to classify this as a breach, they must also point to the "bug" or "vulnerability," or whatever they want to call this, that enabled and precipitated the breach. Unfortunately, there is nothing for Facebook to point to, because the real vulnerability is the system itself. Facebook created an ecosystem of private data, and Facebook defined the boundaries for access to it. Facebook cannot claim an app, that was explicitly within the boundaries of its ecosystem, utilized the Facebook API in a way that constitutes a "breach." Facebook is the only entity in control of the boundaries defining a breach, or what exactly constitutes "private" data, so trying to call this a "breach" is like changing the rules mid-game.
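The access pattern at issue can be made concrete. Under the pre-v2.0 Graph API, an app authorized by one user could walk that user's friend list and read permitted fields about each friend, all through documented, sanctioned endpoints. This is a hypothetical sketch of that pattern; the fetch function and payloads are stand-ins, not real API calls or real Facebook response shapes:

```python
# Illustrative stand-in for the Graph API: canned responses keyed by path.
FAKE_GRAPH = {
    "/me/friends": {"data": [{"id": "friend_1"}, {"id": "friend_2"}]},
    "/friend_1": {"id": "friend_1", "likes": ["hiking", "jazz"]},
    "/friend_2": {"id": "friend_2", "likes": ["chess"]},
}

def graph_get(path: str) -> dict:
    """Stand-in for an authorized HTTP GET against the API."""
    return FAKE_GRAPH[path]

def harvest_friend_likes() -> dict[str, list[str]]:
    """One consenting user yields data about every friend. Note that no
    technical barrier is circumvented: each request is an ordinary call
    the platform was designed to answer."""
    result = {}
    for friend in graph_get("/me/friends")["data"]:
        profile = graph_get("/" + friend["id"])
        result[profile["id"]] = profile["likes"]
    return result

print(harvest_friend_likes())
# → {'friend_1': ['hiking', 'jazz'], 'friend_2': ['chess']}
```

This is why calling it a "breach" is contested: there is no exploit to point at, only a permission model that, by design, let one user's consent expose many users' data.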
There are two problems with that:
1) That view is already too narrow for practical security engineering. It's not enough to have a technically correct solution, you need to consider the entire product to ensure that it has the expected security properties in the way it's actually used.
2) Worse, it ignores how the message is going to be interpreted outside of the computer security field, which is especially important when the company is under political scrutiny.
For a C-level executive, it seems like an unfortunate lapse.
Or if not technically "intended" then well within the boundaries of what FB is willing to tolerate as long as it's making them money.
They absolutely deserve most of the blame.
"I never expected them to disappear, I was hoping to reduce the rate at which people were intentionally misreading them."
Why Facebook employees are doing PR on Twitter, a platform designed for intentional misreading, is the question.
Well it made his company look bad and now he's gone from that company ahead of schedule. Sometimes things are as simple as they seem on the surface.
I think what's missed in this conversation is that this sort of shenanigans isn't really in the purview of a CSO anyway. Too bad he got himself mixed up in it.
I can see why Facebook would not want that out there.