Facebook Security Chief Said to Leave After Clashes Over Disinformation (nytimes.com)
1074 points by aaronbrethorst on Mar 19, 2018 | 395 comments



There's breaking reporting that Facebook just had personnel in the Cambridge Analytica offices before the UK authorities could get there with warrants.

https://twitter.com/carolecadwalla/status/975844154361221121

> BREAKING: Facebook WAS inside Cambridge Analytica's office but have now "stood down" following dramatic intervention by UK Information Commissioner's Office..

https://twitter.com/carolecadwalla/status/975855218490519552...

> To be clear, @facebook was trying to "secure evidence" ahead of the UK authorities. Nice try, @facebook. The UK Information Commissioner's Office cracking whip...British legal investigation MUST take precedence over US multibillion $ company.....

Something VERY wrong is going on at Facebook.

edit, with another account:

https://twitter.com/DamianCollins/status/975856097163702272

> Facebook have confirmed that auditors and legal counsel acting on behalf of the company were in the offices of Cambridge Analytica this evening until they were told to stand down by the Information Commissioner. These investigations need to be undertaken by the proper authorities


I seriously feel like I’m missing something here, why isn’t Facebook fully behind getting to the bottom of this? Going back even further, why was it so difficult for them to even admit they had a problem during the election? I don’t think it’s as simple as “more money,” but maybe as simple as people too close to the problem and too enamored by what they’ve created?


Because the "breaches" and "abuses" aren't breaches or abuses, it's Facebook's business model working as intended.


This fact has me more confused than anything.

What did most people think Facebook was doing... Keeping all the data locked away and never letting anyone make use of it? The interest in that data by political entities should have been especially obvious.

Additionally, this information about Cambridge Analytica came out months ago. I remember first hearing about it on a podcast that was mostly focused on the personality profiles. [1] However, it was open about CA obtaining the data via the survey. (I'll provide a link as soon as I dig it up again). Suddenly, months later, this story is exploding in media and political rhetoric.

It makes me wonder whether this isn't more about a concerted effort to force regulations that give someone control over these platforms.

After seeing what happened in the 2016 elections around the world, I'd imagine many people became interested in ensuring social media would work for them next time around.

[1] https://www.stitcher.com/podcast/wnyc/note-to-self/e/5231252... (not -the- podcast I was trying to find, but still has the info as far back as November)


> Additionally, this information about Cambridge Analytica came out months ago.

Minor nit: Years ago.

The name Cambridge Analytica had been circulating in hacker circles since at least mid-2016. Thus far, the only new revelation was the Channel 4 reporting [1].

Security nerds have been complaining about Facebook's business model for years and it fell on deaf ears.

Suddenly, the public gives a shit and I don't understand what changed.

[1]: https://www.channel4.com/news/cambridge-analytica-revealed-t...


Perhaps part of what's changed is that just a few weeks back Cambridge Analytica claimed — in writing, to the UK Parliament — that they did not harvest profiles from Facebook:

"On 8 February 2018 Mr Matheson implied that Cambridge Analytica "gathers data from users on Facebook." Cambridge Analytica does not gather such data." — Letter from Alexander Nix, CEO, Cambridge Analytica, to the Chair of the Committee, 23 February 2018 (PDF, linked from below page)

https://www.parliament.uk/business/committees/committees-a-z...

— And now, it's basically been 'proven' by FB that they do. Of course the ICO is gonna get involved now.

Edit: plus there's also this Channel4 investigation which went public today:

"An undercover investigation by Channel 4 News reveals how Cambridge Analytica secretly campaigns in elections across the world. Bosses were filmed talking about using bribes, ex-spies, fake IDs and sex workers."

https://www.channel4.com/news/cambridge-analytica-revealed-t...

— I think that most certainly puts CA into all kinds of shit they weren't in just 24hrs ago.


They also testified to Parliament that they had no business in Russia, and an ex-founder (who left on bad terms, admittedly) has since come out and said that when he was still there, they were sending a Russian oil company information about how to target American voters.

https://www.nytimes.com/2018/03/17/us/politics/cambridge-ana...


Question from a [mostly] ignorant person (me, about these subjects):

Why would Russian intelligence use an oil company as a shell? Does Russia sell oil to the US? If so, do they market directly to individuals in the US?


Yes. Yes. Lukoil gas stations.


You must not understand how deeply tied the oil companies in Russia are to their government. They are essentially the same entity.


That I understand, but it seems like oil companies don't market directly to _people_, that they would collect data on individuals they don't market to seems highly suspicious in and of itself.


Nothing changed. What happened in many of these cases (Weinstein, Facebook) is that the story was thoroughly researched and reported in depth by a team of professional journalists working for the New York Times.

My only point (if there is one) is that, despite the massive proliferation of blogs and amateur media, it often takes a professional, salaried reporter to bring an important story into the public eye.


> the story was thoroughly researched and reported in depth by a team of professional journalists working for the New York Times

No. It was Ronan Farrow that originally broke the story. The NY Times took their sweet time before going after someone that was an ultra-major Democrat party donor. The Weinstein story was rejected by multiple "professional" news outlets.

The "professional" journalists working for the NY Times KNEW about the Weinstein story as far back as 2004, but spiked it under pressure from various Hollywood interests. Professional journalists aren't supposed to spike stories for political reasons and that's exactly what the Times did in 2004.

Weinstein's office in Tribeca was right downstairs from the Tribeca Film offices. It was on the 3rd floor. Spend a few hours in that building and you could probably have heard a dozen stories in whispered tones about Harvey. Some professional journalist from the NY Times should have had this story years ago. A high school journalist could have written this! And Kevin Spacey? Anyone in Hollywood would have known about Spacey as far back as 2004, or perhaps earlier. It was an "open" secret. So open that it was a joke. It started becoming more "known" when Spacey was working with Sam Mendes on American Beauty in 1999.

Give me a fucking break. Professional journalists sat on this story, ignored it or conspired to crush it. It took a rookie, Ronan Farrow, trying to make a name for himself while on a personal mission against that horrible abuser Woody Allen for this to all break.

https://www.smh.com.au/entertainment/movies/russell-crowe-ma...


I was unaware of the Farrow article, thank you for noting it. Its role in publicizing the Weinstein scandal doesn’t diminish my point, however. Farrow is hardly an amateur; as a Rhodes Scholar and Yale Law graduate, he’s been published in numerous journals and works as an investigative journalist for NBC News. His article on Weinstein was carried by The New Yorker, a magazine with a stellar reputation for high journalistic standards. My point that it takes a pro to give credence to a big story still stands.

I understand your reluctance to credit the Times given how they sat on the news for years before finally publishing their expose. Nevertheless, the Times article came out on October 5, 2017 — five days before Farrow’s story hit the wire on newyorker.com. So technically the Times still gets the scoop — and most of the credit :-).


I don't know anything about the particulars here, but this explanation seems very strange. Does the NYT have someone on the staff at New Yorker magazine to make sure they never get scooped by a weekly?


No, the NYT story came out before Farrow’s story — Oct 5 vs Oct 10: https://www.google.com/amp/s/mobile.nytimes.com/2017/10/05/u...

It is commendable that Farrow investigated his story independently, but that does not mean the NYT “took their sweet time” — certainly, I haven’t seen Farrow demean their work. The reporters and editors who worked on the NYT’s coverage were not on staff 13 years ago. Story ideas aren’t passed down from generation to generation.


NYT finally published because Ashley Judd went on the record. NYT knew just like so many others, but that was really mostly just based on rumors...how do you possibly publish something like that and risk the lawsuits? You need solid evidence to make taking the risk a responsible choice, and in a case like Weinstein's, that really means at a minimum someone needs to go on the record.


I don't know whether Weinstein would have qualified as a "Public Figure" before the news coverage.

If not, he would have been in a substantially better position to sue for libel.


He was one of Hollywood’s most well-known producers, he most certainly qualified as a public figure.


> The "professional" journalists working for the NY Times KNEW about the Weinstein story as far back as 2004, but spiked it under pressure from various Hollywood interests.

Would you care to stand that up? Because it seems unsubstantiated.


And yet, almost everyone continues to insist that the system is more or less clean, even though it is blatantly obvious that news and politics are lies and propaganda from top to bottom.

Bring on the smug downvotes boys, but until all media, social networks, and the overall internet can be brought completely under government control, it will continue because there is simply too much criminal activity going on everywhere.


Quality authors affiliated with prominent, prestigious publishers, printed on the finest cloths imaginable. Not only were their words and the patterns extraordinarily beautiful, but in addition, this material had the amazing property that it was invisible to anyone who was incompetent or stupid.


That's because there's an army of people who are quick to discredit anything coming from blogs and lesser known sources as bunk while falling all over themselves for traditional news sources like NYT. Even when the big sources completely fail at their jobs and end up piggybacking on the story. Like you're doing in this thread.


My guess: Selling psychological data to advertisers is boring but if you get Bannon involved and rope it into the Trump news cycle, it suddenly gets a lot of people talking.

https://www.theguardian.com/news/2018/mar/17/data-war-whistl...

This article is probably the biggest one right now.


Selling psychological data to advertisers is boring, but when you get billionaires and foreign powers involved to elect a would-be autocrat who is degrading American democracy, it suddenly gets a lot of people talking.

Yup. It's not a surprise why people are interested in the role of data in modern public debate and democracy.


That's why we need safeguards in place. Everyone was happy when Obama used it and never thought it could be used by someone else too.

Edit: Someone linked this below - http://adage.com/article/moy-2008/obama-wins-ad-age-s-market...


Did Obama acquire the data under false pretense, use it to lie and mislead people, or downright coerce candidates through bribes and sex?


I have seen no evidence that the efforts of “foreign powers” were even remotely close to effective. Has anyone done a study proving that people either changed their votes or didn’t vote based upon the Russian campaign?


I think most people believe Russian hackers literally hacked voting machines to give Trump the election. Subtle manipulation of emotions is too complex for the public.


Wait until people find out American democracy is smoke and mirrors, then you'll see people talking. They'd better get this situation under control soon or that might just happen.


Not just security nerds. Take for example Eben Moglen, professor of law and legal history.

"So I was talking to a senior government official of this government (2012) about that outcome and he said well you know we've come to realize that we need a robust social graph of the United States. That's how we're going to connect new information to old information."

I suspect that the only reason we are hearing about the Trump campaign being the buyer is that the Democrats already had the information. It is also why Facebook cannot put its cards on the table, since then it would not have any political allies left. Just like with the Snowden leak, privacy is not an issue on which the parties differ. One can only hope that will change in the next election.


There's been evidence that the Trump campaign did a massively better job of taking advantage of Facebook than any other campaign, Republican or Democrat, had previously.

There's also evidence that Facebook's advertising models were a massive help -- the Trump campaign and affiliated organizations were paying much lower advertising rates than the Clinton campaign + affiliates, largely because Facebook's model prioritized the kind of controversy-and-outrage-generating stuff Trump was putting out there (since controversy and outrage drive engagement, and engagement is the metric Facebook cares about).


That’s a great statement on the issue almost always faced by security professionals. Almost impossible to impress upon people the stakes when you can do something, almost impossible to do anything once it’s too late.


> Suddenly, the public gives a shit and I don't understand what changed.

The story was linked to Trump?

Also, security nerds are attuned to how personal data might be misused. The public needs a concrete example.


What changed is that an attack vector has been found to take out Cambridge Analytica. The Democrats are in full political warfare with the Trump regime, and this is just another salvo.


p.s. Did you know Steve Bannon looked at tech workers as people he could bamboozle to be his political footsoldiers? That's why the Mercers completely fund Breitbart - to propagandize young, mostly tech, men and boys.

From Josh Green's book: "Yiannopoulos devoted much of Breitbart's tech coverage to cultural issues, particularly Gamergate, a long-running online argument over gaming culture that peaked in 2014. And that helped fuel an online alt-right movement sparked by Breitbart News.

"I realized Milo could connect with these kids right away," Bannon told Green. "You can activate that army. They come in through Gamergate or whatever and then get turned onto politics and Trump.""

Mercer spends around $10M per year funding Breitbart to spread that propaganda.


PS: And the democrats weaponize starry eyed, well-intentioned young progressive millennial and gen-xers. Why do you think the Russia-did-everything story sells so well when they were a bit player at most?

Sorry to burst your bubble, but everyone is in on it.


Democrats are moving to the center (e.g. right) while conservatives move further right. Meanwhile, young progressive millennials and gen-xers are trying to move Democrats to the left and are in a large power struggle.

Not the same.


Conservatives haven’t really moved anywhere since 2010. Most of the political movement is on the Left, with many Democrats being targeted over things like intersectional politics and gender. This season of America is not one to miss.


Spoken like a true believer.


The public does not "give a shit". If Trump's win tells us anything, it's that there is not necessarily a correlation between what gets trumpeted as national news and what people actually believe, accept, or experience.

There are major power players at work trying to a) exert control over the internet as a speech platform; and b) ensure that their political opponents can't win without establishment support. I discussed this some at https://news.ycombinator.com/item?id=16616227 .

As for my grandma, my mom, and my sisters, like most of the public, they don't care about any of this, because this data was mostly already public-enough (friends list restricted) and they just want a platform where they know they'll be able to engage with friends and family.


> Suddenly, the public gives a shit and I don't understand what changed.

Not the public per se but the media suddenly cares.

My guess it's the runup to the US midterm elections, it's never too early to start the mudslinging...


There are always different levels of privacy. Everybody knew Google was "reading" their emails. The understanding was never that Google would expose your individual email to a company. Similarly, people understood that Facebook used the data about them to monetize and advertise to them. I don't think anyone assumed they'd let someone else literally take that data.


I've always been incredibly suspicious of the way Facebook handled its user data. Remember back when every website on the internet had a box that showed "random" people that liked their website, and if you had any friends that liked it, they were in there?

How about all of the services using Facebook OAuth that subtly leak social graph information? For exactly this reason, Facebook OAuth is always my absolute last resort, and I'll almost always skip a service entirely if it's the only option.

Still, I always gave Facebook the benefit of the doubt and I figured that these things were handled by Facebook directly as a "plugin" to the app/website (I'm not a web developer, I don't know the details of how this would work), and the various services didn't actually see the data. It's pretty mind blowing to me that this is not actually the case. I always felt I was being absurdly paranoid about Facebook compared to most people I knew, now it turns out I was not being even remotely paranoid enough.

Does anyone know if Google is similarly aggressive with user data sharing? I've never noticed information leakages similar to the above coming from Google (so I don't hesitate much to use Google OAuth) but me not noticing something during casual browsing is not a very high bar to clear.


It was blatantly obvious to me within a few years that FB had no interest in thinking through the implications of data sharing. There was a time when a friend liking a picture of their friend would show up in my news feed and clicking through it would expose the entire album to me - who is not even remotely connected to that person. That's the day I went frantically removing photos of friends and coworkers from FB and eventually all of mine as well. Facebook is very unlikely to earn back my trust ever.


> I don't think anyone assumed

Everyone who worked at Facebook or on 3rd party integrations knew this for years.


So about 0.01% of the American populace knew that. (Ballpark: call Facebook 30k employees, which is roughly 0.01% of the US population. If you want to add a whole lot of digital marketers, take that even as high as 3M. That's still only about 1%.)


Psychographic data (personalities and traits beyond simple demographics) has long been known and used before Facebook ever existed, it's just much easier now that so many people are feeding data to a centralized system with easy access.

CA's CEO is seen presenting this clearly in 2016, specifically how well it helped the Cruz campaign: https://www.youtube.com/watch?v=n8Dd5aVXLCc

Worldview Stanford has a podcast called RawData where ep 1 of season 1 in 2015 talked about how a few likes on Facebook would let you know someone better than their own family, based on research from 2013: http://worldview.stanford.edu/raw-data/episode-1-uploaded

Obama's campaign used heavy data analytics for both runs, explained as early as 2012: https://www.technologyreview.com/s/509026/how-obamas-team-us...

None of this is new. It was only either ignored or accepted before, and has finally reached critical mass due to the amount of controversy and conspiracy today, along with the generally expected fatigue of social media and the now evident effects it has on most people's lives.


Interestingly, according to the NYT Cruz campaign staffers did not think it worked:

"But Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates. The campaign stopped using Cambridge’s data entirely after the South Carolina primary.

“When they were hired, from the outset it didn’t strike me that they had a wide breadth of experience in the American political landscape,” Mr. Tyler said."

https://www.nytimes.com/2017/03/06/us/politics/cambridge-ana...


This is in some sense an extension of the Myers-Briggs-type personality typing, which in its early days also grew out of data analysis.

People use Myers-Briggs typing at work because it helps you work with (read: persuade) people.

So you can make an argument this is at least 100 years old.

As for why we've reached critical mass - it seems likely the ability to influence democratic elections, and the efforts by enemies of the Western world to use it to undermine democracy, are getting people to notice.


“This is in some sense an extension of the Myers-Briggs-type personality typing, which in its early days also grew out of data analysis.”

What a bizarre statement to make. In this sense anything that is in any way scientifically based “grew out of data analysis.”


No, the history of personality typing is a much more interesting story than that, and depends very specifically on data.

Myers/Briggs/Jung identified 4-5 personality "axes" on which people vary. How did they do that? They basically gave a bunch of people surveys with hundreds of questions and did PCA on the data. They found that 4-5 dimensions explained a lot of the variance. That was a new finding based on data, at the same time that the field of statistics was developing. And it gave insight into how people behave. It's important enough that we frequently use it as a heuristic in workplaces today.

In modern times, we can do the same on much larger data sets. Given Facebook "like" data for 50 million people, you can do dimensionality reduction on the data and extract personality "types". There's no question that this gives you information about people. The question is how well it can be weaponized - that's the debate around CA now.
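The dimensionality-reduction step described above can be sketched in a few lines. This is a toy illustration on synthetic data (the planted two-trait structure, the sizes, and the noise level are all invented for the example), nothing resembling CA's actual pipeline:

```python
import numpy as np

# Synthetic data: rows are people, columns are "likes" (or survey answers).
# We plant two hidden traits that drive the observed 0/1 responses.
rng = np.random.default_rng(0)
n_people, n_likes = 200, 50
latent = rng.normal(size=(n_people, 2))               # hidden personality traits
loadings = rng.normal(size=(2, n_likes))              # how traits map to likes
noise = rng.normal(scale=0.5, size=(n_people, n_likes))
likes = (latent @ loadings + noise > 0).astype(float)

# PCA by hand: center the matrix, eigendecompose its covariance,
# and keep the two strongest components.
X = likes - likes.mean(axis=0)
cov = X.T @ X / (n_people - 1)
eigvals, eigvecs = np.linalg.eigh(cov)                # ascending eigenvalues
top2 = eigvecs[:, ::-1][:, :2]

# Each person's projection onto the recovered axes is a crude "type" score.
scores = X @ top2
explained = eigvals[::-1][:2].sum() / eigvals.sum()
print(f"variance explained by top 2 components: {explained:.0%}")
```

With more rows and more like-columns, the same computation scales up; the debate is about how predictive (and how weaponizable) those recovered axes really are.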


> As for why we've reached critical mass - it seems likely the ability to influence democratic elections, and the efforts by enemies of the Western world to use it to undermine democracy, are getting people to notice.

American politicians of both stripes have realized the private sector now wields comparable propaganda power to that of the government, hence the post election "Russia" propaganda activity burst and full court press on the story by friendly newspapers to rein everyone in, before they are neutered.


Yes, I think that's right - Western societies need to think very carefully about the role of data, the power of private corporations, and the tension between profit-seeking and societal goals. Government regulation is the way we align societal goals with individual goals like profit seeking.

But the Russian state does spend billions on their secret services, and they consider America their "Main Enemy". Billions buys you something. I'd be shocked if there weren't more shoes to drop about Russia. But for now, we should be focused on domestic disinformation.


In the comments of the CA CEO video you linked to is a Motherboard article from Jan 2017:

"The Data That Turned the World Upside Down: How Cambridge Analytica used your Facebook data to help the Donald Trump campaign in the 2016 election."

https://motherboard.vice.com/en_us/article/mg9vvn/how-our-li...


I think culturally we've accepted that political propaganda is different from run-of-the-mill corporate advertisement. Even the words we use are different: few people would call a TV spot "propaganda", even though both seek to influence people to act in a certain way.

People understand and accept the concept and execution of advertisement. Propaganda is not received in the same way.


> I think culturally we've accepted that political propaganda is different from run-of-the-mill corporate advertisement

I think we want to believe that, but it hasn't been true for many years. Presidents sell a brand, unfortunately, just like large companies do in their commercials. With the same psychological and rhetorical tricks.

One of my favorite examples I always bring up is this: http://adage.com/article/moy-2008/obama-wins-ad-age-s-market... notice how with much fanfare everyone was happily handing his campaign the marketing award. Normally that is not awarded to political candidates, it goes to Coke, Pepsi, Apple etc.

---

"I honestly look at [Obama's] campaign and I look at it as something that we can all learn from as marketers," said Angus Macaulay, VP-Rodale marketing solutions "To see what he's done, to be able to create a social network and do it in a way where it's created the tools to let people get engaged very easily. It's very easy for people to participate."

---

Social network, they say? They couldn't mean using Facebook, could they? But I think they do. And unsurprisingly, Obama's campaign used the same methods as CA did:

https://www.washingtonpost.com/amphtml/business/economy/face...

---

Any time people used Facebook’s log-in button to sign on to the campaign’s website, the Obama data scientists were able to access their profile as well as their friends’ information. That allowed them to chart the closeness of people’s relationships and make estimates about which people would be most likely to influence other people in their network to vote.

---
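The mechanism that excerpt describes — charting closeness of relationships and estimating who is likely to influence others — is, at its simplest, weighted graph centrality. A toy sketch with invented people and interaction counts (nothing here is the campaign's actual method or data):

```python
from collections import defaultdict

# Hypothetical friendship graph; interaction counts stand in for
# "closeness" (likes, tags, messages, co-appearances, etc.).
interactions = [
    ("alice", "bob", 12), ("alice", "carol", 3),
    ("bob", "dave", 7), ("carol", "dave", 1), ("alice", "dave", 5),
]

# Weighted degree: sum of closeness over all of a person's ties.
influence = defaultdict(int)
for a, b, weight in interactions:
    influence[a] += weight
    influence[b] += weight

# Rank people by how strongly connected they are: a crude proxy for
# "most likely to influence other people in their network".
ranked = sorted(influence, key=influence.get, reverse=True)
print(ranked)  # → ['alice', 'bob', 'dave', 'carol']
```

Real targeting would use richer centrality measures, but the point stands: once you can see friends-of-friends data, this ranking is trivial to compute.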

> Propaganda is not received in the same way.

That's exactly why it is disguised, so as not to be perceived as blatant propaganda. It works best when it is sneaking its way in via a seemingly unbiased publication, a news story, a comedy skit, etc.


The vast majority of people, and that includes HNers, don't think of that as propaganda. Well, if it was done by a Republican maybe you'd get somewhat better uptake, but I'm getting the feeling the American mind is highly resistant to any suggestion that things aren't as they're told they are. America is the Greatest Country of All Time, after all. It's starting to get very difficult to maintain this level of ignorance but most people are fighting the good fight, at least those who are even paying attention at all.


Interesting point.

I think you're probably right. There's a different emotional impact between being manipulated to consume versus being manipulated in what you think. Of course at the end of the day it's exploiting a similar vulnerability in our wetware.


Also, when that propaganda is full of invented racist conspiracy theories and defamatory lies about the opposition, it starts getting pretty ethically gross.

If the propaganda was fundamentally truthful and respectful (e.g. sharing additional accurate factual analysis that people just didn’t know about), it wouldn’t have quite the same odious smell.

There’s not that much distance between some of the ads and fake news stories flashed in front of people before the election and e.g. ISIS recruiting materials or Nazi propaganda from the 1930s.


There's also the fact that political propaganda comes in a lot more veiled forms than corporate propaganda. Because it's in a sense natural for us to discuss politics face to face, or at least we recognize that some level of discussion about politics is necessary and good, we're mostly ok with things like celebrities endorsing a candidate. A candidate traveling around the country and speaking to potential voters is pretty much fine. A single person expressing their views honestly isn't really guilty of "propaganda".

The problem is when huge amounts of money get mixed up in it. In the US money doesn't buy you political power directly, but it does buy you a voice (in the form of advertisements using mass media). It's still up to the listeners to listen to your voice one way or the other, but the disproportionate loudness of people's voices ensures that arguments backed by money are supported much louder than those without money (this is the thesis of Manufacturing Consent). Ok, this is less than ideal, but things probably aren't skewed that much regarding things like social issues.

Political advertisement, in my opinion, doesn't veer into the realm of propaganda until one of two things happen: either the source is dishonest about their intentions (e.g. a person fully aware of climate change publicly denying it for financial reasons) and true beliefs, or their arguments are veiled in a way such that they do not appear to be advertisement at all. For example, suppose out of 100 homicides in the US, 10 were committed by Green people against Purple people, but a news organization decides to cover 5 homicides this week, and focuses solely on the ones between green and purple people. That doesn't look like an ad, even though it is one. The problem is that there's big money in this type of propaganda; these days political power is all about controlling narratives. It allows for a type of "inception" of beliefs and values - for example, making Green people think they're on the brink of a race war with Purple people - by letting people come to conclusions themselves after being presented with a highly slanted distribution of input.
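The Green/Purple homicide example can be made concrete. Using the comment's own numbers, the gap between the actual rate and the rate implied by curated coverage is tenfold:

```python
# 100 homicides; 10 are Green-on-Purple (the comment's hypothetical numbers).
total_homicides = 100
green_on_purple = 10
true_share = green_on_purple / total_homicides

# The outlet covers only 5 incidents this week, all of them Green-on-Purple.
covered = 5
covered_green_on_purple = 5
perceived_share = covered_green_on_purple / covered

# A viewer who generalizes from coverage overestimates the rate 10x.
print(f"actual share: {true_share:.0%}, share a viewer sees: {perceived_share:.0%}")
```

No individual story needs to be false for the aggregate impression to be wildly wrong; the bias lives entirely in the selection.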

This type of belief-inception is precisely what Cambridge Analytica specialized in. By knowing demographic information, they could target individuals based on issues they knew they would be sensitive to, and slowly indoctrinate them with desired views. I'll use my race example again, because Robert Mercer is essentially an unapologetic racist: https://en.wikipedia.org/wiki/Robert_Mercer_(businessman)#Ra.... You start by painting a narrative picture, highlighting race-related conflicts and painting a picture of deteriorating race relations. Like a self-fulfilling prophecy, this stokes racial tensions, creating more incidents for you to curate. By misrepresenting the relative frequency of these types of occurrences, people gradually come to the conclusion that you want them to: in this case, that black people are becoming more racist towards white people. You can use this to bring over working-class white people to the Republicans. Another good example of this type of indoctrination was Gamergate (which essentially birthed Breitbart / the alt-right) being used to galvanize frustration with the social justice sphere into creating a community of young male "race-realists"


I agree the problem is money, and I also agree that bad faith and disinformation are true measures of problematic political communication.

A principle we could rely on is openness. Just disclose who is paying for what. And disclose the ads. If Trump/CA/Russia targeted an ad at you distorting HRC's record, we should know who paid for it. Up until now, political ads - TV, billboards, even direct mail - were discoverable to the American public, so big distortions could be called out (even if they sometimes were not, as with GWB's racist attacks in SC on McCain's adopted kid.)

But the current setup, where Facebook ads are effectively secret, is a big big problem. How do we know the ads were all honest? Let's just have FB release the 2016 ads so America has time to figure out what to do before 2018.


I agree with you in principle, but I'm not sure how feasible this is, given that I literally can't think of how to implement it. Political advocacy groups use complex hierarchies of shell companies and revolving payments to get money from super-donors into advertising. FB can't just release an invoice saying something like (Payment: 106,000; Sender: Robert Mercer) or "paid for by the Russian Federation". It would say things like "paid for by the Committee to Improve America", which receives money from the Pro-Families Committee, funded by the Traditionalists Group, an advocacy arm of the Peers Think Tank, which has ~100s of wealthy donors.


It's very feasible, and even easy. You require disclosure of beneficial ownership of shell corps. That's already being done in some real estate markets to prevent money laundering: http://www.capdale.com/treasury-issues-final-regulations-to-...

Then we require that all money spent on politics (AND related political influencing, like money to Jud Watch and Cit U and Cato and AEI and Bradley Fdn and Am First and Koch Found and Americans for "Prosp" and the NRA) requires public disclosure of who's behind it.

We already do much of this for direct campaign donations and in real estate. It's just a matter of political will. And one side has spent 4 decades and hundreds of billions on creating this money-first system, so they are very invested in not changing it. If you care, the first thing to do is get Congress to pass a law rescinding most of the Cit U decision.


> This type of belief-inception is precisely what Cambridge Analytica specialized in.

The DNC and their friends in the media are no slouches at it either. Or was it someone else hinting that a Nazi revival was underway, not to mention some sort of equivalent movement of misogynists who were determined to put women back in the kitchen where they belonged?

> I'll use my race example again, because Robert Mercer is essentially an unapologetic racist

Is there more to the "unapologetic racist" charge than the 2 sentences in your link? If not, most any Libertarian is probably guilty of "unapologetic racist" level crimes as well. Idiot may be a more appropriate label, but each to his own.

> By misrepresenting the relative frequency of these types of occurrences, people gradually come to the conclusion that you want them to: in this case, that black people are becoming more racist towards white people.

The far more common narrative in this election was that white people are becoming more racist towards black people. And not just mildly racist, but full-on Nazi racist. The disparity between what you see on TV and read in the newspaper vs what you see when you actually get off the couch and go look around makes it pretty clear that the media is not lying, but selectively choosing stories, and the frequencies of stories. Selective and deceptive reporting is shamelessly obvious in right wing media (let's not kid ourselves, the viewers are not too bright), but there is plenty in liberal media as well, it's just extremely well done.

> You can use this to bring over working-class white people to the Republicans. Another good example of this type of indoctrination was Gamergate (which essentially birthed Breitbart / the alt-right) being used to galvanize frustration with the social justice sphere into creating a community of young male "race-realists"

Here there is some substance, except hardly anyone knows about Gamergate; I've heard of it, but have no idea what it is. But I do know that there is a non-imaginary new social justice movement that holds many utterly delusional beliefs its members love to shout at the top of their lungs given any opportunity, and I think that had a MAJOR effect on pushing a lot of people to the right.

I think you're mostly bang on with your ideas, but I think you have a filter on and don't realize it. I'm sure I do as well, but I'm perfectly comfortable to acknowledge and discuss it, unlike most of my ideological opponents on the other side of the fence.

Interesting times.

Oops.....look like the censors finally caught up to me so it will be a while before I can submit this comment. No hard feelings, all's fair in the political propaganda war, gotta control that narrative after all!


The difference is deception, lies, secrecy, AND the amount of money behind it. Look at the Koch network, Mercer, Adelson, Murdoch, and how much they spend on politics and related disinformation (Heritage, ALEC, SPN, Cato/Koch Fnd, NRA, Americans for "Prosp", Reason, Federalist, Breitbart, NY Post, WSJ ed, Fox, Bos Herald, Wash Ex, IJR, Daily Caller, Prager "U" etc etc. - all controlled by billionaires, and that's leaving out Limbaugh and Hewitt and Levin who are just after personal profit)

There is a huge huge difference between the sides. And that difference gives a huge advantage to one group: billionaires who can use their money to lobby to retain more wealth from the economy.


The billionaires who control what news the public sees aren't limited to the right end of the political spectrum, though. Let's see if the data "leveraging" the Obama campaign was so proud of makes the mainstream news, shall we? Maybe then I'll start to question my stance.


I personally call it "brainwash", as "propaganda" tends to have political connotations (which I think may be a reason you don't see it associated with typical advertising).


> Propaganda is not received in the same way.

Nothing new here, seriously. Propaganda from both sides before elections has existed for as long as there have been political debates and political campaigns. The fact that we now have systems to make propaganda more targeted may make it more effective than before, but that's all. In the end, believing or not in propaganda is the individual's responsibility.


I highly disagree. It's easier to simply disallow veiled political advertisements (propaganda) for both platforms and propagandists. Nobody has a "right" to spread propaganda, just like nobody has a right to defraud people simply because they aren't able to spot a scam.


The point is that fraud is illegal, and prosecuted when found; this is made very clear to begin with. Spreading information or misinformation is not, and it is up to the recipient to use critical thinking. If you believe people are being manipulated because they can't see through blatant lies, then the problem is not in the lies.


Sure, and in my opinion we should make (knowingly) spreading disinformation in political contexts illegal, just as we have made spreading disinformation in financial contexts illegal. To me there seem to be almost direct parallels between the two, and I (without a law degree, of course) believe knowingly spreading misinformation could draw on libel and slander as precedents for its relationship with the first amendment.

I agree that we need better education or something of the like to also work towards hardening people against propaganda, but I don't see these different approaches to the same problem as mutually exclusive. And while I would want more funding / different methods to be explored in education independently of this, and believe it could yield amazing benefits for society as a whole, I recognize that the first option might be more cost effective


That will be hard, but we might eventually get there.

In the meantime, it's easy and possible today to require that all political spending MUST come with disclosure of funding. No more secret donations to Super PACs or Heritage or Hillsdale.


>Nobody has a "right" to spread propaganda

It's literally the first amendment.


There's a difference between me telling you my opinion on an issue and me knowingly spreading false information at a mass scale for personal gain. I think that specifically (in my layman's interpretation of law) does not fall under the first amendment's protection of freedom of expression, since the expression is not genuine, in the same way that fraud is not genuine. You would not be prosecuted for fraud for unwittingly spreading false information, but you would be for doing so wittingly and with an incentive to make money.


Why are you assuming that the information spread is false? Propaganda is not necessarily false, in fact, it is better if it is true.

Platforms like Facebook would be well within their rights to try and prevent politically targeted advertising, even if it would be a fool's errand. Outlawing it would be unconstitutional.

If they try to prevent "spreading of false information" by political advertisers I've no doubt they will simply be harsher on the propagandists who have a political aim at odds with Facebook's interests, one of which is stopping these hit pieces by those angry that Trump won.

Nobody would be talking about this if Cambridge Analytica worked with Hillary. They simply want to stop their opponents from using the useful tool that is targeted advertising.


First, I agree propaganda is not always false, but if you read a post I made earlier, you'll see that the reason it's a problem is that it controls narratives and creates curated biases that give people inaccurate beliefs (e.g. by focusing heavily on single small issues to further a controversy). For example, consider that Russia created fake BLM-related twitter accounts to stoke tensions and drive controversy on both sides of the issue: http://faculty.washington.edu/kstarbi/examining-trolls-polar.... This is somewhat different, less propaganda and more astroturfing, but the effect is the same regarding propaganda: to direct the narrative into something convenient for those behind the strings.

Outlawing political advertising is not what I propose. I believe propaganda is different in its intent: in my opinion direct disinformation or dishonesty (with intent) would be a sufficiently high barrier, given that it would require a high barrier of proof that the supporters were seeking to manipulate opinion with lies. This would ensure nobody would be prosecuted except in the most egregious of cases. I also believe this could be a valid exception to the first amendment in the same vein as libel or slander: spreading false information, with intent, possibly for personal gain. The parallels certainly exist.

Furthermore, I believe I would be just as outraged if Hillary did this, and I think this is a pointless distraction. I didn't vote for her and I know that she also had her own shady internet propagandists working too. I think we should do our best to make sure political discussions happen organically, from real people.


Yes, but this is individually targeted propaganda, which is a relatively new thing.


> Additionally, this information about Cambridge Analytica came out months ago.

No, there's new Cambridge Analytica information, as of today:

https://www.channel4.com/news/cambridge-analytica-revealed-t...


I think the kind of psychological profiling you can do with a Facebook profile is not obvious to most people. It's not even clear to most people that they are under 24/7 surveillance by using the Facebook app. There have been some raised eyebrows over creepy targeted advertisements, but it's a different ball game knowing that there is a company that has profiled you as an individual and then manipulated you with targeted political advertising using your deepest fears.


Sure, but companies do this sort of thing too, in ways that are almost certainly illegal, like using race or sexual orientation to better target individuals. The biggest difference between now and before is that with pre-internet ad campaigns, you couldn't A/B test well or measure direct impressions.

The fact that computers are placing digital picture ads (that are cheap to produce), allows such extreme cases as we've seen to happen.


"What did most people think Facebook was doing"

People don't think.


Most people couldn't understand ToS agreements if they had the energy to read them, which they don't.

They assume protections are in place that aren't there, because things like the Bill of Rights do not apply.


> Most people couldn't understand ToS agreements if they had the energy to read them, which they don't.

However, the business model of companies like Facebook is easy to understand if you don't pay for the service. That's what Facebook users should realize by themselves.


The "easy-to-understand" business model of a service like Facebook is that they show ads on it. That's pretty different from what we're talking about here.


Because I still have Al Gore's voice in my head. I assumed they took this data and put it in a "Lock-box". You see, we need to take it all in and put it in a "Lock-box".


I'd be happy if we made it easier for gov't to buy stuff like ashtrays.



The story about the conspiracy aspect has been hiding in plain sight since the election https://washingtonmonthly.com/2017/11/24/a-trumprussia-confe...


> What did most people think Facebook was doing.

Most people think Facebook is a convenient way to keep in touch with family and friends, and most of my friends and family are bemused that I don't have a FB account, and think that any concerns I have about privacy are overblown.


>Keeping all the data locked away an never letting anyone make use of it?

Using the data internally to target ads. Once you let it out the door, you no longer have an exclusive asset on which to charge rent.


There's a clear distinction between selling widgets based on however-detailed behavioral profiling, and influencing the results of elections.


>influencing the results of elections.

I'd like some explanation of how $1.2 billion spent on Clinton, some of which came from Saudi Arabia, Canada, the UK, Australia, and Norway, is all well and good, but $500,000 spent against her is "influencing elections".

Is Facebook advertising that effective!?

I'd really appreciate it if people stopped using the term "influencing elections"; that's the whole point of campaigning. In related news, you don't have to like him, but Trump won fairly.


I don't like any of those. I do my best to avoid being manipulated by ads. And I especially avoid political ads. I want unbiased information for choosing.

Restricting political advertising is not such an unprecedented concept. The risks are just too fundamental.


It's sad that you think that. Election campaigning and selling products shouldn't be different. What we need to be fighting is false claims.


  “The people whose job is to protect the user always
  are fighting an uphill battle against the people
  whose job is to make money for the company,”
  said Sandy Parakilas
Just a shout-out to all of you who might be at a company like this.

If your company hasn't figured out user privacy yet (Facebook hasn't), you might want to look for the exit.

If your company treats you badly, look for the exit.

If your company treats you like you are expendable, you are.

If your company treats users like they are replaceable, they are - and when they have burned out all the users, the company will catch fire and sink.


Also, if your company treats what many people (probably not most on here) would consider to be private details of your employment financials like an asset to be sold / another piece of data to monetize... send a GDPR-type request to HR. Then look for the exit. Or just treat them similarly, and discreetly monetize every piece of information you've gained from the relationship that they would expect, but not mandate, you to keep quiet about.

If you are not aware, here's where most of your Equifax data that's been leaked online comes from, send in a request: https://www.theworknumber.com/Employees/DataReport/

You can confirm employer participation--> Login--> Find Employer Code if someone wants to scrape the DB list.


> Because the "breaches" and "abuses" aren't breaches or abuses, it's Facebook's business model working as intended

One's market value doesn't crash by tens of billions when investors learn everything is going as intended. This is a side effect of Facebook's business model which Facebook ignored. Chickens are coming home to roost.


It does when "people finally caught on" is the reason.


Exactly, and sites like Reddit make money on the same sort of thing. It wasn't controversial when Facebook and Google became arms of the Obama and then Clinton campaigns, but now more people are turning on Facebook after learning that their info was sold to the Trump campaign and companies and so on through companies like CA.


Who remembers how the Obama campaign's social media analytics and strategy were breathlessly praised?


Is there a difference in the means and ends?

I don’t think we have fully enough information yet, but if a political campaign is using analytics to clearly advertise their campaign, fine, that’s being straightforward.

If a political campaign is posting in ways that do not clearly label it as a political campaign, and is lying to people viewing the data it is paying to show, would you agree that’s kind of a different situation?

There’s not enough information yet I think to claim what was shown, but if political campaigns are not labeling their ads clearly, that is in violation of a variety of state - and some federal - laws.


The cognitive ability to be aghast over false equivalencies continues to amaze me.

Also, rationalizing cheating, because they're certain everyone's doing it, so it's only proper when the better cheater wins.


I certainly do, but I don't recall anyone seriously alleging that they were extralegal or covert.



Ouch.

> Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.

> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.


Absolutely they were, and many comparisons were made to the Romney campaign's ineptitude in this area.


were they spreading misinformation?


[flagged]


I do adhere to the view that wrong is wrong even when others do it, but I'd like a better look into the big data fantasy of the past decade, and that includes a deeper look into the times when this social/data/analytics/targeting bonanza seemed sweet because the teams who were more adept at it were not associated with people or institutions one abhors.

Whataboutism taints conversations when it's an excuse; other kinds of excuses also shut down conversations that should be had.

More plainly, the CA approach to starting the graph was nauseatingly scammy, but how many friends of Obama supporters (and perhaps Clinton - the API changed before the campaign, but maybe some data persisted with the DNC) were aware that their data was being processed by political parties?


Interesting how whataboutism became a commonly used term recently -- when people started pointing out hypocrisy, suddenly we care about whataboutism?

Hypocrisy is what I care about, and there's enough of it to repave the entire Interstate system. When someone criticized Obama or Democrats, the first words in response were some variation of "Bush..." Blame Bush was a competitive sport. Whatabout that?


In its original incarnation it was not so bad. If one criticizes the Russian government for their low-level corruption and it responds with "but you are lynching negroes", that is basically irrelevant and does not invalidate the criticism and one can categorize the response as whataboutism to disregard it.

But it is disingenuous to use it to disregard others who point out hypocrisy. If you want others not to use a useful strategy, you can't use it yourself and then whine when they respond in kind, telling them they should stop without making any assurances that you yourself will. It's like telling someone they should only fight with fists while you're wearing brass knuckles.

Say targeted advertising is like a nuke. If you complain when your enemy drops a nuke on you, but not when you drop a nuke on them, your problem is obviously not with nukes, just with your enemies dropping them on you.

This whole media campaign against Facebook is aimed at preventing something like Trump 2016 from ever happening again by denying the people who *shouldn't win* modern tools. It has nothing to do with privacy.


Also known as, "having principles."

See also, "consistency."


Sure do. This kind of doublespeak is rampant. One example that bubbles to the top of my head: when some people were targeted with anti-HRC messages (I think specifically it was Haitian Americans on the gulf coast), that was labeled "voter suppression", but targeting likely Trump voters and spreading negative information about him, say the Access Hollywood tapes, is "informing the voters".


Yes, it's funny. Titles for two articles, both are easy to google:

1. How Obama’s Team Used Big Data to Rally Voters (MIT Technology Review, 2012)

2. How Trump Consultants Exploited the Facebook Data of Millions (NYT, 2018)

No bias here.


The bias there is largely due to the differences in behaviors and histories of the parties involved. It's one thing for a charity group to run a donation center, and a wholly different thing for a life-long con artist to do it.

It is a rationally induced bias.


You're saying that it's ok for the media to attempt to influence people if you happen to agree with the message?


If I wanted to say that, I would have. The situation today is very different from when Obama campaigned. For one, the FBI & CIA didn't announce that Russians interfered during Obama's campaign. So it's really no mystery that people are paying attention to what the Trump campaign is doing.

And that's ignoring the fact that Cambridge Analytica was apparently breaking laws.

Either way, it is ok for the media to 'influence people'. If you're going to be vague, then we may as well say that is their whole reason for being. And if I wanted them to advocate one message over another, what difference is it to you? That's politics.


There’s a difference between implying that a racial/ethnic group will get hassled or deported, etc due to their race and saying that Trump said douchey things in an interview.

Voter suppression is a term of art that means something. Democrats generally don’t engage in it because more people voting usually translates to more people voting democrat.


That's not what the ads were saying - they were talking about how CGI spent(or didn't spend) money in Haiti


Bit of a difference between "he said <this>" vs "news" stories about Clinton conspiring to keep drug prices high. The source for the latter was an email where someone rejected the idea of negotiating american prices so as to avoid derailing ongoing negotiations into drug pricing in Africa.

It's also not news when it's some story about a town in <state> adopting Sharia law. At least the drug pricing thing is halfway true in some convoluted form.


Here's a Twitter thread by someone who worked on Obama's campaign talking about her firm's use of Facebook data. It was tweeted by Julian Assange so I'm assuming it's one of the more damning examples of the Obama campaign using social media data:

https://twitter.com/cld276/status/975564499297226752

https://twitter.com/cld276/status/975568130117459975

This person asserts that people from Facebook gave them their blessing because FB was "on our side". However, she says that from what she knew, FB was on the other team's side too. Kind of need more specifics about who from FB said what, and what "suck out the whole social graph" means. But it's still a different situation than what CA is being accused of, which is using the guise of a quiz app to mine the social data of the quiz participants' friends.

In contrast, the Obama campaign Facebook app/outreach was explicitly connected to the Obama campaign efforts, i.e. people who signed up for the app knew they would be explicitly allowing this Obama-connected app access to info/friend data.

edit: Here's a tweet by someone on the Obama campaign, protesting angrily to a tweet by Cambridge Analytics:

https://twitter.com/mbsimon/status/975231597183229953

> I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work.

Of course, we shouldn't take Obama's team at their word that absolutely everything they did was on the up-and-up. But it's important to acknowledge that there are distinct differences between what we know of their work so far compared to what has been revealed with CA.

In other words, it's fair to say that the Obama team was lauded for their "innovation" at mass usage of FB data, which they talked about publicly. It is unfair to say that what they talked about publicly is anything like what CA is currently being accused of.

edit: I more or less agree with u/makomk that @mbsimon (the staffer who tweeted angrily at CA) is not giving the most complete description of how Obama's campaign harvested FB data: https://news.ycombinator.com/item?id=16624794


> I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work.

But isn't the bigger problem sucking up the entire social graph from a small seed of users, not how those users signed up in the first place? If I'm getting spammed via a friends-of-friends connection, I'm not particularly worried about the pretense that initial vector signed up with.


Precisely! From what I can tell, that tweet (which went rather viral) is misleading at best; based on the public information, the 2012 Obama campaign wasn't using that access to target people who volunteered access to their Facebook data, they were using that access to get info about their non-consenting friends and figure out how to get them to vote for Obama. (I commented about this in one of the other threads: https://news.ycombinator.com/item?id=16620454) The 2008 campaign may have been more benign, but the CA tweet he was debunking was about both, and those Carol Davidsen tweets appear to be about 2012 specifically.


More specifically, based on the NYT story that you linked to, Obama's campaign did things like match the mined user and user-friends' data with voter registration and donor lists, and also attempt to calculate who a user's "real-life friends" were, versus their casual FB acquaintances, which involved an analysis of photo-tagging, among other things:

> Once permission was granted, the campaign had access to millions of names and faces they could match against their lists of persuadable voters, potential donors, unregistered voters and so on. “It would take us 5 to 10 seconds to get a friends list and match it against the voter list,” St. Clair said. They found matches about 50 percent of the time, he said. But the campaign’s ultimate goal was to deputize the closest Obama-supporting friends of voters who were wavering in their affections for the president. “We would grab the top 50 you were most active with and then crawl their wall”

In the next paragraph, FB said it was "satisfied" that this met their data and privacy standards. Which is a bit curious because IIRC, it was not kosher to cache data scraped from FB for any reason beyond having a reasonable cache (to prevent unneeded API requests), never mind for independent data collation and analysis. I would bet that the users who did knowingly sign up for the Obama app did not think the app would be scraping the walls and photo albums of their friends and attempting to do friendship-strength analyses.

CA still has an extra level of subterfuge, but I agree, what the Obama campaign is reported to have done is definitely not as innocent as the Obama campaign staffer claims in the aforementioned tweet.


When you have a Facebook account, you are explicitly granting permission to FB to use your personal information and social graph to sell ads. There is nothing deceptive about this.


This isn't about FB selling ads. Cambridge Analytica and the Obama campaign are third parties.


Really?

You don’t see “Sign up for this quiz to find out your true personality” as different than “sign up to support change and spread the word about Barack Obama”?

Facebook is a cesspool, but shady onboarding tactics make for a far more dangerous cesspool.


I don't see a difference if I didn't sign up for either of those things and I'm targeted anyway, which is what happened.

Rephrased: Both campaigns spammed non-signees, and it looks like the CA people spammed signees as well.


No doubt, spamming is a practice that people hate. Whether it's as unethical as what's being alleged against CA is another matter, though. AFAIK, there was nothing when signing up for the quiz app that said your data would be used for political purposes. At most, the quiz might have been said to be affiliated with a Cambridge professor and his studies [0].

[0] https://www.gsb.stanford.edu/sites/gsb/files/conf-presentati...


  people who signed up for the app knew they would be explicitly 
  allowing this Obama-connected app access to info/friend data.
But the targeted friends had no say in the matter and gave no consent to being isolated and targeted.


What on earth makes you think they "targeted friends"? What does that even mean? Did they spam friends with unwanted emails and phone calls? Did they try to propagandize friends with ads targeted to certain demographics? Did they literally light up the friends with green lasers?


  What on earth makes you think they "targeted friends"? 
I wrote "the targeted friends". In other words, the "volunteer's" friends did not themselves volunteer to participate, yet their contact info was handed over, and they were then targeted.


They targeted friends who would be on the fence about who to vote for. Easy to find out about based on Facebook information.


I would say that the alleged use of this information by foreign-funded groups has soured people on it more quickly. I think this kind of behavior would have eventually come to be seen in a negative light over the years, but the possibly-more-nefarious connections has accelerated the process.


You really should not be downvoted. Anecdotal, but I have remarked on similar things: I have a relative who is part of a few groups, and they "managed" certain subreddits for a PAC.

It's all well established: get enough people together and you can nuke any story you want and take over subs with time. Facebook's crime was getting caught helping the wrong people.


Facebook also plays both sides.


How? Facebook doesn't get paid when app-developers access users, do they? Wouldn't facebook make more money if CA was forced to go through FB's ad-targeting systems instead of CA's own offsite ad campaigns?


This doesn't answer your question specifically, but there's the possibility that FB wanted a heated political environment inside its platform so it could sell more ads for all parties involved in the race, keep people engaging with the platform, etc.


Everyone wanted a different outcome. All of the monies accepted by NRA and other right wing organizations would not have been investigated if Clinton won in 2016. Trump would have averted the whole issue of finances and set himself up to create a new right-wing media organization. Chaffetz and other politicians admitted they had "years of material" on Benghazi with which they could hound Clinton. All of the potential money laundering operations through the RNC would have gone unnoticed. Facebook would have enjoyed years of elevated activity with political arguments.

But because of how things played out, we slowly became more introspective and started questioning. Scandals involving sexual impropriety with actors, culminating in the #metoo movement, is part of this introspection. And there will be increased scrutiny in social media and its role in enabling the current situation.


CA may have created retargetting lists and segmented users out to create target lists and lookalike audiences.


That's one of the reasons this behavior has been banned since 2014.


This is exactly my thought. Being able to see detailed data about "friends" of the person who opted in, when the friend did not opt in with a "but don't use it" caveat, smacks of a :wink: :wink: and :nudge: :nudge: letting facebook sell data without permission after painting a thin veneer of deniability over it.


Exactly this. This is not a one-off. This is exactly what Facebook exists to do.


I get that this is most definitely the likely side effect of a business model of collecting info.

There's still the fact that FB _did_ close off the info that CA was holding onto, and they saw it as something they no longer wanted to offer to their ad clients.

The simplest explanation is that FB is trying to do damage control for being way too liberal about its data sharing in the past, because it will generate more scrutiny for their present policies (even if they are "better" than before). Even if they're improving, many might not think they've improved enough.


Or even more generic: this is every advertisement platform's business model. They just perfected it.

Who knows, if it gets any worse, we might finally be convinced to pay for our things.


But that’s sort of my issue: this is Mark Zuckerberg and Sheryl Sandberg, engaged, liberal-leaning (to my knowledge) individuals. You’d think regardless of the business model, they would be taking extreme steps to understand Facebook’s role in all this, not ostriching....


It could be that Facebook employees somehow got caught up in some of Cambridge Analytica's unethical activities, as recently revealed by an undercover Channel 4 investigation:

https://www.channel4.com/news/cambridge-analytica-revealed-t...


Had not thought of that, though it would only really work if it was Zuck getting caught up, I don’t think a lower level person getting netted would change how he acts.


Facebook is complicit in a lot of shady shit happening with Russia from the very beginning. Including taking money that was laundered for the Kremlin when they were strapped for cash in the early days.

https://finance.yahoo.com/news/zuckerberg-got-early-business...


Because they profited from this firm and likely actively worked with them to enable what’s now being characterized as a “breach”. And this is one of many hundreds of other companies doing the same thing. This is probably just the tip of the iceberg. If they are pushing back they are afraid we will find out how deep it goes.


That’s interesting, hadn’t thought about that, but undoubtedly there are hundreds more examples of this. That’s a congressional investigation they would probably like to avoid...


The higher-ups at facebook, and the other big techs, drink some special koolaid. They might not see facebook altering society, creating a new media landscape, as a bad thing. We assume that what happened over the last couple years, the disinformation campaign, was some sort of embarrassment for facebook. They might not see it as a problem. They might see it as a natural and inevitable outcome of technology, something not to be restrained.


>>I seriously feel like I’m missing something here, why isn’t Facebook fully behind getting to the bottom of this?

Simple: stock price. Even now, FB stock is down nearly 7%. So Facebook will try to limit the damage as much as possible until it is no longer possible to do so. After that, they will "fully cooperate" with the authorities.


They don't want to admit it, because Facebook is worse than Russia at election manipulation.


I'm tempted to take Facebook's actions at Face value in this case. They say they were going in to audit what Analytica was up to (and to make sure nothing was deleted). I kinda believe them.


"Facebook WAS inside Cambridge Analytica's office but have now "stood down" following dramatic intervention by UK Information Commissioner's Office.."

Uhh....that's not good.

In effect, this is a sanctioned data breach. Facebook opened the firehose of user data by knowingly keeping very lax access controls on their developer APIs while doing nothing to prevent developers from storing the data they accessed.

That's a very serious breach of consumer trust. A terms of service is only as good as your users' ability to understand its implications. Just because users check a box doesn't mean Facebook is any less liable.


  have now "stood down"
... which could mean simply that "we finished doing what we came to do".


ICO cracking whip, lol.

Let's not pretend the ICO has unlimited funds, people and legal resources to instill the fear of God into companies. Like many other departments and organisations, it's been badly hit by "austerity" measures.

It's mostly funded by organisations that process data, plus some grants from the Ministry of Justice for Freedom of information work, the latter affected by "significant reductions ... for our current levels of FOI work"

Their current full year budget forecast is £25M (and costs of £26M)

> Elizabeth Denham [the Information Commissioner] says it's nonsense to suggest her office will be handing out huge fines routinely once the General Data Protection Regulation comes into force. "Predictions of massive fines under the GDPR that simply scale up penalties we’ve issued under the Data Protection Act are nonsense."


The GDPR is mostly the same EU data protection law, but one new part is that civic organisations/NGOs/consumer organisations are allowed to sue the companies to enforce privacy law.

So the ICO might do nothing. But Max Schrems's new org NOYB might.


Authorities cannot get in until Wednesday at the earliest. Tomorrow, they will apply for a warrant from the court, as they did not get sufficient answers to questions previously asked. Now Facebook has moved tonight to go into the London offices. I wonder what they are doing.


Do you have a source for this? It sounds a bit hard to believe. If the police believe that evidence was/is at risk of being destroyed (as you are implying), they could get an emergency search warrant immediately, even outside normal court hours. In the U.S., there are magistrate (warrant) judges on call overnight for exactly this type of situation; I imagine a similar system exists in the UK.


Yes, I saw the lady in charge of the regulator (not police) on Channel 4 News (here in the UK), stating that she will likely be in on Wednesday and they can't walk in right now.

See for example: http://www.bbc.co.uk/news/technology-43465700


Facebook had enough time to clean up this mess with Cambridge Analytica (CA). The news about how CA exploited their platform was out just after the 2016 election results. If FB was even a little bit serious about it, they could have done then what they are doing now.


It's not an "exploit". This stuff was freely given to Obama back in 2012. My best friend was a canvasser and before going into each house, the iPad they used told canvassers what to talk about and not to talk about. It knew who was pro-life so you wouldn't bring up abortion. It knows who's pregnant and who's recently had an abortion (you can get this through knowing what purchases someone has made and not made). There is a file on everybody, their intricate political and consumer dispositions, and this is Facebook's product. CA has blown up in the news despite being only one of many consumers of the same thing because journalists can make the technically true charge that they are "Trump’s election consultants" and millions are primed to go apeshit at that. But CA is just one of many buying the unethical products, and Facebook is just one of many producers of these data sets.


That's not facebook data, that's DNC/Obama data, accumulated over time by other canvassers including your friend. They've done that since before Facebook even existed.

Facebook's product is not selling that data, it's selling ads using the data. You can only sell the data once, you can sell the ads forever.


Well, yes & no: Facebook doesn't sell the data itself, but they do give it away when a user agrees when using an app or service mediated by Facebook. That seems to be what happened here: Users took a personality survey, and their data along with the data of their friends made its way to CA.


The "data" that you could get was just like what people listed in their profile page under "Interests" and things like that though. Which maybe tells you something, but not quite like home address and a list of political preferences


No, those things are in a different section of your Facebook profile.


There was never a time that home address was available from the friend graph api.


Perhaps Facebook were contacted for comment on the Channel 4 story (in which CA's CEO suggested something that sounded a lot like sex-trafficking Ukrainians into Sri Lanka to help a fictional Sri Lankan businessman discredit his opponents) and by BBC's Newsnight (in which the CEO was interviewed before the Channel 4 interview aired but was to be shown at 22.30 GMT), and realised that something big was going down involving CA.


http://www.bbc.co.uk/news/av/magazine-40852227/the-digital-g...

It's worth rewatching this video: specifically, Project Alamo (Trump's digital campaign) had Cambridge Analytica inhouse, and they had their Facebook/Google staff in the same building. It's easy to imagine that people at Facebook knew what data CA had, and have knowingly lied since.


Just curious, what authority does FB have to have the personnel in CA offices? Why didn't CA simply call the cops to get them out?


I would assume the banal reality is that they heard a lot of adverse publicity about someone who was a major client including claims they'd breached Facebook guidelines around storing data taken from APIs, and demanded a meeting. Which CA, who don't really want to get kicked off Facebook's platform, were happy to oblige, whether they were prepared to disclose much information or not.

Of course, when the ICO gets involved then whether CA breached Facebook's EULA or not is moot, and Facebook become relevant only inasmuch as the question of whether their own executives breached or encouraged breaches of data protection laws.


Are people having a hard time understanding a criminal conspiracy?


Maybe CA & FB are friends here and looking after each other.


According to the statement by FB, they just asked CA politely.


That is just bloody incredible! Talk about ham-handed CYA bullshit. This is plainly evidence destruction, no?


I'm no fan of Facebook, but if they have some sort of contract with CA, isn't it within their right to audit them? And if one company wanted to invite another for whatever collaboration, they should be allowed to.

And if the collaboration is doing something illegal, obviously personnel from both companies should be charged. And FB can do whatever investigation it wants (as long it's legal) and the authorities are free to ignore their findings.


When police are involved that's one of those 'get in line' situations. I suspect the UK authorities are far more worried about the potential for destruction of evidence that they are in hampering an 'investigation' by FB.


I would think it was very likely that Facebook itself seeks to destroy evidence. They've got a much clearer sense of their culpability than any outsider would.


No, when police are involved that's one of those 'maximally invoke your rights' situations. The police are the enemy. Get whatever brigade of lawyers you can afford and stop their efforts in any way you can. This is true whether you're an unfortunate person who happened to be driving while black or a multinational corporation.


One can only hope that is the case.


From what I've read, CA wasn't Facebook's customer. CA bought the data from a researcher who was authorized to slurp the data.


Interesting, do you have a source on that? I would like to see the agreement researchers have to agree to in order to use FB data. I'm guessing this includes private profile information, not just public? Or is it a platform-level EULA or something like that?

I wonder how FB views public scraping. Shades of webcrawlers.


NY Times article: "The technique had been developed at Cambridge University’s Psychometrics Center. The center declined to work with Cambridge Analytica, but Aleksandr Kogan, a Russian-American psychology professor at the university, was willing.

Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica."

https://www.nytimes.com/2018/03/19/technology/facebook-cambr...


Thank you.


if there has been a breach of the law that takes precedence over contract law. If a criminal prosecution takes place and results in a conviction then it makes the civil case a much easier win (and less costly to as the government is doing the majority of the work).


Obstructing justice or going against a legal order takes precedence over contract law. But pretty much all laws take precedent over contract when the contract is in conflict.

Facebook at CA isn’t illegal in and of itself. The important factor will be what they were doing.


That's some shady shit


> There's breaking reporting that Facebook just had personnel in the Cambridge Analytica offices before the UK authorities could get there with warrants

Do we have a reputable source corroborating this claim?


The tweets are from Carole Cadwalladr, a journalist at the Guardian working on Cambridge Analytica stories

Twitter: https://twitter.com/carolecadwalla Guardian Profile: https://www.theguardian.com/profile/carolecadwalladr

I'm sure we'll see an article published by the Guardian on it by the morning in the UK


I’m stunned that this didn’t end with the Facebook crew being arrested for trespassing. Is it legal in the U.K. for a mob of people with no legal authority to walk into a private business and demand anything from anyone?


they were destroying evidence, of course.

hopefully the people responsible will go to prison for interfering with the investigation.


"stood down" sounds very waltish behavior to be.


My greatest hope with all of the noise surrounding this, is that the engineers and employees at Facebook realize that Facebook and Zuckerberg’s vision does not line up with reality. Zuckerberg believes that Facebook will connect people and change the fabric of society and communities for good in a way that was heretofore impossible.

Between Facebook’s political issues and the happiness-depressing effects of its use, I think it is pretty easy to draw the conclusion that Facebook is a net negative for society. This is without even taking into account the amount of PII that has been concentrated into a single entity (who monetizes it), or the effects of algorithmically appealing to people’s desires.

A hundred years from now, Equifax, YouTube, and Facebook will be lumped into the same pile: companies who profit off of information about consumers. The algorithmic veneer that protects YouTube and Facebook will be gone by then.

I’m not trying to condemn anyone, and I’m not in the position of having to weigh providing for my family with making ethical choices.

But, I think it is clear that change for Facebook will not come from the top. It will only come as people leave.


I don't disagree with these sentiments, but the hope that engineers/IT staff will leave is wishful thinking. I speak from my own experience, which may differ from others in other industries/regions/countries, but I find people who work in tech to be generally dispassionate with regard to the downstream effects of their contributions. I think that's because:

a) We're often small cogs ...

b) ... working on often interesting technical problems that require much detail ("think down here" I was once told by a manager, who put his hand to the ground, "not up here" he said putting his hand up and waving it[1]) ...

c) ... and we don't always get to choose. Not everyone is a superstar who can leisurely choose which exciting opportunity to pick and choose. And yes, most of us have rent/mortgages/children/ other obligations to concern ourselves with.

...even if we aren't necessarily all amoral.

1 Luckily I outlasted him in that company. :-)


Thanks for sharing! I empathize with a lot of what you said. I wrote about my own challenges with ethics in software (and added your comment as a citation): https://www.nemil.com/musings/software-engineers-and-ethics....

While it may not affect current employees, I do think vivid stories like this make the allure of joining Facebook less compelling for the next generation of programmers. It also may influence just a few people in hot fields who have many opportunities to choose from (such as the top researchers in AI).


This is, sadly, 100% true. I know because I've been there. I've been that engineer who was so interested in the technical problems of what he was doing that he didn't think about what it was being used for, which I think is the gist of (B) at the least. The banality of evil is very much real.


I think the reason is (d), that most employees believe in the company's mission and do not think they are working for an unethical company, but instead that the company is being unfairly portrayed.


Most in Facebook or most in general in tech companies? If the latter, I'd bet my life that most people couldn't care less about visions or missions, which is more the realm of Founders, very young employees and marketing materials. I'm sorry to state it that harshly...


"Zuckerberg believes that Facebook will connect people and change the fabric of society and communities for good.." is marketing speak. What does Zuckerberg believe? Nobody knows that except Zuckerberg. But I really doubt it's that.

You don't even need to look at meta-effects of Facebook. Look at how it operates, in effect. It splits people into mutually exclusive echo chambers that are falling increasingly far away from reality in terms of median ideological view. Far from connecting people social media has become, arguably, the single biggest factor in societal division in modern history. People even speak of this casually without realizing the implications of what they're saying - 'I can't believe what [non echo chamber approved views] my [friend/family member/acquintance/etc] has. Unfriending!' Of course these views and differences always existed, but in typical social interaction agreeing to disagree on issues is fine. In the social media era, people have started to condemn people over any failure to abide group ideology. It's cult like behavior without the formality.

There's no way in the world you can possibly spin this into a positive or unifying force for society. You've even had founders and executives of the company speak out against the social harm it is causing. The point of this is that there's no 'algorithmic veneer' protecting YouTube and Facebook, and I strongly doubt Zuckerberg himself has any delusions about what he's doing. Even most users themselves could easily reason that Facebook is a net negative. But they enjoy and/or are addicted to the services, so they keep using it. It's slot machines on a global scale, where instead of inserting coins you insert your personal information and get that dopamine rush when somebody likes or otherwise interacts with you.

---

As for employees - you'll never make a company change from the bottom up. Most people don't work for ideologies - they work for money. And Facebook has deep enough pockets to ensure that they'll never suffer for a lack of employees.


For what it’s worth, I think YouTube is generally much more net positive than Facebook (especially if you stay out of the comments).


https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-po...

>YouTube, the Great Radicalizer

https://www.theguardian.com/technology/2017/aug/13/james-dam...

>James Damore, Google, and the YouTube radicalization of angry white men

https://www.buzzfeed.com/josephbernstein/prager-university

>PragerU doesn’t disguise the fact that it is waging a war for young minds. Though the site’s videos are clinical, their cumulative function is to proselytize, and the language PragerU uses to describe its mission is religious.


Buzzfeed accusing others of using methods akin to religious indoctrination, that's quite a howler.


I agree. I can’t imagine a better platform for educational content. Crash Course and Khan Academy have been a godsend.


Zuckerberg SAYS that. But I think we've all known CEOs who live in a reality distortion field. I'm not sure he believes that Facebook isn't a bad actor. But he has a ton of financial incentive to deny it's a bad actor.


Why would people leave? Facebook knows your buttons and pushes them just right. In aggregate this is terrible for society, but individually it feels good.


I'd argue that in the pursuit of upping engagement, they've shifted to pushing our buttons too much over the past year or so. Or at least that's true for MY buttons.

Over the past year my view of Facebook has shifted from "I kinda don't like the privacy implications, but it is very useful for following what my extended family (most of whom live literally across the country) and friends are up to" to "why is this fucking thing sending me all these useless notifications all the time? (rhetorical question, I know why it is...) am I getting enough benefit out of it to be worth all of this or should I delete it?"

YMMV.


I was referring to employees in the last paragraph. I agree that users will probably not leave, unfortunately.


I very much doubt that most employees at Facebook share your viewpoint that Facebook is a net negative for society. It’s almost a tautology—people who feel that way won’t seek employment there.


Yes, and I’m hoping that these “scandals” make people reevaluate their previously held beliefs.


Some employees probably feel that they can help move things in a better direction.


surely then there ought to be nobody who willingly works at a tobacco company!


my relationship with Facebook reminds me more of an abusive girlfriend than one which is on the whole good


> "heretofore"


Heretofore is a fairly uncommon word but it's not an incorrect one. It means "before now".


Stamos had been actively engaging with security researchers on Twitter over the past few days about CA with heated discussions:

> I have deleted my Tweets on Cambridge Analytica, not because they were factually incorrect but because I should have done a better job weighing in.

https://twitter.com/alexstamos/status/975069709140877312

Archive of those deleted tweets: https://twitter.com/aprilaser/status/975078309930311680

EDIT: Stamos responds to news:

> Despite the rumors, I'm still fully engaged with my work at Facebook. It's true that my role did change. I'm currently spending more time exploring emerging security risks and working on election security.

https://twitter.com/alexstamos/status/975875310896914433


Wondering what he thought would be him doing "a better job weighing in"? It seems like his deleted tweets were simply too honest, i.e. in arguing that there was no data breach, he argued that FB's API and TOS allowed (without oversight) all app developers to do the kind of data harvesting Cambridge Analytica did. That was well known by developers, but I guess it's different stating it as an official policy.


Translation, "I pissed off my boss"


If he's leaving anyway why would he really care?


Stock options.


In political circles, this is what is called a "Kinsley gaffe": when somebody accidentally tells the truth (and then has to hastily walk it back)


Essentially admitting that the amount of information harvested was not unauthorized but rather by design.


Its corporate speak for "The PR department and/or my boss gave me marching orders to delete them."


And probably the legal department too. He was inadvertently delivering ammunition to anyone who might want to come after the company for this, which seems like something that might very well happen.


Yup, but Legal is probably smart enough to realize deleting it won't make it go away, though.


The man makes some extremely reasonable points. I just wrote a comment along the same lines. I'm glad to see there is some common sense at Facebook. Stamos always seemed a bit too rational to be working at a company like that. I worry what will happen to Facebook after he leaves; they were lucky to have him.

Also, I think the real problem here is that the media is attempting to politicize the term "breach," and security professionals are rightly offended.


How would you define the term breach?

Is it fair to use the term breach from the perspective of the user whose data has been acquired? Or is breach only in reference to what the company that collected the data intended to do with it?

There’s also seemingly two types of breaches at play: 1. The idea of a security breach, where a company gets “hacked” 2. The idea of a breach of trust, where people had given a company data in good faith that it would not be abused, and then had it abused, even going against that company’s TOS


This is the difficult question at the heart of the matter. Certainly I am accustomed to hearing "breach" in the context of a "security breach," in which a third party accesses data without authorization by circumventing technical measures restricting such access. In this case, there was no such security breach. The Facebook API worked as designed, and returned all data according to spec, TOS, and API documentation.

The case of a "breach of trust" is a different story, and the problem emerges when you realize that what defines "private data" (the plunder from a breach) is nothing more than an arbitrary set of restrictions, set forth by the platform producing the data itself. Without Facebook, none of this data would exist. Without the Facebook API, no app would be able to collect this data within a sanctioned platform.

Because Facebook exists, and because Facebook offers an API to its data, Cambridge Analytica was able to collect "private data" on users. But it never needed to circumvent any technical barriers to collecting the data it extracted. The Facebook API and platform willingly supplied the data to Cambridge Analytica, as it did and does to thousands of other apps.

If it constitutes a breach that Facebook supplied that data to Cambridge Analytica, then there must exist some "bug," technical or not, that Cambridge Analytica exploited to gain access to the data. What is the bug? Can Facebook identify it, document it, and rectify it? If not, can Facebook really classify it as a breach?

The fact is, there was no bug. The Facebook API and platform worked as designed and documented, and supplied all data as expected to Cambridge Analytica, along with user authorization to supply that data.

If Facebook were to classify this as a breach, they must also point to the "bug" or "vulnerability," or whatever they want to call this, that enabled and precipitated the breach. Unfortunately, there is nothing for Facebook to point to, because the real vulnerability is the system itself. Facebook created an ecosystem of private data, and Facebook defined the boundaries for access to it. Facebook cannot claim an app, that was explicitly within the boundaries of its ecosystem, utilized the Facebook API in a way that constitutes a "breach." Facebook is the only entity in control of the boundaries defining a breach, or what exactly constitutes "private" data, so trying to call this a "breach" is like changing the rules mid-game.
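To make the mechanism concrete, here's a rough sketch of the flow being described. This is not Facebook's actual code: the permission names (`friends_likes`, `friends_interests`) did exist in Graph API v1.0 and were removed in v2.0, but the endpoint shapes and parameters here are simplified assumptions for illustration. The point is that a single user's OAuth consent let an app request data fields about that user's friends.

```python
# Illustrative sketch only. Hypothetical app IDs and redirect URIs;
# simplified URLs; permission names modeled on Graph API v1.0.
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v1.0"

def login_url(app_id, redirect_uri):
    """OAuth dialog URL asking one user to grant the app access to
    their own data AND to fields of data about their friends."""
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        # One person's consent covered their friends' likes/interests:
        "scope": "public_profile,user_likes,friends_likes,friends_interests",
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

def friends_query(access_token):
    """Graph API call returning the authorizing user's friend list,
    including fields those friends never individually granted to
    this app."""
    params = {"fields": "id,name,likes,interests",
              "access_token": access_token}
    return f"{GRAPH}/me/friends?" + urlencode(params)

url = login_url("123456", "https://example-app.test/cb")
print("friends_likes" in url)  # the friends never see this consent screen
```

No technical barrier is circumvented at any step: the app asks, the platform grants, per the documented permission model. That is the sense in which "breach" is a hard fit here.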


There is (or was) a “bug” in the business logic, as is now apparent. Assuming it is the case that CA gained access to this information by lying, it means there were inadequate safeguards on who could access the data. The only reason you could say this wasn’t a “breach” is because FB had engineered virtually no safeguards against this type of deceptive use of user data.


By design vulnerabilities are always the best!


Where does a Terms of Service violation fall in all of this? Because CA clearly violated the ToS.


Right, "breach (of trust)" is the terminology getting bandied about, and is absolutely more accurate, but I think it still obscures the issue.


We have a term for what happened: Piracy. It wasn't "theft", since the original is still there. They made and now use an unauthorized copy. In gaming and software, ignoring or working around reversible security mechanism and violating agreements is called piracy.


In the deleted tweets, he was using "breach" in the narrow sense of computer security, basically saying "my team didn't mess up, we didn't get hacked".

There are two problems with that:

1) That view is already too narrow for practical security engineering. It's not enough to have a technically correct solution, you need to consider the entire product to ensure that it has the expected security properties in the way it's actually used.

2) Worse, it ignores how the message is going to be interpreted outside of the computer security field, which is especially important when the company is under political scrutiny.

For a C-level executive, it seems like an unfortunate lapse.


Note that Stamos' clarification doesn't contradict the NYT, which says he effectively gave 8 months notice in December, a notice period he'd now be just 3 months into.


Agreed. But, what is your point? That HN shouldn't crucify Stamos, there is something else more important we should be looking at?


Just that Stamos' clarification doesn't really clarify anything in the NYT story itself.


Yeah. The timing's weird, but stranger things have happened...


What's weird about the timing? I assume NYT ran it as soon as they could confirm it, and that Alex commented as soon as it was published. The timing here seems pretty straightforward.


I took another look, and the timing is, in fact, not weird at all—thanks for pointing this out. This news is not very exciting.


Wait: Stamos' tweet isn't exciting. The NYT story is huge news.


I can easily see how FB wants to throw CA under the bus and make it look like data theft while the truth FB wants to draw attention away from is that this is the system working as intended.

Or if not technically "intended" then well within the boundaries of what FB is willing to tolerate as long as it's making them money.


It’s obviously not within the boundaries of what Facebook was willing to tolerate, given that they changed how the APIs worked 3 years ago to prevent this behavior.


Cambridge Analytica took data from a third-party app against the terms and conditions, and is alleged to have wilfully lied both to Facebook and to congressional committees about what they did with the data.

They absolutely deserve most of the blame.


There's "willfully lied" and there's "played the game by the unwritten rules"


...and then there's admitting (on film) to using bribes and sex workers to entrap politicians, amongst other illegal shenanigans. I can totally understand why Facebook suddenly want to distance themselves from Cambridge Analytica.

https://www.channel4.com/news/cambridge-analytica-revealed-t...


How is CA legally bound to Facebook because of Terms and Conditions between Facebook and that app? Like you said, third party app - there was no agreement between Facebook and CA?


It sounds like CA and FB had other contracts for other stuff, given other news that FB has banned CA and related from other stuff.


I don't understand why he deleted that. It seems a reasonable summary. When I originally saw his tweet about the deletion I thought he may have gone on some crazy rant.


https://twitter.com/alexstamos/status/975767154145415168

"I never expected them to disappear, I was hoping to reduce the rate at which people were intentionally misreading them."

Why Facebook employees are doing PR on Twitter, a platform designed for intentional misreading, is the question.


> I don't understand why he deleted that

Well it made his company look bad and now he's gone from that company ahead of schedule. Sometimes things are as simple as they seem on the surface.


As a point of anecdata, I worked with Alex in 2005. He's a standup guy, and one of the best in the business.

I think what's missed in this conversation is that this sort of shenanigans isn't really in the purview of a CSO anyway. Too bad he got himself mixed up in it.


It wasn't a breach.... because Facebook straight up let it happen.

I can see why Facebook would not want that out there.

