FTC Probing Facebook for Use of Personal Data, Source Says (bloomberg.com)
1107 points by coloneltcb on Mar 20, 2018 | 371 comments

This presents an interesting opportunity for the FTC.

The amount of data being amassed by Facebook, Google and others has become exorbitant, and apparently has already been abused (some might even say weaponized) in a major election.

If Facebook indeed violated the 2011 consent decree, then the FTC can fine them up to "thousands of dollars a day per violation [per user]". This presents the FTC with the opportunity to send a message to these data hoarders: protect the data you collect, or else.

Fine them to the point where they have to start asking themselves whether it's even worth it to collect and store certain data, and with whom to share it.

It shouldn't be the government's job to ensure that the data gets protected, this should be in Facebook's own self interest.

To your second point, I argued this sort of data was weaponized and not just in an election (http://www.armyupress.army.mil/Journals/NCO-Journal/Archives...).

To the third point, focusing on Facebook seems like that scene from Casa Blanca, though: "There's gambling going on? I'm shocked, shocked!" "Your winnings, sir." Not confident FTC fines would actually change any trends.

I was the technology lead at Myspace for the Games Platform during the 2011 crackdown by the FTC. We took the FTC filings seriously and spent large amounts of cash and resources to prevent our data from making it to data brokers. Fines are one thing; the FTC can shut down or cripple your business.

I bet if Facebook is found not to have taken reasonable steps to mitigate issues raised during the 2011 FTC investigation, they'll be forced to do yearly audits of every app on the platform and require a KYC (know your customer) process for all app publishers. This will be very costly, and we'll probably see the end of the FB Graph API except for trusted and highly capitalized partners.

I have not been involved in FTC decisions but I have worked at companies subject to FTC consent decrees. I agree with adrr's comment. The initial fines are not that big a deal; the work required to demonstrate compliance is non-trivial.

Last I saw there were over 1 million accounts distributed by CA.

Even assuming they were only distributed 3 months (unlikely) and there were only 1 million accounts (also unlikely) the maximum fine is:

1,000 × 1,000,000 × 90 = 90 billion dollars.

Imposing the maximum fine would be more than double their entire 4th quarter earnings last year.

That's a bite. That would hurt any company.

From this article:

> If the FTC finds Facebook violated terms of the consent decree, it has the power to fine the company more than $40,000 a day per violation.

> Facebook Inc. is under investigation by a U.S. privacy watchdog over the use of personal data of 50 million users

So I think the maximum (assuming this went on for 90 days) would be:

40,000 × 50,000,000 × 90 = 180,000,000,000,000

180 trillion dollars.
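Both estimates in this subthread reduce to the same rate × violations × days calculation; a quick sketch (all inputs are figures quoted in the thread, not official FTC numbers):

```python
# Back-of-the-envelope maximum-fine estimates from this subthread.
# Inputs are figures quoted in the comments, not official FTC numbers.

def max_fine(per_day_per_violation: int, violations: int, days: int) -> int:
    """Maximum fine = daily rate x number of violations x days exposed."""
    return per_day_per_violation * violations * days

# Earlier estimate: $1,000/day, 1M accounts, 90 days
low = max_fine(1_000, 1_000_000, 90)
# The article's figures: $40,000/day, 50M users, 90 days
high = max_fine(40_000, 50_000_000, 90)

print(f"${low:,}")   # $90,000,000,000 (90 billion)
print(f"${high:,}")  # $180,000,000,000,000 (180 trillion)
```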

I think the world would be a better place if they just pulled the plug on the fb. Donald Trump is one really really bad outcome.

Hillary lost because she was the worst Democrat candidate in history. She had MSM, the entirety of liberal America, all major tech companies, most/all colleges, illegals voting en masse -- all of these organizations were united in their support for Hillary, and she still lost.

The Dems had the election on a silver platter and they still lost because Hillary was awful.

Hillary lost the election; if it hadn't been her, it would've been a win.

Calling Hillary the "worst Democratic candidate in history" is just a meme - she was perfectly qualified for the job, more so than Donald Trump, anyway. What she wasn't was photogenic, charismatic or capable of not coming across as "a politician" at a time when both parties were in a disgruntled, antiestablishment mood. I think she and the DNC felt it was finally "her time," and she didn't take Trump seriously, perhaps because she felt the winds of destiny were at her back.

Unfortunately for her, Julian Assange decided to make it his religion to ruin her and Donald Trump happens to be very good at channeling populist antipathy. So it goes.

>She had MSM, the entirety of liberal America, all major tech companies, most/all colleges, illegals voting en masse

Ok. Let's go through this one by one...

- The Democrats/leftists/DNC do not control the mainstream media. That's a conspiracy theory started by the right-wing fringe and Fox news, and of course, canonized by Trump and his supporters, in order to dismiss all criticism in the media as being manufactured.

- The entirety of liberal America does not think and act in unison, nor were they entirely behind Hillary. Both parties were fractured this last election, and many Democrats who couldn't get Bernie wound up voting for Trump or stayed home.

- All major tech companies are not liberal or leftist. There is a deep wellspring of right-wing, alt-right and libertarian ideology in tech and SV.

- "most/all colleges" are also not automatically leftist. Plenty of right-wing, alt-right and libertarian ideology there as well.

- "illegals voting en masse" is just a baseless conspiracy theory.

You are correct that the race was Hillary's to lose. Unfortunately you couldn't resist running through the typical Trumpist hyperbole. Sad.

"Hillary lost because she was the worst Democrat candidate in history"

Because she was a woman? I mean, in 1984 Walter Mondale got 13 electoral votes and just 37 million votes. I think that qualifies as much worse.

But I get you.

How revolting, between nitwits who voted for Hillary purely because she was a woman and nitwits that dismiss votes against Hillary as purely a masculine act of defiance towards women in positions of power -- I don't know what's worse. Clearly some people are only capable of reducing others to arbitrary superficial qualities inferred from their own prejudices.

Is it really beyond your comprehension that someone would judge Hillary based on the quality of her character rather than her gender?

Sure you voted for Trump because you want a tax cut. I'll give you that.

But on the other hand, you brought up the "worst candidate in history" thing for other reasons. It's just not mathematically true, man. So bringing up bias is fair game; you aren't using math as a judge. But I guess it could be a bias toward recent events. Who knows; either way it's not true.

I'm sorry I triggered you with the word "Trump" and I'm sorry you triggered me with just saying something that is mathematically false.

I also looked at your Hacker News profile, and it looks like you only talk about politics here. This is a technology forum, so I think you have the wrong audience. I'm sorry you are so angry, but Jesus Christ, let's talk about computers here.

PS - If I could save your blood pressure, I'd downvote this response for you. I don't care about internet points here.

Your concern is touching but unnecessary, and, while you are correct that Mondale fared terribly, the basis of my reasoning is that a significant portion of those 62M votes that went to Trump could've easily gone to the Democrats, but didn't because of explicit and universal distaste for Hillary.

Mondale may have received only 40.6% of the vote, but Trump, as a general rule, shouldn't have had a chance. It was a Black Swan event of epic proportions, and the Democrats made a mistake every step of the way. The statistical likelihood of that happening was astronomically low, but Hillary's involvement made it a guarantee.

Facebook didn’t elect Trump. The US populace did.

The US populace that lives in key electoral college swing states... what a convoluted system.

Thanks for clearing this matter up.

I'm not sure I follow your math. Facebook had the following figures at the close of 2017 per https://investor.fb.com/investor-news/press-release-details/...:

Earnings (Q4 2017): $4B

Earnings (Y2017): $16B

Revenue (Q4 2017): $13B

Revenue (Y2017): $40B

So, maybe you're confusing revenue with earnings (net income) and a quarter (3 months) with the entire year (12 months). Because $90B is over 20x FB's Q4 2017 earnings and over 5x their entire 2017 earnings.
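The multiples check out arithmetically (figures in billions of USD, as quoted above):

```python
# Verifying the multiples against the quoted FY2017 figures (in $B).
q4_earnings = 4    # Q4 2017 net income
fy_earnings = 16   # full-year 2017 net income
fine = 90          # hypothetical $90B maximum fine from the parent comment

print(fine / q4_earnings)  # 22.5  -> "over 20x" Q4 2017 earnings
print(fine / fy_earnings)  # 5.625 -> "over 5x" full-year 2017 earnings
```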

I messed up.

I saw their Q4 revenue statement and read the year end 40B as the Q4 revenue.

My bad.

I find it depressing that 90 billion dollars is only double the 4th quarter earnings.

> To the third point, focusing on Facebook seems like that scene from Casa Blanca though

It's mere coincidence, but your spelling "Casablanca" as two words (Casa Blanca) put into my mind that the literal translation of that place is "white house" (two words, natch). [0]

To your point, yes, Facebook knows user data trafficking (gambling) goes on, as well as the stakes of such trafficking. Facebook is the gatherer and ostensible guardian of such data, but they directly profit from such trafficking. Very likely their "interest" in user data security is pretense.

[0] https://en.wikipedia.org/wiki/Casablanca#Etymology

EDIT: recast second paragraph to more clearly convey intended meaning.

I just went and read the linked article -- it's definitely worth a look. Personally I hadn't seen media coverage of the evolving relationship between Russia and DPRK, so I learned something new.

> and apparently has already been abused (some might even say weaponized) in a major election.

You mean major election_s_, right? I do seem to remember the Democrats crowing about how Obama's team had used social media to their advantage and Republicans were hopelessly outmatched in this arena.


Fun tidbits:

> But the Obama team had a solution in place: a Facebook application that will transform the way campaigns are conducted in the future. For supporters, the app appeared to be just another way to digitally connect to the campaign. But to the Windy City number crunchers, it was a game changer. “I think this will wind up being the most groundbreaking piece of technology developed for this campaign,” says Teddy Goff, the Obama campaign’s digital director.

> That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists.

Whoa, that sounds exactly like the "breach" we're talking about here!

And a former Obama staffer confirms this: https://www.theblaze.com/news/2018/03/20/ex-obama-staffer-cl... (yeah yeah "I don't trust your source", but it's just screenshots straight from the horse's mouth).

Money quotes:

> Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.

> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.

The major differences:

1. The Democrats didn't harvest the data under false pretenses; the data came from people who signed up for a political app.

2. The Democratic campaign data wasn't illegally transferred from one company to another.

But I agree that the Obama campaign's actions should have been a red flag, and we should have worried more about it, even if they weren't as bad as what Cambridge Analytica did.

> 1. The Democrats didn't harvest the data under false pretenses; the data came from people who signed up for a political app.

Were these people aware all their data and friends' data was going to be recursively sucked down? Somehow I doubt the app included a disclaimer to that effect. It doesn't really matter what your app does if the main goal of it is to, well, harvest data.

2. The Democratic campaign data wasn't illegally transferred from one company to another.

That you know of. It's data, it can get around. The staffer did mention that the Democrats still have the data, and they weren't supposed to be sucking down the whole graph in the first place, hence Facebook's initial freakout (but of course, it was OK because "we're on your side.")

Nope, not "that you know of." Cambridge Analytica got their data from a third party, violating their contract with Facebook. The Obama campaign got their data directly. That is an actual difference between the two actions.

It's possible to say "I think the Obama campaign also took undesirable actions" without saying "and they were just as bad." I agree with that position, as I said.

Here's another difference.

Obama campaign was US CITIZENS who are legally allowed to work on election programs.

CA was staffed almost entirely by BRITISH and CANADIAN citizens, and ALL of their Trump 2016 (and Cruz et al.) actions are straight FEC violations: foreign actors working on US elections.

Thanks and I agree in theory. It remains to be seen whether that statement was true, or just CA pumping up their own importance.

CA also has Russians playing key roles in its lifecycle, with early work done in Russia, and a link to a Russian government oil firm, Lukoil, considered to be an overseas intelligence/influence agent of Putin's. I'm less concerned by the connection with Allied nations' citizens.

Looking at the last quotes, is it worse that Facebook did not protect the data from a violator vs. giving it away explicitly and intentionally?

"That you know of" is referring to the fact that you don't know where the data is _now_ (well, we know the Dems still have it) and what it's going to be used for in the future, much as in the CA case. Unless you believe that the Dems destroyed all the data harvested in 2012 and haven't used it again.

I believe in judging based on the facts in evidence rather than making assumptions about what happened.

CA acquired data from a third party which did not have permission to give CA the data. The Obama campaign did not do that.

Facebook required the third party (Dr. Kogan) to certify that the data had been destroyed. Dr. Kogan certified that the data had been destroyed, but did not do so. The Obama campaign did not do that.

These facts support the conclusion that nobody should have access to this kind of data, including the Obama campaign. They do not support the conclusion that the Obama campaign did the same thing as CA.

I also don't think you've provided evidence that the Obama campaign still has the data. If I've missed that please let me know.

I also noticed that you are conflating the Obama campaign with the Democratic Party. If you have evidence that the Obama campaign shared this data with the Democratic Party, you should also share that.

> I also don't think you've provided evidence that the Obama campaign still has the data. If I've missed that please let me know.

> “Where this gets complicated is, that freaked Facebook out, right? So they shut off the feature. Well, the Republicans never built an app to do that. So the data is out there, you can’t take it back, right? So Democrats have this information,” she said.

This is what Davidsen has said.

Also, as you said, they obtained the data legitimately. Why _wouldn't_ they keep the data around for future use?

> I also noticed that you are conflating the Obama campaign with the Democratic Party. If you have evidence that the Obama campaign shared this data with the Democratic Party, you should also share that.

Common freaking sense. It's a goldmine for future elections, they would be fools not to share it with the DNC.

Considering how much traction this story is getting, and considering that the Obama campaign used the same friend list "breach" to obtain data, they really should comment to the effect that they aren't keeping the data around. Otherwise, common sense says they are. That, coupled with Facebook's rather "it's OK" response to learning that they sucked down tons of data makes me think FB didn't make a big stink about deleting the data. If they did, they need to attest to that.

> Common freaking sense. It's a goldmine for future elections, they would be fools not to share it with the DNC.

Well, no. They'd be people who are violating their Facebook contract if they did.

When you live in the swamp, it's easy to assume everyone is dirty. The Obama campaign certainly used data in a way I personally find uncomfortable, which makes it even easier to leap to conclusions. However, there's no value in this conversation as long as you don't understand the difference between evidence and the things you want to be true.

We rarely get to deal in certainty; life is mainly degrees of probability.

It's very likely that the Obama campaign retained the data: I'd put it around 75%. Others have different assessments.

Lumping all uncertain things into one bundle of low probability is a massive category error.

> Well, no. They'd be people who are violating their Facebook contract if they did.

Again, who’s actually asking any questions whatsoever about their use of harvested social media data? You’re only in breach of your “Facebook contract” if someone cares to look into it in the first place. You still haven’t addressed the staffer’s claim that Facebook was freaked out about the campaign’s harvesting of data but then said they were “OK” with it. You trust FB to make a stink if the Obama campaign misused data? Seems to me like they were perfectly content to look the other way.

You are very naive if you don't know that many, if not most, campaign consulting agencies are entirely apolitical about collecting and shopping around their data to various candidates. It's simply about expanding their market. Do yourself a favor and volunteer on a single campaign for a state- or federal-level committee-favored candidate to see for yourself.

Sure, the Obama campaign itself did not do the above, but liberal-leaning SuperPACs did.

No, the only truth here has been

1. It was not Democrats, therefore it was wrong if not illegal.

If Hillary had won none of this would have come about and even if it did no one in Congress would be up in arms. We have had nearly two years of people trying to delegitimize Trump's win. This is a standard political tactic by the losing side but this time Trump beat both sides at the game.

These politicians and activists refuse to acknowledge that their message is either not acceptable or delivered wrong, or, even worse, that a large number of people were just tired of them.

There simply wasn't enough money spent by Russia to change the outcome, and this is completely ignoring the fact that they have been doing something similar in nearly every election they could, if not within political parties and the media as well.

> illegally transferred

I'd question illegality. In violation of agreements, perhaps. If there were any, and there wasn't a wink, wink type of understanding on what would be done.

In violation of agreements, definitely, if you believe Facebook's public statement. I think it would be risky for Facebook to lie about their developer policies but that doesn't mean it's impossible. I don't have time right now to dig through archive.org to find an old copy of those, unfortunately.

For a much better examination of legal aspects than I can provide, see https://www.lawfareblog.com/cambridge-analytica-facebook-deb.... Please keep in mind the sentence "I am leaving aside for now the potential claims under British and European law, but those add to this list considerably," which is rather important given the EU's more aggressive privacy regulations.

It's like SuperPAC coordination. Every election cycle there are countless obvious violations of SuperPAC coordination at all levels and parties but these are hardly ever investigated much less prosecuted.


I sort of don't care why the media firestorm is so bad, even if it's unfair, because it means we might see some action which will limit bad actors on all sides of the political spectrum.

IMO the point is the origin application: a campaign app used for that purpose vs. an app that shows you what your face would look like when you're older, used to swing distorted news.

> A Campaign App used for that purpose

But how long does the harvested data remain "valid" for that purpose? The Dems still have the harvested data from 2012, is it OK to use it for 2016, which they most likely did?


You can see that already with the Obama staffer. Direct quotes from someone who was there yet the mainstream media simply isn't reporting on it. Just another right wing conspiracy, otherwise CNN would be talking about it, right?

You do sometimes get bits and pieces, like the Time article from 2012, that haven't been memory-holed yet, but again, the media won't bring up something like that because the intent is to paint this chilling use of social media as something unique to the Trump campaign.

There is a new Washington Post article that covers the Obama campaign story - it's not being entirely silenced: https://www.washingtonpost.com/business/economy/facebooks-ru...

I agree that there is a pattern of bias to all large media outlets on both sides. They may put a piece out like this one to appear impartial but only post-facto and if it supports the rancor of a news cycle that currently leans in their side's favor.

Anyways, there is bipartisan benefit to people becoming more aware of their online presence. Maybe people will use social media less and become less fervently partisan?

They squeezed it in right at the very end, but it was actually rather surprising how little they minced words:

“We ingested the entire U.S. social graph,” Davidsen said in an interview. “We would ask permission to basically scrape your profile, and also scrape your friends, basically anything that was available to scrape. We scraped it all.”

So obviously a fair amount of strategic writing going on but all things considered, pretty respectable.


Bloomberg has also admitted Obama took advantage of it as well:


"The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump."

To me, the interesting part going forward is: will Democrats and the mainstream media continue to frame this as if it was Donald Trump who committed the wrongdoing? I'm not really sensing any widespread public outrage so I would suspect not, but time will tell.

(Yeah yeah "I don't trust your source", but my methamphetamine-enthusiast uncle assures me that Safeway supermarket lets the Jews decide how much salt your food is allowed to have, and Gwyneth Paltrow's magnet stickers can totally cure hemorrhoids...)

Quality whataboutism that doesn't change the overall debate about these practices. You do realize they talked about exactly this in the linked article right?

This is already how HIPAA forces data decisions in the health care industry. We ask ourselves: "Is it worth the time and effort to store patient PII?"

If the answer is no, we don't store it.

>If Facebook indeed violated the 2011 consent decree, then the FTC can fine them up to "thousands of dollars a day per violation [per user]". This presents the FTC with the opportunity to send a message to these data hoarders: protect the data you collect, or else.

No one ever seems to get the maximum fine in America, often because it would "destroy the company".

But we're willing to execute living people.

As the old adage says: I'll believe corporations are people when they execute one in Texas.

The problem for these companies is that hoarding and monetising data _is_ their business model. If they can't do that anymore, they are going to struggle in a serious way.

That seems like a feature, not a problem.

They aren't going to struggle, their currently spectacular profits are just going to get somewhat more modest.

This is what happened to the banking industry after the 2008 financial crisis.

I've been wondering what kind of collapse would happen when something like this happened to a business where the majority of their revenue comes from monetizing their consumers. Of course, a collapse would only be possible if:

* The FTC actually does something about this in a way that companies in a similar manner are also affected (directly or indirectly)

* These companies don't find a way to get around the issues.

I'm not convinced anything will significantly damage tech companies whose primary profit driver is their users' data anyway. The general public has been using them for years now and despite any outrage, it's become too integrated in society for people to suddenly stop (unless someone comes up with a better alternative).

I'll have two, please.

Acxiom and others of the old guard have been doing exactly the same for 40+ years. Why should Facebook be singled out for voluntary disclosures when the data mining industry has far more egregious transgressions?

"...and apparently has already been abused (some might even say weaponized) in a major election."

While it's clear that CA/Russians/whoever tried to influence the election through these techniques, is anyone aware of any studies or evidence that they actually affected anything at all? Has anyone even done a survey asking people if they either did not turn out to vote, or changed the candidate they were going to vote for, based on paid advertising they saw on Facebook?

I'm genuinely curious about this, I'm not trying to be argumentative. After this erupted yesterday, I went looking and found nothing. This whole thing may be much ado about nothing.

I think this ultimately comes down to the problem of attribution in marketing -- how do you determine if an ad or story is effective in actually influencing somebody to buy a product or vote for a candidate? We know millions of people engaged with content from Russian trolls masquerading as Americans, but (like any marketing campaign with an offline action) it's difficult to quantitatively measure the ultimate impact they had.


Yeah, but even a simple survey would at least start to unravel this. "Did you either fail to vote or change your vote based upon paid advertising you saw on Facebook?" would at least be a good start. Even anecdotal stories of people being swayed by a paid Facebook ad would be a start. I haven't seen a single one, and I've looked.

The whole point of using the data like this is to change people's opinion without them knowing why, so I doubt anyone can answer a survey like this accurately.

Perhaps if there were two similar candidates, this would be true. However, that wasn't the case here. These candidates and their supporters were polar opposites. If they were swayed at all, it wouldn't have been through subtlety. The stories would be "I was going to vote for Hillary, but then I saw [X] on Facebook and was so horrified that I decided to [not vote at all or vote for Trump]".

I'm pretty sure that polls like that are ineffective for discerning the impact of any type of marketing. The best evidence I can think of for whether somebody found something influential is whether they liked or re-shared a post, and there's plenty of evidence for that. Those are, after all, the sorts of metrics typically used for measuring the success of a social media campaign: https://www.socialmediaexaminer.com/10-metrics-to-track-for-...

When broadband got reclassified by the FCC under the "huge loss" for Net Neutrality, a little-noticed M.O.U. was published as part of that decision that explicitly stated the FTC would now be beefing up its presence to protect the consumer. It's only in Facebook's interest if they believe they'll get caught. If they think they can sell this data for profit and escape scrutiny, they will. Here's hoping this is a sign of more work to come from the FTC.

Isn't the FCC being run by a Trump shill at the moment? I mean, they just repealed net neutrality, I doubt they're going to go around imposing fines on Trump buddies now are they...

FTC, not FCC in this case. And what was repealed was a huge blob of regulation called Title II, not “net neutrality”.

Well, net neutrality was repealed. Without Title II there is really no net neutrality until some other law or regulation puts it back into place.

The reason I make this distinction is because the limited neutrality is the least of what the legislation does.

To any engineers reading this who are now working at Facebook, you have a choice to make:

Stay in the organization and work to turn it away from casual misuse of personal information. Prevent an Orwellian future of machine-learning-assisted, personally targeted messaging preying upon our fears and insecurities. Stand up and speak out against the performance of unethical psychological experiments on unwitting participants.


Or leave now.

This is one of the important moral issues of our time. To stay on the sidelines is unacceptable.

For what it’s worth, when I worked there, all the engineers I met sincerely cared about just…making a useful product, and respecting people’s data in the process. Pretty much the only guaranteed fireable offense was looking at someone’s data without permission and a valid reason, e.g., to directly fix something broken about their account, which almost never required viewing anything private anyway.

Nobody appeared to be “casually misusing” data—I think the problem is that they’re largely just engineers, particularly young ones, naïvely considering only the engineering side of things. All the data queries go through the robust privacy-checking system, so everything is good, right?

In a case like this, they didn’t consider the optics of what happens when someone scrapes the public (at the time) profiles of Facebook users and uses that information for nefarious deeds. What happens when users are angry not because their private data was “breached”—a technical problem with an engineering solution—but because they didn’t realise how much they’d already shared publicly (even if you explicitly told them) and how it could be used to influence them en masse?

One of the problems with the Facebook API is that it is disconnected from policy on too many points. The policy is all hand-wavy honor system, and the API lets you trample all over the policy.

Case in point, one of the most common policy violations is prefilling the user message on posts made via the API. It is forbidden. But the field is right there for you to abuse and put whatever you want into it. Sure there are some automated enforcement algorithms and policy employees look at things when complaints go up, but if the policy says you can't do it, why on earth does the code allow it?

OK I know the pat answer is that apps are allowed to prompt the user earlier in the workflow for the message, and then use that value when calling the API. That is true but weak (what would it hurt to eliminate that loophole vs. the benefit of no longer having to detect and take enforcement action on an impossible action) -- the point remains, if they really cared about their vaunted policy and protecting the user, they would put more controls directly into the code behind the API to disallow prohibited actions.
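As a sketch of what putting "controls directly into the code behind the API" could look like, here's a minimal validator that rejects a policy-protected field unless it came from direct user input. All names here are hypothetical illustrations, not Facebook's actual API:

```python
# Hypothetical sketch: enforcing a "no app-prefilled message" policy in
# the API layer itself, instead of via after-the-fact policy enforcement.
# Field, function, and parameter names are illustrative, not Facebook's.

POLICY_PROTECTED_FIELDS = {"message"}  # must be user-typed, never app-supplied

def validate_post_request(params: dict, user_entered_fields: set) -> bool:
    """Reject a post where a protected field was supplied by the app rather
    than entered by the user (e.g., as attested by a signed dialog token)."""
    for field in POLICY_PROTECTED_FIELDS:
        if params.get(field) and field not in user_entered_fields:
            raise ValueError(
                f"Policy violation: '{field}' must come from direct user input"
            )
    return True

# A user-typed message passes; an app-prefilled one is rejected in code,
# not merely prohibited in a policy document.
validate_post_request({"message": "hi"}, user_entered_fields={"message"})
```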

These are things where smart engineers can make a difference. Spend some time on the FB Developer Community Group and you will see the flood of questions from developers who are completely ignorant of the policy, even on basic things like "don't use an account with a name other than your own" aka, there are no business or developer accounts. Many of them willfully ignore policy and just do what the code allows them to do. A lot of good could be done by FB devs taking more accountability for how the platform is abused.

It is not so much that something is wrong, but that everything is working as it should. The system is the problem.

Case in point, Cambridge Analytica used ill-gotten data from 50 million people to craft extremely effective political ads. And since user engagement with those ads was very high… Facebook's algorithm made it cheaper for them to buy even more ads.

Well, not overwhelmingly effective. Ted Cruz was a client and look how well he did.

Got second to their other client.

In this forum, I think this is the most important comment.

I think there is enough information available for Facebook employees to be faced with a decision, after which they are morally culpable for the growing net-negative effect that Facebook has on society.

I'm not a Facebook engineer--and I'm probably not smart enough to be one--so I can't really say how I would act if faced with an ethical decision to provide for my family or take a stand. However, I think anyone who has been employed by Facebook is capable enough to be able to immediately find comparable employment.

Similarly, I think there were lots of well-meaning people involved in Big Tobacco, who didn't realize they were contributing to the deaths of millions of people. I imagine there was a similar inflection point for them, as well.

(Please note, I do not think Facebook is as damaging to the world as Big Tobacco. I also don't think that individual contributors are as culpable as leadership. I am not comparing the degree of moral evil, but am comparing the complicity of individual contributors.)

"put your bodies upon the gears"

I absolutely agree with you that this is a moral decision for the employees. At a former company I pushed to improve our user privacy and decrease our storage of unused personally identifying information.

I left that company when they neutered my project to only affect the UI...

We aren't soldiers following orders, we are humans that can reflect on our actions.

That said, I had the savings to be unemployed for a while, not everyone does.

Just following orders wasn't an acceptable excuse at the Nuremberg trials. It shouldn't be an excuse now.

Or, you could make money off people who don't know what they're giving away and don't care even if they're told, be one of the experimenters, and make a lot of money doing so.

I'll take one of these, please

Another person's job is an easy thing to sacrifice.

As if someone at facebook will have a hard time being hired elsewhere. But either way, that's what solidarity is all about.

Luckily, that wasn't the only option.

Prevent an Orwellian future of machine-learning assisted, personally targeted messaging preying upon our fears and insecurities.

Is this the only option?

Why can't it (not necessarily facebook) instead be a "machine-learning assisted, personally targeted messaging to help support your long term goals?"

Because it's apparent that's not Facebook's goal.

Surely you've deleted your own FB account, and went on to convince your friends and family to follow suit, prior to suggesting that?

He made an impassioned plea on the social network of his choice.

Oh stop with this ridiculous hyperbole. Just stop.

>This is one of the important moral issues of our time.

No, it's not. Even if it was (and it's not) I'm not even sure if it would crack the top 100. For example, did you know there are people without access to clean water? There are civil wars? State run gulags? Did you know man-made global climate change is a thing? How about that we're going through an unprecedented ecological collapse? All non-issues. The big moral problem of our time is a social media company that wants to sell you shit.

Doesn't this entire issue corroborate the idea that this ISN'T just about a social media company trying to sell us shit?

Two of the issues you mentioned, state run gulags and anthropogenic climate change, are issues really only solvable at the federal level. Facebook's and Cambridge Analytica's ability to influence an election can have a profound effect on those kinds of issues. I mean, we now have a climate change denier in the White House who is dismantling the EPA. If propaganda spreading through Facebook created that, could that not also be partly responsible for our inability to do something about climate change?

That's just one example, but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.

>Doesn't this entire issue corroborate the idea that this ISN'T just about a social media company trying to sell us shit

No. OP called out Facebook, not Cambridge Analytica. OP attempted to shame Facebook employees not Cambridge Analytica employees. Facebook is here to sell targeted ads.

>but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.

I stand by it. This smells like a big nothing burger. I'm not even sure what the news here is. Candy Crush probably has info on hundreds of millions of Facebook users. No outrage there.

It isn't even novel that Facebook was used for political targeting; the Obama and Romney campaigns, and more broadly the DNC and RNC, did the exact same thing. I just assumed this was all part of that vaunted digital strategy the news outlets were blaring about every time one party won an election. It may be a coincidence that this is only a problem now that Trump used this method for voter outreach. Maybe.

Maybe it's the potential Russian meddling that's the new news here? But then it's not really what the news outlets are focusing on. It's all about how Cambridge Analytica created 'psychological profiles' on voters...which sounds more like a query that was run against the dataset.

It's the obvious malicious intent that we see time and time again with Facebook, the companies they own, and like-minded companies like Google and Amazon. People are fed up with the BS. It's atrocious that no tech people speak out or get the airtime to inform people what's going on without their knowledge (and most of the time, consent). Facebook is a scourge to humanity.

He said one of the most important issues. No need to be on such a strong defense for that.

And I disagreed with that characterization. Especially in context of OPs hyperbolic call for Facebook's employees, and not Cambridge Analytica's employees, to quit their jobs.

And I disagree with your characterization. Selling people shit isn't the bulk of the problem like you say - it's everything that surrounds it. I'm not sure why you don't think of it as a major issue, but I hope that someday you will.

Not that I necessarily agree with OP's "call to action," but as software people we have more potential for impact on software-related issues...

Do you realize Facebook has the power to change this all, but instead keeps people misinformed with their obvious malicious intent?

Don't forget the starving children in Africa.


Let us please remember that these incidents are not specific to Facebook, rather they are systemic to the big five.

A couple of years ago or more, I was posting on Facebook about Cambridge Analytica's practices and was considered a tinfoil-hatter and crazy.

Now, the reason I was able to shed some light at the time was that I knew exactly how the Facebook API could be used back then to extract the kind of data we are talking about, completely legally. Nobody needed to circumvent FB API policies; it was yours for the taking.

I didn't do it, although I did put together multiple PoCs from 2011 to 2014 to see what was possible, and it was bad.

Another thing we should remember is that Cambridge Analytica is just one small tip of a fractal iceberg whose body is Facebook and the big five, your internet connection and certainly your smartphone themselves.

Google, Apple and Amazon are no less culpable in this regard.

The question now becomes which side of history we want to be on.

Another question, assuming we want to take our privacy back, is how we do that with consent and assurance.

I don't have a Facebook account anymore, but I'm still tracked, as we all are. My mother doesn't like me not being there, but it's a small price to pay. I can contact her elsewhere, and do.

Surely enough is enough?

I think it is time to look for broad scale technologies that are better both in the real world and in our private world.

Out of interest, is there any evidence that Apple are collating data and making it available to 3rd parties in the same way as Facebook? They like to position themselves as more caring of the user’s privacy than the rest, but I’d definitely like to know more about any problems.

Unlike Google and Facebook, Apple does not make money by selling user profiles for marketers to target.

Only because iAd (https://en.wikipedia.org/wiki/IAd) was a horrific failure.

In early 2011, the minimum buy price on the platform was $500K. By midyear, $300K. By early 2012, $100K. Early 2013? $50 (no K missing, just fifty bucks).

I believe you that it was a failure, but that doesn't follow from the minimum price dropping. Perhaps as they gained confidence in the system, they allowed smaller buys with smaller prices (but still just as profitable).

Are you sure?

Specifically, it isn't necessarily about advertisers; it's about surveillance.

Advertising revenue can be completely offset by government payments for tracking.

As I said in the other post we can't prove the positive but it certainly is a feasible option.

I know I could do it given the charter.

Advertising revenue can be completely offset by the government? That seems unlikely given how much these companies make off advertising. It would be amazing if Apple and the USG could hide that kind of massive money transfer off their books.

Isn't facebook embedded in iOS to some degree?

It used to be one of the only share targets (Twitter was the other). iOS 10 & 11 removed it; to log in with FB or share to FB through the OS, you must now install the app.

Lack of evidence is conspicuous in and of itself, although right now that is tinfoil-hat territory. I'll tell you in a few years.

On the other hand I'd refer you to Bletchley Park.

Turing et al. knew the contents of the decrypted Enigma messages, but the Government was unable to act.

For good reason.

Secrecy is a thing

Apple likes to loudly proclaim that they care about protecting their users' data, but they also refuse to put their money where their mouth is. That to me is telling enough.

I do think it's important to note that I have not seen direct evidence of them abusing that data, but we've seen plenty of companies/governments/organizations doing bad things for years without direct evidence.

What are you referring to with “refuse to put their money where their mouth is”?

They refuse to open-source their products, and they also refuse to put in zero-knowledge encryption systems.

I guess you can argue that WebKit, CUPS, Darwin, LLVM etc were open-source before Apple started using/sponsoring them (or based new software on them) and so had to continue, but Swift was a from-scratch project that was open sourced.

As for zero-knowledge encryption, iCloud Keychain is, although the rest isn't; you're right there. Hopefully they'll move in that direction.

I'm not saying that Apple is staunchly against FOSS or anything, and they absolutely do release a lot of FOSS stuff (which is awesome!), but their platform is absolutely not FOSS. I still can't compile my own iOS or MacOS.

If Apple open-sourced their OS you'd have a CentOS in half a day. Apple definitely doesn't want clones; it means fewer customers and less cohesive branding, so is there any reason this wouldn't be a very damaging move?

It's definitely possible that this would have detrimental effects on their bottom line. I know I would start buying their products, and I would encourage others to do so, though I'm not sure that would make up for the loss.

But that's irrelevant to the point. The point is that Apple prevents users from understanding or controlling how the user's data is being used. Just because we understand why they won't fix it doesn't make it any less true that they could fix it, but choose not to.

And that's what I mean by "putting their money where their mouth is". They talk a big talk about protecting their users, but their actions are different than their speech.

I think this is good advice, not only because it generalizes the problem, but also because it avoids the politicization of the topic re: Cambridge. This shouldn't be viewed as a left vs. right problem.

Absolutely, and it hits the nail on the head.

We can't be seen to pick on Facebook or CA here since there is a bigger picture.

It's not about picking on anyone, it's about a line being crossed and bringing it back home.

Thank you for your comment.

It might be beneficial to engage in such fiction, seeing how unable the right is to even pretend to put country above self-interest with regards to election hacking.

But let’s not pretend that this fiction is true. Only one campaign hired this company. And if they are bragging to journalists now that they are willing to entrap politicians with hired prostitutes, I’m fairly certain they would have had some things in their sales pitch two years ago that would raise red flags in an ethical campaign.

The people you hire are a reflection of your character. And if they end up arrested one after another, it becomes less and less likely to just be bad luck.

Another point: even if Cambridge Analytica didn't exist, Facebook itself would be, and is, doing the same things, not across a population of 50M but across a billion. With a budget to match.

It's laughable that any Facebook user assumes any degree of privacy.

What's far more concerning, and what this probe doesn't appear to address, is what Facebook does with the information of non-users.



You seriously think grandma and Joe the plumber are aware of Facebook's constant data collection?

Let's have empathy for people outside the tech bubble and realize that it's our duty as technical people to educate people around us about these issues.

I was on the phone with my mother and told her about the recent facebook things and she said "I am very careful about what I post on facebook so I don't worry".

Then I told her about what they actually do with the pixel and like buttons and she was flabbergasted. "You mean they can see what I read even if I don't press the like button?"

Not sure I convinced her to delete the account though as all her friends are there.

I would guess that the elderly are much more skeptical about handing over their personal information online than younger generations. For example, Facebook started off in colleges. And other forms of social media like Snapchat, Twitter, and Instagram are predominantly used by younger cohorts who are ambivalent about what companies might do with their data. The information about data collection is out there, what with Google searches and all. People choose not to abstain.

I'll give a more recent example: I meet 20-somethings at a meetup I go to each week. Most of them go to a pretty well-known university (thus, they are well educated), they ask me if they can connect with me via Facebook. I say I don't use Facebook, and then spend an extra 20 minutes explaining all the reasons why often to their astonishment. In my mind I'm like, "Really? How do you not know all of this? You read tons of magazines/journals?"

The sad reality is that billions of people don't care. Even with this whole scandal, I'd be shocked if Facebook's stock price was hurt in the long term.

You don't talk to many casuals then. They are fully aware that all of their data is being recorded and used against them. They've been aware since the massive NSA scandal.

What about the like button on web sites and the FB pixel that is invisible to the user? Again, you seriously think Joe average is aware of that?

In my opinion, the problem isn't just with being an active Facebook user. Anytime you visit a website with a FB Pixel, "Like" button, or any other FB embedded content, you are tracked - whether or not you are a user.
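For anyone who hasn't seen it spelled out, here is a toy model of how an embedded pixel enables cross-site tracking of non-users. All names are made up; the point is only that the same third-party cookie rides along with every pixel request:

```python
from collections import defaultdict

# Toy model: each page embedding the tracker's pixel triggers a request
# to the tracker's server, carrying the same third-party cookie plus the
# referring page. The tracker can then assemble a browsing history for a
# visitor who may have no account at all.
browsing_profiles = defaultdict(list)  # cookie id -> pages seen

def pixel_hit(third_party_cookie, referring_page):
    """Record what the tracker's server learns from one pixel request."""
    browsing_profiles[third_party_cookie].append(referring_page)

# The same browser (same cookie) visits three unrelated sites:
pixel_hit("cookie-abc123", "news-site.example/article-1")
pixel_hit("cookie-abc123", "health-site.example/condition-x")
pixel_hit("cookie-abc123", "shop.example/product-y")

# A cross-site profile now exists for this visitor:
print(browsing_profiles["cookie-abc123"])
```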

I do have the Facebook trackers disabled in Ghostery but I wonder if that’s enough

You're additionally being tracked in reverse: a third-party affiliate can record the fact that you're a person who blocks Facebook, if that makes any sense.

Adblock detectors that function in the same vein as "FuckAdblock" look at if the client blocked a Facebook pixel.

Seems like the fingerprinting that can be done in this case is much less -- the affiliate would just get "ip / website / using-adblock" instead of "ip / website / FB profile ID" right?

Or are the websites providing identifying information like email? (I've never heard of this but I'm not well-versed here).

It's more that ip / website / using-adblock / screen resolution / installed fonts / installed plugins and their versions / hash-this / hash-that are each non-identifying by themselves, but a combination of them can be used to uniquely identify an individual. [1]

But who, exactly, is the individual? Well, that comes later. Maybe your blocker fails to block something that is gathering that data plus your identity. Now, all of that activity (that was previously not tied to an individual) can now be safely linked to you, the individual.

1: https://panopticlick.eff.org/
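The arithmetic behind that is simple: if the dimensions are roughly independent, their entropy in bits just adds. A sketch with made-up figures (not the EFF's measured values):

```python
# Illustrative (made-up) entropy contributions per fingerprint dimension,
# assuming the dimensions are independent. The EFF's Panopticlick project
# measures real values for these.
dimensions = {
    "user_agent": 10.0,
    "screen_resolution": 4.8,
    "installed_fonts": 13.9,
    "browser_plugins": 15.4,
    "timezone": 3.0,
}

# Entropy from independent sources adds in bits.
total_bits = sum(dimensions.values())
print(f"combined entropy: {total_bits:.1f} bits")
# 2^bits possible combinations: roughly one browser in that many
# shares your exact fingerprint.
print(f"one browser in about {2 ** total_bits:,.0f}")
```

With these toy numbers, 47.1 bits is already one browser in roughly a hundred trillion, which is why no single dimension needs to be identifying on its own.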

Very good point, those fingerprinting techniques were what I was missing.

Also, thanks for sharing that EFF link, I really like the breakdown of how much entropy they can get from each fingerprint dimension.

I do not doubt this, but do you have more information/sources?


Yea, so "laughable" that people are not constantly paranoid and super informed about how the information industry works. /s

It's not people's fault that facebook is abusing their data. That's some sociopathic logic.

Just think of all the children whose whole lives are being put onto Facebook by unthinking parents. They never stood a chance.

Remember for a lot of regular internet users, Facebook is almost indistinguishable from "the internet" itself.

Prediction: Facebook's goal for this investigation will be to make sure the public doesn't learn that Cambridge Analytica was only one of countless political actors that "somehow gained access" to the pool of user information.

I was in meetings with FB almost 10 years ago, as the OpenGraph API was being implemented, where they were openly selling, to anyone willing to pay, exactly what CA supposedly "hacked their way into".

In Australia the Liberal party did basically what CA was doing - in 2013. There's a pastebin dump floating around containing the JS code; notably, it flags any of the user's friends who lived in specific electorates (I think they were opposition-held swing seats at the time)

The Obama campaign was very proud of doing something at least as bad, but I don't think we'll see that on the news, which offers a glimpse into what the motives are here.

Why does everything have to be US based?

For instance, before this, some of the most ethically questionable censorship stories I have heard from Facebook have had to do with minority groups or various activists in more repressive regimes around the world being blocked or censored.

Likewise, with Cambridge Analytica claiming to have worked with more than 200 elections around the world [1], and Channel 4 not painting an exactly flattering picture of their ethics, it's very possible that some of the most disturbing details that will emerge from this scandal have zilch to do with Donald Trump.

[1] http://www.businessinsider.com/cambridge-analytica-secretly-...

The extent to which the HN consensus is simultaneously exuberant about EU regulatory enforcement because "you should follow the law" and angry about other forms of regulatory compliance is astounding.

Repressive laws under authoritarian regimes are laws too. At the very least, we should admit that we're evaluating the specific rules (or the people making them) under some other rubric before deciding whether they ought to be obeyed. The sentiment you express here is exactly why "companies should obey the laws where their users live" and "countries should make laws according to their values and enforce them against websites accessible by their citizens" are too simplistic.

At least as bad seems a bit strong. The impression I'm getting is that they gave out an app explicitly from the campaign that also collected some info. There's a major difference between that and putting out an app and getting info under false pretenses

Yes, I too doubt that a very different case from many years ago will appear on the news, which is for current events as far as I'm aware.

The Obama campaign was involved in laughably simple PR and door to door efforts compared to the complexity and microtargeting of this one.

The Obama campaign employed Big Data to target which doors to knock on and which issues to bring up at each one.

CTR would be a better comparison, and perhaps even more nefarious, since they employ full-time... let's call them "engagers".

Both used Facebook data, but the comparison doesn't go further. The Obama campaign did something very different - as different as legal, ethical medical experiments using informed consent are from the Tuskegee syphilis experiments. I won't repeat what was said elsewhere:


To put the FB selloff into market perspective: Equifax is only off 13% from its peak after its unprecedented leak of nearly every American's and many Canadians' credit info, leaving the population vulnerable to identity theft. The population mostly never agreed to give Equifax this information, but Equifax collects it anyway.

FB is off 8.5% now after a client business failed to respect FB users' privacy, for data that the users were willingly giving to FB and the client (but not the third party). Not likely much more downside to the stock on this news, imo.

Perhaps the market's reaction reflects the widespread belief that this publicity may drive users away from Facebook, whereas consumers can't really opt out of Equifax's collection.

Biggest concern for Facebook has got to be them investigating the other sharing of data beyond Cambridge Analytica. I seriously doubt they turned the other way for conservative but not liberal think tanks / firms.

And with Peter Thiel being a Trump supporter, even giving a speech on stage at the RNC in praise of Trump, it would be crazy to think that Palantir's involvement isn't of similar (if not much greater) scale as Cambridge Analytica.

Palantir, in my guess, is probably like CA on steroids.

Whenever I would talk about stuff like CA a few years back to my friend who works at Facebook, he would just say Palantir is even worse. Palantir has everything that can possibly be scraped from Facebook, and everything else they can get. It's not far off from that show Christopher Nolan's brother made... Person of Interest?

I'd be curious to read more about what your friend saw. There are ways to make that happen, e.g.


I would assume Facebook has more than Palantir can scrape. They are the real problem.


Looks like CA is just one means Palantir has used to get Facebook data.

For those who may be concerned about this being a partisan hit job because it's a news article, the primary source from a verified account: https://twitter.com/cld276/status/975565844632821760

Also, a new article from the Washington Post: https://www.washingtonpost.com/business/economy/facebooks-ru...

They were directly involved with the Obama and Clinton campaigns... and so was Eric Schmidt.

The Podesta e-mails have conversations between John Podesta and Sheryl Sandberg about meetings to help elect the first woman president. So, while I think Facebook is morally reprehensible, this latest media outrage because of connections to the Trump campaign feels a bit like an economic hit job more than anything.

They also have datasets of 50M user profiles floating around out there filled with Facebook-like data. There still hasn't been a public leak of that kind of data that I can think of. I think a concern for Facebook is also what happens if/when 50M Americans' names, gender, hometown, birthday, and names of 500 closest friends become public for all to see. That's not really data that you can change or put back into a bottle.

The name and address data isn't anything unique. There's probably multiple companies with better address data than Facebook has. And the national party organizations have pretty much complete voter rolls with addresses.

The unique data is the friend graph and the likes, which they can use to (quite effectively) predict political attitudes.

In the long term we need HIPAA style regulation for all kinds of personal data: friend graphs, behavioral histories, private messages, and especially things like location data or voice assistant audio samples.

Leak such data without explicit customer consent? That will be $10,000 per incident. So if you leak 100 data points of someone's location history that will be a $1,000,000 fine.

Explicit consent must be per-incident as in "YES I give my consent to send this information to <recipient> for purpose of <...>."

That would incentivize strong security practices and even more importantly dis-incentivize data hoarding beyond what is needed to provide a service. Hoarded personal data would be a gigantic risk and liability.
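The schedule proposed above is just linear arithmetic; a sketch (the figures are this comment's proposal, not any actual statute):

```python
# Hypothetical fine schedule from the proposal above: each leaked data
# point counts as one incident, at a flat per-incident rate.
FINE_PER_INCIDENT = 10_000  # dollars, as proposed

def leak_fine(data_points_leaked: int) -> int:
    """Total fine for leaking the given number of data points."""
    return data_points_leaked * FINE_PER_INCIDENT

# 100 leaked location-history points:
print(f"${leak_fine(100):,}")  # $1,000,000
```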

It would just pass the buck. What everyone on HN is clamoring for will result in basically more bureaucracy and longer EULAs with no actual change in business practice.

Businesses don't take it seriously because people don't actually care. Some do. The vast majority don't. They might say so in a survey, but at the end of the day, Facebook (and companies like it) will continue to survive doing what they always have been and people will continue using those services.

That's the root of the issue, though. People really don't care as much as posters on HN think they do. If we could acknowledge that, I think we could come up with better solutions.

The GDPR should take care of that to a large degree.

Consent must be asked for in a clear understandable fashion.

Burying some legalese crap on page 29 of your 12,000-word TOS doesn't cut it.

Could be that Facebook tries to push the envelope yet again. They may come to regret it.

We'll see.

If I had to bet, we will not see some big exodus of users as a result of having to click through an additional "Agree and Continue" dialogue to get to what they were going to do anyway. The GDPR will do a lot more to make things appear right than to actually benefit users.

It depends. Courts could rule that click through licences don't constitute "informed consent", because let's be honest, people aren't informed about what they're signing.

Respectfully if users are honestly considered too dumb to read targeted dialog boxes, the regulation required to "fix" that "problem" is going to be downright draconian.

> implying that adding an additional "are you sure" to the process of joining a social media platform _doesn't_ benefit users.


It'll be about as beneficial as the "this site uses cookies" notices or the Vista UAC that everyone just clicked through anyway.

What about the data already collected? Do they need consent to keep it?

I'm not an expert, but from what I know:

You have a right to learn what data they have about you, with whom they share it, and a lot more details. In addition, you can opt out of anything you agreed to earlier, and I would be surprised if you can't request deletion when the business has no business reason to store it. (Arguably a difficult call with Facebook, whose entire raison d'être is to fuck with your privacy.)

> That's the root of the issue, though. People really don't care as much as posters on HN think they do. If we could acknowledge that, I think we could come up with better solutions.

FB didn’t drop billions in value because nobody cares, it’s just that most people take a lot of repetition to grasp the scope of the issue.

Investors care when a company is painted in a negative light and threatened with regulation.

That is not the same as individual users caring about data privacy as much as HNers believe.

What constitutes that reputation if not the views of “individual users” determining the fate of the platform?

I think you're selling people short, and in the face of evidence contrary to your claims.

Looming regulatory threats will push any stock price down.

And I'm not selling people short. I think the risks are incredibly overstated and the people who appreciate free services (acknowledging some data sharing is happening) are not necessarily just dumb-dumbs being preyed upon.

It’s not a free service, it’s a surveillance platform that provides a paid service to advertisers.

So is Google. But the semantics are irrelevant.

The same Google covering its ass with $300 million to fight disinformation on its own platforms?

Yeah, that same one?

Is that what taking down videos that are conservative or critical of the government is called now?

> [Spammers] don't take it seriously because people don't actually care. Some do. The vast majority don't. They might say so in a survey, but at the end of the day, Facebook (and companies like it) will continue to survive doing what they always have been and people will continue using [email].

"People don't care" about things like this until there are consequences. Nobody cares about pollution until it impacts their health or destroys their property. Nobody cares about financial crime until it crashes the economy and costs them their job.

I think we're reaching the point where all these data mining honeypots we've built over the past 20 years are being used in ways that are nefarious enough that people are starting to care.

I feel like claims like these are easy to make, yet very difficult to substantiate.

I think it's in the media a lot right now because it potentially helped Donald Trump win an election (despite the Obama '12 campaign being praised for similar tactics).

People having the data I put on Facebook (which is not notably more than is available through public sources) is not going to destroy my property or lose me my job. The rhetoric here has been dialed up to 11 and it's not winning any converts.

Yes, always thought it weird that folks are so protective about health data but nothing else. Why the dichotomy?

Yeah, but then who is going to pay to use every single website? I'm not going to do that. I'd rather they spend all this time attempting to show me ads that I will never see, and be able to use great sites/apps for free, rather than pay every time I want to go to a website.

This is basically what GDPR is.

I expect that FB will be receiving a few million letters like this [1] in a couple of months, when the GDPR kicks in. And following that, a smaller number of requests for the "right to be forgotten" [2].

[1] was on the front page of HN a couple of days ago.

[1]: https://www.linkedin.com/pulse/nightmare-letter-subject-acce...

[2]: https://gdpr-info.com/the-right-to-be-forgotten/

They already had an obligation to respond to such inquiries under the older EU Data Protection Directive. But they generally ignored the requests or only gave incomplete data (e.g., you'd get your facebook posts, etc., but no tracking data they had on you).

The focus of this scandal is curious. Does anyone seriously expect privacy for information that they willingly share with hundreds of people (their facebook friends)?

If you think this kind of information floating around is damaging to society, then you're making an argument against the very idea of FB. Fine; it's at least worth thinking about how social media is changing our society. But to really be upset that this data is out there and being used, after willingly sharing it with many people, is kind of ridiculous. What did you expect? You should assume that anything you do on FB, short of private messages, is part of the public record.

Everyone is targeting you based on information you put out in the world. Every major political campaign in the US works with a database of voters that includes party affiliation, past turnout, name, age, etc. They can also fold in the same data that advertisers use. Being a good citizen of a republic—and a savvy consumer—in the modern world means thinking critically about who is trying to persuade you and what their agenda is. The government cannot protect you from persuasion.

You made your point very clear. Thanks. I concur.

I love how now this issue is alright to talk about on hacker news. For some reason when it was happening and all of the alarms were going off, it was banned content on hacker news. Technology is inherently political with political ramifications. -_-

Facebook is just one target right now. Working in ad tech, you know how many data brokers are available and how much information your phone and apps, regardless of OS, leaks real time personal data. The issue is with the ad tech industry as a whole.

Wait a minute.....

Wasn't the FTC already "monitoring" Facebook?


Oh, that's right. That whole monitoring for 20 years thing, that they're also applying to Google and a few other companies is a complete joke. If anything, it has become almost a badge of honor/certification thing - like "Look, the FTC is monitoring us for 20 years, so that obviously means we can't possibly abuse your data or do anything nefarious with it!"

"...50 million users..."

Anyone know how this number was added up? Any reason to believe it wasn't 500 million or 5 million?

The Guardian's original article on this says that number was shown in documents from 2015 [1]. However, the whistleblower said that by now it should be over 230 million [2].

[1] https://www.theguardian.com/news/2018/mar/17/cambridge-analy...

[2] https://www.theguardian.com/news/2018/mar/17/data-war-whistl...

It's the number of Facebook users whose data was accessed by Cambridge Analytica, allegedly without authorization for how the data was used.

It's 270,000 users directly impacted. The friends of those users have also been fetched, but with a limited amount of data, most of which is public already.

Facebook is a net negative force in our society. Particularly for the startup ecosystem.

Disclaimer: I'm anti-democracy in its current form, so before you downvote me think through my point carefully.

What I don't get about this whole issue is that we are basically admitting that the average person who uses these services has poor critical thinking skills and that the American democratic process is easily gamed as such. I feel we are trying to fix the wrong thing. Sure, regulate FB and any other company that happens to collect lots of user data; I don't have a problem with that. My fundamental issue is that FB is mostly an opt-in service. As far as I can see, the shadow profiles they create on a user who isn't signed up for their services don't really contain enough data to be of material impact for the type of things that a company like CA is doing (it's mostly to help its ad network sell you more stuff, and is rather well anonymized).

The only argument against the fact that FB is an opt-in service is that it has a near monopoly on social media and seems to either buy or kill any serious threat. More important than trying to regulate FB's data collection and privacy would be ensuring that our antitrust and monopoly laws are being enforced to remove FB's near monopoly.

Further, I'm not here to defend Facebook, but I feel it's being used as a scapegoat for an easily gamed democratic process: rather than biting the bullet and fixing that, we are saying FB and CA are the real evil here. In fact they are not. They are for-profit businesses who are operating within the current regulatory framework they've been provided. It seems obvious to me what really needs to be fixed is a broken electoral process.

I think you make a great point here: we absolutely need a population that thinks more critically before they act, whether it's voting or signing up for a service or purchasing a product.

Unfortunately, I'm not sure how realistic that is. I think people just don't want to have to critically think all the time, and it's unrealistic to expect every member of the population to exert constant vigilance against ill will. There are simply too many forces trying to manipulate us and get our attention to expect every person to never mess up ever.

This is where the government steps in. Expect companies to be reasonably transparent about their intentions (signing up for Facebook definitely did not give me adequate warning a decade ago about their intentions with my data, and subsequent changes in their plans were not adequately expressed to me as a user), and reasonably cautious in their expansion (I signed up for Facebook at the age of 12, which is technically against Facebook's TOS. They didn't put enough effort into policing those TOS to kick me off the site at the age of 12, and I certainly wasn't mature enough to understand the breadth of Facebook's TOS. Unfortunately I don't think many non-lawyers are equipped to understand the true meaning of signing up for Facebook, which... is its own problem).

Basically we need these corporations to be better citizens than they currently are, which shouldn't be terribly hard since they're currently downright psychopathic citizens, exploiting the law and their large workforces to manipulate other citizens at every turn. In a better world than our own, corporations would actually be model citizens, but that's probably not realistic in the capitalist system. And we also need the government to do its job better, by punishing corporations that act selfishly so that others actually have an incentive to behave well.

How do you distinguish a population that "thinks more critically before they act" and one that lives in constant fear of social backlash? They're two sides of the same coin!

An intellectually free society is one in which one doesn't have to think critically about the potential social and economic ramifications of every remark. It's a society in which people can pitch inchoate ideas, receive feedback, and iterate on worldviews.

That we're increasingly living in an intellectually unfree society isn't the fault of "corporations" and their citizenship. Instead, it's the result of a push toward controlling people in the name of 'fixing' society and preventing 'harm'. History tells us that down this road lies only death and blood.

I agree to a certain extent. I'm not a fan of legalese and do think things like TOS should be much simpler. To have rational consumers who think critically we need to remove hurdles for people to think critically. But here's my main issue. Our economy and governments are built on the premise of rational consumers. If that basic assumption isn't in place then let's stop the facade, call it out for what it is and change the system. Let's look into a technocracy or epistocracy. Let's stop pretending that America can function as an unfettered capitalist nation and really it needs elements of socialism to function properly and reduce concentration of power and wealth.

>There are simply too many forces trying to manipulate us and get our attention to expect every person to never mess up ever. This is where the government steps in

If government should step in to prevent manipulation by misinformation, then one or both of the New York Times and Fox News needs to be shut down, depending on who you ask.

The argument is that even if you give information to a company willingly, they should not be able to do whatever the hell they want with it. We have accepted this maxim for certain pieces of information already, I don't see why it can't apply to other information too.

If you think FB can do no wrong since we willingly give them this information, what do you feel about HIPAA? Why is it okay for FB to sell data I willingly give them but a doctor can't? If the answer is "one is illegal, the other isn't", then you aren't arguing against the idea of it being made illegal, just that we don't currently have a law saying they can't do it. Laws change though.

FB isn't a life or death service, and neither is any social media for that matter. I think we sometimes seem to forget that the world existed and did rather well before the FB ad network made its debut. At the end of the day that's what FB is. It's a large media company/platform that sells ads. Don't ever forget that. I can choose to not use FB and none of my fundamental rights as a human being are being hurt. Comparing FB and health care is disingenuous. I need health care to survive whether I like it or not. HIPAA exists to prevent the for-profit medical and medical insurance model from taking advantage of its patients' privacy, because otherwise they would, since patients have no choice but to go to a doctor and provide lots of detailed personal information to their hospitals, insurance and other health care practitioners.

>It seems obvious to me what really needs to be fixed is a broken electoral process.

Could you propose an alternative that addresses your points?

Isn't regulating companies like facebook part of fixing the democratic process? Or what did you have in mind?

Why don’t they just offer the option to turn off ads? If there’s no ads to show to you because you’re paying to opt out, there should be no reason to harvest your personal info. They have 2 billion active users and make around $50 billion a year. So they’d need to charge $2 per user per month to opt out of ads (and could probably charge more). The fact that this isn’t even an option just reeks of their holier than thou corporate attitude.
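The break-even math in the comment above can be checked with its own (rough, unaudited) figures:

```python
# Rough break-even price for a hypothetical ad-free tier, using the
# figures quoted in the comment above (~$50B/year revenue, ~2B users).
# These are the thread's estimates, not official Facebook numbers.
annual_revenue = 50_000_000_000   # ~$50B per year
active_users = 2_000_000_000      # ~2B active users

per_user_per_month = annual_revenue / active_users / 12
print(f"${per_user_per_month:.2f}/user/month")  # ≈ $2.08
```

So roughly $2/user/month replaces the average ad revenue, though in practice users who would pay likely have a much higher-than-average ad value.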

The type of person to pay to turn off ads is generally the type of person that advertisers want to influence.

I don't know about that.. those are probably the people who are way harder to influence, or even impossible. It's not worth their time. Focus on the.. I don't know.. 70%? who don't care if ads are flying across their screen; it's easy to influence them.

I don't know where else to say this so I'll say it here, as it's relevant.

Is there not a way to have a browser extension that would scramble the metadata that is read by websites like Facebook and Google? So we would still be feeding data to their brain, but it would be worthless, random gibberish. Does that make any sense?

I'm somewhat surprised by the amount of criticism levied at Facebook here. Consider the following:

1) Both Android and iOS allow apps to access your contacts, which in aggregate is more or less the same kind of social graph that Facebook has. If you happen to be in someone else's contacts, you don't get a say here either. I suppose Facebook's data is richer in some ways, but not in other ways.

2) When Twitter removed API access for 3rd parties, there was an uproar in the developer community about how evil this is and so on. There's a trade-off here - openness at the platform level necessarily means less privacy for users.

3) A lot of the criticism Facebook has received in the past (both here and elsewhere) had to do with not allowing 3rd party developers to do more and hoarding user data, which is not theirs, for monetization. Here Facebook was explicitly giving the app owner and the user the power to decide - the app owner could ask and the user could either accept or decline. You could argue that this isn't adequate protection, but consider how this works for other platforms such as Windows, Mac OS, iOS and Android. Apps can access more or less everything and permission dialogs, even where they do exist, aren't taken seriously by the user.

4) Most publishers that are currently publishing these articles criticizing Facebook are also selling everything they know about you to marketers, often more explicitly for the purposes of targeting. The "scandal" here is that a third-party app gathered personal information that wasn't supposed to be used for targeting and the data ended up being used for targeted political ads. Most publishers have no problem explicitly selling whatever data they can get on you to these centralized data brokers who will sell that data to anyone.

5) All this talk about privacy and data aside, the motivation seems to be that the wrong guy won the presidential election - I don't see anyone whose personal data was supposedly used in this manner being upset nor anyone owning up to the fact that they were falsely manipulated into voting for Trump or not voting. It seems to be mainly Clinton supporters being upset that other people were manipulated into voting for the wrong guy, amplified by the same concern about privacy and social graph data ownership issue we've always had.

6) If we accept that it's the presidential election result that most people are upset about here, the media is even more culpable, both from creating this false narrative that it was not a close election and prematurely taking the moral high ground against the potential Clinton administration by focusing on the irrelevant stuff (emails, etc). And that's just the "mainstream" media, before we get to Fox News, etc.

> The "scandal" here is that a third-party app gathered personal information that wasn't supposed to be used for targeting and the data ended up being used for targeted political ads.

The scandal is that an organization impersonated a health care research entity and knowingly collected PII for use in political actions. Not only is this awful in itself, but it undermines public health by making people distrust legitimate data collection projects for beneficial health purposes. It's similar to when the CIA used a vaccination effort to locate Bin Laden, and now those aid personnel are routinely attacked and not trusted by locals which makes it more difficult to eradicate disease. If you are representing yourself as a health care entity and collecting PII for stated purposes of public health, you are likely bound by HIPAA, and I would like to see people go after this company for HIPAA violations as well as fraud.

If we’re going to go this route, Google should be hauled in first. You can easily choose not to play with Facebook. Google? Not so much.

I don’t see how this argument makes sense. You can opt out of using Google’s services just the same, if you so choose.

I have to disagree, and hopefully can do a better job at convincing you than the other comments. In terms of search, "google" is a verb for a reason, but I actually do think you could use other search engines; true, I agree.

What worries me is Android however. Perhaps not in the US, but in a lot of other places around the world Android is the only operating system accessible to people and a large majority of the market, since iPhones are an impossible purchase. Phone manufacturers depend on Android like computer manufacturers once depended on Windows.

Apple is raking in the profits because it controls the vertical and sells expensive phones in premium markets, but in terms of raw market share things seem to be shaping towards a Microsoft/Windows-like situation. Android is close to passing the 75% mark [1] and judging by sales it probably will [2].

Maybe we're still not quite there, but it's shaping up in that direction. Facebook def has the lion's share on social networking, and messaging, but I can't help but believe that when it comes to Facebook it's mostly network effects keeping people in, since there are plenty of other (and sometimes better) messaging apps, lots of photo-sharing apps, and alternatives for event planning, getting your news, posting updates, etc.

Idk if "Google should be hauled in first", but it should def be hauled in too (alongside Amazon, but that's a whole other story).

[1]: http://gs.statcounter.com/os-market-share/mobile/worldwide/#...

[2]: https://www.statista.com/statistics/266136/global-market-sha...

Tbh, Google has a monopoly over search, but Facebook does not have a monopoly over social networks. Bing and DDG are much smaller competitors to Google in comparison to what Twitter, Snapchat, et al. are to Facebook.

This means you don't have much choice when it comes to search, but you do when it comes to social networks. Obviously, you can live without both; but if you need both search and social networks in your life, it's obvious which company is more powerful. (IMO you can live without social networks, but not without search engines.)

Sincere question: It appears that it's crystal clear to everyone where the distinction between fake and real news lies, or who should be allowed to post (mis)information (serve ads, if you prefer) on the interwebs, or how to identify the culprits exhaustively. I fail to see a way in practice to draw the boundary and identify/prosecute the offenders. Imho virtually all ads spread misinformation (push someone's agenda) to my detriment. My only means of resistance is inherent in my duty (civic, but also self-serving -- to keep my sanity) to consciously make the effort to seek/filter through multiple viewpoints.

It's simply astounding that the "use of personal data" is seemingly coming as a revelation to anyone. What's especially ridiculous is that arms of the government (like the FTC) are feigning indignation. The US government is - by far - the biggest collector and aggregator of personal data and information in the world. The government, corporations, "social media companies", and everyone else with access to data have been doing the same, using this data to create models to influence what people believe and support (or buy), pushing whatever agenda or product that respective organization is selling. Unfortunately this latest episode of faux outrage is very much like the faux outrage that has infected our cultural landscape for the last 18 months (due to Trump). Things that have been going on for decades are suddenly being attacked as evils unique to Trump. If the net result of this selective outrage led to substantive changes in our society, then perhaps the outrage would be a "good" thing (despite its manufactured nature). Unfortunately, it's very clear that all of these practices - the data mining, the troll farms, the bot-networks, the propaganda (both public and private) - will roll forward at full steam once Trump is consigned to the wastebin of history (where he belongs). Once the reins of power are back in the hands of a trusted steward of the global US military empire (rather than a deluded carnival barker), you should have absolutely no doubt that Comcast, and GE, and Disney will direct their minions in the media to focus your outrage somewhere else.

Reminds me of this quote from the tobacco industry in the early 90s:

“We don't smoke that shit. We just sell it. We reserve the right to smoke for the young, the poor, the black and the stupid." - R.J. Reynolds executive’s reply when asked why he didn’t smoke

Ok. I know it's off-topic here, but...

Now what's a good alternative to Facebook Messenger video calls? Signal doesn't support it - I read that it's in beta, but I can't enable it. Telegram doesn't seem to support it either. Skype is not an alternative.

Is Matrix/Riot good enough right now? For me it should work on Linux, Android and iOS for my family.

Now that I think about it, maybe I should setup some private WebRTC service for video calls. But it seems cumbersome for part of my family. However it probably would be easier for my parents and my in-laws.

EDIT: As Thriptic mentioned Signal indeed does support video calls. I failed to find the functionality, because I expected to see separate button for video call. One has to first start calling and then enable video.

Signal supports video calling on mobile at least. I've used it many times.

What is a socially acceptable way of getting people interested in Signal? I installed it and use it for messaging (I've never been able to actually receive an MMS with it). But I have not one contact on Signal, so it's not doing much for me at this point. My daily work contacts don't seem like a prime target user base.

You're going to have a hard time convincing non-technical people / people who don't acutely care about privacy to use it, in my experience. I generally try to shift people who are not in these demographics over to WhatsApp, as that has a large user base and implements the Signal protocol for end-to-end encryption. It's not as "safe" as Signal, but it's far better than something like Messenger.

I can't even send pictures on Signal.

There is a camera icon for taking a picture. Otherwise you need to use the attachment function for sending a picture.

I've tried everything AFAIK. The pictures appear inside the chat area but with a red cross on their upper right corner indicating that sending failed.

After you take or attach a picture to the Signal chat you haven't sent it yet; you can write a caption for the image or send it without. The red cross is for removing the image if you change your mind. At least, this is what popped up in my mind immediately after your description.

Riot is perfectly usable now, works on all the platforms you mentioned and it supports video and voice calls. I'm helping them make the UX/UI more pleasant to use

Wire works ok for me. I use it on my desktop for video calls. I'm on Ubuntu.

Whats wrong with Skype?

I don't have much of an issue with targeted advertising or content by harvesting of data through what you make publicly available, as long as it is de-identified. It has plenty of applications for good as well. I hope that platforms are working on ways of taking these algorithms to edge nodes, making large master databases containing un-aggregated data that can be used to de-anonymise people less prevalent.

I'm more immediately concerned about blatantly fake news, clickbait, bots, sock-puppets and fake accounts posing as trusted parties in order to harvest trusted information and spread misinformation.

Isn't Facebook's valuation implicitly based on how Facebook can [ab]use personal data?

Ah, the other shoe drops. This is the reason for the blitz of anti-Facebook stories all at once... Legal authority to examine all that juicy personal data Facebook holds.

And if you think you are safe because you don't have an account, I have bad news for you.

The government doesn't have to resort to conspiracies to get your personal data from Facebook; if it wants it, it can get it. We live in the age of PRISM (in which Facebook is a participant), secret orders, and NSA bulk surveillance. They could literally just build or hack an app that uses Facebook's API, apparently, or buy something from the black market, or in the narrow case get a warrant.

Not to mention... I don't think this would actually give anyone "Legal authority to examine all that juicy personal data Facebook holds." I don't think "legal authority" actually works that way, but IANAL.

Nice try. The government already has your Facebook data. https://prod01-cdn07.cdn.firstlook.org/wp-uploads/sites/1/20...

This is exactly how I see it. The government already has your “official” details like your SSN, bank account info, address and phone number, but it doesn’t have a very good look into the kind of things you like to do. Having access to that data would be a surveillance analyst’s wet dream.

We see companies doing this all the time and I hope there's some sort of fix here, but I'm curious about the individual to individual implications. If we're serious about fixing privacy then things like sharing private messages, doxxing, etc should be addressed as well imho. Not sure what that would look like without curtailing free speech however.

At a previous employer we had a high up executive from Facebook come out to give a presentation.

One of my main takeaways was that one of FB's big goals was to build a 'knowledge economy'. It struck me as a bit of an odd objective at the time, but I think I am now starting to understand what this means (and it's a little scary).

I’m no fan of Facebook, and welcome this scrutiny. But what about all the people Facebook buys data from? Can we regulate them? What about Equifax?

All these corps do it without even the veneer of informed consent; we should make sure we criminalize the activity and not just crucify a well known practitioner and call it a day.

Nothing will change with government regulations. Companies like Facebook, Google, etc. are gold mines for the FBI, CIA, NSA and other three-letter organizations, so in the interest of national security, they will continue to harvest the user data... Only a major boycott can do some damage...

Facebook deserves a good deal of criticism for this, but I can't be the only one who thinks this is just a continuation of the anti-startup, anti-social media dialog the major press agencies have been pushing ever since the Election?

With these practices or lack of restriction on data pulling at Facebook, is it wrong to assume the board knew about this? Too many brilliant minds on their board, including pmarca, to not understand firms were able to do this.

Better than a new law. I'm sure Congress is mulling one over, but surely they won't take a large sweeping, harm-the-good-more-than-the-bad, regulation-instead-of-enforcement approach.

I'd wager money that Feinstein will campaign on Facebook regulation this year.

> If the FTC finds Facebook violated terms of the consent decree, it has the power to fine the company thousands of dollars a day per violation.

Maybe I'm misreading this, but only a few thousand?

Yeah, this struck me too. It seems like the FTC's powers are far too light to deal with modern tech.

I'd really like to see something like the NTSB here, but for privacy/security issues. After an incident, the NTSB comes in, investigates everything, and produces a very detailed report as to what happened and what the industry should be doing differently. You can see their recent reports here: https://www.ntsb.gov/investigations/AccidentReports/Pages/Ac...

It's very clear from Facebook's behavior since the elections that they can't be trusted to investigate and report on themselves. E.g., this article on how their execs thought it best not to say anything until forced by circumstances: https://www.nytimes.com/2018/03/19/technology/facebook-alex-...

If each person is considered a separate violation then $1000/day adds up quickly.

Works out to $2 trillion in liability according to one (highly speculative) article I saw.
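The back-of-the-envelope behind a figure like that, using the thread's own assumptions (each of the ~50 million affected users counted as a separate violation, at the low end of "thousands of dollars a day"):

```python
# Hypothetical FTC fine exposure, assuming each affected user is a
# separate violation. Both inputs are the thread's speculative numbers,
# not official FTC figures.
users = 50_000_000               # affected users reported in 2015 docs
fine_per_user_per_day = 1_000    # low end of "thousands of dollars a day"

daily_exposure = users * fine_per_user_per_day
print(f"per day: ${daily_exposure:,}")   # $50,000,000,000 per day

# At that rate, $2 trillion corresponds to about 40 days of violations:
days_to_2t = 2_000_000_000_000 / daily_exposure
print(f"days to reach $2T: {days_to_2t:.0f}")
```

So the eye-popping totals come less from the per-day rate than from multiplying it across tens of millions of users for any sustained period.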

That's a neat way to nationalize FB.

Is it? Because if it is immoral for Facebook to sell data for electoral campaigns, what does it mean for the president to get all that data legally and at taxpayer expense?

I did not make claims about whether nationalizing FB is neat or not. Just that if you want to nationalize a company, setting fines that make the company immediately insolvent, with the government by far its largest creditor, is quite a neat way to do it.

Where can I get my check? Or will the government spend it in my best interest?

I've been saying for a while that if people were just paid fairly for the data all these companies were tricking them into giving away, it would amount to a basic income of a couple thousand dollars a year.

It basically requires collective action though. If everyone does it at once they will start paying your bills to track you.

Wow, how much money do you think Facebook makes? Their revenue works out to $114/person in the US.

And that's taking all the global revenue into account.

Facebook is only one company. Start with the FANG companies, then banks + "partners", ISPs, etc.

If people were paid fairly for this data then these companies wouldn't exist. I'm not saying this is good or bad, but you'd turn around and be charged to use the service.

Interesting side question: how do you see market forces working to set a proper payment for data to users? Right now Facebook is essentially saying your data is worth free photos and being advertised and propagandized at, and people seem to accept that. How does this not become the standard of exchange in your system for any popular network-effect service?

>"Right now Facebook is essentially saying your data is worth free photos and being advertised and propagandized at, and people seem to accept that. How does this not become the standard of exchange in your system for any popular network-effect service?"

The point would be that people started valuing their data "correctly". I don't know how high that is but it must be worth much more than free web hosting or else these companies wouldn't have gotten so huge.

Almost certainly the latter. Although it would be whimsical to consider for a moment people retiring from Facebook’s misfortunes.

Fear not citizen, it'll be put to use by the same government that's actively working to dismantle consumer protections.

