The amount of data being amassed by Facebook, Google and others has become exorbitant, and apparently has already been abused (some might even say weaponized) in a major election.
If Facebook indeed violated the 2011 consent decree, then the FTC can fine them up to "thousands of dollars a day per violation [per user]". This presents the FTC with the opportunity to send a message to these data hoarders: protect the data you collect, or else.
Fine them to the point where they have to start asking themselves whether it's even worth it to collect and store certain data, and with whom to share it.
It shouldn't be the government's job to ensure that the data gets protected, this should be in Facebook's own self interest.
To the third point, focusing on Facebook seems like that scene from Casa Blanca though:
"There's gambling going on, I'm shocked, shocked"
"Your winnings, sir"
Not confident FTC fines would actually change any trends.
I bet that if Facebook is found not to have taken reasonable steps to mitigate the issues raised during the 2011 FTC investigation, they'll be forced to do yearly audits of every app on the platform and require a KYC (know your customer) process for all app publishers. This will be very costly, and we'll probably see the end of the FB Graph API except for trusted and highly capitalized partners.
Even assuming they were only distributed 3 months (unlikely) and there were only 1 million accounts (also unlikely) the maximum fine is:
1,000 × 1,000,000 × 90 = $90 billion.
Imposing the maximum fine would be more than double their entire 4th quarter earnings last year.
That's a bite. That would hurt any company.
> If the FTC finds Facebook violated terms of the consent decree, it has the power to fine the company more than $40,000 a day per violation.
> Facebook Inc. is under investigation by a U.S. privacy watchdog over the use of personal data of 50 million users
So I think the maximum (assuming this went on for 90 days) would be:
40,000 × 50,000,000 × 90 = 180,000,000,000,000 ($180 trillion)
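For what it's worth, both back-of-the-envelope fine scenarios in this thread can be sanity-checked with a few lines of code (the per-violation rates, user counts, and 90-day duration are the figures quoted above, not established facts):

```python
# Sanity-check the two maximum-fine scenarios quoted in this thread.
# All inputs are the thread's assumptions, not confirmed figures.

def max_fine(per_user_per_day, users, days):
    """Maximum fine = daily rate per user x affected users x days of violation."""
    return per_user_per_day * users * days

# Conservative scenario: $1,000/day, 1 million accounts, 90 days
print(f"${max_fine(1_000, 1_000_000, 90):,}")        # $90,000,000,000

# Headline scenario: $40,000/day, 50 million users, 90 days
print(f"${max_fine(40_000, 50_000_000, 90):,}")      # $180,000,000,000,000
```

Either way, the statutory maximum dwarfs any fine the FTC has actually imposed.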
The Dems had the election on a silver platter and they still lost because Hillary was awful.
Hillary lost the election; with any other candidate it would've been a win.
Unfortunately for her, Julian Assange decided to make it his religion to ruin her and Donald Trump happens to be very good at channeling populist antipathy. So it goes.
>She had MSM, the entirety of liberal America, all major tech companies, most/all colleges, illegals voting en masse
Ok. Let's go through this one by one...
- The Democrats/leftists/DNC do not control the mainstream media. That's a conspiracy theory started by the right-wing fringe and Fox News, and of course canonized by Trump and his supporters, in order to dismiss all criticism in the media as manufactured.
- The entirety of liberal America does not think and act in unison, nor were they entirely behind Hillary. Both parties were fractured this last election, and many Democrats who couldn't get Bernie wound up voting for Trump or stayed home.
- All major tech companies are not liberal or leftist. There is a deep wellspring of right-wing, alt-right and libertarian ideology in tech and SV.
- "most/all colleges" are also not automatically leftist. Plenty of right-wing, alt-right and libertarian ideology there as well.
- "illegals voting en masse" is just a baseless conspiracy theory.
You are correct that the race was Hillary's to lose. Unfortunately you couldn't resist running through the typical Trumpist hyperbole. Sad.
Because she was a woman? I mean, in 1984 Walter Mondale got 13 electoral votes and just 37 million votes. I think that qualifies as much worse.
But I get you.
Is it really beyond your comprehension that someone would judge Hillary based on the quality of her character rather than her gender?
But on the other hand, you brought up the "worst candidate in history" thing for other reasons. It's just not mathematically true, man. So bringing up bias is fair game; you aren't using math as a judge. But I guess it could be a bias toward recent events. Who knows - either way it's not true.
I'm sorry I triggered you with the word "Trump" and I'm sorry you triggered me with just saying something that is mathematically false.
I also looked at your Hacker News profile and it looks like you only talk about politics here - this is a technology forum, so I think you have the wrong audience. I'm sorry you are so angry but, Jesus Christ, let's talk about computers here.
PS - If I could save your blood pressure, I'd downvote this response for you. I don't care about internet points here.
Mondale may have received only 40.6% of the vote, but Trump, as a general rule, shouldn't have had a chance. It was a Black Swan event of epic proportions, and the Democrats made a mistake every step of the way. The statistical likelihood of that happening was astronomically low, but Hillary's involvement made it a guarantee.
Earnings (Q4 2017): $4B
Earnings (Y2017): $16B
Revenue (Q4 2017): $13B
Revenue (Y2017): $40B
So, maybe you're confusing revenue with earnings (net income) and a quarter (3 months) with the entire year (12 months). Because $90B is over 20x FB's Q4 2017 earnings and over 5x their entire 2017 earnings.
I saw their Q4 revenue statement and read the year end 40B as the Q4 revenue.
It's mere coincidence, but your spelling "Casablanca" as two words (Casa Blanca) put into my mind that the literal translation of that place is "white house" (two words, natch). 
To your point, yes, Facebook knows user data trafficking (gambling) goes on, as well as the stakes of such trafficking. Facebook is the gatherer and ostensible guardian of such data, but they directly profit from such trafficking. Very likely their "interest" in user data security is pretense.
EDIT: recast second paragraph to more clearly convey intended meaning.
You mean major election_s_, right? I do seem to remember the Democrats crowing about how Obama's team had used social media to their advantage and Republicans were hopelessly outmatched in this arena.
> But the Obama team had a solution in place: a Facebook application that will transform the way campaigns are conducted in the future. For supporters, the app appeared to be just another way to digitally connect to the campaign. But to the Windy City number crunchers, it was a game changer. “I think this will wind up being the most groundbreaking piece of technology developed for this campaign,” says Teddy Goff, the Obama campaign’s digital director.
> That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists.
Whoa, that sounds exactly like the "breach" we're talking about here!
And a former Obama staffer confirms this: https://www.theblaze.com/news/2018/03/20/ex-obama-staffer-cl... (yeah yeah "I don't trust your source", but it's just screenshots straight from the horse's mouth).
> Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.
> They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.
1. The Democrats didn't harvest the data under false pretenses; the data came from people who signed up for a political app.
2. The Democratic campaign data wasn't illegally transferred from one company to another.
But I agree that the Obama campaign's actions should have been a flag and we should have worried harder about it, even if they weren't as bad as what Cambridge Analytica did.
Were these people aware all their data and their friends' data was going to be recursively sucked down? Somehow I doubt the app included a disclaimer to that effect. It doesn't really matter what your app does if its main goal is to, well, harvest data.
That you know of. It's data, it can get around. The staffer did mention that the Democrats still have the data, and they weren't supposed to be sucking down the whole graph in the first place, hence Facebook's initial freakout (but of course, it was OK because "we're on your side.")
It's possible to say "I think the Obama campaign also took undesirable actions" without saying "and they were just as bad." I agree with that position, as I said.
Obama campaign was US CITIZENS who are legally allowed to work on election programs.
CA was staffed almost entirely by BRITISH and CANADIAN citizens, and ALL of their Trump 2016 (and Cruz et al.) actions are straight FEC violations: foreign actors working on US elections.
CA acquired data from a third party which did not have permission to give CA the data. The Obama campaign did not do that.
Facebook required the third party (Dr. Kogan) to certify that the data had been destroyed. Dr. Kogan certified that the data had been destroyed, but did not do so. The Obama campaign did not do that.
These facts support the conclusion that nobody should have access to this kind of data, including the Obama campaign. They do not support the conclusion that the Obama campaign did the same thing as CA.
I also don't think you've provided evidence that the Obama campaign still has the data. If I've missed that please let me know.
I also noticed that you are conflating the Obama campaign with the Democratic Party. If you have evidence that the Obama campaign shared this data with the Democratic Party, you should also share that.
> “Where this gets complicated is, that freaked Facebook out, right? So they shut off the feature. Well, the Republicans never built an app to do that. So the data is out there, you can’t take it back, right? So Democrats have this information,” she said.
This is what Davidsen has said.
Also, as you said, they obtained the data legitimately. Why _wouldn't_ they keep the data around for future use?
> I also noticed that you are conflating the Obama campaign with the Democratic Party. If you have evidence that the Obama campaign shared this data with the Democratic Party, you should also share that.
Common freaking sense. It's a goldmine for future elections, they would be fools not to share it with the DNC.
Considering how much traction this story is getting, and considering that the Obama campaign used the same friend list "breach" to obtain data, they really should comment to the effect that they aren't keeping the data around. Otherwise, common sense says they are. That, coupled with Facebook's rather "it's OK" response to learning that they sucked down tons of data makes me think FB didn't make a big stink about deleting the data. If they did, they need to attest to that.
Well, no. They'd be people who are violating their Facebook contract if they did.
When you live in the swamp, it's easy to assume everyone is dirty. The Obama campaign certainly used data in a way I personally find uncomfortable, which makes it even easier to leap to conclusions. However, there's no value in this conversation as long as you don't understand the difference between evidence and the things you want to be true.
It's very likely that the Obama campaign retained the data: I'd put it around 75%. Others have different assessments.
Lumping all uncertain things into one bundle of low probability is a massive category error.
Again, who’s actually asking any questions whatsoever about their use of harvested social media data? You’re only in breach of your “Facebook contract” if someone cares to look into it in the first place. You still haven’t addressed the staffer’s claim that Facebook was freaked out about the campaign’s harvesting of data but then said they were “OK” with it. You trust FB to make a stink if the Obama campaign misused data? Seems to me like they were perfectly content to look the other way.
1. It was not Democrats, therefore it was wrong if not illegal.
If Hillary had won none of this would have come about and even if it did no one in Congress would be up in arms. We have had nearly two years of people trying to delegitimize Trump's win. This is a standard political tactic by the losing side but this time Trump beat both sides at the game.
These politicians and activists refuse to acknowledge that their message is either not acceptable or delivered wrong, or, even worse, that a large number of people were just tired of them.
There simply wasn't enough money spent by Russia to change the outcome, and this completely ignores the fact that they have been doing similar things in nearly every election they could, if not within political parties and the media.
I'd question illegality. In violation of agreements, perhaps. If there were any, and there wasn't a wink, wink type of understanding on what would be done.
For a much better examination of legal aspects than I can provide, see https://www.lawfareblog.com/cambridge-analytica-facebook-deb.... Please keep in mind the sentence "I am leaving aside for now the potential claims under British and European law, but those add to this list considerably," which is rather important given the EU's more aggressive privacy regulations.
I sort of don't care why the media firestorm is so bad, even if it's unfair, because it means we might see some action which will limit bad actors on all sides of the political spectrum.
But how long does the harvested data remain "valid" for that purpose? The Dems still have the harvested data from 2012, is it OK to use it for 2016, which they most likely did?
You do sometimes get bits and pieces, like the Time article from 2012, that haven't been memory-holed yet, but again, the media won't bring up something like that because the intent is to paint this chilling use of social media as something unique to the Trump campaign.
I agree that there is a pattern of bias in all large media outlets on both sides. They may put out a piece like this one to appear impartial, but only post facto and if it supports the rancor of a news cycle that currently leans in their side's favor.
Anyways, there is bipartisan benefit to people becoming more aware of their online presence. Maybe people will use social media less and become less fervently partisan?
“We ingested the entire U.S. social graph,” Davidsen said in an interview. “We would ask permission to basically scrape your profile, and also scrape your friends, basically anything that was available to scrape. We scraped it all.”
So obviously a fair amount of strategic writing going on but all things considered, pretty respectable.
Bloomberg has also admitted Obama took advantage of it as well:
"The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump."
To me, the interesting part going forward is: will Democrats and the mainstream media continue to frame this as if it was Donald Trump who committed the wrongdoing? I'm not really sensing any widespread public outrage so I would suspect not, but time will tell.
If the answer is no, we don't store it.
No one ever seems to get the maximum fine in America, often because it would "destroy the company".
But we're willing to execute living people.
As the old adage says: I'll believe corporations are people when they execute one in Texas.
This is what happened to the banking industry after the 2008 financial crisis.
* The FTC actually does something about this in a way that companies in a similar manner are also affected (directly or indirectly)
* These companies don't find a way to get around the issues.
I'm not convinced anything will significantly damage tech companies whose primary profit driver is their users' data anyway. The general public has been using them for years now and despite any outrage, it's become too integrated in society for people to suddenly stop (unless someone comes up with a better alternative).
While it's clear that CA/Russians/whoever tried to influence the election through these techniques, is anyone aware of any studies or evidence that they actually affected anything at all? Has anyone even done a survey asking people if they either did not turn out to vote, or changed the candidate they were going to vote for, based on paid advertising they saw on Facebook?
I'm genuinely curious about this, I'm not trying to be argumentative. After this erupted yesterday, I went looking and found nothing. This whole thing may be much ado about nothing.
Stay in the organization and work to turn it away from casual misuse of personal information. Prevent an Orwellian future of machine-learning-assisted, personally targeted messaging preying upon our fears and insecurities. Stand up and speak out against the performance of unethical psychological experiments on unwitting participants.
This is one of the important moral issues of our time. To stay on the sidelines is unacceptable.
Nobody appeared to be “casually misusing” data—I think the problem is that they’re largely just engineers, particularly young ones, naïvely considering only the engineering side of things. All the data queries go through the robust privacy-checking system, so everything is good, right?
In a case like this, they didn’t consider the optics of what happens when someone scrapes the public (at the time) profiles of Facebook users and uses that information for nefarious deeds. What happens when users are angry not because their private data was “breached”—a technical problem with an engineering solution—but because they didn’t realise how much they’d already shared publicly (even if you explicitly told them) and how it could be used to influence them en masse?
Case in point, one of the most common policy violations is prefilling the user message on posts made via the API. It is forbidden. But the field is right there for you to abuse and put whatever you want into it. Sure there are some automated enforcement algorithms and policy employees look at things when complaints go up, but if the policy says you can't do it, why on earth does the code allow it?
OK I know the pat answer is that apps are allowed to prompt the user earlier in the workflow for the message, and then use that value when calling the API. That is true but weak (what would it hurt to eliminate that loophole vs. the benefit of no longer having to detect and take enforcement action on an impossible action) -- the point remains, if they really cared about their vaunted policy and protecting the user, they would put more controls directly into the code behind the API to disallow prohibited actions.
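The kind of in-code enforcement being argued for here could be as simple as rejecting any API call whose protected field was supplied by the app rather than entered by the user. A minimal sketch of the idea (the function, field names, and the notion of tracking user-entered fields are all hypothetical illustrations, not Facebook's actual API):

```python
# Hypothetical server-side policy check: reject posts whose user message
# was prefilled by the app instead of typed by the user. Field names and
# the user_entered_fields mechanism are illustrative only.

FORBIDDEN_PREFILL_FIELDS = {"message"}

def validate_post_request(payload, user_entered_fields):
    """Raise if a policy-protected field was supplied by the app
    rather than entered by the user in an earlier prompt."""
    for field in FORBIDDEN_PREFILL_FIELDS:
        if field in payload and field not in user_entered_fields:
            raise ValueError(
                f"Policy violation: '{field}' must be entered by the user, "
                "not prefilled by the app."
            )
    return True

# A post with no message, or a message the user actually typed, passes;
# an app-prefilled message is rejected at the API boundary.
validate_post_request({"link": "https://example.com"}, set())
validate_post_request({"message": "my own words"}, {"message"})
```

The point stands either way: a rule enforced only by after-the-fact policy review is weaker than one the API simply refuses to execute.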
These are things where smart engineers can make a difference. Spend some time on the FB Developer Community Group and you will see the flood of questions from developers who are completely ignorant of the policy, even on basic things like "don't use an account with a name other than your own" aka, there are no business or developer accounts. Many of them willfully ignore policy and just do what the code allows them to do. A lot of good could be done by FB devs taking more accountability for how the platform is abused.
Case in point, Cambridge Analytica used ill-gotten data from 50 million people to craft extremely effective political ads. And since user engagement with those ads was very high… Facebook's algorithm made it cheaper for them to buy even more ads.
I think there is enough information available for Facebook employees to be faced with a decision, after which they are morally culpable for the growing net-negative effect that Facebook has on society.
I'm not a Facebook engineer--and I'm probably not smart enough to be one--so I can't really say how I would act if faced with an ethical decision to provide for my family or take a stand. However, I think anyone who has been employed by Facebook is capable enough to be able to immediately find comparable employment.
Similarly, I think there were lots of well-meaning people involved in Big Tobacco, who didn't realize they were contributing to the deaths of millions of people. I imagine there was a similar inflection point for them, as well.
(Please note, I do not think Facebook is as damaging to the world as Big Tobacco. I also don't think that individual contributors are as culpable as leadership. I am not comparing the degree of moral evil, but am comparing the complicity of individual contributors.)
I absolutely agree with you that this is a moral decision for the employees. At a former company I pushed to improve our user privacy and decrease our storage of unused personally identifying information.
I left that company when they neutered my project to only affect the UI...
We aren't soldiers following orders, we are humans that can reflect on our actions.
That said, I had the savings to be unemployed for a while, not everyone does.
Is this the only option?
Why can't it (not necessarily facebook) instead be a "machine-learning assisted, personally targeted messaging to help support your long term goals?"
>This is one of the important moral issues of our time.
No, it's not. Even if it was (and it's not) I'm not even sure if it would crack the top 100. For example, did you know there are people without access to clean water? There are civil wars? State run gulags? Did you know man-made global climate change is a thing? How about that we're going through an unprecedented ecological collapse? All non-issues. The big moral problem of our time is a social media company that wants to sell you shit.
Two of the issues you mentioned, state run gulags and anthropogenic climate change, are issues really only solvable at the federal level. Facebook's and Cambridge Analytica's ability to influence an election can have a profound effect on those kinds of issues. I mean, we now have a climate change denier in the White House who is dismantling the EPA. If propaganda spreading through Facebook created that, could that not also be partly responsible for our inability to do something about climate change?
That's just one example, but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.
No. OP called out Facebook, not Cambridge Analytica. OP attempted to shame Facebook employees not Cambridge Analytica employees. Facebook is here to sell targeted ads.
>but I think you're being just as hyperbolic by saying this wouldn't crack the top 100.
I stand by it. This smells like a big nothing burger. I'm not even sure what the news here is. Candy Crush probably has info on hundreds of millions of Facebook users. No outrage there.
It isn't even novel that Facebook was used for political targeting; the Obama and Romney campaigns, and more broadly the DNC and RNC, did the exact same thing. I just assumed this was all part of that vaunted digital strategy the news outlets were blaring about every time one party won an election. It may be a coincidence that this is a problem because Trump used this method for voter outreach. Maybe.
Maybe it's the potential Russian meddling that's the new news here? But that's not really what the news outlets are focusing on. It's all about how Cambridge Analytica created 'psychological profiles' on voters... which sounds more like a query that was run against the dataset.
A couple of years ago or more, I was posting on Facebook about Cambridge Analytica's practices and was considered a tinfoil-hat crazy.
No, the reason I was able to shed some light at the time was that I knew exactly how we could utilize the Facebook API back then to elicit the kind of data we are talking about, completely legally. Nobody needed to circumvent FB API policies; it was yours for the taking.
I didn't do it, although I did put together multiple PoCs from 2011 to 2014 to see what was possible, and it was bad.
Another thing we should remember is that Cambridge Analytica is just one small tip of a fractal iceberg whose body is Facebook and the big five, your internet connection and certainly your smartphone themselves.
Google, Apple and Amazon are no less culpable in this regard.
The question now becomes which side of history we want to be on.
Another question, assuming we want to take our privacy back, is how we do that with consent and assurance.
I don't have a Facebook account anymore, but I'm still tracked, as we all are. My mother doesn't like me not being there, but it's a small price to pay. I can contact her elsewhere, and do.
Surely enough is enough?
I think it is time to look for broad scale technologies that are better both in the real world and in our private world.
In early 2011, the minimum buy price on the platform was $500K. By midyear, $300K. By early 2012, $100K. Early 2013? $50 (no K missing, just fifty bucks).
Specifically, it isn't necessarily about advertisers; it's about surveillance.
Advertising revenue can be completely offset by Governmental tracking.
As I said in the other post we can't prove the positive but it certainly is a feasible option.
I know I could do it given the charter.
On the other hand I'd refer you to Bletchley Park.
Turing et al. knew the contents of the decrypted Enigma messages, but the Government was unable to act.
For good reason.
Secrecy is a thing
I do think it's important to note that I have not seen direct evidence of them abusing that data, but we've seen plenty of companies/governments/organizations doing bad things for years without direct evidence.
As for zero-knowledge encryption, iCloud Keychain is, although the rest isn't; you're right there. Hopefully they'll move in that direction.
But that's irrelevant to the point. The point is that Apple prevents users from understanding or controlling how the user's data is being used. Just because we understand why they won't fix it doesn't make it any less true that they could fix it, but choose not to.
And that's what I mean by "putting their money where their mouth is". They talk a big talk about protecting their users, but their actions are different than their speech.
We can't be seen to pick on Facebook or CA here since there is a bigger picture.
It's not about picking on anyone, it's about a line being crossed and bringing it back home.
Thank you for your comment.
But let’s not pretend that this fiction is true. Only one campaign hired this company. And if they are bragging to journalists now that they are willing to entrap politicians with hired prostitutes, I’m fairly certain they would have had some things in their sales pitch two years ago that would raise red flags in an ethical campaign.
The people you hire are a reflection of your character. And if they end up arrested one after another, it becomes less and less likely to just be bad luck.
What's far more concerning, and what this probe doesn't appear to address, is what Facebook does with the information of non-users.
Let's have empathy for people outside the tech bubble and realize that it's our duty as technical people to educate people around us about these issues.
Then I told her about what they actually do with the pixel and like buttons and she was flabbergasted. "You mean they can see what I read even if I don't press the like button?"
Not sure I convinced her to delete the account though as all her friends are there.
I'll give a more recent example: I meet 20-somethings at a meetup I go to each week. Most of them go to a pretty well-known university (thus, they are well educated), they ask me if they can connect with me via Facebook. I say I don't use Facebook, and then spend an extra 20 minutes explaining all the reasons why often to their astonishment. In my mind I'm like, "Really? How do you not know all of this? You read tons of magazines/journals?"
The sad reality is that billions of people don't care. Even with this whole scandal, I'd be shocked if Facebook's stock price was hurt in the long term.
Adblock detectors that function in the same vein as "FuckAdblock" check whether the client blocked a Facebook pixel.
Or are the websites providing identifying information like email? (I've never heard of this but I'm not well-versed here).
But who, exactly, is the individual? Well, that comes later. Maybe your blocker fails to block something that is gathering that data plus your identity. Now, all of that activity (that was previously not tied to an individual) can now be safely linked to you, the individual.
Also, thanks for sharing that EFF link, I really like the breakdown of how much entropy they can get from each fingerprint dimension.
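The per-dimension entropy figures in that kind of breakdown come from surprisal: an attribute value shared by a fraction p of browsers contributes -log2(p) bits, and the bits from independent attributes add up. A rough illustration (the frequencies below are made up for the sketch, not the EFF's measured data):

```python
import math

# Surprisal of a fingerprint attribute: a value shared by a fraction p
# of browsers yields -log2(p) bits of identifying information.
def bits(p):
    return -math.log2(p)

# Illustrative attribute frequencies (invented for this example)
attribute_frequency = {
    "user agent":  1 / 1500,   # ~10.6 bits
    "timezone":    1 / 8,      # 3 bits
    "screen size": 1 / 100,    # ~6.6 bits
}

# Assuming independence, total identifying information just adds up
total = sum(bits(p) for p in attribute_frequency.values())
print(f"total: {total:.1f} bits")
```

Around 33 bits is enough to single out one person among roughly 8 billion, which is why a handful of mundane-looking attributes can uniquely identify a browser.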
Yea, so "laughable" that people are not constantly paranoid and super informed about how the information industry works. /s
It's not people's fault that facebook is abusing their data. That's some sociopathic logic.
I was in meetings with FB almost 10 years ago, as the OpenGraph API was being implemented, where they were openly selling, to anyone willing to pay, exactly what CA supposedly "hacked their way into".
For instance, before this, some of the most ethically questionable censorship stories I have heard from Facebook have had to do with minority groups or various activists in more repressive regimes around the world being blocked or censored.
Likewise, with Cambridge Analytica claiming to have worked with more than 200 elections around the world, and Channel 4 not painting an exactly flattering picture of their ethics, it's very possible that some of the most disturbing details that will emerge from this scandal have zilch to do with Donald Trump.
Repressive laws under authoritarian regimes are laws too. At the very least, we should admit that we're evaluating the specific rules (or the people making them) under some other rubric before deciding whether they ought to be obeyed. The sentiment you express here is exactly why "companies should obey the laws where their users live" and "countries should make laws according to their values and enforce them against websites accessible by their citizens" are too simplistic.
FB is off 8.5% now because a client business failed to respect FB users' privacy, using data that the users were willingly giving to FB and the client (but not to the third party). Not likely much more downside to the stock on this news, imo.
Palantir, in my guess, is probably like CA on steroids.
Looks like CA is just one means Palantir has used to get Facebook data.
The unique data is the friend graph and the likes, which they can use to (quite effectively) predict political attitudes.
Leak such data without explicit customer consent? That will be $10,000 per incident. So if you leak 100 data points of someone's location history that will be a $1,000,000 fine.
Explicit consent must be per-incident as in "YES I give my consent to send this information to <recipient> for purpose of <...>."
That would incentivize strong security practices and even more importantly dis-incentivize data hoarding beyond what is needed to provide a service. Hoarded personal data would be a gigantic risk and liability.
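The incentive argument can be made concrete: under a per-incident fine, expected liability scales linearly with how much data you hoard. A toy model (the fine amount follows the proposal above; the breach probability is an invented assumption):

```python
# Toy expected-liability model under a per-incident fine regime.
# The breach probability is an illustrative assumption, not a real figure.

FINE_PER_LEAKED_RECORD = 10_000      # $ per leaked data point, as proposed above
ANNUAL_BREACH_PROBABILITY = 0.01     # assumed chance of a leak in a given year

def expected_annual_liability(records_stored):
    """Expected yearly cost of holding data = P(breach) x records x fine."""
    return ANNUAL_BREACH_PROBABILITY * records_stored * FINE_PER_LEAKED_RECORD

# Hoarding 1M records vs. keeping only the 10k needed to run the service:
print(expected_annual_liability(1_000_000))  # ~$100M/year of expected liability
print(expected_annual_liability(10_000))     # ~$1M/year
```

Under a model like this, every record you store but don't need is a pure cost, which is exactly the disincentive to hoarding being described.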
Businesses don't take it seriously because people don't actually care. Some do. The vast majority don't. They might say so in a survey, but at the end of the day, Facebook (and companies like it) will continue to survive doing what they always have been and people will continue using those services.
That's the root of the issue, though. People really don't care as much as posters on HN think they do. If we could acknowledge that, I think we could come up with better solutions.
Consent must be asked for in a clear understandable fashion.
Burying some legalese crap on page 29 of your 12,000-word TOS doesn't cut it.
Could be that Facebook tries to push the envelope yet again. They may come to regret it.
If I had to bet, we won't see some big exile of users as a result of having to click through an additional "Agree and Continue" dialogue to get to what they were going to do anyway. The GDPR will do a lot more to make companies appear to be doing things right than to actually benefit users.
You have a right to learn what data they have about you, with whom they share it and a lot more details
In addition you can opt out of anything you agreed on earlier and
I would be surprised if you can't request deletion if the business has no business reason to store it. (Arguably, a difficult call with Facebook, whose entire raison d'être is to fuck with your privacy).
FB didn’t drop billions in value because nobody cares, it’s just that most people take a lot of repetition to grasp the scope of the issue.
That is not the same as individual users caring about data privacy as much as HNers believe.
I think you’re selling people short, and in the face of evidence contrary to your claims.
And I'm not selling people short. I think the risks are incredibly overstated and the people who appreciate free services (acknowledging some data sharing is happening) are not necessarily just dumb-dumbs being preyed upon.
I think we're reaching the point where all these data mining honeypots we've built over the past 20 years are being used in ways that are nefarious enough that people are starting to care.
I think it's in the media a lot right now because it potentially helped Donald Trump win an election (despite the Obama '12 campaign being praised for similar tactics).
People having the data I put on Facebook (which is not notably more than is available through public sources) is not going to destroy my property or lose me my job. The rhetoric here has been dialed up to 11 and it's not winning any converts.
#1 was on the front page of HN a couple of days ago.
If you think this kind of information floating around is damaging to society, then you're making an argument against the very idea of FB. Fine; it's at least worth thinking about how social media is changing our society. But to really be upset that this data is out there and being used, after willingly sharing it with many people, is kind of ridiculous. What did you expect? You should assume that anything you do on FB, short of private messages, is part of the public record.
Everyone is targeting you based on information you put out in the world. Every major political campaign in the US works with a database of voters that includes party affiliation, past turnout, name, age, etc. They can also fold in the same data that advertisers use. Being a good citizen of a republic—and a savvy consumer—in the modern world means thinking critically about who is trying to persuade you and what their agenda is. The government cannot protect you from persuasion.
Wasn't the FTC already "monitoring" Facebook?
Oh, that's right. That whole monitoring for 20 years thing, that they're also applying to Google and a few other companies is a complete joke. If anything, it has become almost a badge of honor/certification thing - like "Look, the FTC is monitoring us for 20 years, so that obviously means we can't possibly abuse your data or do anything nefarious with it!"
Anyone know how this number was added up? Any reason to believe it couldn't just as well be 500 million, or 5 million?
What I don't get about this whole issue, is that we are basically admitting that the average person who uses these services has poor critical thinking skills and the American democratic process is easily gamed as such. I feel we are trying to fix the wrong thing. Sure, regulate FB and any other company that happens to collect lots of user data; I don't have a problem with that. My fundamental issue is that FB is mostly an opt-in service. As far as I can see, the shadow profiles they create on a user who isn't signed up for their services don't really contain enough data to be of material impact for the type of things that a company like CA is doing (it's mostly to help its ad network sell you more stuff, and is rather well anonymized).
The only argument against the fact that FB is an opt-in service is that it has a near monopoly on social media and seems to either buy or kill any serious threat. More important than trying to regulate FB's data collection and privacy would be ensuring that our antitrust and monopoly laws are being enforced to remove FB's near monopoly.
Further, I'm not here to defend Facebook, but I feel it's being used as a scapegoat for an easily gamed democratic process where, rather than biting the bullet and fixing that, we are saying FB and CA are the real evil here. In fact they are not. They are for-profit businesses who are operating within the current regulatory framework they've been provided. It seems obvious to me what really needs to be fixed is a broken electoral process.
Unfortunately, I'm not sure how realistic that is. I think people just don't want to have to critically think all the time, and it's unrealistic to expect every member of the population to exert constant vigilance against ill will. There are simply too many forces trying to manipulate us and get our attention to expect every person to never mess up ever.
This is where the government steps in. Expect companies to be reasonably transparent about their intentions (signing up for Facebook definitely did not give me adequate warning a decade ago about their intentions with my data, and subsequent changes in their plans were not adequately expressed to me as a user), and reasonably cautious in their expansion (I signed up for Facebook at the age of 12, which is technically against Facebook's TOS. They didn't put enough effort into policing those TOS to kick me off the site at the age of 12, and I certainly wasn't mature enough to understand the breadth of Facebook's TOS. Unfortunately I don't think many non-lawyers are equipped to understand the true meaning of signing up for Facebook, which... is its own problem).
Basically we need these corporations to be better citizens than they currently are, which shouldn't be terribly hard since they're currently downright psychopathic citizens, exploiting the law and their large workforces to manipulate other citizens at every turn. In a better world than our own, corporations would actually be model citizens, but that's probably not realistic in the capitalist system. And we also need the government to do its job better, by punishing corporations that act selfishly so that others actually have an incentive to behave well.
An intellectually free society is one in which one doesn't have to think critically about the potential social and economic ramifications of every remark. It's a society in which people can pitch inchoate ideas, receive feedback, and iterate on worldviews.
That we're increasingly living in an intellectually unfree society isn't the fault of "corporations" and their citizenship. Instead, it's the result of a push toward controlling people in the name of 'fixing' society and preventing 'harm'. History tells us that down this road lies only death and blood.
If government should step in to prevent manipulation by misinformation, then one or both of the New York Times and Fox News needs to be shut down, depending on who you ask.
If you think FB can do no wrong since we willingly give them this information, what do you feel about HIPAA? Why is it okay for FB to sell data I willingly give them but a doctor can't? If the answer is "one is illegal, the other isn't", then you aren't arguing against the idea of it being made illegal, just that we don't currently have a law saying they can't do it. Laws change though.
Could you propose an alternative that addresses your points?
Is there not a way to have a browser extension that would scramble the metadata that is read by websites like Facebook and Google? So we would still be feeding data to their brain, but it would be worthless, random gibberish. Does that make any sense?
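For what it's worth, the core of that idea is easy to sketch. This is a toy illustration only, assuming the hypothetical extension could intercept profile fields before they're sent; the `scramble` function and the example `profile` dict are made up for the sketch, and a real extension would be browser JavaScript, not Python:

```python
import random
import string

def scramble(value: str) -> str:
    """Replace each letter/digit with a random one of the same class,
    preserving length and punctuation so the result still 'looks'
    like plausible data to a collector while carrying no information."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(random.choice(string.digits))
        elif ch.isalpha():
            out.append(random.choice(string.ascii_lowercase))
        else:
            out.append(ch)
    return "".join(out)

# Hypothetical profile fields a tracker might read
profile = {"city": "Portland", "birth_year": "1984"}
noisy = {k: scramble(v) for k, v in profile.items()}
print(noisy)  # same shape as the original, but random gibberish
```

The catch, as the replies below suggest, is that this only poisons the well if many people do it at once.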
1) Both Android and iOS allow apps to access your contacts, which in aggregate is more or less the same kind of social graph that Facebook has. If you happen to be in someone else's contacts, you don't get a say here either. I suppose Facebook's data is richer in some ways, but not in other ways.
2) When Twitter removed API access for 3rd parties, there was an uproar in the developer community about how evil this is and so on. There's a trade-off here - openness at the platform level necessarily means less privacy for users.
3) A lot of the criticism Facebook has received in the past (both here and elsewhere) had to do with not allowing 3rd party developers to do more and hoarding user data, which is not theirs, for monetization. Here Facebook was explicitly giving the app owner and the user the power to decide - the app owner could ask and the user could either accept or decline. You could argue that this isn't adequate protection, but consider how this works for other platforms such as Windows, Mac OS, iOS and Android. Apps can access more or less everything and permission dialogs, even where they do exist, aren't taken seriously by the user.
4) Most publishers that are currently publishing these articles criticizing Facebook are also selling everything they know about you to marketers, often more explicitly for the purposes of targeting. The "scandal" here is that a third-party app gathered personal information that wasn't supposed to be used for targeting and the data ended up being used for targeted political ads. Most publishers have no problem explicitly selling whatever data they can get on you to these centralized data brokers who will sell that data to anyone.
5) All this talk about privacy and data aside, the motivation seems to be that the wrong guy won the presidential election - I don't see anyone whose personal data was supposedly used in this manner being upset nor anyone owning up to the fact that they were falsely manipulated into voting for Trump or not voting. It seems to be mainly Clinton supporters being upset that other people were manipulated into voting for the wrong guy, amplified by the same concern about privacy and social graph data ownership issue we've always had.
6) If we accept that it's the presidential election result that most people are upset about here, the media is even more culpable, both from creating this false narrative that it was not a close election and prematurely taking the moral high ground against the potential Clinton administration by focusing on the irrelevant stuff (emails, etc). And that's just the "mainstream" media, before we get to Fox News, etc.
The scandal is that an organization impersonated a health care research entity and knowingly collected PII for use in political actions. Not only is this awful in itself, but it undermines public health by making people distrust legitimate data collection projects for beneficial health purposes. It's similar to when the CIA used a vaccination effort to locate Bin Laden, and now those aid personnel are routinely attacked and not trusted by locals which makes it more difficult to eradicate disease. If you are representing yourself as a health care entity and collecting PII for stated purposes of public health, you are likely bound by HIPAA, and I would like to see people go after this company for HIPAA violations as well as fraud.
What worries me is Android however. Perhaps not in the US, but in a lot of other places around the world Android is the only operating system accessible to people and a large majority of the market, since iPhones are an impossible purchase. Phone manufacturers depend on Android like computer manufacturers once depended on Windows.
Apple is raking in the profits because it controls the vertical and sells expensive phones in premium markets, but in terms of raw market share things seem to be shaping towards a Microsoft/Windows-like situation. Android is close to passing the 75% mark, and judging by sales it probably will.
Maybe we're still not quite there, but it's shaping up in that direction. Facebook def has the lion's share on social networking, and messaging, but I can't help but believe that when it comes to Facebook it's mostly network effects keeping people in, since there are plenty of other (and sometimes better) messaging apps, lots of photo-sharing apps, and alternatives for event planning, getting your news, posting updates, etc.
Idk if "Google should be hauled in first", but it should def be hauled in too (alongside Amazon, but that's a whole other story).
This means you don't have much choice when it comes to search, but you do when it comes to social networks. Obviously, you can live without both; but if you need both search and social networks in your life, it's obvious which company is more powerful. (IMO you can live without social networks, but not without search engines.)
“We don't smoke that shit. We just sell it. We reserve the right to smoke for the young, the poor, the black and the stupid."
- R.J. Reynolds executive’s reply when asked why he didn’t smoke
Now what's a good alternative to Facebook Messenger video calls? Signal doesn't support it - I read that it's in beta, but I can't enable it. Telegram doesn't seem to support it either. Skype is not an alternative.
Is Matrix/Riot good enough right now? For me it should work on Linux, Android and iOS for my family.
Now that I think about it, maybe I should setup some private WebRTC service for video calls. But it seems cumbersome for part of my family. However it probably would be easier for my parents and my in-laws.
EDIT: As Thriptic mentioned Signal indeed does support video calls. I failed to find the functionality, because I expected to see separate button for video call. One has to first start calling and then enable video.
I'm more immediately concerned about blatantly fake news, clickbait, bots, sock-puppets and fake accounts posing as trusted parties in order to harvest trusted information and spread misinformation.
And if you think you are safe because you don't have an account, I have bad news for you.
Not to mention... I don't think this would actually give anyone "Legal authority to examine all that juicy personal data Facebook holds." I don't think "legal authority" actually works that way, but IANAL.
One of my main takeaways was that one of FB's big goals was to build a 'knowledge economy'. It struck me as a bit of an odd objective at the time, but I think I am now starting to understand what this means (and it's a little scary).
All these corps do it without even the veneer of informed consent; we should make sure we criminalize the activity and not just crucify a well known practitioner and call it a day.
Maybe I'm misreading this, but only a few thousand?
I'd really like to see something like the NTSB here, but for privacy/security issues. After an incident, the NTSB comes in, investigates everything, and produces a very detailed report as to what happened and what the industry should be doing differently. You can see their recent reports here: https://www.ntsb.gov/investigations/AccidentReports/Pages/Ac...
It's very clear from Facebook's behavior since the elections that they can't be trusted to investigate and report on themselves. E.g., this article on how their execs thought it best not to say anything until forced by circumstances: https://www.nytimes.com/2018/03/19/technology/facebook-alex-...
It basically requires collective action though. If everyone does it at once they will start paying your bills to track you.
And that's taking all the global revenue into account.
Interesting side question: how do you see market forces working to set a proper payment for data to users? Right now Facebook is essentially saying your data is worth free photos and being advertised and propagandized at, and people seem to accept that. How does this not become the standard of exchange in your system for any popular network-effect service?
The point would be that people started valuing their data "correctly". I don't know how high that is but it must be worth much more than free web hosting or else these companies wouldn't have gotten so huge.