EDIT: And don't forget that going rogue is just one scenario. Another is just a bigger attack surface: the more insiders have broad system access, the more credentials there are that can be phished by/leaked to/stolen by outsiders. Really, it would be completely missing the point of security to have arguments about how exactly insiders' credentials might get compromised.
With so much to lose and so little to gain, internal leaks of this sort are extremely rare.
#1 - There's always a back door. I did some medical records stuff for a while. I looked myself up, just to confirm for myself how trivial it was to do. Yup, there I was. Which is why I insist that all data at rest be encrypted. (I have yet to win this argument; see the first sketch after this list.)
#2 - Our "portal" product had access logs for auditing, plus permissions, consent trees, delegation. The usual features. Alas, we also had a "break the glass" scenario, ostensibly for emergency care, but it was more like the happy path (see the second sketch after this list). And to my knowledge, during my 6 years, none of our customers ever audited their own logs.
#3 - My SO at the time worked in a hospital and went to another, unaffiliated hospital for care, because she knew her coworkers routinely and illegally looked up patient records, and she didn't want them spying on her.
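Re #1: here's a minimal sketch of what encrypting records at rest could look like, using Python's `cryptography` package. The record fields and key handling are illustrative assumptions, not any real system's design.

    # Minimal sketch: a record is encrypted before it touches disk.
    # Assumes the `cryptography` package; all fields are made up.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice held in a KMS/HSM, not next to the data
    fernet = Fernet(key)

    record = {"name": "Jane Doe", "dob": "1980-01-01", "dx": "..."}
    ciphertext = fernet.encrypt(json.dumps(record).encode())  # what gets stored

    # A casual "look myself up" now fails unless it goes through
    # whatever service guards the key.
    plaintext = json.loads(fernet.decrypt(ciphertext))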
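Re #2: a hedged sketch of what a "break the glass" path with real auditing might look like; the function and store names here are hypothetical. The point is that emergency access is permitted, but every use leaves a trail, and a trail is worthless if nobody reads it.

    # Hypothetical break-glass gate: access without consent requires a stated
    # emergency reason, and every access lands in an audit log.
    import logging
    from datetime import datetime, timezone

    audit = logging.getLogger("audit")

    def load_record(patient_id):
        return {"patient": patient_id}  # stand-in for the real storage call

    def fetch_record(staff_id, patient_id, *, has_consent, emergency_reason=None):
        if not has_consent and emergency_reason is None:
            raise PermissionError("no consent and no emergency justification")
        audit.warning("staff=%s patient=%s consent=%s break_glass=%r at=%s",
                      staff_id, patient_id, has_consent, emergency_reason,
                      datetime.now(timezone.utc).isoformat())
        return load_record(patient_id)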
The 'we will log your access and fire you' line of defense deters nothing when someone took the job for the sole purpose of moving data out.
Someone in that position would be much better off building a back door into the system. But by that logic, they could also build a back door into iCloud, or scrape Gmail data from within Google.
I assume that Facebook has mechanisms to check that new hires (especially foreign nationals) are legitimate.
Ironically, you're undermining your own point. The fact that they would be fired afterwards in no way contradicts the notion that they could access such data, and in fact suggests they can (hence the firing policy).
Everything is logged, so if you might have looked at anything you shouldn’t have, it’s flagged and you’re audited; if you didn’t have permission (from a user and/or manager) and a valid business reason, then (we were told during onboarding) you’re likely to be fired and possibly sued.
The reality is that huge amounts of personal data were harvested by third parties through app permissions - apparently with FB’s knowledge and support.
No one needs back door hacks to get into a vault when the front door is wide open.
You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.
There are only a few roles (moderation) who can access the relevant tools, and while engineers may technically have programmatic access (how would you expect things to work if nobody did?), this is thoroughly logged, and you'd better have an ironclad justification if you don't want to be fired on the spot.
> You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.
(a) How do you know, and (b) so what is your explanation of stories like ? They're just hoaxes?
> and while engineers may technically have programmatic access (how would you expect things to work if nobody did?)
Again you are wording this in quite a vague, lawyerly manner, which again raises my eyebrows. "May" as in "might", or as in "do"? And "engineers" as in what fraction of them? There is a lot of wiggle room between "nobody" and "all engineers". It's quite strange that I can't get a straightforward, crystal-clear denial of a non-weasel-worded claim from you, given how confidently you're contesting what I'm saying. Please don't keep muddying the waters.
As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure. The people building the internal controls and defenses are smarter than you, they know what needs to be protected and are rather devious about thinking up attack scenarios and possible paths of compromise, and eventually get tired of repeating the same answers. Want to know more? Too bad.
Where did I ask for "deep details about security policy and procedure"?
> Want to know more? Too bad.
No, but thanks.
> There is some data that an average employee just cannot get to.
"Some data" means nothing. I'm sure this is true in many, many companies, ranging from the most competent to the most incompetent.
> For some data a dev can access it but the pattern of access and amount of data accessed will be audited and anomalies will raise an alarm.
This is yet again consistent with what I've said.
At the end of the day, the data is there: they have it. Possession is arguably MORE than 9/10 of the law in this situation. They can access it whenever they want -- trivially if they are rogue or have no concern for keeping their job. But this is true of just about any huge company that employs a lot of people -- and they're not going to say they can. Why would they?
For goodness's sake, please stop these straw-man arguments. I said this above once, but it seems I have to say it again: nobody ever asked for that level of detail. People have been struggling with far more basic issues. No current or ex-employee or intern has even come along to say something simple like "as far as I know, the average Facebook intern simply cannot access private user data regardless of any business reasons"; indeed, we've gotten anecdotes that the opposite has actually happened. How you suddenly deduce that I'm looking for specific descriptions of which teams can access what data is just beyond me.
That could be answered with something vague like "yes, this requires permissions from a small team of trusted individuals, which are granted only if the issue is severe/cannot otherwise get immediate attention/cannot be addressed by that team/etc., and it's essentially never granted to interns". No need to jump to "X-dev-team #1 has access to X, Y, and Z".
This is the best resource I've found for protecting such things:
Translucent Databases: Confusion, Misdirection, Randomness, Sharing, Authentication And Steganography To Defend Privacy http://a.co/eLgQACC
Maybe differential privacy stuff will supersede or complement these techniques. I'm keeping an open mind.
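For a taste, here's a minimal sketch of one trick in the book's spirit: store a keyed hash of an identifier instead of the identifier itself. The key placement and table layout are my own assumptions, not the book's exact recipes.

    # One "translucent" idea: the table answers equality lookups for whoever
    # holds the key, while a stolen dump contains no readable identifiers.
    import hashlib, hmac

    SERVER_KEY = b"kept outside the database"  # hypothetical key placement

    def blind(identifier):
        return hmac.new(SERVER_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    table = {blind("jane.doe@example.com"): {"appointment": "2018-04-01"}}

    print(table[blind("jane.doe@example.com")])  # works with the key in hand...
    # ...but the stored keys are opaque to anyone who only has the table.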
-Former custodiet of the custodes
Everything you're describing sounds like it's either incredibly fly-by-night, not in the US, or substantially out of date. If the last two aren't true, you have a situation that is literally illegal.
In the USA, there is no way to encrypt medical records at rest and still permit data interchange, because in the USA we do not have universal MRNs (PIDs, GUIDs, whatever). Meaning that if demographic data is encrypted, the system cannot match records across org boundaries, meaning care providers aren't 100% sure they have the correct medical history for the patient, meaning prescription errors, cutting off the wrong arm, misdiagnosis, etc.
Some enclaves like Medicare and VA can encrypt their own data for their own usage, but that protection is moot the moment data is shared with other orgs. It's been a while since I've checked, but I doubt they do encrypt, because that's a bottom up design decision.
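To make the matching problem concrete: the usual proposal is to hash or deterministically encrypt demographics so orgs can link records without exchanging plaintext, but as this toy sketch (my own illustration) shows, such tokens only link records that agree byte-for-byte; a single typo or name variant defeats the match.

    # Why hashed demographics break cross-org record matching:
    # deterministic tokens only link on exact byte-for-byte agreement.
    import hashlib

    def match_token(name, dob):
        canon = name.strip().lower() + "|" + dob
        return hashlib.sha256(canon.encode()).hexdigest()

    a = match_token("John Smith", "1970-02-03")
    b = match_token("John Smith", "1970-02-03")
    c = match_token("Jon Smith", "1970-02-03")  # one-letter variant

    print(a == b)  # True: identical demographics link
    print(a == c)  # False: one typo and the records never match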
This does not ring true to me at all.
It's certainly not true at financial institutions. By financial institutions I mean Fortune 100 financial institutions, as well as smaller financial institutions.
If by "pretty strict internal controls" you mean they can, like Prince Potemkin, point to such things existing in some chimeric form, then yes, I suppose you are right. But in any real sense, no, there are no effective controls in the real world.
About 25 years ago I assumed it was early days for a lot of these things and they would sooner or later be closed up, but they haven't been. Things are wide open, as the recent Facebook/Cambridge Analytica affair has shown, and in a very small and indirect way at that.
The first major book on this broad subject was Donn Parker's "Crime by Computer", published in 1976. The book opens by saying that a company's biggest enemies in terms of computer crime are its own employees. This is still true 40+ years later: the biggest enemies of the people who own companies are the people who do the work at them.
Yes, because Google is not your average company. It takes security extremely seriously... in fact it's about as awful an example as you can give for the blanket statement you made about "most companies".
Which is to say... Google and Amazon?
I'd also say it's the norm among most Fortune 500 non-tech companies.
That’s not to say I disagree with you, but the data collected is (to me) orders of magnitude less sensitive.
*disclosure: I toil in the adtech mines.
While this is certainly true, you've admitted elsewhere not knowing anything specifically about either Google or Facebook's security process, so how can you compare them? You seem to just "know" Facebook doesn't take security seriously (which is of course a ludicrous thing to say).
You already misquoted me once and I already replied to you. Why do you ignore it and do it again? Like I said: no, I never "admitted elsewhere not knowing anything specifically about either Google or Facebook's security process". You are misquoting me again just like you already did in , and it's quite improper that you choose to do this when I have already responded to you and called out your misrepresentation there. If you are looking for a response, see that post. If you are not, then please stop.
People like me or  have called you out because you keep contrasting Google and Facebook's internal security processes for no good reason, making definitive assertions like "[Google] takes security very seriously", suggesting that Facebook doesn't and should do "Whatever Google does".
And you're doing this not based on any specific knowledge of what the internal security process looks like at either company, but on your (flawed) perception of what engineering interns might or might not be able to do.
When people like esman1, who actually have that knowledge and context, volunteer to explain to you some of the safeguards in place (and he told you the truth), instead of taking the point you won't have any of what he says and keep going at it stubbornly.
I think this is the point where reasonable people stop arguing, and anyone else who cares can check your comments in this thread and make their own opinion.
2012: Google staffs up ‘Red Team’
And this was literally just a Google away: https://nakedsecurity.sophos.com/2012/08/24/google-red-team-...
The job even lists insider threat as part of their responsibility.
Without evidence we're both just guessing. Perhaps someone else will chime in with direct knowledge of how FB works.
It's _probably_ true that things in general have gotten better since then, and it's probably true that they're better at _some_ companies like Google, Facebook, and Amazon - but I'd tend to agree that it's very unlikely to be true for "most companies".
Who watches the watchers, indeed.
Source: I interviewed with their security team once and got a fair idea of how their various security teams are organized.
Do I understand correctly that you just admitted that your (extremely confident!) factual statement here:
> most companies have pretty strict internal controls for this sort of thing
was actually "just guessing"?
1) my direct knowledge of similar companies
2) the fact that no large-scale leak from internal sources has happened at FB, which is evidence that they have at least some internal controls or procedures to prevent one
There were a few effective internal controls. The obstacles to lookups were:
1 - all info keyed by cookie, which users can clear, and which is very difficult to tie to an identity. That is, to look you up, I'd need the cookie from your machine.
1a - most devs are not allowed to run the cluster jobs to look up data; only those on the appropriate teams can.
2 - but what about stapling? We required partners to pass us blind uids, certainly nothing like emails (see the sketch after this list).
3 - no data export. The business is to run ads on the customer's behalf, so there's no way built in to export data except targeting lists to the exchanges.
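A hedged sketch of how the blind-uid handoff in point 2 can work; the keying scheme and names are my assumptions, not the actual partner protocol.

    # Illustrative blinding: the partner sends a keyed hash of its own user id,
    # never an email or raw identifier; both sides join on the opaque token.
    import hashlib, hmac

    PARTNER_SECRET = b"per-integration secret"  # hypothetical shared salt

    def blind_uid(partner_user_id):
        return hmac.new(PARTNER_SECRET, partner_user_id.encode(),
                        hashlib.sha256).hexdigest()

    # Partner side: send blind_uid("12345") instead of "12345" or an email.
    print(blind_uid("12345"))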
I recently downloaded my Facebook archive. If it were legal, I would certainly pay thousands if not tens of thousands of dollars for certain people's archives. I can think of several practical contexts in which an unethical actor would find it profitable to pay a Facebook employee a million dollars for someone's Facebook archives.
Really? For what purpose?
On the upside, any case where one is engaging in high-value transactions (broadly speaking). Knowing a negotiating counterpart's likes, dislikes, communication style, et cetera can help one avoid mistakes, build a personal connection and draft (and frame) terms correctly on the first try.
More seedily, such information about a political opponent (whether a politician, rival on a commercial or non-profit board, or commercial competitor) is useful.
As a risk mitigation tool, such data would find a natural home in a due diligence file. Prospective executives, board members, business partners, political donation recipients, et cetera expose one to reputational risks. Catching those in advance is already worth tens of thousands of dollars of legal time.
I would hate to live in a country where the above is legal. We should recognize the value of the information every single Facebook employee has routine access to.
Well, apart from post-factum incarceration.
There's quite a lot to be gained. Enough to incentivize a very powerful attacker, possibly even a nation-state level actor who can extract the mole and protect / reward them.
The stakes are not low here; I can't imagine why you've said that.
What do you think the number is at facebook? At google? At your bank? At your healthcare provider?
That's not enough by any means (edit: and as  pointed out, I don't even think it's true). There needs to be more to security than mere deterrence. I'm pretty sure at Google, etc. it's simply impossible for a single rogue employee to mess with customer data (except for a few in very privileged positions), and my impression has been that Facebook is not like this at all (unless it has changed recently).
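To make "more than mere deterrence" concrete, here's a toy sketch of a preventive control, a two-person rule, where a sensitive read can't happen at all without an independent approver. The structure and names are mine, not any company's actual system.

    # Toy two-person rule: the sensitive read is blocked outright without an
    # independent second approver, rather than merely logged after the fact.
    def log_access(requester, approver, reason):
        print("AUDIT:", requester, approver, reason)  # stand-in audit trail

    def fetch(what):
        return {"data": what}  # hypothetical data fetch

    def read_user_data(requester, approver, reason):
        if approver is None or approver == requester:
            raise PermissionError("independent second approval required")
        log_access(requester, approver, reason)
        return fetch("user-data")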
Having never worked there, I can't speak to how it works at FB but I would imagine that there are a lot of limitations on what rank and file employees can do. I guess I could be wrong. Perhaps someone with direct knowledge will chime in.
Cool, now read this: https://news.ycombinator.com/item?id=16675503
Any changes to your thoughts?
Still egregious if that sort of early stage stuff hung around that long, but not the same as it being there today.
companies that move fast and break things don't give a shit.
I want more companies of the first type and fewer of the second.
Really? So are stories like  complete lies? Or does someone inside just blindly grant these "explicitly requested permissions"?
How else would you suggest to do privacy checks like these?
OK so an insider can just lie and access whatever they want. Heck, they can even tell the truth! Just find a bug that's exhibited in a particular profile and use that as an excuse to look at the profile.
> How else would you suggest to do privacy checks like these?
Whatever Google does. I don't know the details. But, for starters, my understanding is that their interns generally can't do what you just described, so fixing that would be one obvious step forward.
Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so, until an FB insider replied and brought down your narrative.
Is it that hard to say "ok well, I stand corrected then" instead?
No, you are seemingly deliberately misquoting me. I said I "don't know the details", not "I have no idea". I know enough to feel fairly confident in what I've said. But if you don't believe me you're more than welcome to believe otherwise.
> until an FB insider replied and brought down your narrative. Is it that hard to say "ok well, I stand corrected then" instead ?
Stand corrected about what narrative? Everything I am (and hopefully also you are) reading right here quite clearly says malicious employees can access user data, but will be fired if this is discovered, which is consistent with what I've said. (Don't bother replying if you expect a response; I have no interest in responding after your comment.)
The real education from this story is far deeper than just Facebook. It is that Facebook employees, and Google employees, and all humans in general are susceptible to this very same "kompromat" concept, and are all susceptible to various forms of influence to greater degrees than our arrogance allows us to admit.
Human beings are attack vectors. Human beings are too self centered to do much about this in any meaningful sense. They can laugh the very idea away too easily.
I know it's not the same, but this reminds me of that story.
A decent part of that conversation seemed to center around how it seemed highly unlikely that the whole hack was even possible without insider information leading to the development of the tool in the first place.
Fifteen years later, knowing what hacks have at least been claimed to have been pulled off through social engineering, I think the more important takeaway is that we need to stop portraying the worst case of hacking as a masked man executing some Bond-villain-style hack, because that recommends a fundamentally terrible heuristic. It by definition casts aside all of the incompetence that is equally likely to cause harm and, by sheer volume, is the far more likely scenario.
1. Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.
2. Privacy-related issues are escalated to the highest severity immediately (on par with data centers being down, etc.). I think the question in this whole debate is where you draw the line for this kind of issue, and what's an issue and what's a feature.
This means they are capable of doing it and are merely punished afterwards, right? Not to mention that I would imagine getting fired in exchange for viewing private data could be quite a worthwhile 'transaction' for some people in some cases.
(Or perhaps we have, and whichever trusted journalists they've chosen to share with are frantically poring over the exfiltrated data, working out how best to angle the story without throwing the whistleblower and/or innocent FB users under the bus...)
So the customer's privacy got violated, because interns had blanket access to private customer data. To me that's very much not taking security seriously.
I can't believe what I am reading. Why is that? Why use customer data for dev purposes? Why not work on some mock data?
I agree. It'll eventually happen to some social app or email provider (think Slack, gmail, facebook, etc) where some huge portion of the database is dumped online -- not through a hack, but through a person willing to do it internally because they can and do not fear or care about the consequences. The Ashley Madison hack was a preview of what's to come.
"In addition, third parties may attempt to fraudulently induce employees or users to disclose information in order to gain access to our data or our users' data."
"Although we have developed systems and processes that are designed to protect our data and user data and to prevent data loss and other security breaches, we cannot assure you that such measures will provide absolute security."
"In addition, some of our developers or other partners, such as those that help us measure the effectiveness of ads, may receive or store information provided by us or by our users through mobile or web applications integrated with Facebook. We provide limited information to such third parties based on the scope of services provided to us. However, if these third parties or developers fail to adopt or adhere to adequate data security practices, or in the event of a breach of their networks, our data or our users' data may be improperly accessed, used, or disclosed."
Source: MD&A, 2015 Facebook annual report
There's like 50 pages of this stuff that covers literally every possible scenario in case of legal liability. It has no meaning whatsoever.
-Erskine Bowles ("President Emeritus of the University of North Carolina" and "White House Chief of Staff from 1996 to 1998");
-Ken Chenault ("Chairman and Chief Executive Officer of American Express Company");
-Susan Desmond-Hellmann ("Chief Executive Officer of The Gates Foundation" and former "Chancellor at University of California, San Francisco (UCSF) from 2009 to 2014");
-Reed Hastings ("Chief Executive Officer and Chairman of the board of directors of Netflix");
-Jan Koum ("co-founder and CEO of WhatsApp"); and
-Peter Thiel.
Might not be a bad idea to pen a letter to their Board with your state attorney general and perhaps a U.S. Senator copied.
The HP leak and spying scandal was so convoluted and left so many loose ends that I question its pedagogical utility.
"On September 5, 2006, Newsweek revealed that Hewlett-Packard's general counsel, at the behest of HP chairwoman Patricia Dunn, had contracted a team of independent security experts to investigate board members and several journalists in order to identify the source of an information leak. In turn, those security experts recruited private investigators who used a spying technique known as pretexting. The pretexting involved investigators impersonating HP board members and nine journalists (including reporters for CNET, the New York Times and the Wall Street Journal) in order to obtain their phone records. The information leaked related to HP's long-term strategy and was published as part of a CNET article.
Board member George Keyworth was ultimately accused of being the source and on September 12, 2006, he resigned, although he continued to deny making unauthorized disclosures of confidential information to journalists and was thanked by Mark Hurd for his board service. It was also announced at that time that Dunn would continue as chairwoman until January 18, 2007, at which point HP CEO Mark Hurd would succeed her. Then, on September 22, 2006 HP announced that Dunn had resigned as chairwoman because of the "distraction her presence on our board" created. On September 28, 2006, Ann Baskins, HP's general counsel, resigned hours before she was to appear as a witness before the House Committee on Energy and Commerce, where she would ultimately invoke the Fifth Amendment to refuse to answer questions."
Zuckerberg has voting control of Facebook, in part due to some financial engineering in 2016. He does not control the Board.
Board members have a fiduciary "duty of care," i.e. "the duty to pay attention and to try to make good decisions". This duty is to the company as a whole, not just its majority vote-holder. (That said, "American courts simply do not hold directors liable for business decisions, made without a conflict of interest, unless those decisions are completely irrational. The doctrine of noninterference is known as the business judgment rule.")
 http://www.oecd.org/daf/ca/corporategovernanceprinciples/187... page 6
"There's a reason for diversity -- it gives you a mix of opinions and ideas," says Grant, partner and co-founder of Grant & Eisenhofer, a Wilmington, Delaware, firm that specializes in securities and corporate-governance cases.
The board's near-uniformity of experience has led to a consensus of opinion that defers to Zuckerberg on all matters, Grant told CNBC. That can stray from what is best for shareholders."
For example, allowing Zuckerberg to reduce his economic interest in Facebook "dramatically" -- by selling tens of millions of shares -- while allowing him to maintain "absolute control" over corporate decision-making was a bad idea that the board should have voted down, Grant argues.
"You never want to divide economic consequences from decision-making," he says.
"Zuckerberg's plan would have created three classes of shares, one with no voting rights, and allowed him to maintain voting control of the company even after selling most of his stake."
(Poll: Do you think it would be interesting to see the video from that deposition?)
"Discovery revealed that Zuckerberg in fact used his relationship with Andreessen to undermine the special committee process. Andreessen leaked Zuckerberg confidential information about the committee members' thoughts and concerns, and coached Zuckerberg through his negotiations with the committee. In one instance, Andreessen and Zuckerberg texted back and forth during a group call with the committee, with Andreessen telling Zuckerberg things like, "This line of argument is not helping. J" and "THIS is the key topic.""
"Trial was set for Tuesday, Sept. 26, 2017, with Zuckerberg slated to testify as the plaintiffs' first witness. On Thursday evening, Sept. 21, however, Zuckerberg asked Facebook's board to withdraw the reclassification, which it did. This withdrawal mooted the plaintiffs' litigation and averted the billions of dollars of harm to Class A stockholders that plaintiffs sought to prevent."
"But Andreessen, a venture capitalist at Andreessen Horowitz and a long-time Facebook board member, is a close Zuckerberg ally. While on the committee, Andreessen slipped Zuckerberg information about their progress and concerns, helping Zuckerberg negotiate against them, according to court documents. The documents include the transcripts of private texts between the two men, revealing the inner workings of the board of directors at a pivotal time for Facebook.
Bowles, former President Bill Clinton's chief of staff and past president of the University of North Carolina system, was especially skeptical of Zuckerberg's proposition, as depicted in the suit. Many of Andreessen's texts focused on persuading him. Among other things, Bowles worried that one of the concessions Zuckerberg wanted -- to allow the billionaire to serve two years in government without losing control of Facebook -- would look particularly irresponsible, according to court filings. Bowles did not respond to requests for comment.
Andreessen sought to persuade Bowles that if Zuckerberg went into politics, the government would likely require him to give up control of Facebook anyway, so the point was moot, according to the documents. A couple weeks later, Andreessen prevailed, and the vote was brought to shareholders. (The stock reclassification is on hold pending the results of the lawsuit, though.)
"The cat's in the bag and the bag's in the river," he messaged Zuckerberg. "Does that mean the cat's dead?" Zuckerberg texted back, not understanding the spy speak.
Andreessen replied: "Mission accomplished :-)""
The deposition must have gone well. Here's what the plaintiffs' lawyer had to say before Zuckerberg withdrew his proposal.
"This case is said to mark just the second time Zuckerberg testifies as a witness. He previously testified earlier this year over a lawsuit against Facebook-owned Oculus -- a case Facebook lost.
Stuart Grant, the attorney representing the shareholders in the dispute, didn't mince words. He suggested Zuckerberg's limited courtroom experience puts him at a disadvantage in this case.
"That gives me an advantage because I've been doing this for 30 plus years," Grant told CNN Tech. "If we were sitting down to do coding together, I'd bet on Mark, but we're not coding."
Somewhere at Facebook there is a team of people who wrote software to scrape, store and analyze the personal call+text data that users didn't explicitly mean to give to Facebook.
The data that Cambridge Analytica obtained (from Facebook's API) doesn't seem surprising at all. Isn't the Cambridge Analytica headline really just, "Group doesn't follow website's terms of service from five years ago"?
I think the headline people are seeing is more like "Group doesn't follow website's terms of service from five years ago, and ends up helping Donald Trump win presidency."
A big part of the reason this has become so big a story is political.
The Obama campaign already acknowledged they did the same thing, but on a bigger scale.
"In 2011, Carol Davidsen, director of data integration and media analytics for Obama for America, built a database of every American voter using the same Facebook developer tool used by Cambridge, known as the social graph API. Any time people used Facebook’s log-in button to sign on to the campaign’s website, the Obama data scientists were able to access their profile as well as their friends’ information. That allowed them to chart the closeness of people’s relationships and make estimates about which people would be most likely to influence other people in their network to vote.
“We ingested the entire U.S. social graph,” Davidsen said in an interview. “We would ask permission to basically scrape your profile, and also scrape your friends, basically anything that was available to scrape. We scraped it all.”"
They asked each user for permission to look at their social graph, in an app designed for this task (Obama election).
In other words, CA demonstrated a tremendous vulnerability in our political dialogue to pure propaganda and dirty tricks that social media, in its algorithmic purity, was supposed to make less likely, not more. Some of the fallout of this affair will be a tremendous distrust that any provider of social media is an honest broker of what others use their services for. If my news feed is tainted by a group like CA manipulating it at the algorithm level, what hope can I have that anything I’m receiving over the Internet isn’t compromised? Maybe the stories are fake; maybe they’re true but the balance of stories is altered; maybe the facts are true-ish but slanted or selective. These are all legitimate concerns in the normal marketplace of ideas, but now we find the marketplace of ideas is deliberately compromised by malicious entities.
Was it CA doing the manipulating, or was it Facebook?
It's Facebook that runs Facebook...CA was merely taking advantage of Facebook to the fullest extent it could.
I'm not saying what CA did was right, but your comment seems to suggest Facebook was somehow helplessly complicit in executing CA's malicious plan.
People are upset that their data was essentially stolen from Facebook (it was collected for use in an academic study, then turned around and sold for profit to CA), used by a company with ethical failures as serious as Cambridge Analytica, and then Facebook buried the story. It was two years before it came to light thanks to Guardian journalist Carole Cadwalladr.
Facebook also worked closely with CA during the Trump campaign, even though they would have known by that time that data obtained under the pretense of an academic study had been sold to CA.
But the flip side to this is why the CA story has blown up: for once, the consequences— "CA got Trump elected!"— are immediate and graspable, in a way that "Facebook is scraping your text info" is not (even if it's probably not true). When the effects are right in your face and not time-delayed, people sit up and pay attention.
The fact that it was used for political ends probably makes a difference as well, both in the amount of coverage it is receiving and in that it makes the use of data into a more concrete issue (it's much easier to understand "this is what the data was used for" than "Facebook has your data and that's bad for hypothetical/abstract reasons").
> Isn't the Cambridge Analytics headline really just, "Group doesn't follow website's terms of service from five years ago"
That's the act, but I'd say the usage/intent behind doing so is part of the story.
I am buying some FB calls in the morning because no one will care about this "movement" in a month.
The lie that facebook (and the like) are sold on is that there are zero possible negative ramifications of giving Facebook that data. Of course that's not true. But something has caught people's attention and they're waking up to it.
Now is the time to tell them all the other reasons to not trust facebook. Loudly scratching your head about why people care about Cambridge Analytica is to miss the opportunity you have.
However, the Analytica stuff is about conning the masses into voting for Trump and Brexit, and that affects me big time and pisses me off somewhat. In fact worse than conning: more like inciting the mob to hatred through lies and bullshit. See for example the Hillary-is-Satan ad paid for by the Russians and targeted with Facebook to the kind of people who vote on the basis of that kind of stuff: https://static01.nyt.com/images/2017/11/02/us/politics/02dc-...
You sell and use people’s data to get money: this is the business plan. Full stop.
Connecting people can definitely be lucrative and useful in other ways, but Facebook's particular implementation is impression-based, not action/outcome-based.
The part I don't get is that it seems everyone who does work there is in shock and awe that this is going on. SHOCKED!
It's comical to the point of parody.
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
This is insane. I get why some people want to live in the Bay Area, but I'll take my 4k square foot home in the great neighborhood with the award-winning schools for 300k.
The FB employees I've met have been fine with explaining away the consequences of their actions with "oh it's just a job", "that's not my team", or "the technology is really interesting".
And as an idealist, I'll invoke Godwin's Law depending on our relationship.
She is either wrong, or lying, or has her own ideas of what "selling your data" means.
Maybe they also sell it (processed into aggregated reports, not raw), I wouldn't be surprised.
Facebook does not sell user data; it sells targeted ad space, which is the exact opposite business model.
Their competitive edge in this business purely relies on being the only entity in the world who can target so well, precisely because they hold onto that data like a treasure.
Interesting how quickly the narrative changes.
You sell and use people’s data to get money: this is the business plan.
Are people working there really so nieve(sic) as to believe that this is surprising?
Advertising (regulated or not) is important in manipulating people into buying more stuff. There's no benefit in having your movies, music listening, games or your reading interrupted by ads. There's no benefit in a pair of breasts distracting you on the road from a billboard. There's no benefit in having to throw away all the crap you get per mail.
If someone needs something, they will look for it and buy it, but this pull model doesn't result in as many sales as the push model, and CxOs have to eat.
They would be a list of superior features of, and only of, the product read in clear, plain terms.
I did a few searches for embedded development boards, and now I got lots of ads for alternative boards. I don't pretend that this information is in my face altruistically or entirely truthfully, but it does give me some useful information, probably paid out of the pockets of some people who take the advertisements at face value.
It's not the most efficient mechanism I could imagine for disseminating truthful information, but if you're not naive, it's a useful channel.
And also the inferior features. And those of its competitors.
It's amazingly deluded to even consider that advertising is about informing consumers. It's clearly about disinforming them - clouding their judgement in order to make them buy something they otherwise wouldn't have.
Do you think the adtech industry is "well-regulated"?
On the other hand, health products / dietary supplements seem woefully under-regulated, especially since the layperson seems to have great difficulty in evaluating health claims. It seems crazy that dietary supplement and drug advertising are treated so very differently.
So, I think in some areas we need better advertising regulation, but not across the board.
More importantly, my point is that I'm not holding my nose up at my former colleagues. Without the second half of my post, the first half could be read as having a very judgemental tone.
This statement is just the pablum that people in the industry repeat to help assuage their guilt at what they are doing. I say this as someone who worked on ads for a company that had more than half of all US internet ad spend at the time.
I’ve seen this us-against-them mentality play out elsewhere in various toxic cult-like organizational cultures. The NSA was a great public example of just how manifestly horrifying things can get with tens to hundreds of decent people willfully participating in corrupt or unethical practices.
The way this all works is terribly fascinating, but the short of it is that you have to become closed off and indoctrinated in order to fit in. Particularly at places like Facebook, Google, and generally anywhere else that provides free on-campus dinners (a good heuristic), employees build their social circles and identities around the company. This, coupled with various other factors, permits an astounding cognitive dissonance amongst a large group of otherwise benign and rational people.
EDIT There’s an interesting additional complication I’ve seen at times: internal spin. The media gets things about companies so wrong so frequently that it’s almost too easy to discount the things with an uncomfortable shred of truth as ‘fake news’.
When I worked at a Fortune 500 company during an economic downturn, we were simultaneously seeing endless austerity measures while being plastered with endless positive spin. The free pens and stuff disappeared. The new job listings shrank overnight from pages and pages to a handful. They cut back on janitorial service. There was a pay freeze. Etc.
But all the press releases and news articles being forwarded to our email was about how we flew up the Fortune 500 ranking (iow we were sucking less than other companies during the recession, even though the company shrank, because it's a relative ranking) and our CEO was named one of the biggest wealth builders in the nation and so on.
I was painfully aware of the disconnect. But I sometimes wondered if other employees really noticed or not. I never asked any of my coworkers. I felt like that would be a good way to end up eventually fired. But I wondered how many drank the kool-aid without noticing that it didn't jibe with the austerity measures we were seeing.
Don't think of the employees as evil, they are probably legitimately not aware of the entirety of what's going on. Like soldiers in a war, they only know how their battles are going, not the war.
Hollywood et al popularize the misconception that evil is fantastic and done with intent. Most of the time evil is banal. The larger problems happen when the unremarkable, small deviances from acceptable behavior becomes normalized.
Wall Street, too, after the crisis. “Of course we bet against our counterparties! They’re counterparties, not clients. If they didn’t read the prospectus they’re morons who deserved to lose their money. We're just the political whipping boy du jour."
Which I totally get. I've been a victim myself. And the truth is it's hard to find successful companies that aren't profiting off of some kind of exploitation (e.g. pollution, natural resources stolen from a developing nation, behavioral manipulation of users, injecting animals with antibiotics, high-fructose corn syrup, lobbying (institutionalized corruption), preying on people's fears (media), child labor (lots of clothing supply chains), predatory lending, extreme leverage ratios in a too-big-to-fail context, etc.).
It’s weird to me that people often don’t understand the root problem here is unrestrained/underregulated capitalism. In any naturally competitive system without adequate rules, the winners will be cheaters/exploiters. In reverse: you often cannot win unless you cheat.
Hence: most people don't have the luxury of working exclusively for socially responsible companies.
Uber comes to mind
Private interests incentivize individuals to undermine our democratic mechanisms. If we let them, we are fools to expect anything else.
The only reason many other companies don’t have to have to explain themselves to governments is they are legally allowed to incentivize the opposite with capital. It’s called lobbying.
Plus they can bury any qualms they have in piles of money and rationalizations.
Wow, some people/families are way too media-sensitive. It's just hypocrisy. Facebook is fundamentally the same company as it was last week, last year and 5 years ago. Everyone knew this, especially Facebook employees.
Facebook today is mostly made up of two kinds of employees; money-hungry sociopaths and hypocrites.
In fact, that statement is true of the government as well. Most people just think it won't really happen, and if it does happen it'll be something fairly trivial like selling me shaving kits because I'm a man, and that my data isn't really all that revealing.
That is vastly different from:
"This specific data you gave facebook went to this specific company, in violation of facebook's own policies.
The breach of ToS wasn't followed up, and we have video of the CEO bragging about fake news, blackmail and honey traps.
This wasn't even a US company influencing the election.
Your data was directly used to campaign for someone you probably deeply oppose.
Not only that, but this specific targeting was probably highly important because we know the result of the election relied upon victories in specific states that are important to the electoral college whilst losing the popular vote.
It also turns out that what had seemed to be deep, real, organic discussion topics turned out to be targeted propaganda, showing a scary ability to control the public discourse.
Oh. And this is all carried out by a company whose CEO openly wants to run for political office and could use this to get himself elected next time."
There's not much they can do other than assume absolute pacifism and absolute neutrality, to a fault. Effectively they must be resolute in their support for the stability of society, and that includes putting down political activists like Antifa and BLM. Facebook must no longer be a platform for activism of any kind, where all content deemed objectionable by anyone is pruned.
Facebook doesn’t sound that jealous to me.
The app API was different until 2013(?). App developers only needed a single user's permission to access all of that user's friends' information. Both Obama and Cambridge Analytica came out of that period. Now, users can only authorize the release of their own information.
There ought to be a #quitfacebook topic to get many employees to quit. But I don’t believe that would get much traction due to the attractiveness of compensation/benefits and probably some challenging work. If someone working at Facebook believes that things will get better, I’d say they’re just deluding themselves. It cannot happen with the current management.
P.S.: Since this post is about Facebook, I’m not going to talk about other companies.
I'm curious what might be the source of this regained "confidence"? The idea that this will all just blow over in a few months?
I am worried that a questionable semi-private German entity can block me (e.g. a 30-day ban) on Facebook at will. I am a US citizen and don't live in Germany. This is outrageous.
I just read about "FTC's Bureau of Consumer Protection Regarding Reported Concerns about Facebook Privacy Practices". Since I am an American citizen and was blocked for 30 days on Facebook by this dubious organisation, I may actually drop the FTC an email and ask for their opinion on this.
 Might be behind paywall:
The answer, for most of us, is an emphatic 'yes'.
But I remember thinking that it was a very funny, cult-member like response. And you can test this too. Ask your friends who work at FB and I bet you will get some pre-programmed response very similar to that.
What makes you assume it's got to be a pre-programmed, cult-member-like response, and why can't you believe that this is the actual work culture?
1. What was Mark Zuckerberg's response when people asked him if Facebook might be overstepping bounds in terms of data collection (shadow profiles)?
2. What did the company employees think of the backlash over their beacon project?
3. When Facebook told the EU that they cannot match FB user profiles and WhatsApp user profiles to create a single profile (remembering that they would be fined), what was the general consensus among employees? Did they know that FB had lied? Were they still OK with that? If they were, was there not a single person expressing dissent?
"I think it depends on what we find. But we're going to be investigating and reviewing tens of thousands of apps from before 2014, and assuming that there's some suspicious activity we're probably going to be doing a number of formal audits, so I think this is going to be pretty expensive. You know, the conversations we have been having internally on this is, "Are there enough people who are trained auditors in the world to do the number of audits that we're going to need quickly?" But I think this is going to cost many millions of dollars and take a number of months and hopefully not longer than that in order to get this fully complete."
Why waste the fucking money. Quit being sentimental. Just trash Facebook and pivot (lol pivot). Be a real motherfucker, and let Facebook burn. Make something cooler than Facebook. Fuck this audit stupidity.
Come on, man.
I even expect to see a few angry posts from people who decided to delete their account now, and when they try to undelete the account after a few weeks they will surprisingly discover that all the old information is missing and Facebook can't recover it, because it is deleted.
fake quote > If they used Windows instead of Linux, they could have sent the account to the Recycle Bin, and recover it now.
The real sizzle comes not from the emotional outrage but the calls for government inquiry and potential regulation, which would do structural damage to all of Silicon Valley. This has been a stunningly bipartisan effort, the left supporting it ostensibly because they like regulating big businesses, and the right supporting it because they (perhaps correctly) see Silicon Valley megacorps as adversaries.
Nope -- there are political and legal proceedings underway, and those things take time. In a year? Maybe.
> Facebook can't recover it because it is deleted
"Deleted." It's easy for those people to fake up a new account, and remember the lessons they learned the last time around.
Ugh, so we're going to have the front page of HN dominated with the exact same "discussions" for another year?
It's not going to happen quickly, but if this awareness gains momentum a much more healthy (federated, preferably open source) social media site could have a higher chance of survival. I think that's worth something.
That was five years ago.