Zuckerberg Takes Steps to Calm Facebook Employees (nytimes.com)
258 points by SREinSF 8 months ago | 262 comments



I feel like someone should also give Zuckerberg the memo that it's only a matter of time before an insider also goes rogue and abuses data access (edit: or otherwise; see below). Facebook fundamentally seems to trust itself way too much, and it worries me that it thinks the only threats are external entities... to me, this is another silently ticking time bomb.

EDIT: And don't forget that going rogue is just one scenario. Another is just a bigger attack surface: the more insiders have broad system access, the more credentials there are that can be phished by/leaked to/stolen by outsiders. Really, it would be completely missing the point of security to have arguments about how exactly insiders' credentials might get compromised.


You'd think so, but most companies have pretty strict internal controls for this sort of thing. Access is also carefully logged, so a leaker is pretty much guaranteed to get caught, at which point they'd immediately lose their job and likely face criminal prosecution.

With so much to lose and so little to gain, internal leaks of this sort are extremely rare.


Who watches the watchers?

#1 - There's always a back door. I did some medical records stuff for a while. I looked myself up, just to confirm for myself how trivial it was to do. Yup, there I was. Which is why I insist that all data at rest be encrypted. (I have yet to win this argument; see the sketch after #3 below.)

#2 - Our "portal" product had access logs for auditing. Plus permissions, consent trees, delegation. The usual features. Alas. We also had a "break the glass" scenario, ostensibly for emergency care, but was more like the happy path. And to my knowledge, during my 6 years, none of our customers ever audited their own logs.

#3 - My SO at the time worked in a hospital and went to another, disconnected hospital for care, because she knew her coworkers routinely and illegally looked up patient records, and she didn't want them spying on her.
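To make the encryption point in #1 concrete: below is a minimal sketch of application-level encryption at rest, using the Python `cryptography` package. It's illustrative only; real deployments keep keys in a KMS/HSM, and the record format here is made up.

    # Minimal sketch of encrypting records at rest (illustrative only).
    # The hard part -- key management -- is elided; in practice the key
    # lives in a KMS/HSM and is never stored next to the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # pretend this came from a KMS
    f = Fernet(key)

    record = b'{"patient": "Jane Doe", "dx": "..."}'   # made-up record
    stored = f.encrypt(record)       # this is what lands on disk

    # An insider reading the raw table sees only ciphertext; a lookup
    # like "find my own record" now has to go through code that holds
    # the key -- a natural choke point for logging and auditing.
    assert f.decrypt(stored) == record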


As an ex-employee, I feel much more confident in Facebook's processes than in those of the company you're describing. Facebook would have no problem terminating people who do what you're describing.


Imagine you are the Egyptian government. You want to squash a social media fueled rebellion, led by some anonymous person. How hard is it to get one of your bright and loyal minds hired by Facebook? How much data could such a person exfiltrate before getting fired?

The 'we will log your access and fire you' line of defense prevents nothing when someone takes the job for the sole purpose of moving data out.


Batch scraping or drip-feeding data via a monitored internal tool? I doubt they’d get much out at all. It’s inherently a low-bandwidth and very obvious channel.

Someone in that position would be much better off building a back door into the system. But by that logic, they could also build a back door into iCloud, or scrape Gmail data from within Google.

I assume that Facebook has mechanisms to check that new hires (especially foreign nationals) are legitimate.


They don’t fire you, they arrest you. And then they find out who you work for.


Doesn't matter. At the first hint of trouble you escape back to your country, protected by the government that sent you there in the first place, and by then the rebels have been murdered thanks to the data you got out.


As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?


Another ex-FB employee here. I can't believe this is even a thing people are wondering about. Of course the average employee can't access user data; it's an immediate firing offense.


> Another ex-FB employee here. I can't believe this is even a thing people are wondering about. Of course the average employee can't access user data; it's an immediate firing offense.

Ironically, you're undermining your own point. The fact that they would be fired afterwards in no way contradicts the notion that they could access such data, and in fact suggests they can (hence the firing policy).


Yet another ex-FB here. When I was there I think it was possible for engineers to access pretty much anything programmatically, although the vast majority never have any reason to go near the systems that would allow them to do so. During onboarding we were basically told “If you look at any data that’s not yours, assume you will be fired”.

Everything is logged, so if it looks like you accessed anything you shouldn’t have, it’s flagged and you’re audited; if you didn’t have permission (from a user and/or manager) and a valid business reason, then (we were told during onboarding) you’re likely to be fired and possibly sued.
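As a purely hypothetical sketch of that model (invented names; not Facebook's actual tooling), the shape is a gate that refuses to return data without a recorded justification, with the log feeding later review:

    # Hypothetical sketch only -- not Facebook's real internal API.
    # Every read goes through a wrapper that records who, what, and why.
    import logging, time

    audit_log = logging.getLogger("data_access_audit")

    def fetch_user_data(requester, user_id, justification, store):
        if not justification:
            raise PermissionError("access requires a business justification")
        # This log line is what asynchronous review/auditing runs against.
        audit_log.info("ts=%s requester=%s user=%s reason=%r",
                       time.time(), requester, user_id, justification)
        return store.get(user_id)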


Thank you for the response. Question: if you (assumed average Facebook engineer for this discussion) observe a bug (normal severity, not something obviously critical and not something conversely trivial) with a particular profile that you cannot otherwise reproduce, and it is determined that addressing it would involve looking at the user's private data, then I assume that would be a valid business reason to do so. Now, is it possible to do this without explicitly (re-)obtaining the user's permission for this incident, or is it assumed the user has already agreed to this somewhere in the ToS or otherwise? And if this is possible, then what stands in the way of someone opportunistically finding bugs that provide convenient covers for looking at user's private data?


FB’s internal security protocols are irrelevant.

The reality is that huge amounts of personal data were harvested by third parties through app permissions - apparently with FB’s knowledge and support.

No one needs back door hacks to get into a vault when the front door is wide open.


Maybe it's irrelevant to you but I'm sure it's mighty relevant to some other users whether they are notified before employees dig into their private data to fix random bugs.


I’m afraid I don’t know the answer. I’m confident that such a thing would be quickly recognised as suspicious, so that sounds pretty far-fetched. Most of the time, it’s someone with moderation powers interacting with anything potentially sensitive; a regular engineer is going to be using test accounts, their own account, or asking someone else to look at the issue for them.


Are you genuinely asking a question you would like to know the truthful answer to, or are you just interested in confirming the strong preexisting bias on display in each of your comments on this story?

You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.

There are only a few roles (moderation) that can access the relevant tools, and while engineers may technically have programmatic access (how would you expect things to work if nobody did?), this is thoroughly logged and you'd better have an ironclad justification not to get fired on the spot.


No, I'm interested in knowing the truthful answer. It's just that I've received plenty of seemingly truthful responses (both here and elsewhere, e.g. [1]) that seem quite consistent with the notion that an average-employee(-turned-malicious) would be capable of accessing user data, punishments and all notwithstanding.

> You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.

(a) How do you know, and (b) so what is your explanation of stories like [1]? They're just hoaxes?

> and while engineers may technically have programmatic access (how would you expect things to work if nobody did?)

Again you are wording this in quite a vague, lawyer-y manner, which again raises my eyebrows. "May" as in "might", or as in "do"? And "engineers" as in what fraction of them? There is a lot of wiggle room between "nobody" and "all engineers". It's quite strange that I can't get a straightforward, crystal-clear denial to a non-weasel-worded claim from you who seem to be confidently contesting what I'm saying. Please don't keep muddying the waters.

[1] https://news.ycombinator.com/item?id=16675664


Regarding your question about a dev setting up a test server and accessing live data, that hole has been closed for years. There is some data that an average employee just cannot get to. Other data a dev can access, but the pattern and volume of access will be audited and anomalies will raise an alarm.
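As a purely illustrative sketch of the simplest form such an alarm could take (the baseline model and numbers are invented; real systems are far more sophisticated), assume lookups are counted per employee per day:

    # Illustrative-only anomaly check: flag an employee whose daily
    # lookup volume is far above their own historical baseline.
    from statistics import mean, stdev

    def is_anomalous(history, today_count, z_threshold=3.0):
        """history: past daily lookup counts for one employee."""
        if len(history) < 2:
            return False
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today_count > mu
        return (today_count - mu) / sigma > z_threshold

    # Someone who normally reads ~20 records/day suddenly reads 5,000:
    print(is_anomalous([18, 22, 19, 25, 21], 5000))   # True -> raise alarm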

As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure. The people building the internal controls and defenses are smarter than you, they know what needs to be protected and are rather devious about thinking up attack scenarios and possible paths of compromise, and eventually get tired of repeating the same answers. Want to know more? Too bad.


> As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure.

Where did I ask for "deep details about security policy and procedure"?

> Want to know more? Too bad.

No, but thanks.

> There is some data that an average employee just cannot get to.

"Some data" means nothing. I'm sure this is true in many, many companies, ranging from the most competent to the most incompetent.

> Other data a dev can access, but the pattern and volume of access will be audited and anomalies will raise an alarm.

This is yet again consistent with what I've said.


I think what you're asking for here you're never going to get. Nobody who works there currently will tell you because they'd get fired (and everyone has bills to pay). People who worked there in the past aren't going to tell you because #1) it's bad practice/bad op-sec/it's uncouth/whatever, #2) if they did it would negatively impact their future prospects and reputation. Nobody has any incentive to hand out definitive numbers or break it down into "X-dev-team #1 has access to X, Y, and Z".

At the end of the day, the data is there - they have it. Possession is arguably MORE than 9/10 of the law in this situation. They can access it whenever they want -- trivially if they are rogue or have no concern for keeping their job. This is true of just about any huge company that employs a lot of people -- but they're not going to say they can. Why would they?


> Nobody has any incentive to hand out definitive numbers or break it down into "X-dev-team #1 has access to X, Y, and Z"

For goodness' sake, please stop these straw-man arguments. I said this above once, but it seems I have to say it again: nobody ever asked for that level of detail. People have been struggling with far more basic issues. No current or ex-employee or intern has even come along to try to say something simple like "as far as I know, the average Facebook intern simply cannot access private user data regardless of any business reasons"; indeed, we've gotten anecdotes that the opposite has actually happened. How you suddenly deduce that I'm looking for specific descriptions of what teams can access what data is just beyond me.


I suddenly deduced you were looking for specific descriptions a little ways up this comment tree where you asked the question: "As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?"


> I suddenly deduced you were looking for specific descriptions a little ways up this comment tree where you asked the question: "As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?"

That could be answered with something vague like "yes, this requires permissions from a small team of trusted individuals, which are granted only if the issue is severe/cannot otherwise get immediate attention/cannot be addressed by that team/etc., and it's never granted to most interns". No need for jumping to "X-dev-team #1 has access to X, Y, and Z".


Really? That was a pretty specific question, and you were looking for (and would accept) a vague answer? It doesn't matter anyway; again, they have no incentive to tell you that, vague or not vague. Nobody that knows the answer to that question is dumb enough to answer it (I would hope).


Yes, really. And I don't see why it would be dumb to answer that question, but no need to go on that tangent. If people can't respond then they can live with that being interpreted however it is.


I've read that, for a time, "view anyone's profile" was an advertised perk of being a Facebook employee (maybe just a wink-wink, nudge-nudge thing in an interview, I have no firsthand experience). I'm sure they don't do that anymore, but how much have they really tightened up the ship after having a culture like that?


If data at rest is unencrypted, I don't believe you. Sorry. Someone, somewhere is peeking at the naughty bits.

This is the best resource I've found for protecting such things:

Translucent Databases: Confusion, Misdirection, Randomness, Sharing, Authentication And Steganography To Defend Privacy http://a.co/eLgQACC

Maybe differential privacy stuff will supersede or complement these techniques. I'm keeping an open mind.
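The core trick in that book is storing one-way transforms of sensitive identifiers, so the system can still match records without ever holding plaintext. A minimal sketch of the idea (parameters and identifier format are illustrative):

    # Minimal sketch of the "translucent" idea: store only a salted,
    # stretched hash of an identifier, so the DB can answer "does this
    # SSN have a record?" while the raw table stays unreadable.
    import hashlib, os

    def blind(identifier: str, salt: bytes) -> bytes:
        # scrypt makes brute-forcing low-entropy identifiers expensive
        return hashlib.scrypt(identifier.encode(), salt=salt,
                              n=2**14, r=8, p=1)

    salt = os.urandom(16)       # deployment secret, kept out of the DB
    stored_key = blind("123-45-6789", salt)

    # Lookup: re-blind the presented identifier and compare.
    assert blind("123-45-6789", salt) == stored_key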


Facebook has a long history of employees doing sketchy shit like peeking at the profile/timeline of their former SO's new SO. This has been one of the top threat scenarios internally for more than a decade, and they have built significant security infrastructure to protect against this sort of problem. Yes, there is 'always a back door', but that back door has gotten smaller and much harder to find over the years. It is always a possibility, and while the system will prevent attempts to exfiltrate large chunks of data, for smaller breaches like this the audits/alarms will probably take a day or so before you are sitting in someone's office with HR present to have a discussion regarding your user data access patterns. So compromise the security infra, you say? Yeah, there are other systems watching for that too...

-Former custodiet of the custodes


HITRUST CSF is a framework for auditably proving HIPAA compliance. It prescribes controls such as encrypting data at rest. If you have a business relationship with a company that provides you PHI without explicit user consent, you must have an agreement (a BAA) with the third party which puts them under the same requirements (backed up with third-party audits).

Everything you’re describing sounds like it’s either incredibly fly by night, not in the US, or substantially out of date. If the last two aren’t true, you have a situation that is literally illegal.


I've worked in health care a couple of times now. And while the companies I've worked for have gone well beyond the minimum required for legal compliance, the scary bit really is the sorts of things you could do, if you were lazy enough, and still be legally compliant.


Yeah, HIPAA has some holes you could drive a truck through. I also hate OAuth (so much focus on access, so little focus on what gets done with that access).


Uh huh. We were the first to market with portable electronic medical records. "Fly by night." Sounds about right.

In the USA, there is no way to encrypt medical records at rest and permit data interchange. Because in the USA we do not have universal MRNs (PIDs, GUIDs, whatever). Meaning that if demographic data is encrypted, the system cannot match records across org boundaries, meaning care providers aren't 100% sure they have the correct medical history for the patient, meaning prescription errors, cutting off the wrong arm, misdiagnosis, etc.

Some enclaves like Medicare and VA can encrypt their own data for their own usage, but that protection is moot the moment data is shared with other orgs. It's been a while since I've checked, but I doubt they do encrypt, because that's a bottom up design decision.


Surprise: regulating and legislating doesn’t actually make bad behaviour go away. I too have had the experience of interning at a medical software company where security and patient privacy were a joke.


That sucks, but you might consider next time talking to people and seeing if they are working to improve things and, if not, being a whistleblower.


You might as well connect that whistle to an air compressor, if my experience is anything to go by. Very few companies have their house in order, and healthcare is definitely not an exception to this.


Recent news ([1]): Facebook security boss says its corporate network is run "like a college campus"

1. http://www.zdnet.com/article/leaked-audio-facebook-security-...


most companies have pretty strict internal controls for this sort of thing

This does not ring true to me at all.


It was true at Google. It's certainly true at financial institutions. I dunno about Amazon. I'm not sure what other comparisons would be relevant here.

EDIT/NOTE: https://news.ycombinator.com/item?id=16675493


> It's certainly true at financial institutions

It's certainly not true at financial institutions. By financial institutions I mean Fortune 100 financial institutions, as well as smaller financial institutions.

If by "pretty strict internal controls" you mean they can, like Prince Potemkin, point to such things existing in some chimeric form, then yes, I suppose you are right. But in any real sense, no, there are no effective controls in the real world.

About 25 years ago I assumed it was early days for a lot of these things and they would sooner or later be closed up, but they haven't been. Things are wide open - as the recent Facebook/Cambridge Analytica revelations have shown, and those exposed things in only a very small and indirect way.

The first major book on this broad subject was Donn Parker's "Crime by Computer", published in 1976. The book opens by saying that a company's biggest enemy in terms of computer crime is its own employees. This is still true 40+ years later - the biggest enemy of the people who own companies is the people who do the work at them.


> It was true at Google.

Yes, because Google is not your average company. It takes security extremely seriously... in fact it's about as awful of an example as you can give for a blanket statement you made about "most companies".


OK you're right that I overstated when I said "most companies." What I meant to say was most companies of the size and sophistication of Facebook that have a significant amount of private user information. Sorry for not being clear.


> What I meant to say was most companies of the size and sophistication of Facebook that have a significant amount of private user information.

Which is to say... Google and Amazon?


If you add "and notoriety" to that list of qualifiers, I agree with you!


I'm comfortable with such a qualification. Glad we can violently agree. ;-)


That's a really short list of companies, though! There are a lot of companies that each independently hold a ghastly amount of information about random people that have virtually no meaningful controls over this stuff. "No meaningful controls" is the norm.

I'd also say it's the norm among most Fortune 500 non-tech companies.


Yep. Equifax, Experian and TransUnion come to mind.


Consider also every large adtech firm.


Even the very largest adtech firms don’t have messenger apps used by millions of people, social graphs of the population or control of large swaths of the internet infrastructure.

That’s not to say I disagree with you, but the data collected is (to me) orders of magnitude less sensitive.

*disclosure: I toil in the adtech mines.


These are great counterpoints to the view I expressed and do make me reconsider my assumptions somewhat. Thanks!


Internal abuse is a big area of effort for Facebook and Google, but things still go wrong. Here was Google's moment for that back in 2010:

https://www.wired.com/2010/09/google-spy/


> Google is not your average company. It takes security extremely seriously

While this is certainly true, you've admitted elsewhere not knowing anything specifically about either Google or Facebook's security process, so how can you compare them? You seem to just "know" Facebook doesn't take security seriously (which is of course a ludicrous thing to say).


> While this is certainly true, you've admitted elsewhere not knowing anything specifically about either Google or Facebook's security process

You already misquoted me once and I already replied to you. Why do you ignore it and do it again? Like I said: no, I never "admitted elsewhere not knowing anything specifically about either Google or Facebook's security process". You are misquoting me again just like you already did in [1], and it's quite improper that you choose to do this when I have already responded to you and called out your misrepresentation there. If you are looking for a response, see that post. If you are not, then please stop.

[1] https://news.ycombinator.com/item?id=16676704


I am most definitely not misrepresenting you.

People like me or [1] have called you out because you keep contrasting Google and Facebook's internal security processes for no good reason, making definitive assertions like "[Google] takes security very seriously" [2], suggesting that Facebook doesn't and should do "Whatever Google does" [3]. And you're doing this not based on any specific knowledge of what the internal security process looks like at either company, but on your (flawed) perception of what engineering interns might or might not be able to do.

When people like esman1 who actually have that knowledge and context, volunteer to explain to you [4] some of the safeguards in place (and he told you the truth), instead of taking the point, you won't have any of what he says and keep going at it stubbornly.

I think this is the point where reasonable people stop arguing, and anyone else who cares can check your comments in this thread and make their own opinion.

[1] https://news.ycombinator.com/item?id=16675843 [2] https://news.ycombinator.com/item?id=16675508 [3] https://news.ycombinator.com/item?id=16675707 [4] https://news.ycombinator.com/item?id=16675670


I'm not sure if Google even has an internal red team that performs breaches; the last time I talked with someone there at a conference, they didn't (that was 2016). So I am not sure Google has metrics on how easy it is for an adversary to gain access.


> I'm not sure if Google even has an internal red team, last time I talked with someone there they didn't (was 18 months ago though).

2012: Google staffs up ‘Red Team’

And this was literally just a Google away: https://nakedsecurity.sophos.com/2012/08/24/google-red-team-...


Red team is an overloaded term: "Analyze software and services from a privacy perspective, ensuring they are in line with Google's stated privacy policies, practices, and the expectations of our users." Doesn't sound like adversary simulation to me.


https://careers.google.com/jobs#!t=jo&jid=/google/security-e...

The job even lists insider threat as part of their responsibility.


Yeah, still not the same as actually performing breaches themselves to see how long it takes to compromise, and if they get detected and how long it takes to remediate and evict the adversary. I should have been a bit clearer with what I meant initially.


How do you know there isn’t a team at Google doing this? It’s standard practice at companies of even middling size and Google is so large your friend might just be unaware of it.


A Google security manager told me at a conference when chatting about this in 2016. They were thinking of staffing a breach team, but did not have one then.


I thought Project Zero tries to find vulnerabilities in Google stuff too?


Project Zero is different compared to performing end-to-end breaches. A breach team might use 0-days from Project Zero to actually compromise Google's internal assets to see if their defenders can detect an adversary. FB has such a team, and they have given public presentations (one was at RuxCon 2016) on how they compromise, for instance, their own domain controllers.


Google has a gaggle of security teams, almost all of which occasionally red team and some of which exclusively do. I'm not sure who told you otherwise but they were certainly mistaken. Source: I TL'd a security team there several years ago.


Thanks for pointing this out. I heard it from a security manager at Google at a conference in 2016. Good to hear that they do breach simulations now, besides regular pen testing and stuff.


I'm not sure if there was miscommunication or what, but Google has had teams that do this for a while now. I typically hear them referred to as orange teams.


I worked in 2014-2015 at Google on one of the (many) teams that did exactly that.


I believe it's true of Google! I do not believe it's true in general.


shrug

Without evidence we're both just guessing. Perhaps someone else will chime in with direct knowledge of how FB works.


Evidence suggests it wasn't true at the NSA five or so years back...

It's _probably_ true that things in general have gotten better since then, and it's probably true that they're better at _some_ companies like Google, Facebook, and Amazon - but I'd tend to agree that it's very unlikely to be true for "most companies".


The Snowden case is an interesting example. He went out of his way to get access to information, going so far as to transfer into a role that had more access (I don’t recall all the details but I remember that much). Every company has some category of employee whose job it is to ensure enforcement of policies, for instance, and if these people set out to subvert the system you should expect them to be able to do so. The watching watchers onion does eventually run out of skin (and it’s not even that deep most places).


The right person with the right access can do a tremendous amount of damage. 14 years ago some servers I was hired to maintain (marketing sites for a gambling site out of Costa Rica) were wiped out as part of an inside job: http://boston.conman.org/2004/09/19.1

Who watches the watcher indeed.


I believe it's true of Facebook as well.

Source: I interviewed with their security team once and got a fair idea of how their various security teams are organized.


> shrug. Without evidence we're both just guessing.

Do I understand correctly that you just admitted that your (extremely confident!) factual statement here:

> most companies have pretty strict internal controls for this sort of thing

was actually "just guessing"?


I'm guessing based on:

1) my direct knowledge of similar companies

2) the fact that no large-scale leak from internal sources has happened at FB, which is evidence that they have at least some internal controls or procedures to prevent one


Unfortunately, the internal tech infrastructure of many (not all) financial institutions is a mess of many decades of mergers and acquisitions, resulting in a Rube Goldberg-like backend of seemingly endless unnecessary complexity and dysfunction, with, in many cases, superficial controls around who gets access to what.


I worked at a trio of adtechs. One of which anyone in the industry would absolutely recognize.

There were few effective internal controls. The obstacles to lookups were:

1 - all info keyed by cookie, which users can clear, and which is very difficult to tie to an identity. That is, to look you up, I need the cookie from your machine.

1a - most devs are not allowed to run the cluster jobs to look up data. Only on the appropriate teams.

2 - but what about stapling? We required partners to pass us blind uids (rough sketch below). Certainly nothing like emails.

3 - no data export. The business is to run ads on the customer's behalf, so there's no way built to export data except targeting lists to the exchanges.
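Regarding the blind uids in item 2: in outline, that usually means a keyed one-way mapping of the partner's real user ID, so repeat events still match without a real identifier crossing the wire. A rough sketch of the idea (not any specific firm's scheme; the key is invented):

    # Rough sketch of "blinding" a partner's user ID (illustrative only).
    # An HMAC under a key the partner never shares: stable enough to
    # match the same user across events, useless for recovering the ID.
    import hashlib, hmac

    PARTNER_SECRET = b"held only by the partner"   # invented for the example

    def blind_uid(real_uid: str) -> str:
        return hmac.new(PARTNER_SECRET, real_uid.encode(),
                        hashlib.sha256).hexdigest()

    print(blind_uid("user-8675309"))   # same token every time; no email or raw uid leaks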


> With so much to lose and so little to gain, internal leaks of this sort are extremely rare

I recently downloaded my Facebook archive [1]. If it were legal, I would certainly pay thousands if not tens of thousands of dollars for certain people's archives. I can think of several practical contexts in which an unethical actor would find it profitable to pay a Facebook employee a million dollars for someone's Facebook archives.

[1] https://www.facebook.com/help/131112897028467/


I would certainly pay thousands if not tens of thousands of dollars for certain people's archives

Really? For what purpose?


> For what purpose?

On the upside, any case where one is engaging in high-value transactions (broadly speaking). Knowing a negotiating counterpart's likes, dislikes, communication style, et cetera can help one avoid mistakes, build a personal connection and draft (and frame) terms correctly on the first try.

More seedily, such information about a political opponent (whether a politician, rival on a commercial or non-profit board, or commercial competitor) is useful.

As a risk mitigation tool, such data would find a natural home in a due diligence file. Prospective executives, board members, business partners, political donation recipients, et cetera expose one to reputational risks. Catching those in advance is already worth tens of thousands of dollars of legal time.

I would hate to live in a country where the above is legal. We should recognize the value of the information every single Facebook employee has routine access to.


Re-sell to news media for hundreds of thousands of dollars?


I could find several unethical contexts where the same actor is paid a million dollars to kill the same person and the legal framework we live in does nothing to stop this.

Well, apart from post-factum incarceration.


> so little to gain

wait, what?

There's quite a lot to be gained. Enough to incentivize a very powerful attacker, possibly even a nation-state level actor who can extract the mole and protect / reward them.

The stakes are not low here; I can't imagine why you've said that.


In any organization there is some number of people who, if sufficiently motivated, could work together to pull off a 'data heist' and not be immediately uncovered. At a good company that number is high; at a bad company that number is 1.

What do you think the number is at facebook? At google? At your bank? At your healthcare provider?


> You'd think so, but most companies have pretty strict internal controls for this sort of thing. Access is also carefully logged, so a leaker is pretty much guaranteed to get caught, at which point they'd immediately lose their job and likely face criminal prosecution.

That's not enough by any means (edit: and as [1] pointed out, I don't even think it's true). There needs to be more to security than mere deterrence. I'm pretty sure at Google, etc. it's simply impossible for a single rogue employee to mess with customer data (except for a few in very privileged positions), and my impression has been that Facebook is not like this at all (unless it has changed recently).

[1] https://news.ycombinator.com/item?id=16675494


That sort of access limitation is what I meant by "pretty strict internal controls."

Having never worked there, I can't speak to how it works at FB but I would imagine that there are a lot of limitations on what rank and file employees can do. I guess I could be wrong. Perhaps someone with direct knowledge will chime in.


> I can't speak to how it works at FB but I would imagine that there are a lot of limitations on what rank and file employees can do. I guess I could be wrong.

Cool, now read this: https://news.ycombinator.com/item?id=16675503

Any changes to your thoughts?


I hope so, but keep in mind Facebook has a magical password that worked for every account for almost a decade.


Source?



I think the previous commenter must have meant "had", as the article says the master password allegedly no longer worked as of sometime before its 2012 date.

Still egregious if that sort of early stage stuff hung around that long, but not the same as it being there today.


That password only worked from Facebook's corporate IP addresses.


Unfortunately the criminal charge would be theft of Facebook's proprietary information.


Facebook is not most companies, though.


proper companies that have been around for a while and expect to be around for a while do this.

companies that move fast and break things don't give a shit.

I want more companies of the first type and fewer of the second.


FWIW, as a Facebook engineer you have a ton of trainings on how to handle data privacy. And not only is every place where you can touch data actively logged/audited/monitored (this includes DB reads from code, admin tools, etc.), but to access any data you have to explicitly request permission for that specific data.


> FWIW, as a Facebook engineer you have a ton of trainings on how to handle data privacy. And not only is every place where you can touch data actively logged/audited/monitored (this includes DB reads from code, admin tools, etc.), but to access any data you have to explicitly request permission for that specific data.

Really? So are stories like [1] complete lies? Or does someone inside just blindly grant these "explicitly requested permissions"?

[1] https://news.ycombinator.com/item?id=16675503


You request access, and justify it with something like "I need it to debug issue #123". Someone manually oks/disallows it, and there's asynchronous reviews of these requests to double check. My guess is the intern lied about what they're using it for.
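In outline, that's a request/approve gate plus after-the-fact review. A hypothetical sketch of the shape (illustrative only; none of this is Facebook's actual tooling):

    # Hypothetical request/approve/review flow -- illustrative only.
    # Requires Python 3.10+ for the `str | None` annotation.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        requester: str
        target_user: str
        justification: str             # e.g. "need it to debug issue #123"
        approved_by: str | None = None

    GRANTED: list[AccessRequest] = []  # fed to asynchronous audit review

    def approve(req: AccessRequest, approver: str) -> None:
        if approver == req.requester:
            raise PermissionError("no self-approval")
        req.approved_by = approver
        GRANTED.append(req)            # auditors double-check these later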

How else would you suggest to do privacy checks like these?


> You request access, and justify it with something like "I need it to debug issue #123". Someone manually oks/disallows it, and there's asynchronous reviews of these requests to double check. My guess is the intern lied about what they're using it for.

OK so an insider can just lie and access whatever they want. Heck, they can even tell the truth! Just find a bug that's exhibited in a particular profile and use that as an excuse to look at the profile.

> How else would you suggest to do privacy checks like these?

Whatever Google does. I don't know the details. But, for starters, my understanding is that their interns generally can't do what you just described, so fixing that would be one obvious step forward.


Facebook's data is very different from Google's. At Facebook you might have a bug that's related to how many thousands of different objects (and their specific properties) interrelate. How could you safely mock that out?


> Whatever Google does. I don't know the details.

Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so, until an FB insider replied and brought down your narrative.

Is it that hard to say "ok well, I stand corrected then" instead?


> Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so

No, you are seemingly deliberately misquoting me. I said I "don't know the details", not "I have no idea". I know enough to feel fairly confident in what I've said. But if you don't believe me you're more than welcome to believe otherwise.

> until an FB insider replied and brought down your narrative. Is it that hard to say "ok well, I stand corrected then" instead?

Stand corrected about what narrative? Everything I am (and hopefully also you are) reading right here [1] [2] [3] [4] [5] quite clearly says malicious employees can access user data, but will be fired if this is discovered, which is consistent with what I've said. (But don't actually bother replying if you want a response—I have no interest in responding after your comment.)

[1] https://news.ycombinator.com/item?id=16675664

[2] https://news.ycombinator.com/item?id=16675503

[3] https://news.ycombinator.com/item?id=16675649

[4] https://news.ycombinator.com/item?id=16675739

[5] https://news.ycombinator.com/item?id=16675968


So, you don't know how google handles this, but you are suggesting everybody should do what google does. Are you trolling?


He is not trolling. His core point is that no amount of training, or expertise, or monitoring, or punishment, or trying harder the 17th time someone has been caught, is sufficient. If you are leaving the decision up to enough/too many humans, then you are by definition providing inferior security.

The real education from this story is far deeper than just Facebook. It is that Facebook employees, and Google employees, and all humans in general are susceptible to this very same "kompromat" concept, and are all susceptible to various forms of influence to greater degrees than our arrogance allows us to admit.

Human beings are attack vectors. Human beings are too self-centered to do much about this in any meaningful sense. They can laugh the very idea away too easily.


Reminds me of an apocryphal story (can't find a reference, but it appears to be reasonable): the FCC was investigating the sale of illegal TV satellite descramblers when they confiscated a unit. Upon investigation, it was found to have been manufactured by IBM! Further investigation revealed it was manufactured at a secure IBM facility used for top-secret ("need-to-know", etc.) type projects. The manager responsible had split the work up such that no single employee there knew what they were building (because they didn't need to know---they just knew enough to do their bit).

I know it's not the same, but this reminds me of that story.


[Edit: sorry, never mind. Thanks for the story!]


Unfortunately no. I spent several minutes trying to find a link, but could not find one. That's why I labeled it apocryphal.


On an only slightly different topic: about 15 years ago, there was a pretty healthy community of people distributing the circuit boards and accompanying software to program DirecTV smart cards. These would unlock all of the channels that "Dave was already beaming at everyone's house anyway", according to the in-group parlance used to absolve oneself of such things.

A decent part of that conversation seemed to center around how it seemed highly unlikely that the whole hack was even possible without insider information leading to the development of the tool in the first place.

Fifteen years later, knowing what hacks have at least been claimed to have been pulled off through social engineering, I think the more important takeaway is that we need to stop portraying the worst case of hacking as a masked man executing some Bond-villain-style hack, because that is fundamentally a terrible heuristic. It by definition casts aside all of the incompetence that is equally likely to cause harm and, by sheer volume, is the far more likely scenario to occur.


To be fair, at least at FB (I can't speak to Google or Apple or Amazon):

1. Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.

2. Privacy-related issues are escalated to the highest severity immediately (on par with data centers being down, etc.). I think the question in this whole debate is where you draw the line for this kind of issue, and what's an issue and what's a feature.


> Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.

This means they are capable of doing it and are merely punished afterwards, right? Not to mention that I would imagine getting fired in exchange for viewing private data could be quite a worthwhile 'transaction' for some people in some cases.


Wouldn't it be better to have a tool that automatically creates one or more profiles similar to those the dev needs for debugging purposes, but filled with fake data? That way the bug is reproducible, but the users' data is not accessible to the dev.


There's a tool for that, and it's certainly the preferred way to debug. Along with all the telemetry you get, for the vast majority of cases you don't need to touch anyone's data.
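A sketch of how such a tool might seed a test account with realistic-but-fake data, here using the third-party `faker` package; the profile shape is invented for illustration and says nothing about FB's actual tool:

    # Sketch of generating a realistic-but-fake profile for debugging
    # (pip install Faker). The profile shape here is invented.
    from faker import Faker

    fake = Faker()
    Faker.seed(1234)   # deterministic, so the same profile reproduces the bug

    test_profile = {
        "name": fake.name(),
        "email": fake.email(),
        "bio": fake.text(max_nb_chars=120),
        "friends": [fake.name() for _ in range(50)],
    }
    # Debug against test_profile on a dev instance; no real user is touched.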


Correct. There are many stats relevant to the national discussion that a patriotic Facebook employee might leak. One is the effective CPM (eCPM) rate between the Trump and Clinton campaigns. My hunch is there was a massive disparity there, in favor of Trump. Facebook has only released the "paid CPM" rates, which is suspicious. Most Facebook advertisers look at eCPM, which combines paid + "organic" reach, in other words: the net reach per dollar spent.
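For readers unfamiliar with the metric, here is a toy calculation under the parent's definition (all numbers invented), showing how organic amplification lowers the effective cost per thousand impressions:

    # Toy eCPM calculation -- all numbers invented.
    spend = 10_000.00                  # dollars
    paid_impressions = 2_000_000
    organic_impressions = 6_000_000    # "free" reach triggered by the campaign

    paid_cpm = spend / paid_impressions * 1000                      # $5.00
    ecpm = spend / (paid_impressions + organic_impressions) * 1000  # $1.25

    # Two campaigns with identical paid CPMs can have wildly different
    # eCPMs if one earns far more organic amplification per dollar.
    print(paid_cpm, ecpm)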


Wait till there is a REAL data leak. Your Facebook profile data is just 1% of the data they have about you. Using the cookies they have all over the internet, as well as partnerships with offline POS transaction systems, they know almost everything you do online and offline: all the websites you have ever visited, the things you buy online, the sandwich you buy with your credit card in a local store, etc. Imagine all that being leaked.


You're assuming it hasn't happened already - all we know for sure is we haven't (yet) had one with Snowden's type of motivation.

(Or perhaps we have, and whichever trusted journalists they've chosen to share with are frantically poring over the exfiltrated data, working out how best to angle the story without throwing the whistleblower and/or innocent FB users under the bus...)


Who says it hasn't happened already? How would we ever know? For that matter, how would Zuck?


Anecdotally I've heard of interns getting fired for just looking at profiles (that they aren't actual friends with) even around 5 years ago. So at least they take it somewhat seriously.


> Anecdotally I've heard of interns getting fired for just looking at profiles (that they aren't actual friends with) even around 5 years ago. So at least they take it somewhat seriously.

So the customer's privacy got violated, because interns had blanket access to private customer data. To me that's very much not taking security seriously.


I interned at FB a few years ago. Any engineer, intern or not, can access production data. Day one, you set up an instance of FB on your dev server that you can mess around with, and it's connected straight to the prod DB. You're able to view anything you want, but they're very adamant that they monitor what you look at.


"its connected straight to the prod db"

I can't believe what I am reading. Why is that? Why use customer data for dev purposes? Why not work on some mock data?


I ask out of curiosity. If you have a P1 escalation due to an issue that is reproducible only in the production environment but not in your test environment with mock data, how do you plan to troubleshoot it?


Yes, but you have to explicitly request data every time you access anything. IDK what it was like when you interned, but that's what it's like today.


That was not the case when I was there, and it wasn't all that long ago.


You and esman1 both could be right. I work at a company of similar size and sophistication as Facebook. Sometimes whether or not you have access to production data by default depends on which team you work for.


> to me, this is another silently ticking time bomb.

I agree. It'll eventually happen to some social app or email provider (think Slack, Gmail, Facebook, etc.) where some huge portion of the database is dumped online -- not through a hack, but through a person willing to do it internally because they can and do not fear or care about the consequences. The Ashley Madison hack was a preview of what's to come.


I would imagine we would also hear a lot less about an internal issue involving a Facebook employee than about an external one.


"Our efforts to protect our company data or the information we receive may also be unsuccessful due to software bugs or other technical malfunctions, employee error or malfeasance, government surveillance, or other factors.

"In addition, third parties may attempt to fraudulently induce employees or users to disclose information in order to gain access to our data or our users' data."

"Although we have developed systems and processes that are designed to protect our data and user data and to prevent data loss and other security breaches, we cannot assure you that such measures will provide absolute security."

"In addition, some of our developers or other partners, such as those that help us measure the effectiveness of ads, may receive or store information provided by us or by our users through mobile or web applications integrated with Facebook. We provide limited information to such third parties based on the scope of services provided to us. However, if these third parties or developers fail to adopt or adhere to adequate data security practices, or in the event of a breach of their networks, our data or our users' data may be improperly accessed, used, or disclosed."

Source: MD&A, 2015 Facebook annual report


To anyone who actually reads annual reports on a regular basis, this is a copy/paste for basically every single tech company, on every year's annual report.

There are like 50 pages of this stuff that cover literally every possible scenario in case of legal liabilities. It has no meaning whatsoever.


Honestly that is all just legal boilerplate that could be found in the annual report of any public internet business.


Facebook's Board of Directors is a remarkable collection of silent-yet-complicit heavyweights:

-Marc Andreessen;

-Erskine Bowles ("President Emeritus of the University of North Carolina" and "White House Chief of Staff from 1996 to 1998");

-Ken Chenault ("Chairman and Chief Executive Officer of American Express Company");

-Susan Desmond-Hellmann ("Chief Executive Officer of The Gates Foundation" and former "Chancellor at University of California, San Francisco (UCSF) from 2009 to 2014");

-Reed Hastings ("Chief Executive Officer and Chairman of the board of directors of Netflix");

-Jan Koum ("co-founder and CEO of WhatsApp"); and

-Peter Thiel [1].

Might not be a bad idea to pen a letter to their Board [2] with your state attorney general [3] and perhaps a U.S. Senator [4] copied.

[1] https://investor.fb.com/corporate-governance/default.aspx

[2] https://investor.fb.com/corporate-governance/?section=contac...

[3] http://naag.org/naag/attorneys-general/whos-my-ag.php

[4] https://www.senate.gov/general/contact_information/senators_...


Probably because speaking out would cause more trouble than it's worth. I recall an Uber director who decided to open his mouth during the incidents of last year...


Or recall the HP leaks.


> the HP leaks

The HP leak and spying scandal was so convoluted and left so many loose ends that I question its pedagogical utility.

"On September 5, 2006, Newsweek revealed that Hewlett-Packard's general counsel, at the behest of HP chairwoman Patricia Dunn, had contracted a team of independent security experts to investigate board members and several journalists in order to identify the source of an information leak. In turn, those security experts recruited private investigators who used a spying technique known as pretexting. The pretexting involved investigators impersonating HP board members and nine journalists (including reporters for CNET, the New York Times and the Wall Street Journal) in order to obtain their phone records. The information leaked related to HP's long-term strategy and was published as part of a CNET article.

Board member George Keyworth was ultimately accused of being the source and on September 12, 2006, he resigned, although he continued to deny making unauthorized disclosures of confidential information to journalists and was thanked by Mark Hurd for his board service. It was also announced at that time that Dunn would continue as chairwoman until January 18, 2007, at which point HP CEO Mark Hurd would succeed her. Then, on September 22, 2006 HP announced that Dunn had resigned as chairwoman because of the "distraction her presence on our board" created. On September 28, 2006, Ann Baskins, HP's general counsel, resigned hours before she was to appear as a witness before the House Committee on Energy and Commerce, where she would ultimately invoke the Fifth Amendment to refuse to answer questions."

https://en.wikipedia.org/wiki/Hewlett-Packard_spying_scandal


Tom Perkins' memoir recounts this.


Doesn't Zuckerberg have full control of the board?


> Doesn't Zuckerberg have full control of the board?

Zuckerberg has voting control of Facebook, in part due to some financial engineering in 2016 [1]. He does not control the Board.

Board members have a fiduciary "duty of care," i.e. "the duty to pay attention and to try to make good decision" [2]. This duty is to the company as a whole, not just its majority vote-holder [3][4]. (That said "American courts simply do not hold directors liable for business decisions, made without a conflict of interest, unless those decisions are completely irrational. The doctrine of noninterference is known as the business judgment rule." [2])

[1] https://www.bloomberg.com/view/articles/2016-04-28/mark-zuck...

[2] http://www.oecd.org/daf/ca/corporategovernanceprinciples/187... page 6

[3] https://en.wikipedia.org/wiki/Shareholder_oppression

[4] http://www.sgalaw.com/news-and-views/2010/4/27/shareholder-o...


What does that mean? He can't control the opinion of the board members, but he can control the decisions of the board as a whole?


"He's surrounded himself with people just like him -- Silicon Valley entrepreneurs," says Stuart Grant, who deposed the Facebook founder during a lawsuit filed by investors opposed to the company's proposal. Facebook withdrew the proposal last month, just days before Zuckerberg was set to testify in the suit in Delaware Chancery Court.

...

"There's a reason for diversity -- it gives you a mix of opinions and ideas," says Grant, partner and co-founder of Grant & Eisenhofer, a Wilmington, Delaware, firm that specializes in securities and corporate-governance cases.

The board's near-uniformity of experience has led to a consensus of opinion that defers to Zuckerberg on all matters, Grant told CNBC. That can stray from what is best for shareholders.

For example, allowing Zuckerberg to reduce his economic interest in Facebook "dramatically" -- by selling tens of millions of shares -- while allowing him to maintain "absolute control" over corporate decision-making was a bad idea that the board should have voted down, Grant argues.

"You never want to divide economic consequences from decision-making," he says.

Zuckerberg's plan would have created three classes of shares, one with no voting rights, and allowed him to maintain voting control of the company even after selling most of his stake.

Source: https://www.cnbc.com/2017/10/05/attorney-who-deposed-mark-zu...

(Poll: Do you think it would be interesting to see the video from that deposition?)

Lead up:

"Discovery revealed that Zuckerberg in fact used his relationship with Andreessen to undermine the special committee process. Andreessen leaked Zuckerberg confidential information about the committee members' thoughts and concerns, and coached Zuckerberg through his negotiations with the committee. In one instance, Andreessen and Zuckerberg texted back and forth during a group call with the committee, with Andreessen telling Zuckerberg things like, "This line of argument is not helping. J" and "THIS is the key topic.""

...

"Trial was set for Tuesday, Sept. 26, 2017, with Zuckerberg slated to testify as the plaintiffs' first witness. On Thursday evening, Sept. 21, however, Zuckerberg asked Facebook's board to withdraw the reclassification, which it did. This withdrawal mooted the plaintiffs' litigation and averted the billions of dollars of harm to Class A stockholders that plaintiffs sought to prevent."

Source: http://webcache.googleusercontent.com/search?q=cache:https:/....

"But Andreessen, a venture capitalist at Andreessen Horowitz and a long-time Facebook board member, is a close Zuckerberg ally. While on the committee, Andreessen slipped Zuckerberg information about their progress and concerns, helping Zuckerberg negotiate against them, according to court documents. The documents include the transcripts of private texts between the two men, revealing the inner workings of the board of directors at a pivotal time for Facebook.

   ... 
Most of Andreessen`s texts to Zuckerberg during the negotiations over the non-voting shares focused on how to talk to the other two committee members. Susan Desmond-Hellmann, Facebook`s lead independent director and chief executive officer of the Bill & Melinda Gates Foundation, also led the special committee and discussed the matter on its behalf with Zuckerberg personally -- a call that Andreessen helped Zuckerberg prepare for.

Bowles, former President Bill Clinton`s chief of staff and past president of the University of North Carolina system, was especially skeptical of Zuckerberg`s proposition, as depicted in the suit. Many of Andreessen`s texts focused on persuading him. Among other things, Bowles worried that one of the concessions Zuckerberg wanted -- to allow the billionaire to serve two years in government without losing control of Facebook -- would look particularly irresponsible, according to court filings. Bowles did not respond to requests for comment.

Andreessen sought to persuade Bowles that if Zuckerberg went into politics, the government would likely require him to give up control of Facebook anyway, so the point was moot, according to the documents. A couple weeks later, Andreessen prevailed, and the vote was brought to shareholders. (The stock reclassification is on hold pending the results of the lawsuit, though.)

"The cat`s in the bag and the bag`s in the river,`` he messaged Zuckerberg. "Does that mean the cat`s dead?" Zuckerberg texted back, not understanding the spy speak.

   Andreessen replied: "Mission accomplished :-)""
Source: https://www.bloomberg.com/news/articles/2016-12-08/facebook-...

The deposition must have gone well. Here's what the plaintiffs' lawyer had to say before Zuckerberg withdrew his proposal.

"This case is said to mark just the second time Zuckerberg testifies as a witness. He previously testified earlier this year over a lawsuit against Facebook-owned Oculus -- a case Facebook lost.

Stuart Grant, the attorney representing the shareholders in the dispute, didn't mince words. He suggested Zuckerberg's limited courtroom experience puts him at a disadvantage in this case.

"That gives me an advantage because I've been doing this for 30 plus years," Grant told CNN Tech. "If we were sitting down to do coding together, I'd bet on Mark, but we're not coding."

Source: http://money.cnn.com/2017/09/20/technology/business/mark-zuc...


Can someone explain to me why the Cambridge Analytica story is making people so much angrier than the later revelation that Facebook was scraping call+text info? That seems to be the larger problem to me.

Somewhere at Facebook there is a team of people who wrote software to scrape, store and analyze the personal call+text data that users didn't explicitly mean to give to Facebook.

The data that Cambridge Analytica attained (from Facebook's API) doesn't seem surprising at all. Isn't the Cambridge Analytica headline really just, "Group doesn't follow website's terms of service from five years ago"?


> Isn't the Cambridge Analytica headline really just, "Group doesn't follow website's terms of service from five years ago"?

I think the headline people are seeing is more like "Group doesn't follow website's terms of service from five years ago, and ends up helping Donald Trump win the presidency."

A big part of the reason this has become so big a story is political.


> I think the headline people are seeing is more like "Group doesn't follow website's terms of service from five years ago, and ends up helping Donald Trump win the presidency."

Exactly.

The Obama campaign already acknowledged they did the same thing, but on a bigger scale.

https://www.washingtonpost.com/business/economy/facebooks-ru...

"In 2011, Carol Davidsen, director of data integration and media analytics for Obama for America, built a database of every American voter using the same Facebook developer tool used by Cambridge, known as the social graph API. Any time people used Facebook’s log-in button to sign on to the campaign’s website, the Obama data scientists were able to access their profile as well as their friends’ information. That allowed them to chart the closeness of people’s relationships and make estimates about which people would be most likely to influence other people in their network to vote.

“We ingested the entire U.S. social graph,” Davidsen said in an interview. “We would ask permission to basically scrape your profile, and also scrape your friends, basically anything that was available to scrape. We scraped it all.”"


> We would ask permission

They asked each user for permission to look at their social graph, in an app designed for this task (the Obama campaign).


"A foreign entity that was not registered as foreign agents collaborated with a multinational corporation to influence the presidential election" is why people are outraged. But it's likely that none of this would have been investigated had Clinton won the election


If Clinton had won, the propaganda and dirty tricks would have failed, and so would have seemed much lower priority.


Cambridge Analytica's actions had real-world political consequences, so yes, it's political for that reason. But another angle of the CA portion of the story is that they were ratfucking in a very effective way -- such as publishing fake news claiming BLM activists were organizing violent rallies, then broadcasting that to likely Trump voters. In some cases they even organized BLM events with incitement to commit violence, then pushed true stories about the events that suited their narrative.

In other words, CA demonstrated a tremendous vulnerability in our political dialogue to pure propaganda and dirty tricks that social media, in its algorithmic purity, was supposed to make less likely, not more. Some of the fallout of this affair will be a tremendous distrust that any provider of social media is an honest broker of what others use their services for. If my news feed is tainted by a group like CA manipulating it at the algorithm level, what hope can I have that anything I’m receiving over the Internet isn’t compromised? Maybe the stories are fake; maybe they’re true but the balance of stories is altered; maybe the facts are true-ish but slanted or selective. These are all legitimate concerns in the normal marketplace of ideas, but now we find the marketplace of ideas is deliberately compromised by malicious entities.


> If my news feed is tainted by a group like CA manipulating it at the algorithm level, what hope can I have that anything I’m receiving over the Internet isn’t compromised?

Was it CA doing the manipulating, or was it Facebook?

It's Facebook that runs Facebook...CA was merely taking advantage of Facebook to the fullest extent it could.

I'm not saying what CA did was right, but your comment seems to suggest Facebook was somehow helplessly complicit in executing CA's malicious plan.


They weren't helpless, they were complicit -- at least if early reports are still accurate that Facebook (meaning the Facebook employees who were actually assisting CA) knew what CA was doing while advising it on maximizing its data haul via the same techniques the Obama campaign had used in 2012, which had never been disabled. Facebook looks amoral and negligent in this story: all they cared about was making money by selling more data to a client with bottomless pockets. CA was using the data to maximize the effectiveness of its ratfucking operation, which depended on understanding how Facebook's news feed algorithms worked.


Probably because of some of the things that Cambridge Analytica stands accused of, or else things they have blatantly admitted to while being secretly recorded: blackmail and bribery of politicians in multiple countries, hacking election results, and even more unethical acts[1,2].

People are upset that their data was essentially stolen from Facebook (it was collected for use in an academic study, then turned around and sold for profit to CA), used by a company with ethical failures as serious as Cambridge Analytica, and then Facebook buried the story. It was two years before it came to light thanks to Guardian journalist Carole Cadwalladr.

Facebook also worked closely with CA during the Trump campaign, even though they would have known by that time that data obtained under the pretense of an academic study had been sold to CA.

1. http://www.bbc.com/news/uk-43528219

2. https://www.channel4.com/news/cambridge-analytica-revealed-t...


Because people are really, really bad at understanding threats that have vague, uncertain consequences. For the last 15 years, trying to get people to seriously worry about privacy has had about as much success as getting them to worry about climate change: they claim to care, but their revealed preferences tell a different story. The problem is that the downsides are uncertain and in the future, whereas the upsides are immediate and certain. Our ape brains are horrible at evaluating tradeoffs of this kind.

But the flip side to this is why the CA story has blown up: for once, the consequences -- "CA got Trump elected!" -- are immediate and graspable, in a way that "Facebook is scraping your text info" is not (even if it's probably not true). When the effects are right in your face and not time-delayed, people sit up and pay attention.


I'd imagine it's at least partly expectations - people know that Facebook has their data (even if not the full extent), so adding an additional class of data doesn't change that fundamental understanding or register as strongly. The fact that a third-party can pull your data out of Facebook is less obvious.

The fact that it was used for political ends probably makes a difference as well, both in the amount of coverage it is receiving and in that it makes the use of data a more concrete issue (it's much easier to understand "this is what the data was used for" than "Facebook has your data and that's bad for hypothetical/abstract reasons").

> Isn't the Cambridge Analytica headline really just, "Group doesn't follow website's terms of service from five years ago"

That's the act, but I'd say the usage/intent behind doing so is part of the story.


It is this week's viral thing to be outraged about.

I am buying some FB calls in the morning because no one will care about this "movement" in a month.


The CA story is the first tangible example of the societal consequences of exploiting social data for something other than selling you cat litter.


> Can someone explain to me why the Cambridge Analytica story is making people so much angrier than the later revelation that Facebook was scraping call+text info?

The lie that Facebook (and the like) is sold on is that there are zero possible negative ramifications of giving Facebook that data. Of course that's not true. But something has caught people's attention and they're waking up to it.

Now is the time to tell them all the other reasons to not trust facebook. Loudly scratching your head about why people care about Cambridge Analytica is to miss the opportunity you have.


Personally: I don't care what Facebook does with my data - there's nothing very exciting, and I have ad blockers so I don't even see the ads. It doesn't affect me.

However, the Analytica stuff is about conning the masses into voting for Trump and Brexit, and that affects me big time and pisses me off somewhat. In fact, worse than conning - more like inciting the mob to hatred through lies and bullshit. See for example the "Hillary is Satan" ad paid for by the Russians and targeted with Facebook at the kind of people who vote on the basis of that kind of stuff: https://static01.nyt.com/images/2017/11/02/us/politics/02dc-...


By giving access to your data, you are saying it's ok to take that data and pander to your psyche in some fashion. You may not see direct advertising in your browser, but it leads back to how strategists decide to engage your physical community. And even if you block ads, your friends may not. Friends will influence you through your feed, based on the ads they see, based on your data.


Yeah but my data - I'm middle aged, read HN, speculate a bit in crypto etc - is not anything I'm worried about people knowing. I can understand if people want to keep stuff secret but I don't have much of that and what I do I don't post on the internet in any form. I've never really thought of facebook as much more private than HN - ie not private.


But it’s not the secret data people are worried about with Cambridge Analytica. It’s mundane stuff that works to define a profile of your beliefs and mental vulnerabilities. And a big issue is many people see their data as not a big deal because they haven’t posted any nude photos to Facebook. Time to reframe how we view what is sensitive in the age of ML


It’s not that your data might be revealed, it’s that the data of you, your friends, and 50 million other voters might give someone an effective lever over a national election. Regardless of how you feel about Trump or Clinton, that should give you some pause.


I respect Facebook and their engineering chops as much as the next person, they are truly world class programmers, but how the holy hell is everyone daydreaming that they don’t work for an advertising company?

You sell and use people’s data to get money: this is the business plan. Full stop.

Connecting people can definitely be lucrative and useful in other ways, but Facebook's particular implementation is impression-based, not action/outcome-based.


Because they’re paid well. Very well. Everyone I know that works at FB for 2+ years is making 300-500k (including stock) and already owns or is on their way to purchasing a home. That makes it a lot easier to ignore the reality of FB. Meanwhile chumps like me that consider the ethics of their employer will be renting forever. I honestly don’t blame them.


To be clear, I'm not criticizing or being holier-than-thou -- I'd work there in a heartbeat without complaint and love it.

The part I don't get is that it seems everyone who does work there is in shock and awe that this is going on. SHOCKED!

It's comical to the point of parody.


It's a cognitive bias that kicks in when your monthly salary is involved, as succinctly noted in 1935 by Upton Sinclair:

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"


> already owns or is on their way to purchasing a home

This is insane. I get why some people want to live in the Bay Area, but I'll take my 4k-square-foot home in the great neighborhood with the award-winning schools for 300k.


Agreed, a lot of cognitive dissonance happening on campus.


Maybe what the article says it is and what it really is are a bit different?


You can only sell your soul once, the second time there is nothing left.


The money probably helps, especially given the Bay Area cost-of-living.

The FB employees I've met have been fine with explaining away the consequences of their actions with "oh it's just a job", "that's not my team", or "the technology is really interesting".

And as an idealist, I'll invoke Godwin's Law depending on our relationship.


“Hans, are we the baddies?” https://www.youtube.com/watch?v=hn1VxaMEjRU


Sheryl Sandberg is under the impression that Facebook, quote, "does not sell your data": https://www.youtube.com/watch?v=p1CTHFEcoJc

She is either wrong, or lying, or has her own ideas of what "selling your data" means.


They don't sell your data. Your actions and data are used to put you into categories, and advertisers pay to show their ads to people associated with those categories. No one is given direct access to your data. This strategy is not unique to Facebook-- it's how Google, Yahoo, and other major advertising-based tech companies work.
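
To make the distinction concrete, here is roughly what the advertiser side looks like. This is a minimal sketch in Python against the Facebook Marketing API as I understand it -- the endpoint path, API version, field names, and placeholder IDs are assumptions for illustration, not verified against current docs:

    import json
    import requests

    # Hypothetical placeholders -- not real credentials or IDs.
    AD_ACCOUNT_ID = "act_<YOUR_AD_ACCOUNT_ID>"
    ACCESS_TOKEN = "<YOUR_ACCESS_TOKEN>"

    # The advertiser describes an audience; Facebook decides who matches.
    targeting_spec = {
        "geo_locations": {"countries": ["US"]},
        "age_min": 25,
        "age_max": 40,
        "interests": [{"id": "<INTEREST_ID>", "name": "Movies"}],
    }

    resp = requests.post(
        f"https://graph.facebook.com/v2.12/{AD_ACCOUNT_ID}/adsets",
        data={
            "name": "Example ad set",
            "targeting": json.dumps(targeting_spec),
            "access_token": ACCESS_TOKEN,
        },
    )
    # The response identifies the ad set and aggregate estimates --
    # never the individual users who were matched.
    print(resp.json())

The point being: the advertiser's inputs and outputs are categories and aggregate numbers, and the per-user data stays inside Facebook.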


We do know that Facebook gave the data away for free to all apps, just like Google's Android. So "they don't sell your data" is a deluded statement.

Maybe they also sell it (processed into aggregated reports, not raw), I wouldn't be surprised.


At the end of the day, your data goes into a black box, gets jumbled around and then sold. To suggest this is different than "selling your data" is pedantic and misleading. For Sheryl to deny this "selling of data" reveals her concern with words and reception over real actions and effects.


No, it's not a pedantic distinction at all and it's actually the "selling people's data" narrative that's misleading, not the other way round.

Facebook does not sell user data, it sells targeted ad space which is the exact opposite business model.

Their competitive edge in this business purely relies on being the only entity in the world who can target so well, precisely because they hold onto that data like a treasure.


Their competitive edge is quite dull then, because all FB apps had access to the user data. FB PR needs to work on another excuse now... "don't sell your data" is a joke after the CA scandal.


Not too long ago the emphasis was on whether Facebook is a -media- or a -tech- company. Media companies have more responsibility to censor content than tech companies do.

Interesting how quickly the narrative changes.


I respect Google and their engineering chops as much as the next person, they are truly world class programmers, but how the holy hell is everyone daydreaming that they don’t work for an advertising company?

You sell and use people’s data to get money: this is the business plan.

Are people working there really so nieve(sic) as to believe that this is surprising?


Former Googler here. I'm pretty sure the vast majority of Googlers understand they work for an advertising company with a gigantic and generally well-run software "engineering" department. I don't mean that in any derogatory sense. Well-regulated advertising is important in helping consumers make informed purchasing decisions.


That's just something advertisers tell themselves to sleep better at night. And failing that, they just rub some dollar bills on their faces and then sleep like babies.

Advertising (regulated or not) is important in manipulating people into buying more stuff. There's no benefit in having your movies, music listening, games, or reading interrupted by ads. There's no benefit in a billboard with a pair of breasts distracting you on the road. There's no benefit in having to throw away all the crap you get in the mail.

If someone needs something, they will look for it and buy it, but this pull model doesn't result in as many sales as the push model and CxOs have to eat.


If advertising were about "helping consumers make informed purchasing decisions" there would never be an attractive model, a wild animal, singing, dancing or any other trope in an ad ever again.

They would be a list of superior features of, and only of, the product, read in clear, plain terms.


I never said that was their intent, just that it's an important function (often despite the intent of the advertiser).

I did a few searches for embedded development boards, and now I get lots of ads for alternative boards. I don't pretend that this information is in my face altruistically or entirely truthfully, but it does give me some useful information, probably paid for out of the pockets of some people who take the advertisements at face value.

It's not the most efficient mechanism I could imagine for disseminating truthful information, but if you're not naive, it's a useful channel.


>They would be a list of superior features of, and only of, the product read in clear, plain terms.

And also the inferior features. And those of its competitors.

It's amazingly deluded to even consider that advertising is about informing consumers. It's clearly about disinforming them - clouding their judgement in order to make them buy something they otherwise wouldn't have.


>Well-regulated advertising is important in helping consumers make informed purchasing decisions.

Do you think the adtech industry is "well-regulated"?


I think most consumer electronics are advertised reasonably, and it's fairly easy for consumers to evaluate advertising claims there.

On the other hand, health products / dietary supplements seem woefully under-regulated, especially since the layperson seems to have great difficulty in evaluating health claims. It seems crazy that dietary supplement and drug advertising are treated so very differently.

So, I think in some areas we need better advertising regulation, but not across the board.

More importantly, my point is that I'm not holding my nose up at my former colleagues. Without the second half of my post, the first half could be read as having a very judgemental tone.


> Well-regulated advertising is important in helping consumers make informed purchasing decisions.

Citation needed.

This statement is just the pablum that people in the industry repeat to help assuage their guilt at what they are doing. I say this as someone who worked on ads for a company that had more than half of all US internet ad spend at the time.


Facebook is an enabler for individuals to successfully undermine our democratic mechanisms. It shouldn't feel nice to work for a company that has to explain itself in front of the government. The employees of Facebook should be aware of what monster they are building.


No they're not. They're in denial. I used to work there as an intern, so I got a front-row seat to how a lot of employees are reacting. The word I would use is: indignation.

I’ve seen this us-against-them mentality play out elsewhere in various toxic cult-like organizational cultures. The NSA was a great public example of just how manifestly horrifying things can get with tens to hundreds of decent people willfully participating in corrupt or unethical practices.

The way this all works is terribly fascinating, but the short of it is that you have to become closed off and indoctrinated in order to fit in. Particularly at places like Facebook, Google, and generally anywhere else that provides free on-campus dinners (a good heuristic), employees build their social circles and identities around the company. This, coupled with various other factors, permits an astounding cognitive dissonance amongst a large group of otherwise benign and rational people.

EDIT: There's an interesting additional complication I've seen at times: internal spin. The media gets things about companies so wrong so frequently that it's almost too easy to discount the things with an uncomfortable shred of truth as 'fake news'.


> internal spin.

When I worked at a Fortune 500 company during an economic downturn, we were simultaneously seeing endless austerity measures while being plastered with endless positive spin. The free pens and stuff disappeared. The new job listings shrank overnight from pages and pages to a handful. They cut back on janitorial service. There was a pay freeze. Etc.

But all the press releases and news articles being forwarded to our email was about how we flew up the Fortune 500 ranking (iow we were sucking less than other companies during the recession, even though the company shrank, because it's a relative ranking) and our CEO was named one of the biggest wealth builders in the nation and so on.

I was painfully aware of the disconnect. But I sometimes wondered if other employees really noticed or not. I never asked any of my coworkers. I felt like that would be a good way to end up eventually fired. But I wondered how many drank the kool-aid without noticing that it didn't jibe with the austerity measures we were seeing.


Agreed. I've never been at FB, but been at a similarly big "darling" software company (don't want to go into specifics for identification reasons) and it largely is about creating an internal "us-vs-them" mentality and a culture that lionizes the good deeds over the bad.

Don't think of the employees as evil, they are probably legitimately not aware of the entirety of what's going on. Like soldiers in a war, they only know how their battles are going, not the war.


> evil

Hollywood et al. popularize the misconception that evil is fantastic and done with intent. Most of the time evil is banal[1]. The larger problems happen when the unremarkable, small deviances from acceptable behavior become normalized[2][3].

[1] https://en.wikipedia.org/wiki/Eichmann_in_Jerusalem#The_bana...

[2] https://en.wikibooks.org/wiki/Professionalism/Diane_Vaughan_...

[3] https://www.youtube.com/watch?v=PGLYEDpNu60


Are you suggesting they aren't even aware of these issues? There's a big difference between someone who looks for answers and stays educated and someone who knowingly chooses to stay ignorant. No one will empathize with the latter group, because that's how most weak people in history became complicit in great atrocities.


There is a difference with the NSA. I remember they recruited pretty heavily on our college campus, and you are ostensibly presented with the chance to work for and defend your country with your brainpower. Then we throw in this war on "terror", and it seems to me the questions and answers for an employee at the NSA would be much harder than for someone at a private company, where it's much easier to just quit and walk away (if one were not indoctrinated as you describe).


> I’ve seen this us-against-them mentality play out elsewhere in various toxic cult-like organizational cultures

Wall Street, too, after the crisis. “Of course we bet against our counterparties! They’re counterparties, not clients. If they didn’t read the prospectus they’re morons who deserved to lose their money. We're just the political whipping boy du jour."


I think it also kinda helps that most of the engineers are probably not "rich kids"; they're probably mostly from middle-class backgrounds. They've never seen that kind of wealth before, never seen that insane level of benefits. It's much easier to think of someone (or some org) as benign, to give them a second chance, when they treat you personally so well. It's kinda what immigrants feel when they do well in the US as well.


I wanted to include this in the original post as well. My cynical/ironic phrasing of this phenomenon is “not wealthy enough to have principles”.

Which I totally get. I've been a victim myself. And the truth is it's hard to find successful companies that aren't profiting off of some kind of exploitation (e.g. pollution, natural resources stolen from a developing nation, behavioral manipulation of users, injecting animals with antibiotics, high-fructose corn syrup, lobbying (institutionalized corruption), preying on people's fears (media), child labor (lots of clothing supply chains), predatory lending, extreme leverage ratios in a too-big-to-fail context, etc.).

It’s weird to me that people often don’t understand the root problem here is unrestrained/underregulated capitalism. In any naturally competitive system without adequate rules, the winners will be cheaters/exploiters. In reverse: you often cannot win unless you cheat.

Hence: most people don't have the luxury of working exclusively for socially responsible companies.


I've seen the same behaviour; excellent comment, and thanks for sharing. It would be great to see a documentary on these companies and their relationships with the people who work for them.


Not a documentary, but the movie The Circle with Tom Hanks explores some of these issues.


Upton Sinclair comes to mind: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"


> I’ve seen this us-against-them mentality play out elsewhere in various toxic cult-like organizational cultures.

Uber comes to mind


Big misconception here.

Private interests incentivize individuals to undermine our democratic mechanisms. If we let them, we are fools to expect anything else.

The only reason many other companies don't have to explain themselves to governments is that they are legally allowed to incentivize the opposite with capital. It's called lobbying.


This is why they’re being “calmed” and kept comfortable. In the immortal words of Pickle Rick, I think it's helped a lot of people get comfortable and stop panicking, which is a state of mind we value in the animals we eat, but not something I want for myself.

Plus they can bury any qualms they have in piles of money and rationalizations.


I hope anyone worth their salt leaves Facebook, as an employee and as a user. Employees should have felt shame ever since the election. They all know what Facebook is built on, what they've done, and what they're doing.


Which election are you talking about? The 2016 one where everyone cares about Facebook helping elect the US president, or the 2008 and 2012 elections where everyone was ok with Facebook helping elect the US president?


I've seen this nonsense spouted a few places now. Personally, I think there is a big difference between the two, one focused primarily on cheerleading to get out the vote, the other using inflammatory fabrications to enrage your base (and get out the vote). I see this as a classic example of when "the ends don't justify the means".


Why "or"? Why not all 3? Because FB's role in all 3 elections has been disgusting and has degraded modern elections into something worse than they once were.


This is only the first major company to experience an issue like this. More will follow. If it wasn't Facebook, it would have been another.


>> One of [the Facebook employees] said he had avoided a trip home to see his family last weekend because he did not want to answer questions about the company he worked for.

Wow, some people/families are way too media-sensitive. It's just hypocrisy. Facebook is fundamentally the same company as it was last week, last year and 5 years ago. Everyone knew this, especially Facebook employees.

Facebook today is mostly made up of two kinds of employees; money-hungry sociopaths and hypocrites.


There's a big difference between the theoretical: "Well, technically they have all our data, they could share it with anyone, and they could use it to target ads quite precisely."

In fact, that statement is true of the government as well. Most people just think it won't really happen, and if it does happen it'll be something fairly trivial like selling me shaving kits because I'm a man, and that my data isn't really all that revealing.

That is vastly different from:

"This specific data you gave facebook went to this specific company, in violation of facebook's own policies.

The breach of ToS wasn't followed up, and we have video of the CEO bragging about fake news, blackmail and honey traps.

This wasn't even a US company influencing the election.

Your data was directly used to campaign for someone you probably deeply oppose.

Not only that, but this specific targeting was probably highly important because we know the result of the election relied upon victories in specific states that are important to the electoral college whilst losing the popular vote.

It also turns out that what had seemed to be deep real organic discussion topics turned out to be targeted propaganda showing a scary ability to control the public discourse

Oh. And this is all carried out by a company whose CEO openly wants to run for political office and could use this to get himself elected next time."


What could they possibly do to fix things that wouldn't destroy their business model?


Their business model is leaky abstractions as a rule and malicious compliance when exceptions occur. We (our policymakers) have to understand the underlying complexity of Facebook in order to issue permission for its continuing existence; otherwise it's a foot-gun by default for those policymakers' constituents. And when we need Facebook's help to clean up its messes, the help is half-hearted and doesn't address the underlying issue.

There's not much they can do other than assume absolute pacifism and absolute neutrality, to a fault. Effectively they must be resolute in their support for the stability of society, and that includes putting down political activists like Antifa and BLM. Facebook must no longer be a platform for activism of any kind, where all content deemed objectionable by anyone is pruned.


Their business model does not rely on third parties accessing private user information.


Agreed, it's quite the opposite in fact : their business model relies on jealously guarding private user information to remain the only entity in the world who can sell highly-targeted ads


How do you sell highly targeted ads without revealing your data? How did the Obama campaign download the entire U.S. social graph in 2012, and brag about doing so, with Facebook’s approval? How did Cambridge Analytica do a comparable thing in 2016?

Facebook doesn’t sound that jealous to me.


They abstract most of it away. Marketers can set specific targeting criteria and can get a high level estimate of how an ad will perform. Beyond that it's tracking impressions, clicks, split testing, etc.

The app API was different until 2013(?). App developers only needed a single user's permission to access all of that user's friends' information. Both Obama and Cambridge Analytica came out of that period. Now, users can only authorize the release of their own information.
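
For context, here is a sketch of what that pre-2014 flow looked like, in Python. The version string, field names, and permission behavior are from memory and should be treated as assumptions -- but the key point stands as described above: one consenting user's token could enumerate and read data about friends who never installed the app.

    import requests

    GRAPH = "https://graph.facebook.com/v1.0"  # pre-v2.0 API, assumed
    token = "<ONE_USERS_ACCESS_TOKEN>"          # hypothetical placeholder

    # Enumerate the authorizing user's friends with that single token.
    friends = requests.get(
        f"{GRAPH}/me/friends", params={"access_token": token}
    ).json()

    # With friends_* permissions granted by that one user, read profile
    # fields of each friend -- none of whom ever saw a consent dialog.
    for friend in friends.get("data", []):
        profile = requests.get(
            f"{GRAPH}/{friend['id']}",
            params={"fields": "name,location,likes", "access_token": token},
        ).json()
        print(profile.get("name"))

Multiply that by a few hundred thousand app installs and you can see how a dataset covering tens of millions of users was assembled.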


Seriously, I don’t have much respect, if any, for those working for Facebook...unless they’re working on and can implement drastic changes in how privacy, tracking and profiling are handled for the betterment of humankind. But Facebook being an advertising company that thrives on such details, I doubt if employees would have much say on these aspects or can do anything.

There ought to be a #quitfacebook topic to get many employees to quit. But I don’t believe that would get much traction due to the attractiveness of compensation/benefits and probably some challenging work. If someone working at Facebook believes that things will get better, I’d say they’re just deluding themselves. It cannot happen with the current management.

P.S.: Since this post is about Facebook, I’m not going to talk about other companies.


Sounds like Facebook is having their NSA moment


can't wait for the next Snowden


I still remember the first Snowden, who revealed to a shocked world that the NSA was involved in signals intelligence. I suspect that the next Snowden will reveal to a shocked world that 3rd party databases have a tendency to leak into other 3rd party databases.


What amazed me the most from the first Snowden leak was that the NSA was vastly more effective and competent in their efforts than I had expected.


I think that most people think that...


This one company's unauthorized access to millions of records may just be the tip of the iceberg.


>"There was a feeling, said one of the people, that Facebook wanted to take aggressive steps to make sure it could regain user trust. And over all, he said, confidence was up."

I'm curious what might be the source of this regained "confidence"? The idea that this will all just blow over in a few months?


Honestly, I am not very worried about rogue data analytics companies or Russian trolls on Facebook.

I am worried that a questionable semi-private German entity can block me (e.g. a 30-day ban) on Facebook at will. I am a US citizen and don't live in Germany. This is outrageous.


What semi-private German entity?


Take this as a start (I know, Breitbart)

http://www.breitbart.com/london/2015/09/17/german-govt-hires...

I just read about "FTC’s Bureau of Consumer Protection Regarding Reported Concerns about Facebook Privacy Practices". Since I am an American citizen and was blocked for 30 days on facebook by this dubious organisation, I may actually drop the FTC an email and ask about their opinion on this.


I don’t really understand the outrage. Just what do you expect when you share things with hundreds of people (your FB friends) online? For it not to be used? The only reasonable assumption is that anyone and everyone can read whatever you share on FB.


Recommended read to add to this[1]. The employee was shocked at the amount of data they had access to without a clear business need.

[1] Might be behind paywall: https://m.washingtontimes.com/news/2018/mar/17/ex-facebook-e...


Let's cut to the chase: would you work for Facebook, in your dream role, at industry-leading pay?

The answer, for most of us, is an emphatic 'yes'.


My dreams exclude anything with "Google", "Facebook", or "Uber". And if the role we're talking about involves iOS, then Facebook is even less attractive, because IMHO their app is an example of how not to do an iOS application.


Ask instead who would work for a Big Tobacco company, it’s the same thing. Not me.


On the plus side, it's good to know that Facebook has employees who care about being ethical citizens of the Internet ecosystem. Hopefully they can exert pressure on the upper ranks in some way to bring things under control. Facebook has the opportunity to be a force for good, while also accomplishing its business model, but it won't naturally lean in that direction.


Someone I know closely worked at Facebook in its heyday, but it has been a while since he left. I asked him around 2014 (he had just left the company), "So what do you think about the way Facebook handles privacy issues?" His response was not defensive at all. Rather, it was a very curious "FB is one of the most open cultures you can ever work in. Any employee can ask any question of anyone at the highest levels and expect to get an honest answer". My thought was "So you didn't have anything to ask questions about?". He was actually a pretty nice fellow, so I stopped asking anything else at that point.

But I remember thinking that it was a very funny, cult-member like response. And you can test this too. Ask your friends who work at FB and I bet you will get some pre-programmed response very similar to that.


Nobody mentions the elephant in the room, because the elephant in the room feeds the people in the room.


I worked at Facebook even longer ago than that.

What makes you assume it's got to be a pre-programmed, cult-member-like response, and why can't you believe that this is the actual work culture?


Ok, perhaps you are the best person to ask.

1. What was Mark Zuckerberg's response when people asked him if Facebook might be overstepping bounds in terms of data collection (shadow profiles)?

2. What did the company employees think of the backlash over their beacon project?

3. When Facebook told the EU that they cannot match FB user profiles and WhatsApp user profiles to create a single profile (remembering that they would be fined), what was the general consensus among employees? Did they know that FB had lied? Were they still OK with that? If they were, was there not a single person expressing dissent?


"KW: Mark, can you give us a sense of the timing and cost for this? Like, the audits that you're talking about. Is there any sense of how quickly you could do it and what kind of cost it would be to the company?

I think it depends on what we find. But we're going to be investigating and reviewing tens of thousands of apps from before 2014, and assuming that there's some suspicious activity we're probably going to be doing a number of formal audits, so I think this is going to be pretty expensive. You know, the conversations we have been having internally on this is, "Are there enough people who are trained auditors in the world to do the number of audits that we're going to need quickly?" But I think this is going to cost many millions of dollars and take a number of months and hopefully not longer than that in order to get this fully complete."

Source: https://www.recode.net/2018/3/22/17150814/transcript-intervi...


And even if anyone ever considers it "complete," the reality is that it's just going to be whitewash and bullshit.

Why waste the fucking money. Quit being sentimental. Just trash Facebook and pivot (lol pivot). Be a real motherfucker, and let Facebook burn. Make something cooler than Facebook. Fuck this audit stupidity.

Come on, man.


How fast can I kill all my karma by pointing out that 3 of the top 4 articles on HN are some realization that Facebook does not give an ef about anybody's privacy?


On ArsTechnica, you just need to bring up Obama’s use of FB’s social graph


It's been almost a week of this. Actually wondering when we'll get back to interesting tech


It is attitudes like this that created the problem in the first place.


Are we going to talk about the fact that a whole bunch of these employees are former elected officials, or related to one? Which is part of why Zuckerberg isn’t actually concerned about the political fallout?


Running for president...


Zuckerberg should resign at this moment. Facebook needs new leadership if it wants to change the way it has been operating.


Nah. In a week all of this will be forgotten (do you still remember the helicopter in the river, and the bridge that collapsed?). You and me and a few more people will remember, but we all already know that everything that is posted privately on Facebook will be leaked sooner or later.

I even expect to see a few angry posts from people who decided to delete their accounts now and, when they try to undelete them after a few weeks, are surprised to discover that all the old information is missing and Facebook can't recover it because it is deleted.

fake quote > If they had used Windows instead of Linux, they could have sent the account to the Recycle Bin and recovered it now.


I thought the same a week ago. Nearly bought some FB calls even. But this controversy has surprised me with its staying power.

The real sizzle comes not from the emotional outrage but the calls for government inquiry and potential regulation, which would do structural damage to all of Silicon Valley. This has been a stunningly bipartisan effort, the left supporting it ostensibly because they like regulating big businesses, and the right supporting it because they (perhaps correctly) see Silicon Valley megacorps as adversaries.


I did buy some calls. I am sweating currently, bad timing with all the other economic developments.


> In a week all of this will be forgotten

Nope -- there are political and legal proceedings underway, and those things take time. In a year? Maybe.

> Facebook can't recover it because it is deleted

"Deleted." It's easy for those people to fake up a new account, and remember the lessons they learned the last time around.


> Nope -- there are political and legal proceedings underway, and those things take time. In a year? Maybe.

Ugh, so we're going to have the front page of HN dominated with the exact same "discussions" for another year?


I don't think anybody expects this surge of anti-Facebook articles to continue indefinitely. Personally I'm just hoping that all of this makes people dislike the company just enough to shatter the illusion of usefulness. Plenty of people will still heavily use Facebook, but if we can pull even 5% of the technologically-aware, that's a smidgen of influence that Facebook no longer has and a chunk of people who no longer serve as lures for others to join.

It's not going to happen quickly, but if this awareness gains momentum a much more healthy (federated, preferably open source) social media site could have a higher chance of survival. I think that's worth something.


We're still talking about Snowden.

That was five years ago.


It is pretty easy to call for a resignation as an outsider. This is not an elected position; I do not think people get to make this call.


As a shareholder I do; this kid works for me, and it is time for him to take full responsibility and go home...


Zuck has full control of the company by design, those shares you bought are participation options with no control.


He should donate his shares and resign... this is how he can help humanity...


You can maybe call for "donating his voting power", but donating his net worth... oh wait, he did that!


You knew he had a majority of the voting control, and yet decided to buy into it.


LOL, I got 1 share so I can watch the whole thing unfold from the front seat ;-)


dang - I fell for it!


Seeing as he's the majority shareholder, you seem to be out of luck.


“Full responsibility”...
