This is Reginaldo from the Facebook Security team. We're really glad Orange reported this to us. In this case, the software we were using is third party. As we don't have full control of it, we ran it isolated from the systems that host the data people share on Facebook. We do this precisely to have better security, as chromakode mentioned. After incident response, we determined that the activity Orange detected was in fact from another researcher who participates in our bounty program. Neither of them was able to compromise other parts of our infrastructure, so the way we see it, it's a double win: two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, and neither was able to escalate access.



> two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, and neither was able to escalate access.

Are they allowed to escalate access, given the bounty rules?

Also, isn't the periodic collection of login credentials completely out of scope? What I mean is: once the initial vulnerability was located and the pentester got shell access on the system, shouldn't he have stopped there and reported it?


You can read the story from a few months back to find that Facebook apparently has an unstated but never-disavowed policy of threatening to involve the FBI, and of doing real reputational damage, to researchers who they feel cross a (poorly defined and inconsistently enforced) line: https://news.ycombinator.com/item?id=10754194

In this case, they (correctly) went the other way, which creates even more uncertainty. Given this pattern of inconsistent enforcement mixed with threats, I would feel genuinely unsafe reporting a security vulnerability to Facebook except under very specific conditions. That's probably not what they're going for, but that's the environment they're creating.


It's actually very simple: once you're inside a server, you report it. You don't go digging for more, because once you're in, you can easily do more damage.


But in this case, they specifically said that the researchers were unable to escalate. How does he know? Either the researchers violated the rules by trying and failing, or they didn't try and he's simply lying to make himself look better. By your reasoning and their policy, those are the only two possibilities.

What are we supposed to conclude from this? Under the current rules, and assuming the description of what happened is accurate, it would seem you'll potentially be punished for establishing the full extent of a breach, unless it's not so bad, in which case you're rewarded for failing. In addition to being illogical and unfair, it incentivizes security teams to delude themselves and everyone else about their true security risks.


Accurate.

In the other case, they made threats because he proved the vulnerability was much, much greater than acknowledged (in fact, I'd call it a billion-dollar bug) and they wanted to cover it up.

This time they're proudly telling us because the attempts failed or were not made at all.


Agree with this. Getting in is enough; going further is reckless at best and malicious at worst.


Under those rules, it's not possible to verify reginaldo's claim that this machine is cut off from more valuable data.


You don't need to verify those claims. You found a critical vulnerability; report it. You're surely not the only one who could have found it.


How then do we assess the extent of the vulnerability?

As Wes (the researcher in the Instagram case linked above) proved, a simple-looking RCE can lead to a huge security breach due to failures in other areas.

I agree that limits must be established, but they shouldn't cut research off so abruptly, since digging further can surface more information.

One might argue this is unethical, but a black hat doesn't care either way.


> How then do we assess the extent of the vulnerability?

It's already a critical vulnerability. Unless you want to assign numbers to infinity, which is ridiculous.


Yet we know not all critical vulnerabilities are created equal.

That's why some get a $10k payout and others get $2.5k.


This is assessed based on how hard it was to elevate your access rights (whether it requires physical access, user cooperation, etc.), not on how much damage you could do, because once you've elevated your rights the possible damage is unbounded.
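That's roughly how CVSS thinks about it too. Here's a toy sketch of the CVSS v3.1 exploitability sub-score, just to show how much the attack-vector term alone moves the number (constants are from the CVSS v3.1 spec; this is not the full base-score formula, and it's not necessarily what Facebook uses):

    # CVSS v3.1 attack-vector weights.
    AV = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.20}

    def exploitability(av, ac=0.77, pr=0.85, ui=0.85):
        # Defaults: attack complexity Low, privileges required None,
        # user interaction None, scope unchanged.
        return 8.22 * AV[av] * ac * pr * ui

    print(round(exploitability("network"), 1))   # 3.9 -- remote pre-auth bug
    print(round(exploitability("physical"), 1))  # 0.9 -- same bug, physical access only

Same flaw, very different score, purely because of how reachable it is.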


Except in that case, he unearthed another completely unrelated vuln.

I agree that some actions are unethical, but does that really matter so much when a black hat is unethical anyway? The fact that he reported it means he harbored no malicious intent.

How is collecting logins better than that? Seriously? This is completely malicious if you ask me.

Moreover, we shouldn't judge every case strictly by the same rule, but with some consideration of the circumstances as well.


> Are they allowed to escalate access, given the bounty rules?

No. From Facebook's responsible disclosure policy [1]:

> You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)

Both of the pen testers in this situation broke the rules. Once they found a security issue, they exploited it and probed for additional issues, and one of them went on to attempt compromise of sensitive company data by collecting logins.

It's good that Facebook doesn't always apply these guidelines to the letter.

[1] https://www.facebook.com/whitehat


Is what the other researcher did (collecting usernames and passwords) in scope? And is it really impossible to use those credentials to get above-standard privileges on any part of Facebook?


"As we don't have full control of it, we ran it isolated from the systems that host the data people share on Facebook."

"Neither of them were able to compromise other parts of our infra-structure so, the way we see it, it's a double win: two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, none of them were able to escalate access."

From the write-up:

> After checking the browser, the SSL certificate of files.fb.com was *.fb.com …

You left a wildcard cert on this random internet-facing, unaudited, third-party Linux box with no protection against data exfiltration, no HTTPS proxy in front of it, nothing? I know it's not as critical as facebook.com, but this is still bad. Privilege-escalation CVEs for Linux come out practically every month.

At the very least, with this cert anyone could run a MITM on any *.fb.com service and compromise it without ever breaking in. From the article, that could include VPNs, Outlook webmail, Oracle E-Business and MobileIron. I'm hoping you did actually have a proxy in front of it and Orange just didn't catch that.
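For anyone who wants to check this sort of thing themselves, here's a minimal Python sketch of pulling the cert a host presents and inspecting its SAN entries (the hostname is the appliance from the write-up; everything else is generic):

    import socket, ssl

    host = "files.fb.com"  # the appliance from the write-up
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            # A wildcard deployment shows up in subjectAltName,
            # e.g. ('DNS', '*.fb.com').
            print(cert.get("subject"))
            print(cert.get("subjectAltName"))

If the SAN comes back as *.fb.com rather than a host-specific name, whatever holds that private key (the box itself, or a proxy in front of it) can impersonate every fb.com subdomain.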


"I'm hoping you did actually have a proxy in front of it and Orange just didn't catch that."

That sounds a lot more reasonable than actually having the private key on each individual server.


> From the article that could include VPNs, Outlook Webmail, Oracle E-business and MobileIron.

The article claims those are tfbnw.net, not fb.com.


I figure that was probably a CDN, and you don't know the origin configuration.


I hope it was. But if it and the other services Orange found are just random internet-facing appliances, it's unlikely they would be configured behind a CDN, because CDNs are typically for public traffic. Nobody puts a CDN in front of their Outlook webmail to expose it to the Internet, for example. But they might have had a regular SSL proxy in front... maybe.


> no protection against data exfiltration

That's a big assumption


Since the article shows data actually being exfiltrated, it's not an assumption, it's a fact.


If downloading any data from a hacked server counts as exfiltration then everyone could be said to have no protection against data exfiltration.


This is quite confusing. "A brief summary, the hacker created a proxy on the credential page to log the credentials of Facebook employees. These logged passwords were stored under web directory for the hacker to use WGET every once in a while".

So they were grabbing creds, but it's OK because it was just a researcher? Pretty sure that crosses the line and indicates an actual compromise of employee credentials.
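(Incidentally, that "WGET every once in a while" pattern is easy to spot after the fact. A toy Python sketch of scanning an access log for one client repeatedly polling the same path; the filename and combined-log format are my assumptions, not from the write-up:)

    import re
    from collections import Counter

    # Count GET hits per (client IP, path); a harvester pulling its
    # dropped log file shows up as one IP hammering one URL.
    hits = Counter()
    with open("access.log") as f:  # hypothetical log file
        for line in f:
            m = re.match(r'(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+)', line)
            if m:
                hits[(m.group(1), m.group(2))] += 1

    for (ip, path), n in hits.most_common(5):
        print(n, ip, path)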


Facebook's stance on vulnerability research is incredibly admirable. I can only hope more companies hold themselves to the same high standard.



I believe all rich firms, including FB, Microsoft, Google and many others, who are fooling the public and minting trillions of dollars, will be hacked, and that more advanced, sophisticated tools will be invented by some hacking group to hack them. Look at the Panama Papers. It is also a science and an advanced technology.

David Ghosh


This is pretty much why we shouldn't be trusting centralized services at the scale of Facebook. Given that two researchers got that far, sooner or later another will actually manage to escalate privileges. And then I'm not sure which we should be more afraid of: it all leaking to the public, or some three-letter agency using it for nefarious purposes...

Seriously, don't do Facebook. Not even once.


This reasoning is really flimsy. Are you suggesting that a normal user will do better at securing their home-grown system than a team of trained security engineers?


While it's true that individual end users probably span a large range of security expertise and practice, it's also likely that fewer users would be affected per targeted attack.


But it would be much easier to run untargeted, large-scale, automated attacks, which would affect many more people. Just look at, e.g., the world of self-hosted WordPress installs. Somewhat competent people get hacked by scripts all the time because they can't keep up with patching their servers against every new threat (even assuming they learn about said threats in time). Normal people don't stand a chance.

(Speaking of which, I just got script-hacked like this a few days ago, and I think it's finally time to dump WordPress and migrate to a static blog...)


Security assessment is all about risk plus impact. The ability to compromise one thing and gain access to lots of data is more severe than the near-certain compromise of a single Windows XP box with no data on it.


I'd say that if it weren't for Facebook, a lot of communication would take place over safer channels. If you doubt how much is at stake, click "include 'only me' activity" in your activity log and imagine how much damage would be done if that leaked.


You and I remember very different pre-centralization Internets. Before Google, decentralized hand-administered email services were probably the single most popular way to bust into someone else's servers.


I'm not saying we should go back to those times; I'm saying we should be aiming for decentralization in the long run. It's not like the security practices we had back then are inherent to decentralization. They're mostly tied to ignorance.


Decentralization is great for many things but has little effect on security. Decentralized channels are not inherently safer.

Aside on self-hosting: this reminds me a lot of the self-driving-car accident debate. Self-driving cars could be thousands of times more reliable than humans, but unless they are mathematically perfect, some people feel safer driving themselves because they feel they are in control. Never mind that Google, Facebook, etc. have millions of man-hours invested in security.


I don't think it's so much that decentralized systems are technically safer, but that the people who own and control the centralized ones have incentives to keep them unsafe (at least from themselves), whereas if each person runs their own server, the incentives are properly aligned for security.

In essence, it's an economic argument, not a technical one.


>whereas if each person runs their own server, the incentives are properly aligned for security.

I don't see how this is true at all. Most people don't care or know about security. See WordPress. Many companies don't take security seriously either; there are plenty of open MongoDB instances exposed to the outside world. Hacking Team, a shop chock-full of black-hat hackers, was partially done in by sloppy authentication and passwordless databases.

Today, when people run their own servers, the incentives don't seem properly aligned at all, and given that even black-hat shops don't take the time to secure their own systems properly, the economic argument falls flat.


Decentralization in the context of social interaction services does not work. As long as the concept of a social network exists, people will converge on the same few platforms.

It's been tried again and again, and it fails for the same reason all such projects fail: the only people who care about the architecture are those geeky enough to run their own decentralized services.

Heading off the inevitable response: Email is not a social interaction service, it is a message passing service that happens to have social uses.


What is a social interaction service, and why can't it be federated or peer-to-peer?


Because not-techy people don't want to deal with the hassle of setting up and maintaining those things.


Don't put all your eggs in one basket.


...especially if it's not your basket.


With that logic you might as well stop using the internet altogether.


It's more that we shouldn't trust third-party closed-source software for a task that's supposed to be done securely.



