Are they allowed to escalate access, given the bounty rules?
Also, isn't the periodic collection of login credentials completely out of scope? What I mean is: once the initial vulnerability was located and the pentester got shell access on the system, shouldn't he have stopped there and reported?
In this case, they (correctly) went the other way, which creates even more uncertainty. Given this pattern of inconsistent enforcement mixed with threats, I would feel genuinely unsafe reporting a security vulnerability to Facebook except under very specific conditions. That's probably not what they're going for, but that's the environment they're creating.
What are we supposed to conclude from this? Under the current rules and assuming the description of what happened is accurate, it would seem you'll potentially be punished for establishing the full extent of a breach, unless it's not so bad, in which case you're rewarded for failing. In addition to being illogical and unfair, it also incentivizes OpSec to delude themselves and everyone else about their true security risks.
In the other case, they made threats because he proved the vulnerability was far greater (in fact, I'd call it a billion-dollar bug) and they wanted to cover it up.
This time they're proudly telling us because the attempts failed or were not made at all.
As Wes proved, a simple looking RCE can lead to a huge breach of security due to failures in other areas.
I agree that limits must be established, but those limits shouldn't cut research off so abruptly, since further probing can reveal how deep a breach really goes.
One might argue this is unethical, but a black hat doesn't care either way.
It's already a critical vulnerability. Unless you want to assign numbers to the infinity, which is ridiculous.
That's why some get a 10k payout and others get a 2.5k.
I agree that some actions are unethical, but does that really matter so much when a black hat is unethical anyway? The fact that he reported it shows he harbored no malicious intent.
How is collecting logins better than that? Seriously? This is completely malicious if you ask me.
Moreover, we must not judge each case strictly to the same rule, but with a measure of consideration of the circumstances as well.
No. From Facebook's responsible disclosure policy:
> You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)
Both of the pen testers in this situation broke the rules. Once they found a security issue they exploited it and probed for additional issues (as well as one tester who attempted compromise of sensitive company data by collecting logins).
It's good that Facebook doesn't always apply these guidelines to the letter.
"Neither of them were able to compromise other parts of our infra-structure so, the way we see it, it's a double win: two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, none of them were able to escalate access."
From the write-up:
> After checking the browser, the SSL certificate of files.fb.com was *.fb.com …
You left a wildcard cert on this random internet-facing unaudited 3rd party linux box with no protection against data exfiltration, or an HTTPS proxy in front of it, or anything? I know it's not as critical as facebook.com, but this is still bad. Priv escalation CVEs for Linux come out like every month.
At the very least, using this cert, anyone could run a MITM against any *.fb.com service and compromise traffic without ever breaking in. From the article that could include VPNs, Outlook Webmail, Oracle E-business and MobileIron. I'm hoping you did actually have a proxy in front of it and Orange just didn't catch that.
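To make the blast radius concrete: here's a minimal sketch of RFC 6125-style wildcard matching (simplified; real validators handle additional edge cases, and the `vpn`/`mail` hostnames below are hypothetical examples, not from the article). It shows why a private key stolen from one box lets an attacker impersonate every sibling service under the same wildcard:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified RFC 6125 matching: '*' may only occupy the entire
    leftmost label and matches exactly one label."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # '*' never spans multiple labels
    if p_labels[1:] != h_labels[1:]:
        return False  # all non-wildcard labels must match exactly
    return p_labels[0] == "*" or p_labels[0] == h_labels[0]

# A key leaked from files.fb.com signs valid TLS for any sibling:
for host in ["files.fb.com", "vpn.fb.com", "mail.fb.com"]:
    assert wildcard_matches("*.fb.com", host)

# ...but not the bare apex or nested subdomains:
assert not wildcard_matches("*.fb.com", "fb.com")
assert not wildcard_matches("*.fb.com", "a.b.fb.com")
```

This is why per-service certificates (or at least terminating TLS on a hardened proxy rather than the app box itself) limit the damage from a single compromised host.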
That sounds a lot more reasonable than actually having the private key on each individual server.
The article claims those are tfbnw.net, not fb.com.
That's a big assumption.
So they were grabbing creds, but it's ok it was just a researcher? Pretty sure that's crossing the line and indicative of an actual compromise of employee credentials.
Seriously, don't do Facebook. Not even once.
(Speaking of which, I just got script-hacked like this a few days ago, and I think it's finally time to dump Wordpress and migrate to a static blog...)
Aside on self-hosting: This reminds me a lot of the self-driving car accident debate. Self-driving cars could be thousands of times more reliable than humans, but unless they are mathematically perfect, some people feel safer driving themselves because they feel they are in control. Nevermind that Google/Facebook etc have millions of man hours invested into security.
In essence, it's an economic argument, not a technical one.
I don't see how this is true at all. Most people don't care or know about security; see Wordpress. Many companies don't take security seriously either, and there are plenty of open MongoDB instances exposed to the outside world. HackingTeam, a team chock-full of blackhat hackers, was partially done in by sloppy authentication and passwordless databases.
Today, when people run their own servers, the incentives don't seem properly aligned at all. And given that even blackhat shops don't take the time to secure their own systems properly, the economic argument falls flat.
It's been tried, again, and again, and again, and it fails for the same reason all such projects fail - the only people who care about this architecture stuff are those who are geeky enough to be running their own decentralized services.
Heading off the inevitable response: Email is not a social interaction service, it is a message passing service that happens to have social uses.