The published policy didn't say anything about not doing what he did. I'm not going to argue that what he did should or shouldn't be ok, but FB has no control over what other people do. Yeah, maybe it'd be better if people asked for clarification first instead of asking forgiveness, but there's no way to force them to do that. FB does have control over what their policy says and allows/disallows. If you don't want people to exfiltrate any data and look at it on a local machine instead of just keeping a session on the exploited machine, then put that in the policy. If you don't want people poking around for other exploits after gaining access, then spell that out in the policy.
The point of the policy isn't to stop everyone. Sure it will stop some/most people, but some people don't listen. The point is that when it happens again you can point to the clear policy and say "you're an asshole, we're not paying you because you violated our explicit policy, and we are reviewing what you did with our lawyers to see if we should notify law enforcement".
Yes, doing this fix/policy update now doesn't fix this situation, but it prevents anyone else from doing something similar and claiming ignorance of this situation and FB's position.
If you're going to prosecute someone over details, you had better make sure your policy is very detailed, not vague, and not left open to interpretation. In this regard, Mr. Stamos failed.
A major root cause is that the published guidelines say nothing directly about exfiltrating sensitive data. This leads to legitimate confusion for exactly the reasons given. The actual policies make sense given what the published guidelines say, but that's not good enough.
The policy needs to be changed. Not by much, but it needs changing. Here is a Responsible Disclosure Policy that might work better than your current one:
We expect to have a reasonable time to respond to your report before making any information public, and not to be put at any unnecessary risk from your actions. Specifically you should avoid invading privacy, destroying data, interrupting or degrading services, and saving our operational data outside of our network. We will not involve law enforcement or bring any lawsuits against people who have followed these common sense rules.
The real issue here is Facebook's poor infrastructure security and slow response time. If the exploit had been previously reported, why was the privilege escalation still possible? Why did a (supposedly) known-to-be-vulnerable host have access to secret information at all?
The exfiltration of data may have been unethical, but Facebook has no one to blame but itself for it even being possible.
Companies take big risks in running bounty programs. They are giving hackers permission to test their live site. This isn't something that is popular with everyone inside a company. Bounty hunters need to respect that bounty programs are a two-way street. If you find a serious issue like remote code execution you need to be extra careful. Wineberg was an experienced hunter. He should have known better.
So, you just wanted to cause him reputational damage and personal problems as an act of petty retaliation. You're right on some of the technical issues here, but in terms of ethics, your behavior has been far worse than his. I don't think you realize how much long-term damage you're doing to your relationship with the wider security community by threatening to jail people who were at no point acting maliciously and at no point caused any damage.
Guy discloses a vulnerability. He knows it potentially has wide reaching security concerns, and downloads enough data to prove that if necessary.
Guy gets shortchanged on the bounty, indicating that either a) Facebook is trying to shortchange him, or b) Facebook doesn't realize how big of a vulnerability this truly is.
Everything about Facebook's response indicates b): they didn't realize how big a vulnerability this truly was. Otherwise, the data he downloaded would have been useless by the time he used it.
You can argue that the guy "went rogue" by holding information hostage, but the fact is he deserved to be paid more and he was able to prove it. Now Facebook looks bad.
Guy's reaction to rejection: take hostages and threaten Facebook. Facebook moves to defense and cuts guy off.
You are not a good neighbor if you kidnap someone's family to prove their busted lock is a big deal. You show them their lock is busted and trust they can figure out what harm that could lead to. The alternative is companies being hostile to people just looking around their locks, which is the world of the 1990s and 2000s that responsible researchers are trying to avoid going back to.
One obvious hole I can see in Facebook's story is that they insinuate that Wes broke back into the server after they disputed the bounty. If this were true, they did nothing in response to the problems Wes found for over a month.
If you look at Wes's timeline, he says access to the server was no longer possible a few days after he filed the second report.
It comes down to who you believe. Personally, I find Wes to be more credible. It sounds like it was most likely a misunderstanding by Facebook. Now they are doing damage control.
He definitely took data off of Facebook's server.
Also, you misunderstand: the access denial he mentions was a firewall change earlier in his story, brought up merely while speculating about other systems he could have penetrated--completely separate from the S3 buckets he took data from.
From Facebook's perspective it could very well have seemed like he went back for the goods since he submitted three separate reports, the last of which triggered the response. But this is also irrelevant, the question is whether he took data off or not and this is unambiguously yes, by Wes's own admission.
FB: He's an experienced bug bounty hunter and should know where reasonable borders are.
All the experienced security guys itt: He's an experienced bug bounty hunter and should know where reasonable borders are or at least not pivot/escalate without asking. Also never dump and hold data.
Everyone else: What he did isn't technically against the rules FB wrote, so they are screwing him, despite it also being written that they have sole discretion.
Ah, so those who disagree are inexperienced? No true Scotsman indeed!
(For the record, I do, though I'm not sure I'd flatter myself by saying I'm "experienced" exactly.)
* You are stating that all (not some) experienced security folks agree unanimously. The implication is that those who disagree are not "experienced security guys" (as you called them: "everyone else") - they are the ones who aren't true Scotsmen
* you assume those who don't explicitly indicate that they work in the infosec industry do not work in the infosec industry
* also, you don't need to be "experienced" in the infosec industry to be correct or wrong.
I don't think the person you were replying to was suggesting that any infosec people who fully support the researcher aren't real infosec workers. I just don't think he saw any who even claimed to be.
Defense in depth means every defense needs to be validated, not just the outer layers.
PS: Further, if FB says they know about a bug then anything he downloaded could easily be in the wild and should be investigated.
What's really getting to me is the overwhelming number of responses containing idea that everything that isn't explicitly banned is permitted, despite the recipient saying "No" (even indirectly/without justification) at some point. How to deal with the grey area of consent is something that every adult should know, and it's worrying to me that so many here seem to feel entitled to whatever they can take as long as it wasn't explicitly forbidden.
Obviously FB should update their policy, but at the same time it's important that we as the community use this as an opportunity to learn and discuss where the implicit boundaries are, where one needs clear-cut agreement to proceed.
Consent is sexy.
The question seems to be if he did it in good faith and within the rules of the bug bounty program.
The next $1M bug that gets discovered will probably go out onto the black market because of Mr. Alex's actions here.
Facebook has now demonstrated that they will not only not pay you, but they will attack you publicly, slander you, and threaten you. Now what does that mean for the next hacker coming along? Someone who is clean and wants to stay clean will avoid Facebook. Someone who isn't will realize that Facebook is now an easier target because of the clean guys staying away.
What if Instagram leaked all your browser information? People could then fingerprint billions of users and figure out who is surfing their sites (and see their pictures). What if there are pics on Instagram that people rely on being private?
"Wes was not happy with the amount we offered him, and responded with a message explaining that he had downloaded data from S3 using the AWS key..."
This isn't about whether viewing files on the internet is technically downloading them; this is about retrieving files of enough size and quantity that you have to queue them up for an overnight download.
The thing that gets me about this whole situation is that Facebook either didn't understand the extent of the vulnerability (which seems to be the case to me, and in which case I think Wes Wineberg should have been rewarded far greater than they did for showing them how serious it was, though I wouldn't say this is literally a "million dollar" bug) or they were grossly negligent for not patching it up a lot sooner than they did. They can't have it both ways.
Are they bad at managing their bug bounty program, or just bad at responding to serious security issues? It has to be one or the other.
* little guy offends large company, usually through some totally well-meaning and innocent activity that, if illegal at all, is only so due to obscure, obsolete, and/or obtuse laws
* large company unleashes unholy wrath of $1000/hr law firm on little guy threatening to destroy little guy's world if he doesn't immediately comply with all demands
* lawyers laugh at the plight of little guy and say it doesn't matter what he thinks because he can't afford to oppose large company
* little guy is forced to comply no matter how absurd large company's demands are, because only other large companies can oppose large company in court
* should the large company feel inclined to sue the little guy even after he acquiesced to their ridiculous demands, little guy loses all of his possessions in his attempt to pay legal fees. little guy will run out of money before the case wraps, resulting in him getting saddled with a judgment for massive personal liability (cf. Power Ventures)
* large company is free to make the same infractions whenever they feel it's appropriate to do so, because what are you gonna do, sue them? (cf. practically every company who has ever brought a CFAA claim; Google's whole business is violating the CFAA, as well as various copyright laws)
* bonus points: large company has friends in the prosecutor's office and gets the little guy brought up on life-destroying criminal charges (cf. Aaron Swartz). if the case makes it to trial, little guy spends time in jail (cf. weev)
I don't think I missed anything.
> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.
And once they knew how severe it was, they ought to have acknowledged the severity and paid him a lot more.
It's a really grey area beyond an initial 'access bug', so it pays not to go there. Otherwise, where should Wes have stopped? Keep proving more vulnerabilities until he's downloaded their code? Or got private photos of Zuckerberg's kid? Just to show that it is indeed a serious bug?
"If the security researcher did not disclose the RCE, but instead sold it to highest bidder, how much would that likely pay in this situation?"
Paying security researchers to properly disclose is a way of financially encouraging the right behavior. While it may be tough to stomach a large payout for responsible disclosure, do you really want them considering the alternative? It's like tipping in a restaurant to ensure food quality.
They are trying to condone data access that, in all honesty, borders on unethical behaviour.
Any professional who participates in any company's bug bounty should respect their rights as well.
Whether the keys were accessible due to a technical blunder is secondary. The researcher's actions, a) accessing data he did not need to, and b) making this into a big deal when he was the one not respecting the bug bounty's limits, make this a case for FB.
You're right. An important thing has gotten lost in the shuffle. We should be pointing and laughing at Facebook. Then when the giggling dies down, asking: something this bad, with such a "trivial" vuln, managed to get published; what else have their now-proven-to-be-shitty practices left open?
He found stuff. He didn't use it (AFAIK) for anything bad.
Reminds me of the way business dudes and non-security devs used to react before security got all popular and legit. And they could have even avoided the whole public brouhaha if communication had been better between the tester and the product staff. Classic blunder.
Complaining to his boss and acting all pissed suggests that they do not understand that they did mess up big time.
They jumped to contacting someone over his head before engaging in real talk with him. And then their public response is covering their ass by arguing over the fine print of how he shouldn't have been poking around where he was.
Obviously there are differences, but similarities are fun too!
Doesn't pass the sniff test from here.
(Admittedly there's no doubt an iceberg-sized bit of this whole drama that neither side are admitting exists.)
While Alex may have a right to be upset at Wes for taking data, Alex should recognize Wes is likely the least of his worries now. Wes wasn't/isn't a professional security researcher... and he was able to do this. That should frighten Alex, and Facebook should have been much more rewarding to Wes for forcing this issue to be taken care of.
This raises way more questions than it answers. Most notably: why aren't you recording who accesses user data?
"Quick, shut off the logging on those servers, so we don't have any record of who logged in on them!"
It also appears, based on your post, that you think that stating, approximately, "I hope we don't need to contact our legal teams or law enforcement about this," does not constitute a threat of legal or law-enforcement action, and I also find that deeply troubling. While I think you could make a legal distinction that these weren't technically threats of such action, any reasonable person in the researcher's position would be positively idiotic if he/she failed to feel threatened in that way by such statements.
In case it isn't clear, most people will interpret "I want to keep this out of the hands of lawyers" exactly as a threat to start legal action. To be honest I'm not really sure how else it should be interpreted?
OK, so let's look at this - your response showed us one extremely important issue: no clear rules in your system. Wes, by exploiting your system, actually exploited your lack of rules regarding the handling of white hat hackers.
Listen, a hacker should exploit ALL possible issues. He exploited your weakest one - the rules behind the system. Close the case - reward him XX,XXX for exploiting a weakness in your policy for dealing with white hat hackers, and spend as much again to bulletproof that policy. Do not reward him for hacks that are unethical, as that would be wrong, but do it for the other exposure - the small dent in your white hat hacker system.
You're basically telling bounty hunters not to go any further to "prove" the severity of the bug, because you're saying, "Trust us. We'll measure the maximum impact and reward you fairly."
And yet, you're not being fair at all. So the bounty hunter needs to "prove" the severity of the bug for you. You're digging your own grave here by not acting in good faith. The next guy who finds a good bug is not going to disclose it to you - he's going to sell it on the black market for a few hundreds of thousands. Or millions.
To me, this demonstrates that had Wes not reported the AWS keys, Facebook would never have rotated them. I would argue that the fact Facebook needed to take action to resolve Wes' third vulnerability submission could be considered an admission of its legitimacy as a bug, and therefore that the bug is indeed worthy of a bounty.
Not Valid After: Thursday, 31 December 2015 11:00:00 pm Australian Eastern Daylight Time
Maybe they'll upgrade it to something better than:
Signature algorithm SHA1withRSA WEAK
It doesn't explain why Instagram has been happily using a known-compromised wildcard SSL key for two weeks now.
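For anyone who wants to verify both claims from their own machine, the same sort of openssl one-liner used further down this thread prints the validity window and the signature algorithm (output will obviously vary by the time you run it):
$ echo | openssl s_client -connect www.instagram.com:443 2>/dev/null \
    | openssl x509 -noout -text \
    | grep -E 'Not (Before|After)|Signature Algorithm'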
Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...
No, I don't wonder about this at all.
What a coincidence...
Instead, he sat on the keys for over a month, and in the meantime used them to download everything he could find onto his personal computer. Simply testing that the keys were live and disclosing this immediately would have been more than enough proof of a bug here.
Edit: downvoters - please explain how using keys to access production systems for over a month without disclosing is acceptable white-hat behavior?
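For contrast, here is what a liveness test that touches no data looks like - just a sketch, with obviously fake key material. It returns the key's account and IAM identity and nothing else, which is ample proof the credential works:
$ AWS_ACCESS_KEY_ID=AKIA... AWS_SECRET_ACCESS_KEY=... \
    aws sts get-caller-identity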
All I'll remember of this entire story is the outcome- huge vulnerability found (high black market value), and Facebook is talking about lawyers and paying small bounties. Nobody will remember that technically he broke a rule that wasn't well explained. The next Wes will have his major vulnerability in hand, and have this story in his mind. It may change his decisions.
Make this right. Even if you are in the right who cares? You need the perception of your program to be impeccable, paying more than researchers expect. Facebook can afford it more than they can afford to blemish the image of their big bounty. Invite Wes to help you rewrite the confusing parts of the rules. Leave that story in everyone's memories instead.
Wes COULDN'T have been working for Synack to find bugs as your program doesn't even allow for it.
Next time someone uncovers your private keys at least they'll know upfront that there is no money in doing the right thing which might just make selling them to the highest bidder seem like a more compelling option.
This issue, which enables uber-credible phishing and other attacks with the assistance of Facebook (since Facebook falsely reports to the user that the link goes to a credible domain of the attacker's choosing while actually sending them to any URL controlled by the attacker), was rejected. Not only was I told that it was not a bug that I could be paid for, but that it really wasn't a bug at all, and that they would do nothing about it.
If these kinds of serious issues are essentially ignored because they don't meet the very narrow guidelines set forth in the bug bounty program, Facebook is going to miss a massive number of problems with its platform.
I feel like that bullet point answers your question pretty well.
Alex would have been aware via the original RCE bug that Wes was reporting on behalf of himself and not his employer. Also, had Wes been reporting the bug on behalf of his employer, it is reasonable to expect he would have mentioned that from the beginning.
I presume that Alex knew these things, but he decided to take a more dramatic approach to get Wes to stop, by contacting his employer. It obviously would be leverage, and Alex knew that he could also leverage his position at Facebook to use a security firm in the industry (who would understandably not want to do anything to jeopardize its relationship with one of the largest internet companies in the world) to ask their employee to stop.
I do not believe that Alex legitimately believed that Synack (Wes' employer) was behind the research, but he knew it would be an effective way to stop Wes from continuing, so he decided to pull those strings.
Even the researcher doesn't claim that Alex contacted the CEO of Synack because of a dispute over the bounty.
Rather, it's the other way around: the researcher disputed the bounty, and did so by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.
Alex contacted the CEO of Synack to ensure the credentials weren't used, because if they were, Alex couldn't control Facebook's response: they've got a bug bounty participant who has essentially "gone rogue" and is exploiting Facebook servers long after they've told him to stop. They need him to stop.
I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."
A simple phone call directly to the researcher that cut through the bullshit would have made everything better. But he had to make sure it didn't get out and the only way he could do that was by using the only leverage he had: The researcher's employer.
If you understand how security works inside of big companies, this is a really silly theory to run with. CSOs are happy when shit like this gets discovered, because it gives them ammunition to get the rest of the company to adjust policies.
If you were working from the understanding that a CSO comes in and just immediately tells a team of (what is it) NINE THOUSAND developers how to do stuff differently... no. That's not how it works.
The problem is that nobody at Facebook with the possible exception of like 10 people none of whom are Alex can make huge operational changes like "change all the ways we store keys across an entire huge business unit". So, you tell Alex you took AWS credentials he didn't know existed and you're going to start mining them for a story you're bringing to the media, and now Alex is in a position where he's NOT ALLOWED to sit back and try to manage the situation himself.
Delete the keys or I have to tell legal what's happening.
The researcher NEEDED TO HEAR THAT.
>> The researcher NEEDED TO HEAR THAT.
I'm not in security, but from the outside looking in, how things worked out just doesn't smell right.
If "the researcher NEEDED TO HEAR THAT" is the priority, then why waste time looking up who the guy works for and calling them instead?
The simplest and most obvious way to tell the researcher is to tell him directly in the clearest way possible. It isn't as though there wasn't a pre-existing line of communication with the researcher.
So I still don't see how calling the guy's boss trumps that in terms of scariness. Because if I'm the wronged party (i.e., FB), that's what I'd do if I couldn't resolve it amicably.
In some of his posts, he has been, however, comparing the researcher's dump to criminal activity -- something I am not in disagreement with.
His implication that calling the researcher's boss is a sensible approach to intimidating the researcher for potentially criminal activity -- that in particular seems like a stretch if he's being truly objective.
There are basic things you can do to mitigate or isolate damage in AWS and they either aren't doing it or have done it badly. Even if he couldn't convince the rest of the company that god-mode keys are bad, he still could have built out some basic infrastructure to track when and from where the keys were being used, so red flags could be raised when some random IP address is being used to pull down several buckets.
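None of that requires exotic tooling, either. A rough sketch with stock AWS features (bucket names here are made up, and the target bucket needs log-delivery permissions): S3 server access logging records the requester, the object, and the source IP for every request, and even a crude scan over the logs would have flagged a random outside address pulling whole buckets:
$ aws s3api put-bucket-logging --bucket instagram-assets \
    --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "audit-logs", "TargetPrefix": "s3/"}}'
$ grep -hv ' 10\.' s3-access-*.log    # crude: flag any request not from our 10.x space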
> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.
That's a big mistake. DO NOT EVER USE YOUR COMPANY EMAIL ADDRESS if you are doing this on your own. The employer has the right to know. Imagine using a company email address on Ashley Madison. Yeah, plenty of people were embarrassed after that hack.
Second, everything else being equal, Alex going to the CEO without calling or mailing the researcher first was a mistake. Going to someone's boss and saying "please do something, I don't want to get the lawyers involved" IS an implicit legal threat, both to synack and the researcher.
>At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.
According to Alex this is the timeline:
1. Researcher not happy with sum
2. Researcher already in contact using Synack email address
3. Alex calls Synack CEO
From the researcher's blog:
>I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.
This means that either Alex is lying, or he's telling exactly the facts needed to lead to a specific conclusion and nothing more, or the researcher is lying. And he's "written blog posts that are used by Synack"? Come now. Reads a lot like someone looking for a third item so they can make a comma-separated list of reasons. His post smells like bullshit.
It's possible that we're all correct: This guy could be a wildcard researcher that plays fast and loose and the CSO could be covering his own ass. You say he's building a first rate application security team. Is it hard to believe that he could have made the mistake of focusing almost exclusively on that?
Given who I'm replying to, I'm assuming that I'm missing some key piece of the puzzle.
(And I totally acknowledge it doesn't change the circumstances of what either side has done, I'm just curious)
I'm just pointing out that taking AWS keys is a big deal, because it's legally a big deal.
>If you give us reasonable time to respond to your report before making any information public, and make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.
IANAL, but it could be argued (in court) that he had Facebook's permission to get the AWS keys. In his opinion (and mine) he made good faith efforts to avoid privacy violations.
Facebook's official disclosure policy has legal weight. There is a legal concept (whose name escapes me) that could apply, which in layman's terms says the official disclosure policy gives him Facebook's tacit approval - I first heard about it in Oracle v. Google, where Google argued that a blog post congratulating Google provided tacit approval.
That's a mischaracterization given his description. He examined the filenames/metadata specifically to avoid buckets that might contain user data.
2. This assumes that he was perfectly accurate in his assessment of an unfamiliar project's naming conventions, data structure, etc.
3. This assumes that he was perfectly reliable in making the actual copies and didn't accidentally include potential personal data (e.g. who knows what might be in a log file?)
The problem is that we're talking about someone who already decided to exceed the bounds of what was clearly protected under the bounty program. He'd already reported the initial vulnerability and been paid for it but waited until later to mention that he'd copied a bunch of other data, had access to critical infrastructure, and wanted more money.
It seems fairly likely that this wasn't malicious but rather just poor judgement, but that makes it very hard to assume that outside of that one huge lapse in judgement he did everything correctly. It's really easy to see why Facebook couldn't trust his word at that point since it's already far outside normal ethical behaviour.
To the second and third: They only require that a researcher "...make a good faith effort to avoid privacy violations..." and I'd say he met that. You can argue that the entire endeavor wasn't in good faith but he certainly made a significant and conscious effort to avoid private data.
I think his biggest lapse in judgement was that he brought security operations issues to light in a bug bounty program run by the people that would be most embarrassed by them. Application security bugs are created by the engineering team and the CSO's application security team fixes them (or advises or whatever). Security operations issues are entirely the responsibility of the CSO's department.
Facebook (as an organization) should be thanking him. While he didn't expose application security bugs he exposed significant operational issues and blind spots. Keys with far too much access, lack of log inspection, lack of security around what IP addresses a key can be used from, etc. Operational issues and lapses in operational security are what got Twitter in hot water with the FTC in 2010. It's not as easy to play cowboy with operations as it used to be.
The CSO hasn't been around for long but by all accounts he poured a lot of effort into hiring an application security team. Perhaps that's his specialty but even one experienced technical manager hired for security operations could have caught these basic issues. They probably wouldn't have addressed the lack of least privilege in that time frame but they could have easily spun up logging to catch some rando on an unknown IP address using their keys.
But like I said, he hasn't been there for long so I don't blame him for the failure. What I do blame him for is calling up the employer to threaten them as leverage to shut up the researcher. I blame him for posting a thinly veiled justification for doing so. He could have addressed this openly, talked to the guy directly and went to the other C-level execs with it as a justification for getting everyone on board with fixing it but he tried to keep it contained to his department.
I understand how he must feel being the new guy who's responsible for the outcome but not for creating it. I know he'll get questions that he might not be able to answer since they probably aren't logging bucket access. Questions like, "Who else got a copy of these keys and what did they access?" Saying "I don't know and we may never know" in response to that, even if you weren't in charge more than three months ago, is rough.
Wouldn't it have made more sense to contact the researcher directly, rather than using his position of power to pressure the researcher's company's CEO?
Why not assume good faith? (Which is what I would think a white hat bug bounty program should assume)
"he has interacted with us using a synack.com email address,"
invalidates my reading that he was using his company's email?
> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.
If that is true, it was either poor judgement or bad intent on Alex's part to call Synack.
The Facebook post does not, in any way, contest that.
I'm not disagreeing with you, only making it clear that yeukhon was played by Alex exactly as intended so he'd be out there defending him on sites like HN.
edit: Or - after calling the CEO - he should have contacted Wesley directly so they could deescalate the problem together.
I don't disagree. But why go through his employer, when they already had a direct line to the researcher himself?
The right move here would have been not to threaten Wes, to pay him, and to just update the policy.
Lesson learned for Alex and his friends: Do not threaten individual contributors or suffer massive freaking drama. Thank you internet.
The response from FB's CSO is very specific to a very specific blog publication, not to the flaws in how their AWS buckets are used.
> "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."
mischaracterizes the response by FB's CSO as one that attempts to draw criticism away from operational flaws by instead placing focus/blame on the researcher's methodology.
A security researcher went public with a story of "I found this massive security hole and Facebook tried to avoid paying what I thought it was worth, and then threatened me with legal action"
The response that Alex thinks he needs to make is "my actions were reasonable because ..."
From external appearances it seems as though he is more concerned about looking like a heavy-handed, lawyer-invoking CSO than about the publicity around FB having an unpatched RCE that allowed access to highly-privileged AWS keys.
What he chooses to write about is a reflection of what he saw as the most important news in the original blog post.
I suspect he's actually right. The blog post will probably raise more bad publicity around the way FB handled the research & disclosure than the existence of the bug, and it's the piece that needs to be resolved well.
How would that change anything?
If Facebook did rotate all keys the moment the researcher reported it, they made no difference.
If Facebook did not, then they aren’t taking care of their security properly.
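For scale, rotating an AWS access key is three CLI calls - a minimal sketch, with a hypothetical IAM user and an obviously fake key ID; you issue the replacement first so services can cut over, then disable, then delete:
$ aws iam create-access-key --user-name instagram-ops
$ aws iam update-access-key --user-name instagram-ops \
    --access-key-id AKIAEXPOSEDEXAMPLE --status Inactive
$ aws iam delete-access-key --user-name instagram-ops \
    --access-key-id AKIAEXPOSEDEXAMPLE
Either they did this within hours of the report or they didn't; neither answer flatters them.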
I don't know, if the guy was just shaking them down then maybe trying to get him fired is indeed a reasonable thing to do, but I don't buy that anyone would have just assumed under the circumstances that he was doing all of this on the clock.
Now, Wes exfiltrating data rather than just looking at it? Not cool. But Facebook's side of the story is just as biased as his.
Huh? How did you make this connection? Why would he then report his findings to you?
From my point of view, contacting his employer was clearly meant as a gut punch.
It's very rare for a single vulnerability to grant you the keys to the kingdom. If you look at Pwn2Own, the vast majority of the hacks leverage more than one.
Most major attacks start with a small bug.
The real severity of a vulnerability is how far it can be pushed to broaden the scope. In this case that admin panel was just an entry point to a whole chain of security SNAFUs (AWS keys in files, at a multi-billion-dollar internet company, seriously?).
To reiterate, he got access to:
- source code
- AWS keys
- a plethora of 3rd party platform keys
- a bunch of private keys
- user data
This might not be the million dollar bug, but it's close.
Just think about what an actual attacker could have done with it:
- login as / impersonate ANY instagram account
- impersonate Instagram itself (code + SSL keys!)
- inject malware into the Instagram app and sign it with your keys
- download tons of user data
- wreak havoc in AWS (possibly expanding what he had access to - we don't know what else he would have been able to access had he spent weeks, not hours, exploring).
This is not a missing permission check allowing you to delete other people's photos. This is huge, and based on that, credit and a significantly higher bounty are due.
Aside from that the handling of the whole matter was not good:
- if your policy is not precise, interpret it to your disadvantage. You screwed up by not making it clear
- contacting his boss should only happen (if at all) after he himself has been asked for the same account of events
- the post about "bug bounty ethics" misses the point. By your logic, the Heartbleed investigation should have ended when someone discovered a buffer over-read, without exploring where it leads.
You lost me at this point. Who do you think you are really?
Isn't it a security flaw that a single AWS key was able to access all of Instagram's data?
A security "mistake" then? :)
I'm glad companies can offer transparency like this.
There were other personal attacks in his response that I've talked about here: https://news.ycombinator.com/item?id=10755402
The people who like you the most and are the easiest to persuade.
There is no bug more critical than one that results in complete access to Instagram infrastructure.
Sure, the bug is stupid, but you are fooling yourself.
$ echo | openssl s_client -connect www.instagram.com:443 2>/dev/null | openssl x509 -noout -dates
notBefore=Apr 14 00:00:00 2015 GMT
notAfter=Dec 31 12:00:00 2015 GMT
If this researcher was able to access it via not much more than a hole that was _already reported multiple times_, then I think it's not a stretch to think that [many?] other less honest parties could (and in my opinion most likely do) already have it.
If it were me, even if it's definitely only a single researcher who got access (and it doesn't sound like they know that for sure - but regardless), something _that_ sensitive would have to be rotated anyway. If it had gone to anyone outside the teams that strictly require access to it operationally, I'd rotate it, let alone someone outside the company.
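And generating the replacement key and CSR is a one-liner; the slow parts are CA reissuance and redeployment, which is what makes a two-week window hard to defend. A sketch (filenames and subject are illustrative):
$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout wildcard-instagram-new.key -out wildcard-instagram-new.csr \
    -subj '/CN=*.instagram.com'
Submit the CSR to the CA, deploy the new cert, then revoke the compromised one.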
Not even one attempt to talk to the guy like an adult about what he was doing? You couldn't even be bothered to say anything?
You'd be amazed how a polite reply to the effect of, "thanks, you've proven your point, and we are getting a little uncomfortable with where this is headed" might have solved all of this. If he ignored you and kept hacking after that, by all means steamroll him, but if you don't even have that much respect for your peers, I'm not sure why you bother with the bounty program.
edit: Alex, how about the "shit, we really fucked up; I apologise to our users, yadda yadda" blog post?
Alex Stamos is a good person† who has been doing vulnerability research since the 1990s. He's built a reputation for understanding and defending vulnerability researchers. He hasn't been at Facebook long.
To that, add the fact that there's just no way that this is the first person to have reported an RCE to Facebook's bug bounty. Ask anyone who does this work professionally: every network has old crufty bug-ridden stuff laying around (that's why we freak out so much about stuff like the Rails XML/YAML bug, Heartbleed, and Shellshock!), and every large codebase has horrible flaws in it. When you run a bug bounty, people spot stuff like this.
So I'm left wondering what the other side of this story is.
Some of the facts that this person wrote up are suggestive of why Facebook's team may have been alarmed.
It seems like what could have happened here is:
1. This person finds RCE in a stale admin console (that is a legit and serious finding!). Being a professional pentester, their instinct is that having owned up a machine behind a firewall, there's probably a bonanza of stuff they now have access to. But the machine itself sure looks like an old deployment artifact, not a valuable asset Fb wants to protect.
2. Anticipating that Fb will pay hundreds and not thousands of dollars for a bug they will fix by simply nuking a machine they didn't know was exposed to begin with, the tester pivots from RCE to dumping files from the machine to see where they can go. Sure enough: it's a bonanza.
3. They report the RCE. Fb confirms receipt but doesn't respond right away.
4. A day later, they report a second "finding" that is the product of using the RCE they already reported to explore the system.
5. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to pay for the second finding, and asks the tester not to use RCEs to explore their systems.
6. More than a month after Facebook has nuked the server they found the RCE in, they report another finding based on AWS keys they took from the server.
So Facebook has a bug bounty participant who has gained access to AWS keys by pivoting from a Rails RCE on a server, and who apparently has retained those keys and is using them to explore Instagram's AWS environment.
So, some thoughts:
A. It sucks that Facebook had a machine deployed that had AWS credentials on it that led to the keys to the Instagram kingdom. Nobody is going to argue that, though again: every network sucks in similar ways. Sorry.
B. If I was in Alex's shoes I would flip the fuck out about some bug bounty participant walking around with a laptop that had access to lord knows how many different AWS resources inside of Instagram. Alex is a smart guy with an absurdly smart team and I assume the AWS resources have been rekeyed by now, but still, how sure were they of that on December 1?
C. Don't ever do anything like what this person did when you test machines you don't own. You could get fired for doing that working at a pentest firm even when you're being paid by a client to look for vulnerabilities! If you have to ask whether you're allowed to pivot, don't do it until the target says it's OK. Pivoting like this is a bright line between security testing and hacking.
This seems like a genuinely shitty situation for everyone involved. It's a reason why I would be extremely hesitant to ever stand up a bug bounty program at a company I worked for, and a reason why I'm impressed by big companies that have the guts to run bounty programs at all.
† (and, to be clear, a friend, though a pretty distant one; I am biased here.)
If it was not done for the purpose of intimidation, then Alex simply would have asked the CEO if the researcher was acting on the company's behalf and after hearing "no" would have ended the call and contacted the researcher directly.
Seems simple, doesn't it? Perhaps you are not seeing it due to your friendship, but it seems like a dirty move and only serves to call into question how Alex handled other aspects of the situation.
Then the CEO is going to contact the researcher and he's screwed either way. God knows what the CEO would have said to the researcher privately. Having a middle man to translate is a bad idea in an emergency.
Let's face it, when you use your work email and make another company paranoid, you are putting people on the spot. The employer needs to know (they have legal responsibility), and given the prior research they did and the researcher's claim, I think the reach-out is absolutely correct.
Instagram's infrastructure has flaws. That's bad, but everyone's infrastructure has flaws. Shit has to be fixed. Doing more than what was needed is bad. If I were told to stop dumping data, I would stop.
Think about it a different way. If this researcher had found SQL injection in a webapp, dumped the usernames and passwords, and reported the vulnerability for a bug bounty, he should get paid. If he kept each of those credentials, and then logged into other systems using higher-privilege accounts that he'd compromised even after the SQLi was fixed, he is basically continuing the exploitation of an already-fixed bug. Those don't deserve payouts. Similarly, if he'd established some sort of persistence (such as a reverse shell, etc.) on compromised assets, he can't keep coming in and getting more and more bounty payoffs. Fruit of the poisonous tree, in this case.
Where I disagree with tptacek is with regard to the benefit of bug bounty programs. Although I'm not currently running one, I find the idea fascinating and helpful for two primary reasons: first, you're almost definitely going to see generally better results in a well-managed bug bounty program (not necessarily something like Facebook's White Hat program) than traditional pentests or application security assessments. More eyes are almost always better when searching for tricky problems. Secondly, if you're a large enterprise, there are already people "testing" your security. I'd much rather be able to pay out a researcher than drive them to more nefarious buyers. You will probably encourage many people to test your security (which screws up metrics) but if finding security problems is the ultimate goal, it's worth it.
Even in this case in point, Facebook did discover an RCE that could have been (and kind of was) extensively exploited due to the fact that they held the bounty. If an actual malicious hacker had found that problem first, they would have been in significantly worse shape.
Why did those credentials still work post-report?
What if those credentials were accessed from a public dump?
The outcome of this entire clusterfuck of a bounty is one of the reasons there are still very well paid blackhats. There are no rules or terms to follow.
If their terms aren't clear (the terms he's citing certainly weren't intended for keys, rather Facebook user accounts/information), pay out and fix them.
Also, you can't just expect that "oh, just delete your data pls" will work, can you? You can't trust anyone that literally hacks your system.
With a bug bounty programme you don't generally authorise the kind of post-exploitation activities which we see here as leading to the really serious exposures, and that's not surprising as you can't easily authorise a set of unknown people to be processing your customer data.
This differs from an engaged penetration testing firm, with whom you have a contract which covers things like handling of data gained during a test.
So I don't really see bug bounties ever replacing penetration testing companies for internal work or anything that requires accessing customer data as part of the exploit...
Or do you think he should have just stopped and Facebook should have realized how bad it was and paid him a lot more than $2500?
No wonder there's a flourishing (and well-paying) black market for vulnerabilities. I wonder how much this keys-to-the-kingdom vuln would be worth (MITM Instagram, bootstrap a botnet, steal celebrity pics... the possibilities are endless).
I don't believe any company would pay $1M for a bounty on their own systems. Only people who intend to use the vuln, or to fix it as they are the vendor.
For a vuln to go for $1M requires "discovering SQL injection"-levels of vuln. MS paid $100K for an entire vuln class (an ASLR/DEP bypass discovery), and promptly patched the shit out of it. For a remote vuln class, I could see them paying $1M quite happily to not have all of their products re-owned.
Alex is good friend of mine and I've known him since college. He's definitely a good guy and understands the ins and outs of security vulnerability research, having done it himself for many years. I'm sure he didn't take the action of calling the researcher's employer lightly, and probably had a really good reason to do so.
There has to be a side of this story we aren't hearing, and probably never will.
He's the CSO, and this occurred under his watch. The exploit was 2 years old and well known. It highlights an internal security problem at Facebook et al., of which Alex sits at the top.
In this situation, his years of "doing it himself" is unlikely to have factored in - rather, he felt like he dropped the ball and could be facing some consequences, or at the very least felt embarrassment.
This would have led to a rash thought process, and perhaps Alex jumped to the conclusion of some sort of sabotage by another company.
It doesn't look like the SSL cert on instagram.com has changed recently, and the pentester specifically claims to have obtained its private key.
"stuff" was the keys to the kingdom, do you think this is acceptable for a company like facebook? So instead of them making an apology, the CSO is trashing the guy who gave them the wake up call?
I do think you are heavily biased ;)
Alex just went the drama route.
Why make such a bad situation worse, if you don't have to?
FB messed up. The researcher partly messed up too. Fix it and move on.
This reminds me of the illusion of objectivity in journalism. If you pretend to be perfectly objective and unbiased, you're lying.
Sounds like a jerk to me.
Escalating issues with a company to the CEO of that company doesn't seem like jerk behavior.
Wes counters that, "[Alex] never for a second believed I was operating on behalf of Synack"
I'm not sure how Wes knows what is going through the mind of Alex, so I'm inclined to take Alex's word on this.
As blazespin mentioned in this thread, Facebook's own terms states that they only pay individuals. That's how Wes knows - because Facebook's bounty program never deals with companies. The only other explanation would be Alex is ill-informed about the terms of Facebook's bounty program.
Dumping the users table on an 'internal' (heh) dashboard -- any company doing these bounty programs needs to clarify what a 'user' is. Is it someone using their application, or does it include all employee information as well? It's an important distinction.
That said, Alex Stamos and the rest of the security team should have tried to figure out what vulnerabilities existed from this server instead of just nuking it and thinking that the problem was solved. That was lazy and stupid.
The issue here is that, in hindsight, FB failed at this step.
They nuked the server, but they didn't determine what sensitive information was available on that server, and take steps to mitigate those risks.
I think that's an understandable mistake - cleaning up after a server intrusion is hard. Knowing how much to do after a possible intrusion is even harder. But it is still a mistake and it happened on Alex's watch.
If the purpose of the bounty program is to find out about your security mistakes, then the program did its job here, and Alex should be pleased that the problem was reported so that they could fix it.
That the researcher found the mistake by overstepping what is considered ethical (and I have no doubt that they did overstep) creates a very difficult situation - you don't want to reward that behaviour, but you do want to know about security problems and this one was only discovered/reported because of that bad behaviour.
In that difficult situation it is all the more important to tread carefully. The easy cases where you're paying out a $10k bounty typically don't require much finesse. It's the tricky cases where you need to make sure your actions are well considered and above-board at every step.
From Alex's own summary it's evident that he didn't handle it as well as he could have.
Two of the longest paragraphs in Alex's write-up cover what he said to the CEO of Synack, even though Synack had nothing to do with this. Even if we accept that Alex thought it likely that Wes was acting on behalf of Synack (personally, I don't think that was a reasonable conclusion to draw, though I assume Alex is sincere in his view that it was), he should have determined that up front, and then, once he knew it was not work related, he should have avoided:
- making accusations about Wes's ethics to his boss ("Wes ... had acted unethically")
- suggesting that his external behaviour has implications for his employment ("Wes's behavior reflected poorly ... on Synack")
- bringing in the threat of lawyers ("keep this out of the hands of the lawyers")
When faced with the difficult situation of legitimate security research that has (well) overstepped the ethical boundaries, all the evidence is that Alex jumped to the position of protect yourself, protect the company, intimidate and control the researcher and though that is a common and understandable reaction, it's not the way you turn a bad situation like this into a good one.
1. Facebook is not going ballistic because this is a RCE report. They have received high and critical severity reports many times before and acted peaceably, up to and including a prior RCE reported in 2013 by Reginaldo Silva (who now works there!).
2. The researcher used the vulnerability to dump data. This is well known to be a huge no-no in the security industry. I see a lot of rage here from software engineers - look at the responses from actual security folks in this thread, and ask your infosec friends. Most, perhaps even all, will tell you that you never pivot or continue an exploit past proof of its existence. You absolutely do not dump data.
3. When you dump data, you become a flight risk. It means that you have sensitive information in your possession and they have no idea what you'll do with it. The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit. There is a precedent in the security industry for employers becoming involved for egregious "malpractice" with regards to an individual reporting a bug. A personal friend and business partner of mine left his job after publicly reporting a huge breach back in 2012 (I agree with his decision there), and Charlie Miller was fired by Accuvant after the App Store fiasco. Consider that Facebook is not the first company to do this, and that while it is a painful decision, it is not an insane decision. You might not agree with it, but there is a precedent of this happening.
I'm not taking sides here. I don't know that I would have done the same as Alex Stamos here, but it's a tough call. I do believe the researcher here is being disingenuous about the story considering that a data dump is not an innocuous thing to do.
I'm balancing out the details here because I know it will be easy to see "Facebook calls researcher's employer and screws him for reporting a huge security bug" and get pitchforks. Facebook might be in the wrong here, but consider that the story is much more nuanced than that and that Facebook has an otherwise excellent bug bounty history.
Edited for visibility: 'tptacek mentioned downthread that Alex Stamos issued a response, highlighting this particular quote:
At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.
Viewed in this light (and I don't believe Stamos would willfully fabricate a story like this), it is very reasonable to escalate to an employer if they seem to be affiliated with a security researcher's report.
This seems to be the crux of this whole thing. The article suggests that is not true, including some quotes from what I assume is "The Facebook Whitehat TOS", along with his interpretation of those quotes. As an unsophisticated person reading through that document, I don't see anything I would describe as "explicitly forbidding getting sensitive data that is not your own using an exploit". The closest seems to be: "make a good faith effort to avoid privacy violations". I'm inclined to believe you and others in this thread that this was not handled the most responsibly, but the repeated claim that there is an explicit policy against this, which doesn't seem to be findable, makes me scratch my head. Is there some other document that is more explicit, or is this just supposed to be implicit knowledge, or what?
But I do recognize that cracking passwords goes a step too far.
LAUGH.. Where does it say this?
I think Instagram should be asking themselves: would they rather have an honest researcher report this, or North Korean hackers saying nothing and just slurping data? Security researchers are always going to see things they shouldn't. That's just a fundamental rule. You have to know who your real enemies are and not come down on someone just because they got a little enthusiastic.
Wes is one of the good guys - he went overboard, sure, but he should be rewarded, he should be asked not to go crazy next time, and the rules should be updated.
Personally, I think saying the exploit was trivial shows that the CSO should be fired. If he has to make a phone call, it's not trivial.
Holding some randomly generated numbers that could be used to access a server is not.
My browser is currently showing an SSL cert for instagram.com that was issued in April and expires on Dec 31.
Doesn't look like they're in any hurry to revoke that one. (I guess like Alex Stamos told his employer - it's "trivial and of little value"...)
Seems … contradictory.
It sounds like you're only allowed to penetrate one layer of a defence in depth system. If you gain access to some edge system that isn't sensitive, I'd assume that would pay little. If you gain access to some core system, I'd assume that would pay lots. Why then are you not allowed to pivot from some nothing system to some larger system?
The purpose of bug bounties is to secure your systems. If you only ever secure the first layer, if some malicious actor finds another vector into the same system and there is a really easy pivot in sight (like full access to an S3 account!) then you've lost. If the bug bounty hunter found the escalation though and responsibly reported that, then a potential second vector loses its potency.
I'm not a security person at all so I'd like to hear some perspective on my thoughts above. It just seems fairly short sighted to specifically forbid pivoting.
FWIW, dumping S3 buckets as a white hat does seem wrong to me. Listing them is probably OK.
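Roughly the difference between these two commands (bucket name hypothetical):
$ aws s3 ls s3://instagram-sensitive/            # enumerate object names: proves access
$ aws s3 sync s3://instagram-sensitive/ ./loot   # bulk-copy the contents: exfiltration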
There are 4 categories of reporters: great, good, shit, and crazy. Again - if you are a reporter, you should be trying to make it easy for the team to place you in one of the first two categories by simply being polite and respectful.
I will take a side- it's Facebook. Dumping data is the end of the Proof of Concept. Trying to determine if there is more data you can access through a single vulnerability chain is over the line.
Boats sink. The engineers know it. If you sink a boat in order to prove the boat had a hole, you will not get your payout.
And one final thought-
In my experience, bounty hunters almost never realize the full consequences of a vulnerability that receives a reward. Most of the time, the "Bad thing" that they identify is just the tip of the iceberg.
The choices of the researcher reflect inexperience and immaturity. The researcher has a significant misunderstanding about what is happening in the bug bounty marketplace. I think they need to apologize if they want a future in the infosec world.
Publishing this blog post was a huge error. Going to the journalist was another huge error. I don't see how this person could ever be considered employable by a reputable company.
I am saying explicitly- Wes went past the point at which he should have stopped.
He also should have known better, and the fact that he didn't is a problem in itself.
"[Alex] then explained that the vulnerability I found was trivial and of little value"
coupled with the fact that he seemed to be very worried about the problems that could be caused by the author in exploiting it. Something seems amiss.
Facebook considers the keys to their kingdom to be worth $2,500. OR Facebook doesn't know what the keys to its kingdom look like.
Facebook will not update keys/credentials even if they are known to be compromised.
If you have the keys to the kingdom, you can use them and Facebook won't find out about it unless you tell them.
The problem is that on the one side you have security professionals who do this full time. They build up a background of implicit knowledge through extensive interaction with other security professionals, via training, mentoring, team activities, etc.
On the other side you have folks like the guy who found this vulnerability -- don't specialize in security, basically moonlighting / hobby, not necessarily connected to other security professionals or even other hobbyists. They won't have the same kind of implicit knowledge.
When someone from the first category communicates with someone from the second category, the communication can break down. That's what happened here.
Offering a million-dollar bounty makes this kind of communication problem more likely -- a potential million-dollar payout catches the interest of people who have spare time and encourages them to pick this as the thing they do on the side. And further, it encourages them to try anything and everything you don't explicitly forbid, by giving them hope that if they just try hard enough, they'll be able to turn what initially looks like a ho-hum two-year-old Ruby exploit into a million-dollar payday.
If Alex knows anything about his job he should know that he has to refresh all those keys even if Wes didn't report it or say anything.
The diff between Wes and everyone else is that Wes just explained to Facebook how completely screwed they are. Alex is just pissed because Wes made it bluntly clear how much he screwed up.
Select the most senior security person at those companies.
Roll 1d10 and substitute that person for Alex in this exact situation.
Now bet your life that you won't have your life wrecked by a prosecutor based on the outcome of that die roll.
I don't love Stamos calling the guy's boss, but if it's between "call his boss" and "tell legal that a bounty participant has FUCKING GONE ROGUE WITH ALL OF INSTAGRAM'S CREDS", I think he made the right goddamn call.
False dichotomy - those weren't his only options, had he bothered to think more on it. There was an even better option, which strangely he chose not to take (assume an actual rogue actor got there before Wes and react accordingly: rotate the AWS keys, password reset for affected users, update SSL signing keys).
It bears asking - what exactly was he trying to achieve by calling Wes' boss, and has he achieved it? This is not his brightest moment.
What I get from your comment is that it's never a smart move to take one's chances dealing with company security people. The only smart move is to sell anonymously to the highest bidder.
The competent responses would be:
"We DO have evidence that X DID NOT happen", or
"We DO have evidence that X DID happen".
A bag of rocks also has "no evidence that Wes or anybody else accessed any user data". Would you trust a bag of rocks with your computer security?
I understand that they're top secret, but that sort of proves the extent of the vulnerability.
Facebook needs to get its shit together on key security and the clarity of its bounty program. On the other hand, this guy writing a blog about downloading a keychain and probing how deep it leads is definitely not responsible infosec.