Instagram's Million Dollar Bug (exfiltrated.com)
1562 points by infosecau 724 days ago | 513 comments



Thank you to everybody who cautioned against judgment before hearing the whole story. Here is my response: https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


I think the root cause of the problem is the unclear policy by FB. Privilege escalation can be hard to catch, and can be a separate bug in and of itself, even if it requires a separate exploit to get the initial privileges.

The published policy didn't say anything about not doing what he did. I'm not going to argue that what he did should or shouldn't be ok, but FB has no control over what other people do. Yeah, maybe it'd be better if people asked for clarification first instead of asking forgiveness, but there's no way to force them to do that. FB does have control over what their policy says and allows/disallows. If you don't want people to exfiltrate any data and look at it on a local machine instead of just keeping a session on the exploited machine, then put that in the policy. If you don't want people poking around for other exploits after gaining access, then spell that out in the policy.

The point of the policy isn't to stop everyone. Sure it will stop some/most people, but some people don't listen. The point is that when it happens again you can point to the clear policy and say "you're an asshole, we're not paying you because you violated our explicit policy, and we are reviewing what you did with our lawyers to see if we should notify law enforcement".

Yes, doing this fix/policy update now doesn't fix this situation, but it prevents anyone else from doing something similar and claiming ignorance of this situation and FB's position.


Correct, the policy isn't clear and needs improvement. The bug bounty's policy definitely falls under the CSO's purview. So even if you approve of Alex's handling of the matter, you can't forgive him for running a sloppy bug bounty program. It would be one thing if he said mea culpa and promised to do better. But there's not one iota of regret, remorse, or apology in Alex's response for not making things clearer.

If you're going to persecute someone over details, you had better make sure your policy is very detailed, not vague, and not left open to interpretation. In this regard, Mr. Stamos failed.


The policy reads clear enough to me to warrant a huge reward. Adding additional conditions after the fact is dealing in bad faith.


Most RCE bugs can be compounded into major data dumps. That doesn't make each individual RCE a million-dollar bug.


Every RCE that could have been sold on the black market for a million dollars is worth almost the same reward (modulo the advantage of being legal).


Think about this guy being a Russian hacker instead, selling the ability to access restricted accounts, pose as an Instagram administrator, and, I assume, access users' data freely.


I would have come here to say this if you had not said it already.

A major root cause is that the published guidelines say nothing directly about exfiltrating sensitive data. This leads to legitimate confusion for exactly the reasons given. The actual policies make sense given what the published guidelines say, but that's not good enough.

The policy needs to be changed. Not by much, but it needs changing. Here is a Responsible Disclosure Policy that might work better than your current one:

We expect to have a reasonable time to respond to your report before making any information public, and not to be put at any unnecessary risk from your actions. Specifically, you should avoid invading privacy, destroying data, interrupting or degrading services, and saving our operational data outside of our network. We will not involve law enforcement or bring any lawsuits against people who have followed these common sense rules.


Why do the policy specifics matter? A blackhat won't be respecting those rules, and won't need to negotiate a reasonable payday with Facebook.

The real issue here is Facebook's poor infrastructure security and slow response time. If the exploit had been previously reported, why was the privilege escalation still possible? Why did a (supposedly) known-to-be-vulnerable host have access to secret information at all?

The exfiltration of data may have been unethical, but Facebook has no one to blame but itself for it even being possible.


> Why do the policy specifics matter?

Companies take big risks in running bounty programs. They are giving hackers permission to test their live site. This isn't something that is popular with everyone inside a company. Bounty hunters need to respect that bounty programs are a two-way street. If you find a serious issue like remote code execution you need to be extra careful. Wineberg was an experienced hunter. He should have known better.


Usually serious security issues require some kind of escalation, and escalation probably requires, at some point, exfiltration of (non-personal) data. If the rules of the program are that restrictive, I don't know how many serious bugs will be found by "ethical" hackers...


That might not be the point. The point might be to find the intersection between what is palatable for the company and what serious exploits white-hat hackers can come up with.

No company wants to include in their privacy policy that anyone can legally access and download your data if they are trying to perform exploits on the system.


No, the root cause is having a two-year-old, known RCE that was only patched after this researcher got SSL certs and app signing certs.


The policy hasn’t been changed, though. There’s still no explicit statement that privilege escalation invalidates a report: https://facebook.com/whitehat


Sorry Alex, you're in the wrong here. Your threats to go to law enforcement completely undermine the credibility of your bug bounty program. Your publicly calling another professional "unethical" is a serious charge for what is a grey area at best, and the facts and history of issues reported by this person would not lead a reasonable person to conclude malice. And ignoring him but going to his boss, that's just petty.

Not even one attempt to talk to the guy like an adult about what he was doing? You couldn't even be bothered to say anything?

You'd be amazed how a polite reply to the effect of, "thanks, you've proven your point, and we are getting a little uncomfortable with where this is headed" might have solved all of this. If he ignored you and kept hacking after that, by all means steamroll him, but if you don't even have that much respect for your peers, I'm not sure why you bother with the bounty program.


Agreed. You have quite a list of arguments defending the researcher, when his track record alone should have been enough to prove his good will. Despite the landslide of evidence of good will, Facebook decided to act in bad faith. Unacceptable; I hope other researchers read and remember this story.


CXOs do not talk directly to anyone other than CXOs, right?


Let's take a step back here: Facebook threatened to have a security analyst arrested for demonstrating and promptly disclosing the full extent of a serious exploit in a non-destructive manner. Whatever other behavior he engaged in that was unnecessary or ineligible for the bug bounty program, that's incredibly unethical on your part. Especially so, because you clearly didn't believe he was going to do any damage to your system or you would've actually called the FBI instead of someone he worked with.

So, you just wanted to cause him reputational damage and personal problems as an act of petty retaliation. You're right on some of the technical issues here, but in terms of ethics, your behavior has been far worse than his. I don't think you realize how much long-term damage you're doing to your relationship with the wider security community by threatening to jail people who were at no point acting maliciously and at no point caused any damage.


This isn't all that complicated, as far as I can tell.

Guy discloses a vulnerability. He knows it potentially has wide-reaching security concerns, and downloads enough data to prove that if necessary.

Guy gets shortchanged on the bounty, indicating that either a) Facebook is trying to shortchange him, or b) Facebook doesn't realize how big of a vulnerability this truly is

Everything about Facebook's response indicates b): they didn't realize how big a vulnerability this truly was. Otherwise, the data he downloaded would have been useless by the time he used it.

You can argue that the guy "went rogue" by holding information hostage, but the fact is he deserved to be paid more and he was able to prove it. Now Facebook looks bad.


Guy discloses vulnerability. Facebook is not as impressed as guy would have hoped. Maybe it's because he's one of several people to disclose the same vulnerability. Maybe there are just a lot of vulnerabilities (they've paid out $4.3M in bounties).

Guy's reaction to rejection: take hostages and threaten Facebook. Facebook moves to defense and cuts guy off.

You are not a good neighbor if you kidnap someone's family to prove to them that their busted lock is a big deal. You show them their lock is busted and trust they can figure out what harm that could lead to. The alternative is companies being hostile to people just looking around their locks, which is the world of the 1990s and 2000s that responsible researchers are trying to avoid going back to.


This is, of course, Facebook's narrative, which conflicts with Wes's.

One obvious hole I can see in Facebook's story is that they insinuate that Wes broke back into the server after they disputed the bounty. If this were true, they did nothing in response to the problems Wes found for over a month.

If you look at Wes's timeline, he says access to the server was no longer possible a few days after he filed the second report.

It comes down to who you believe. Personally, I find Wes to be more credible. It sounds like it was most likely a misunderstanding by Facebook. Now they are doing damage control.


"With the newly obtained AWS key... I queued up several buckets to download, and went to bed for the night."

He definitely took data off of Facebook's server.

Also, you misunderstand: the access denial was a firewall change earlier in his story, and it related only to speculation about other systems he could have penetrated -- completely separate from the S3 buckets he took data from.

From Facebook's perspective it could very well have seemed like he went back for the goods, since he submitted three separate reports, the last of which triggered the response. But this is also irrelevant; the question is whether he took data off or not, and by Wes's own admission the answer is unambiguously yes.


Honestly, I think he did go too far downloading the S3 data, but nothing in their policy stated or implied that was against the rules. He did not violate their written guidelines. And so, Facebook should have paid him (and then changed their policy), even if begrudgingly.


Here is what's happening right now:

FB: He's an experienced bug bounty hunter and should know where reasonable borders are.

All the experienced security guys itt: He's an experienced bug bounty hunter and should know where reasonable borders are or at least not pivot/escalate without asking. Also never dump and hold data.

Everyone else: What he did isn't technically against the rules FB wrote, so they are screwing him, despite it also being written that they have sole discretion.


> All the experienced security guys itt...

Ah, so those who disagree are inexperienced? No true Scotsman indeed!


How is that a "no true Scotsman"? Most people commenting in this thread have not indicated they work in the infosec industry.

(For the record, I do, though I'm not sure I'd flatter myself by saying I'm "experienced" exactly.)


The problems I have with your absolute statement:

* You are stating that all (not some) experienced security folks agree unanimously. The implication is that those who disagree are not "experienced security guys" (as you called them: "everyone else") - they are the ones who aren't true Scotsmen

* You assume those who don't explicitly indicate that they work in the infosec industry do not work in the infosec industry

* Also, you don't need to be "experienced" in the infosec industry to be correct or wrong.


I wasn't the one who made the comment you're referring to. I'm just saying there is no evidence of a "no true Scotsman" here, as far as I can tell.


Apologies - didn't notice you weren't the OP. IMO, the "no true Scotsman" is implied (might be unintentional)


The general theme of the thread seems to be security industry people, like tptacek (or commenters self-identifying as being in the industry), expressing concern with the researcher's actions (while still admitting Facebook didn't handle it well). The primarily negative comments don't seem to have a specific affiliation tied to them. And given HN's demographic, odds are much more of them are developers than are infosec people.

I don't think the person you were replying to was suggesting that any infosec people who fully support the researcher aren't real infosec workers. I just don't think he saw any who even claimed to be.


I disagree; this is non-customer, non-financial data, which is often considered fair game because downloading data is useful for locating many security bugs. Source code or config data is a prime target, but so are network diagrams, etc.

Defense in depth means every defense needs to be validated, not just the outer layers.

PS: Further, if FB says they know about a bug then anything he downloaded could easily be in the wild and should be investigated.


This. Literally every single person I saw who identified themselves as being in the security field said the researcher went too far.

What's really getting to me is the overwhelming number of responses containing the idea that everything that isn't explicitly banned is permitted, despite the recipient saying "No" (even indirectly/without justification) at some point. How to deal with the grey area of consent is something that every adult should know, and it's worrying to me that so many here seem to feel entitled to whatever they can take as long as it wasn't explicitly forbidden.

Obviously FB should update their policy, but at the same time it's important that we as the community use this as an opportunity to learn and discuss where the implicit boundaries are, where one needs clear-cut agreement to proceed.

Consent is sexy.


I'm a security guy and I think what he did towards the end is dubious and strange, but again, he was following their guidelines as written.


I disagree. It's not about whether or not he downloaded the data. That is an undisputed fact between both parties.

The question seems to be if he did it in good faith and within the rules of the bug bounty program.


No. The question is whether FB understood the severity of the bug and paid in proportion to its severity. When you run a bounty program, that's what you do.


This whole thing is silly. Facebook (or any other tech company) has a lot of flexibility and hardly any accountability in defining what a "million dollar bug" is. You really can't believe they are going to just hand you $1M because you think it is a $1M bug. It very well may be, but in the end Facebook will be the one deciding the value of said bug, and you will have nothing to do with their decision, so assume they just won't do that.


Sure, they'll be the one deciding. Except that other bounty hunters are watching their reaction and their fairness in paying out people for their work.

The next $1M bug that gets discovered will probably go out onto the black market because of Mr. Stamos's actions here.


No, the free market decides the value of the bug. You can either pay that value to a white hat to find it or wait until a black hat sells it.

Facebook has now demonstrated that not only will they not pay you, but they will attack you publicly, slander you, and threaten you. Now what does that mean for the next hacker coming along? Someone who is clean and wants to stay clean will avoid Facebook. Someone who isn't will realize that Facebook is now an easier target because the clean guys are staying away.


Exactly this. Facebook has just demonstrated that at best they'll get an anonymous warning, and then all their private keys dumped onto pastebin when they do nothing.

At best.


I don't think he is claiming $1 million for the bugs; he mostly wanted to share the whole story (the title was just to get some eyeballs, instead of maybe using "Facebook cheated me")


At no point did he take hostages. It's that sort of thinking that led to all this drama in the first place. He did, however, disclose, which is pretty reasonable considering a lot of us are trusting these services to protect our information.

What if Instagram bled all your browser information, so people could fingerprint billions of users and figure out who (and whose pictures) are surfing their sites? What if there are pics on Instagram that people rely on being private?


Downloading data is where he crossed the line and what I meant by hostage:

"Wes was not happy with the amount we offered him, and responded with a message explaining that he had downloaded data from S3 using the AWS key..."


You make "downloading" sound more sinister than it is. Downloading something from the network is the only way to see that it's there or know what it is. There is no substantial difference between downloading and viewing in this case.


> "With the newly obtained AWS key... I queued up several buckets to download, and went to bed for the night."

This isn't about whether viewing files on the internet is technically downloading them; this is about retrieving files of enough size and quantity that you have to queue them up for an overnight download.


He kept it for a month. That is different than looking at it.


Under the assumption the keys would be revoked, the data is just trash anyway -- it would have been useless. But apparently they didn't realize how serious this was, otherwise they would have revoked the keys. A month is plenty of time to change critical S3 credentials.
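
For scale, rotating a compromised access key is a couple of API calls once a replacement is deployed; a hedged sketch with boto3, where the user name and key IDs are hypothetical:

    import boto3

    iam = boto3.client("iam")

    # Create a replacement key pair for the (hypothetical) service user...
    new_key = iam.create_access_key(UserName="sensu-service")["AccessKey"]
    # ...deploy new_key["AccessKeyId"]/new_key["SecretAccessKey"] to the
    # services that need it, then disable and delete the compromised key.
    iam.update_access_key(UserName="sensu-service",
                          AccessKeyId="AKIA...OLDKEY", Status="Inactive")
    iam.delete_access_key(UserName="sensu-service",
                          AccessKeyId="AKIA...OLDKEY")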


And how long does your browser cache the pages and assets you've looked at?


"Maybe it's because he's one of several people to disclose the same vulnerability"

The thing that gets me about this whole situation is that Facebook either didn't understand the extent of the vulnerability (which seems to be the case to me, and in which case I think Wes Wineberg should have been rewarded far more than he was for showing them how serious it was, though I wouldn't say this is literally a "million dollar" bug) or they were grossly negligent for not patching it up a lot sooner than they did. They can't have it both ways.

Are they bad at managing their bug bounty program, or just bad at responding to serious security issues? It has to be one or the other.


I'm not sure you understand how the law works.


I'm not sure anyone really understands how the law works when it comes to bug bounty programs and legal retaliation by companies. Is there any case law precedent yet?


In most cases where the opposing parties are one large publicly-traded company and one small company or individual, the law works like this:

* little guy offends large company, usually through some totally well-meaning and innocent activity that, if illegal at all, is only so due to obscure, obsolete, and/or obtuse laws

* large company unleashes unholy wrath of $1000/hr law firm on little guy threatening to destroy little guy's world if he doesn't immediately comply with all demands

* lawyers laugh at the plight of little guy and say it doesn't matter what he thinks because he can't afford to oppose large company

* little guy is forced to comply no matter how absurd large company's demands are, because only other large companies can oppose large company in court

* should the large company feel inclined to sue the little guy even after he acquiesced to their ridiculous demands, little guy loses all of his possessions in his attempt to pay legal fees. little guy will run out of money before the case wraps, resulting in him getting saddled with a judgment for massive personal liability (cf. Power Ventures)

* large company is free to make the same infractions whenever they feel it's appropriate to do so, because what are you gonna do, sue them? (cf. practically every company who has ever brought a CFAA claim; Google's whole business is violating the CFAA, as well as various copyright laws)

* bonus points: large company has friends in the prosecutor's office and gets the little guy brought up on life-destroying criminal charges (cf. Aaron Swartz). if the case makes it to trial, little guy spends time in jail (cf. weev)

I don't think I missed anything.


Total aside: I have a startup idea to throw a wrench into your accurate depiction of how things currently play out: little guy hires a full-time lawyer from the large pool of unemployed lawyers, and suddenly has legal counsel at a reasonable (relative) price for an extended time. Suddenly little guy has more of a fighting chance against the lawsuit, instead of having to pay out his counsel at $1,000/hr. (At $1,000/hr, two weeks of the adversary's big-firm time runs about $80,000 -- roughly a year of a full-time lawyer's salary -- so he can add one full-time lawyer for every two weeks of his adversary's costs.)


Especially when Facebook expressly authorizes this type of activity (to some degree). The relevant passage is cited in the original article.


I'm not sure that's true in this case. But whether or not this was illegal, I generally support skirting laws if it makes everyone else more secure. To that end, I also support Snowden.


Laws aside, USD 2,500 for all that data? Hmmm, is our data that cheap?


Sounds like FB acted pretty unprofessionally both in the infrastructure department and in handling of the situation. You had some embarrassing mistakes and instead of acknowledging them you tried to scare the reporter into shutting up and leaving you alone. That part is pretty clear. Whether he violated your rules and how much you pay him I don't care.


Especially in the infrastructure department. This is the huge story here.. putting all your creds on S3 in the open protected by one key?? Craziness.


Yes, exactly this. Without escalating an RCE, how would he have been able to expose this absolutely huge flaw? The initial report was inconsequential, but this seems like at the very least a much more than $2500 bug. If things like this are considered "unethical" it kind of makes finding million dollar bugs in a bug bounty close to impossible.


I agree. According to Stamos, though, there was no flaw:

> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.


If he thinks that is how it should be and nothing needs to be changed, then god save their user data. He conveniently left out the key separation and privilege escalation issues shown by the researcher.


Yeah, that's like gaining root access on a server and being told "well, the fact that those commands will execute is merely Linux working as designed". Talk about missing the point...


That surprised me too. Of course, AWS keys can be used to access S3, but I don't see how exposing private AWS keys on a public facing server can be "expected behavior".


He only got $2500 because the bug had already been reported by others. Most programs pay nothing in that case.


Then the first one to report it should have been paid a lot more than the $2500. The fact is that FB didn't understand the impact of the bug, and it needed Wes to show them how severe the bug was.

And once they knew how severe it was, they ought to have acknowledged the severity and paid him a lot more.


I feel that privilege escalation/lateral movement is implicitly excluded from almost all bug bounty programs; most researchers know that.

It's a really grey area beyond an initial 'access bug', so it pays not to go there. Otherwise, where should Wes have stopped? Keep proving more vulnerabilities until he's downloaded their code? Or got private photos of Zuckerberg's kid? Just to show that it is indeed a serious bug?


Why would private keys be on any system somehow accessible from the internet? Gotta put it all in the cloud?


Yeah, scaring the guy via his employer and telling him that kind of bug is worth USD 2,500 makes me wonder how important my data is to them


The hypothetical question Facebook should ask is:

"If the security researcher did not disclose the RCE, but instead sold it to highest bidder, how much would that likely pay in this situation?"

Paying security researchers to properly disclose is a way of financially encouraging the right behavior. While it may be tough to stomach a large payout for responsible disclosure, do you really want them considering the alternative? It's like tipping in a restaurant to ensure food quality.


Agreed. To me as an outsider, this escalation bug looks like a maximum-severity bug, definitely dwarfing any particular admin console vulnerability, and the process the researcher claims to have followed was pretty much necessary to show it. Whether or not this followed the letter of the policy, by responsibly reporting the escalation the researcher has fulfilled its spirit.


How is this unprofessional behaviour?

They are trying to condone data access which, in all honesty, borders on unethical behaviour.

Any professional who participates in any company's bug bounty should respect their rights as well.

Whether the keys being accessible was a technical blunder is secondary, but the actions the researcher took -- a) accessing data he did not need to, and b) making this into a big deal when he was the one not respecting the bug bounty's limits -- make this a case for FB.


I am not saying that the sec researcher is right here. I don't care about him; he is just some random guy who wants publicity. Talking about FB is more interesting, because it is a huge public corporation which should behave smartly. But if you want to talk ethical/not ethical -- he found a serious problem in their infrastructure. Had he not looked at the data ("respected their privacy"), he wouldn't have found it. You can't make an omelette without breaking eggs. Perhaps this is more penetration testing than bug bounty stuff, but again, I don't care. He found stuff. He didn't use it (AFAIK) for anything bad. FB has to thank him and quickly fix their process. Complaining to his boss and acting all pissed suggests that they do not understand that they did mess up big time.


I am not saying that the sec researcher is right here. I don't care about him, he is just some random guy who wants publicity. Talking about FB is more interesting

You're right. An important thing has gotten lost in the shuffle. We should be pointing and laughing at Facebook. Then, when the giggling dies down, asking: something this bad, with such a "trivial" vuln, managed to get published -- what else have their now-proven-to-be-shitty practices left open?

He found stuff. He didn't use it (AFAIK) for anything bad.

Reminds me of the way business dudes and non-security devs used to react before security got all popular and legit. And they could have even avoided the whole public brouhaha if communication had been better between the tester and the product staff. Classic blunder.

Complaining to his boss and acting all pissed suggests that they do not understand that they did mess up big time.

They jumped to contacting someone over his head before engaging in real talk with him. And then their public response is covering their ass by arguing over the fine print of how he shouldn't have been poking around where he was.

Obviously there are differences, but similarities are fun too!


So the bits where you lost the ssl keys, auth cookie keys, app signing keys, push notification keys - and had to ask him (via his employer) about what data he'd accessed are all true? Implying you have no records of who else might have done this and acquired those keys?

Boggle!


That's one interpretation. The other is that you place faith in them being honest, get a list of what he took without spending the time doing forensics on the systems, and hence are able to change the keys sooner.


"Placing faith in them being honest", in the same conversation you're having with their uninvolved employer saying what they found is " trivial and of little value" at the same time as threatening them with Facebook's legal team and law enforcement?

Doesn't pass the sniff test from here.

(Admittedly there's no doubt an iceberg-sized bit of this whole drama that neither side are admitting exists.)


The bigger issue here, and the one that Alex at Facebook seems to gloss over: if Wes got this data using a two-year-old, well-known exploit, then who else got it without anyone knowing?

While Alex may have a right to be upset at Wes for taking data, Alex should recognize Wes is likely the least of his worries now. Wes wasn't/isn't a professional security researcher... and he was able to do this. That should frighten Alex, and Facebook should have been much more rewarding to Wes for forcing this issue to be taken care of.


"This bug has been fixed, the affected keys have been rotated, and we have no evidence that Wes or anybody else accessed any user data."


> and we have no evidence that Wes or anybody else accessed any user data

This raises way more questions than it answers. Most notably: why aren't you recording who accesses user data?


Reads to me that they are recording the access and no-one did access it.


Not necessarily. If that were the case, wouldn't they use the stronger, "and we have evidence that no data was accessed through this exploit"? The fact is: They can't possibly protect user data once the private SSL keys leak. At that point anyone can intercept user data on third-party, non-Facebook servers if they're affiliated with an ISP, wifi hotspot or other point of access. Anyone could send targeted phishing emails for their servers: How would they know, if the SSL cert looks legit and the DNS is regionally poisoned?


Because of the sequence of events that played out...


Yes and he got paid for it.


I'm not quite sure I understand your point? Of course he got paid, that's how bug bounties work... that doesn't detract in any way from the point I made above.


And I don't understand yours. You were concerned about people other than Wes accessing the same data via the same flaw; Alex said that did not happen.


But until Wes told them, they had no evidence that Wes was accessing the data! Or are you saying that they did have evidence, but chose to take a "wait and see" approach to someone gaining control of their entire platform?


Alex said they "have no evidence" it happened, which is classic slippery legalese. From that phrase it is reasonable to infer either that they have evidence of absence, or absence of evidence, which are not the same thing.


It's standard wording for something like this even if they had 100% evidence of absence.


Correct. It's the standard wording, whether or not they actually have evidence. Therefore we cannot assume, as you have earlier in this thread, that they do in fact have it.


This response deepens my concern about the situation, rather than alleviating it. In this response, you make it sound like calling this security researcher's employer's CEO was a reasonable escalation of the situation, and that is deeply concerning to me, especially given the actual text of the post Wes published here.

It also appears, based on your post, that you think that stating, approximately, "I hope we don't need to contact our legal teams or law enforcement about this," does not constitute a threat of legal or law-enforcement action, and I also find that deeply troubling. While I think you could make a legal distinction that these weren't technically threats of such action, any reasonable person in the researcher's position would be positively idiotic if he/she failed to feel threatened in that way by such statements.


I told Jay that we couldn't allow Wes to set a precedent that anybody can exfiltrate unnecessary amounts of data and call it a part of legitimate bug research, and that I wanted to keep this out of the hands of the lawyers on both sides. I did not threaten legal action against Synack or Wes....

In case it isn't clear, most people will interpret "I want to keep this out of the hands of lawyers" exactly as a threat to start legal action. To be honest I'm not really sure how else it should be interpreted?


"I want to keep this out of the hands of lawyers" is almost universally understood to mean "please do what I say so that I don't have to sue you, which is what I will do if you do not comply".


Maybe someday the response to this sort of threat will be "In the interests of sharing, I already passed on this information to your favorite class action law firm and the media. It's already in the hands of lawyers and your company is already being sued."


Alex, I am always open to hearing from both sides. But despite your reply, I unfortunately see wrongdoing on both sides. I don't think you discussed this message with your public relations department, or your reputation management one.

OK, so let's look at this: your response showed us one extremely important issue -- no clear rules in your system. Wes, by exploiting your system, actually exploited your lack of rules regarding the handling of white-hat hackers.

Listen, a hacker should exploit ALL possible issues. He exploited your weakest one: the rules behind the system. Close the case -- reward him XX,XXX for exploiting the weakness in your policy for dealing with white-hat hackers, and spend as much again to bulletproof that policy. Do not reward him for hacks that were unethical, as that would be wrong, but do it for the other exposure: the small dent in your white-hat hacker system.


The lesson here is when you find Operations issues (particularly Security Operations) at Facebook don't report them. Those make the CSO look bad directly.


Yep. Code bugs, no problem. Engineers don't report to Alex!


Ok, so here's the thing. Your $2500 payout was not commensurate with the severity of the bug. It ought to have been more. A LOT more.

You're basically telling bounty hunters to not go any further to "prove" the severity of the bug because you're saying, "Trust us. We'll measure the maximum impact and reward you fairly"

And yet, you're not being fair at all. So the bounty hunter needs to "prove" the severity of the bug for you. You're digging your own grave here by not acting in good faith. The next guy who finds a good bug is not going to disclose it to you - he's going to sell it on the black market for a few hundred thousand. Or millions.


The real question is: did you rotate the keys (and do further hardening, I hope!) because of the vuln report Wes made? If so, then you should be grateful for his work pointing out the single point of failure in your AWS S3 security, and you should have rewarded him handsomely.


I think that is the key right there. It seems like sensu.instagram.com was simply firewalled at first and the AWS keys were not changed. He was then awarded the bounty for reporting this bug. Afterward he demonstrated that the AWS keys were another vulnerability, and it wasn't until after he reported this that the AWS keys were rotated.

To me, this demonstrates that had Wes not reported the AWS keys, Facebook would never have rotated them. I would argue that the fact Facebook found it necessary to take action to resolve Wes's third vulnerability submission could be considered an admission of its legitimacy as a bug, and therefore that it is indeed worthy of a bounty.


I can't work out how to not make this sound almost infinitely cynical, but their SSL key expires in 13 days - they only had to shut him up for another few weeks and they could have pretended they weren't currently MITM-able:

https://www.instagram.com

Not Valid After: Thursday, 31 December 2015 11:00:00 pm Australian Eastern Daylight Time

Maybe they'll upgrade it to something better than: Signature algorithm SHA1withRSA WEAK
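
(Anyone can check those validity dates themselves; a quick sketch using only Python's standard library:)

    import socket, ssl

    host = "www.instagram.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # validated peer cert, as a dict
    print(cert["notBefore"], "->", cert["notAfter"])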


Does this have anything to do with the SHA1 sunset on 31 December?


That'll be why the key expires on Dec 31 even though it was only issued back in April.

It doesn't explain why Instagram has been happily using a known-compromised wildcard SSL key for two weeks now.

Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...


>Makes you wonder who actually values and protects Instagram's user privacy more - the researcher or the Facebook CSO...

No, I don't wonder about this at all.


Different key, dude. We rotated what was exposed.


So this new rotated key I'm seeing that has an April 2015 start date is a different key to the one your team replaced after it expired and broke everything back in April?

What a coincidence...


[flagged]


> Do you believe that after this chain of events anyone still believes you?

Personal attacks, which this crosses into, are not allowed on Hacker News. Please comment civilly or not at all.


I don't see that as uncivil or a personal attack. It's either a reasonable direct question or a rhetorical one. And as a rhetorical question, it's not a personal attack, but rather makes the point that other posts seem to damage his credibility.


It's obviously not a direct question (there are people defending him in this thread, so of course he "believes" that), and as a rhetorical one it implies that he is lying. That's not a civil debate tactic—there's a reason why parliamentary systems expel people for using it.

Everyone needs to err in favor of respect when addressing someone on the other side of an argument, especially when one's passions are agitated, because the default is to forget all that.


Seems like a reasonable, if rhetorical, question. Hope Alex doesn't complain to his employer about it though ;-P


I am not talking about the person, but the company.

And I am sorry, but after these acts the company has taken, the little bit of trust that was left in the company is gone.

I am sorry if it sounded like a personal attack, that was not intended.


OH COME ON, Dang. Alex called up Wes's employer and threatened him with criminal charges, and then had the balls to lie about it in his Facebook post, saying he didn't "threaten". Are you seriously defending this??


Asking HN users to be civil defends nothing except civility.

There's a relevant general point here though. Reactions like this, and many others in this thread, are reflexive. That's really not what this site is for. Good comments for HN aren't reflexive, they're reflective. Practicing that distinction is the most important thing for being a contributor here, and it's orthogonal to one's actual views.


Asking HN users to be civil defends nothing except civility.

This would only be true if that request were applied equally whenever HN users were uncivil. As it stands, it does generally come off as defending specific users.

...it's orthogonal to one's actual views.

Believing this is going to make you a worse moderator -- this is "fair and balanced"-style thinking. There are many perspectives whose projection onto comment reflectivity is anything but zero.


> if that request were applied equally whenever HN users were uncivil

That's asking us to operate like machines—supermachines, in fact, with incivility detection and moderation powers. That's unrealistic. HN users' capacity to be uncivil exceeds our capacity to ask them not to, so the latter maxes out.

> it does generally come off as defending specific users

We try hard not to play favorites. I'm biased, of course, but there's also more than one kind of bias here. People are more likely to notice us criticizing a comment they identify with than the times it's the other way around.

> Believing this is going to make you a worse moderator

In that case I'm a bad moderator already, because everything I've learned about HN is packed into what I said there.


outstanding question.


If he had reported the keys along with the original submission, I think it's safe to assume they probably would have rewarded him handsomely.

Instead, he sat on the keys for over a month, and in the meantime used them to download everything he could find onto his personal computer. Simply testing that the keys were live and disclosing this immediately would have been more than enough proof of a bug here.

Edit: downvoters - please explain how using keys to access production systems for over a month without disclosing is acceptable white-hat behavior?
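
A "these keys are live" proof can be a single harmless API call; a hedged sketch (the credentials are placeholders, and STS is just one convenient way to do it):

    import boto3

    sts = boto3.client("sts",
                       aws_access_key_id="AKIA...FOUND",
                       aws_secret_access_key="...")
    # Returns the account ID and ARN behind the key -- touches no stored data.
    print(sts.get_caller_identity())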


They said they did rotate the keys.


Bug bounties are supposed to represent a high-probability payoff of a lesser amount of money for finding a bug. This is in comparison to going the black-hat sales route, where the probability of sale might be lower, but the payoff might be higher. I can imagine one or two state actors who might pay top dollar to have the keys to the kingdom of a major social network.

All I'll remember of this entire story is the outcome- huge vulnerability found (high black market value), and Facebook is talking about lawyers and paying small bounties. Nobody will remember that technically he broke a rule that wasn't well explained. The next Wes will have his major vulnerability in hand, and have this story in his mind. It may change his decisions.

Make this right. Even if you are in the right, who cares? You need the perception of your program to be impeccable, paying more than researchers expect. Facebook can afford it more than they can afford to blemish the image of their bug bounty. Invite Wes to help you rewrite the confusing parts of the rules. Leave that story in everyone's memories instead.


According to the rules at https://www.facebook.com/whitehat/, "We only pay individuals."

Wes COULDN'T have been working for Synack to find bugs as your program doesn't even allow for it.


And according to the update on the post, Alex chose to contact Wes's 'company' (that he had contracted for) even though Wes had not contacted them through the company email (meaning Alex sought out a way to go about intimidating Wes). Seems incredibly petty and intimidating of Alex, and reflects poorly on Facebook imo.


Yea, wouldn't want to "set a precedent" that infosec researchers will be rewarded for doing the right thing.

Next time someone uncovers your private keys at least they'll know upfront that there is no money in doing the right thing which might just make selling them to the highest bidder seem like a more compelling option.


With regard to your final sentence: "Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk." Regardless of whether one thinks Wineberg's actions were ill-advised, there seems to be a general consensus that they were instrumental in the discovery of some very critical issues, and that you are lucky it was he who found them.


There is a definite issue with the Facebook bug bounty program in that there are many serious issues with the platform that don't fit within its relatively narrow parameters. I reported an issue that enabled anyone to create a wall post claiming to link to any site of my choosing (cnn.com, whitehouse.gov, etc.), completely customize both the content and photo of the post, and have the link actually go to a URL of my choosing instead of the domain shown in the post. Examples at [1] and [2].

This issue, which enables uber-credible phishing and other attacks with the assistance of Facebook (since Facebook falsely reports to the user that the link goes to a credible domain of the attacker's choosing while actually sending them to any URL controlled by the attacker), was rejected. Not only was I told that it was not a bug that I could be paid for, but that it really wasn't a bug at all, and that they would do nothing about it.

If these kinds of serious issues are essentially ignored because they don't meet the very narrow guidelines set forth in the bug bounty program, Facebook is going to miss a massive number of problems with its platform.

[1] http://prntscr.com/9fj40t

[2] http://prntscr.com/9fj46h


Thanks for the response, but why did you start by contacting the CEO of Synack instead of the researcher directly?


> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

I feel like that bullet point answers your question pretty well.


Sorry but this is a coverup that Alex is using to defend himself. He had easy access to Wes, as Wes was actually demanding a reply via Facebook's own system in place to communicate with researchers, and not receiving one.

Alex would have been aware via the original RCE bug that Wes was reporting on behalf of himself and not his employer. Also, had Wes been reporting the bug on behalf of his employer, it is reasonable to expect he would have mentioned that from the beginning.

I presume that Alex knew these things, but he decided to take a more dramatic approach to get Wes to stop, by contacting his employer. It obviously would be leverage, and Alex knew that he could also leverage his position at Facebook to use a security firm in the industry (who would understandably not want to do anything to jeopardize its relationship with one of the largest internet companies in the world) to ask their employee to stop.

I do not believe that Alex legitimately believed that Synack (Wes' employer) was behind the research, but he knew it would be an effective way to stop Wes from continuing, so he decided to pull those strings.


I'm more questioning the flow of researcher reports vulnerability, company awards bounty, researcher disputes bounty value, CSO of company contacts CEO of researcher's company. Is that normal escalation procedure?


Wait, you just made something up.

Even the researcher doesn't claim that Alex contacted the CEO of Synack because of a dispute over the bounty.

Rather, it's the other way around: the researcher disputed the bounty, and did so by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.

Alex contacted the CEO of Synack to ensure the credentials weren't used, because if they were, Alex couldn't control Facebook's response: they've got a bug bounty participant who has essentially "gone rogue" and is exploiting Facebook servers long after they've told him to stop. They need him to stop.


The "bug" here is that they aren't really keeping track of their AWS buckets and keys at all. Least privilege, access logging, remote IP flagging, etc. These operational failures are ostensibly the responsibility of the CSO.

I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

A simple phone call directly to the researcher that cut through the bullshit would have made everything better. But he had to make sure it didn't get out and the only way he could do that was by using the only leverage he had: The researcher's employer.
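
To make "least privilege, remote IP flagging" concrete: one standard control is an IAM policy condition that makes a leaked key useless outside known networks. A hedged sketch -- the user name, policy shape, and CIDR range are illustrative, not a claim about Facebook's actual setup:

    import json
    import boto3

    # Deny S3 access from outside a known CIDR, so a leaked key is useless
    # from an attacker's network. All names and ranges here are made up.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        }],
    }
    iam = boto3.client("iam")
    iam.put_user_policy(UserName="sensu-service",
                        PolicyName="deny-offsite-s3",
                        PolicyDocument=json.dumps(policy))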


Alex has in the last few months built one of the best teams in application security at Facebook (Facebook security is now seemingly most of O.G. iSEC Partners). I get it, everyone hates big companies, and especially evil Facebook, but come on. They know what they're doing.

If you understand how security works inside of big companies, this is a really silly theory to run with. CSOs are happy when shit like this gets discovered, because it gives them ammunition to get the rest of the company to adjust policies.

If you were working from the understanding that a CSO comes in and just immediately tells a team of (what is it) NINE THOUSAND developers how to do stuff differently... no. That's not how it works.

The problem is that nobody at Facebook, with the possible exception of like 10 people (none of whom are Alex), can make huge operational changes like "change all the ways we store keys across an entire huge business unit". So, you tell Alex you took AWS credentials he didn't know existed and you're going to start mining them for a story you're bringing to the media, and now Alex is in a position where he's NOT ALLOWED to sit back and try to manage the situation himself.

Delete the keys or I have to tell legal what's happening.

The researcher NEEDED TO HEAR THAT.


>> Delete the keys or I have to tell legal what's happening.

>> The researcher NEEDED TO HEAR THAT.

I'm not in security, but from the outside looking in, how things worked out just doesn't smell right.

If "the researcher NEEDED TO HEAR THAT" is the priority, then why waste time looking up who the guy works for and calling them instead?

The simplest and most obvious way to tell the researcher is to tell him directly in the clearest way possible. It isn't as though there wasn't a pre-existing line of communication with the researcher.


My reading of tptacek's subtext is that Facebook wanted to show the researcher that they were really, ALL-CAPS serious, as in "get you fired and ruin-your-livelihood if you don't stop" serious. These mafia tactics are fine because the Facebook CSO "built a good team and knows what he is doing"


If FB wanted to show the researcher that they were really, ALL-CAPS serious, then they would talk to him directly, as in "You've got stolen data and we're going to have the FBI arrest you, seize your computers, put you in jail, and ruin your livelihood if you don't stop" serious.

So I still don't see how calling the guy's boss trumps that in terms of scariness. Because if I'm the wronged party (i.e., FB), that's what I'd do if I couldn't resolve it amicably.


If we are disagreeing, I don't quite follow your argument - I never said that this was the worst/scariest thing Facebook could to do (there's no upper limit). What I meant was that the action by Facebook was intended to intimidate (and not that the specific form of intimidation was the worst possible)


We're not disagreeing. I think your interpretation of tptacek's subtext is the same as mine.

In some of his posts, he has been, however, comparing the researcher's dump to criminal activity -- something I am not in disagreement with.

His implication that calling the researcher's boss is a sensible approach to intimidating the researcher for potentially criminal activity -- that in particular seems like a stretch if he's being truly objective.


I don't doubt that he's put together a great application security team. Or that he even knows his shit. And I do understand how it works. CSOs are happy when this kind of shit gets discovered when they can't get other teams onboard to fix it. They're unhappy when it gets discovered when they intentionally ignore it in favor of another initiative (particularly if there's a paper trail showing that someone brought it to their attention). Or when they've already spent a bunch of money and resources fixing it only for everyone to find that they haven't fixed it at all.

There are basic things you can do to mitigate or isolate damage in AWS, and they either aren't doing them or have done them badly. Even if he couldn't convince the rest of the company that god-mode keys are bad, he still could have built out some basic infrastructure to track when and where the keys were being used from, so red flags could be raised when some random IP address is used to pull down several buckets.
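
The "track when and where the keys were being used" piece is off-the-shelf too; a sketch of the sort of check that would have flagged this, assuming CloudTrail is enabled (the key ID is a placeholder):

    import json
    import boto3

    ct = boto3.client("cloudtrail")
    events = ct.lookup_events(
        LookupAttributes=[{"AttributeKey": "AccessKeyId",
                           "AttributeValue": "AKIA...EXAMPLE"}],
        MaxResults=50)
    for e in events["Events"]:
        detail = json.loads(e["CloudTrailEvent"])
        # An unfamiliar source IP pulling whole buckets is the red flag.
        print(detail.get("sourceIPAddress"), e.get("EventName"))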


You're perfectly right, but his employer didn't need to hear it. And that's the whole crux of the matter.


If you read the article, his company does security research and found a vulnerability in Hotmail. Plus he was using his company's email address.

> At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

That's a big mistake. DO NOT EVER USE YOUR COMPANY EMAIL ADDRESS if you are doing this on your own. The employer has the right to know. Imagine using a company email address on Ashley Madison. Yeah, plenty of people were embarrassed after that hack.


Actually his write up makes pretty clear that he didn't use his company email until after Alex went over his head to the CEO.

Second, everything else being equal, Alex going to the CEO without calling or mailing the researcher first was a mistake. Going to someone's boss and saying "please do something, I don't want to get the lawyers involved" IS an implicit legal threat, both to synack and the researcher.


What Alex wrote is a bit interesting given his update (emphasis my own):

>At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

According to Alex this is the timeline:

1. Researcher not happy with sum

2. Researcher already in contact using Synack email address

3. Alex calls Synack CEO

From the researcher's blog:

>I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

This means that either Alex is lying, or he is telling exactly the facts needed to lead to a specific conclusion and nothing more, or the researcher is lying. And he's "written blog posts that are used by Synack"? Come now. That reads a lot like someone looking for a third item so they can make a comma-separated list of reasons. His post smells like bullshit.


I like how we're talking about Stamos warning a guy running around with stolen AWS credentials for all of Instagram in the same fashion as we'd talk about a DMCA threat. "Implicit legal threat"? There's nothing "implicit" or subtle about what was happening here.


You seem levelheaded throughout the thread and make good points more articulately than I ever could, but this seems a bit emotionally involved.

It's possible that we're all correct: this guy could be a wildcard researcher who plays fast and loose, and the CSO could be covering his own ass. You say he's building a first-rate application security team. Is it hard to believe that he could have made the mistake of focusing almost exclusively on that?


LAUGH... I love it, even his staunch defender friend says he's lying: "I did not threaten legal action against Synack or Wes"

https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


I get your point in these threads, but unless I'm misunderstanding, who cares about stolen, potentially undeleted Amazon creds? Revoke the key in the portal and be done with it?

Given who I'm replying to, I'm assuming that I'm missing some key piece of the puzzle.

(And I totally acknowledge it doesn't change the circumstances of what either side has done, I'm just curious)


The point is, having those is a prosecutable offense, if Facebook chose to prosecute. So it's a big threshold to cross legally, even if not meaningful from a programmer's perspective.


Facebook's terms say they will not prosecute whitehats or report them to law enforcement. Facebook could prosecute, at the price of some goodwill from the security industry (or part of it). I'm sure a competent lawyer could mount a robust defence for the security researcher (beyond reasonable doubt, IMO).


You're missing my point and talking about something entirely different from what I'm talking about. I'm not talking about whether Facebook will prosecute and what the consequences of that will be (whether they'll win or lose whatever).

I'm just pointing out that taking AWS keys is a big deal, because it's legally a big deal.


Facebook's disclosure policy reads:

>If you give us reasonable time to respond to your report before making any information public, and make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.

IANAL, but it could be argued (in court) that he had Facebook's permission to get the AWS keys. In his opinion (and mine) he made good faith efforts to avoid privacy violations.

Facebook's official disclosure policy has legal weight. There is a legal concept (whose name escapes me) that could apply, which in layman's terms says the official disclosure policy gave him Facebook's tacit approval - I first heard about it in Oracle v. Google, where Google argued that a blog post congratulating Google provided tacit approval.


The part you emphasized is dependent on the first part of that sentence, however. In this hypothetical lawsuit Facebook's lawyers would easily be able to argue that they would not have done anything about the initial exploit, or even about demonstrating that he had recovered valid AWS keys, but that attempting to hoover up data from S3, etc. violated the “good faith effort to avoid privacy violations” part.


>but that attempting to hoover up data from S3

That's a mischaracterization given his description. He examined the filenames/metadata specifically to avoid buckets that might contain user data.


1. This assumes his description of his actions is completely accurate

2. This assumes that he was perfectly accurate in his assessment of an unfamiliar project's naming conventions, data structure, etc.

3. This assumes that he was perfectly reliable in making the actual copies and didn't accidentally include potential personal data (e.g. who knows what might be in a log file?)

The problem is that we're talking about someone who already decided to exceed the bounds of what was clearly protected under the bounty program. He'd already reported the initial vulnerability and been paid for it but waited until later to mention that he'd copied a bunch of other data, had access to critical infrastructure, and wanted more money.

It seems fairly likely that this wasn't malicious but rather just poor judgement, but that makes it very hard to assume that outside of that one huge lapse in judgement he did everything correctly. It's really easy to see why Facebook couldn't trust his word at that point since it's already far outside normal ethical behaviour.


To your first point: There's being skeptical and then there's calling someone a liar without actually calling them a liar because you don't have any justification for doing so. This is far from the first time I've seen this on HN and it's really not okay. There's no point in speculating about the veracity of this person's statements until there's a reason to.

To the second and third: They only require that a researcher "...make a good faith effort to avoid privacy violations..." and I'd say he met that. You can argue that the entire endeavor wasn't in good faith but he certainly made a significant and conscious effort to avoid private data.

I think his biggest lapse in judgement was that he brought security operations issues to light in a bug bounty program run by the people that would be most embarrassed by them. Application security bugs are created by the engineering team and the CSO's application security team fixes them (or advises or whatever). Security operations issues are entirely the responsibility of the CSO's department.

Facebook (as an organization) should be thanking him. While he didn't expose application security bugs he exposed significant operational issues and blind spots. Keys with far too much access, lack of log inspection, lack of security around what IP addresses a key can be used from, etc. Operational issues and lapses in operational security are what got Twitter in hot water with the FTC in 2010. It's not as easy to play cowboy with operations as it used to be.

The CSO hasn't been around for long but by all accounts he poured a lot of effort into hiring an application security team. Perhaps that's his specialty but even one experienced technical manager hired for security operations could have caught these basic issues. They probably wouldn't have addressed the lack of least privilege in that time frame but they could have easily spun up logging to catch some rando on an unknown IP address using their keys.
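
For reference, restricting where a key can be used from is a one-policy fix. A minimal sketch with the real AWS CLI (the user name, policy name, and CIDR here are hypothetical):

    # Deny use of this user's keys from anywhere outside a known address range:
    aws iam put-user-policy --user-name example-ops-user \
        --policy-name deny-unknown-ips --policy-document '{
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}}
          }]
        }'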

But like I said, he hasn't been there for long, so I don't blame him for the failure. What I do blame him for is calling up the employer to threaten them as leverage to shut up the researcher. I blame him for posting a thinly veiled justification for doing so. He could have addressed this openly, talked to the guy directly, and gone to the other C-level execs with it as justification for getting everyone on board with fixing it, but instead he tried to keep it contained to his department.

I understand how he must feel, being the new guy who's responsible for the outcome but not for creating it. I know he'll get questions that he might not be able to answer, since they probably aren't logging bucket access. Questions like, "Who else got a copy of these keys and what did they access?" Saying "I don't know and we may never know" in response to that, even if you only took charge three months ago, is rough.


Again, you're quibbling about legal details that are not relevant to my point. I'm pointing out that his actions are a big deal because they crossed a legal threshold where a company would have a somewhat decent case to prosecute you. I don't care whether or not they would succeed.


Then why the immediate escalation?

Wouldn't it have made more sense to contact the researcher directly, rather than using his position of power to pressure the researcher's company's CEO?

Why not assume good faith? (Which is what I would think a white hat bug bounty program should assume)


I am not sure what part of

"he has interacted with us using a synack.com email address,"

invalidates my reading that he was using his company's email?


At the bottom of his post he replies to Alex's post:

> I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around.

If that is true, it shows either poor judgement from Alex, or bad intent behind the call to Synack.


It says nothing about who initiated contact using his company email address. It could have said "he contacted us using" or "his Facebook account was associated with", but instead it says "he has interacted with us using". Sometimes what's not said tells us as much, if not more.


The Facebook reply does not state that he was using his company email address to report issues or to communicate prior to them reaching out. The researcher says that he only used the email after Facebook got his employer involved.

The Facebook post does not, in any way, contest that.


Technically it doesn't contest that, but he uses multiple weak points in the bullets prior to the one about contacting the employer's CEO. The intent was clearly to establish that his decision to contact the researcher's employer had merit. It was carefully written so it remained factual while implying things that aren't.

I'm not disagreeing with you, only making it clear that yeukhon was played by Alex exactly as intended, so he'd be out there defending him on sites like HN.


He didn't use his company address until he was contacted through his company.


In any case, one question remains: how does Facebook define a "million dollar" bug if the security team is not aware of the damage it can do? Since this is not the first time this bug was reported, did they actually give a big bounty to the person who made the initial report (given that it can lead to this much damage)? Or just another small bounty, saying that it's not a very important security flaw?


There are enough laws against "cybercrime". If Alex felt threatened he should have escalated the issue to the FBI. There is no good reason to call the employer. By doing so, Alex threatened to fuck up Wesley's life.

edit: Or - after calling the CEO - he should have contacted Wesley directly so they could de-escalate the problem together.


> The researcher NEEDED TO HEAR THAT.

I don't disagree. But why go through his employer, when they already had a direct line to the researcher himself?


Intimidation.


Relatively new security teams are almost useless. In 2-3 years FB might have its shit together, but three months is nowhere near long enough to fix their problems.


Agreed: after looking at his LinkedIn profile, it's hard to blame Alex for the problems, as he's only been there a short while. However, he can be blamed for creating all this unnecessary drama.


If you understood how big companies work, you'd know it takes more than a few months to build "one of the best teams". This is the one thing in Alex's favor: he's new to the job. Still, if you also understood how big companies work, you'd know that everyone hates the drama queen.

The right move here would have been to not threaten Wes, to pay him, and to just update the policy.

Lesson learned for Alex and his friends: do not threaten individual contributors, or suffer massive freaking drama. Thank you, internet.


> I'm not saying this researcher was 100% in the right, but this is the CSO ass covering. "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

The response from FB's CSO is very specific to a very specific blog publication, not to the flaws in how their AWS buckets are used.


I'm not sure what you're getting at.


Your statement:

> "Don't pay attention to the obvious operational deficits, the problem is the researcher overreaching."

mischaracterizes the response by FB's CSO as one attempting to draw criticism away from operational flaws by instead placing focus/blame on the researcher's methodology.


I disagree.

A security researcher went public with a story of "I found this massive security hole and Facebook tried to avoid paying what I thought it was worth, and then threatened me with legal action"

The response that Alex thinks he needs to make is "my actions were reasonable because ..."

From external appearances it seems as though he is more concerned about looking like a heavy-handed, lawyer-invoking CSO than about the publicity around FB having an unpatched RCE that allowed access to highly-privileged AWS keys.

What he chooses to write about is a reflection of what he saw as the most important news in the original blog post.

I suspect he's actually right. The blog post will probably raise more bad publicity around the way FB handled the research & disclosure than the existence of the bug, and it's the piece that needs to be resolved well.


You're right, that was the purpose of trying to keep him quiet by contacting the CE-freaking-O of his place of employment with an implicit legal threat. The blog post is an attempt to do damage control when he realized the researcher wasn't going to put up with that and went public.


> by revealing that he'd retained AWS credentials from Instagram long after they'd closed the vulnerability that he used to get them.

How would that change anything?

If Facebook did rotate all keys the moment the researcher reported it, they made no difference.

If Facebook did not, then they aren’t taking care of their security properly.


Without defending the researcher here, I thought that was the weakest point in Facebook's response. Was he interacting with Facebook using his synack.com email address during this exchange rather than at some point in the past? Was he signed up on Facebook with his synack.com address? (I haven't used the bug bounty program but it appears to require a user account.) Did he mention his employment with Synack in the course of the exchange? If any of those things were true, I suspect they'd say so, rather than leaving it at "has interacted..."

I don't know, if the guy was just shaking them down then maybe trying to get him fired is indeed a reasonable thing to do, but I don't buy that anyone would have just assumed under the circumstances that he was doing all of this on the clock.


"I never contacted Facebook or Alex using my work email account. It was only after Alex contacted my employer via email that I sent a reply from my work account. Alex indirectly contacted me at work, not the other way around."


I don't think it does. Wes asked for communication via Facebook's own tools for it, didn't get it, and they went around him to his boss. That's crap.

Now, Wes exfiltrating data rather than just looking at it? Not cool. But Facebook's side of the story is just as biased as his.


But it seems obvious that in doing so he wasn't acting in good faith.


Yeah, why not just a quick email- "Hey are you working for Synack here or independently?"


Supposedly he was using his synack email address, why would they assume he worked independently?


He posted a reply on his blog saying that he only used his Synack email address after the initial exchange with the Synack CEO.


At this point, it was reasonable to believe that Wes was operating on behalf of Synack.

Huh? How did you make this connection? Why would he then report his findings to you?

From my point of view, contacting his employer was clearly meant as a gut punch.


This section was 100% written by a lawyer, and is intended to sound obvious without in fact being obvious at all.


Shame on you for contacting his employer directly. This teaches a good lesson to all the black, grey and white hats out there. Next time they'll know to just p0wn to 0wn.


IMO you are just trying to cover yourself, and poorly. You should accept the guilt of having had a server with a well-known vulnerability that held the keys to the kingdom, instead of blaming everything on Wes.


Um... Have to side with Wes here. Your rules were not nearly adequate, and instead of going at Wes directly with adequate and in-depth communication, the CSO went after his employer - which is _not_ ethical.


Sorry, but it looks like your technical issue has become a PR issue. Contacting his employer was an act of intimidation, and no amount of cover-up will make up for it.


"I did say that Wes's behavior reflected poorly on him and on Synack, and that it was in our common best interests to focus on the legitimate RCE report and not the unnecessary pivot into S3 and downloading of data."

You lost me at this point. Who do you think you are really?


He must be pretty delusional if he thinks that's an OK thing to write on a blog. If I were him I'd deny, deny, deny, or try to make it seem a whole lot less sinister than it is.


Quite frankly I'm not surprised Wes is sour about how this was handled and the amount granted as bounty.

It's very rare for a single vulnerability to grant you the keys to the kingdom. If you check pwn2own, the vast majority of the hacks leverage more than one. Most major attacks start with a small bug.

The real severity of a vulnerability is how far it can be pushed to broaden the scope. In this case that admin panel was just an entry point to a whole chain of security SNAFUs (AWS keys in files at a multi-billion-dollar internet company, seriously?).

To reiterate, he got access to:

- source code
- AWS keys
- a plethora of 3rd-party platform keys
- a bunch of private keys
- user data

This might not be the million dollar bug, but it's close.

Just think about what an actual attacker could have done with it:

- log in as / impersonate ANY Instagram account
- impersonate the whole of Instagram (code + SSL keys!)
- inject malware into the Instagram app and sign it with the stolen keys
- download tons of user data
- wreak havoc in AWS (possibly expanding their access - we don't know what else he would have been able to reach had he spent weeks, not hours, exploring)

This is not a missing permission check allowing you to delete other people's photos. This is huge, and based on that, credit and a significantly higher bounty are due.

Aside from that, the handling of the whole matter was not good:

- if your policy is not precise, interpret it to your own disadvantage; you screwed up by not making it clear
- contacting his boss should only happen (if at all) after the researcher has been asked for the same account of events
- the post about "bug bounty ethics" misses the point; following its logic, the Heartbleed investigation should have ended when someone discovered a buffer over-read, without exploring where it leads


> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.

Isn't it a security flaw that a single AWS key was able to access all of Instagram's data?
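
For comparison, a least-privilege key would be scoped to the buckets its service actually needs. A minimal sketch with the real AWS CLI (the user, policy, and bucket names here are hypothetical):

    aws iam put-user-policy --user-name example-image-worker \
        --policy-name one-bucket-only --policy-document '{
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-images/*"
          }]
        }'

With a policy like that, a leaked key exposes one bucket, not the kingdom.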


No excuses for contacting his employer though. Just plain intimidation.


You talk about ethics like it is an entirely black and white concept. I would consider a lot of Facebook's practices unethical in comparison to my own set of ethics. There are ethical dilemmas, which are basically what most discussion about ethics is about to begin with. You use the word unethical but without discussing ethical dilemmas, and that makes your argument weak even though you potentially have a very strong argument.


What he did do is expose that you guys don't know how to use AWS and S3. Those keys should never have been on a server in the first place. I think it would have been in your best interest to fix it and pay him. Now that other hackers know Instagram sucks at server management, it is only a matter of time before someone finds another key. Guess what they are not going to do? They are not going to report it; they are going to download and sell your info.


I hope someone calls your CEO and talks to him about your conduct.


If the intention of a bug bounty program is to encourage white hat disclosure, you have done pretty much everything you can to ensure vulnerabilities are dealt with in a black hat manner.

Well done.


I'm not really impressed by your reaction...


> The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself.

A security "mistake" then? :)


Thank you for the response, Alex, especially the details about the researcher's email address and affiliation. It makes your actions seem reasonable, in my opinion. As a security researcher, I personally would not be dissuaded from reporting to the Facebook Whitehat program due to this incident.

I'm glad companies can offer transparency like this.


I think his response was too personal. They're both adults, and calling the CEO of his employer to make a point, because you can, is to me way too close for comfort.

There were other personal attacks in his response that I've talked about here: https://news.ycombinator.com/item?id=10755402


> Thank you for the response, Alex... It makes your actions seem reasonable... I'm glad companies can offer transparency like this.

The people who like you the most are the easiest to persuade.


> At no time did we say that Wes could not write up the bug, which is less critical than several other public reports that we have rewarded and celebrated.

There is no bug more critical than one that results in complete access to Instagram infrastructure. Sure, the bug is stupid, but you are fooling yourself.


Couldn't it be argued that Instagram's choice to store private keys in a third-party system (Amazon) is a million(s?) dollar bug?


Why have you not rotated your private keys?

  notBefore=Apr 14 00:00:00 2015 GMT
  notAfter=Dec 31 12:00:00 2015 GMT
(Feel free to respond here if you want to pay me the bug bounty for this)


    $ echo | openssl s_client -connect www.instagram.com:443 2>/dev/null | openssl x509 -noout -dates
    notBefore=Apr 14 00:00:00 2015 GMT
    notAfter=Dec 31 12:00:00 2015 GMT
AWS bucket creds are not the same thing as SSL certs, and were most likely scoped to only the relevant S3 buckets, which are totally separate from any load balancers.


I never claimed that AWS bucket creds were the same thing as SSL certs.


Then rotating their SSL keys shouldn't be relevant.


Unless I'm misunderstanding, it's relevant because this researcher was able to access (from the blog):

> SSL certificates and private keys, including both instagram.com and *.instagram.com

If this researcher was able to access it via not much more than a hole that was _already reported multiple times_, then I think it's not a stretch to think that [many?] other less honest parties could (and in my opinion most likely do) already have it.

If it was me, even if it's definitely only a single researcher who got access (and it doesn't sound to me like they know that for sure - but regardless), something _that_ sensitive would have to be rotated anyway. If it were someone outside the teams that strictly require access to it operationally, I'd rotate it - let alone someone outside the company.


Going to his employer, instead of talking to him direct was just petty.


Yep, my opinion of Facebook reinforced to the highest extent. Utter amateurism and disgusting behaviour. What an absolutely idiotic way to handle this situation, and coming from the very top. I haven't used Facebook in years, thank you for an excellent reminder to delete my Instagram account.

edit: Alex, how about the "shit, we really fucked up; I apologise to our users, yadda yadda" blog post?


In stories like this, try first to remember that Facebook isn't a single entity with a single set of opinions, but rather a huge collection of people who came to the company at different times and different points in their career.

Alex Stamos is a good person† who has been doing vulnerability research since the 1990s. He's built a reputation for understanding and defending vulnerability researchers. He hasn't been at Facebook long.

To that, add the fact that there's just no way that this is the first person to have reported an RCE to Facebook's bug bounty. Ask anyone who does this work professionally: every network has old crufty bug-ridden stuff laying around (that's why we freak out so much about stuff like the Rails XML/YAML bug, Heartbleed, and Shellshock!), and every large codebase has horrible flaws in it. When you run a bug bounty, people spot stuff like this.

So I'm left wondering what the other side of this story is.

Some of the facts that this person wrote up are suggestive of why Facebook's team may have been alarmed.

It seems like what could have happened here is:

1. This person finds RCE in a stale admin console (that is a legit and serious finding!). Being a professional pentester, their instinct is that having owned a machine behind a firewall, there's probably a bonanza of stuff they now have access to. But the machine itself sure looks like an old deployment artifact, not a valuable asset Fb wants to protect.

2. Anticipating that Fb will pay hundreds and not thousands of dollars for a bug they will fix by simply nuking a machine they didn't know was exposed to begin with, the tester pivots from RCE to dumping files from the machine to see where they can go. Sure enough: it's a bonanza.

3. They report the RCE. Fb confirms receipt but doesn't respond right away.

4. A day later, they report a second "finding" that is the product of using the RCE they already reported to explore the system.

5. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to pay for the second finding, and asks the tester not to use RCEs to explore their systems.

6. More than a month after Facebook has nuked the server they found the RCE in, they report another finding based on AWS keys they took from the server.

So Facebook has a bug bounty participant who has gained access to AWS keys by pivoting from a Rails RCE on a server, and who apparently has retained those keys and is using them to explore Instagram's AWS environment.
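
To make step 2 concrete: a pivot like that usually starts with nothing fancier than grepping the compromised box for credentials. A hedged sketch (the paths are guesses; AKIA is the real prefix of AWS access key IDs):

    # After the RCE, hunt for AWS credentials rather than stopping at proof:
    grep -rEl "AKIA[0-9A-Z]{16}" /etc /srv /home 2>/dev/null
    grep -ril "aws_secret_access_key" /etc /srv /home 2>/dev/null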

So, some thoughts:

A. It sucks that Facebook had a machine deployed that had AWS credentials on it that led to the keys to the Instagram kingdom. Nobody is going to argue that, though again: every network sucks in similar ways. Sorry.

B. If I was in Alex's shoes I would flip the fuck out about some bug bounty participant walking around with a laptop that had access to lord knows how many different AWS resources inside of Instagram. Alex is a smart guy with an absurdly smart team and I assume the AWS resources have been rekeyed by now, but still, how sure were they of that on December 1?

C. Don't ever do anything like what this person did when you test machines you don't own. You could get fired for doing that working at a pentest firm even when you're being paid by a client to look for vulnerabilities! If you have to ask whether you're allowed to pivot, don't do it until the target says it's OK. Pivoting like this is a bright line between security testing and hacking.

This seems like a genuinely shitty situation for everyone involved. It's a reason why I would be extremely hesitant to ever stand up a bug bounty program at a company I worked for, and a reason why I'm impressed by big companies that have the guts to run bounty programs at all.

(and, to be clear, a friend, though a pretty distant one; I am biased here.)


I think you're right on most points, but after reading the write up and response I do think Alex reached out to the employer first instead of the researcher as an intended act of intimidation. That was a mistake.

If it was not done for the purpose of intimidation, then Alex simply would have asked the CEO if the researcher was acting on the company's behalf and after hearing "no" would have ended the call and contacted the researcher directly.

Seems simple doesn't it? Perhaps you are not seeing it due to your friendship, but it seems like a dirty move and only serves to call into question how Alex handled other aspects of the situation.


> If it was not done for the purpose of intimidation, then Alex simply would have asked the CEO if the researcher was acting on the company's behalf and after hearing "no" would have ended the call and contacted the researcher directly.

Then the CEO is going to contact the researcher and he's screwed either way. God knows what the CEO would have said to the researcher privately. Having a middleman to translate is a bad idea in an emergency.

Let's face it: when you use your work email and make another company paranoid, you are putting people on the spot. The employer needs to know (they have legal responsibility), and given the prior research they did and the researcher's claims, I think the reach-out is absolutely correct.

Instagram's infrastructure has flaws. That's bad, but everyone's infrastructure has flaws. Shit has to be fixed. Doing more than what was needed is bad. If I were told to stop dumping data, I would stop.


Yeah, totally. "I did not threaten legal action against Synack or Wes" Who the f do you think you're kidding, Alex?


Coming from a pentesting background (and now working as a CISO), I can see both sides to this. tptacek is almost certainly correct in his characterization of the events, and I agree wholeheartedly with what he's said. It's important to note that this researcher didn't just chain several exploits together, but sat on sensitive data unbeknownst to Facebook in order to exploit other vulnerabilities later. Those vulnerabilities could not have been exploited without the initial (fixed) compromise.

Think about it a different way. If this researcher had found SQL injection in a webapp, dumped the usernames and passwords, and reported the vulnerability for a bug bounty, he should get paid. If he kept each of those credentials, and then logged into other systems using higher-privilege accounts that he'd compromised even after the SQLi was fixed, he would basically be continuing the exploitation of an already-fixed bug. That doesn't deserve a payout. Similarly, if he'd established some sort of persistence (such as a reverse shell, etc.) on compromised assets, he couldn't keep coming in and getting more and more bounty payoffs. Fruit of the poisonous tree, in this case.
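
To illustrate that line with the SQLi example, a hedged sketch against an entirely hypothetical endpoint:

    # Proving the injection exists - a boolean probe that touches no user data:
    curl "https://app.example.com/items?id=1%27%20AND%20%271%27=%271"
    # Dumping credentials - the step that crosses the line:
    curl "https://app.example.com/items?id=1%27%20UNION%20SELECT%20username,password%20FROM%20users--"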

Where I disagree with tptacek is with regard to the benefit of bug bounty programs. Although I'm not currently running one, I find the idea fascinating and helpful for two primary reasons: first, you're almost definitely going to see generally better results in a well-managed bug bounty program (not necessarily something like Facebook's White Hat program) than traditional pentests or application security assessments. More eyes are almost always better when searching for tricky problems. Secondly, if you're a large enterprise, there are already people "testing" your security. I'd much rather be able to pay out a researcher than drive them to more nefarious buyers. You will probably encourage many people to test your security (which screws up metrics) but if finding security problems is the ultimate goal, it's worth it.

Even in this case, Facebook did discover an RCE that could have been (and kind of was) extensively exploited, precisely because they run the bounty. If an actual malicious hacker had found that problem first, they would have been in significantly worse shape.


> If he kept each of those credentials, and then logged into other systems using higher-privilege accounts that he'd compromised even after the SQLi is fixed, he is basically continuing the exploitation of an already-fixed bug.

Why did those credentials still work post-report?

What if those credentials were accessed from a public dump?

The outcome of this entire clusterfuck of a bounty is one of the reasons there are still very well paid blackhats. There are no rules or terms to follow.

If their terms aren't clear (the terms he's citing certainly weren't intended for keys, rather Facebook user accounts/information), pay out and fix them.


I'd agree, but technically speaking, the bug is not fixed if credentials don't get reissued. Someone might already have access to them.

Also, you can't just expect that "oh, just delete your data pls" will work, can you? You can't trust anyone that literally hacks your system.


Whilst I'd agree that bug bounty programmes can be a good idea for Internet facing assets, I thought that this story actually neatly illustrated their limitations.

With a bug bounty programme you don't generally authorise the kind of post-exploitation activities which we see here as leading to the really serious exposures, and that's not surprising as you can't easily authorise a set of unknown people to be processing your customer data.

This differs from an engaged penetration testing firm, with whom you have a contract which covers things like handling of data gained during a test.

So I don't really see bug bounties ever replacing penetration testing companies for internal work or anything that requires accessing customer data as part of the exploit...


Thanks for the writeup. Based on what you've written, it sounds like you would have been surprised if Facebook had paid $1 million for the original report (absent any further nefarious behavior by OP), since it was probably due to a simple oversight, even though it was an RCE that obviously could have been turned into total ownage of Instagram. Is that accurate? If so, what class of vulnerability would make you say "Yep, that's totally worth $1 million"?

Or do you think he should have just stopped and Facebook should have realized how bad it was and paid him a lot more than $2500?


There isn't a parallel universe in which this finding is worth $1,000,000. If it were, every pentester in the country would be getting way underpaid, because this is not an uncommon pentest finding.


> If it were, every pentester in the country would be getting way underpaid, because this is not an uncommon pentest finding.

No wonder there's a flourishing (and well-paying) black market for vulnerabilities. I wonder how much this keys-to-the-kingdom vuln would be worth (MITM Instagram, bootstrap a botnet, steal celebrity pics, ... the possibilities are endless).


There is no market for these kinds of vulnerabilities at all.


Makes sense, I'm just trying to get a sense of what sort of thing would be worth that much. Obviously only Facebook can answer that for sure. Heartbleed?


It's really dependent on the company. A Ruby RCE would have the same effect on an entirely-Ruby-stack company that Heartbleed did.

I don't believe any company would pay $1M for a bounty on their own systems. Only people who intend to use the vuln, or vendors who need to fix it, would.

For a vuln to go for $1M requires a "discovering SQL injection" level of vuln. MS paid $100K for an entire vuln class with the ASLR/DEP bypass discovery, and promptly patched the shit out of it. For a remote vuln class, I could see them paying $1M quite happily to not have all of their products re-owned.


What about the parallel universe in which bug bounty hunters are blackhats who directly profit from the exploit? It seems like someone with that level of access could run up, among other things, a decent AWS bill.


I don't know about you, but I value the certainty of not losing a few years of my life to court proceedings/jail time at significantly above $50M.


Well, obviously we're talking about the mirror universe where nerds get away with things instead of scapegoated. Also goatees everywhere.


> † (and, to be clear, a friend, though a pretty distant one; I am biased here.)

Alex is a good friend of mine and I've known him since college. He's definitely a good guy and understands the ins and outs of security vulnerability research, having done it himself for many years. I'm sure he didn't take the action of calling the researcher's employer lightly, and probably had a really good reason to do so.

There has to be a side of this story we aren't hearing, and probably never will.


> I'm sure he didn't take the action of calling the researcher's employer lightly

He's the CSO, and this occurred under his watch. The exploit was 2 years old and well known. It highlights an internal security problem at Facebook et al., of which Alex sits at the top.

In this situation, his years of "doing it himself" are unlikely to have factored in - rather, he felt like he dropped the ball and could be facing some consequences, or at the very least felt embarrassment.

This would have led to a rash thought process, and perhaps Alex jumped to the conclusion of some sort of sabotage by another company.


> I assume the AWS resources have been rekeyed by now

It doesn't look like the SSL cert on instagram.com has changed recently, and the pentester specifically claims to have obtained its private key.


a private key. It's not uncommon to have multiple simultaneously-valid certificates for the same domain. I'd argue that it's actually sort of irresponsible and therefore surprising for a site at the scale of Instagram not to, for backup purposes.


But using that private key could still grant him access to someone's traffic to their machines. Isn't revocation necessary to ensure security in that domain ever again?
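
(For anyone who wants to check, a rough sketch with openssl - issuer.pem has to be fetched from the CA chain separately, and the responder URL comes from the second command:)

    echo | openssl s_client -connect www.instagram.com:443 2>/dev/null \
        | openssl x509 -outform PEM > cert.pem
    # Find the cert's OCSP responder:
    openssl x509 -in cert.pem -noout -ocsp_uri
    # Ask the responder whether the cert has been revoked
    # (substitute the URL printed by the previous command):
    openssl ocsp -issuer issuer.pem -cert cert.pem -url http://ocsp.example-ca.test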


every network has old crufty bug-ridden stuff laying around

"stuff" was the keys to the kingdom, do you think this is acceptable for a company like facebook? So instead of them making an apology, the CSO is trashing the guy who gave them the wake up call?

I do think you are heavily biased ;)


They should have just paid him the money, told him not to do it again, fixed the architecture bug, updated the rules, and moved on.

Alex just went the drama route.


If there's a grey area in your ToS, and a security researcher/hacker type is in the middle of it - the smart route is to appease them and fix the grey area. FB has a lot of resources, and it wouldn't have to deal with the blowback from this.

Why make such a bad situation worse, if you don't have to?

FB messed up. The researcher partly messed up too. Fix it and move on.


If you're biased, you should do the ethical thing and stay out of it, honestly. There is a ton of asymmetry here, and you and your Facebook CSO friend are being bullies. This is pretty grey, you don't have first hand knowledge, and obviously Alex can do no wrong in your eyes.


Everyone is biased. Presenting your arguments and declaring your biases so others can take them into account is the ethical thing to do.

This reminds me of the illusion of objectivity in journalism. If you pretend to be perfectly objective and unbiased, you're lying.


Please address where in your story calling the employer by your good distant friend would be justified.

Sounds like a jerk to me.


As mentioned in Alex Stamos' response, he believed Wes was working on behalf of Synack, and contacted the CEO directly.

Escalating issues with a company to the CEO of that company doesn't seem like jerk behavior.

Wes counters that, "[Alex] never for a second believed I was operating on behalf of Synack"

I'm not sure how Wes knows what is going through the mind of Alex, so I'm inclined to take Alex's word on this.


> Wes counters that, "[Alex] never for a second believed I was operating on behalf of Synack" I'm not sure how Wes knows what is going through the mind of Alex

As blazespin[1] mentioned in this thread, Facebook's own terms states that they only pay individuals. That's how Wes knows - because Facebook's bounty program never deals with companies. The only other explanation would be Alex is ill-informed about the terms of Facebook's bounty program.

1. https://news.ycombinator.com/item?id=10755746



There is no reason for the researcher not to retain those keys, IMO. Once the company found those keys to be compromised, they should have been revoked immediately and considered 'in the wild'. The fact that they didn't revoke those keys is basically a security violation in itself.
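
Rotation is also cheap, which makes any delay harder to excuse. A minimal sketch with the real AWS CLI (the user name and key ID here are hypothetical):

    # Issue a replacement key:
    aws iam create-access-key --user-name example-instagram-service
    # Deploy the new key to the services that need it, then disable and
    # delete the compromised one:
    aws iam update-access-key --user-name example-instagram-service \
        --access-key-id AKIAOLDKEYEXAMPLE --status Inactive
    aws iam delete-access-key --user-name example-instagram-service \
        --access-key-id AKIAOLDKEYEXAMPLE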

Dumping the users table on an 'internal' (heh) dashboard - any company that runs these bounty programs needs to clarify what a 'user' is. Is it someone using their application, or does it include employee information as well? It's an important distinction.


Your characterization of the AWS keys being sat on for over a month does make sense, now that you frame it in that light.

That said, Alex Stamos and the rest of the security team should have tried to figure out what vulnerabilities existed from this server instead of just nuking it and thinking that the problem was solved. That was lazy and stupid.


If the researcher hadn't tried to find out what he could do with those AWS keys, they would likely still be valid. It's conceivable that other people found them too and did the same as the researcher, only kept everything to themselves. Thus, if the researcher hadn't done the thing you consider bad, users of Instagram would currently be more vulnerable. Why, then, is the thing the researcher did bad?


> 5. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to pay for the second finding, and asks the tester not to use RCEs to explore their systems.

The issue here is that, in hindsight, FB failed at this step.

They nuked the server, but they didn't determine what sensitive information was available on that server, and take steps to mitigate those risks.

I think that's an understandable mistake - cleaning up after a server intrusion is hard. Knowing how much to do after a possible intrusion is even harder. But it is still a mistake and it happened on Alex's watch.

If the purpose of the bounty program is to find out about your security mistakes, then the program did its job here, and Alex should be pleased that the problem was reported so that they could fix it.

That the researcher found the mistake by overstepping what is considered ethical (and I have no doubt that they did overstep) creates a very difficult situation - you don't want to reward that behaviour, but you do want to know about security problems and this one was only discovered/reported because of that bad behaviour.

In that difficult situation it is all the more important to tread carefully. The easy cases where you're paying out a $10k bounty typically don't require much finesse. It's the tricky cases where you need to make sure your actions are well considered and above-board at every step.

From Alex's own summary it's evident that he didn't handle it as well as he could have.

Two of the longest paragraphs in Alex's write-up cover what he said to the CEO of Synack, even though Synack had nothing to do with this. Even if we accept that Alex thought it likely that Wes was acting on behalf of Synack (personally, I don't think that was a reasonable conclusion to draw, though I assume Alex is sincere in his view that it was), he should have determined that up front, and then, once he knew it was not work related, he should have avoided:

- making accusations about Wes's ethics to his boss ("Wes ... had acted unethically")

- suggesting that his external behaviour has implications for his employment ("Wes's behavior reflected poorly ... on Synack")

- bringing in the threat of lawyers ("keep this out of the hands of the lawyers")

When faced with the difficult situation of legitimate security research that has (well) overstepped the ethical boundaries, all the evidence is that Alex jumped to the position of protect yourself, protect the company, intimidate and control the researcher and though that is a common and understandable reaction, it's not the way you turn a bad situation like this into a good one.


As a security researcher and engineer, I'd like to point out the following, without taking sides:

1. Facebook is not going ballistic because this is a RCE report. They have received high and critical severity reports many times before and acted peaceably, up to and including a prior RCE reported in 2013 by Reginaldo Silva (who now works there!).

2. The researcher used the vulnerability to dump data. This is well known to be a huge no-no in the security industry. I see a lot of rage here from software engineers - look at the responses from actual security folks in this thread, and ask your infosec friends. Most, perhaps even all, will tell you that you never pivot or continue an exploit past proof of its existence. You absolutely do not dump data.

3. When you dump data, you become a flight risk. It means that you have sensitive information in your possession and they have no idea what you'll do with it. The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit. There is a precedent in the security industry for employers becoming involved for egregious "malpractice" with regards to an individual reporting a bug. A personal friend and business partner of mine left his job after publicly reporting a huge breach back in 2012 (I agree with his decision there), and Charlie Miller was fired by Accuvant after the App Store fiasco. Consider that Facebook is not the first company to do this, and that while it is a painful decision, it is not an insane decision. You might not agree with it, but there is a precedent of this happening.

I'm not taking sides here. I don't know that I would have done the same as Alex Stamos here, but it's a tough call. I do believe the researcher here is being disingenuous about the story considering that a data dump is not an innocuous thing to do.

I'm balancing out the details here because I know it will be easy to see "Facebook calls researcher's employer and screws him for reporting a huge security bug" and get pitchforks. Facebook might be in the wrong here, but consider that the story is much more nuanced than that and that Facebook has an otherwise excellent bug bounty history.

Edited for visibility: 'tptacek mentioned downthread that Alex Stamos issued a response, highlighting this particular quote:

At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

Viewed in this light (and I don't believe Stamos would willfully fabricate a story like this), it is very reasonable to escalate to an employer if they seem to be affiliated with a security researcher's report.


> The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit.

This seems to be the crux of this whole thing. The article suggests that is not true, including some quotes from what I assume is "The Facebook Whitehat TOS" at [0], along with his interpretation of those quotes. As an unsophisticated person reading through that document, I don't see anything I would describe as "explicitly forbidding getting sensitive data that is not your own using an exploit". The closest seems to be: "make a good faith effort to avoid privacy violations". I'm inclined to believe you and others in this thread that this was not handled in the most responsible way, but the repeated claim that there is an explicit policy against this, which doesn't seem to be findable, makes me scratch my head. Is there some other document that is more explicit, or is this just supposed to be implicit knowledge, or what?

[0]: https://www.facebook.com/whitehat


The "privacy violations" statement is what I was talking about. I suppose you could make an argument that this is not sufficiently explicit for this scenario, but I believe it covers this ground. It is a privacy violation to retrieve sensitive data via an exploit.


It is worth pointing out that Wesley specifically avoided dumping data from the S3 buckets which were directly related to user data/information:

> There were quite a few S3 buckets dedicated to storing users' Instagram images, both pre and post processing. Since the Facebook Whitehat rules state that researchers need to "make a good faith effort to avoid privacy violations", I avoided downloading any content from those buckets

In fact, the only 'sensitive data' he retrieved in regard to user account information was the set of weak employee logins.


Is gathering up the credentials of employees not also a privacy violation? At this point you're going way beyond proving that you have access to something - you're actively trying to probe and see how deep the rabbit hole goes. I don't (personally) believe that this is acceptable behaviour under a white hat program.


I see your point but I'm not sure if having passwords like 'changeme' qualifies as being a privacy violation... You should almost expect it to happen at that point.

But I do recognize that cracking passwords goes a step too far.
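
(For context, the cracking step being discussed is typically just a dictionary attack. A sketch, assuming hypothetical file names and that John the Ripper can auto-detect the hash format:)

    # Try a common-passwords wordlist against the dumped hashes:
    john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
    john --show hashes.txt   # prints whatever was recovered, e.g. 'changeme'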


Fair enough, I can only say that it seems like they could be more explicit on that point, but I don't see anybody arguing against the idea that that their rules could use clarification.


"The Facebook Whitehat TOS explicitly forbid getting sensitive data that is not your own using an exploit."

LAUGH.. Where does it say this?

https://www.facebook.com/whitehat/

I think Instagram should be asking themselves: would they rather have an honest researcher report this, or North Korean hackers saying nothing and just slurping data? Security researchers are always going to see things they shouldn't. That's just a fundamental rule. You have to know who your real enemies are and not come down on someone just because they got a little enthusiastic.

Wes [edit] is one of the good guys - he went overboard, sure, but he should be rewarded, he should be asked not to go crazy next time, and the rules should be updated.

Personally, I think saying the exploit was trivial shows that the CSO should be fired. If he has to make a phone call about it, it's not trivial.


>If you give us reasonable time to respond to your report before making any information public, AND MAKE A GOOD FAITH EFFORT TO AVOID PRIVACY VIOLATIONS, destruction of data, and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.


I certainly wouldn't consider dumping credentials to test for reuse/continued use a privacy violation. If FB wants people not to dump data, they need to make that explicit and specific.


Really? The article states: "To say that I had gained access to basically all of Instagram's secret key material would probably be a fair statement". How on earth would holding on to that data not be a privacy violation?


Holding credentials is not violating privacy. It would be possible to use those credentials to violate privacy, but merely having them is not that act.


Holding sensitive credentials is absolutely a violation of privacy. This is like saying that having a user's password is not a privacy violation unless you use it to gain access to their account.


So would you agree that holding the keys to someone's house is also a privacy violation? What if instead of keys, you were holding a set of lockpicks? Would everyone's privacy of home be immediately violated?


It's all a question of intent. If you keep the lockpicks so that you can pick locks, then yes. If you're a lockpick collector, then no.


Holding a manually chosen password can be a privacy violation because it's a small peek into the user's psyche. (I wouldn't say the employee "changeme", "instagram" etc. passwords count, although the act of running a password cracking tool meant that he could have seen a more personal password.)

Holding some randomly generated numbers that could be used to access a server is not.


I understood it was employee credentials, not customer.


Surely they would have to revoke all the keys anyway, as they would have no idea whether a blackhat got there first and took the keys before the vulnerability was reported?


According to the timeline, Instagram have known about the ssl keys since 1 Dec.

My browser is currently showing an ssl cert for instagram.com that was issued in April and expires on Dec 31.

Doesn't look like they're in any hurry to revoke that one. (I guess like Alex Stamos told his employer - it's "trivial and of little value"...)


Or, like almost any company that's reasonably competent, they have multiple certificates with different private keys.


And they just happen to only leave some of them in their S3 buckets?

Seems … contradictory.


Whose privacy did Wes violate? Do webservers have data personal to them?


Privacy in this case is in an infosec context. Not a personal information context. Finding the open/unsecured/unpatched server is a bug. Downloading and testing a password keyring found as a result of that bug is not finding a bug. That is exploiting a bug for additional gain.


Finding a sql injection in a query string is finding a bug. Is using the injection to dump a table exploiting the bug for additional gain?

It sounds like you're only allowed to penetrate one layer of a defence in depth system. If you gain access to some edge system that isn't sensitive, I'd assume that would pay little. If you gain access to some core system, I'd assume that would pay lots. Why then are you not allowed to pivot from some nothing system to some larger system?

The purpose of bug bounties is to secure your systems. If you only ever secure the first layer, if some malicious actor finds another vector into the same system and there is a really easy pivot in sight (like full access to an S3 account!) then you've lost. If the bug bounty hunter found the escalation though and responsibly reported that, then a potential second vector loses its potency.

I'm not a security person at all so I'd like to hear some perspective on my thoughts above. It just seems fairly short sighted to specifically forbid pivoting.

FWIW, dumping S3 buckets as a white hat does seem wrong to me. Listing them is probably OK.


As someone outside the infosec industry, I think the dissonance I feel reading this comes from this line:

"[Alex] then explained that the vulnerability I found was trivial and of little value"

coupled with the fact that he seemed to be very worried about the problems that could be caused by the author in exploiting it. Something seems amiss.


I feel he meant the original RCE Ruby bug which then allowed all this extra access. It was not some huge, architecture-changing security problem, just a simple upgrade to fix.


What he revealed, however, was that Facebook doesn't pay attention to least privilege with key access - what those keys can access[1] and, more importantly, where those keys access data from[2]. I have a feeling there's some scrambling to cover these blind spots over at Facebook.

[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.ht...

[2] http://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.htm...
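
Checking for that blind spot takes one call; a sketch assuming hypothetical bucket names:

    # Is server access logging even enabled on the bucket?
    aws s3api get-bucket-logging --bucket example-sensitive-bucket
    # Empty output means it isn't; enable it like so:
    aws s3api put-bucket-logging --bucket example-sensitive-bucket \
        --bucket-logging-status '{
          "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",
            "TargetPrefix": "access-logs/"
          }
        }'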


Nothing in here is exactly wrong, but we do have to acknowledge that this whole back and forth has essentially informed everyone that:

Facebook considers the keys to their kingdom to be worth $2,500, OR Facebook doesn't know what the keys to its kingdom look like.

Facebook will not update keys/credentials even if they are known to be compromised.

If you have the keys to the kingdom, you can use them and Facebook won't find out about it unless you tell them.


It's weird how this flies over the head of so many.


Running a bug bounty is not a suicide pact. A team had to convince a finance group that it was valuable to give money away to people who might be assholes. Bounty hunters are not a community - but if you are a bounty hunter, you should understand that many of your peers are total assholes. The company that wants to pay you a reward has to figure out if you are going to make them regret offering you a reward.

There are 4 categories of reporters: great, good, shit, and crazy. Again - if you are a reporter, you should be trying to make it easy for the team to place you in one of the first two categories by simply being polite & respectful.

I will take a side - Facebook's. Dumping data is the end of the proof of concept. Trying to determine if there is more data you can access through a single vulnerability chain is over the line.

Boats sink. The engineers know it. If you sink a boat in order to prove the boat had a hole, you will not get your payout.

And one final thought-

In my experience, bounty hunters almost never realize the full consequences of a vulnerability that receives a reward. Most of the time, the "Bad thing" that they identify is just the tip of the iceberg.

The choices of the researcher reflect inexperience and immaturity. The researcher has a significant misunderstanding about what is happening in the bug bounty marketplace. I think they need to apologize if they want a future in the infosec world.

Publishing this blog post was a huge error. Going to the journalist was another huge error. I don't see how this person could ever be considered employable by a reputable company.


Are you saying that if Wes hadn't pointed it out, then Alex wouldn't have had to refresh all those keys? That if Wes hadn't dumped the keys, then they were 100% secure?


Good lord no.

I am saying explicitly- Wes went past the point at which he should have stopped.

He also should have known better, and the fact that he didn't is a problem in itself.


Very well said. This is a mature understanding of bug bounties.


> This is well known to be a huge no-no in the security industry. I see a lot of rage here from software engineers - look at the responses from actual security folks in this thread, and ask your infosec friends...

The problem is that on the one side you have security professionals who do this full time. They build up a background of implicit knowledge through extensive interaction with other security professionals, via training, mentoring, team activities, etc.

On the other side you have folks like the guy who found this vulnerability - people who don't specialize in security, are basically moonlighting or doing this as a hobby, and aren't necessarily connected to other security professionals or even other hobbyists. They won't have the same kind of implicit knowledge.

When someone from the first category communicates with someone from the second category, the communication can break down. That's what happened here.

Offering a million dollar bounty makes this kind of communication problem more likely - a potential million-dollar payout catches the interest of people who have spare time and encourages them to pick this as the thing they do on the side. And further, it encourages them to try anything and everything you don't explicitly forbid, by giving them hope that if they just try hard enough, they'll be able to turn what initially looks like a ho-hum two-year-old Ruby exploit into a million-dollar payday.


But his LinkedIn profile suggests he is a security specialist.


Alex Stamos' (CSO of Facebook) reply to OP:

https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics...


The problem that Alex is skimming over here is that if Wes got access to this data, you have to ask yourself - WHO ELSE GOT THE DATA?

If Alex knows anything about his job, he should know that he would have had to refresh all those keys even if Wes hadn't reported it or said anything.

The diff between Wes and everyone else is Wes just explained to Facebook how completely screwed they are. Alex is just pissed because Wes made it bluntly clear how much he screwed up.


Alex has been a vulnerability researcher since the 1990s, and co-ran iSEC Partners, one of the best-known software security firms in the world, through the 2000s. I'm pretty sure they're on top of the key situation.


A lot of things have changed since the 1990s...


Yes, that's true, and Alex is one of the reasons they've changed.


Judging by this exploit, and the fact that they didn't rotate keys and other folks probably got this data, I would say this wasn't one of their finest moments, wouldn't you agree?


Take the top 10 tech companies on the west coast.

Select the most senior security person at those companies.

Roll 1d10 and substitute that person for Alex in this exact situation.

Now bet your life that you won't have your life wrecked by a prosecutor based on the outcome of that die roll.

I don't love Stamos calling the guy's boss, but if it's between "call his boss" and "tell legal that a bounty participant has FUCKING GONE ROGUE WITH ALL OF INSTAGRAM'S CREDS", I think he made the right goddamn call.

Jesus.


That just sounds like ass covering to me. The fact of the matter is that Alex had no idea, and no one at Facebook had any idea, whether this researcher had indeed gone rogue with their credentials, because of the lack of security that the hack exposed. No logs on S3 buckets? No separation of access between user data and operations buckets? Give me a break.

Calling the guy's boss, or the guy himself, wouldn't give any authoritative answer about what was on the researcher's laptop, so I really don't see how calling the researcher's boss was a way out of "telling legal that a bounty participant had THE KEY TO THE KINGDOM BECAUSE CAPS ARE REALLLLLY AWESOME!"

If you think that simply calling the guy's boss was the right call, rather than acknowledging the massive security holes this guy exposed, then I hope you work for a company with a clearer bounty program and deal with equally ethical researchers who will tell you about a full-systems exploit without violating any user privacy and be happy with your $2,500. That will happen..
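For what it's worth, turning on access logging for an S3 bucket is a single API call. Here's a minimal sketch in Python with boto3, using hypothetical bucket names (the real ones obviously aren't public); note the log bucket also has to grant write access to S3's log delivery group:

    import boto3

    s3 = boto3.client("s3")

    # Send access logs for the sensitive bucket to a separate,
    # tightly-permissioned audit bucket (both names hypothetical).
    s3.put_bucket_logging(
        Bucket="example-operations-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "example-audit-logs",
                "TargetPrefix": "s3-access/operations/",
            }
        },
    )

With that in place you at least have a record of who fetched what, instead of having to ask the researcher.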


> but if it's between "call his boss" and "tell legal that a bounty participant has FUCKING GONE ROGUE WITH ALL OF INSTAGRAM'S CREDS"

False dichotomy - those weren't his only options, had he thought about it more. There was an even better option, which strangely he chose not to take: assume an actual rogue actor got there before Wes and react accordingly - rotate the AWS keys, force a password reset for affected users, update the SSL signing keys. (A rough sketch of the key-rotation step follows below.)

It bears asking - what exactly was he trying to achieve by calling Wes' boss, and did he achieve it? This was not his brightest moment.
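The "assume breach" response is mostly mechanical, too. A rough sketch of rotating one IAM user's access keys in Python with boto3, assuming a hypothetical user name; a real response would enumerate every credential found in those buckets:

    import boto3

    iam = boto3.client("iam")
    user = "example-deploy-user"  # hypothetical user name

    # Issue a replacement key first so services can cut over.
    new_key = iam.create_access_key(UserName=user)["AccessKey"]

    # Then disable every other key for this user; delete them once
    # nothing is still authenticating with them.
    for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
        if key["AccessKeyId"] != new_key["AccessKeyId"]:
            iam.update_access_key(
                UserName=user,
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )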


> Now bet your life that you won't have your life wrecked by a prosecutor based on the outcome of that die roll

What I get from your comment is that it's never a smart move to take one's chances dealing with company security people. The only smart move is to sell anonymously to the highest bidder.


It says right in that response that the keys were already rotated.


Or... one could actually read the response article: "This bug has been fixed, the affected keys have been rotated, and we have no evidence that Wes or anybody else accessed any user data. "


Didn't they have to ask Wes to figure out what data he had accessed in the first place? And even then they couldn't tell that he had accessed the keys.


'We DO NOT have evidence that X happened' is evidence of incompetence.

The competent responses would be:

"We DO have evidence that X DID NOT happen", or

"We DO have evidence that X DID happen".

A bag of rocks also has "no evidence that Wes or anybody else accessed any user data". Would you trust a bag of rocks with your computer security?


Regardless of whether or not he followed etiquette or the rules, he did report it, and he obviously had no intention of using it to be a bad guy. And calling his employer? That was ass covering by the CSO.


I understand how dumping SENSITIVE data can make you look like a bad actor, but he specifically outlined that he avoided dumping anything sensitive (that is, anything directly related to users and their data). He did dump S3 buckets that held a treasure trove of other files (such as the API keys for other services, and static content), so I guess my question is: at what point does dumping of any kind become bad?


In infosec, keychains are about as sensitive and private as it gets. They should probably change the policy to "do not pull or retain any data from any server except what is explicitly needed to identify the vulnerability" for those who might not understand.


But I feel like it would have been the same if he had gotten to the point he did and merely recognized that he had access to the keychains. Whether or not he actually accessed them, especially since Facebook wasn't auditing access (from what I understand), is sort of irrelevant at that point; the keys would have to be cycled either way.

I understand that they're top secret, but that sort of proves the extent of the vulnerability.


It would have been the same. Bug bounties are for the quality of the bug/vulnerability: for instance, a configuration error that directly affects every server Facebook has open, or a zero-day exploit with root capabilities. Those would be million dollar bugs.

Facebook definitely needs to clarify that the bounty is for the severity and widespread nature of the bug itself, not an invitation to penetration testing. They also need to be more explicit about what is not allowed. Maybe they should give bonuses for the value of the target, but the current policy is for the bug itself. He certainly did expose an embarrassing lack of procedure and awareness around key security, and that's certainly worth a lot more to Facebook than the bug. But they definitely do not want to encourage penetration testing.

And it's part of the infosec code of ethics (which probably should be written down somewhere) that when you find a bug, you don't use the bug to download anything from the target. That means a lot of people won't be interested, because they want to hack and penetration test. To be whitehat about that requires much closer communication and a contractual obligation.

Facebook needs to get its shit together on key security and on the clarity of its bounty program. On the other hand, this guy writing a blog post about downloading a keychain and probing how deep it leads is definitely not responsible infosec.


On some level, isn't the security testing a farce if you can't use local data to escalate your breach? It seems kind of like a bank that wants to know if its front door is unlocked but doesn't want you to tell them the vault's open.


A database is just a tool for storing data, and so is a filesystem. Can you explain the difference between dumping user logins from a table and just reading them from a file? Why is the first one a no-no while the second one is fine?


From your profile: https://keybase.io/breakingbits/sigs/DIO92uX_zdSeZEwYeQ74qj1... throws an error. Just FYI.


Thanks, I revoked and reissued keys recently. I'll fix that.
