
Instagram's Million Dollar Bug - infosecau
http://www.exfiltrated.com/research-Instagram-RCE.php
======
secalex
Thank you to everybody who cautioned against judgment before hearing the whole
story. Here is my response: [https://www.facebook.com/notes/alex-stamos/bug-
bounty-ethics...](https://www.facebook.com/notes/alex-stamos/bug-bounty-
ethics/10153799951452929)

~~~
joshAg
I think the root cause of the problem is the unclear policy by FB. Privilege
escalation can be hard to catch, and can be a separate bug in and of itself,
even if it requires a separate exploit to get the initial privileges.

The published policy didn't say anything about not doing what he did. I'm not
going to argue that what he did should or shouldn't be ok, but FB has no
control over what other people do. Yeah, maybe it'd be better if people asked
for clarification first instead of asking forgiveness, but there's no way to
force them to do that. FB does have control over what their policy says and
allows/disallows. If you don't want people to exfiltrate any data and look at
it on a local machine instead of just keeping a session on the exploited
machine, then put that in the policy. If you don't want people poking around
for other exploits after gaining access, then spell that out in the policy.

The point of the policy isn't to stop everyone. Sure it will stop some/most
people, but some people don't listen. The point is that when it happens again
you can point to the clear policy and say "you're an asshole, we're not paying
you because you violated our explicit policy, and we are reviewing what you
did with our lawyers to see if we should notify law enforcement".

Yes, doing this fix/policy update now doesn't fix this situation, but it
prevents anyone else from doing something similar and claiming ignorance of
this situation and FB's position.

~~~
forkwhilefork
Why do the policy specifics matter? A blackhat won't be respecting those
rules, and won't need to negotiate a reasonable payday with facebook.

The real issue here is facebook's poor infrastructure security and slow
response time. If the exploit had been previously reported, why was the
privilege escalation still possible? Why did a (supposedly) known-to-be-
vulnerable host have access to secret information at all?

The exfiltration of data may have been unethical, but facebook has no one to
blame but themselves for it even being possible.

~~~
onewaystreet
> Why do the policy specifics matter?

Companies take big risks in running bounty programs. They are giving hackers
permission to test their _live site._ This isn't something that is popular
with everyone inside a company. Bounty hunters need to respect that bounty
programs are a two way street. If you find a serious issue like remote code
execution you need to be extra careful. Wineberg was an experienced hunter. He
should have known better.

------
tptacek
In stories like this, try first to remember that Facebook isn't a single
entity with a single set of opinions, but rather a huge collection of people
who came to the company at different times and different points in their
career.

Alex Stamos is a good person† who has been doing vulnerability research since
the 1990s. He's built a reputation for understanding and defending
vulnerability researchers. He hasn't been at Facebook long.

To that, add the fact that there's just no way that this is the first person
to have reported an RCE to Facebook's bug bounty. Ask anyone who does this
work professionally: _every_ network has old crufty bug-ridden stuff laying
around (that's why we freak out so much about stuff like the Rails XML/YAML
bug, Heartbleed, and Shellshock!), and _every_ large codebase has horrible
flaws in it. When you run a bug bounty, people spot stuff like this.

So I'm left wondering what the other side of this story is.

Some of the facts that this person wrote up are suggestive of why Facebook's
team may have been alarmed.

It seems like what could have happened here is:

1\. This person finds RCE in a stale admin console (that is a legit and
serious finding!). Being a professional pentester, their instinct is that
having owned up a machine behind a firewall, there's probably a bonanza of
stuff they now have access to. But the machine itself sure looks like an old
deployment artifact, not a valuable asset Fb wants to protect.

2\. Anticipating that Fb will pay hundreds and not thousands of dollars for a
bug they will fix by simply nuking a machine they didn't know was exposed to
begin with, the tester pivots from RCE to dumping files from the machine to
see where they can go. Sure enough: it's a bonanza.

3\. They report the RCE. Fb confirms receipt but doesn't respond right away.

4\. A day later, they report a second "finding" that is the product of using
the RCE they already reported to explore the system.

5\. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to
pay for the second finding, and asks the tester not to use RCEs to explore
their systems.

6\. _More than a month after Facebook has nuked the server_ they found the RCE
in, they report another finding based on AWS keys they took from the server.

So Facebook has a bug bounty participant who has gained access to AWS keys by
pivoting from a Rails RCE on a server, and who apparently has _retained_ those
keys and is using them to explore Instagram's AWS environment.

So, some thoughts:

A. It sucks that Facebook had a machine deployed that had AWS credentials on
it that led to the keys to the Instagram kingdom. Nobody is going to argue
that, though again: every network sucks in similar ways. Sorry.

B. If I was in Alex's shoes I would flip the fuck out about some bug bounty
participant walking around with a laptop that had access to lord knows how
many different AWS resources inside of Instagram. Alex is a smart guy with an
absurdly smart team and I assume the AWS resources have been rekeyed by now,
but still, how sure were they of that on December 1?

C. _Don't ever do anything like what this person did_ when you test machines
you don't own. You could get fired for doing that working at a pentest firm
even when you're being paid by a client to look for vulnerabilities! If you
have to ask whether you're allowed to pivot, don't do it until the target says
it's OK. Pivoting like this is a bright line between security testing and
hacking.

This seems like a genuinely shitty situation for everyone involved. It's a
reason why I would be extremely hesitant to ever stand up a bug bounty program
at a company I worked for, and a reason why I'm impressed by big companies
that have the guts to run bounty programs at all.

† _(and, to be clear, a friend, though a pretty distant one; I am biased
here.)_

~~~
dlandis
I think you're right on most points, but after reading the write up and
response I do think Alex reached out to the employer first instead of the
researcher as an intended act of intimidation. That was a mistake.

If it was not done for the purpose of intimidation, then Alex simply would
have asked the CEO if the researcher was acting on the company's behalf and
after hearing "no" would have ended the call and contacted the researcher
directly.

Seems simple, doesn't it? Perhaps you are not seeing it due to your friendship,
but it seems like a dirty move and only serves to call into question how Alex
handled other aspects of the situation.

~~~
yeukhon
> If it was not done for the purpose of intimidation, then Alex simply would
> have asked the CEO if the researcher was acting on the company's behalf and
> after hearing "no" would have ended the call and contacted the researcher
> directly.

Then the CEO is going to contact the researcher and he's screwed either way.
God knows what the CEO would have said to the researcher privately. Having a
middle man to translate is a bad idea in an emergency.

Let's face it: when you use your work email and make another company paranoid,
you are putting people on the spot. The employer needs to know (they have
legal responsibility), and given the prior research Facebook did and the
researcher's claim, I think the outreach was absolutely correct.

Instagram's infrastructure has flaws. That's bad, but everyone's
infrastructure has flaws. Shit has to be fixed. Doing more than what was
needed is bad. If I were told to stop dumping data, I would stop.

------
dsacco
As a security researcher and engineer, I'd like to point out the following,
without taking sides:

1\. Facebook is _not_ going ballistic because this is a RCE report. They have
received high and critical severity reports many times before and acted
peaceably, up to and including a prior RCE reported in 2013 by Reginaldo Silva
(who now works there!).

2\. The researcher used the vulnerability to dump data. This is well known to
be a huge no-no in the security industry. I see a lot of rage here from
software engineers - look at the responses from _actual_ security folks in
this thread, and ask your infosec friends. Most, perhaps even all, will tell
you that you _never_ pivot or continue an exploit past proof of its existence.
You absolutely do not dump data.

3\. When you dump data, you become a flight risk. It means that you have
sensitive information in your possession and they have no idea what you'll do
with it. The Facebook Whitehat TOS explicitly forbid getting sensitive data
that is not your own using an exploit. There is a precedent in the security
industry for employers becoming involved for egregious "malpractice" with
regards to an individual reporting a bug. A personal friend and business
partner of mine left his job after publicly reporting a huge breach back in
2012 (I agree with his decision there), and Charlie Miller was fired by
Accuvant after the App Store fiasco. Consider that Facebook is not the first
company to do this, and that while it is a painful decision, it is not an
insane decision. You might not agree with it, but there is a precedent of this
happening.

I'm not taking sides here. I don't know that I would have done the same as
Alex Stamos here, but it's a tough call. I do believe the researcher here is
being disingenuous about the story considering that a data dump is not an
innocuous thing to do.

I'm balancing out the details here because I know it will be easy to see
"Facebook calls researcher's employer and screws him for reporting a huge
security bug" and get pitchforks. Facebook might be in the wrong here, but
consider that the story is much more nuanced than that _and_ that Facebook has
an otherwise _excellent_ bug bounty history.

Edited for visibility: 'tptacek mentioned downthread that Alex Stamos issued a
response, highlighting this particular quote:

 _At this point, it was reasonable to believe that Wes was operating on behalf
of Synack. His account on our portal mentions Synack as his affiliation, he
has interacted with us using a synack.com email address, and he has written
blog posts that are used by Synack for marketing purposes._

Viewed in this light (and I don't believe Stamos would willfully fabricate a
story like this), it is very reasonable to escalate to an employer if they
seem to be affiliated with a security researcher's report.

~~~
droopybuns
Running a bug bounty is not a suicide pact. A team had to convince a finance
group that it was valuable to give money away to people who might be assholes.
Bounty hunters are not a community- but if you are a bounty hunter, you should
understand that many of your peers are total assholes. The company that wants
to pay you a reward has to figure out if you are going to make them regret
offering you a reward.

There are 4 categories of reporters: great, good, shit, and crazy. Again: if
you are a reporter, you should make it easy for the team to place you in one
of the first two categories, simply by being polite and respectful.

I will take a side- it's Facebook. Dumping data is the end of the Proof of
Concept. Trying to determine if there is more data you can access through a
single vulnerability chain is over the line.

Boats sink. The engineers know it. If you sink a boat in order to prove the
boat had a hole, you will not get your payout.

And one final thought-

In my experience, bounty hunters almost never realize the full consequences of
a vulnerability that receives a reward. Most of the time, the "Bad thing" that
they identify is just the tip of the iceberg.

The choices of the researcher reflect inexperience and immaturity. The
researcher has a significant misunderstanding about what is happening in the
bug bounty marketplace. I think they need to apologize if they want a future
in the infosec world.

Publishing this blog post was a huge error. Going to the journalist was
another huge error. I don't see how this person could ever be considered
employable by a reputable company.

~~~
blazespin
Are you saying that if Wes hadn't pointed it out, then Alex wouldn't have to
refresh all those keys? That if Wes hadn't dumped the keys, then they were
100% secure?

~~~
droopybuns
Good lord no.

I am saying explicitly- Wes went past the point at which he should have
stopped.

He also should have known better, and the fact that he didn't is a problem in
itself.

------
biot
Summarizing what I've seen here in analogy form:

    
    
      Researcher: "I found a way to unlock your door"
    
      Facebook: "Thanks, here's $2500. We've now fixed the problem."
    
      Researcher: "Oh, BTW when I unlocked your door I rifled through
        your stuff and found your passport, your banking details, and a
        lot of personal information. I've kept copies of these. I also
        found the keys to your car and looked inside, where I found a box
        in the trunk. That box contained sensitive documents including an
        employee badge / proximity card. I used this card to gain access
        to your workplace. In doing this, I also managed to get into the
        janitor's closet which had a set of keys. I used these keys to
        get access to the complete building and took a look at all the HR
        files and rifled through a bunch of corporate contracts."
    
      Facebook: <gobsmacked>
    
      Researcher: "Can I have my million bucks now?"
    

Where the researcher stepped over the line is using the door attack to
escalate further attacks. It's little different from finding a way to reliably
impersonate Mark Zuckerberg's credentials in such a way that others will 100%
believe it. That finding is worthy of a reward. But then using that
vulnerability to social-engineer others into revealing passwords, and using
that as a launching point for mounting further attacks, goes way too far.

~~~
bloaf

       Oh by the way, when I looked in your open front door, I noticed all your 
       computer terminals had their passwords written on post-it notes by their 
       monitors, and the big safe in the back room had its key hanging right 
       next to it on a chain.

------
tshtf
Note to self: Don't report any chained attacks to any large company's bug
bounty program. Alex Stamos contacting the employer of the bug reporter is
completely out of line.

This is the fastest and easiest way for Facebook to stop good submissions to
their bug bounty program.

------
daveloyall
In my opinion, the author is feigning shock...

He claims to have downloaded the content listed below. And he is surprised
that Facebook responds coldly? Note the string "private keys" in this list...
Doesn't the author know how long it will take them to recover from this
breach? How much it will cost them?

On the other hand, it does sort of reinforce the idea that he should be paid
handsomely, doesn't it? :)

    
    
        * Static content for Instagram.com websites. Write access was not tested, but seemed likely.
        * Source code for fairly recent versions of the Instagram server backend, covering all API endpoints, some image processing libraries, etc.
        * SSL certificates and private keys, including both instagram.com and *.instagram.com
        * Secret keys used to sign authentication cookies for Instagram
        * OAuth and other Instagram API keys
        * Email server credentials
        * iOS and Android app signing keys
        * iOS Push Notifications keys
        * Twitter API keys
        * Facebook API keys
        * Flickr API keys
        * Tumblr API keys
        * Foursquare API keys
        * Recaptcha key-pair

~~~
troebr
I would tend to agree.

Facebook's point is that he found a vulnerability, and exploited it instead of
stopping there. I kind of understand their point of view though. "See you have
a vulnerability there, and then I can get access to this, and then this, and
see now I have the password of your user, and then I'm just one click away
from accessing all the instagram pictures I want."

Although Facebook's handling of the problem was poor (why didn't the CSO call
the author directly to get things squared away? Does he not talk to people who
aren't C*Os?), they do have a point.

I think the author acted in good faith, but got carried away by his findings
unfortunately.

~~~
slantedview
Exploiting the bug would have been downloading the actual contents of the S3
bucket (the instagram source and other things). He specifically says he did
not do that.

~~~
brazzledazzle
He clearly made a big effort not to violate privacy. The problem is that he
made their security look like a joke by getting the keys to the kingdom
without anyone noticing. Did that big expensive IDS catch him? Nope. Did any
of the log watchers babysitting the AWS logs? Nope. One researcher made the
CSO look incompetent in a matter of minutes.

If he had found a bug with something a developer wrote that would be a
different story. What he found was layer after layer of Operations
(particularly Security Operations) failures. This is something you hire a CSO
to think about (or at least hire/manage others to think about).

------
Zikes
Facebook's calling his employer could be slanderous, possibly even criminal
harassment.

Between stories like this demonstrating companies' apparent lack of
understanding of whitehat infosec, and Weev's incarceration demonstrating the
American legal system's apparent lack of understanding of whitehat infosec,
it's hard to believe people still participate in such endeavors.

~~~
zenincognito
Also remember that the story we have here is a one sided narration from a bug
bounty researcher.

The story tells us his side of things, but what specifically Facebook
perceived as a threat is still unknown. Why would a CSO get involved unless
they specifically thought that data had been accessed in violation of the
goodwill of the bug bounty program in the first place?

~~~
Zikes
That's true, there could be large portions of the story that are omitted or
inaccurate. We may never even get the full story.

Assuming the story as stated is truthful or even plausible, what options do
whitehat hackers have to defend themselves in such a scenario? I mean the
whole point seems to be to try to penetrate a secure system, and the
consequences of that action seems to be fairly obvious from the start. If a
whitehat hacker is successful, that carries with it the inherent potential
that they will have some sort of access to some sort of sensitive data, right?

Surely telling Facebook "I was able to access these exact things" means he
expected Facebook to update passwords and change keys accordingly, making the
possibility that he retained those keys moot.

~~~
lettergram
It almost seems like Facebook wanted to know about the issue, but not have to
update the keys.

------
benmanns
I think the solution here is to pay $100k+ for RCE exploits and explicitly
forbid pivoting access after the first vulnerability is discovered. Facebook
offered $2,500 for a security vulnerability that could do much greater damage.
What kind of vulnerability is a "million-dollar bug" if not RCE? How would you
possibly have a "million-dollar bug" that is a single, standalone bug, and how
would you verify that Facebook is paying you fairly? They didn't seem to in
this case.

------
tptacek
Alex responds:

[https://www.facebook.com/notes/alex-stamos/bug-bounty-
ethics...](https://www.facebook.com/notes/alex-stamos/bug-bounty-
ethics/10153799951452929)

Critically:

 _At this point, it was reasonable to believe that Wes was operating on behalf
of Synack. His account on our portal mentions Synack as his affiliation, he
has interacted with us using a synack.com email address, and he has written
blog posts that are used by Synack for marketing purposes._

Alex's timeline seems like it matches what I wrote earlier:

[https://news.ycombinator.com/edit?id=10754627](https://news.ycombinator.com/edit?id=10754627)

~~~
dsacco
Assuming that's true (and I personally don't believe Stamos would flagrantly
fabricate a detailed story like this publicly), this is a game changer. It's
fully reasonable to escalate to an employer if they seem to be affiliated with
the security researcher's report.

Also worth noting that this is frequently done in the security industry -
folks will often credit not only themselves but also the companies they work
with and are associated with in a security report.

~~~
blazespin
No, Alex just assumed. Why didn't he just ask Wes if he was doing this for
Synack?

~~~
tptacek
He "assumed" because the researcher signed up for the Facebook bounty program
_as an employee of Synack_ and _used his Synack email_ to communicate with
Facebook.

He wasn't guessing. He didn't look the guy up on LinkedIn.

~~~
thaumasiotes
> He didn't look the guy up on LinkedIn.

I don't really see how else you can interpret the defense "he has written blog
posts that are used by Synack for marketing purposes".

And it's pointed out all over the thread, but no part of "the researcher
signed up for the Facebook bounty program as an employee of Synack and used
his Synack email to communicate with Facebook" is uncontested, nor is it
supported by the text of Alex Stamos' response. You've just read in what you
want to see.

------
danso
So if I'm reading this correctly, this massively compromising attack was made
possible by doing a little research? E.g., knowing about one of the admin
services used by Instagram, looking in that admin tool's public repo, and
wondering whether Instagram had bothered to change the secret key from the
default entry in the repo?

We'll probably never see a post mortem on this but it'd be interesting to hear
how this got moved to production...: was the Sensu admin panel a nice scaffold
for internal use and by the time they decided to make it remote, everyone just
assumed the secret key had been changed at some point?
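To make the "default secret key" risk concrete, here is a minimal sketch of how a Rails-3-style CookieStore signs session cookies. This is not Instagram's or Sensu's actual code; the names and the secret value are made up for illustration, and the real exploit additionally abused unsafe deserialization on vulnerable Rails versions:

```ruby
require "openssl"
require "base64"

# Hypothetical sketch: the session hash is Marshal-dumped, Base64-encoded,
# and HMAC-SHA1-signed with the app's secret. The server trusts any cookie
# whose signature verifies, so anyone who knows the secret (e.g. a default
# committed to a public repo) can forge arbitrary sessions.

LEAKED_SECRET = "change-me-default-from-public-repo" # hypothetical default

def sign_session(session, secret)
  data = Base64.strict_encode64(Marshal.dump(session))
  "#{data}--#{OpenSSL::HMAC.hexdigest("SHA1", secret, data)}"
end

def verified?(cookie, secret)
  data, sig = cookie.split("--", 2)
  sig == OpenSSL::HMAC.hexdigest("SHA1", secret, data)
end

forged = sign_session({ "user_id" => 1, "admin" => true }, LEAKED_SECRET)
puts verified?(forged, LEAKED_SECRET)    # forged cookie passes verification
puts verified?(forged, "rotated-secret") # rotating the secret invalidates it
```

The design point: the signature scheme itself cannot distinguish a forged cookie from a real one once the key leaks, which is why rotating the secret (rather than just patching the panel) is the only real fix.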

~~~
dperfect
I can tell you from experience working at another similar company that this is
not surprising at all. Especially as startups transition into larger companies
(with formal security controls and policies), a lot of things can get missed
or forgotten. Your primary production servers may be completely up-to-date and
secure, but somewhere along the way, there's a high chance that an engineer
deployed an internal admin tool or a test build somewhere that ends up being
public, but ultimately lost and forgotten. The problem is, that kind of "lost"
infrastructure often contains keys, credentials, or network access to other
more critical parts of the infrastructure, and no one realizes the severity of
the mistake until it's too late.

------
joslin01
The thing that gets to me is the lack of gratitude on Facebook's end. Instead,
they turn him into the villain for breaking imaginary rules. What would have
been the harm in slapping him on the wrist and giving him some sort of reward
for exposing a huge vulnerability? Instead, they eat the reward and shit on
the guy who produced it. Real classy FB.

~~~
tptacek
Did you read the whole post? He got paid on the RCE.

~~~
joslin01
Yea I did and I realize he got paid out a little, but it was short of the $1
million.

I realize a million is a bit unrealistic, but if you're going to make a public
statement, at least back it up or prove to the guy why his findings don't
constitute a "million-dollar bug". It's not right to just cold-shoulder the
guy and hide behind vague rules that were never clearly outlined. In fact, you
might even conclude Facebook brought his behavior on themselves by making such
a statement as "if a million-dollar bug is found, we'll pay it out." $2500 is
nothing when you're thinking $1,000,000.

~~~
tptacek
Nobody is going to pay you a million dollars in 2015 for the 2013 Rails YAML
bug in a stale server. Nobody is going to pay you a million dollars for a
reliable Firefox RCE, and those take months to prove out and develop, _and_
there's a liquid market for them.

~~~
joslin01
But that's not going to stop Facebook from publicizing that they will. You're
glossing over the details and attributing an air of "old news" to the bug.
Well, yes and no. If he hadn't found such an ancient bug but someone devious
had, they could have dumped all the private user photos. If that had happened,
what do you think the financial implications might have been?
~~~
tptacek
He got $2500 for that bug. I will venture a guess that that's the most any bug
bounty program will pay for that Rails YAML bug in 2015.

~~~
bigiain
How much do you suppose blackhats would pay for instagram's ssl keys, mobile
app signing keys, push notification keys, etc?

Yeah, the researcher went deep into the grey area, but I find Alex Stamos's
reaction barely short of unbelievable - it's almost as though he's so new to
the internet he's never heard of the Streisand Effect... (Either that, or he's
just so accustomed to bullying and intimidating people who might embarrass him
that he's now got that corrupt politician "Waddaya mean I'm 'abusing my
power'? We grant multimillion dollar contracts to old school buddies all the
time? What's the problem?" look on his face.)

~~~
tptacek
Not much. Probably much less than $2500.

A script to create new bogus accounts on Facebook is probably worth more than
mass Facebook account compromise.

People _really_ don't seem to understand how the "black market" works.

~~~
bigiain
I was thinking more of the Zerodium/Gleg/BoozAllenHamilton class of buyers -
who'd on-sell it to, say, the Egyptian or Thai Government, rather than run-of-
the-mill carders or identity thieves.

(But yeah, I'm perfectly happy with my life where I have no real understanding
of how the black market for this kind of thing works...)

------
nathanvanfleet
Sort of an interesting conflict these bug bounties create. You have someone
who wants to hack as deeply as possible to earn a bigger bounty based on the
stated rules, but at the same time the company will invalidate your bounty if
it arbitrarily determines you went too far?

I imagine the initial report by his friend that the server was accessible
would not be a very high-paying bounty compared to one for accessing the
server. But how deep is too deep?

~~~
mfoy_
Right? If he left it at the RCE he would have gotten the $2,500 split between
him and his friend... but he continued and was able to get access to all the
S3 buckets which you would assume would warrant a much higher payout. Instead
he got a huge amount of backlash.

~~~
tobz
Right, this feels like a way for Facebook to simply not pay out a bigger
bounty after they realized how big an appropriate bounty would be.

If the author had submitted the RCE and nothing else, would someone at
Facebook actually have gone in and tried to simulate what he did? Who knows,
because the process is pretty opaque. If you argue with Facebook's assessment,
and go and further exploit the system to say "no, this is actually how bad the
RCE is, in the grand scheme", you've now actually gone and proved what can be
done, against their guidelines, which potentially disqualifies your initial
discovery altogether.

------
onewaystreet
> With the RCE it was simple to read the configuration file to gain the
> credentials necessary for this database. I connected and dumped the contents
> of the users table.

This was his mistake. This is a huge no-no. You never dump data unless you
have permission. It's against the terms of most bounty programs.

~~~
phantarch
But like he said in the article, he was unable to find a clear policy that
gave him the "Stop, no further" point. It may have been a bad assumption to
think Facebook was going with the Tumblr stance of "give us a thorough POC,"
but where should he have drawn the line in his hack, and why there instead of
where he did?

~~~
oldmanjay
Getting the credentials is clearly enough to prove the point. Digging through
user data is just celebrating.

~~~
dogecoinbase
Whereof one cannot speak, one should be silent. Dumping the user table is the
literal next step in a standard vulnerability assessment (in order to acquire
reused credentials), wasn't prohibited by the terms of FB's bug bounty
program, and was crucial to the development of the bug.

~~~
tptacek
No, that's the next step in an _external penetration test_, which is not the
same thing as a vulnerability assessment.

In an external pentest, you get a set of netblocks and rules of engagement,
and you get as far as you can. That's why it's called a "penetration test".

In a vulnerability assessment, you get a target (usually an application), and
you find as many flaws in that target as you can.

Big annual pentests often have wide-open rules of engagement, where you (as a
consultant) win big by, for instance, dumping the CEO's mail spool. But those
projects also start with several meetings worth of negotiating rules of
engagement.

Vulnerability assessments virtually never have those rules of engagement!

Nobody that I know of runs a bug bounty program on pentest norms. To do so
would be grossly irresponsible, because on every network with more than 1000
hosts I've ever tested, ever, RCE behind the firewall is game over for the
whole test: you can get everything.

~~~
dogecoinbase
You're HN's anointed expert, so I suppose all I can say is that's not my
experience.

Among the many reasons bug bounties are bad ideas is that they generally fail
to write clear rules -- as Facebook did. As written, what he did is not
against the rules and while it may fall into some best-practices bucket you
assert to be universal, that's hardly sufficient for a field in which
participants can come from any background. But please, continue to defend your
friend whose multi-billion-dollar company had a month to cycle their popped
keys and failed to do so, then responded by threatening a researcher's
employment after multiple conciliatory e-mails.

~~~
tptacek
_then responded by threatening a researcher's employment after multiple
conciliatory e-mails._

That is NOT what happened. Look at the timeline again.

* He popped the server.

* He submitted the RCE.

* He submitted dumped file from the compromise as a finding.

* They fixed the RCE.

* They told him not to dump files.

* They paid out the RCE finding.

* A month later, they declined to pay out on the dumped file.

* In response, he submits _a new finding, with AWS creds that he stored for more than a month after they shut down the server_

* (Whatever else happens that day)

* Stamos calls Synack.

~~~
dogecoinbase
The "then" isn't temporally proximal. The quoted e-mails (unless you feel like
asserting that they're fake, which I think is the next step in your arguments
in this thread) demonstrate that he's trying to work within the unwritten
rules of the program and asking for clarification in good faith. Then after
that, rather than attempting any communication with him, Stamos threatens his
employment.

I agree with you that something seems off, but you're happily giving all the
charity to FB and none to this guy, which is your prerogative but hardly makes
for good conversation.

~~~
tptacek
Read the timeline again and then the post.

1. Second finding is declined.

2. New third finding, which includes AWS credentials that this person should
not have had, is written and submitted.

3. Stamos calls Synack.

I believe the relative timing of these events is, in fact, established.

Now: stipulate that I'm right, even if you're not sure. Does your opinion of
the story change?

~~~
dogecoinbase
Not really, no. Your _should not have had_ is still presupposing a set of bug-
bounty-hunter-professional-guidelines that don't actually exist unless they're
specified in the program guidelines, and from a philosophical perspective the
actual security vulnerability under discussion now is that their sec team is
so lackluster that they can't or won't change out a credential set known to
have been externally accessible (and, the critical point, to anyone who could
have found this not-particularly-obscure vuln, not just this researcher).

------
phantarch
How likely is it that this sort of a thing stopped being a technical item of
discussion and turned into a political one by the security contacts at
Facebook?

I'm always curious about what sort of internal pressures would lead people to
take a well-reported bug that the author did not take malicious action on and
blow it up to the point that the CSO is getting involved.

~~~
jsnk
The only way I can see this happening would be finger-pointing and finding
others to blame. The problem starts with a few people, then becomes an inter-
team issue. Then the higher-ups start to get involved.

------
dperfect
Not only did this person make several large and irresponsible mistakes in the
process of uncovering and reporting the bug (dumping tons of private user
information without permission, going far beyond simply discovering and
reporting the bug, etc.), but they also keep referring to Ruby ("running Ruby
3.x, which is susceptible to code execution via the Ruby session cookie") as
the vulnerable piece, when in reality, it's the version of _Rails_ that had
the vulnerability.

~~~
kuschku
Well, that's the point. An inexperienced person with half an hour on Google
got full access to Instagram's systems.

And the bug had existed for 2 years.

I wonder where the person who tipped him off got the info from – it could very
well have been a common target in the black hat scene.

------
kirankn
@secalex I believe that the researcher clearly fulfilled the primary objective
of bug bounty programs by exposing a weakness of yours which you, in spite of
having large and competent teams, weren't aware of and had not sealed yet. And
he did nothing to use that information with malicious intent.

Your actions are detrimental to your relations with such well-mannered
external security researchers, who are helping you keep your infrastructure
safe from the bad guys. You should have been a little more sensitive and a lot
more generous than you have been.

------
shawn-butler
Wow what happened to Instagram?

Facebook really needs to go the way of myspace if they keep this sort of
behavior up.

How can a CSO at Facebook legitimately tell a CEO of another organization that
a vulnerability of "little value" was found when the researcher has your
signing certs? Does he lack relevant info or is he just incompetent?

This is tantamount to mafia tactics. Hint, hint, we're facebook so get your
people in line or else.

------
shaunol
If companies are going to keep trying to get out of paying bounties for insane
vulnerabilities like this, white hat researchers will just move onto something
else, leaving the bounties to be paid out by the black market. Bounties aside,
contacting his employer is a disgusting move.

------
ryanlol
The fact that Alex Stamos from Facebook contacted this researcher's employer,
talking about potential lawsuits to threaten the researcher via a proxy, is
probably the single most damning thing in the entire article.

That to me is entirely unacceptable, if you want to threaten someone then have
your legal team send them a cease and desist. Don't go after their livelihood.

~~~
bechampion
I'm gutted because of that. I cannot believe the FB CSO contacted his
employer; it's such a disrespectful thing to do. Another reason to hate
Facebook.

------
aioprisan
This is as clear-cut a case as you can get: a full exploit with escalation of
privilege all the way to full read access to services' source code, SSL
private keys, full admin AWS credentials, services' API keys from Twitter to
analytics, email server logins, the list goes on... all of this without even
looking at a single user profile or violating user privacy, and it's not a
legit security bug? This has to be worth more than $2500, and I think Facebook
sets a bad precedent where folks won't disclose big security issues because of
how unclear the TOS are, so that they can avoid embarrassment.

------
ctvo
October 22nd: Weak passwords found and reported. Also grabbed the AWS keys
from the config file.

October 24th: Server no longer reachable. Tested the keys and they still
worked; he's assumed to have then gone on a download spree.

Seems like this is the biggest issue with how Facebook handled this case. No
one looked to see what Wes accessed when he logged in with the weak
credentials? No one realized he could have accessed the AWS key?

To treat what Wes found as a minor bug and then fuck up like that is sort of
hilarious.

------
zupreme
Ridiculous.

This is why many security professionals become disillusioned with bounty
programs. This story is not uncommon at all.

Bounty programs, while presenting a tempting incentive to practice one's
skills, are a very poor income strategy.

You are essentially working, unpaid, for organizations who are just as likely
to ignore you (or report you to law enforcement) as they are to pay you for
your findings.

No wonder so many young, talented security pros are more easily tempted to
trade their findings for the safety of a crypto transaction with an anonymous
buyer than to submit them through official channels.

------
tptacek
Wait a sec.

Look at his timeline again.

He tested the AWS creds in October.

They shut the server off on October 24.

He reported the AWS creds in December.

Did he tell them about the AWS creds before then? His mails don't say that he
did.

If he didn't, _why didn't he?_

~~~
adrianmacneil
Exactly. This is extremely shady behavior, I'm sure if he (a) reported the S3
creds as soon as they were discovered, and (b) did not start randomly
downloading everything accessible onto his personal device, this would have
turned out a lot differently.

------
joepie91_
My two cents.

It seems that people defending Facebook's behaviour in this thread have
collectively lost sight of what the point of a bug bounty is to begin with -
to encourage people to report issues, rather than sell them.

We now have people arguing that "it is not acceptable to pivot beyond the
initial intrusion for a bug bounty", even though _a malicious attacker would
have done the exact same thing_. As long as standard no-damage rules are
followed, where's the problem?

The bug bounty program is working exactly as intended, but the researcher is
getting dinged over arbitrary rules. As somebody else here mentioned already:
the reason blackhat work still pays, is because such arbitrary and
bureaucratic rules _do not exist there_.

We should not forget that bug bounties are a tool, not a goal - the goal is to
convince researchers to report rather than sell, and _every_ part of a bug
bounty and its rules must be designed accordingly.

Also: Why the hell were those AWS credentials not revoked immediately after
compromise? This constitutes a grossly negligent failure on Facebook's part to
assess impact, _on top_ of their existing failure to have the "keys to the
kingdom" on a single server to begin with.

And frankly, that failure only reinforces the need for the researcher pivoting
into further systems, rather than just keeping it to a PoC - because
evidently, _nobody_ is going to assess impact at Facebook, if the researcher
doesn't do it himself.

~~~
random_eddie
This is an excellent point, but there's a good answer to it.

The purpose of a bug bounty is _not_ to encourage a particular individual to
report an issue rather than sell it. The purpose is to encourage _more_ people
to get into the business of finding and reporting bugs _before_ the people who
are in the business of selling bugs to criminals find them and sell them. If,
in the process, some black hat researcher _also_ decides to report some
particular bug rather than facilitate a crime, so much the better - but you
can't rely on that, and you shouldn't design a bug bounty program around it.

In other words, you're not _competing_ with the black market. Instead, you're
_paying_ to improve your security, and accordingly, you want to get the most
bang for your buck. Finding previously-unknown entry points is high-value.
Finding internal pivots is extremely _low-value_ because they are ubiquitous,
and your infrastructure is already designed around the assumption that they
are ubiquitous.

Which isn't to say that you aren't interested in finding the internal
vulnerabilities and eliminating them. You are. Which is why you conduct
penetration tests. But pen tests are big deals, with rules of engagement
around them. You deliberately give the testers elevated internal access so
they can test under the assumption that there may be an entry point you don't
know about. You establish ongoing communication between the testers and the
clients, especially at any potential pivot or escalation point prior to
proceeding. You don't run a pen test by opening it up to anyone who wants to
give it a whack and hoping that they'll tell you about it afterwards (i.e. a
bug bounty program). That's an insanely high-risk, low-value way to discover
your internal vulnerabilities.

------
joeyspn
It's clear to me after reading _between the lines_ of both sides of the story,
that Instagram/FB sec team screwed up not acknowledging the severity of the
bug and paying accordingly to the researcher.

Why get mad about a "low level bug"... I mean, if you can dump private user
pics from a photo sharing app, how is this low level? really?

It's also pretty clear that the researcher shouldn't have dumped data,
although most likely he held this hidden card in reserve because he was
expecting the lowball... but there are smarter ways to respond to lowballing.

IMO poorly managed on both parts.

------
mef
An interesting decision on Alex's part to only pay the $2500 for the RCE bug.

On one hand, this signals to anyone else who might want to disclose security
issues that Facebook bounties don't pay out anywhere near proportionally to
the full potential damage of the issue.

On the other hand, if they pay out a lot more now, they're signalling that if
you find a vulnerability, you need to dig deeper in order to have insurance in
case Facebook gets stingy.

Probably the best outcome would have been to pay out a more proportional
bounty, even though Wes' exploration was beyond what's generally acceptable,
so that Facebook's bounty program reputation is preserved.

That or press criminal charges to discourage any other researchers from going
over the line.

------
pmontra
It's not the main point of the post, which is Facebook's response to the
researcher, but I'm really surprised that they're storing unencrypted secret
keys and source code on S3. They trust Amazon a lot and have no fear that
somebody could eavesdrop on Amazon's servers (if I were a black hat I'd go for
the accounts of the big guys, not the one of a random guy).

[http://www.exfiltrated.com/research-Instagram-
RCE.php#One_Ke...](http://www.exfiltrated.com/research-Instagram-
RCE.php#One_Key)

I wonder what any claim of protecting users' privacy is worth when they leave
their credentials unprotected in that way.

[https://www.instagram.com/about/legal/privacy/](https://www.instagram.com/about/legal/privacy/)

"We use commercially reasonable safeguards to help keep the information
collected through the Service secure [...]"

Oops.

I can imagine why they didn't appreciate the efforts of the researcher.
Hopefully they'll change their current practices.

------
Animats
The initial bug in Ruby/Rails is striking in its stupidity.[1] You can send
something to Ruby/Rails in a session cookie which, when unmarshalled, stores
into _any named global variable in the namespace of the responding program_.
It's not a buffer overflow or a bug like that. It's _deliberately designed to
work that way_. It's like doing "eval" on untrusted input. This was on HN
years ago.[2] Why was anything so idiotic ever put into Ruby at all?

Something like this makes you suspect a deliberate backdoor. Can the person
who put this into Ruby/Rails be identified?

[1] [http://robertheaton.com/2013/07/22/how-to-hack-a-rails-
app-u...](http://robertheaton.com/2013/07/22/how-to-hack-a-rails-app-using-
its-secret-token/) [2]
[https://news.ycombinator.com/item?id=6110386](https://news.ycombinator.com/item?id=6110386)

~~~
danso
I think you're overextrapolating here, though I admit my knowledge on this
isn't totally up to date.

As I understand it, Ruby's Marshal function, which takes text data and
deserializes it, _is not safe by default_. So, is that a flaw of Ruby? I
_guess_...except that this kind of serialization seems to be a standard
feature in languages (well, Ruby and Python, the two things I currently use):

[https://docs.python.org/3/library/pickle.html](https://docs.python.org/3/library/pickle.html)

> _Warning The pickle module is not secure against erroneous or maliciously
> constructed data. Never unpickle data received from an untrusted or
> unauthenticated source._

So the true bug seems to be that in Rails ActiveSupport (in a deprecated
class, which uses some of Ruby's fun meta magic to deal with missing methods
-- so basically, the classic obfuscation of functionality as a tradeoff for
some sugary magic, all in a deprecated function that likely no one revisits),
you can trigger a set of functions and routines in which the final decoding
step, for whatever reason, ends up invoking Ruby's Marshal (via Rack:
[http://www.rubydoc.info/github/rack/rack/Rack/Session/Cookie...](http://www.rubydoc.info/github/rack/rack/Rack/Session/Cookie/Base64/Marshal#decode-
instance_method))
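The pickle warning quoted above can be demonstrated in a few lines. This is a
minimal self-contained sketch (the `Popped` class name is invented for
illustration): `__reduce__` lets an object dictate what the unpickler calls to
"reconstruct" it, which is exactly the hook that turns deserialization of
untrusted data into code execution.

```python
import os
import pickle

class Popped:
    # __reduce__ tells the unpickler how to rebuild this object.
    # Here it instructs it to call eval(...) on an attacker-chosen
    # expression instead of restoring any real state.
    def __reduce__(self):
        return (eval, ("__import__('os').getpid()",))

payload = pickle.dumps(Popped())

# Whoever unpickles this blob runs the attacker's expression.
result = pickle.loads(payload)
print(result == os.getpid())  # → True
```

Ruby's Marshal documentation carries the same "never load untrusted data"
warning, which is what the Rails session-cookie exploit leaned on.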

~~~
sanderjd
Also, only the server is allowed to put things into the session cookie, which
is enforced by checking the cookie's signature which is generated from a key
that only the server is supposed to know. Using a "native object" serializer
(like Marshal or pickle) for session data and storing the secret token in a
file that is easy to accidentally check into source control are both _stupid_
things to do, but they're also common mistakes and you have to do both at the
same time for this attack to work, so it seems quite overboard to suggest it
was done deliberately.
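The two-ingredient nature of the attack can be sketched concretely. Below is a
hypothetical Python analogue of a Rails-style signed session cookie (all names
invented; Rails signs a Marshal dump, this sketch signs a pickle with
HMAC-SHA256): the signature proves who wrote the cookie, but once the secret
leaks it proves nothing, and the "native object" serializer does the rest.

```python
import hashlib
import hmac
import pickle

SECRET = b"app-secret-token"  # ingredient 1: a secret that leaked

def sign(data: bytes) -> bytes:
    # Append an HMAC so the server can recognize cookies it wrote.
    mac = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode()
    return data + b"--" + mac

def verify_and_load(cookie: bytes):
    data, _, mac = cookie.rpartition(b"--")
    expected = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("bad signature")
    # Ingredient 2: the signature only proves *who* wrote the payload,
    # not that it is safe. pickle (like Ruby's Marshal) will happily run
    # attacker-controlled reconstruction logic during loading.
    return pickle.loads(data)

# An attacker holding SECRET can sign any payload they like:
forged = sign(pickle.dumps({"user_id": 1, "admin": True}))
print(verify_and_load(forged))  # the server trusts the forgery
```

Swapping the innocuous dict for a malicious serialized object would turn this
forgery into code execution, which is why both mistakes have to coincide for
the attack to work.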

~~~
zapt02
Completely right. If the secret server token is compromised, it is presumed
that you can fake any data. Should that allow for RCE? That's where Ruby steps
in and provides the double whammy.

------
piker
Posting this write-up might be the last thing the researcher should have done
--from a criminal liability perspective. First, the negative press might serve
to piss off Facebook (who could have some perspective we are not privy to
here). From Facebook's angle, the criminal aspect here may be a much closer
issue, and this write-up could serve as the tipping point. Second, as a party
admission, this post could very well be admissible against the researcher
at trial. Without a doubt, it can be used to contradict any testimony he might
provide in defense of his actions here. (So, you HAD read the ToS, correct?)
Even without Facebook's "pressing charges", a US Attorney with political
aspirations might just decide she has enough here to move forward against the
researcher in an effort to appear "tough on cybercrime". This whitehat stuff
is murky territory for sure.

~~~
troisx
I can't see Facebook ever pursuing the criminal angle in this situation. I
actually wonder if Alex's boss isn't a little unhappy with his response
because it will make people think twice about their bug bounty (just look at
the backlash here). The bug bounty was put out there so that people don't use
or sell exploits as blackhats.

~~~
piker
Facebook doesn't have to "pursue" criminal charges, however. It's the
Government that brings criminal charges. In this case, Alex would just be a
witness (willing or otherwise) the Government used to produce evidence of the
researcher's crime. There is a mistaken understanding that if the "victim" of
a crime doesn't "press charges", then there is no criminal liability. However,
the "victim" is really only a witness to the actual crime in the eyes of the
law. Here, the researcher has arguably confessed to a number of computer
crimes, and if a DA/USAO or the DOJ were interested in making a statement,
they might have enough evidence to indict the researcher on the strength of
this post alone. Facebook, while perhaps not interested in "pressing charges",
would have to comply with a criminal investigation here.

------
guard-of-terra
Once again we see how people act hard-ass when faced with a gaping
vulnerability in their system. Be it a legal system, a computer system, or a
moral system, you will see denial and intimidation.

We should have "pastebin hat" list and Facebook should definitely be on it.

The problem with humans is that they will rather go extinct over such things
than behave properly. You could try to teach us by painful example but death
will probably come first.

------
danra
I don't see how the CSO's response makes sense for Facebook's security
interests. As CSO, it is in your interest to allow a researcher to exploit an
RCE to its furthest extent. Otherwise, you would only ever allow researchers
to test your outermost layer of protection, while leaving every inner layer
untested and thus less secure.

If indeed only credentials and technical information were obtained, all aimed
at finding more security issues, Facebook should be thankful for finding all
the vulnerabilities across all their security layers.

------
arbitrage314
If accurate (which it seems to be), a very disappointing handling by Facebook.

~~~
MichaelGG
Either way, it's awesome for the world. This kind of attack is great to tell
people one more reason why they should not trust Facebook, WhatsApp,
Instagram, etc. It'd only have been better if someone malicious had done it
and made some data public (perhaps slightly redacted).

In particular, it might help with Signal vs WhatsApp.

------
adrianmacneil
When reading the author's article, it would certainly be easy to grab the
pitchforks. It is actually a pretty interesting/useful vulnerability that some
low-level AWS keys were able to be escalated to some highly privileged keys,
and that none of these keys were IP-whitelisted.

However, the biggest issue I see here is that the author (in their own
timeline at the bottom of this post) says that they discovered the AWS keys on
October 24, yet they did not report this to Facebook until December 1 (in the
meantime, they were having various discussions with Facebook about whether
their other submissions were valid). That is seriously concerning behavior:
if you come across some live AWS keys, they should be reported immediately;
you should absolutely not just sit on them for over a month as if they were
some sort of bargaining chip.

------
kunle
If accurate, seems like a pretty counterproductive way to handle this.

------
spicyj
Alex Stamos (Facebook CSO) just posted an official response:

[https://news.ycombinator.com/item?id=10755060](https://news.ycombinator.com/item?id=10755060)

------
Garthex
Cached version:
[https://webcache.googleusercontent.com/search?q=cache:vR9o3U...](https://webcache.googleusercontent.com/search?q=cache:vR9o3UYqgIoJ:exfiltrated.com/research-
Instagram-RCE.php&hl=en&gl=us&strip=1&vwsrc=0)

------
AVTizzles
Why call the CEO and not his Mom?

------
Pxtl
On the one hand I got a little squicked in the story when he started cracking
passwords, but on the other hand I kind of assumed that bug bounty systems
would want the tester to find out how deep the bug goes. Otherwise the depth
of your security isn't being tested.

------
Dolores12
The lessons I learned here are:

1) Any RCE vulnerability in Instagram leads to unrestricted access to user
data. Facebook knows it and does nothing about it.

2) Facebook will not pay you your bug bounty reward, but will complain to your
employer.

------
marincounty
"As a researcher on the Facebook program, the expectation is that you report a
vulnerability as soon as you find it. We discourage escalating or trying to
escalate access as doing so might make your report ineligible for a bounty.
Our team assesses the severity of the reported vulnerability and we typically
pay based on its potential use rather than rely on what's been demonstrated by
the researcher."

Well, FB feels your bug bounty is worth $200? Strike that figure. We feel like
your bug bounty is worth a $100 advertising credit, if you buy $100 in
advertising? Next time just report the bug. Thanks!

(I don't know if it's my innate dislike of FB, or if I feel it shouldn't be up
to a company to determine what they feel a bug is worth. If you are going to
have a bug program, put in some very solid rules. They shouldn't be just
winging it at this point. It's not some cute little startup. It's a huge
machine that's making a fortune off its victims.

I'm still not sure if FB really cared about this hacker's escalation of a
potential attack, or if it's about money. Would I want a hacker to show me my
vulnerability with my clients' information? No, but make that crystal clear in
the TOS.)

------
giancarlostoro
I really don't want to imagine how bad things would have gone if he hadn't
been part of the bug bounty and had instead acted with malicious intent.

------
redditplebs
Looks like the sites' down. Mirror/Google cached page:
[http://webcache.googleusercontent.com/search?q=cache:vR9o3UY...](http://webcache.googleusercontent.com/search?q=cache:vR9o3UYqgIoJ:exfiltrated.com/research-
Instagram-RCE.php+&cd=2&hl=en&ct=clnk&gl=us)

------
ishanr
It's really simple. This is the beginning of the end of Facebook. With their
fake clicks on their ads and what not.

------
eecks
imo Facebook should be grateful for people like this instead of burning them

~~~
slantedview
Indeed. I can somewhat understand the fearful reaction, but ultimately it
hurts the company's rep.

------
ianhawes
I'd like to see a service where a company's source code/database/confidential
info is placed in escrow pending the payout from a bug bounty. Or, perhaps
more likely, some sort of 3rd-party arbitration.

~~~
Mandatum
Good luck finding an escrow that you could not only trust, but that would be
willing to take the heat for that one.

To be a trustworthy escrow, you must have a good reputation or track record.

There are nearly no anonymous escrows that could provide a service trustworthy
enough to handle this. And going the non-anonymous route would be near
impossible; Facebook would litigate an entire country over this.

------
henley-cs
That's a lot of posturing on both sides. FB had some severe vulnerabilities,
which the author certainly pointed out. And the author could have read the
bucket contents without downloading them. FB clammed up. The author
overreached. Neither side ends up really winning anything here. 'Tis a shame.

------
socrates2016
Nerd owns FB and wants to rub it in their face. FB power-plays nerd. Nerd
publicly pwns FB in retaliation.

------
ibic
The CSO slaps a legal threat on a security researcher and talks about ETHICS?
Good job man, gooooooooooooooooooooooooooooooooood job.

------
mml
Bad form on Mr. Stamos' part.

edit: if it's indeed true, but I have my doubts that's the case. Hard to say
either way.

------
bsmartt
I thought their stack was django?

------
joshmn
> Ruby 3.x

Rails 3.x*

------
twerkmonsta
Is it normal for security researchers to use Windows for their OS?

~~~
purpleidea
Not good ones!

------
maemilius
Am I the only one mildly annoyed that the author constantly conflated Rails
and Ruby?

~~~
raesene9
Nope, I was too. Interesting illustration (assuming it wasn't just a typo)
that exploitation of vulnerabilities doesn't necessarily require deep
understanding of the tech stack in question.

~~~
eat
And on the flip side, deep understanding of the technology stack in question
doesn't necessarily lead to implementing it securely. This is division of
labor at work.

------
blazespin
In general, if you have a green handle, you shouldn't be commenting on things
like this. Otherwise we'll have sock puppets galore muddying the waters.

~~~
abrookewood
What does a green handle indicate by the way? I checked the FAQ and there's
nothing there.

~~~
PhantomGremlin
_green handle_

New account, IIRC less than 2 weeks old. The name is colored green. But I've
seen it not be consistent, where some posts are green, others aren't. All in
the same thread.

~~~
mintplant
It's more complex than just creation date. Somewhere in there it involves
votes cast on your posts, which is why you might see someone's name switch
colors from one post to another in the same thread (the system doesn't go back
and switch name colors on previously-created posts). IIRC the exact mechanism
isn't public.

~~~
dang
No, it's simpler than that—just a function of account age.

