How I Hacked Facebook and Found Someone's Backdoor Script (devco.re)
865 points by phwd on Apr 21, 2016 | 124 comments



This is Reginaldo from the Facebook Security team. We're really glad Orange reported this to us. In this case, the software we were using is third party. As we don't have full control of it, we ran it isolated from the systems that host the data people share on Facebook. We do this precisely to have better security, as chromakode mentioned. After incident response, we determined that the activity Orange detected was in fact from another researcher who participates in our bounty program. Neither of them were able to compromise other parts of our infrastructure so, the way we see it, it's a double win: two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, none of them were able to escalate access.


> two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, none of them were able to escalate access.

Are they allowed to escalate access, given the bounty rules?

Also, isn't the periodic collection of login credentials completely out of scope? What I mean is: once the initial vulnerability was located and the pentester got shell access in the system, shouldn't he have stopped there and reported?


You can read the story from a few months back to find that Facebook apparently has an unstated but never-disavowed policy of threatening to involve the FBI, and of doing real reputational damage, to researchers who they feel cross a (poorly described and inconsistently enforced) line: https://news.ycombinator.com/item?id=10754194

In this case, they (correctly) went the other way, which creates even more uncertainty. Given this pattern of inconsistent enforcement mixed with threats, I would feel genuinely unsafe reporting a security vulnerability to Facebook except under very specific conditions. That's probably not what they're going for, but that's the environment they're creating.


It's actually very easy: If you are in a server, you report it. You don't go digging for more, because once you're in, you can easily do more damage.


But in this case, they specifically said that the researchers were unable to escalate. How does he know? Either the researchers violated the rules by trying and failing, or they didn't try and he's simply lying to make himself look better. By your reasoning and their policy, those are the only two possibilities.

What are we supposed to conclude from this? Under the current rules and assuming the description of what happened is accurate, it would seem you'll potentially be punished for establishing the full extent of a breach, unless it's not so bad, in which case you're rewarded for failing. In addition to being illogical and unfair, it also incentivizes OpSec to delude themselves and everyone else about their true security risks.


Accurate.

In the other case, they threatened him because he proved the vulnerability was much, much greater (in fact, I'd call it a billion-dollar bug) and they wanted to cover it up.

This time they're proudly telling us because the attempts failed or were not made at all.


Agree with this, getting in is enough, going further is malicious at best.


Under those rules, it's not possible to verify reginaldo's claim that this machine is cut off from more valuable data.


You don't need to verify these claims. You found a critical vulnerability, and you're not the only one who can.


How then do we assess the extent of the vulnerability?

As Wes proved, a simple looking RCE can lead to a huge breach of security due to failures in other areas.

I agree that limits must be established, but they shouldn't cut research off so abruptly, since going further can surface more information.

One might argue this is unethical, but a black hat doesn't care either way.


> How then do we assess the extent of the vulnerability?

It's already a critical vulnerability. Unless you want to assign numbers to infinity, which is ridiculous.


Yet we know not all critical vulnerabilities are created equal.

That's why some get a $10k payout and others get $2.5k.


This is assessed based on how hard it was to elevate your access rights (whether it requires physical access, user cooperation, etc.), not on how much damage you can do, because once you've elevated your rights the possible damage is unlimited.


Except in that case, he unearthed another completely unrelated vuln.

I agree that some actions are unethical, but does that really matter so much when a black hat is unethical anyway? The fact that he reported it means he harbored no malicious intent.

How is collecting logins better than that? Seriously? This is completely malicious if you ask me.

Moreover, we shouldn't judge every case strictly by the same rule, but with some consideration of the circumstances as well.


> Are they allowed to escalate access, given the bounty rules?

No. From Facebook's responsible disclosure policy [1]:

> You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)

Both of the pen testers in this situation broke the rules. Once they found a security issue they exploited it and probed for additional issues (as well as one tester who attempted compromise of sensitive company data by collecting logins).

It's good that Facebook doesn't always apply these guidelines to the letter.

[1] https://www.facebook.com/whitehat


Is what the other researcher did (collecting usernames and passwords) in scope? Is it impossible to use these credentials to get above standard privs on any part of facebook?


"As we don't have full control of it, we ran it isolated from the systems that host the data people share on Facebook."

"Neither of them were able to compromise other parts of our infrastructure so, the way we see it, it's a double win: two competent researchers assessed the system, one of them reported what he found to us and got a good bounty, none of them were able to escalate access."

From the write-up:

> After checking the browser, the SSL certificate of files.fb.com was *.fb.com …

You left a wildcard cert on this random internet-facing unaudited 3rd party linux box with no protection against data exfiltration, or an HTTPS proxy in front of it, or anything? I know it's not as critical as facebook.com, but this is still bad. Priv escalation CVEs for Linux come out like every month.

At the very least, using this cert, anyone could run a MITM on any *.fb.com service and compromise without ever breaking in. From the article that could include VPNs, Outlook Webmail, Oracle E-business and MobileIron. I'm hoping you did actually have a proxy in front of it and Orange just didn't catch that.
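To make the wildcard risk concrete: per RFC 6125, a `*` in a certificate name matches exactly one DNS label, so a key for `*.fb.com` can impersonate every first-level fb.com host but not deeper or bare names. A minimal matching sketch (the hostnames below are illustrative, not a claim about Facebook's actual services):

```python
# Sketch of RFC 6125-style wildcard matching: "*" covers exactly one
# DNS label. Any box holding the *.fb.com key can impersonate every
# hostname this function accepts.

def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Return True if a certificate wildcard pattern covers hostname."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # a wildcard matches one label, never zero or many
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

# Illustrative hostnames:
assert wildcard_matches("*.fb.com", "files.fb.com")          # the breached box
assert wildcard_matches("*.fb.com", "vpn.fb.com")            # also impersonable
assert not wildcard_matches("*.fb.com", "mail.corp.fb.com")  # extra label
assert not wildcard_matches("*.fb.com", "fb.com")            # bare domain
```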


"I'm hoping you did actually have a proxy in front of it and Orange just didn't catch that."

That sounds a lot more reasonable than actually having the private key on each individual server.


> From the article that could include VPNs, Outlook Webmail, Oracle E-business and MobileIron.

The article claims those are tfbnw.net, not fb.com.


I figure that was probably a CDN, and you don't know the origin configuration.


I hope it was. But if it and the other services Orange found are just random internet-facing appliances, it's unlikely they would configure a CDN to get access, because CDNs are for public traffic typically. Nobody uses a CDN to open their outlook web mail to the Internet, for example. But they might have had a regular SSL proxy in front.... maybe.


> no protection against data exfiltration

That's a big assumption


Since the article shows data being exfiltrated, no, it is a fact.


If downloading any data from a hacked server counts as exfiltration then everyone could be said to have no protection against data exfiltration.


This is quite confusing. "A brief summary, the hacker created a proxy on the credential page to log the credentials of Facebook employees. These logged passwords were stored under web directory for the hacker to use WGET every once in a while".

So they were grabbing creds, but it's OK because it was just a researcher? Pretty sure that's crossing the line, and indicative of an actual compromise of employee credentials.
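For what it's worth, a dropped shell like the one described is often findable by comparing the webroot against the last legitimate deploy; a minimal sketch (the paths, extensions, and timestamps are hypothetical, not from the article):

```python
# Sketch: spot a dropped web shell by listing script files in the
# webroot modified after the last legitimate deploy. Real detection
# would also compare file hashes against the shipped release.
import os

def recently_modified_scripts(webroot, deploy_time, exts=(".php", ".pl")):
    """Yield script files modified after the last legitimate deploy."""
    for dirpath, _dirs, files in os.walk(webroot):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > deploy_time:
                    yield path

# Hypothetical usage:
# for path in recently_modified_scripts("/var/www", deploy_time=1460000000):
#     print("suspicious:", path)
```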


Facebook's stance on vulnerability research is incredibly admirable. I can only hope more companies hold themselves to the same high standard.



I believe all rich firms, including FB, Microsoft, Google and many others who are fooling the public and minting trillions of dollars, will be hacked, and more advanced, sophisticated tools will be invented by specific hacking groups to hack them. Look at the Panama Papers. It is also a science and advanced technology.

David Ghosh




This is pretty much why we shouldn't be trusting centralized services at the scale of Facebook. Given that two researchers got that far, sooner or later another will actually manage to escalate the privileges. And then, I'm not sure what we should be more afraid of - leaking it all to public or some three-letter agency using it for nefarious purposes...

Seriously, don't do Facebook. Not even once.


This reasoning is really flimsy. Are you suggesting that a normal user will do better at securing their home-grown system than a team of trained security engineers?


While it is true that individual end users probably span a large range of security expertise and practice, it's also likely that fewer users would be affected per targeted attack.


But it would be much easier to run untargeted, large scale, automated attacks which would affect many more people. Just look at e.g. the world of self-hosted Wordpress installs. Somewhat competent people get hacked by scripts all the time because they can't keep up with patching their servers to respond to every new threat (even assuming they learn about said threats in time). Normal people don't stand a chance.

(Speaking of which, I just got script-hacked like this few days ago, and I think it's finally time to dump Wordpress and migrate to a static blog...)


Security assessment is all about risk + impact. The ability to compromise one thing and get access to lots of stuff is more severe than the near certainty of a single Windows XP compromise with no data.


I'd say that if it wasn't for Facebook, a lot of communication would take place over safer channels. If you doubt whether Facebook is insecure, click the "include 'only me' activity" in your activity log and imagine how much damage would happen if that leaked.


You and I remember very different pre-centralization Internets. Before Google, decentralized hand-administered email services were probably the single most popular way to bust into someone else's servers.


I'm not saying that we should go back to those times, I'm saying that we should be aiming for decentralization in the long run. It's not like the security paradigms we had back in the time are tied to decentralization. They're mostly tied to ignorance.


Decentralization is great for many things but has little effect on security. Decentralized channels are not inherently safer.

Aside on self-hosting: This reminds me a lot of the self-driving car accident debate. Self-driving cars could be thousands of times more reliable than humans, but unless they are mathematically perfect, some people feel safer driving themselves because they feel they are in control. Nevermind that Google/Facebook etc have millions of man hours invested into security.


I don't think it's so much that decentralized systems are technically safer, but that the people who own and control the centralized ones have incentives to keep them unsafe (at least from themselves), whereas if each person runs their own server, the incentives are properly aligned for security.

In essence, it's an economic argument, not a technical one.


>whereas if each person runs their own server, the incentives are properly aligned for security.

I don't see how this is true at all. Most people don't care/know about security. See Wordpress. Many companies don't take security seriously; there are many open MongoDB instances pointing at the outside world. HackingTeam, a team chock-full of blackhat hackers, was partially done in by sloppy authentication and passwordless databases.

Today, when people run their own server, it doesn't seem the incentives are properly aligned at all - and given that even blackhat shops don't even take the time to secure their own systems properly, the economic argument falls flat.


Decentralization in the context of social interaction services does not work. As long as the concept of a social network exists, people will connect to similar platforms.

It's been tried, again, and again, and again, and it fails for the same reason all such projects fail - the only people who care about this architecture stuff are those who are geeky enough to be running their own decentralized services.

Heading off the inevitable response: Email is not a social interaction service, it is a message passing service that happens to have social uses.


What is a social interaction service, and why can't it be federated or peer to peer?


Because not-techy people don't want to deal with the hassle of setting up and maintaining those things.


Don't put all your eggs in one basket.


...especially if it's not your basket.


With that logic you might as well stop using the internet altogether.


It's more like we shouldn't trust a third party closed source software for a task that is supposed to be done in a secure way.


It's buried at the bottom of the post, but I'm happy to see that Facebook paid a bug bounty of $10,000 for this. In the past we've seen Facebook refuse to pay bug bounties when the hacker goes beyond scope. Interesting that going beyond the usual scope of bug bounties actually uncovered a latent exploit and helped Facebook. Maybe this will result in a change of policies for bounty scope.


I'm not sure I understand some of the comments here claiming that 10k is not enough money for this. It clearly is enough money because Orange found the problem and reported it.

These arguments always remind me of people claiming that certain professions are not paid enough. They forget that there is a market for labor and in this case the labor is finding vulnerabilities. People will either be willing to work for the posted price or not. In the case of pen testing facebook I'd be willing to bet there are plenty of people out there looking for bugs who aren't even really concerned with what the final payout is going to be.

Yeah, they could have gotten completely owned if he didn't report this. But to him reporting it and getting 10k in compensation was sufficient. Why would facebook pay him a million if he was willing to take 10k?


It was enough for Orange. But think about demand elasticity. Perhaps there are 100 other Oranges for whom the bounty was not enough.

Clearly the bounty was not enough for the mystery attacker / researcher / hacker / whatever that Orange discovered exploiting the same hole.


From Reginaldo's post it appears that it was another bug bounty guy who was the mystery attacker.


I personally find it a little difficult to believe that this was a security researcher. Exploiting a vulnerability (against the rules of engagement), _and_ uploading a web shell?

Seems more likely that Facebook wasn't thrilled that Orange included the details of an existing, unknown Facebook compromise in his write-up.


that's understandable


There are two markets for this type of labor: one provided by the bounty programs and one provided by those who want to abuse the vulnerabilities, eg: secretive three-letter-agencies, etc.

I would imagine that the latter of the two is almost always willing to pay more. I would also imagine that by the time you're a skilled pentester, you're in your mid-to-late thirties and maybe are worried about how you're going to put your kids through college, or how you're going to retire.

So what do you do? Do you take the larger sum of cash and plague yourself with worrying about bitcoins, how you're going to lie on your taxes, and deal with the ethics of helping shady organizations?

Or do you help the company? Now you don't have to lie on your taxes or launder bitcoin, but you do have the pressure to find more security problems to make enough cash to meet your financial needs.

And the ball is solely in the court of the companies running bounty programs-- if they were to always provide more money than the black market, there's virtually no reason to bring it to someone else.

I don't think it's unreasonable for them to not want to give away more than they have to, but I get the sense that there's little to no negotiating power for the vulnerability finder-- and they should probably work on that.


No, there isn't. Message board nerds love to try to reason through vulnerability valuation, but the reality is that there are very few people who will pay for serverside vulnerabilities at Google or Facebook (or anywhere else).

The reason is that for a vulnerability to be worth money, someone needs to have a business process ready to go to monetize the vulnerability. Without that proven process, a vulnerability is just like any "Show HN" without a business model or revenue.

There are certain kinds of vulnerabilities --- browser code execution, most notably, but a couple others --- that organized criminals have whole businesses set up to drop in and run and make money with. If you have one of those vulnerabilities, you've got lots of takers for it, and the prices for those vulns are nosebleed high.

There are a few kinds of organizations that will pay for a Facebook serverside RCE. Good luck finding them. Or, I should say, not finding them. Those same organizations will kill you and your whole family just to make a point. That is, after all, the only reason they want to buy Facebook serverside RCEs.


Because the pay scale is subjective and the reporter doesn't know the amount before they report. FB and others have guidelines for their bounty programs that leave an upward bound open for severe vulnerabilities.

The argument these comments are making is that this report should qualify in some way for a higher payout.

I'm not arguing for either side here, just noting that the comments that you refer to are fairly reasonable.


There is an implicit "without exploitative downward pressure on wages" clause to the sentence every time people say "_______ isn't paid enough"

You forget that no assertion of values makes sense when divorced of context.


There is no free market of labour. Under penalty of death you are forced to sell your labour. The ultimate buyer's market.


Companies die without employees too!

A little more seriously, somehow we have to find language for different types of coercion. Otherwise we'll end up lumping the whole complicated world into one lump.


I could choose to sell a different form of my labor, like my ability to perform physical labor instead.


This is a beautiful summary. Thank you. I will be stealing this.


You can run your own business.


I really think $10,000 for serious exploits like these is just not enough money. Even if OP only spent an hour or two finding this (although that's highly unlikely), they should pay based on the seriousness/potential damage of the bug. Great writeup though. Super interesting stuff.


common response to why companies don't pay a ton of money for these exploits [1]

[1]: https://news.ycombinator.com/item?id=11249173


And I still think that line of thinking is bullshit.

The bottom line is that the dollar value for this stuff is arbitrary, and Facebook arbitrarily picking $10,000 for getting COMPLETELY OWNED and exposing any selection of personal data (in the case of the other bug, this one seems to have the potential to be even worse due to credential stealing, although it's murkier) is pretty gross IMO.

I don't know what the number should be - again, it's arbitrary - but in my personal book $10,000 is about 10x too low.


Facebook didn't get COMPLETELY OWNED. A third party product they were using for some backend line of business process that lived in a DMZ got COMPLETELY OWNED, and the researchers were unable to escalate privileges beyond it.


In parens, I said I was referring to the linked discussion, which was about a researcher that had access to any FB account. IMO that qualifies as TOTALLY OWNED. The only thing worse would be a full dump of every account.

I agree this one is murkier, although at first glance the proxy method employed by the "mystery adversary" seemed promising for privilege escalation.


Doesn't Facebook's policy prohibit privilege escalation? They write the following:

You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.) [0]

It's no wonder other bounty researchers didn't find further vectors for exploiting their privileges. There was a researcher not too long ago who had the book thrown at him for this.

[0] https://www.facebook.com/whitehat


That's an oversimplification. What actually happened was: a researcher found a serverside bug in a random backend box, got RCE, logged in, scraped and banked all the creds off the box, reported the bug, and then a month later during a dispute used the creds he stored to attack other Fb properties.

Dumping directories from machines and banking their creds isn't "escalating privileges". If you did that on a pro red team project, saving the creds to use a month or two later, you'd get fired.


The case (or however one wants to construe what or how things really happened) isn't too interesting to me. Do you read FB's whitehat rules of engagement differently?

I dug up the mentioned case, and FB's first contact with the researcher included, "Please be mindful that taking additional action after locating a bug violates our bounty policy." Between FB's whitehat policies and that, I'd be pretty sure not to escalate privileges.


Me too.


Given that a vulnerability may be exploited by a malicious party and that this could cost Facebook X millions of dollars: how much should Facebook pay for vulnerabilities to reduce the risk of such an event? That is, given some cost/benefit model, what is the ideal price for a particular class of vulnerability?

This suggests two related questions, 1. how does buying vulnerabilities reduce the risk of a malicious use of a vulnerability and 2. by how much?

I suggest two answers for question 1:

First, buying a vulnerability and then patching it prevents that vulnerability from being used by an attacker. It only makes sense to do this if vulnerabilities are very rare, since the rarer they are, the greater the benefit of fixing each one.

Second, someone who discovers a vulnerability might have a human urge for recognition and/or payment: "I did the work, I deserve some credit/payment." In this case Facebook is competing with the vulnerability black market, but Facebook has an inherent advantage (all things being equal, a legal dollar is more beneficial than an illegal dollar, and you get bragging rights, which have both intrinsic and monetizable value).

I have no idea how to answer question 2 as it is quantitative. Perhaps an economist has written pricing models for bug bounties and how this should impact cyber-insurance premiums?


Few major companies store information on most humans in first- and second-world countries the way Facebook does.

For the life of me, I cannot imagine the implications a huge hack of Facebook could have on the civilized world. Imagine someone having a database of all emails, all activities, and all connections: everything everybody in America, Europe, and Asia does.

The ability to spam people into oblivion would be just the tip of the iceberg. Most likely, countries like the UK or Germany would ban Facebook altogether. Not to mention there are millions of active credit cards stored in their wallets. A hack at that scale would mean hundreds of millions of dollars spent just on printing new plastic cards for affected cardholders.

For $10,000 you cannot even buy a modest 80" TV... I am disappointed at how little FB values the security of their systems, but oh well... who uses FB anyway /sarcasm


>> And I still think that line of thinking is bullshit.

You haven't given any real refutation to the comment linked by the parent. How qualified is your opinion? You're entitled to it, but know that most bug bounty participants and members of the actual security industry disagree with you.


My refutation is in the linked discussion.

And I'm reasoning from economic first principles, not experience in the field. From first principles, I don't understand the argument that $10,000 is fair. At least, I don't understand that argument any more than why $10 is fair - which is my point, that it's arbitrary. And in my arbitrary opinion, $10,000 is grossly low compared to the relative work involved and money at stake.

The FBI just paid $1M to access one guy's iPhone. The vulnerability in the linked discussion, which was guaranteed access to any FB account, was a $10,000 bounty. IMO those numbers need to be a lot closer together.

Edit: $15,000


The usual way to evaluate this is to consider the chance of being discovered (D) times the chance of being used as an exploit (E) times the cost of the exploit (C) with an appropriate discount factor (F). Think of the discount as the wholesale price of the exploit, what a mal actor might pay for the exploit.

For example, if there was a 1% chance of discovery, and a 50% chance of the person discovering it using it as an exploit, and it cost them 1 day of revenue ($50m) and they used a discount factor of 10%, that would indicate that the bounty would be worth about C * D * E * F = $25k.

If it's likely that the exploit would only last 5 hours, then $10k is a reasonable bounty.
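The arithmetic above, as a tiny sketch; all figures are the comment's own illustrative numbers, not real Facebook data:

```python
# Sketch of the expected-value bounty calculation described above.

def bounty_value(cost, p_discovery, p_exploit, discount):
    """Expected loss avoided, discounted to a 'wholesale' exploit price."""
    return cost * p_discovery * p_exploit * discount

# One day of hypothetical revenue ($50m), 1% chance of discovery,
# 50% chance of malicious use, 10% discount factor:
value = bounty_value(50_000_000, 0.01, 0.50, 0.10)
# value == 25000.0
```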


> Cost of revenue

That itself is pretty vague to determine, as a hack could have an impact on reputation, and the impact might not be limited to just one day. Future users might be afraid to use the product, and current users might leave in a few weeks.


Which is ironic because "impact on reputation" is quite vague itself.


That was my point, actually. My poor English is to blame if it wasn't clear :)


"Say they buy it for $20,000. Do you really think someone will derive $20,000 of profit from this before it's caught and patched by Facebook?

The only vulnerability worth $15,000 or more is one directly impacting a language, a widely used development library/framework or a widely used piece of software."

I think that statement might apply here given the FTA issues. Hackers could've gotten plenty of mileage out of it, especially if others at Facebook, like their AI team, used it for something that's a trade secret. That's speculation, but it's not like hacking a news feed.


It's not just the value to the finder but the cost to Facebook. Generally the profit an attacker makes is an order of magnitude less than the cost they impose.


Good point. That should be factored in.


I guess the only way to find out if that's true is to try and use the black market first?


Good luck with that. "The black market" isn't buying vulnerabilities in 3rd party serverside components at Facebook.


Seriously! Especially for a company as big as facebook!


I'd say $25,000 at least. It's a number I've seen a few companies cite for full-scope penetration tests. Usually to sell the products they make the real money on. :)


I'd have to say that it's pretty clear Facebook isn't offering enough, otherwise the first guy through the system would have claimed it.


Oh yeah, I'm just trying to determine a number that would make sense. Another angle is what the black market would pay for a given level of access. The official bounty might need to be a good fraction of that, or equivalent, to pull more of the 0-days away from the black market. There's also balancing the cost of straight-up security staff against the bugs outsiders are finding. Maybe just pay good consulting rates to experienced people you rotate in and out to find the stuff others overlook, with bounties paid based on effort and significance.

Many possibilities. This was worth way more than $10,000, though, given it detected a subversion. I'd have applied the consultant to a few other areas of my operation given the aptitude.


This isn't the first time files.fb.com has been publicly reported as having been breached: http://www.nirgoldshlager.com/2013/01/how-i-hacked-facebook-... .


Nice write up. Of course, this would be the team member whose photo is merely an Orange. Paranoid security people haha...

Part that jumped out at me, aside from obvious goodies, was this:

"FTA is a product which enables secure file transfer, online file sharing and syncing, as well as integration with Single Sign-on mechanisms including AD, LDAP and Kerberos"

...followed by...

"...web-based user interfaces were mainly composed of Perl & PHP... PHP source codes were encrypted by IonCube... lots of Perl Daemons in the background"

Wow. That inspires a lot of confidence in the "secure" product. I'd have doubted Facebook relied on such a system had I not known they built their empire on PHP. We all know its reputation. Their "secure, file-transfer appliance" fits right in.


Article is exactly as headline advertised, and a well-laid out write-up. Neat to come across it.


Nice work, very detailed. However, this is a hack of Accellion's Secure File Transfer appliance. How should Facebook, or anyone for that matter, protect themselves in these cases? I mean, other than some obvious ones like not running as root, limiting file access, limiting network access to other servers...


Reason about the software as if it has already been compromised. Think about how user credentials and private keys the server touches can be used to attack other internal services, and try to limit the scope as much as possible.


Which is apparently exactly what Facebook does with this thing.


To be fair, it looks like they aren't purely using SSO, which is what provided the credential-scraping attack vector that someone else used.


I know nothing about pen testing, but this was very interesting and easy to follow regardless. Thanks so much for sharing!


This is the same researcher that found a RCE in Uber: https://hackerone.com/reports/125980

Shameless plug, but if you like this kind of article I suggest signing up for my newsletter: http://bugbountyweekly.com. A free, once-weekly e-mail round-up of news and articles about bug bounties.


Fascinating. Looking for a hackable system and finding someone beat you to it.


The author and Facebook, Inc are lucky that the earlier hacker was just a regular spam criminal, not some bigtime "Nation-State Hacker". The Nation-State Hacker probably would have been a great deal more careful and not left easy-to-spot PHP backdoors lying around.

This also points out a weak area in our knowledge of hacking: how often does a given exploit get rediscovered? This and other anecdotes show that it happens at least once in a while. Prevalence of rediscovery could put the lie to the NSA's "NOBUS" assumption, though. So we're likely never to see the results of such research.


This is a great write-up. I know little about pen testing, yet I was able to follow along easily.


Seems like two factor authentication here would have helped.


How?


Because the backdoor was logging fb developer credentials. The stolen creds would not be useful with two-factor required every time.


If the attacker has control of the box, he can just man-in-the-middle a two-factor token.

It would certainly require the attacker to be a little more proactive, but it would hardly stop the credentials from being useful.


But he did not have access to the box with the 2FA. The attacker just had access to a box hosting software from a third party, completely isolated from FB's infrastructure.

With the passwords, however, he might have gotten access to the VPN or services. 2FA would have certainly helped.

This is of course only interesting if the passwords were reused (even the most security-minded folks do that). If a third-party vendor does not support 2FA, or when dealing with legacy code, I believe it is good practice to use only randomly generated passwords from a password manager.
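The "randomly generated" part is trivial to do right with a CSPRNG; here's a minimal sketch of what a password manager does internally (the length and character set here are my own illustrative assumptions, not any standard):

```python
import secrets
import string

def random_password(length=24):
    """Generate a random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A unique password per service means a credential scraped from one
# compromised box is useless everywhere else.
print(random_password())
```

The point is that the cost of per-service unique passwords is essentially zero, while reuse turns one compromised vendor appliance into VPN access.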


They were able to snarf passwords in plaintext.


If there is someone to be upset with in this situation, it's Accellion, the vendor behind files.fb.com.

Looking at how egregious their security mistakes are, they don't appear to take security seriously.

This is the same company that (last I was down there) had a billboard on 101 that says "Secure".

Many echoes of Oracle's "Unbreakable" ad campaign from a company that is aggressively bad at security.


So Wes got only $2.5K after he successfully proved he could access signing and API keys, and after he was threatened with a lawsuit.

How does setting up a shell and collecting credentials and then downloading them later give you a pat on the back?

Is this some kind of a joke?


Why do you use so many emoticons in your article?


Quite clever find! Good write-up, too! Kudos!


Only $10000? What the hell do you have to find to qualify for that "million dollar bug"?


If you can find a way to use their backup tool to download an arbitrary user's profile, and they don't pay you out at $1M, "BS" can be safely called.


So since they were unable to pivot laterally, you pat them on the back and call it a win. But last time someone did successfully pivot laterally, you threatened his employer? You guys are really sending mixed messages! Are they allowed to escalate or not? And if that's the new policy, shouldn't you pay the other guy who did escalate?


We detached this subthread from https://news.ycombinator.com/item?id=11543926 and marked it off-topic.


That's not what that person did, and you know that, because you were on the thread where this was picked apart.

The person who had problems with Facebook didn't "pivot laterally".

He popped a shell, dumped and banked directories from the server, held them secretly, and used their contents more than a month after reporting the bug to Facebook as a cudgel in a dispute over a bounty.


I just went to that thread, read both articles and the HN thread. Your post is completely inaccurate.


[flagged]


(a) No, we didn't. Stamos was running Domain Services by the time NCC acquired Matasano --- a totally different line of business. I have never worked with Stamos. I've never been in an organization that shared a number with Stamos. For the overwhelming majority of my pentesting career, I knew Stamos primarily as my arch-competitor at iSEC.

(b) Please don't accuse me of commenting in bad faith. I don't care whether you use a calm tone of voice or not. It's especially weird to be accused of being in the tank for someone by an anonymous account.

(c) If I'm really incorrect, you should make an actual argument, rather than a drive-by snipe.


No, I shouldn't. You know as well as I do that it would be a waste of both of our time. Sometimes I'm ok with that, but this horse was already beaten to death.

I humbly submit for your consideration that my assumption of you making comments in bad faith is not unlike yours and Alex's assumptions of bad faith on the part of the researcher, and perhaps we could all improve a bit here.

Snipe aside (which was fired at facebook, not you), I am genuinely confused about their ambiguous policy, and it looks like other commenters are too.


If you had a real argument here, you'd make it, rather than suggesting that I'm commenting because of a fictitious working relationship with Alex Stamos.


That's kinda disingenuous and you know it. From the previous discussion[0], you should know not to take one side of the story at face value.

[0]https://news.ycombinator.com/item?id=10754194


Having participated in that conversation I can say that the timelines and statements from facebook were suspect. I'm sure the researcher didn't make the best choices but how facebook handled it was horrible and should make anyone participating in that bug bounty carefully consider every action they take against facebook's infrastructure.


Wow. That was an interesting set of comments to read. The consensus of the crowd was actually against Facebook in that one, probably due to their overpromising on bounties for big compromises and under-delivering, plus going after the dude's job. A number of security professionals, including a friend of Stamos, were against the researcher because he dumped and sat on data, plus had his business info involved. They cited the expectations of pentesters and responsible disclosure. What a mess.

I don't think Wes acted in good faith in that one, but neither did Facebook in anything privacy-related. Who cares about fundamental ethics, given the parties involved. I will say his actions were nearly warranted if Facebook was promising huge bounties for something that could cause them big problems, which that case seemed to be from that thread's comments. I don't know for sure.

As far as escalation or downloading data goes, I found that to be the only way to get taken seriously by management. It had to be done non-disruptively, with trusted personnel, protection of that data (e.g. RAM drives, crypto), and assurance it was gone afterward. I rarely even read it, as filenames & credentials were enough. Nothing like showing marketing plans or private emails to execs, under a contract vague enough for it to be legal, to get security taken seriously. The responsible-disclosure debates of the '90s showed us that letting the vendor decide almost always resulted in them downplaying the risk, saying it could "hypothetically" do something but was probably overstated. People playing that game usually get bounties that add up to less per year than a median IT salary.

I'd rather not play that game. If the company bullshits, do what you can within their legal framework to call them on it, provably and without doing any damage. If they didn't bullshit in that case, then he went way overboard and looks like he's running an extortion racket. I think key parts of the story aren't published, so I can't be sure. Good news is that Facebook and Wes both don't mean shit to me. Moving on. Appreciate the entertainment and different perspectives, though. :)


i hacked facebook and someone saw me hacking and said why are you hacking and so that's how i hacked facebook



