Absolutely no backend code should be pushed out that isn't first audited by a security company. God knows they can afford it, and mistakes like this could end up being far more costly to Facebook (stock price, lawsuits, etc.).
Crap like this makes it clear that not only are critical changes to Facebook's security infrastructure not audited at all (in-house or outsourced!) for even the most ludicrously obvious vulnerabilities, but also that Facebook itself does not even begin to take security seriously.
And this is completely ignoring the fact that it took them five days to acknowledge such a critical issue, a further symptom of Facebook's sheer apathy toward the security, privacy, and data of its users, corporations and individuals alike. That a company/website like Facebook, holding information as private and personal as Facebook profiles do, and with such incredible monetary and technical resources at its beck and call, cannot even triage incoming vulnerability reports correctly makes absolutely zero sense.
First, it's a numbers game. There are orders of magnitude more people trying to break the product than there are people trying to make it secure (the dev team vs. the rest of the world?). As a developer you are ALWAYS at a disadvantage.
Second, different objectives. As a vulnerability seeker you only have to find one weakness, while as a developer you have to write securely everywhere. That just isn't realistic. The best developers can do is try. There is no indestructible software.
Third, the response and fix time is actually good? If anything - 5 days is an incredibly good turnaround. We don't know what else was going on, what other crazy vulnerability may have been reported at the same time or was already being worked on, etc. While security is important, it is unrealistic to imagine that the team in charge of these kinds of fixes is all that large or has infinite resources.
It is hard (impossible) to fully secure something like Facebook. I agree with the other sentiments that, if anything, their crowdsourcing efforts have been quite successful. If you are unhappy with a 5-day turnaround - start looking for another solution. I think you'll be hard pressed to find anything 1) more secure, and 2) with quicker responses to security issues.
Yes. Especially since the code wasn't actively being exploited in the wild.
Especially for a major holiday weekend when a lot of people go on vacation....
This individual found the bug, reported it to Facebook through their bounty program, they fixed it pretty fast and gave him money. What more do you want?
I actually agree to an extent with ComputerGuru. The company deploying the code (Facebook) is responsible for any exploits. We don't know whether Facebook consults a security team (given the bounty amounts, I'm sure they have one internally).
The real problem with code, though, is that bugs will ALWAYS exist. They can get even worse as more people have a hand in the package. I've encountered this a lot on projects, where code snippets will be redundant, overcomplicated, or (my guess in this case) will conflict with other pattern checks.
To be honest, I'm actually surprised that we heard about this exploit. I'd almost imagine that most companies would be tempted to have the hacker sign an NDA in order to collect the bounty.
"Warning: Suspected phishing site!
The website at blog.fin1te.net contains elements from sites which have been reported as “phishing” sites. Phishing sites trick users into disclosing personal or financial information, often by pretending to represent trusted institutions, such as banks."
The home page doesn't produce this message, even though the linked article is summarized there. Clicking on the article from the home page also produces this message.
Nonetheless, very simple yet very clever exploit! I'm sure someone kicked themselves pretty hard over that one.
If they weren't willing to hit the database to look up the profile_id for the reset operation, it makes me wonder whether the confirmation codes are in fact deterministic rather than randomly generated.
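For context, the core flaw in the write-up was that the phone-confirmation form trusted a client-submitted profile_id. A minimal sketch of that pattern, and the obvious fix (all names are hypothetical, and this TypeScript stand-in is of course nothing like Facebook's actual code):

```typescript
// Hypothetical in-memory state standing in for the backend's phone records.
const phoneByProfile = new Map<string, string>();

// Vulnerable version: the profile to link comes from the request body, so an
// attacker can request a code for their OWN phone and then submit it together
// with the VICTIM's profile_id, linking their phone to the victim's account.
function confirmPhoneVulnerable(
  req: { profileId: string; code: string; phone: string },
  validCodes: Map<string, string>, // code -> phone the code was sent to
): boolean {
  if (validCodes.get(req.code) !== req.phone) return false;
  phoneByProfile.set(req.profileId, req.phone); // trusts client input!
  return true;
}

// Fixed version: the profile is taken from the authenticated session,
// never from a client-controlled field.
function confirmPhoneFixed(
  session: { profileId: string },
  req: { code: string; phone: string },
  validCodes: Map<string, string>,
): boolean {
  if (validCodes.get(req.code) !== req.phone) return false;
  phoneByProfile.set(session.profileId, req.phone);
  return true;
}
```

The fix is cheap precisely because the server already knows who is logged in; the only question raised above is why the original code reached for a form field instead.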
We have a Rails test that we give dev candidates, and red flags go up when we see this happening (which is far more often than I'd like to admit). Kind of scary that there's likely a bunch of production code floating around that is so easily hackable.
Every time I hear the reward amounts, it entices me to divert my attention to finding bugs and loopholes in systems. :/
Considering exploits are supposed to be hard to find (which is why the bounties are large), the reward is partly incentive and partly hush money for the hacker, because you have to consider a few things...
1) Why and how did you find the exploit (were you trying to hack someone's account, did you stumble upon it [that's lucky], are you a security firm [meaning you've had success with this before], were you black-hat contracted, etc.)?
2) A hacker would prefer the recognition [possible employment], the reward [sandwiches aren't free], and release of liability [a company may still file charges for probing their systems - 'weev' is an example].
I can think of very few vulnerability testers who have gained employment at the companies where they found the exploits. Comex is one I can think of off the top of my head (created the jailbreak for iPhones, landed an internship at Apple, then a career at Google).
That exploit has a value on the 'black market'. If it comes down to "no money" or "$20k", people are going to be looking at the "something" instead of "nothing", no matter what the laws say.
The bug bounties don't always have to be a lot - most people will want to do the right/safe thing anyway. They just have to offer some incentive (we've all seen some success with even $800 bug bounties) to keep the honest people honest.
There's nothing wrong with having a talent and wanting to make a living from it.
Personally I'm of the opinion that the only responsible disclosure is full and anonymous disclosure.
Also good to see that the finder was amply rewarded for his effort.
A side note - the SMS confirmation code text should explain what is going to happen when the code is used. Along the lines of: "Facebook mobile confirmation code ds3467hj. Note: entering this code will link this phone to your Facebook account."
Otherwise, if the SMS is just "confirmation code ds3467hj", it is overly easy to create a phishing attack that tricks the user (striving to get access to some resource, like a magazine article for example) into entering the code on an attacker's web site.
5 Days to Acknowledge: Yipes!
Anyone remember the bug where everyone had access to Mark Zuckerberg's private photos?
Same auth-bypass shit.
Facebook has some of the best engineers in the world. They also have their own modified version of PHP.
And really it doesn't matter what they use; they could use Lua and still have this issue. Just because eBay used C++ or CGI or whatever doesn't mean eBay never had issues. Same goes for every other site/language out there.
The PHP hate is getting a little old.
I certainly agree that the problem wasn't the technology here, but I disagree with your conclusion. A "missing if somewhere" is far easier to avoid technologically than a high-level design flaw like this. It's fairly easy for a type system to notice that not all cases in a conditional are accounted for, but it's much harder for a type system to understand that it's inappropriate to use client-submitted data as a profile ID for a password reset request (as opposed to operations like submitting a friend request, where it's perfectly valid).
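To illustrate the distinction: a type checker can mechanically flag an unhandled case in a closed union, but it has no notion of which request fields are trustworthy. A TypeScript sketch (all names hypothetical):

```typescript
// A closed set of actions. If a new kind is added and the switch below isn't
// updated, the `never` assignment in the default branch fails to type-check.
// This is exactly the "missing if" class of bug a type system catches.
type Action =
  | { kind: "friend_request" }
  | { kind: "password_reset" }
  | { kind: "link_phone" };

function describe(a: Action): string {
  switch (a.kind) {
    case "friend_request": return "send friend request";
    case "password_reset": return "reset password";
    case "link_phone": return "link phone number";
    default: {
      const unreachable: never = a; // compile error if a case is missing
      return unreachable;
    }
  }
}

// But the type system can't see that a profile ID the client may supply
// (a friend-request target) differs from one it must not supply (a
// password-reset target); both are just strings. That rule is a design
// decision living outside the types.
function resetPassword(profileId: string): string {
  return `reset for ${profileId}`; // type-checks whether profileId came from
                                   // the session or from the request body
}
```

(Branded/opaque ID types can push some of this into the compiler, but someone still has to decide, per operation, which IDs are allowed to come from the client.)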