This bug honestly deserved a year's salary. But that's just my opinion, I guess.
On another note, I really need to try my hand at this stuff :)
In the security industry we use the term "vulnerability half-life" for this. Basically, a bug in Google that is exploited instead of reported will be discovered within a day to a week. If you try to commercialize it, it will quickly be discovered, because the top tech companies have the best incident response teams in the world.
Once the flaw is patched in Google, it's effectively patched everywhere. Game over for the attacker. Compare this to a vulnerability like Heartbleed, which is actually worth money - a critical flaw that can compromise over a third of all the servers on the entire internet. Patching that vulnerability in one place doesn't patch it anywhere else, unlike a single web application instance at Google.
The greater the half-life, the greater the value of the vulnerability. A vulnerability in Java is worth money because it will still exist in the wild for years, providing consistent income and ROI on a purchased exploit.
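To make the half-life idea concrete, here's a back-of-the-envelope sketch (my own toy model, not an industry formula): assume exploit income is proportional to the fraction of still-unpatched targets, and that this fraction decays exponentially with the half-life. Total expected income then scales linearly with the half-life:

    import math

    def exploit_value(income_per_day: float, half_life_days: float) -> float:
        """Expected lifetime income from an exploit, assuming daily income
        is proportional to the fraction of still-unpatched targets, and
        that fraction decays as f(t) = 0.5 ** (t / half_life). Integrating
        income_per_day * f(t) from t=0 to infinity gives
        income_per_day * half_life / ln(2)."""
        return income_per_day * half_life_days / math.log(2)

    # A bug in a single Google web app: patched within days of detection.
    print(exploit_value(income_per_day=1000, half_life_days=3))        # ~ $4.3k

    # A client-side Java bug: unpatched installs linger for years.
    print(exploit_value(income_per_day=1000, half_life_days=2 * 365))  # ~ $1.05M

Under this model, a half-life measured in days is worth peanuts next to one measured in years, which matches the Java vs. web-app comparison above.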
A vulnerability in Facebook is worth money to Facebook for brand integrity, but it isn't worth much to blackhat groups. You could theoretically commercialize it, but not quickly, consistently, or lucratively enough to make it worth the hassle.
Facebook has no cap on its maximum reward, so they can pay as much as they want…
No. They should go further. We should have a law, similar to Sarbanes-Oxley, that forces companies to undergo a security audit every year.
Otherwise, we're going to be stuck in an endless cycle where companies refuse to invest in security, a huge breach occurs, and everyone suffers.
The current system does not incentivize investments in security because they hurt the bottom line and have no tangible, immediate value to shareholders. That's a dangerous situation.
I'd rather see some security standards (updated yearly or so) and heavy fines and reimbursements after a hack (not necessarily malicious - a proof of concept published by a white-hat hacker would do) if security was lax. Triple them if the company hid the fact that it had been hacked.
Fining companies heavily for being hacked is like fining someone for being rained on. Except, in this case, the rain is pretty much a guarantee, and the person knows that, and when they get rained on, their customers get screwed. So you fine them for not having an umbrella.
An audit doesn't necessarily need to be done the way it has been in the past. It could even just be a bug bounty hackathon, like the ones the big browser vendors run.
If whitehats had a ton of easy-to-find work to do, there'd probably also be fewer blackhats.
If security audits were to become mandatory, they would only apply to companies of a certain size.
The compensation level also comes down to the supply side - how many other people might have discovered this bug shortly afterwards?
For this reason, there's probably a good argument for increasing the reward according to how long the vulnerability was present, to the extent that's knowable. (More so with an open-source library under version control than with a website.)
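For illustration, a toy version of that scaling (the linear growth per year and the cap are arbitrary assumptions of mine, not anything a real bounty program uses; the age itself could come from git blame on an open-source project):

    def scaled_reward(base: float, age_days: float, cap: float = 4.0) -> float:
        """Hypothetical payout scaling: the longer a bug sat undiscovered,
        the scarcer independent discovery evidently was, so pay more.
        Adds 1x to the multiplier per year of age, capped at `cap`."""
        return base * min(cap, 1.0 + age_days / 365)

    print(scaled_reward(5000, age_days=30))    # recent bug: ~5.4k, near base
    print(scaled_reward(5000, age_days=1100))  # ~3-year-old bug: 20k (hits the 4x cap)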