Sure. First and most important thing for understanding Real World costs: one engineer is $20k per month in fully loaded costs (salary, benefits, taxes, overhead), which works out to roughly $1k per engineer-day. These numbers go up for emergency response because a) on-call work commands a premium and b) security experts are even more expensive than generic engineers.
Github's first response to this would be pushing a Big Red Button that would get 4+ engineers to devote their Sunday to this. That's $4,000, cash money. The predictable second step after the bleeding stops is to do a line-by-line audit of their entire code base. My guesstimate for Github is that it would cost north of 50 man-days ($50k).
But wait, there's more! As a result of this compromise, Github is likely going to hire external security firms to pentest them and make process recommendations. The caliber of firm they would consider employing will cost, bare minimum, five figures. Cost goes up pretty rapidly.
But wait, there's more! Github will, as a result of this incident, have a number of people close accounts today (totally measurable) and an unknown number avoid creating accounts in the future. LTV figures for SaaS customers very quickly become motivational numbers. A single company moving its repo from Github to an internal system because Github Let Anyone See Any Repo (+) could easily cost $5k+ in LTV, and that scales horizontally across their entire client base. Scaring your customers' PHBs is never fun. This issue will be held against Github in a thousand internal conversations.
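The line items above can be put together as a back-of-envelope model. All the numbers are the guesstimates from this comment, not data, and the churn figure is per defecting customer:

```ruby
# Back-of-envelope incident cost model; every figure is a guesstimate
# from the comment above, not measured data.
ENGINEER_DAY = 1_000                    # $20k/month fully loaded ≈ $1k/day

emergency_response = 4 * ENGINEER_DAY   # 4+ engineers burn a Sunday
code_audit         = 50 * ENGINEER_DAY  # line-by-line audit, ~50 man-days
external_pentest   = 10_000             # "bare minimum, five figures"
churned_ltv        = 5_000              # one company leaving, per customer

total = emergency_response + code_audit + external_pentest + churned_ltv
# => 69_000, and that's before churn scales across the client base
```

Note that the only line item that scales is the last one, which is why the reputational damage dominates the direct engineering cost.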
Long story short: getting hacked is Very Bad News.
+ This is the key takeaway from the hack, not "Someone did a one-line defacement of an OSS project."
> this stunt will cost Github five to six figures.
The vulnerability already existed. Github customers' accounts could already have been compromised, leaking confidential information without the customers ever knowing. The costs borne by Github's customers could easily run to more than five or six figures.
Rails is insecure by default, so securing the website would have required work (i.e. salary) anyway. Instead of fixing it cheaply early on, they now have to fix it at several times that cost. Time is money and interest is paid on loans, so paying a higher price now is only the natural consequence of not fixing it earlier.
Therefore, I don't believe it was the stunt that cost Github five to six figures. The loss of wealth was already there from day one, when Github's developers did not read the Rails documentation and/or when Rails decided to make attributes publicly assignable by default. Today is merely a "correction": instead of Github's customers losing confidential company information without knowing it, Github now bears the cost upfront, as it should.
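For readers who haven't seen this bug class: it's Rails mass assignment, where a model sets every attribute a request sends unless the attributes are whitelisted (the job `attr_accessible` did in Rails of that era). Here is a minimal plain-Ruby sketch of the failure mode; the class and method names are invented for illustration and this is not actual ActiveRecord code:

```ruby
# Plain-Ruby sketch of the mass-assignment bug class (illustrative only).
class User
  attr_reader :name, :admin

  def initialize
    @name  = "anonymous"
    @admin = false
  end

  # Default behavior in Rails of that era: assign every key the
  # request sends, including ones the form never exposed.
  def unsafe_mass_assign(params)
    params.each { |key, value| instance_variable_set("@#{key}", value) }
  end

  # The fix: whitelist what a user may actually set (the role
  # attr_accessible, and later strong parameters, play in Rails).
  ALLOWED = %w[name].freeze
  def safe_mass_assign(params)
    params.each do |key, value|
      instance_variable_set("@#{key}", value) if ALLOWED.include?(key.to_s)
    end
  end
end

attacker_params = { "name" => "mallory", "admin" => true }

u = User.new
u.unsafe_mass_assign(attacker_params)
u.admin  # => true: privilege escalation from one crafted POST

v = User.new
v.safe_mass_assign(attacker_params)
v.admin  # => false: the unexpected key is silently dropped
```

The point is that the dangerous path is the *default* path: a developer who never reads the relevant documentation ships the unsafe version without writing a single suspicious-looking line.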
In the "emperor has no clothes" story, would you say the kid who pointed out that the emperor had no clothes caused the emperor's embarrassment? The emperor never had any clothes; he should have been embarrassed, but he wasn't. The kid pointing it out corrected this, and I believe the same has happened here with Github, Rails and Egor Homakov.
> The predictable second step after the bleeding stops is to do a line-by-line audit of their entire code base.
And this would be unnecessary if the vulnerability were discovered internally rather than demonstrated to exist by a well-meaning outsider? (Never mind a malicious third party.)
A bank that left its back door open wouldn't need to conduct an audit if the breeze were noticed a few years later by a teller, rather than pointed out by a customer throwing notes on paper aeroplanes through the open door?
(Ditto all your other points).
You cannot and should not assume that a "0 day" (questionable terminology in this case) has not already been discovered and exploited.