It's a (thinly) veiled threat. Of course, it's irrelevant to what's being discussed, but it's a way of saying "I can embarrass you with this irrelevant issue if you press this" without actually saying it. Classic extortion tactic.
Since Josh printed the exchange in full, they no longer have this on him. Still, I don't know if it's a smart move on his part. He could have left that out entirely, and if they brought it into the public, he'd have more than $10,000 to talk about.
Actually, I'm sure there are two sides to the database story. He could have been poorly trained, or working 80 hours per week-- at which point mistakes are inevitable-- or given bad tools. These kinds of fuckups rarely have only one person at fault, especially in the sloppier startups where management is thinly spread and incompetent and there are too many moving parts (startups are hard to keep track of even when well-run, and many aren't). I'm surprised he didn't go into detail about what happened there.
Fat-fingering an rm -rf or corrupting a DB, if you have good backup policies, should generally not cause End of Days.
1. Responsibility to have a backup.
2. Responsibility to not screw up live data.
When someone screws up on #2, it inconveniences the person responsible for #1 and potentially loses any data since the last backup. That is the limit of #2's responsibility in this.
If #1 hasn't done his job right, it will come out when someone eventually plays the part of #2 (mistakes happen). Once that happens, the damage from not having a backup is #1's responsibility, not #2's.
In this case, we do not have clear information about who was responsible for #1.
It is my job to make sure we have a well-run software engineering product team. Ensuring that backups run and are tested is one small part of my job. Not doing the backups myself, of course, but making sure the team has taken care of them.
Mistakes happen and folks shouldn't be punished for that.
Remember that disks are fallible - human error needn't be involved to require recovery from backups.
Regardless of the root cause of data corruption, requiring three days to recover is completely a management/organization problem.
He could have been poorly trained, or working 80 hours per week-- at which point mistakes are inevitable-- or given bad tools.
The phrase "poorly trained" puts the blame on someone else, rather than on the person who made the mistake. Anything to avoid responsibility.
If a programmer is consistently making mistakes that cost other programmers time and energy, and is either incapable of improving or (worse yet) unwilling to, then he's a bad programmer and should be fired. No company can tolerate dividers.
It doesn't sound like this is what happened to the OP. It sounds like this was an occasional mistake. They happen.
If a run-of-the-mill junior programmer mistake is an existential threat to the company, that's management's fault. Either (a) it was too early to hire someone that junior, or (b) the infrastructure was poorly designed.