Aaaahhh, the feeling you get when you notice that you fucked up. Everything gets quiet, body motion stops, cheeks get hot, your heart starts to pound and then sinks really low, "fuck, fuck, fuck, fuck, fuck, fuck, fuck, fuck, fuck, fucking shit". Pause. Wait. Think. "Backups, what do I have, how hard will it be to recover? What is lost?" Later you get up and walk in circles, fingers rolling the beard, building a plan in your head. Coffee gets made.
Pffft, it's not a real panic until you weigh the pros and cons of leaving the country with nothing but the clothes on your back and becoming an illegal immigrant shepherd in a nation with too many consonants in its name.
I don't think there's any public technical mistake that'll prevent you from ever getting a job in tech. Demand is just too high. Peaceful oblivion still isn't my default even though it should be.
I had this experience when, years ago on my first day as group lead at $JOB, I was being shown a RAID 5 production server that held years of valuable, irreplaceable data (because there were no backups. Let me repeat: there were no backups). For some bizarre reason, I thought "oh cool, hot-swappable drives" and pulled one out of the rack. This naturally resulted in loud, persistent beeping from the machine, which everyone ignored on the assumption that the fellow who had just been hired as the group lead knew what the f he was doing.
While I didn't know what I was doing, I did manage to get the beeping to stop, and had to come in at 5 a.m. the next day to restripe the drive I'd yanked out.
Did I mention there were no backups? Once I was a bit more seasoned on the job, I politely but persistently raised the need for durable backups with management. Although I kept at it for months, they thought about it, talked about it, and ultimately did nothing. A few months after I left, the entire array failed. Since the group's work relied on that irreplaceable data, everything ground to a halt for the several months it took an off-site company to recover it.
My previous boss stores company data this same way. I begged him to approve the $5 per month cost for Backblaze on the computers I used. He approved it for some, but not all (about half of the ten computers). He completely rejected the idea for the company's data. After all, it was already protected by RAID.
Theoretically, yes, but there are often other things at play. I know the story is older, but RAID 5 has been dead to me since about 2015, mostly because at current drive sizes a RAID 5 rebuild takes so long that the chance of a cascading failure, i.e. losing a second drive mid-rebuild, turns it into a "send it to a recovery lab" risk. Anywhere you would use RAID 5, just use RAID 6.
To add to the comments about cascading failure: if a drive goes bad, another drive from the same manufacturing batch is disproportionately likely to go bad too. RAID arrays are often built with drives from the same batch, since they were all bought at the same time from the same vendor. This means array failures involve multiple drives more often than you'd expect.
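The back-of-the-envelope arithmetic behind "just use RAID 6" is worth sketching. Here's a minimal Python version; the drive count, capacity, rebuild rate, annualized failure rate, and URE rate are all illustrative assumptions, not measurements, and the independence assumption in the second-failure estimate is exactly what same-batch drives break:

    # Rough numbers behind "anywhere you would use RAID 5, just use RAID 6".
    # Every figure below is an illustrative assumption, not a measurement.

    def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
        """Time to rebuild one failed drive at a sustained rebuild rate."""
        return capacity_tb * 1e12 / (rebuild_mb_per_s * 1e6) / 3600

    def p_second_drive_failure(n_drives: int, rebuild_h: float, afr: float) -> float:
        """Chance that at least one surviving drive dies during the rebuild,
        assuming independent failures at a constant annualized failure rate.
        Same-batch drives violate that independence, so the real risk is higher."""
        p_one = 1 - (1 - afr) ** (rebuild_h / (365 * 24))
        return 1 - (1 - p_one) ** (n_drives - 1)

    def expected_ures(n_drives: int, capacity_tb: float, ure_per_bit: float = 1e-14) -> float:
        """Expected unrecoverable read errors while reading every surviving drive
        end to end during the rebuild, at a common spec-sheet URE rate."""
        bits_read = (n_drives - 1) * capacity_tb * 1e12 * 8
        return bits_read * ure_per_bit

    hours = rebuild_hours(capacity_tb=16, rebuild_mb_per_s=100)          # ~44 h
    p2 = p_second_drive_failure(n_drives=8, rebuild_h=hours, afr=0.02)   # ~0.07%
    ures = expected_ures(n_drives=8, capacity_tb=16)                     # ~9
    print(f"rebuild ~{hours:.0f} h, second-drive risk ~{p2:.2%}, expected UREs ~{ures:.1f}")

Even when an outright second failure looks unlikely under independence, the expected read errors over a full-array pass (at spec-sheet rates, which real drives often beat) are what tend to doom a big RAID 5 rebuild, since RAID 5 has no parity left to correct them; RAID 6 still does.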
lol, it's amazing how fast the blood leaves your face when your mind transitions from "cool, that worked well" to "oh no, what have I done?"
That backups comment sounds very familiar.
I accidentally deleted a client's products table from the production database in my early years as a solo dev. There was only a production database. Luckily, I had written a feature a while before that exported the products to an Excel sheet, and I happened to have a copy from the prior day. I managed to build an importer to ingest the Excel file and repopulate the table in record speed, all while waiting for the phone to ring with a furious client on the line. Luckily they never found out.
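The repopulate-from-the-export step in a story like that comes down to only a few lines. A minimal sketch, assuming a pandas + SQLAlchemy stack and a hypothetical connection string, file name, and table name (the original was presumably whatever the app already used):

    # Minimal sketch: reload a products table from the prior day's Excel export.
    # Connection string, file name, and table name are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:pass@localhost/shop")

    # Read the most recent export; columns must line up with the table's schema.
    products = pd.read_excel("products_export_prior_day.xlsx")

    # Refill the now-empty table. if_exists="append" keeps the existing schema
    # and constraints instead of letting pandas recreate the table.
    products.to_sql("products", engine, if_exists="append", index=False)

Anything added or changed after the export is still gone, of course, which is exactly the gap a prior-day copy leaves.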
damn, your description is spot on and reading this triggered PTSD in me...
The last time I had this feeling was two years ago, when I destroyed one of our development servers with a failed application update. I wished so badly that Ctrl + Z existed in real life... We had backups of the machine, but it was still kind of humiliating to tell everybody and ask for a restore from backup (everybody was cool about it in the end).
I lost an hour and a half of chat messages from a Slack-like app. Luckily we were pretty small at the time, so not much data was lost, but holy shit did that make me almost throw up.
Thank God my automatic backups ran so close to the mistake that I didn't lose 24 hours of data.
Haven't made a mistake like that since, and I don't destroy DB records like that anymore.