
Accidentally Deleting Everything - wslh
http://feld.com/archives/2016/01/accidentally-deleting-everything.html
======
chucky_z
When I was new to Gluster, I was blindly following tutorials on how to fix a
split-brain. Backstory: I inherited this setup, so I had to fix it as quickly
as possible.

Basically, you can mount individual bricks in /tmp and run a 'forcefully fix
this split-brain' operation.

I left this running to let things get happy overnight in a screen. The
following morning every single file was gone.

I had a tmpwatch job configured to remove everything in /tmp past a certain
age. We had a backup, but it was a few days old. That was a fun weekend.

It is the worst feeling in the world, and a mistake I'll never make again...
at least not like that.

------
FussyZeus
Mylio seems to have the same problem as iTunes (and a lot of OS X software,
frankly). Why does a cataloguing app feel the need to screw around with the
source material? Every other media app I've ever used, from Windows Media
Player to Winamp to Amarok, builds its catalog in a local DB and links to the
relevant files. Yet iTunes (and, I'm pretty sure, iPhoto) moves your files by
default (or copies them, if you tell it to) into a new directory, where it
proceeds to screw around with them without making clear what it's doing.

Sod off the lot of you and leave my files where you found them.

~~~
dan1234
For iTunes, you can stop this happening by unchecking "Keep iTunes Media
folder organized" (In Preferences/Advanced).

~~~
FussyZeus
That eliminates some of the behavior, but for someone like me, who works on
both Windows and Mac and uses other programs to keep a large collection of
mp3's up to date on all my machines, it doesn't work: iTunes won't simply use
a directory as the basis for its library and leave it at that. It feels the
need to move everything, and as a bonus it deletes the files from the original
location.

When I tried to set this up using BitTorrent Sync, iTunes kept removing the
files BTS would copy. I had set the share to read-only, so BTS would copy the
file over again, and iTunes would add it a second time to the library. Then a
third time. Then suddenly, when I had 12 copies of everything and my
half-terabyte SSD was nearing capacity for no obvious reason, I finally
figured out what the hell was going on and put a stop to it.

Appreciate the attempt at help though. :)

------
mintplant
I do backups, of course, but I also have rm replaced by a shell function

        rm() { echo 'Bad! Use trash instead.' >&2; return 1; }


and use trash-cli [1] instead, which moves the target files and folders to the
Linux equivalent of the Recycle Bin. If I mess up, my files are one
restore-trash command away from being recovered. Every so often I run
trash-empty to free up space.

[1] https://github.com/andreafrancia/trash-cli

~~~
bpchaps
Man.. I'm always skeptical of doing things like this. It's one of those things
that just comes to rear its ugly head in the future on a machine/user that
doesn't have the alias configured.

Recent example - I wrote a bash script using many of the fancy parameters of
lsof to map my work infrastructure's network. The code ran on my arch desktop,
but as soon as it was run across the environment.. oh my god the stderr.

Eventually bit the bullet, logged onto the oldest centos4 machine and wrote
the silly thing from there in sh. It ended up being significantly better than
the bash version, too. :)

/tangent

~~~
rdancer
Your problems were using GNU extensions instead of writing in the
POSIX-standard subset, not knowing what the standard was, and not testing.
None of which applies to the grandparent!

~~~
bpchaps
Yeah, I should've been 10x clearer. What I meant to say was that by aliasing a
command as widely used as rm, it's easier to forget the nuances and cause some
serious issues on a machine where the alias isn't set up.

A better example might've been rsync, where it's incredibly easy to wipe a
directory if you get a trailing slash wrong in combination with the wrong
flags.

------
caseysoftware
One of my main interview questions is: Tell me about a time you thought you
were going to get fired.

I want to hear about the bug they deployed, the customer issue, dropping the
database, etc. I want to hear about how they realized their mistake and what
they do differently as a result now. Basically, "what did you learn from it?"

Mine:

I once dropped the article table for [major news organization], which handled
[large number] of updates each day. It was something stupid like "delete from
articles where id = ~", and when it took more than a second, I realized my
mistake... damage done.

Now, I avoid running SQL queries directly whenever possible, in favor of
migrations. I test those migrations on smaller test databases, which are
normally backups of production, so they have the same structure, etc. And I
_always_ do a select first to make sure I get the results I expect... then
change it into a delete or update.
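The select-first habit looks something like this. It's a toy illustration
using sqlite3 so it runs anywhere; the articles table and ids here are made
up, not the actual schema from the story:

```shell
# Create a throwaway database with a small articles table.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT);
               INSERT INTO articles VALUES (1,'a'),(2,'b'),(3,'c');"

# Step 1: run the WHERE clause as a SELECT and eyeball which rows it matches.
sqlite3 "$db" "SELECT id, title FROM articles WHERE id = 2;"

# Step 2: only then swap SELECT for DELETE, keeping the WHERE clause identical.
sqlite3 "$db" "DELETE FROM articles WHERE id = 2;"

sqlite3 "$db" "SELECT COUNT(*) FROM articles;"   # prints 2
```

The point is that the WHERE clause is verified before it gains the power to
destroy anything.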

~~~
gjtorikian
That is an amazing interview question. What's the average reaction like? If I
heard that I'd laugh, because it's so unexpected, and then tell a tale of woe
that of course in hindsight I've learned from. However I can also see how the
interviewee might be offended by it, so I'm wondering how people have
responded to it (before stealing it for myself).

~~~
caseysoftware
Sometimes I set a little more context before I ask but it's gone over pretty
well.

I think it acknowledges a simple truth:

We all make mistakes. Some of them are stunningly big, and we think, even for
a moment, "my boss is going to fire me!" Then once we calm down, we stop to
think about what happened, how to fix it, and how to prevent it from ever
happening again.

And if you claim to have never made one, you're either a) lying, b) oblivious,
or c) so green that you'll make those mistakes with me. The first two don't
fly with me.

~~~
derekp7
Or, d) so overly cautious that barely any work gets done at all, because too
many precautions slow you down. But seriously, I can't think of any time in
the past 25 years where I thought "This is going to get me fired"... with one
exception. At a medical-related company, I got into a blow-up shouting match
with a director who was trying to get me to cut a three-day procedure down to
less than one day. The only way that could happen is if I rushed through
everything and hoped it was done right, instead of double-checking not only my
work but everyone else's inputs as well.

Long story short, I cornered that director some time after the meeting,
apologized profusely, and explained that I'm overly sensitive to mistakes in
this position since I witnessed first-hand what happened when a predecessor of
mine made a "simple" error, which resulted in a major patient-impacting event
(which resulted in an FDA investigation). The upshot is that this director,
from that point on, trusted me more than anyone else on the team (even though
I was "highly insubordinate" in the previous meeting).

------
wtbob
It'd be kinda cool if there were a filesystem that was the unholy offspring of
venti and git (with garbage collection), with the ability to roll back to any
non-GCed point in time. Of course, that'd only get one so far, and eventually
one would still need backups. But it sure would be convenient.

~~~
throwa2016
ZFS?

~~~
z3t4
ZFS has copy-on-write snapshots with no duplication and little overhead. You
can, for example, keep rolling snapshots, so you can get the state of a
specific file from, say, seven days ago or one month ago, without saving
separate "backups".

But of course you should still have real backups, on another system, offsite,
or offline.
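A rough sketch of the rolling-snapshot idea. The dataset name tank/home and
the file names are hypothetical, and these commands need root on a system with
ZFS pools, so treat this as an outline rather than something to paste in:

```shell
# Take a dated snapshot of the dataset (cheap: copy-on-write, no duplication).
zfs snapshot tank/home@daily-2016-01-04

# List the snapshots to see which restore points exist.
zfs list -t snapshot -r tank/home

# Recover a single file from a week-old snapshot via the hidden .zfs
# directory, without rolling the whole dataset back:
cp /tank/home/.zfs/snapshot/daily-2015-12-28/report.txt /tank/home/report.txt

# Prune the oldest snapshot once it ages out of the rolling window.
zfs destroy tank/home@daily-2015-12-28
```

A cron job that takes a snapshot each night and destroys the oldest one gives
you the rolling seven-day window the comment describes.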

------
notlisted
Lost a bunch of files after switching to the Mac, due to the ridiculous
'replace' (OS X) vs. 'merge folder contents' (Windows) behavior when dragging
a folder. After upgrading my Mac with a bigger hard drive, I dragged a
projects folder from the backup drive to the computer and interpreted the
"...will replace..." prompt as "duplicate files will be replaced", which would
have been fine. Lost a lot of work. Just confirmed this destructive behavior
persists to this day.

Long live Dropbox with the Packrat option.

------
arjie
Relatedly, does anyone know of a UPS which accepts some sort of commodity
battery and which has a USB/serial port to communicate to a computer when the
main power is off? Everything I've seen has a proprietary battery design
resulting in a battery that gets more and more expensive as the hardware gets
older and fewer and fewer people have it. The batteries don't have to hold
much energy after all. Just enough to flush to disk and power off safely.

~~~
Kadin
I've never seen a UPS with truly proprietary batteries (not saying they don't
exist, but I haven't ever seen one). Most of the time when I've opened up dead
UPSes, they're just commodity sealed lead-acid batteries slapped together into
a 24V or 48V pack with a minimal wiring harness. Quite a few use the 12V 7Ah
SLA batteries common in emergency lighting, which cost about $15-20 apiece
online, if you're willing to fuss with getting them bundled up and shoved back
in tightly.

Many of the bigger rackmount UPSes have a big plug on the back to attach extra
battery cabinets. I wouldn't get a big UPS without this feature. They'll let
you hang whatever batteries you want off of the charger/inverter, as long as
you get the voltage and charge rates right. (Obviously we are in warranty-
voiding territory here.)

I have a couple of old Tripp-Lites, bought basically at scrap value because
the internal batteries were dead, and they use Anderson SB175 forklift-charger
connectors on the back. It works fine with just external batteries, giving you
the advantage of being able to buy whatever SLAs happen to be cheap on the
local market. (Though I'd definitely want to make sure all 4 were purchased at
the same time since you're wiring them up together in a set.)

~~~
arjie
You're probably right and thanks for the advice, but this is for on-site
storage I'm considering for my parents. They live in a country that takes me
about 24h to fly to, with daily power outages, and while I'm sure they could
do it if they wanted to, they're rather averse to the idea of doing any wiring
work.

A user-swappable battery, replaceable the way an older ThinkPad battery is,
would be nice.

I'm looking at the options suggested here
(https://news.ycombinator.com/item?id=10831796) and here
(https://news.ycombinator.com/item?id=10831721), but I'm having a hard time
finding APC battery availability in India.

------
ekidd
Ugh, I hate situations where multiple screwups turn into massive data loss.
I'm glad this one worked out.

I remember the time a sysadmin gave two weeks' notice and left just before the
Christmas holidays, before getting the new backup system fully working.

During the holidays, when almost everybody was out of town, one of the disks
in our main server's RAID array failed. Then when the RAID controller tried to
rebuild the array using the hot spare, a second disk failed. Then, just
because the situation wasn't already horrible enough, the ancient proprietary
controller (the one which knew how all the data was actually spread across all
the disks) decided that this would be a great time to give up for good.

We ended up paying about $7K to DriveSavers, who managed to salvage nearly all
the data from the remnants of the RAID array. And while we were interviewing
sysadmins, we replaced the hardware and set up a real backup system. Then we
built an offsite mirror of the backup system, and we configured paranoid
monitoring for all our RAID controllers.

It's easy to get 100% uptime and no data loss for several years, if you're
lucky. But if you build sloppy systems, sooner or later everything will go
wrong, and when it does, the chaos can be impressive.

------
pixelmonkey
I was once asked to help a friend debug his server. Its MySQL state was not
loading properly, so I wrote an rsync line to back up all the MySQL data as a
safety step before debugging. A few seconds later, I realized I'd put a
forward slash and a space in the wrong place and had managed to delete the
entire directory.

I felt like such an idiot. At least I haven't messed up an rsync command
since!

------
codingdave
"To err is human. To really fuck up requires root access."

