Ooops. (github.com/mrmeee)
1123 points by peteretep on June 16, 2011 | 238 comments



I learned a rather unusual trick to keep myself safe from unintended glob matches. It is not "fool"-proof, but it will probably dilute an unmitigated disaster into an incomplete disaster: keep a file named -i in the sensitive directories. When the glob picks it up, which should be fairly early, it will be treated like a command-line argument. It has saved me on occasion.
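
For illustration, a minimal sketch of how the trick plays out (the directory name is made up):

  cd /precious/data
  touch ./-i        # a file literally named "-i"
  rm *              # the glob expands "-i" near the front of the argument list,
                    # so rm treats it as the interactive flag and prompts per file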

I also had a friend in school who used to, for whatever reasons, name his files using characters drawn from a pool of only two characters. One being "." the other "*". Please don't ask me why. He would then try to delete some particular file. You can well imagine what would happen next. This happened multiple times, till I went ahead and populated his directories with "-i" files. That worked great.

I usually keep rm aliased to 'rm -i', but once I did get burned. It was not because of hitting return early, but because of having a space between a directory and the trailing "/".....while running rm as root. It was taking a bit longer than I had imagined, so I looked again at the prompt to see what I had typed..$#@!&~ :)


Quoting the directories would also stop this, eg.

  rm -rf /usr /lib/nvidia-current/xorg/xorg # KABOOM!
  rm -rf "/usr /lib/nvidia-current/xorg/xorg" # Harmless.


I hate that feeling you get in the pit of your stomach after you realize what's happening.

Same thing when you forget the WHERE on a DELETE FROM.


Over the years I've been steadily training myself to type "WHERE" earlier in the process, until I have finally settled on the obvious-in-hindsight solution: Always simply start with the WHERE clause.

(Of course every effort not to be on the SQL shell of a production server in the first place should be taken, but sometimes you need a sledgehammer and nothing else will work.)


The habit I learned was: before running any DELETE or UPDATE statement, run a SELECT with the same WHERE. (e.g. if I meant to say DELETE FROM puppies WHERE cute = 0, first run SELECT * FROM puppies WHERE cute = 0.)

I find I remember to do that because of the direct benefit (getting a sneak preview of what I'm going to delete), but it also means I end up thinking twice about the WHERE statement, so I'm much less likely to miss it out or get it dramatically wrong.


I'm a couple of levels more paranoid than that. First, I'll write the DELETE as a regular SELECT (to preview number of rows), then turn it into a SELECT INTO to save the soon-to-be-deleted rows into a table with a backup_date_ prefix (So old backups can be deleted occasionally). Next, before changing anything, I wrap the statement in a BEGIN TRAN, and ROLLBACK TRAN. After all that, I will finally modify the SELECT INTO into a DELETE statement, run it once while wrapped in the transaction (to verify that the number of rows modified hasn't changed), and then finally run without the transaction to delete the rows. Overkill?


Possibly overkill. But at least there won't be any tears.


What? No volume snapshot first?


I do that for 'rm'. I do an 'ls' first, then reedit the command and put an 'rm'.


Ooh! I'm going to start doing that.


I've always written my sensitive delete queries like this:

		select *
		-- delete x
		from Table x
		where x.whatever = 1
That way, by default it's just a select, and then AFTER you verify the result set you can highlight from the delete onward and run the query (as long as you're in a shell that will run only the highlighted part; I was working in SMS). This was a common idiom where I worked.


I do the same thing. Not sure that it has ever saved me from a disaster, but I do like the sneak peek and am DELETE FROM disaster free. knocking on my desk


Good tip.

Unfortunately when I was working as a PHP programmer, I once made a typo in a variable name, a variable that had my WHERE clause in DELETE...

Back then I fixed it with a box of chocolates and flowers.


I always (with sql server at least) add a begin tran/commit/rollback before any prod statements, because of getting burned in the past.

Even if you add the WHERE, but put it on a second line and only run the first, the transaction will help...

Of course, if it's going to lock the data, do all of the statements together: BEGIN TRAN UPDATE ... WHERE ... SELECT ... WHERE .... -- show that the update worked ROLLBACK

Then run the actual statement


Same - I now write DELETE FROM WHERE and then fill in the blanks.


I do that now, too. I have a trigger-happy semicolon finger.


My method is to write each DELETE as a SELECT first. This has the benefit of actually verifying the DELETE you were about to write.


Doesn't work when you're writing a stored proc.


  BEGIN TRAN
  DELETE FROM ...
  SELECT FROM ...
  ROLLBACK TRAN -- things look bad, itchy finger, etc
  COMMIT TRAN -- things look good, skip previous line
That, and I have 15-minute backups for all databases I care about.



Sweet! Just "alias mysql='mysql --i-am-a-dummy'" in /etc/profile and never make mistakes again!


DELETE FROM? Try TRUNCATE TABLE CUSTOMER ... that one skips the transaction log too, no ROLLBACK possible.


You can't accidentally write a TRUNCATE statement though.


You can have dev-environment scripts that accidentally point to a production instance.


ALL of my PROD environments are unreachable from DEV boxes, for a good reason.


I'll bet I could.


I wish there was a way to put a limit on table updates in mysql's config file.

"Only let me update 5 rows at a time." or somesuch.

I now usually type the command out-of-order.

So I'll type:

where user_id = $f00 limit 10;

then prepend the "update foo set bar = " to it...


I made that mistake once. My DBA never let me live it down.

Thank goodness for daily backups.


For me it's not the stomach. First, breath stops, then for a few seconds numbness in chest and jaw, then face turns pale and soon red. A small nausea follows and then regret sets in. The rest of the day is ruined.


How eloquent.


SQL is desperately in need of some updates.

For starters, DELETE FROM should never be allowed without a WHERE clause. DBs should simply define that as malformed sql.


Exactly. There is no need given TRUNCATE exists.


Not entirely. DELETE will work with cascading foreign keys, while TRUNCATE will not, at least on SQL Server. Also, DELETE is logged and (I believe) TRUNCATE is not. Having said that, I agree that a WHERE clause should be required - you can always say "WHERE 1=1" or similar if you really mean to delete all rows.


well, the only difference is that truncate also resets the auto-increment to zero. But you could allow where 1=1 to make it explicit if people really wanted an unbounded DELETE FROM.


or at least interactive sql shells could ask you if you are really sure


The simplest hack I've ever used for this kind of thing is to keep a notepad on my desk and whack it instead of the enter key whenever I'm doing critical work.

In the time it takes to thump the notepad, it gives my brain a vital few seconds to triple-check what I just typed before I blast monkeys into space. Saved my behind more than a few times.


If you are on a Mac, you must know that you should never put the `-i` option at the end:

  $ ls | wc -l
         5
  $ rm * -i
  rm: -i: No such file or directory
  $ ls | wc -l
         0


Yeah, it is a GNU extension to rm, ls, and others to have the options at the end of the argument list. Never did like that, but it does mean that people who log into my BSD boxes can never force delete or recursive delete ;-)


Just make sure no one is sadistic enough to follow you around running

    touch -- --


I've run a TRUNCATE in production by accident thinking I was logged into the dev system. I truncated the 4 core user / transaction tables. I've never felt the blood drain from my body so quickly.

Good old binary logs and an office floor to sleep on. I make copies of the data whenever deleting or truncating now. I think we all have to do something ridiculously stupid to start taking precautions.


Perhaps it's because I'm not a GitHub user, and because I've only ever peeked at HNers' GitHub accounts, but I was always under the impression that, given the nature of the service, it would have an early-days-of-HN feel with regard to user behaviour.

It was a little disheartening to see the number of Reddit-esque comments that are simply a couple of words along the lines of "omfg" and a constant stream of meme abuse. I expected better from the programming community.

Sigh. Am I just becoming old, jaded and too elitist for my own good?


It's like reading Youtube comments--not for the faint of heart.

And the jab at reddit from the HN pedestal is probably misguided... reddit used to be more like HN, and HN is becoming more like the bad parts of reddit every day.

Every site tends toward Youtube level comments as time passes, and the people who don't like it eventually jump ship to a new site, and then the process repeats itself.


If your account is less than a year old, please don't submit comments saying that HN is turning into Reddit. (It's a common semi-noob illusion.)

http://ycombinator.com/newsguidelines.html


I've been around for much longer than a year, just not with this name.


In that case, you're welcome to submit comments saying that HN is turning into reddit.


I've only been here for almost 2 years (well, close enough to 2); however, I would say the average HN thread is still more insightful than the average Reddit thread (including filtered sub-Reddits).


HN is not at all like the 'bad parts' of Reddit.


In the discussion on PDF in JavaScript yesterday, this document was linked in an HN comment: http://crocodoc.com/EfqW081

I wonder if the only saving grace of HN is that it's not completely anonymous.


"And the jab at reddit from the HN pedestal is probably misguided... reddit used to be more like HN, and HN is becoming more like the bad parts of reddit every day."

But you just did the same thing...


Not really the same thing. I'm saying that HN shouldn't look down its nose at reddit, since it appears to be heading the same direction. I'm taking a jab at both sites (and all sites, really).


There's nothing more irresistible to a tech geek than recursive irony, regardless of where they fall (or wish they fell) on the YouTube/Digg/Reddit/HN commenter spectrum.


The change is a few weeks old, and all of the comments are about an hour old. Seems safe to say that the comments reflect more on "people linked to this change from HN/Reddit/etc" than "people that use Github"


I'm not about to go looking at all of their accounts, but judging from the comments most of them left, it looks like they were already logged in. I doubt they all signed up just to post a 'lol'. I'd put my money on these comments being from an intersection of GitHub and reddit/HN/Digg users.


> I doubt they all signed up just to post a 'lol'.

Well, actually /b/ often raids web pages and does such things. If anybody working at GitHub can correlate the comments with the age of the accounts, I wouldn't be surprised to be surprised.

Plus, to me, it appears more like /b/ than reddit. I have very limited experience with reddit, though (yes, those anonymity-driven, fast-flowing communities are interesting).


Actually, your comment sounds more like r/programming than anything I've seen on HN lately.

Github comments are usually on-point, because nobody comments on github-hosted projects unless they're actually working on the project.

The exception is when somebody does a funny commit, and it gets circulated through...you guessed it, HN, reddit, etc.


Exactly. This isn't representative of 'the github community', it's more like 'the subsection of the reddit/hn/etc communities that feel the need to post funny pictures using their github accounts to do it'


You're not too much of an elitist, you're too much of a crank. Does it really dishearten you to see a bunch of people having a good laugh? Does every word written on the internet need to be researched/profound to be worthy of reading? This is the internet, it will forever dishearten you if you think in these terms.


No, that was exactly my impression as well. Those comments remind me more of the Daily WTF where everyone points and laughs, but people rarely step forward to explain exactly what the problem is, and more importantly the solution.


I laugh at Daily WTF but I always cringe thinking about some of those its3AMgottalaunchat8AMshitI'msotired nasty kludgy hacks I've been responsible for.

Or just the times I was well rested, under no pressure, and just coughed up some stupid.


>It was a little disheartening to see the number of Reddit-esque comments that are simply a couple of words along the lines of "omfg" and a constant stream of meme abuse. I expected better from the programming community.

You don't get good discussion from a commenting system. Github's comments aren't designed for with-each-other discussion, they're designed for at-the-author/audience commenting.

HN takes comment quality extremely seriously, and with-each-other discussion is perhaps the main focus.

reddit is somewhere in between: most of the users of reddit don't seem to be interested in discussion, but in open-ended polls a la AskReddit, IAMA, DAE, etc. The fact that reddit threads only last for a few hours, and that the volume of comments is so huge that it's hard for your comment to reach the people you're aiming it at, both reward commenting rather than discussion; /r/bestof does as well. That said, reddit's topical breadth draws in lots of smart people, who are usually looking to talk about something interesting.

The vBulletin/phpBB model of forums doesn't really scale too well (unless you go the SomethingAwful route and impose a fee and lots of super-strict moderation) but it works well with up to around 150 active users. The best forums have the highest SnR on the Internet.

4chan-ish anonymity works up to around 5000 before you get chaos, hence the longing for "old /b/" and the value in the other boards.


I suppose what you're seeing here is actually a reflection of Reddit, 4Chan & friends, i.e. I suppose this very commit has been posted in some of those places as well. I still have some hope for Github-in-general.


Actually, almost all of the comments came after it got posted here, and as of right now this second, I don't see it being popular on Reddit...


HN is mirrored on tons of sites. The fact that it's the default on somewhere like jimmyr.com for 'coding' should tell you how well-known news.yc has become. Don't think it's full of erudite hackers only anymore.


The reason we all know to be careful with rm -r is because of that one time we weren't.

Me, it was the time I rm -r'ed the MySQL data directory for my company's customer service management system. Oops. Thankfully we had a backup from the month prior, but I learned two things that day: a) be really careful with rm, and b) take it on yourself to make sure IT is backing up the stuff you're messing with.

You gotta hedge against your own stupid.

What about you, how'd you learn the hard way?


Was asked to uninstall IBM/Rational ClearCase from our source code repository server. Apparently at the time, Clearcase's installer NFS mounted 127.0.0.1:/ to a subdirectory. Don't ask me what brain-dead system designer thought this was a good idea.

So, I did a simple /etc/init.d/clearcase stop (not sure that is the exact name) and:

# rm -rf /opt/clearcase

(hmm... that seems to be taking a little too long to run)

Panic - then ctrl-C - it was too late, /opt/clearcase/foo was NFS mounted 127.0.0.1:/ and it had already trashed /bin, /sbin, /etc, /var, and most of /usr.

Luckily I had good backups, but we did spend the rest of the day rebuilding the source repository while the developers couldn't check in any code.


One of my first tasks after being given root access at my first *nix job at an ISP was "clear off all of the DNS stuff on $main_server since we have $dedicated_dns_server now".

So I merrily started mv-ing things to a scratch directory that we could wipe in 6 months if we didn't need anything from it.

Unfortunately, the zone file directory was NFS mounted from $dedicated_dns_server. With pass root set.

I think I took all of A through to K of client zones offline before we noticed.

I'm just very very glad I decided to do it as an mv rather than an rm, since it meant all I needed to do was copy things back.

Not that I really learned my lesson that time; it took a couple more semi-disasters before I got sufficiently paranoid to be reasonably safe as root.


Too late now, but remember to escape your asterisks.


I've also done this, but it was a cPanel server and the jailshell did bind mounts to various directories on the system. I tried to remove the jailshell instance, and ended up removing a whole lot more.


Not useful in your case, but I've made a habit of using rmdir when removing directories that I think might be mounted.
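
For example:

  rmdir /opt/clearcase   # fails with "Directory not empty" if anything (say, an
                         # NFS mount) is still underneath, instead of recursing into it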


Dead tired at 4am, working on an app for a huge client of theirs (I was subcontracting), I was trying to remove everything inside a folder, and instead of going

rm -rf ./*

I went

rm -rf . (can't get the wildcard thingamajig to show up)

It took me a second to understand why the command was taking so long to run, by the time I figured it out and killed the command, I had wiped out almost half of what was on the drive.

Biggest "oh. my. god" moment of my life. I think I had an actual panic attack for a bit even.

Luckily, media temple had a backup from just a few hours earlier (I was lucky, they only ran them periodically and it just so happened to fall on that day).


Wouldn't that just delete the folder instead of emptying it?


Back in the day, rm followed .. if you specified it on the command line. It went like this:

You are in .; the current directory listing includes ..; recursively deleting everything deletes everything on the drive.

Actually, I think the one time I saw someone do this, wildcards were involved. And I was going to explain, but the comment system is making my asterisks into bold markers.


I can't get the wildcard character to show up ... it should be x.x


You can prepend two spaces to insert <code> (http://news.ycombinator.com/formatdoc):

  rm -rf ./*
  rm -rf  *.*
To stay on topic, because of this thread, I just added alias rm="rm -I" to .bashrc, and I have never yet needed -f.


In a script:

  sudo chown -R username:group $WEBAPP
For various reasons, $WEBAPP got set to "/ usr/local/blah/blah" (note the space)

I had the Linode backup service, so got away with restoring the whole VM from the previous night's backup.
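
Quoting the variable and failing fast on empty or unset values turns this class of bug into a harmless error; a rough sketch using the same $WEBAPP variable:

  #!/bin/bash
  set -u
  # with quotes, a value like "/ usr/local/blah/blah" is passed as one
  # (nonexistent) path, so chown simply fails instead of recursing over /
  sudo chown -R username:group "${WEBAPP:?WEBAPP is not set}"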


More than once I've made the mistake of running an UPDATE and forgetting the WHERE clause.


In case you are using MySQL this can be handy: http://dev.mysql.com/doc/refman/5.5/en/mysql-command-options...


I've started quoting arguments in shell scripts even when it's not technically necessary, to avoid problems with spaces. I can't count how many scripts I've written/encountered that didn't work with a path containing a space (apparently much more common with OS X users than with Linux users).

This wouldn't delete the correct directory, but at least it won't delete "/usr" either:

    rm -rf "/usr /lib/nvidia-current/xorg/xorg"
There are lots of other pitfalls associated with not quoting things in shell scripts, like this common one:

    if [ $foo = "bar" ];
will cause an error if $foo isn't set or is an empty string, while this will work correctly:

    if [ "$foo" -eq "bar" ];
Bonus that your syntax highlighter can highlight the arguments. My rule is that flags aren't quoted, but paths and other parameters are.


I always use set -eu. Makes my scripts much safer and less unpredictable.
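
A tiny sketch of what those flags buy you (the variable and path names are made up):

  #!/bin/bash
  set -eu
  echo "$STAGING_DIR"   # -u: aborts here if STAGING_DIR is unset,
                        #     instead of silently expanding to ""
  cd /some/build/dir    # -e: if this cd fails, the script stops here...
  rm -rf ./*            # ...so the rm never runs from the wrong directory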


I don't remember the exact details, but a few years ago there was a Perl module that did something like:

   my $path = something_that_can_return_undef_on_failure;
   `rm -rf /$path`;
during the execution of its test suite. The author didn't catch it in testing because he never ran "make test" as root (who would?). But people on the Internet ran "make test" as root, with disastrous consequences.


I'm not sure if you're being sarcastic with the "who would?" portion of your comment, but doesn't cpan run tests, by default, whenever installing something? And, most people who aren't Perl developers with their own private Perl installation install CPAN modules as root, so they are available to all users. So, to answer the question seriously: Most people.


Uh, the cpan client runs "make install" as root, not "make test".



$ rm -rf /

rm: it is dangerous to operate recursively on `/'

rm: use --no-preserve-root to override this failsafe


rm: use --no-preserve-root to override this failsafe

Brilliant, at long last at least a bit of protection! :) Sadly, I can still remember doing this to one of my first Linux installs, albeit via the classic:

rm -fr .*

On the plus side, that day I learned one hell of a lot about how Linux works ;)


Here is the original bug report. https://github.com/MrMEEE/bumblebee/issues/123

I've never heard of BumbleBee, but it must be great if a user can have their machine wiped out and still thank the library author for their work.


The other bug reports linked in the commit are kind of sad. From 122:

"I thinking that something is messed up with bashrc or sh because in few seconds I cannot execute any command in other terminal.

Am I doing something wrong? Hope You can help me."

https://github.com/MrMEEE/bumblebee/issues/122


It is great. Quality linux support for nvidia optimus cards is pretty important for a lot of people I know, and bumblebee appears to provide it. (Apparently, optimus cards act /really/ strangely without a proper driver.)


To be fair, it was only rendering the machine unusable, not wiping out your home directory. A simple OS reinstall should get people back to square one. (Unless the OS reinstall script was botched and formats your disk when you've told it not to, which I've heard the latest Ubuntu does.)



When you design something and the default case is to shoot yourself in the foot, you should stop and rethink things.


That never gets old


Amazing.


I use the safe-rm (http://freshmeat.net/projects/safe-rm) package to help me avoid doing this kind of thing.


Do you not find that a lack of wide installation limits its usefulness?


You can use GNU rm, which lets you put the options last for safety, on any Linux box:

Eg:

rm /whatever -rf

Ie, if you hit enter too early, you still haven't forced.

I've been using Linux for 14 years and have never accidentally rm'd recursively. I'm not sure when they added it, but I've been using it for a very long time.


I just use it for my boxes. It doesn't stop me putting something silly in a make file (as in the example here), but it does stop me getting burnt by it.


I see; thank you.


Neat. I just installed that on my machines.


Back in the late 90s I worked on a small Windows product... our CEO complained that when the uninstaller ran, it left the empty directory behind along with some small temp files created by the software (that the package hadn't put there during install). So the guy making the package added a command to remove the directory and its contents...

... and the first reporter to try the software, for reasons I'll never totally understand, chose to install it in C:\. Worked great until he went to uninstall it.


I've done a similar thing a few years ago when I was first starting work on my guild hosting company's code.

At the time, the main thing hosted on that machine was my WoW guild's website, which I had been working on for close to a year, and was beginning work on converting the site over to a general purpose guild hosting site.

I was doing some work for a client, setting up a mirror of sorts for some kind of yearbook thing I had built for them. For that, I made a script that would mirror the online yearbook with wget, zip up the whole directory, then clear out the mirrored pages (all I cared to store and serve was the zip file).

All of my websites were stored in /www on the server, and the raw yearbook was located at /www/clientname/www. Inside the clientname directory, I had the mirror script which was something like this:

  wget -whatever_options http://whatever.address www
  zip yearbook.zip www
  rm -fr /www
Unfortunately, because of how frequently I type / before www to get to my web dev directory, I instinctively put "/www" in the script where I just wanted to do "www". I ran the script, checked to make sure and it looked good, and deployed it to a cronjob.

My heart sank when I tried loading my guild page a few minutes later (just to see what was going on on the forum, if anything), and it served up a bunch of 404s.

I went to /www/guildsite and saw it completely empty, and almost immediately figured out what had happened. At that point, I had to get my composure and figure out what I was going to do (I did not have backups or source control). I unmounted the directory, and went off to lunch with a friend, shaking with anxiety.

Upon return, I started writing out a perl script to scour the device byte for byte looking for PHP scripts, looking for instances of <? and then capturing the following 2000 lines or so, assuming that would be a sufficient buffer. When the script was done, I set it to run on the partition, and 45 minutes later I had a few hundred files to work with.

I had to manually go through every file (which were named numerically in the order they were found in the filesystem) and determine if it was the most recent (or recent enough) copy of the code, clear off any trailing bytes, and rename it to the filename it used to have. Luckily I could remember the name of almost every file in the system. It took about 8 hours to go through the few hundred files and recover them.

Needless to say, I learned my lesson after that, but the feeling of victory I got from recovering those files from the device was epic.

6 years later, I realize that that's a rather trivial thing to do, but at the time, I didn't know what I was going to do, and remembering that the filesystem doesn't clear all the bytes of a file, just its reference, gave me tons of hope.


I've done a similar thing a few years ago

Ha - I'll bet everybody here has a story that starts like that. (Although your heroic save was heroic!)

My sad delete story was in 1982 - I had a fantastic graphical robot-vs.-robot game stored on tape on an HP portable computer (I was in high school). For some reason, the delete command on the HP had a variant that deleted from a given block forward on the tape, as I recall, and for some other reason, my fingers just decided to type that variant even though I had never done it before.

I still miss that game. It just may have grown in my memory, but I'm pretty sure that was the coolest piece of software ever written anywhere at any time.


Mine was plugging in the power connector backwards on a super important ide drive (it was the only backup for an already failed server) and watching smoke issue from its controller board.

Having to explain what I'd done to the boss was so scary, I slunk home with the drive and traced the power circuit with my oscilloscope until I found a capacitor that had blown short. I soldered on a through-hole replacement and it worked!

I pulled the data and felt like king of the whole frikkin world for the next week or so.


My gosh, that's one of the best stories I've heard in a long time. The thing that worries me is that things like this are going to be rarer and rarer - there seems to be less interest in electronics, and fewer devices that can be hack-fixed like that.


I am a software engineer by degree, but also took many electrical engineering courses in uni, and I can honestly say that it is starting to make a comeback. Sparkfun, Adafruit, MAKE and many other places are starting to make it more accessible, cheaper and easy to learn. The Arduino has been a boon, providing people with a cheap but powerful microcontroller to get started.

While yes technology is getting smaller I have found that with many parts I can now easily find replacements online, I can get advice from other professionals, I can easily figure out how something works so that I can fix it. I've currently got a power supply sitting on my work bench that has a weird issue and I am slowly going through, making a net list and building a schematic with part numbers in an attempt to isolate the fault.

Maybe I am a rare breed, but seeing as how the interest at Maker Faire keeps going up, and interest in electronics also keep going up I will assume that eventually more and more people will get into experimenting in this field.

At least that is my hope.


arduino FTW! just getting into it and caused my first electrical fire. :-) good stories.


>Mine was plugging in the power connector backwards on a super important ide drive (it was the only backup for an already failed server) and watching smoke issue from its controller board.

yep this could cause you to lose a job.

> Having to explain what I'd done to the boss was so scary, I slunk home with the drive and traced the power circuit with my oscilloscope until I found a capacitor that had blown short. I soldered on a through-hole replacement and it worked!

pulling this off, however, could land you a job!

lesson: mistakes are human. talent and initiative are rare.


I'm pretty sure that was the coolest piece of software ever written anywhere at any time.

Hah! I literally (not figuratively) laughed for a minute at that. I'm still smiling a few minutes later. Well played.

Let's mourn together at the memories of long lost code.


Let's mourn together at the memories of long lost code.

I'll drink to that.


Yeah, the same thing happened to me a couple of years ago and at the time I thought I was the coolest guy on the planet when I recovered most of my data. I had accidentally deleted all my chat logs. Luckily, all the chat logs had a timestamp in them, so I just searched for all timestamped lines on the disk and dumped them to a file. I then wrote a script to group and sort all the logs based on the date and who I was talking to and recreate all the files. It worked better than it had any right to especially since I was doing everything on a live disk!


I thought I was the coolest guy on the planet when I recovered most of my data.

That's the feeling. It's almost euphoric. It's the "FUCK YEA" meme (http://knowyourmeme.com/memes/fck-yea).

In actual content, it's convenient that you had timestamps to work with. That eliminates a lot of the need to trim off trailing bytes. Kudos to you for the epic save, and on a live (mounted?) disk too? Living dangerously :)


At work, we had a USB drive full of Ghost images. Someone else didn't know how to use Ghost that well and they managed to nuke the partition table by trying to write an image on the USB drive onto the drive itself... or something like that anyhow (newer versions don't let you do that any more, I note). Fortunately, they didn't destroy the data itself.

I rebuilt the partition table and saved all of our backups.


That's like the time my supervisor was done setting up a server, and then was going to image it. He had two identical drives in the machine and imaged the empty to the full drive instead of vice-versa... I was the one to cleanup the mess, but learnt a lot.


I did not have backups or source control

There’s an important motto to bear in mind here: data you haven’t backed up is data you don’t want. To be fair, this does bite us all at one point or another, but once it’s got you once you make damn sure it doesn’t get you twice.

Also, these days I get shaky just doing FTP deploys. Give me capistrano and git, or even better a direct Heroku push any day of the week.


And, code listings that are not printed out are not backed up. Let's see you accidentally delete toner from paper.


Well, fire can. Which is why you need multiple off-site backups for important data.


There wasn't any version control software out there 6 years ago? I've been programming for 7 years, and I remember using VSS when I started working. But maybe I was using a pirated version and you couldn't afford it.


There were plenty of RCSes out at the time, and I knew about them, but didn't use any of them. It was still a pet, personal project and I didn't consider it important. I had used VSS at my previous job and hated it, and I hadn't taken the time to learn any of the OSS VCSes at the time.

It was laziness, nothing more, and I got burnt playing with fire.


Every install script that refuses to run unless it's root now terrifies me.


With a bug like this, an install script without root permissions can still wipe out your $HOME when messing around in the dotfiles. Scary!


That is why I keep a second home directory, /home/experimental for trying things out.



I backup a couple of servers daily, why not the ol' home machine too?


It's a pretty messy script anyway - rm'ing /usr/lib/nvidia-current/xorg/xorg doesn't strike me as the most delicate approach to the problem.


This release fixes the install script; it no longer deletes the /usr directory in its entirety. But how on earth did this get through even basic testing? Absolutely shocking!


Seems non-trivial to test. First, it's in a bash install script - I wouldn't know where to start. Second, if you tested the behavior, you might just test that it "deletes /usr/lib/nvidia-current/xorg/xorg" by running the install script and checking that the folder is gone. Guess what, the folder is gone... test passes.


Not really that shocking if you realize it's a mostly one man project that probably doesn't have basic testing.


A couple months ago I had to recover some rm'd files by basically grepping 512-byte blocks on the file system for the file headers then writing out the next few KB to a file on a separate partition to manually go through..

My command sequence was more like this though, rather than a straight rm:

    find -name '*.java' | xargs grep --color 'something'
    # guh, get rid of these old copied .svn dirs polluting output
    find -name '.svn' | xargs rm -rf
    # now what was that..
    find -name '*.java' | xargs rm -rf
Forgot to edit the right side of the pipe back to the grep. Zealous use of the up-arrow burned me...
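
A hedged alternative that avoids editing the right-hand side of the pipe at all: let find do the deleting, and always preview the same expression first.

  find . -name '.svn' -prune -print                  # review exactly what matched
  find . -name '.svn' -prune -exec rm -rf -- {} +    # then re-run with the delete action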


While we're on the subject, has anyone successfully found/created a replacement for rm on OS X that moves files to the trash, but doesn't break the interface of rm for use in scripts?


The normal Unix method of doing this is via LD_PRELOAD; then you wrap unlink() in something that moves things to a folder. I used something - I think it was libtrash - when I used Linux on the desktop, but haven't investigated what the OS X equivalent would be.


rmtrash

http://www.nightproductions.net/cli.htm

I prefer 'brew install rmtrash'


There's lots of stuff like that out there, I'm talking about something I can actually replace the rm binary with, a wrapper that precisely maintains the rm interface as described here: http://developer.apple.com/library/mac/#documentation/Darwin...

I like the idea of actually wrapping the unlink syscall, but I have no idea if/how darwin allows that.


You can alias trash-cli over rm, though it ignores -i, -r and -f and instead runs recursively, forced and non-interactively by default.


I use trash-cli (http://code.google.com/p/trash-cli/) on Linux, it is in Python so it shouldn't be too hard to get running on OS X (although I guess you might have to modify the location of the trash folder)


I only recently discovered molly-guard:

http://packages.debian.org/sid/molly-guard

which prevents you from running halt/shutdown etc. via SSH without first confirming the hostname.

I've done that before with pretty traumatic consequences, so it's now on my list of must-have's for any important remote box.

http://www.catb.org/jargon/html/M/molly-guard.html


rm is like a knife where the handle has a sharp edge too...


And I'd say rm -r is like a rocket-propelled chainsaw, powerful and out of hand.


A double edge knife/sword?


"Double-edged sword" has never worked for me, as a cliche. Do you often find yourself inadvertently smacking against the dull side of a single-edged sword, such that you stay away from double-edged swords for your own safety? rm is like a double-ended knife, i.e.: http://image.shutterstock.com/display_pic_with_logo/4253/425...

(Fun fact: many knife throwers grip the blade end anyway, rendering the cliche to an even simpler "rm is like a knife".)


This is completely unrelated to the original topic at hand, but I thought it might be interesting to shed some light on knife throwers' grips. For a given thrown weapon, whether a knife or an axe (or anything else that is meant to spin end-over-end), the number of revolutions is roughly fixed and is a function of the distance from the thrower to the target. Thus if you happen to be at a distance where you're getting a half revolution, you'll throw from the knife blade, or turn the axe around (so it's pointing towards you on throw). That enables it to hit the target the right way around. Of course, many people prefer to take a half-revolution step forwards/back so that they can throw from the tip regardless, just as a matter of form -- I did this for a while, although I feel I get better control when throwing from the handle.


I have always been amazed by knife-throwers' ability to calculate the number of revolutions between their hand and their target when throwing. It just seems like one of those things that the human brain couldn't possibly calculate correctly on a consistent basis. Does it take a very long time to get comfortable with it when training?


It took me a couple months to get decent at it, though I'm still a quarter revolution or so off frequently. Axes came far more naturally to me, where I generally get very close to perfect on my first throw. If distances are marked I'm fine with either, though.


Are there specific techniques to learning it, or is it just experience? For example, do you have to be very familiar with the specific weapon you're using? Does "visualizing" help? Are there tricks or points of reference that you use to help out?


I just practiced a lot, really. Familiarity with a specific weapon helps in terms of knowing how it flies, how it's balanced specifically, etc but you pick that up after a few throws. Biggest piece of advice I can give is to not rush -- I have a tendency to get too "into it" and lose focus on my technique, so I had to slow myself down, take a few breaths, and think through the whole process.

If you want to get into it, I recommend two things: first, find someone who does thrown weapons and can talk you through the basics and point out mistakes in your form (SCA events are a great way to do this, and that's how I got into it), and the second is just to get a slice of a tree trunk and some weapons and start practicing regularly.

I found it to be a great way to relax and get my brain away from tech. It's one thing I miss in moving to NYC.


I had a friend who actually knew proper sword technique, and he was not impressed by my machete with saw teeth on the other edge. With a single-edged sword you can use your forearm to support it when blocking, but with a double edged sword, not only can you not do that, but trying to block with the sword itself allows your opponent to overpower you and press your own sword against you.


For what it's worth, you can use your arm to support it if you're armored as well. If you're in chainmail/plate, the force is distributed well enough that there's no reason you can't do this, even with a double-edged weapon. That said, there are many swordfighting styles where armor hinders you significantly, and you frequently see single-edged weapons in these. It's actually really interesting to study the history of swords, their techniques, and the armor commonly used alongside them -- used to have a really good book about this, but can't find it now.


I did not think of that angle, and am now appropriately recalcitrant.


more like a double-edged bladed boomerang that'll come back to bite you in the ass if you throw it just the right way.


I actually ran that piece of code. Sure was glad it was only on a test partition. I did lose some trust in the developers after this, but I tried bumblebee again later and am happy I did because it works great!

I forgive them.


Never use rm's -f flag while operating as the root user. Never. Replace with -i until you are absolutely 100% certain the script you're writing works as expected. Always doubt yourself; be humble.


The -I flag is a bit more friendly; one may safely alias rm to rm -I and not even notice on the common case (deleting a single file).


I use a script that moves a file to a ~/.trash directory when I use rm. If I am sure I want to actually delete a file permanently I just use /bin/rm.


But that's also the best way to become careless about typing "rm".

And the day you find yourself ssh'd into another computer, you may forget that you are using a command which really deletes.


Sounds like a great way to fill up your disk to me.


You could use a cron job that removes the "deleted" files after some period, such as 30 days.
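
A sketch of such a job, assuming the ~/.trash convention described above (a crontab entry that runs nightly):

  # purge anything that has sat in the trash for more than 30 days
  0 3 * * * find "$HOME/.trash" -mindepth 1 -mtime +30 -delete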


I had a hard drive crash on me once, and I wasn't that worried about it, because I had set up network-based backup to the server in a different room. I remember thinking, "Restoring this backup will be sooo much easier if I can just connect it directly to the PC." Cue me walking with the backup HDD towards the PC in question, when I dropped it on the floor. When I plugged it in, it literally bounced inside of the chassis (platters fell off the motor or something).


Is there a good reason why Linux doesn't have a recycle bin?


Mine does, now. I have a directory ~/.trash and a script rm that moves files to the trash. If I really want to permanently delete something I need to use /bin/rm (including for cleaning out my .trash dirs).

ADDED: Note that I have a .trash in each user's home directory including /root. And a copy of the rm script in each user's home/bin.


<sarcasm>because real Operating systems don't need one?</sarcasm>


That's what I thought. I'd be happy if anyone chimed in with a legitimate reason, but I won't be surprised if the lack of a recycling bin is just one more symptom of the Linux developer community's apathy towards the actual human beings who use their software.


I would speculate that it is a historical reason. It's no secret that the Unix environment was not designed for personal use in homes, but on mainframe time shared computers inside universities and businesses. Space was limited, and just moving files to another location to deal with later added unnecessary steps to a process that didn't have much of a benefit, at the time.

As space has become less valuable on computers, and they have become less of a specialized tool, it may be wise to add one, but most of the desktop environments already implement it, so there is no need to recreate the functionality at a lower level.


there is no need to recreate the functionality at a lower level.

I refer you to the original submission.


Care to provide any example of an OS that would prevent this? Windows and OSX both have this same fault.


I don't have an example, I'm just thinking what's the best way for an OS to handle deletion.


Even if it did, you wouldn't use it in this case. Windows batch files delete files outright too, instead of sending them to the recycling bin. You don't want to have to depend on the user to clean up after your automation.


You don't want to have to depend on the user to clean up after your automation.

I don't see what the big deal is. Once in a while when the recycling bin gets too big, the user can empty it. Or you can have a scheduled operation that deletes stuff after 30 days.


> Once in a while when the recycling bin gets too big, the user can empty it.

Many Linux systems don't even have a regular "user". They just sit in a corner and serve webpages, or do other tasks silently.

> Or you can have a scheduled operation that deletes stuff after 30 days.

Why not just keep backups for 30 days?


Backups have a latency that a recycle bin doesn't. If I made something two hours ago, that's the worst possible time to unlink it.


Why not just keep backups for 30 days?

Be realistic. Setting up backups takes effort that many users are not going to expend, while the recycling bin mechanism can be set up by default.


Almost all of the desktop environments commonly used with Linux do have a trash bin.


If it wasn't obvious, "Why is `rm -rf` not connected to the recycle bin?" is implicit in my original question.


You're operating at a layer below where a trash can makes sense. Pretend that this command had moved everything to a trash can. The command to move the files out of the trash can now resides in the trash can, where it isn't being very useful. It's still possible to recover the files, but then again, it's also still possible to recover the files you deleted with rm.

rm is on the same layer as the DOS del command. Neither goes to the trash can, because they operate on a lower level.


it's also still possible to recover the files you deleted with rm.

If you're lucky and didn't write too much to the hard-disk after deletion, yes, but with a recycling bin you have much higher chances of recovery.

Regarding `del` in DOS: You're the second one to bring up that analogy. I don't see how this is relevant. Just because Windows does it that way doesn't mean that it's good.


If you want a recycling bin don't use rm, use mv.


Because that's not what rm does. Changing it would be breaking all sorts of standards. Many, many things depend on rm simply unlinking files. Why don't you use a different program if you would like to have some sort of trashbin behavior?


I'd be cool with using a clone of `rm` that sends to the recycling bin instead of actually deleting. I think that Linux should include a clone like this by default.


Good question. Here's my 60 second solution. <:)

  alias rmmmmmmmmmmmmmmmmmmmmmm='rm -i'
  rm() { D="$HOME/.Trash/$(date +%s.%N)"; mkdir -p "$D"; mv "$@" "$D"; }


Looks like someone never read the UNIX-HATERS Handbook. Another fun thing is rm + shell expansion. A file named * or / can cause extremely unintended deletions.


That's why you should add -- before any filename arguments, to be extra safe

    rm -- -r -f  # removes files -r and -f


Files can be named anything but / in Unix.


I'm pretty sure there's a file named /


I keep /usr in a squashfs, mounted with aufs over top of it, for the 0.01% speedup I probably get (I grew up on gentoo, forgive me). Periodically, I need to rebuild the squashfs to reclaim space in the aufs writable directory.

I once had a script something like this:

    mksquashfs /usr /squashed/usr/usr_tmp.sfs -b 65536
    umount -l /usr
    umount -l /squashed/usr/ro
    rm /squashed/usr/usr.sfs
    mv /squashed/usr/usr_tmp.sfs /squashed/usr/usr.sfs
    rm -rf /squashed/usr/rw/*
    mount /squashed/usr/ro
    mount /usr
Guess what happens if you hit Ctrl-C during the mksquashfs? That's right, bash runs all the rest too, including the deletion of the old squashfs file. I was left without a /usr, and it was brutal. Managed to recover (it's incredible how resilient a linux system is, as long as /bin and /lib are intact), and immediately put "set -e" in almost every bash script on my machine (I also fixed the script to keep the system in a recoverable state at all times...).
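
For what it's worth, a minimal sketch of the fix described above (set -e plus an explicit interrupt trap), reusing the same first step as the original script:

  #!/bin/bash
  set -e                            # any failed or interrupted command aborts the script
  trap 'echo "interrupted, nothing removed" >&2; exit 1' INT
  mksquashfs /usr /squashed/usr/usr_tmp.sfs -b 65536
  # the umount/rm/mv steps would follow here and only run if mksquashfs succeeded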


You get in trouble not at the moment this happens, you get in trouble much earlier than that - when you allow yourself into a situation where a single typo leads to ruin.

Myself, I use TimeMachine and bitbucket on my Mac, and every-15-minutes snapshots on all Amazon EC2 EBS volumes. Similar solutions can certainly be found for your platform of choice.


1000 points, really? This is on par with the initial tsunami news.


The designer of rm is the greater culprit here, not the author of that install script. A single mistyped character should not lead to such drastic consequences.

"Usability? What's that? I'm really smart, so I don't make mistakes. If lesser humans do - that's their problem". That seems to be the attitude of many Linux programs and rm is among the worst of them. No doubt I'll get downvoted for saying this, but I've rarely, if ever, heard of such things happening in Windows. (And people still manage to delete files in Windows without too much difficulty.)


The two sure fire ways to live a long and happy life:

1. find . -name "pattern" <enter> <look carefully> <up arrow> | sudo xargs rm -f

2. WHERE some_id = 36; <home> DELETE FROM table_name


Unix could really do with a command that you can wrap around this type of call. Either a sanity check on the path part or a safe rm alternative that contains it. I would gladly give up full rm access to know that I can safely (or safer-ly) delete in scripts.

It could be something as simple as a file with a path on each line - if an argument matches one of those paths, or a path with a glob, the script fails before destroying anything important.

Overriding it might involve adding a --override=/path/to/something but at least then it would be very explicit
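
A rough sketch of that idea as a wrapper script (the protected-paths file location is hypothetical, and this is nowhere near as thorough as the real safe-rm mentioned elsewhere in this thread):

  #!/bin/bash
  # refuse to act on anything listed in a protected-paths file (one path per line)
  PROTECTED_LIST="/etc/protected-paths"     # hypothetical location
  for arg in "$@"; do
    while IFS= read -r protected; do
      if [ "$arg" = "$protected" ]; then
        echo "refusing to remove protected path: $arg" >&2
        exit 1
      fi
    done < "$PROTECTED_LIST"
  done
  exec /bin/rm "$@"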


I sometimes use a wrapper around mv that moves the folder to trash. Guess it's not that portable but it could be replaced by moving to /tmp or something instead perhaps.


Ah. It seems I have just described 'safe-rm'.

Great minds etc :)


Indeed. Those were exactly the design goals I had when I wrote safe-rm (shortly after deleting my /usr/lib!).


just tried to explain this to a few of my windows using coworkers.


Just tell them it's the same as a Windows installer accidentally removing the entire Program Files directory.


I knew someone who, in the days of Windows 3.1, managed to accidentally invoke "format c:" from inside Microsoft Word - I was in the same room as them when they did it and heard the cries for help. What they couldn't do was explain to me what they had done to accomplish such a feat.


As Windows was probably installed on C it probably didn't do any real damage, did it?


If I remember correctly, that was before Windows started protecting boot volume.


To be more precise, NT had file locking enforced by the OS unlike DOS.


Pool of Radiance had a bug where the uninstaller would wipe your hard drive. Oops.

http://arstechnica.com/reviews/01q4/pool_of_radiance/pool-1....


Okay, that is not the pool of radiance I played as a kid.


It's a DELTREE on %whateveritis%


and the entire windows directory, and the entire users directory, and ...


It was only deleting /usr, which doesn't contain home directories or config files.

I suppose that /usr does store a good chunk of what the Windows directory has in it though.


In other Unix flavors, /home may be an alias to /usr/home.


Sorry, was thinking of the awful `rm -rf /` variant...


"Tried?" Certainly it wasn't that hard to explain.


Windows users who aren't privy to *nix culture generally don't find all of the "rm -rf ..." jokes all that funny. They get it, but it's sort of like telling a German joke in English to someone who only speaks English; they get it, but it loses its humor if you aren't privy to the culture.


no, they got it. they just didn't get why it's funny. I guess you have to fuck up rm -rf at least once to get these sorts of jokes.

it's like explaining jeep jokes to non-jeep owners.


If you explain it the humor evaporates


...my co-workers using windows

ftfy.


Why would he need to explain it to the windows that are using his coworkers?

Also, it's very unclear to me what the situation you are describing entails.

/s

Grammatically, your sentence is no less ambiguous.


oh man, thursday evening, a couple of glasses of wine, and here I thought I was the grammar nazi :)


"my windows-using coworkers" would be a better fix.


"...my coworkers who use Windows" makes more sense.


my windows-using coworkers.


This tip only applies to interactive shells: I often prefix potentially dangerous commands with '#' while I edit them. Tab-completion still works (with bash at least, so I assume also with certain other vastly superior shells).

    # rm -rf /path/to/^I


Reminds me of some slackware install script back in 1999 or so, I think it was for the gnome documentation.

Anyway, it accidentally rm -rf'ed /; I only discovered it in time because it gave errors about removing nodes in /proc...


(I'm not a Linux user, forgive my ignorance)

Is there no way to prevent bugs like this at the source, by modifying Linux, rather than hoping there isn't any extra white space in a command that might delete your usr directory?


Once we had a catastrophic backup incident:

rsync -a --delete /home/project/ /mnt/backupDisk

An unnecessary trailing slash after "project" meant rsync replaced all the content on "backupDisk" with the project files (instead of syncing the project folder onto it).
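
For anyone bitten by the same thing, this is my understanding of rsync's trailing-slash rule:

  rsync -a --delete /home/project  /mnt/backupDisk   # copies the directory itself: /mnt/backupDisk/project
  rsync -a --delete /home/project/ /mnt/backupDisk   # copies the *contents* of project into backupDisk,
                                                     # and --delete removes everything else already there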



linux can still boot without a /usr. this was a commit; how many people ran it?

on the other hand, delete the boot.ini[1] and most Windows systems can't even boot. now deploy that boot.ini-deleting build on an MMO (eve online) and watch the fur fly. that, ladies and gentlemen, is how you earn a :golfclap:.

[1] http://google.com/search?q=ccp+boot.ini


This would be a lot funnier if there was anything preventing me from doing exactly the same thing... ouch.


That's pretty heinous, but deleting /usr shouldn't cause you to lose any irreplaceable data.


From the FreeBSD handbook ( http://www.freebsd.org/doc/handbook/users-introduction.html):

    The home directory is the full path to a directory on the system in
    which the user will start when logging on to the system. A common
    convention is to put all user home directories under /home/username
    or /usr/home/username. The user would store their personal
    files in their home directory, and any directories they may create
    in there.
All of my irreplaceable data is stored in `/usr'. `/home' is a symlink to `/usr/home', as created by the installer.


Oh man.

an RPM I had at one point had / as one of its trees. There's a reason I moved to debian.


I wonder if this is what the founders of github expected from "social coding"?


That poor guy. But I appreciate the reminder to double-check any rm command.


That's quite a big bug. :/


and it was 666 points a second ago. symbolic.


glad it wasn't me..


A decade ago, I worked on a DNA sequencer / aligner product. This produced easily 1GB+ raw data files, and they typically exploded by a factor of ten by the time you performed a bunch of cleaning, smoothing, filtering, etc on them. For several reasons, not least of which was a 4GB file size limit in fat32, this software had to use a directory as a pseudo file.

I was working on some file saving logic. A customer had a problem where they'd overlaid a new logical file on top of an old logical file. Had these been actual files, this would just have overwritten the old file, but since these were directories, we got a mishmash of pieces of two different logical files overlaid in the same directory, and of course our software got confused as hell. So, I wrote code that, in the case where you saved a new file over an existing filename (really a directory name), would perform the equivalent of

  rm -rf $dirname; mkdir $dirname;
You can see where this is going... Some grad student didn't understand this, and named a pseudo file as the root directory of a tree of research. Two years of research vanished into the ether, despite a dialog box that had red text in it. That sucked.


It is crazy not to have backups of research data.


<rm -rfi> -i interactive mode man is here to save the day - now you can put up to 50% more blame on the end-user!


I hate languages with significant whitespace


Zim'sclassicbook"CodesandSecretWriting"beginswiththisexample ofaverysimplecryptogram:

   EAR LYTOB EDEAR LYTOR ISEM AKE SAM ANHEAL THYWEAL THYAN DWISEM
Ihaveactuallyprogrammedinlanguagesthatdisregardedwhitespacee ntirely.OldversionsofMicrosoftBASICforexampleusedaonelevelpa rserwhichdidnothaveaseparatetokenizer.Fortranalsoignoredwhit espace(asidefromthewhitespaceneededtogetyourcodeintotheright columnonthecard!)withtheunfortunateeffectthatthesetwostateme ntswereequivalent:

    do 20 i = 1. 10
    do20i = 1.10
Thefirstoneisanobvioustypoofthefollowing:

    do 20 i = 1, 10
Which,ifmymemorydoesnotfailme,istheoldFortranwayofwriting

    for (i = 1; i <= 10; i++) {
withline20beingthecorresponding"}".¶Ihopethiscommenthasbeenh elpfulinunderstandingwhySteveBournethoughtmakingwhitespacesi gnificantinhislanguagedesignwasagoodidea!


The "significant whitespace" here is a space between command-line arguments. For that to qualify, you'd also have to include C as such a language.

  long jmp();
  longjmp();


c'mon everyone, I was being facetious :)


Das ist nein gut. ("That is not good.")


lol, f*ckups don't get any bigger than this


OHHHHHHHHHHHH wow .... great link, passed around the office for a few laughs ... this is one of those heart sinking moments ....



