How do I prevent accidental rm -rf /*? (serverfault.com)
51 points by necenzurat 1834 days ago | 51 comments



First of all, the whole point of the "-f" option is to disable confirmation, which really means "I know exactly what I'm doing". The easiest fix is to stop using that option all the damned time.

When you have strong permissions (e.g. running as "root"), you should never use patterns in destructive commands, period.

At best, you should perform a nondestructive pattern command such as a "find" and generate a precise list of target files that can be audited. For example, here is one way to produce a script of commands that deletes an exact list of matching files:

    find * | awk '{print "rm \047"$0"\047"}' > delete.sh && chmod +x delete.sh


Better yet: don't remove things. Move them to a folder instead. My rm is aliased to a 'mv to a trash folder' function. I have aliased rrm for the real rm.
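
A minimal sketch of what that pair can look like (the trash location and helper name are arbitrary choices, not anything standard):

    # in ~/.bashrc -- a sketch; note it overwrites same-named files already in the trash
    trash() {
        mkdir -p "$HOME/.trash"
        mv -- "$@" "$HOME/.trash/"
    }
    alias rm='trash'       # everyday "rm" just moves things aside
    alias rrm='/bin/rm'    # the real rm, for when you actually mean it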

GUIs know this, but somehow this piece of UX is forgotten on command line tools.


It's really more effective to have a very regular backup (e.g. ".snapshot" directories are really nice), because you can't control all the ways a file may be deleted.

Just because you protect one "rm" command doesn't mean there isn't another. Someone might have used unlink() in a Perl script or a C program. Maybe "mv" was used to write one file over another, or "cat >! filename", or a dozen other things.

In the end, if a file needs to be safe then it needs a backup (and the sooner it can be restored, the better). And then given a good backup the file still needs an appropriate Unix group, owner, file access control list, etc. to minimize the chance that you'll ever need the backup.


Note that 'sudo rm' doesn't care about your aliases. It would be safer to replace the binary.


Cool idea. I usually move folders and files to the /tmp/ folder instead of deleting them. The next time I boot up, they are gone.


The problem with using /tmp is that you may not realize something critical has been deleted until you reboot. Using an explicit trash or backups folder is safer.


> somehow this piece of UX is forgotten on command line tools

Because command line tools were first, and are used by people who do need to remove files, to clear disk space.

If you're never deleting anything, how do you clear disk space?


Alias rm to a mv command (or wrap it in a script). Then, to really delete something, explicitly use /bin/rm.


An amusing and helpful trick that I learned was to keep a file named "-i" in the directories that you want to protect. Glob-style pattern matching picks it up and rm interprets it as the "-i" flag. It is of course not quite foolproof, as it can be subverted, but it has saved the day on occasion, particularly for a friend of mine who, for totally incomprehensible reasons, would name his files using only "*"s and "."s and then try to delete one of them, with predictable and undesired results.
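
Planting the decoy is a one-liner (the path is just a placeholder):

    cd /some/important/directory && touch ./-i
    # a later unqualified "rm *" there picks up "-i" via the glob,
    # and rm treats it as the interactive flag and starts prompting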

Well, I have done stupid things myself, for example typing "rm -rf / some_dir" instead of "rm -rf /some_dir". I noticed because it was taking a wee bit too long. It is always good to do an ls with the intended pattern first, to check which files and directories are matched before invoking rm with the same pattern.


One thing that you can do is ls what you want to delete and then do a ^ls^rm -rf to replace ls with rm -rf in the last command if satisfied.

You can even do a ls -rf ./somedirectory and then just do ^ls^rm; at least you can in bash on OS X.
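
Concretely (the path is a placeholder; the last line is bash echoing the substituted command before it runs):

    $ ls ./old-logs/*
    $ ^ls^rm -rf
    rm -rf ./old-logs/*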


Yeah, the ./-i file trick is mentioned on the best answer: http://serverfault.com/questions/337082/how-do-i-prevent-acc...


In the other thread someone mentions using a file named -i. A better approach is to use a file named -X, which is an invalid flag for virtually every file-oriented command. They'll bail out, complaining that an invalid option has been supplied.

One company I contracted for had something clever going on. Not only did they litter -X files everywhere, attempting to remove one (rm -- -X) would result in an access violation of some kind, and your session would be killed as a result, preventing a recursive rm from continuing.

People alias -i and then forever supply -f. That doesn't do any good at all. The real answer is to be more careful; it eventually becomes habitual. In about 15 years I have lost data to rm twice: once when I mistakenly removed the wrong folder, and once when I thought I had a copy of the data.


Because of the inherent dangers in -f, I rarely use it...with one major exception. Whenever I am trying to delete a directory with a git repository in it, the fact that a lot of the things in the .git directory are write-protected means that I have to either punch Y for what is likely dozens of files, or use -f (or some other incredibly ridiculous and equally dangerous thing like "yes | rm").


Same here, my #1 reason for supplying -f is also to kill a git repo.


One: don't do anything as root. Root is the system's account, not your user account. If you need to run a service or application, make a new user for that! I've never needed root for anything other than system administration tasks, like apt-get or adding a user. Also, don't run multiple applications as the same user. If you have a web server, a blog, and a forum, you need three users. The web servers can talk to the backend servers via UNIX sockets or TCP.
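
A sketch of setting that up with shadow-utils' useradd (service names are placeholders, and the nologin path varies by distro):

    # dedicated, non-login system users, one per service
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin blog
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin forum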

Two: don't pass -f. Do you even know what -f does, or are you just cargo-culting it? If you need -f, rm will tell you. Don't use it until then.


Technically this doesn't prevent rm -rf /* itself, but it still goes a long way toward preventing a disaster: use a snapshotting filesystem, like NILFS2 http://en.wikipedia.org/wiki/NILFS

Some solutions here center on avoiding issuing rm -rf /* interactively... that's not enough! A broken script or unexpected variable expansion can wreak just as much havoc.

For example, rm -rf $SOMEDIR/* ends up removing /* :

- if $SOMEDIR is empty, or

- (if you suffer from bash) if $SOMEDIR contains a trailing space, so it gets expanded into separate words: SOMEDIR='foo '; rm -rf $SOMEDIR/* => rm -rf foo /* (which means, `remove ./foo and remove /* ')
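
A defensive idiom for scripts (just a sketch): ${SOMEDIR:?} makes the shell abort with an error if the variable is unset or empty, and quoting the expansion prevents the word splitting in the second case:

    # fails loudly instead of expanding to "/*" when SOMEDIR is unset or empty
    rm -rf -- "${SOMEDIR:?SOMEDIR is not set}"/*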

An alias won't help if the full path to the command is specified; that is quite common in start-up scripts.

I have experienced the consequences of rm -rf /* once or twice. Now I pause for a moment every time I am about to remove something and double-check the command. Sometimes I even prepend `echo' for a dry run ;-)

Edit:

another nasty case of unintended deletion I had was due to a dumb Makefile rule:

  $(CC) -o $(OUTFILE) $(INFILE)
for some reason $(OUTFILE) ended up empty, so the output went to $(INFILE) -- a C source file -- effectively removing its content. How would I guard against that kind of data loss? A snapshotting filesystem...


How about replacing rm with something like https://github.com/andreafrancia/trash-cli ? If you only purge the trash when it's necessary, and not automatically after every rm, you'd be able to save yourself from mistakes like this.
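
From memory, its interface looks roughly like this (command names as I recall them from the project's README, so worth double-checking):

    trash-put old-reports/    # move to the trash instead of unlinking
    trash-list                # see what is currently in the trash
    trash-empty               # purge, only when you actually need the space back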


Why not type * instead of ./* in the first place? Sidestep the whole issue by eliminating redundant information.


Just use tab completion to be sure you're getting what you asked for


Some of the answers above seem to involve 'babysitting' measures that almost certainly won't be on the next box you use.

appropriate countermeasures imho:

1. run an account with the right amount of access.

2. don't use sudo. (su + password makes you think a bit more)

3. This sounds dickish, but I mean it constructively: pay attention. The -f flag means something...

4. when all else fails, rsync'd folders are a beautiful thing :)


Type rm -rf /* in your terminal emulator, place your finger over the Enter key and feel the temptation:

"We stand upon the brink of a precipice. We peer into the abyss—we grow sick and dizzy. Our first impulse is to shrink away from the danger. Unaccountably we remain... it is but a thought, although a fearful one, and one which chills the very marrow of our bones with the fierceness of the delight of its horror. It is merely the idea of what would be our sensations during the sweeping precipitancy of a fall from such a height... for this very cause do we now the most vividly desire it."

Edgar Allan Poe - The Imp of the Perverse


If I'm going to be doing something major to a lot of files, I often write a script that outputs the commands to execute, so I can verify what's going to be done. Then I re-execute it and pipe the output to bash.

It's not quite applicable to something used as off-handedly as rm, though it could be done. Something like:

    make-rm -rf /*
might produce output like:

    rm /usr/bin/a
    rm /usr/bin/b
    # ...
    rmdir /usr/bin
    rm /usr/lib/a
    #...
And so on. That can then be piped to bash as a confirmation.
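
make-rm is hypothetical, but a rough sketch in shell could look like the following (no flag handling, and the printed paths aren't quoted, so it's an illustration rather than something to rely on):

    #!/bin/sh
    # make-rm (sketch): print, but do not run, the commands that would delete
    # the given paths. -depth lists children before their parents, so the
    # rmdirs come out in a workable order.
    find "$@" -depth \( -type d -exec echo rmdir -- {} \; -o -exec echo rm -- {} \; \)

Review the output, then pipe it to sh once you're satisfied.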


I personally go with a two-pronged strategy:

1. Backups 2. Sudo

I rarely do anything as root. If I am switching to root I already know what command I want to run. Backups are for when I do something stupid.


Seems kind of strange to be trying to run:

  rm -rf ./*
instead of just:

  rm -rf *
Is there a difference between the two that I'm not aware of?


If any files begin with '-', the second version will just expand to '-filename', which rm will try to process as an option (possibly failing or creating undesired results). Using

  ./*
expands to

  ./-filename
which won't be picked up by option processing. Note: You can also do

  rm -rf -- *
to prevent option processing after the '--'.

edit: added a bunch of breaks to prevent the * from being converted to italics.


The former will gleefully delete all files/directories, even if there exists a directory entry named "-i", without asking.

The difference is in glob expansion: ./* keeps the prefix on every expanded item. As mentioned above, using any sort of path (relative or absolute) prefix when globbing will circumvent all the careful "-i" wards a superstitious sysadmin may have put in place.


Create a version of rm that detects when you try to delete the root filesystem, denies it, and makes you pass a --really-delete-filesystem-root flag to do so.
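
A rough sketch of that as a bash wrapper (the function name and flag are made up; it refuses "/" and top-level entries, which is what "/*" expands to by the time rm sees it):

    saferm() {
        local force=no a
        local -a args=()
        for a in "$@"; do
            if [ "$a" = "--really-delete-filesystem-root" ]; then
                force=yes
            else
                args+=("$a")
            fi
        done
        if [ "$force" = no ]; then
            for a in "${args[@]}"; do
                case "$a" in -*) continue ;; esac    # skip option words
                if [ "$a" = / ] || [ "$(dirname -- "$a")" = / ]; then
                    echo "saferm: refusing '$a' without --really-delete-filesystem-root" >&2
                    return 1
                fi
            done
        fi
        command rm "${args[@]}"
    }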


It does seem like this would be appropriate. All *nix systems share the desire to not accidentally rm -rf /, and it should be easy to check for inside rm.


There's already a flag to preserve / (--preserve-root in GNU rm), but that won't help you with /*.


A flag you have to type all the time, or a default that you have to flag out? You can write code to detect /* too.


It might be the default in some distros, but there is no reason not to put it in an alias. The point is that it's already there and you don't have to modify your rm binary.
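
Something like this, assuming a GNU coreutils rm:

    alias rm='rm --preserve-root'   # refuses a recursive "rm /" itself; does nothing for "/*"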

And how can you detect /* if the shell expands it?


Needing the effects of "rm -r /<something>/* " is rare. Just cd first.

I think I rarely use rm -r with an absolute path. And tab completion does something similar (list your targets) if you don't jump the gun with <enter>.

PS I'm entirely comfortable with my Alt-B as "rxvt -e 'sudo zsh'".


My habit is to use 'ls' to confirm what I'm deleting, then press up and replace 'ls' with 'rm'.


I follow the process of file list validation, as mentioned (by me) at http://serverfault.com/questions/337082/how-do-i-prevent-acc...


Always type the full path; there are various key combinations to pull this into your command line. Also, you should be using the 'find' command to list (which you check), then delete files. In short, take your time.


I don't have an account on StackExchange, but the best way to avoid this is to always run find and then pipe it to rm.

  find . -name "whatevs"

Hit enter, verify you are deleting what you expect, then hit the up arrow and append:

  | xargs rm -rf
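
A caveat: plain xargs splits on whitespace, so filenames with spaces or newlines can turn into the wrong arguments. If your find and xargs support -print0/-0, the null-delimited form is safer:

  find . -name "whatevs" -print0 | xargs -0 rm -rf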


...or just use `-delete`:

   find . -iname "whatever" -delete
Otherwise you're screwed if someone managed to put a file named "/" somewhere in your find directory, which `-delete` has a safeguard against.


I used to run: rm /somedir -rf

If I had hit Return too soon, it would have bailed out automatically (without -r, rm refuses to remove a directory).

Worked well on Red Hat Linux; unfortunately it doesn't work on my Mac these days (BSD rm stops parsing options at the first non-option argument)...


Prefix your destructive commands with echo. Variant: for find, run it once with -print, then once it looks okay, replace it with -delete.
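
For example (the directory variable and pattern are placeholders):

    echo rm -rf "$dir"/*              # dry run: prints exactly what rm would receive
    find . -name '*.tmp' -print       # inspect the matches first
    find . -name '*.tmp' -delete      # then delete with the same expression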


I'm sure many of us know the feeling of dread that creeps over you when you suddenly realize an rm command you've dispatched is taking longer to complete than one would expect based on the contents of the directory you think you're deleting...

There should be a name for that.



Thankfully, Gmail allows a grace period in which you can cancel a sent message.

Not that I have had to use it.

I am also very happy with myself for putting my most important files in Dropbox, so it is semi-idiotproof.


rmygod!


When deleting directories I type their full paths


Alias it to ask "really? (y/n)", perhaps.


This is a dangerous but common crutch. The reason it's dangerous is people get used to it, and then when they go to a system where it's not there, pain and anguish (or hilarity, depending on your point of view) ensue.


I'm more of a fan of the other comment, which was basically "don't use -f then". Personally, when using that command as root, you should be pretty aware of what you're doing.


Maybe people wouldn't get so used to it if it were only activated for rm -rf / (and perhaps /* and variants).


Better yet, just use an OS that doesn't come with a self-destruct button: http://en.wikipedia.org/wiki/Rm_(Unix)#Protection_of_.2F


Don't be root until you need to be.


I don't remember if it's something I had to explicitly turn on, but zsh gives me a "sure you want to delete all files in ... [yn]?" prompt when I do any form of "rm *", even if I include -f
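
For reference, the knobs that control this in zsh (from memory, so worth checking the manual) are:

    setopt RM_STAR_SILENT   # turns the "rm *" confirmation prompt off
    setopt RM_STAR_WAIT     # additionally waits ten seconds before accepting an answer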



