Still, it raises two questions:
What about fragmentation?
Why don't we have a GNU safe-rm yet that moves files to the (freedesktop.org-specified) trash location to avoid this?
File managers already implement a trash function as per the freedesktop.org spec: http://www.ramendik.ru/docs/trashspec.html
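The spec linked above is simple enough to sketch in a few lines of shell. This is only a hedged sketch following the spec's XDG defaults — a real implementation would also need collision handling, per-volume .Trash directories, and URL-encoding of the Path field:

```shell
# Minimal sketch of a freedesktop.org-style trash move (assumes XDG
# defaults; the function name "trash" is our own invention).
trash() {
    local dir="${XDG_DATA_HOME:-$HOME/.local/share}/Trash"
    mkdir -p "$dir/files" "$dir/info"
    local f
    for f in "$@"; do
        local base
        base=$(basename -- "$f")
        # The spec pairs every trashed file with a .trashinfo record
        # holding its original path and deletion time.
        printf '[Trash Info]\nPath=%s\nDeletionDate=%s\n' \
            "$(realpath -- "$f")" "$(date +%Y-%m-%dT%H:%M:%S)" \
            > "$dir/info/$base.trashinfo"
        mv -- "$f" "$dir/files/"
    done
}
```

Restoring then is just a `mv` back to the Path recorded in the .trashinfo file.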
I think this is hilarious. :-) Throughout Unix/Linux/BSD history, there is a steady series of essays, lamentations, wails, and gnashings-of-teeth regarding the recovery, attempted recovery, or irretrievable loss of really important data that got somehow mistakenly rm'd by some admin.
...and, every single time, someone says, "Shouldn't this be made safer?", and every single time someone else says, "Nope, rm is doing exactly what it's supposed to! Just be more careful!"
As if the huge volume of arcane commands and various scripting languages disguised as configuration files weren't proof enough that the mass of Unix/Linux/BSD admins and developers all share a common streak of masochism, we also seem hell-bent on ensuring that we have tools which can -- and eventually will -- bite us in the ass.
For my part, I think that having some form of undelete option standard in every file system is as obvious as keeping backups.
Alternatively, just slow down a little bit before using rm, especially when operating as root. Understand that it's (intended to be) permanent. Use echo first when using rm with a splat in order to ensure you're actually deleting what you expect to delete.
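The echo-before-rm habit looks like this in practice (a minimal sketch; the file names are made up):

```shell
# Work in a throwaway directory with some illustrative files.
cd "$(mktemp -d)"
touch access.log error.log notes.txt

# echo prints exactly what rm would receive after glob expansion,
# so you can eyeball the list before anything is destroyed:
echo rm -- *.log

# Only once the expansion looks right, run the real command:
rm -- *.log
```

The `--` guards against a stray file name that starts with a dash being parsed as an option.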
The question, "Shouldn't this be made safer?" is irrelevant. At some level, you have to have an rm command. If users decide to use it regularly, then it's up to them to "Just be more careful!" The smarter thing would be to create a workflow that doesn't rely on using rm at all. Why whine and complain (not you, I mean users in general) about an operation that can be easily changed?
Assuming you have backups, of course. Which you'd be insane not to.
(Grepping your hard drive for file fragments is suggested in the ext3 FAQ - http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html)
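For the curious, that FAQ trick amounts to grepping the raw block device for text you remember from the lost file. A runnable sketch, demonstrated on a scratch disk image rather than a real partition — against a real device you would grep /dev/sdXN read-only and redirect the output to a *different* disk:

```shell
cd "$(mktemp -d)"

# Fake "raw device": random noise with our lost text buried in it.
head -c 4096 /dev/urandom > disk.img
printf 'unique phrase from my lost file\n' >> disk.img

# -a forces grep to treat the binary blob as text; on a real device
# you would add -B/-A context lines to capture the surrounding file.
grep -a 'unique phrase' disk.img > recovered.txt
```

The same invocation against `/dev/sda1` works because, to grep, a block device is just a very large file.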
Whatever happened to backups?
To help prevent this problem...
KEEP A BACKUP.
However, you're right that making rm(1) express move semantics isn't the right solution. Maybe if the filesystem had a "BEGIN TRANSACTION" command that you could ROLLBACK...
Actually, that sounds like exactly the cognitive dissonance people had when they first started using Gmail. Perhaps filesystems need an "Archive" folder as well? Not even a Trash folder—because people want to empty a Trash folder—but rather just an enforced (and shell-supported) directory where things go when you don't have any reason to keep them, and therefore have no place to put them?
I guess this is a pretty common problem. The blog post I wrote about it in 2005 continues to be the most searched-for entry point on my site: http://csummers.com/2005/12/20/undelete-text-files-on-linux-...
cat /dev/mem | strings | grep -i llama
cat: /dev/mem: Operation not permitted
x86: introduce /dev/mem restrictions with a config option
"This patch introduces a restriction on /dev/mem: Only non-memory can be read or written unless the newly introduced config option is set."
Command-line access to /dev/mem in Ubuntu
I was looking forward to catting for llamas.
(From afar, I understand my Colonial cousins' struggle with these two words.)
I think it should be mentioned that this will work properly only if the file was not fragmented. That will usually be the case on ext3 unless you are using almost all of the space on the drive, but it may happen frequently on a FAT file system (which is used a lot on USB disks).
Also, if you just deleted a binary file, this method will be problematic as well. In that case you can use a tool like photorec to scan the disk; you can even limit it to just the free space on the drive, which reduces the time it takes to go over the disk, and it can detect all kinds of binary file types (it uses the file's magic number to detect the type).
As other people have mentioned here, you should recover all the data to a different partition/disk than the one you are trying to recover a file from.
With that said, recovering data is a tedious and error-prone process, so if the data is worth enough (and for some silly reason you don't have a backup) you should:
A. Turn off the computer immediately after you've discovered the loss of data (to reduce the chances of overwriting anything important).
B. Give the computer/disk to a professional to recover (because you obviously aren't one, since you don't keep backups).
SHRED(1) User Commands SHRED(1)
shred - overwrite a file to hide its contents, and optionally delete it
I especially like the -n option!
CAUTION: Note that shred relies on a very important assumption: that
the file system overwrites data in place. This is the traditional way
to do things, but many modern file system designs do not satisfy this
assumption. The following are examples of file systems on which shred
is not effective, or is not guaranteed to be effective in all file system modes:
* log-structured or journaled file systems, such as those supplied with
AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)
* file systems that write redundant data and carry on even if some
writes fail, such as RAID-based file systems
* file systems that make snapshots, such as Network Appliance's NFS server
* file systems that cache in temporary locations, such as NFS version 3
In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in
addition to just metadata.
In both the data=ordered (default) and data=writeback modes, shred works as usual.
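For anyone who hasn't used it, a minimal invocation of the options quoted above looks like this (GNU coreutils shred on a scratch file; the file name is made up):

```shell
cd "$(mktemp -d)"
echo 'sensitive scratch data' > scratch.txt

# -n 3: three random overwrite passes
# -z:   a final pass of zeros to hide that shredding happened
# -u:   unlink (delete) the file afterwards
shred -n 3 -z -u scratch.txt
```

Per the man page caveats above, this only helps on file systems that actually overwrite data in place.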
The rebuild-tree trick mistakenly sees entries in the dd'd copy as files in the parent file system, and then sprays them all over your drive.
The last part, about using an alias for rm, is something I'd never thought about, and now I'm going to use it on all my servers.
What I do instead is make a nearby (and simple) alias. For example:
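The example itself is cut off above, so what follows is only one guess at what such a "safe" replacement might look like. The name `del` and the `~/.trash` location are assumptions, and a function is used rather than an alias so it can take multiple arguments cleanly:

```shell
# Hypothetical sketch of a "safe delete" that moves instead of unlinking.
del() {
    mkdir -p ~/.trash
    # -i prompts before clobbering anything already sitting in ~/.trash;
    # -- stops option parsing so dash-prefixed names survive.
    mv -i -- "$@" ~/.trash/
}
```

The point is simply that typing `del` in muscle memory is recoverable, while `rm` is not.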