Hacker News
$ rm Important.txt (uh oh) (xenodium.com)
74 points by xenodium on Sept 17, 2022 | 90 comments



I've had this function/alias in my {bash,zsh}rc files for years:

  function rm {
    for file in $@; do
      mv -t ~/.local/share/Trash/files/ -- "${file}"
      cat <<FROG > ~/.local/share/Trash/info/"$(basename ${file})".trashinfo
  [Trash Info]
  Path=$(realpath "${file}")
  DeletionDate=$(date "+%FT%T")
  FROG
  done
  }


Sounds useful, though I'd advise using a different alias, just to avoid surprises when for some reason your function isn't loaded properly.


Using rm as the alias is the whole point, so muscle memory doesn't work against you.


I use trash-cli [0] for that.

[0] https://github.com/andreafrancia/trash-cli


>Can I alias rm to trash-put? You can but you shouldn't. In the early days I thought it was a good idea to do that but now I changed my mind. Although the interface of trash-put seems to be compatible with rm, it has different semantics which will cause you problems. For example, while rm requires -R for deleting directories trash-put does not.


I have aliased "rm" to "trash-put" for years, and I just can't believe anyone lives without a trash capability on the CLI.

What kind of ancient environment has no trash capability? That's pre-Windows 95. How do people handle the mistake of running "rm *" in the wrong folder?

An added benefit of trash-put is that you don't need "-r" to delete a folder; that flag is just an annoyance from the user's perspective.

If you think there's a compatibility problem in a script, just use /bin/rm there and stop wasting time pulling your hair out over unrecoverable deletion.

"@daily trash-empty 30"

Put this in cron to purge trashed files older than 30 days, daily.


Another implementation is `garbage`: https://git.sr.ht/~mzhang/garbage

It is written in a compiled language and appears to be faster on my old computer.


All the comments here about how awful my script is were true!

In Bash that didn't work on files with spaces or funky characters. But, I never noticed because in Zsh, my primary shell, it works fine with all filenames.

Here's an updated one which incorporates the advice from several posts to also work in Bash:

    function rm {
     for file; do
      mv -t ~/.local/share/Trash/files/ -- "${file}"
      cat << FROG > ~/.local/share/Trash/info/"$(basename "${file}")".trashinfo
    [Trash Info]
    Path=$(realpath "${file}")
    DeletionDate=$(date "+%FT%T")
    FROG
     done
    }


This gets even more quirky in busybox/ash, especially if using `rm -Rf some_dir`. I think the function needs a more distinct name to protect rm use cases and avoid having to prefix with \rm. Maybe something like "function trash".


If you delete things on a different volume (like an SD card), your function will move the files to your home volume, which, if they're large files, is maybe not what you want.


You have a bug in your function. You need to use "$@", with the quotes, or wildcards expanding to a file with spaces will not delete properly.

At least with bash.


No need for "$@" there? (Not many file names with spaces?)


Also, just use

  for file; do blah; done
which is equivalent to

  for file in "$@"; do blah; done
Shorter, and no bug.


And they’ll want to quote the interior part of "$(basename "${file}")" too.


My best backup has always been having a buffer visiting the file I'm about to delete. I think that's saved my bacon once or twice, but honestly I have 0 paranoia about losing data. For whatever reason, I've never accidentally deleted .git and an important unpushed change at the same time. As a result I disable swap files and backup files in Emacs, and I haven't regretted it.

My biggest data loss incident was like 25 years ago when I ran "swapon" on /. Wouldn't recommend it. (And that will trash your Trash, probably.)


I also tend to have important stuff in a buffer before doing anything destructive outside emacs although I definitely keep backup files on.

Reminds me of a time a hard drive failed on a computer I had. I had an active ssh connection, running terminal-mode Emacs remotely, with a very important file -- with no backup copy -- in a buffer. The computer continued to function, but I couldn't access the disk at all, and I figured as soon as I closed the ssh connection, it was game over. I managed to salvage the file by cutting and pasting through my terminal emulator.


I know we pooh-pooh Dropbox as they try and find some more expansive business model than backups, but having been a customer for more than a decade, I haven’t had a single moment’s anxiety about losing any version of any file. It’s never failed when I needed to go back and retrieve something, to the point that my OS’s trash folder is just that thing I empty if I ever want to save some disk space.


Dropbox is only useful if you are using it for just one computer.

I tried using it for syncing the files on several computers at one time. Instead of adding files to the computer that was lacking them so that both computers matched, it deleted files from the computer that had them so that both computers matched.

These days I just use 'rsync' instead.


I’ve been using dropbox across three computers since a few months after they were founded and have never once encountered this problem - and if I someday do, it’s a matter of seconds to undo deletions in Dropbox.


That doesn't sound right. I think you owe us all another try. :-) I've been using it on 3-4 computers for over a decade and never had this happen...


This shouldn't happen, at least shouldn't happen frequently.

I use all the major sync providers (namely Dropbox, GDrive, OneDrive) with three computers. I have only encountered sync problems fewer than five times (mostly when I hadn't opened a certain computer for a long time), and they have all just resulted in duplicate files.


Sorry that happened! I’ve used one Dropbox account across about 20 machines in my life, usually with 2/3 overlapping at any one time, and I’ve never lost anything.


Take a look at Syncthing. I think you'll like it. ;-)


Heh, I've done exactly what your parent describes on an entire directory with syncthing. It was my fault for misunderstanding how to add a new device, but that catastrophic confusion has scared me away ever since.


How did that happen?

I've got ZFS snapshots as well, so I'm not overly worried, but I wasn't aware that was a failure mode.


If I remember right, I was adding a new device and misread the dialog for which way to sync the files. I ended up setting it up so that an empty directory overrode a full one. It was pretty immediately obvious what I'd done, and I was able to recover from a backup, but that brief moment of panic after realizing what I'd done definitely got my attention!

This definitely isn't a criticism of the SyncThing project, I just made a dumb error, but it is a lesson on not "testing in prod" or entrusting important data to a non-professional (such as myself).


While this is good guidance for Emacs users, another, more shell-based tactic is to use the "-i" option for "rm", which requests confirmation before deleting each file. Aliasing "rm" to "rm -i" in either .bashrc or .zshrc (depending on what shell you use) for interactive operations has saved me from regret countless times.
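
For reference, that's a one-line addition to the rc file (a minimal sketch):

  alias rm='rm -i'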


> Aliasing "rm" to "rm -i" in either .bashrc or .zshrc

Or, alternatively, don't. People with that alias learning to add -f to every rm invocation are also a large source of problems.

Rm should really do something more sensible for the -i switch, like displaying a summary of the files and asking for confirmation only once. Personally, I like to run "find -name file" and once I'm ok with it, I circle the command back and add -delete to it. It is much more usable.
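
A rough sketch of that workflow, with a made-up pattern:

  find . -name '*.orig'            # run it first and eyeball the matches
  find . -name '*.orig' -delete    # then recall the command and append -delete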


Considering that the difference between bare rm and rm -f is that interactive rm without -f is required to ask for each read-only file (which has approximately nothing whatsoever to do with whether you can or should be deleting it!), I'm pretty sure learning to add -f to every rm invocation causes a median of 0 problems per person learning it.


I’ve run with

  alias rm='rm -iv'
for years without adding -f.

Much like I don't prepend 'sudo' until a command requires it, at which point I stop and think.

Your ‘find -delete’ trick does the same: it creates time to think between expressing yourself and evaluating the outcome. This is great.


> "Personally, I like to run "find -name file" and once I'm ok with it, I circle the command back and add -delete to it. It is much more usable."

This is becoming more and more my favorite method as well. It's easy, it's reasonably safe, it's readable, and it works well.


For such folks, it's worth mentioning that invoking the executable by its full path bypasses the alias, e.g. "/bin/rm -f" will do the right thing. Discoverable? Nah. But we are talking about shell command behavior, which is pretty arcane anyway.


Not really something I'd suggest with rigor, but prefacing an aliased command with a backslash will also defeat it.


TIL. I was unaware of that. So if rm is aliased to "rm -i", then \rm will do the normal thing?


Indeed! I had to use this pretty routinely supporting a particular customer at a previous employer.

Their shell profiles were littered with two-letter aliases, replacing quite a few common things with their proprietary [often unrelated] versions

For example, mv for moving files became mv for Mail Volume, or something


dearlordy, that's even worse. aliasing an existing cmd with options preset is one thing. aliasing a well known cmd to some esoteric cmd is mindboggling


You can also do that by prefacing an aliased command with "command".
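
For example:

  # "rm" isn't in command position here, so the alias doesn't expand,
  # and the "command" builtin skips shell functions too
  command rm -- somefile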


i like the \ vs 7+space additional key presses. that's like typing the full path without utilizing tab expansion. only noobs do that /s


Aliasing "cp" to "cp -i" also prevents you from overwriting files without confirmation. It's more of a preference thing than a recommendation, but overwriting on copy is something that bit me in the past when working a bit mindlessly. I guess the proper recommendation would be "don't operate the commandline mindlessly"?


Also mv, ln.


It's been the default on Fedora and RHEL for many, many years.


Can I add my slight tangent of a tip for SQL (deletes, updates, etc.)?

I got into the habit of writing my SQL as

'zdelete from product where id = 123;' and THEN I remove the prefixed z.

Too many times I copy-pasted the wrong ids or screwed up something else.


I always run small mutations like that in a transaction.

  BEGIN;
  DELETE FROM product WHERE id=123;
  ROLLBACK;
  --COMMIT;
And then I remove/comment the ROLLBACK and enable the COMMIT when I'm confident it's correct. This way I also get the number of affected rows for the statement.

I do this even on the command line but it's handier in an Emacs buffer and for busy databases you then don't need to worry about other transactions and locking while you type.


I write them as

  SELECT Count(*) FROM product WHERE id = 123
And if the number looks right I replace "SELECT Count(*)" by "DELETE"


you can also take advantage of transactions:

  begin;
  delete from product where id = 123;
  --verify things are correct with counts, by issuing "selects", etc.
and then issue either a "rollback;" or a "commit;"... I've been happy to have followed this pattern on occasion ;)


Lol, I like the "amount" of comments (>0) I have on my original comment, and there I was thinking I'm the only fool to do deletes/updates via db console in production :D


delete-by-moving-to-trash is a variable defined in ‘src/fileio.c’.

Its value is nil

Specifies whether to use the system’s trash can. When non-nil, certain file deletion commands use the function ‘move-file-to-trash’ instead of deleting files outright. This includes interactive calls to ‘delete-file’ and ‘delete-directory’ and the Dired deletion commands.

  This variable was introduced, or its default value was changed, in
  version 23.1 of Emacs.
  You can customize this variable.
---

If this option doesn't work for you, you should report a bug.
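
Enabling it from your init file is a one-liner:

  (setq delete-by-moving-to-trash t)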


Zfs. lots of snapshots.

cd .zfs/whatever the nearest one was, copy it, done
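
spelled out with a made-up snapshot name, roughly:

  # the .zfs dir sits at the root of the dataset
  cp ~/.zfs/snapshot/hourly-2022-09-17-1200/Important.txt ~/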


Now if only https://github.com/openzfs/zfs/issues/10348 could be fixed. I'm using frequent zfs snapshots, but recovery is more annoying than I'd like.


You need...you want... httm[0].

[0]: https://github.com/kimono-koans/httm


Same! Works great for me.

I'm using zfsnap, which takes care of creating and destroying the snapshots. This way I keep 72h of hourly snaps.


On Windows, File History works similarly, once you’ve set it up.


APFS on macOS can definitely do snapshotting, I wonder if anyone’s come up with a good way of utilizing it for this…
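
The built-in tmutil commands at least expose APFS snapshots, though nothing like ZFS's ergonomics; a quick sketch:

  tmutil localsnapshot            # take an APFS snapshot now
  tmutil listlocalsnapshots /     # list what's available to mount or restore from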


Seems relevant, and not just for Emacs: https://github.com/rushsteve1/trash-d. I have been using it for some time since it was posted to HN. It's a great tool. There's a .deb on the releases page.


I have rm aliased to trash for interactive use because it prevents me from making mistakes. There's a macOS trash utility you can download that lets you put files in the trash properly with full Finder support.

The great thing about modern computers is that they have so many resources that it's okay to build more forgiving tools.

Time Machine has been a huge lifesaver as well. Ideally you should be able to mark certain folders to automatically track all file changes over time.

There's still so much room for improving user friendliness and accessibility of computer systems, but it feels like we're either stuck in the Unix box or we've gone full kiddie lockdown mode with mobile platforms.


It still does sometimes feel like we're in the 90's in the CLI environment. There's a lot of room for improvement.


These are nifty functions and methods, but my personal preference that has always saved me is to use rsnapshot for anything that is important to me, so that I have several historical copies in a date-based folder structure. I do a local snapshot and run it ad hoc if I am working on anything sensitive. That snapshot then gets lftp rsync-like sync'd to a chroot sftp server. rsnapshot can file-version entire directories so that dependencies are also versioned by day, even if they're outside of git or in .gitignore. This could even include all important laptop/device settings so that the laptop is easily replaceable. One distinct advantage of rsnapshot is that it uses hard links for duplicate files to minimize disk space usage.

In the spirit of this article, one could make a function to call rsnapshot any time sensitive work is being done on anything important. Extending beyond that example, one could have a "fire-drill" function that git-commits all local repos to an alternate branch, runs rsnapshot, syncs the filesystem, lftp-transfers any important directories to an sftp server and/or git-pushes, and powers off the device, in the event the building or datacenter goes up in smoke or the laptop is stolen.
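
A rough sketch of such a function, assuming a retain level named "adhoc" has been added to rsnapshot.conf and using hypothetical host names and paths:

  function snap-now {
    # local rsnapshot run first, then mirror the snapshot root to the sftp box
    rsnapshot -c "$HOME/.rsnapshot.conf" adhoc && \
      lftp -e "mirror -R $HOME/.snapshots /backups; quit" sftp://backup-host
  }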


Switch to zfs.


After over 2 decades of Linux use, the other day I accidentally created a folder called `~` in my code dir and then just ran `rm -rf ~` like a moron instead of `rm -rf ./~` like a smart man. Fortunately for me, it didn't get far and I ^C'd it before it got anywhere.


This reminds me of myself back in the 90’s when I learned the hard way that I should probably create an alias called “rm” where it just moves the files/directories to a “trash” directory where I could purge things later.


A continuous snapshotting filesystem would be a more rigorous approach.


APFS is that, no? That’s how Time Machine works.


Kinda. It supports consistent snapshots but they are not easy to access or copy or ... except through TM. It ends up closer to VSS or LVM in usability, though it doesn't have to be since the architecture is better.

ZFS is likely the canonical example of a system that makes constant snapshots easy to use and access and deal with.

BTRFS is also not bad


Does ZFS snapshot every single operation you do? That sounds pretty cool. Unless you're saying snapshots are cheap but you have to create them manually, in which case it's not very useful against accidental deletion.


Not every operation. It's manual but easily automated. https://github.com/zfsonlinux/zfs-auto-snapshot


No, it doesn't snapshot every action. Even as cheap as they are, that would probably be overkill.

I have snapshots every 15 minutes, every hour, every day, every week (different retention schedules for each). And then anything dangerous there's a wrapper for "snapshot, do it, snapshot again".
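
Something like this, with a made-up dataset name (not the actual wrapper):

  function risky {
    zfs snapshot tank/home@pre-$(date +%s)    # may need sudo or a zfs allow delegation
    "$@"
    zfs snapshot tank/home@post-$(date +%s)
  }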

It's really pretty difficult to lose anything this way, and recovery is _very_ easy. The snapshotted files are available in a magic directory named .zfs or something.

Backups are still needed of course, because if the fs shits itself (or enough redundant drives die), you'd still be in trouble.


I’ve still managed to do it, when I hit a poudriere bug which resulted in it doing a zfs destroy on my root partition, but yeah generally it’s hard to lose things on zfs.

What is it with OS manufacturers inventing clever file systems and then hiding all the features? APFS snapshots could be really useful to power users if the tooling was better. And NTFS was really modern for its time, but things like alternate streams were super hidden to the point nobody apart from malware authors used them.


> Does ZFS snapshot every single operation you do?

Snapshotting upon certain operations (e.g. file/s moved to a directory) is certainly possible.

This blog post covers that pattern with my tool, httm: https://kimono-koans.github.io/inotifywait/


Sanoid is a great set-and-forget ZFS wrapper for snapshots. Also has syncoid to do offsite backups (ssh or usb hard disks)

https://github.com/jimsalterjrs/sanoid/


I wipe USB sticks with:

# dd if=/dev/zero of=/dev/sdd (CTRL-C after a few seconds)

It's quick and easy, but only a slip away from a wiped hard disc. On an SSD, that would probably rip through /boot (EFI) and swap and be munching on my / before I stopped it. Oh well, I could restore the OS and /home should still be intact ... probably.

As I currently use Arch, and used Gentoo for over a decade before that, it's probably not the worst breakage I've done to a laptop 8)


I routinely need to dd to a thumbdrive at work. On newer computers the internal hard drive tends to be on /dev/nvme*, making the USB drive sda. Apart from several months of being very nervous when I went to flash a drive, this significantly reduced the risk of accidental destruction of your computer.

Having said that, I once forgot that I had an ssh session open and accidentally destroyed a dev server like that.


I have thought for years that I should patch dd to require some force option if it writes to a block device bigger than a typical USB stick/SD card. Never did, but now my development machines have NVME and I hardly ever use USB sticks anymore because my target systems have OTA updates most of the time. So the need has become history.


You can do better than that. Assuming you don't care about portability, it is easy to tell if a disk is USB or not by poking around /sys. Linux also seems to have a notion of "removable disk", as the eject command can tell the difference, but I'm not sure what that looks at.
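
For example (device name is illustrative):

  # a USB-attached disk's sysfs path runs through the usb bus
  readlink -f /sys/block/sda | grep -q '/usb' && echo "sda is USB-attached"
  # the kernel's "removable" flag (1 = removable), possibly what eject consults
  cat /sys/block/sda/removable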

If I really cared, I would suggest setting up udev so the relevant disk is created with DAC permissions that allow you to flash it without needing root.


USB or removable is not the right differentiator. The only time I really did screw up in 15 years of using dd was when I exceptionally had my external USB backup disk connected and it had received the device name normally used by the SD card for my embedded target.

Right, the udev rule idea sounds great. That should work based on model name; I don't see any other good attribute in udevadm info output right now. Unless you use the same model for important data and for transferring disk images.


I use the GNOME Disks app (udisks frontend) because it provides a lot more context for wiping drives, which helps prevent the sort of problems you mention.

https://wiki.gnome.org/Apps/Disks


As a last resort, if you remember part of the lost file, you can grep the hard drive: http://blog.nullspace.io/recovering-deleted-files-using-only...

Saved some of my code once.
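
The gist of the trick, hedged (unmount the filesystem if you can, and write the output to a different disk):

  # scan the raw partition for a phrase you remember from the lost file
  sudo grep -a -B 5 -A 100 'a phrase you remember' /dev/sda1 > /mnt/other/recovered.txt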


I've been using this for the past 20 years and it's saved my bacon multiple times:

http://www.mikerubel.org/computers/rsync_snapshots/


i use git-timemachine for emacs, letting me move backwards or forwards through commits in the current file.

i do something similar for uncommitted changes. every time emacs saves a file, it saves a copy to a .backups/ folder with a timestamp appended, then prunes copies to the latest 50. with similar hotkeys to git-timemachine i can travel backwards and forwards through saves of the current file.
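
emacs' built-in backup machinery can approximate that without custom code; a rough sketch of the relevant settings (not the setup described above, which also backs up on every save):

  (setq backup-directory-alist '(("." . "~/.backups/"))
        backup-by-copying t
        version-control t        ; numbered backups
        kept-new-versions 50
        delete-old-versions t)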

after that i backup my home folder with git and tar[1].

losing data definitely sucks. it has to happen once, and then one has to decide: never again.

1. https://github.com/nathants/backup


I have nightmares about doing

  rm -rf *
in my home dir. With this it's a bit better, but you could still do

  rm -rf .*
in your home dir and lose your hidden .trash dir too.


I make a habit of rm -fr ../current_dir_name/* when trying to delete a broad glob, just to sanity-check that I'm deleting from the right place.


hmm I typically do tab completion so I'd be nervous just doing a quick rm -rf ../(mis-KeyPress) [TAB + Enter] and watching the entire parent dir getting wiped out.


I've done that before; now I have a custom shell function that recursively lists the files to be deleted and forces me to confirm that they should be deleted.
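
Roughly along these lines (a bash-flavored sketch, not the actual function):

  function rmi {
    find "$@" -depth -print                                # show everything that would go
    read -r -p "Delete all of the above? [y/N] " answer
    [ "$answer" = y ] && command rm -rf -- "$@"
  }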


Shameless plug: https://github.com/bAndie91/libyazzy-preload/#recyclixso, an alternative safety belt against accidental file removal.

recyclix.so is an LD_PRELOAD-able shared library that intercepts file deletions and moves the files to a recycle bin folder.


Can one also enable system bin for terminal (zsh)?


I use trash-cli which implements the freedesktop trash spec:

https://github.com/andreafrancia/trash-cli


I was just thinking that I'd write a command for this if it didn't exist already. But it does: https://apple.stackexchange.com/a/50852
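
One common approach (not necessarily the linked answer) is asking Finder via AppleScript, which keeps "Put Back" working; the path is illustrative:

  osascript -e 'tell application "Finder" to delete POSIX file "/Users/me/Important.txt"'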


`system-move-file-to-trash` is predefined and all of this just works as expected if you use the Emacs GUI on Mac.


Well, at least it does not overwrite it, so it could possibly be recovered. A backup trash folder is a good idea for those things. Aliases are great for a lot of things, including larger scripts that would probably be better off in a separate file.

shred or similar functions do overwrite files.


Alias rm to trash


sudo apt-get install trash-cli

alias rm=trash



