I learned a rather unusual trick to keep myself safe from unintended glob matches. It is not "fool"-proof, but it will probably dilute an unmitigated disaster into an incomplete disaster: keep a file named -i in the sensitive directories. When the glob picks it up, which should be fairly early, it will be treated like a command line argument. It has saved me on occasion.
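(For anyone who wants to try it, a minimal sketch; the path is made up, and the '--' is so touch doesn't read the filename as an option:)

cd /precious/data
touch -- -i    # create the decoy file
rm *           # the glob typically expands to: rm -i file1 file2 ...
               # ...and rm now asks before each deletion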
I also had a friend in school who used to, for whatever reasons, name his files using characters drawn from a pool of only two characters. One being "." the other "*". Please don't ask me why. He would then try to delete some particular file. You can well imagine what would happen next. This happened multiple times, till I went ahead and populated his directories with "-i" files. That worked great.
I usually keep rm aliased to 'rm -i', but once I did get burned. It was not because of hitting return early, but because of having a space between a directory and the trailing "/"... while running rm as root. It was taking a bit longer than I had imagined, so I looked again at the prompt to see what I had typed... $#@!&~ :)
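That stray space turns one path into two; a sketch of the failure mode (the directory name is made up):

rm -rf /var/tmp/build/    # intended
rm -rf /var/tmp/build /   # typed: the space before the trailing slash makes
                          # rm see a second argument, the root directory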
Over the years I've been steadily training myself to type "WHERE" earlier in the process, until I have finally settled on the obvious-in-hindsight solution: Always simply start with the WHERE clause.
(Of course, every effort should be made not to be on the SQL shell of a production server in the first place, but sometimes you need a sledgehammer and nothing else will work.)
The habit I learned was: before running any DELETE or UPDATE statement, run a SELECT with the same WHERE. (e.g. if I meant to say DELETE FROM puppies WHERE cute = 0, first run SELECT * FROM puppies WHERE cute = 0.)
I find I remember to do that because of the direct benefit (getting a sneak preview of what I'm going to delete), but it also means I end up thinking twice about the WHERE statement, so I'm much less likely to miss it out or get it dramatically wrong.
I'm a couple of levels more paranoid than that.
First, I'll write the DELETE as a regular SELECT (to preview the number of rows), then turn it into a SELECT INTO to save the soon-to-be-deleted rows into a table with a backup_date_ prefix (so old backups can be deleted occasionally). Next, before changing anything, I wrap the statement in a BEGIN TRAN and ROLLBACK TRAN. After all that, I will finally modify the SELECT INTO into a DELETE statement, run it once while wrapped in the transaction (to verify that the number of rows modified hasn't changed), and then finally run it without the transaction to delete the rows.
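The whole dance, sketched in T-SQL with the puppies example from above (the table and backup names are made up):

-- 1. preview the doomed rows
SELECT * FROM puppies WHERE cute = 0;

-- 2. stash a copy in a dated backup table
SELECT * INTO backup_20110601_puppies FROM puppies WHERE cute = 0;

-- 3. dry run inside a transaction; compare the rowcount, then roll back
BEGIN TRAN;
DELETE FROM puppies WHERE cute = 0;
ROLLBACK TRAN;

-- 4. the same statement once more, for real
DELETE FROM puppies WHERE cute = 0;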
Overkill?
I've always written my sensitive delete queries like this:
select *
-- delete x
from Table x
where x.whatever = 1
That way, by default, it's just a select; then, AFTER you verify the result set, you can highlight from the delete onward and run the query (as long as you're in a shell that will run only the highlighted part; I was working in SMS). This was a common idiom where I worked.
I do the same thing. Not sure that it has ever saved me from a disaster, but I do like the sneak peek and am DELETE FROM disaster free. knocking on my desk
I always (with sql server at least) add a begin tran/commit/rollback before any prod statements, because of getting burned in the past.
Even if you add the WHERE, but put it on a second line and only run the first, the transaction will help...
Of course, if it's going to lock the data, do all of the statements together:
BEGIN TRAN
UPDATE ... WHERE ...
SELECT ... WHERE ... -- show that the update worked
ROLLBACK
For me it's not the stomach. First, my breath stops; then, for a few seconds, numbness in the chest and jaw; then my face turns pale and soon red. A slight nausea follows, and then regret sets in. The rest of the day is ruined.
Not entirely. DELETE will work with cascading foreign keys, while TRUNCATE will not, at least on SQL Server. Also, DELETE is logged and (I believe) TRUNCATE is not. Having said that, I agree that a WHERE clause should be required - you can always say "WHERE 1=1" or similar if you really mean to delete all rows.
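i.e. something like:

-- explicit full-table delete: the WHERE is a no-op, but it proves you meant it
DELETE FROM puppies WHERE 1 = 1;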
well, the only difference is that truncate also resets the auto-increment to zero. But you could allow where 1=1 to make it explicit if people really wanted an unbounded DELETE FROM.
The simplest hack I've ever used for this kind of thing is to keep a notepad on my desk and whack it instead of the enter key whenever I'm doing critical work.
In the time it takes to thump the notepad, it gives my brain a vital few seconds to triple-check what I just typed before I blast monkeys into space. Saved my behind more than a few times.
Yeah, it is a GNU extension to rm, ls, and others to allow options at the end of the argument list. Never did like that, but it does mean that people who log into my BSD boxes can never force delete or recursive delete ;-)
I've run a TRUNCATE in production by accident thinking I was logged into the dev system. I truncated the 4 core user / transaction tables. I've never felt the blood drain from my body so quickly.
Good old binary logs and an office floor to sleep on. I make copies of the data whenever deleting or truncating now. I think we all have to do something ridiculously stupid to start taking precautions.
Perhaps it's because I'm not a GitHub user and have only ever peeked at HNers' GitHub accounts, but I was always under the impression that, given the nature of the service, it would have an early-days-of-HN feel wrt user behaviour.
It was a little disheartening to see the number of Reddit-esque comments that are simply a couple of words along the lines of "omfg" and a constant stream of meme abuse. I expected better from the programming community.
Sigh. Am I just becoming old, jaded and too elitist for my own good?
It's like reading Youtube comments--not for the faint of heart.
And the jab at reddit from the HN pedestal is probably misguided... reddit used to be more like HN, and HN is becoming more like the bad parts of reddit every day.
Every site tends toward Youtube level comments as time passes, and the people who don't like it eventually jump ship to a new site, and then the process repeats itself.
I've only been here for almost 2 years (well, close enough to 2), but I would say the average HN thread is still more insightful than the average Reddit thread (including filtered sub-Reddits).
"And the jab at reddit from the HN pedestal is probably misguided... reddit used to be more like HN, and HN is becoming more like the bad parts of reddit every day."
Not really the same thing. I'm saying that HN shouldn't look down its nose at reddit, since it appears to be heading the same direction. I'm taking a jab at both sites (and all sites, really).
There's nothing more irresistible to a tech geek than recursive irony, regardless of where they fall (or wish they fell) on the YouTube/Digg/Reddit/HN commenter spectrum.
The change is a few weeks old, and all of the comments are about an hour old. Seems safe to say that the comments reflect more on "people linked to this change from HN/Reddit/etc" than "people that use Github".
I'm not about to go looking at all of their accounts, but considering the comments most left, it looks like they were already logged in. I doubt they all signed up just to post a 'lol'. I'd put my money on these comments being from an intersection of github and reddit/hn/digg users.
> I doubt they all signed up just to post a 'lol'.
Well, actually /b/ often raids web pages and does such things. If anybody working at GitHub can correlate the comments with the age of the accounts, I wouldn't be surprised to be surprised.
Plus, to me, it appears more like /b/ than reddit. I have very limited experience with reddit, though (yes, those anonymity-driven, fast-flowing communities are interesting).
Exactly. This isn't representative of 'the github community', it's more like 'the subsection of the reddit/hn/etc communities that feel the need to post funny pictures using their github accounts to do it'
You're not too much of an elitist, you're too much of a crank. Does it really dishearten you to see a bunch of people having a good laugh? Does every word written on the internet need to be researched/profound to be worthy of reading? This is the internet, it will forever dishearten you if you think in these terms.
No, that was exactly my impression as well. Those comments remind me more of the Daily WTF where everyone points and laughs, but people rarely step forward to explain exactly what the problem is, and more importantly the solution.
I laugh at Daily WTF but I always cringe thinking about some of those its3AMgottalaunchat8AMshitI'msotired nasty kludgy hacks I've been responsible for.
Or just the times I was well rested, under no pressure, and just coughed up some stupid.
>It was a little disheartening to see the number of Reddit-esque comments that are simply a couple of words along the lines of "omfg" and a constant stream of meme abuse. I expected better from the programming community.
You don't get good discussion from a commenting system. Github's comments aren't designed for with-each-other discussion, they're designed for at-the-author/audience commenting.
HN takes comment quality extremely seriously, and with-each-other discussion is perhaps the main focus.
reddit is somewhere in between: most of the users of reddit don't seem to be interested in discussion, but in open-ended polls a la AskReddit, IAMA, DAE, etc. The fact that reddit threads only last for a few hours, and that the volume of comments is so huge that your comment is unlikely to reach the people you're aiming it at, both reward commenting rather than discussion; /r/bestof does as well. That said, reddit's topical breadth draws in lots of smart people, who are usually looking to talk about something interesting.
The vBulletin/phpBB model of forums doesn't really scale too well (unless you go the SomethingAwful route and impose a fee and lots of super-strict moderation) but it works well with up to around 150 active users. The best forums have the highest SnR on the Internet.
4chan-ish anonymity works up to around 5000 before you get chaos, hence the longing for "old /b/" and the value in the other boards.
I suppose what you're seeing here is actually a reflection of Reddit, 4Chan & friends, i.e. this very commit has presumably been posted in some of those places as well. I still have some hope for Github-in-general.
HN is mirrored on tons of sites. The fact that it's the default on somewhere like jimmyr.com for 'coding' should tell you how well-known news.yc has become. Don't think it's full of erudite hackers only anymore.
The reason we all know to be careful with rm -r is because of that one time we weren't.
Me, it was the time I rm -r'ed the MySQL data directory for my company's customer service management system. Oops. Thankfully we had a backup from the month prior, but I learned two things that day: a) be really careful with rm, and b) take it upon yourself to make sure IT is backing up the stuff that you're messing with.
Was asked to uninstall IBM/Rational ClearCase from our source code repository server. Apparently at the time, Clearcase's installer NFS mounted 127.0.0.1:/ to a subdirectory. Don't ask me what brain-dead system designer thought this was a good idea.
So, I did a simple /etc/init.d/clearcase stop (not sure that is the exact name) and:
# rm -rf /opt/clearcase
(hmm... that seems to be taking a little too long to run)
Panic - then ctrl-C - it was too late, /opt/clearcase/foo was NFS mounted 127.0.0.1:/ and it had already trashed /bin, /sbin, /etc, /var, and most of /usr.
Luckily I had good backups, but we did spend the rest of the day rebuilding the source repository while the developers couldn't check in any code.
One of my first tasks after being given root access at my first n?x job at an ISP was "clear off all of the DNS stuff on $main_server since we have $dedicated_dns_server now".
So I merrily started mv-ing things to a scratch directory that we could wipe in 6 months if we didn't need anything from it.
Unfortunately, the zone file directory was NFS mounted from $dedicated_dns_server. With pass root set.
I think I took all of A through to K of client zones offline before we noticed.
I'm just very very glad I decided to do it as an mv rather than an rm, since it meant all I needed to do was copy things back.
Not that I really learned my lesson that time; it took a couple more semi-disasters before I got sufficiently paranoid to be reasonably safe as root.
I've also done this, but it was a cPanel server and the jailshell did bind mounts to various directories on the system. I tried to remove the jailshell instance, and ended up removing a whole lot more.
Dead tired at 4am, working on a client app for a huge client of theirs (I was subcontracting), I was trying to remove everything inside a folder, and instead of going
rm -rf ./*
I went
rm -rf .*
It took me a second to understand why the command was taking so long to run, by the time I figured it out and killed the command, I had wiped out almost half of what was on the drive.
Biggest "oh. my. god" moment of my life. I think I had an actual panic attack for a bit even.
Luckily, media temple had a backup from just a few hours earlier (I was lucky, they only ran them periodically and it just so happened to fall on that day).
Back in the day, rm followed .. if you specified it on the command line. It went like this:
You are in .; the current directory listing includes ..; recursively deleting everything deletes everything on the drive.
Actually, I think the one time I saw someone do this, wildcards were involved. And I was going to explain, but the comment system is making my asterisks into bold markers.
I've started quoting arguments in shell scripts even when it's not technically necessary, to avoid problems with spaces. I can't count how many scripts I've written/encountered that didn't work with a path containing a space (apparently much more common with OS X users than Linux users).
This wouldn't delete the correct directory, but at least it won't delete "/usr" either:
rm -rf "/usr /lib/nvidia-current/xorg/xorg"
There are lots of other pitfalls associated with not quoting things in shell scripts, like this common one:
if [ $foo = "bar" ];
will cause an error if $foo isn't set or is an empty string, while this will work correctly:
if [ "$foo" -eq "bar" ];
Bonus that your syntax highlighter can highlight the arguments. My rule is that flags aren't quoted, but paths and other parameters are.
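A small illustration of that rule (the names are made up):

rm -rf "$build_dir/output"     # flags bare, paths quoted
cp -a "$src_dir" "$dest_dir"
grep -r "$pattern" "$project_root"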
I don't remember the exact details, but a few years ago there was a Perl module that did something like:
my $path = something_that_can_return_undef_on_failure;
`rm -rf /$path`;
during the execution of its test suite. The author didn't catch it in testing because he never ran "make test" as root (who would?). But people on the Internet ran "make test" as root, with disastrous consequences.
I'm not sure if you're being sarcastic with the "who would?" portion of your comment, but doesn't cpan run tests, by default, whenever installing something? And, most people who aren't Perl developers with their own private Perl installation install CPAN modules as root, so they are available to all users. So, to answer the question seriously: Most people.
rm: use --no-preserve-root to override this failsafe
Brilliant, at long last at least a bit of protection! :) Sadly, I can still remember doing this to one of my first Linux installs, albeit via the classic:
rm -fr .*
On the plus side, that day I learned one hell of a lot about how Linux works ;)
It is great. Quality linux support for nvidia optimus cards is pretty important for a lot of people I know, and bumblebee appears to provide it. (Apparently, optimus cards act /really/ strangely without a proper driver.)
To be fair, it was only rendering the machine unusable, not wiping out your home directory. A simple OS reinstall should get people back to square one. (Unless the OS reinstall script was botched and formats your disk when you've told it not to, which I've heard the latest Ubuntu does.)
You can use GNU rm, which lets you put the options last for safety, on any Linux box:
Eg:
rm /whatever -rf
Ie, if you hit enter too early, you still haven't forced.
I've been using Linux for 14 years and have never accidentally rm'd recursively. I'm not sure when they added it, but I've been using it for a very long time.
I just use it for my boxes. It doesn't stop me putting something silly in a make file (as in the example here), but it does stop me getting burnt by it.
Back in the late 90s I worked on a small Windows product... our CEO complained that when the uninstaller ran, it left the empty directory behind along with some small temp files created by the software (that the package hadn't put there during install). So the guy making the package added a command to remove the directory and its contents...
... and the first reporter to try the software, for reasons I'll never totally understand, chose to install it in C:\. Worked great until he went to uninstall it.
I've done a similar thing a few years ago when I was first starting work on my guild hosting company's code.
At the time, the main thing hosted on that machine was my WoW guild's website, which I had been working on for close to a year, and was beginning work on converting the site over to a general purpose guild hosting site.
I was doing some work for a client, setting up a mirror of sorts for some kind of yearbook thing I had built for them. For that, I made a script that would mirror the online yearbook with wget, zip up the whole directory, then clear out the mirrored pages (all I cared to store and serve was the zip file).
All of my websites were stored in /www on the server, and the raw yearbook was located at /www/clientname/www. Inside the clientname directory, I had the mirror script which was something like this:
wget -whatever_options http://whatever.address www
zip -r yearbook.zip www
rm -fr /www
Unfortunately, because of how frequently I type / before www to get to my web dev directory, I instinctively put "/www" in the script where I just wanted to do "www". I ran the script, checked to make sure and it looked good, and deployed it to a cronjob.
My heart sank when I tried loading my guild page a few minutes later (just to see what was going on on the forum, if anything), and it served up a bunch of 404s.
I went to /www/guildsite and saw it completely empty, and almost immediately figured out what had happened. At that point, I had to get my composure and figure out what I was going to do (I did not have backups or source control). I unmounted the directory, and went off to lunch with a friend, shaking with anxiety.
Upon return, I started writing out a perl script to scour the device byte for byte looking for PHP scripts, looking for instances of <? and then capturing the following 2000 lines or so, assuming that would be a sufficient buffer. When the script was done, I set it to run on the partition, and 45 minutes later I had a few hundred files to work with.
I had to manually go through every file (which were named numerically in the order they were found in the filesystem) and determine if it was the most recent (or recent enough) copy of the code, clear off any trailing bytes, and rename it to the filename it used to have. Luckily I could remember the name of almost every file in the system. It took about 8 hours to go through the few hundred files and recover them.
Needless to say, I learned my lesson after that, but the feeling of victory I got from recovering those files from the device was epic.
6 years later, I realize that that's a rather trivial thing to do, but at the time, I didn't know what I was going to do, and remembering that the file system doesn't clear all the bytes of a file, just its reference, gave me tons of hope.
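For anyone curious, the same trick can be roughed out with stock tools; a sketch with the device path and signature as placeholders, assuming the partition is unmounted (dd with bs=1 is slow but simple):

# find the byte offset of every occurrence of a known signature
grep -a -b -o '<?php' /dev/sdb1 | cut -d: -f1 > offsets.txt

# carve out a generous chunk after each hit to sift through by hand
mkdir -p carved
n=0
while read -r off; do
    dd if=/dev/sdb1 of=carved/$n.bin bs=1 skip="$off" count=65536 2>/dev/null
    n=$((n+1))
done < offsets.txt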
Ha - I'll bet everybody here has a story that starts like that. (Although your heroic save was heroic!)
My sad delete story was in 1982 - I had a fantastic graphical robot-vs.-robot game stored on tape on an HP portable computer (I was in high school). For some reason, the delete command on the HP had a variant that deleted from a given block forward on the tape, as I recall, and for some other reason, my fingers just decided to type that variant even though I had never done it before.
I still miss that game. It just may have grown in my memory, but I'm pretty sure that was the coolest piece of software ever written anywhere at any time.
Mine was plugging in the power connector backwards on a super important ide drive (it was the only backup for an already failed server) and watching smoke issue from its controller board.
Having to explain what I'd done to the boss was so scary, I slunk home with the drive and traced the power circuit with my oscilloscope until I found a capacitor that had blown short. I soldered on a through-hole replacement and it worked!
I pulled the data and felt like king of the whole frikkin world for the next week or so.
My gosh, that's one of the best stories I've heard in a long time. The thing that worries me is that things like this are going to be rarer and rarer - there seems to be less interest in electronics, and less electronics that can be hack-fixed like that.
I am a software engineer by degree, but also took many electrical engineering courses in uni, and I can honestly say that it is starting to make a comeback. Sparkfun, Adafruit, MAKE and many other places are starting to make it more accessible, cheaper and easy to learn. The Arduino has been a boon, providing people with a cheap but powerful microcontroller to get started.
While, yes, technology is getting smaller, I have found that for many parts I can now easily find replacements online, I can get advice from other professionals, and I can easily figure out how something works so that I can fix it. I've currently got a power supply sitting on my work bench that has a weird issue, and I am slowly going through it, making a net list and building a schematic with part numbers in an attempt to isolate the fault.
Maybe I am a rare breed, but seeing as how the interest at Maker Faire keeps going up, and interest in electronics also keep going up I will assume that eventually more and more people will get into experimenting in this field.
>Mine was plugging in the power connector backwards on a super important ide drive (it was the only backup for an already failed server) and watching smoke issue from its controller board.
yep this could cause you to lose a job.
> Having to explain what I'd done to the boss was so scary, I slunk home with the drive and traced the power circuit with my oscilloscope until I found a capacitor that had blown short. I soldered on a through-hole replacement and it worked!
pulling this off, however, could land you a job!
lesson: mistakes are human. talent and initiative are rare.
Yeah, the same thing happened to me a couple of years ago, and at the time I thought I was the coolest guy on the planet when I recovered most of my data. I had accidentally deleted all my chat logs. Luckily, all the chat logs had a timestamp in them, so I just searched for all timestamped lines on the disk and dumped them to a file. I then wrote a script to group and sort all the logs based on the date and who I was talking to, and recreate all the files. It worked better than it had any right to, especially since I was doing everything on a live disk!
As for the actual content: it's convenient that you had timestamps to work with. That eliminates a lot of the need to trim off trailing bytes. Kudos to you for the epic save, and on a live (mounted?) disk too? Living dangerously :)
At work, we had a USB drive full of Ghost images. Someone else didn't know how to use Ghost that well and they managed to nuke the partition table by trying to write an image on the USB drive onto the drive itself... or something like that anyhow (newer versions don't let you do that any more, I note). Fortunately, they didn't destroy the data itself.
I rebuilt the partition table and saved all of our backups.
That's like the time my supervisor was done setting up a server and was going to image it. He had two identical drives in the machine and imaged the empty drive onto the full one instead of vice versa... I was the one to clean up the mess, but I learnt a lot.
There’s an important motto to bear in mind here: data you haven’t backed up is data you don’t want. To be fair, this does bite us all at one point or another, but once it’s got you once you make damn sure it doesn’t get you twice.
Also, these days I get shaky just doing FTP deploys. Give me capistrano and git, or even better a direct Heroku push any day of the week.
There wasn't any version control software out there 6 years ago? Oh, I've been programming for 7 years, and I remember that I used VSS when I started working. But maybe I was using a pirated version and you couldn't afford it.
There were plenty of RCSes out at the time, and I knew about them, but didn't use any of them. It was still a pet, personal project and I didn't consider it important. I had used VSS at my previous job and hated it, and I hadn't taken the time to learn any of the OSS VCSes at the time.
It was laziness, nothing more, and I got burnt playing with fire.
This release fixes the install script; it no longer deletes the /usr directory in its entirety. But how on earth did this get through even basic testing? Absolutely shocking!
Seems non-trivial to test. First, it's in a bash install script - I wouldn't know where to start. Second, if you tested the behavior, you might just test that it "deletes /usr/lib/nvidia-current/xorg/xorg" by running the install script and checking that the folder is gone. Guess what, the folder is gone... test passes.
A couple months ago I had to recover some rm'd files by basically grepping 512-byte blocks on the file system for the file headers, then writing out the next few KB to a file on a separate partition to manually go through...
My command sequence was more like this though, rather than a straight rm:
find -name '*.java' | xargs grep --color 'something'
# guh, get rid of these old copied .svn dirs polluting output
find -name '.svn' | xargs rm -rf
# now what was that..
find -name '*.java' | xargs rm -rf
Forgot to edit the right side of the pipe back to the grep. Zealous use of the up-arrow burned me...
While we're on the subject, has anyone successfully found/created a replacement for rm on OS X that moves files to the trash, but doesn't break the interface of rm for use in scripts?
Normal Unix method of doing this is via LD_PRELOAD, then you wrap unlink() in something that moves things to a folder. I used something - think it was libtrash - when I used Linux on the desktop, but haven't investigated what the OS X equivalent would be.
I use trash-cli (http://code.google.com/p/trash-cli/) on Linux, it is in Python so it shouldn't be too hard to get running on OS X (although I guess you might have to modify the location of the trash folder)
"Double-edged sword" has never worked for me, as a cliche. Do you often find yourself inadvertently smacking against the dull side of a single-edged sword, such that you stay away from double-edged swords for your own safety? rm is like a double-ended knife, i.e.: http://image.shutterstock.com/display_pic_with_logo/4253/425...
(Fun fact: many knife throwers grip the blade end anyway, reducing the cliche to an even simpler "rm is like a knife".)
This is completely unrelated to the original topic at hand, but I thought it might be interesting to shed some light on knife throwers' grips. For a given thrown weapon, whether a knife or an axe (or anything else that is meant to spin end-over-end), the number of revolutions is roughly fixed and is a function of the distance from the thrower to the target. Thus if you happen to be at a distance where you're getting a half revolution, you'll throw from the knife blade, or turn the axe around (so it's pointing towards you on throw). That enables it to hit the target the right way around. Of course, many people prefer to take a half-revolution step forwards/back so that they can throw from the tip regardless, just as a matter of form -- I did this for a while, although I feel I get better control when throwing from the handle.
I have always been amazed by knife-throwers' ability to calculate the number of revolutions between their hand and their target when throwing. It just seems like one of those things that the human brain couldn't possibly calculate correctly on a consistent basis. Does it take a very long time to get comfortable with it when training?
It took me a couple months to get decent at it, though I'm still a quarter revolution or so off frequently. Axes came far more naturally to me, where I generally get very close to perfect on my first throw. If distances are marked I'm fine with either, though.
Are there specific techniques to learning it, or is it just experience? For example, do you have to be very familiar with the specific weapon you're using? Does "visualizing" help? Are there tricks or points of reference that you use to help out?
I just practiced a lot, really. Familiarity with a specific weapon helps in terms of knowing how it flies, how it's balanced specifically, etc but you pick that up after a few throws. Biggest piece of advice I can give is to not rush -- I have a tendency to get too "into it" and lose focus on my technique, so I had to slow myself down, take a few breaths, and think through the whole process.
If you want to get into it, I recommend two things: first, find someone who does thrown weapons and can talk you through the basics and point out mistakes in your form (SCA events are a great way to do this, and that's how I got into it), and the second is just to get a slice of a tree trunk and some weapons and start practicing regularly.
I found it to be a great way to relax and get my brain away from tech. It's one thing I miss in moving to NYC.
I had a friend who actually knew proper sword technique, and he was not impressed by my machete with saw teeth on the other edge. With a single-edged sword you can use your forearm to support it when blocking, but with a double edged sword, not only can you not do that, but trying to block with the sword itself allows your opponent to overpower you and press your own sword against you.
For what it's worth, you can use your arm to support it if you're armored as well. If you're in chainmail/plate, the force is distributed well enough that there's no reason you can't do this, even with a double-edged weapon. That said, there are many swordfighting styles where armor hinders you significantly, and you frequently see single-edged weapons in these. It's actually really interesting to study the history of swords, their techniques, and the armor commonly used alongside them -- used to have a really good book about this, but can't find it now.
I actually ran that piece of code. Sure was glad it was only on a test partition. I did lose some trust in the developers after this, but I tried bumblebee again later and am happy I did because it works great!
Never use rm's -f flag while operating as the root user. Never. Replace with -i until you are absolutely 100% certain the script you're writing works as expected. Always doubt yourself; be humble.
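One way to live by that, as a sketch (the paths are made up; the '--' stops rm from parsing odd filenames as options):

rm -ri /srv/app/old-releases    # rehearse interactively, confirming the scope
rm -rf -- "$release_dir"        # script the forced version only after the
                                # interactive run has proven it out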
I had a hard drive crash on me once, and I wasn't that worried about it, because I had set up network-based backup to the server in a different room. I remember thinking, "Restoring this backup will be sooo much easier if I can just connect it directly to the PC." Cue me walking with the backup HDD towards the PC in question, when I drop it on the floor. When I plugged it in, it literally bounced inside of the chassis (the platters fell off the motor or something).
Mine does, now. I have a directory ~/.trash and a script rm that moves files to the trash. If I really want to permanently delete something I need to use /bin/rm (including for cleaning out my .trash dirs).
ADDED: Note that I have a .trash in each user's home directory including /root. And a copy of the rm script in each user's home/bin.
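Something along these lines; a sketch with no option handling, so it's not a drop-in for scripts:

#!/bin/sh
# ~/bin/rm: move arguments into ~/.trash instead of unlinking them.
# Use /bin/rm when you really mean it.
trash="$HOME/.trash"
mkdir -p "$trash"
for f in "$@"; do
    case "$f" in -*) continue ;; esac                  # ignore flags like -r/-f
    mv -- "$f" "$trash/$(basename "$f").$(date +%s)"   # timestamp avoids collisions
done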
That's what I thought. I'd be happy if anyone chimed in with a legitimate reason, but I won't be surprised if the lack of a recycling bin is just one more symptom of the Linux developer community's apathy towards the actual human beings who use their software.
I would speculate that it is a historical reason. It's no secret that the Unix environment was not designed for personal use in homes, but for mainframe time-shared computers inside universities and businesses. Space was limited, and just moving files to another location to deal with later added unnecessary steps to a process that didn't have much of a benefit at the time.
As space has become less valuable on computers, and they have become less of a specialized tool, it may be wise to add one, but most of the desktop environments already implement it, so there is no need to recreate the functionality at a lower level.
Even if it did, you wouldn't use it in this case. Windows batch files delete files outright too, instead of sending them to the recycling bin. You don't want to have to depend on the user to clean up after your automation.
> You don't want to have to depend on the user to clean up after your automation.
I don't see what the big deal is. Once in a while, when the recycling bin gets too big, the user can empty it. Or you can have a scheduled operation that deletes stuff after 30 days.
You're operating at a layer below where a trash can makes sense. Pretend that this command had moved everything to a trash can. The command to move the files out of the trash can now resides in the trash can, where it isn't being very useful. It's still possible to recover the files, but then again, it's also still possible to recover the files you deleted with rm.
rm is on the same layer as the DOS del command. Neither goes to the trash can, because they operate on a lower level.
> it's also still possible to recover the files you deleted with rm.
If you're lucky and didn't write too much to the hard-disk after deletion, yes, but with a recycling bin you have much higher chances of recovery.
Regarding `del` in DOS: You're the second one to bring up that analogy. I don't see how this is relevant. Just because Windows does it that way doesn't mean that it's good.
Because that's not what rm does. Changing it would be breaking all sorts of standards. Many, many things depend on rm simply unlinking files. Why don't you use a different program if you would like to have some sort of trashbin behavior?
I'd be cool with using a clone of `rm` that sends to the recycling bin instead of actually deleting. I think that Linux should include a clone like this by default.
Looks like someone never read the Unix Hater's Handbook. Another fun thing is rm + shell expansion. A file named * or / can cause extremely unintended deletions.
I keep /usr in a squashfs, mounted with aufs over top of it, for the 0.01% speedup I probably get (I grew up on gentoo, forgive me). Periodically, I need to rebuild the squashfs to reclaim space in the aufs writable directory.
Guess what happens if you hit Ctrl-C during the mksquashfs? That's right, bash runs all the rest too, including the deletion of the old squashfs file. I was left without a /usr, and it was brutal. Managed to recover (it's incredible how resilient a linux system is, as long as /bin and /lib are intact), and immediately put "set -e" in almost every bash script on my machine (I also fixed the script to keep the system in a recoverable state at all times...).
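The shape of the fix, roughly (paths made up): build the new image under a temporary name and rename it over the old one, so the old squashfs is never deleted before the new one exists, and let set -e stop the script at the first failed or interrupted command instead of barreling on.

#!/bin/sh
set -e                                      # abort on the first failure

mksquashfs /mnt/usr-rw /squash/usr.sqsh.new
mv /squash/usr.sqsh.new /squash/usr.sqsh    # rename is atomic: there is never
                                            # a moment without a valid image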
You get in trouble not at the moment this happens, you get in trouble much earlier than that - when you allow yourself into a situation where a single typo leads to ruin.
Myself, I use TimeMachine and bitbucket on my Mac, and every-15-minutes snapshots on all Amazon EC2 EBS volumes. Similar solutions can certainly be found for your platform of choice.
The designer of rm is the greater culprit here, not the author of that install script. A single mistyped character should not lead to such drastic consequences.
"Usability? What's that? I'm really smart, so I don't make mistakes. If lesser humans do - that's their problem". That seems to be the attitude of many Linux programs and rm is among the worst of them. No doubt I'll get downvoted for saying this, but I've rarely, if ever, heard of such things happening in Windows. (And people still manage to delete files in Windows without too much difficulty.)
Unix could really do with a command that you can wrap around this type of call. Either a sanity check on the path part or a safe rm alternative that contains it. I would gladly give up full rm access to know that I can safely (or safer-ly) delete in scripts.
It could be something as simple as a file with paths on each line in it - match one path or a path with a glob - and the script fails before destroying anything important.
Overriding it might involve adding a --override=/path/to/something but at least then it would be very explicit
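A sketch of that wrapper (the config path and format are made up; the real safe-rm tool works along these lines):

#!/bin/sh
# refuse to delete anything matching a pattern in /etc/protected-paths
while read -r pat; do
    for arg in "$@"; do
        case "$arg" in
            $pat) echo "safe-rm: refusing to remove '$arg' (matches '$pat')" >&2
                  exit 1 ;;
        esac
    done
done < /etc/protected-paths
exec /bin/rm "$@"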
I sometimes use a wrapper around mv that moves the folder to trash. Guess it's not that portable but it could be replaced by moving to /tmp or something instead perhaps.
I knew someone who, in the days of Windows 3.1, managed to accidentally invoke "format c:" from inside Microsoft Word - I was in the same room as them when they did it and heard the cries for help. What they couldn't do was explain to me what they had done to accomplish such a feat.
Windows users that aren't privy to *nix culture generally don't find all of the "rm -rf ..." jokes all that funny. They get it, but it's sort of like telling a German joke in English to someone who only speaks English; they get it, but it loses its humor if you aren't privy to the culture.
This tip only applies to interactive shells: I often prefix potentially dangerous commands with '#' while I edit them. Tab-completion still works (with bash at least, so I assume also with certain other vastly superior shells).
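For example:

$ #rm -rf ./build ./dist    # disarmed while I double-check; drop the '#' when sure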
Is there no way to prevent bugs like this at the source, by modifying Linux, rather than hoping there isn't any extra white space in a command that might delete your usr directory?
linux can still boot without a /usr. this was a commit; how many people ran it?
on the other hand, delete the boot.ini[1] and most windows systems can't even boot. now deploy that boot.ini-deleting build on an MMO (eve online) and watch the fur fly. that, ladies and gentlemen, is how you earn a :golfclap:.
> The home directory is the full path to a directory on the system in which the user will start when logging on to the system. A common convention is to put all user home directories under /home/username or /usr/home/username. The user would store their personal files in their home directory, and any directories they may create in there.
All of my irreplaceable data is stored in `/usr'. `/home' is a symlink to `/usr/home', as created by the installer.
A decade ago, I worked on a DNA sequencer / aligner product. This produced easily 1GB+ raw data files, and they typically exploded by a factor of ten by the time you performed a bunch of cleaning, smoothing, filtering, etc on them. For several reasons, not least of which was a 4GB file size limit in fat32, this software had to use a directory as a pseudo file.
I was working on some file saving logic. A customer had a problem where they'd overlaid a new logical file on top of an old logical file. Had these been actual files, this would just have overwritten the old file, but since these were directories, we got a mishmash of pieces of two different logical files overlaid in the same directory, and of course our software got confused as hell. So, I wrote code that, in case you saved a new file under an existing filename (really a directory name), would perform the equivalent of
rm -rf $dirname; mkdir $dirname;
You can see where this is going... Some grad student didn't understand this, and named a pseudo file as the root directory of a tree of research. Two years of research vanished into the ether, despite a dialog box that had red text in it. That sucked.