"svn update" will randomly delete your work with no way to ever get it back.
No, it will not.
You're left with a bunch of conflicted files and a bunch of merged files, with no way to ever get the unmerged versions back.
Umm... no? Take a look at conflicted-filename.mine. Whenever there is a conflict, svn appends .mine to your copy of the file. After that, it creates conflicted-filename.r123 and conflicted-filename.r124, and goes crazy with the angle brackets in conflicted-filename.
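A rough sketch of what that leaves on disk (using the revision numbers above; exact output may vary):

$ svn update
C    conflicted-filename
Updated to revision 124.
$ ls
conflicted-filename        # merged result, conflict markers inline
conflicted-filename.mine   # your local version from before the update
conflicted-filename.r123   # the pristine base you started from
conflicted-filename.r124   # the incoming repository version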
I've lost plenty of data with git. Most of it has to do with innocuous-sounding commands that don't ask for confirmation when deleting data. For example, git checkout filename is equivalent to svn revert filename. Of course git checkout branchname does something completely different. If a branch and a file share the same name, git will default to switching branches, but that doesn't stop bash autocomplete from ruining the day.
Here's a crazy idea: If you have an innocuous action and a dangerous action, do not label them with the same command.
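To make the hazard concrete (hypothetical names; assume a branch called cleanup and a file called main.c with uncommitted edits):

$ git checkout cleanup    # innocuous: switches to the branch cleanup
$ git checkout main.c     # dangerous: silently discards your edits to main.c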
Yes it will. Imagine you check out revision 1, which consists of two files:
foo:
  a
bar:
  b
You do some hacking, and end up with:
foo:
  a
bar:
  b
  d
While you were doing that, though, someone else committed revision 2, which is:
foo:
  a
  b
bar:
  c
You go to commit your changes, and svn tells you you can't, because you are out of date. So you have to svn update. Now you have:
foo:
  a
  b
bar (conflicted):
  <<<<<<< .mine
  b
  d
  =======
  c
  >>>>>>> .r2
Now, how do you roll back to what you had before you updated? The state of "foo" has been lost forever by the successful merge.
> For example, git checkout filename is equivalent to svn revert filename.
Annoying, maybe, but this is user error, not design error. With git, if I want to losslessly discard my working copy, I can just "git stash". If I want to losslessly update my svn working copy, though, I have to make a copy myself, and then manage the copy.
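A sketch of the lossless version (hypothetical hash and commit message):

$ git stash          # parks working tree and index, leaves a clean checkout
$ git stash list
stash@{0}: WIP on master: 1a2b3c4 Last commit message
$ git stash pop      # reapplies the parked changes and drops the stash entry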
By your logic, "rm" is flawed because it doesn't ask for confirmation when you pass -f instead of -i. Well, yeah. Sorry.
You didn't lose any data in that scenario. Every line of code you wrote still exists in those files. The problem is that you are in a conflicted state that must be manually resolved. Unfortunately, making updates atomic across a branch has disadvantages. For example, svn lets you update individual files or directories instead of the whole branch. If you want to avoid this pitfall in the future, run "svn merge --dry-run -r BASE:HEAD ." before a real update. (I wish svn update had a --dry-run flag. Just because git is bad doesn't mean svn is perfect.)
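Roughly what that dry run would report in the scenario above (output format from memory, so treat it as a sketch):

$ svn merge --dry-run -r BASE:HEAD .
--- Merging r2 into '.':
U    foo
C    bar

The U tells you foo will be merged silently; the C warns you about bar before anything touches your working copy.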
Also, your scenario is extremely unlikely. I've used svn for 5 years and I've encountered that problem once. It was for a binary file. Two versions of the same image. It's not very often that two developers create a new file with the same name at practically the same time. It's even less often that those files can be properly merged.
> By your logic, "rm" is flawed because it doesn't ask for confirmation when you pass -f instead of -i. Well, yeah. Sorry.
$ git checkout blah
This command either switches to branch blah or it erases all uncommitted changes in a directory or file named blah. Without more information, you can't tell. I find that frustrating and annoying. Your analogy would be more accurate if rm somename were the equivalent of apt-get update, and rm othername were rm -fr othername. Oh, and somename is never tab-completed but othername is.
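For what it's worth, git does let you force the path interpretation with "--" (assuming both a branch and a file named blah):

$ git checkout blah       # ambiguous: git picks the branch
$ git checkout -- blah    # the -- forces git to treat blah as a path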
> You didn't lose any data in that scenario. Every line of code you wrote still exists in those files. The problem is that you are in a conflicted state that must be manually resolved.
I think that the point is that file 'foo' has already been merged, regardless of the conflict in file 'bar'. There is no way for you to revert to the pre-update state. In git, your previous commit still exists in the objects store even if it is no longer connected to the tree. And garbage collection won't even clean it out right away because it is still in the reflog ('git reflog'). The point being that once something is committed, it's permanently (barring garbage collection) in the repository. Whenever you make a change to a commit, a new commit is created, some pointers are changed, and the old commit still remains in the repository.
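A sketch of the recovery path (hypothetical hashes and messages):

$ git commit --amend -m "fix typo"    # appears to replace the last commit
$ git reflog
9fceb02 HEAD@{0}: commit (amend): fix typo
a1b2c3d HEAD@{1}: commit: original version
$ git checkout a1b2c3d                # the pre-amend commit is still reachable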
> Oh, and somename is never tab-completed but othername is.
Responsibility for the tab-completion falls squarely on your shell (or wherever you got the tab-completion setup from). Don't point your finger at git and say, "git sucks because bash tab-completion screwed me up." Neither rm nor git can control how your shell bothers to determine tab-completion.
Still, it can't be right that "git checkout foo" does one of two COMPLETELY different things depending on whether or not there is a file called foo in the current directory. Surely one of those two commands should have a different name.
I've always felt like `git branch` could be the one to switch branches (since it's used to create them too). But `git checkout -b` also creates branches... I think semantically checkout is the right command for this.
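For reference, the shorthand in question (hypothetical branch name):

$ git branch topic && git checkout topic   # create, then switch: two steps
$ git checkout -b topic                    # same result in one step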
It's never come up as a problem because I tend to know what files are in my project, and I also tend to know what I'm about to do. I very rarely switch branches with a dirty working copy anyway, and my branches are never named even remotely close to what files are named (by coincidence, I suppose, but I name branches after releases, which have names like "2.2.2"; bug fixes, which have names like "bug2598"; or features, which have names like "dashboard-rewrite" and "chunk-load-thumbnails").
Here's another crazy idea: don't run 'git checkout ...' on a dirty work tree. Problem solved.
Another one: don't reuse filenames as branch-names.
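In other words, make it a habit (sketch):

$ git status                  # confirm the working tree is clean
$ git checkout otherbranch    # only then switch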
To be honest: I have the same problem with careless invocations of 'rm' ruining my day, but when I'm muttering curses, it is at my laziness/stupidity and not at bash completions or the behavior of 'rm'.
Honestly, it's annoying that we all use single-version filesystems. It was a good idea back when computer storage consisted of a big rotating metal drum and a major government could only afford 1MB of storage.
Now that 1TB is like $70, we should just keep every filesystem state around. Maybe not forever, but so that "rm -rf *" is just a "gah, that's not what i wanted!" moment instead of a "the rest of my day is ruined" moment.
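Copy-on-write filesystems already get partway there. A sketch with ZFS (assuming a dataset named tank/home):

$ zfs snapshot tank/home@before-cleanup
$ rm -rf *
$ zfs rollback tank/home@before-cleanup   # a "gah" moment, not a ruined day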
Premature optimization is the root of all data loss.