Git undo: We can do better (waleedkhan.name)
762 points by arxanas on June 21, 2021 | 476 comments



Damn, this is such a GREAT idea. I've messed up repos a few times, and it's never good. It's always -- "what's the magic wand I have to wave now?"

The truth is, while we use git every day, most people really don't understand how it works.

There I said it. And I'm not ashamed.

I don't really know how Git works. And I think I'm not the only one.

What does "git reflog" or "git reset --hard ...." do? What are the implications?

We don't really know.

I feel stupid. But hey, at least I'm honest.


I know (or, at least, have known) how git works, in the way most people mean that (the data structures & on-disk layout, what a commit is, what a tag is, what a branch is, what HEAD is, staging, et c.). What I can't keep straight is WTF the commands are actually doing, in that low-level sense, which is a different thing, and there's approximately a 0% chance I'm ever going to use more than a tiny fraction of the commands often enough to remember that information.


definitely agree, and I'm in the same boat. I don't even think the data model of git is that hard to grok at all, it's mostly that commands are very unclear on what they operate on and in particular people get really tripped up about how many levels of state there are (stage, working tree, local branches, remote refs) that they have to interact with.

Like, I've had to explain a lot of times why you `git pull origin master` but when you want to interact with that remote branch otherwise it's `origin/master` instead. The lack of clarity is in what commands operate on what levels, with many of them operating on several at once.
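To make the asymmetry concrete (remote and branch names here are just the usual defaults):

```shell
# talking to the network: remote and branch are separate arguments
git pull origin master        # fetch from the remote "origin", then merge
git push origin master

# everywhere else: you name the local remote-tracking ref, with a slash
git log origin/master         # history of your cached copy of the remote branch
git diff master origin/master # compare your branch against that cached copy
```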

There have been some efforts to reform the command set to be more clear, like `git switch`, but the old commands will persist forever along with a lot of other footguns (like how `git push --force` really ought to be replaced with `git push --force-with-lease`, with the old behavior moved to `git push --force-I-really-mean-it`), so it hardly matters.


I've actually worked on git internals and I'm in the same boat.

As part of a security-related project some years ago, my team and I hacked jgit to use SHA256, which required changing the length of pretty much every on-disk data structure. Sadly, there was (probably still is) no HASH_LEN constant, just a lot of magic offsets strewn throughout the code. I had to compare lengths against the git spec at every step.

And yet I still scramble for stackoverflow every time something goes slightly amiss.


There's an ongoing effort to rework core Git so that the hash implementation can be swapped out for e.g. SHA-256. [1]

JGit is actually a separate project from core Git, but once SHA-256 support is adopted into core Git we can expect JGit to follow suit, given that it's critical to Gerrit and other projects.

[1] https://lore.kernel.org/git/20191223011306.GF163225@camp.cru...


What a pointless project! I hope you were paid well, at least.


I was. But it wasn't quite as pointless as it sounds - the tool was a sort of tripwire-like system, with changes shipped to an append-only log, that itself was checkpointed in an early blockchain-ish structure. The threat model was "nation state actor" so the client wouldn't accept SHA1.

It was actually a pretty cool system. I don't think it was ever sold though.


Man, I thought zero days and secret backdoors were bad enough. Now we have to worry about manufactured hash collisions in all our repos' files dating back forever?


That seems like overkill. Couldn't you combine the hash with the date to obtain uniqueness?


The date isn't really meaningful since it can be set to anything on a file. But if you can force two dissimilar files to have the same hash, you can combine that with some other attack to inject it into some sort of chain of trust, whether it's git or some other type of checksum based system. Then combine that with a SolarWinds like attack and even if they try to revert to something from years earlier, they can't guarantee that the rollback files are still unaltered unless they had multiple hashes to compare it to or diffed it manually. But multiply that by X thousand files over Y commits during Z years and it would be very difficult to detect.


I do not remember jgit internals, but its API is pretty bad. I always assumed it was some kind of throwaway PoC suddenly turned popular.


> some kind of throwaway PoC suddenly turned popular

Wow, that description feels spot on.


> levels of state

This is the crux for me. Command naming is completely unrelated to and unindicative of state.

It feels like surely there's an opportunity for the basic CRUD operations to be collapsed down into a standard "{action} {source} {target}" style.

There will be nuances, specifically around branching, but the basics should be basic. As opposed to a Swiss Army knife, where you have to pull out the scissors and squeeze them three times before you can unfold and use the blade.


i can't stress just how great magit is. it's worth trying out emacs for just that. something like spacemacs as a wrapper is useful too since it gives you some well configured defaults for file operations. emacs is a kinda trash text editor but an amazing text utility toolkit that enabled magit.


I'll stress with you. Even if you hate Emacs, magit alone is a valid single use-case to start up emacs.


Are there any git frontends that do this today?


The one built into IntelliJ IDEs is pretty good. SourceTree is decent too. They both cover the vast majority of day-to-day operations. I only very rarely have to resort to the command line for ritualistic summoning of the git demons.


Magit comes close to this action-source-target model whenever possible.


can you explain the `git pull origin master` thing one more time here?


I don't think using `git pull` is a particularly good way of working. A pull is a fetch combined with either a merge or a rebase.

If it's difficult to keep your mental model of some system up to date, I doubt that doing bigger steps at once makes things easier.

So

1. run `git fetch`

2. if the textual output does not tell you what has happened, run `gitk --all`

3. Decide what to do. Rebase, merge, whatever.

Of course if you know exactly what you are doing, pull can be fine. If you changed the repo yourself on another computer that is the case. Otherwise, how can you know your second step, before having even seen the data you are operating on? Well, it can work, but if it doesn't, don't complain.
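As a sketch, assuming the remote is named `origin` and the branch `master`:

```shell
git fetch origin              # updates remote-tracking refs; nothing local moves
gitk --all &                  # or: git log --oneline --graph --all

# then decide, e.g. one of:
git merge origin/master       # merge the fetched work into your branch
git rebase origin/master      # or replay your commits on top of it
```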


> I don't think using `git pull` is a particular good way of working.

I agree. For a DVCS like git, separating the network transaction from updating the working copy on disk is the best way to go about it. Going in the other direction, this is the default since git add, git commit and git push are executed separately.


This is literally the first advice I give when teaching people git. The first months of use, just run the two commands separate. Many mistakes are avoided that way.


I agree, but I end up using `pull` anyway just because the alternative is so tedious. I wish there was a short command that did the same thing as pull without fetch: merge the remote-tracking version of the current branch's default upstream into the current branch.

Essentially the whole concept of "upstream" is weird and non-orthogonal. Another one that bothers me is that as far as I can see there's no way to globally turn off setting an upstream on newly created branches (I can pass a flag to the specific "git branch" command, but that's tedious and error-prone).


like why it's different?

`git fetch` (and by extension `git pull` when given a remote) and `git push` copy data to and from a remote. When you specify `git pull origin master` you're saying "pull down a copy of the remote ref master from origin", which it then saves locally as the ref `origin/master`.

Everything under `origin/` (or really `refs/remotes/origin/`) is just a cached pointer to the last known state of that ref on the remote.

All other commands operate only on these local references. So when you want to refer to what you know to be the state of things on `origin`, you can use `origin/master`. Otherwise that command has no particular knowledge of how to talk to origin.

Incidentally this is a shortcut I use all the time to update my local master from a remote:

`git fetch origin master:master`

Which is super unclear in its meaning but it means fetch origin's master HEAD and put it in my local master ref. I actually use this more often than git pull nowadays.


I tend to default to `git pull --rebase`.


I have this configured as default everywhere and strongly believe that merge-pulls are always wrong. The first place I used git we were learning together (i.e. nobody knew what a sensible workflow was) and people would push their local merge commits back to master. It was horrible.


`git config merge.ff only` is really helpful for enforcing this. It makes you have to say what you want for any non-trivial update of a ref through pull or merge.


Strongly disagree. Never rewriting local commits is great for the same reasons that never rewriting published commits is great; if you rebase, you lose the ability to fearlessly work on multiple branches in parallel, which is the great advantage of git.

Pushing merges is great. Pushing random (unreviewed) local commits directly to master is bad, but it's no worse when those commits are merges than when they're not. Conversely, rebasing master (which is quite easy to do if you're inexperienced but have been advised to use git pull --rebase) and pushing that creates a self-perpetuating mess that is very hard to fix (because even if you fix what you did, any other user who did a rebase-pull of master in the meantime is going to reintroduce the problem). Using rebase also trains you to force-push which makes messing up published branches much easier.


Also, one advantage of `git pull origin master:master` is that you don't have to checkout master first.


so the distinction here is

- origin master <=== the actual remote version of the master branch

- origin/master <=== a local branch that you cached from the "origin master" remote, may or may not be in sync with the real "origin master"


origin and master are completely arbitrary too...

`git pull remote_repository_name branch_name` is the generic way to look at it instead of some magic incantation.

I like to call origin "upstream" to differentiate them.

and then git pull is another way to think of git fetch and git merge as one command roughly.


yep, that's right. Or rather, origin and master are just two parameters given to pull/fetch/push to describe a target while origin/master is just the local name for, as you say, the locally cached ref.

Comparing against that locally cached ref is also what git uses to tell you how far behind/ahead of the upstream you are in `git status` or whatever. Fetch and push are the only git commands that actually talk to a remote (at the "user level" of the command set anyways, those are also composed of lower level commands).


> (or really `refs/remotes/origin/`)

It is worth the time to fully understand refspecs. Once people do, they tend to understand all essential ramifications of branch and repository naming.


What's wrong with `git push -f`? When I'm working on a branch that's been previously pushed with `-u`, it's pretty normal to force push it, particularly if you're amending or reordering commits in response to review feedback, or rebasing due to conflicts in preparation to merge.


changing `-f/--force` to act like `--force-with-lease` would have no effect on that flow whatsoever. What it would prevent is you accidentally overwriting something on the remote because you didn't know its current state, potentially silently backing out changes someone else (or perhaps you yourself on another machine) had pushed.

All it does is add this simple check before actually pushing:

    if (remote_ref("blah") != local_ref("remote/blah"))
        fail();
Most of the time it doesn't matter, and for most people's uses of --force it would have no effect (because most people are just pushing to a branch they're the only one pushing to). But every now and then it helps a lot to avoid losing data.
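Concretely, with a hypothetical branch named `feature`:

```shell
# refuses if origin/feature has moved since your last fetch
git push --force-with-lease origin feature

# today's default: overwrite whatever is on the remote, no questions asked
git push --force origin feature
```

(`--force-with-lease` can also take an explicit expected value, but the no-argument form is the common case.)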


It’s important to also understand where this might fall down: many tools fetch automatically and this can cause issues with reliability here.


Ultimately, I suppose, git usage is somewhat cultural. I personally have an aversion to push -f, along the premise that once it’s pushed it’s public and someone else may have branched (and pushed changes of their own) or simply had it checked out for review; doing push -f “changes reality,” while checking out a new branch is idempotent. If someone else has committed on that branch it’s especially jerky to push -f.

I try to be pragmatic about this sort of thing, yet `push --force` is one of those cultural no-no's for me.


It means you can't fearlessly pull from other people's feature branches. So people mostly don't bother looking at each other's feature branches (because there's nothing you can reasonably do with someone else's change-in-progress except wait for the branch to hit master), so you collaborate later and end up with more conflicts.


I think it's because the commands are poorly named. "Reset" vs. "Revert" tells me nothing about what is happening at the low level, I just have to remember it. And yet the two operations, despite having fairly similar English language meanings, have entirely different meanings in the context of Git.


Yes. Especially

git init --submodule --recursive

Or is it

git submodule --init --recursive?

God I hate this UX so much I usually have a ./fetch-subrepos.sh that runs a bunch of "git clone" commands.
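(For the record, the incantation I can never remember is, I believe:

```shell
git submodule update --init --recursive
```

which clones and checks out every submodule, recursively.)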

And if I push without first pulling, must it always punish me with a merge commit? Can't I say "oh shit I don't want to do this, go back and git pull"?


> And if I push without first pulling, must it always punish me with a merge commit? Can't I say "oh shit I don't want to do this, go back and git pull"?

This is a source of probably 50% of my "ah, fuck, time to undo..." moments with git, these days. I hate that shit. Muscle-memory gets ahead of me and I commit on a shared remote branch, which would be fine given our workflow except that I didn't pull first. What a pain in the ass.


I have this in my .gitconfig so the pull will fail rather than merge.

    [pull]
        ff = only
If it does fail I can decide whether to merge or rebase.
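Same thing from the command line, if you'd rather not edit the file by hand:

```shell
git config --global pull.ff only
```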


I guess I have `git pull --rebase` as muscle memory.

I would guess there's an easy way to make git do this automatically for you via config so you never forget, but I just never, ever `git pull`

Or:

> git config --global alias.up '!git fetch && git rebase --autostash FETCH_HEAD'

From:

https://github.com/JKrag/git-up


git config --global pull.rebase true

You probably also want:

git config --global rebase.autostash true


For the sake of your coworkers (and your future self), please don't lie just to make your history look pretty.


I try to make each commit a snapshot of a working repository (compileable or runnable or testable, whatever is the heuristic for working) where the difference between each snapshot can be explained by the commit message. Ideally they are isolated to a single logical "unit" of change (fix, refactor, add, remove). The goal here being to minimize the amount of confusion and work for anyone traveling up and down the tree. I often have to rewrite my local history to make this happen, because the actual changes that I make can happen in a somewhat arbitrary order. How has local history revision bitten you?


Rebasing leads to either having long stretches of non-compiling commits in history or giant non-bisectable commits. E.g. you added a method call on your feature branch, that method was renamed in master while you were working on your feature. If you merge then your commits still compile and I can use automated `git bisect` the way it's intended. If you rebase then your commits don't compile and I can't bisect through commits on your feature branch. If you squash then your whole feature development becomes a single monster commit and I can't bisect through it.

I agree with having as many commits as possible be compilable, but that's not the sole criterion, because there's a tension between that and having granular history: if you squash the whole history of the repo into a single commit then that means 100% of commits are compilable, but it's still a bad move. Conversely, a non-compiling commit in between two compiling commits is not a big problem (you just make sure your git bisect script skips non-compiling commits) - what really matters is keeping the diff between two successive compiling commits as small as possible. IME the best way to achieve that is never rewriting history.


That's a very good point with rebasing. Thanks for explaining.


It is for this reason that I have changed my workflow to always stash first, then pull, then pop the stash and do the merges locally, then push.


And always diff before stash because sometimes it's just random shit I wasn't serious about, so I'll re-checkout that file and then stash the rest.


> And if I push without first pulling

I think I know git well, but you got me confused. I've never heard of pushes causing merges. Surely you are talking about pulls, right?


push causes the error, the resolving pull creates the merge; the correct resolution has been pointed out as git pull --rebase but most people don't realize this.


Maybe somebody who has a habit of using --force when pushing. A major downside of rebase-centric workflows is that it teaches you to ignore the safety rails when pushing, or when deleting branches.


`--force-with-lease` would fix this problem (it needs an alias). Also, `--force` wouldn't cause a merge commit; it would overwrite the remote changes.

The only theory that makes sense is that this person doesn't know how to `pull --rebase`, but the order of `push` vs `pull` wouldn't change the presence of merge commits, so I'm still confused.


I don’t know git well, but I often run into the problem being discussed.

If I pull from origin before making my changes, I don’t have to merge, obviously.

But correct me if I’m wrong: I think that if I don’t pull first, but my changes don’t conflict with any part of what was done by the previous commit(s) I missed, I’ll still have to merge if I touched a file they touched.

This is a common scenario for me. Correct some typos in comments for example, and I get forced to figure out how to merge using vim, which I don’t know how to use at all (being a nano user). I’m sure I could and should switch to at least using nano by default, but I don’t know how merging really works, either.

What I really want to do is undo my commit, pull, and redo my commit. Then I don’t have to figure out git merge.
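That literal undo-pull-redo sequence is doable by hand, for what it's worth (a sketch; it assumes the commit hasn't been pushed yet):

```shell
git reset --soft HEAD^    # undo the commit, keeping its changes staged
git stash                 # set the changes aside
git pull                  # now a clean fast-forward, no merge
git stash pop --index     # bring the changes back, still staged
git commit -m "my change, redone on top of the new upstream"
```

which is more or less what `git pull --rebase` automates.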


> I don’t know git well, but I often run into the problem being discussed.

I do understand the problem being discussed; what I don't understand is what it has to do with pushing first. You have the same problem no matter which order you use `git push` vs `git pull`.

> I think that if I don’t pull first, but my changes don’t conflict with any part of what was done by the previous commit(s) I missed, I’ll still have to merge if I touched a file they touched.

Yes, that's true.

> What I really want to do is undo my commit, pull, and redo my commit. Then I don’t have to figure out git merge.

You can do that with `git pull --rebase`, which, as others have mentioned, you can set as the default behavior of `git pull` like this:

https://news.ycombinator.com/item?id=27581416


Ooh, --force-with-lease looks like a nice feature, especially for updating github PRs that aren't yet merged. I still wouldn't want to use it where anybody else has a copy of the changes, since that's where you need a merge commit to avoid breaking somebody else's repo, but that gives me a safer option than a blind --force.


Just remember that --force-with-lease only protects you from overwriting commits you have not yet fetched.


Wait, what? I've probably been using Gerrit too long but why do you ever need force in a rebase workflow?


These may be specific to a workflow with git + github, when using git from the command line, but here are the cases I've run into where overriding safeties is needed.

1. After making a PR, there are conflicts when merging into main. In a merge-based workflow, I would merge main into the feature branch, resolve any conflicts, then push. In a rebase-based workflow, I rebase the branch onto main, resolve any conflicts, but now I need to push --force. As some of the other comments have mentioned, this can be improved with --force-with-lease, but still isn't the greatest.

2. After making a PR, there are some typos that need to be fixed. Fix these in an interactive rebase, to edit the same commit that introduced the typos. Also requires either --force or --force-with-lease.

3. When the PR is accepted, the result is rebased on top of main. My local branch still exists, and must be deleted. I would prefer to use `git branch -d` to delete the feature branch, but this rightfully says that the feature branch hasn't been merged in. I instead need to use `git branch -D` to forcefully delete it, introducing a point of human error. (There are some cases where git can delete the branch safely, which I think occurs either when the feature branch has only a single commit, or when the feature branch can be applied on top of main without a rebase, but I haven't exactly determined it.)

#1 and #3 are cases where a safer option cannot be used due to a rebase-workflow. #2 would exist in either case, since even in a merge workflow, rebasing of branches before they are pulled makes sense to do.
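On #3, one way to take the human error out of `-D` is to ask git whether the patches already landed (a sketch; `main` and `feature` are placeholder names):

```shell
# compare feature's commits against main by patch content, not by hash;
# a leading "-" means that commit's patch is already upstream
git cherry -v main feature

# if every line starts with "-", nothing is lost by deleting the branch
git branch -D feature
```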


> There are some cases where git can delete the branch safely, which I think occurs either when the feature branch has only a single commit, or when the feature branch can be applied on top of main without a rebase, but I haven't exactly determined it.

FWIW: it occurs when the feature branch was based on the tip of master (because no-one else has committed to master since you branched/since you rebased onto master) - in this case rebasing your feature branch onto master is a no-op and the commits that go into master have the same hashes as they had on your feature branch.


I usually use `git pull -r` to rebase upstream changes


Git init inits a git repo.

Git submodule runs commands on submodules.

What is hard about this UX?

And it's not punishing you, it's doing what you asked: to pull into a non-matching head. How does it know you're not using git in the intended, distributed way?

Btw, just quit the editor without saving, it aborts.


The "intended" way generates a completely spurious merge commit - it doesn't represent a real commit, and rarely do you care about keeping track of merges into a short lived branch which are already tracked on master.

Most people want a single source of truth workflow that corresponds to the old total ordering imposed by svn or p4.


> The "intended" way generates a completely spurious merge commit - it doesn't represent a real commit, and rarely do you care about keeping track of merges into a short lived branch which are already tracked on master.

On the contrary, you want those commits for bisection, which is the main reason to have a VCS history at all.

> Most people want a single source of truth workflow that corresponds to the old total ordering imposed by svn or p4.

People think they want that, but I've never seen a convincing case for why. Bisect works better if you use merge. Blame works better if you use merge. And if you really want to see the history without merges (why?), it's one flag to do that.


So use svn?


I would, but the option is not mine to make.


[misunderstanding removed]


I think they mean the commit message editor, which git will use to open a temp file to save the message to if you don't specify a message in-line with the "-m" flag when committing, including when a merge commit is initiated by a "pull". This happens on the CLI, it's just usually (though doesn't have to be!) a command line editor that it opens. I think vim's a common default.

AFAIK whatever's opened does need to block the CLI, so you can't use a command that opens a GUI editor then returns immediately or git will interpret that as your having closed the file without saving, but otherwise any editor should work, CLI or GUI, and can be assigned in your git config.


In a thread about common sources of confusion, I think it would be more helpful to leave the misunderstanding so others might learn from it. Ie, edit to add "this is a misunderstanding" to the top, not replace it entirely.


Git invokes an editor, for writing commit messages, etc. (it looks in the VISUAL and EDITOR env vars). That could be a GUI text editor, or something running in the CLI (personally, I use emacsclient to open a new buffer in an existing Emacs window)

What they're saying is: if you quit that editor without saving the commit message, git will abort.


You can also quit the CLI editor, e.g. vim.


YES! It always seems like people’s issues are handwaved away with something like “oh you just need to understand the underlying data structures better.” No, the UX is often very bad! Like, I know exactly what I want the underlying repo to do, but how the hell am I supposed to remember which `--option` of which command is going to do that thing?


YES! you learn the happy-path commands you use all the time and the handful of "sadder-path" approaches you try when things go south, but there is a dramatic fall-off in knowledge and understanding from there that leaves otherwise clever and confident people feeling stupid and frustrated. This is not a silver bullet for productivity, but still a very worthy problem to address that could have a meaningful impact for a lot of people.


I'm the same, but I think that's... Fine? If I understand what I want to do in terms of first principles, there's no harm in searching for the exact incantation if I do that only once every few months.

For the rest, there's shell autocomplete and muscle memory.


Maybe for low-level, somewhat rare tasks the ideal Git porcelain would be a GUI that just exposes the data model directly.


SmartGit is pretty good. It's $70/yr though (but well worth it IMO)

I'm the guy people go to to fix Git screw-ups at my jobs, but I just click a few buttons or drag a few commits...


I wonder: should Undo as a concept apply to all Git actions/commands which have state side-effects on the repo or work dir, or should Undo only cover certain operations (and which ones)?


Take a look at https://eagain.net/articles/git-for-computer-scientists/?

Maybe you've already read it, but this is what let me grok the underlying data.


The parent commenter makes it clear that they already grok the underlying data. The problem with Git, as explained so, so many times, is its horribly unintuitive mapping from UI commands to the operations those commands perform on that model.

Comments like this, which point to a resource intended to help people "grok the underlying data", have the effect of seizing the focus of the conversation and implicitly retargeting it toward people who don't understand the underlying data model. When you've been through this enough times, it just comes off as incredibly annoying and tiresome.


I often come back to a local repository to change something and think, while I'm at it, I'll just `git pull` and end up with a non-working working directory. Surely I should know better, but I think it's also hostile to users, when the easy thing to do is often the wrong thing to do.

Even worse, I'm not sure I correctly remembered the weird combination of actions and flags to use to get back to the state where I can continue with what I wanted to do in the first place.

That article is a good example of the problem. It tells me `git rebase` is an easy thing to do but I better not use that distributed VCS to publish my work that way, where 'publish' probably also applies to different machines of mine.


But... a monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor ... so what is the problem? /s


I think the parent commenter says that they do understand the underlying data.

It's just that the command-line interface is very opaque regarding what it does to that data.

For instance, say I want to apply the last three commits I made in one branch to another branch. It's a very simple operation conceptually.

Good luck remembering that the command that does it is rebase, and what the arguments for it are.
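For the record (a hedged sketch, with made-up branch names `feature` and `target`):

```shell
# option 1: copy the last three commits of feature onto target
git checkout target
git cherry-pick feature~3..feature

# option 2: transplant them, moving the feature branch itself
git rebase --onto target feature~3 feature
```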


I understand how git works, and I still can't use it. There are three problems:

1. Despite the claim that git never loses data, there are actually some dangerous operations that will irretrievably nuke your work with no warning. Git checkout is the canonical example. This makes me very gun-shy about doing anything that I'm not intimately familiar with.

2. Git's merge is not smart enough to realize that identical changes in two branches are not actually a conflict. I've often ended up in situations where a small bug has been fixed in two branches which then won't merge without manual intervention. This is incredibly annoying. (To be fair, this is not unique to git. But because git encourages branching more than other systems, I encounter it more when using git in idiomatic ways.)

3. This is the biggie: translating the abstract idea of what I want to do into an actual git command is a black art. The underlying model is beautiful, but the UI is atrocious. The plumbing is great. The porcelain is cracked and mildewy. There are mysterious valves and pipes all over the place when all I want is one control for the hot water and another one for the cold.
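On point 1, the canonical footgun looks something like this:

```shell
echo "hours of work" >> app.c   # uncommitted changes in the working tree
git checkout -- app.c           # silently restores app.c from the index
# the uncommitted work is gone: no warning, and no reflog entry to recover it
```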


I really wish git would embrace the idea of not destroying any data without specifically prompting first. Same for merging pull requests. How hard would it be to prompt again in case of conflict about how to proceed?


Commits are immutable; it's incredibly hard to destroy data that has been committed. Even if they become loose objects, Git is very conservative about deleting them. If you have uncommitted data, git will refuse to do lots of operations. The default is always to not delete changes, even when resetting a branch.

Contrast that with TFVC, which tries its best to nuke everything within its reach (that is not under version control) in inexplicably stupid ways.

  tf reconcile . -r -clean
This command is equivalent to

  git reset --hard origin/HEAD && git clean -fdx
Because that's sensible.

  tf reconcile . -r -i clean
One might think -i is a short flag for -ignore. No. It's short for -preview. That's a "feature". Good luck finding the documentation for this idiotic behavior.

How about if you pipe a list of files into tf reconcile, to avoid its idiotic behavior? Say

  git ls-tree --name-only -z HEAD | xargs -0 tf reconcile -r -clean
You better hope that ls-tree outputs something, otherwise that is the same as calling

  rm -rf .
I can't say I recognize your experience. You only risk creating a mess of changes, to the point where it's hard to recover due to the sheer amount of data it hasn't deleted.


Unfortunately, it is still very easy to lose data, e.g. by trying to undo a temporary commit with `git reset --hard HEAD^` (note the --hard option) before committing your changes.
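The committed part is usually recoverable through the reflog, though (the uncommitted working-tree changes are not):

```shell
git reflog                      # every recent position of HEAD, newest first
git reset --hard 'HEAD@{1}'     # jump back to where HEAD was before the reset
```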


Agreed. I consider myself pretty competent at git (consistently helping out the rest of my team of ~15 people with it), and even then I've shot myself in the foot by using `git checkout --` instead of `git reset` to unstage a file, and lost all of my work on it. Really felt that should have given a warning.


use git stash instead to unstage changes not committed to the index and you'll never lose anything ever again
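A sketch of that workflow; stash entries persist until explicitly dropped, which is what makes it hard to lose anything:

```shell
git stash push   # squirrel away staged and unstaged changes; working tree becomes clean
git stash list   # every entry is kept until you drop it
git stash pop    # reapply the most recent entry (and drop it on success)
```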


The point is that you shouldn't have to learn everything by brutal trial and error, losing hours of work each time you try to learn a new operation and make a small mistake.

It's the same reason consumer operating systems have a trash can and undo features. Just railing on people with "you should've known better" doesn't really help.


Well, reading the fucking manual BEFORE you touch a machine is a very good idea. Otherwise the machine may rip out your limbs, or worse.

The manufacturer of the machine won't be responsible for sure if you didn't even read the manual… Clear case.

That's reality in engineering.

If developers want to call themselves "engineers" they should behave as such.

On the other side you don't let people without special training even close to industrial machines! The risk they could get killed by an accident is just too high.

Here lies the discrepancy actually: the "software industry" is as much an industry as running a kindergarten is. Also, most of the time there is no "engineering" at all in "software engineering"… Just trial and error until "something works" (or at least looks on the surface like it would, no matter how broken it is on the inside).

As long as people are supposed to "learn on the job", and there are no clear quality / security standards in this "industry", and stuff is called "engineering" even though no true engineering approaches are followed, nothing will change. Machines will continue to kill people in completely avoidable accidents (virtually)…

But that's in my opinion a fully homemade misery actually.


As someone who just learned a thousand electrical norms, let me assure you of one thing: the way git handles "dangerous actions" that might delete things is NOT something you would find an equivalent for in industrial machines, where you have mandatory warning signs, switches for operation with two hands (or even two people), mandatory inspections of these safety features, etc.

It is not a different league, it is a different sport. If a factory was designed with the equivalent of git's usability today, it wouldn't even get permission to be built based on the plans alone. Also, the person who planned it would have a hard time ever doing so again.

If git was a bridge it would have no handrails and it would oscillate in certain winds and for some odd reason there is a roundabout in the middle.

It is still better than no bridge for sure. As one of the first of its kind a lot can be excused, but fundamentally better engineering is possible as well.


Right. I've worked with many machines that can chop or rip off limbs and I can't think of a single instance in which the operator even had access to the manual. "Push these two buttons and this foot pedal to make it go. Don't wear loose clothing." If it was more complicated than a few button-presses to use and that risky, it would either be operated by a team or it wouldn't be used.

The only single-operator machine I can think of which has an interface even 1/10th as complex as git is an automobile. The number of fatal accidents that could be remedied by reading the manual is vanishingly small.


Not a big fan of human factors in engineering, huh?

Documenting poor design rather than improving it is lazy, bad engineering. Requiring rote learning of a very complex interface that poorly represents a moderately complex data structure is lazy, bad engineering. Macho, hyperbolic, gatekeeping arguments for non-design as a design philosophy are advocating for bad engineering. Being mad that people want products with interfaces that make sense is being mad that people want an important facet of good engineering.


> Well, reading the fucking manual BEFORE you touch a machine is a very good idea. Otherwise the machine may rip out your limbs, or worse.

We are discussing designing the machine in a way that removes these options altogether.

This is objectively and obviously a better strategy because humans are fallible, even when well trained, and the time put into safety training can now be put into an actually productive direction.

You're working from the perspective that machines and git must be dangerous to be effective. This is an assumption that should be challenged before the enormous waste of safety training is accepted.


Y'know, for some of us peasants, dev work can just be a way out of poverty. We don't have engineering degrees from a top school, we don't work for the FANGs, and we don't work on mission-critical code. The software industry has expanded a lot since the 80s, and now hobbyists can and do make a living out of it even without formal training. So what?

When I started web dev, I earned $15/hr. Beat the $8/hr I was making before that in landscaping. Now I make a little more than that, which still isn't much. My clients/employers don't pay crazy wages and they don't expect crazy quality work. They know they get what they pay for, and it works out for both parties.

Maybe it's OK to have shitty, mediocre code for 90% of the world's needs... a small biz website, with the ecommerce/PCI bits outsourced? Sure, why not. It mostly works, and if it goes down for a few hours a year, maybe that doesn't meet super-reliability standards (can we count 8s instead of 9s?) but it gets the job done well enough? Shrug.

Sure, proper engineering techniques matter for certain applications. I would never want to touch industrial machines, or medical, or space, or automobiles... anything that could blow up and/or kill someone. But most code out there is just for some local, small-scale use, mostly temporary anyhow and bound to be obsolete in a few years if not months. There will always be mediocre businesses needing mediocre devs for mediocre pay, just as there will be elite enterprises that require the world's smartest people.

Problem is, git was designed by the super smart for the super smart, great engineering with terrible UX. And it was kinda just trickled down to the rest of us, and it feels a bit like trying to teach Mom to use DOS and edit config.sys just to play a game. Now consumer software UX has leaped forward by decades and it shows, but a lot of the command-line dev tools are still incredibly arcane. They don't have to be, but it's not a priority to fix/improve their UX because, I suppose, it's engineers who are proud of their engineering, not proud of their ability to dumb it down for the rest of us. I don't blame them, I just know I can't meaningfully contribute to git (the project) because I'm not smart enough, well trained enough, whatever, and that complaints would fall on deaf ears like yours. It's an altogether different culture. Elitist by design, or meritocratic if you will.

And believe or not, I've probably spent more time learning git -- reading documentation, diagramming it out with coworkers, cloning repos and experimenting with commands, etc., following a shit-ton of tutorials -- than any other skill I've ever had to learn. It was quicker to learn Perl and regex than mid-level git.

If you want to pay for the world's would-be engineers to receive all the training to go from mere dev to proper "software engineers", by all means please do. But otherwise, well... you know what? World's gonna keep producing mediocrity. Most of us are just average.


As long as there are people who create things for others to use, there will be people who blame users for not being technical enough to work around their lack of ability or willingness to produce good interfaces. Developers just happened to also be the users in this instance.


A bit late, but for what it's worth I'm very familiar with `git stash` and use it all the time, and it's not clear how this helps my situation of aiming for one command and accidentally doing a very similar one which deletes work with no warning. For example, `git checkout --` is a completely moronic way of deleting files, and I am baffled it took them so long to add aliases for it which have a coherent name.


That's why it's a flag. You could do git reset HEAD^ && git stash instead

Also, git reset --hard HEAD^ deletes nothing. The commit HEAD was pointing to is not deleted. You have to work really hard to delete that commit accidentally.


It does delete whatever uncommitted changes you had.


Well, that's the purpose of "reset". You explicitly ask the system to delete whatever uncommitted changes you had.

If you type in "rm -rf ." there will also be no "warning" about what happens next…

I for my part don't like systems that, after being given a command, very explicitly ask me whether they should really execute that command. "You just pressed the button to delete those files. Do you really want to delete those files?" "Oh, no sorry. I'm the operator of this system but I press buttons randomly just for fun. Actually I have no clue what I'm doing. Thanks for that pointer! Please ask me also the next time, as I'm not going to remember what this button does. In fact I have no clue at all what I'm doing." Dude…

Git is not a backup tool. (Even though you could use it as one.)

If you need a backup, make a backup.

I'm aware of the fact that this comment may sound impolite to some. But from my perspective it's a feature and not a bug that unixy systems traditionally don't ask stupid questions after you told them to do something. It's an attitude of respect towards the operator: the system assumes that the operator knows what he is doing. Asking questions in the style of "do you really, really want me to do what you just said" is, on the other hand, just extremely rude to the operator, as the system assumes the person in front of the keyboard does not know what he is doing at all! I hate systems with built-in "training wheels". The creators of such systems obviously think their users are cretins, and that's very annoying.


Humans make mistakes. Well designed systems are resilient to those mistakes. That doesn't mean prompting after every single command, which as you point out would be super annoying, but it does mean making operations reversible unless there is a very good reason they cannot be. Essentially every system designed in the last few decades has a concept of a 'recycle bin' or undo feature for deletions at least for a short window of time.


> Well, that's the purpose of "reset"

Well, that's the purpose of "reset --hard". "reset" alone doesn't do that. That's the difference with "rm". With "rm" you expect something to be removed. "rm" may fail because you gave it a directory but you forgot "-r". Or it may fail because you don't have rights on the parent directory or something. But it is pretty much expected that something will be deleted if you run "rm".

From the perspective of people who don't understand the difference between "git reset", "git reset --hard", etc., the fact that some of those will destroy their data but not others is part of the confusion. And when they pick one instead of the other and lose their modifications, they'll blame git for being confusing to use, not themselves for not making a backup before running a command in their source directory with a program that is used to manage source directories.
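The three modes side by side, since this is exactly the confusion in question:

```shell
git reset --soft HEAD^   # move the branch pointer only; index and files untouched
git reset HEAD^          # --mixed (the default): also unstage; files still untouched
git reset --hard HEAD^   # also overwrite the working tree: uncommitted edits are lost
```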


> But out of my perspective it's a feature and not a bug that unixy systems traditionally don't ask stupid questions after you told them to do something. It's an attitude of respect towards the operator: The system assumes that the operator knows what he is doing.

By the way, have you ever run across a situation where the standardized behaviour of rm without -f (to wit: check the permissions of each non-directory, and, if it's not writable, prompt if stdin is a terminal) actually kicked in?


true, if they are modifications and deletes but not additions

it'd be trivial to modify that behavior to refuse to continue in such a case if you want double seatbelts

also, if the changes were staged they were recorded in the database and remain there


> that identical changes in two branches are not actually a conflict

I think this is part of a downside of rebase-centric workflows, since it encourages making multiple branches with identical changes, but no shared history. At some point I want to read more pros/cons on different workflows. My current thinking is that rebase-only sacrifices far too much on the altar of a clean-but-inaccurate commit history, but I don't yet know enough to say for certain.


I think if you run into this a lot, git rerere will be helpful:

https://www.git-scm.com/book/en/v2/Git-Tools-Rerere
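Roughly, it's a one-time config switch; once you resolve a given conflict by hand, git records the resolution and replays it whenever the same conflict reappears:

```shell
git config rerere.enabled true
# resolve a conflict once and commit; on the next rebase/merge that hits
# the same conflict, git reuses the recorded resolution automatically
```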

If you have a lot of long-lived branches that don't merge and get deleted, you have a whole different world of pain from normal git usage, and need some processes designed by SCM experts to manage this stuff. But really short-lived branches that deliver net steps forward are the way to live.

I hate branches because they break git-bisect. Linux kernel history has lived without it, so I'm guessing the "our code is so complex we need it" is a false idea.

Also, the actual history of what main/master points to is linear. If the bits post-merge are identical, it doesn't matter whether they were rebased or not, except that having 2 ancestors for HEAD is a lie.


> Linux kernel history has lived without it, so I'm guessing the "our code is so complex we need it" is a false idea.

Those guys are still sending commits (patches) to each other by mail, I think they willingly live in such a special and weird bubble that it just can't be compared to any "real life industry project".

We've recently started to enforce linear history on our main branches at work, because it makes analyzing broken builds such a breeze. The mantra is "Insertion is hard, analyzing/understanding is easy" (for a merge-based history, it's pretty much the other way around), and so far it's quite the success.

It did take quite a bit of effort though to get everyone up to speed, and it still requires the occasional help to clean up the commit history of some merge request, but it was well-worth the trouble so far.


Ooh, rerere looks really nice. I have a few cases where I've needed to repeatedly rebase long-lived branches. (Feature branch A depends on feature branch B, but some small aspects of feature branch B are still under discussion. Feature branch B needs to be occasionally rebased onto main to avoid conflicts, after which branch A needs to be rebased onto B again.). Something that I try to avoid, but that sometimes happens anyways.
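The classic spelling for that dance is `--onto`, and newer git (2.38+, if I recall the version right) can move the intermediate branch ref in one go with `--update-refs`. Branch names here are illustrative, and the two approaches are alternatives:

```shell
git rebase main B               # B moves onto main; A still sits on the old B commits
git rebase --onto B 'B@{1}' A   # replay only A's own commits onto the new B

# or, with git >= 2.38, in a single command:
git rebase --update-refs main A # rebase A and carry B's ref along with it
```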

Can you explain what you mean by git merge breaking bisect? I've never run into that problem at all. And sure, it might bounce between the branches as you bisect, it still identifies the commit that introduced a bug. The only issue is if one of the commits on the branch fails to compile/test, which is poor commit hygiene, but can be excluded from the bisect.

I still need to do more research, but I think I'd lean toward having a merge workflow, but with no fast-forward commits. That leaves a clean history on main with --first-parent, but still leaves the details available in the branches as needed.
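That no-fast-forward setup is well supported on the command line; as far as I know, `--first-parent` bisection landed in git 2.29:

```shell
git merge --no-ff feature          # always record a merge commit, even when ff is possible
git log --first-parent --oneline   # one line per merge: the clean history of main
git bisect start --first-parent    # bisect only along main's merge points (git >= 2.29)
```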


There is a design for undoing changes to the staging area: https://github.com/arxanas/git-branchless/issues/10

The similar project [Jujube](https://github.com/martinvonz/jj) has experimented with backing up even unstaged changes after every command, and apparently it works well for them, so we could do the same in the above design.

Undoing even untracked changes might be a bit much.


Yeah, this barely scratches the surface though.

Take branching, which is something you're supposed to be doing all the time. So abstractly, I want to be able to get a list of my current branches, create a new branch, check out an existing branch, delete an existing branch, and maybe rename a branch. I would expect the commands for these operations to be something like:

    git branch list
    git branch create [name]
    git branch delete [name]
    git branch rename [old-name] [new-name]
We can argue over whether checking out a branch should be "git branch checkout [name]" or just "git checkout [name]", but in either case, if I have unsaved changes in my working directory, I would expect to at least get a warning about this before that work got clobbered.

None of these things are actually the case. Git checkout will clobber unsaved changes in my working directory. "Git branch list" is just "git branch". "git branch create" is "git branch [name]". So creating a branch is the SAME COMMAND as the one you use to produce a list of current branches, just with an argument. Madness. And this is the rule, not the exception. Minor variations on a theme can result in radically different commands with various mysterious arguments. There is no rhyme or reason or regularity. The only way to know is to look it up.
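For anyone hunting for them, the actual spellings are roughly the following (`git switch` being the newer, clearer way to change branches):

```shell
git branch                  # list branches
git branch topic            # create "topic" (does not switch to it)
git switch topic            # switch to it; refuses if it would clobber conflicting changes
git branch -m topic fix     # rename
git branch -d fix           # delete (-d refuses if unmerged; -D forces)
```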


> "git branch create" is "git branch [name]".

I've been using Git professionally for most of a decade and you just taught me something new. I always use `git checkout -b [name]` (which is a completely bananas UX, but it's the one I know).


The two commands aren't actually equivalent, because of course they're not. Usually when you create a new branch you'd like to switch to it, but git branch [name] actually doesn't do that, so you have to execute a second command after.


Checkout should be load or open.


> Git branch list" is just "git branch". "git branch create" is "git branch [name]". So creating a branch is the SAME COMMAND as the one you use to produce a list of current branches, just with an argument. Madness. And this is the rule, not the exception.

As a counter-opinion, I actually like it this way. Especially since it's the rule and not the exception. When one makes the small effort to learn the commands, the payoff in saved keystrokes from avoiding typing the redundant pieces is great.

As a painfully simplified justification: Directory listing is "ls" and removing a directory is "rmdir" - I much prefer that to typing a hypothetical "directory list" and "directory delete", even if the later fits some nice consistent shape.

It's one of the reasons I hate powershell/windows-style CLI and its 60-character long arguments for every function.


There's nothing wrong with having a UI for power users in addition to a more intuitive one. But having a power-user UI as the only option (or even as the default) is not so good.


In particularly well designed UIs, the interface for power users and the intuitive one are one and the same!


> 2. Git's merge is not smart enough to realize that identical changes in two branches are not actually a conflict. I've often ended up in situations where a small bug has been fixed in two branches which then won't merge without manual intervention. This is incredibly annoying. (To be fair, this is not unique to git. But because git encourages branching more than other systems, I encounter it more when using git in idiomatic ways.)

I agree with your other criticisms, but IME git's branching-heavy approach makes this one easier to avoid: it's natural and easy to create a separate branch for that bugfix and then merge it everywhere that it's wanted. Or to just merge your colleague's feature branch that has the fix on into your own feature branch, if you know their branch doesn't contain any dangerous changes / will hit master ahead of yours.

(If the diff is actually bit-for-bit identical I think it doesn't conflict? But obviously that doesn't happen so much in practice. git merge -Xignore-all-space can be helpful in some cases).


Yes, but to make that work you must do it for every change you want to make, even the most trivial. Find a typo while you're working on something else? You can't just fix it. You have to branch, edit, cherry pick, commit, and then push to make sure someone else doesn't find and fix the same typo. It's damned annoying because it would be so simple to fix: just tweak the diff algorithm to check if the conflicts are identical and auto-merge them if they are. How hard can that be?


There shouldn't be a cherry pick involved, and my protip would be that you don't actually need to make a local branch for small changes; quite often I'll just do `git checkout origin/master`, make the fix, commit, and `git push mine HEAD:refs/heads/quickfix`. But yeah, it would be nice to have a way to just commit a diff at a different point in history without having to check it out.
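A sketch of that flow; the remote name `origin` and the branch name `quickfix` stand in for whatever your setup uses (the parent comment pushes to a remote called `mine`):

```shell
git checkout origin/master                 # detached HEAD at the remote tip; your branch is untouched
# ...fix the typo, then:
git commit -am "Fix typo"
git push origin HEAD:refs/heads/quickfix   # publish without ever creating a local branch
```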


> There shouldn't be a cherry pick involved

There is if you discover the typo in the middle of working on something else, which is usually the case for me.


> There is if you discover the typo in the middle of working on something else, which is usually the case for me.

Why/how? If I had other uncommitted changes when I found the typo I'd stash or commit them so that I had a clean checkout to do the typo fix from, but can't think of any case where cherry pick is the right answer.


Yeah, but I find that can be pretty disruptive to my flow. You have to:

1. Switch from your editor to a shell

2. Stash

3. Switch from your shell back to your editor

4. Revert the editor buffer

5. Fix the typo

6. Save

7. Switch back to your shell

8. Commit

Now you have to push that fix into all the upstream branches (because the whole point of this exercise is to avoid merge commits from someone else fixing the same typo), so:

9. Checkout

10. Pull

11. Merge the typo fix (at which point you might discover that someone else has already fixed this typo, or that you have an actual conflict that needs to be dealt with)

12. Commit

13. Push (at which point you might discover that someone else was in the process of fixing the same typo and just happened to push before you did. This is unlikely, but it becomes more likely the longer you wait between step 10 and 11 because there is a race condition here.)

14. Repeat for every upstream branch in which anyone could have already fixed the same typo

Then, finally:

N. Stash pop

N+1. Switch back to your editor

N+2. Revert your editor buffer

N+3. Try to remember what you were working on before you started this process.

It's a hell of a lot easier to just fix the typo and say, "ah, fuck it", and then curse at git for not being smart enough to figure out that identical changes aren't conflicts. ;-)


> Yeah, but I find that can be pretty disruptive to my flow.

Well, I think it's very much worth having git support integrated into your editing tool, so for me the workflow is more:

1. Shelve current changes, if any

2. (Optional) fetch origin

3. Switch branch

4. Make the fix

5. Commit-push

6. Merge this branch upstream

7. Switch branch back

8. Pull upstream master

9. Unshelve if necessary

I wouldn't consider it my responsibility to apply the fix to anyone else's branch - rather every branch has an owner and it's their responsibility to update from origin master at whatever frequency suits them - and if someone else has fixed the problem in parallel then either I notice at stage 4, or, if they did it in between me reaching 2 and me reaching 6 then 6 fails harmlessly (I just continue to 7 and 8 and I pick up their version of the fix, no need to resolve any conflict). This does rely on having a single shared "origin master" (whether that's your git hosting system, your release manager's machine, or something else) that can do step 6 atomically - if you really want a 100% decentralised system then yeah the price of that is eventual consistency.

All that said, git merge absolutely does treat a byte-for-byte identical change as not a conflict (I just confirmed with git merge-file), and will treat an identical-except-for-whitespace fix as not a conflict if you pass it the appropriate flags, so I think something else must be going on to cause the problem you're seeing.
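That's easy to check with `git merge-file` directly: given three files where both sides made the identical edit, the three-way merge resolves cleanly with no conflict markers.

```shell
printf 'one\n' > base.txt
printf 'uno\n' > ours.txt      # both sides changed "one" to "uno"
printf 'uno\n' > theirs.txt
git merge-file -p ours.txt base.txt theirs.txt   # prints "uno", exit code 0: no conflict
```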


> I wouldn't consider it my responsibility to apply the fix to anyone else's branch

I guess my situation is a bit unusual. We have two production branches because we're running our code in two different environments. So we have two branches that are effectively origin/masters.

> git merge absolutely does treat a byte-for-byte identical change as not a conflict

That's news to me. I guess I'll have to try that again because I have a very clear memory of seeing this happen and thinking WTF? But that could have been a long time ago.


The recommendation is actually to use a different algorithm for the language that you're working in, so I just use Beyond Compare and 99% of my simple conflicts are auto-resolved. It'd be better if these algorithms were just built into the CLI itself, but they decided, Unix-style, to delegate.


>The plumbing is great. The porcelain is cracked and mildewy.

You might enjoy using Magit.


I would if I used Emacs


Plenty of people run Emacs purely in order to use Magit & don’t use it for anything else. That’s how good Magit is.


It is excellent, and you can use it well without reading anything (assuming you know git, and I know emacs so maybe that too, although it has the pleasant property of listing a lot of keyboard commands in the UI).


> The underlying model is beautiful, but the UI is atrocious.

I definitely agree. The UI is terrible. But it is learnable.

I would recommend using a GUI initially so you can learn what the operations are and do without having to figure out that it's not `git remote list` it's `git remote -v`; it's not `git clean` it's `git checkout . && git clean -fdx` and a million other paper cuts.


Are you using Git via GUI(s)?

Between my colleagues, there is a strong correlation between using a GUI, and messing up the repository.

I agree that Git's UX is pretty bad (checkout overloading; overlapping between checkout and reset; push overloading... yikes!), however, I believe that in contexts where using Git is a constraint, dropping the GUIs is the best strategy one can apply to improve one's understanding.


I disagree.

First of all, a well-designed Git GUI (examples below) exposes git's underlying data model to you; in contrast, the CLI obscures it until something breaks and you're forced into it without context. Git operates on a graph and there is simply no way around it. The more you're exposed to the graph, the better you can mentally model it and ask it to do the right things.

While there are many half-baked Electron-based UIs that only unnecessarily complicate things, there are good ones too:

On Windows (and Linux thru Mono), I use GitExtensions. The visualizations are sane and discover-friendly. It will tell you what to expect.

When I work on C++ projects, the one built into CLion (JetBrains family, as mentioned by a sibling comment) is very good on its own. What impresses me the most is that it has good visual support for a patch-oriented workflow. You can work with multiple "changelists" offline, shuffle individual changes around, seamlessly convert between changelists and patch files, and (most importantly) still work nicely with the vanilla git model as the "actual history". It also works transparently with conceptually monorepo projects that have multiple physical repos, allowing you to do simultaneous commits.

I feel VSCode, GitHub client, Kraken, and several other Electron-based stuff, are too focused on the "polish" than substance, or are too opinionated to be used across repos I don't own.


So back when I was learning git, I was very shy about using any GUI because I was afraid that it would make learning how git works that much harder. I think that I somehow felt that the CLI was more fundamental, in some way. But I think you are right, and I was wrong. A better interface that puts the graph front and center would have led me to learn it so much faster, especially when it also exposes the command line equivalents, as magit does (or so I've heard).


The CLI doesn't obscure as much as presupposes that you understand internal workings of git.

All GUIs are not created equal. The JetBrains GUIs are pretty good. Others try to impose git "their way" and just invite creating a mess.


Yep I love the git GUI in Jetbrains IDEs, especially the change lists and shelf based workflow. I also like using the IDE’s diff to quickly use parts of an old shelved patch or when I’m working through a book I can diff my project with the answer repo to quickly see and work through the differences. But my favorite feature is Local History which has saved me from many overzealous git disasters: https://www.jetbrains.com/help/webstorm/local-history.html

Basically the IDE has its own revision history of your project's files that it stores, and you can recover your files from it. So when you accidentally revert instead of undo, or use --hard when you didn't really mean to, Local History comes and saves the day.


I use Git via the GitHub Desktop client (https://desktop.github.com) and find it _very easy_ to use Git without issue by following a simple rule: don't be clever. I have branches, I commit, I squash merge via a Pull Request. No rebasing, no moving commits around. There might be workflows where rebasing etc. are important and certainly in those cases, using a GUI is probably not a great idea -- but it's certainly possible to use a GUI without issue if you keep your workflow simple.


I squash locally, but learning to do that required learning vim. Fortunately there's vimtutor. Then my distro changed the default text editor launched from git... fortunately I knew enough to get by with that one, which might be emacs but I haven't verified.

Git is really weird but useful.


Git will use whatever editor you have defined in $EDITOR. You can also define this specifically for Git via global config (~/.gitconfig) if you don't want it to use your session's $EDITOR variable:

    [core]
       editor = vim


The default editor is usually configurable (of course you'd have to learn all the different contexts you can do this from first to know this is a thing ... discoverability is hard).

For example in ubuntu you can do

  sudo update-alternatives --config editor
Some programs will use the environment variable $EDITOR, so you can add this to your shell startup configs

  export EDITOR=vi
Or specifically for git cli you can run

  git config --global core.editor vi


I use the Pycharm/Jetbrains Git GUI. At this point I can't imagine effectively handling merge conflicts any other way.

EDIT: Also, I really like being able to look at a diff of every file before I commit, and easily choosing which files to include in a commit. Too often I see people on the CLI accidentally committing changes they didn't mean to, because there is no easy way to check everything at the last minute.


even if you use the git CLI you can still set up a mergetool so that when you are resolving merge conflicts you can use something like BeyondCompare or P4Merge to handle the merge conflicts.


And every Xcode installation on macOS comes with FileMerge.app.


Meld


There’s a really easy way to check the diff before committing on the command-line:

    git commit -v
Displays a diff at the bottom of the editor that pops up to write a commit message.


you can also set

  git config commit.verbose true
to always have this behavior


The Jetbrains IDE git GUI is by far the best git GUI I've used, I feel lost trying to use git without it.

There's a user request that's been sitting around for a while to pull it out into a dedicated application, which I could personally get behind since we have one project at work that's pretty difficult to run outside of Eclipse

https://youtrack.jetbrains.com/issue/IDEA-152437


Yes Jetbrains have totally nailed this. I have tended to prefer the command line for git but the exceptionally designed GUI support in Rider is fast changing that - especially for rebasing.


What's wrong with 'git diff'?

Then there's 'git add -i' to easily choose which files to add


> What's wrong with 'git diff'?

Not GP, but I find it extremely hard to parse visually even after having had to do it a lot. Something like git-delta makes it bearable, but a decent syntax-highlighted, color-coded, side-by-side diff view with proper keyboard navigation (next diff, next file) is a huge quality of life boost for me.

> Then there's 'git add -i' to easily choose which files to add

Didn't know that, just tried it, and, wow, that's easily the most unintuitive and inscrutable TUI since vi. I haven't been able to make it show a simple diff; someone else also tried, failed as well.


it's very easy from the CLI. Much easier than from GUIs actually

  git diff
  git diff --staged
  git restore --staged --patch
  git commit
you can also add commit hooks to reject changes like conflict markers
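To make the conflict-marker idea concrete, here is a minimal sketch of such a hook, exercised in a throwaway repo (the hook body and grep pattern are illustrative, not a built-in git feature):

```shell
#!/bin/sh
# Demo in a throwaway repo: a pre-commit hook that rejects staged
# changes still containing conflict markers.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
# Fail the commit if any staged (added) line starts with a conflict marker.
if git diff --cached | grep -qE '^\+(<<<<<<<|=======|>>>>>>>)'; then
    echo "error: staged changes contain conflict markers" >&2
    exit 1
fi
HOOK
chmod +x .git/hooks/pre-commit

printf '<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> other\n' > file.txt
git add file.txt
if git commit -q -m "bad"; then
    echo "hook did not block the commit"
else
    echo "hook blocked the commit"
fi
```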


I'm a huge fan of git fork [1] on windows (or mac). Looks good, never failed me and does everything I need (branching, rebasing, merging, squashing, cherry picking, blaming and it can even show lost commits with reflog). And it performs really well (in contrary to sourcetree).

[1] https://git-fork.com/


This looks neat! And a one-time purchase, too.


Sorry, I'm trying to get work done, not memorize an obscure set of incantations like some kind of D&D wizard. If the repo gets messed up, I delete it and reclone.


I really understand your sentiment, and I've been there for long enough myself - but once you have a somewhat decent understanding of git's internal data model, the commands that allow you to clean up pretty much any mess (reset & reflog probably being the most prominent ones) start to somewhat make sense. If you've had previous exposure to anything computer science, it very likely won't take more than a few hours until the puzzle pieces start to come together.

And it's time well-spent in my opinion; git will probably be one of the longer-lasting constants in software development.


> And it's time well-spent in my opinion; git will probably be one of the longer-lasting constants in software development.

Unfortunately.


It's a handful of commands, very well documented, with tons of SO questions one search away if you can't figure it out yourself.

It's something I use all day, every day. I'd say it's worthwhile to learn if your daily job involves working under source control


Yes I agree. If you’re going to use git (whether by choice or force) it is 100% worth learning the small set of commands required to undo a screwup without needing to reclone.

Recloning to me is a bit like tearing your house down and rebuilding it just because you painted your living room the wrong colour (in most cases!)


at least if you are a carpenter


Isn't understanding how to use your tools part of being effective at getting work done?

Sure, recloning and cp-ing your changes to a fresh local repo will _work_, but what happens on the day you don't have access to the remote? What happens when you have to work on a repo that takes minutes to clone from the remote? All of those lost minutes because you don't understand how to use your tools mean you aren't getting work done.

> memorize an obscure set of incantations like some kind of D&D wizard

If you're using Git then you're probably someone that works on software, isn't our whole job memorizing and reproducing permutations of obscure incantations to produce business value? Knowing how to use your tools provides business value, that's not some hot take but literally how you provide value to whoever pays you.


I find GUI's to be very helpful; most of my commits happen through VSCode's commit panel. For my home Unity3D project I use sourcetree because my commits unfortunately end up being to a lot of unrelated files (blame unity), and having a UI to stage them saves a lot of typing. I avoid anything like rebasing (never got the point of that), but I use branches liberally. I find with this setup, I only need to use a handful of command line options, and I haven't screwed up a local repository in a long time.


If you are on Windows, Visual Studio's GUI is actually quite good for staging, merging and diffing. I tend to use it for most of those simpler kinds of things while still doing commits, rebase etc. from the command line.

Same for VSCode on non-Windows.

I only mention this because I came to hate Sourcetree in its newer iterations on Windows so much that I tried out Visual Studio's support and was pleasantly surprised.


Funnily, I use a GUI almost exclusively to resolve mess-ups: Ungit is a great tool to understand the current state of the repository and fix it.


Not a single person I’ve met using git in the last decade has thrived using a UI for it.


> Not a single person I’ve met using git in the last decade has thrived using a UI for it.

I do most of my git through one or another UI (right now, mainly the integrated functionality in VSCode, a gitflow workflow extension for VSCode, and repository history graph extension for VSCode that supports doing operations against branches/tags/commits from the graph.) I feel like I’m thriving that way.

It's not a substitute for knowing git, though, and I think that people who lean on a UI as a substitute for knowing what is going on underneath, rather than as a convenience layer, are not likely to thrive.


That's a failing of Git. TortoiseSVN brought source control to millions of people. A good tool should be fully embeddable in a UI, 15 years after its launch.


We still use tortoiseSVN at work. It's crude but shockingly simple to use. Moving to git would be a significant expense just to train people not to break things and to get them used to CLI.


TortoiseGit exists too and is fantastic IMO: https://tortoisegit.org/


Wish they put some screenshots on their site, but I'm glad it exists if it does for git what TortoiseSVN does for SVN!


Indeed, it makes using git so much less painful.

There are days where I daydream of Clearcase when dealing with git.


Early in my career- when I got evil twin error- I was conditioned not to daydream of clearcase ever. :)


On the positive side, messing up the view configurations doesn't wipe our files.


Or a success of git? No good GUI exists because they realise they'd just be recreating things that exist, but with a GUI frame?

If you wrote git 2.whatever from scratch would you structure the commands a bit differently? Yeah, sure, probably; but I always think these threads are way overblown. The common stuff that you use frequently.. well you use it frequently, so either you remember it as a result or you use the alias feature so that you can. For the less common stuff.. if you have to look it up in the excellent documentation, is that a failing?


> if you have to look it up in the excellent documentation, is that a failing

Yes, if we ever want version control to become mainstream.

Version control is a very practical, day to day concept.

It's just almost unusable for regular folks.

And regular folks for sure won't use CLIs (unless you point a gun to their heads).


Version control is mainstream; git isn't, and isn't trying to be? Git is version control for plaintext files (yes, yes, I know LFS exists), primarily software, and used by people who like using CLIs and have all kinds of other CLI tools for it to interface with.

Who's trying to make 'regular folks' use git, with or without a gun?


> Who's trying to make 'regular folks' use git, with or without a gun?

I would have loved to have that option quite a few times. Lots of projects that involve software, but also get contributions from non-developers, or need almost no user-facing config, but not exactly zero. If git wasn't so hostile, quite a few tiny config web UIs I've built over the years could have been SublimeText or the like and a very userfriendly git client. It's kind of frustrating to have a repository with a directory full of .sql files, a CI pipeline that versions and verifies those and deploys the updated app, but the domain expert who is perfectly capable of writing the SQL still needs me to put the file in the right place and commit+push it, because they (understandably) just won't use git on Windows.


> they (understandably) just won't use git on Windows.

I'm not sure how understandable that is, especially in the age of WSL - but I just won't use Windows so what do I know :)


magit (for emacs) is quite good. But I still only use it for browsing around mostly, or single-file commits. For serious work, I start with git status, and then git diff all the changes, and then group them and then commit them in groups, then pull --rebase then push HEAD:good_branch_name (by the time I've done all the above, I have a better chance at a good name than I do for the first commit). Then over to PR land, where we have "ff if possible and delete source branch."


I've said this in other threads but I'll proselytize here as well. I really started to grok git after using the lazygit terminal UI. I think it's really handy to see what the current "state" of git is in and how to browse it easily. Would continue to recommend.

https://github.com/jesseduffield/lazygit


I agree and limit Git Gui usage to read-only use, any mods to git I use command line.


Git Gui is a decent tool to compose commits, but for everything else I can't see myself using anything but the command line.


I do virtually everything with Magit. It's the best of both worlds IMO. You still need to actually understand git and its commands but it's like using the CLI with far fewer keystrokes and a better log interface than the CLI.


gitk (and similar) are great for browsing the history and figuring out what is going on. I couldn't live without it.


For some reason people love to defend the obscure and strange and oftentimes objectively terrible Git CLI. I’ve found Mercurial much more straightforward for my (mundane and boring but prevalent) use cases, and I lament that it isn’t more widely used.


Every single time we are discussing git this comment shows up.

But why is one popular when the other one is so much better? I guess we will never know


There are several reasons. In the beginning, hg was criticized for being slow, as it was written completely in Python. The Linux team going with git instead created a lot of attention for git, probably also among people who were fascinated by its capabilities and did not care much about user friendliness. The biggest push for git undoubtedly came from GitHub, which is now the premier platform for hosting software, especially open source software.


GitHub is the reason. Turns out giving beer to developers globally is an effective way to get a technology adopted.


Bitbucket used to provide free (and pretty good) Mercurial hosting.


Bitbucket had a much crappier UI.


Actually, one of the reasons I preferred hg to git is that its Windows Explorer GUI integration back then was far superior to git's, which was buggy as hell.


This doesn't seem relevant to the comment you're replying to. Bitbucket's web UI sucks, which is why it didn't have the effect on Mercurial that Github had on git.

Having used Bitbucket and GitHub, I think GH is much nicer - both in the sense of basic stuff like page loads being faster, and in terms of features. And since it's the main tool that everyone on the development team spends their time on for communication & collaboration, those things really matter.


Also, remember to compare the sites ten years ago rather than today.


That is solely because of network effects. Any one of them could have been the popular one, as could any of the weirder ones like darcs, even with its technical flaws (mostly fixed by now).


The Torvalds effect.


I know what you're implying, but this could also be interpreted as "battle proven", or how about "guaranteed to still work in 30 years"?


Git and Linux go hand-in-hand, so there is your killer use case right there.


This also shows that quality of the technology matters. Like the discussion on the business value of Amazon's "use APIs always," using git is using a superior source code technology, designed from the ground up for distributed development by a master of distributed development; good things are enabled automatically. Linus' naming and UX choices and inconsistencies aside, the tool is awesome. That's why it wins in distributed development environments - the bazaar not the cathedral and not the bespoke engineering team hidden away in the corporation.


Although it sounds like a silly reason, compare the names themselves. I don’t even know how to pronounce “mercurial” without looking it up. The word doesn’t exactly roll off the tongue like git does. And then the command itself is “hg”. WTF is up with that? I mean, ha-ha we all get the joke, but was the program made for chemists? Unnecessarily clever. Don’t underestimate the extent to which a difficult/confusing name/brand can harm adoption.


For non-native speakers it's much more obvious how to pronounce mercurial than git. Mercurial can only be pronounced like mercury, I guess, is there any other sensible choice? But in git, it's not obvious if the g is pronounced as in get or as in gin until you look it up.


> I don’t even know how to pronounce “mercurial”

But you do know how to pronounce "Unnecessarily"?


I think for most people, they would encounter it in a science class early in life (at least for English speakers). And Hg is just the chemical symbol for mercury, https://pubchem.ncbi.nlm.nih.gov/element/Mercury#section=Ide...


Personally I think it just needs a 3.0 where they completely rename all the commands so that they're really unified. I know there was pushback on this in the past


The thing with command line interfaces is that since the same interface is used by humans and computer scripts, you essentially end up with an unversioned API that you can never make breaking changes to.


It is versioned. There is git --version. If your scripts break with the new version, don't upgrade.


With other sorts of APIs you are given the choice between say the v1 version or the v2 version, with both endpoints being available at the same time. However, when it comes to command line programs for all practical purposes you can only have a single version installed at a time. And it of course doesn't help that essentially none of the existing scripts even check the versions of the software they run.


This is easily fixable with a new command or envvar.

This happens all the time


That would be nice but will probably never happen due to backwards compatibility. Breaking changes in git would be even worse than the slow switch to Python 3.


Just call the consistent one "ggit" and the inconsistent one "git."


The UX philosophy is radically different. One presupposes that you understand a lot more of the underlying system. The other tries to focus on what it is what you want to do, as opposed to how.

Mercurial is less intimidating if you don't know much about internals. But tbh, when I use Mercurial I still find myself searching which command to do things. As a frequent power user, I find the Git CLI to be more useable.


> The truth is, while we use git every day, most people really don't understand how it works.

I once knew how it worked with moderate level of detail, but I simply do not need anything advanced for my day-to-day work.


Yeah that's my issue as well. I generally forget anything I don't use often and git is full of important stuff that you need infrequently.


It's very simple: it's a graph, and you can reason about what changes you want to make to the graph, and then google for the commands. Rewriting history is really only reserved for binary check-ins, and that task should be assigned to whoever checked them in, so you should never need to do a destructive change. Even credentials checked in should be rotated so they aren't valid, not rewritten.


I come across this attitude a lot.

Usually my response is if you use something daily, and you know you lack the skills to use it effectively, why don't you improve your knowledge and seek out training or education?

Do you treat a new programming language or framework with the same disdain?

I know I'm gonna get some hate for pointing this out, but this same disdain is what causes things like the branchless workflow. A workflow that hamstrings you by using git stash as a poor man's branch. I get it, I used that as a crutch for years, then I spent some time learning git properly.

Most people haven't even watched the one hour talk where Linus talks about the design and building of git.

One hour might sound like a lot to understand why branching is so amazing, and why distributed source control is hard but it's a tool I've used for almost a decade and won't stop using for the next decade. I think it was worth it.


At one time I was an expert in 'C'. I was the go-to guy at our company for porting and performance issues and was often just handed the entire project if it involved porting.

I have no memory of any of that beyond the basics now. Git is like that. There is no way I will remember commands I only use once a quarter or so when something goes wrong.


Bingo.


I don't know what to tell you, commit more code? Branch more? Work in teams?

Maybe git isn't the right tool for you?

I have a set of commands I use multiple times a day, for everything else there are manuals and docs to reference.

Git branch, commit, rebase, merge, clone, check out, pull, push, submodule, remote, and maybe a couple more, are there specific commands you don't use daily? Other than remote and submodule I use all of those almost daily.


That's nice, but most of us here are version control consumers, not version control professionals. We need something that has very few knobs to turn because our job is focused around delivering value through other tasks.

Git is highly professionalized. It has layers of modal state. That is built into the operating model. It is made for Linus Torvalds, a professional merger of code. If you are using all of those commands "almost daily", you are a professional code-merger too.


This is such a strange attitude to source control as a developer. It's literally the most important tool a developer will use second only to an editor.

And again, maybe git is the wrong tool for you, that was kind of my point, use cvs or svn it's much closer to what you seem to want.


Some other things I rank above version control: Programming language compiler, runtime, etc. Email and similar communication tools, phone, chat, whatever. Web browser for documentation, Q&A, etc.

Some sort of basic version control is definitely important.


If only it were that easy. Often the decision of what SCM platform to use isn't up to the developer teams, but rather IT.


I was replying to this part of your comment:

> Usually my response is if you use something daily, and you know you lack the skills to use it effectively, why don't you improve your knowledge and seek out training or education?

Seeking out that training would be useless since I don't use git enough.

> Maybe git isn't the right tool for you?

Git is definitely the tool for me. It's better than any other available tool for my situation.

Why the hell would I branch more? Just to get better at git? Sorry, I'm sorry that I work on a small team! Maybe I should go back to CVS?

Stop gate keeping.


> Why the hell would I branch more? Just to get better at git? Sorry, I'm sorry that I work on a small team! Maybe I should go back to CVS?

I have multiple branches on the go in a one-person project. I find that very useful. But if you don't, maybe you should go back to SVN? Using git without merging between branches seems like you're making your life complicated for no gain - like using a distributed system framework to run a single-node system.


I have multiple branches, we use gitflow (well modified). This isn't about branching, it's about what happens in the edge cases and how difficult git is when things go wrong.

The comment I'm replying to said "branch more" as if that would solve some kind of problem with edge case complexity.


That comment likely assumed merging between branches was the "edge case" you were hitting; the general principle - do whatever it is more often, so that the edge case becomes routine - is sound. I found gitflow horrendously overcomplicated so I can imagine it might introduce some edge cases, but I'm honestly struggling to think what they might be - in my experience as long as you never rebase/cherry-pick/squash, frequently have parallel branches, never rebase/cherry-pick/squash, and frequently merge, there aren't actually any edge cases in git - as long as you remember to never rebase/cherry-pick/squash.


This is just retarded.

You don't want branching and you still think git is the tool for you? Compared to say svn?

Honestly, use svn, it's the better tool for your use case.

You don't have to use git.

Don't gaslight me by saying you don't need git's main feature and then suggest I'm gatekeeping. I'm not; you're welcome to use git, just don't complain about your lack of ability to RTFM.


Nice ableist slur, on brand.

I use a modified gitflow. The article and this thread are about how when you get into trouble, git quickly becomes complicated. It’s not about branching being too hard or whatever you turned this conversation into in your head.


> I know I'm gonna get some hate for pointing this out but this same disdain is what causes things like the branchless workflow. A workflow that hamstrings yourself to git stash as a poor man's branch.

FYI, the branchless workflow linked to in the post is really the opposite of this. It encourages making commits even more often than you would in the traditional branching workflow, and discourages using the staging area or stashes when commits would work fine.


The problem with this is that eventually you need to branch; in that case, stash works well. But eventually you learn that branching is better and easier than stash 99 times out of 100.


All my git training starts with a whiteboard and drawing out a commit tree and branch pointers. I only talk about the commands in reference to the drawings, not the other way around. Most commands are manipulating this commit tree, so it gives a visual to latch understanding onto.

`git reflog` shows the commits where your current branch pointer has been.

`git reset --hard` moves your current branch pointer to the given commit, then modifies your working directory to match that commit. `--soft` moves the branch pointer and does not modify your working directory. `--mixed` moves the branch pointer and does not modify your working directory, but does clear staging.
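A quick way to see these three modes is to run them in a throwaway repo (file names and commit messages here are just for illustration):

```shell
#!/bin/sh
# Throwaway-repo demo of reset --soft / --mixed / --hard.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one > file.txt && git add file.txt && git commit -q -m "first"
echo two > file.txt && git add file.txt && git commit -q -m "second"

git reset --soft HEAD~1   # branch pointer moves back; change stays staged
git status --short        # "M  file.txt" (staged)
git reset --mixed HEAD    # clears staging; working tree untouched
git status --short        # " M file.txt" (unstaged)
git reset --hard HEAD     # working tree replaced too; the change is gone
cat file.txt              # back to "one"
```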


> What does "git reflog" or "git reset --hard ...." do? What are the implications?

This is supposed to be covered in `man git-reflog` and `man git-reset`. I admit that it could be more readable though. Currently, it's more of a technically-correct introduction than a layman's. I guess it's really more of a documentation for experts. Some have made fun of this: https://git-man-page-generator.lokaltog.net/

In layman's terms, `git reflog` is the history of the positions you were at: you'll see every commit you've visited recently, so as long as something was committed, you won't lose it. It's here in case you lose some commit identifier (for instance you finished rebasing but are not happy with the result: the branch now points to the sad commit. Grab the reflog, copy the commit identifier and reset the branch to point it to where it was before).

And `git reset --hard`... `git reset` changes the branch "tip" (pointer) to another commit: the commit tree always exists. Branches are "named commits". `git reset` moves these pointers around. The `--hard` part "just" replaces the entire content of your working directory (including non-committed changes) with the commit you give as a parameter. With no parameters, it just resets to the latest commit in that branch, so it's like saying "clean my working directory back to what was committed, discard my changes". Perfect for losing work.

I agree that git "porcelain" commands are sometimes ill-named and a bit counter-intuitive to grasp. For me, learning git paid off (sort of: I don't spend time fighting my issues, I spend it helping others fix theirs).

I'm keeping an eye on better-designed alternatives like pijul. Mercurial is interesting and has better-named commands, but I've sunk some time into learning git already and know it better, so hg has very little more to offer me.
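A sketch of the reflog rescue described above, in a throwaway repo (names are illustrative):

```shell
#!/bin/sh
# Throwaway-repo demo: "lose" a commit with reset --hard, then recover
# it from the reflog.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

echo a > a.txt && git add a.txt && git commit -q -m "keep me"
echo b > b.txt && git add b.txt && git commit -q -m "precious"

git reset --hard HEAD~1      # "precious" no longer reachable from the branch
git reflog                   # entry 0 is the reset, entry 1 is "precious"
git reset --hard 'HEAD@{1}'  # move the branch pointer back
git log -1 --format=%s       # prints "precious"
```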


Minor addition: branches are named commits that move along as you commit, while tags are named commits that stay.
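That distinction can be seen in a couple of commands (throwaway repo, illustrative names):

```shell
#!/bin/sh
# Throwaway-repo demo: the branch pointer follows new commits, a tag stays.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo

echo a > f.txt && git add f.txt && git commit -q -m "first"
git tag v1                    # tag pins the current commit
first=$(git rev-parse HEAD)

echo b > f.txt && git add f.txt && git commit -q -m "second"

test "$(git rev-parse v1)" = "$first"     # tag still points at "first"
test "$(git rev-parse HEAD)" != "$first"  # branch tip moved to "second"
```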


Have you tried reading the manual[1]? I read it cover to cover once and it is invaluable because it gives you the ability to understand and describe what you're trying to achieve. The solution is always one search away if you know how to ask.

[1]: https://mirrors.edge.kernel.org/pub/software/scm/git/docs/us...


>The truth is, while we use git every day, most people really don't understand how it works.

Most people don't know how the Internet works and yet it's widely used.

You don't need to understand the inner workings of git. You just need to know some commands and some basic concepts.


> Most people don't know how the Internet works and yet it's widely used.

Yeah. I've met many developers who have no idea what a cookie even is, people who have never read a single IETF RFC.


Me too, far too often. Those developers are most often negative contributors...


The problem is that not knowing git internals can very easily backfire by deleting data, rewriting history, etc.


> The truth is, while we use git every day, most people really don't understand how it works.

It is a tool. One should not need to understand the inner workings of a tool to use it. How many APIs do we use where knowing how it does what it does is required? Whether it's Stripe or Node or Bundler or ..., the user does not have to know what happens inside. The need for incantations makes some people feel powerful or exclusive. I just want my tools to do their job so I can focus on doing mine.


...plus git operation names are a bit confusing (pull vs. fetch, etc.)


> What does "git reflog" or "git reset --hard ...." do? What are the implications?

If you have ever done a "git reset" (which implies --mixed), then you should know what "hard" in this case means. I was not sure, but I knew that it would most likely get rid of my uncommitted changes, so I "git stash", then "git stash pop".

Do not run anything without knowing what it does; you should consult the manual page: "man git-reset". Search for "--hard", and you get:

  --hard
  Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded.
I do not find myself reading the manual pages that often anymore. That said, I do have my own notes which I read sometimes, just to be sure.

Slightly off-topic, but I love working with meld! If I do a "git rebase --onto [...]" I do not get "meld" opened automatically; you gotta type "git mergetool" to resolve conflicts.


I've noticed this. Every time I try to explain hash maps, Git blobs and commit trees to people, their eyes glaze over. If you are ever curious, though, the documentation is kinda cool.

https://git-scm.com/book/en/v2/Git-Internals-Git-Objects


+1.

I wonder if it is possible for someone to write a meta-CLI that works on top of git, like "git for humans", while still leaving the power for expert users.


90% of the time all you need from a version control system is update -> make your changes -> commit/push your changes.

You just don't need to learn more unless you really require it. And even if you do learn, you will eventually forget if you don't use all the features.

It's not hard to see why people don't even bother.


I suggest to try this: www.pluralsight.com/courses/how-git-works. It covers the fundamentals behind git but without the usual 'nose up in the sky attitude and elitism' that could be seen in many books or articles about it.


I do use git every single day. It’s a super valuable tool. Maybe my bad on not taking a course or reading the manual from cover to cover.


This is why I was so sad when Mercurial support ended. I loved using Mercurial; it just works and it was so intuitive. My firm last year switched all projects over to GitHub and it's been a pain. I learned it no problem but still have to google the occasional thing, and I spend most of my time fixing other people messing up the repo.


You're not the only one. https://xkcd.com/1597/


Points for honesty. Obligatory XKCD [0].

FYI: `git reflog` saves your ass when you accidentally delete a branch locally, only to then realize there were valuable commits yet to be pushed to master! How it does it is another matter.

Systems enabling high usefulness with little education (i.e. shallow learning curve, at least initially) and then incremental education are ideal. This isn't essential, just desirable. Git can err a little on the steep learning curve/mystery black box side of things a bit. But it is still a damn fine tool.

[0] https://xkcd.com/1597/


I'm sorry, but how many bits of git's UI do we have to force users to manually replace before we realize that the entire problem is git's UI?

Between the tone deaf responses here about "using a GUI client is the problem" and the tone deaf responses of "you just have to learn its internal architecture," it should be obvious what the problem is. The problem is not just being able to undo a mistake (though that's certainly one of the problems). Git is an incredibly user-hostile experience, and someone needs to fix or replace it. Can we just say it out loud and stop pretending that the problem is the literally thousands of users who have problems using it?


There is no doubt that a bunch of the general commands are really poorly named, or mixed together. `git checkout branch` means switch to a branch - that's fine. But `git checkout filename` means "undo changes to the file". What? That's totally insane.

`git branch new-branch-name` means create a new branch. Great, but it doesn't check out the branch, which you want like 99.9% of the time. If you want to do that, you use `git checkout -b new-branch-name`. Yes, a third separate use for git checkout.

Why not (e.g.) `git branch -c new-branch-name` with a config option to make `-c` the default if you want it?

If you rebase and push, it tells you to do a `git pull` to "fix" it, when in every workflow I've ever done, you want to add a `-f` and push away, just be aware that you are intentionally overwriting the remote branch.


New commands 'git switch' ('git switch -c' for a new branch) and 'git restore' address exactly that issue and are available in newer git versions, so it's being worked on!

I do agree with the second example. Pulling and merging with remote after a rebase makes a terrible mess!
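For reference, the mapping between the overloaded checkout forms and the newer commands (git 2.23+; branch and file names are illustrative) can be sketched in a throwaway repo:

```shell
#!/bin/sh
# Throwaway-repo sketch of the git 2.23+ replacements for the three
# overloaded checkout forms.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt && git add file.txt && git commit -q -m "init"

git switch -c feature     # was: git checkout -b feature  (create + switch)
echo v2 > file.txt
git restore file.txt      # was: git checkout -- file.txt (discard changes)
git switch -              # was: git checkout -           (switch branches)
cat file.txt              # still "v1"
```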


And if I create a new branch locally, why must I specify what name to use for it on the remote? Why would I want another name, I wonder? It's fantastic that it can do this, but the default should be the branch name I'm using, and that's that.


> And if I create a new branch locally, why must I specify what name I'd use for it on the remote? Why would I like an other name I wonder? It's fantastic that it can do this, but the default should be the branch name I'm using and that's that.

You can fix this by doing `git config --global push.default current`.


Are you referring to the git branch command? As far as I'm aware, you don't have to specify the name of the remote branch when creating a new local branch. And when you push to the remote, a branch with the same name is created by default. You would have to specify the remote branch name when running git push if you wanted a different name for the remote branch.


My env doesn't do this. Here's what I did:

    git checkout -b test
    git push
And I got "fatal: The current branch test has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin test"

But in this situation, we actually push with "git push -u origin branchname".

After deleting the "test" branch, I recreated it with "git branch test", switched to it and tried a push, and I got the same error message.


AIUI the problem there wasn't with the branch name, but that the "origin" bit -- the reference for where to push to / pull from -- was missing. The -u parameter someone else suggested apparently sets that at the same time as pushing/pulling, so it combines the effect of the "--set-upstream" suggested in the error message you got with the push/pull itself. (A bit like "checkout -b newbranchname" combines creating and then switching to a branch; there is some consistency to the UI, but it's weird and spotty.)

I've almost never had this problem; nearly all branches I've come across usually "know all by themselves" where their remote is. I suspect they inherit it from a global or repository-level setting that might be missing from your setup. Have you tried executing that "--set-upstream" it suggests? Maybe that sets it for the whole repo, or have you had to repeat it for each branch (like the "-u" parameter for the push seems to imply)?

Or maybe, even if you have the remote set correctly on the repo level, branches just somehow don't "inherit" that...? Idunno, maybe some magic setting missing. Aha, one further possibility: If you set the remote on the "master" (or "main") branch, maybe branches branched off from that (and others branched off from those, etc) "inherit" the remote setting?

Finally: Creating a git repo by cloning instead of from scratch (git clone instead of git init) automagically sets the remote for the master branch of the newly cloned repo to the master of the repo it was cloned from. Maybe, if this is how it works, that's why it's always worked for me: I've worked mainly with cloned repositories, and my branches are almost always -- albeit sometimes indirectly -- branched, ultimately, from "master".

Anyway, I think somewhere among all that there should be something that fixes your problem once and for all (i.e. for all future branches once it's correctly set for a repo). I'm sure your problem isn't standard behaviour. HTH!
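For what it's worth, here's a minimal sketch (throwaway repos under a temp dir, made-up branch name `test`) of how `-u` records the upstream so later bare `git push` calls just work:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m init
git push -q -u origin HEAD                   # first push of the initial branch
git checkout -q -b test                      # new local branch: no upstream yet
git rev-parse --abbrev-ref test@{upstream} 2>/dev/null || echo "no upstream yet"
git push -q -u origin test                   # -u records origin/test as upstream...
git rev-parse --abbrev-ref test@{upstream}   # ...so this now prints origin/test
```

Once the upstream is recorded, a plain `git push` on that branch no longer errors out.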


Have you tried running:

  git push origin HEAD
In both GitHub and GitHub Enterprise, that command will create a branch with the same name on the remote.
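As a sketch, this isn't GitHub-specific; plain git does it too. Here a throwaway local bare repo stands in for the remote, and the branch name `feature/foo` is made up:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m init
git checkout -q -b feature/foo
git push -q origin HEAD          # no branch name typed out...
git ls-remote --heads origin     # ...yet the remote now has refs/heads/feature/foo
```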


Works like a charm, thanks!


Git checkout branch means change the entire working tree to reflect the state of the branch, and git checkout file means check out the state of that file in HEAD. Both commands do the same thing: change the state of your working tree to reflect a point checked into version control.

It sounds like you are unaware of

  git switch branch
and

  git switch -c new-branch
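A minimal sketch of the split (throwaway repo, file name `notes.txt` made up): `git restore` takes over checkout's "undo file changes" job, and `git switch` its "move between branches" job.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name You
echo hello > notes.txt
git add notes.txt
git commit -q -m 'add notes'
echo scratch >> notes.txt     # an uncommitted edit
git restore notes.txt         # discard changes to a *file* (was: checkout -- file)
git switch -q -c new-branch   # create and move to a *branch* (was: checkout -b)
cat notes.txt                 # back to "hello"
```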


Just a tip: when teaching people, the readable alias is easier to remember and understand, i.e.

    git switch --create new-branch


Thank you. It seems silly but my first parse was "-c(heckout)" but that comes from not using "switch" yet; I need to rewire my brain from "checkout".


It was a good example: it was obvious from the name of the branch, and he/she was teaching the real-world use case


By the way, I recommend using `push --force-with-lease` instead of `push --force`. This way it doesn't destroy the remote changes if someone else pushed while you weren't looking.


> But `git checkout filename` means "undo changes to the file". What? That's totally insane.

Heh, and I'll do you one better: it's unstaged changes, so it won't put the file back to the HEAD version 100% of the time >:-)
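A minimal sketch of that behaviour (throwaway repo, file name made up): with a staged version sitting in the index, `git checkout -- file` brings back the staged content, not HEAD's.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name You
echo v1 > f.txt
git add f.txt
git commit -q -m v1
echo v2 > f.txt
git add f.txt            # v2 is staged but not committed
echo v3 > f.txt          # a further, unstaged edit
git checkout -- f.txt    # restores from the *index*, not from HEAD
cat f.txt                # prints v2, not v1
```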


The problem is just that collaborative text editing between multiple users simultaneously is hard. It's a human problem and git attempts to be a technical solution but the abstraction fails at the edge cases.

Regardless, what's so bad about deleting the repo and pulling from remote if you can't figure out why you hosed it? It's not like it costs you anything to do `rm ... && git clone ...`. And it's not like this happens daily either.

In the rare case that someone hoses remote, yes you'll have to do some weird git-fu to get everyone working again, but it's really hard to hose remote if you stick to the pull -> commit -> push -> merge pattern, which is what 99% of users are doing anyways. I've used git for 9 years and I've never had to spend longer than an hour troubleshooting git based BS. And if you'll remember, 9 years ago git wasn't the clear winner, it had competition via SVN and Mercurial. Both of them are practically irrelevant at this point so however bad git's UI was, it's clearly better than anything else that existed before.


> It's a human problem

If it were a human problem, how could alternatives like Mercurial and Darcs get consistent amounts of praise for their intuitive workflows?

> 9 years ago git wasn't the clear winner, it had competition via SVN and Mercurial. Both of them are practically irrelevant at this point so however bad git's UI was, it's clearly better than anything else that existed before.

SVN isn't distributed; it mostly tried to replace CVS, which it managed to do very successfully.

As for DVCS, the fact that system X became more popular than system Y doesn't mean that every aspect of X is better than Y. Many would say git succeeded despite its awful interface. As far as I can tell, the main feature of git was its speed; that made it usable for huge projects like Linux and Xorg, and this endorsement from high-profile projects gave git the edge for DVCS hosting sites like Gitorious. Then GitHub came along, and grew into such a behemoth that git became the de facto standard.

See also: WorseIsBetter


> If it were a human problem, how could alternatives like Mercurial and Darcs get consistent amounts of praise for their intuitive workflows?

Because maybe Mercurial's real problems wouldn't show up until people started using it at scale.

This is a classic problem that happens time and again.

Windows vs. Mac back in 2005:

"Windows having malware is a human problem"

"If it were a human problem, how could alternatives like Mac get consistent amounts of praise for their lack of malware?"

Turns out it's just because Mac didn't have a big enough market share. As soon as it did, Apple could no longer claim that Macs can't get malware because they can and do.

Similarly, if you scaled up the number of Mercurial users by several orders of magnitude so that it was now "mainstream", I'm sure some of its lesser known pain points and/or counter intuitive behaviors would start floating up to the top 10 HN stories. But since hardly anyone uses Mercurial vs. Git, that doesn't happen. Just because a minority of users praise something (<insert lesser known language>, <insert lesser known DB>, <insert lesser known VCS>, etc.) doesn't mean if you scaled up the userbase one or more magnitudes it would continue getting that level of praise from all users.


> Because maybe mercurial's real problems wouldn't show up until people started using it at scale.

I actually tried both git and mercurial almost a decade ago having had only experience with subversion, and found git to be much easier to understand and use. I don't recall what those pain points were, and am sure I couldn't describe them correctly if I did remember, but because of that I do expect such things to start popping up if more people used it.


Mercurial actually scaled better. It's one of the reasons Sun, Mozilla, and Facebook used it instead of Git.

Typically Git was a bit faster for most operations on Linux. Mercurial was a bit faster for other operations. Especially on Windows. And both were much faster than the rest.

Bitbucket and GitHub started around the same time. GitHub just executed better.


I agree that collaborative text editing between multiple users simultaneously is hard.

That doesn't mean that Git couldn't have a much better UI for this problem than it does. And while I agree that Git seems to have mostly won over SVN and Hg, it doesn't follow that it's because of its UI. (For example, I think that a lot of Git's success actually comes from the UI of GitHub, not Git itself, but I don't have evidence to back it up.)

Take a look at the Research section at the bottom of https://gitless.com/ for work that's been done on how to accomplish the same operations with a simpler conceptual model.


This is the real interesting problem. Even simple, subtle decisions like "should this list be kept sorted, or have new things added to the end?" determine under what conditions you will hit a new conflict and when you won't. A list of enums where you add to the end: if two branches grab the next enum value, you'll see a conflict. A list of, let's say, URL prefixes kept in sorted order: most randomly added prefixes will not collide or conflict, only conflicting if two changes operate on the same prefix.


Wow. Reading the home page of gitless made me both excited to use it and angry at how poorly designed git’s UI and UX are. Thanks for sharing this!


GitHub is what made git so popular, git wasn’t inevitable, and GitHub didn’t become huge because of git, GitHub’s big innovation was attaching a social network to a software/code repo. It’s what made it way more popular than basically all the competition, which were just software/code repos but didn’t have a great network/social story. (Also at the time GitHub was coming out, the biggest player in the space decided to monetize in an annoying way.)


> Regardless, what's so bad about deleting the repo and pulling from remote if you can't figure out why you hosed it? It's not like it costs you anything to do `rm ... && git clone ...`. And it's not like this happens daily either.

Well, actually, a lot of people work in large repos, so this can be a pain.

Which of course brings up another failing of the git UI: the difficulty of removing old history from the repo. Last I checked, you pretty much have to use a 3rd party tool for this.


I feel that I only understand git because I’m forced to use it for years, but the experience could have been a lot less painful. I use a GUI for sure. Do people really diff and resolve big conflicts at a command line? I never figured out how.


Just delete the bits you don't want and `git add`

It helps a lot to set:

    git config --global merge.conflictstyle diff3

then you can see the base version as well as the two changes you are trying to resolve.
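A minimal sketch of what diff3 buys you (throwaway repo, contrived one-line conflict): the extra `|||||||` section in the conflict markers shows the common-ancestor text.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name You
git config merge.conflictstyle diff3
echo original > f.txt
git add f.txt
git commit -q -m base
git checkout -q -b feature
echo feature-change > f.txt
git commit -qam feature
git checkout -q -                           # back to the starting branch
echo main-change > f.txt
git commit -qam main
git merge feature >/dev/null 2>&1 || true   # conflict expected
cat f.txt   # the ||||||| section shows the common ancestor ("original")
```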


  git mergetool
or

  git checkout --ours/theirs && git add .


> tone deaf responses of "you just have to learn it's internal architecture,"

Why is this tone deaf? It’s a development tool for programmers, and the internal structure is conceptually pretty straightforward. Understanding your tools is pretty much a prerequisite. You don’t have to have written a compiler to use one, but you do have to have a model of what it does. Git is no different.


Admittedly I did spend like a combined hour troubleshooting git over the last two months.

But git saved me an immeasurable amount of hours in return. I was asked what I worked on some specific week back in May -- git saved me. I had to look up what performance my program had before a specific change -- git saved me. I had to try stuff out without messing up my codebase -- git saved me.

True, I invested quite a few hours and I still do, but I think the investment pays itself off. I'll never ever do a project without git.


Last time I checked, 5 of the top 10 questions on StackOverflow were about Git...


still true! but that isn't a great metric necessarily, as #10 is "What is the “-->” operator in C/C++?" https://stackoverflow.com/questions/1642028/what-is-the-oper... which is just a novelty rather than a common question


Maybe because like 90% of devs use git; there's no language or framework that is used by so many.


This is the one thing I took with me from The Design of Everyday Things: usability design matters, and it matters a lot.


> Git is an incredibly user-hostile experience, and someone needs to fix or replace it.

* in your opinion.

