
What is wrong with “A successful Git branching model”? - GolDDranks
https://barro.github.io/2016/02/a-succesful-git-branching-model-considered-harmful/
======
K0nserv
I disagree with the author.

Feature branches should not be long lived in the first place. If a feature
branch encompasses more than a single story of 1-5 story points, it's likely
too large. Once you dispel the notion that feature branches are allowed to be
long lived, all the other points fall as well.

Since feature branches are merged to master frequently, the problem of
integration between multiple features is mitigated. Personally I find that the
extra merge commits and the commit bubbles generated by this model make the
log easier to read, and it's clearer how the code came to be in the state it
is.

Additionally, the PR/MR process codifies when code review should happen, and it
does not require an extra tool if a web-based git UI is being used (GitLab,
GitHub, Bitbucket). During the PR/MR the author can create many small commits
with fixes, so reviewers can look at incremental changes after the initial full
review. When the PR/MR is accepted, a rebase on origin/master and heavy use of
squashing removes the extra noise created by these smaller commits, and also
resolves most problems with integration into the master branch.
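As a rough sketch (branch and remote names are hypothetical), that final step
looks something like:

    # after the PR/MR is approved; "feature/foo" is a made-up branch name
    git checkout feature/foo
    git fetch origin
    git rebase -i origin/master      # squash/fixup the small review-fix commits here
    git checkout master
    git pull --ff-only origin master
    git merge feature/foo            # or push the rebased branch and merge via the web UI
    git push origin master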

~~~
jroseattle
This. This is the correct way to think about it.

Ballooned feature branches suck either at merge time or at deployment time.
The problem in either case isn't the model -- it's the balloon of changes.

Commit early and often -- and merge back to master. :-)

~~~
Silhouette
This argument assumes that useful changes can always be broken down into bite-
sized chunks and implemented incrementally without unwanted side effects.
That's an ideal situation, and it's also a realistic one a lot of the time,
but not always. A development process that can't cope with major changes being
needed from time to time isn't going to be appropriate for a lot of projects.

------
Osiris
I prefer the concept that master should always be deployable to production. My
CI creates a build artifact on every commit to master and the artifact can be
deployed to staging/production at any time. Thus, master should always be
considered stable.

Development should happen in a branch. When it's done, check out master
LOCALLY, merge in your branch, and run the unit tests; if the tests pass, push
master. If they don't, reset master back to origin and keep working on your
feature.
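A minimal sketch of that loop (the branch name and test command are
hypothetical):

    git checkout master
    git pull origin master
    git merge my-feature            # the merge commit represents the complete chunk of work
    ./run-tests.sh                  # whatever your unit test runner is
    git push origin master          # only if the tests pass
    # if the tests fail, put master back and keep working on the branch:
    git reset --hard origin/master
    git checkout my-feature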

This way master is always ready to go and you also get the benefit of seeing
how a feature was developed and the merge commit is a representation of a
complete chunk of work.

If you do need to back out a feature, how would you do it on a rebased, fast-
forward merge? You have NO CLUE where the start of the feature was. You have
to just guess based on the author maybe? In my experience, I've never used git
bisect so I don't optimize my branching model for that experience. For me,
backing out bad merges is much more common so I optimize for that.
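For comparison, with a real merge commit, backing a feature out is one command
(the SHA is a placeholder):

    git log --merges --oneline      # find the merge commit for the feature
    git revert -m 1 <merge-sha>     # revert it, keeping the first parent (old master) as mainline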

~~~
sciurus
What you describe for testing and merging is a needlessly manual process.
Using e.g. Phabricator, I can make a local feature branch and when I think
it's ready open a request for code review. My CI system can run tests. I can
update my request in response to test failures or reviewers comments. When my
code is ready, Phabricator can squash all my commits into one and rebase that
onto master.

If for whatever reason I did want to back out this code, there's a single
commit to revert. This is unlikely though, since until a feature has proven
itself in production it will be behind a feature flag.

[https://secure.phabricator.com/book/phabflavor/article/recom...](https://secure.phabricator.com/book/phabflavor/article/recommendations_on_branching/)

~~~
Osiris
That's a nice process. Currently, GitLab doesn't support running tests on a
merge request in that sense (making a temporary merge of master and the branch
and running the tests to confirm the merge will work). When/If GitLab + GitLab
CI adds that ability, then yes, it would make that process a lot easier.

~~~
sytse
We're discussing adding a 'test the merge' ability in 8.7 in
[https://gitlab.com/gitlab-org/gitlab-ce/issues/4176](https://gitlab.com/gitlab-org/gitlab-ce/issues/4176)

This will test the 'current merge' as soon as a commit is pushed, not the
merge result when it happens.

------
Gratsby
My biggest problem with "A successful Git branching model" is that it scares
people into thinking they have to design and commit to a workflow before they
can develop software.

Having a defined workflow is not a bad thing. And it certainly wasn't the
intention of the author for anybody to stagnate. Unfortunately, it's very
natural for technical people to overthink and over-engineer process.

In my mind, when an organization is new to source control or is haphazardly
using it, it's best to simply start using the tool. Each product is different,
each organization is different, and each developer is different. You should
mature into an organized development model based on what you know about your
own situation.

If you develop on the develop branch and release on the master branch, but
don't communicate that clearly to the new guy, I guarantee he's going to start
working on master right out of the gate. The same goes for every model where
there's a lack of communication around procedures and no workflow controls.

Git saw a rise in popularity because it was easy where every other solution
was difficult. If you're having issues with code review, patches, release
versioning, or whatever - address that particular problem. If the problems are
multiplying to the point where it's significant, it's time to think through a
process. Spending your time and energy ferreting out a process based on the
fear of what might one day happen will result in significantly more time spent
doing that than addressing the would-be problem in the first place.

~~~
mcv
I agree, communication is essential. The most important thing is that the
developers agree on how they work with git, and all these branching models are
just usable defaults and starting-off points from which you figure out what
works for your organization.

But preventing the new guy from pushing straight to master is not that hard.
On my current project, a git hook prevents merging commits into the remote
master if that commit doesn't already exist in another remote branch. So you
have to do your work elsewhere or it won't be accepted.
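A minimal sketch of such a hook (not our actual one), written as a server-side
`update` script, might look like:

    #!/bin/sh
    # install as hooks/update in the bare repository on the server
    refname="$1"; oldrev="$2"; newrev="$3"
    [ "$refname" = "refs/heads/master" ] || exit 0
    for sha in $(git rev-list "$oldrev..$newrev" --no-merges); do
        # reject the push if this commit is not reachable from any other branch
        if [ -z "$(git branch --contains "$sha" | grep -v ' master$')" ]; then
            echo "commit $sha only exists on master; push it to another branch first" >&2
            exit 1
        fi
    done
    exit 0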

~~~
Gratsby
Unless of course he sees an error and runs `git push --no-verify`.

I'm not really saying any of the advice around git workflow is bad. It's that
too much time spent on process instead of development is. If you take the
advice for what it is - a good starting point - I think it's great to use.

~~~
mcv
> Unless of course he sees an error and runs `git push --no-verify`.

Yes, but then you know he's someone you need to fire. That's also useful
knowledge.

------
stevebmark
Letting devs continuously push their shitty commits to master without passing
any sort of CI suite is a recipe for constant broken builds and for blocking
deployment from master because it is constantly broken. Master should be
always deployable, or better yet, auto-deploy when anything is merged in. That
way your devs are forced to write good, CI-passing code in their _branches_
before they're merged into master.

~~~
4lejandrito
If your devs are creating shitty commits in the first place maybe you are
facing a completely different problem, don't you think?

We only push to master after rebasing locally which has worked out well for us
for several years. Every commit is supposed to be deployable to live and CI
friendly, that is, tested. So, by definition, we don't create shitty commits
(most of the time :D).

~~~
teen
Maybe your code isn't done yet but you'd like it replicated to GitHub? This
model doesn't allow that.

~~~
4lejandrito
Sometimes, but very rarely, we do create WIP (Work In Progress) branches so we
can push the code to the server. But the idea we follow is to integrate with
master as fast as possible, ideally with several commits per day.

I used to be a fan of the git branching model, but since I started working like
this I find it much easier and less error prone. I guess it's the way we've
found comfortable for all of us.

Just choose what works for you!

~~~
sopooneo
Does this mean all your features are able to be completed in a day?

------
mcv
The main thing to keep in mind with git workflows is that one size does not
fit all. The number of people working on the code, and the way they work on
it, has a big impact on which git workflow works best for you. The nice thing
about git is that it's flexible enough to accommodate very different ways of
working.

His primary objection to _git flow_ seems to be integration hell, which is
easily avoided by regularly pulling from develop, which is something you also
need to do in his model. His point about git bisect is interesting, though
I've never heard of anyone using it.

The primary advantage of shared feature branches is that multiple developers
can work on the same feature that's not ready for release yet, which is
impossible if you've got a single shared branch, and it means others have easy
access to your code if something happens to you. Code shouldn't live just on
the developer's machine. And while other backup solutions exist, git can keep
it in the regular developer workflow.

That said, I don't think feature branches are sacred; I'm totally fine with
developing small features straight on develop. Or master, if you prefer.
Whether you want master to be stable or unstable is a matter of taste.
Although I personally think it's useful to have a single branch that always
contains the latest stable release.

~~~
Mithaldu
> The main thing to keep in mind with git workflows is that one size does not
> fit all.

This. This is the only true comment about this. Every software development
project is different, and while there are categories they can be sorted into,
every category needs a different kind of source repo layout.

Anyone who starts discussing this issue without that notion in the front of
their thoughts might as well be banging their head against a wall for all the
good it'll do.

------
bicknergseng
IMO, there's one big feature missing from git (and other SCMs in my limited
knowledge of the field) that results in posts like this every so often.

There really needs to be some concept of a commit group, one or more commits
(probably the diff in commits from one branch to another) that are packaged
together so that they can be addressed as one object instead of trying to keep
track of a whole bunch of commit hashes. Merging a commit group is revertible
and trivially cherry picked across branches. This addresses one of the
weirdest behaviors in git: the rebase squash. We want commits to be
understandable and usable, but that probably assumes the committer either held
off committing or rewrote history to make it seem like things were written
correctly the first time. It seems to me like the whole purpose of an SCM is
to keep track of history, whether it's the neatified, readable feature/branch
merges or an individual git user's frantic, probably unorganized development
progress.

~~~
foxylion
We use ticket references in our commit messages. So you'll be able to find all
relevant commits of a specific feature or bug fix. Cherry-picking them is then
also no big problem.

    
    
      git cherry-pick $(git log release/3 --reverse --grep ISSUE-123 --pretty=%h)
    

The commit message contains the technical details about the changes. The
ticket reference is a link to the feature details itself, so you can
quickly find out what the commit tries to achieve on a higher, non technical,
level.

~~~
azth
Do you put `ISSUE-123` in the title, or the body of the commit message?

------
js2
There's more than one way to do it and what works well in one project may not
be suitable in another, depending upon things like team size, development
velocity, code base size, external dependencies, etc.

The model described by this article is very close to how Chromium development
works. Meanwhile, git itself uses a model much closer to the one this article
dislikes. But git is a smaller code base, with many fewer commits per day, and
has a single person responsible for integrating all the changes which come in
over the mailing list.

Generally, I'd say that a smaller team that produces relatively fewer commits
per day can use a more complex branching model, while a larger team that
produces much more code churn requires a simpler model.

------
DanielShir
I'm usually a silent reader of HN, but I can't help myself.

I definitely disagree with pretty much everything. The merits of merge commits
have already been proven, and using rebases is basically rewriting history. If
you want all the work recorded precisely as it happened in real life, you
should always use merge commits. Saved my ass a bunch of times.

Also, when something breaks after a merge, it's super easy to undo and to deal
with - "Yo Don, your merge of feature X just broke feature Y. Run some tests".

I just recently moved to a new job where they love the "everything is on
master" approach and that is absolutely terrible IMO. Everything is constantly
broken, and developers always end up breaking builds and stepping on each
other's toes. CI doesn't help in this case either because you always need to
wait for some dev to fix the build. Wouldn't it be better to just undo a merge
of some feature (i.e. reject the merge) and then let the dev fix that on her
own time? Working just on master is just the wrong way of doing anything on a
team with more than 2 people.

~~~
JanezStupar
Indeed, from my observations people using rebase are usually working solo or
in very small teams, or just like spending insane amounts of time fixing their
broken code.

~~~
Pyxl101
The typical effective way to work with Git is to develop on master while
running "git pull --rebase" regularly, and before you commit. Prefer to build
up a feature as one commit, with "git commit --amend" or "git commit --fixup".
Since you pull and rebase regularly, your code is closely in sync with all
shipped changes, and if there are any conflicts you get to resolve them in
small incremental pieces. When your change passes CR, push it.
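A hedged sketch of that flow (the commit message is made up):

    git pull --rebase                        # stay in sync with everything already shipped
    git commit -m "Add the foo widget"       # hypothetical first cut of the feature
    # ...address review comments, keep folding the work into one commit...
    git commit --amend --no-edit
    # or record fixups separately and fold them in later:
    git commit --fixup HEAD
    git rebase -i --autosquash origin/master
    git push origin master                   # once the change passes CR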

The result is a simple, clear, linear commit history with a minimum of effort
spent on branching and merging fuss. The fuss is rarely worth it. This model
works for quite large teams. After a certain size, it might be better to split
the package into several rather than add branching/merging complexity.

~~~
icebraining
Doesn't that prevent you from using commits while developing the feature?

~~~
finishingmove
You can make as many commits as you need on your feature branch. When you're
done with the feature, you do `git pull --rebase origin master` (or whatever
the main branch is) and squash your commits into one (or a few -- when it
makes sense) using `git rebase -i`.
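For example, with three WIP commits on the feature branch (purely
illustrative):

    git pull --rebase origin master   # replay the branch on top of the current master
    git rebase -i HEAD~3              # in the editor, keep "pick" on the first commit and
                                      # change the other two to "squash" (or "fixup")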

~~~
Bahamut
Better to squash before the rebase so you resolve conflicts once rather than x
times...and hope there are no nasty merge commits to ruin the history.

------
golergka
This sounds like horrible advice. Especially the part about git bisect: it is
indeed a common situation where both the feature branch changes and the master
branch changes are correct by themselves, and only introduce the bug when
combined. And in this situation, git bisect pointing to the merge commit as the
culprit is exactly what should happen, because an incorrectly implemented
merge, one which didn't take care of interface changes (for example), is
exactly the cause of such a problem.

This exact situation is also the reason why I hate git rebase: it rewrites
history and hides its real complexity under a leaky abstraction. When I tried
to use it, I found myself guessing a few months later: were these commits
really how I (or a team member) wrote this code? Or maybe these are actually
rebased commits?

So, this statement:

> having the history linear without any merge commits could immediately point
> out the commit that causes issues.

Is laughably incorrect. When you make your history linear, you can no longer
pinpoint the commit that caused the issue, because after doing the rebase, _you
destroyed it_. The original commit that caused the issue is no longer there.
Or, it never was there: if the issue was introduced not by individual change
set A or B, but by their combination, then the merge commit would be the one
that caused the issues. But since you did rebase instead, the _`git rebase`
command was the "commit" that caused your issues_! But, of course, you won't
see it in git's history.

~~~
justicezyx
Unless we live in some parallel universe, everything is linear...

I think you simply miss the idea of linearizability...

~~~
justinjlynn
I must respectfully disagree. In this universe, nothing is linear in terms of
how we perceive events and their simultaneity. If we consider a commit to be an
event, then who can say one commit comes before another? As in relativity, if
both developers are working independently then they each have their own frame
of reference. In effect, a merge commit reconciles those different frames of
reference in its own discrete and well-documented entity. You could think of
it as the equivalent of a Lorentz transformation. I understand the analogy is
quite stretched here, but I still find it useful.

It follows then that if a merge commit is a genuine reconciliation, then a
rebase is the act of one party rewriting their own memory in order to hide
from themselves the fact that they were working in isolation and later had to
agree on the actual outcome of events. Thus, in effect, rebasing one's code is
no different than suffering from a self-inflicted delusion. That is, a
delusion that we are all working in one shared directory or, to a lesser
extent, that there is one source of truth as in SVN or other centralised
systems.

This is, in fact, quite a sad thing for our community. Many developers -- while
they may be _using_ git -- are not truly using git in the way it was intended.
They have limited themselves to their old ways of thinking and acting and, in
the pursuit of what they have been taught is "the ideal", they have hidden
from themselves the possibilities, efficiencies and understanding that might
be discovered by the move into a world which can capture the fact that not
everyone works in a single shared directory with immediate knowledge of each
other's work.

We have to learn to embrace the oddity of a non-linear history because, while
we might not like it, it represents what really happens when we write code.
The better our tools help us model reality the better answers we can produce
for ourselves now and in the future. Six months from now when we say, "How the
hell did this ever work?" or "What the fuck was I thinking when I wrote
that?", we'll want answers. If we choose rebase, corrupt our own memories and
burn the reflogs we'll never know.

~~~
Pyxl101
This is a poor and incorrect analogy for rebase. Someone using rebase is
conducting the reconciliation and then rewriting their commit to account for
it, while simplifying the commit history and hiding it.

When a commit lands in master, it doesn't matter to anyone else how it was
developed. Those events are almost outside the "light cone". Those who rebase
a commit onto the upstream branch recognize that the little details of how it
was made have no relevance to others and thus do not belong in shared history.

People often speak about preserving history while missing the point that
source history is most meaningfully logical, not physical. Imagine an editor
that made a new commit for every key press. That would be a true recounting of
history, and yet would be irrelevant. Most feature development can and should
land as one commit; the details of how it was made are, like the key presses
that composed it, minimally relevant to anyone but the author.

~~~
justinjlynn
Fair assessment. Commits should represent logical groupings of modifications.
This is always a trade off between the extremes of recording each change
atomically and dropping one tarball release of all code to replace the last
with no changelog. However, the purpose of having a history is to reconstruct,
as much as possible, the contents of the writer's mind when evaluating
their actions and the changes they have made. This is the heart of why we have
revision control -- to allow us to manage this information and query it
effectively.

Since the purpose of a revision history is to enable historians to gather a
complete picture of a feature's genesis, so as to interpret the author's
changes and the mental states prompting those changes in their correct context,
all of recorded history should facilitate that purpose. If rebase workflows do
indeed encourage big atomic commits which drop features into a repository as
though they were commandments from a god then obviously they do not help
historians understand the authorial intent that went into their creation. You
yourself argue for just that as do, in my observations, most users of rebase
oriented workflows. Additionally, the heart of rebase workflows themselves,
namely retroactively changing the parent of a commit, misrepresents what the
author knew at the time that code was written. This clearly misleads the
historian and I find the workflow to be dishonest and unhelpful in preserving
the very reason we use revision control and thus I judge it harmful.

If one uses rebase (say, in interactive squash mode) to clean up and regroup
commits into small logical chunks in order to facilitate the understanding of
authorial intent then I don't have any issues with it at all. It's when it is
used to misrepresent, overload and mislead the historian for the sake of a
"good looking" git log that I find the usage of rebase distasteful.

An author should never be so self-absorbed that they believe that no-one but
themselves could possibly care how they did their work and arrived at the
conclusion they did. Authors that abuse their tools really have no excuse and
best remember that they themselves are also historians with respect to other
people's code and, in time, their own.

------
bryanrasmussen
I recently had to work at some place that followed the 'successful Git
branching model', and it was quite awful. But I also felt maybe it was suitable
for them, because they probably had several hundred developers running around
to build their large media website (once you count the Drupal, .NET, node.js,
and I forget the last one - as well as the frontend stuff and the media
management processes they had to do).

On my own projects there are often only 2 or 3 internal developers (and
sometimes that can go down to just me, which further decreases the need for
any complicated process). There I like to have master be deployable, a
branch named development that holds sprint-type work where you expect not to
push to master for a week or two, and bugfix branches where you expect any work
to get merged and deployed in hopefully a matter of hours.

Sometimes self contained feature additions that will not affect anything else
will be put on their own branch, often these are done by just one developer
who might even be external to the main project.

Is this the perfect way to do it? Probably not, but I tend not to believe in
perfection of process; I just want something reasonably manageable for the team
size and the complexity of the project.

------
junke
The whole series of articles from Junio C. Hamano, and in particular "Fun with
merges and purposes of branches"
([http://gitster.livejournal.com/42247.html](http://gitster.livejournal.com/42247.html)),
is full of good advice regarding how to use branches, merging and rebasing.

~~~
juped
This is a good idea - Junio understands git. Linus understands git too, but
you only get insight out of him when someone screws things up badly.

------
vog
From the article:

 _> The biggest issue with [the article "A successful Git branching model"] is
that it comes up as one of the first ones in many git branching related
searches when it should serve as a warning how not to use branches in software
development._

This is so true! That one seems to be a classic example of over-engineering.
Although that model may be useful for some types of development, ultimately
every project has to find its own branching model, which should be the
simplest model that serves its needs - no more, no less. Going crazy
with branches is just another way to fail at using them.

 _> I will explain next why merge commits are bad and what you will lose by
using them._

To add to that, I think it is quite telling that the Linux kernel developers
themselves prefer a simple, linear history in the end (using branches only as
intermediate steps), especially since they were the ones who created Git in
the first place.

~~~
drothlis
> I think it is quite telling that the Linux kernel developers themselves
> prefer a simple, linear history in the end

What do you mean? This is what the current git history of the linux kernel
looks like:

    
    
        *   12b9fa6 Merge branch 'for-linus' of git://git.kernel.org/pu
        |\  
        | * 5129fa4 do_last(): ELOOP failure exit should be done after 
        | * a7f7754 should_follow_link(): validate ->d_seq after having
        | * d456564 namei: ->d_inode of a pinned dentry is stable only 
        | * c80567c do_last(): don't let a bogus return value from ->op
        | * 0fcbf99 fs: return -EOPNOTSUPP if clone is not supported
        | * b6853f7 hpfs: don't truncate the file when delete fails
        * |   340b3a5 Merge tag 'armsoc-fixes' of git://git.kernel.org/
        |\ \  
        | * \   d877a21 Merge tag 'renesas-soc-fixes-for-v4.5' of git:/
        | |\ \  
        | | * | 901c5ff ARM: shmobile: Remove shmobile_boot_arg
        | | * | 4e960f5 ARM: shmobile: Move shmobile_smp_{mpidr, fn, ar
        | | * | b1568d8 ARM: shmobile: r8a7779: Remove remainings of re
        | | * | d2613f5 ARM: shmobile: Move shmobile_scu_base from .tex
        | * | | 7931845 MAINTAINERS: Extend info, add wiki and ml for m
        | * | |   9fa6c2b Merge tag 'omap-for-v4.5/fixes-rc5' of git://
        | |\ \ \  
        | | * | | 3f315c5 ARM: OMAP2+: Fix onenand initialization to av
        | | * | | e327b3f Revert "regulator: tps65217: remove tps65217.
        | * | | | a9e5547 MAINTAINERS: alpine: add a new maintainer and
        | * | | | 5e45a25 ARM: at91/dt: fix typo in sama5d2 pinmux desc
        | * | | |   b223c9f Merge tag 'imx-fixes-4.5' of git://git.kern
        | |\ \ \ \  
        | | * | | | f5d0ca2 ARM: dts: imx6: remove bogus interrupt-pare
        | | | |/ /  
        | | |/| |   
        | * | | |   e3acd74 Merge tag 'omap-for-v4.5/fixes-rc3-v2' of g
        | |\ \ \ \  
        | | | |/ /  
        | | |/| |   
        | | * | | cf26f11 ARM: OMAP2+: Fix omap_device for module reloa
        | | * | | 08c78e9 ARM: OMAP2+: Improve omap_device error for dr
        | | * | | bf26927 ARM: DTS: am57xx-beagle-x15: Select SYS_CLK2 
        | | * | | a5b8751 ARM: dts: am335x/am57xx: replace gpio-key,wak
        | | * | | 5f35dc4 ARM: OMAP2+: Set system_rev from ATAGS for n9
        | * | | |   74a46ec Merge tag 'mvebu-fixes-4.5-2' of git://git.
        | |\ \ \ \  
        | | * | | | 44361a2 ARM: dts: orion5x: fix the missing mtd flas
        | | * | | | 9d021c9 ARM: dts: kirkwood: use unique machine name
        * | | | | |   691429e Merge branch 'akpm' (patches from Andrew)
        |\ \ \ \ \ \  
        | * | | | | | 7f6d5b5 dax: move writeback calls into the filesy
    

Merge commits galore.

~~~
ajdlinux
It's true that at the top level, most of Linus' commits are merge commits.
However, on a per-subsystem level, we very much favour a nice linear history.
In most parts of the kernel, it's rare to go beyond 2 levels of merging, which
given that the kernel has something like 1500+ developers and well over 10k+
commits per release cycle, is fairly linear...

~~~
drothlis
Interesting (I'm not a kernel developer). What's the workflow at the subsystem
level -- does the maintainer do a `git fetch` followed by `git rebase`? Or is
the rebasing done via email patches and `git am`?

~~~
ajdlinux
It's pretty much all done by emailed patches with git am - I maintain a
personal tree on GitHub and one privately in my company, but they're purely
for experimentation, not for sending pull requests. Email is how we submit
code, discuss code and review code. You use git send-email to fire your
patches off to the appropriate maintainer + mailing list, you make your
modifications/rebase it/etc then send off V2, V3, ... V14 of your patch until
everyone's happy. Among other things, we tend to be quite picky about getting
commit messages right, and making sure that patch series are "structured" in a
"nice" way - so git rebase -i is one of the most common commands I run on my
private branches...

Maintainers all have their own tools and scripts to help automate and track
the whole process - in the area where I work, we track things using Patchwork
(see [http://patchwork.ozlabs.org/project/linuxppc-dev/list/](http://patchwork.ozlabs.org/project/linuxppc-dev/list/)). When the
maintainer's happy with a patch, it gets applied and pushed to one of the
trees they use (e.g. with powerpc, we have powerpc-next for feature
development and powerpc-fixes for important fixes). Eventually, they send
Linus a pull request, and it makes its way into the kernel mainline.

It's a bit of a tricky system that requires understanding of the kernel
community's social norms to get right - which I'm not entirely happy with but
I don't think it'll change particularly quickly. However, it's also
surprisingly effective - the kernel is one of the largest and most distributed
individual projects in the open source world, and as a community we keep
pushing out releases.

------
sergiotapia
This method doesn't assist code reviews. Everything is on master, and I
imagine it would be difficult to find all the commits for a given feature,
unless every feature is just a single huge commit (terrible).

It ain't hard people, just do:

master (production) -> develop (staging) -> feature/foo-bar (feature)

No biggie.
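
Sketched out with a hypothetical feature branch, that flow is just:

    git checkout develop
    git checkout -b feature/foo-bar
    # ...commits, push, open a PR against develop...
    git checkout develop
    git merge --no-ff feature/foo-bar   # feature lands on develop (staging)
    git checkout master
    git merge --no-ff develop           # promote develop to master (production)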

~~~
mbrock
What's wrong with a single commit for a feature?

~~~
karyon
You lose a whole lot of development history information. You have no chance of
finding out why these five lines of code came to be when all your commits
touch hundreds or thousands of lines. Maybe the intent of the code should've
been in a comment, but if there are none, it's nice to have commit messages as
a backup.

Also, bisecting (and then finding the actual fault in the code) gets harder
the larger the commits are.

~~~
Pyxl101
> You lose a whole lot of development history information

What information do you lose, and why is that a problem? The feature commit
tells you everything you need to know. Beneath those details tend to be
irrelevant minutiae.

Let's imagine an editor that made one commit for each key press you type, or
every time you save a file. That would be the closest thing to true history. I
might accuse your practice of not committing that level of detail with "losing
a whole lot of development history", and it would be true but only in a banal
sense.

How an author built up their feature commit is almost never (and shouldn't be)
relevant to everyone else. What's relevant to others is how they changed the
repository and why. A good feature commit or commit series should stand alone.

~~~
mcv
One thing you lose is history across filename changes. Git can track renaming,
but if in the same commit you also change the contents of the file, git has no
way to figure that out. Separate it over two changes, and git knows what
happened.
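For example (the file names are made up), splitting the rename from the edit
keeps the history connected:

    git mv old_name.py new_name.py
    git commit -m "Rename old_name.py to new_name.py"
    # ...now edit new_name.py...
    git commit -am "Refactor the renamed module"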

For that reason I also try to spread big refactorings over a couple of commits
(always with the code working after each commit, of course).

I also like to have formatting changes in a separate commit. Mixing functional
changes with formatting changes means every line in the file has changed, and
the functional changes become invisible. (Though commit hooks that demand a
(Jira) issue number in the commit message make this hard.)

~~~
mbrock
Isn't this mostly true if you both change the file name and change most of the
file's content? In which case it might be best to just see it as a deletion
and creation anyway?

~~~
mcv
If the file still serves roughly the same functionality, just in refactored
form, I think it's nice if the old version still shows up in the history.

------
jpgvm
I find all of these posts on "the right way to use git" pretty tiresome.

At the end of the day it's really not that complicated and you will probably
have more luck by keeping it that way. Bringing in a complex workflow that
isn't rooted in the flow of work in your organisation makes no sense.

Stick to master being the newest code; fixes are done in a branch, reviewed and
rebased on top of master; features work the same, they just hang around in a
branch for longer - still rebased on top of master. Maintain branches for
releases and cherry-pick from master onto them.
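Roughly, and with hypothetical names, that's:

    git checkout -b fix-crash master
    # ...commit the fix, get it reviewed...
    git rebase master                   # replay the fix on top of the newest master
    git checkout master
    git merge --ff-only fix-crash
    # maintain a release branch by cherry-picking the fix from master
    git checkout release-1.2
    git cherry-pick <sha-of-the-fix>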

If your workflow takes a blog post to describe, it's just too damn complicated.

~~~
radicalbyte
Gitflow is exactly this, only they add an extra branch which tracks all of the
releases. And, for some weird reason, call it 'master'.

------
radicalbyte
It's worth pointing out that the entire point is that you use SHORT-LIVED
feature branches. The long-lived branches are the releases that you need to
support.

You're basically formalizing a working copy by calling it a branch; with the
added advantage of being able to make a nice clean set of commits ready for
code review.

I introduced using short-lived feature branches with rebase/merge after having
a great experience doing something similar with Perforce. Gitflow simply adds
a master which contains only releases and formal fix-bug-on-release-merge-
back-to-development rules.

------
juped
Not _another_ one.

All good git workflows are different; all bad git workflows are bad in the
same way - okay, one of the same two ways: superstitiously declaring important
parts of the git model "bad" (merge commits are bad! rebasing and amending are
bad!), or superstitiously merging things that have no business being merged
together left and right (this is really a special case of the first: it's
'branches are bad!', though the person doing it often doesn't realize it).

GitHub certainly doesn't help matters by putting a frontend onto git that
doesn't match the conceptual model of git in the least - you can't even see
the graph unless you dig deep and find "Network", the world's worst graph
visualizer, and the pull-request module is absolutely unreviewable and has
giant inviting buttons to instantly do nontrivial things to your immutable
(without inflicting cascading rebases on the entire planet) history that you
may or may not even want without telling you exactly what they'll do.

It's a pretty serious indictment that Bitbucket, originally designed for
Mercurial, is the only web frontend to git that does a passable imitation of
gitk. (Git (the command-line porcelain) doesn't help matters by assuming the
user is a seasoned LKML veteran with Linus or a vice-Linus above them willing
to ruthlessly reject their history if it sucks, either.)

"We are a small two-person collaboration on separate chapters of a book in
LaTeX, so we will forego branching, work on master, use pull --rebase, and
push to the same central remote, to basically have a more robust CVS" is a git
workflow you can have.

"I am the maintainer of a decently large open-source project, so I will accept
only signed tag pull requests of work based on the last signed and tagged
stable release, with cleanly logically separated commits and no internal
merges allowed unless explicitly justified, and --no-ff merge them onto my
ongoing blessed integration branch after review" is another git workflow you
can have, and it uses more of git's featureset - it's close to how Linux
works, if you factor out the mailing lists and ignore the presence of vice-
Linuses.

"Rebase everything willy-nilly because merge commits are BAD" and its just-as-
evil twin, "have multiple branches but merge them all into each other whenever
someone blinks, because separating separate work is BAD but 'branch' is a
buzzword" are not workflows you can have, they're symptoms you can exhibit.

GitHub, et "considered harmful" blog posts, et superstition, delendae sunt.

------
prodigal_erik
You shouldn't do continuous integration without continuous deployment, because
you don't know whether you're relying on code that is not ready to go to prod.
And you shouldn't do continuous deployment unless your test coverage
(including load+perf) is so exhaustive that every possible change has zero
risk in prod. Literally every team I've ever worked on had to manage
production deployment tactically in terms of "are the right people available
if we roll this right now" and "how much effort would the rollback be" and "is
this an especially important day for our customers".

New features should be on branches off master, and master is the code we've
already pushed to prod _and_ agreed is good enough not to roll back.

~~~
Silhouette
Some people seem to think the only software that gets developed any more is
running on a web server, that its developers can and should always have direct
access to production servers, that pushing minor changes to those production
servers several times a day with minimal oversight or control is a badge of
honour, and that having serious failures in production now and then is
acceptable.

No doubt for some software development projects this is all true. For many
others, it is not. Processes and tools that work well for a team in one
context might be completely inappropriate for a team in another.

------
Ruud-v-A
I do prefer rebases over merges because they lead to a cleaner history, but
when you have a set of features in separate branches that are all dependent,
rebasing becomes a pain. I like to have one local branch per CL/pull request.
If I have a branch feature1 with commits on top of master, then feature2 with
commits on top of feature1, feature3 with commits on top of feature1, and
feature4 that merges feature2 and feature3 and then has commits on top of
that, that thing is a _pain_ to rebase. (I know about --preserve-merges, but
it does not always do the right thing.) The main problem here is that the end
result should be feature4 rebased on top of master, but the other branches
should also be updated to point to the new commits.
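For the simpler two-branch case, `git rebase --onto` can at least move each
dependent branch by hand (a sketch; the four-branch case with an internal merge
is still painful):

    git branch feature1-old feature1          # remember where feature1 used to point
    git checkout feature1
    git rebase master                         # feature1 now sits on top of master
    git checkout feature2
    git rebase --onto feature1 feature1-old   # replay only feature2's own commits
    git branch -D feature1-old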

~~~
Pyxl101
It's rare to need to develop multiple features as concurrent branches, as long
as you can release regularly.

The practice that works really well is: just develop against master while
running "git pull --rebase" regularly. Code review and then ship the feature,
repeat. Use branches only for rare concurrent development, and only after the
need has arisen (Git makes it trivial to move changes into a branch after the
fact).
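For example, if some work has piled up on local master and the need for a
branch arises after the fact (a sketch):

    git branch feature-wip            # keep the local commits under a new name
    git reset --hard origin/master    # put master back to the shipped state
    git checkout feature-wip          # carry on with the work on the branch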

~~~
Ruud-v-A
It is not rare at all. The features I am talking about are small. In a large
project code review can take several days if the reviewers are busy, even for
a small change, so it is essential to work on multiple things concurrently.

------
paulddraper
Good points.

Two issues:

(1) All code review tools I know of currently use merges
[http://programmers.stackexchange.com/q/256789/108980](http://programmers.stackexchange.com/q/256789/108980)

(2) If you want to keep your local work up-to-date, you need to rebase. It can
be painful to do this after a while; many like to do it often. If you have a
conflict, you will need to resolve it every time you rebase, unlike merge,
which saves the conflict resolution in the merge commit. You can use git
rerere, though it takes some effort: [https://git-scm.com/docs/git-rerere](https://git-scm.com/docs/git-rerere)
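
Enabling it is one setting; git then records each conflict resolution and
replays it on later rebases:

    git config --global rerere.enabled true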

------
iamflimflam1
Basically don't have long lived branches.

Everyone should know that by now: it doesn't matter what development model or
process you are using, if you've gone off on a branch for too long you will
have problems.

------
partycoder
This poses questions of the caliber of "Why not avoid version control entirely
and just have a shared folder?" It's "simpler"... There's a reason not to:
organization.

------
kuahyeow
Not sure why the aversion to merges. Bisect works fine even with merges, and
the avoidance of the "christmas tree look" is IMO just aesthetics.

------
jwr
The model described in this article is a very reasonable one. I've been using
something quite similar with various teams at different software companies
over the last few years and it has always worked great. Little form, lots of
function.

One note, though — feature branches should IMO be short-lived. Long-lived
branches cause huge and difficult merges and rapidly start being costly to
maintain.

------
quadrangle
> when one developer changes some internal interface and other developer
> builds something new based on the old interface definition

Well, the compiler would just catch this, right‽ Except if, I suppose, you're
not using a strongly-typed compiled language like Haskell… ;)

(okay, I know this doesn't actually address the git issue at hand and isn't
foolproof either)

------
derFunk
This "Something more simple/Cactus Model" is exactly what we've been doing very
successfully for years with SVN. We focus on release branches and very rarely
do feature branches, because branch merging with SVN is a PITA and something
you want to avoid.

~~~
lisivka
We used feature branches with Subversion very often, then switched to Git,
because it is much faster and has built-in support for rebase instead of a
custom rebase-branch script for svn.

------
geocar
"How changes get live" is one of the first, most important things I tackle
with a new project. This means I consider my "live system" to be effectively
part of my development environment.

Right now, I have a "live system" repository, and then development actually
occurs in a second repository which is a submodule of the first.

This has a lot of advantages:

• Configuration and data is out-of-tree, so it is easy for me to have a read-
only view of the live database, and read-write to a local (temporary) database

• I can select a branch using a signed cookie or a URL which makes it very
easy to demonstrate features on the live system

• I can try out tags on 1% of traffic, or 5% of traffic, testing some features
with my live users

• I can use git bisect on the live system to find regressions

There are a few disadvantages, but most of them are psychological: testing on
live sounds scary; database migrations are a lot more work; etc.

------
kensign
GitFlow, when used correctly with Agile, solves all these problems quite well.

