
Abandoning Gitflow and GitHub in favour of Gerrit - hankeypancake
http://www.beepsend.com/2016/04/05/abandoning-gitflow-github-favour-gerrit/
======
1_800_UNICORN
Gerrit is the wrong solution for truly agile software development. I had a
client ask my team to use it, and it was a real PITA.

The fact that one commit = one merge is ridiculous. It encourages monolithic
commits for no reason other than that the tool demands it. It's unrealistic to
ask someone to code review multiple commits per feature, and if you tie in
your CI it doesn't make any sense to run your build over and over again for a
single feature either.

The patchsets DO allow you to see the history of a code review if you have to
make changes, but I'd much rather see that live in my git history than in
Gerrit's. Having to amend your commits to make changes is a dirty solution.

Nowhere does the author mention how painful it is if you finish a feature, are
waiting for a code review, but have to start the next feature using the code
you just wrote. Maybe I'm missing some magical feature in Gerrit that makes
this easy, but if you push multiple dependent commits to Gerrit, and one of
the early ones gets merged, all of the later ones now have to be rebased
because Gerrit created a merge commit in the middle.
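For concreteness, here's a local sketch of that situation and the manual
recovery it forces on you (repo layout and names invented for illustration;
every rebased commit also becomes a new patchset to re-review):

```shell
# Toy reproduction: two dependent changes; the first gets merged
# with a merge commit, so the second must then be rebased.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email dev@example.com
git config user.name Dev
main=$(git symbolic-ref --short HEAD)
echo base > app.txt && git add app.txt && git commit -qm "base"
git checkout -qb feature
echo A >> app.txt && git commit -qam "change A"
echo B >> app.txt && git commit -qam "change B"
# Simulate the review system merging the early change via a merge commit:
git checkout -q "$main"
git merge -q --no-ff -m "Merge change A" feature~1
# "change B" now sits on a stale parent; rebase the rest of the chain:
git checkout -q feature
git rebase -q "$main"
git log --format=%s "$main"..feature   # only "change B" is left in review
```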

~~~
xyzzy_plugh
> if you push multiple dependent commits to Gerrit, and one of the early ones
> gets merged, all of the later ones now have to be rebased because Gerrit
> created a merge commit in the middle.

Someone didn't spend time configuring Gerrit, or configured it wrong for your
use cases. You can solve this in one of two ways: either you fast-forward when
possible (Gerrit can attempt a trivial rebase), or you commit a change set,
create your own merge commit, and then your early commits won't get merged
until your last commit is ready. Gerrit will wait to merge the early commits
(and vice versa if dependent commits are ready) until everything is good, then
merge them all in one shot.
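(For reference: the submit strategy is configured per project in Gerrit's
project.config, on the refs/meta/config branch. A sketch from memory, so check
the docs for the exact value names:)

```ini
[submit]
    # one of: fast forward only, merge if necessary, rebase if necessary,
    # rebase always, cherry pick, merge always
    action = rebase if necessary
```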

I used gerrit heavily for a few years (and still do, passively, with golang
and other public google projects) and I miss it dearly. Sure it's not perfect,
but I'd take it over the github fork/make pull requests model any day of the
week.

It does not encourage monolithic commits. You need developers to understand
what they are doing, care about commit history and be disciplined during
development. In my experience, developers who just want to hack shit together,
cut corners and love squashing commits or throwing away history hate Gerrit.
Developers who care love it. It's definitely possible, and easy, to produce
software in an agile way.

Like anything else, you should use the right tool for the right job. If you're
not willing to suck it up and learn something new, then it's not going to have
a chance to be the right tool.

~~~
jacobsimeon
Your argument would be better received sans ridicule.

I think the OP's point was that Garret hides a lot of features that are built
into git and then attempts to paper over that fact by re-implementing those
features within its own system.

In your first paragraph, you defend Garrit's rebasing procedure and then you
implicitly accuse the OP of not being willing to "suck it up and learn
something new".

Why should we learn a new tool that makes (arguably poor) attempts at re-
implementing the features of a tool that we already know and love?

~~~
ngrilly
There's nothing ridiculous in xyzzy_plugh's argument.

The discussed tool is not "Garret" or "Garrit", it's Gerrit, which makes me
wonder if you used it before commenting.

As far as I know, Gerrit doesn't reimplement git's features. Rebasing is done
using git rebase.

Gerrit is a system for code review, just like GitHub's PR, but with a
different approach.

~~~
jacobsimeon
"Look at this guy, he misspelled the name of something. He must not know how
to identify ridicule"

------
piotrkaminski
The article isn't particularly well written or argued, but it does have a core
of truth to it: serious code review in GitHub is painful. However, jumping
straight to Gerrit to solve that problem seems like overkill to me. Sure, you
get a really powerful and extremely configurable code review system, but you
have to retrain for a new (and honestly a little long in the tooth) UX and
spend time administering the system.

A lighter-weight SaaS like [https://review.ninja](https://review.ninja),
[https://omniref.com](https://omniref.com), or
[https://reviewable.io](https://reviewable.io) (disclosure: this one's mine)
might be a more appropriate solution. Specifically to the article's points,
Reviewable has a nice reviews dashboard and will show inter-commit diffs
within a PR (whether you're rebasing/amending or not), hiding any files with
no changes since you last looked. Its default review completion criterion is
that all files have been marked as reviewed by at least one person and there
are no unresolved discussions still going on, but you can customize this to
your team by writing a snippet of code to run against the review's state, e.g.
to implement LGTM approval or even a voting system. Reviewable will also
update a status check on the PR so you can enforce review completion before
merging if that works best for your situation.

Best of all, because both systems integrate tightly with GitHub, there's no
need to learn a new workflow or mess around with new git commands. Gerrit
still has its place but I don't think it should be the tool of first resort.

(Edit: added mention of Omniref.)

~~~
eeZi
Just use Phabricator! It's the best code review system I've used so far. Many
large open source projects and companies have adopted it.

Someone neatly wrote up the main advantages:

[http://cramer.io/2014/05/03/on-pull-requests](http://cramer.io/2014/05/03/on-pull-requests)

Phabricator's issue tracker is also an excellent choice over GitHub's
simplistic issue tracker.

Also, Gerrit isn't that hard and I've seen small teams get productive with it
within 1-2 weeks.

No need to reinvent the wheel!

By the way: I live in Europe and I haven't worked for a single company that
would allow its developers to host proprietary source code with a third-party
SaaS provider.

~~~
piotrkaminski
Phabricator is nice, but it's more of a full-featured replacement for GitHub
as a whole. Some people want to keep most of GitHub and just improve on the
code review aspects.

> By the way: I live in Europe and I haven't worked for one single company
> which would allow their developers to host proprietary source code with a
> third party SaaS provider.

Fair enough, companies vary widely in their acceptance of SaaS -- though
Reviewable has plenty of European customers too. But to clarify, neither
Review Ninja nor Reviewable (not sure about Omniref) actually host code
themselves: they just access it through GitHub APIs without storing it. You
can also deploy Review Ninja (and soon Reviewable) on-premises, though of
course that means you're on the hook for administering the system again.

~~~
_yp
> Phabricator is nice, but it's more of a full-featured replacement for GitHub
> as a whole. Some people want to keep most of GitHub and just improve on the
> code review aspects.

That's the nice thing about Phabricator: you can switch off all the features
you don't need, and it integrates with GitHub. You can definitely use it just
for code reviews, with users logging in with their GitHub accounts and the
repositories being hosted by GitHub.

> You can also deploy Review Ninja (and soon Reviewable) on-premises, though
> of course that means you're on the hook for administrating the system again.

That would work! Though "third party SaaS" probably includes GitHub itself.

------
simula67
Nothing about what was wrong with gitflow.

One small correction I would like to make to gitflow is that the default
branch for developers should be 'master'. If you want to deploy another
branch, use a 'production' branch. This means people don't have to manually
change branches every time they clone; that type of repetitive work should be
outsourced to a computer (your deployment scripts). If you have full access
to the repo and you are on GitHub, at least you can change the default branch
(which, for a repo I contribute to, I don't).

Anyone know the need for a separate 'develop' branch?

~~~
davnicwil
> Anyone know the need for a separate 'develop' branch

The HEAD of _master_ should always be production-ready code. It is your most
stable branch.

It's necessary to have a less stable, separate _develop_ branch where
features, bugfixes, etc., that were developed on their own sandboxed branches
can be merged and integrated as part of a development phase.

Even with the most disciplined testing of isolated feature branches, it's not
known whether integrating different ones will always work, so it is not even
guaranteed that the HEAD of _develop_ will always be stable, let alone
production-ready. Even if it is stable, it may not be production-ready for the
simple reason that it so far contains only features X and Y, and the next
release cannot go out without Z, which is not merged yet.

As you point out, having _master_ as the most stable branch appears to just be
a convention, and it would be perfectly possible to branch a _production_
branch from _master_ and swap the roles at the start of the project too.

I think the convention is like that so that one can immediately build and run
the latest production code after cloning the repo, which in my opinion is
nice. Besides, with the number of branch switches you do every day as part of
normal development, it doesn't seem too horrible to have to switch once, to
start working off of develop after you clone the repo. At the end of the day,
how often do you do that, and how annoying or arduous is it really?
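For the record, the whole master/develop/feature dance is just plain git; a
minimal local sketch (branch and file names invented):

```shell
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "Initial release"   # master stays production-ready
main=$(git symbolic-ref --short HEAD)
git checkout -qb develop "$main"            # less-stable integration branch
git checkout -qb feature/login develop      # sandboxed feature work
echo login > login.txt && git add login.txt && git commit -qm "Add login form"
git checkout -q develop
git merge -q --no-ff -m "Merge feature/login" feature/login
git log --format=%s "$main"..develop        # integrated but not yet released
```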

~~~
mbell
> It's necessary to have a less-stable, separate, develop branch where
> features, bugfixes, etc, that were developed off on their own sandboxed
> branches can be merged and integrated as part of a development phase.

I don't think it's _necessary_ by any means. E.g., we auto-deploy the HEAD of
master, assuming tests pass, and don't use any special integration branch. If
you have some set of features that need to ship together, that is just a
feature branch like all the others. Multiple people can work on a feature
branch and test their integration there.

> Even if it is stable, it could not be production-ready for the simple reason
> that it so far only contains features X and Y, and the next release cannot
> go out without Z there too, where Z is not merged yet.

I think your assumptions are largely based on a slow-moving release cycle; not
everyone operates that way. We regularly release multiple versions a day in
most of our repos.

~~~
davnicwil
Within gitflow it _is_ definitely necessary to have an integration branch. It
doesn't really matter whether you call it _develop_, progressively merge
different feature branches down together, or group them into one feature and
develop them all on one feature branch from the get-go; the result is the
same: you end up with all your code integrated on one, not necessarily always
stable, branch.

The idea in gitflow is that you then isolate a snapshot of that integration
branch, whatever form or name it takes, as a release branch, and only when
that release branch produces a build passing all tests is it then merged to
_master_ (and probably tagged).

You mention you auto-deploy from HEAD of _master_ after tests pass. Does that
mean that code at the HEAD of _master_ may not pass tests, or that you merge
code to _master_ only once it has passed tests? If it's the former, then
you're not using gitflow, and I was just explaining the use of separate
_master_ and _develop_ in gitflow, not implying that this is the one true
branching/release strategy; using something other than gitflow can work
equally well. If it's the latter, then you are using an integration branch:
the one you run the tests on before merging to _master_.

As to the point about release frequency, having a set of features to release
in a batch was just an example. If you're in any situation where other
developers may commit things to the eventually-released branch between the
time you create your feature branch and the time you merge it, then gitflow is
useful, be that on a scale of minutes, hours, days or weeks.

------
eridius
I really want something that provides better code review than GitHub. The
described code review features of Gerrit sound promising. But the article says
you can't submit a series of commits for review as a unit, you only submit a
single commit. Is that really true? That seems like a rather awful limitation
of the system. Sometimes my changes work well as a single commit, but often,
especially when doing more complicated things, it's much more preferable to
use a handful of related commits, all of which should get reviewed and merged
as a batch. Does Gerrit not support this?

~~~
superuser2
Phabricator[0] is awesome. Diffs (like PRs) contain several commits, but they
are reviewed, discussed, and "landed" as a unit, with an auto-generated commit
message referring to the Diff description and a link to its history; "master"
is a linear sequence of landed Diffs. You can also configure all kinds of
rules like "X person must sign off on any changes to this file" or "do not
merge code to master unless it is approved by N people other than the author."
The UI is a bit clunky but I love it, and it's open source and very flexible.

[0] [http://phabricator.org/](http://phabricator.org/)

~~~
eridius
When you say a linear sequence of Diffs being landed, do you mean it creates
squashed commits, or actual merges (so that the history of master includes all
the commits in each Diff)?

~~~
superuser2
AFAIK this is configurable. We did squashed merges, which I quite liked,
because "git log" for master read nicely but you could go look up the more
granular history if you wanted to.

------
websitescenes
"If you add a new member to your team, they would have to fork the
repositories on GitHub, clone them locally, make the changes, push to their
own fork and then create the pull request."

You don't have to do this with Github. Just clone directly to your local
machine. Also, you don't have to use git flow when using Github. I think git
flow is the real culprit behind the symptoms you have identified.

~~~
balls187
> You don't have to do this with Github. Just clone directly to your local
> machine.

If you use this model, your local clone isn't "backed up" by GitHub until the
pull request is merged, correct?

One reason I like my team members to have their own fork is so they can have
GitHub exist as a backup.

~~~
daigoba66
Team members can also work in their own private branches that are pushed to a
single repository - that's usually sufficient as a "backup".

~~~
balls187
That requires push-access. Not all team members have push access to all our
source repositories.

~~~
brown9-2
Allowing someone to push branches to the "central" repository versus requiring
them to have forks of it is functionally the same thing, assuming you set up
branch protection for master.
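(For anyone curious, branch protection is configured per branch in the repo
settings, or through GitHub's branch protection API; the payload looks roughly
like this, field names from memory, so verify against the current docs:)

```json
{
  "required_status_checks": null,
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 1
  },
  "restrictions": null
}
```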

~~~
balls187
> assuming you set up branch protection for master.

Bad assumption. However, branch protection sounds cool. Will investigate it.

------
loopbit
I don't see anything in the article that goes against gitflow. Is it just me?

They don't want to use GitHub for the code review and prefer to use Gerrit?
Good for them; I don't either, and I actually prefer GitLab or BitBucket for
git hosting in general. But hey, thanks for the Gerrit introduction!

But gitflow?

I've been using the gitflow way of working for ~5 years (although not always
the gitflow extension) and I don't see anything there that clashes with that
way of working. You want to work on a feature? Perfect: branch from develop
and work away.

After you are done and before you merge with develop again, you push your
feature to Gerrit and do the code review/changes. And then merge back to
develop.

Really, I don't see the conflict between Gerrit and gitflow, and I still think
gitflow is a very sane way of working in a team, especially if you have people
who are not used to working with [distributed] version control systems; you
probably wouldn't believe how many developers out there are like that.

------
Negative1
This is way more complex to me than GitFlow with pull requests. As a matter of
fact, if you use something like SourceTree, most of the initial steps are a
few mouse clicks. Also, try GitLab for your reviews; it's pretty good!

The idea of a more in-depth review is intriguing (we all know this is
something that can be improved), but _voting_ on a peer's code just seems like
a bad idea. Vote too low and people get insulted, and you breed discontent and
mistrust. Vote too high and everyone stays happy until code quality drops.
People will either use this as an opportunity to diminish others, show off or
suck up. Sad, but that's human nature.

I know in-person code reviews aren't always possible but adding this
disconnect just seems like a bad idea. Would love some honest feedback from
people who have actually used it in medium-large scale production.

~~~
clay_to_n
What appeals to me about the voting scale is that it's well-defined: a -2
means "don't merge yet", a +1 means "looks good to me but I'm not sure that
it's ready to merge". More of an enum than a voting mechanism.

It sounds like it removes the ambiguity of when you leave a few comments on a
PR but aren't explicit about whether you think it should be merged or not.
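(The scale is literally defined that way: Gerrit's default Code-Review label
lives in project.config, roughly like this, wording approximated from memory:)

```ini
[label "Code-Review"]
    value = -2 This shall not be merged
    value = -1 I would prefer this is not merged as is
    value =  0 No score
    value = +1 Looks good to me, but someone else must approve
    value = +2 Looks good to me, approved
```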

------
thesis
We tried Gerrit. We ran away as fast as possible. It seemed to hate merge
commits and would hang often. I'm not sure if it's changed at all, but I think
the only option was to review every commit individually inside a branch. It
seemed to really push us towards squashing a branch and pushing that up.

~~~
oluwie
Gerrit does favor rebasing over merging, but that's hardly a reason to run
away from it.

~~~
hinkley
Rebasing is great for version history but is hell for collaborating on a
feature. If anything is the Achilles' heel of Git (aside from the
groundbreaking levels of inconsistency in the CLI) it's this.

The moment someone creates a new version control system that has most of what
Git does but fixes parallel histories, I'll switch. And I don't mean that the
way people say "if Bush wins again I'm moving to Canada." I say that as
someone who has administered CVS, SVN and Perforce repositories on behalf of
my teammates but wants nothing to do with administering Git.

~~~
zo1
Would you care to elaborate what you find wrong/bad about "parallel
histories"? I'm curious.

~~~
hinkley
This is related to the conversation about squash that happened last week.

Basically, the moment two or more people try to work on something outside of
the trunk line of code (trunk/master/whatever), there is no version of their
commit history that will be pretty, and so squashing the branch at the end
seems like a good idea, even though it produces objectively worse code over
the long run.

To wit: there comes a point where the branch-and-merge structure of the code
from N months ago is no longer relevant to my day. However, the contents of
'blame' can surface code from years ago every time I use it. [edit] How it was
merged is irrelevant; when it was merged, by whom, and with what commit
message is what survives into the future. By squashing, the who and the
message are lost. But we think that's okay because it's a lesser evil than
having a bunch of commits in trunk. Which is bullshit.

What's hard to stomach is a bunch of merge commits back and forth and back
again, and so I sympathize with people eager to sweep those under the rug. But
at the same time I know that one of the most common ways a bug makes it into
the code is via a bad merge. Keeping them is more honest even if we don't want
to think about all the little human things we do that make our code worse.

I want to tell one little white lie with the code: I want to pretend like Joe
and Tim wrote their entire feature after my bug fix and before Steven's, even
though they worked on it all week. I want them to be able to commit it as a
single transaction but with all the intermediate steps.

When I say 'parallel history' I mean I want Joe to be able to rebase the
branch on top of my changes, without having to go apologize to Tim for making
his snapshot into mincemeat.

Or, I want us to stop pretending like feature branches fix all of our
problems, with no serious consequences. Maybe we should just go back to the
roots of Continuous Integration and merge on every commit, and rely on things
like feature toggles and test automation to control the reach of our in-
progress changes.

I think the real issue at hand here is the one-size-fits-most mentality we
maintain. As a maintainer of a FOSS project I want to reserve the right to
reject contributions out of hand. If I don't like your code I tell you no and
you go away. I will also enjoy getting medium units of work that were a team
effort without ever having to coordinate with any of those team members. As a
volunteer effort, this scales like nobody's business. But just because it's
working great for open source doesn't mean it's the rational answer for
commercial code.

In commercial code, all changes can be tracked to either human error or a
requirements change, and knowledge of the project often is locked in someone's
head because archived public forums aren't the dominant form of communication
or negotiation. When, how, and why every line changed matters because every
fix I undo while making another one alienates a paying customer. So verifying
why the code is the way it is now is fairly important.

And let's be perfectly honest here. If, as your dev lead, I don't like the
quality of your contribution, guess what, it's going in anyway. I can push
back and make you do it better, but 9 times out of 10 your change is going in.
Maybe not today, but soon, unless I'm in the process of getting you fired for
incompetence. So being able to drop it on the floor at no cost to me isn't
really a useful feature. For Linus that and an insulting email are how he
'fires' bad contributors. As coworkers our relationship would be a little more
complex.

Pretending that these constraints fit the exact same development pattern as
what works for Linux, NPM, Mocha, or probably even Docker is just nuts. We can
share a lot of tools, but we can't use exactly the same development process.
And Git bends in one direction but has no give in the other.

~~~
euroq
> By squashing the who and the message are lost. But we think that's okay
> because it's a lesser evil than having a bunch of commits in trunk. Which is
> bullshit.

Well, don't erase the message when squashing. My policy is that we take the
pull request and use its title and description as the actual message of the
squashed commit, which works great.

Having a bunch of commits IS really bad. One idea = one commit is so much
better than some bullshit trio of "refactored", "made mistake", "back to
normal" commits; those are just as worthless to keep in the history as
recording your typos+backspaces+fixes into the history.
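That policy maps directly onto git merge --squash; here's a local sketch
(names invented):

```shell
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email dev@example.com
git config user.name Dev
echo v1 > app.txt && git add app.txt && git commit -qm "Add app"
main=$(git symbolic-ref --short HEAD)
git checkout -qb feature
echo v2 > app.txt && git commit -qam "refactored"
echo v3 > app.txt && git commit -qam "made mistake"
echo v4 > app.txt && git commit -qam "back to normal"
git checkout -q "$main"
# Collapse the branch into one commit; reuse the PR title and body:
git merge -q --squash feature
git commit -qm "Add feature X

Description copied from the pull request body."
git log --format=%s   # just "Add feature X" and "Add app"
```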

------
aeorgnoieang
There's a Gerrit service that integrates with GitHub:

\- [Gerrit Code Review for GitHub](http://gerrithub.io/)

~~~
edgan
If only it were only code review. This is basically another git hosting
service built on Gerrit, and it syncs to GitHub for open source projects.

I don't want a new hosting service, I just want a code review process on top
of GitHub.

~~~
piotrkaminski
Try [https://reviewable.io](https://reviewable.io) then. Just code review,
runs on top of GitHub, no extra repos to manage. (Disclosure: I'm the
founder.)

------
ninjakeyboard
I use Gerrit daily. It's a good tool once you learn it, but as an end user I
find it has some usability issues. I would very much recommend it for teams,
though. You'd need someone to instill a good code review culture in your team.
It dictates the +1/+2 review flow, so you have to adhere to that for it to be
a natural fit.

------
ozim
What I miss in the article is how long they have been on it. Is the author
past the peak of inflated expectations?

I used Gerrit in my previous team and it did not work so well. Hanging vetoes
from a -2 are not that nice when you have to push a feature forward: instead
of blocking it, someone else could just have fixed the issue in the time it
took to talk the person who gave the -2 into changing it to a -1. But maybe
with a more mature team it would not be a problem.

~~~
distances
I think there's something amiss in your development process if you feel you
can merge changes that have a -1. What's the point of a review in that case?

There's always a hurry, but skipping reviews (and often unit testing too) is a
sure way to keep yourself busy fire-fighting in the future.

------
Laaw
Without fail, every single one of these "Git Flow sucks!" or "GitHub sucks!"
posts has a fundamental misunderstanding about one or the other.

An example of this is my own team, where we currently have a list of
"release/*" branches in our GitHub, because "git flow doesn't deal very well
with hotfixing a release".

Fundamental misunderstandings.

------
brown9-2
_Gerrit is being used by many large open source projects,_

It's worth noting that those large open source projects have very, very
different needs than a small development team working on a product together.
The open source project likely isn't doing weekly releases (which, in the
source article, require some sort of manual QA process). A large open source
project has hundreds of contributors, where reviewer time is scarcer than
contributor time (and the pool of people who can approve and commit a change
is much smaller than the contributor pool).

I think the OP's real problems are that:

\- an increased release frequency requires them to do _more_ QA

\- their time spent in code review seems to be a function of how often they
are "releasing", not how often people are making changes

If the difficulty of making a release increases as you increase your release
rate, you might be doing "agile" in a poor way.

------
u801e
> As soon as someone added changes to their pull request – either by rebasing
> in the new changes or making it as a new commit – you lost track of the
> comments in the code and viewing what had actually changed since the last
> update became really hard (almost impossible if the new push was rebased
> with the new changes).

We use github at work with a feature branch workflow (as opposed to gitflow).
We've adopted a system where pull request comments are addressed through the
use of "fixup" commits.

For example, when a pull request is submitted for a feature branch that
contains 3 commits, and a comment is made regarding part of the change, the
person who submitted the PR will add a commit that addresses the comment with
a commit title of:

      fixup! Title of the commit to update
      
      An explanation of what this commit does and why
      ...

This, incidentally, is exactly what git commit --fixup <commit_ref> does.

Then the person responds to the comment saying that it was addressed in
<commit_sha1>.

As a reviewer, it makes it easy to see that my comment has been addressed and
exactly what change was made to address it (by clicking on the link that
github autogenerates from the sha1 in the comment).

Once the review process is complete, the person will run git fetch origin and
then git rebase -i --autosquash --keep-empty origin/master to actually reduce
the set of commits down to the original clean set of commits. They then run a
git diff <original branch head sha1>.. to verify that there are no differences
and then they merge the PR using the merge button in the web interface.

This way, you end up merging a clean set of commits for each PR, and it's
still relatively easy to keep track of comments and incremental code changes
addressing those comments during the PR.

In fact, multiple developers can collaborate using the same branch by pushing
up "fixup!" commits. Though they need to make sure that they fetch/merge or
pull before they push to avoid unwanted merge commits within the branch.
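The whole round trip can be sketched end to end locally (file names invented;
GIT_SEQUENCE_EDITOR=true just accepts the todo list that --autosquash
reorders):

```shell
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email dev@example.com
git config user.name Dev
echo base > app.txt && git add app.txt && git commit -qm "base"
echo one > f.txt && git add f.txt && git commit -qm "Add feature part 1"
# A reviewer comments; address it with a fixup commit:
echo two >> f.txt
git commit -qam "fixup! Add feature part 1"   # or: git commit -a --fixup HEAD
# Review done; fold the fixups back into their target commits:
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2
git log --format=%s   # the fixup has been absorbed; history is clean again
```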

------
killface
Blech. I have to use Gerrit at one of my current clients, and I fucking hate
it.

Github's workflow is easy. You have a repo, you can fork it or create a
feature branch, you can add multiple commits, and then open a PR. That makes
sense. And the interface is pretty.

In Gerrit, I still do commit-as-you-go, because _that's the entire fucking
point of Git_. If I wanted SVN semantics in my repo, I'd use SVN. Then, we
have to squash all the commits (I'm very anti-history-revision, but I know
that's an opinion) and push up to a different origin. And god forbid you want
to work from that code point in a new branch while you wait for a review. Oh,
and if you want to fix some issues found in review? Yeah, let's edit the
history again... Oh, and there's a Change-Id created that causes all kinds of
other headaches.

I have tools to manage and make sense of my git history. I absolutely hate
things that force me to modify history. It might as well be voodoo magic when
stuff goes wrong. It's often easier to blow it all away and start over.

I do like the things it can help you enforce -- a good build, +1/+2 code
review, etc. But that's not enough to outweigh all the little annoyances in
Gerrit. Especially since it's available in a much better tool -- GitLab.

Gitlab is what comes after github has run its course for your team. It's got
the same predictable and useful feel, it integrates great with CI tools, and
it allows a similar GHPR-style way of merging. There's also BitBucket Server
and other stuff.. but Gitlab has my vote in the strongest way possible.

------
bkeroack
"At the time we had consultants working with us to speed up the development
process..."

Mistake #1.

------
theseoafs
> As soon as someone added changes to their pull request – either by rebasing
> in the new changes or making it as a new commit – you lost track of the
> comments in the code and viewing what had actually changed since the last
> update became really hard (almost impossible if the new push was rebased
> with the new changes).

Why are they all rebasing their PR branches if it so obviously makes the PR
unreadable?

------
xori
I don't really see what's different. Both GitHub and Gerrit have voting, and
you're still creating a pull request like on GitHub. The only difference is
that this pull request is restricted to a single commit.

Not very flexible; I see a lot of churn of invalid pull requests with this
design if they aren't allowed to grow into complete features.

~~~
kiallmacinnes
> Not very flexible, I see a lot of churn of invalid pull requests with this
> design if they aren't allowed to grow into complete features..

Actually, Gerrit really encourages growing a patchset ("pull request") into a
complete feature. It allows you to update your change over and over,
addressing review comments as they come in.

Once done, you have a clean "Add support for use of XYZ by ABC" commit, and
not a pile of half-baked commits. I cringe when I see things like this: "Add
framework for XYZ", "Define Config for XYZ", "Correct typos", "Add tests",
"Rework XYZ to be standards compliant", "Correct typos", "Fix tests".

~~~
cyphar
> > Not very flexible, I see a lot of churn of invalid pull requests with this
> design if they aren't allowed to grow into complete features..

> Actually, Gerrit really encourages growing a patchset ("pull request") into
> a complete feature. It allows you update your change over and over,
> addressing review comments as they come in.

> Once done, you have a clean "Add support for use of XYZ by ABC" commit - and
> not a pile of half baked commits - I cringe when I see things like this:
> "Add framework for XYZ", "Define Config for XYZ", "Correct typos", "Add
> tests", "Rework XYZ to be standards compliant", "Correct typos", "Fix tests"

I don't like commits like that either. But almost all real pull requests
require more than one commit (so future generations can bisect the repo
properly without then needing to bisect patches as well). A nice pull request
is something like this:

server: add statistics monitoring framework

server: component a: hook into statistics monitoring

api: expose statistics monitoring

integration: add tests for statistics

Each commit works as intended and does exactly one thing. I tried to do
something like this on Gerrit (was contributing to the TWRP recovery) and it
was such a pain that I collapsed everything to one commit. That's not how
things should be dealt with (I needed to improve the pattern decryption to
support N*N patterns and it required a bunch of UI, internals and other
changes that all got squashed together).
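
A sketch of how a stacked series like the one above reaches Gerrit. Everything here is a stand-in (a plain bare repo plays the Gerrit server), but the push itself uses the real mechanism: sending commits to the magic `refs/for/<branch>` ref makes Gerrit open one review per commit and link them as a dependent relation chain.

```shell
set -e
cd "$(mktemp -d)"
# Stand-in for a Gerrit server: a plain bare repo. On real Gerrit,
# pushing to refs/for/<branch> creates one review "change" per
# local commit, linked together as a relation chain.
git init -q --bare gerrit.git
git clone -q gerrit.git work 2>/dev/null
cd work
git config user.email dev@example.com
git config user.name dev
echo stats > server.c
git add server.c
git commit -qm "server: add statistics monitoring framework"
echo hook > component_a.c
git add component_a.c
git commit -qm "server: component a: hook into statistics monitoring"
# The whole stack goes up in one push:
git push -q origin HEAD:refs/for/master
```

If an early change in the chain merges while the later ones are still under review, the later ones keep their dependency on it, which is where the rebasing pain described upthread comes from.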

I also didn't like the fact that anybody could overwrite your PR's commit with
their own crap. Why is that a feature?

------
desireco42
What I got from this article is the idea of squashing every pull request into a
single commit. I think that's a valuable idea.

~~~
xori
I agree, and GitHub now does this for you:

[https://github.com/blog/2141-squash-your-commits](https://github.com/blog/2141-squash-your-commits)
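
The same squash can also be done locally with `git merge --squash`, which stages the branch's combined diff without creating a merge commit. A self-contained sketch (the branch name `feature-xyz` and commit messages are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b master repo
cd repo
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
# A messy feature branch with half-baked intermediate commits:
git checkout -qb feature-xyz
echo one > xyz.txt && git add xyz.txt && git commit -qm "Add framework for XYZ"
echo two >> xyz.txt && git add xyz.txt && git commit -qm "Correct typos"
# --squash stages the combined diff on master without committing,
# so the intermediate commits never enter master's history:
git checkout -q master
git merge -q --squash feature-xyz
git commit -qm "Add support for XYZ"
```

After this, `master` has exactly one new commit carrying the whole feature, which is what GitHub's "Squash and merge" button produces.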

~~~
desireco42
thank you for pointing this out for me

------
codemac
Anyone know of a good hosted gerrit solution?

One of the largest problems I've had getting gerrit adoption on smaller teams
is that they can just pop up a private github team easily, whereas finding
hosted gerrit solutions that actually make it easy to convert from other
source control tooling has been very difficult.

~~~
fishywang
try gerrithub

------
chadnickbok

      The process might seem more complex initially but think
      of it like this: If you add a new member to your team, 
      they would have to fork the repositories on GitHub, clone 
      them locally, make the changes, push to their own fork 
      and then create the pull request
    

Annndddd we're done - it's easy to make a pull request from a branch; this
person has no idea what they're doing.

In addition, the new GitHub code review tools address most (if not all) of the
stated reasons for this switch. And in my opinion, GitHub's ease of use,
alongside its role as a single point of reference, far outweighs the clunkier
tools I've used (like Gerrit).

~~~
ssmoot
Github isn't exactly a utopia of UX.

Yesterday I was looking for a way to refresh an old fork with upstream. I'm
pretty sure there was a button for this at one point. I looked. Couldn't find
it. So instead I had to:

      $ cd ~/src
      $ mkdir github
      $ cd github
      $ git clone myfork
      $ cd myfork
      $ git remote add up upstream
      $ git pull up master
      $ git push origin master

Or something like that. I think I got lost somewhere along the way trying to
checkout the upstream branch (is it "up/master" or "up master"? It depends)
but it took 5-ish minutes.

Github may be pretty-ish, but I tend to avoid them these days. They're
expensive, and they remove useful features. Fool me once, shame on you, fool
me twice can't get fooled again.

~~~
quicklyfrozen
You can create a PR from the upstream to your repo, then accept that PR. (I
know, that's not immediately intuitive, but at least you can do it without
pulling a local copy.)

~~~
yxlx
I tried that once but ended up with a merge-commit with my name on it in the
history of my "fork". Is it possible to not end up with such merge-commits?

~~~
quicklyfrozen
I don't think so -- you'll need to do something like a git rebase, and there's
no way to do that via the GitHub UI.
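
For completeness, a merge-commit-free refresh from the command line can look like this. A self-contained sketch: the three repos are local stand-ins for upstream, the GitHub fork, and your clone, and `--ff-only` aborts rather than create a merge commit.

```shell
set -e
cd "$(mktemp -d)"
# Local stand-ins: "upstream.git" is the original project,
# "fork.git" is the GitHub fork, "work" is your clone of the fork.
git init -q -b master seed
git -C seed config user.email dev@example.com
git -C seed config user.name dev
git -C seed commit -q --allow-empty -m "initial"
git clone -q --bare seed upstream.git
git clone -q --bare upstream.git fork.git
# Upstream moves ahead after the fork was made:
git -C seed commit -q --allow-empty -m "upstream change"
git -C seed push -q ../upstream.git master
# The actual refresh: fetch upstream, fast-forward, push the fork.
git clone -q fork.git work
cd work
git remote add upstream ../upstream.git
git fetch -q upstream
git merge --ff-only upstream/master
git push -q origin master
```

Because the fork hadn't diverged, the fast-forward succeeds and the fork's history ends up identical to upstream's, with no merge commit carrying your name.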

I know many don't like the 'dirty' history, but I like knowing exactly how the
updates made it into my repo.

------
ngrilly
Exactly why I'm frustrated with GitHub's PRs:

> As soon as someone added changes to their pull request – either by rebasing
> in the new changes or making it as a new commit – you lost track of the
> comments in the code and viewing what had actually changed since the last
> update became really hard

------
jessegreathouse
Out of the frying pan and into the fire. I don't like GitHub, but from the
sound of it, Gerrit is more complicated, which is exactly what I don't want.

------
dpc_pw
I would never ever recommend gerrit to anyone.

------
ahoka
+2

~~~
atomic77
... still waiting for someone to workflow+1

------
lucaspottersky
One word for Gerrit: UGLY! :P

~~~
russelluresti
It is fairly ugly, but it also allows you to drop in your own css file. It's
not a 100% solution, but you can go a fairly long way towards making it more
usable.

