
Continuous Integration and Feature Branching - vinnyglennon
http://www.davefarley.net/?p=247
======
kazinator
> _The longer that you defer feedback, the greater the risk that something
> unexpected, and usually bad, will happen._

Though, for Pete's sake, at least wait until all the modules of the program
are presented for linking before having a fit about an undefined function. :)

> _Feature Branching is very nice from the perspective of an individual
> developer, but sub-optimal from the perspective of a team._

Feature branching, from the perspective of an individual developer, is
almost _invisible_ with a tool like Git. Your local tracking branch is your
feature branch. You develop your commits on it, rebase it, and submit it
(perhaps to a review system like Gerrit, if not directly). That's a one-
developer feature branch in disguise.

An actual non-invisible feature branch is a remote one shared by two or more
developers.

> _I work on Trunk, “master” in my GIT repos. I commit to master locally and
> push immediately, when I am networked, to my central master repo where CI
> runs. That’s it!_

No, you work on a local branch which just tracks the remote "origin/master"
one (or whatever your upstream repo is called, not necessarily "origin"). When
that local branch is created, it by default gets the name "master" so it's
easy to pretend you're working on _the_ master. That's what I mean by "almost
invisible".
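A minimal sketch of that "invisible feature branch" workflow (repo names and
paths here are invented for illustration; assumes git >= 2.28 for `init -b`):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the central repo that CI watches.
git init -q -b master seed && cd seed
git config user.email dev@example.com && git config user.name Dev
git commit -q --allow-empty -m "initial"
cd .. && git clone -q --bare seed central.git

# The developer's clone: the local "master" created here tracks
# origin/master, so it acts as a private one-person feature branch.
git clone -q central.git work && cd work
git config user.email dev@example.com && git config user.name Dev
echo change > feature.txt && git add feature.txt
git commit -q -m "add feature"

# Rebase onto whatever upstream has moved to, then publish.
git fetch -q origin
git rebase -q origin/master
git push -q origin master
git branch -vv    # local master shows [origin/master] as its upstream
```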

------
dcldcl
If I understand this correctly, it seems like they are only doing CI/CD on the
master/main branch.

The question is why don't they test the feature branches in the same way as
master/main? Assuming all their testing is part of the CI/CD system, it seems
like that would provide a valuable signal and enable faster feedback as well.
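As a hypothetical illustration (the post doesn't name a CI system; this
assumes something like GitHub Actions), the difference between trunk-only CI
and per-branch CI is often just the trigger filter:

```yaml
# Trunk-only CI: the workflow runs only when master changes.
on:
  push:
    branches: [master]

# Per-branch CI: the same workflow, triggered on every pushed branch.
# on:
#   push:
#     branches: ['**']
```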

Yes, I suppose there could be capacity constraints around this. But let's hold
those aside for a moment: let's assume we have enough computers (IaaS) and
testing is fully automated.

In this case, I think the argument for only testing master is to force
developers to integrate with master as soon as (or as frequently as) possible.

While I can see the benefit in that, I think there is a cost to developer
productivity. If you had perfect change isolation and developers always knew
how to implement features as a series of small changes this would work. But
IME, this is usually not the case.

While you can try to get the developers to change, I don't think this is an
easy transformation, especially in a team/org where there's a larger diversity
of skill sets.

Even if that might be the right thing to do from a "make everyone a better
coder" standpoint, I'm not sure it's optimal from a "help the people we have
ship quickly and better" standpoint, which means letting them test (CI/CD) as
soon as possible.

The cost of rebasing/syncing with master is also highly variable across
codebases. Some codebases have pretty stable interfaces that don't change
often, and it's very easy to work within those borders. But if those
interfaces are changing often, I can also envision tons of mental-context-
killing rebases to keep up.

I think there's a spectrum of possible outcomes here, and the optimal/right
choice will vary depending on the circumstances.

~~~
hzhhzh
This approach is safer because you are getting some feedback sooner, from the
CI running on your feature branch, but this branch is telling lies. It is not
the real story. This is not a change set that will ever make it into
production, it isn’t integrated with other branches yet. So even if all your
tests pass on this branch, some may fail when you merge. It is slow because
you are now building and running everything at least twice for a given commit.

Copied from post.

~~~
dcldcl
Yeah, I think this is the problem with the characterization "but this branch
is telling lies." It's not; running tests reveals the state of the branch. And
the coder also might have some idea of how far away that branch is from
merging to master.

Let's consider it this way:

Common starting point A, current master, where everything passed already.

Two people are working on their own mostly independent features on branches:
B1 and B2.

So which branch becomes the next master? Well, it depends on which of the two
finishes first.
If B1 tests their branch and they are the first to merge, well there's no
double work. They tested what would be the next master.

Say B2 finishes next. If they run their tests now, they have a good idea if
their own changes are the problem or not. If they wait to rebase and then
test, they now need to consider two causes: (1) their own changes and (2) the
combination of B1 and B2 changes.
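The A/B1/B2 scenario can be sketched in git (a minimal sketch; branch and
file names are invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Common starting point A, the current master where everything passes.
git init -q -b master repo && cd repo
git config user.email dev@example.com && git config user.name Dev
echo base > app.txt && git add app.txt && git commit -q -m "A: starting point"

# Two mostly independent features branch off A.
git branch b1 && git branch b2

# B1 finishes first: testing b1 here *is* testing the next master.
git checkout -q b1
echo one > f1.txt && git add f1.txt && git commit -q -m "B1 feature"
git checkout -q master && git merge -q --ff-only b1

# B2 finishes next. Before rebasing, its tests cover only A + B2;
# after rebasing onto the new master they cover the B1 + B2 combination too.
git checkout -q b2
echo two > f2.txt && git add f2.txt && git commit -q -m "B2 feature"
git rebase -q master
git log --oneline master..b2   # only B2's own change
git log --oneline b2           # now includes B1's commit as well
```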

Sure, for a small enough change and a good enough set of test cases, the
combination of B1 and B2 won't be hard to figure out. But then the situation
already is that the team knows how to build big features out of many small
changes. I would say that's not the majority of the developers out there, IME.

Whether build and run time makes this slow really depends... We've moved over
to more test-oriented coding because the automated tests help us do better
than manual testing. A collection of tests built up over time tracks the
gotchas/corner cases/etc.

If the developer were better at catching bugs without these test cases, then
we would still be doing it manually. But I would rather wait 3 hours for my
automated tests than spend 3 hours double/triple checking my code.

The specifics do matter; for some codebases the approach in the article may
be right. But building and running something twice may not be the wrong
thing either. I think there's a failure to realize there's an engineering and
process design/compromise to be made...

