
What Is Proper Continuous Integration? - markoa
https://semaphoreci.com/blog/2017/03/02/what-is-proper-continuous-integration.html
======
TurboHaskal
Someone introduces $buzzword.

As software is a popularity-contest industry driven by hype and
career-building blog posts, now everyone needs to do $buzzword so that they
can play SEO with their resumes. This is a highly competitive market, after
all, and you gotta put food on the table.

smug-faced guru enters the scene: "Oh, it looks like you're not actually
doing proper $buzzword. Shame on you for raising your hand!"

Now you're trapped. You either a) do what the guru says or b) remove $buzzword
from your resume before you kiss your children goodbye and prepare for your
new homeless life.

------
xyzzy123
Well, a major problem in many organisations is the perceived sensitivity of
source code, and therefore a reluctance to outsource tasks which rely on
access to it.

I like to think "we'll get there".

------
penetrarthur
Well, what if the CI is first building an Xcode project, then deploying it to
a device, and then running the tests? 10 minutes?

------
jpalomaki
More important than build speed is the confidence that code which passes the
build really works.

I think one big reason why organizations don't integrate and deploy often is
that people are afraid they will break something when they do this. Then you
end up pushing this big and scary thing forward and forward. This of course
does not take away any of the risks, but at least you need to close your eyes
and wish for the best only once a month.

------
mazlix
I really don't think build time is very important. With a separate CI server
it doesn't much matter if it takes an hour or <10 minutes for a build to
complete; the gain for me is that it's no longer "your" time. It's some other
server that's doing the work with a saved state (master).

I'll typically just merge into master on GH, let the CI server take it from
there, and not bother checking the site manually when the build is complete;
it's not necessary. Our QA and CI process handles that.

I consider it a major advantage (benefits far outweigh risk) that I can
typically merge my ticket into master, close my laptop and head home.

------
pokemongoaway
Quite a lot of words in the comments and article, but not the word
"automation" for some reason :P

------
skiplecariboo
Is it now a thing to put your CSS at the end of the page?

------
draw_down
What exactly is the significance of 10 minutes? Who came up with that part?
These kinds of purity tests can get a little silly, I think.

~~~
jdlshore
The "ten-minute build" is an idea from Extreme Programming, and you can find
it in Martin Fowler's article on continuous integration[1]. (That's probably
where OP got it from.) The reason, as explained in Fowler's article, is to
provide rapid feedback. So the person who came up with it is Kent Beck.

[1]
[https://www.martinfowler.com/articles/continuousIntegration....](https://www.martinfowler.com/articles/continuousIntegration.html)

------
__mp
I think this really depends on the code base. If I had a website where a
change would trigger more than 5 minutes of integrating and testing I would go
mad.

I'm working with a weather model and it takes around 1 hour to build and test
everything: We build against 2 super computers, 3 different compilers, 2
different architectures (CPU, GPU) and single and double precision. Building
alone takes 30 minutes (more in some cases). For testing we reserve one node
with 8 GPUs and 9 CPU cores on one machine and two nodes with 1 GPU each in a
30-minute debug slot.

With a pull-request based workflow we are able to push to master multiple
times a day. 40 minutes might be achievable by massively revamping our build
mechanism and getting Jenkins to store gigabytes of build artifacts for each
run. However, I do not think it is worth it, because changes can take multiple
days, or in some rare cases months, to implement. If people need to wait an
hour for their tests to validate, that's not so bad.
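For illustration, the matrix described above (3 compilers x 2 architectures x
2 precisions) multiplies out quickly. A minimal sketch — the compiler names
and the `make` invocation are placeholders, not the actual setup:

```python
import itertools
import subprocess

# Hypothetical build matrix: the names below are placeholders,
# not the real toolchain from the comment above.
compilers = ["gcc", "cray", "pgi"]     # 3 different compilers
architectures = ["cpu", "gpu"]         # 2 architectures
precisions = ["single", "double"]      # single and double precision

def build(compiler, arch, precision):
    """Run one build configuration; returns True on success."""
    cmd = ["make", f"COMPILER={compiler}", f"ARCH={arch}",
           f"PRECISION={precision}"]
    return subprocess.run(cmd).returncode == 0

# 3 x 2 x 2 = 12 configurations, which is part of why a full run
# takes on the order of an hour.
matrix = list(itertools.product(compilers, architectures, precisions))
```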

~~~
maxxxxx
Our tests can take hours to days and need multiple hardware configurations. I
am already happy if all tests run once a week or even once a month. In the
past a lot of tests would be run maybe once or twice during a year-long
release cycle, so once a month is already huge progress.

~~~
__mp
I think this would be too expensive for us. Why do they take so long? What
area are you working in? Would it make sense to scale down the tests to a
sensible size and do the longer tests over the weekends?

------
SideburnsOfDoom
> As with all ideas, everybody does their own version of it in practice.

Yes.

> Continuous integration (CI) is confusing.

At core, no. Not really. See the 1st paragraph of
[https://en.wikipedia.org/wiki/Continuous_integration](https://en.wikipedia.org/wiki/Continuous_integration)
:

> In software engineering, continuous integration (CI) is the practice of
> merging all developer working copies to a shared mainline several times a
> day.

So the 2 words are defined as follows:

Continuous: at least several times a day.

Integration: merge changes to master.

It's not a tool: it's a practice, a workflow that builds, tests and tools can
help you do safely.

Some of these "CI Server" tools also support "build and test the branches"
which is useful but is not CI. This is a common mistake.

See also:
[https://trunkbaseddevelopment.com/](https://trunkbaseddevelopment.com/)
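Stripped of tooling, the practice is just a short loop each developer runs
several times a day. A rough sketch — the `make test` target and the
remote/branch names are assumptions, not prescriptions:

```python
import subprocess

def run(cmd):
    """Run a shell command, raising if it fails."""
    subprocess.run(cmd, shell=True, check=True)

def integrate():
    """One 'integration': sync with mainline, verify, share.
    Done several times a day, this loop is CI itself --
    a server merely automates the verification step."""
    run("git pull --rebase origin master")  # take in everyone else's work
    run("make test")                        # prove the combined state works
    run("git push origin master")           # share your changes with the team
```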

~~~
UK-AL
As long as those branches are merged in daily, it's still CI.

Your working copy is just an informal branch. Sticking it in a feature branch
while you're working on it is just convenience.

~~~
SideburnsOfDoom
Yes. I meant that "build and test the branches" is not _itself_ CI. But it can
be a useful practice before merging in a CI workflow. Or it can be a painful
set of long-lived branches.

But some people have the idea that the build and test of any branch, i.e. the
automated production of a binary that passes checks, is the "integration"
itself. Look out for statements like "We do CI of branches on our CI server".

------
lucaspiller
For those using Selenium or something similar, do you have any tips for making
test suites fast? Usually I just test the main flows with Selenium, then go
more in-depth with unit tests where parts of the application are mocked out,
but it's a trade-off between accuracy and speed.

~~~
cstejerean
That's a good trade-off though, and not just for speed. Selenium tests are
more difficult to write and take more effort to maintain. You should
definitely test as much of your application as you can with unit tests and
integration tests. The end to end tests with something like Selenium should
then focus on the type of bugs that you couldn't catch with the lower level
tests.

I've seen situations where people had Selenium tests for every possible
validation error on a form. A lower-level test would have been sufficient to
cover all validation permutations and make sure they return the right error
message, and then a single high-level test can ensure that when an error
message is returned it is properly rendered on the screen.
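The split described above can be sketched roughly like this; `validate_email`
and `render_error` are hypothetical stand-ins for real application code, not
anything from the thread:

```python
# Hypothetical validation function standing in for real form logic.
def validate_email(value):
    if "@" not in value:
        return "Email address must contain '@'."
    return None  # no error

# Many cheap low-level tests cover the validation permutations...
def test_validation_permutations():
    assert validate_email("user@example.com") is None
    assert validate_email("not-an-email") == "Email address must contain '@'."

# ...while a single high-level test (rendering simulated here) checks
# that an error message, whatever it is, actually reaches the page.
def render_error(message):
    return f'<div class="error">{message}</div>'

def test_error_is_rendered():
    assert "must contain" in render_error(validate_email("oops"))
```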

------
marcv81
This is an okay and mostly correct introduction for anyone who has never heard
of CI, but lacks any sort of depth. The author sounds like he just discovered
the concept and wants to share it, but lacks practical experience.

~~~
markoa
Hi, author here. I do come from a team making a CI service, and it took us a
long time to realize that it'd be useful to have a time-based hard limit for
what's good enough, and that for anything over it you're obliged to take
action to make it better.

E.g. one of our own builds took longer than 10 minutes at some point. You
just assume that it's a function of the size of the code and that it's normal
for it to rise over time. So my goal is to share that idea & why I think it
matters, and get feedback. :)
feedback. :)

~~~
alkonaut
How is the hard limit enforced? By the CI tool as a failed build?

It would be perfect if one could somehow keep the build quick using tools. A
hard limit on a _build_ might not be the best way. If the test suite is taking
the limit minus one millisecond, and I add one new test that takes one
millisecond, I break the build. But the real culprit was yesterday's commit,
which added a test that takes 9 minutes. The worst thing one can do in CI is
flag the build as "not good enough" and point the blame the wrong way.

A tool that treated test quality/perf like any other asset and failed builds
based on time limits etc. would be perfect. It's really hard to do that,
especially on shared CI servers, because of fluctuating performance.
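One way such a tool could assign blame: track per-test durations and, when
the suite exceeds its budget, report the biggest contributors instead of
failing whoever committed last. A sketch with entirely made-up timings:

```python
# Hypothetical per-test timings in seconds -- made-up numbers
# illustrating the scenario above.
timings = {
    "test_parser": 30.0,
    "test_api": 29.999,
    "test_yesterdays_addition": 540.0,  # the 9-minute test from yesterday
    "test_new_feature": 0.002,          # the tiny new test that tipped the total
}

LIMIT = 600.0  # 10-minute budget for the whole suite

def slowest_offenders(timings, limit, top=3):
    """If the suite exceeds its budget, name the biggest time sinks
    rather than blaming whichever commit crossed the line."""
    if sum(timings.values()) <= limit:
        return []
    return sorted(timings, key=timings.get, reverse=True)[:top]
```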

~~~
markoa
Yeah, sorry for not being clear there: by hard limit I didn't mean that CI
should fail the build, but that the number should be a fixed threshold for
the team to optimize for. Thanks for your feedback.

------
verytrivial
> "Of course our build takes long, we have over 10,000 lines of code!"

Well, my project misses the 10-minute target by a factor of four, but it does
include tests and covers 670 kloc of C++. I claim "continuous" in this
context, thanks very much.

~~~
marcv81
Not bashing at all (you are in a somewhat better place than my own
organization), but a point could be made that you could break the code down
into subprojects and manage dependencies.

~~~
verytrivial
Indeed it could, and you're looking at the result, down from 2.5h-ish when I
arrived. Our pipeline runneth deep ... (Though of course we'll never stop
whittling!)

~~~
marcv81
Aw man!

