
The Difference Between CI and CD - jpdel
https://fire.ci/blog/the-difference-between-ci-and-cd/
======
hinkley
> The scenario we want to avoid is that a faulty commit makes it to the main
> branch.

Close. The scenario we want to _minimize_ is faulty code on the main branch.
As your team grows and the number of commits goes up, it becomes a game of
chance. Sooner or later something will get through. The more new teammates you
have, the more often that will happen.

This is an inescapable cost of growth. The cost of promoting people to
management. The cost of starting new projects. Occasionally you can avoid it
as a cost of turnover, but you will have turnover at some point.

What matters most is how long the code is "broken" (including false positives)
before it is identified, mitigated, and fully corrected. The amount of work
you can do to keep these numbers relatively stable in the face of change is
profound.

If you insist on no errors on master ever, you will kill throughput. You will
create situations where the only failures are _big_, which is neck deep in
the philosophy that CI rejects: that problems are to be avoided instead of
embraced and conquered.

~~~
gregdoesit
> If you insist on no errors on master ever you will kill throughput.

Unless you solve this engineering problem with tooling. At Uber, the full-
blown CI mobile test suite takes over 30 minutes to run on a development
machine (linting, unit tests, UI tests - most of that time going to the long-
running UI tests, which are specific to native mobile). So we only do
incremental runs locally, and have a submit queue, which parallelises this
work and merges into master only the changes that don't break the build. And
we have one repository that hundreds of engineers work on.

It’s not an easy problem and the solution is also rather complex, but it keeps
master green - with the trade-off of having to build and maintain this system.
See it discussed on HN a while ago:
[https://news.ycombinator.com/item?id=19692820](https://news.ycombinator.com/item?id=19692820)
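The core of a submit queue like this can be sketched as a loop that tests each
pending change against the current tip of master and lands only green results.
This is a minimal, serial sketch under my own assumptions (the real system
parallelises and speculates); `run_tests` and `merge` are hypothetical stand-ins
for the actual test suite and rebase machinery:

```python
# Minimal submit-queue sketch: a change lands on master only after its
# rebased result passes the test suite. Failed or conflicting changes
# are sent back to their authors instead of breaking the trunk.

def submit_queue(master, queue, run_tests, merge):
    """Process pending changes in order; advance the tip only on green."""
    landed, rejected = [], []
    for change in queue:
        candidate = merge(master, change)   # rebase onto the current tip
        if candidate is not None and run_tests(candidate):
            master = candidate              # tip advances atomically
            landed.append(change)
        else:
            rejected.append(change)         # conflict or red tests
    return master, landed, rejected
```

The invariant this buys you is that master is always a state that has passed
tests; the cost is that every change pays the test-suite latency, which is why
the real system parallelises the runs.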

~~~
pojzon
How do you handle situations like this: multiple developers added merge
requests to the queue, and the changes they made are mutually exclusive
(automatic rebase won't work). What happens when the first branch gets merged
to master and the next 10 are still in the queue? How do you mitigate that to
shorten the development cycle?

Let's just say in my company it also takes 30m to run tests, and 4h to run
them on the merge pipeline with FATs and CORE tests. It's way too long and
severely cripples productivity.

~~~
gregdoesit
A lot of the comments below touch on things we do (verifying that changesets
are independent, breaking tests into smaller pieces, prioritising changes that
are likely to succeed). They add up, and the approach does become more
complex. We wrote an ACM white paper with more of the details[1]. It's the
many edge cases and several optimisation problems that turn this into an
interesting theoretical and practical problem.

[1]
[http://delivery.acm.org/10.1145/3310000/3303970/a29-ananthan...](http://delivery.acm.org/10.1145/3310000/3303970/a29-ananthanarayanan.pdf?ip=213.208.239.146&id=3303970&acc=TRUSTED&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2EE47D41B086F0CDA3&__acm__=1575290605_b9398003215d133482dd4a9255ce1f17)
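One of the techniques mentioned above, verifying that changesets are
independent, can be approximated with a simple heuristic: changes that touch
disjoint sets of files are unlikely to conflict, so their test runs can
proceed in parallel instead of serially through the queue. A hedged sketch of
that batching step, with my own hypothetical data shapes (the paper's actual
analysis is more sophisticated):

```python
# Greedily batch changes with non-overlapping file sets. Within a batch
# no two changes share a file, so the batch can be tested concurrently;
# overlapping changes fall into later batches and run after the first.

def independent_batches(changes):
    """`changes` is a list of (change_id, set_of_touched_files).

    Returns a list of batches; each batch holds mutually independent
    changes (by the file-overlap heuristic).
    """
    batches = []
    for change_id, files in changes:
        for batch in batches:
            # Join this batch only if we conflict with none of its members.
            if all(files.isdisjoint(f) for _, f in batch):
                batch.append((change_id, files))
                break
        else:
            batches.append([(change_id, files)])  # start a new batch
    return batches
```

File overlap is only a heuristic for independence (two changes can conflict
semantically without sharing a file), which is part of why the edge cases make
this an interesting problem.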

~~~
pojzon
Sorry, but that link points to "not found" page.

------
bump64
I have been using Azure DevOps for the past couple of years and they have
kind of nailed it. You have Builds, which do most of the CI, and Releases,
which can be fine-tuned to do complex deployments and complete the CD story.
You can then set them as a requirement for pull request approval to the main
branch, which helps guarantee a healthy trunk.

I don't agree that CI is a team problem and CD is an engineering problem. If
you are following infrastructure-as-code principles, it is everyone's problem,
because if you don't specify how your new feature should be deployed, it will
break the CI and CD pipelines and you won't be able to merge it.

~~~
jpdel
Also using Azure DevOps and it is indeed very well structured.

As for the CI/CD differences: how many commits actually affect both code and
infrastructure? I think this is part of the engineering problem at the end of
the day.

------
danpalmer
I wish more CI/CD services understood this.

We recently moved from Jenkins to CircleCI, and while the PR experience has
improved dramatically (no queueing, faster builds), the _release_ process is
far worse.

The reason seems to be that CircleCI just treats CD as CI. In reality doing CD
requires high correctness, care, and nuance.

For example... with CircleCI there's no way to ensure that you release your
code in the correct order other than to manually wait to merge your code until
the previous code has gone out. That's not _continuous_. This is a very basic
requirement.

So perhaps they are not the CD service they pitch themselves as? That would
mean deploys are manually triggered, then? Nope, there is no way to manually
trigger a build.

I wish this were an isolated example, but I've yet to see a CI/CD service
that makes it easy to build fast, correct deployments. Jenkins is correct but
not fast or easy, Circle is fast but not correct, and most others I've used
are none of these at all.

------
coderinsg
Well written. The line between CI and CD has been blurred, especially since
they're commonly mentioned together. Many can't tell the difference.

~~~
hinkley
Few things are as aggravating to me as people who say they understand CD but
then manage to avoid practicing any of the tenets of CI.

Automated builds are the smallest part of CI. Necessary, but drastically
insufficient. If that's all you're doing you've missed the forest for the
trees.

~~~
tomxor
> Few things are as aggravating to me as people who say they understand CD
> but then manage to avoid practicing any of the tenets of CI.

It is completely reasonable to utilise one without the other; not all
projects are giant multi-author efforts trying to wrangle commits.

For instance, if you have lots of small projects being worked on
independently in parallel, with no more than one or two authors on a repo at
a time, CI is not going to be worth the investment... but CD still has its
uses.

~~~
jdlshore
Small-scale CI is trivial to set up. A build script and an integration VM are
all you need. If it's difficult, there are most likely hygiene factors in
your codebase that are worth resolving.

[https://www.jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html](https://www.jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html)
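The build-script-plus-VM approach boils down to one discipline: pull the
latest code, run the build, and push only when it's green. A minimal sketch
of that flow, with my own assumptions (the `git` commands and `./build.sh`
are illustrative stand-ins for your actual repo and build script; `run` is
injectable so the logic is testable without a real repo):

```python
import subprocess

def integrate(run=lambda cmd: subprocess.call(cmd)):
    """Pull the latest code, run the build, and push only if it passed.

    `run` executes a command and returns its exit code.
    """
    if run(["git", "pull", "--rebase"]) != 0:
        return False                      # couldn't update: stop here
    if run(["./build.sh"]) != 0:          # your single build script
        return False                      # red build: do not push
    return run(["git", "push"]) == 0      # green: integrate
```

On an integration VM this is typically serialised (one integration at a
time), which is what keeps the trunk healthy at small scale without any
dedicated CI product.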

------
s_Hogg
The title reminds me a lot of all those fatuous articles about the difference
between statistics and machine learning. This one is alright though - I wonder
how we got to lumping CI and CD together as is commonplace now.

~~~
jpdel
I think this is a business trend, unfortunately. Tools tackling the CI space
wanted a piece of CD, and then boom. Things became the same :)

------
devonkim
In SRE and DevOps land we’ve mostly had arguments over continuous deployment
vs continuous delivery, and have mostly let feature engineers decide how they
want to use the possible approaches and options available.

------
cottonseed
> Keep it short. 3-7 minutes should be max.

Who has a 3-7m CI build here?

~~~
smcleod
I would say that's a pretty reasonable estimate for microservice-architecture
applications and services. Of course, large legacy monoliths take longer, but
not more than, say, 15-20 minutes at most.

~~~
cottonseed
3m seems aggressive to do builds and spin up infrastructure for anything non-
trivial.

Reading a bit closer, I see the author describes CI as a sanity check,
"ensur[ing] the bare minimum" and doesn't consider deploying on every commit.
Maybe 3-7m is more realistic then.

However, I'm slightly surprised by this definition of CI. According to Fowler
[0], "Continuous Delivery is a software development discipline where you build
software in such a way that the software can be released to production at any
time. ... The key test is that a business sponsor could request that the
current development version of the software can be deployed into production at
a moment's notice." So having CI gates on the development version that are
weaker than the release tests would not seem to be continuous delivery
according to his definition.

We're currently releasing on every commit and our CI build (which implements
continuous delivery) takes about 15m.

[0]
[https://martinfowler.com/bliki/ContinuousDelivery.html](https://martinfowler.com/bliki/ContinuousDelivery.html)

~~~
colinchartier
Hey, I've been working on a CI tool that skips the "non-trivial" bits for
arbitrary Linux workflows, would love your feedback:
[https://layerci.com](https://layerci.com)

~~~
jpdel
Is this doing anything other than leveraging Docker's multi-layer caching?

------
agustif
I've been using [branchci](https://branchci.com) (free tier) recently to
build, lint, and deploy a WP site and have been very happy with it!

------
a_imho
_The process of Continuous Integration is independent of any tool._

This is one of my pet peeves: people using the term CI to refer to the
tooling. For me, this alone invalidates anything they have to say about the
subject.

