
CI/CD Is Not a Progression - kiyanwang
https://hackernoon.com/ci-cd-is-not-a-progression-86ebc896571b
======
arrow64
> Once your test suite is bulletproof [...] you can deploy code using
> continuous delivery techniques

I see this line of reasoning a lot in the industry, and I think it misses a
key piece: monitoring/DevOps.

Unlike the author, I think there is a clear progression from CI to CD. Where
most teams stumble is that they are not prepared for the shift toward
deploying new code regularly: measuring outcomes in production, running
experiments, and validating assumptions against real traffic.

Developers tend to think their job is done once their PR merges, and therefore
teams struggle to reap the benefits of continuous deployment.

~~~
troupe
Agreed. If continuous delivery means you can release your software at any
time, then that seems to assume you have integrated all the pieces of your
software. If continuous deployment means automatically deploying every passing
build, it seems to assume that when you get a passing build, you are in a state
where you can deploy. So continuous delivery depends on continuous integration
and continuous deployment depends on continuous delivery.

I suppose you could just deploy stuff to prod when the build finishes without
being in a state where you can safely deploy. Well, now that I think of it, I
have seen places that do that, but it never turns out well.

~~~
jwatte
Continuous deployment means you are never in a state where you cannot deploy.
Yes, this requires significant effort. API agility, schema agility, cache
agility, ... Lots of things need to be solved! (Imagine storing a serialized
cached entity from a later version, and getting a cache hit in the
concurrently operating previous version, for example.)
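
One mitigation is to version every serialized cache entry so each running
version can recognize entries it cannot read. A minimal sketch (all names
illustrative):

    import json

    CACHE_SCHEMA_VERSION = 3  # illustrative; bump whenever the shape changes

    def to_cache(entity: dict) -> str:
        # every writer stamps its own schema version on the entry
        return json.dumps({"v": CACHE_SCHEMA_VERSION, "data": entity})

    def from_cache(blob: str):
        payload = json.loads(blob)
        if payload.get("v") != CACHE_SCHEMA_VERSION:
            # written by the concurrently running other version:
            # treat it as a cache miss instead of deserializing garbage
            return None
        return payload["data"]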

It's similar to how continuous integration means your master is always stable
(all builds work and all tests pass).

If you can't do a 1% (or 10%) canaried deploy, you're not yet ready for
continuous deployment.
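
The routing piece of that can be tiny; a sketch of deterministic per-user
bucketing (all names hypothetical):

    import hashlib

    def in_canary(user_id: str, percent: float = 1.0) -> bool:
        # hash the user id so each user lands in a stable bucket across requests
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10000
        return bucket < percent * 100  # percent=1.0 -> 100 of 10,000 buckets = 1%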

------
perlgeek
I mostly agree, but I also want to point out a piece that the author does not
discuss: you shouldn't just make deployments frictionless, you should also
make them frequent.

If you do deploy often, the code deltas between two deployments are smaller,
and so it's often much easier to pin down the source of bugs.

~~~
eckza
> If you do deploy often, the code deltas between two deployments are smaller,
> and so it's often much easier to pin down the source of bugs.

This is the real win for continuous deployment. Smaller and more frequent
release deltas are the real source of systemic improvement; CD is just a
methodology for facilitating this.

------
gregmac
> Hiding in-progress work behind feature flags — which is often part of
> continuous deployment

There are many ways to do this, of course, but something I've found works well
is just making it frictionless to deploy feature branches to a "dev"
environment. You can test changes (especially those that integrate with other
systems/services) but still keep everything isolated.

This is also great because, when applicable, you can take screenshots/GIFs of
the new feature actually running (not just on your machine) for the pull
request.
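
The frictionless part can be as small as mapping a branch name to its own
throwaway environment; a sketch, with the actual deploy command left as a
placeholder:

    import re
    import subprocess

    def deploy_feature_branch(branch: str) -> str:
        # one disposable "dev" stack per feature branch, e.g. dev-add-login-page
        env = "dev-" + re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")
        # placeholder for whatever deploy tooling you actually use
        subprocess.run(["./deploy.sh", "--env", env, "--ref", branch], check=True)
        return env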

~~~
matthewmacleod
That helps, but in my experience it’s much more useful to be able to decouple
deployment from _delivery of features_.

Want to change the layout of a page? Normally this would require at least a
little coordination with, say, customer support or marketing teams. We don’t
want to hang around waiting for schedules to line up, so it’s often easier to
simply deploy the feature behind a flag - it means it can, for example, be
rolled out to some customer cohorts before others (beta testers?), easily
rolled back, made live by the product team rather than engineering - that sort
of thing.

Though of course a dev environment is _also_ incredibly important!
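
A minimal sketch of that kind of flag check, with a cohort short-circuit ahead
of a percentage rollout (the flag store and all names are invented):

    import hashlib

    # hypothetical in-code flag store; in practice this lives in a config service
    FLAGS = {"new_layout": {"cohorts": {"beta_testers"}, "percent": 10}}

    def is_enabled(flag: str, user_id: str, user_cohorts: set) -> bool:
        cfg = FLAGS.get(flag)
        if cfg is None:
            return False  # unknown flags default to off
        if user_cohorts & cfg["cohorts"]:
            return True   # e.g. beta testers see the feature first
        # stable percentage rollout for everyone else
        bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < cfg["percent"]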

~~~
gregmac
That's fair... it really depends on the situation. Too many feature flags can
get complicated in the same way too many open, long-running branches can. It's
nice to have the option of either. Feature flags, I think, are a bit easier to
forget about and leave in the code; branches are obtrusive, and you constantly
have to merge new code in to keep them updated, pushing devs to get rid of
them ASAP.

All my work lately has been done in branches, which then merge to master, and
master gets deployed to production as needed: sometimes immediately with a
single bug/feature, sometimes several bundled together after a week or two.
Depends on severity, other work occupying the team (especially QA), and which
software/system(s) are affected (e.g. core software, infrastructure, or
internal tooling). I am pretty sure we've only used feature flags in the core
software, and only a couple of times. Of course, that software is hybrid
cloud/on-premise, and as such we have strong version numbering for every
release, so features/changes naturally tie to the customer-facing release
numbers.

------
swframe2
If you go the continuous delivery route, I think you will need a way to canary
a pending release. I also think you need a really good production monitoring
solution that can trigger a rollback to the previous release automatically
(assuming you have a way to mark a release as "appropriate to automatically
roll back").

The number of tests you need to automatically release "bulletproof" code is
staggering. In one successful example, I noticed there were about 10k tests
that took about 30 minutes to run (before the code could be checked in). I
would think you need to use TDD to feel confident that your tests are good
enough.

The other aspect of CI/CD that bugs me is the dramatic increase in complexity.
I thought KISS and Occam's razor would kill the CI/CD efforts. Moving to
weekly releases seems a lot simpler, and I doubt most teams need to release
changes much faster than that. With CI/CD, if you have several components and
lots of shared code, then managing which component to update, given dozens of
changes affecting different components, is a nightmare. Unfortunately, once
you start using CI/CD, there is no going back (often for political reasons).

~~~
matthewmacleod
_The other aspect of CI/CD that bugs me is the dramatic increase in
complexity. I thought KISS and Occam's razor would kill the CI/CD efforts._

I’m honestly really surprised to hear that view. If anything, I’ve found it
much simpler. For some insight as to how my current team works:

- feature development happens on git branches
- all pushes to GitHub are automatically tested on CI, built, and produce a
deployable artifact
- any artifact can be deployed to a fully-featured staging environment
- builds from the master branch are automatically deployed to production
- the product owner is responsible for acceptance testing the feature in the
staging environment
- a PR into the master branch is raised for code review
- when this branch is merged, after product owner approval, the feature is
live.

It’s delightfully simple and entails no release schedule. Large features can
be placed behind feature flags, which lets other teams activate features for
user groups as required.

Honestly I can’t imagine going back to releases ever again.

~~~
rev0lutions
KISS and Occam's razor?

~~~
swframe2
KISS = "Keep It Simple Stupid"
https://simple.wikipedia.org/wiki/KISS_(principle)

Occam's razor
https://simple.wikipedia.org/wiki/Occam%27s_razor
Suppose there exist two explanations for an occurrence. In this case the
simpler one is usually better. (I was trying to say "If there are two ways to
release a product, the simpler way should be better".)

------
jwatte
He's missing the third foundation: have sufficiently good monitoring to
support your frictionless deployments with an immune system that can
automatically roll back deploys that seem questionable, and that makes it easy
to diagnose the cause of changes.

~~~
perlgeek
I'm curious, do you know any software that specializes in this kind of thing?
Or is that something that everybody cobbles together, possibly using an
event-based automation tool such as StackStorm?

------
eadmund
> Continuous Integration: “a software development practice where members of a
> team integrate their work frequently, usually each person integrates at
> least daily,” says Martin Fowler. It generally involves running automated
> tests that run every time a patch is merged (“integrated”) into the
> repository’s main branch of code.

I don't think it's about integrating patches into one piece of software, but
rather about integrating the patched software into the entire software system
in which it lives. So if you have three pieces of software (A, B & C) which
are deployed together, they'll all live in the same repo (because multiple
repos are almost always a mistake); each will have its own unit tests (because
each piece of software is a unit); and then you'll have integration tests
which test how well they play with one another, particularly when other folks
are changing different parts of A, B & C at the same time.
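
A toy illustration of the distinction, with A and B shrunk down to two
functions (everything here is made up):

    # piece A: produces records in the shared format
    def a_produce() -> dict:
        return {"id": 1, "name": "widget"}

    # piece B: consumes records produced by A
    def b_consume(record: dict) -> str:
        return "ok" if "id" in record else "rejected"

    def test_a_alone():      # unit test: A as its own unit
        assert a_produce()["id"] == 1

    def test_a_feeds_b():    # integration test: the seam where A and B meet
        assert b_consume(a_produce()) == "ok"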

Fowler's article[0] reads like it could go both ways. I guess if you see any
system of multiple processes as a single piece of software then it's much the
same thing.

[0] https://martinfowler.com/articles/continuousIntegration.html

------
scarface74
We have a slowly evolving CI/CD process with a microservice architecture.

1. We always commit to the master branch, where automated unit tests used to
run. We threw out all of our unit tests because of some design mistakes that I
made as a first-time dev lead. We were too dependent on brittle mock-based
tests when we should have used the more functional approach (no-dependency,
no-side-effect business classes) that we started using later.

2. We have an integration environment set up where everything is deployed;
its purpose is to run automated integration tests, but we haven't gotten
around to that yet.

3. Every push goes to Dev, but we don't want every push to go to QA, so the
devs have to do a post-deployment approval before a build gets into the QA
queue.

4. QA has to do a pre-deployment approval when they are ready to test a
build. After they test, they do a post-deployment approval.

5. Since we have a mandate that each developer is responsible for getting
their own code to production, they have to work with the product owner and
coordinate when it should be released to UAT. Then the developer does a
pre-deployment approval for it to go to UAT.

6. Once it is approved by the product owner and we get change control
approval, I as the dev lead and my manager both have to do a post approval.

7. Then the developer does a pre-deployment approval and it goes to
production.

Yes, it sounds convoluted, but people above my pay grade make those decisions
(once the avalanche has started, the pebbles don't get to vote). But approvals
are done on a simple Visual Studio Team Services web page accessible from
anywhere, and once done, deployments are automatic.

We also don't explicitly version code. We tag each built executable with the
Git commit hash and that hash is logged as a property with each log statement.
We use Serilog for structured logging.
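
The same idea works in most logging stacks; here is a sketch with Python's
stdlib logging rather than Serilog, assuming the commit hash is injected into
the environment at build time:

    import logging
    import os

    GIT_COMMIT = os.environ.get("GIT_COMMIT", "unknown")  # stamped in at build

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s commit=%(commit)s %(message)s")
    log = logging.LoggerAdapter(logging.getLogger("app"), {"commit": GIT_COMMIT})

    log.info("handled request")  # every line now carries the deployed revision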

We can roll back just by redeploying a previous release. If we need to make a
change to a previous "version", we just branch based on the logged git hash.

~~~
avinium
> 1. We always commit to the master branch, where automated unit tests used
> to run. We threw out all of our unit tests because of some design mistakes
> that I made as a first-time dev lead. We were too dependent on brittle
> mock-based tests when we should have used the more functional approach
> (no-dependency, no-side-effect business classes) that we started using
> later.

Can you elaborate a bit further? I'm trying to make my design/test suite more
robust at the moment and it would be good to hear from others who have been
through the same process.

~~~
scarface74
We were using a dependency injection framework to do constructor injection.
All of our apps basically have one job, to either take external data from
somewhere and map it to a common model and send it to Mongo through an API, or
send it somewhere from the common model to an external source (a Master Data
Model system).

Of course, all of the parts of the main process that had external
dependencies, like API calls and database calls, were injected. While other
people were writing the code, I was constantly changing the dependency chain
on the core routines they depended on; I was changing the wheels on a moving
train.

This wasn't an issue with the actual program, because we were using a DI
framework: all of the dependencies were being wired up automatically at the
composition root (http://blog.ploeh.dk/2011/07/28/CompositionRoot/).

But mocking out everything was painful, and if the dependencies changed
anywhere within the framework, the tests broke - they wouldn't compile. On top
of that, while you can add optional arguments to a method and it won't break
preexisting code, it will break your mock setup (long story).

Just as it is a well-known anti-pattern to use DI as a service locator
(http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/), so is the
overuse of mocks.

Moving to a more functional model means we have a service class where you
still inject your dependencies, but our classes that have business logic are
completely functional - they take in data and return data. The service class
is responsible for the orchestration of getting the data, processing the data,
and sending the data.

The mapping classes also expose events that the service class can subscribe to
if it needs to do something during the processing of each transaction like
logging.

Now the only classes we need to unit test have no dependencies and no mocking
(https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a).
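
As a sketch of that shape (all names invented): the business logic is a pure
function you test directly, and the service class owns the injected I/O:

    # pure business logic: data in, data out - unit tested with zero mocks
    def map_to_common_model(external: dict) -> dict:
        return {"id": external["ExternalId"], "name": external["Name"].strip()}

    def test_mapping():
        assert map_to_common_model({"ExternalId": 7, "Name": " x "}) == \
            {"id": 7, "name": "x"}

    # service class: injected dependencies, orchestration only, no business logic
    class SyncService:
        def __init__(self, fetch, send):
            self.fetch = fetch  # e.g. the external API client
            self.send = send    # e.g. the Mongo-backed API
        def run(self):
            for record in self.fetch():
                self.send(map_to_common_model(record))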

But we also found that our biggest issues aren't regressions within our
individual programs, but in the integrations between them - thus the need to
build up our automated integration testing environment.

------
Chiba-City
I don't understand the appeal. We used to call this "slipstreaming" changes. I
understand a variety of methodologies going back decades. Is this for content
or entertainment applications where users are victims and data integrity is
not sacrosanct?

My own model for software is tooling that enjoys infrequent, highly stable
releases, priorities on backward compatibility, and airtight, accurate
documentation. Where exactly is rapid incremental code delivery at such a
premium? Why are these articles not contextualized to domains?

~~~
overgard
I think somehow the idea that you should keep your main branch
buildable/shippable at all times (a good thing imo) evolved into: since it is
shippable, it should be shipped automatically (which is, in my opinion, really
suspect).

I'm in the same boat though; I don't get the point of CD for most
applications other than to prove that you can do it (but other than impressing
other engineers with your team's apparent discipline: who cares?).

~~~
jwatte
A minimal code delta means isolating any bug is much easier than if you have
to read through six months of patches.

Ability to get user or system feedback on code today, rather than in three
months, is often quite helpful. That is, if your company depends on listening
to customers or studying customer behavior. (I know of no company that
doesn't, though, except monopolies like Comcast.)

~~~
overgard
> A minimal code delta means isolating any bug is much easier than if you
> have to read through six months of patches.

Sure... which is why your QA department should test things early. (You're not
making your users into QA... are you?)

> Ability to get user or system feedback on code today, rather than in three
> months, is often quite helpful.

Users-as-beta-testers can work in some contexts, but there are a lot of
fields where doing that is a very bad thing. Experimenting on your users is,
at best, a pretty nuanced business decision. And again: you can get feedback
internally, or by recruiting testers, or by having a QA department. I do game
development. We do playtests when we're trying to figure out if something is
a good idea, and we have QA to track the bugs. CI is very useful for
coordinating with QA on things, but god help us if we started shipping
everything checked in. Not every feature is a winner.

Not saying CD can't be useful in some contexts, but I remain unconvinced it's
anything but a niche need. CI is pretty universally useful, but CD? Really
depends on what you're shipping and who you're shipping it to.

~~~
joshuamorton
There are lots of things that QA can't test well. Some bugs can only be
detected at scale. If you push every feature immediately, it's also very easy
to roll back a bad feature immediately. And you can notice and deal with
performance issues much more easily than with weekly or monthly releases.

