
Benefits of Continuous Delivery - henrik_w
https://henrikwarne.com/2017/11/19/benefits-of-continuous-delivery/
======
coldcode
I find continuous delivery to the mobile app stores to be rather silly and
wasteful. Updating your app every two weeks, for example, consumes vast
bandwidth, especially for people with automatic updates on. Changing apps at
such a quick pace also makes it unlikely customers will even notice changes or
be able to adapt to what's new or different. Being able to deliver quickly is
not the same as having it be automatically useful, just as being able to
easily add some new functionality is not the same as having that be useful or
desirable to the end user.

~~~
chaosphere2112
On the bandwidth end, both Android and iOS do use incremental updates ([1],
[2]); if the changes are something that you would be releasing eventually
anyway, you're not wasting any bandwidth, and are instead load-balancing it
over multiple payment periods.

[1]: [http://www.androidpolice.com/2016/07/23/new-play-store-tools...](http://www.androidpolice.com/2016/07/23/new-play-store-tools-help-developers-to-shrink-the-size-of-app-updates/)

[2]: [https://developer.apple.com/library/content/qa/qa1779/_index...](https://developer.apple.com/library/content/qa/qa1779/_index.html)

------
ryanbrunner
I like that this article doesn't focus too much on the technical aspects of
auto-deploys and CI. I've been in a lot of places where the concepts were
lumped together, and while we could have easily delivered software
continuously, we falsely believed we needed every last thing to be perfectly
automated before we did.

It's important that your deployment process is repeatable and simple (i.e.
typing one or two commands into a terminal), but even if a human still kicks
it off, that's a net positive over big timed releases.

~~~
jbattle
I agree that reaching perfect automation isn't critical - but one thing that
_is_ critical is to ensure that local uncommitted changes do not get deployed.
I've been burned a couple of times where changes from a developer's machine
ended up in production.

~~~
rhizome
How does that happen? The only way I can think of is when using a "copy local"
type deployment rather than a repository checkout, which is a pretty basic bug
in this kind of process that should be eliminated by the time "automation" is
a priority.
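
A cheap guard against that failure mode, sketched in Python (the function
names and error messages are mine, not from the thread): refuse to deploy
unless the working tree is clean and every local commit has been pushed.

```python
import subprocess


def tree_is_clean(porcelain_output: str, unpushed_log: str) -> bool:
    """True when both `git status --porcelain` and the unpushed-commit
    log are empty, i.e. the checkout matches the pushed remote state."""
    return not porcelain_output.strip() and not unpushed_log.strip()


def ensure_deployable(repo_dir: str = ".") -> None:
    """Abort the deploy if local state differs from what's on the remote."""
    status = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    unpushed = subprocess.run(
        ["git", "log", "--oneline", "@{upstream}..HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    if not tree_is_clean(status, unpushed):
        raise RuntimeError("refusing to deploy: local changes not on remote")
```

Deploying from a fresh checkout of the remote, as the parent suggests, makes
this check unnecessary; the guard is only for "copy local" style deploys.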

------
vsupalov
Great article! A tiny nitpick: the distinction between continuous delivery and
continuous deployment is that in the first case you _could_ deploy any time
you want, but the trigger is still pulled by a human. With continuous
deployment, everything is shipped to prod automatically, provided all
conditions are met.
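
That distinction can be sketched as a single toggle on an otherwise identical
pipeline (all names here are made up for illustration):

```python
# Both practices run the same build/test stages; they differ only in
# who triggers the final deploy step.

def build(changes):
    return {"changes": changes, "tested": False}

def run_tests(artifact):
    artifact["tested"] = True
    return True  # stand-in for a real test suite

def deploy(artifact):
    artifact["deployed"] = True

def pipeline(changes, auto_deploy, approve=lambda artifact: False):
    artifact = build(changes)
    if not run_tests(artifact):
        return "pipeline failed"
    # Continuous deployment: auto_deploy=True, the pipeline ships it.
    # Continuous delivery: auto_deploy=False, a human approves each deploy.
    if auto_deploy or approve(artifact):
        deploy(artifact)
        return "deployed"
    return "releasable, awaiting approval"
```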

If you want a quick overview - I gave a talk on the topic last week, and did
my best to concisely cover the most essential terms. Check out the slides for
a high-level view of CI/CD [1] and deployment pipelines in general [2] if you
want to learn more.

[1] [https://www.slideshare.net/VladislavSupalov/automated-testin...](https://www.slideshare.net/VladislavSupalov/automated-testing-environments-with-kubernetes-gitlab/12)

[2] [https://www.slideshare.net/VladislavSupalov/automated-testin...](https://www.slideshare.net/VladislavSupalov/automated-testing-environments-with-kubernetes-gitlab/17)

~~~
rhizome
Your "continuous delivery" definition sounds like CI to me.

------
zeroz
Good summary. One missing argument IMHO against continuous deployment: fear
of consequences in strongly regulated sectors, like finance and insurance.
Continuous delivery is highly encouraged, but for deployments I see strong
preferences to test everything (in some areas automation is still weak), and
therefore some bundling of deployments, or special release dates, is still
preferred. I think the costs of a bad reputation, or of being watched by
regulators because of failed or illegal 'transactions', are much higher in
these businesses than in e.g. retail, gaming, etc.

~~~
beat
And meanwhile, we have Equifax losing tons of valuable data due to a breach
caused largely by how slowly they deploy, and how difficult it is for them to
get rid of antiquated technology.

After many years in big enterprise, I've learned something important - the
_appearance_ of risk is more important than the _existence_ of risk.
Continuous delivery looks "risky". Slow, deliberate release cycles on a
quarterly or even yearly basis look "safe", because "testing".

In practice, those quarterly deployments have far too many changes embedded in
them all at once. Worse, teams race to get their features in under the
deadline, knowing it can be months before they'll get another chance, leading
to careless coding and inadequate testing. So, based on both my experience and
a little beyond-common-sense logic, slow release cycles are _more_ risky than
fast ones.

~~~
zeroz
I absolutely agree with you! These huge quarterly updates with last-minute
additions and changes ("otherwise we have to wait another three months"),
which large enterprises did or are still doing, are riskier than smaller
ones. Does this require jumping directly to continuous deployment? I don't
think so. What's wrong with continuous delivery to a user-acceptance stage
and, e.g., two-weekly deployments after ~98% automated and ~2% manual
testing (especially penetration)?

------
atsaloli
If anyone wants to learn how to set up CI/CD, I have a free self-paced class
at
[https://gitpitch.com/atsaloli/cicd/master?grs=gitlab#/](https://gitpitch.com/atsaloli/cicd/master?grs=gitlab#/)
-- all you need to follow along is an Ubuntu VM for the hands-on exercises.

------
OtterCoder
I'm adding CD to a project I've been working on, but one difficulty I'm
facing is that the clients and server are developed on parallel tracks, and
aren't always at feature parity at any given moment.

Which repo should own the integration tests? How do I synchronize the releases
of matching front and back ends?

~~~
wpietri
This is a strong sign that your client/server distinction is artificial.

If this were a true client/server environment, where you had a variety of
different client versions rolled out, then I expect you would have already
found the obvious answer: the server has to support multiple versions of
clients and clients use a mechanism like feature switches or capability
detection to enable functionality as it becomes available server side. Both
repos have their own integration tests: the server's make sure the server
supports multiple client versions, and the client's make sure that the client
degrades gracefully in the face of server variation.
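
The capability-detection side of that can be sketched in a few lines of
Python (the endpoint stand-in and feature names are made up for
illustration):

```python
# Stand-in for the payload a real client would fetch from something
# like GET /capabilities on the server.
SERVER_CAPABILITIES = {"v2_search": True, "bulk_export": False}


def fetch_capabilities():
    # In a real client this would be an HTTP call; here it's a stub.
    return SERVER_CAPABILITIES


def feature_enabled(name, caps=None):
    """Degrade gracefully: a feature the server doesn't advertise is
    simply off, so old servers never break new clients."""
    caps = caps if caps is not None else fetch_capabilities()
    return caps.get(name, False)
```

The client gates each piece of new UI behind `feature_enabled(...)`, so it
can ship ahead of (or behind) the server without coordination.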

From the way you talk, though, it sounds to me like you expect client and
server to always be released at the same time (e.g., where it's a web front
end and web back end). If that's really the case, then I'd just have everybody
work as one team, working off a common set of feature switches.

Is that helpful?

~~~
OtterCoder
The distinction is only artificial because we are in early stages of
development. Our MVP will require a web client and an intermittently offline
mobile client.

It is helpful, but certainly doesn't sound simple. Feature switches would
require the messages to already be designed, which is most of the work already
done. It also sounds like an edge-case nightmare.

~~~
pbecotte
Add new features in the backend first...then the clients can add the new
features over time. The feature toggle is that the web client just hasn't
added the feature yet!

I usually put integration tests in the client repo but it doesn't matter. The
key is that you put in a trigger so they get run by changes to any of the
projects.
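
The backend-first pattern can be sketched like this (field names and
renderers are illustrative, not from the thread): the server starts
returning a new field, old clients ignore keys they don't know about, and
the "toggle" is simply the client's own release schedule.

```python
def server_response():
    # New field (word_count) added server-side first.
    return {"title": "Report", "summary": "...", "word_count": 1234}


def old_client_render(payload):
    # The old client only reads the keys it was written against,
    # so the extra field is harmless.
    return f"{payload['title']}: {payload['summary']}"


def new_client_render(payload):
    # The newer client adopts the field once it ships, and still
    # works against servers that don't send it yet.
    if "word_count" in payload:
        return (f"{payload['title']} ({payload['word_count']} words): "
                f"{payload['summary']}")
    return old_client_render(payload)
```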

------
github-cat
Hmm, another big challenge for continuous delivery is actually the human
factor. You need to have people who can work in continuous-delivery mode. In
my experience, this factor is more critical than in traditional software
development.

------
korzun
Good article. I want to offer my thoughts on a couple of things from my
personal experience.

> If the change deployed is small, there is less code to look through in case
> of a problem. If you only deploy new software every three weeks, there is a
> lot more code that could be causing a problem.

That's relative. Pushing out an accumulated batch of small changes once a
week will most likely have the same end result. The difference is that if
you commit more than one breaking change, you dynamically expand the window
of service degradation. One release with three breaking changes is better
than three broken pushes.

> If a problem can’t be found or fixed quickly, it is also a lot easier to
> revert a small deploy than a large deploy.

It is also harder to revert two non-consecutive deploys out of three.

> If I deploy a new feature as soon as it is ready, everything about it is
> fresh in my mind. So if there is a problem, trouble shooting is easier than
> if I have worked on other features in between.

Personally, I favor stability over easier troubleshooting. This works for
some products and not others.

> It also frees up mental energy to be completely done with a feature
> (including deployed to production).

Anecdotal evidence, but my team would usually catch and correct bugs when
they had to come back to green-light a production push. Engineers who ship
clean and fast are rare.

> All things being equal, the faster a feature reaches the customer, the
> better. Having a feature ready for production, but not deploying it, is
> wasteful.

Something like this would usually be pushed out manually to align with other
non-engineering parties within your company. Pushing broken features to the
customer faster is not a good thing, unless you can assume a 100% success
rate, which is not possible.

> The sooner the customer starts using the new feature, the sooner you hear
> what works, what doesn’t work, and what improvements they would like.

This depends on the stage of the company, the product, and your customers.

> Furthermore, as valuable as testing is, it is never as good as running new
> code in production. The configuration and data in the production environment
> will reveal problems that you would never find in testing.

All of the environments I govern match production 1:1 (sans data sanitation)
in every way possible. I feel pretty strongly about this: if you can't test
your code without pushing it into production, you should not be automating
anything.

> Continuous delivery works best when the developers creating the new features
> are the ones deploying them. There are no hand-offs – the same person writes
> the code, tests, deploys and debugs if necessary. This quote (from Werner
> Vogels, CTO of Amazon) sums it up perfectly: “You built it, you run it.”

Don't compare a start-up to Amazon. Amazon has dedicated teams to govern the
process, and you will most likely not replicate that. Also, hiring people
who 'just send it' without doing damage takes money, time, and a lot of
training. It's expensive.

~~~
pbecotte
> One release with three breaking changes is better than three broken pushes.

Why? With each of those pushes you have one thing to check, and if it is
messed up, only one thing to revert. With a batched release you have
multiple things to check, and you end up reverting other people's working
stuff when you have to revert. Even worse, you have to choose between
reverting slowly (but checking every feature) and possibly having to revert
a second time because there was another bug you missed!

> Personally, I favor stability vs. easier troubleshooting. This works for
> some products and not others.

I don't understand. If you make the same number of changes with the same
number of breakages, is packing them into a smaller window really more stable?
Even worse the more time it takes you to fix those breakages, the less uptime
you have... The opposite of stability.

> All of the environments I govern match production 1:1 (sans data sanitation)
> in every way possible. I feel pretty strongly about this, if you can't test
> your code without pushing it into production, you should not be automating
> anything

I agree with this! But... then why are you advocating for staging to diverge
further from production while waiting for a big release?

