
Containers Will Not Fix a Broken Culture - signa11
https://queue.acm.org/detail.cfm?id=3185224
======
mailslot
So many companies I've interviewed with are rushing toward microservices and
containerization as the cure for all problems. The trouble is that the
champions often have no clue what any of this means.

I recently spoke with a company that had no testing whatsoever for a large
production app. When I asked about it, they proudly said "Oh, we do CI. We
have Jenkins!" Any tests? "We're going to add them after we move to
microservices. Moving away from our monolith is top priority because monoliths
are difficult to debug."

I see a ton of companies shitting all over best practices and then chanting
buzzwords to pretend that they're all about it. That, or grossly
misunderstanding the concepts behind the buzzwords.

X company uses Docker. We should use Docker. "Um. This code runs on an FPGA."
"Does it run Docker?"

~~~
apeace
If it weren't containers it would be a new programming language, framework, or
another agile methodology. Your argument has very little to do with
containers.

Containers are just a better tool for writing OS configuration scripts. (If
your team is full of Chef experts then it's not "better" for your team, but
for a lot of teams it is).
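
To make that comparison concrete, here's a sketch of the kind of provisioning
a bash script or Chef recipe might otherwise do, expressed as a Dockerfile
(the base image, packages, and paths are illustrative, not from any comment
above):

```dockerfile
# Illustrative only: a typical "OS configuration script" written as a
# Dockerfile. Each instruction becomes a cached, reproducible image layer.
FROM ubuntu:16.04

# Install system dependencies (re-run only when this line changes)
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the application in and declare how it runs
COPY app/ /opt/app/
WORKDIR /opt/app
RUN pip3 install -r requirements.txt
CMD ["python3", "main.py"]
```

The equivalent bash script would have to handle idempotence and cleanup after
partial failures itself; here, a failed build simply produces no image.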

What you're saying applies a lot more to microservices, which are a
fundamental architecture choice. Containers aren't; they're just better than a
tangle of bash scripts that create stateful VMs. And the problems you're
describing apply no matter which tools a team uses.

Remember that you can use containers without complicated orchestration or
microservices. I think a better argument would be to untangle these three
things and describe how each one can solve certain problems or make the
problem worse, and under which conditions.

~~~
markbnj
>> Containers are just a better tool for writing OS configuration scripts. (If
your team is full of Chef experts then it's not "better" for your team, but
for a lot of teams it is).

No, not really. You could argue that Dockerfiles are part image-provisioning
script and part process-environment specification, but I think you'd still be
missing the main advantage. Dependency isolation is the thing that usually
gets touted, but that's only part of the picture. After all, you can isolate
dependencies now by baking images. That works great; it's well proven and
reliable. But the VM that runs a single boot image can potentially run dozens
of different containers, and using an orchestration platform you can easily
and quickly shift those loads around, scale up and down, reconfigure and
redeploy, all with far less overhead than deploying an image to a VM requires.
Containers didn't become a popular tool because they don't add value. The use
case for them has been clear for over four years now.

~~~
dozzie
> Containers didn't become a popular tool because they don't add value. The
> use case for them has been clear for over four years now

Well, yes, they did become popular despite not providing anything
substantially new. The main value of containers is that a programmer working
with the network no longer needs (initially) to understand how to configure
the network, which is a dumb idea in itself. Everything else containers add
boils down to distributing a tarball containing a whole operating system, so
you can run it in a chroot.

From where I stand it seems that programmers didn't want to learn how to
build, distribute, and configure software with OS packages, so they invented
their own binary packages system.

~~~
bananadonkey
The irony here being that I had to port our production RPM (RHEL-based) build
system to Docker just so it could have a reasonable API and be anything close
to maintainable.

Edit: the extreme portability and "free" concurrency were just a bonus.

------
jefe78
As a systems engineer, I struggle with this virtually every day. We're called
'DevOps' by most, and any time we encounter a new problem, everyone invariably
screams for containers. Containers aren't a magic bullet.

My favourite example is when our AWS TAMs offer a solution, knowing we have
ZERO pipeline/infrastructure setup for supporting containers. They always push
containers. We don't use containers; stop forcing them down our throats. We've
tried, we've been burned, VMs work for us. Stop!

When did containers become perceived as the be-all and end-all solution? I see
their value and uses, but they don't meet our needs, so why have we started
ignoring the right tool for the job? I see this everywhere I go.

~~~
jitl
You need containers to run on Kubernetes my dude. And running on Kubernetes is
_critical_.

~~~
scarface74
Yes, I know you were being sarcastic.

But you don't need k8s or containers for orchestration.

I chose HashiCorp's Nomad (I'm the dev lead for our company) precisely because
I didn't want to commit to Docker from day one, but I did want to leave that
option open. Nomad works with everything (Docker containers, JAR files, shell
scripts, raw executables, etc.) and is dead simple to set up: one
self-contained executable under 20 MB that works as a client, as a server, and
as a member of a cluster. Configuration is dead simple if you use Consul.

------
apeace
This is really off-base, misses the point, and is another form of the "you
don't need containers" criticism which has become very tired at this point.

This is mostly a critique of microservice architectures, not containers. If
that were the main point I'd have little disagreement.

> Someone in security is weeping for the unpatched CVEs...

> ...the heavyweight app containers shipping a full operating system aren't
> being maintained at all...

This is just wrong; it's the opposite. Never have I had more up-to-date
operating systems, programming languages, and frameworks than when I started
using containers. It's just so damn easy, especially if you use `FROM
python:3` instead of `FROM python:3.6.2`. It auto-updates every time you
deploy.

> There is no substitute for experimentation in your real production
> environment; containers are orthogonal to that...

They're not orthogonal to it, they're a really useful way to get _very, very
close_ to production. The maxim isn't untrue, but again, I sleep better than I
ever have in my life because I know that these problems are now rare for me.
The difference between my local, staging, and production is tiny. I haven't
encountered such an issue in over a year.

All of the problems in the article are true no matter what tools you use to
build and deploy. The author focuses a lot on developers' desire to go off in
a corner and build their own little world. That's still a risk if you're using
Ansible or Chef.

Bottom line: writing a Dockerfile is the most powerful way I've ever found to
define your OS's configuration in code. Stop discouraging people from trying
it just so you can make grand arguments about the types of problems every
engineering team faces.

~~~
DougWebb
How does _"It auto-updates every time you deploy"_ fit with _"The difference
between my local, staging, and production is tiny"_? It sounds like you have
little control over your dependencies, and the differences between your local,
staging, and production environments are the newer versions of your
dependencies which you haven't tested against.

I like the idea of being able to precisely control both my code and all of my
dependencies, so that I know for certain that I'm deploying exactly the same
overall system that I tested. Containers are much better for that than the old
way, where you could never be certain that your OS and system software were
exactly the same in production as they were in local and staging. But to
achieve that precision, you need to use precise version numbers, and you need
to install your dependencies from a local repository to be really sure.

~~~
apeace
You're right, I didn't make that clear.

What you're missing is that the beginning of the "deploy" process is building
the image on your local (or on CI). That's when the update happens. Then you
test it on staging, and if all is well you deploy to production.

If there's a problem it's easy to change your Dockerfile from "python:3" to
"python:3.6.2" in order to go back to what you had. Or stick with "python:3.6"
if you only want security patches. Or, if you _want_ to miss out on those
security patches in order to guarantee more stability, go with "python:3.6.2"
and decide when to test and deploy an upgrade.
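
To make those three options concrete, here they are side by side (these are
the standard Docker Hub `python` tags; pick exactly one):

```dockerfile
# Track every 3.x release: maximum freshness, least predictability.
FROM python:3

# Track only 3.6.x patch releases: security fixes without feature churn.
# FROM python:3.6

# Fully pinned: nothing changes until you edit this line yourself.
# FROM python:3.6.2
```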

~~~
scarface74
Are you referring to Canary Builds?

[https://www.thoughtworks.com/radar/techniques/canary-builds](https://www.thoughtworks.com/radar/techniques/canary-builds)

~~~
apeace
No, I'm talking more about a standard process of applying security patches
(and/or bugfix patches). I'm countering the claim in the article that using
containers somehow makes software orgs more prone to unmaintained OSes.

I'm saying that has little to do with containers, and if anything containers
make it a lot easier to get security updates.

------
scarface74
I'm the lead dev for our company, and for phase 1 of our implementation I
chose Nomad for orchestration, a bunch of "micro-apps" (single-purpose
executables), and Consul for configuration and service discovery. I chose that
combination for ease of use and flexibility: Nomad works with everything, raw
executables and containers alike.

Of course, the consultants came along later and scoffed at the simplicity of
our process. Our deployment is basically one step in our continuous-delivery
pipeline: copy the bin/release directory to the destination folder.

I tried to get them to articulate a business case for us to use containers.
They couldn't come up with one.

Then my manager, someone I really respect for his technical acumen, finally
gave me one. If we go to containers, we don't have to provision servers on
AWS; we can use AWS Fargate. Lambda isn't an option because we have
long-running processes that make more sense as apps.

I wanted to do Docker eventually anyway, just to add a bullet point to my
resume, and as the dev lead I could have, but it felt unethical to make a
choice that wasn't best for the business. Now I think it's the right way to
go.

~~~
tty7
Sounds like your consultants were not sold on Docker/containers, since you
already supported them. They wanted to lock you into a managed service so that
you'd keep using them.

If they really cared about containers, they would have helped you continue
using Nomad/Consul, as that's a fine combination.

~~~
scarface74
I'm not opposed to a managed service and yes Consul+Nomad is great. But I like
the idea of not having to manage VMs. We will probably move to a hybrid
approach.

------
erikb
Containers are not there to solve a problem. They are there to produce buzz,
then busy engineers, then bills, all packaged in a way that overloaded (and
sometimes quite unskilled) managers can show something to their bosses.

And in that regard Containers are successful as hell. That's why we have a
religion around them now. You can hate it but you can't really ignore it if
you need to pay rent and work in the industry.

~~~
sidlls
You're getting downvoted, I suspect, because of the comment's somewhat caustic
tone, but I agree with your real point.

I put containers and "orchestration" like Kubernetes right up there with "Big
Data", Kafka, and a bunch of other technology that is the current fad. All of
these have a legitimate use case. Odds are the use case of anyone reading this
comment isn't one of them. But because of the terribly broken interview
process and bandwagon effects, engineers feel compelled to force them into the
development process in order to bypass filters (human and automated) on their
resumes and keep a sort of social cachet among their peers.

~~~
viraptor
I downvoted it, because it's an opinion about things some people do and has
nothing to do with the technology. You can use it well, you can use it badly.
If you claim "They are there to produce buzz, then busy engineers, then
bills", then you haven't seen a problem they solve. That's fine. Just don't
try to tell people there is no such problem.

~~~
sidlls
I think it has everything to do with the technology. Specifically, the
technology solves problems most (almost all) organizations don't have, yet its
adoption is practically ubiquitous, as is the requirement that it appear on a
resume to get past gatekeepers. If in the end the technology is used more as a
means to "produce buzz, then busy engineers, then bills" than to solve
problems for its legitimate use cases, it's hardly an error to point that out.

~~~
viraptor
Are you saying almost all organisations don't need to solve the problem of
consistent deployment artifacts and easily reproducible testing/dev
environments? Because these are some of the problems containers can solve.
(They're not the only problems, and containers aren't the only solution, but
they can be a solution to those specific cases.)

------
jt2190
Given developer x and developer y, each with adequate technical skills but
poor teamwork abilities, there's some business value in being able to employ
both, provided that the cost of dealing with their poor teamwork doesn't
exceed the business value they can each deliver.

In the past we might have had to forego employing one or both. Now we kind of
have the option of giving each dev their own "playground" via containers,
without actually expecting either to improve their teamwork skills. Again, as
long as the cost of supporting the container infrastructure is lower than the
business value each dev can deliver, it's a net win.

In practice I'm not sure if containers can really deliver on this promise, but
it's a very seductive idea.

~~~
cat199
The same is true for VMs, BSD jails, or even chroot().

I think the issues being pointed out here are more about the whole "Linux
container ecosystem", which has a notion of statically built containers,
automated system orchestration, etc., primarily in an operations context.

------
EngineerBetter
If an enterprise adopts a technology that allows them to go faster, but
doesn't change any of their processes to make them go faster, they have
effectively thrown money down the drain. I'm sick of seeing tech companies
eagerly switch out one tech for another without at least _trying_ to address
the people and process problems.

Stop trying to reduce costs. Work out your cost of delay, and focus on getting
things done more quickly and more effectively, not more cheaply and more
efficiently.

------
AlexCoventry
I'm not sure what the point of this piece is. It could have stopped at
"Complex socio-technical systems are hard; film at 11" and been just as
informative.

------
jacques_chester
Technology can't fix socio-economic problems, but it can shift the landscape
of what is possible.

Old-school software engineering is very much about reports, forms, documents.
Each tries to gather to itself every instance of some kind of information.
Here is the Customer Requirements Document. Here is the Software Requirements
Document. Here is the Software Design Document. Here are the Software Test
Plans.

These days most places work out of an issue management system. JIRA tickets,
Github issues, stories in Pivotal Tracker, whatever. The work is broken into
small chunks with their own lifetime. We don't wait while all the requirements
pile up before opening the dam and letting them flow downstream in a batch.
Each goes when it's ready to go.

This is not possible without the right tools.

That's a lie. It's totally possible. With 1980s word processing and
spreadsheets you could absolutely do everything Tracker or JIRA do. You could
use stacks of 3x5 cards to track thousands of items across dozens of teams.

But you probably don't.

The tooling lowers the threshold of the _possible_, in a social and economic
sense.

No, containers aren't miraculous. Of themselves, they do nothing to fix other
problems. But they make it _possible_ to achieve improvements that are more
expensive and difficult in other ways. They lower the barrier of possibility.
The landscape of alternatives shifts, mountains become hills.

I've been on both sides of that divide now. As a consulting engineer I saw
projects rapidly iterating but not being able to deploy ("ops are too busy
right now"), leading to dozens of handsomely billed hours being squandered in
meetings and workarounds and emails and chats and phone calls trying to get
the code into any kind of production. I've also seen projects where deployment
took an hour or two to set up and that was that. People got on with the job.
And a major difference was the platform.

One more thing.

> _Development teams love the idea of shipping their dependencies bundled with
> their apps, imagining limitless portability. Someone in security is weeping
> for the unpatched CVEs, but feature velocity is so desirable that security's
> pleas go unheard. Platform operators are happy (well, less surly) knowing
> they can upgrade the underlying infrastructure without affecting the
> dependencies for any applications, until they realize the heavyweight app
> containers shipping a full operating system aren't being maintained at all._

This is a problem buildpacks have solved for well over a decade on multiple
independent PaaSes.

Disclosure: I work for Pivotal, we do some stuff with containers, but we sorta
focus on the parts on top of and before them.

------
merb
Well, as somebody already pointed out, Docker/containers on their own are
useless. Combine them with a useful system like Kubernetes and they can be
really useful.

~~~
currymj
I don't know about "useless".

Scientific computing, for instance: you may have a weird set of dependencies
for some code that you want to deploy a copy of on 200 slightly heterogeneous
nodes, once, and hopefully never again; but it's vitally important that people
in the future have the possibility of replicating your computation.

Containers are the perfect solution for this. In fact, in my experience they
essentially do fix a broken culture around reproducibility of computational
experiments.

