
Docker really is the future - mfenniak
http://blog.circleci.com/it-really-is-the-future/
======
davexunit
Docker does neat stuff, but if it's really the future then I am going to be
disappointed. Using the Docker daemon as a high-level interface to clone(2)
has been nice, but Dockerfiles are a weak format (why not use a general
purpose programming language?), the pre-built binaries on DockerHub are just
asking for exploitation, and unioning a bunch of disk images is a hack to deal
with the imperative nature of how images are built. Projects like Nix and GNU
Guix are what I want the future to be. With them I don't have to put trust
into any single third party, I get nearly bit-for-bit reproducible builds,
system-wide deduplication of packages, functional/declarative system
configuration, atomic updates/rollback, quick setup of development
environments (with or without a container), and more.
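
For a taste of what that declarative setup looks like, here's a sketch of a
Nix development environment; the package selection is purely illustrative:

```nix
# shell.nix (sketch) - running `nix-shell` in this directory drops you
# into a shell containing exactly these tools, resolved from nixpkgs,
# without touching the host system. Package names are just examples.
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  name = "dev-env";
  buildInputs = with pkgs; [ python postgresql redis ];
}
```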

~~~
muraiki
I would be extremely interested in an article contrasting the approaches of
Docker and Nix/Guix. Is there anything like this available? A cursory search
returned mostly information on using the two together...

~~~
krick
It's hard to compare them, because they're really two different things. I
understand what the parent means, and I think it's fair, but comparing Docker
and Guix is weird, because they aren't really competing technologies.

Docker is a somewhat hacky solution for lightweight containers on Linux. It
gets tagged #virtualization, although it isn't really that. And, well, it
comes with an even hackier solution for configuring these containers.

Nix/Guix is a package manager for your Linux distro. It's a solution for the
stuff you'd otherwise use apt-get for. More generally, it is "the right"
solution for system configuration, one that makes configuration reproducible.
So it's hard to say what the difference between Docker and Nix/Guix is,
because they serve different purposes; but if you compare the configuration
languages, "the approach", you don't have to think twice to decide which is
better. Nix isn't hacky, and Guix even less so, as opposed to Docker.

~~~
davexunit
I should point out that Nix has container support built-in, without Docker,
using systemd-nspawn. So, you get to use the same tools to manage systems on
"metal", virtual machines, and containers. Pretty cool stuff!

------
fweespeech
> Up until now we’ve been deploying machines (the ops part of DevOps)
> separately from applications (the dev part). And we’ve even had two
> different teams administering these parts of the application stack. Which is
> ludicrous because the application relies on the machine and the OS as well
> as the code, and thinking of them separately makes no sense. Containers
> unify the OS and the app within the developer’s toolkit.

False. VM blue/green "phoenix" deployments were essentially "build a VM for
each release to production; spin up the new VM; spin down the old one", which
is exactly what Docker enables, just in container form... and which you could
already do with OpenVZ or any other containerization solution that has
existed to date, even on AWS.

> Up until now, we’ve been deploying heavy-weight virtualized servers in sizes
> that AWS provides. We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”.
> We’ve been wasting both virtualization overhead as well as using more
> resources than our applications need. Containers can be deployed with much
> smaller requirements, and do a better job of sharing.

As someone who runs & leases 128MB RAM VMs for various purposes...wut?

You could have just as easily used OpenVZ to achieve this and literally
everything else on your list:

[https://openvz.org/Main_Page](https://openvz.org/Main_Page)

Or any other container-based solution.

The only real thing you are saying with this article is:

"We like the Docker ecosystem and we feel its better than all other
solutions."

Fair enough, but at least don't pretend Docker is the only way to solve these
problems.

~~~
pbiggar
This is a good example of what I was talking about at the start. Nothing that
Docker does is completely new, and people were doing all these things before
Docker.

What Docker does is make them easier, pull them all into the same package,
bring the ecosystem of tools around a single technology, and most of all:
traction!

~~~
vezzy-fnord
> Nothing that Docker does is completely new, and people were doing all these
> things before Docker.

You're contradicting yourself, because in the same article you wrote:

 _Into that world drops Docker: a new way of doing almost everything. It
throws away old rules about operating systems, and deployment, and ops, and
packaging, and firewalls, and PaaSes, and everything else._

Then all the hype about the "future".

~~~
pbiggar
It's the same stuff, but it does it differently. So for example, instead of
using AMIs to prebake images, it uses a weird AUFS layer. And it deploys using
Dockerhub or by pushing images directly to hosts. And instead of using Mesos
it has Swarm. And Kubernetes: that really is quite different to what we're
doing, but not that different, conceptually, from what Heroku is.

~~~
fweespeech
Yeah, I took your post as vezzy-fnord did.

But honestly, you have a very AWS-centric perception and the traction you talk
about is really "Silicon Valley" specific. As long as you understand you are
looking at a very, very narrow slice of the world when you use terms like
that, sure, I can agree to that.

For personal stuff, I use Docker because it's easy without all the tools I
have at work. But that isn't the same thing as "unique and new".

------
merb
Your post falls down at some points. Of course Docker is great, of course new
technologies are great. BUT the first point isn't so much of a joke: Docker
is not the future, at least not for everyone. As things stand, Docker will
not change the way apps are built, Docker will change nothing, ... at the
beginning.

When starting out new development, people should - and never forget this -
not care about microservices and Docker and anything else. They should just
build a fucking big Monolith. After they've done that, and they're getting
more people for development or more people at their page / service /
whatever, they can still start to split everything up.

Don't build a fucking Unicorn just to look the part. Start out boring. Do
this every time and don't listen to anybody who tells you how good
microservices and Docker are. Deploy your app manually (okay, this step could
be skipped), then use a tool LIKE Ansible or Puppet, then, if you need more,
look at the things that bigger companies use.

But never, ever over-architect your project / application / service.

~~~
shykes
As "the Docker guy" I don't want to enter the debate, I will simply try to
explain how we approached this problem when designing Docker.

Docker was designed explicitly so you _don't_ have to change the architecture
or flow of your application on day one. Rather, we want to provide tools that
make your life easier in small ways _now_, and make it possible to improve
your architecture and workflow gradually, and _on your terms_.

This was a hard-learned lesson from building our previous product (Dotcloud, a
Heroku competitor), which _did_ require developers to change everything on day
1. As a result it was simply not possible for many developers to use it.

~~~
RyanZAG
It's humorous that what you just said goes exactly opposite to the blog post,
which talks about how docker is designed to do everything completely
differently because current methods "don't scale".

Keep up the good work on docker though, it seems to be getting some good
traction so far! I'm personally wondering if it's more fashion based traction
though, and someone will be inventing the next "docker, but more buzzword"
before long.

~~~
shykes
> _It's humorous that what you just said goes exactly opposite to the blog
> post, which talks about how docker is designed to do everything completely
> differently because current methods "don't scale"._

I don't think it's contradictory. We _do_ want Docker to change, for the
better, the way applications are built and run. And if you _do_ want to throw
away your existing stack and build your next application in the most portable
and scalable way possible, then Docker can definitely help. It's just that it
doesn't _require_ you to, because most people don't upgrade everything at
once: they improve their toolbox gradually.

For example, there is a meme that "if you run more than one process in Docker,
you're doing it wrong". I actually disagree with that. I think if you want to
transpose an existing VM into a container, and think of it as a mini-server
that you ssh into, that is your prerogative and Docker should support that use
case. Maybe later you will look into the benefits of breaking up your
application into smaller, single-purpose containers (for example, you can then
use the Docker API and ecosystem at a finer level of granularity). And when
you do, Docker should support that use case too.

A small digression: I think it's unfortunate that the tech community feels
the need to coalesce around polarizing "you're doing it wrong" statements. I
find it particularly unfortunate that _Docker_, a tool I created partly to
make the development world _less_ polarized, was chosen as a battleground for
ideological battles that I find frankly boring... Everything doesn't have to
be a battle.

> _Keep up the good work on docker though, it seems to be getting some good
> traction so far! I'm personally wondering if it's more fashion based
> traction though, and someone will be inventing the next "docker, but more
> buzzword" before long._

Thanks.

Obviously it will be hard for me to answer that in an unbiased way. I think
the "fashion" aspect is a matter of perspective. From the point of view of
heavy Hacker News and Twitter users, there is a lot of hype, both positive and
negative. But the huge majority of Docker users don't hang out on Hacker News
(if they even know what it is). They have a job to do, Docker helps them do
that job, and they tell their friends and colleagues about it.

We've tried to invite as many real-world users of Docker as possible to next
week's DockerCon, to talk about their experiences, both good and bad. Maybe
watch a few of their presentations and decide for yourself whether it feels
like "fashion" :)

~~~
merb
I hope I'll see some interesting talks about scaling out SQL databases,
running Docker behind a firewall, and running containers on customer
hardware. Docker still has some rough edges, especially in "non-internet"
environments where stuff is not moving that fast. It's also really hard to
start on a single box without internet access and then scale out by adding
more. I hope that's something Docker could also fix soon(tm).

------
dasil003
One of the things that rankles the greybeards is when people think over-hyped
tools like Docker are original creations and they don't acknowledge that
containerization has been around for a long time. It's not really that anyone
makes this claim per se, but just a general impression fostered by the cool
kids' relative youth and ignorance.

We all wish that software could be judged on objective merits, but the sad
truth is that now more than ever the software development world is so big that
UX and marketing for dev tools actually matter a lot more. Of course over time
we still gravitate towards better things as lessons are learned, but in order
to figure out what the actual best tool is, huge investments need to be made
to get it to work. Until millions of man hours are invested, it's impossible
to say whether something like Docker will in fact be better than what came
before, or whether it will peter out at another local maximum due to a
fundamentally flawed philosophy. If you can't generate some initial hype then
it's hard to get enough developer mindshare to even test the premise of
something as complex as Docker.

~~~
erikpukinskis
What are the older examples of containers? Virtualization and chroot jails
have been around forever, but LXC didn't exist until 2008
([https://en.m.wikipedia.org/wiki/LXC](https://en.m.wikipedia.org/wiki/LXC)).
My beard is red, not gray, but I did a lot of years of coding and devops
before I ever heard about containers.

What am I missing?

~~~
krakensden
Solaris Zones? OpenVZ?

~~~
erikpukinskis
Cool! Thanks for the references, I had never heard of those. They don't seem
to predate LXC by that much though: Solaris Zones was released in '04, and
OpenVZ was started around '00 and open-sourced in '06.

I guess I don't consider 10-15 years a "long time"; that's just approximately
the amount of time necessary to turn a fundamentally new architecture concept
into something you want to work with every day.

------
dean
So Docker is a beautiful and wonderful thing, and it is the future, and
anybody who doesn't like it has no credibility because they just don't like
change, and they are Philistines.

I stopped reading after that.

~~~
lrajlich
I agree with this summary assessment of the article.

I like Docker and I believe its potential is huge, but I don't like this
article. At all. It's based on a false dichotomy and it's intentionally
divisive. Trying to get a rise out of people who disagree with you by trying
to pin them as irrational "haters" is not discourse, it's propaganda.

"At the same time, most of the software industry makes its decisions like a
high school teenager: they obsessively check for what’s cool in their clique,
maybe look around at what’s on Instagram and Facebook, and then follow blindly
where they are led"

I think my reaction to this is... this is definitely true, but how do I know
you're not yet another lemming like the rest of us?

~~~
pbiggar
> how do I know you're not yet another lemming like the rest of us?

I'm pretty sure I am!

------
wwweston
> We are always faced with a choice between staying still with the
> technologies we know, or taking a bit of a leap and trying the new thing,
> learning the lessons and adapting and iterating and improving the industry
> around us.

I think they forgot the third popular choice -- taking a bit of a leap, trying
the new thing, and finding that it doesn't involve really adapting, iterating,
or improving, it's pretty much just a reinvented wheel.

This might explain why the author apparently has trouble drawing a picture of
the motivations behind curmudgeons who hate anything new.

If your tool _really_ _genuinely_ solves problems _without_ creating a new
layer of complexity, it's not going to have very many haters.

~~~
pbiggar
> If your tool really genuinely solves problems without creating a new layer
> of complexity, it's not going to have very many haters.

OP here. A thing I wanted to touch on, but left out. I don't think it's
possible to solve problems without adding some complexity. For example, golang
genuinely solves problems but you have to learn go and the new toolchain, etc.
Once you learn them though (and the same is true for Docker here), you get to
remove some of the complexity that existed with your previous tools.

So, for example, with Docker we'll be able to remove the host OS and move it
into the hypervisor, and the complexity will drop below where it was before
we started. Or we'll start to use the Google Cloud Platform (which is built
on Kubernetes) and we won't think about anything other than our container
image and the exact resources it needs.
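
That last model already has a concrete shape today; as a rough sketch (all
names invented), a Kubernetes pod spec lets you ask for exactly "0.1 of a CPU
and 200MB of RAM":

```yaml
# pod.yaml (sketch) - the article's "0.1 of a CPU and 200MB of RAM",
# stated as a Kubernetes resource request. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: example/app:1.0
      resources:
        requests:
          cpu: "100m"      # 0.1 of a CPU
          memory: "200Mi"  # roughly 200MB of RAM
```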

~~~
wwweston
> I don't think it's possible to solve problems without adding some
> complexity.

Sounds like you're with the curmudgeons, apparently. ;)

> you have to learn go and the new toolchain, etc. Once you learn them though
> (and the same is true for Docker here)

That's fine -- the overhead of learning something new isn't (inherently)
"complexity" at all. This isn't to trivialize that overhead (or our frequent
failure to minimize it), but complexity is more in whether the abstractions
offered for dealing with a problem demand as little attention as possible.

Good tools/abstractions categorically reduce the number of details users have
to pay attention to, and correctly pick the prominent ones most relevant to
the problems users are trying to solve. Details beneath the abstraction _very_
rarely bubble up to break things or otherwise demand attention, and when they
do, nobody even pretends it's anything other than a problem. The end result is
less complexity.

The curmudgeon invoked in the article is probably just someone who's seen that
a lot of what we produce doesn't live up to this standard. It's not
_impossible_ to produce these things. jQuery is one of my go-to examples: it
did a great job of insulating developers from browser API differences and blew
native APIs so far away in terms of convenience that people could mistake it
for an application framework (or even a language). But that's relatively rare.
Most abstractions are leakier and/or don't spare you from the details that are
really bogging you down.

I'm not qualified to talk about which category Docker is in. I might be more
qualified to talk about it if more pieces like this focused on the specific
problems rather than speculative theories about haters and curmudgeons. :)

I do like that this piece did at least spend _some_ time on a high-level
overview of the kinds of problem-orientation it wants Docker users to take and
by extension a very fuzzy introduction of how Docker can help.

------
api
Both this and the previous tongue-in-cheek rant are correct. It depends on who
you are.

The fact is that not everyone is going to need to scale to Google-like sizes,
and not every app is going to need to scale _in the same way_.

For many developers trying to build products, getting bogged down in all this
emerging Docker-centric complexity is a case of premature optimization. Build
your thing, polish it, get users, get customers, etc., and if you manage to
get so many that scaling becomes a problem then you now have a "good problem
to have."

... and the problem with the Docker ecosystem is _not_ that it is new. I love
new stuff. The problem is that it's a lot like the web framework ecosystem,
which is an example of the CADT development model:

[http://www.jwz.org/doc/cadt.html](http://www.jwz.org/doc/cadt.html)

To a certain extent that's an artifact of all this Docker stuff being new and
very much in its experimentation phase. I expect that the web will settle down
a little someday, and this stuff will too, but for now it's a crazy wild west
of people implementing Yet Another Everything. There's also a bit of a funding
wave going through this area, which is causing a lot of me-too Docker startups
to pop up and do the same things over and over.

------
serve_yay
I'd agree that containers will be huge in the future. But it might not be with
Docker, that's all.

------
erikpukinskis
The same thing happened with "the cloud" and "nosql" etc etc. 2% of people
crow "this changes everything!" and 2% yell back "this changes nothing you
morons!" and both groups of people are wrong, and the other 94% of us just
keep working and are grateful for all of the cool new tools that help us get
to our goals faster.

10 years ago you couldn't build a Heroku unless you were a genius. Now you can
build a Heroku clone in a month. That's progress.

Also, it's still just computers. Can we move on?

------
rendambathu
Previous Blog post[1] is Gold!

[1] [http://blog.circleci.com/its-the-future/](http://blog.circleci.com/its-the-future/)

------
struct
The only real use I see for Docker is distributing applications with data as
combined appliances: if you want a PostGIS server preloaded with maps, you
can do `docker fetch some_postgis_server` and end up with something you can
query. But then when you try to build such an appliance (containing a lot of
data) and push it to Docker Hub, it fails overnight with a mysterious error
code, and you open a ticket, and nobody follows up on that ticket. Docker has
to get better at that kind of thing before I can consider using it again.

~~~
mbreese
I'm looking forward to using Docker (or another container format) for
reproducible computational work. Now, it's still somewhat difficult to keep
versions of programs in sync on clusters (even using environment modules). It
will be really nice when we can store the application environment as a
container and be able to pull that configuration off the shelf to repeat an
experiment/analysis.

Unfortunately, that use-case usually happens on multi-tenant HPC clusters,
where we can't use Docker yet until the security issues are figured out (or we
can get a solid and standard micro VM for running containers). Job scheduling
is also a non-trivial issue.

------
parasubvert
I think the future is something like prefabricated / disposable /
immutable(ish) infrastructure. Stop managing servers, start managing your
service.

Containers are a big part of that - they made the idea more palatable and
usable. But if you're doing this BECAUSE Docker and containers and PaaS are
cool, rather than for the benefits of disposability and prefabrication in
enabling stable / predictable scale-out, availability, and change management
of your bits, you've probably already lost.

Netflix arguably started this cloud-native wave. They still use VMs.

------
haberman
Docker seems interesting. But it seems like there is room for something even
better.

I imagine a Docker-like product where you can write a Dockerfile for your
checkin test suite as easily as you can write .travis.yml right now. And then
you can run a command to easily submit this "job" to AWS, Google Compute, or
whatever other cloud provider you want.

Maybe I'm thinking about this right now because my Travis build has been stuck
waiting to be scheduled for over 24 hours for no apparent reason. I'm at the
mercy of the Travis people to take a look at this. What I imagine is a world
where cloud execution is so commoditized that I can say "sorry Travis, too
slow, you lose my business today." I can hit CTRL+C, change my command-line to
--submit-to=aws.amazon.com, and run exactly the same test suite there instead.

Oh, to dream...

------
ohitsdom
He made some good points, but there's a really good chance I'll never need to
scale at a level that requires Docker. Hopefully I'm wrong, but
Azure/AWS/Heroku are probably going to be good enough or overkill for my
needs.

------
tobbyb
Docker is an opinionated way to use containers. You don't need to adopt it to
get the benefits of containers. A lot of the messaging, hype and marketing
conflates the two, and it suits the Docker ecosystem to do that, but it does
not benefit informed discussion.

By eschewing plain containers in favour of Docker you are embracing some
complexity, and it would help to have more discussion of the tradeoffs and
benefits of each approach, rather than just conflating containers with Docker.

The LXC project, in development since 2009 and the base on which Docker was
originally built, and now systemd-nspawn, give you pretty advanced containers
with mature management tools, multiple container OSs, full-stack Linux
networking, storage options, cloning, snapshotting etc. [1]

LXC, and soon systemd-nspawn (version 220), support unprivileged containers
[2] that let non-root users run containers. That's a pretty big step forward
for container security.

There is a lot of innovation happening outside the hype of Docker. But these
projects are not opinionated; they stop at giving you container technology as
lightweight VMs, just as KVM, Xen and VMware stop at giving you
virtualization.

Docker takes that as a base, restricts the container OS template to a single
app, builds the container as layers using aufs, btrfs or device mapper, and
enforces storage separation. This is not rocket science; you can do this
yourself with overlayfs, aufs or btrfs, build single-app containers, etc. [3]
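
For instance, the layering part is roughly one mount command; a sketch
(requires root, and the paths are placeholders):

```shell
# Docker-style layering by hand with overlayfs (requires root).
mkdir -p /tmp/base /tmp/upper /tmp/work /tmp/merged

# base/   = read-only template (the "image" layer)
# upper/  = per-container writable layer
# merged/ = the union the container actually sees
mount -t overlay overlay \
  -o lowerdir=/tmp/base,upperdir=/tmp/upper,workdir=/tmp/work \
  /tmp/merged

# Writes land in upper/ and base/ stays untouched - the same
# copy-on-write behaviour Docker gets from its storage drivers.
```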

By adopting the Docker way you are immediately giving up seamless migration
of VM workloads and embracing some complexity. There are both upsides and
downsides to this. For a lot of use cases the Docker approach may help; in
others it may add unnecessary complexity. We have an in-depth look [4] at the
differences between LXC and Docker for those who are interested.

Disclosure: I run flockport.com, which provides an app store for servers
based on Linux containers.

[1] [https://flockport.com/guides](https://flockport.com/guides)

[2] [https://www.flockport.com/lxc-using-unprivileged-containers/](https://www.flockport.com/lxc-using-unprivileged-containers/)

[3] [https://www.flockport.com/experimenting-with-overlayfs](https://www.flockport.com/experimenting-with-overlayfs)

[4] [https://www.flockport.com/lxc-vs-docker](https://www.flockport.com/lxc-vs-docker)

------
dave_ops
I'd say my problem with the whole massive containerization hype circus is less
that I'm a curmudgeon who hates anything new, and more that I'm a practitioner
who hates marketing and social hype promoting a half solution to a narrow
problem as a full solution to all problems.

Containerization isn't new. It just has a brand name now, and the "all the
way" solution to this problem is unikernels.

------
geggam
Docker is a broken package manager with no checksumming or versioning,
leveraging layered filesystems to introduce more bugs, with a chroot
post-install hook.

------
vezzy-fnord
This whole article seems completely confused with its definitions and train of
thought, which I suppose is delightfully ironic.

Some quarrels:

 _Into that world drops Docker: a new way of doing almost everything. It
throws away old rules about operating systems, and deployment, and ops, and
packaging, and firewalls, and PaaSes, and everything else._

That's a dramatic overstatement if I ever saw one. The rules haven't been
thrown away. They're still there; the subsystems are just partitioned into
multiple namespaces under a single host.

 _But then something interesting happened. Web applications got large enough
that they started to need to scale._

The whole portion of the essay about web applications and distributed systems
operates under a broken causal chain and continuity. That assumptions break
down and new use cases arise with scale is obvious, though here it's presented
like some recently attained enlightenment, and moreover as if every J. Random
Hacker should be thinking about distribution and high scalability right from
the conception of their CRUD app. Not the case. Dumb setups work for the
common case.

 _Instead of dealing with simple things like web frameworks, databases, and
operating systems, we are now presented with tools like Swarm and Weave and
Kubernetes and etcd, tools that don’t pretend that everything is simple, and
that actually require us to step up our game to not only solve problems, but
to understand deeply the problems that we are solving._

This paragraph makes no sense. The author is listing completely orthogonal
tools.

\------

On to the allegedly solved problems:

 _Which is ludicrous because the application relies on the machine and the OS
as well as the code, and thinking of them separately makes no sense.
Containers unify the OS and the app within the developer’s toolkit._

Depends on your domain. Plenty of applications are built to be self-contained.
The unikernel/libOS approach is one that treats the OS as an implementation
detail, ironically taking us straight back to the 1950s where all code had to
independently initialize the machine, though in a good and reusable way.

 _Up until now, we’ve been running our service-oriented architectures on AWS
and Heroku and other IaaSes and PaaSes that lack any real tools for managing
service-oriented architectures. Kubernetes and Swarm manage and orchestrate
these services._

Those are all different deployment strategies and application environments
you're mixing up here. It may not be that you've lacked tools so much as
you've had no need for them in your use case.

 _Up until now, we have used entire operating systems to deploy our
applications, with all of the security footprint that they entail, rather than
the absolute minimal thing which we could deploy. Containers allow you to
expose a very minimal application, with only the ports you need, which can
even be as small as a single static binary._

And it does so by cloning the various subsystems of the host OS into their own
namespaces. You don't get around using the whole OS, you just work around it,
because your host OS can't handle multi-tenancy properly and the dynamic
linking quagmire has become a maintenance burden.
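
To be fair, the static-binary case really can be minimal; a hedged sketch of
such an image (the binary name and port are assumptions, and `./server` must
be statically linked beforehand, e.g. `CGO_ENABLED=0 go build -o server .`):

```dockerfile
# Dockerfile (sketch) for the "single static binary" case.
FROM scratch
COPY server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```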

 _Up until now, we have been using languages and frameworks that are largely
designed for single applications on a single machine. The equivalent of Rails’
routes for service-oriented architectures hasn’t really existed before. Now
Kubernetes and Compose allow you to specify topologies that cross services._

This hasn't changed. You still need to bolt on lots of heterogeneous
components. Seamless multi-node distribution is beyond the scope of nearly all
language runtimes and frameworks, though then again there is no obligation for
them to support it. At sufficient scale, you will be doing lots of homegrown
integration work.

 _We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”._

Pretty sure you could. I assume you're referring to the likes of Mesos, in
which case I can name at least HTCondor, which is a cluster manager and
scheduler not unlike Mesos, intended for HPC. It's been around since 1989.
Then there are the much smaller-scale things you could always do to limit
resource utilization. It's not like this was discovered yesterday.
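
The "smaller-scale things" have been plain cgroups in the kernel for years; a
rough sketch of the v1 interface (requires root, and the group name and mount
points are placeholders that vary by distro):

```shell
# Cap a process at ~0.1 CPU and 200MB RAM with plain cgroups (v1).
mkdir /sys/fs/cgroup/cpu/smallapp /sys/fs/cgroup/memory/smallapp

# 10ms of CPU time per 100ms period = 0.1 of a CPU
echo 100000 > /sys/fs/cgroup/cpu/smallapp/cpu.cfs_period_us
echo 10000  > /sys/fs/cgroup/cpu/smallapp/cpu.cfs_quota_us

# 200MB memory ceiling
echo $((200 * 1024 * 1024)) > /sys/fs/cgroup/memory/smallapp/memory.limit_in_bytes

# Put the current shell (and its children) under both limits
echo $$ > /sys/fs/cgroup/cpu/smallapp/tasks
echo $$ > /sys/fs/cgroup/memory/smallapp/tasks
```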

 _Up until now, we’ve been deploying applications and services using multi-
tenant operating systems. Unix was built to have dozens of users running on it
simultaneously, sharing binaries and databases and filesystems and services._

Author confuses multi-user with multi-tenant. Unix is the former.

 _As an example, how many protocols had to die before we got REST? ... Yet, we
still haven’t got the same level of tooling for REST-based APIs that we had
for SOAP a decade ago, and SOAP in particular has yet to fully die._

REST isn't even a clear protocol suite like SOAP or CORBA. It's more of a
design philosophy than a formal definition.

 _And the same thing has been going on with programming languages since we
escaped Java a decade ago._

We did?

 _If you’re looking for me, I’ll be in the future._

Damn, it looks an awful lot like the past.

~~~
CrystalGamma
> REST isn't even a clear protocol suite like SOAP or CORBA. It's more of a
> design philosophy than a formal definition.

It's also one of the most misunderstood architectural styles I have seen. If
you see any read-write API that calls itself "RESTful", chances are it
violates at least one compulsory constraint of REST (very often Uniform
Interface), in effect meaning "HTTP-Based API that is not SOAP".

------
framp
Docker is the present - given that we don't have jails on Linux.

Docker COULD be the future, but I frankly hope it's not. Despite the idea
being good, there are better-designed alternatives.

Like rkt.

------
contingencies
TL;DR: A frank retraction of the sentiment expressed sarcastically in the
previous post is summarized under _Real problems solved_. However, each of
these points is dubious...

1\. _Up until now, we’ve been running our service-oriented architectures on
AWS and Heroku and other IaaSes and PaaSes that lack any real tools for
managing service-oriented architectures. Kubernetes and Swarm manage and
orchestrate these services._

While some options for managing large groups of services running on one type
of infrastructure do indeed now exist, and this is one step further in
automation and therefore a good-thing(tm), it is by no means the end-game. At
this stage it may not even be desirable, since in effect it simply shifts the
basic scope of service-oriented infrastructure comprehension and management
from a single service to a group of services, and likewise makes the unit of
deployment and management a cluster rather than a host, while making certain
(and not safely universal) assumptions about how the service(s) will need to
be managed in future.

2\. _Up until now, we have used entire operating systems to deploy our
applications, with all of the security footprint that they entail, rather than
the absolute minimal thing which we could deploy. Containers allow you to
expose a very minimal application, with only the ports you need, which can
even be as small as a single static binary._

Yes, but this rarely happens in practice. It's like saying "now that we use
Linux, we get the benefits of NSA's SELinux". No, you don't. You have to put a
lot of effort in to get that far, and hardly anyone does. So this is basically
a moot point right now.

3\. _Up until now, we have been fiddling with machines after they went live,
either using “configuration management” tools or by redeploying an application
to the same machine multiple times. Since containers are scaled up and down by
orchestration frameworks, only immutable images are started, and running
machines are never reused, removing potential points of failure._

Yes, immutable infrastructure is good, but we have a hundred ways to do this
without docker. Docker is like an overpriced gardener who comes to your door,
knocks around the garden for half an hour, flashes a thousand-dollar smile
(i.e. puts a cute process convention over the top of what's already there) and
tells you all smells sweet in the rose garden (PS: here's your fat invoice).
Never trust a workman with an invoice, and never trust abstraction to solve a
fundamental problem.

4\. _Up until now, we have been using languages and frameworks that are
largely designed for single applications on a single machine. The equivalent
of Rails’ routes for service-oriented architectures hasn’t really existed
before. Now Kubernetes and Compose allow you to specify topologies that cross
services._

Well, that's cute, but actually bullshit. We've had TCP/IP and DNS for
decades. To "specify topologies that cross services" you just go _host:port_.
What's more, the standard approach and protocols come with deployment tooling
and documentation, and are known to work pretty well on real-world
infrastructure. Their drawbacks are known. Now, I'm not saying there's zero
improvement to be made, but the way this is phrased is ridiculous.
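The _host:port_ point can be sketched concretely (a minimal example; a real
internal service name like "db.internal.example" is hypothetical, so
"localhost" stands in for it here): plain DNS resolution is all a client needs
to find a dependent service, which is the decades-old mechanism referred to
above.

```python
# Minimal sketch of decades-old service addressing: resolve a well-known
# hostname via DNS and connect to host:port. "localhost" stands in for a
# hypothetical internal service name such as db.internal.example.
import socket

def resolve_service(host: str, port: int):
    """Return the socket addresses DNS advertises for a service."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return [info[4] for info in infos]

# A real deployment would publish the service's records in its own zone;
# orchestrators largely re-implement this lookup behind their own APIs.
print(resolve_service("localhost", 5432))
```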

5\. _Up until now, we’ve been deploying heavy-weight virtualized servers in
sizes that AWS provides. We couldn’t say “I want 0.1 of a CPU and 200MB of
RAM”. We’ve been wasting both virtualization overhead as well as using more
resources than our applications need. Containers can be deployed with much
smaller requirements, and do a better job of sharing._

Sure, we've known for decades that container-based virtualization is far more
efficient than paravirtualization. Docker itself provided neither (it wraps
existing kernel facilities), nor has it made it measurably easier to mix and
match them as required, so this claim seems bogus.

6\. _Up until now, we’ve been deploying applications and services using multi-
user operating systems. Unix was built to have dozens of users running on it
simultaneously, sharing binaries and databases and filesystems and services.
This is a complete mismatch for what we do when we build web services. Again,
containers can hold just simple binaries instead of entire OSes, which results
in a lot less to think about in your application or service._

What kool-aid is this? The implication is that unix and its security model are
going to go away as a basis for service deployment because... docker. What?
Frankly, I would assert that many application programmers can barely _chmod_
their _htdocs/_ if pushed, let alone understand a process security model
including socket properties, process state, threads, resource limits and so
forth. Basically, the current system exists because _it is simple enough to
mostly work most of the time_. While it may not be perfect, it's a whole lot
better than throwing the baby out with the bathwater and attempting to rewrite
every goddamn tool to use a new security model. The mystical single-binary
services that docker enthusiasts seem to hold up as their _raison d'être_ are
therefore likely either to be huge, complex, existing processes that allow
almost anything (like scripting-language interpreter VMs) or to be
nonexistent. By contrast, the 'previous' unix model of multi-process services
with disparate per-process UIDs/GIDs, filesystem and resource limitations
seems positively elegant.

All in all, this post's argument doesn't hold that much water in my view.
However, I applaud CircleCI for working on workflow processes ... I think
ultimately these are the bigger picture, and docker is merely one step in that
direction.

------
Jamie_Dobson
Excellent article.

------
andyl
I want to like Docker. I just don't have the time to learn the tooling and
keep up with all the changes. Maybe in a year or two when things stabilize.

~~~
Animats
Software has become like Matryoshka dolls - layer after layer of packaging.
Even Python and Javascript programs now need "building". (I saw a makefile for
a Python program last week. All it did was "test: python app.py test", but
there was a makefile.)

It's sometimes easier just to make a static executable in Go or Rust.

