
Running containers without Docker - ingve
https://jvns.ca/blog/2016/10/26/running-container-without-docker/
======
apeace
I have to say: before I began to read through, I was already convinced the
author's idea was not a good one. But by the time I finished, she convinced me
that it was a good plan in her situation.

There are lots of great tools in the Docker/container ecosystem these days,
but I can see the argument that a major infrastructure migration should be
done in incremental steps. Using containers as simply a Puppet replacement
strikes me as a really good idea! You want your application to be _ready_ for
containerization, before you try to introduce an orchestration system.

Although, when she mentioned that her team is meant to be "Heroku for the rest
of the company", I couldn't help but think of Convox [0]. It's a great tool,
it's open-source, and using it is really just like having your own Heroku. We
used Convox at my last company, had amazing success with it, and I've been
recommending it ever since.

[0] [https://convox.com/](https://convox.com/)

~~~
nzoschke
Convox founder here. Thanks for the shout out! We are indeed building a
private Heroku-like platform so you don't have to.

It's a simple layer around AWS, so it doesn't add the extra craziness some
other approaches do.

As for the OP, her approach is spot on!

Containerizing your app is the first step in future proofing it. Doing so
correctly will remove layers and therefore simplify things.

You now are looking at a package that can run on a laptop and run on a server
with traditional techniques (upstart, systemd, etc).

You may take this to the next step and enter the orchestration world. Or not!
Patience is a virtue, wait 6 more months and all the orchestration tools will
be that much better.

It's all about simplifying. Adopt Dockerfile and shed Homebrew and Chef. Run
containers on a "dumb" box with systemd and shed Packer.

Run containers on ECS and shed custom AMIs and userdata and nagios and...
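
Running a container on a "dumb" box with systemd, as described above, can be
as small as one unit file. A minimal sketch, assuming a hypothetical service
name and image:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
# Run the container in the foreground so systemd supervises it directly
ExecStartPre=-/usr/bin/docker pull example.com/myapp:latest
ExecStart=/usr/bin/docker run --rm --name myapp example.com/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp` - no orchestrator involved.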

~~~
vacri
I hate nagios as much as the next op, but what is so magical about containers
that your services will no longer require an outage alerting mechanism?

~~~
seabrookmx
> run containers on ECS

Presumably ECS has its own alerting mechanisms similar to CloudFormation and
ElasticBeanstalk (I've only used the latter two, not the former).

~~~
vacri
In my experience, AWS alerting only sends one alert once per issue. If you're
asleep when the single SMS comes through, tough. Nagios will nag you until you
do something about it...

Maybe ECS is different. I haven't played with it, but I imagine it's "make an
SNS topic" like everything else.

~~~
pdx6
Send the AWS alarms to opsgenie or pager duty and they will nag you based on a
single event, including escalations.

~~~
vacri
Nagios or a SaaS, we're still back to something else other than containers to
manage the alerts, though.

------
throw2016
I think when it comes to containers HN has been guilty of a shocking lack of
scrutiny and unilaterally hyping Docker.

For example, Docker was based on LXC until 0.9, yet it was so successful at
hype, and at misdirecting attention away from the project it was built on,
that even today most commentators here do not seem to have a proper idea of
the LXC project, and whatever ideas they do have are negative. How did this
come to be?

I don't know how healthy it is for open source if the HN community lets VC
funded projects essentially hijack open source projects in this way and allow
misconceptions to grow about them. For the record LXC has always been a full
scale container manager and was and remains a much simpler way to use
containers than Docker.

Most of the big problems holding back containers, like security, isolation
and multi-tenancy, are kernel-side, not container-manager-side or
ecosystem-side, and will be solved kernel-side. Similarly, most of the
container features that are usable today also exist thanks to kernel-side work.

Yet the folks who work on kernel namespaces, aufs, overlayfs and other
container-related technologies receive no rewards or even recognition, while
companies like Docker, completely reliant on their work, suck up the attention
and hype. How is this sustainable?

How many know the author of aufs or overlayfs? How much support have these
projects received, and how do they sustain themselves?

How many know that cgroups were not namespaced, the issues this created for
containers, and that they only recently got namespaced? How many know about
the fantastic advantages, but also the pitfalls, of user namespaces? Isn't
this the kind of 'container discussion' we should be having on HN instead of
Docker- or vendor-focused discussions?

And thanks to the lack of scrutiny, Docker's complex use of containers has
gone unexamined and, worse, has been conflated with containers themselves.
This has only increased the technical debt for users just stepping in, and
runs the risk of putting users off containers due to the inherent complexity.

~~~
tlrobinson
Docker is a case where "the whole is greater than the sum of its parts". Yes,
LXC and cgroups and aufs and overlayfs and whatever else do a lot of the heavy
lifting, but Docker glued it all together in a way that was approachable by
any developer, not just Linux geeks. I remember briefly looking into LXC ~6
years ago and being totally lost. Then Docker came along with a simple command
line interface and growing repository of images.

If you want to be popular among developers, the "hello world" developer
experience should be extremely simple. See: Stripe, Twilio, Docker, GitHub,
etc.

That said, I do think it's too bad that the Docker client/daemon are the
lowest-level building block most people are interacting with. It doesn't seem
to follow the Unix philosophy of doing one thing well.

~~~
luca_ing
> but Docker glued it all together in a way that was approachable by any
> developer, not just Linux geeks.

> I remember briefly looking into LXC ~6 years ago and being totally lost.
> Then Docker came along with a simple command line interface and growing
> repository of images.

I'm not trying to discount your experience in any way, but, remarkably, my
impression of Docker was exactly the opposite.

I personally find LXC to be very plain, simple and easy to understand, whereas
docker seems intimidatingly opaque and confusing.

But then again, I am a Linux geek :-)

------
alex-
I was introduced to Linux cgroups and namespaces via my use of Docker. It is a
fantastic tool for getting started with these concepts quickly, and I don't
currently see rkt etc. overthrowing Docker's dominance in the marketplace.

However, I do sense (and feel myself at times) a growing upset with the
usability/stability of the Docker engine.

The OCI (Open Container Initiative) should allow new solutions to come to
market without fragmenting the space. i.e. it should be possible to run any
image with any container engine/Linux.

Docker is valued at $1B. It appears that justifying that valuation has taken
them into a number of adjacent markets. I wish they were more focused on the
Docker Engine itself.

~~~
wlamartin
Yep, the OCI is doing great stuff. At the bottom of the Cloud Foundry stack we
have Garden-Runc
([https://github.com/cloudfoundry/garden-runc-release](https://github.com/cloudfoundry/garden-runc-release)).
In the past we had our own homegrown containerization technology, and we're
super happy to have been able to swap that out entirely for runc - it allows
us to make use of the significant engineering talent contributed and has
enabled us to move faster and reduce risk.

We're also buying into the image-spec for this reason.

Right now the focus is on underpinning CF, so although it is functionally
production ready, it lacks the pretty fantastic user experience that Docker
brings. Now that we can reduce our focus on the containerizer, perhaps we'll
be able to spend more time improving UX - which would be neat.

I believe the Kubernetes team is working on a simple implementation of the
Kube API which is a thin wrapper around runc as well - so you're right, there
are some cool things coming around as a result of the OCI.

~~~
cyphar
> Yep, the OCI is doing great stuff. At the bottom of the Cloud Foundry stack
> we have Garden-Runc
> ([https://github.com/cloudfoundry/garden-runc-release](https://github.com/cloudfoundry/garden-runc-release)).
> In past we had our own homegrown containerization technology and we're super
> happy to have been able to swap that out entirely in place of runc - it
> allows us to make use of the significant engineering talent contributed and
> has enabled us to move faster and reduce risk.

Hi! I'm one of the maintainers of runC and I wanted to say that the CF stuff
around runC is awesome. Keep it up. :D

> I believe the Kubernetes team is working on a simple implementation of the
> Kube API which is a thin wrapper around runc as well - so you're right,
> there are some cool things coming around as a result of the OCI.

Actually, ocid is mostly being worked on by people from the OCI community
(runcom, mrunalp, myself and others). Though we are getting support from the
Kubernetes folks with it, which is pretty cool. :D

~~~
wlamartin
Sweet. runC is awesome and using it is awesome. We are also super excited
about the rootless work you personally have been doing, so thanks for your
hard work in that area.

~~~
cyphar
I can't take all the credit, Jess Frazelle was the reason that I started to
work on an upstream version of binctr. But one of the cool things we can do
with the rootless containers work (aside from making the lives of researchers
easier -- something I've experienced personally) is to add OCI image building
to KIWI (which is (open)SUSE's image building tool) and thus add it to OBS
(the Open Build Service which is (open)SUSE's package building tool). I think
there's a lot of cool things that will come out of rootless containers. :D

I'm going to be giving a talk about rootless containers at Linux.conf.au 2017
in case anyone is going to be attending.

------
schmichael
The nomad[0] team was just discussing today how much effort to put into our
rkt support[1] (our support for rkt lags behind our docker support).

So far we don't see a lot of evidence of our users taking rkt into production
over Docker, but I'm very curious if there's a significant number of users
like Julia looking to avoid running docker in production.

From a nomad developer standpoint, while building long command strings is a
bit awkward compared to calling docker's API, it's quite easy to support new
features and manually run generated commands to test our driver's behavior.

[0] [https://github.com/hashicorp/nomad](https://github.com/hashicorp/nomad)

[1]
[https://www.nomadproject.io/docs/drivers/rkt.html](https://www.nomadproject.io/docs/drivers/rkt.html)

~~~
philips
Hey Michael! Could you email rkt-dev about your questions as well?
Particularly around the long command strings. There are other ways to launch
rkt containers besides long args.

[https://groups.google.com/forum/#!forum/rkt-dev](https://groups.google.com/forum/#!forum/rkt-dev)

~~~
schmichael
Our only question is more of a usage survey. Not sure it's our place to
perform those on your mailing list. :)

So far the command line building is more pro than con. It's so nice to be able
to copy & paste commands out of logs to see exactly how rkt is being run.

------
cyphar
Note that you can use runC (the underlying runtime for Docker) in a similar
way to rkt (except you also don't have to deal with images if you don't want
to -- runC just runs a rootfs). runC doesn't have a daemon either, and is the
canonical implementation of the Open Container Initiative runtime
specification. We're also working on ocid, which is a way to run Kubernetes
purely on an OCI stack (no Docker required).
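
For the curious, runC consumes an "OCI bundle": a directory holding a rootfs
plus a config.json. The sketch below shows the shape `runc spec` generates,
trimmed to a few illustrative fields (the version string and args are just
examples):

```json
{
  "ociVersion": "1.0.0-rc2",
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": ["sh"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  }
}
```

With a populated `rootfs/` directory next to it, `runc run mycontainer`
starts the process - no daemon and no image registry in sight.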

Though, shoutouts to the rkt guys. They really are awesome and have been
helping out a lot in the OCI effort.

~~~
heavenlyhash
> you also don't have to deal with images if you don't want to -- runC just
> runs a rootfs

Which is an awesome and powerful thing! The ability to define a filesystem
without any middle men simply makes some integrations _possible_.

Thank you and everyone on the runC team so much for your work.

------
dustinmoris
I don't get the author. So to sum it up: they like what Docker did with
containers, but they don't want to use Docker, at least not immediately,
because it requires learning a few Docker commands and a bit of other stuff.
So instead they decided to build their own hybrid shit, which will probably
take more time to build than learning Docker would, and on top of that they
still need to train their employees on it, instead of using a well-working,
vibrant, healthy ecosystem, built by someone else, for free?

I cannot help myself, but that sounds like the worst idea ever and totally
backwards. Particularly when she said that they want to run containers
reliably in production - how do they think they are going to do this? The
reason why Docker is a "little" bit more than just `docker run` is so you can
run containers reliably and recover quickly in case of a failover. So how are
they planning to run containers reliably in production without adopting an
existing, well tested, well working framework?

~~~
fauigerzigerk
_> Particularly when she said that they want to run containers reliably in
production, how do they think they are going to do this?_

As I understand it, they keep doing what they have been doing all along,
including everything that currently makes their system reliable, and gradually
add some of the desirable features of containers in a way they can fully
understand.

That doesn't sound like the worst idea ever to me. The danger of ending up
with too much of a homegrown snowflake is of course real. You're not wrong
about that. But she described the risks of the alternative approaches pretty
well I think.

~~~
dustinmoris
I don't get what the issue is with the migration plan to Docker either. You
have a big system with many components. The best way to start is to build a
hello world app on Docker and orchestrate it with whatever framework best
suits your requirements, let's say Kubernetes. Then run that hello world stuff
for a while to get a POC and make yourself familiar with the system. After
that, start converting one of the many services to a dockerized service and
run it side by side in your new infrastructure. Maybe even load balance some
of the traffic progressively over to the new stuff until you are confident
enough to run 100% load on that migrated service. By then you should be pretty
experienced with most parts and have a great recipe to migrate the rest slowly
over. I don't see a reason why you would want to start with a homegrown
solution at any point in this process.
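
The "convert one service" step usually starts with nothing more than a
Dockerfile. A minimal sketch for a hypothetical Python web service (the base
image, file names and port are all assumptions):

```dockerfile
# Hypothetical service -- every dependency is baked into the image,
# so nothing can silently come from the host.
FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```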

~~~
lmm
The whole "vertical change vs horizontal change" section of the article is
literally all about this exact subject.

~~~
dustinmoris
Yeah, and I don't see how that is any good. Making small changes across your
entire infrastructure means that everything will move extremely slowly. I
cannot imagine that the entire infrastructure even deserves to be moved.
What's wrong with just leaving some really old, solid, working, but less
critical stuff on the old infrastructure until it gets phased out one day? It
feels like a waste of time and resources to force a horizontal move across
everything. Also, technologies like containers and Docker are extremely fast
moving at the moment. It might very well be that some parts cannot be moved
immediately today, but in 6-12 months' time they can, and if you follow their
migration plan then meanwhile a whole lot of other stuff which could really
benefit from the new infrastructure will have to miss out. Also, if you
discover an issue very late then you end up moving a lot of things back and
forth, even if it is just a small step.

------
girvo
We followed this idea at my workplace! It worked wonderfully; we slowly
containerised every project we have, then deployed the containers as "dumb"
"programs". Yesterday I began the orchestration step, and everything has gone
extremely smoothly. Altogether this has taken about 12 months of on-and-off
work, maybe a man-month all up.

------
sciurus
> I think that just using containers by itself will force us to be disciplined
> about how we package and run services (you have to install all the stuff the
> service needs inside the container, otherwise the service will not work!).
> There’s no way for it to silently depend on the host configuration,
> because its filesystem is totally separate from the host’s filesystem.

I don't buy this, at least as presented. /etc/awesome/blah.xml does not
spontaneously come into existence on a host on its own. Whether you're
building a Docker image, a VM image, or managing long-lived hosts via a
configuration management tool, you have to specify that /etc/awesome/blah.xml
is created and has certain contents.

It sounds like the author is working on a poorly structured and documented
puppet codebase, where all the dependencies of services aren't clear.
Presumably if they can figure them out in order to build working docker
images, they could also figure them out and then refactor the Puppet code to
make it more maintainable.

At that point they could use their puppet code to generate their docker
images, which fits nicely into the "horizontal change" philosophy.

------
moondev
What is the motivation and benefit for running containers without docker?

Docker is by far the more mature and adopted development tool. It also runs
great on windows, macos and linux.

While k8s can run containers via the rkt runtime, it's still pretty new and
will probably introduce unnecessary headache and edge cases.

Docker also has a head start on a vibrant ecosystem for base images.

In my opinion it would be better to focus on docker, and when rkt gets there
it shouldn't be much of an issue to switch if desired.

~~~
shykes
Docker founder here.

> _What is the motivation and benefit for running containers without docker?_

I regularly hear from people who want to run containers without Docker. There
are several motivations, all of which are perfectly valid:

1. _Learning_. It's fun to build things from scratch to understand how they
work under the hood.

2. _Bad experience_. Early versions of Docker were quite buggy, and we
initially struggled to keep up with the colossal growth in usage and feature
requests. As a result, many of the people who tried Docker in production too
early were badly disappointed. Some of them decided Docker wasn't for them,
and started looking for alternatives.

3. _Extremely custom use case_. If your deployment is larger, or more
complex, or more specialized than 99.99% of deployments out there, then
"mainstream" platforms like Docker might not be the right fit for you. Of
course we try to make Docker as customizable as possible, to support a wider
spectrum of use cases with plugins. But realistically, no single platform can
cover all use cases, and I don't think any platform ever will. Docker is no
exception.

4. _Philosophy disagreement_. Different people have different opinions on how
applications should be developed and deployed. Docker tries very hard to be
agnostic - to accommodate as many opinions as possible. But we can't please
everybody. If Docker does not fit your philosophy of development and
deployment, then the natural response is to look for an alternative.

5. _Competition_. Many Docker competitors started out as extensions or
modifications of Docker, and over time are looking to reduce their dependency
on Docker.

I'm probably missing others, but these are the reasons I've been most exposed
to. They are all sensible grounds for not wanting to use Docker.

Our approach is that, if you want to run containers without Docker, we should
make it as easy as possible. In practice that means spinning out as many of
the underlying components as possible (what we call the "plumbing") so that
you can assemble it yourself without being stuck using the entire Docker
platform. For example:

- containerd
[[https://github.com/docker/containerd](https://github.com/docker/containerd)]
is our low-level container runtime.

- runc [[https://runc.io](https://runc.io)] is a standardized "container
executor", which we donated to the Linux Foundation as the reference
implementation of the OCI spec.

- libnetwork
[[https://github.com/docker/libnetwork](https://github.com/docker/libnetwork)]
is the low-level networking implementation (including overlay networking,
which is a very useful primitive for container clustering)

- swarmkit
[[https://github.com/docker/swarmkit](https://github.com/docker/swarmkit)] is
a clustering/orchestration implementation

- notary
[[https://github.com/docker/notary](https://github.com/docker/notary)] is a
cryptographic content verification tool, which you can use to sign and verify
container images.

- infrakit
[[https://github.com/docker/infrakit](https://github.com/docker/infrakit)]
automates the provisioning of infrastructure capable of running containers.

Our opinion is that, even if you don't use Docker, by using these components
for your own purposes, you are indirectly contributing to making Docker
better. Splitting out these components has also forced us to refactor Docker
into a more modular, more robust design.

We even send Docker employees to explain how to run containers without Docker
:) For example here's a talk we gave at LinuxCon:
[https://linuxconcontainerconeurope2016.sched.org/event/7oHM/...](https://linuxconcontainerconeurope2016.sched.org/event/7oHM/building-distributed-systems-without-docker-using-docker-plumbing-projects-patrick-chanezon-docker)

~~~
hiou
My main concern is coming across an issue, only to research it and find a
"won't fix" [closed] with 100 +1s asking for it. When I have something super
important in actual production, this scares me way too much to trust it.

~~~
fapjacks
Just curious: Which issue(s) are you talking about?

~~~
crummy
One that comes to mind is folks asking for relative path support in
Dockerfiles (though I understand why the Docker folks don't want to do that).

------
peterwwillis
So, in general, tools like Docker are crap because they're like the Apple iPod
equivalent of old standard Unix-style tools: bloated, monolithic,
slapped-together crap that's crammed with non-extensible, non-intuitive
extras, designed for people who don't want to know anything about how the
sausage is made. But if you do want to do fancy production things, it's nice
to have a tool that thousands of people have used for thousands of different
things, and that has baked in all the weird fixes for weird problems over
thousands of iterations of testing and bug-fixing.

However. If what you want is to simply understand what's going on under the
hood, this is a great way to do it. The only thing better that I would
recommend is to actually make your own Docker. It's not that difficult, you're
basically just slapping together some system calls and execvp()ing some other
standard tools, which is what Docker does, and you don't need to support any
of the extra fancy features. But you gain an intimate understanding of what
could be going on when stuff in production is breaking.

~~~
tarmstrong
> The only thing better that I would recommend is to actually make your own
> Docker. It's not that difficult, you're basically just slapping together
> some system calls and execvp()ing some other standard tools, which is what
> Docker does

Not sure if you've seen this, but the author wrote a great post about the
Linux primitives that Docker is built on:
[http://jvns.ca/blog/2016/10/10/what-even-is-a-container/](http://jvns.ca/blog/2016/10/10/what-even-is-a-container/)

------
random567
I don't get the differentiation between "rkt" and "docker". What complexity
does Docker add? Separate networking? Difficulty communicating between
processes?

Just trying to understand why rkt is less overhead...

~~~
schmichael
One large difference is that docker has a daemon that exposes an HTTP API and
acts more or less like an init for containers.

The docker daemon has historically had some stability issues as well as some
security implications. Running a command line tool like rkt is a vastly
smaller attack surface and less complex stack overall.

~~~
indexerror
> docker has a daemon that exposes an HTTP API

This API has to be enabled explicitly. By default, the Docker daemon listens
on a unix socket instead ( "/var/run/docker.sock" ).

> acts more or less like an init for containers

There is Docker daemon and there is Docker CLI. Both have separate scopes.

~~~
gnosek
> There is Docker daemon and there is Docker CLI. Both have separate scopes.

Docker CLI is glorified curl, everything happens in the daemon (containerd
being logically -- but finally not physically -- part of the daemon).
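
The "glorified curl" point can be demonstrated directly. A sketch (the API
version in the path is an assumption, and the request is guarded so it is
harmless on hosts without a running daemon):

```shell
# Ask the Docker daemon for its version over the unix socket, bypassing the CLI
curl --silent --unix-socket /var/run/docker.sock http://localhost/v1.24/version \
  || echo "no Docker daemon listening here"
```

Everything the CLI does - pulling, starting, inspecting - is an HTTP call of
this shape against the daemon.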

------
InquisitiveMe
Just started learning SaltStack for infrastructure provisioning. But it makes
so much sense just to build all requirements into the container itself. Has
anybody already moved from tools like Salt/Puppet to a container-only
approach?

------
Thaxll
Well, you can run a single binary in a solution like Mesos or Kubernetes;
afaik it's what Google is doing.

It's also how Hadoop and other map/reduce frameworks work.

------
iuhgdwij
If the author reads this, she might want to run this in vim

:%s/build a comtainer image/build a container image/g

to fix a spelling error

------
masterleep
Try systemd containers (e.g., systemd-nspawn) if you're on a systemd distro
like Ubuntu 16.04.

~~~
aorth
systemd-nspawn is great! The workflow feels much more native than Docker's on
a GNU/Linux host. The easy integration with networking, BTRFS snapshots, etc.
is very powerful.

~~~
Torgo
When I "discovered" systemd-nspawn, I kept thinking "why aren't people using
this? This is easier to understand in every way."

------
jsmthrowaway
I've successfully used Docker in production since it was a hobby project of
dotCloud. I was answering the OP's question about why anybody would ever
consider not using Docker for containers, so it's ironic that I'm now
responding to _your_ comment as the followup about my personal preferences.

Clever commentary trap: why not? Here's why. Oh, well, nobody's forcing you to
use it.

~~~
twblalock
This is what you wrote:

> Docker is a developer tool, not a container runtime.

It's just not true, and that's why I responded.

~~~
jsmthrowaway
The bloody comment I replied to called it that, and it's not controversial.
Docker did not suggest running in production for many years and the production
stuff is largely an afterthought. Again, I developed this opinion from running
it for _years_.

It was not intended for production usage from the beginning, as you claimed,
but I'm already tired of responding to this thread because I'm turning
instantly gray for not buying into Docker so why would I bother proving that
you're not correct?

~~~
twblalock
You called it that. Read what you wrote. Here, I'll paste your entire comment
as it is now, since you keep editing it:

> Containers existed before Docker and will exist after Docker, and articles
> like these are desperate attempts to remind everyone of that in the face of
> Docker's unilateral destruction of the concept. I know I'm right because of
> comments like these that presuppose "well, if you're doing containers, use
> Docker, the alternatives just aren't there." Except they were, before
> Docker, and after Docker.

> Docker is a developer tool, not a container runtime. The 'containers' it
> presents require you to commit a daemon on every machine, commit to a weird
> storage and distribution story when, you know, files of containers served in
> a flat directory are totally adequate, and so on. Docker made extremely
> complicated choices for a lot of things and now everybody wishing to advance
> containers has to deal with presuppositions like these, where Docker exists
> and the motivation for not using it is unclear to a lot of people, you
> included.

See the first sentence in the second paragraph? That's you. You wrote that.
You did not present it as an opinion, but as a fact.

~~~
jsmthrowaway
I edit to add because I'm on a phone. Nothing you've (annoyingly) pasted has
been touched since it was submitted the first time. Please take your overly
aggressive abuse of me to someone who cares, and reread what moondev wrote:

> Docker is by far the more mature and adopted development tool.

That's him. He wrote that. I added to it with something that you disagree
with, and I'm rapidly tiring of interacting with you because you're making it
extremely hard to remain civil.

But since you're the expert on Docker's production intentions, could you
perhaps discuss Swarm and how it compares against competitive technology in
the field? We can start small: what kind of scheduler does Swarm employ? Two-
level, optimistic? What is your understanding of the runtime and performance
bounds of the selected scheduling strategy? What is the expected latency for
scheduling decisions as the number of executing containers grows? How does
Swarm handle failure to schedule?

Could we then compare that scheduler against Aurora, Marathon, Kubernetes,
Omega, and Borg? Why do you feel that Docker is production ready in light of
the competitive work being done in this space? What do you feel is the
difference between Mesos and Aurora? Between Docker and Kubernetes? Since
Docker is intended for production usage, can you elaborate on some of the
challenges you've experienced running it in production?

~~~
twblalock
Here is the oldest copy of the Docker website on Wayback Machine, from the
month that Docker was first released:
[https://web.archive.org/web/20130323002800/http://docker.io/](https://web.archive.org/web/20130323002800/http://docker.io/)

The title is "Docker: the Linux container runtime." So, your assertion that
"Docker is a developer tool, not a container runtime" is simply false.

The text clearly describes deployment of docker containers as much more than
simple development tools: "docker can run on any x64 machine with a modern
linux kernel - whether it's a laptop, a bare metal server or a VM. This makes
it perfect for multi-cloud deployments." That sounds like production
deployment to me.

Whether or not you like Docker, or think it works well in production, is
immaterial to this. Docker _is_ a container runtime, it _is_ intended for
production deployment, and that means it is not merely a development tool as
you claimed. The first website Docker ever published is proof of Docker's
intentions. It was intended as a production tool from the very beginning.

~~~
orf
You're being incredibly hostile over something very trivial.

Docker wasn't 'recommended' for production until 1.0:
[https://blog.docker.com/2014/06/its-here-docker-1-0/](https://blog.docker.com/2014/06/its-here-docker-1-0/)

~~~
twblalock
Docker was always intended to run in production. That was the goal from the
beginning. The developers recommended against running it in production before
the 1.0 release because it was in beta. Developers of many products that are
intended to run in production do the same thing before the first stable
version is released.

That is not the same thing as saying that Docker was never intended to be used
in production, and was only ever intended to be a development tool. That's the
view I'm arguing against, and it's not trivial -- it denies the fundamental
purpose of Docker.

------
patrickg_zill
If someone sent me an email equivalent to this blog post, my sole
recommendation would be Proxmox running OpenVZ. OpenVZ gets you 90% of the way
to Docker-style containers without breaking anything for her developers.

~~~
icebraining
The current version of Proxmox uses LXC instead of OpenVZ, which is
interesting since it's using the same kernel containerization features that
Docker uses, unlike OpenVZ which required a custom kernel.

~~~
patrickg_zill
My mistake- I thought they had the option to use KVM, LXC, or OpenVZ all at
once. I see now that in adding LXC they dropped OpenVZ.

