Wrong wrong wrong. Containers do not encapsulate (in the security sense). You can get some security by layering SELinux underneath, but you're still wide open to a range of kernel exploits. A container is not "an OS within [an] OS". Containers do not "for all practical purposes behave like a VM", since you can't run another kernel (BSD, Windows, etc.) in them. Containers do not decouple your app from the host OS; you are very much dependent on features compiled into your host kernel. Subtle userspace<->kernel ABI regressions will cost you days of debugging (I've been there several times).
"[VMs] .. at a performance penalty and without the same flexibility"
After boot, there's almost no difference in performance. Boot time is a huge factor, but don't confuse the two.
Containers have their place, are useful and very fast to provision, when you understand and accept their limitations (as many do), but don't spread nonsense like this.
It states 'Containers decouple your applications from the host OS, abstracts it and makes it portable across any system that supports LXC'. You can run the app in the container on any other system that supports LXC; isn't that what decoupling is?
I think you are looking at this from a technical perspective and not an end user perspective. Why don't you download and try some of the Flockport containers available on the website, those apps are decoupled from your Linux OS and will work in any LXC environment.
This is an article about containers for end users, not technical users, to help them understand the concept of containers and how LXC differs from Docker.
If security, multi-tenancy, running other OS's or specific kernels are your core use cases, virtualization makes more sense as is stated a few paragraphs below that quote, and in the LXC getting started guide on the same site.
Nope it sure doesn't.
Although I don't think that's completely wrong as I'm under the impression from my limited experience with Docker that it's intended to containerize applications rather than work analogously to, say, FreeBSD jails.
Not really selective, the concept is repeated more than once in the article. Honestly it's a bit misleading.
"Docker containers are restricted to a single application by design."
No they're not __restricted__ at all, you can certainly run multiple apps in a single container.
It is accurate to say Docker is focused on the deployment of a single app per container. But the article repeats that it is single-process.
The difference, to be honest, has really got nothing to do with what you can do, but with what your aim is.
You can put sshd in your docker container, you can run init scripts. All these things are perfectly easy to do. It's just as a rule that's not your focus when working with Docker.
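To illustrate, here's a minimal sketch of an sshd-in-Docker image, along the lines of the examples in Docker's own docs. The base image tag and paths are my assumptions, not anything from the article:

```dockerfile
# Hypothetical Dockerfile: sshd as the container's main process
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
# sshd expects its privilege-separation directory to exist
RUN mkdir /var/run/sshd
EXPOSE 22
# -D keeps sshd in the foreground so the container doesn't exit
CMD ["/usr/sbin/sshd", "-D"]
```

Build with `docker build -t sshd-test .`, run with `docker run -d -p 2222:22 sshd-test`, and you can ssh to port 2222 on the host. Whether you *should* is the actual debate here.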
Again I think the article is misleading in this way.
IMHO of course ;-)
It seems the author of the article in question replied to my comment as well, and it's worth noting because his intent is (more or less) what I got out of reading the post. I don't find it misleading. I think your point of contention may therefore be with the interchange between application and process as synonyms throughout the post. By implication, I took it to mean application in the context of a single cohesive unit (maybe not just simply a process, though it can be) that is run via a supervisor or alone.
(That said, using LXC directly is something of an exercise in frustration as the CLI tools are a bit... kludgy--Docker is far more polished.)
> You can put sshd in your docker container, you can run init scripts.
I seem to recall that running sshd from a Docker instance isn't exactly the intended use case of Docker and is generally frowned upon.
Imho if your use case is not statelessness and you need a container as a lightweight VM, LXC tools are actually quite simple and easy to use.
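The basic LXC 1.x workflow really is just a handful of commands; a sketch, with a made-up container name (needs root and the lxc userspace tools installed):

```shell
# Create a container from the Ubuntu template, start it detached with a
# full init, then get a shell inside it.
lxc-create -n web -t ubuntu
lxc-start -n web -d
lxc-attach -n web
# When you're done with it:
lxc-stop -n web
lxc-destroy -n web
```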
Nielellis - I am sorry you feel the article is misleading. It describes the default behavior. The default Docker template has limitations that the user should be aware of, or you get questions like this - http://stackoverflow.com/questions/21280174/docker-centos-im...
I would suggest you have a look at rationale for Phusion base image here - http://phusion.github.io/baseimage-docker/
The Docker base template is not a multi-process, multi-app environment; it executes the application specified on the docker command line and exits when that application does. Which is why you can't run apps in daemon mode (you have to explicitly disable daemonizing), and need a separate process manager for multiple processes.
But those applications could have multiple executables, services, processes, etc...
'Docker restricts the container to a single process only. The default Docker baseimage OS template is not designed to support multiple applications, processes or services like init, cron, syslog, ssh. When it comes to applications, for a LAMP container you would need to build 3 containers that consume services from each other: a PHP container, an Apache container and a MySQL container. Can you build all 3 in one container? You can, but there is no way to run php-fpm, apache and mysqld in the same container without a shell script or a separate process manager like runit or supervisor.'
It's all in the same paragraph. There is no attempt to mislead. This is how Docker works.
That's why you need runit or supervisor to run multiple apps. https://docs.docker.com/articles/using_supervisord/
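The gist of that linked guide is a supervisord.conf along these lines, with supervisord itself as the container's single CMD. The program names here are just the examples that guide happens to use; a sketch, not a full setup:

```ini
; /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true          ; keep supervisord in the foreground for Docker

[program:sshd]
command=/usr/sbin/sshd -D

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
```

Docker runs supervisord, and supervisord runs everything else, each program in the foreground.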
And this issue has been heavily discussed in Hacker News itself in the article about Docker Phusion base image - https://news.ycombinator.com/item?id=7258009
This article is not meant to trash Docker and I urge readers not to read it as such. It does not do that.
I have tons of respect for Docker, and any project involving hundreds and thousands of man hours of effort. It's an interesting use case of containers to build stateless applications, but it's a single use case.
While researching this piece I came across significant confusion online, including among technically minded folks and entire projects, that persistently misrepresents LXC as 'low level kernel capabilities' or Docker as a 'user friendly front end' to LXC.
This kind of confusion does a disservice to both projects: it fails to articulate Docker's core stateless benefits on top of normal LXC containers, misrepresents the LXC project, and undermines informed discussion on Linux containers.
> When it comes to applications for a LAMP container you would need to build 3 containers that consume services from each other, a PHP container, an Apache container and a MySQL container.
Containers are interesting and the technology is emerging, but for a 1-2% overhead it just isn't as critical as it's often held to be.
But with the deduplication of a solution like VMware, it approaches zero.
Containerization seems like a massive win when you're trying to fit a bunch of stuff in a single Digital Ocean VM, etc. When you're running many machines in a data center, though, it becomes dramatically less valuable.
You're right that once you've hit rack levels of virtualization, you've already hit the management overhead requirements to use VMs.
In my mind, it isn't about processing or IO overhead. It's about the overhead for management. It's a trade-off. Some applications are better managed as whole systems (VMs), whereas others are more building block-ish and probably better as containers. The two technologies are pretty complementary.
That's true for CPU and RAM, but the overhead for I/O is much higher.
Technologies like deduplication of storage and memory can paradoxically improve performance over bare metal.
In principle there's no reason why containers can't use those same technologies, although AFAIK currently Linux can't do memory dedupe for containers.
Hardware virtualized I/O is a thing now as well, and in that case as well the overhead has shrunk to marginal. Add external storage (iSCSI), and things get much more complex.
Much of the argument about virtualization overhead is ten-plus years old. It once represented significant additional overhead, but that simply isn't true now.