[flagged] Understanding the key differences between LXC and Docker (flockport.com)
26 points by yungchin on Sept 12, 2014 | 27 comments



This post was killed by user flags.


"Containers isolate and encapsulate your application workloads from the host system. Think of a container as an OS within your host OS in which you can install and run applications, and for all practical purposes behaves like an virtual machine. Containers decouple your applications from the host OS."

Wrong wrong wrong. Containers do not encapsulate (in the security sense). You can get some security by layering SELinux underneath, but you're still wide open to a range of kernel exploits. A container is not "an OS within [an] OS". Containers do not "for all practical purposes behave like a VM" since you can't run another kernel, BSD, Windows, etc on them. Containers do not decouple your app from the host OS, you are very much dependent on features compiled into your host kernel. Subtle userspace<->kernel ABI regressions will cause you days of debugging (I've been there several times).

"[VMs] .. at a performance penalty and without the same flexibility"

After boot, there's almost no difference in performance. Boot time is a huge factor, but don't confuse the two.

Containers have their place, are useful and very fast to provision, when you understand and accept their limitations (as many do), but don't spread nonsense like this.


Hi, I am the author of the article. It does not state that containers encapsulate applications in a security sense. You are attributing a claim the article does not make and then drawing conclusions from it.

It states 'Containers decouple your applications from the host OS, abstracts it and makes it portable across any system that supports LXC'. You can run the app in the container in any other system that supports LXC, isn't that what decoupling is?

I think you are looking at this from a technical perspective and not an end-user perspective. Why don't you download and try some of the Flockport containers available on the website? Those apps are decoupled from your Linux OS and will work in any LXC environment.

This is an article about containers for end users, not technical users, to help them understand the concept of containers and how LXC differs from Docker.

If security, multi-tenancy, running other OS's or specific kernels are your core use cases, virtualization makes more sense as is stated a few paragraphs below that quote, and in the LXC getting started guide on the same site.


"Docker restricts the container to a single process only."

Nope it sure doesn't.


You're right. It's more like a docker philosophy or best practice, not a restriction, and a best practice not everybody agrees with [1][2]. I'm using some docker containers based on Phusion's base image myself. It works fine. Others of my containers just run one process. I find it feels like a cleaner architecture when I manage to bundle concerns so that the resulting containers are stateless, but when I don't achieve that I don't pull my hair out over it. Whatever floats your boat.

[1] http://phusion.github.io/baseimage-docker/ [2] https://news.ycombinator.com/item?id=7258009
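For anyone curious, the baseimage pattern boils down to a few lines in the Dockerfile; the tag and the "myapp" service below are just placeholders, check the project README for the current details:

    FROM phusion/baseimage:0.9.15      # illustrative tag, pin whatever is current
    CMD ["/sbin/my_init"]              # baseimage's init: reaps zombies, starts runit

    # each service is a runit directory with an executable "run" script
    # that starts its process in the foreground (no daemonizing)
    RUN mkdir -p /etc/service/myapp
    ADD myapp.sh /etc/service/myapp/run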


This seems like a selective quote (and it's certainly not a direct quote), because later in the post, the author mentions (albeit somewhat offhandedly) that it's possible to run multiple processes using a supervisor (supervisord, etc) or a shell script.

Although I don't think that's completely wrong as I'm under the impression from my limited experience with Docker that it's intended to containerize applications rather than work analogously to, say, FreeBSD jails.


It was cut and paste actually.

https://www.dropbox.com/s/lw1s0ip52do2r2v/Screenshot%202014-...

Not really selective, the concept is repeated more than once in the article. Honestly it's a bit misleading.

"Docker containers are restricted to a single application by design."

https://www.dropbox.com/s/a8843otngrhrp9x/Screenshot%202014-...

No they're not __restricted__ at all, you can certainly run multiple apps in a single container.

It is accurate to say Docker is focussed on the deployment of a single app per container. But the article does repeat that it is a single process.

The difference, to be honest, has really got nothing to do with what you can do, but with what your aim is.

You can put sshd in your docker container, you can run init scripts. All these things are perfectly easy to do. It's just as a rule that's not your focus when working with Docker.

Again I think the article is misleading in this way.

IMHO of course ;-)


Hmm, thought for sure I'd searched for your exact quote, but I apologize that I missed it.

It seems the author of the article in question replied to my comment as well, and it's worth noting because his intent is (more or less) what I got out of reading the post [1]. I don't find it misleading. I think your point of contention may therefore be with the interchange between application and process as synonyms throughout the post. By implication, I took it to mean application in the context of a single cohesive unit (maybe not just simply a process, though it can be) that is run via a supervisor or alone.

(That said, using LXC directly is something of an exercise in frustration as the CLI tools are a bit... kludgy--Docker is far more polished.)

> You can put sshd in your docker container, you can run init scripts.

I seem to recall that running sshd from a Docker instance isn't exactly the intended use case of Docker [2] and is generally frowned upon.

[1] https://news.ycombinator.com/item?id=8310276

[2] http://jpetazzo.github.io/2014/06/23/docker-ssh-considered-e...


Hi Zancarius, my apologies, I mistakenly replied to you when it was intended for neilellis.

Imho if your use case is not statelessness and you need a container as a lightweight VM, LXC tools are actually quite simple and easy to use.
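To give a feel for it, day-to-day use is roughly the below; the container name, distro and release are just examples:

    # create a container from the generic download template
    lxc-create -t download -n web -- -d ubuntu -r trusty -a amd64
    lxc-start -n web -d      # boot it in the background
    lxc-attach -n web        # get a root shell inside it
    lxc-stop -n web          # shut it down
    lxc-destroy -n web       # delete it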

neilellis - I am sorry you feel the article is misleading. It describes the default behavior. The default Docker template has limitations that the user should be aware of, or you get questions like this - http://stackoverflow.com/questions/21280174/docker-centos-im...

I would suggest you have a look at the rationale for the Phusion base image here - http://phusion.github.io/baseimage-docker/

The Docker base template is not a multi-process, multi-app environment; it executes the application specified on the docker command line and exits when that process exits. Which is why you can't run apps in daemon mode (daemonizing has to be explicitly disabled) and why you need a separate process manager.
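You can see it for yourself with something like the below (the image is just an example):

    docker run centos /bin/true
    # stops immediately: /bin/true is the container's only process and it is done
    docker run -d centos /bin/sh -c "sleep 2"
    # runs for two seconds, then the container stops along with its process
    docker run -d centos /bin/sh -c "while :; do sleep 60; done"
    # keeps running, because PID 1 never exits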


Agreed. I think it might be more accurate to say that Docker containers have a philosophy toward individual applications within a container.

But those applications could have multiple executables, services, processes, etc...


Hi, this is what is stated in the article:

'Docker restricts the container to a single process only. The default docker baseimage OS template is not designed to support multiple applications, processes or services like init, cron, syslog, ssh. When it comes to applications for a LAMP container you would need to build 3 containers that consume services from each other, a PHP container, an Apache container and a MySQL container. Can you build all 3 in one container? You can, but there is no way to run php-fpm, apache and mysqld in the same container without a shell script or install a separate process manager like runit or supervisor.'

It's all in the same paragraph. There is no attempt to mislead. This is how Docker works.

That's why you need runit or supervisor to run multiple apps. https://docs.docker.com/articles/using_supervisord/
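The pattern from that doc is roughly the following; package names and paths follow the Ubuntu layout used there, adjust as needed. Dockerfile:

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
    RUN mkdir -p /var/run/sshd /var/log/supervisor
    ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    EXPOSE 22 80
    CMD ["/usr/bin/supervisord"]

And supervisord.conf:

    [supervisord]
    nodaemon=true

    [program:sshd]
    command=/usr/sbin/sshd -D

    [program:apache2]
    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"

Supervisor stays in the foreground as PID 1 and keeps both services running inside the one container.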

And this issue has been heavily discussed in Hacker News itself in the article about Docker Phusion base image - https://news.ycombinator.com/item?id=7258009

This article is not meant to trash Docker and I urge readers not to read it as such. It does not do that.

I have tons of respect for Docker, and any project involving hundreds of thousands of man hours of effort. It's an interesting use case of containers to build stateless applications, but it's a single use case.

While researching this piece I came across significant confusion online, including from technically minded folks and entire projects that persistently misrepresent LXC as 'low level kernel capabilities' or Docker as a 'front end user friendly' interface to LXC.

This kind of confusion does a disservice to both projects: it fails to articulate Docker's core stateless benefits on top of normal LXC containers, misrepresents the LXC project, and undermines informed discussion on Linux containers.


Aren't all unixes a single process (plus its children)? :P


AFAIK, you are correct good sir or madam.


Thankfully. You should be running a supervisor process and probably a logging process along with your app process at the least.


> Docker restricts the container to a single process only.

> When it comes to applications for a LAMP container you would need to build 3 containers that consume services from each other, a PHP container, an Apache container and a MySQL container.

Huh??


No, you can have multiple processes in containers. You might want to split them, though.


While people contemplate things like containers, it's worth noting that modern hardware virtualization imposes a performance overhead of low single digit percentage points, and with some technologies like deduplication of storage and memory can paradoxically improve performance over bare metal in many scenarios.

Containers are interesting and the technology is emerging, but for 1-2% overhead it just isn't as critical as it's often held to be.


You may have a point, but you're making up numbers and forgetting other scenarios. How much RAM and disk overhead is there to virtualize ping? How much time do I need to spawn the VM? VMs are great, but there are problems better solved by containers. Use both.


Why would anyone ever virtualize or containerize ping? In the real world, we use containers for largely the same things that we use virtualization for -- the web server, the database server, the VPN host, and so on. The overhead is negligible to begin with-

https://major.io/2014/06/22/performance-benchmarks-kvm-vs-xe...

-but with de-duplication in a solution like VMware, it approaches 0.

Containerization seems like a massive win when you're trying to fit a bunch of stuff in a single Digital Ocean VM, etc. When you're running many machines in a data center, though, it becomes dramatically less valuable.


Ping is actually an excellent candidate for virtualization/containers/SELinux since it has to run as root. A bug in ping could mean a carefully crafted response could lead to a remote exploit. There are obviously also a number of local exploits that take advantage of setuid binaries like ping: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-2851


I think sandboxing (with something like capsicum[1], or maybe seccomp?) would be more appropriate for ping than "containerization".

[1]: https://www.freebsd.org/cgi/man.cgi?query=capsicum&sektion=4


It doesn't have to any more - Linux supports special ICMP sockets, see https://lwn.net/Articles/420799/
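For completeness, on a distro that has wired this up it's just a sysctl plus a ping built against the new socket type; the group range below is only an example:

    # allow unprivileged ICMP echo sockets for all groups on the box
    sysctl -w net.ipv4.ping_group_range="0 65534"
    # a ping that knows about SOCK_DGRAM/IPPROTO_ICMP then no longer needs setuid
    chmod u-s /bin/ping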


Come on... ping was obviously chosen as a ludicrously simple example. You'd never containerize ping. But you could imagine some simple user application that would be easily deployed as a container, but that would be horrible overkill to wrap in a virtual machine.

You're right that once you've hit rack levels of virtualization, you've already hit the management overhead requirements to use VMs.

In my mind, it isn't about processing or I/O overhead. It's about the overhead of management. It's a trade-off. Some applications are better managed as whole systems (VMs), whereas others are more building-block-ish and probably better as containers. The two technologies are pretty complementary.


modern hardware virtualization imposes a performance overhead of low single digit percentage points

For CPU and RAM, but the overhead for I/O is much higher.

technologies like deduplication of storage and memory can paradoxically improve performance over bare metal

In principle there's no reason why containers can't use those same technologies, although AFAIK currently Linux can't do memory dedupe for containers.


but the overhead for I/O is much higher

Hardware-virtualized I/O is a thing now as well, and there too the overhead has shrunk to marginal levels. Add external storage (iSCSI), and things get much more complex.

Much of the argument about virtualization is ten-plus years old. It once represented significant additional overhead, but that simply isn't true at all now.


Looking at it from a public cloud perspective, I doubt we'll ever be able to use SR-IOV. By the time a particular function has been driven into hardware there are new functions we need that aren't in hardware, like sophisticated overlay networking.


Nesting is an issue though, at least for some use cases.



