Virtual Machines Vs. Containers: A Matter Of Scope (networkcomputing.com)
41 points by wslh on June 2, 2014 | 21 comments


We should need neither VMs nor Containers. The process isolation and environment virtualization provided by the OS should be sufficient; that's what it is there for.

The fact that it is not seems like a huge failure both in terms of application architecture that assumes it owns an entire machine and operating system technology that can't prevent this.

VMs always seemed like a ridiculous (expensive, over-engineered, under-performing, mis-applied, ...) solution to that problem (they're fine for OS/hardware simulation etc.), but containers look like a nice, minimal extension to the isolation offered by the OS.


Containers are the process and environment isolation facilities of the OS. It just turns out that sometimes it's useful for processes to see each other, and other times it's detrimental. Similarly for chroot and filesystems (for environment isolation). It turns out that containerization is nothing more than extremely fine-grained control over what processes get to do what in a running system. "Container" is just the name for a fully isolated process/environment.

Docker currently, for the most part, creates a basic userland for use within a container; LXC, however, doesn't require that at all - instead you can just link the existing userland FS into a directory tree. Heck, Docker even allows this, but it's harder to do some things than others.
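
To illustrate that a "container" is really just kernel isolation primitives plus a filesystem view, here is a rough sketch (the paths and the lxc.mount.entry line are assumptions, not a recommended setup):

    # roughly what a container boils down to: new namespaces + a chroot'd filesystem
    # (no /proc is mounted inside the new root here, so tools like ps won't work)
    unshare --fork --pid --net --mount chroot /srv/mycontainer /bin/sh

    # LXC-style bind of the existing host userland into the container tree,
    # set in the container's config file instead of copying a full userland:
    lxc.mount.entry = /usr usr none bind,ro 0 0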


> We should need neither VMs nor Containers. The process isolation and environment virtualization provided by the OS should be sufficient; that's what it is there for.

This is of course assuming that there is a common OS that supports the applications that people actually want to deploy. Virtualization at the x86 level has taken off partly because of the support for mixed operating systems. This has provided operations teams additional flexibility and enabled some consolidation of disparate workloads on shared hardware.


>This is of course assuming that there is a common OS that supports the applications that people actually want to deploy.

I think you missed:

"(they're fine for OS/hardware simulation etc.)"


Thanks for pointing that out, I didn't catch the precise scope of the problem you were calling out in my initial reading.


There is an added benefit to VMs: smaller attack surface.

Your average OS kernel has an abundance of interfaces, all of which may have bugs enabling a malicious program to elevate its privileges. The OS<->hypervisor channel can be much more restricted.

With containers, programs in the container are generally talking directly to the host kernel through the same interfaces as any other program, so you don't get that benefit.
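
To make that concrete, a quick check on any Docker host shows it (the image name is just an example):

    uname -r                       # kernel version on the host
    docker run ubuntu uname -r     # reported from inside a container: the same kernel

Both commands report the same kernel, because the container is just a group of processes running on the host kernel, reached through the ordinary syscall interface.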


Again, that seems a bug in the OS(es) that should be fixed, rather than papering over it by adding yet another layer, bringing its own set of vulnerabilities.

In my experience, the vulnerabilities in the different layers tend to be cumulative rather than restrictive, because the stuff you want to protect is in the vulnerable layer.


Yes, of course you can only exploit a bug if it exists :)

However, all code has bugs (including security-critical ones). Minimising the attack surface of trusted elements is an important part of defense-in-depth.

Xen is <150,000 LoC, and a lot of that can be run in less-trusted stub domains. The Linux kernel runs to millions of lines (admittedly probably not all compiled in, but it's still at least an order of magnitude more), and none of it can be fenced away from the sensitive bits.

I'm not saying containers don't have their place. But there are benefits to full VMs in security critical infrastructure.


No, Xen is ~150,000 LoC on top of every OS + application it's hosting. If any of them has a significant bug, someone can get inside your network. Put another way, Xen can have bugs which make you vulnerable even if your OS and applications were rock solid, but it does not add anything over simply running different applications on different physical hardware.


But a bug in the OS will bite you whether you are using containers, VMs or physical servers, so it's not very relevant to the discussion.

When comparing the security of containers to VMs, vulnerabilities in the OS should be out of scope. A more realistic attack scenario would be a virtual host or container server with two VMs/containers: a trusted one and an untrusted one. Assume neither has networking, but the untrusted one is running compromised software. The question is: how easy is it to escalate privileges from the untrusted container/VM to the trusted one?
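
For the container half of that scenario, the setup is roughly the following (the names and images are placeholders):

    # two containers on one host, neither with a network interface
    docker run -d --net=none --name trusted   trusted-image
    docker run -d --net=none --name untrusted untrusted-image
    # the question is whether code in "untrusted" can reach data in "trusted"
    # by any route other than a bug in the shared kernel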

I'd say that Xen's small code size and resulting easier auditing is a benefit here.



It is being fixed by moving some isolation from software to hardware.

The options are rebooting hardware/OS/compilers/apps (Mill Computing) or disaggregating existing operating systems as has been done by Xen and Hyper-V, using CPU features from AMD, Intel and now ARM.

Isolated hardware regions will become smaller:

http://theinvisiblethings.blogspot.com/2013/08/thoughts-on-i...

Namespaces/containers are for trusted workloads.


> Again, that seems a bug in the OS(es) that should be fixed

Granted. But unless there are glaring faults or unacceptable performance issues in the extra layer(s), you can gain some "security in depth", with each layer shielding the next and previous to an extent, reducing the potential fall-out of certain classes of bug.


Containers are an extension of that process isolation really, not a replacement for it. They allow control to be more finely distributed in a relatively easy manner. They also make coordinating library versions (or, more aptly, removing the need to coordinate them) easier.

They also offer easy ways to move applications between machines and locations, even applications that were never designed with any sort of flexibility like that, and for things that are designed for it they can offer more flexible scalability.
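
For the "moving applications between machines" point, the container case really can be as simple as something like this (the image and host names are made up):

    # ship an image to another host over ssh and start it there
    docker save myapp-image | ssh otherhost 'docker load'
    ssh otherhost 'docker run -d myapp-image'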

Yes, there are other techniques for both of these properties, but the existence of other solutions doesn't make VMs and containers a bad solution.

> expensive

They certainly don't have to be, assuming you mean financially expensive.

> over-engineered

I extrapolate from that (perhaps wrongly) that you think VMs are a relatively recent invention, when in fact they have been around almost as long as multi-processing has. They seem over-engineered on current common desktop and server hardware because these platforms were not designed with virtualisation in mind and the hypervisor design needs to either account for (or just live with) the deficiencies in this area, but with some hardware that simply isn't the case.

> under-performing

Depending on the solution chosen and your processing and IO loads, yes this can be the case - but often the performance effects are negligible, especially if you are using containers/VMs to consolidate light-use services on less hardware. A key problem I find is people underestimating the performance hits seen in VMs, perhaps because they've read too many promotional brochures and not run any of their own performance/suitability tests before deploying to a given platform.
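
For what it's worth, even a crude comparison like the sketch below (sysbench is just one option; the numbers only mean anything relative to the same run on bare metal and inside the VM/container) beats trusting the brochure:

    # run the same micro-benchmarks on bare metal and inside the guest, then compare
    sysbench --test=cpu --cpu-max-prime=20000 run
    sysbench --test=fileio --file-total-size=1G --file-test-mode=rndrw prepare
    sysbench --test=fileio --file-total-size=1G --file-test-mode=rndrw run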

> mis-applied

This is certainly true in some cases, but that can be said for any technology, especially when it is on the current buzzword bingo card (XML and XSLT, anyone? Great for what they are great for, but in oh so many places they hinder rather than help, either because they are not the right tool for the job or are just badly implemented).

> but containers look like a nice, minimal extension to the isolation offered by the OS

Which is exactly what they are, by my understanding, offering more than other existing options such as chroot while not being nearly as heavy as fuller virtualisation.


Asking this question led me to ask other questions. In particular, is abandoning full-machine virtualization for containers a real possibility? Is this a move that cloud architects should truly be considering?

It's a move FreeBSD, Solaris, and mainframe users made years ago (although it was full-machine, er, physicalisation they abandoned), since when I imagine they've been sitting around staring at the Linux industry's VM frenzy with bafflement.


I'm surprised I haven't seen anything being said about the reproducibility advantages of Docker.

Docker images are always reproducible through their Dockerfiles, while virtual 'appliances/images/snapshots' are - if at all - much harder to reproduce.
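
For anyone who hasn't looked at one, a Dockerfile is just a small build recipe that docker build turns into an image (the package choice here is arbitrary):

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y nginx
    CMD ["nginx", "-g", "daemon off;"]

    # built with:  docker build -t my-nginx .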


It's pretty easy to write a Dockerfile that isn't completely reproducible, like anything that uses apt-get.
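
To make that concrete: these two Dockerfile lines can produce different images depending on when they are built. Pinning a version (the version string here is only illustrative) narrows the problem but doesn't remove it, since old packages eventually drop out of the archive:

    # unpinned: whatever nginx version the mirror serves on build day
    RUN apt-get update && apt-get install -y nginx
    # pinned: more repeatable, until the archive stops carrying this version
    RUN apt-get update && apt-get install -y nginx=1.4.6-1ubuntu3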

I would also say that VM images should be built in an automated way, although VMs do allow you to shoot yourself in the foot.


VMWare + Vagrant + Puppet + In-house APT repository = fully reproducible

anything + external dependencies = incompletely reproducible
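
For concreteness, a minimal sketch of the Vagrant + Puppet part (the box name and manifest paths are assumptions; the in-house APT repository would be configured from the Puppet manifest):

    # Vagrantfile
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.provision "puppet" do |puppet|
        puppet.manifests_path = "manifests"
        puppet.manifest_file  = "site.pp"
      end
    end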

PS: "Please note Docker is currently under heavy development. It should not be used in production (yet)."

PPS: Docker runs anywhere as long as that anywhere is Linux 3.2+ on x86. So to run Docker contained apps on my hosts I'd have to run a VM anyway.


VMs are equally reproducible, e.g. through Vagrant or similar. This doesn't even include using tools like puppet or chef within either containers or VMs, or cloning for either.


I'm not sure I follow, could you elaborate? Are there specific things that you think are harder with Virtual Machines?


What about packer.io? Handles both Docker and VMs.



