LXC Networking introduction (containerops.org)
136 points by gyre007 on Nov 19, 2013 | 42 comments



LXC is interesting. It has been around for probably 5 to 7 years; at least I remember looking at it back then. Then it just kind of made slow progress. It has been in the kernel since probably 2.6.30, for example. But aside from kernel and virtualization forum discussions, it just didn't get much attention. That was kind of odd, because it is a very cool piece of technology.

Then I guess something happened in the last year or so and all of a sudden it became very popular. dotCloud certainly has a lot to do with it. Was it better libvirt integration, too, perhaps: http://libvirt.org/drvlxc.html

Now of course it cannot completely replace KVM, because it is more of a container than full virtualization. So running Windows VMs on it will not work. For Linux, one could probably have a farm of hosts based on various distros (with LXC enabled), and that would provide the ability to run various Linux guest OSes by picking the host that matches.


I think the main reason it didn't take off until recently was because the kernel namespacing[0] work wasn't complete. Then, even when user namespacing was mostly complete, XFS still wasn't compatible until 3.12 [1].

[0] http://lwn.net/Articles/531114/

[1] http://www.phoronix.com/scan.php?page=news_item&px=MTQ1Nzc


Early implementations were quite buggy; I tried it in 2010 after a presentation at linux.conf.au and found that the tools simply segfaulted with monotonous regularity on a then up-to-date environment. It's made significant strides since then, obviously.


Ah good point. That might explain it.


Maybe I have an incomplete understanding of Docker and LXC, but I feel like this is retreading a lot of ground that Joyent has covered with SmartOS. Virtualization with KVM, networking with Crossbow, and VM exporting with ZFS datasets have all been around for the past few years without getting all of this attention.


KVM has more performance overhead than LXC. Sure, SmartOS has "zones" which are a similar concept, but then you're stuck with the SmartOS (Solaris) userspace. Plus, with LXC the physical host is running Linux, so hardware compatibility is less of a concern.


What are you missing in the SmartOS user space? This is a genuine question and not a troll. SNGL (http://www.joyent.com/blog/jonathan-perkins-on-why-smartos-i...) takes a shot at addressing the "comfort level" problem, but I'd like to understand if the problem is a technical one.

pkgsrc (http://www.pkgsrc.org) does a good job at making sure most packages you'd need are available.

I've grown so frustrated with the direction of Linux distributions over the past couple of years that I'm mostly avoiding new installations. I've been using OpenSolaris-derived distributions for a while for ZFS, but I've come to the realization that SmartOS covers the majority of my general computing needs as well. Anything I write and deploy goes on SmartOS.

For tools that won't work on SmartOS for a technical reason, I'm using FreeBSD more. My firewalls have been OpenBSD for quite some time.


I haven't personally used SmartOS, but from my experience with Solaris 11, package management and availability wasn't great. Many common packages weren't available, some were only available through a semi-maintained community repository (OpenCSW), and other things were difficult to build due to non-GNU Solaris components (even when GCC and GNU tools were installed).

PS: I don't mean any disrespect towards OpenCSW, the packages that were there saved me a ton of trouble earlier this year. Packaging is tough.


BTW, Solaris 11 and OpenIndiana support Linux branded zones. These have a Linux userland and are mostly syscall-compatible with Linux, so many Linux programs can run on them as-is. I suspect that SmartOS took that out, but OmniOS (the Illumos distribution I'm using) still has that feature.


With SmartOS, you get the NetBSD userland and package system rather than the Solaris one. For the most part, this means it is more likely to just have what you want and to be easier to build for.


But as the other poster mentioned, it uses pkgsrc, which used to be the NetBSD package system but has been adopted more widely. You can even use it on Linux. It has a lot of packages.


> KVM has more performance overhead than LXC

When I was using it, KVM had some small memory and CPU overhead, but it's a fair point.

> ...so hardware compatibility is less of a concern.

That's definitely a problem. I'd very much like for SmartOS to branch out from just Intel, but they seem to be keeping development centered around the hardware they use for Joyent Cloud.


Performance overhead definitely varies based on the workload: http://openbenchmarking.org/result/1308296-SO-UBUNTUKVM59 (on the AIO test something was definitely wonky, maybe fsyncs weren't being obeyed)

That's with a Linux hypervisor though, so I wonder if SmartOS has any more impact on performance.


I haven't used SmartOS, but can you only do Solaris containers now? IIRC around the Indiana release one of the demos was running a RHEL5 container. I also seem to recall at least one person getting Debian working.


It has KVM support, but zones (i.e. containers) use the same kernel, so they cannot run Linux.


People just never got fired for using Linux, and Solaris will never be Linux.


Thanks for sharing this link.

LXC has gotten me very excited about testing new Linux services on a home box again. I always worry about exposing a server to the internet with any new services. The idea that a compromise of the box could leak everything on it usually leads me to avoid exploring new services.

LXCs seem to give me hope that I could experience a compromise, but not lose everything.


LXC is not secure at the moment. root in an LXC container can lead to root on the host. There is unfortunately no good summary of the problems - here is my list (take it with a grain of salt - a lot of these problems are mitigated in docker.io and with AppArmor in Ubuntu):

- Without CONFIG_USER_NS and a newer kernel, a lot of problematic things can happen. If /proc or /sys is mounted in the container, DoS or escalation to root is possible: http://blog.bofh.it/debian/id_413 - At the moment no stock distro kernel has CONFIG_USER_NS enabled.

- There are some issues related to remounting filesystems rw and altering files

- Mounting cgroups in the container can also lead to problems - DoS and acquiring more resources

- Capabilities. Your stock Linux distribution won't boot without CAP_SYS_ADMIN (see man 7 capabilities) - there are a lot of other capabilities that could be troublesome.

- Not sure about this one: http://seclists.org/oss-sec/2011/q3/385

So for running services without CAP_SYS_ADMIN and with a lot of other capabilities dropped, it can be considered somewhat safe. For everything else it's probably dangerous.

Not sure if all of these issues are still a problem today, but if you are running LXC on e.g. a current Debian Wheezy you have to know about all of them.
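To make the capability point concrete: restrictions like these are expressed directly in the container's config file. A minimal hardening sketch (lxc.cap.drop and lxc.cgroup.devices.deny are real LXC config keys, but the capability list here is illustrative, not a vetted policy):

```
# Drop capabilities the workload should not need (illustrative list):
lxc.cap.drop = sys_module sys_time mac_admin mac_override
# Deny access to all device nodes by default, then whitelist as needed:
lxc.cgroup.devices.deny = a
```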


Many people have complained about CONFIG_USER_NS missing in Ubuntu [1] [2] [3] [4] [5].

Apparently the reason is that it wasn't compatible with the XFS filesystem until 3.12 [6], and even though nobody uses [7] XFS, backwards compatibility takes precedence over new features.

This changed in 3.12, but it looks like the patch just missed the boat for Ubuntu Saucy [8].

[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/509808

[2] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1085684

[3] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1191600

[4] http://lwn.net/Articles/541787/

[5] http://www.mail-archive.com/ubuntu-bugs@lists.ubuntu.com/msg...

[6] https://news.ycombinator.com/item?id=6761847

[7] By "nobody uses," I mean "I don't use it"

[8] http://lists.debian.org/debian-kernel/2013/09/msg00356.html
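An easy way to check whether a given kernel was built with user namespace support is to grep its build config. A small sketch (the config location varies by distro, so both common locations are tried):

```shell
# Look for CONFIG_USER_NS in the running kernel's build config.
# Distros usually ship it under /boot; kernels built with IKCONFIG
# expose /proc/config.gz instead.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    grep CONFIG_USER_NS "$cfg" || echo "CONFIG_USER_NS is not set"
elif [ -r /proc/config.gz ]; then
    zgrep CONFIG_USER_NS /proc/config.gz || echo "CONFIG_USER_NS is not set"
else
    echo "kernel config not found"
fi
```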


Sure, but you should not be mounting /sys and /proc in the container. Just run one application. I do not understand the trend of running a whole Linux distro in a container; no one did that with chroots.


Why not? No virtualisation overhead; good IO, memory, CPU and even network limits with cgroups; and no extra committed RAM for the VM. You just give team xyz a login and they run their favorite distribution and software without overhead. If you have a copy-on-write filesystem you save even more space, and with LVM you have easy snapshots and backups. You can put a lot of users on a moderately fast machine this way. Thanks to lxc-attach it is also dead easy to debug problems for them or to install software. I'd love to have this possibility in the future.
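For reference, the cgroup limits mentioned here are just per-container keys in the LXC config. A small sketch with made-up values (key names as used by LXC with cgroup v1 controllers):

```
# Cap the container's memory and give it a relative CPU weight:
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.cpu.shares = 512
# Pin the container to two cores:
lxc.cgroup.cpuset.cpus = 0,1
```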


Because 1. right now it breaks security, and 2. it's a whole lot more to manage, and that's expensive.

I see the convenience argument, which is why people like Docker, but adding a whole OS of overhead to every process you want to run is basically insane in my view.


Docker doesn't require you to run an entire distro in your container. It's what a lot of people do out of the box because it's convenient and familiar. But as far as docker is concerned your container can be a single static binary in an otherwise empty directory.

There is a growing trend of people building micro-containers with just the bare minimum for their application. Docker is facilitating that trend, not preventing it. If only because it explicitly encourages thinking of containers as application-oriented, not machine oriented.
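A micro-container along those lines can be sketched as a three-line Dockerfile (assuming `myapp` is a statically linked binary; the name is made up):

```
# Start from the empty base image and ship nothing but the binary.
FROM scratch
ADD myapp /myapp
CMD ["/myapp"]
```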


Not saying Docker prevents it, but it is hardly widespread. Unless you use Go or a few other languages, your toolchain won't even create static binaries for a start.


Plenty of people run Debian and Gentoo in chroots on top of other Linux variants. That is done on Android devices by people who want a more familiar userland to work in, and it is done by people for testing and development across distros.


Exactly what security features do you think Docker is providing that LXC isn't?


LXC provides all the features, except some protection that can at the moment only be provided by AppArmor or another MAC system. I'm not sure what the LXC defaults are - but I assume that Docker uses more restrictive defaults: https://github.com/dotcloud/docker/blob/master/lxc_template....

I'm still learning about LXC, so my post regarding security may be inaccurate. I just thought I'd share it because a lot of people think it's as secure as a virtual machine. I hope soon it is.


I find the number of people shilling for Docker mindblowing.

"I don't know much about LXC, and I don't know what the defaults are, but I assume that this trivial wrapper around LXC provides more security".


Sorry, this was not my intention. I actually prefer LXC to Docker myself and did not want to shill for anything. I just wanted to point out that, of the (possible) security problems that can happen when using LXC, Docker mitigates most by simply not using the features involved: they don't allow mounts, they drop CAP_SYS_ADMIN. I just posted the config file.

It was just a well-intended warning - similar to the warnings in the Ubuntu docs: https://help.ubuntu.com/12.04/serverguide/lxc.html#lxc-secur... and Gentoo: https://wiki.gentoo.org/wiki/LXC#MAJOR_Temporary_Problems_wi...

Here is the default configuration for Ubuntu 13.10 in comparison: https://gist.github.com/anonymous/7550932


A look at docker/LXC security back at the end of August 2013: http://blog.docker.io/2013/08/containers-docker-how-secure-a...


A minor correction -- dotCloud the PaaS does not actually use Docker in production, as the article claims. It uses something pretty similar, which was the inspiration for the creation of Docker, but not Docker itself.

[Source: @solomonstre, in person at a Docker meetup a few weeks ago]


That's correct (I'm the @solomonstre in question :). We do our best not to imply the contrary by accident. Docker is a clean slate which incorporates all our operational learnings from dotCloud - but it is a full rewrite and thus not yet production ready.


That it takes 10 pages to make one part of LXC understandable is one of LXC's issues ;-)


...plus now the enterprise needs to hire more expensive sysadmins to babysit all of it. That's exactly what enterprises want to avoid.

Our sysadmin left after he deployed LXC "goodness" to make things "better" and we are still in a recovery mode from this.


That's an interesting point, but yes, LXC isn't all that simple/easy; that's its main issue. I don't know why they made it so complicated. The namespacing technology underneath is pretty straightforward.

I'm sure they'd argue for days that that's not true (plus some of the LXC folks actually implemented the namespacing) - but at the end of the day I make my "jail" with the "unshare" command and mount, which is much simpler.
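For the curious, a minimal version of that unshare-based "jail" might look like the following. This is a sketch, not a hardened setup; it assumes util-linux's unshare and a kernel with CONFIG_USER_NS enabled, which lets it run unprivileged (on older kernels you would drop --user/--map-root-user and run it as root):

```shell
# Create new user, mount, UTS, IPC and PID namespaces; map the current
# user to root inside the new user namespace. --fork is required so the
# new PID namespace applies to the child process.
unshare --user --map-root-user --mount --uts --ipc --pid --fork \
    sh -c 'echo "uid inside the namespace: $(id -u)"'
```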


Great article! For me Docker has opened up what is great about the underlying LXC technology - a proper logical wall between apps on the same hardware. The networking was the hardest thing (for me) to grasp in Docker - this guide opens up a whole new level. It's cool however that if Docker won't do something networky I can drill down a layer to LXC instead.


Somewhat related, are there any cookbooks on how to "transform a heavy-duty server into several virtual machines"? By this I mean something that includes, among other things, mapping N available external IP addresses to N virtual machines. I realize a good prior understanding of networking, iptables, etc. would make sense. But... still... any well-detailed recipes out there? (I have tried to look them up, but everything seems to assume more knowledge than what I have in mind!)
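Not a full cookbook, but the "N public IPs to N guests" part usually reduces to a handful of NAT rules on the host. A hedged sketch in iptables-save format (all addresses and the 10.0.3.0/24 guest subnet are made-up examples; it also assumes net.ipv4.ip_forward is enabled and the public IPs are routed to the host):

```
*nat
# One DNAT rule per public IP / guest pair:
-A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.3.10
-A PREROUTING -d 203.0.113.11 -j DNAT --to-destination 10.0.3.11
# Let guests reach the outside world via the host's address:
-A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
COMMIT
```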


We are currently virtualizing our infrastructure's desktops with QVD and LXC. There have been a few challenges, but overall, it has been painless and exciting.


A bit of a tangent, but I wonder what happened to lguest, and how does (did) it compare to LXC?

Looked promising, but I never had success getting it going despite the promises on the webpage.

And it looks like there's still little activity and quite a low profile after all these years.

    http://lguest.ozlabs.org/
    http://en.wikipedia.org/wiki/Lguest


I like LXC, but for now all my virtualized stuff runs on KVM (if needed, like an old CentOS install with Oracle) or OpenVZ. I haven't yet sat down and compared all the features of LXC + Docker with OpenVZ, though.


Fantastic article: super in-depth and well written. One of the most thorough guides to anything LXC-related I've ever seen. Thanks for demystifying a little bit of LXC.


Are there nice GUI/web tools for configuring docker and LXC firewalling etc?



