The cool thing is that we have a number of companies contributing significant technologies to the open source ecosystem, which together build a stack of software that gets us closer to running distributed systems in a reasonably reproducible manner:
- Google is bringing Kubernetes (k8s), which represents their experience in deploying cluster-wide applications
- CoreOS is bringing etcd to the table for the cluster-wide decisions in k8s (see the sketch after this list)
- Docker is bringing a format that makes it quick to get your applications isolated and running
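For a sense of what etcd contributes here: it's a consistent key/value store that k8s uses for cluster-wide state and coordination. A minimal sketch using CoreOS's go-etcd client (the keys and values are hypothetical, and error handling is trimmed):

```go
package main

import (
	"fmt"

	"github.com/coreos/go-etcd/etcd"
)

func main() {
	// Connect to a local etcd node.
	client := etcd.NewClient([]string{"http://127.0.0.1:4001"})

	// Announce a service instance with a 60-second TTL; if the host
	// dies and stops refreshing the key, it expires automatically.
	if _, err := client.Set("/services/web/10.0.0.1", "up", 60); err != nil {
		panic(err)
	}

	// Any node in the cluster can now discover the service.
	resp, err := client.Get("/services/web", false, true)
	if err != nil {
		panic(err)
	}
	for _, node := range resp.Node.Nodes {
		fmt.Println(node.Key, "=", node.Value)
	}
}
```

The cluster-wide "decisions" come from etcd's Raft-based consistency: every node sees the same view of keys like these, which is exactly what a cluster scheduler such as k8s needs.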
I think that's the way the landscape is changing: lightweight, easily managed containers rather than virtualizing entire systems.
It's just one level of abstraction up. First the hardware was abstracted, and now the OS is abstracted. Once we can reliably and seamlessly shift applications (not VMs) around generic pools of compute resources, to coin a phrase, you're going to see some serious shit!
It may very well go that way, but I think unikernels (like MirageOS, http://www.openmirage.org/) are very interesting as well. A paravirtualized unikernel should be able to carry less overhead than regular virtualized OSes and be able to operate completely in ring0/kernelspace.
Pair that with all the hardware acceleration for virtualization available these days and you may get some pretty lean and fast virtualization that also more easily supports hybrid deployments (container software needs to be built for specific container host OSes and libs, depending on how much is bundled in each container).
Also, the security implications of containers vs (para)-virtualization are different, so I think my personal jury's still out on that one too.
But I do agree that these are interesting times, for sure. And containers may win, I just don't think it's a done deal just yet. :)
This is not a swipe at Docker; it's an interesting technology if you're running Linux and I'm sure it will be very valuable to many.
However, let's not forget that Solaris had this functionality first.
Solaris has offered hypervisor-level virtualization (LDOMs) on SPARC, lightweight "virtualization" (Containers/Zones) on SPARC/x86, and now offers full system virtualization out of the box (Kernel Zones) on SPARC/x86.
And there's also OpenStack and Puppet system management integration in Solaris 11.2+.
Not really. Even in an absolute sense, linux-vserver is/was contemporary with zones.
Yes, in the sense that partitioning technology isn't new. zones and jails are comparable to vserver/openvz/lxc. vpars, lpars, and ldoms have analogues on mainframes. Various hypervisor technologies (xen, kvm, vmkernel) are also not unique to solaris, and were done on Linux.
What Docker offers that none of these do is containerization for applications without the "weight" of even zones. It's not virtualizing systems. It's starting one application in its own container. That's it.
I don't know who's spreading this "Docker is just like zones" FUD, but it's wrong. Linux has had container-level virtualization for a decade, and LXC has had mainline support for a while. Docker builds on that, but it's different.
At the same time, EMC is not shitting themselves over docker. Application containers will not replace traditional or container virtualization for all workloads. But they will for some.
linux-vserver is not contemporary with zones. If you think that it is, you haven't looked at Solaris zones technology very carefully.
linux-vserver requires the kernel to be patched; Solaris zones does not.
linux-vserver has no clustering or process migration capability; Solaris zones in combination with LDOMs gives you a path for live migration.
linux-vserver networking is based on isolation, not virtualization. This means each virtual server can't create its own internal routing or firewall setup -- Solaris zones can.
linux-vserver doesn't fully virtualize the system; the clock and parts of /proc and /sys are not virtualized.
So no, linux-vservers are not equivalents.
Yes, Docker offers containerization -- but not sufficient containerization. Certainly not sufficient for security purposes, as has come up repeatedly in recent history.
As for the "weight" of zones; I don't know what "weight" you're talking about. Solaris zones have almost no overhead at all. They use some disk space, but we're talking less than 300MB if I recall correctly at most in a default configuration. And Solaris Zones give you several advantages that Docker doesn't provide.
Regardless, I'm certain that for some specific use cases, Docker will prove an appropriate technology.
This is not something you agree or disagree with. The Linux technologies mentioned _were_ contemporary with, or in specific cases even pre-date, zones.
The rest is just not a one-to-one comparison. The fact that Linux requires the kernel to be patched is a cultural thing. That is how new functionality is distributed in Linux land.
Linux-vserver also does not, as you mention, offer comparable functionality. Solaris Zones works differently, and the only cases where you can compare them is where their use cases overlap. But you will see much more overlap with something like LXC.
Any direct comparison is moot, however, as Sun/Oracle does not want these technologies to be adopted in Linux. They can at most serve as (valuable) proofs of concept of how the implementation works in the real world as Linux slowly gains corresponding functionality. And it increasingly looks like Docker is part of this picture.
Regardless, I disagree with the assertion. linux-vservers barely had their first 1.0 release about a year (2004) before the release of Solaris Zones as a beta. It's likely that the actual development of Solaris zones started around the same time as linux-vservers.
Even if you were to successfully argue that it "predates" Solaris zones -- it doesn't predate them by very much.
The fact that Linux requires the kernel to be patched is not just a cultural thing; it's a very large additional maintenance cost, and it proves that linux-vserver wasn't valuable enough to go and stay in the mainline kernel. I spent enough years maintaining Linux servers that required mainline kernel patches (such as a workstation at home) that I grew tired of it.
You can't blame Sun/Oracle for the failure of Linux to produce a completely equivalent technology.
The primary problem is that none of the mainstream Linux distributions have chosen to actually build a fully-architected platform including both the kernel and userland. The OpenStack project is finally forcing some of them to do that, but until they have a filesystem just as capable as ZFS (btrfs someday?) and a packaging system just as deterministic and capable as IPS (Nix someday?), they'll always be a little bit behind.
Integration matters in the operating system; it makes a huge difference in terms of capability, reliability, and user experience.
In the end, use the right OS for the right job. I happen to believe Solaris is the right OS for servers, but I develop and distribute software for Windows, Mac OS X, Solaris, and Linux, as I think they are all either great or generally reasonable desktop OSes.
> It's likely that the actual development of Solaris zones started around the same time as linux-vservers.
That's what contemporary means.
> linux-vserver wasn't valuable enough to go and stay in the mainline kernel.
Lots of technologies start out-of-tree and are only incorporated into mainline much later. That's part of what the big Linux distributors do for a living, and a healthy side of the Linux ecosystem.
> none of the mainstream Linux distributions have chosen to actually build a fully-architected platform including both the kernel and userland
For compartmentalization, I take it. It is indeed a problem that it has stayed a niche product in Linux land for so long, but there have been plenty of minor Linux distributors focusing on it, mainly for ISP use.
> In the end, use the right OS for the right job. I happen to believe
I have never in my professional life been in a situation where the operating system was not given by the circumstances. What I believe is simply not relevant. YMMV, of course, and good for you if it does.
It's unreasonable to compare the functionality of zones in 2014 with their functionality in 2005, when vserver was contemporary and the principal containerization solution.
In 2014, you'll find that LXC or OpenVZ (or Xen paravirt in some environments) are the preferred virtualization solutions and have been for years, which have every advantage zones have.
By "weight" of zones, I mean that they're still effectively Solaris containers running init and basic services. Linux containers do this. Docker doesn't. It's app virtualization.
How is it unreasonable to compare the functionality of zones in 2014 to linux-vservers which are also under active development in 2014?
You're going to have to provide some actual data to support your assertion that linux-vserver was ever the "principal containerization solution".
LXC and OpenVZ do not have every advantage zones have; zones have other advantages because they're integrated with OS features that only Solaris (and derivative) operating systems have out-of-the-box -- such as ZFS. Which provides the ability to rapidly snapshot, clone and deploy containers. Zones also have other advantages that LXC and OpenVZ do not because of the networking stack features offered in Solaris.
The so-called "weight" of init and basic services is meaningless. But don't take my word for it, just download the Solaris 11.2 Beta and try it for yourself. Theorising about the potential "weight" of init and basic services (which are fairly minimal) is premature optimisation.
As I said before, Docker doesn't provide the full security isolation that Solaris Zones does; I'm sure it's the right style of solution for specific cases, but it is not an appropriate general solution for isolation or containerisation.
It's not unreasonable to compare the functionality of zones in 2014 with the functionality of vserver in 2014. But you compared the functionality of zones in 2014 with the functionality of vserver in 2005 (which hasn't changed much).
LXC is the preferred container solution and has been for years. I only referenced vserver because of your "Linux finally catching up to zones" comment, when Linux has been doing containerization as long as Solaris.
I'm also not going to "provide any data" about vserver. You can look at the release dates for vserver, openvz, and lxc yourself, as well as when lxc made mainline and how many VPS providers use openvz, versus how many distros even package vserver in 2014.
LXC made mainline for a reason. OpenVZ is pretty comparable in features. You're making a sideways argument now based on Linux not having ZFS, but that isn't the discussion. It's also true that Linux doesn't have Crossbow. It's not true that LXC and OpenVZ can't take advantage of openvswitch, which is pretty comparable. But none of that has anything to do with Docker. This is not "LXC vs Zones vs Jails".
Containers can also be backed by btrfs or LVM snapshots, which aren't as feature-filled as ZFS, but you're reaching. Similarly, zones aren't as featureful as full-fledged VMs. But that's also not what we're talking about.
You're repeatedly missing what Docker actually does. Ok?
Zones -> LXC. LXC also has "weight" in that it starts init and basic services, and has to be managed.
Docker -> containerized chroot. Docker is not an analogue or competitor to zones.
However, Docker (through libcontainer) is already built on top of cgroups and can be managed through SELinux. Security is not a valid complaint.
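For anyone wondering what libcontainer actually drives under the hood, here's a minimal sketch of the Linux namespace primitives involved (the clone flags are real kernel features; the shell child is just an illustration, and it needs root to run):

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Re-run a shell in fresh UTS, PID, and mount namespaces -- the
	// same kernel primitives libcontainer layers cgroup resource
	// limits and (optionally) SELinux labels on top of.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS |
			syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Inside, `echo $$` prints 1: the shell believes it is the only
	// process tree on the machine.
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```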
I am not repeatedly missing what Docker does; all I'm pointing out is that Docker is currently insufficient as a true isolation solution, from a security perspective and others.
Again, I'm sure Docker is appropriate for some specific situations, but it is not currently an appropriate general container solution if you care about security.
>I'm sure Docker is appropriate for some specific situations, but it is not currently an appropriate general container solution if you care about security.
Yes, you are. Docker is not currently and is not trying to be a "general container solution". Again, that's LXC.
But "X is currently insufficient as 'true isolation'" is inane. libcontainer is built on top of kernel cgroups. Docker can be wholly isolated with selinux:
I'm done here. I agree to disagree. I still believe you are wrong and I did not claim Docker was a general container solution. You don't know how Solaris Zones work, because if you did, you'd understand that cgroups are insufficient to provide the same level of security.
> I still believe you are wrong and I did not claim Docker was a general container solution
>I'm sure Docker is appropriate for some specific situations, but it is not currently an appropriate general container solution if you care about security.
>Docker doesn't provide the full security isolation that Solaris Zones does; I'm sure it's the right style of solution for specific cases, but it is not an appropriate general solution for isolation or containerisation.
Tell yourself whatever you need to.
>You don't know how Solaris Zones work, because if you did, you'd understand that cgroups are insufficient to provide the same level of security.
Which is why I also mentioned (and even linked you to the documentation for) docker_selinux, which is actual security, as opposed to mere process isolation (namespaces) and resource control (cgroups). Incidentally, this is the same way non-labeled zones work, but I guess I don't know anything about those.
Meaning that while Solaris had many technical advantages over Linux, it's not exactly a vibrant and growing community. Do you really think that any greenfield endeavor is going to pick Solaris as its OS?
Disclaimer: I just left a gig where I spent 6 years in a Solaris shop, and while there wrote an on-demand zones management system.
So in other words, your comment really wasn't about Solaris, it was about ecosystem and community.
As for "vibrant" or "growing", I don't know how you would objectively measure those things or to what objective metric you would relatively compare them to.
Even if the Solaris community is not what you desire, the technology is still certainly significantly advancing almost every year.
You wrote a whole paragraph about what it isn't. But what is it, then, that makes Docker more than just LXC (or another container) plus scripts to manage applications in it? One could presumably still spawn a single application on any OS...
While Docker is built on top of libcontainer and cgroups (it used to be built on LXC), traditional containers, including LXC and zones, start init and enough services to look like a "normal" system. You can still use rc.local to manage applications in them if you want to, I guess.
Docker is a build system for containers which run /bin/foo as PID 1, with no services, no ssh, and no init (which presents other problems: reaping children, handling SIGTERM, etc.). It's containerization for application virtualization.
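To make the PID 1 caveat concrete, here's a minimal sketch of the zombie-reaping and signal-forwarding work an init would normally do for you (/bin/foo stands in for your application; Linux-only):

```go
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Start the actual application; /bin/foo is a stand-in.
	cmd := exec.Command("/bin/foo")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		os.Exit(1)
	}

	// Forward termination signals, or `docker stop` will wait out
	// its timeout and then SIGKILL the container.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		for s := range sigs {
			cmd.Process.Signal(s)
		}
	}()

	// As PID 1 we inherit every orphaned process in the container,
	// so reap zombies until our own child exits.
	for {
		var status syscall.WaitStatus
		pid, err := syscall.Wait4(-1, &status, 0, nil)
		if err != nil {
			os.Exit(1)
		}
		if pid == cmd.Process.Pid {
			os.Exit(status.ExitStatus())
		}
	}
}
```

Skip this and orphaned grandchildren accumulate as zombies, since nothing else in the container will ever wait on them.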
It's analogous to App-V, or to an "Application LPAR" if such a thing existed -- both are good mental models.
My complaint to the previous poster is mostly that it differs in the same way that an Application WPAR differs from a WPAR. Yes, they're the same base. No, they're not the same thing.
> lightweight, easily managed containers rather than virtualizing entire systems.
Not unless lightweight, easily managed containers can run Windows. Not just Windows, but any OS whose kernel doesn't match the host's -- so nobody is eating VMware's lunch yet.
I think that after the baby boomers have left the picture in business, Windows will slowly die out. Developers today are using OS X and Linux. Don't quote me, but traditional schools are the only ones using Windows. My college does, and I honestly think it's a learning point for all developers to know Linux over Windows because of its usage around the world. Tech companies are dropping Windows for the opposing systems because of speed, reliability, and the current trend in design. With this happening, all development, or at least what I'm seeing in the web, is mainly done on OS X or Linux. Therefore it makes sense that lightweight containers will eventually eat VMware's lunch.
I was just thinking what a pain it is to have to use VMs/ssh in order to get access to containers from my Mac OS machine. It made me (semi-seriously) consider getting a Linux machine as my next laptop. I'm still a bit skeptical, but it's a start to think about it.
I think the only ones that really ran with KVM were/are Joyent with their SmartOS -- combining (some of) the tooling/tech that makes Solaris Zones great with a Free and Open operating system, freedom from Sun/Oracle, and support for many guest platforms (and/or low-overhead "native" zones).
I think the only real downside of SmartOS is the same as with OpenSolaris (or pretty much any other "it isn't Linux" unix-like OS): drivers and hardware support.
The great thing with Linux as a host is that (edge cases excepted) you can literally run it on your entire infrastructure (right now, or in the probable near future) -- from phones and tablets via desktops and laptops through servers, clusters, and pretty much anything beyond.
I'm sure we'll see some backlashes from the new monoculture, but I think overall it's a bright future.
And we can have our occasional parties arguing for why everyone should really use (Dragonfly|Free|Open)BSD/(Open)Solaris/Plan9 because it has X, does Y better and has more consistent and better documentation.
LXC is lighter weight. Just as a 5% reduction in page size saves millions in bandwidth, 5% CPU overhead costs real money in electricity and hardware (for a rough sense of scale: on a $10M annual compute budget, 5% is $500K a year). Assuming all other things are equal (I know they aren't -- but security, tooling, and management can be improved), VMware has the inherent overhead of the hypervisor, which isn't an issue with LXC.
TL;DR: Kubernetes is basically like a local copy of a specific-configuration cloud provider that uses Docker. It's also the basis of Google Cloud Platform's container offering, so developing against it lets you deploy your code there. As far as software goes, it's very immature/early days. Some of the pertinent architectural limitations that Kubernetes appears to have are: a limited range of target OS platforms for services, a non-standard mechanism of service relationship abstraction (read: lock-in warning), an immature security model, limited support for complex network topologies (e.g. hardware switch management), and a fixed approach to cluster scheduling/consensus.
PS. Corrections welcome, I'm just trying to help people get a grasp without bothering with the background reading.
Google's open source investment astonishes me, but as far as the desktop is concerned they are also hugely oblivious and ignorant (yes, I am talking about Drive for Linux).
Gophers? That's what developers who use the Go language call themselves?
What's with the ridiculously bad naming/branding in the tech world?
* Gophers (the animal) are considered by many to be a pest.
* The Docker logo is a whale carrying shipping containers on its back. Shipping containers that go into the ocean are basically unrecoverable/not worth recovering, and whales spend very little of their time on the surface (meaning all the containers will go into the ocean)
This is as ridiculous as having an airline named after an animal that cannot fly and kills people.