Ask HN: What is the actual purpose of Docker?
313 points by someguy1233 865 days ago | 154 comments
I'm hearing about Docker every other day, but when I look into it, I don't understand the purpose of it.

I run many websites/applications that need isolation from each other on a single server, but I just use the pretty-standard OpenVZ containers to deal with that (yes, I know I could use KVM servers instead, but I haven't run into any issues with VZ so far).

What's the difference between Docker and normal virtualization technology (OpenVZ/KVM)? Are there any good examples of when and where to use Docker over something like OpenVZ?




> What's the difference between Docker and normal virtualization technology (OpenVZ/KVM)? Are there any good examples of when and where to use Docker over something like OpenVZ?

Docker is exactly like OpenVZ. It became popular because it really emphasized the equivalent of OpenVZ's Application Templates feature and made it much more user friendly.

So instead of following this guide: https://openvz.org/Application_Templates

Docker users write a Dockerfile, which in a simple case might be:

    FROM nginx
    COPY index.html /usr/share/nginx/html
So there's no fussing with finding a VE somewhere, downloading it, customizing it, installing stuff manually, stopping the container, and tarring it; Docker does all that for you when you run `docker build`.

Then you can push your nice website container to the public registry, ssh to your machine and pull it from the registry. Of course you can have your own private registry (we do) so you can have proprietary docker containers that run your apps/sites.
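A minimal sketch of that workflow, assuming a hypothetical image name and a private registry at registry.example.com:

    docker build -t registry.example.com/mysite:1.0 .    # build the image from the Dockerfile above
    docker push registry.example.com/mysite:1.0           # push it to your (private) registry
    # then, on the target machine:
    docker pull registry.example.com/mysite:1.0
    docker run -d -p 80:80 registry.example.com/mysite:1.0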

From my perspective, the answer to your question would be: always prefer Docker over OpenVZ; they are the same technology, but Docker is easier to use.

But I've never really invested in OpenVZ so maybe there's some feature that Docker doesn't have.


Docker and OpenVZ are not the same. Docker is focused on single applications; OpenVZ provides an entire OS in a container. OpenVZ also has support for live migration.


It could be that their communities have different philosophies about containers, but this is not a technical difference. I think Docker added support for live migration recently too (the Doom demo at that recent DockerCon, right?).

They are built around the same set of kernel features and as far as I can see offer exactly the same abstractions.

For example, Phusion Baseimage is a Docker base image that's similar to an OpenVZ container in the sense that it emulates a full running Ubuntu environment. It has its uses, but the Docker community would rather see containers that encapsulate a single application with no extra processes.


> OpenVZ provides the entire OS in a container.

Docker does that, too. Actually, I'm running docker containers as a fast and easy replacement for VirtualBox VMs.


Perhaps you could do this if you're on Linux and booting into separate OSes that run on the Linux kernel, or if you're using boot2docker. However, in neither case does Docker itself provide the kernel in the container.


What do you mean by Docker not providing the kernel itself? I did not mean that Docker and OpenVZ are similar technologies; they are exactly the same technology, just with different toolsets. You can only run Docker on Linux (boot2docker simply runs a Linux VM), and you use the kernel of the host OS.


OpenVZ is closer to KVM (full virtualization). An OpenVZ container is a lightweight VM which shares the host kernel, has a persistent FS, and runs a traditional OS you have to manage.


When I worked with docker containers, I noticed that they seem to present a full OS, with a FS that you had to manage.

I had to change /etc config files and manage where logfiles went and how they were handled; all of this is recorded on a diffing filesystem.

I might be misinterpreting what you mean though. I have no experience with OpenVZ and little with Docker.


This has been mentioned many times, but still: this is about "best practices" or the common use case. OpenVZ or Docker (or even full VMs) can be used interchangeably to solve a given use case.


It serves as an amazing excuse to re-invent the wheel at your own workplace. It's a hot technology, and if you're not using it, it's because you're inept. Rip out all of the stable things that everyone knew how to use and slap containers in there! If it's not working, it's because you're not using enough containers.

No security patching story at your workplace? No problem, containers don't have one either! If someone has shipped a container that embedded a vulnerable library, you better hope you can get a hold of them for a rebuild or you have to pull apart the image yourself. It's the static linking of the 21st century!


I want to downvote the first paragraph but upvote the second one.

Doesn't Docker also help cause problems like SSH private key reuse? I'm sure there are mitigations, but it's sad to need ways to prevent an activity that the software makes easy to do.


>I want to downvote the first paragraph but upvote the second one.

I had the very same feeling. Containers are very useful, but the Docker suite of tools just doesn't have a very good security story.


They have the same security story as other Linux systems.


That's... just not true.


If you have an SSL vulnerability, you need to patch the Docker image, just like you'd have to patch a Linux system.

Now you say something of substance!


Docker hosts pre-built images that have known exploits in them. They also bundle insecure versions of libraries with their software: https://github.com/docker/compose/issues/1601


I think the problem here is that people seem to assume that "application isolation" is synonymous with "security isolation." Your statement is true, the vulnerabilities are the same, but people don't seem to get that there is no "security story" for containers in the first place. That isn't their job.


Isn't one of the claims that if you patch the main OS with a new base image (without changing the libraries, just patching like you normally would), then with the Dockerfile you could re-set up the application in a matter of minutes?


while promising a much better one...


docker and openVZ aim to do the same thing.

docker is a glorified chroot and cgroup wrapper.
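As a rough illustration (not what docker literally runs, just the kernel features it wraps; the rootfs path is hypothetical):

    # run a shell chrooted into a rootfs, inside fresh PID/network/mount namespaces
    unshare --pid --net --mount --fork chroot /srv/rootfs /bin/sh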

There is also a library of prebuilt docker images (think of it as a tar of a chroot) and a library of automated build instructions.

The library is the most compelling part of docker. Everything else is basically a question of preference.

You will hear a lot about "build once, deploy anywhere". Whilst true in theory, your mileage will vary.

what docker is currently good for:

o micro-services that talk on a messaging queue

o supporting a dev environment

o build system hosts

However, if you wish to assign IP addresses to each service, docker is not really mature enough for that. Yes, it's possible, but not very nice. You're better off looking at KVM or VMware.

There is also no easy hot migration, so there is no real solution for HA clustering of non-HA images. (Once again possible, but not without a lot of heavy lifting; VMware provides it with a couple of clicks.)

Basically docker is an attempt at creating a traditional Unix mainframe system (not that this was the intention): a large lump of processors and storage that is controlled by a single CPU scheduler.

However, true HA clustering isn't easy. Fleet et al. force the application to deal with hardware failures, whereas VMware and KVM handle it in the hypervisor.


> docker and openVZ aim to do the same thing.

docker is a process container, not a system container.

> docker is a glorified chroot and cgroup wrapper.

That is fairly immaterial; suffice it to say that the underlying Linux core tech that enables docker has matured enough lately to make a tool like docker possible. I've built many containers and I never thought about them in terms of the underlying tech.

> There is also a library of prebuilt docker images (think of it as a tar of a chroot)

yes

> and a library of automated build instructions

More accurate to say there is a well-defined DSL for defining containers.

> You will hear a lot about build once, deploy anywhere. whilst true in theory, your mileage will vary.

Have to agree; this is oversold, as most of the config lives in attached volumes and needs to be managed outside of the container.

> However if you wish to assign ip addresses to each service, docker is not really mature enough for that. Yes its possible, but not very nice. You're better off looking at KVM or vmware.

Have to disagree here, primarily because each service should live in its own container; docker is a process container, not a system container. Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker.

> There is also no easy hot migration. So there is no real solution for HA clustering of non-HA images. (once again possible, but not without lots of lifting, Vmware provides it with a couple of clicks.)

None is required. Containers are ephemeral and generally don't need to be migrated, they are simply destroyed and started where needed. Requiring 'hot migration' in the docker universe generally means you are doing it wrong. Not to say that there is no place for that.

As a final note, all my docker hosts are KVM VMs.


Edit: this sounds like I'm being petty; I apologise, I'm just typing fast.

> docker is a process container not a system container.

Valid. However, the difference between Docker images and OpenVZ images is the inclusion of an init system.

> Have to disagree here, primarily because each service should live in each own container, docker is a process container, not a system container. Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker.

I understand your point.

I much prefer each service having an IP that is registered in DNS. This means that I can hit up service.datacenter.company.com and get a valid service (using well-tested DNS load balancing and health checks to remove or re-order individual nodes).

It's wonderfully transparent and doesn't require special custom service discovery in both the client and the service. Because, like etcd, it has the concept of scope, you can find local instances trivially. Using DHCP you can say "connect to servicename" and let dhcpd set your scope for you.

> None is required. Containers are ephemeral and generally don't need to be migrated, they are simply destroyed and started where needed. Requiring 'hot migration' in the docker universe generally means you are doing it wrong. Not to say that there is no place for that.

Here I have to disagree with you. For front-end type applications, ones that hold no state, you are correct.

However for anything that requires shared state, or data its a bad thing. Take your standard database cluster ([no]SQL or whatever) of 5 machines. You are running at 80% capacity, and one of your hosts is starting to get overloaded. You can kill a node, start up a warm node on a fresh machine.

However now you are running at 100% capacity, and you now need to take some bandwidth to bring up a node to get back to 80%. Running extra machines for the purpose of allowing CPU load balancing aggrieves me.

I'm not advocating writing apps that cannot be restarted gracefully. I'm also not arguing against ephemeral containers; it's more a case of easy load balancing and disaster migration. Hot migration means that software is genuinely decoupled from the hardware.


> However the difference between docker image and openVZ images is the inclusion of an init system.

No, it isn't. Most people don't use an init system with Docker images. However, one of the top-10 most popular images uses one -- the Phusion Passenger base images. They make a pretty compelling argument for why you should.

None of these arguments are relevant in the big picture. Where Docker shines is the package management, not the virtualization. As a package management system it is brilliant -- though incomplete. The package management could be fully content-addressable, at which point we'll have something even more brilliant than what it is now. But it isn't, and I doubt anyone will try it until after this core concept gets adopted into the mainstream.

Ten years ago, in 2005, I heard these same types of arguments about cloud providers, the Xen hypervisor, and the AWS API. I've seen old mainframe folks rolling their eyes, saying the technology is old and this is hyped up. Of course it's hyped up; but unless you can look past the hype and your contempt, you won't see what's really there. No one is really arguing about cloud technology now, and the hold-outs are outnumbered by the majority.


> I much prefer each service having an IP that is registered to DNS. This means that I can hit up service.datacenter.company.com and get a valid service. (using well tested dns load balancing and health checks to remove or re-order individual nodes)

There are docker-backed service management tools that will do this for you automatically (assign public/private DNS per service cluster, including load balancing), like Empire: https://github.com/remind101/empire


"Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker."

Care to elaborate on this? Do you use the linking system described here? https://docs.docker.com/userguide/dockerlinks/

I mean, your various containers still communicate over IP, right? Just a private IP network within the host, rather than outside?

(Obviously I've never used Docker.)


The OP just means don't put everything into one container.


Yes, except each container has its own isolated network and explicitly exposes a port that linked containers can connect to. In development I think a lot of people just use --net=host so that all the containers share the host networking stack (at least, I do).
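For example (the app image name is hypothetical; redis is just a stand-in service):

    # dev shortcut: share the host's network stack
    docker run -d --net=host redis
    # or give each container its own network and link them by name
    docker run -d --name db redis
    docker run -d --link db:db myapp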


"However if you wish to assign ip addresses to each service, docker is not really mature enough for that. Yes its possible, but not very nice. You're better off looking at KVM or vmware."

I don't understand. If you can't assign IPs to each service (or it's difficult/unreliable to do so) how can processes talk to each other and the outside world?


Host/port mapping, plus service discovery via things like docker-discover, docker-register, or Consul.
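In practice that looks something like this (names are hypothetical):

    # publish container port 8080 as port 80 on the host
    docker run -d -p 80:8080 --name api myapp
    # other hosts then reach it at <host-ip>:80; a registry like Consul can track the mapping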


It's messy.


On OpenVZ, I can edit /etc/network/interfaces for each CT and can assign a static IP, for one example. Can I not do that for Docker? Sorry if this is naive, I don't know much of anything about Docker.


No, for many reasons:

You should have no way to log in to the docker container.

Docker's networking does not allow for that IP to be visible.


Ok, thank you for clearing that up for me.


For me, it is the ultimate expression of the Continuous Delivery idea of "build once." I can be very confident that the docker image I build in the first stage of my pipeline will operate correctly in production. This is because that identical image was used for unit tests, integration and functional testing, the staging environment, and finally production. There is no difference other than configuration.

This is the core problem that Docker solves, and in such a way that developers can do most of the dependency wrangling for me. I don't even mind Java anymore, because the CLASSPATHs can be figured out once, documented in the Dockerfile in a repeatable, programmatic fashion, and then ignored.

In my opinion the rest of it is gravy. Nice tasty gravy, but I don't care so much about the rest at the moment.

Edit: As danesparz points out, nobody has mentioned immutable architecture. This is what we do at Clarify.io. See also: https://news.ycombinator.com/item?id=9845255


I get this by using vagrant + ansible rather than docker. Easy to spin up or destroy the environment in the same way in a VM, staging server or live environment.

I don't really see the point of lightweight virtualization. It provides an illusion of isolation which will likely come crashing down at some probably very inconvenient point (e.g. when you discover a bug caused by a different version of glibc or a different kernel).


Vagrant + ansible is a developer environment tool. Docker is a tool around making containers redistributable for production usage.

Packer is not quite an apt comparison, but it would be a better comparison than Vagrant.

The advantage is you do the steps that could possibly fail at build time. The downside is you need to learn to get away from doing runtime configuration for upgrades.

http://michaeldehaan.net/post/118717252307/immutable-infrast...

I wrote Ansible, and I wouldn't even want to use it in a Docker context to build or deploy VMs if I could just write a Dockerfile - assuming, probably, that I don't need to template anything in it. I would still use Ansible to set up my "under cloud" as it were, and I might possibly use it to control upgrades (container version swapping) - until software grows to control this better (it's getting there).

However, if you were developing in an environment that also wanted to target containers, using a playbook might be a good way to have something portable between a Dockerfile and Vagrant, if a simple shell script and the Vagrant shell provisioner wouldn't do.

I'd venture in many cases it would.


The problem is Vagrant + Ansible violates the rule of "build once."

I don't care about the isolation for isolation's sake, I care about it for the artifact's sake.


How is building a Vagrant box via Ansible configuration any different than building a Docker container with a docker file? You can use both tools to build an image once and then rebuild for the updates. I don't see how the tool in any way violates that constraint.

What is this rule to only build once? I can see not wanting to create multiple artifacts of your codebase, but with machines it is possible to continually update them and sometimes desirable as well. In the "cloud" world, you can arguably rebuild a server every time it needs updates, but at the physical level you don't always have capacity to absorb the hit of rebuilding multiple boxes at once. The physical servers need to get updated and managed post-install.


> How is building a Vagrant box via Ansible configuration any different than building a Docker container with a docker file?

Unless you're snapshotting that vagrant box and then deploying that to all your servers somehow, you are building multiple times.

> What is this rule to only build once?

I'd recommend reading the book Continuous Delivery. It is a fantastically helpful read.

I prefer not to update my machines, but that is because I follow immutable deployments. But even if I did update my machines, it is far cleaner (and easier to roll back!) to deploy an asset which has all its dependencies in the box than to push out code and maybe have to upgrade or install new packages. The Gemfile.lock and friends make this a bit less of a problem, but you also get to lock things like the libxml version or ffmpeg or...

> In the "cloud" world, you can arguably rebuild a server every time it needs updates, but at the physical level you don't always have capacity to absorb the hit of rebuilding multiple boxes at once.

Totally true, and we don't do this. We build a machine image and do a rolling-deploy replacing existing servers with the new machine image.

> The physical servers need to get updated and managed post-install.

One of the reasons I try not to work with hardware. Physical hardware is hard, and avoiding it makes my life much simpler. I love it.


>Unless you're snapshotting that vagrant box and then deploying that to all your servers somehow, you are building multiple times.

You're also configuring many things in many different potentially complex ways.

The docker method of using environment variables as a configuration hack to get around this is pretty horrible, IMHO. Especially compared to ansible's YAML/jinja2 configuration.


>Especially compared to ansible's YAML/jinja2 configuration.

YAML/jinja2 is just terrible. When you have to introduce a templating system to programmatically generate your YAML configuration files, what you have really needed the whole time is an actual programming language.


It's a non-Turing-complete programming language, much like Excel spreadsheets are non-Turing-complete programming languages.

Taking out the Turing completeness restricts the mess you can make and lets non-programmers do more while still being customizable.


Some smart people disagree with that:

http://12factor.net/config

I think the point is not to conflate configuration that is equivalent to code (which, sure, put it in version control) with configuration that is specific to how code is deployed (which your deployment tool should just tell you, via env vars).


> I'd recommend reading the book Continuous Delivery. It is a fantastically helpful read.

Which one? The one by Humble and Farley (Addison-Wesley) is from 2010, is it still relevant?


Still? For me, when I hear book recommendations, the older the book is the more likely the book is still relevant. Books get forgotten over time.

The book is excellent. I was surprised that it wasn't older.


Some books cover tools in detail; that can be great, but those books go out of date. Will put it on the list, thanks.


Yes, one by Humble. 2010. Still fantastic. Timeless.


> What is this rule to only build once?

If your cycle is "build, test, build, deploy", then you are not deploying those artifacts that you tested.

Any number of factors (different dependency version, toolchain difference, environment differences, non-reproducible builds) could lead to the second build being different from the first one, and then you deploy an untested artifact.

Not to mention that rebuilds can be resource intensive.


First, not everything produces bitwise-identical results from build to build (WebSphere EAR files, for example). Second, it's time-consuming to rebuild from scratch every time. This is especially important if you're in a cloud environment and scaling horizontally for load. You want a way to bring resources online quickly.


All clouds I've used have snapshots - it's hard to get faster than creating a VM from one. You can just keep an up-to-date template using any config management tool.


I don't care so much for this rule. It sounds like a fig leaf covering for broken build scripts.

I care about tracking down issues before they reach production. Meaning that I want an environment that mirrors production as closely as possible. Meaning heavyweight not lightweight virtualization.


Agreed about preventing issues getting to production, but that doesn't exclusively mean heavyweight virtualization. It also doesn't "cover up" broken build scripts.

Our build scripts get tested a dozen times a day and cannot tolerate half-assed broken build scripts.

Our deployment pipeline (after verifying the image is good enough to be deployed) packs the docker image into a machine image along with several other containers. The machine image is then deployed to staging. If the machine image passes staging, it goes to production. If an issue hits production exclusively (it has happened only a handful of times), it is simply a matter of rolling back to the previous machine image.


It is far easier and less risky to run the build once and use that artifact than to make sure every environment it could possibly be built in is identical. Keeping all your environments perfectly identical is more likely to either: 1. introduce subtle differences you aren't even aware of, or 2. introduce DLL hell at datacenter scale.


>Agreed about preventing issues getting to production, but that doesn't exclusively mean heavyweight virtualization.

Well, it gets you a step closer to accurately mimicking production.

>It also doesn't "cover up" broken build scripts.

That seems to be what that 'build once' rule is for. If your build process isn't risky, why the need to prohibit running it twice?


So that your build process is faster. If I can use the same container for local dev, continuous integration, staging, and production, that means I only had to build it once and the pipeline to get something from a developer's laptop to a production instance is much quicker.

Just because I can install the operating system doesn't mean I want to do this on every deploy of an application.


The kernel is a concern, but I thought the libraries were container-specific.


Unix has had this since day 1: you put your app in a file and run it as a process. The modern fashion of making an app out of a thousand separate files is just silly, and Docker is just a way to make this stupid model barely usable.


Like OP, I've been wondering exactly what problem Docker is meant to solve -- thank you for this explanation, it makes total sense.


>> I don't even mind Java anymore because the CLASSPATHs

Agreed, and I'd add that Python's paths (I forget what they're called), as well as Java CLASSPATHs, have been a problem for me on occasion too, which means Docker would probably help with all these types of path issues.


This was already done at the development shop I last worked at with VM images. Docker didn't do anything in particular to help this except for reducing the image size.


Sounds like a lot of work rebuilding and redeploying images every time a security update is available?


It isn't. Our deployment process is very simple: we download the configuration, and we download the container. This is extremely scriptable and repeatable. We have push-button deploys and don't mind rebuilding. That said, we do have a more mature pipeline, so it may be an issue for other people.


How do you handle different configurations then? Especially if you need to provide N values (or structured data).

Also, how do you manage your containers in production?


Configuration files, made available to containers as a read-only mount via the volume flag. No external network or service dependencies that way.

I'm not terribly fond of using environment variables for configuration, personally. That method requires either a startup shim or development work to make your program aware of the variables, and your container manager has to have access to all configuration values for the services it starts up.
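e.g., something along these lines (paths and image name are hypothetical; :ro makes the mount read-only inside the container):

    docker run -d -v /srv/myapp/conf:/etc/myapp:ro myapp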


I use environment variables, which can be passed in as arguments when you start the container. In cases where the configuration is complicated I'll use an environment variable to tell the container which redis key / db key to load config from.
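Roughly (the variable names are hypothetical):

    docker run -d -e REDIS_URL=redis://cache:6379 -e CONFIG_KEY=myapp:prod myapp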


As others have written, you can either download your config from an external host, or pass in environment variables to your container and generate a configuration based on them.

Lots of people write their own scripts to do this; I wrote Tiller (http://github.com/markround/tiller) to standardise this usage pattern. While I have written a plugin to support storing values in a Zookeeper cluster, I tend to prefer to keep things in YAML files inside the container - at least, for small environments that don't change often. It just reduces a dependency on an external service, but I can see how useful it would be in larger, dynamic environments.


Configuration: We use environment variables for anything small-ish. For more complicated configurations, similar to omarforgotpwd, we keep the values (files AND environment variables) in S3 and download them at deploy time. For stage/prod differences we can literally diff the different S3 buckets.

Management: We create AMIs using Packer. Packer runs a provisioning tool which downloads the container and the configuration and sets up the process monitoring. It then builds a machine image, and then we launch new servers.


Of course, you lose the "identical image" as soon as you build again or run the Dockerfile.


Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing. Throw in the use of some "Container" semantics and people become convinced it's that easy (and secure) to abstract away the containers from the kernel.

But it's not, at least in my experience; not to mention that, as of now, anything running Docker in production (probably a bad idea) is wide open to the OpenSSL security flaw in versions 1.0.1 and 1.0.2, despite knowledge of this issue being out there for at least a few days.

Docker's currently "open" issue on github: https://github.com/docker/compose/issues/1601

Other references: https://mta.openssl.org/pipermail/openssl-announce/2015-July... http://blog.valbonne-consulting.com/2015/04/14/as-a-goat-im-...


> Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing.

That's a rather embittered perspective, ironic considering how new Linux is in the grand scheme of things. A more germane perspective is that Docker is a new tool which acknowledges that UX matters even for system tools.


I'm not sure I agree that the UX of docker is all that great, if you're implying that docker is somehow more intuitive or easier to understand. The sheer amount of confusion from so many people out there about what docker does is evidence of that.


You're conflating the fundamental technicality and irreducible complexity of the issues that Docker is attempting to solve with the UI they are wrapping around it. It doesn't matter if you have Steve Jobs himself risen from the grave, there is no way to make Docker be as easy to understand and use as Instagram. That doesn't mean it's not a step-change if you consider the UX of Docker vs raw LXC.


I'm only pointing out that docker has some pretty big UX problems as a whole (whether or not they can reduce the complexity enough to actually solve them is another thing). I wouldn't bill docker's UX as its killer feature.


Docker's UX is not a feature, it's just part of its DNA that assisted in gaining rapid mindshare, sort of like Vagrant before it, by being more approachable than older tools like OpenVZ and VirtualBox respectively.


>Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing.

Well, that's what I personally hoped. Then you run into problems, distro-specific problems, and find yourself unable to deal with them without actually becoming great at Linux under a deadline. Docker can actually introduce tremendous complexity at both the Linux and application level, because you have to understand how an image was prepared in order to use it, configure it, etc. (Of course, a big part of the problem is that there's no way that I know of to interactively examine the filesystem of an image without actually running the image and accessing the FS from the tools that the image itself runs. This has to be some sort of enormous oversight, either on my part or on Docker's.)


This is what happens when people confuse something which reduces complexity with something which moves complexity. It's important to note that it can move complexity, if you can set up the container host environment in such a way as to allow it. At which point the complexity normally associated with OS management and systems administration can largely be moved into build-process/software-developer complexity.

The number of tools which one would suggest you use along with Docker are a reflection of this, and are additional layers to try to provide further movement of host complexity up into a software controllable level (Consul, etcd, yada yada).

The whole ecosystem plays well with "cloud" hosts, because their systems people have taken the appropriate steps in creating that host architecture and complexity (which is not gone) for you.

As someone else stated well, it is the modern static linking. I have no idea why people would ever have done "build, test, build, deploy" - that sort of insanity should have been obviously wrong. However, "build, test, deploy" does not depend on the static-ness of everything related to the build, but on compatibility of environment between "test" and "deploy". Those who didn't invest enough time in making sure these environments were always in sync have, I think, found a way to wipe the slate clean and use this to catch up to that requirement.


I'm sure this is not the answer you are looking for, but you can 'docker export' a container to a tar file and examine your image file that way.
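Concretely, something along these lines (the image name is hypothetical):

    docker run --name tmp myimage echo ok    # the noop run that creates a container
    docker export tmp | tar tvf -            # inspect the flattened filesystem
    docker rm tmp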

(1) You're exporting a container, not an image, so if you wanted to export your image, deploy it to a container first. Run echo or some other noop if you need to.

(2) This is similar to how git operates. You wouldn't want to examine your git commits interactively (assuming that means the ability to change them in place); well, if you did, git has --amend, but no such thing exists in Docker.

An image with a given id is supposed to be permanent and unchanging, containers change and can be re-committed, but images don't change. They just have children.

It can get hairy when you reach the image layer limit, because using up the last allowed image layer means you can't deploy to a container anymore. So, how do you export the image? 'docker save' -- but 'docker save' exports the image and all of its parent layers separately, so you need to flatten it yourself.

I once wrote this horrible script[1] whose only purpose was unrolling this mess, since the latest image had the important state that I wanted in it, but I needed the whole image -- so, untar them all in reverse order and then you have the latest everything in a single directory that represents your image filesystem.

The horror of this script leads me to believe this is an oversight as well, but a wise docker guru probably once said "your first mistake was keeping any state in your container at all."

[1]: https://raw.githubusercontent.com/yebyen/urbinit/del/doit.sh


Given stupid hacks, like "Run echo or some other noop if you need to" to go from an image to a container, and 'docker commit' to go back from a container to an image, the distinction between a docker image and docker container seems a bit academic and a bit of poor UX rather than anything else.


Not really; containers are disposable and images (at least tags) are somewhat less disposable. Containers are singular, malleable units and represent a process's running state; images are atomic, composable, and inert -- basically packages.

You wouldn't say that the difference between a live database and its compiled binaries is academic, would you?

I agree that it would make more sense if you could dump the image to a flat file with a single verb. I also think docker needs an interface to stop a pull in progress that has stalled or is no longer needed. These are academic concerns; you can submit a pull request.


Docker and docker-compose are -not- the same thing. That does not indicate a security flaw of any sort in docker containers themselves.


Fine, I'll bite: what non-cute tool do big boys who are "great at Linux" and do know what they're doing use?


In my experience (as one of those "big boys"), it's usually more traditional virtualization, typically on top of a bare-metal hypervisor like Xen (nowadays via Amazon EC2, though there are plenty of bigger companies that run their own Xen hosts), ESXi, SmartOS, or something similar. Even more recent is the use of "operating systems" dedicated to a particular language or runtime; Ling (Erlang on Xen) is an excellent example of this.

On one hand, this tends to offer a slightly stronger assurance against Linux-level security faults while also enabling the use of non-Linux stacks (such as BSD or Solaris or - God forbid - Windows, along with just-enough-OS (or no OS whatsoever)). Proper virtualization like this offers another layer of security, and it's generally perceived to be a stronger one.

On the other hand, the security benefits provided on an OS level (since now even an OS-level compromise won't affect the security of the whole system, at least not immediately) are now shunted over to the hypervisor. Additionally, the fuller virtualization incurs a slight performance penalty in some cases, and certainly includes the overhead of running the VM.

On the third hand, bare-metal hypervisors tend to be very similar to microkernels in terms of technical simplicity and compactness, thus gaining many of the inherent security/auditing advantages of a microkernel over a monolithic kernel. Additionally, in many (arguably most) environments, the slight degradation of performance (which isn't even guaranteed, mind you) is often much more tolerable than the risk of an OS-level bug compromising whole hosts, even if the risk of hypervisor-level bugs still exists.


It depends on what you want to do, of course, but the standard tools for software packaging are deb and rpm.

The management tools are fairly decent, and questions like "which CVEs are we vulnerable to in our production environment" or "where are we still using Java 6" shouldn't be more than a keypress away.

Neither deb/rpm nor containers are an excuse for not using configuration management tools however. Don't believe anyone who says so.


Docker > Chef


1. Stateless servers. Put your code and configuration in git repos, then mount them as volumes in your docker container. The absolute star feature of docker is being able to mount a file from the host to the container.

You can tear down the host server, then recreate it with not much more than a `git clone` and `docker run`.

2. Precise test environment. I can mirror my entire production environment onto my laptop. No internet connection required! You can be on a train, on a plane, on the beach, in a log cabin in the woods, and have a complete testing environment available.

Docker is not a security technology. You still need to run each service on a separate host kernel, if you want them to be properly isolated.


>The absolute star feature of docker is being able to mount a file from the host to the container.

This is a simple bind-mount and isn't special at all.

    mount("/foo", "/container/foo", "none", MS_BIND);
Also, virtual machines have had things like 9p that allow the same thing.


I don't think there is enough RAM in my laptop to run five VMs simultaneously :)


Yeah, containers are much slimmer.


I'm stunned that nobody has brought up the idea of 'immutable architecture' -- the idea that you create an image and deploy it, and then there is no change of state after it's deployed. If you want a change to that environment, you create a new image and deploy that instead.

Docker gives you the ability to version your architecture and 'roll back' to a previous version of a container.
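A rough sketch of what that looks like in practice (registry and tag names are hypothetical):

    docker pull registry.example.com/myapp:2.0
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:2.0
    # rolling back is the same commands with the previous tag, e.g. :1.9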


In the full-OS VM world (KVM, VMware, etc.) this is known as disk snapshotting. Another way to look at it is putting an RDBMS in full-recovery mode, so the database itself remains the same and replaying logs is required to get the data's true state.

You shut down a VM and instruct the hypervisor system to take a "snapshot" which locks the original VHD file and creates a new one. When writes happen, they're performed on the new VHD, and reads have to use both the main and the snapshot VHD. And you can create a chain of snapshots, each pointing to the previous snapshot, for versioning. Or you can have several VM snapshots use the same master VHD, like for CI or data deduplication.

To roll back, it's usually as simple as shutting down the VM and removing the snapshot file.


Nobody is mentioning it because VMs have already done this for more than a decade.


This isn't true.

The way VMs handle this doesn't carry the same semantics as the way you can with Docker. There's a finer-grained composability with Docker that is much more awkward with VMs.

Docker may not be as great as a virtualization tool as VMs -- security concerns, complexity, etc. -- but it is a much better package management tool.


This means that you have to roll out a (potentially huge) new blob each time you want to make even small config changes.

You get most of the benefits of immutable builds anyhow by having scripts which can reliably set up servers from scratch on the fly.


This is where the layers come in useful, pulling a small change should only require pulling a small new layer.


The shell glue and apt magic I wrote for my personal servers do that too... And people have been doing that for decades, just not as often, because servers used to be expensive.


Yep, I haven't specifically mentioned it, but check my top-level reply to this thread. Clarify.io practices immutable architecture to a T, and benefits greatly from it.


Oops, seems I commented the same thing before seeing your comment.


Some key points:

- Docker is nothing new - it's a packaging of pre-existing technologies (cgroups, namespaces, AUFS) into a single place

- Docker has traction, ecosystem, community and support from big vendors

- Docker is _very_ fast and lightweight compared to VMs in terms of provisioning, memory usage, cpu usage and disk space

- Docker abstracts applications, not machines, which is good enough for many purposes

Some of these make a big difference in some contexts. I went to a talk where someone argued that Docker was 'just a packaging tool'. A sound argument, but packaging is a big deal!

Another common saw is "I can do this with a VM". Well, yes you can, but try spinning up 100 VMs in a minute and see how your MacBook Air performs.


What is important to note is that the ideal use case for Docker is not actually speed - it's about quickly responding to increased load. If you need to vary the number of, say, web servers from 5 to 100 over the day, then Docker is good because you can start them up very quickly.

The downside of docker is that it takes longer to set up in the first place, it is harder to keep secure, and it runs slower than a traditionally deployed application; compared to VM-deployed applications, though, the performance is usually better.
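Adding capacity to a stateless tier is then roughly this (hypothetical image name; -P publishes the image's exposed ports on random host ports behind your load balancer):

    for i in $(seq 1 20); do
        docker run -d -P mywebapp
    done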


Docker is indeed fast and lightweight. It's amazing how much CPU power is freed up by not running a full VM in VirtualBox. That said, I'm wary of running it in production.


In my experience, the performance gains are much less significant v. a virtual machine running on a proper bare-metal hypervisor (like Xen).


Why? Security?


Yep. I don't understand the security implications well enough to guard against them. As much as I like cutting edge tech I prefer to not actually cut myself with it!


PM if you want to talk more.


Yes, packaging is a big deal!


Docker is mainly an app packaging mechanism of sorts. Just like you would build jars, wars, rpms, etc., you create docker images for your applications. The advantage you get is that you can package all your dependencies in the container, thereby making your application self-contained; and using the tools provided by docker in combination with swarm, compose, etc. makes deployment and scaling of your apps easier.

OpenVZ, LXC, Solaris Zones, and BSD jails, on the other hand, mainly run a complete OS, and the focus is quite different from packaging your applications and deployments.

You can also have a look at this blog post, which explains the differences in more detail: http://blog.risingstack.com/operating-system-containers-vs-a...


Docker uses the same kernel primitives as other container systems. But it tied together cgroups, namespaces and stackable filesystems into a simple cohesive model.

Add in image registries and a decent CLI and the developer ergonomics are outstanding.

Technologies only attract buzz when they're accessible to mainstream developers on a mainstream platform. The web didn't matter until it was on Windows. Virtualization was irrelevant until it reached x86, containerization was irrelevant until it reached Linux.

Disclaimer: I work for a company, Pivotal, which has a more-than-passing interest in containers. I did a presentation on the history which you might find interesting: http://livestre.am/54NLn


As an ops guy, I would also mention the benefits of the Dockerfile and docker-compose.yml, which can be clear sources of information about how the system is built, and which in most circumstances will build the same system in dev as in prod. By changing a docker tag in the configuration management, I can roll out a new version quite conveniently to staging and eventually to production.

The potential minimalism of a container is also an important concept to mention, with fast startup times and fewer services that could potentially be vulnerable.


Agreed.

Application runtime dependencies are a common source of communication breakdowns between development and infrastructure teams. Making the application container a maintained build file on the project improves this communication.

docker provides:

* a standard format for building container images (the Dockerfile)

* a series of optimizations for working with images and containers (immutable architecture etc).

* a community of pre-built images

* a marketplace of hosting providers

All at the cost of being Linux-only, which is OK for many shops.


I think the main thing is to provide an abstraction for applications so that they aren't tightly coupled to the operating system of the server that's hosting them. That's a big deal.

Some people have mentioned security...patching in particular. Containers won't help if you don't have patching down. At the very least it lets you patch in the lab and easily promote the entire application into production.

I think the security arguments are a canard. By making it easier and faster to deploy, you should be able to patch app dependencies as well. I, for one, would automate the download and install of the latest version of all libs in a container as part of the app build process. Hell, build them all from source.
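e.g., forcing a clean rebuild so the latest packages get pulled back in, assuming the Dockerfile runs the usual package-manager update steps (the image name and tag scheme are hypothetical):

    docker build --no-cache -t myapp:$(date +%Y%m%d) .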

IT departments need to be able to easily move applications around instead of the crazy build docs that have been thrown over the wall for years.


It's a tool to make over-engineering every project even easier! All joking aside, it is a good tool for some teams to make sure the exact same code that was tested is running in production. I don't think it is for everyone, and it can make things much more complicated than they need to be. I also don't think everything needs to be in a docker.


Docker is the industry-accepted standard to run web applications as root.


It's unfortunate that Docker still doesn't use user namespaces.


No; they are new and have had many security issues. Just don't run your containers as root; you can use capabilities if you like.


But certain namespaces cannot be created without CAP_SYS_ADMIN. Sure, you can drop privileges later, but a privilege escalation exploit in the container gives the attacker root access outside of that container, too. Sure, user namespaces have had issues, but they seem a hell of a lot safer than no isolation at all. Furthermore, user namespaces allow unprivileged users to create containers, too, which is particularly exciting.


I like this presentation, as it shows what Docker really is, and also how to use Docker without Docker: https://chimeracoder.github.io/docker-without-docker/#1


The most common pro is "build once, deploy everywhere". Even if that's possible, I always feel that pushing a 500 MB tar image to the production servers is more of an annoyance than a help. Yes, you can set up your own registry, but maintaining the service, securing it, adding user permissions, and maybe using a proper backend like S3 is an extra annoying layer and another component that could fail.

If the docker tool had something like `docker serve` that started its own local registry, it would be more than great.

For this case, switching to Go was a great solution; building the binary is all you need.

As for docker being helpful for development: definitely yes. I switched to postgres, elasticsearch, and redis containers instead of installing them on my computer; it's easy to flush and restart, and having different versions of services is also more manageable.


I know you have some other questions that I am not qualified to answer, but I recalled seeing a similar question asked not that long ago.

https://news.ycombinator.com/item?id=9805056


To create a buzzword to attract investors' money. It is professional brand management at work.


You're coming at this from the wrong direction, namely virtualization.

What differentiates Docker is not virtualization, so much as package management. Docker is a package management tool that happens to allow you to execute the content of the package with some sort of isolation.

Further, when you look at it from that angle, you start seeing its flaws as well as its potential. It's no accident that Rocket and the Open Container Project are arising to standardize the container format. Other, less well-known efforts include being able to distribute the containers themselves in a p2p distribution system, such as IPFS.


Somehow this wasn't well explained, leading to a really persistent misunderstanding that comes up any time Docker is mentioned; you can see in this very thread someone claiming to be an expert with Linux and saying that Docker is no good because it doesn't sufficiently abstract away from the kernel, as if that were its purpose. There is always someone on hand to claim that Docker is nothing more than cgroups, again, as if the packaging part of this didn't even exist.


Fair enough!

I ran through the same thing too. I used to work for Opscode. I joined them because I like the idea of "infrastructure-as-code." I remember when Docker came around, I was scratching my head. There was a part of me that thought it has something, and another part that was thinking, why would anyone want to use this? Wouldn't this set us back to the time when infrastructure is not code? I couldn't put my finger on it. And what's really funny is that the "container" metaphor explains this well -- and I had spent time reading up on the history of physical, intermodal containers and how they changed our global economy to boot. The primary point of intermodal containers isn't that it isolates goods from one merchant from another; it is that there is a standard size for containers that can be stacked in predictable ways, and moved from ship to train to truck quickly and efficiently. You are no longer loading and unloading pallets and individual goods; you are moving containers around. Package management. A lot of logistics companies at the time didn't get this either.

Most of the literature out there explains Docker as virtualization, or some confused mish-mash of "lightweight virtualization", or "being able to the move containers from one machine to the other." They pretty much circle around the central point of package management without nailing that jelly to the wall.


For what it's worth, we use this metaphor a lot, along with the same wording in pretty much every pitch we do, both public and private.

What I find interesting about Docker is that different people get excited about different aspects of it.

One of the major reasons I love working at the company - I get to watch them have the same feeling I did over 2 years ago: the feeling that Docker can help with something they find painful in their daily work.


Thanks for sharing. I remember the pictures of intermodal containers for explaining this.

Sadly, I also see writeups that focus too much on the virtualization aspect. The journalists are searching for something to compare it to, so Docker gets compared to other virtualization and resource isolation tools.

Growing pains, I suppose?


The media often looks for conflict; the easiest target is Docker vs. VMWare. But as you correctly point out, we don't really see it that way.


I was wading through the confusion in this thread looking for this answer. The big benefit for me is that it is not virtualized. I can repurpose physical machines in my cluster quickly, by spinning up a different container. I don't want to run three VMs on one box, I want to run the same kind of node on thirty boxes. With Docker + Kubernetes (or Mesos), I can then switch these to another node type incredibly quickly. I can run the same container on AWS, VMWare, GCE, or Azure.

I guess most people don't need to do this...


I haven't needed that yet (not at the scale I am using), but that is exactly the use-case I am looking forward to.

I have a lot of interest in seeing our current devops tooling converging with AI/MI. I don't see that happening without something like Docker.


For me I don't understand the purpose at all. I have a few node.js and PHP services. Why do I need isolation and have them in containers? If I want an identical environment when developing I can use Vagrant.


FYI: Vagrant can use docker, rendering your argument invalid :)

http://docs.vagrantup.com/v2/provisioning/docker.html

Docker is about running isolated environments in reproducible ways. I get a container working just so on my desktop, ship it to an internal registry, where it gets pulled to run on dev and QA. It works identically to how it works on my desktop, then I ship it to production. One image that works the same in all environments. That is what docker is for: developer productivity.


The description on HN the other day of Docker as a souped-up static linking system is the most interesting one.


OpenVZ or LXC give you OS containers, like KVM or VMware give you virtual machines. Unlike OpenVZ, LXC does not need a custom kernel and is supported in the mainline Linux kernel, paving the way for widespread adoption.

Docker took the LXC OS container template as a base, modified the container's init to run a single app, built the OS file system with layers (aufs, overlayfs), and disabled storage persistence. And this is the app container.

This is an opinionated use case of containers that adds significant complexity; it's more a way to deploy app instances in a PaaS-centric scenario.

A lot of confusion around containers comes from the absence of informed discussion of the merits and demerits of this approach, and of the understanding that you have easy-to-use OS containers like LXC that are perfectly usable by end users, just as VMs are, and then app containers that do a few more things on top of this.

You don't need to adopt Docker to get the benefits of containers, you adopt Docker to get the benefits of docker and often this distinction is not made.

A lot of users whose first introduction to containers is Docker tend to conflate Docker with containers and, thanks to some 'inaccurate' messaging from the Docker ecosystem, think LXC is 'low level' or 'difficult' to use. Why would anyone try LXC if they think it's low level or difficult to use? But those who do will be pleasantly surprised by how simple and straightforward it is.

For those who want to understand containers, without too much fuss, we have tried to provide a short overview in a single page in the link below.

https://www.flockport.com/containers-minus-the-hype

Disclosure - I run flockport.com that provides an app store based on LXC containers and tons of tutorials and guides on containers, that can hopefully promote more informed discussion.


Docker does not use LXC as the default execution environment anymore. They created their own, called libcontainer¹. But with the new opencontainers movement, the package has been moved to runc².

[1] https://github.com/docker/libcontainer

[2] https://github.com/opencontainers/runc


That's why I used the word "took": Docker used LXC as a base until version 0.9, until it got enough traction, at which point it basically recreated LXC with libcontainer.

But that was not the point. The point is that you have always had perfectly usable end-user containers from the LXC project, even before Docker. Then a VC-funded company, Docker, bases itself on LXC, markets itself aggressively, and suddenly a lot of users think LXC is 'low level' or 'difficult to use'? This messaging is coming from the Docker ecosystem, and the result is the user confusion we see on most container threads here.

Informed discussion means people know what OS containers are, what value they deliver, and what Docker adds on top of OS containers so there is less confusion and FUD, and users can make informed decisions without a monoculture being pushed by aggressive marketing.

But that discussion cannot happen if you are in a hurry to 'own' the container story and cannot clearly acknowledge that alternatives exist and what value exactly you are adding on top. I see people struggling with single-app containers, layers, and lack of storage persistence when they are simply looking to run a container as a lightweight VM.

The 'open container movement' is yet one more attempt to 'own' container technology and perpetuate the conflation of Docker with containers. How can an 'open container movement' exclude the LXC project that is responsible for the development of much of the container technology available today? It should ideally be called 'Open App Container', because there is a huge difference between app containers and OS containers. OS containers provide capabilities and deployment flexibility that app containers simply cannot give, because app containers are a restrictive use case of OS containers. Container technology as a whole cannot be reduced to a single PaaS-centric use case.


Docker is a way to create immutable infrastructure, which is a key component of a) having software work the same in test and prod (hint: DevOps) and b) creating servers that can scale both vertically and horizontally.

I think that's the best way I can summarise what Docker _is_.
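In practice 'immutable' tends to mean you replace containers rather than modify them; a rough sketch (the image name and tags are hypothetical):

    docker pull registry.example.com/myapp:v2    # a new, immutable build artifact
    docker stop myapp && docker rm myapp         # throw away the old instance...
    docker run -d --name myapp -p 80:8080 registry.example.com/myapp:v2   # ...and start the new one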


I don't know much about virtualization technology, but Docker is nice for me because it's an accessible, well-known, and rather easy way to make applications easy and straightforward to run.

Where I've worked in the past, setting up a new development or production environment has been difficult and relied on half-documented steps, semi-maintained shell scripts, and so on. With a simple setup of a Dockerfile and a Makefile, projects can be booted by installing one program (Docker) and running "make".
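The make target in that setup is usually just a thin wrapper around a couple of docker commands, something like (the image name and port are made up):

    docker build -t myorg/myproject .                 # uses the Dockerfile in the repo root
    docker run --rm -it -p 8080:8080 myorg/myproject  # run the freshly built image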

You could do that with other tools as well, but Docker, and even more so the emerging "standards" for container specification, seems like an excellent starting point.


This explains the difference between Docker and normal virtualization technology: https://www.docker.com/whatisdocker



I think one useful purpose was described by Prof. Mine Cetinkaya-Rundel of Duke at the recent UseR conference. She teaches an introductory statistics class for non-majors. Docker lets her spin up an individual container for each student with all the packages they need for the class, without all the sys-admin headaches of getting all the software onto everybody's systems. You can see her slides and her evaluation of the alternatives here:

https://github.com/mine-cetinkaya-rundel/useR-2015/blob/mast...
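A common way to do something like this is the rocker/rstudio image; roughly (the port and password here are placeholders):

    docker run -d -p 8787:8787 -e PASSWORD=changeme rocker/rstudio
    # each student then points a browser at http://server:8787 and logs in as the 'rstudio' user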


Simply put, Docker is operating system virtualization:

https://en.wikipedia.org/wiki/Operating-system-level_virtual...

Edit: formatting.


A meta critique after reading 139 comments: I too had the same question as the parent, and from the ensuing conversations I gather that either Docker is so thin-layered (not in a bad way) that it is open to many interpretations, or it is so thin-layered (in a trivial way) that one does not need to get all worked up about adopting it if one is comfortable using other options out there (like OpenVZ, for example).


I find Docker is quite good for integration tests where you need to test against a third-party bit of software. Lots of images exist on the Hub for this.
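For example, spinning up a throwaway database for a test run (a sketch; the container name, password and port mapping are arbitrary):

    docker run -d --name test-db -p 5432:5432 -e POSTGRES_PASSWORD=test postgres   # disposable Postgres for the tests
    # ...run the integration tests against localhost:5432...
    docker rm -f test-db                                                            # tear it down afterwards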


OpenVZ is not upstream in the kernel; the container stuff that got merged is what Docker uses. Docker has much wider adoption than OpenVZ does now.


> Docker has much wider adoption than OpenVZ does now.

I don't think your statement is true at this point in time. OpenVZ is used by a ton of companies in the hosting industry and by large companies such as Groupon, and smaller ones like Travis CI [1]. I wouldn't say that Docker has wider adoption than OpenVZ at this point in time. Maybe in five years, yes, it will have wider adoption than OpenVZ. OpenVZ and commercial VZ have been doing full OS containers since the early 2000s, and it has the production track record to do very well in many server applications. I wouldn't hesitate to use it over Docker in production for my future projects.

[1]: http://changelog.travis-ci.com/post/45177235333/builds-now-r...


Travis moved to Docker after that [1]. And the "hosting industry" is not what it used to be since the rise of cloud.

[1] http://blog.travis-ci.com/2014-12-17-faster-builds-with-cont...


Very cool on the Docker move by Travis. I still think Docker has a long way to go to overtake OpenVZ. Docker is gaining steam, but its adoption isn't wider than OpenVZ's. Not yet.

I agree that the hosting industry isn't what it used to be. Most of the larger hosting providers are not keeping up with current trends and deployment methods, but that is mostly because they do not need to change. Most people buying commodity hosting don't have a team of developers and operations guys to use all the new cool cloud methods like Docker.


And Docker is massively used at Groupon, so your argument isn't valid.

https://engineering.groupon.com/2014/misc/dotci-and-docker/ http://www.meetup.com/Docker-Chicago/events/220936626/

Source: I work at 600 W Chicago, the Groupon World HQ, where they frequently host Docker meetups on the 6th floor.


I never said Docker wasn't used at Groupon, and just because it is used in some cases doesn't make my point any less valid. It is going to take a lot more than a few years to overtake OpenVZ/Virtuozzo in market share when most of the commodity hosting industry uses it.


FWIW: The two companies that contributed the most to the namespace code in the upstream Linux kernel (which Docker uses) were Parallels and Google, both of which know a lot about containers at scale. For those who don't know, Parallels wrote Virtuozzo.

(from linux.git):

    $ git log --pretty=oneline --no-merges --author='@parallels.com' | wc -l
    695

I also suspect that it won't take very long for container technologies like Docker to overtake OpenVZ/Virtuozzo, if only for projects like Kubernetes, which is backed by Google and is the basis for commercial PaaS offerings such as OpenShift v3 from Red Hat. Virtuozzo is great tech, but using it for a new buildout seems like a bad idea for the foreseeable future. Docker is not as good, but will be soon, and is supported out of the box on every modern distribution.


Someone (Darren Shepherd?) compared Docker to Ajax. It's not a technological breakthrough, it's another kind of breakthrough.

I think it was here [1], but deleted now.

[1] http://ibuildthecloud.tumblr.com/post/63895248725/docker-is-...


I went ahead and blogged an answer here: http://blog.tfnico.com/2015/07/the-sweet-spot-of-docker.html

TL;DR: It's better for deploying applications and running them than using home-made scripts.



It exists to create jobs in devops.


[Disclaimer: I am the guy who has been running OpenVZ since the very beginning, and if you hate the OpenVZ name/logo, I am the one to blame. Also, take everything I say with a grain of salt -- although I know, use, like, and develop for Docker, my expertise is mostly within the OpenVZ domain, and my point of view is skewed towards OpenVZ]

Technologically, both OpenVZ and Docker are similar, i.e. they are containers -- isolated userspace instances relying on Linux kernel features such as namespaces. [Shameless plug: most of the namespaces functionality is there because of OpenVZ engineers' work on upstreaming.] Both Docker and OpenVZ have tools to set up and run containers. This is where the similarities end.

The differences are:

1 system containers vs application containers

OpenVZ containers are very much like VMs, except for the fact they are not VMs but containers, i.e. all containers on a host are running on top of one single kernel. Each OpenVZ container has everything (init, sshd, syslogd etc.) except the kernel (which is shared).

Docker containers are application containers, meaning Docker only runs a single app inside (e.g. a web server, an SQL server, etc.).
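The day-to-day difference shows up in the tooling; roughly (the container ID, OS template and image are arbitrary examples):

    # OpenVZ: a whole OS you then manage
    vzctl create 101 --ostemplate centos-7-x86_64
    vzctl start 101
    vzctl enter 101      # drops you into a booted system with init, sshd, syslogd, ...

    # Docker: one process per container
    docker run -d nginx  # the container is the nginx process; when it exits, the container stops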

2 Custom kernel vs vanilla kernel

OpenVZ currently comes with its own kernel. Ten years ago there were very few container features in the upstream kernel, so OpenVZ had to provide its own kernel, patched for container support. That support includes namespaces, resource management mechanisms (CPU scheduler, I/O scheduler, User Beancounters, two-level disk quota, etc.), virtualization of /proc and /sys, and live migration. Over ten years of work by OpenVZ kernel devs and other interested parties (such as Google and IBM), a lot of this functionality is now available in the upstream Linux kernel. That opened the way for other container orchestration tools to exist -- including Docker, LXC, LXD, CoreOS, etc. While there are many small things missing, the last big thing -- checkpointing and live migration -- was also recently implemented upstream; see the CRIU project (a subproject of OpenVZ, so another shameless plug -- it is OpenVZ who brought live migration to Docker). Still, OpenVZ comes with its own custom kernel, partly to retain backward compatibility, partly because some features are still missing from the upstream kernel. Nowadays that kernel is optional but still highly recommended.

Docker, on the other hand, runs on top of a recent upstream kernel, i.e. it does not need a custom kernel.

3 Scope

Docker has a broader scope than that of OpenVZ. OpenVZ just provides you with a way to run secure, isolated containers, manage them, tinker with resources, live migrate, snapshot, etc. But most of OpenVZ's functionality is in the kernel.

Docker has some other things in store, such as Docker Hub (a global repository of Docker images), Docker Swarm (a clustering mechanism for working with a pool of Docker servers), etc.

4 Commercial stuff

OpenVZ is the base for a commercial solution called Virtuozzo, which is not available for free but adds some more features, such as a cluster filesystem for containers, rebootless kernel upgrades, more/better tools, better container density, etc. With Docker there's no such thing. I am not saying it's good or bad, just stating the difference.

This is probably it. Now, it's not that OpenVZ and Docker are opposed to each other, in fact we work together on a few things:

1. OpenVZ developers are the authors of CRIU, P.Haul, and the CRIU integration code in Docker's libcontainer. This is the software that enables checkpoint/restore support for Docker (see the sketch after this list).

2. Docker containers can run inside OpenVZ containers (https://openvz.org/Docker_inside_CT)

3. OpenVZ devs are the authors of libct, a C library to manage containers, a proposed replacement for or addition to Docker's libcontainer. When using libct, you can use the enhanced OpenVZ kernel for Docker containers.
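A sketch of what that checkpoint/restore integration enables (assuming a recent Docker daemon with experimental features enabled and CRIU installed; the names are arbitrary):

    docker run -d --name looper busybox sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'
    docker checkpoint create looper cp1     # CRIU freezes the process tree and dumps its state to disk
    docker start --checkpoint cp1 looper    # the loop resumes counting from where it left off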

There's more to come, stay tuned.


reading while eating popcorn ( ͡° ͜ʖ ͡°)



