
> docker and openVZ aim to do the same thing.

docker is a process container, not a system container.

> docker is a glorified chroot and cgroup wrapper.

That is fairly immaterial; suffice it to say that the underlying Linux kernel tech has only recently matured enough to enable a tool like docker. I've built many containers and I never thought about them in terms of the underlying tech.

> There is also a library of prebuilt docker images (think of it as a tar of a chroot)

yes

> and a library of automated build instructions

more accurate to say there is a well-defined DSL (the Dockerfile) for defining container images.
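(For anyone unfamiliar, that DSL is the Dockerfile. A minimal sketch - the image and package names here are purely illustrative, not from any real project:)

```dockerfile
# Minimal Dockerfile sketch -- "myservice" is a hypothetical package
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends myservice
COPY myservice.conf /etc/myservice/myservice.conf
EXPOSE 8080
# one foreground process per container; no init system
CMD ["myservice", "--foreground"]
```

`docker build` turns that into an image, which is what the prebuilt-image library distributes.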

> You will hear a lot about build once, deploy anywhere. Whilst true in theory, your mileage will vary.

Have to agree; this is oversold, as most of the config lives in attached volumes and needs to be managed outside of the container.

> However if you wish to assign IP addresses to each service, docker is not really mature enough for that. Yes, it's possible, but not very nice. You're better off looking at KVM or VMware.

Have to disagree here, primarily because each service should live in its own container; docker is a process container, not a system container. Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker.
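To make that concrete, "assemble a system" just means one process per container, wired together; a rough sketch using the linking mechanism (the app image name is hypothetical, the others are official images):

```shell
# one process per container: db, app, web -- assembled, not mashed together
docker run -d --name db postgres:9.4
docker run -d --name app --link db:db myorg/myapp   # hypothetical app image
docker run -d --name web --link app:app -p 80:80 nginx
```

The app container reaches the database through the alias the link injects; nothing else is shared between them.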

> There is also no easy hot migration. So there is no real solution for HA clustering of non-HA images. (Once again possible, but not without lots of lifting; VMware provides it with a couple of clicks.)

None is required. Containers are ephemeral and generally don't need to be migrated; they are simply destroyed and started where needed. Requiring 'hot migration' in the docker universe generally means you are doing it wrong. Not to say that there is no place for that.

As a final note, all my docker hosts are KVM VMs.




edit: this sounds like I'm being petty. I apologise, I'm just typing fast.

> docker is a process container, not a system container.

Valid. However, the difference between docker images and openVZ images is the inclusion of an init system.

> Have to disagree here, primarily because each service should live in its own container; docker is a process container, not a system container. Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker.

I understand your point.

I much prefer each service having an IP that is registered in DNS. This means that I can hit up service.datacenter.company.com and get a valid service (using well-tested DNS load balancing and health checks to remove or re-order individual nodes).

It's wonderfully transparent and doesn't require special custom service-discovery code in both the client and the service. Because, like etcd, DNS has the concept of scope, you can find local instances trivially. Using DHCP you can just connect to 'servicename' and let dhcpd set your search scope for you.
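The client side of that scheme is just ordinary resolver tooling; for example (hostnames are hypothetical):

```shell
# ask DNS for whichever healthy nodes currently sit behind the service name
dig +short service.datacenter.company.com

# with a DHCP-supplied search domain of datacenter.company.com,
# the bare name resolves through the scope the resolver already has
getent hosts service
```

No agent or client library in the application - the OS resolver does the discovery.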

> None is required. Containers are ephemeral and generally don't need to be migrated; they are simply destroyed and started where needed. Requiring 'hot migration' in the docker universe generally means you are doing it wrong. Not to say that there is no place for that.

Here I have to disagree with you. For front-end type applications, ones that hold no state, you are correct.

However, for anything that requires shared state or data, it's a bad thing. Take your standard database cluster ([no]SQL or whatever) of 5 machines. You are running at 80% capacity, and one of your hosts is starting to get overloaded. You can kill a node and start up a warm node on a fresh machine.

However, now you are running at 100% capacity (five nodes at 80% is four nodes' worth of load, so the surviving four run flat out), and you now need to burn some bandwidth to bring up a node to get back to 80%. Running extra machines just to allow CPU load balancing aggrieves me.

I'm not advocating writing apps that cannot be restarted gracefully, and I'm not arguing against ephemeral containers; it's more a case of easy load balancing and disaster migration. Hot migration means that software is genuinely decoupled from the hardware.


> However, the difference between docker images and openVZ images is the inclusion of an init system.

No, it isn't. Most people don't use an init system with Docker images. However, one of the top-10 popular images uses one -- the Phusion (Passenger) base images. They make a pretty compelling argument for why you should.

None of these arguments are relevant in the big picture. Where Docker shines is the package management, not the virtualization. As a package management system it is brilliant -- though incomplete. The package management could be fully content-addressable, at which point we'll have something even more brilliant than what it is now. But it isn't, and I doubt anyone will try it until after this core concept gets adopted into the mainstream.

Ten years ago, in 2005, I heard these same types of arguments about cloud providers, the Xen hypervisor, and the AWS API. I've seen old mainframe folks rolling their eyes, saying the technology is old and this is hyped up. Of course it's hyped up; but unless you can look past the hype and your contempt, you won't see what's really there. No one is really arguing about cloud technology now, and the hold-outs are outnumbered by the majority.


> I much prefer each service having an IP that is registered in DNS. This means that I can hit up service.datacenter.company.com and get a valid service (using well-tested DNS load balancing and health checks to remove or re-order individual nodes).

There are docker-backed service management tools that will automate this for you (assign public/private DNS per service cluster, including load balancing), like Empire: https://github.com/remind101/empire


"Assemble a system out of several containers, don't mash it all up into one - most people don't seem to get this about docker."

Care to elaborate on this? Do you use the linking system described here? https://docs.docker.com/userguide/dockerlinks/

I mean, your various containers still communicate over IP, right? Just a private IP network within the host, rather than outside?

(Obviously I've never used Docker.)


The OP just means don't put everything into one container.


Yes, except each container has its own isolated network and explicitly exposes a port that linked containers can connect to. In development I think a lot of people just use --net=host so that all the containers share the host networking stack (at least, I do).
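i.e., the difference looks like this (redis is just a stand-in image, and the container names are made up):

```shell
# isolated network: publish only the one port you mean to expose
docker run -d --name cache -p 6379:6379 redis

# dev shortcut: share the host's network stack outright, no port mapping
docker run -d --name cache-dev --net=host redis
```

The first keeps the container's network namespace private except for the published port; the second throws isolation away for convenience.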



