
Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing. Throw in some "container" semantics and people become convinced it's that easy (and secure) to abstract the containers away from the kernel.

But it's not, at least in my experience; not to mention that, as of now, anything running Docker in production (probably a bad idea) is wide open to the OpenSSL security flaw affecting versions 1.0.1 and 1.0.2, despite knowledge of the issue having been out there for at least a few days.

Docker's currently "open" issue on github: https://github.com/docker/compose/issues/1601

Other references: https://mta.openssl.org/pipermail/openssl-announce/2015-July... http://blog.valbonne-consulting.com/2015/04/14/as-a-goat-im-...
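For what it's worth, a quick way to check what a given docker-compose binary actually bundles (recent releases print this as part of their version output):

    docker-compose version
    # look for the "OpenSSL version: ..." line in the output to see whether
    # the bundled build is one of the affected 1.0.1/1.0.2 releases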




> Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing.

That's a rather embittered perspective, and an ironic one considering how new Linux itself is in the grand scheme of things. A more germane perspective is that Docker is a new tool which acknowledges that UX matters even for system tools.


I'm not sure I agree that the UX of docker is all that great, if you're implying that docker is somehow more intuitive or easier to understand. The sheer amount of confusion out there about what docker actually does is evidence of that.


You're conflating the fundamental technicality and irreducible complexity of the problems Docker is attempting to solve with the UI being wrapped around them. It wouldn't matter if you had Steve Jobs himself risen from the grave: there is no way to make Docker as easy to understand and use as Instagram. That doesn't mean it isn't a step-change if you compare the UX of Docker with raw LXC.


I'm only pointing out that docker has some pretty big UX problems as a whole (whether or not the underlying complexity can be reduced enough to actually solve them is another thing). I wouldn't bill docker's UX as its killer feature.


Docker's UX is not a feature; it's part of its DNA, one that helped it gain rapid mindshare, sort of like Vagrant before it, by being more approachable than older tools (OpenVZ and VirtualBox, respectively).


> Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing.

Well, that's what I personally hoped. Then you run into problems, distro-specific problems, and find yourself having to become great at Linux under a deadline anyway. Docker can actually introduce tremendous complexity at both the Linux and the application level, because you have to understand how an image was prepared in order to use it, configure it, etc. (Of course, a big part of the problem is that there's no way I know of to interactively examine the filesystem of an image without actually running the image and accessing the FS through the tools the image itself runs. This has to be some sort of enormous oversight, either on my part or on Docker's.)


This is what happens when people confuse something which reduces complexity with something which merely moves it. It's important to note that Docker can move complexity, if you set up the container host environment in a way that allows it. At that point, the complexity normally associated with OS management and systems administration can largely be moved into the build process, i.e. onto the software developer.

The number of tools one is encouraged to use alongside Docker is a reflection of this; they are additional layers that try to move yet more host complexity up into a software-controllable level (Consul, etcd, yada yada).

The whole ecosystem plays well with "cloud" hosts, because there the systems people have already taken the appropriate steps, building that host architecture and its complexity (which is not gone) for you.

As someone else put it well, it is the modern static linking. I have no idea why anyone would ever have done "build, test, build, deploy" - that sort of insanity should have been obviously wrong. However, "build, test, deploy" does not depend on the static-ness of everything in the build; it depends on compatibility of environment between "test" and "deploy". Those who didn't invest enough time in keeping those environments in sync have, I think, found a way to wipe the slate clean and use this to catch up to that requirement.


I'm sure this is not the answer you are looking for, but you can 'docker export' a container to a tar file and examine your image's filesystem that way.
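Roughly, something like this (the image and container names are just placeholders):

    # run a noop just to get a container out of the image, then export its filesystem
    docker run --name inspect-me myimage:latest /bin/true
    docker export inspect-me > rootfs.tar
    tar -tvf rootfs.tar | less    # browse the image's filesystem offline
    docker rm inspect-me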

(1) You're exporting a container, not an image, so if you wanted to export your image, deploy it to a container first. Run echo or some other noop if you need to.

(2) This is similar to how git operates. You wouldn't want to examine your git commits interactively (assuming that means the ability to change them in place). Well, if you did, git has --amend, but no such thing exists in Docker.

An image with a given ID is supposed to be permanent and unchanging; containers change and can be re-committed, but images don't change. They just have children.

It can get hairy when you reach the image layer limit, because using up the last allowed image layer means you can't deploy to a container anymore. So, how do you export the image? 'docker save' -- but 'docker save' exports the image and all of its parent layers separately, so you have to flatten them yourself.

I once wrote this horrible script[1] whose only purpose was unrolling this mess: the latest image had the important state I wanted in it, but I needed the whole filesystem -- so, untar all the layers in reverse order, and you end up with the latest version of everything in a single directory that represents your image's filesystem.
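(If your 'docker save' output includes a manifest.json -- newer versions list the layer tarballs in it from base to topmost -- the same unrolling can be sketched roughly like this, ignoring whiteout files and assuming jq is installed:)

    docker save -o myimage.tar myimage:latest
    mkdir saved rootfs
    tar -xf myimage.tar -C saved
    # apply each layer in the order the manifest lists them (base layer first)
    for layer in $(jq -r '.[0].Layers[]' saved/manifest.json); do
        tar -xf "saved/$layer" -C rootfs
    done
    # rootfs/ now approximates the flattened image filesystem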

The horror of this script leads me to believe this is an oversight as well, but a wise docker guru probably once said "your first mistake was keeping any state in your container at all."

[1]: https://raw.githubusercontent.com/yebyen/urbinit/del/doit.sh


Given stupid hacks like "run echo or some other noop if you need to" to go from an image to a container, and 'docker commit' to go back from a container to an image, the distinction between a Docker image and a Docker container seems a bit academic, and a bit of poor UX rather than anything else.
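That is, the round trip being described, roughly (the names are placeholders):

    docker run --name noop myimage:latest /bin/true   # image -> container, via a noop
    docker commit noop myimage:snapshot                # container -> image again
    docker rm noop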


Not really: containers are disposable, and images (at least tags) are somewhat less so. Containers are singular, malleable units that represent a process's running state; images are atomic, composable, and inert -- basically packages.

You wouldn't say that the difference between a live database and the compiled binaries it runs from is academic, would you?

I agree that it would make more sense if you could dump the image to a flat file with a single verb. I also think docker needs an interface to stop a pull in progress that has stalled or is no longer needed. These are academic concerns; you can submit a pull request.


Docker and docker-compose are -not- the same thing. That issue does not indicate a security flaw of any sort in docker containers themselves.


Fine, I'll bite: what non-cute tool do big boys who are "great at Linux" and do know what they're doing use?


In my experience (as one of those "big boys"), it's usually more traditional virtualization, typically on top of a bare-metal hypervisor like Xen (nowadays via Amazon EC2, though there are plenty of bigger companies that run their own Xen hosts), ESXi, SmartOS, or something similar. Even more recent is the use of "operating systems" dedicated to a particular language or runtime; Ling (Erlang on Xen) is an excellent example of this.

On one hand, this tends to offer a slightly stronger assurance against Linux-level security faults while also enabling the use of non-Linux stacks (such as BSD or Solaris or - God forbid - Windows, along with just-enough-OS (or no OS whatsoever)). Proper virtualization like this offers another layer of security, and it's generally perceived to be a stronger one.

On the other hand, the security benefits provided on an OS level (since now even an OS-level compromise won't affect the security of the whole system, at least not immediately) are now shunted over to the hypervisor. Additionally, the fuller virtualization incurs a slight performance penalty in some cases, and certainly includes the overhead of running the VM.

On the third hand, bare-metal hypervisors tend to be very similar to microkernels in terms of technical simplicity and compactness, thus gaining many of the inherent security/auditing advantages of a microkernel over a monolithic kernel. Additionally, in many (arguably most) environments, the slight degradation of performance (which isn't even guaranteed, mind you) is often much more tolerable than the risk of an OS-level bug compromising whole hosts, even if the risk of hypervisor-level bugs still exists.


It depends on what you want to do, of course, but the standard tools for software packaging are deb and rpm.

The management tools are fairly decent, and questions like "which CVEs are we vulnerable to in our production environment" or "where are we still using Java 6" shouldn't be more than a keypress away.
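For instance, roughly (exact commands vary by distro):

    # Debian/Ubuntu: is Java 6 still installed on this host?
    dpkg -l 'openjdk-6-*' 'sun-java6-*' 2>/dev/null | awk '/^ii/ {print $2, $3}'
    # RPM-based hosts (with the updateinfo/security plugin): which CVEs apply?
    yum updateinfo list cves 2>/dev/null | head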

Neither deb/rpm nor containers are an excuse for not using configuration management tools, however. Don't believe anyone who says so.


Docker > Chef



