>Docker is a cute little tool that gives people who aren't that great at Linux the illusion that they know what they're doing.

Well, that's what I personally hoped. Then you run into problems, distro-specific problems, and find yourself unable to deal with them without actually becoming great at Linux under a deadline. Docker can actually introduce tremendous complexity at both the Linux and the application level, because you have to understand how an image was prepared in order to use it, configure it, etc. (A big part of the problem is that there's no way I know of to interactively examine the filesystem of an image without actually running it and accessing the FS with whatever tools the image itself ships. This has to be an enormous oversight, either on my part or on Docker's.)




This is what happens when people confuse something that reduces complexity with something that merely moves it. It's important to note that Docker can move complexity, provided you set up the container host environment in a way that allows it. At that point, the complexity normally associated with OS management and systems administration can largely be shifted into the build process, i.e. onto the software developer.

The number of tools one is advised to use alongside Docker is a reflection of this: they are additional layers that try to push more of the host's complexity up to a software-controllable level (Consul, etcd, yada yada).

The whole ecosystem plays well with "cloud" hosts, because their systems people have already built that host architecture, and absorbed that complexity (which is not gone), for you.

As someone else stated well, it is the modern static linking. I have no idea why people would ever have done "build, test, build, deploy" - that sort of insanity should have been obviously wrong. However, "build, test, deploy" does not depend on everything in the build being static; it depends on the environments for "test" and "deploy" being compatible. Those who didn't invest enough time in keeping those environments in sync have, I think, found in Docker a way to wipe the slate clean and catch up to that requirement.


I'm sure this is not the answer you are looking for, but you can 'docker export' a container to a tar file and examine the image's filesystem that way.

(1) You're exporting a container, not an image, so if you want to export your image, deploy it to a container first. Run echo or some other no-op if you need to.
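A minimal sketch of that dance (image and container names here are hypothetical; on newer Docker versions, 'docker create' gives you a container without even running the no-op):

    # create a throwaway container from the image without starting it
    docker create --name throwaway myimage
    # dump its filesystem as a single flat tar and browse it
    docker export throwaway > myimage-fs.tar
    tar -tvf myimage-fs.tar | less
    # clean up the throwaway container
    docker rm throwaway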

(2) This is similar to how git operates. You wouldn't want to examine your git commits interactively (assuming that means the ability to change them in place); well, if you did, git has --amend, but no such thing exists in Docker.

An image with a given id is supposed to be permanent and unchanging. Containers change and can be re-committed, but images don't change; they just have children.
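A sketch of that parent/child relationship (names hypothetical, and assuming the image ships basic coreutils):

    # run a container and mutate its filesystem
    docker run --name scratchpad myimage touch /marker
    # commit the container's delta as a new child image
    docker commit scratchpad myimage:v2
    # the original layers are untouched; the new tag just sits on top
    docker history myimage:v2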

It can get hairy when you reach the image layer limit, because using up the last allowed layer means you can no longer deploy the image to a container. So how do you export the image? 'docker save' -- but 'docker save' exports the image and all of its parent layers as separate tarballs, leaving you to flatten them yourself.

I once wrote this horrible script[1] whose only purpose was unrolling this mess, since the latest layer had the important state I wanted, but I needed the whole image -- so, untar them all in reverse order, and you end up with the latest version of everything in a single directory that represents your image's filesystem.

The horror of this script leads me to believe this is an oversight as well, but a wise Docker guru probably once said "your first mistake was keeping any state in your container at all."

[1]: https://raw.githubusercontent.com/yebyen/urbinit/del/doit.sh
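The core of that unrolling looks something like this -- a sketch, assuming the newer 'docker save' layout where a manifest.json lists the layer tarballs base-first (older Dockers only gave you per-layer 'json' files with parent pointers), and ignoring whiteout files:

    # dump the image and all of its parent layers
    docker save myimage > image.tar
    mkdir -p unpacked rootfs
    tar -xf image.tar -C unpacked
    # apply each layer tarball in order, base first, so later
    # layers overwrite earlier ones (whiteouts not handled here)
    for layer in $(jq -r '.[0].Layers[]' unpacked/manifest.json); do
        tar -xf "unpacked/$layer" -C rootfs
    done
    # rootfs/ now holds the flattened image filesystem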


Given stupid hacks like "run echo or some other no-op" to go from an image to a container, and 'docker commit' to go back from a container to an image, the distinction between a Docker image and a Docker container seems a bit academic -- poor UX more than anything else.


Not really: containers are disposable, and images (at least tagged ones) are somewhat less so. Containers are singular, malleable units that represent a process's running state; images are atomic, composable, and inert -- basically packages.

You wouldn't say that the difference between a live database and its binaries -- compiled source code -- is academic, would you?

I agree that it would make more sense if you could dump an image to a flat file with a single verb. I also think Docker needs an interface for stopping an in-progress pull that has stalled or is no longer needed. These are academic concerns; you can submit a pull request.
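In the meantime you can fake the single verb by chaining the two existing ones (a sketch, with a hypothetical image name):

    # create (but don't start) a container, then export its filesystem
    docker export $(docker create myimage) > myimage-flat.tar

Note this leaves the unnamed throwaway container behind for 'docker rm' to clean up.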



