Well, that's what I personally hoped. Then you run into problems, distro-specific problems, and find yourself unable to deal with them without actually becoming great at Linux under a deadline. Docker can actually introduce tremendous complexity at both the Linux and application level, because you have to understand how an image was prepared in order to use it, configure it, and so on. (A big part of the problem is that there's no way I know of to interactively examine the filesystem of an image without actually running the image and accessing the FS through whatever tools the image itself ships. That has to be an enormous oversight, either on my part or on Docker's.)
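One partial workaround I've seen (a sketch, assuming an image tagged "myimage" exists locally -- the name is a placeholder): 'docker create' makes a container from an image without ever starting it, and 'docker export' streams that container's filesystem as a tar, so you can at least browse the file listing without executing anything from the image.

```shell
# Only attempt this if docker is installed and the placeholder image exists.
if command -v docker >/dev/null 2>&1 && docker image inspect myimage >/dev/null 2>&1; then
    docker create --name fs-peek myimage          # container created, never started
    docker export fs-peek | tar -t | head -50     # list the rootfs; nothing runs
    docker rm fs-peek                             # clean up the throwaway container
fi
```

It's read-only and clumsy compared to a real interactive browser, but it avoids trusting the image's own tools.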
The number of tools people suggest you use alongside Docker reflects this: they are additional layers that try to move yet more host complexity up into a software-controllable level (Consul, etcd, yada yada).
The whole ecosystem plays well with "cloud" hosts, because the providers' systems people have already built that host architecture, and handled that complexity (which has not gone away), for you.
As someone else stated well, it is the modern static linking. I have no idea why anyone would ever have done "build, test, build, deploy" -- that sort of insanity should have been obviously wrong. But "build, test, deploy" doesn't depend on everything in the build being static; it depends on the environments for "test" and "deploy" being compatible. Those who didn't invest enough time in keeping those environments in sync have, I think, found in Docker a way to wipe the slate clean and catch up to that requirement.
(1) You're exporting a container, not an image, so if you want to export your image, deploy it to a container first. Run echo or some other noop if you need to.
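The noop trick looks something like this (a sketch; "myimage" and the output filename are placeholders):

```shell
# Only attempt this if docker is installed and the placeholder image exists.
if command -v docker >/dev/null 2>&1 && docker image inspect myimage >/dev/null 2>&1; then
    docker run --name export-tmp myimage true    # 'true' is the noop; we just need a container
    docker export export-tmp > myimage-fs.tar    # flat tar of the container's rootfs
    docker rm export-tmp                         # clean up
fi
```

Note that 'docker export' gives you the flattened filesystem but throws away image metadata (ENV, CMD, layer history), which may or may not matter for your use case.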
(2) This is similar to how git operates. You wouldn't want to examine your git commits interactively (assuming that means the ability to change them in place). Well, if you did, git has --amend, but no such thing exists in Docker.
An image with a given id is supposed to be permanent and unchanging; containers change and can be re-committed, but images don't change. They just have children.
It can get hairy when you reach the image layer limit, because using up the last allowed image layer means you can't deploy to a container anymore. So how do you export the image? 'docker save' -- but 'docker save' exports the image and all of its parent layers as separate tarballs, so you have to flatten them yourself.
I once wrote a horrible script whose only purpose was unrolling this mess. The latest image had the important state I wanted, but I needed the whole filesystem -- so: untar all the layers in reverse order, and you end up with the latest version of everything in a single directory that represents your image filesystem.
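The core of that unrolling trick can be simulated without Docker at all. Here's a sketch: two tarballs stand in for two image layers (with a real image you'd pull these layer tars out of the 'docker save' archive), and extracting them base-first into one directory lets the child layer's files overwrite the parent's -- which is what untarring in reverse order achieves.

```shell
set -e
work=$(mktemp -d)
cd "$work"

# "Layer 1" (the parent): a base config file.
mkdir layer1 && echo "base" > layer1/app.conf
tar -C layer1 -cf layer1.tar app.conf

# "Layer 2" (the child): overrides the same file.
mkdir layer2 && echo "override" > layer2/app.conf
tar -C layer2 -cf layer2.tar app.conf

# Flatten: extract parent first, then child, into one rootfs.
mkdir rootfs
tar -C rootfs -xf layer1.tar
tar -C rootfs -xf layer2.tar

cat rootfs/app.conf   # the child layer's version wins
```

One caveat with real layers: deletions are recorded as whiteout files (names starting with .wh.), which a naive untar won't honor -- deleted files from parent layers will reappear in the flattened tree.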
The horror of this script leads me to believe this is an oversight as well, but a wise docker guru probably once said "your first mistake was keeping any state in your container at all."
You wouldn't say that the difference between a live database and the binaries compiled from its source code is academic, would you?
I agree that it would make more sense if you could dump an image to a flat file with a single verb. I also think Docker needs a way to stop a pull in progress that has stalled or is no longer needed. But these are academic concerns; you can submit a pull request.