The problem with all images that are based on something other than an official distro image is that you always depend on the base-image author to update them regularly. There is no such thing as apt-get upgrade in a Docker-only environment, and I'm not really looking forward to the next Apache RCE vuln or the next OpenSSL disaster. People with long-running containers will be hit hard.
Only if I base my image directly off Alpine. All images based off something that itself bases off Alpine (or worse, with more intermediaries in between) have a problem, because ALL images in the chain must be rebuilt.
That is the core problem.
I always build my images directly off Alpine anyway, or a base image that I control.
(In this case, though, the security fix is for the apk-tools package and not the distro itself, so as long as you have apk update+upgrade in your final build, it doesn't matter whether the intermediate images are rebuilt.)
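As a sketch, assuming an Alpine-based final stage (tag and version here are just placeholders), the update+upgrade step in a Dockerfile would look something like:

```dockerfile
# Final stage of a hypothetical image: refresh the package index and
# upgrade all installed packages, so a fix like the apk-tools one lands
# even if the intermediate base images were never rebuilt.
FROM alpine:3.19
RUN apk update && apk upgrade
```

The catch is that this only helps when the image itself is rebuilt, which loops back to the automation question below.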
edit: To clarify, the images themselves are of good quality, and they do get generated from Canonical's rootfs tarball, but the trust path for a huge chunk of binary data now hinges on a single individual rather than a corporate entity.
You make it sound like it was a bad thing. It's not.
How can a server running Docker rebuild a new image and deploy it without your intervention? With cron?
Reminds me, I actually could automate this using Jenkins, it has an interface to Marathon (which backs DC/OS)...
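For the low-tech cron route, a minimal sketch (container name, image tag, and build path are all hypothetical) on the Docker host could be:

```
# Nightly at 04:00: rebuild with --pull so the latest base image is fetched,
# then replace the running container. 'myapp' and /srv/myapp are made up.
0 4 * * * docker build --pull -t myapp:latest /srv/myapp && docker rm -f myapp; docker run -d --name myapp myapp:latest
```

Note this drops in-flight connections on every redeploy; anything fancier (health checks, rolling restarts) is exactly what an orchestrator like Marathon or Jenkins-driven pipelines buy you.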
> It is worth noting that for the sake of this exploit I assumed the attacker has knowledge of the memory layout of the executed program(3)
This is not the same as saying "this cannot be exploited with ASLR".
If you cannot arbitrarily compute on the target machine, ASLR is still completely useful. This is the case for tons of vulnerabilities: basically anything that isn't a web browser with JS execution privileges.
> If you can not arbitrarily compute data on the target
> machine ASLR is still completely useful
That said, the fact that ASLR could theoretically be broken remotely hardly implies that it isn't an effective mitigation. If you can quantify the rate of leakage across the channel, you could, for example, throttle requests to make an attack impractical. Just be careful not to underestimate the rate of bit leakage, or to forget that even a few leaked bits can make a brute-force trial-and-error attack practical.
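To make the throttling argument concrete, here is a back-of-envelope calculation. All numbers are illustrative assumptions, not measurements from any specific attack:

```python
# Illustrative assumption: a remote timing channel leaks roughly one bit
# of address-space entropy per 10,000 requests, and the target has 28 bits
# of randomization (the order of magnitude of mmap ASLR on 64-bit Linux).
bits_needed = 28
requests_per_bit = 10_000

# Unthrottled at 1,000 req/s the leak takes minutes; throttled to 10 req/s
# it takes most of a day and is far more visible in logs.
for rps in (1_000, 10):
    seconds = bits_needed * requests_per_bit / rps
    print(f"{rps:>5} req/s -> {seconds / 3600:.1f} hours")

# The flip side: leaking even a few bits shrinks the brute-force search.
# With only 8 bits left unknown, the average guess count is 2**(8-1).
print(2 ** (8 - 1))  # → 128
```

The exact constants don't matter much; the point is that the attack cost scales linearly with the throttle, while each leaked bit halves the remaining search space.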
This is why, arguably, the new best practice in writing service daemons (see recent OpenBSD work) is to fork+exec when restarting a service, even when restarting it statefully, as opposed to just resetting internal process state. Daemons and their worker processes should also be restarted automatically on a periodic basis.
This can be difficult to do, especially if you want to avoid dropping existing long-lived connections. But when writing new software it's not too bad, especially if you farm out sensitive work (e.g. key signing, complex object parsing) to subprocesses that can be spawned and reaped independently and at a faster rate.
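A minimal sketch of that farm-it-out pattern in Python, with the parsing step as a trivial stand-in (everything here is hypothetical, not any particular daemon's design): because each child is spawned via fork+exec, every request sees a freshly randomized address space.

```python
import subprocess
import sys

def parse_untrusted(blob: bytes) -> str:
    """Run a risky parsing step in a short-lived child process.

    The child is started via fork+exec, so its address-space layout is
    re-randomized on every call: probing one child teaches an attacker
    nothing about the next one, nor about the long-lived parent daemon.
    """
    worker = subprocess.run(
        [sys.executable, "-c",
         "import sys; data = sys.stdin.buffer.read(); print(len(data))"],
        input=blob, capture_output=True, check=True, timeout=10)
    return worker.stdout.decode().strip()

print(parse_untrusted(b"hello"))  # → 5
```

The per-request process spawn costs a few milliseconds, which is exactly the trade-off mentioned above: cheap enough for new designs, painful to retrofit onto a daemon built around one long-lived worker.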
We shouldn't discount the distinction between targeted and untargeted attacks. Untargeted attacks currently don't tend to rely on breaking ASLR via timing attacks, and even when they do, we can expect the _cost_ of mass-scale timing attacks to be significantly greater. Basically, there's no reason to believe ASLR is a completely useless mitigation against remote exploits; quite the contrary. Useless for local exploits? Maybe, especially considering how riddled with exploits the Linux and Windows kernels are.