Exploiting Alpine Linux (twistlock.com)
91 points by zelivans 124 days ago | 22 comments



Stuff like this is why I base my docker images off Ubuntu or Debian and regularly rebuild them. This way I always get the security updates from Debian/Ubuntu.

The problem with any image based on something other than an official distro image is that you always depend on the base image's author to update it regularly. There is no such thing as apt-get upgrade in a Docker-only environment, and I'm not really looking forward to the next Apache RCE vuln or the next OpenSSL disaster. People with long-running containers will be hit hard.
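
A sketch of how those regular rebuilds can be automated with cron (the registry, image name, and build path here are all made up):

```shell
# /etc/cron.d/rebuild-myapp (sketch): rebuild nightly so base-image
# security updates get picked up. --pull re-fetches the latest base
# image; --no-cache forces RUN steps like apt-get upgrade to re-run.
0 4 * * * root docker build --pull --no-cache -t registry.example.com/myapp:latest /srv/myapp && docker push registry.example.com/myapp:latest
```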


I don't understand your conclusion. The exploit in the article was reported to the Alpine Linux maintainers and the fix was promptly made to apk-tools (https://git.alpinelinux.org/cgit/aports/commit/?id=b849b481a...), so if you rebuilt a docker image based off Alpine you'd have gotten the security update like you described.


> so if you rebuilt a docker image based off Alpine you'd have gotten the security update like you described.

Only if I base my image directly off Alpine. Every image based on something that itself bases off Alpine (or worse, with more intermediaries in between) has a problem, because ALL images in the chain must be rebuilt.

That is the core problem.


TBH you'd have the same problem if you based your image off an intermediate Debian/Ubuntu-based image.

I always build my images directly off Alpine anyway, or a base image that I control.

(In this case though, the security fix is for the apk-tools package and not the distro itself, so as long as you have apk update+upgrade in your final build, whether the intermediate images are rebuilt doesn't matter.)
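
A minimal final-stage fragment along those lines (the intermediate image name is hypothetical):

```shell
# Dockerfile fragment: upgrade installed packages in the final stage,
# so even a stale intermediate image ends up with the patched apk-tools.
FROM some-intermediate-image-based-on-alpine
RUN apk update && apk upgrade
```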


Your best bet is really to build your base image using debootstrap or similar. The 'official' images are often a joke. For the longest time, the maintainer of the 'official' Ubuntu image had no clear association with either Docker Inc or Canonical.

edit: To clarify, the images themselves are quality, and do get generated from Canonical's rootfs tarball, but the trust path for a huge chunk of binary data now hinges on a single individual, rather than a corporate entity.
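
A rough sketch of the debootstrap route (suite and mirror are just examples; run as root on a Debian/Ubuntu host with debootstrap installed):

```shell
# Build a minimal Debian rootfs yourself, then import it as an image,
# so the trust path is your chosen mirror plus debootstrap rather than
# a hub image maintained by a third party.
debootstrap --variant=minbase stretch ./rootfs http://deb.debian.org/debian
tar -C ./rootfs -c . | docker import - local/debian:stretch
```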


> the trust path for a huge chunk of binary data now hinges on a single individual, rather than a corporate entity.

You make it sound like it was a bad thing. It's not.


It is definitely a bad thing from a risk standpoint, no two ways about it. Simply because that person could get hit by a bus, burn out, etc.


I'd trust Canonical for Ubuntu images over a random internet citizen that decided to provide them.


When it comes to base images, I'd much rather trust Canonical, Docker Inc, Redhat, etc than Some Dude.


> Stuff like this is why I base my docker images off Ubuntu or Debian and regularly rebuild them.

How can a server running Docker rebuild a new image and deploy it without your intervention? With cron?


I do it the "old fashioned" way. Manually build the images on the laptop, increment the version prior to pushing it to Nexus, and then increment the version in DC/OS.

Reminds me, I actually could automate this using Jenkins, it has an interface to Marathon (which backs DC/OS)...


If you want to automate it just use unattended upgrades for security updates within the container.
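
Roughly like this (Ubuntu shown; the package names are the stock ones, the rest is a sketch):

```shell
# Dockerfile fragment: bake unattended-upgrades into the image. Note it
# is normally triggered by apt's daily cron job, so the container needs
# cron (or an equivalent scheduler) running for upgrades to ever fire.
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y unattended-upgrades cron && \
    rm -rf /var/lib/apt/lists/*
```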


This works for sure but is ephemeral - when you have to restart the container for unrelated reasons, the upgrades are lost.


I was looking forward to reading this article (it's the second part) to see how the author would bypass ASLR, but they just disable it (by running under gdb), so their method would not work in the wild.


This seems to be a common practice in documenting exploits -- to purposefully leave out details that would make it able to cause actual damage in the wild.


Or in many cases, they leave out details that would show their proposed attack is completely impractical.


ASLR has been thoroughly broken through cache and memory subsystem timing attacks. It seems reasonable to leave breaking ASLR as an exercise for the reader in order to focus more on the core vulnerabilities being discussed.

> It is worth noting that for the sake of this exploit I assumed the attacker has knowledge of the memory layout of the executed program(3)

This is not the same as saying "this cannot be exploited with ASLR".


Ever since that paper was linked on Ars, everyone thinks ASLR is done for. ASLR was always vulnerable to cache timing attacks. ASLR cannot protect you when the attacker can arbitrarily compute on the target machine - this is not new, and it has never been the goal of ASLR. That paper was not the first to demonstrate it.

If you cannot arbitrarily compute on the target machine, ASLR is still completely useful. This is the case for tons of vulnerabilities - basically anything that isn't a web browser with JS execution privileges.


> If you can not arbitrarily compute on the target machine ASLR is still completely useful

Cache timing and other side-channel attacks can be done remotely. RSA implementations have been broken via side channels across Ethernet and even via thermal noise.

That said, the fact that ASLR could theoretically be broken remotely hardly implies that it isn't an effective mitigation.[1] If you can quantify the rate of leakage across the channel you could, for example, throttle requests to make an attack impractical. Just be careful not to underestimate the rate of bit leakage, or to forget that a few leaked bits can make a brute-force trial-and-error attack practical.
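
A back-of-the-envelope sketch of what "quantify the leakage, then throttle" looks like (every number here is an assumption, not a measurement):

```shell
#!/bin/sh
# Sketch: if each probe leaks ~0.25 bits of a 28-bit ASLR layout,
# recovering the layout takes ~112 probes; throttled to one probe per
# 10 seconds, that is ~19 minutes of sustained probing against one
# target -- long enough for monitoring to notice and block it.
entropy_bits=28
leak_centibits_per_probe=25   # 0.25 bits, scaled x100 for integer math
probe_interval_s=10
probes=$(( entropy_bits * 100 / leak_centibits_per_probe ))
seconds=$(( probes * probe_interval_s ))
echo "probes needed: $probes"
echo "attack time at the throttled rate: ${seconds}s"
```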

This is why, arguably, the new best practice in writing service daemons (see recent OpenBSD work) is to fork+exec when restarting a service, even when restarting it statefully; as opposed to just resetting internal process state. Also, daemons and daemon slaves should be periodically restarted automatically.

This can be difficult to do, especially if you want to avoid dropping existing, long-lived connections. But when writing new software it's not too bad, especially if you farm out sensitive work (e.g. key signing, complex object parsing) to subprocesses that can be spawned and reaped independently and at a faster rate.
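
The reason fork+exec helps, where a plain fork or an in-process reset doesn't, is that exec gives the new process image a freshly randomized layout. A quick illustration (Linux-only; assumes ASLR is enabled, as it is by default):

```shell
#!/bin/sh
# Each exec'd process gets its own randomized layout under ASLR. Here
# /proc/self/maps is opened by grep itself, so each invocation reports
# the stack address of a brand-new process image.
a=$(grep '\[stack\]' /proc/self/maps)
b=$(grep '\[stack\]' /proc/self/maps)
echo "run 1: $a"
echo "run 2: $b"
if [ "$a" != "$b" ]; then
    echo "stack addresses differ"
else
    echo "stack addresses identical (is ASLR disabled?)"
fi
```

A forked-but-not-exec'd child, by contrast, is a clone of its parent and keeps the parent's layout, which is why resetting state in place buys nothing here.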

[1] We shouldn't discount the distinction between targeted and untargeted attacks. Currently untargeted attacks don't typically rely on breaking ASLR via timing attacks, and even when they do we can expect the _cost_ of mass-scale timing attacks to be significantly greater. Basically, there's no reason to believe ASLR is a completely useless mitigation against remote exploits; quite the contrary. Useless for local exploits? Maybe, especially considering that the Linux and Windows kernels are so riddled with exploits.


It's not immediately clear, however, how one would conduct a cache timing attack against the package manager.


Through bad antivirus code that inadvertently runs your code in a sandbox. There have been multiple vulnerabilities like this recently.


Sounds like we should design our package managers so that they don't run hostile code in a sandbox.



