
For my two cents: if your image requires anything not vanilla, you may be better off stomaching the larger Ubuntu image.

Lots of edge cases around specific libraries come up that you don't expect. I spent hours tearing my hair out trying to get Selenium and Python working on an Alpine image, when it worked out of the box on the Ubuntu image.
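For what it's worth, the Ubuntu route can look something like this (a minimal sketch, not the commenter's actual Dockerfile; the browser and driver packages, whose names vary by release, are the part that usually needs attention):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
# Selenium itself installs cleanly via pip; it's the browser and its
# matching driver (glibc-linked binaries) that tend to be the missing
# pieces on a musl-based image like Alpine.
RUN pip3 install --no-cache-dir selenium
```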




I would rather install the needed libraries myself and not have to deal with tons of security fixes of libraries I don't use.


That’s rolling your own distro. We could do that but it’s not really our job. It also prevents the libraries from being shared between images, unless you build one base layer and use it for everything in your org (third parties won’t).


Once you start adding stuff, I think Alpine gets worse. For example, there's a libgmp issue that has been present in the latest Alpine versions since November. It's fixed upstream but hasn't been pulled into Alpine.


musl's DNS stub resolver is unfortunately "broken": it doesn't fall back to TCP, which is usually a problem when you deploy into an environment with a highly dynamic DNS configuration (e.g. k8s), where responses can grow too large to fit in a UDP packet.
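To illustrate why the missing TCP fallback bites (a hypothetical sketch, not musl or Kubernetes code; the service name is made up): when a DNS answer doesn't fit in a UDP packet, the server sets the TC (truncated) bit and the resolver is expected to retry over TCP, where the same message is sent with a 2-byte length prefix. A stub resolver that never does the TCP retry silently gets an incomplete answer.

```python
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query message (A record by default)."""
    # Header: id, flags (RD=1), 1 question, 0 answers/authority/additional.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

def is_truncated(response: bytes) -> bool:
    """True if the server set the TC bit, i.e. 'retry me over TCP'."""
    flags = struct.unpack(">H", response[2:4])[0]
    return bool(flags & 0x0200)  # TC is bit 9 of the flags word

udp_msg = build_query("myservice.myns.svc.cluster.local")
# Over TCP the identical message is framed with a 2-byte length prefix:
tcp_msg = struct.pack(">H", len(udp_msg)) + udp_msg
```

A resolver that handles the TC bit reissues the query over a TCP connection; one that doesn't is stuck with whatever partial data fit in the UDP response.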


Do libraries just sitting there on disk do any damage?

Also, are you going to update those libraries as soon as a security issue arises? Debian/Ubuntu and friends have teams dedicated to that type of thing.


Can they be used somehow? Perhaps they can.

Depending on where you work, you might also need to pass some sort of image scan that looks at the versions of everything installed.


I mean honestly if you're that paranoid then you shouldn't be using Docker in the first place.


What does Docker have to do with patching security fixes? If you have an EC2 box it's going to be the same. I don't consider that paranoid.


This is not a valid comparison. You're comparing virtual machines, where you are responsible for all of the software running on the VM, with a bundled set of tarballs containing binaries you probably cannot reproduce.

Many, many vendors provide docker images but no Dockerfile. And even if you had the Dockerfile you might not have access to the environment in which it needs to be run.

Docker is successful in part because it punts library versioning and security patches and distro maintenance to a third party. Not only do you not have to worry about these things (but you should!) now you might not be able to even do anything if you wanted to.


> Docker is successful in part because it punts library versioning and security patches and distro maintenance to a third party. Not only do you not have to worry about these things (but you should!) now you might not be able to even do anything if you wanted to.

This is a very restricted view.

Besides, this article is about building your own images, not using existing ones.


I found this is not actually an "Alpine" issue but a musl issue. Lots of stuff, like locale support, does not work with musl. I do like the compact size of Alpine, but if you are not also developing with musl underneath, there seem to be lots of surprises.


True. I had a somewhat similar experience with the official Alpine-based Python images. They are supposedly leaner than the Debian-based ones, but any advantage is cancelled out if you need any PyPI packages that use native libraries. Suddenly you need to include a compiler toolchain in the image and compile the native extensions every time you build the image.
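One common workaround is a multi-stage build, so the toolchain never reaches the final image (a minimal sketch; `cffi` stands in for any package with native extensions, and the exact `-dev` packages depend on what you're compiling):

```dockerfile
# Build stage: carries the compiler toolchain.
FROM python:3.12-alpine AS build
RUN apk add --no-cache build-base libffi-dev
RUN pip install --no-cache-dir --prefix=/install cffi

# Final stage: only the compiled artifacts are copied over.
FROM python:3.12-alpine
COPY --from=build /install /usr/local
```

This keeps the final image small, though you still pay the compile time on every uncached build.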


I generally agree.

I start all my projects based on Alpine (alpine-node, for example). I'll sometimes need to install a few libraries like ImageMagick, but if that list starts to grow, I'll just use Ubuntu.
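For the short-list case, Alpine's package manager keeps things compact (a sketch; the base tag is illustrative):

```dockerfile
FROM node:20-alpine
# ImageMagick is in the Alpine repos; --no-cache avoids leaving
# the apk index behind in the layer.
RUN apk add --no-cache imagemagick
```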



