Alpine Linux 3.9.0 Released (alpinelinux.org)
205 points by _ikke_ 23 days ago | 90 comments



Great!

Especially happy to see Tesseract OCR v4.0 [0] now in the official repositories. Tesseract was the main motivation for changing my web stack to Docker a couple of weeks ago, and I had to use a separate builder image [1] in Alpine 3.8. Now it is just:

> apk add tesseract-ocr

[0] https://pkgs.alpinelinux.org/package/v3.9/community/armhf/te...

[1] https://hub.docker.com/r/inetsoftware/alpine-tesseract/
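
If you want to try it quickly, something like this should do it (the alpine:3.9 tag and the --no-cache flag are just the usual Docker/apk conventions, not taken from my actual setup):

  docker run --rm alpine:3.9 sh -c \
    "apk add --no-cache tesseract-ocr && tesseract --version"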


I'm curious to know why they switched back to openssl from libressl. Are there compatibility issues or have the issues that caused libressl to be created been addressed now in openssl?


This post[0] contains some of the reasons:

  - better upstream support from projects
  - To my understanding, several of the issues in OpenSSL
    that made us switch to libressl have been resolved
    (for example memory management)
  - libressl failed to retain compatibility with OpenSSL
  - libressl breaks ABI every 6 months, OpenSSL does not
  - FIPS support

[0]: http://lists.alpinelinux.org/alpine-devel/6308.html


Some more organic reasoning about it here: http://lists.alpinelinux.org/alpine-devel/6073.html


It sounds weird that FIPS is relevant to Alpine. They're not going through the certification process, so does it matter?


There are situations where you want to be able to say you're using a FIPS certified crypto library, but that you're not going to get the whole thing certified.

I went through that with an IBM product I was working on years ago. It involved taking out any crypto implementations already in the code (I believe we had a reference DES implementation doing something with passwords) and switching to FIPS certified TLS libs. At the end we got to say we were FIPS compliant (NOT certified), which was important for some government contract or other.


Also interesting is that the PCI SSC is now recommending FIPS 140-2. While not likely relevant to the decision in Alpine, it may be relevant downstream with regard to choosing Alpine to develop on.

PCI Security Standards Council - Secure Software Standard v1.0 (Jan 2019) https://www.pcisecuritystandards.org/document_library?catego...


Oh, interesting. That said, I've only ever taken devices through PCI-PTS, not worked on PCI-compliant software, so I have no idea how much of a departure this is from common practice.

The one thing that would concern me about widespread adoption of FIPS-certified tech is that (IIRC) FIPS 140-2 essentially forbids PFS modes (DH/DHE) on TLS, presumably for traffic audit purposes in secure government environments.


OpenSSL is the de facto industry standard.

To me this list explains why one should stick to OpenSSL.


That logic might hold for a container platform meant to run other people's code, but it's one of the best reasons not to use it in other contexts like load balancers.


My logic is that you probably want a proven, up-to-date crypto/TLS library with a 'standard' API whatever the application.


I thought Heartbleed proved this appeal to authority wrong, which is what caused libressl to exist in the first place.


OpenSSL is continuously scrutinised and has resources and pressure to fix issues.

A less ubiquitous library is not as scrutinised (so who knows what vulnerabilities lie within?) and probably doesn't have the same resources/pressure to fix things.

Forks are often political before anything else.

Fragmentation is not a good thing.


I’m not suggesting you fork it yourself, write your own, or use Joe Schmoe’s SSL library. There are other major implementations such as Amazon’s s2n that have many eyeballs on them daily.

Diversity of infrastructure components confers similar resistance as DNA diversity in the wild.

https://github.com/awslabs/s2n


Go use OpenSSL, see the absolutely horrendous API and then come back and see if you recommend it.


I'll add that I saw a number of Docker image discussions where people asked about TLS 1.3 support, and the answer was "blocked on Alpine, which is blocked on LibreSSL".
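
A quick, rough way to check whether the OpenSSL shipped in an image actually speaks TLS 1.3 (assumes the image provides openssl 1.1.1+; example.com is only a stand-in for a TLS 1.3-capable host):

  docker run --rm alpine:3.9 sh -c \
    "apk add --no-cache openssl && openssl s_client -tls1_3 -connect example.com:443 < /dev/null"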


From what I understand, libressl was a fork of openssl resulting from a bunch of people who were pissed about heartbleed. Not being so dependent on one library is good, but that doesn't necessarily mean that the alternatives are better. Openssl is still more established, and from what I understand (anecdotally), openssl is faster (an important consideration for a distro like alpine). With that said, libressl is likely more secure (which means it may be an option to go along with alpine's hardened kernel).

https://en.wikipedia.org/wiki/LibreSSL


Nice to see someone go "let's choose the faster lib over the more secure one" for a security lib. No way that could ever go wrong in the long run. Or maybe it already has.

Not commenting on Alpine's choice, just commenting on this post.


From the web site: “A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.”

I remember in 1997 when I could boot Linux off a 1.44 MB floppy and get a fully functioning Linux environment, even with network support, in a blitz. If 130 MB is considered “lean”, what happened to our Unix principles of minimalism and clean design?


They bumped up against modern expectations of plug-and-play. You'll see that it's the kernel which makes up most of it; the rest is quite small. If you wish to compile a kernel for your own hardware, you can slim that down by a _huge_ amount. I've managed to go from around 130 MB to 25 MB on one machine.
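
One way to do that tailoring, roughly, from inside a kernel source tree:

  make localmodconfig    # shrink the config to just the modules currently loaded on this machine
  make -j$(nproc)        # the resulting image and modules tree come out far smaller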


There’s a lot of neat stuff you can do if you build your own kernel.

On my old laptop I had a small EFI partition with a 30 MB “linux.efi” kernel. The initramfs that was included had busybox, wpa_supplicant, elinks, and gcc5. The idea being I could totally switch OSes without having to make boot disks or worry about having an unbootable computer. You can statically link all the modules and firmware too, which is convenient IMHO.
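
The kind of .config fragment involved looks roughly like this (a sketch; the initramfs list path and the cmdline are placeholders, not my actual values):

  CONFIG_EFI_STUB=y                                  # the kernel itself is a bootable EFI executable
  CONFIG_INITRAMFS_SOURCE="/path/to/initramfs.list"  # bake busybox, wpa_supplicant, etc. into the image
  CONFIG_CMDLINE_BOOL=y
  CONFIG_CMDLINE="root=/dev/ram0"                    # no separate bootloader config needed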


OpenWrt still fits on routers with 4 MB of ROM space because of this approach. The x86 flavor, for example, doesn't have keyboard and mouse drivers by default.


It allows me to boot my kernel and initramfs directly from the SPI flash chip that stores my laptop's BIOS (coreboot), which is very useful.

My goal is to decrypt my disk from code stored on the motherboard (and physically write-protected), but I'm not there yet because fitting everything in 7.6 MB is not easy. I may try using a bigger chip, or use the SPI-stored kernel to check the signatures of decryption code stored on the HDD.
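
Roughly, the coreboot side of that workflow looks like this (a sketch; the file names are placeholders, and whether it all fits depends on the chip):

  cbfstool coreboot.rom print                    # check how much CBFS space is left
  cbfstool coreboot.rom add-payload -f bzImage -n fallback/payload -C "root=/dev/ram0"
  flashrom -p internal -w coreboot.rom           # reflash the SPI chip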


Why not just take the more conventional approach of having a small stage-0 non-Linux loader stored in ROM which can validate an entire kernel stored on disk? If your goals are solely around a trusted boot chain, it makes more sense to keep the trust root (initial bootloader and verification) as small and as easy to audit as possible, right? Even the Linux kernel seems like a pretty big attack surface to keep in ROM.


> I remember in 1997 when I could boot Linux off a 1.44MB floppy

But in 1997 every pointer was half the size it is today, and there were 1/1000th the number of device drivers that exist today.


> I remember in 1997 when I could boot Linux off a 1.44MB floppy

I installed Slackware in the early '90s. The kernel was on one floppy, then the barest rootfs was on a 2nd and 3rd floppy. I believe the whole install spanned 13 1.44 MB floppies. That's not so different from, say, current-day OpenWrt in overall size, which can still fit into 4 MB of ROM if needed.


Remember the letter groups? A, AP, N for networking, X for basic X11. (:

Good times.


Ah yes, Slackware disk sets. I actually got the installer on a CD-ROM from Walnut Creek but had to resort to floppies because Slack didn’t recognise my optical drive (those days it was attached via an ISA-mounted Sound Blaster card, not the motherboard itself!). The distro I was referring to above was LOAF (and tomsrtbt), Linux On A Floppy.


Let's be realistic: 130 MB is at the low end for a modern Linux. I'd dearly love to have a 1.44 MB OS too, but 8-130 MB is lean by 2010s standards.


Couldn't agree more.

Let's not forget we now have "native apps" which are actually web browsers packed with a full node.js instance and a local database, eating up several hundred megs of space for seemingly trivial tasks.


Most webpages are bigger than 1.44MB these days... I'm talking individual pages here. Though I'm not entirely sure that this is a good thing.


yeah, I want vintage core memory sized 4kB webpages


OpenBSD's installer fits on a 1.44MB floppy, and that includes network connectivity.


Do those 1.44 MB include a userspace similar to what Alpine has, and all of the Linux drivers that Alpine covers? Or are those downloaded via the internet?


miniroot.fs is >4 MB for 6.4, last time I checked...


There are multiple install options; floppy64.fs is 1.4 MB.


Did you ever use the QNX demo disk? A bootable Unix-like OS with a GUI on a single 1.44 MB floppy. I agree it's unrealistic these days, but it was an impressive statement piece.


Always wanted to try these. I think it's possible to fetch an image online.


The mouse isn't working properly, but you can emulate it in your browser (or grab the image) here: https://archive.org/details/qnxdemo_modem_v4


It would be interesting to see a tree map of the sizes in Linux: how big the drivers are, the network stack, ...

Maybe 99% of those 130 MB are the drivers; a lot were added to the kernel since 1997, I guess.


Indeed, according to [1] a bit more than half of the Linux source is drivers. But I don't know if those proportions still hold after compilation.

[1] https://unix.stackexchange.com/a/223753
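
A rough way to check where the bytes go after compilation, on a running system (paths assume a standard module/firmware install):

  du -sh /lib/modules/$(uname -r)/kernel/* | sort -h   # drivers/ usually dominates
  du -sh /lib/firmware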


The size is mostly caused by firmware. If you run in a VM, like QEMU, the size can be way smaller.


I remember that even in 1997 you needed a boot disk containing the kernel, already somewhat tailored to your system (SCSI/non-SCSI, networking) because of space constraints, plus a root disk, and the running system was pretty minimal.


I'm running a normal installation; the largest package (nearly 500 MB) is linux-vanilla. The included drivers are mostly what's hogging the space. This goes back to the plug-and-play point another person mentioned; there are a lot of supported devices.


I use it in containers (LXC through Proxmox VE), and my base install with a "functioning Linux environment even with network support in a blitz" comes in around 8 MB; since the kernel comes from the host, I simply save on that end. Works very well.


Kernel size and 64 bit code, and it's not like the rescue floppies didn't use some tricks, either.

It never got on the level of the famous QNX demo disk.


There's always TinyCore Linux.


Most of the size is caused by the Linux kernel and the firmware, so recommending another Linux distribution won't help here.


Depends on the distro. Changing compilers, compiler flags, or any dependent base libraries has a big impact on kernel size, and distros can tune kernel configs for size.

Fitting Linux on a floppy back in the day was about 80% tuning the kernel, and then playing with BusyBox size and finally things like compression or even repacking objects to align better. (It also helped if you had a floppy that could cheat its way up to 1.68 MB, a little-known hack that traded reliability for space.)
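
The modern equivalent of that tuning, sketched from a kernel source tree:

  make tinyconfig                           # start from the smallest config the kernel supports
  make menuconfig                           # re-enable only what this hardware actually needs
  scripts/config -e CC_OPTIMIZE_FOR_SIZE    # build with -Os instead of -O2
  make -j$(nproc) && ls -lh arch/x86/boot/bzImage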


Still only 12MB!


Just out of curiosity:

> Firefox is only available on x86_64 due to Rust.

Could someone explain the reasoning behind this? I’m not familiar with whatever restrictions Rust may impose.



Ah, I see. Wouldn’t have thought making Rust work would be so troublesome!

Thanks.


The issue is porting Rust to other architectures, which is not trivial. Work is being done to get there, just not in time for v3.9.


Huh? Linux on arm, i686, mips, ppc all have rustup installers. I'm not sure if that formally makes them "Tier 1" platforms but it does mean that they're pretty darn well supported by rust. Are the Alpine folks having problems because they're not using GNU libc?


IIRC, at some point rustc was not able to self-host on 32 bit platforms due to the amount of virtual memory it used. I don't know if it has been fixed yet, but I can imagine that this would be an issue if you want your entire package repository to be self-hosting.


The challenge might be bootstrapping. rustc makes a habit of capitalizing on just-introduced features, and they only require that you have the prior release. So to build rustc 1.x, you have to build 1.(x-1) first.

Also -- packaging. rustup might be available but the distros generally prefer their own native packaging system.


When bootstrapping a new arch, you usually cross-compile to get it working. This means that you don't need to build all of those builds. If you want to start a new bootstrap chain, which is what most distros have done, you do this once and then go from there as new releases come out.
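
Schematically, that cross-compiled bootstrap looks something like this from a rust source tree (the target triple here is just an example, and the exact x.py flags vary between releases):

  rustup target add armv7-unknown-linux-musleabihf     # std for the new target, on the host
  ./x.py build --host armv7-unknown-linux-musleabihf   # produce a rustc that runs on that target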


Yeah I'm somewhat familiar with getting rust bootstrapped, I still have the 15/16/17 rust toolchains built for mipsel kicking around. Worst part of that was targeting wheezy though.

It just seems odd that these architectures seem to have official rust support but are being problematic on Alpine. Self-hosting might be a sticking point, but seems a bit of a silly one given how slow some of those architectures are.


Yes, that most likely plays a role. I wasn't too sure, so I left it out of my answer.


How suitable is Alpine as a desktop distribution? It seems like a low-GNU distribution with an emphasis on static linkage. Is that a correct assessment? I'm very happy with Arch, I stopped my distro-hopping six years ago when I landed on it, but I worry that I'm getting complacent because Arch is just so easy to use.


I probably wouldn't use it for desktops. The AUR (and all the packages it offers) is one of the best features of Arch and very convenient for desktop users. Not something I'd like to give up on my desktop, and it's dependent on GNU stuff to work.

However, it's great for small servers. I use Raspberry Pis and old computers to serve applications at home, and I just switched them all to Alpine. Perfect for that use case; orders of magnitude better than something like Ubuntu Server. I would highly recommend it for servers.

Quick edit before anyone gets offended: yeah, it could be used for desktop, I just wouldn't recommend it. Especially coming from Arch, the AUR represents a massive repository of software. It will likely be a while before Alpine is in a similar position, especially if they want to stick to their musl static-link ethos. It's a lot easier to deal with compiling and possibly minor porting yourself for a single-purpose server box (only needs a few apps) vs a multi-purpose general box (needs many).


Recently I switched my little server from Ubuntu to Alpine. I'm now thinking of switching back. I miss systemd, and Ubuntu's repos have so much more stuff.


Not answering your question directly, but I found that, for my needs, desktop support for Wayland is important (because I use AMD graphics cards without a fan, as I cannot cope with noise well).

And since manufacturer driver support for AMD is not there (my cards are over 5 years old), I found that the speed of Wayland is really good.

So I 'standardized' on Fedora and just upgraded from 27 to 29 in one shot, using their upgrade plugin, with zero problems (had to uninstall like 3 packages and put them back).


I ran two Arch boxes for about 7 years. Two years ago I got an X200 with libreboot and did not want systemd on it. I chose Alpine because it allows you to use musl libc; this was after fighting a bug in the Gentoo build process. The idea is to understand exactly how my system boots up, so I can cut as much cruft from it as possible, and also for my system to boot up in the same order every time. In my experience maintaining several computers with systemd, the lack of a deterministic order causes a lot of problems when chasing down bugs (if the bugs are nonexistent half the time because the services boot up in almost-random order, how the hell can you debug it?!).

In my experience, it's extremely viable. Most things you can get away with pulling from the Alpine repos; everything else you can stick in a chroot (musl libc causes some compatibility problems) or compile from source (which is usually quicker than you'd think, except when C++ gets involved).

It's so much more stable than any other modern Linux system I have run. Someone I know has a twin X230 with Manjaro on it, and the boot-up time is so long. I managed to get my Alpine box to 40 seconds to GUI, but was limited by DHCP resolution and the pre-boot flash-ups. Manjaro takes about three or four minutes to get to the login screen, and then another minute or two to load the GUI.

My much more powerful Arch box has about the same boot-up times as the X230, even though theoretically I am using lighter technologies. Something I have noticed as well is that because systemd runs boot items concurrently, it actually ends up with a less deterministic boot. A lot of the time it simply fails to resolve wifi on boot (leading to a several-minute hang), and the systemd logs and dmesg show absolutely nothing at fault.


From what I've seen in the 20 minutes I tried it, it's probably best suited for ridiculously low-footprint, CLI-only installs. My favorite low-fat distro for desktop use on constrained x86 hardware is DietPi, which, despite the name, doesn't run only on Pi boards and is functionally very close to a normal Debian, although much lighter.

This is from a DietPi x86-64 install in a VirtualBox VM, with an XFCE desktop plus an open terminal and both Firefox and LibreOffice loaded; Firefox showing the mozilla.org webpage and LibreOffice Writer an empty page. Not bad at all! Although I prefer Armbian for embedded boards, DietPi really screams on small netbooks.

  dietpi@DietPi:~$ free
                total        used        free      shared  buff/cache   available
  Mem:        2052524      464456     1112748       41452      475320     1405128
  Swap:         45052           0       45052
(beware of punch-in-the-eye colors) https://dietpi.com/#download


Well for starters, there's X, some window managers, and firefox. It really just depends on what graphical software you need.
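
A minimal desktop setup goes roughly like this (a sketch; the package names are from the community repo and may differ slightly between releases):

  setup-xorg-base                            # Alpine's helper for pulling in Xorg
  apk add i3wm xfce4-terminal firefox-esr    # or whatever WM/browser you prefer
  rc-update add dbus && rc-service dbus start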


I work on graphical software using PyQt5. I'm not afraid of building Qt myself (looks like the system Qt is still 4.8). Under Arch, and even CentOS7, building Qt is a breeze (at least if you configure out QtWebEngine), but on Alpine I'd worry about satisfying the dependencies. I might give it a spin because things just seem too easy right now :)


The only problem with using Alpine as a desktop distribution is that security updates will sometimes take months to become available, and this is only because the team working on Alpine is very small.

If this wasn't an issue, I would not use anything else on the desktop.


If security fixes can take months, then it's not suitable for any use (especially on servers).


Static linkage is a beautiful prospect, beckoning with promises of less time in dependency hell.


Although that sounds nice, dependency hell has never been a problem for me under Arch. Sometimes AUR-installed packages go out of date, but then you rebuild them and everything is OK again.


As someone who lived through the introduction of dynamic linking and threading into UNIX, it is kind of ironic to see the return to the past being celebrated with joy.


Dynamic linking was actually argued against by many of the foundational minds of UNIX, because the flaws outweigh the benefits. Do not quote me on this, but IIRC Plan 9 does not have dynamic linking for this very reason.

http://harmful.cat-v.org/software/dynamic-linking/


Plan 9 was a middle step for Inferno, the OS HNers keep forgetting about.

Dynamic linking is everywhere on Inferno, implemented by the same Rob Pike of that email thread.

Also, many seem unaware that Go has supported dynamic linking for a couple of versions already. The only thing missing is building plugin libraries on Windows.


> Plan 9 was a middle step for Inferno, the OS HNers keep forgetting about.

I am aware of Inferno.

Also, the irony in talking about "HNers" as an Other, when you yourself are, to me, a random HNer. It's like an Anon implicitly complaining about an Anon, rather amusing.

> Dynamic linking is everywhere on Inferno, implemented by the same Rob Pike of that email thread.

Sure, but (if I remember correctly) Inferno also is written on/as a virtual machine. Inferno had rather different aims compared to most "UNIX" systems, whereas Plan 9 was a unification and generalization of the "UNIX" paradigm.

Windows NT (Or was it DOS) was built, in part, off of UNIX. That doesn't mean that we should look to Windows NT as an ideal UNIX system, because the design considerations are different, and the aims of the system are different.

> Also many seem unaware that Go supports dynamic linking for a couple of versions already.

Sure, but that's not to say there wasn't a huge amount of debate around it. I believe in the end it was more or less agreed that language uptake was more important in this case, but I could be wrong. Regardless, if one is properly apprised of the debate around Go supporting dynamic linking, you can find Uriel, et al. have some solid arguments against dynamic linking.


> Switch from LibreSSL to OpenSSL

Glad that this happened. OpenSSL looks a lot better than when the entire drama started and it was quite hard to even build OpenSSL from source on alpine.


Much-awaited release. I've been running my entire application lineup on Alpine and it's been awesome! I wish more cloud/hosting providers had default Alpine images. I've been using Alpine's APK packaging system to manage software builds and release cycles.

So, for those who are curious: my CI builds the software into packages, automatically versioning them and marking the build versions, stores the packages in my GCS bucket, and then automatically runs apk add --upgrade on my package. All orchestrated with Terraform and LXD, no Docker/Kubernetes involved whatsoever. Now there is also apk-autoupdate, which I look forward to exploring to see how it can simplify my build process.
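
On the node side the flow is roughly this (the bucket URL, key, and package names here are placeholders, not my real ones):

  cp mykey.rsa.pub /etc/apk/keys/           # trust the key the CI signs the APKINDEX with
  echo "https://storage.googleapis.com/my-apk-repo/v3.9/main" >> /etc/apk/repositories
  apk update
  apk add --upgrade myapp                   # pulls whatever version the CI last published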


Wonder why they use grub over the simpler syslinux option?


Alpine uses syslinux; it will use GRUB if you install with the UEFI-enabled flag.


syslinux supports UEFI, I think: https://www.syslinux.org/wiki/index.php?title=Install#UEFI

So I wonder what the rationale is for using GRUB to boot UEFI systems.


Probably my favorite distro for dockerizing apps that need more OS. 130 MB isn't exactly tiny, but it's a lot smaller than other more common options.

I'll sometimes do builds in a larger distro (Debian or Ubuntu Server), then deploy into Alpine.


Just wondering, how many are using Alpine in production? I have heard of problems with LibreSSL (no longer matters) and musl libc, but no one has come out and stated they are using it happily in production on X number of servers.


My company is using Alpine on (at least) 6 Kubernetes nodes in production. I'm happy with it so far, but we're just running a Java app (albeit high-traffic).


No Docker image yet?

https://hub.docker.com/_/alpine


Follow Glider Labs GitHub Docker Alpine issue: https://github.com/gliderlabs/docker-alpine/issues/480


This is great. I recently filed an issue to upgrade smokeping, and it's now closed. :)


Has anyone used this on an RPI? What was your use case?


I use it on an rPi and LOVE it. The system boots quickly, and unless I `lbu commit`, I just reboot to clear my changes.

I use the rPi for dnscrypt-proxy2.
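
For anyone unfamiliar, that's Alpine's diskless/run-from-RAM mode; the relevant bit, roughly:

  lbu commit    # persist the current config changes to the boot media
  reboot        # without a commit, changes simply evaporate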


Still no EFI install media?



