Especially happy to see Tesseract OCR v4.0 now in the mainline repository. Tesseract was the main motivation for moving my web stack to Docker a couple of weeks ago, and on Alpine 3.8 I had to use a separate builder image. Now it is just:
> apk add tesseract-ocr
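In a Dockerfile that means the whole builder-image dance collapses to something like this (a sketch; the --no-cache flag just keeps the apk index out of the layer):

    FROM alpine:3.9
    # tesseract-ocr now lives in the main repository
    RUN apk add --no-cache tesseract-ocr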
- better upstream support from projects
- To my understanding, several of the issues in OpenSSL that made us switch to LibreSSL in the first place have since been resolved (memory management, for example)
- LibreSSL failed to retain compatibility with OpenSSL
- LibreSSL breaks ABI every 6 months; OpenSSL does not
- FIPS support
I went through that with an IBM product I was working on years ago. It involved ripping out any crypto implementations already in the code (I believe we had a reference DES implementation doing something with passwords) and switching to FIPS-certified TLS libraries. In the end we got to say we were FIPS compliant (NOT certified), which was important for some government contract or other.
PCI Security Standards Council - Secure Software Standard v1.0 (Jan 2019)
The one thing that would concern me about widespread adoption of FIPS-certified tech is that (IIRC) FIPS 140-2 essentially forbids PFS modes (DH/DHE) on TLS, presumably for traffic audit purposes in secure government environments.
To me this list explains why one should stick to OpenSSL.
A less ubiquitous library is not as scrutinised (so who knows what vulnerabilities lie within?) and probably does not have the same resources, or the same pressure, to fix them.
Forks are often political before anything else.
Fragmentation is not a good thing.
Diversity among infrastructure components confers resistance in much the same way genetic diversity does in the wild.
Not commenting on Alpine's choice, just commenting on this post.
I remember in 1997 when I could boot Linux off a 1.44MB floppy and get a fully functioning Linux environment, even with network support, in a blitz. If 130MB is considered “lean”, what happened to our Unix principles of minimalism and clean design?
On my old laptop I had a small EFI partition with a 30MB “linux.efi” kernel. The included initramfs had busybox, wpa_supplicant, elinks, and gcc5. The idea being that I could switch OSes entirely without having to make boot disks or worry about ending up with an unbootable computer. You can statically link all the modules and firmware too, which is convenient IMHO.
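If anyone wants to reproduce this, the key ingredients are the EFI stub and an embedded initramfs; roughly like this (the config symbols are real, the paths are illustrative):

    # Kernel config: make the image bootable directly by UEFI firmware,
    # with the initramfs (busybox, wpa_supplicant, ...) built in:
    #   CONFIG_EFI_STUB=y
    #   CONFIG_INITRAMFS_SOURCE="initramfs/"
    make -j"$(nproc)" bzImage
    # The firmware can launch the result as-is, no bootloader needed:
    cp arch/x86/boot/bzImage /boot/efi/EFI/boot/linux.efi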
My goal is to decrypt my disk from code stored on the motherboard (and physically write-protected), but I'm not there yet because fitting everything into 7.6MB is not easy. I may try using a bigger chip, or use the SPI kernel to check the signatures of decryption code stored on the HDD.
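The signature-checking variant would look something like this sketch, using signify(1) for its small footprint (the file names and key paths are made up):

    # tiny SPI-resident init verifies a detached signature on the
    # second-stage unlock code before handing over control:
    signify -V -p /etc/keys/unlock.pub -m /mnt/hdd/unlock.sh \
        && sh /mnt/hdd/unlock.sh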
but in 1997 every pointer was half the size it is today, and there were 1/1000th the number of device drivers that exist today.
I installed Slackware in the early 90s. The kernel was on one floppy, then the barest rootfs was on a 2nd and 3rd floppy. I believe the whole install spanned 13 1.44MB floppies. That's not so different from, say, present-day OpenWrt in overall size, which can still fit into 4MB of ROM if needed.
Let's not forget we now have "native apps" which are actually web browsers packed with a full node.js instance and a local database, eating up several hundred megs of space for seemingly trivial tasks.
Maybe 99% of those 130MB are drivers; a lot have been added to the kernel since 1997, I guess.
It never got to the level of the famous QNX demo disk.
Fitting Linux on a floppy back in the day was about 80% tuning the kernel, then playing with the BusyBox size, and finally things like compression or even repacking objects to align better. (It also helped if you had a floppy that could cheat its way up to 1.68MB, a little-known hack that traded reliability for space.)
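If memory serves, the oversized format squeezed 21 sectors per track out of an 18-sector disk via the extra-capacity device nodes; a sketch from memory:

    fdformat /dev/fd0u1680               # the 1.68MB "cheat" format
    dd if=bootdisk.img of=/dev/fd0u1680 bs=512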
> Firefox is only available on x86_64 due to Rust.
Could someone explain the reasoning behind this? I’m not familiar with whatever restrictions Rust may impose.
They only have Rust packages for x86_64.
Also: packaging. rustup might be available, but distros generally prefer their own native packaging systems.
It just seems odd that these architectures have official Rust support upstream yet are proving problematic on Alpine. Self-hosting might be a sticking point, but that seems a bit of a silly one given how slow some of those architectures are.
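To illustrate the gap: upstream does ship musl targets for several of these architectures, but that is cross-compilation support, not the natively packaged, self-hosted rustc that Alpine builds its packages with (the target names below are real; the diagnosis is my reading of the situation):

    rustc --print target-list | grep musl        # plenty of musl targets upstream
    rustup target add aarch64-unknown-linux-musl
    # ...but an apk-packaged rustc currently exists only on x86_64,
    # and Firefox has to be built with the distro toolchain.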
However, it's great for small servers. I use Raspberry Pis and old computers to serve applications at home, and I just switched them all to Alpine. Perfect for that use case; orders of magnitude better than something like Ubuntu Server. I would highly recommend it for servers.
Quick edit before anyone gets all offended: yeah, it could be used for desktop, I just wouldn't recommend it. Especially coming from Arch, the AUR represents a massive repository of software. It will likely be a while before Alpine is in a similar position, especially if they want to stick to their musl static-link ethos. It's a lot easier to deal with compiling (and possibly minor porting) yourself for a single-purpose server box that only needs a few apps than for a multi-purpose general box that needs many.
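For what it's worth, the compile-it-yourself route on Alpine usually starts like this (steps past the first line obviously vary per project):

    apk add build-base      # gcc, make, musl-dev, etc.
    ./configure && make && make install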
And since manufacturer driver support for AMD is not there (my cards are over 5 years old), I found that the speed of Wayland is really good.
So I 'standardized' on Fedora and just upgraded from 27 to 29 in one shot, using their upgrade plugin, with zero problems (I had to uninstall around 3 packages and put them back).
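(The plugin in question is presumably dnf-plugin-system-upgrade; the one-shot jump goes roughly:)

    dnf install dnf-plugin-system-upgrade
    dnf system-upgrade download --releasever=29
    dnf system-upgrade reboot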
In my experience, it's extremely viable. Most things you can get away with pulling from the Alpine repos; everything else you can stick in a chroot (musl libc causes some compatibility problems) or compile from source (which is usually quicker than you'd think, except when C++ gets involved).
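The chroot trick is nothing fancy; a sketch, assuming you have a glibc rootfs tarball from some other distro lying around (paths are illustrative):

    mkdir -p /srv/glibc
    tar -xpf debian-rootfs.tar.xz -C /srv/glibc
    mount --bind /proc /srv/glibc/proc
    mount --bind /dev  /srv/glibc/dev
    chroot /srv/glibc /bin/sh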
It's so much more stable than any other modern Linux system I have run. Someone I know has a twin X230 running Manjaro, and its boot-up time is painfully long. I managed to get my Alpine box to 40 seconds to GUI, limited mostly by DHCP resolution and the pre-boot flashups. Manjaro takes about three or four minutes to get to the login screen, and then another minute or two to load the GUI.
My much more powerful Arch box has about the same boot-up times as the X230, even though I am theoretically using lighter technologies. Something I have noticed as well is that because systemd runs boot items concurrently, it actually ends up with a less deterministic boot. A lot of the time it simply fails to bring up wifi on boot (leading to a several-minute hang), and the systemd logs and dmesg show absolutely nothing at fault.
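For the systemd boxes, the standard tooling at least shows where the time goes, even when the logs are unhelpful:

    systemd-analyze time            # firmware/loader/kernel/userspace split
    systemd-analyze blame | head    # slowest units first
    systemd-analyze critical-chain  # what serialized the boot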
This is from a DietPi x86-64 install in a VirtualBox VM: XFCE desktop plus an open terminal, with both Firefox and LibreOffice loaded; Firefox showing the mozilla.org page and LibreOffice Writer an empty page.

                  total        used        free      shared  buff/cache   available
    Mem:        2052524      464456     1112748       41452      475320     1405128
    Swap:         45052           0       45052

Not bad at all! Although I prefer Armbian for embedded boards, DietPi really screams on small netbooks.
If this wasn't an issue, I would not use anything else on the desktop.
Dynamic linking is everywhere on Inferno, implemented by the same Rob Pike of that email thread.
Also, many seem unaware that Go has supported dynamic linking for a couple of releases now. The only thing missing is building plugin libraries on Windows.
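For reference, the relevant build modes (the flags are real; the workflow is a sketch with made-up package paths):

    go install -buildmode=shared std          # build the stdlib as shared libraries
    go build -linkshared -o app ./cmd/app     # link the binary against them
    go build -buildmode=plugin -o hook.so ./hook  # loadable at runtime via plugin.Open
                                                  # (this is the mode missing on Windows)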
I am aware of Inferno.
Also, the irony in talking about "HNers" as an Other, when you yourself are, to me, a random HNer. It's like an Anon implicitly complaining about an Anon, rather amusing.
> Dynamic linking is everywhere on Inferno, implemented by the same Rob Pike of that email thread.
Sure, but (if I remember correctly) Inferno is also built on/as a virtual machine. Inferno had rather different aims compared to most "UNIX" systems, whereas Plan 9 was a unification and generalization of the "UNIX" paradigm.
Windows NT (or was it DOS?) was built, in part, off of UNIX. That doesn't mean we should look to Windows NT as an ideal UNIX system, because the design considerations are different, and the aims of the system are different.
> Also many seem unaware that Go supports dynamic linking for a couple of versions already.
Sure, but that's not to say there wasn't a huge amount of debate around it. I believe in the end it was more or less agreed that language uptake was more important in this case, but I could be wrong. Regardless, if one is properly apprised of the debate around Go supporting dynamic linking, one can find that Uriel et al. had some solid arguments against it.
Glad that this happened. OpenSSL looks a lot better than when the entire drama started, back when it was quite hard to even build OpenSSL from source on Alpine.
So, for those who are curious: my CI builds the software into packages, automatically versioning them and marking the build versions, stores the packages in my GCS bucket, and then automatically runs apk add --upgrade on my package. All orchestrated with Terraform and LXD; no Docker or Kubernetes involved whatsoever. Now there is also apk-autoupdate, which I look forward to exploring to see how it can simplify my build process.
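On the consuming side, pointing apk at a bucket-hosted repository is just this (the bucket name, key, and package are placeholders for mine):

    echo "https://storage.googleapis.com/my-apk-repo/v3.9/main" \
        >> /etc/apk/repositories
    cp mykey.rsa.pub /etc/apk/keys/    # trust the repo's signing key
    apk update
    apk add --upgrade mypackage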
So I wonder what the rationale is for using GRUB to boot UEFI systems.
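For comparison, booting an EFI-stub kernel directly needs nothing more than a firmware boot entry, something like this (disk, partition, and paths are illustrative):

    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Alpine (direct)" --loader '\EFI\alpine\vmlinuz-lts' \
        --unicode 'root=/dev/sda2 initrd=\EFI\alpine\initramfs-lts'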
I'll sometimes do builds in a larger distro (Debian or Ubuntu Server), then deploy into Alpine.
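In Docker terms that's a multi-stage build; a sketch (names are illustrative, and the static link matters because Alpine has no glibc):

    FROM debian:stable AS build
    RUN apt-get update && apt-get install -y build-essential
    COPY . /src
    RUN make -C /src LDFLAGS=-static

    FROM alpine:3.9
    COPY --from=build /src/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]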
I use the rPi for dnscrypt-proxy2.