- Nov 26, 2010: https://news.ycombinator.com/item?id=1942778
- Aug 13, 2012: https://news.ycombinator.com/item?id=4374865
- Jul 5, 2016: https://news.ycombinator.com/item?id=12034277
- Nov 10, 2018: https://news.ycombinator.com/item?id=18421877
I like Gentoo and still use it, but my biggest problem with it is that compiling packages takes a hell of a long time (especially monstrosities like Qt or WebKit, which can take days or even a week on my old, slow laptop). You really need a relatively modern system to compile all your packages from scratch, unless you want to be doing it non-stop, virtually 24 hours a day. It's really annoying. I have better things to dedicate my processor cycles, and my own time, to than constantly compiling packages.
So maybe I'll switch. I'm not sure to what, though. I've thought of BSD, but it has the same problem. I don't really like the idea of binary distros either, because of their relative inflexibility (no choice to include/omit package features that you want/don't want), and honestly, I don't really trust binary blobs as much as compiling from source... though maybe in the end it doesn't really matter.
It's my understanding that most of the other BSDs are similar in this regard, though I don't have first-hand experience.
if you want to stay with Linux, I have heard that Arch is one of the more BSD-like distributions.
FreeBSD definitely supports binary packages as well, and it's a great system to use. In fact, if there weren't a few Linux-specific tools I needed, and if FreeBSD's hardware support were on a par with Linux's, I would use FreeBSD as my primary system without a doubt. As it is, I still run FreeBSD for all of my personal servers.
To be honest the tool I’d most want would be Docker and I couldn’t see that working even with binary compatibility.
On the FreeBSD servers I run, I just fire up a VM for the rare occasions I do require Linux (and it is extremely rare that happens), but Arch is a better compromise for me on my laptop.
Source: me. A FreeBSD workstation user for 16 years who has built his own machines from off-the-shelf parts since the beginning, with his current machine running the latest and greatest.
Binary packages for -release and -stable are not updated.
I also ran into some strangeness with FreeBSD packages. I was on a STABLE release that was not the latest version. It was no longer possible for me to update and install binary packages.
This is generally the accepted solution if you don't want to follow Current.
Void has also forgone systemd, and avoiding systemd is a mandatory requirement for me in a Linux system; it's why I left Arch for OpenBSD in the first place. I love the simplicity of runit (the init system Void uses), and their package manager is top notch.
I haven't used Gentoo, but you say it can use binary packages. If it's too complicated, another option could be Archlinux. Building packages is really easy. It's just `asp export $package; cd $package; makepkg` and you'll get a $package-$version.pkg.tar.xz in the directory. If you want to modify something, just edit the PKGBUILD that asp downloaded. It's just a simple bash script with standard conventions. Once you're done modifying PKGBUILD, `makepkg` will make the package.
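To give a feel for the "simple bash script with standard conventions" described above, here is a toy PKGBUILD. The package name `hello` and its contents are made up for illustration; a real package would come from `asp export`:

```shell
# Write a minimal, illustrative PKGBUILD to the current directory.
# "hello" is a hypothetical package, not a real Arch package.
cat > PKGBUILD << 'EOF'
pkgname=hello
pkgver=1.0
pkgrel=1
pkgdesc="Toy package for illustration"
arch=('any')
license=('MIT')

package() {
  # Install a one-line script as /usr/bin/hello inside the package root
  printf '#!/bin/sh\necho hello\n' > hello.sh
  install -Dm755 hello.sh "$pkgdir/usr/bin/hello"
}
EOF

# Sanity-check that the PKGBUILD is valid bash; running `makepkg` in
# this directory would then produce hello-1.0-1-any.pkg.tar.xz.
bash -n PKGBUILD && echo "PKGBUILD parses"
```

The whole format is just bash variables plus a few well-known functions (`package()`, optionally `build()` and `check()`), which is why editing an asp-exported PKGBUILD and rerunning `makepkg` is so painless.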
To make a repository, it's just a matter of putting all the .pkg.tar.xz in a directory and running `repose -J $repo .pkg.tar.xz; gpg -b $repo.db`, then hosting that directory with an http server.
Configuring the package manager to use your repo is just a matter of doing something like:
cat >> /etc/pacman.conf << EOF
[$repo]
Server = $url
EOF
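Putting the server and client sides together, here is a sketch. The repo name and URL are hypothetical, and `repo-add` (which ships with pacman) stands in for repose; the pacman.conf stanza is written to a scratch file so the sketch is safe to run anywhere:

```shell
repo=myrepo                           # hypothetical repository name
url="https://pkgs.example.com/$repo"  # hypothetical hosting URL

# Server side (commented out, needs pacman installed):
#   repo-add "$repo.db.tar.gz" ./*.pkg.tar.xz   # index the packages
#   python3 -m http.server                      # or any web server

# Client side: the stanza you would append to /etc/pacman.conf,
# written to a local snippet file here for illustration.
cat > pacman.conf.snippet << EOF
[$repo]
SigLevel = Required
Server = $url
EOF
cat pacman.conf.snippet
```

`SigLevel = Required` makes pacman insist on the detached signature produced by the `gpg -b` step, so clients reject an unsigned or tampered database.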
I think DigitalOcean would support this better than Linode. You could keep the build host filesystem on a block storage volume instead of having to recreate it every time, and you could store the built packages in DigitalOcean's equivalent of S3 instead of needing a VPS just to host them.
I don't know about it being better, but you've got me curious. Linode also has block storage volumes. On using something like an S3, I wonder if that's really the best option. I don't know if it's cheaper, but it seems to be less flexible. For example, I like to have my private package repo provided on a VPN, not open to the public. I don't suppose using something akin to S3 would allow for a similar setup.
But really, what's your threat model? For a VPS employee or the company itself to mess with your packages, they'd have to be targeting you personally. If you're only worried about systematic, automated tampering across all of the company's VPSes to insert malware into the ones hosting package repositories, you can probably set yours up in an unusual way to evade such a program.
I think the chances are pretty slim that a VPS company's isolation is so poor that your VPS ends up sharing hardware with someone who knows of such a gaping security hole, one that would be a huge liability for the VPS company.
EDIT: Also, why would someone go out of their way to compromise a neighboring VPS, check if they, by chance, have a package repository, and insert malware in that? Who are you, that someone would think that's a good use of their time?
You know, society can't function without trust. Every person close to you could suddenly turn around and try to kill you, but you have to trust that they act by reason and know that they have no reason to do so. Locks around the world are pretty useless at keeping strangers from picking them, and very many of them are keyed alike. Their real purpose is simply to make it more of a hassle to get at whatever they're protecting, and thereby make it a less appealing target. In this way, and with other methods, people implement their security by making themselves less appealing targets. Some people set up the outside of their home as a dump while building a mansion inside. These people trust robbers to act on reason.
No one has perfect security. Security is a matter of choosing what to defend against (your threat model), choosing what you can trust, and anchoring your defenses on the things you trust.
EDIT 2: I removed the paragraph on VPSes being virtual in name only. Linode apparently uses KVM.
It has a similar "from scratch" policy, but all the official packages are pre-compiled. There's also the Arch User Repository, where you can find proprietary and unofficial packages, many of which will compile from source.
It also has by far the best documentation of any distro.
And when I installed Manjaro, it felt like a 13-year-old edgy kid had riced the defaults. As much as I liked installing Arch from scratch, I'm not always in the mood to do it. And if you automate the process, you're essentially doing the same thing (but without losing street cred, right?).
> It also has by far the best documentation of any distro.
It really is the best wiki format and has a lot of valuable content. I wish more projects mimicked the approach. When I'm using Debian's wiki, it feels like I have to think too much.
And, sure, it's not Debian or CentOS, so you don't have as much attention focused on the packages. Their record of keeping malware out is pretty solid, though, as far as we know.
I think a lot of security has to do with simplicity: if you don't understand how your software is configured, you're likely to open up a security hole on your system.
Arch's "don't start doing stuff until the user tells you" posture does a good job of making sure the user is aware of what's running on her system. Contrast this to Debian, which will often start running random services as soon as you install a package (e.g. Apache).
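For what it's worth, Debian's auto-start behavior can be tamed: maintainer scripts start services through invoke-rc.d, which consults /usr/sbin/policy-rc.d first. A minimal sketch, written to a local file rather than /usr/sbin so it is safe to run outside a real Debian install:

```shell
# A policy-rc.d that exits 101 ("action forbidden") tells invoke-rc.d
# not to start anything, giving Debian an Arch-like posture where
# nothing runs until the admin says so.
cat > policy-rc.d << 'EOF'
#!/bin/sh
exit 101
EOF
chmod +x policy-rc.d

# Simulate the query invoke-rc.d would make when a freshly installed
# package (e.g. Apache) tries to start its service:
./policy-rc.d apache2 start || echo "policy answer: $?"   # prints "policy answer: 101"
```

On an actual Debian system the script would live at /usr/sbin/policy-rc.d; container tooling uses the same trick to keep services from launching during image builds.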
Seemed really amateurish. Hopefully that's changed.
Also, it was really immature compared to Gentoo back then. Now it's had some time to mature, so I might give it another go sometime. But I'd really have to feel I'm getting a huge win over Gentoo to bother.
This is the most salient feature of the modern PC BSDs to me. It's impossible to describe how different it feels to be truly in control of your system and to understand/change it however you want.
I don't think there's any OpenBSD code in kernel space on iOS, just in userspace.
Take something like Docker. Because Linux is popular, it was initially developed for Linux. And because Docker runs (best) on Linux, you get more deployments of Linux, and hence whoever makes the next big thing is more likely to develop it for Linux.
The end result is you have an OS that scales from smartphones to supercomputers, and so one needs quite a good reason to replace it.
Linux isn't exactly technically superior to other OS kernels, and it's definitely lacking in innovation.
You cite Docker as an example, but that's a shining example of Linux taking major innovations from other OSes and copying them badly. BSDs had jails and Solaris had zones long before Linux got containers, and whereas those were considered security features on their respective OSes, Docker containers are not seen as improving system security.
Other features like this exist too. Linux has refused to allow better IPC mechanisms, such as the one Android uses, to be upstreamed, and the Linux replacement for select/poll (epoll) is generally considered the worst of the bunch.
Another consideration is that both smartphones and supercomputers tend to carry lots of modifications to Linux not present in desktop kernels. Android, as mentioned above, uses a different IPC mechanism, while supercomputer applications rely heavily on libraries that bypass the kernel, because scaling to highly parallel 100,000-core systems requires breaking POSIX a fair amount (particularly the filesystem semantics).
Also, when Netflix's Open Connect team finally open-sources TLS sendfile, mainline FreeBSD will have much better file-serving performance than Linux does. The FreeBSD downstream fork running on Open Connect appliances was doing 100 Gbps of TLS-encrypted traffic last year, and I believe they're doing much more now.
Finally, for me, when I first got into Unix and Unix-like operating systems, I found that the huge number of Linux distros made documentation hard. Generic Unix commands were pretty much the same across distros, but each one had a different package manager, a different filesystem layout, different ways of upgrading, and so on. With FreeBSD you just look up the FreeBSD directions and don't have to worry about differences between distros.
The end result is you have A KERNEL that runs on everything from smartphones to supercomputers
The Linux community, and the direction the operating systems using that kernel are taking, is "interesting".
I didn't really find that FreeBSD offered any advantage on the desktop over Linux, and features like browser sandboxing seem to (understandably) be lagging behind.
Lots of people are running away from Linux because of systemd, but that's a non-issue for me. It works just fine.
Curiously enough, OpenBSD works perfectly out of the box, but in the end I returned to Fedora anyway because there's no support for Wine or Steam. I'd prefer it over FreeBSD, though, since I like the design of the base system better.
This seems to be NVidia’s fault rather than the FreeBSD project’s, though.
Recent problem of mine: in Linux, using a Bluetooth-connected Apple Magic Trackpad 2, I can't right click, let alone scroll or use any multitouch gestures.
It appears that the author has upstreamed his work, and the module will be available in one of the next Linux kernel releases.
I installed FreeBSD on a ThinkPad T400 back in 2016... I was royally unimpressed with all the shit I had to go through to get drivers working on what is probably the most supported laptop of all time for the open-source community.
I would shudder to think about setting up an encrypted BSD dual-boot with Mac OS on my MBP, like I do with Arch.
And I mean this not to degrade BSD; I did love the design. Just, like, I don't want to spend all my time installing drivers from source and writing custom scripts to control my backlight and CPU fan.
But what a tour-de-force of passive aggression! I kneel.