Funnily enough, I decided to play with FreeBSD for personal projects in 2020. I gave up in 2022 and am reverting all my servers to Linux, for the opposite of the reasons mentioned in this article.
* Lack of systemd. Managing services through shell scripts is outdated to me. It feels very hacky; there is no way to specify dependencies or auto-restart in case of crashes. Many FreeBSD devs praise launchd, well... systemd is a clone of launchd.
* FreeBSD jails are sub-optimal compared to systemd-nspawn. There are tons of tools to create FreeBSD jails (manually, ezjail, bastillebsd, etc…), and half of them are deprecated. At the end all of your jails end up on the same loopback interface, making it hard to firewall. I couldn't find a way to have one network interface per jail. With Linux, debootstrap + machinectl and you're good to go.
* Lack of security modules (such as SELinux) -- Edit: I should have written "Lack of good security module"
* nftables is way easier to grasp than pf, and as fast as pf, and has atomic reloads.
I've grown to love systemd. It solves a lot of my problems with init scripts, particularly the ones involving environment / PATH at boot time. I've made init scripts for things before which work when they are invoked manually, but not at boot time because PATH was different. With systemd I am confident that if it works through systemctl it will work at boot.
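For what it's worth, here's a minimal sketch of the kind of unit I mean (the service name and paths are made up); the environment is spelled out in the unit, so a manual start and a boot-time start see the same PATH:

    # /etc/systemd/system/myjob.service  (hypothetical example)
    [Unit]
    Description=Example job with an explicit PATH

    [Service]
    Environment=PATH=/usr/local/bin:/usr/bin:/bin
    ExecStart=/usr/local/bin/myjob --config /etc/myjob.conf

    [Install]
    WantedBy=multi-user.target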
Maybe I am not tuning Linux appropriately, but I have been in situations where a Linux system is overwhelmed / overloaded and I am unable to ssh to it. I have never had that experience with FreeBSD -- somehow ssh is always quick and responsive even if the OS is out of memory.
Most of the systems that I deal with are Linux, but I still have a few FreeBSD systems around and they are extraordinarily stable.
I think I’ve grown to appreciate it, but not love it. It feels like both systems are at opposite ends of a pendulum.
The RC/unit system was very approachable for me. You could explain it easily, and you could get inside of it and mess around very easily. That affordance/discoverability was awesome.
With systemd, simple things are nicely templated. I can find an example and tweak it to achieve what I want. Complicated things get complicated real fast.
What is the use case for that? I never had these needs on servers. Maybe you are doing something very different from me, so I am curious. I use RC on servers and systemd on clients; things run robustly for 15+ years on many servers for me (with security updates included).
Let's say that you're starting Apache Tomcat. You have to dig into the tomcat startup scripts and figure out where, if anywhere, it writes the pid file to disk so that you can later use it to determine if the process is running or not. If java happened to crash, though, this pid file is stale and might not point to a valid process. There's a chance that the pid has been reused, and it could have been reused by anything -- even another java process!
This is important, because this pid file is used to determine which process receives the kill signal. If you get it wrong, and have the right permissions, you can accidentally kill something that you did not intend to kill.
This is further complicated if you want to run multiple instances of Tomcat because now you need to have a unique path for this pid file per tomcat instance.
If the thing that you're trying to run doesn't fork, you then have to execute it in the background and then store the result of $! somewhere so that you know how to kill the process later on.
It's all very error prone and the process for each daemon is often different.
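Roughly, the pattern you end up hand-rolling looks something like this (a sketch, not any particular package's script), and every step has a failure mode:

    #!/bin/sh
    # Hand-rolled "daemonization": a sketch of the fragile pattern.
    PIDFILE=/var/run/myapp.pid

    start() {
        /usr/local/bin/myapp >/var/log/myapp.log 2>&1 &   # myapp doesn't fork, so background it
        echo $! > "$PIDFILE"                              # remember the pid... hopefully
    }

    stop() {
        [ -f "$PIDFILE" ] || return 0
        kill "$(cat "$PIDFILE")"    # if myapp crashed and the pid was reused,
                                    # this signals some unrelated process
        rm -f "$PIDFILE"
    }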
Your description is fine, but it’s missing one crucial detail: it’s specific to Linux. In FreeBSD this is already taken care of by rc infrastructure; there is no need for the user or sysadmin to mess with it.
I fully agree that it is a Linux only solution, but that's kind of the topic here.
This still doesn't handle restarting crashed services, and the same problem applies if you need to make init scripts for your own services outside of the ports tree.
It took me a long time to come to terms with systemd, but I am very glad that I did. For me it makes defining services both easy and reliable.
We're almost in agreement -- FreeBSD does handle pidfiles better and more uniformly than Linux does. It doesn't help you if you want to run two instances of the same service.
I looked at one of my FreeBSD servers to see how it was handled in the case of ZNC, an IRC bouncer that I use. ZNC doesn't produce a pid file on startup, so the FreeBSD RC framework tries to find a matching process in the output of 'ps'. [1] As soon as you attempt to run multiple instances of a given service, this falls over completely.
daemon(8) helps -- it handles restarting of processes if they crash, and it can manage pid files and prevent them from getting stale. Nothing on my FreeBSD system uses it. The unbound (dns cache) port uses the pidfile option for rc.subr(8). Looking at how check_pidfile is implemented, it attempts to verify that the process represented by the pidfile matches the process name. Pids also wrap on FreeBSD, so you have a chance of a false positive match if you run multiple instances of a given daemon. I could, of course, change unbound's rc scripts to use daemon, but that feels like a lot of thinking about pidfiles for something that was taken care of a decade ago.
I do like FreeBSD, don't get me wrong, and I use it in my everyday life. systemd solves problems for me, and I really like the way it manages process groups by utilizing kernel features.
> Let's say that you're starting Apache Tomcat. You have to dig into the tomcat startup scripts and figure out where, if anywhere, it writes the pid file to disk so that you can later use it to determine if the process is running or not.
The command-line arguments are --PidFile and --LogPath. Most, if not all, programs allow you to customise this. It should never be a guessing game, especially when you are the one creating the init file and therefore the one in control of the running program.
Those arguments are for the Windows service and don't appear to have a corresponding Linux option. On the Linux side, if things are still done the same way as they were in the past, you had to set CATALINA_PID before starting the java process.
It's still a guessing game, though, even with CATALINA_PID. It is entirely possible for Java to crash (something which RC scripts do not handle, at all) and another java process starting up which happens to be assigned the same process id as the dead java process. This can not happen with systemd units because each service unit is its own Linux cgroup and it can tell which processes belong to the service.
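Something like this is what I mean, a rough sketch of a Tomcat-style unit (paths are hypothetical, this is not the packaged unit): no pid file is involved, because systemd tracks the whole cgroup.

    # /etc/systemd/system/tomcat.service  (illustrative sketch)
    [Service]
    Type=simple
    User=tomcat
    Environment=CATALINA_HOME=/opt/tomcat
    ExecStart=/opt/tomcat/bin/catalina.sh run   # run in the foreground; no pid file needed
    Restart=on-failure
    KillMode=control-group                      # the default: signal every process in the service's cgroup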
You could just run every daemon in its own jail. You don't need to chroot (unless you want to), and don't need to do any jail specific network config (unless you want to), but you could use the jail name to ensure no more than one tomcat (or tomcat-config) process, and you could use the jail name/id to capture process tree to kill it without ambiguity.
With respect to servers that crash, approaches differ, but you could use an off-the-shelf, single-purpose daemon respawner, or you could change the whole system, or you could endeavor to not have crashing servers, such that if one crashes, it's worth a human taking a look.
Sure, you can do all that and set it up manually. I'd love to be corrected here but last I checked this was not done automatically in any BSD. Systemd recognizes this is so common that it does it automatically for every service using the Linux equivalent (cgroups). IMO now that we have these tools, every sysadmin is always going to want to use cgroups/jails for every service all the time and never want to use pidfiles because pidfiles are tremendously broken, error-prone and racy.
Even with systemd one may need to deal with pid files for services with more than one process.
Systemd has heuristics to detect the main process to send the kill signal or detect a crash, but it can guess wrong. Telling it the pid file location makes things reliable.
Plus, creation of a pid file serves as a synchronization point that systemd uses to know that the process is ready. The latter is useful when the service does not support systemd's native API for state-transition notifications.
And the best thing is that even if systemd has to be told about pid files, one never needs to deal with stale files or pid reuse. Systemd automatically removes them on process shutdown.
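Concretely, that's the Type=forking plus PIDFile= case, something like this sketch (the daemon name is made up):

    # Sketch of a unit for a classic forking daemon (names are hypothetical)
    [Service]
    Type=forking                       # startup is considered done once the parent exits...
    PIDFile=/run/mydaemon.pid          # ...and the pid file names the main process
    ExecStart=/usr/sbin/mydaemon -d
    Restart=on-failure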
A pid file is never actually reliable though. Since the supervised process has control over it, it can write any number it wants in there and trick the service manager into signaling the wrong process. As long as the process is not root and can't escape its own cgroup, systemd's pid detection is going to be less error-prone in basically every case.
I can't stress this enough. Pid files are really bad. The fact that we used to have to use them is a flaw in the OS design. Using them for startup notification is also a hack in and of itself. The kernel has enough information to fully track lifecycle of these processes without writing information into files that are inherently prone to race conditions, and we now have better APIs to expose this information, so we shouldn't need to use these hacks anymore. I don't think there is any Unix-like system left that considers the old forking daemon style to be a good idea, and systemd's handling of pidfiles is really just a legacy compatibility thing.
You have to do it for correctness. Every use case needs this handled.
Okay, so you have a PID file somewhere, /var/myservice/myservice.pid. The contents of that file is a number which is supposed to correspond to a process you can find in /proc.
But PIDs are recycled, or more likely your PID files didn't get cleaned up on reboot. So you look at your file and it says 2567, you look up 2567 and see a running process! Done, right? Well, it just so happens that a random other process was assigned that PID and your service isn't actually running.
pidfds are the real solution to this, but the short and long tail of software uses pidfiles.
> But PIDs are recycled, or more likely your PID files didn't get cleaned up on reboot. So you look at your file and it says 2567, you look up 2567 and see a running process! Done, right? Well, it just so happens that a random other process was assigned that PID and your service isn't actually running.
If you're unlucky, though, pid 2567 might match another myservice instance. This can easily happen if you're running many instances of the same service. Even checking /proc/$PID/exe could give you a false positive.
I don't expect many programs do this (and I agree the real solution would be handles) but it should be possible to check the timestamp on the PID file and only kill the corresponding process if its startup time was earlier.
There might still be race conditions but this should cut down the chance dramatically.
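Something like this, I suppose (a sketch with GNU stat/procps flags; the FreeBSD equivalents differ slightly): only signal the process if it was already alive when the pid file was written.

    #!/bin/sh
    # Sketch: refuse to kill a process that started *after* the pid file was written,
    # since that pid has almost certainly been recycled. (GNU coreutils/procps flags.)
    PIDFILE=/var/run/myservice.pid
    pid=$(cat "$PIDFILE") || exit 1

    pidfile_age=$(( $(date +%s) - $(stat -c %Y "$PIDFILE") ))   # seconds since pid file was written
    proc_age=$(ps -o etimes= -p "$pid") || exit 1               # seconds the process has been alive

    if [ "$proc_age" -ge "$pidfile_age" ]; then
        kill "$pid"            # process predates (or matches) the pid file: plausibly ours
    else
        echo "pid $pid is younger than $PIDFILE; refusing to signal it" >&2
    fi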
How about keeping pidfiles on tmpfs mounts so they do get cleaned up?
I guess that'd be an organisational thing to get all the apps to change to a consistent /var/tmp so you could mount it..
That's still not good enough. pids wrap and eventually it could point to something valid, especially on a busy system.
systemd uses linux cgroups so that it knows exactly which pids belong to the group.
The defaults have surely changed over the years, but pid_max used to be ~32K by default. On the system I'm typing this comment on, /proc/sys/kernel/pid_max is set to 4194304.
Wait... how would pid wrapping impact anything at all unless the process died and the pid file was stale (which is usually a fairly unusual circumstance - I've had it happen maybe a dozen times over the last couple of decades).
And I guess if you have stuff surprise dying without any of the usual cleanup, you have a pretty serious situation in general. Maybe need a process monitor service just for that :)
I was focusing on the "surprise startup" scenario where it would all get wiped and reset.
As with anything else, your chances increase with the number of processes you have running.
If it blindly reads the pid and sends a kill signal, your odds of hitting a recycled pid are pretty good with a limit of 32K pids on a busy system. If it confirms that the process name matches an expected value, you have less of a chance... but if your process name is bash, java, or python, maybe not as good as you would hope.
I don't have things crashing a lot, but it's naive to pretend that it never happens. It could result in two things: the rc system telling me that everything is fine, or the rc system sending signals to some unrelated, poor, unsuspecting processes.
I don't need a process monitor just for that. I have systemd. :-)
> The defaults have surely changed over the years, but pid_max used to be ~32K by default. On the system I'm typing this comment on, /proc/sys/kernel/pid_max is set to 4194304.
How does systemd make sure it never reuses a cgroup, OOI? If you're worried about PIDs wrapping, surely that applies to anything randomly-generated as well.
Your nice shell-scripts in /etc/rc.d just won't handle a service crashing for any reason, at all (systemd does that)
Your nice shell-scripts in /etc/rc.d will start EVERYTHING and ANYTHING just in, case, even if you don't always need it (systemd does support socket activation)
Your nice shell-scripts can't handle parallelism (systemd can)
Your nice shell-scripts can't reorder stuff at boot, you have to specify it by hand (systemd can, via After=/Requires= etc)
Your nice shell-scripts are worthless if you need to run more than one instance of a service per server (with systemd having many instances of the same service is a feature, not an after-thought)
Your nice shell-scripts won't help you troubleshoot a failing service (systemd will, via systemctl status AND journalctl).
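To make that concrete, a minimal sketch (service and dependency names are hypothetical): restart-on-crash, explicit ordering, and a template so you can run several instances.

    # /etc/systemd/system/myapp@.service  (hypothetical template unit)
    [Unit]
    Description=My app, instance %i
    After=network-online.target postgresql.service
    Requires=postgresql.service

    [Service]
    ExecStart=/usr/local/bin/myapp --instance %i
    Restart=on-failure
    RestartSec=2

    [Install]
    WantedBy=multi-user.target

Then "systemctl start myapp@one myapp@two" gives you two supervised instances, and "systemctl status myapp@one" / "journalctl -u myapp@one" cover the troubleshooting part.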
Service restart has been a solved problem in UNIX for decades. It is called inetd. It is not perfect, and it was originally created for a slightly different operating model, but it does restart crashed «services». They are, in fact, called daemons. Shell scripts are for manual service starts, stops and restarts (when the daemon does not support a restart on SIGHUP); they are not meant for automated service restarts.
Different UNIX systems have gone on to replace inetd to address specific shortcomings of the inetd model with launchd, the Service Management Facility, systemd and SAM, but service restart is not an innovation that systemd has brought along. It is a further, fine-grained improvement over a decades-old concept and the tool + ecosystem that implemented it (inetd + rc scripts).
SysV-RC handles all that apart from socket activation, which is the job of inetd. This is all before systemd.
I must say that it now has more features, and the configuration format, while it could be simpler, is way better than sysvrc magic comments. I'd suggest it should be XML, with a simple config-file equivalent for simple cases, since updating a subtree in a YAML-like format is hell anyway.
This speaks more to the "made of many little pieces without coordination" issues spelled out in the linked post than to using simple scripts for service management. Also, the idea that you would have pid files scattered all over instead of in /var/run seems like the sort of chaos Linux spawns.
The *BSD rc systems are robust and rely on a framework that abstracts stuff out, you're never going to be digging in there for a pid file's location.
I looked into how the RC system works on one of my FreeBSD servers. You might be surprised to see how rc.subr does this -- it parses the output of 'ps' to look for the process name and matches it to what's listed in the script. This only works in the simplest of cases. If the script lists a pidfile, it finds that pid and makes sure that the process name of that pid matches what is in the script.
You don't need to dig for the pid file's location, I'll give you that, but it's also possible for this robust rc subsystem to kill the wrong process under the right circumstances.
My system runs unbound (dns cache), so I looked for its pidfile. It's not in /var/run, but rather wherever it happens to be specified in the config. The default is /usr/local/etc/unbound/unbound.pid.
In mainstream Linux distributions, things in /var/run are likely sane if you live in the packaged world. My tomcat example was about adding software outside of the packaged world / ports tree, which is why you would have to track down the pid file to make a service. Given that unbound places its pid file outside of /var/run, which is more chaotic?
I like FreeBSD, and I use it daily! I also very much like the features and organization that systemd brings to the table.
I think RC was only simple if you stuck to a single distro and had a neatly predefined pattern of work.
It's true that systemd is nowhere near as easy to hack/get into but at the same time I find that the largely consistent definition language/CLI give me a lot less reason to want to do so.
When the need does arise, there's usually a line or two I can add to my service definition to get it to do what I want.
All the problems with NFS dependencies described in the article existed before. The system administrator was supposed to address them manually by ordering things in rc files, sometimes adding various sleep pauses to wait for things to start.
Systemd made it possible to solve them in a reliable way at the distribution level, with no need for the system administrator to do anything beyond providing the exports and fstab entries.
I think there's something pathological in the I/O subsystems on Linux that makes it a bad experience - I've experienced horrible UI latencies in desktop and server settings with Linux when there was any kind of I/O load, and found FreeBSD to always be a breath of fresh and quick air in this regard.
For a long time after creating my own Linux distro, I had the same kinds of problems. It turns out the Linux kernel is horribly tuned by default. After a number of tweaks and adjustments, I finally got all those bugs ironed out. Now my (four core) desktop is perfectly smooth and responsive under all loads, even playing video and running multiple builds while copying files around. Here are the important parts of what I've done:
* set disk i/o schedulers to 'bfq' for spinning drives, 'deadline' for solid state, and 'none' for nvme, by creating a file in /etc/udev/rules.d (a rule along these lines is sketched after this list). kernel must have the deadline and bfq schedulers compiled in.
* turned on SCSI block multiqueue in kernel config. requires kernel command line option scsi_mod.use_blk_mq=1 to actually enable it. this helped, but did not completely cure the disk i/o problem.
* patched kernel source file ./block/blk-mq-sched.c to hard limit number of queued block device requests to 2, instead of default which is like 32. this absolutely cured the problem. no more disk i/o dragging the system down. doesn't seem to have a major effect on throughput.
* kernel is configured for full preemption, with 1000hz timer frequency.
* for architectures which will boot using the MuQSS cpu scheduler patch, I enable that with a 100hz timer freq instead.
* overcommit is disabled, as well as swap, and I use earlyoom to ensure process destruction proceeds in a controlled manner in the event of memory exhaustion.
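A sketch of the udev rule from the first item (scheduler names vary by kernel version and config, so adjust to what your kernel actually provides):

    # /etc/udev/rules.d/60-iosched.rules  (sketch)
    # rotational disks -> bfq, SATA/SAS SSDs -> mq-deadline, NVMe -> none
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
    ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"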
That's the bulk of it. No real magic involved; just un-fuck-ifying the default kernel config, which is garbage even for server use IMO.
(This is on a 4.x kernel btw, and I have no plans to downgrade to the 5.x series.)
The fact that some or all of these tweaks aren't done by default would seem, along with other evidence, to support my belief that Linux is actively being sabotaged by people who don't want it to succeed.
Software has a hard time keeping up with hardware architecture, mostly because of backwards compatibility...
Imagine running a restaurant: normally you can take 32 orders and have the customers sit and wait. One day you get new chefs who can make food 100x faster, but now you can only take one or two orders before the chefs have the food ready and you have to give it to the customer who ordered it. So despite the chefs being 100x faster, it now takes much longer to place an order, and the waiting line can grow long with impatient customers.
There are definitely pathological cases around. Here's an 8 year old bug, still valid, that's a common extreme slowdown when copying to/from a USB drive:
It sometimes manifests as audio stuttering (the kind which sounds like an unstable sample rate), which suggests a lot of context switching, perhaps a scheduler problem. Had this problem intermittently and seemingly at random for many years. Sometimes it would happen all day, then nothing for months. Quite bizarre.
Yep, one of my systems got this treatment. It's annoying because OOM has been an issue ever since. I'm playing with EarlyOOM and trying to remember if it'll be a huge pain to resize the partition for more swap space. Thanks for your reply.
Thanks for the tip on EarlyOOM. Swap file is at least easier to handle than swap partition.
Some run their Linux without swap at all; however, I think that is better suited to a server setup where you run a small, specific set of binaries and thus know your load.
That is not the case on a desktop, where you run a variable set of binaries depending on your task.
I’m somewhat surprised at how badly this works on Linux. I don't think I ever experienced a swap problem on Windows (not even in the 9x days). I guess this is because "Year of the Linux desktop" never happened: Linux is primarily a server and embedded OS, not a desktop OS.
Unless one has the latest SSDs on a server to tolerate occasional spikes in memory usage, zram makes more sense. Just configure it to use the lz4 compressor. This way even with half of the memory compressed the system remains somewhat responsive, and if the compression no longer helps, then killing the memory hog is probably the right thing.
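Roughly this, if anyone wants to try it (the size is just an example; the order matters, since the algorithm must be set before the size):

    # Sketch: zram swap with lz4
    modprobe zram num_devices=1
    echo lz4 > /sys/block/zram0/comp_algorithm
    echo 4G  > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0     # higher priority than any disk swap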
On my desktop I run with no swap, overcommit disabled, and earlyoom. I frequently do heavy work with it including such things as Chromium builds and it's trouble free. See above for the tweaks I've made to ensure perfect responsiveness under all loads.
1x RAM plus a bit on any machine I want to hibernate, e.g. a laptop.
Anything else: 2 GB or less, or don't bother with a swap partition and just use a swap file capped at, say, 2x RAM. It also depends on how much RAM and disk is available.
If it makes you feel any better, I have been using Linux for like 25-30 years now and I still experience "sigh. and now I can't SSH into the box to kill the thing I shouldn't have started".
My boss graduated from Berkeley, and so I occasionally had to administer a FreeBSD box he kept around for the "superior OS philosophy".
My biggest annoyance, apart from the obvious lack of systemd, was a social one: Any time I had to look up how to accomplish some non-trivial task I would inevitably find a thread on the FreeBSD forums by someone else who has had exactly the same problem in the past, together with a response by some dev along the lines of "why would anyone ever need to do that?".
Even though I severely dislike systemd, and am a fan of FreeBSD's stability and simplicity, I also have found this attitude to be an annoyance. For example, if I want to upgrade a fleet of a hundred or so FreeBSD boxes remotely, the answer seems to be, "Don't do that. You must upgrade each one by hand."
This is the principal reason why I've leaned toward Debian stable (which also aims at being a stable UNIX like FreeBSD) for decades, even though Debian has also been infected with systemd and has made other questionable decisions. Alternatively, I've also had good luck with Void, Alpine, and Artix. (I've had difficulties with electron apps on Void desktop, but it runs flawlessly on servers.)
> For example, if I want to upgrade a fleet of a hundred or so FreeBSD boxes remotely, the answer seems to be, "Don't do that. You must upgrade each one by hand."
either that or "don't do an upgrade, build new ones and replace the old ones instead"
I needed to script something for a FreeBSD server, and I was doing the development in Linux. It would have been OK if I had used Ruby/Python/Perl, etc., but I decided to do it in Bash. I had to make quite a few fixes when I got the script on the FreeBSD box. Then I deployed it to an OpenBSD system. More fixes.
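Things along these lines were the usual culprits for me (illustrative only, not the actual script): bashisms that FreeBSD's /bin/sh, which is not bash, rejects, and the POSIX forms I fell back to.

    #!/bin/sh
    # Portable forms I ended up with; the bash-only versions are in the comments.
    x=foobar
    case "$x" in foo*) echo "prefix match" ;; esac    # instead of: [[ $x == foo* ]]
    set -- *.log; echo "first match: $1"              # instead of arrays: files=(*.log); ${files[0]}
    printf '%s\n' "$x" | sed 's/bar/baz/'             # instead of: ${x//bar/baz}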
I have mixed feelings about this.
On one hand, this is extremely alienating.
On the other hand, I respect when devs reject features to keep their code simple.
I don't think this has anything to do with graduating from Berkeley; I'm a hobo with a keyboard.
> At the end all of your jails end up on the same loopback interface, making it hard to firewall. I couldn't find a way to have one network interface per jail.
You may want to look at vnet, which gives jails their own networking stack; then you can give interfaces to the jail. If you use ipfw instead of pf, jail id/name is a matchable attribute on firewall rules; although it's not perfect, IIRC I couldn't get incoming SYNs to match by jail id, but you can match the rest of the packets for the connection. And that brings up the three firewalls of FreeBSD debate; maybe you had already picked pf because it met a need you couldn't (easily) meet with ipfw; you can run both simultaneously, but I wouldn't recommend it. Nobody seems to run ipf, though.
Edit: you may also just want to limit each jail to a specific IP address, and then it's easy to firewall.
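For anyone curious, a vnet jail ends up looking roughly like this in jail.conf (the jail name, path, and bridge setup are just an example; bridge0 is assumed to already exist):

    # /etc/jail.conf  (sketch)
    web {
        vnet;
        vnet.interface = "epair0b";            # jail side of the pair
        path = "/usr/local/jails/web";
        exec.prestart  = "ifconfig epair0 create && ifconfig bridge0 addm epair0a && ifconfig epair0a up";
        exec.start     = "/bin/sh /etc/rc";
        exec.stop      = "/bin/sh /etc/rc.shutdown";
        exec.poststop  = "ifconfig epair0a destroy";
    }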
> Many FreeBSD devs praise launchd, well... systemd is a clone of launchd.
No they are not. Systemd has a way bigger scope than launchd. It's like saying a truck is the same thing as a family car because they both solve the mobility problem.
> FreeBSD jails are sub-optimal compared to systemd-nspawn.
systemd-nspawn isn't a container. For example it doesn't manage resources such as CPU or IO. Again, the scope is way different and in this case there is a whole slew of things systemd-nspawn isn't going to manage for you.
BTW launchd doesn't have a feature like 'systemd-nspawn'.
> Lack of security modules (such as SELinux) -- Edit: I should have written "Lack of good security module"
And how is SELinux a good security module? SELinux with its design fell into the same pitfall dozens of security systems did before it: ACL-Hell, Role-hell and now with SELinux we also have Label-hell.
> systemd-nspawn isn't a container. For example it doesn't manage resources such as CPU or IO. Again, the scope is way different and in this case there is a whole slew of things systemd-nspawn isn't going to manage for you.
It does. [1] In the end, systemd-nspawn runs in a unit, which can be (like all systemd units) resource-controlled.
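e.g. something like this (the machine name is hypothetical):

    # Cap an nspawn container's CPU and memory, either at launch...
    systemd-run --property=CPUQuota=50% --property=MemoryMax=1G \
        systemd-nspawn -D /var/lib/machines/test

    # ...or on the template unit used by machinectl:
    systemctl set-property systemd-nspawn@test.service CPUQuota=50% MemoryMax=1G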
> BTW launchd doesn't have a feature like 'systemd-nspawn'.
Neither does systemd. systemd-nspawn is just another binary, like git and git-annex. The only difference from git and git-annex is that systemd and systemd-nspawn are maintained by the same team.
But, as with git and git-annex, most systemd installations don't have systemd-nspawn, while systemd-nspawn needs systemd.
> And how is SELinux a good security module? SELinux with its design fell into the same pitfall dozens of security systems did before it: ACL-Hell, Role-hell and now with SELinux we also have Label-hell.
SELinux is generic enough to allow for sandboxing of services. But in its mainstream use, SELinux is designed well enough that it allows for reuse of policies. Most people don't care about SELinux; they just run CentOS, install RPMs from the repository, and everything works out of the box with heightened security (= if there is a vulnerability in any of these packages, the blast radius of the attack is limited thanks to SELinux's Mandatory Access Control).
That's not, strictly speaking, true. systemd-nspawn is able to do the cgroups-based containers fine without systemd as init, though you do lack some of the fancier resource control and networking features.
systemd the init system has nothing to do with DNS.
systemd the family of tools has a DNS server, systemd-resolved. Like all other tools in the family, using the init system does not require using the other tools, and sometimes also vice versa.
In the broader "Poetteringware" family of tools, sound is handled by pulseaudio, though the server side of pulseaudio is in the process of being replaced by pipewire.
Fcron can also do random timings, jitter, running a job if its time passed while the system was off, timing based on system run time rather than elapsed time, delaying until the system is less loaded, and avoiding overlapping instances of the same job.
Can you elaborate? I'm a macOS laptop/FreeBSD server guy. What are systemd timers, how do they work, and why do you feel they solve your problem better?
Generally I would say the biggest difference is that with timers, you get more control.
For example, you can schedule a timer to run 10 minutes after boot. Or a timer that actives 10 minutes after it has last finished running (note: not when it started last time but when it finished! So if the proc takes 10 hours, there is a 10 minute gap between runs. If it takes 10 minutes, there is still a 10 minute gap).
You can also schedule something 10 minutes after a user logs in (or 10 seconds later, etc.).
Additionally you get Accuracy and RandomizedDelay. The former lets you configure how accurate the timer needs to be, down to 1 sec or up to a day, so your unit runs somewhere within the window it's supposed to run in. And with the latter you can ensure that there is no predictable runtime, which can be important for monitoring.
My biggest favorite is Persistent=. If true, systemd will check when the service last ran, and if it should have been scheduled at least once since then, it'll activate. I use this for backing up my home PC. When I do a quick restart, no backups are done, but when I shut down for the night, first thing in the morning my PC has a backup done.
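A sketch of such a timer (unit names hypothetical), mostly to show how little it takes:

    # /etc/systemd/system/backup.timer  (hypothetical; pairs with a backup.service)
    [Timer]
    OnCalendar=daily             # nominal schedule
    Persistent=true              # if the machine was off at that time, run at the next boot
    RandomizedDelaySec=15min     # jitter so a fleet doesn't fire in lockstep
    AccuracySec=1min

    [Install]
    WantedBy=timers.target

    # (For the "10 minutes after the last run finished" behaviour described above,
    #  you'd instead use OnBootSec=10min together with OnUnitInactiveSec=10min.)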
Yes, but anacron is still a bit less powerful than timers, since I can schedule a timer to run every hour instead of once an hour plus on reboot in two very short config lines. And I don't think anacron has a concept of running things every hour not including runtime, only including runtime.
It helps dispersing jobs like backups. If you have 20 servers, your backup storage is not hit by them all at the same time. Even better, because it takes into account how long it runs, if two servers hit it at the same time, the reduced performance will automatically spread the server that took longer behind the other. That scheme reduced the needed capacity in network and CPU of our backup servers a lot.
If it ran every hour, all servers would either hit the backup store at the same time or you would have to manually disperse them. Randomized Delay is great if you want to avoid this problem with short running jobs but it doesn't work well when most jobs take 10 minutes or more and the delay becomes larger than the timer's repeat interval.
So in that case, using a "run every 60 minutes" scheme is a massive advantage that reduces coordination needs.
And every server would have to be put on a different schedule. By running it every 60 minutes, it doesn't matter when it runs; it automatically aligns. Once you're managing 200 servers, that becomes valuable and is no longer manually manageable ("How many servers are running backups on :05? Do I need to move some? Can I fit in one more?")
Counterpoint is that I've never had a requirement for timing scenarios like this, so now I'm dragging all that along for the ride for no advantage. Bloat is great when you need that rare thing I guess; otherwise it's just bloat and complexity for no benefit.
Maybe that's because you've never had the ability to express such timing requirements. Having more advanced tools available also makes you think more in terms of them. If you've only ever used cron, there has been no reason to think about a service running every hour or immediately if it missed the schedule, because the only tools are "every hour" and "on reboot", with no option to make "every hour" mean "between script invocations" instead of "on the hour of the schedule".
This was my experience. Cron was great because it did what I needed it to do and I knew how to use it. But over the years I accumulated all sorts of hacks to avoid pitfalls. When I learned systemd timers I didn't like the complexity, but as new needs arose I thought about both cron and systemd and realized that systmd timers were better for 75% of my needs.
There are lots of things that I have the ability to do that I've never had the requirement to do. I call those things "bloat" because they are unnecessary features that add complexity and bugs to the system.
Unnecessary to you maybe, but not to other people. It's a tiny bit rude to simply call a feature bloat just because you don't use it. After all, by that argument I could call screen reader software or speech-to-text bloat. But they aren't.
Does that add a static random delay on bootup or does it randomly delay the job each time? Because for timers you can pick that for every single timer individually.
Agreed, and more importantly it's possible to implement these scenarios with cron if they're needed. I'll take a simple building block over bloat. Where does the bloat stop? I can come up with tens more scenarios that timers don't cover.
> Lack of systemd. Managing services through shell scripts is outdated to me. It feels very hacky; there is no way to specify dependencies or auto-restart in case of crashes. Many FreeBSD devs praise launchd, well... systemd is a clone of launchd.
I'm not a fan of systemd personally but I do understand it has some good parts to it (such as the ones you've listed). That all said, you can still specify dependencies in FreeBSD with the existing init daemon. Albeit it's a little hacky compared with systemd (comments at the top of the script). But it does work.
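For reference, the whole thing typically looks like this (service name and paths are examples); the PROVIDE/REQUIRE comments are what rcorder uses for ordering:

    #!/bin/sh
    # /usr/local/etc/rc.d/myservice  (sketch)

    # PROVIDE: myservice
    # REQUIRE: NETWORKING DAEMON
    # KEYWORD: shutdown

    . /etc/rc.subr

    name=myservice
    rcvar=myservice_enable
    command="/usr/local/sbin/myservice"
    pidfile="/var/run/${name}.pid"

    load_rc_config $name
    : ${myservice_enable:=NO}

    run_rc_command "$1"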
> FreeBSD jails are sub-optimal compared to systemd-nspawn. There are tons of tools to create FreeBSD jails (manually, ezjail, bastillebsd, etc…), and half of them are deprecated. At the end all of your jails end up on the same loopback interface, making it hard to firewall. I couldn't find a way to have one network interface per jail. With Linux, debootstrap + machinectl and you're good to go.
It's definitely possible to have one interface per jail, I've done exactly that. However back when I last did it (5+ years ago?) you needed to manually edit the networking config and jail config to do it. There might be a more "linuxy" way to do this with Jails now though but its definitely possible. eg https://etherealwake.com/2021/08/freebsd-jail-networking/
There's a lot to unpack here. For example, there's certainly other ways to network jails and all three ways you've mentioned to maintain jails are not deprecated.
Security modules do exist, they're different from Linux. Are you sure you're not just expecting FreeBSD-as-Linux?
As for init... What can I say, I've never been anti-systemd, not even remotely, but rc.d is much nicer than sysvinit, and I find it much simpler to understand than systemd. In fact, I think rc.d is an example of how Linux could have alternatively migrated from sysvinit without pissing some people off.
> Security modules do exist, they're different from Linux. Are you sure you're not just expecting FreeBSD-as-Linux?
You're right. My original message was wrong, I edited it, while keeping the original content. What I meant is "good security module".
SELinux on CentOS is "enabled by default and forget about it", unless you do something weird. MAC (= Mandatory Access Control) on FreeBSD requires much more configuration. They have some cool stuff like memory limits, but it's not as powerful as SELinux.
The security posture is quite different, so it's not as easy as just oh, turn on some magic module and be done. Security requires work, regardless of OS.
A fair bit is included in the base system already, see capsicum for example. Also, see HardenedBSD, which is arguably better than anything Linux has built-in.
I said built-in, you seem to have missed that part. SELinux is not built-in(though it is for certain distributions of Linux).
Security is hard to define, let alone prove. Everyone has a very different definition of security. So first one has to ask, secure from what?
I imagine most of the reason around BSD not on the official list(s) is because it's not as popular. I mean GenodeOS[0] is arguably one of the most secure OS's around these days, but I doubt you can find any public Govt support(by any govt) for running it in production today.
Going back to my original comment, security is complicated, and there is no "secure", but hopefully for a given set of security threats, there is a "secure enough".
The same exists in physical security. Our home door locks are notoriously not secure, but they are generally secure enough for most home needs. But your average home door lock would obviously be idiotic as protection for Fort Knox's gold deposit door.
Comparing BSD to Linux security is complicated, but for most high value targets, the answer probably is, run more than one OS. Root DNS servers and other highly critical internet infrastructure all do this as a matter of common practice. If you are mono-culture Linux only, I worry for your security, as you are effectively a single zero-day away from being owned. Linux, BSD, Windows, etc will all have RCE's and zero-days as a normal part of existing.
0: Formally proven secure (seL4), for some definitions of provable, even: https://genode.org/
>I said built-in, you seem to have missed that part.
I did not miss that part, you're just mistaken.
>SELinux is not built-in(though it is for certain distributions of Linux).
Wrong. SELinux is 100% "built-in" to Linux. That's like saying btrfs or Wireguard are not "built-in" to Linux because certain distros may or may not have them compiled in. Nonetheless, SELinux is part of the kernel [0].
The rest of your drivel is a painful Gish gallop because you were decisively proven wrong. Mature up a bit and take the L. Fomenting about being proven wrong is against the Guidelines here.
>>SELinux is not built-in(though it is for certain distributions of Linux).
>
>Wrong. SELinux is 100% "built-in" to Linux. That's like saying btrfs or Wireguard are not "built-in" to Linux because certain distros may or may not have them compiled in. Nonetheless, SELinux is part of the kernel [0].
It really depends on what you mean by "SELinux". The core kernel bits of SELinux are, of course, part of the kernel by definition. However, SELinux is not really useful unless it comes with the SELinux policy definition which defines what applications do "normally". This work needs to be done by the Linux distribution, because without it, it's much like the relationship between software and hardware: "Without the software, it's just a paperweight."
You can boot a system with a btrfs root file system without having the btrfs-progs installed.
Good luck trying to use SELinux without having the policy installed; it's guaranteed to be non-functional. And given that the SELinux policy is distro-specific, it's not like you can take a random Linux distribution, and enable SELinux and expect it to work. You enable SELinux on the boot command-line, but without the policy installed, it will be dead in the water. And configuring the SELinux policy is extremely non-trivial. It's several orders of magnitude more challenging than running, say, "mkfs.btrfs".
>That is exactly what an operating system does, which is the topic of discussion. Linux OSs such as RHEL as an example.
If that's your definition of an OS, then there are plenty of Linux distributions --- aka, an "OS" by your definition --- that do *NOT* have SELinux built in, because they don't have an SELinux policy defined that will work with that distribution's system daemons.
Therefore, by your definition SELinux is not "built in" to all versions of Linux (specifically, "distributions"). Q.E.D.
>You can boot a system with a btrfs root file system without having the btrfs-progs installed.
Wrong again! You can actually have Linux systems that do not support booting from btrfs, but have btrfs-progs installed. Or systems that have neither.
>Good luck trying to use SELinux without having the policy installed
They are installed and included in the Linux operating systems used by the US government, as I stated above. This is a non-point.
> And given that the SELinux policy is distro-specific, it's not like you can take a random Linux distribution, and enable SELinux and expect it to work.
Wow, it's almost like it be nice if there were standards used by the government and other organizations looking to secure their operating systems. Maybe they can form an agency, I'll call it the Defense Information Systems Agency. They can make standards that secure and lockdown systems, and make sure SELinux is configured properly... We'll call these Security Technical Implementation Guides, STIGs for shor... Oh wait...
> And configuring the SELinux policy is extremely non-trivial. It's several orders of magnitude more challenging than running, say, "mkfs.btrfs".
Immaterial to the matter at hand.
>If that's your definition of an OS
A distribution is an operating system by definition. It's not my definition, it's the definition[0].
>that do NOT have SELinux built in, because they don't have an SELinux policy defined that will work with that distribution's system daemons.
Still "built-in" to Linux. Whether or not it's enabled or complied in is an implementation detail, but it's still "built-in" to Linux operating systems, most definitely those used by the government for secure systems, which was the original point. Thanks for proving my point, Q.E.D.
>Therefore, by your definition SELinux is not "built in" to all versions of Linux
Using your illogic, there are BSDs that send your password through plaintext over the wire because they only have rlogin. They don't have SSH "built-in".
SELinux is absolutely "built-in" to the Linux kernel and operating systems. Whether or not specific implementations of Linux have it compiled and enabled is beside the point; it is built-in, not third-party.
You're fractally wrong again. Take the L and stop while you're this far behind.
I'm not trying to hate on SELinux, it's great stuff, for what it is. I'm not trying to hate on you either, though clearly you seem to have hatred towards me, which is just sad.
I'm happy to accept that SELinux is now built-in to Linux, the kernel parts do indeed seem to be built in now, news to me, thanks for that. I don't follow Linux kernel stuff much anymore, I haven't contributed to Linux in over a decade.
You seem to assume SELinux is the end-all be all of Linux security. It isn't. I recognize, based on your other comment, that you are fairly new to the field(a whole decade, go you!). Please open your mind and accept differing perspectives, it will do wonders for your ability to reason about security properly.
HardenedBSD[0] essentially implements grsecurity for FreeBSD, plus FreeBSD has built-in capabilities with Capsicum[1], which is true capability based security, which is much different than SELinux's MAC stuff. If you don't believe me, go read the capsicum paper[1] and come to your own conclusions, it might prove enlightening.
If you just want to continue hating on me, no reason to respond, we can go our separate ways. If you want to have a reasoned discussion about security, then I'm happy to continue.
>You seem to assume SELinux is the end-all be all of Linux security.
Never said or implied anything of the sort.
>I recognize, based on your other comment, that you are fairly new to the field(a whole decade, go you!).
I've been implementing secure, hardened UNIX and Linux probably longer than you've been alive. I just specifically worked on DoD TS+ systems for a decade.
The rest of your Gish gallop is nonsense. Linux also has capability based security ON TOP of all the other aspects of security, SELinux included.
>If you want to have a reasoned discussion about security
That's not possible with you. You instantly showed how little you know about security in general when you flaunted your lack of SELinux knowledge, then you proceeded to Gish gallop and sealion because you've been called out.
I also use BSD, AIX, Solaris, etc. As someone stated below, maybe I'm "just tired of people who offer ignorant opinions and argue based on conjecture and not actual knowledge."
“Government and military servers” tend to run Windows ;-) SELinux looks nice on paper - another box to check - but it’s just another mitigation layer, not something that can be considered “trusted”.
It’s not about providing any real security, it’s about ticking checkboxes. So yes, if the checklist says “Linux and Windows are ok” then you can mark the checkbox, and with FreeBSD you couldn’t.
Yet I see BSD on the STIG list as well. Are there different levels of security for the list items, or how does this work if Windows can be said to be more secure than BSD even though both are checkmarked on the audits?
Interesting... does that mean that all secure systems are running on Cisco networking?
As said, they do have STIGs and CIS documents for JunOS, but I guess they don't run any Juniper in US secure networking despite having certifications.
>Interesting... does that mean that all secure systems are running on Cisco networking?
Now you're strawmanning on top of shifting goal posts again.
>As said, they do have STIGs and CIS documents for JunOS
No they don't. The CIS documents specifically outline the operating systems they support, and there are ZERO BSDs listed (click on Operating Systems)[0].
Similarly, there are zero STIGs for Junos OS. There are STIGs and CIS benchmarks for Juniper network devices, but not for Junos OS. The actual devices could be running on FreeDOS for all we care, but FreeDOS in and of itself would not be allowed to run on any servers. Hilariously, even Juniper is moving away from BSD. Junos OS Evolved is Linux based.
> The `REQUIRE' keyword is misleading: It does not describe which daemons have to be running before a script will be started.
> It describes which scripts must be placed before it in the dependency ordering. For example, if your script has a `REQUIRE' on `sshd', it means the script must be placed after the `sshd' script in the dependency ordering, not necessarily that it requires sshd to be started or enabled.
“Managing services through shell scripts is outdated.” By inference, since most things in Unix-like systems are built on the notion of the shell (and automation using the shell), this is saying a large part of the foundation of Unix is outdated.
A tool is a tool, but I would take a shell script from rc.d any time over a binary blob from systemd.
That is a flawed argument, as there is a big difference between using the shell in places where it makes sense and managing services using shell scripts.
I mean the shell is still used everywhere, like e.g. to configure and control systemd.
Still I would say a lot of core shell tools are indeed outdated (for backwards compatibility).
At what cost? That's not the entirety of systemd's complexity curve.
Your systemd unit file is backed by pages and pages of docs that must be comprehended to understand and hack on. Unix developers have all they need from the script. Furthermore, it's all in the context of existing Unix concepts, and thus your Unix experience is paying dividends.
Now you're just exaggerating. Are you really saying spending 5-10 mins skimming through a couple of man pages is that hard? Are you saying that a lot of documentation is a bad thing? (I thought FreeBSD fans liked to harp about their handbook..) And besides, there are already hundreds of systemd unit files on your system that you can easily copy and make relevant changes for your own services. Not having to deal with finicky shell features is a major advantage IMO.
I think the disconnect we're having is your inability to perceive complexity. And I don't blame you, it's not easy to quantify. I suggest you start with Out of the Tar Pit by Ben Moseley. I'm not knocking documentation; it's a vital property that I consider when adopting new technology.
What I'm saying is that systemd's documentation currency (if you accept my metaphor) is spent on covering its accidental complexity, and it's voluminous. If you disagree with me, that's fine. This is just my experience as a Linux user that's had to deal with systemd.
If your claim is that systemd man pages are well written documentation then I think you're exaggerating and I'll wager you've relied on stackoverflow examples or tutorial blogs to solve your systemd issues--because I have. The reason for this is because the number of concepts and abstractions that you have to piece together to solve your problem is massive. But yeah, it's just a 5 line Unit file. I prefer strawberry kool-aid, thanks.
I genuinely don't see what's so complex about a service unit file. It's a simple INI file that has multiple sections that describes the service, tells what command to run and specifies any dependencies. It's literally the same thing that init scripts do except in a much more concise and efficient manner. And as I said before, there's a ton of systemd service unit files on any Linux system that you can take a look at and use as inspiration for your own services. Taking a little time to learn the ways of systemd is not a huge burdensome task like you're making it seem to be. I don't see why you think everyone should conflate systemd with complexity.
And about the voluminous documentation, well man pages are supposed to be comprehensive and cover every single aspect of the tools being described. They're not there to just be an intro to systemd for new users and administrators. If you want something like that, look no further than the "systemd for Administrators" series of articles written by the systemd author himself. https://github.com/shibumi/systemd-for-administrators/blob/m....
> I genuinely don't see what's so complex about a service unit file
It's not the unit file that's the problem, it's the mountains of junk, low quality C code written by an obnoxious, arrogant twit named "Linux Puttering" who has proven for 15+ years he couldn't care less about code quality or system reliability.
Besides the anecdotes shared by others over the years about the horrible experiences they've had with systemd, I have one of my own to share. When developing my own distro to escape the bloated, laggy hell that is Ubuntu, I started the build on my existing Ubuntu system. I found out the hard way that accidentally double mounting virtual filesystems on the target volume causes systemd to crash the system after about 60 seconds, with no possible way to recover. On MY system, with no junky ass systemd, making this error harms nothing at all and can be easily fixed.
The people who talk about "buggy, hacky" shell scripts appear to be some of the same type of people who shrink in horror from the idea of compiling their own kernel, or working at the command line. (i.e. not really "hackers" at all.) There is nothing at all wrong with using shell scripts for startup. It is in fact the simplest, and IMO most elegant, way of doing the job, and no, it isn't buggy or hacky in the least. The file system is the database and the unit file, and the already existing shell is the interpreter.
My system starts much more quickly than Ubuntu and is much faster and more responsive in daily use also, so the "startup time" excuse is a myth, and practically all of the other contrived examples people use to justify the use of systemd can be done BETTER using shell scripts in conjunction with small, light weight, single purpose utilities built the UNIX WAY.
It's "just an INI file" but you would have to understand what the thing that's interpreting does. All the stuff that the OP described as a positive - dependencies, auto-restarts, socket activation - somewhere there's a codebase that's implementing all that, and you can't just understand your "config file", you have to understand what that codebase is actually doing with all of its concepts. Elsewhere in this thread someone writes about how great it is that systemd is using a cgroup namespace to keep track of each process instead of a PID, and maybe that is great, but it's yet another new concept that you have to understand to understand how any of this works. Etc.
You could say that a shell script is a config file for bash, and you have to understand bash to understand what it's doing. But a shell is both simpler than systemd, and something that anyone working with Linux already understands.
The equivalent comparison with init scripts would be all the documentation and complexity of every program invoked by the init scripts, not just sysvinit's or rc's documentation and complexity directly. systemd just has most of that built in. And if you're using socket units, the decision of what order to start things in is essentially outsourced to the kernel, so that's a bit of a simplification.
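The socket-unit bit looks roughly like this (names are hypothetical): the kernel holds the listening socket, and the service only starts when a connection actually arrives.

    # /etc/systemd/system/myapp.socket  (hypothetical)
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # /etc/systemd/system/myapp.service  (hypothetical; myapp must accept the
    # listening socket via fd passing, i.e. sd_listen_fds)
    [Service]
    ExecStart=/usr/local/bin/myapp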
Try building and maintaining a linux distro without systemd, especially for a large organization that needs to write their own init scripts. And especially when a large number of the devs in that org aren't shell experts, or don't understand the difference between /bin/sh and /bin/bash. And so on.
Here's another example: https://lwn.net/Articles/701549/ before systemd, for complex NFS setups, the sysadmin _had_ to write the init scripts per-site or per-machine. With the solution in the article (systemd generators) one set of unit files shipped by the distro solves the problem for over 99% of users, including most of the aforementioned complex setups.
The unsaid thing here is Linux is largely not used by sysadmin/unix types. Devops has driven this bloat so that people new to the field can just not have to learn any fundamentals about the OS they're building their tools on. For rapid "move fast and break things" VC nonsense, this is a great match. For efficiency, correctness, and long-term maintainability and security, it's a nightmare.
FreeBSD has a lot of documentation, which is something people like about it.
I think it actually shows a problem, which is that BSD is designed for all your machines to be special snowflakes with individual names, edited config files, etc instead of being mass managed declaratively. So you need to know how to do everything because you’re the one doing it.
Our research company hosts 15 PB of data on what we call Single System Imaged FreeBSD with ZFS. All systems pull the complete OS from one managed rsync repo during production hours. Doing this for ten years, never ever any problems. Config files are included using the hostname to differentiate between servers. Adding servers doesn't add manual labor, it's the borg type of setup which handles it all.
Is there an automation for "before updating ports, you need to read every single entry in UPDATING in case one of them has a command you need to run after"?
Why is there a chapter on custom kernels under "common tasks" that assumes you're going to have a C compiler and kernel source on your machine and want to installkernel on that same machine?
But they only provide fixed functionality, while shell scripts allow for practically unlimited customization.
As for 500 lines - take a look at proper rc scripts, eg the ones in FreeBSD. They are mostly declarative; it’s nothing like Linux’ sysv scripts, which were in some ways already obsolete when first introduced (runlevels? In ‘90s, seriously?)
Yeah, this conversation seems a bit like people arguing past each other. But it's a result of the fact that the story on Linux was stuck for so long (e.g., sysvinit on Debian, Upstart with some sharp edged hacks on Ubuntu). Systemd as the solution seems to have sucked out all the air out of the room: either it's great and people are idiots, or it's the worst thing on the planet and people using it are sheep.
If you need extra customization capabilities, just run a shell script via the ExecStart= parameter and boom, you have all the power of systemd and the shell combined.
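e.g. something along these lines (paths hypothetical):

    # Hypothetical unit fragment: a full shell is available inside ExecStart=
    [Service]
    ExecStart=/bin/sh -c 'ulimit -n 65536 && exec /usr/local/bin/myapp'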
You can even do one better, since systemd can natively run rc scripts. If you're on a systemd based distro, peek at /etc/init.d. You can even manage services with /etc/init.d and the service command.
The amount of effort systemd went through to make existing software work is genuinely heroic.
For the same reason it’s a good thing in other contexts. It’s the main reason Unix got popular - because it can be made to fit whatever requirement you have.
Large parts of the foundation of Unix are absolutely, obviously outdated, starting from filesystems. There is nothing better yet for big chunks of Unix, but systemd (despite all its flaws) is a notable exception.
The fact that filesystems casually allow TOCTTOU races—and that those races cause security vulnerabilities unless you delve into arcane, OS-specific syscalls like renameat2—is an embarrassment.
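To make the class of bug concrete in shell terms (path invented): the window between the check and the use is the race, and anyone who can write to the same directory can win it.

    # Vulnerable pattern: check, then use. Between the test and the redirect,
    # another process can drop a symlink at $target pointing somewhere else.
    target=/tmp/report.txt
    if [ ! -e "$target" ]; then
        echo "sensitive output" > "$target"    # race window is right here
    fi

    # One mitigation: let the kernel check and create atomically. With noclobber
    # the shell opens with O_CREAT|O_EXCL, which also refuses to follow a symlink.
    set -o noclobber
    echo "sensitive output" > "$target" || echo "refusing: $target already exists"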
If I understand correctly, the issue is mutex on file objects at the kernel level. Basically it is a failing of the implementation of the "file" abstraction. Or perhaps a failing of the "file" abstraction itself?
Per Wikipedia [0] (because I had no idea what a TOCTTOU race condition was)
In the context of file system TOCTOU race conditions, the fundamental challenge is ensuring that the file system cannot be changed between two system calls. In 2004, an impossibility result was published, showing that there was no portable, deterministic technique for avoiding TOCTOU race conditions.[9]
Since this impossibility result, libraries for tracking file descriptors and ensuring correctness have been proposed by researchers.
It seems to me that solutions to the problem from inside the "files as an abstraction" space won't solve the problem.
I was curious to see how a different abstraction would avoid this problem. Plan 9 FS appears to have had this issue at one point [1], but notably not because of the FS implementation itself. Rather, the problem was caused by the underlying system executing outside of P9FS's ordered access to a given file (someone please tell me if my understanding is incorrect).
Here's an article that talks about the problems of this (and other) race conditions from the level of abstraction [2].
NOTE: I'm not claiming that P9FS is immune from this sort of attack, I'm only commenting on what I've found.
Exactly! I think the right way to deal with this is some sort of optimistic transaction support, such as SQLite's BEGIN CONCURRENT or git push --force-with-lease.
Sort of. Systemd took a lot of lessons from launchd, but also from SysV and Upstart. For anyone who hasn't read Lennart Poettering's "Rethinking PID 1" post[1], I highly recommend it. You'll understand the history, but most importantly you'll understand systemd a ton better.
>At the end all of your jails end up on the same loopback interface, making it hard to firewall. I couldn't find a way to have one network interface per jail.
FreeBSD jails have quite a bit of history: developed in 1998 and made public in 2000.
FreeBSD has had VNET for use with jails allowing you to add epair interfaces connected to a bridge, or add a physical interface or VLAN to a jail. This feature has been in FreeBSD since 8.0 (2009) and enabled by default since 2018. It also allows you to run PF in each jail.
Sadly, many people are bitten by outdated forum and blog posts when it comes to jails.
I agree that firewalling jails with loopbacks is a pain, but most people don't do it that way anymore.
Yeah, it's a weird argument. I've used vnet, but rarely need it. I bind my jails to their own IPs and that's that. That has been available from the very early days of jails.
Again, mature, simple and secure, with very few surprises lurking. There are likely fewer lines of code to support all jail functionality than in systemd.
> At the end all of your jails end up on the same loopback interface, making it hard to firewall.
I suppose you didn't use vnet? It's a vastly better jail networking experience. You can pretend jails are separate machines, connected via ethernet. I don't think anyone who knows about vnet chooses not to use it!
> I couldn't find a way to have one network interface per jail.
I think vnet is what you want.
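For reference, a vnet jail in /etc/jail.conf looks roughly like this (jail name, paths and bridge invented; assumes bridge0 already exists and the jail configures its own epair0b inside):

    web1 {
        path = "/usr/local/jails/web1";
        host.hostname = "web1.example.org";
        vnet;
        vnet.interface = "epair0b";   # jail side of the pair
        exec.prestart  = "ifconfig epair0 create up && ifconfig bridge0 addm epair0a";
        exec.start     = "/bin/sh /etc/rc";
        exec.stop      = "/bin/sh /etc/rc.shutdown";
        exec.poststop  = "ifconfig epair0a destroy";
        exec.clean;
        mount.devfs;
    }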
> nftables is way easier to grasp than pf, and as fast as pf, and has atomic reloads.
Systemd has only one advantage, which is also its prime disadvantage: its paws are everywhere in your system. Before there was systemd, init systems worked okay too - in that space nothing was changed by it.
1.5M lines of code is not an init system, it is a time bomb... and with dodgy wiring.
I think it was a ZFS dev who complained he had to update 150 files to port ZFS to systemd. Simples?
And you have to love all them binary log files, with their dynamic and spontaneous API. Yes, everything is always new and exciting with systemD.
I use Devuan, a Debian fork with a choice of init systems (well, everything but systemD ;)). It took the devs 2 years just to swap out/revert the init system.
Edit: I will concede that systemD is great for a lot of people. I honestly wish them success. The work that goes in to building, maintaining, and using it, is substantial. It must be a boon for the economy and job creation.
Oh that's rich. That is rich. The [Flagged] tag is what really makes it. I'm tempted to screenshot that post and make an NFT from it. It really does capture something about the zeitgeist.
Make no mistake, systemd is not (just) an init system. It's a replacement of more and more Linux userland. I suppose later on it will replace much of the filesystem, network, and process security management tools, stuff like xattrs, ip / ipfw, and selinux.
I suppose that the end goal of the systemd project is an ability to deploy a production Linux system with just systemd and busybox, and run all software from containers.
Not that it's a bad thing to strive for. But it's not going to be Unix as we know it.
> It's a replacement of more and more Linux userland.
To be more specific, it is a replacement of more and more Linux userland that nobody asked for. I would be OK if systemd were a process management system that replaces init with a standardised way of managing services (it would be amazing if it were written in a safe language, if it respected configuration files, if it did not take over managing system limits, etc. etc.).
Re nobody asked: I suppose that Red Hat and IBM sales departments sort of did, maybe the embedded teams did, too. Also, the Gnome team forced it on the desktop.
(I'm still able to run Void Linux as my primary desktop without systemd.)
I like how the userland and the kernel are in sync in FreeBSD; I dislike how Linux handles that. However, Linux has its core advantages: some distros are well suited and tested for certain scenarios. I used to run a bunch of FreeBSD boxes, but nowadays I use containers and lightweight runtimes. With Lambdas and Serverless, most, but not all, business APIs are streamlined to the point where the server matters not. It's the runtime.
Serverless is killing containers. It's killing the need to care about whether it's FreeBSD or Linux; does it run my API fast enough?
This whole "userland and kernel" thing sounds good, but in my BSD use, the trade-off is that most desktop app packages (and ports) get a whole lot less attention and are more buggy than those in Linux. I imagine it's not a problem for servers.
The reasons are, for a large part, not on the technical side. I was surprised, because this is a lot of work for little visible gain. Here are the reasons, slightly abbreviated:
> The whole system is managed by the same team
Mostly philosophical.
> FreeBSD development is less driven by commercial interests.
Mostly philosophical.
> Linux has Docker but FreeBSD has jails!
IMO, this comparison is a mistake. In the Linux world, systemd's nspawn is very similar to Jails. It's a glorified chroot, with security and resource management. All the systemd tools work seamlessly with nspawn machines (e.g. `systemctl status`). Containers à la Docker are a different thing.
BTW, I thought the last sentence about security issues with Docker images was strange. If you care about unmaintained images, build them yourself. On the other side, the official FreeBSD documentation about Jails has a big warning that starts with "Important: Jails are a powerful tool, but they are not a security panacea."
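For the curious, the nspawn workflow is roughly this (machine name and mirror arbitrary; assumes the resulting tree contains an init to boot):

    # build a minimal Debian tree and boot it as an nspawn "machine"
    debootstrap stable /var/lib/machines/demo http://deb.debian.org/debian
    systemd-nspawn -D /var/lib/machines/demo -b

    # the usual systemd tooling then works against it
    machinectl list
    machinectl shell demo
    systemctl -M demo status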
> Linux has no official support for zfs and such
Fair point, though I've heard about production systems with zfs on Linux.
> The FreeBSD boot procedure is better than grub.
YMMV
> FreeBSD's network is more performant.
Is there some conclusive recent benchmark about this? The post uses a 2014 post about ipv6 at Facebook, which I think is far from definitive today. Especially since it "forgot" to mention that Facebook intended to enhance the "Linux kernel network stack to rival or exceed that of FreeBSD." Did they succeed over these 8 years?
> Straightforward system performance analysis
The point is not about the quality of the tools, but the way each distribution packages them. Seems very very low impact to me.
> FreeBSD's Bhyve against Linux's KVM
The author reluctantly admits that KVM is more mature.
I have essentially the same take. The sysadmin at my company prefers FreeBSD for all these reasons (as such that's what we're running), and he's engaged me a tonne about FreeBSD but all I see is an operating system that's just as good as the other mainstream server Linux distributions. Except now we've got a system that's more difficult to hire for. "Any competent admin can learn it easily" is something I've been told but how many will want to when they could easily go their whole career without encountering it again?
I like your point about Docker vs Jails, I haven't seen it discussed like that before. I keep hearing Jails are more secure than anything else, I'll have to read more into it.
As far as the networking goes, I haven't seen any recent benchmarks to substantiate those claims either. However, considering Netflix uses FreeBSD on their edge nodes and has put a lot of work into the upstream to improve networking (among other things), it wouldn't surprise me if it's technically superior to the Linux stack. Clearly though Linux's networking isn't an issue for most organizations.
And regarding ZFS, ZFS on Linux and FreeBSD's ZFS implementation are now one and the same. It would be nice to see some of the big distributions (or even the Linux kernel) integrate it more directly. This is probably a solid point in favor of FreeBSD, but it's not like it doesn't work in Linux. I'm not a systems guy, so I'm probably out of the loop on this, but Proxmox is the only distribution I've seen with ZFS as a filesystem out of the box, and I don't know how much production use Proxmox sees. I only run it on my home server.
All that to basically say, I like FreeBSD conceptually. I'm just still not convinced that it's doing enough things better to warrant using it over a common Linux distribution for general computing purposes.
> However, considering Netflix uses FreeBSD on their edge nodes and has put a lot of work into the upstream to improve networking (among other things), it wouldn't surprise me if it's technically superior to the Linux stack.
Possible. But it's also possible that the founder of the team was a BSD person. You occasionally get cases where the preferences of one key person affect things in the long run. At this point I'm sure they're baked in, because removing all that code would not be worth the effort either; so even if it was better when Netflix went online, that doesn't guarantee it's still true today.
That being said, Linux is also used in a ton of places, including high network jobs. It would take a decent sized amount of evidence to convince me that all of those other places were wrong and basically only Netflix was right to use BSD in network heavy cases.
My suspicion is that either:
1. The difference is minimal to negligible, and not enough to justify mixed OS development
or
2. The difference is significant, but you have to be pushing your network much harder than most teams ever do for it to show up.
FreeBSD networking during WhatsApp's and Netflix's starting era was definitely better than Linux, both in their specific usage patterns. So it wasn't some ideological reason but an actual technical one. But that was 10+ years ago.
If I remember correctly, Meta / Facebook successfully switched all of WhatsApp's large bare-metal 4U servers to their standardised Linux blades by 2017. They also spent 3 years working on the Erlang BEAM and networking so it performed close enough.
Netflix continues to use FreeBSD and it is working well for them. The lead engineer comments and submits updates on HN frequently. Do a search and it should be easy to find. (On my phone now so I can't quote links.)
Now in 2022, I think any general advantage of FreeBSD over Linux in networking performance would likely be minimal. But it also doesn't make sense to switch to another OS just for the sake of unification.
If anyone wants to learn (as opposed to arguing), the papers on Netflix's TLS offloading work are a fun read.
And say what you will, something like 20% of all internet traffic has a FreeBSD endpoint. And doing 400Gb/s of encrypted streaming from one box is quite an accomplishment.
I would argue the reason someone like Netflix or any of the other large orgs using FreeBSD end up there is the simplicity/cohesiveness. If you're looking to do something like in-kernel TLS, it's way easier on something smaller, well documented, and with an OS development team that will likely incorporate your work into future releases.
> ZFS on Linux and FreeBSD's ZFS implementation are now one and the same.
Don't assume that this means feature parity, not by a long shot. ZFS on FreeBSD affords you native NFSv4 ACLs (hugely important in corporate settings), tight integration with jails, much better delegation support, the FreeBSD boot loader is actually updated to understand all the ZFS features, and it offers boot environments and checkpoint rollbacks right from the boot loader menu (without having to shuffle around with a dedicated /boot partition like you do on Linux).
If none of those features appeal to you, ZFS on Linux will seem very much the same. Indeed, within the limitations of Linux itself, ZFS does very well at providing every feature that Linux lets it provide.
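For example, the boot environment workflow before an upgrade is just (environment names invented):

    bectl create pre-upgrade      # snapshot the running system as a new boot environment
    bectl list
    # if the upgrade goes wrong, pick the old environment from the loader menu,
    # or switch back explicitly:
    bectl activate pre-upgrade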
I have mostly only seen FreeBSD used for virtual networking devices, so I think if that is your speciality (maybe someone in a devops / networking role - is there a name for that?) you probably encounter it quite a bit. I do not think I would bother learning it or using it for much outside of networking, just because it would make hiring competent people harder. Many times you also need to think not just about the very best solution from a purely technical perspective, but about what is best from both a people and a technical perspective.
One of the things that annoys me to no end ( and I am not sure if this is something specific to Netscalers or FreeBSD networking appliances in general ) is that the damn VM's are the only thing I run that does not properly report memory and CPU usage to VMware. From the hypervisor perspective they are constantly running at 100% CPU (their support has told me that the system reserves the full amount of CPU given but is not actually running that high - and when you actually connect to the box it self reports the number correctly). Not a huge deal but it annoys me to have that little red "X" in the GUI next to the VM name all the time.
That being said I also have not seen any recent FreeBSD vs Linux benchmarks but I would imagine if a company as large as Netflix is already using it en masse then there would need to be not just parity but tangible benefits to swap it all out for some other linux distro. An org starting from scratch of course would be a different beast.
I'm seeing a lot about how hiring competent people would be harder. But I think competence is independent of this, so perhaps what you're saying is Linux use masks incompetence? I would really question the chops of somebody who says they can work with Linux but not FreeBSD. People who really know their stuff on Linux would be able to figure it out.
And if you never see FreeBSD again... So what? The stuff you are building on top is probably more relevant than questions like this.
I've played with each of Linux, OSX and BSDs. The commands and syntax are similar enough, it would be a trifle, indeed a matter of pride for any *nix sysadmin to learn another Unix!
And that much should be expected, since they are both in the same Unix family tree (even if Linux was "grafted in").
> perhaps what you're saying is Linux use masks incompetence?
Why do I see so much of this snide needling from FreeBSD proponents? Small dog syndrome?
It matters when you don't want to hire someone to learn a system, and someone who can land a job with their Linux skills does not necessarily want to learn a different system (one that's in a lot less demand). It's a valid concern even if not an insurmountable problem.
The line you are quoting was, for the record, meant to be a tad humorous.
But in regards to your point... I guess what I'm saying is for most cases, the knowledge gap will be very small, and "hiring them to learn" is a huge overstatement.
Within that you will find niches where there's more of a gap. E.g. practical knowledge of kernel development will not be super transferable (though some of it will)
> The line you are quoting was, for the record, meant to be a tad humorous.
That's the needling. And I'm sure it's considered absolutely hilarious among the people engaging in it.
But what it comes down to is someone raises a valid concern and you dismiss it and question the competence of others.
You could have said something actually helpful and constructive without being bitter and dismissive, for example that you don't believe it is as big a problem as might be first thought because of the similarities between the systems, and the good FreeBSD documentation available.
Sorry but I see this attitude every time FreeBSD vs Linux topics come up. Not saying all FreeBSD proponents do it or no Linux ones do similar, mind you.
I think it's important to not take oneself too seriously and that is personally why I inject humor of that sort.
The actual point I'm making beyond that, and I think I make it reasonably well if I can say so myself, is that most of your Linux knowledge is transferable to FreeBSD, and vice versa. It therefore makes me skeptical when people suggest that it isn't so.
Solid point about networking. No one really seems to dispute its presence there and rightly so it seems (again, I'm a programmer not an admin so my personal experience is limited).
> Many times you also need to think not just about the very best solution from a purely technical perspective, but about what is best from both a people and a technical perspective.
This is huge when it comes to running the company's software imho. It's useless having the perfect solution if you can't get anyone to run it when "good enough" will have a plethora of folks ready and willing. Especially when it comes to managing the bus factor. When I asked the admin at my org about that, he explicitly said he didn't care about it (that's more an indictment of him than FreeBSD though).
Overall the human factor is something I've been trying to embrace more lately when it comes to work. For personal stuff though I'll be as esoteric as I like.
> the damn VM's are the only thing I run that does not properly report memory and CPU usage to VMware
> Is there some conclusive recent benchmark about this? The post uses a 2014 post about ipv6 at Facebook, which I think is far from definitive today. Especially since it "forgot" to mention that Facebook intended to enhance the "Linux kernel network stack to rival or exceed that of FreeBSD." Did they succeed over these 8 years?
I would not be surprised if there still were some cases where FreeBSD could do better. The network stack in particular has a vast amount of performance heuristics and tuning where you can just happen to hit the right or wrong side of some performance jump depending on your exact case, for example.
But to go from there to "the network stack is more performant" is puzzling. There are certainly cases where the Linux network stack is more performant than FreeBSD's -- search for Linux vs FreeBSD network performance and you'll find benchmarks, complaints, and anecdotes around the place, including FreeBSD forums and mailing lists, where FreeBSD is slower -- and yet FreeBSD proponents would (rightly) say these are not proof that Linux's network stack is faster, and that there might be all sorts of reasons for the differences. So you can't have it both ways.
Netflix uses FreeBSD for some things, yes. I'm not sure the number of big internet corporations using an OS is a favorable metric for FreeBSD either. Again, you can't have it both ways. Netflix shows that FreeBSD is capable of performing well in that kind of environment; it does not show that FreeBSD outperforms Linux in general, or even in that environment. Any more than Google shows the opposite.
> Especially since it "forgot" to mention that Facebook intended to enhance the "Linux kernel network stack to rival or exceed that of FreeBSD." Did they succeed over these 8 years?
I don't know about Facebook-driven efforts specifically offhand, but I recall attending a talk at LinuxCon 2015 _specifically_ about optimizing the network stack. Given by a Red Hat employee, IIRC.
Facebook has spent a lot of engineering resources on the kernel. Can't speak to their network stack, but if you want to see what they've contributed, they use emails under the @fb.com domain:
"Some time ago we started a complex, continuous and not always linear operation, that is to migrate, where possible, most of the servers (ours and of our customers) from Linux to FreeBSD."
I don't really disagree with any of the stated reasons, but I also didn't see a reason that would make me even consider making the move with our servers, or even bother with some small number of servers. At least for me, I'd need a bunch of REALLY GOOD reasons to consider a move like that. A huge cost savings AND some huge time savings in the future might do it.
I agree that the stated reasons don't sound very compelling. Maybe in aggregate, but not individually.
But they left out one of the bigger reasons, IMHO. FreeBSD doesn't tend to churn user (admin) facing interfaces. This saves you time, because you still use ifconfig to configure interfaces, and you still use netstat to look at network statistics, etc; so you don't have to learn a new tool to do the same thing but differently every couple of years. Sure, there's three firewalls, but they're the same three firewalls since forever.
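To make that concrete, the same everyday questions with the traditional tools versus their current Linux replacements (interface names are just examples):

    ifconfig em0        # address info; iproute2 equivalent: ip addr show dev eth0
    netstat -rn         # routing table;                     ip route show
    arp -a              # ARP/neighbour cache;               ip neigh show
    netstat -i          # per-interface counters;            ip -s link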
Deprecated on Linux. But I for one can’t consign them to the dustbin of my memory, because on my Mac, they are not deprecated, while the `ip` command that replaces them on Linux does not exist. With this part of macOS being derived from FreeBSD, I don’t know whether that makes FreeBSD a savior or a villain.
Personally I blame all of the major Unix-derived operating systems (Linux, macOS, BSDs), as none of them show any interest in standardizing any APIs or commands invented this millennium. The subset that’s common to all of them is frozen in time, and is slowly being replaced by new bits that aren’t. From epoll/kqueue to eBPF, containers/jails to seccomp/pledge, DBus/XPC to init systems… from low-level network configuration (ifconfig, ip) to high-level network configuration (whatever that is this month).
At first I wanted to say that while this is inconvenient, it is better for the larger ecosystem because we explore the problem space more. But the more I think of it, the more I see it as just a superficial exploration, not deep operating system research.
It's still used in some user spaces, and the Linux kernel supports the system calls for net-tools still today, because of the Torvalds Prime Directive, "thou shalt not break userspace".
The FreeBSD POLA design principle makes sure fundamental tools don't disappear or, worse, get duplicated over the years. Linux distributions differ vastly from vendor to vendor.
Since it was just an example, I don't think refuting this particular item will nullify the opinion. The idea, I think, is that there are always more pieces in a state of deprecation and replacement at any given time in Linux land than in FreeBSD land.
I think that's just due to the pace of development. The BSDs are resource constrained, so they have to pick and choose what to work on. That is both a good thing and a bad thing. Here the benefit is less churn. On the downside, they're just catching the Wayland train recently. On the up side, by catching it late they didn't suffer a lot of the growing pains.
Some people see the bigger picture and recognize that a medium amount of work now is better than lots of small amounts of work stretched over many years.
Likewise, some people and many businesses see the immediate now and aren't always the best at planning for long term, and/or are overly optimistic that their pain points will eventually be fixed.
> Likewise, some people and many businesses see the immediate now and aren't always the best at planning for long term, and/or are overly optimistic that their pain points will eventually be fixed.
But that's the thing. The pain points mentioned in this article aren't really that strong to need to ditch the OS and move to a new one. These kinds of decisions have huge tradeoffs.
One could argue, in fact, that moving to a non-traditional OS will make it much harder to hire experts or hand off the system to another team in the future.
I think of it this way: If you interpret "rolling stones gather no moss" to mean you've always got to keep pushing forward or you'll become obsolete, choose Linux. If you venerate moss as proof of the stability granted by doing the same thing simply and perfectly, you're probably already using FreeBSD.
After CentOS Stable was cancelled, I migrated a number of our platforms to FreeBSD because it met our needs and I enjoyed working on it. No surprises, nothing breaking, and most importantly, no drama.
What? FreeBSD as a non-traditional OS? I must disagree. FreeBSD may not be as common as Windows or Linux, but it's not like Plan 9 (from outer space) or BeOS.
Under what circumstances does the choice of backend OS ever deliver "explicit customer value", so much so that the customer would care about said choice?
Normally the customer doesn't care. They will care, however, if something goes wrong during the migration or some unforeseen issue comes up later that degrades their experience.
It is definitely the case that underlying architecture, certainly all the way down to the OS, can be the difference between "I can create, test, and deploy this feature in a week, and it will be rock-solid" and "it'll take months and still fail on some edge cases—and fixing those is not remotely in our budget".
I'd hazard to say that it delivers customer value via two avenues: (1) better SLAs, and (2) lower expenses.
If your backend OS runs software more efficiently, costs less to host, has less maintenance downtime, has fewer security incidents, has fewer crashes, etc., then having changed to it has produced customer value.
> Under what circumstances does the choice of backend OS ever deliver "explicit customer value", so much so that the customer would care about said choice?
Is the service working, is it not? (are there show stopping OS issues?)
Are the performance characteristics we need there, or not?
Is the service secure or not?
There's a whole lot of things right through that entire space where the choice of operating system can make quite a fundamental difference.
As a general statement, I wouldn't want to make this kind of a migration without something bordering on a killer feature that wasn't possible otherwise, or some fundamental driver problems (I've replaced Linux with FreeBSD on border devices in the past, due to particular issues with Linux in ways I couldn't afford to have keep biting me)
Quick: what OS is running on the machine at your bank that makes your debit card work in an atm? If you have to care about the answer to that question, your bank is doing it wrong.
The characterisation of systemd in this post really bothers me, particularly this:
> 70 binaries just for initialising and logging
It’s just not true. Those 70 binaries provide much more functionality than an init system, they can cover a significant portion of system management, including a local DNS resolver, network configuration or system time management. You can dislike the fact everything is so tightly integrated (which feels ironic given that the post goes on to praise a user space from one team), but let’s at least be correct about this.
> including a local DNS resolver, network configuration or system time management.
Do. Not. Want.
So, I have this pile of 70 binaries that are inexplicably tied to my init system, and (in my, informed enough for me, opinion) they're all garbage. How do I remove them without breaking init?
This month's fresh hell: I have a problem where the systemd replacement for xscreensaver (logind, maybe? Good luck finding the culprit, let alone the manual!) won't accept my password unless I exit the current X session with "switch user" then restore the session using the normal login screen.
There's a whole section on JWZ's xscreensaver page (from over a decade ago) explaining how to avoid this class of bug, but what does he know?!?
That reminds me; I wonder if a *BSD is a good enough daily driver for the Pinebook Pro yet (it's probably easier to port their kernel than to fix Linux userspace, after all...)
> So, I have this pile of 70 binaries that are inexplicably tied to my init system, and (in my, informed enough for me, opinion) they're all garbage. How do I remove them without breaking init?
Depending on distro, all can be substituted for alternatives (assuming they're used at all, I've seen a number of distros package the kitchen sink "just in case")
Nothing about using systemd as pid 1 requires using any of their other developed tools, even logind. Find a distro that doesn't use them, or roll one yourself.
It's a mistake to see "Systemd" as a single application - it's a collection of developed-together (so tend to work slightly nicer together) userspace tools - like FreeBSD without the libc and kernel, or GNU coreutils or similar. People don't seem to complain too much about those "Bundling Everything Together".
> Linux has Docker, Podman, lxc, lxd, etc. but... FreeBSD has jails!
Docker, podman, lxc, lxd, etc are userland components. Linux has cgroups and namespaces.
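Those primitives can be driven directly, with no container runtime at all; a rough sketch assuming a cgroup v2 layout (group name invented, run as root):

    # create a cgroup, cap its memory, and move the current shell into it
    mkdir /sys/fs/cgroup/demo
    echo 200M > /sys/fs/cgroup/demo/memory.max
    echo $$ > /sys/fs/cgroup/demo/cgroup.procs

    # then give it fresh PID, mount and network namespaces, the other half of a "container"
    unshare --pid --mount --net --fork --mount-proc /bin/sh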
FreeBSD jails are a bit more complicated because FreeBSD isn't distributed the way Linux is. Linux is distributed as just the kernel, whereas FreeBSD is a base OS. This probably could've been phrased better as, "Linux has no interest in userland and I want some userland consistency". That's fair, Linux was built around the idea that operating system diversity was a good thing long term, FreeBSD was more interested in consistency. I'm reading between the lines, a bit, here because of the critique of SystemD (note: not all linuxes use SystemD)
Personally speaking, I like both Linuxes and FreeBSD, but I don't think debating the two is important. Rather, I'd encourage turning your attention to the fact that every other component in a system exposes an OS-like interface that we don't make open OSes or "firmware" for.
> Consider systemd - was there really a need for such a system? While it brought some advantages, it added some complexity to an otherwise extremely simple and functional system. It remains divisive to this day, with many asking, "but was it really necessary? Did the advantages it brought balance the disadvantages?"
This is really telling for the level of analysis done: systemd has been the target of a small number of vocal complainers, but most working sysadmins only notice it in that they routinely deal with tasks which are now a couple of systemd stanzas instead of having to cobble together some combination of shell scripts and third-party utilities. Confusing noise with numbers is a dangerous mistake here, because almost nobody sits around randomly saying "this works well".
Linux took many markets. The HPC, for example, has been 100% linux in TOP500 for a few years already. Monopoly by FLOSS is still monopoly. Healthy competition is good for users and forces options to improve, see LLVM vs GCC.
To sum up: healthy FLOSS competition is welcome and needed.
This may not be the best place, but I have to. I'd like to tell you that, although we've had some disagreements on HN, I carry no bad feelings and, for as much as possible, have extreme respect for you and your opinions.
I'm glad our previous disagreements don't prevent us from posting when we do agree.
Sure, most of the stuff I kind of rant about tends to be founded on experience; you will seldom see me ranting about stuff I never worked with on a daily basis, and it always goes both ways, just like coins have two sides.
I think it's somewhat too late for UNIX to evolve in general.
There's too many decades of cruft and backwards compatibility built up. Afaict, most interesting new OS's being built right now are similar to UNIX, but very explicitly not.
I don't know about UNIX per se, but consider Linux and MacOS progress in the last two decades. MacOS showed that it is possible for UNIX to be successful on the desktop. During the same period, Linux scaled from embedded computers and smartphones to supercomputers and servers.
In terms of innovations, I'd bet MacOS has evolved too. Although "logical partitions"-like solutions were already known for some time, Linux made them widespread through containers; io_uring allows high-throughput, syscall-less, zero-copy data transfer, and futex2 makes it possible to implement the NT synchronization semantics that are very common in game development. All that ignoring just how much the desktop changed!
The UNIX children are definitely not sitting still.
I have always wondered what the world might be like if Apple had also focused big time on developing a server version of MacOS. I am very much not a fan of Apple as a company in general but I have always liked OSX quite a bit.
Then again they would probably charge 10x as much for a 2U rackmount server where the main difference was a nice stainless steel front fascia...
macOS’s best POSIX-level innovations are probably sandbox, xpc/launchd, and libdispatch. These have been copied elsewhere as Capsicum, systemd, and libuv (TBB?), but the originals are more consistently used.
To some extent you are right, however as POSIX actually won the server room it will stay around for decades to come, even when it is not fully exposed as you mention.
> FreeBSD's network stack is (still) superior to Linux's - and, often, so is its performance.
Where is this coming from exactly? The linked article about Facebook is 7 years old. The following benchmark shows the exact opposite: Linux's network stack has long surpassed FreeBSD's. And I would expect nothing else given the amount of work that has gone into Linux compared to FreeBSD.
Do you have any numbers you can share publicly? What's the reason it's better (Linux has in-kernel TLS as well, correct?)?. It would make for a great topic for Netflix TechBlog.
Have you guys done a ton of your own customization and modification to the FreeBSD stack?
Basically just wondering if both of you are correct and the default implementation of FreeBSD is in fact losing to other flavors of Linux but the networking customized OS' (not even just thinking of Netflix at this point) are still superior.
Most of our customizations and modifications have been upstreamed. We get similar performance on production 100g and 200g boxes when running the upstream kernel. I see no reason I shouldn't be able to hit 380Gb/s on a single-socket Rome box with an upstream kernel. I just haven't tried yet.
Most of the changes that I have for the 700g number are changes to implement Disk centric NUMA siloing, which I would never upstream at this point because they are a pile of hacks. They are needed in order to change the NUMA node where memory is DMAed, so as to better utilize the xGMI links between AMD CPUs.
It's mostly an experimental platform, never intended for production, to shake out issues that we'll see when next generation platforms show up with DDR5 and PCIe Gen5. With a current single-socket setup we'd be limited by memory bandwidth around 500Gb/s or so, but 800Gb/s should hopefully be within reach from a single socket with DDR5 based servers. Especially since the limiting factor now is that the traffic on the xGMI links between the sockets is unbalanced.
I homed in on this as well. I can't speak much to running FreeBSD as a server, but I can say that using it as a router is not a great experience compared to Linux. I can't even get the latest ECMP (ROUTE_MPATH) feature working with FRR (or even by hand).
I ran FreeBSD servers for about a decade. Now all my servers are Linux with systemd. I liked FreeBSD then; I'm happy with systemd now. I have commits in both.
I'm glad there are some people who use and prefer FreeBSD and other init systems now, because diversity in digital ecosystems benefits the whole, just as diversity in natural ecosystems does.
The shot taken at systemd here was disingenuous though. The author complained about the number of different systemd binaries and the lines of source code, but all these tools provide a highly consistent "system layer" with standardized conventions and high quality documentation -- it's essentially the same argument made to support FreeBSD as a large body of kernel and userspace code that's maintained in harmony.
It feels like the title is wrong. Instead of saying "Linux is bad because I encountered X problem in production, which would have been prevented by BSD" the author goes on to list why BSD is better in general outside his specific use case.
Nothing wrong with the comparison probably, but I got the impression the author just really wanted to do the migration and found some reasons to do so, without actually needing it. Nothing wrong with that as well. It's just the expectations set by the title that are off
I had a similar feeling, throughout reading the post I wanted to know what the specific issues were that made BSD better suited, it's all been too abstracted.
I think many of us (including me) have a tendency to try to quickly generalise our experiences, even when it's not appropriate - and when we go on to explain things to others without the original context it can sound too abstract or come across as evangelical. Either way, it loses meaning without real examples.
>There is controversy about Docker not running on FreeBSD but I believe (like many others) that FreeBSD has a more powerful tool. Jails are older and more mature - and by far - than any containerization solution on Linux.
If FreeBSD jails and Solaris zones were equivalent to Linux containers, we'd have seen them take over the backend already. We haven't. They're really useful, they provided a degree of safety and peace of mind for multi-tenancy but they're not granular enough for what's done with $CONTAINER_RUNTIME these days.
I think the problem is that docker is an excellent frontend, and zones and jails are excellent backends. People who say jails are better are probably right but they're missing the point, because they're not really solving the same problem; until I can use jails to create a container image, push it to a registry, and pull it from that registry and run it on a dozen servers - and do each of those steps in a single trivial command - jails are not useful for the thing that people care about docker for.
I ran a FreeBSD ZFS NFS server for a cluster for quite a while. I loved it. It was simple and stable. The thing that led me away from FreeBSD (aside from IT not being happy with an "alternative" OS), was that I needed a clustered filesystem. We outgrew the stage where I was comfortable with a single node and where upgrading storage meant a new JBOD.
Are there any FreeBSD-centric answers to Ceph or Gluster or Lustre or BeeGFS?
Ceph 14 was EOL on 2021-06-30. There are two major releases past v14 that don’t seem to be available through freshports. I’m not sure I’d qualify this as supported, and definitely not actively supported.
I don't have enough experience with FreeBSD (outside of FreeNAS seven years ago), but I've never had any success getting it to run on a laptop. Every time I've tried installing it on a laptop I get issues with either the WiFi card not working, issues with the 3D accelerator card not working at all, or the lid-close go to sleep functionality not working.
I've been using Linux since I was a teenager, so it's not like I am a stranger to fixing driver issues, but it seemed like no amount of Googling was good enough for me to fix these problems (googling is much harder when you don't have functioning wifi). As a result I've always just stuck with Linux (or macOS semi-recently, which I suppose is kind of BSD?).
I've recently switched my home server to FreeBSD after it was on Debian for who even knows how long (Debian still virtualized on bhyve for some tasks).
My take: I love FreeBSD as a server os. It's really, really well designed. Spend a bit of time with the handbook and after a while it's really simple to hack on. I really like the separation of the base OS, I like the init, I like jails, documentation puts every Linux distribution to SHAME.
But laptop?... Unless you scour for exactly the parts that will work (especially for wifi), you're going to have a bad time, even on old equipment. At the same time as I rebooted my server, I had an old ThinkPad I decided I wanted to put FreeBSD on, for the hell of it. I gave up on it. It may have been possible, but it was just too much work. In this day and age, when almost any Linux desktop distro boots without a hitch on hardware that's not completely brand spanking new, it was just not worth it.
That was basically my experience. I had tried multiple times on old laptops, thinking that maybe someone had developed a better driver by this point, and it just didn't happen.
Usually when this happens in Linux, a few hours of Googling, swearing, and retrying is enough to fix my problems, and I'm sure that with enough time that approach would have worked in FreeBSD as well, but I always grew impatient.
> almost any Linux desktop distro boots without a hitch on hardware that's not completely brand spanking new
Can't speak for anyone else, but even for brand spanking new hardware, as long as I've stuck with AMD drivers, nowadays Linux "Just Works" when I boot it. Obviously YMMV between systems, but I feel like Linux has finally become competitive with Windows and macOS from a usability standpoint.
You might also end up carefully researching all the parts, only to find out that a new major FreeBSD release breaks hardware support for something. Had that happen to video on one of my netbooks.
I haven’t actually verified it for myself, but I’ve read several times that for BSD on laptops, OpenBSD is generally a better experience, supposedly because the OpenBSD devs heavily dogfood (using OpenBSD to develop OpenBSD) whereas FreeBSD devs tend to use other operating systems (predominantly macOS, apparently) on their laptops/desktops.
I use OpenBSD on my laptop. Honestly it was a better experience than even Arch Linux or the like. More things worked out of the box (it ran on a ThinkPad), and there is love put into some simple commands - the zzz and ZZZ commands are actually super useful and you can really tell the people building the OS use 'em themselves.
I have heard that too, and at some point I might try it out.
Honestly I just wish there was a way to run ZFS as a core partition on macOS. APFS is pretty good, but a full on ZFS thing is really cool, and makes FreeBSD so much more appealing as a result.
I don't think relying on kernel extensions on Mac is a good idea. It takes a lot to get it working and I suspect it will be brittle.
I've had a hard enough time in the past with out of tree drivers on Linux. I can't imagine what it's like with an OS vendor as hostile to the concept as Apple is.
I've had similar issues when trying to run it in Hyper-V or VirtualBox - issues with network adapters and disk not being recognised. I've tried FreeBSD and FireflyBSD, and I give it another try every year or so, but always hit the same walls.
I've run FreeBSD in Hyper-V and VirtualBox with no issues, are you starting from the virtual machine images or the installer images? If you haven't tried the virtual machine images, they're linked from release notes, and 13.0 is available here: https://download.freebsd.org/ftp/releases/VM-IMAGES/13.0-REL...
I set up Hyper-V for the first time last summer, and I seem to recall it just working.
I've read hundreds of posts surrounding this issue. I've used laptops for decades but I don't think I have ever used this feature. It doesn't take even a second to hit a couple of keys and sleep a laptop. The only feature lower on my priority list would be syncing the RGB keyboard to soundcloud. But that's just me. Evidently the lid-close-sleep thing is of vital import to millions of laptop users. It's one of those things where I just shake my head in bewilderment.
Having computers do things automatically is kinda the whole point of what we do.
> Evidently the lid-close-sleep thing is of vital import to millions of laptop users.
It's one of many little things that takes a laptop from feeling like some weird barely-functioning misfit of a toy to an actual portable tool that you don't have to worry about or baby.
I mean, sure, I'm an engineer, I'm sure I can figure out a workaround, but I like the feature of going to sleep on lid close. It's a feature I use, and I couldn't get it working in FreeBSD.
Maybe get out of your rut and try it before mocking everyone who uses it and posting long rambling comments about how you “don’t understand” but still think you have enough understanding to judge “millions” of people as beneath you?
One of the features I always actually make sure to turn off - I am in the same boat and there are many times when I want to close the lid of my laptop but still have some stuff chugging along. Saves a lot of battery that way!
This articulates most of my frustrations with the Linux world.
Some of the distros are very good, but some of us who have work to do cringe at the thought of bringing up newer versions of an OS just to check all the things that've broken and changed needlessly.
I hear people always complaining about 'needless' changes like systemd over init. But I sure like how my modern database servers reboot in less than 30 seconds, vs 5-12 minutes when they were running RHEL 5. (yes, some of that is because we moved from BIOS to UEFI)
Hmm, why do I feel that a) the older servers verify memory banks fully before boot (which takes minutes) and b) the change to solid-state media has had the biggest impact, not systemd. Can you confirm that at least a) is not the reason for slow boot times?
I know that anecdote is not data, but I had an Arch Linux laptop when the distribution moved from Init scripts (SysV init ?) to systemd, and the boot time was easily cut in three, going from 30s to 10s, on exactly the same laptop. Of course switching from HDD to SSD was a huge improvement, but don't discount what systemd's efficient boot parallelism was able to achieve.
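If you want to see where boot time actually goes on a given box before crediting or blaming the init system, systemd will itemise it for you:

    systemd-analyze                  # totals for firmware, loader, kernel and userspace
    systemd-analyze blame            # per-unit start times, slowest first
    systemd-analyze critical-chain   # the dependency chain that gated boot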
It’s worth keeping in mind that the old Linux rc scripts were quite mediocre compared to their BSD counterparts. They didn’t even have a dependency mechanism. So, it’s fine to compare SysV Linux scripts to systemd, but please don’t extrapolate that to other systems.
I did not know that the BSD ones are more advanced, thanks for pointing it out. Out of curiosity, are there alternative init systems in the BSD world?
Yeah, maybe you're right. I actually had this conversation with my PM the other day - I work on a fairly well known database. They were asking us to improve the startup time of the product.
If it's something like a galera cluster then not really, I think. You want the updated machine to become operational again in a reasonable amount of time, because during the time it's down you have fewer points of failure. But the difference between 2 minutes and 5 minutes in this case is not a big deal, in my opinion.
Sure (although 5-12 minutes is very odd no matter how you slice it), but that's a bit of a false choice. I have nothing against systemd personally, but I recognize more and more that a good part of the reason is because I never had to use it in anger.
There were other choices that Linux could have taken rather than sysvinit vs init.d and some distros through the years have taken different approaches. FreeBSD's rc.d is not sysvinit either.
The Linux machines that I have managed that took anywhere close to that long to boot (older IBM and HP) just have a very lengthy BIOS/POST start-up process. Once you got through that, the OS (RHEL 5 at the time) booted up pretty quickly.
>There were other choices that Linux could have taken rather than sysvinit vs init.d
I don't think that is true. If you look at those other distros you'll mostly find that it either didn't pan out, or it's quickly approaching the same design and architecture as systemd, where distros start using declarative programming to integrate tooling around common workflows. This is inevitable when you consider the constraints on the system as a whole. Other related service management tools like Docker are under the same constraints, and you'll notice those have a similar architecture too.
I've run a lot of Linux machines on a lot of hardware for a lot of years and have no idea what could have been going on that would cause a 5+ minute boot, that was also unnecessary enough that a different BIOS could get it down to 30s. Back in the bad old days of circa ~2000 when I ran Linux on a variety of garage-sale potatoes, I don't think any took 5 minutes to boot.
“ The system is consistent - kernel and userland are created and managed by the same team”
Their first reason is really saying a lot but with few words. For one, there’s no systemd. The init system is maintained alongside the entire rest of the system which adds a lot of consistency. The documentation for FreeBSD is also almost always accurate and standard. Etc etc
I think you also largely don’t need Docker or the like, since jails have been native to the OS for decades. I’d want to do some cross comparison first though before committing to that statement.
It shouldn’t be lost that the licensing is also much friendlier to business uses. There’s afaik no equivalent to RHEL, for that matter. This goes both ways though: how would you hire a FreeBSD admin based on their resume without an RHCE-like FreeBSD certification program?
Edit: I’ll posit that since FreeBSD is smaller, an entity wishing to add features to the OS might face either less backlash or at least enjoy more visibility from the top developers of the OS. Linus, for instance, just has a larger list of entities vying for his attention on issues and commits.
To be fair all of these reasons come down to personal preference (sans the TCP performance claim). E.g. he prefers FreeBSD’s performance monitoring tools to Linux’s monitoring tools, or he prefers FreeBSD’s user land to Linux’s user land. That’s fine but it’s not very persuasive.
vmstat -z gives me counters of kernel limit failures on FreeBSD. Very useful when debugging errors in very high performance environments. Anybody knows what the Linux equivalent is? Say I need to know how many times file descriptor/network socket/firewall connection state/accepted connection limits were reached?
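On FreeBSD that's a one-liner; on Linux the closest I've found so far is scattered across several places (corrections welcome; exact paths can vary by kernel and config):

    # FreeBSD: the FAIL column counts allocations refused at a zone limit
    vmstat -z | grep -i -e ITEM -e socket

    # Linux: bits and pieces
    cat /proc/sys/fs/file-nr                # file handles: allocated, unused, max
    ss -s                                   # socket totals
    netstat -s | grep -i listen             # accept-queue overflows and drops
    cat /proc/sys/net/netfilter/nf_conntrack_count \
        /proc/sys/net/netfilter/nf_conntrack_max    # conntrack usage vs limit (if loaded)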
Things I actually care about:
kernel that supports my hardware, ZFS for secure snapshotted data, scriptable tools to manage NICs and ppp and VPNs, a fast optimised C++ compiler, the latest versions of dynamic language runtimes, a shell, a text editor, a terminal multiplexer, a nerdy window manager and an evergreen browser.
On that playing field, the integrated nature of FreeBSD is nice, but it’s an asterisk on top of the kernel rather than anything approaching what makes up the system part of an Operating System. Almost everything else comes from a third party (and I’m fine with that.)
I haven’t used FreeBSD as a daily OS for over a decade though. What’s the new coolness?
"Btrfs is great in its intentions but still not as stable as it should be after all these years of development." may have been true years ago, but doesn't seem to be anymore.
Pick any given feature and a FS may or may not support it well. RAID56 did have a major update, but still has the write hole and isn't recommended for production.
I think the point still stands though. There has been lots of stabilization work done to BTRFS, and anyone using production-recommended features should consider the filesystem stable.
The key point is not Linux vs FreeBSD. It is simply choice. You have a real choice. Do it this way or that - do it your way. I like both Linux and FreeBSD but I deploy them differently.
I slap Linux on my servers and desktops and I deploy FreeBSD via pfSense on firewalls.
Sometimes I do experiments and try out BSD on the desktop which hasn't worked out yet for me but I live in hope because I adore *BSD as much as I do Linux.
If BSD is the way to get your servers to do what you want then lovely. Do it and remember you have choice.
I use FreeBSD for one project on node/js and for ssh jumper boxes. I have also been admining Linux boxes since '99. I have no hate for systemd - there was just a learning curve. I like the basic rc.conf setup that FreeBSD has: everything can go in this one file for startups. Binary updates have been around for years, so doing security updates is easy with no need to rebuild world or compile. You can use pkg for third party installs (binaries), although they don't always follow the version in ports. Security-wise, kern_securelevel / ugidfw. freebsd-update also allows for easy updating between major OS releases. ZFS on root just works on FreeBSD. PF / ipfw to me makes much more sense than iptables (I haven't really moved to nftables).
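For example, a minimal rc.conf along those lines (host name and addresses invented):

    # /etc/rc.conf: one declarative file for the base system
    hostname="web1.example.org"
    ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.0"
    defaultrouter="192.0.2.1"
    sshd_enable="YES"
    pf_enable="YES"
    pf_rules="/etc/pf.conf"
    zfs_enable="YES"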
When I compare to ubuntu which is the OS I use for linux mostly now:
* kvm is superior to bhyve in every way
* automating security updates via apt is better than a combination of freebsd-update/pkg updates. Plus the deb packages are made by Ubuntu and just work; ports/pkgs are third party on FreeBSD
* rebootless kernel updates exist for ubuntu
* It is easier to find people familiar with linux right away
Really though the learning curve of freebsd <-> linux is not high.
Who mentioned anything about an Ars article? I read the whole debacle on mailing lists and Twitter. That was enough to form my own conclusions, and I did not come away from that impressed with the FreeBSD development environs. It definitely flies in the face of the claim that FreeBSD somehow upholds technical excellence above business interests, as is being claimed in the article.
FreeBSD 13 came very close to shipping with a WireGuard implementation with many bugs and vulnerabilities that were quickly identified by the creator of the WireGuard protocol shortly after learning about the update.
Attempts were made to fix it, but they eventually decided to ship 13.0 without WireGuard.
It was very political, because the company that sponsored the development had already promised the feature to customers.
While I get the author's reasoning, it makes me wonder at what scale, portability and level of automation and disposability all of this is done.
Even if an OS is 'better', a VM with a short lifetime will generally be 'good enough' very quickly. If you add a very large ecosystem and lots of support (both open source and community as well as commercial support) and existing knowledge, FreeBSD doesn't immediately come to mind as a great option.
If I were to go for an 'appliance' style system, that's where I would likely consider FreeBSD at some point, especially with ZFS snapshots and (for me) the reliable and fast BTX loader. Pumping out BSD images isn't hard (great distro tools!) and complete system updates (due to the mentioned "one team does the whole release") are a breeze as well. This is of course something we can do with systemd and things like debootstrap too, but from an OS-image-as-deployable perspective this will do just fine.
First off FreeBSD FTW. I use it everywhere over Linux now for the first time in 25 years and couldn’t be happier. My only wish is that BSD had a better non-CoW file system. Databases and Blockchains are already CoW so it does irk me slightly to use zfs for them. That being said, I’ve never had a problem because of it.
That's one of the fields FreeBSD is bad at: it's not really possible to get info on the current "normal" file system, UFS2.
This latest version has something called "journaled soft updates" and it's a metadata-journaled system, i.e. the actual data is passed through, and it's non-CoW.
I don't think there's much (anything?) in UFS that would lead to poor performance other than the usual suspects:
If your disk is slow or dying, you might blame UFS, but it's not really UFS.
I've had some vague issues with the I/O scheduler, which isn't really UFS either, but at the same time UFS may be the only real client of the I/O scheduler (I think ZFS does its own thing); anyway, the systems in question were UFS-only. This is super vague and I don't have more details, but I just want to put it out there. For one class of machines that had a lot of disks (about 12 SSDs) and did a pretty even mix of reads and writes, evenly spread across the disks, upgrading from FreeBSD X to X + 1 wasn't possible because there was a large performance regression. I think this was 10 -> 11, but it's possible it was 11 -> 12. Because this came up while my work was already migrating to our acquirer's datacenter, which included switching to their in-house Linux distro, it made sense to just leave those hosts on the older OS and not spend the time debugging it. We didn't have a way to test this without production load, but that had user impact and would take a while to show up. It's quite possible this was just a simple tuning error, or a bug that has since been fixed; the symptoms were obvious though: processes waiting on I/O while the disks had a lot of idle time.
If you have a lot of files in a given directory, that's kind of slow, and IIRC, if the directory ever had a lot of files, the extra space won't be reclaimed until the directory is deleted, even if most of the files are unlinked. (This isn't uncommon for filesystems; some handle it better than others, and there are application-level strategies to deal with it, such as hashing into deeper directory trees.)
If the filesystem ever gets too full, the strategy for finding free space changes to a slower one; it won't change back, so don't fill your disk too much. (This ends up being a good idea for SSD health too, and again isn't unusual among filesystems, though some probably do better.) tunefs(8) says:
> The file system's ability to avoid fragmentation will be reduced when the total free space, including the reserve, drops below 15%. As free space approaches zero, throughput can degrade by up to a factor of three over the performance obtained at a 10% threshold.
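If you want to check or raise that reserve yourself, tunefs can do it on an unmounted (or read-only mounted) filesystem; a small sketch with a made-up device name:

    # Print current tuning parameters, including the minfree reserve
    tunefs -p /dev/ada0p2

    # Raise the minimum free space reserve to 10%
    tunefs -m 10 /dev/ada0p2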
UFS has snapshots, which is great, but every once in a while you end up with a snapshot you forgot about, and it can really eat disk space without you noticing. Not really a performance issue, but it can lead to overfilling your drive.
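If I remember correctly, the base system ships snapinfo(8) for finding exactly these forgotten snapshots, and snapshot files also carry a file flag you can search for; treat both invocations below as from memory and check the man pages:

    # List UFS snapshots on mounted filesystems
    snapinfo -a

    # Or look for files carrying the snapshot flag
    find / -flags snapshot 2>/dev/null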
Of course, there's the obvious point that UFS has no support for checksumming, but that's not performance. Soft updates do allow for some amount of metadata consistency, and background fsck is nice (but could tank performance, I suppose).
I haven't made much use of them, but ZFS mirrors and raidz vdevs seemed to perform more or less in line with expectations (consumer hardware may not have the PCIe lanes really available to run multiple fast NVMe devices well).
For instance, what is your ARC configuration in this case? It can have a massive impact on performance.
Getting ZFS to perform well takes a bit of work, but in my opinion its performance is on par with most filesystems (and it has a ton of additional features).
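As a concrete example of that kind of work, capping the ARC is a one-liner on either OS (the 8 GB figure is only an example, and older FreeBSD releases may want the value in bytes):

    # FreeBSD: cap the ARC via a loader tunable in /boot/loader.conf
    vfs.zfs.arc_max="8G"

    # Linux (OpenZFS): the same cap as a module parameter
    # in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592   # 8 GiB in bytes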
No, it doesn't: there's a hard cap. I spent a long time trying to replicate the performance I was accustomed to from XFS.
L2ARC can improve cached reads, but it's not magical, especially not for random reads... or writes. (And yes, I know about SLOG, but doing async is faster than improving sync.)
And don't get me started on how ZFS doesn't use mirrors to improve read speed (unlike mdadm; cf. the difference between the o3, n3, and f3 layouts) or how it can't take advantage of mixed arrays (e.g. a fast NVMe plus a regular SSD or HDD to add redundancy: all the reads should go to the NVMe, and the writes should go asynchronously to the slow media!).
If you don't have a RAID of fast NVMe drives that are each given all the lanes they need, you may not see a difference.
But if you are running bare metal at close to 100% of what your hardware allows, with full choice of everything you want to buy and deploy, you'll hit these limits very soon.
In the end, I still choose ZFS most of the time, but there are some use cases where I think XFS over mdadm is still the better choice.
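For reference, the o/n/f letters above are mdadm RAID10 layouts (offset, near, far); the far layout is the one that helps sequential reads by striping them across mirror members. A hedged sketch with made-up device names:

    # Two-way mirror with the far-2 layout: sequential reads are striped
    # across both members, unlike a plain mirror
    mdadm --create /dev/md0 --level=10 --layout=f2 \
          --raid-devices=2 /dev/sda /dev/sdb

    # Then put XFS (or anything else) on top
    mkfs.xfs /dev/md0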
> I sometimes experienced severe system slowdowns due to high I/O, even if the data to be processed was not read/write dependent. On FreeBSD this does not happen, and if something is blocking, it blocks THAT operation, not the rest of the system.
I’ve seen this for a long time in Windows, where any prolonged I/O brings the entire system to its knees. But it also seems to affect macOS (parts of whose kernel come from FreeBSD) as a system, though it’s not as bad as on Windows. Has Windows improved on this over the years? I’m unable to tell.
Adding onto this; the parts of the kernel that were from FreeBSD were taken two decades ago, without much if any attempt to follow up and rebase. I don't know about the disk I/O subsystem, or even if it was taken from FreeBSD, but the 2000 era FreeBSD tcp was scalable for 2000 era machines (although it would have been nice if Apple had taken it after syncache/syncookies), but needs changes from the 2010s to work well with modern machines. I'm sure that similar improvements have happened for disk I/O, but I just don't know the details. Not a lot of people would run a FreeBSD 4.x kernel today, but that's what's happening with the FreeBSD derived kernel bits in Mac OS.
The funny part about this is I actually end up installing a lot of the GNU tools so I can have some level of parity when writing (mostly shell) code on MacOS.
Windows seems to rely on disk accesses in a lot of critical processes which is why it tends to have GUI lockups and slowdowns under I/O load. Even opening task manager, or just switching tabs in it, while the system is loaded can take a few seconds (~dozens to hundreds of billions of cycles).
I have no idea how operating systems work, but at a wild guess: if you're running the OS off the same disk that's thrashing with I/O, maybe it has queued too many read/write operations to handle the reads the OS needs? And maybe FreeBSD is just so small it effectively caches itself into memory and doesn't need the I/O, so it appears to still function?
I guess, based on the downvotes, that I'm wrong -- but the downvoters don't seem to know either?
The reason I want to know is that when I tried running Ubuntu off a spinning disk and wanted to move some games to the drive, it would transfer about 1 GB and then lock up. That didn't happen with an SSD.
I used FreeBSD for many years (on servers) between 2001-2009. I also used it as a personal machine in the 90's. We used it for stability, at which it did well. The real problem was that everything was moving to Linux. The Linux kernel and community kept up with bleeding-edge hardware and software. The stability of Linux continued to improve, and most people stopped compiling custom kernels anyway. I used to compile most user-space software too, and almost never do now. That largely negated the FreeBSD benefits.
Interesting, I had the exact opposite experience. Because Linux always felt the need to support the latest gadgets, it became less stable over time. There is no feature in Linux that I use now that I did not use 10 years ago.
FreeBSD is a nicer, more logical Unix than Linux in general. Now as soon as you have a package or hardware that you want to use and it's not supported by FreeBSD let us know how that goes.
It may be a more “logical unix” but most engineers (including me) don’t care about that. I started using Linux in the mid 2000s and had never heard of unix at the time and only discovered it by reading about Linux.
I have been forced to work far too much with Citrix Netscaler virtual networking appliances and while I can see how it was probably a great product before Citrix purchased it the amount of bugs and regular security holes in it is insane. Especially for a damn networking appliance!
That being said it also forced me to use FreeBSD a lot more than I ever would have otherwise and I have a lot of respect for the OS itself. I would not use it everywhere but it has amazing latency which makes it obviously great for networking.
Congratulations, I did the same ~5 years ago and couldn't be happier: jails, bhyve, dtrace, ZFS/UFS, pf, compressed GEOM, pkg/ports, etc. etc. Nearly every day I find some useful feature, and when I try it out... it works!!
FreeBSD is most likely slower than Linux in most scenarios. ZFS is supported natively in Linux (Ubuntu), and jails are terrible compared to Docker; since Docker is very popular there are millions of tools built around it, and it's not just for sandboxing, it's part of a complete development process.
Who cares about the boot process on a server, seriously?
"FreeBSD's network stack is (still) superior to Linux's - and, often, so is its performance."
This is wrong; if it were the case, most large companies would use BSD, and at the moment they all use Linux. The only large company using BSD is Netflix, because they added in-kernel TLS offloading for their CDN, which could have been done in Linux, btw.
IMO, don't use tech that is not widely used; you'll end up reinventing the wheel in a worse way because tool a.b.c is missing.
Not sure about 1997, but definitely in 1998: I've been a Linux user since 1995, when I worked for a (then small, now big) software company. I set up their web site on Slackware Linux, building an early content management system, an early content delivery network, etc. But after I left in 1998, one of the first things my replacements did was change all the web servers to run NetBSD rather than Linux, because they said it had a better networking stack.
tl;dr - FreeBSD has ample nice features for their use case, and is considerably simpler. Linux has loads of unneeded (for their use case) features, and so many cooks in the kitchen that the ongoing cognitive load (to keep track of the features and complexity and changes) looks worse than the one-time load of switching over to FreeBSD.
Netflix is pushing 400Gbit/s of TLS traffic per server with 60% CPU load. WhatsApp was doing millions of concurrent TCP connections per server. FreeBSD's networking has been multi-threaded for a long time now.
Well, to be honest, a whole lot has been done for FreeBSD network stack scalability quite recently (14-CURRENT), e.g. the introduction of epoch(9) and the routing nexthop patches.
I don't think that's fair. Maybe half of their reasons are more on the subjective side but half of them are actual technical choices like wanting ufs/zfs and jails.
ZFS on FreeBSD is built from the same sources as on Linux. There is not a single word in the article about why to choose jails. The article is a 90s-style rant, which is not a bad thing.
> ZFS on FreeBSD is built from the same sources as on Linux.
You should know that due to license incompatibilities (CDDL vs. GPLv2), the Linux integration isn't really as smooth as the BSD one (I wonder how Canonical avoids this issue). The OpenZFS codebase is (for the most part) a monorepo, but the in-kernel integration differs vastly between the two.
> The issue isn’t any different from using closed source NVidia drivers with FreeBSD kernel.
There's a significant difference between almost all devices and essential devices like storage: you need to have the storage online as soon as possible, graphics are purely optional in the boot process. If you only use ZFS as "external" storage, sure it's effectively not different but as a boot drive?
All the "advantages" of FreeBSD are really just personal preferences, and little more. E.g., FreeBSD jails are not a replacement for containerization in any way. The FreeBSD network stack is better? I'll bet you can talk to a Linux kernel expert who will explain why exactly the opposite is true. And things being "simpler" in *BSD? Simpler is not always better. SystemD may be somewhat over-engineered, but it's also powerful as hell and can do things the old rc.X system couldn't dream of doing.
There's nothing wrong with switching to another OS, but implying it's because the other OS is somehow empirically "better" is misguided.