A: "Use BSD"
Q: Why is there such a strong focus on squeezing network performance out of Linux when (I think) everyone agrees BSD is better at networking? What does Linux offer beyond the network stack that BSD doesn't, for applications that demand the fastest networks?
We're not consuming the built-in network stacks anyway; we're using the OS as a content delivery system. Give us something that lets us get at the cores: we pin our applications directly to them, relegating the kernel to scheduling its tasks on whichever NUMA node is farthest from the PCIe bus running into the CPU. The CPUs we're using are pegged in a constant spin loop anyway, which makes the scheduler think really, really hard before running anything there. We don't use realtime kernels, as it's better for us to pay the price of the occasional outlier spike than to raise the baseline latency.
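To make that concrete, here's a minimal sketch of the pinning-plus-spinning pattern, assuming Linux; the core number and the polling body are placeholders:

    /* Minimal sketch, assuming Linux: pin the calling thread to an
     * isolated core (e.g. one carved out with isolcpus=) and busy-poll
     * so the scheduler has every reason to stay away. */
    #define _GNU_SOURCE
    #include <sched.h>

    static void pin_and_spin(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        sched_setaffinity(0, sizeof(set), &set);  /* 0 = calling thread */

        for (;;) {
            /* poll the NIC ring / shared memory here; never block */
        }
    }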
Due to my own unfamiliarity, I don't know what BSD's equivalent to isolcpus is. I don't know how to taskset on a BSD. I don't know whether the InfiniBand/Ethernet controller's firmware and kernel-bypass software work there. I don't know how BSD's scheduler behaves (not that we usually care, but there are times when work can't avoid being scheduled: RPC calls to shut down an app, ssh if you can spare the clock cycles for key verification, etc.).
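For what it's worth, FreeBSD's rough analogs appear to be cpuset(1) for taskset and cpuset_setaffinity(2) for sched_setaffinity. A hedged, untested sketch of the same pinning, going off the man page:

    /* Hedged sketch of the FreeBSD analog of sched_setaffinity,
     * per the cpuset_setaffinity(2) man page; untested. */
    #include <sys/param.h>
    #include <sys/cpuset.h>

    static int pin_current_thread(int core)
    {
        cpuset_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        /* id of -1 with CPU_WHICH_TID means the calling thread */
        return cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID,
                                  -1, sizeof(set), &set);
    }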
Would DTrace come in handy? Most definitely. Is that enough for us to abandon what we know works? Not yet.
However, we're talking about very specific needs: super high performance networking. If you have that specific of a need, wouldn't you want something unfamiliar if it solves the problem best?
I remember these claims being made in the late 90s, and perhaps they were true back then, but it's been 15 years, and I would be surprised if Linux hasn't caught up by virtue of its faster development pace, greater mindshare, and increased corporate/datacenter usage.
So, in all seriousness: what recent, well-argued essays/papers can you refer me to so I can understand the claim that BSD networking is still better than Linux in 2015?
It was also linked to from HN: https://news.ycombinator.com/item?id=8167126
Both have some pretty good sources (and some not-so-good sources too).
Shipped in FreeBSD by default, developed for FreeBSD first -- and that's just the bleeding edge side of things.
Time for an epoll-for-kqueue swap in these benchmarks, to make this performance debate go away for both Linux and FreeBSD once and for all. There's no reason for this pissing contest.
(As in, the machinery behind Registered I/O is based on concepts that can be traced back to VMS, released in 1977: namely the IRP, plus a kernel designed around waitable events rather than runnable processes.)
epoll requires more syscalls to do the same stuff.
That alone isn't responsible for the speed difference, but at this level syscalls are a meaningful expense.
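A sketch of where the extra syscalls come from (illustrative only, error handling trimmed): kqueue lets you batch registrations and event retrieval into a single kevent() call, while epoll pays one epoll_ctl() per descriptor plus the epoll_wait():

    /* Registering N descriptors and waiting for readiness.
     * With kqueue this is one kevent() syscall; with epoll it is
     * N epoll_ctl() calls plus an epoll_wait(). */
    #ifdef __FreeBSD__
    #include <sys/event.h>

    int ready_kqueue(int kq, int *fds, int nfds,
                     struct kevent *events, int maxev)
    {
        struct kevent changes[nfds];
        for (int i = 0; i < nfds; i++)
            EV_SET(&changes[i], fds[i], EVFILT_READ, EV_ADD, 0, 0, NULL);
        /* one syscall: submit every registration AND collect events */
        return kevent(kq, changes, nfds, events, maxev, NULL);
    }
    #else
    #include <sys/epoll.h>

    int ready_epoll(int ep, int *fds, int nfds,
                    struct epoll_event *events, int maxev)
    {
        for (int i = 0; i < nfds; i++) {
            struct epoll_event ev = { .events = EPOLLIN,
                                      .data.fd = fds[i] };
            epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev); /* one per fd */
        }
        return epoll_wait(ep, events, maxev, -1);      /* plus one */
    }
    #endif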
At that point ease of management or package installation probably matters more to developers, and it might simply come down to things like driver support and other stability/performance issues, where Linux has gotten a LOT of highly specialized attention from hardware vendors and the HPC world. Back when 10G hardware was just entering the market, we bought cards which came with a Linux driver, but it took a while before that was ported to FreeBSD, and longer still before the latency had been optimized as much.
And a Linux distro is unlikely to have a newer version of something important unless the distro was just released and happened to pick up the latest release of that software to standardize on for the lifetime of the OS.
Either way -- FreeBSD will keep getting (or at least be capable of getting) updates that track upstream, while your Linux distro will only be backporting security fixes.
In most cases what actually happens is that people use the main distribution repo for the 99% of packages which are stable and when you need something newer you add external apt/yum/etc. sources for those specific projects.
In the Ubuntu world, there's a huge ecosystem supporting this style (Launchpad PPAs), where you upload source packages and they'll build and host the binary packages for you.
This approach gives you the speed, reliability, and security benefits of binary packages with the currency of ports and, more importantly, allows you to opt in only where you specifically know you need new features.
In many cases, the repos are maintained directly by the upstream project – see e.g. https://www.varnish-cache.org/installation/debian for what that process looks like.
In other cases, you have to decide whether you trust a particular developer. If not, you can create your own version – which could be as simple as taking a source package, auditing it to whatever level you want, and signing it with your own trusted key.
Look, I ran servers using OpenBSD for years in the 90s and FreeBSD in the early 2000s. I respect the work which has gone into the ports system but the reason to use it is not security and advocacy based on limited understanding will not accomplish anything useful. If you want to praise ports, talk about how much easier it makes it to have the latest version of everything installed — and do your homework to be ready to explain how that's meaningfully better than e.g. a Debian user tracking the testing or unstable repositories.
Furthermore, third-party repos maintained by neither upstream nor trusted OS developers are a nightmare. I regularly spend time trying to find trustworthy third-party repos to get newer versions of developer tools onto RHEL 5/6. Sometimes it's just random RPMs on an FTP server or rpmfind.net-style sites. Not trustworthy at all. And sometimes I can't even build the package myself, because the tool refuses to build when the rest of the OS is too old.
Long term release Linux distros make life hell.
It sounds like you don't know how the Debian packaging system works regarding security. If you are installing from Debian repos (as opposed to third-party repos), then all binary packages go through the ftpmasters. The packages are checksummed, and the checksums are GPG-signed. Each package's maintainer or team handles building the binary from source. Of course, you can also download the source package yourself with a simple command, and then build it yourself. But if you don't trust the Debian maintainers to verify the integrity of the source packages they build, then you shouldn't be using Debian at all. This is no different than using a BSD. The ports tree could also be compromised.
Third-party repos are always a risk. That's one of the nice things about PPAs: their maintainers can use the same security mechanisms that the regular distro repos use, but with their own GPG keys. Again, if you don't trust the maintainer, I guess you should be building everything yourself. LFS gets old though, right?
And that's another good thing about Debian: almost anything you could want is already in Debian proper, so resorting to third-party repos or building manually is rarely necessary.
For long-term use, you can use testing or unstable or both, which are effectively rolling releases. There is also the backports repo for stable. And if you need to build a package yourself, between the Debian tools and checkinstall, it's not hard.
Debian and Ubuntu are the only distros I use, and for good reason. They solved most of these problems a long, long time ago. Compared to Windows or other Linux distros, it seems more like heaven than hell.
By the way, I'm no expert on BSDs--but do they even have any cryptographic signatures in the system at all, or is it just package checksums? Checksums by themselves don't prove anything; you need a way to sign the checksums to verify they haven't been altered. Relying on unsigned checksums is akin to security theater.
I don't think there is any particular problem with Linux. The core problem of "slow" performance is mostly due to limitations of the BSD sockets API (recvmmsg is a good example of a workaround) and to the feature richness of Linux.
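For readers who haven't met it, recvmmsg(2) is exactly that kind of workaround: one syscall drains a batch of datagrams instead of paying one recvfrom() per packet. A minimal sketch (Linux-specific; the batch and buffer sizes are arbitrary):

    /* Minimal sketch of batched receive via recvmmsg(2).
     * Returns the number of packets read, or -1 on error. */
    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    #define BATCH 64
    #define PKT   2048

    int drain_socket(int fd)
    {
        static char bufs[BATCH][PKT];
        struct mmsghdr msgs[BATCH];
        struct iovec iovs[BATCH];

        memset(msgs, 0, sizeof(msgs));
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len  = PKT;
            msgs[i].msg_hdr.msg_iov    = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        /* up to BATCH packets per syscall instead of one each */
        return recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
    }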
Also, doing the math: 2 GHz / 350k pps ≈ 5714 cycles per packet. So roughly 6k cycles to deliver a packet from the NIC to one core, pass it between cores, and copy it over to the application. That ain't bad, really.
Setting aside the fact that a typical fanboi comment is at the top of an HN post, are you seriously contending that because one OS does a thing, there's no need for other OSes to do the same thing?
E.g., on FreeBSD I recently ran into an ENOBUFS that I've never seen on Linux, although the man pages say it can happen there too.
Obviously it provides nothing extra, and certainly nothing better, when it comes to a server role. Other than support contracts. And support for 3rd-party apps. And hardware drivers/compatibility. And users that know how to operate it. And brand name recognition. And size of software development community, libraries, tools. And industry reliance on non-portable software (Docker).
Answer to your question: technical superiority is dwarfed by a more popular product.
Docker is the new shiny kid on the block, but the industry is far from reliant on it. It's also a bit misleading to call the software non-portable in a Linux vs. FreeBSD debate -- it's not that the software is non-portable, it's that it uses a feature that is not available in FreeBSD. In a contest about bragging rights, that's significant.
If so, thanks for the aneurysm. Windows is decades ahead, thanks to a fundamentally more performant kernel and driver architecture, the key parts of which have been present since NT. (And VMS, if you want to get picky.)
If you need something specific with better performance than that, you should probably look at moving more of the network stack to your application layer.
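One hedged starting point on Linux, before reaching for heavier bypass frameworks like netmap or DPDK, is an AF_PACKET socket: the kernel hands your application raw frames and the rest of the protocol work is yours. A minimal sketch:

    /* Minimal sketch: a raw AF_PACKET socket delivers whole L2 frames,
     * skipping the kernel's TCP/UDP processing entirely. Requires
     * CAP_NET_RAW; Linux-specific. */
    #include <sys/socket.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>   /* ETH_P_ALL */
    #include <arpa/inet.h>        /* htons */

    int open_raw(void)
    {
        /* every frame on the wire, Ethernet headers included */
        return socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    }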