What I think what we need to do to keep FreeBSD relevant (2019) (leidinger.net)
118 points by rodrigo975 on Jan 12, 2020 | 154 comments



Quick summary:

- Port several container and virtualization systems (OpenStack, OpenNebula, oVirt, CloudStack, Kubernetes, Docker, Podman, …) to FreeBSD.

- Improve the Linuxulator, and run Linux tests under it as part of FreeBSD CI.

- Make Kerberos work.

- Port more SDN software to FreeBSD and improve the documentation for it.

- Add interfaces to hardware sensors (fans, voltage, temperature, etc.)

- Add an implementation of Multipath TCP.

- Make SecureBoot work.

- Support writing kernel modules in C++ or Rust.

- Improve end-user HOWTO docs on how to use specific 3rd party software (mail server, web server, display server, etc.) with FreeBSD.

- Review existing docs and remove outdated information.

- Add a "cloud" section to end-user docs.

- Use doxygen or a similar system to generate additional developer docs from source code.

- Polish and improve DTrace.

- Make default options for ports packages consistent.

- Create "meta-ports" for specific use-cases (webserver, database, etc.)

- Revise default settings to target improved performance on modern machines.

- Add fuzzers and Clang sanitizers to FreeBSD CI.

- Make the CI system more visible.

[I'm not the author, just a guy reading the post. Add a comment if I missed anything important]


> Improve the Linuxulator, and run Linux tests under it as part of FreeBSD CI.

This is the worst thing they could actually do because people will just run the Linux versions and there would be even less demand for anyone to bother with a FreeBSD version.


> This is the worst thing they could actually do because people will just run the Linux versions and there would be even less demand for anyone to bother with a FreeBSD version.

The alternative is that people will just run the Linux version on Linux. From my perspective, the Linuxulator's job is to let me run commercial software that is released for Linux and unlikely to be released for FreeBSD; for example, whatever JNI garbage SuperMicro needed for their Java KVM. I had to keep an old Java 7 JRE around for that because they wouldn't update it to fit the security model of Java 8, so they're really unlikely to build for FreeBSD, especially in a competent way. Of course, if I had a choice, I would use a native build, or at least something with source available so I could build it natively; but if it's an occasionally-run utility, I'm ok with something terribly hacky.


That was my thought as well. Thinking back to OS/2 Warp's similar approach.

Unless FreeBSD has really, really strong differentiating capabilities that are in super high demand _already_, and insufficient Linux interop is the switching impediment, focusing on Linux interop with otherwise low differentiation/demand seems likely to further lower FreeBSD's differentiation and demand.

Were I them, I'd pick a couple really strong niches to try to dominate for a little while to drastically differentiate. Maybe in SDN or SSI clustering or something. Create the absolute best platform in that regard while still having familiar tools available for general purpose use cases... and _then_ work to trivialize the on-ramp to FreeBSD from Linux et al.

FreeBSD needs to find some way to be a part of the meme, "Nobody ever got fired for going with..."


> people will just run the Linux version

Don't worry, there are more than enough stumbling blocks in Linux emulation to ensure it would never be as convenient as native applications. For example, if you pass LD_LIBRARY_PATH with FreeBSD libraries to the Linux program it will predictably blow up in your face. Now, imagine running a shell script which calls FreeBSD and Linux executables.

Linuxulator is really a method of last resort and it always requires special care for each individual application.


This one is trivially solved using jail or chroot. Otherwise, there’s the path translation mechanism (/compat/linux) specifically for this purpose.


Nah, not quite the same. A jail makes it easy to avoid calling FreeBSD binaries from the Linux process by accident. If you must mix and match commands, a jail is of exactly zero help.


Not necessarily. It can also actually increase adoption because it fixes friction: there might always be a few tools or docker containers or whatnot which are not ported to BSD, and this provides an answer to that.

Microsoft has the “embrace, extend, extinguish” mantra for a similar reason.


> Not necessarily. It can also actually increase adoption because it fixes friction: there might always be a few tools or docker containers or whatnot which are not ported to BSD, and this provides an answer to that.

It tells developers that users will put up with the compatibility layer. Also, setting up these compatibility layers has its own set of complications, e.g. random things not working, sub-optimal performance, etc.

The only people that have pulled it off have been Microsoft with WSL, and there have been problems with that. For a laugh I set up basically a Xubuntu desktop, and I had lots of odd errors being reported in the console.


Exactly this. I use FreeBSD exclusively for my development, but there are a few pieces of proprietary software that are only available as Linux binaries (like FPGA toolchains), and having a viable Linux ABI layer means that I do not need to resort to having an entire second operating system in a VM or another machine.


I personally find the Linux support "good enough", and it's one reason I choose FreeBSD over OpenBSD on a few machines. OBSD is great, they got rid of Linux emulation for reasons I understand entirely, but it is on the other hand very handy to run the occasional Linux binary, mostly for a handful of closed source stuff that is never going to port to *BSD.


If I am not mistaken, I think NetBSD was the first to add Linux binary compatibility. It is certainly useful. It is easy to exclude when compiling the kernel if it is not needed. I have used /emul/linux for running closed source Linux binaries.


Don’t forget vsyscalls.


I didn't see that in the post.


You’re correct. I should’ve replied to a different thread.

In my experience gettimeofday() as a vsyscall is a substantial performance benefit to PHP web applications.

I read the list and added that in thinking it’s relevant to growing the FreeBSD user base.
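To make the vsyscall point concrete, here is a tiny sketch (not from the post; the one-second loop is arbitrary) of the kind of pattern that benefits: a busy PHP or web process stamps the time constantly, and when gettimeofday() is serviced as a vsyscall/vDSO each call avoids a kernel entry.

    #include <stdio.h>
    #include <sys/time.h>

    /*
     * Hammer gettimeofday() the way a busy web application does (timestamps
     * for logs, caches, timeouts). When the call is serviced in userspace as
     * a vsyscall/vDSO, each iteration avoids a full kernel entry, so the
     * count below ends up far higher than with a real syscall per call.
     */
    int
    main(void)
    {
        struct timeval start, now;
        long calls = 0;

        gettimeofday(&start, NULL);
        do {
            gettimeofday(&now, NULL);
            calls++;
        } while ((now.tv_sec - start.tv_sec) * 1000000L +
                 (now.tv_usec - start.tv_usec) < 1000000L);   /* ~1 second */

        printf("%ld gettimeofday() calls in one second\n", calls);
        return (0);
    }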


What about vsyscalls?


It seems to me like a list of things the Linux community is working on; and I think that's a bit of the problem with FreeBSD. For a long time, it was known as the "high-reliability / high-performance server OS". But Linux has mostly taken that crown now, for quite a few years. Where does that leave FreeBSD? I wouldn't be able to define FreeBSD's identity at the moment; but granted, I'm more of an OpenBSD user.

It's easier to define OpenBSD, beyond the obvious stuff (security). It's a system that does not really want to deviate too much from old-school Unix, and a community that firmly rejects any fad or fashion and embraces simple, and indeed sometimes simplistic solutions. That doesn't make it the best system for everything, but at least we know where it stands: the best old-school Unix OS. It's almost as if it lives in a parallel universe where time has stopped in 1990, and the OpenBSD developers are asymptotically converging to the perfect Unix system of that era. I'm not saying that as a criticism at all.

And NetBSD and Dragonfly BSD are also, more or less explicitly, research operating systems where individual developers can come and try stuff.

FreeBSD is still a very impressive project but I think they can do better than being just-like-Linux-but-five-years-later.


In some cases it's linux-but-ten-years-earlier - think ZFS, or even trivial stuff like NFSv4 ACL support ;-)

I think the reason for what you're seeing is that both are similar in purpose and architecture, and thus it's not surprising that they are converging on a similar set of functionality.


That's pretty old news though: 2008 for ZFS, and 2012 for NFSv4.

I stopped using FreeBSD in 2013/2014, as pkgng was a completely broken mess of bugs and developers seemed uninterested in fixing things, but if I look at what's changed since then[1], it all seems rather ... lacklustre. Sure, things have improved, but there's no "ohh, that's cool!" like ZFS, jails, etc. Certainly nothing that makes it markedly different from Linux.

Compare this to OpenBSD or Linux, both of which have had a number of innovative concepts since then.

https://en.wikipedia.org/wiki/FreeBSD_version_history


In my experience, that time was a particular low point for pkgng. It still breaks, of course. I've just been told that FreeBSD 12 has some bizarre manifest parsing bug that completely contradicts the doco. But it did get markedly better for a while in 10 and 11.


FreeBSD sucks at marketing, that’s very true.


Some things I like about FreeBSD in particular:

- high quality docs, especially for development. In my experience manpages are to a higher standard than Linux, especially for syscalls and the like

- `make install` - you can build userland, including libc, and the kernel in a couple of commands and then reboot into your new system (or fire up a VM). This is extremely hard to do on most Linux distributions.

- a smaller community where it's perhaps a bit easier to meet people and contribute. I remember going to the AUUGN conferences and the *BSD folks there were always friendly

All of that said, while I did use FreeBSD as my desktop OS at various times in the late 90s and 2000s, I haven't for years. Although I am now curious to see if I can fire it up on a Pi or something like that :)


I found that FreeBSD has its use in NAS builds with its nicely integrated ZFS. Otherwise I'm all in for OpenBSD or Arch Linux.


1988. OpenBSD does not have the waitid() library function.


I think this is the first time I've heard of waitid.


As long as FreeBSD has sustainable interest among corporate users it should be fine, but I am not sure if the number of FreeBSD client / corporate users is growing or shrinking.

WhatsApp moved away from FreeBSD to Linux. And last time, a few Netflix employees mentioned that FreeBSD was used on the Open Connect Appliance for historical reasons, not technical ones. So I would not be surprised if someday they move to Linux as well (once it offers similar performance).

Mellanox loves FreeBSD, but I am not sure if the same could be said for their new owner Nvidia.

OpenBSD and NetBSD are both an easy sell: one focuses on security and the other on embedded. I am not sure how to sell FreeBSD, and it needs focus and direction. Maybe aim for network appliances, which is what the majority of its corporate customers are using it for and where it could offer greater value.


Netflix employee here: FreeBSD generally scales well for our workload, and where it doesn't, we can improve it and upstream the fixes without a lot of friction.

Right now, we're serving 200Gb/s of kTLS encrypted video on a single box (see talk at https://youtu.be/8NSzkYSX5nY), and looking at 400Gb/s. I have not heard of others doing this on Linux, and it's sexy enough that I assume anybody doing this on Linux would make a splash about it.

In fact, as far as I know, some other big CDN providers that use Linux are just now moving to 100GbE, whereas we've been serving at 100Gb/s in production for almost 4 years now on FreeBSD.


Yes! I have been following your crazy work pushing the kTLS limit, running into memory bandwidth problems, and things getting wonky once it reaches 75% of max bandwidth. I have always assumed FreeBSD's performance was a point of pride internally, so I was surprised to see another Netflix employee being rather dismissive. (Edit: I just spent 15 min searching for it and couldn't find the comment. I should have favourited it. Maybe it was a false memory.)

Which is one reason why I wish there were CDNs running on FreeBSD, so you have an ecosystem / business around FreeBSD (partly to safeguard its future). AFAIK the only CDN running on FreeBSD is Limelight Networks, and they don't really do any pay-as-you-go pricing; everything is ask-for-quote.

I think not too long ago I floated the idea that Netflix should open a video delivery CDN with the Fast.com branding. (In hindsight it was a silly idea.) And it was for the same reason: more FreeBSD usage = more vested interest = long-term sustainability.


>I have not heard of others doing this on Linux, and its sexy enough that I assume anybody doing this on Linux would make a splash about it.

You guys have had to do some considerable (from my interpretation of your presentations, at least) work to make this happen even in FreeBSD - do you think that a similar level of effort working in Linux would put it within reach there, too? It seems like there's been a lot of people doing some serious packet pushing with XDP.


Would you recommend a few specific server parts that help you achieve those numbers (in addition to FreeBSD)? I am curious about constructing a 100GbE network in my neighbourhood.


We use Mellanox ConnectX-4 NICs (cx5 and cx6 would be fine too), as well as Chelsio T6 in mostly Supermicro boards, making sure to have a full Gen3 x16 (or Gen4 x8).

The Mellanox (and Chelsio) NICs are helpful for us because they support RSS assisted TCP LRO. If you're doing just plain packet forwarding, that would not matter.


Thanks a lot!

I am not sure about the plain packet forwarding or if I am gonna do more, but a few hundred (or thousand) bucks more sound pretty good for future-proofing a neighbourhood level of networking.

Pretty valuable info for a hobbyist looking to get better (and trying to get into community work). Thank you.


Well, the RSS does, but LRO does not.


RSS assisted LRO is a FreeBSD feature where LRO holds batches of hundreds or thousands of packets, then sorts packets by arrival time and LRO hash result. That puts packets from the same connection adjacent to each other, and lets LRO combine them. Delivering packets in batches of hundreds requires some work from the driver (and requires the driver use a new LRO API).

At least on our (Netflix) workload with tens of thousands of active connections, RSS assisted LRO increases our aggregation rate from almost nothing to about 2:1, and saves about 10% CPU.

It's been on my TODO list to add RSS-assisted LRO support to iflib, which would give the Intel and Broadcom 100G drivers access to it, along with assorted Intel and other 1G, 10G, and 40G devices. But it still hasn't happened yet.
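Not the FreeBSD code itself, just a toy userland sketch of the sort-then-coalesce idea described above (the packet struct and flow-hash values are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-in for an LRO batch entry: flow hash + arrival order + length. */
    struct pkt {
        uint32_t flow_hash;   /* RSS/LRO hash identifying the connection */
        uint32_t seq;         /* arrival order within the batch */
        uint32_t len;         /* payload bytes */
    };

    /* Sort by flow hash first, then by arrival order, so packets from the
     * same connection become adjacent and stay in order. */
    static int
    pkt_cmp(const void *a, const void *b)
    {
        const struct pkt *pa = a, *pb = b;

        if (pa->flow_hash != pb->flow_hash)
            return (pa->flow_hash < pb->flow_hash ? -1 : 1);
        return (pa->seq < pb->seq ? -1 : (pa->seq > pb->seq));
    }

    int
    main(void)
    {
        /* A small interleaved batch: two flows, 0xaaaa and 0xbbbb. */
        struct pkt batch[] = {
            { 0xaaaa, 0, 1448 }, { 0xbbbb, 1, 1448 }, { 0xaaaa, 2, 1448 },
            { 0xbbbb, 3, 1448 }, { 0xaaaa, 4, 1448 }, { 0xbbbb, 5,  512 },
        };
        size_t i, n = sizeof(batch) / sizeof(batch[0]);

        qsort(batch, n, sizeof(batch[0]), pkt_cmp);

        /* Coalesce adjacent packets that now belong to the same flow. */
        for (i = 0; i < n; ) {
            uint32_t hash = batch[i].flow_hash, bytes = 0, segs = 0;

            while (i < n && batch[i].flow_hash == hash) {
                bytes += batch[i].len;
                segs++;
                i++;
            }
            printf("flow %#x: %u segments merged into one %u-byte aggregate\n",
                (unsigned)hash, (unsigned)segs, (unsigned)bytes);
        }
        return (0);
    }

The point is only that once the batch is sorted, same-flow packets are adjacent, so the merge is a single linear pass.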


I think he was saying that RSS would still matter for plain packet forwarding.


> WhatsApp moved away from FreeBSD to Linux.

I'm a former WhatsApp server engineer[1]. WhatsApp primarily moved from bare metal hosting running FreeBSD to Facebook's owned and operated containerized management system, which incidentally runs Linux.

We did not make a technical choice to abandon FreeBSD in favor of something else; we made an organizational choice to abandon external hosting in favor of owned and operated hosting, which required a lot of technical changes, one of which was switching operating systems.

About half of the server engineering team pre-acquisition was former Yahoo employees, and we had seen what happens when an acquired company does not assimilate the acquirer's basic tech stack --- every issue with hardware or software first has to go through the triage step of is it broken because you're not running the company's OS and the company's package manager and the company's monitoring service and whatever else. That's not a great position to be in when you're trying to run a reliable service, so we decided it would be better to accept Facebook's foundational tech stack and hope the benefits outweighed the costs.

[1] As such, I usually recuse myself from discussions of WhatsApp and Facebook, I'm certainly not authorized to speak on behalf of either; my opinions are my own, etc.


Last time there were Netflix employees claiming Linux can't do more than 60% of FreeBSD's performance in their workloads. Pretty good technical reason if you ask me.


Could you please provide a link to those discussions/benchmarks regarding 60% performance? In the presentations by Netflix engineers that I have seen, they've primarily cited licensing as the reason they chose FreeBSD over Linux.

It would certainly be interesting to see a performance comparison between Netflix's Open Connect appliances and Google's Global Cache servers.


Netflix has modified the FreeBSD networking stack to serve 100 Gbps at the edge. But it's just their secret sauce.


Nothing is secret. AFAIK, everything is upstream in 13-current

EDIT: The above applies mostly to the kernel. I've run a vanilla FreeBSD kernel on one of our appliances and gotten good performance (and served at 100Gb/s).

In terms of userspace, OpenSSL support for kTLS is only available in OpenSSL's master tree. We'd like to bring those changes back into the stable OpenSSL that's in FreeBSD's contrib, but we're not sure that's going to happen. And there are some patches to support kTLS with nginx that are floating around in various forms; I'm pretty sure Mellanox has been giving out a version that should work for FreeBSD as well as Linux kTLS.


Same way Google modified Linux to serve whatever they need it to. That’s how Open Source works. (And yet, 60%. Curious.)


At FOSDEM the Netflix guy there said they use FreeBSD for licensing reasons. [1]

This is because they've changed certain parts and don't want to release some changes to the community.

https://archive.fosdem.org/2019/schedule/event/netflix_freeb... (first question covers it @ 44m~)


They should only have to release the changes if they publicly release the modified OS.


It's tricky. For business reasons, Netflix's ISP partners own the Open Connect appliances that are hosted in their data centers. So I think they could request the sources for GPL compliance.


Thank you for posting this Drew.

It's easy to dismiss companies' license concerns over the GPL as ignorance, because they usually are. But this is a good reason that I had never heard before.


> Another item we should have a look at is to provide means to write kernel code in different languages. Not in the base system, but at least in ports. If someone wants to write a kernel module in C++ or Rust, why not?

Having the development of OS modules in a safe language like Rust as one of the strategic goals would likely attract some eyeballs and dev hands. Long term, it would result in a more stable and safer system and possibly lead to an increase in market share.


I'm a veteran of Windows and Linux, but I've never used a *BSD (unless you tenuously count MacOS).

Why or when would I want to use BSD over Linux? Is it more performant? Does it use less resources? What real differences are there?

I've tried to find out myself, but it mostly seems to come down to philosophical reasoning and personal preference.


I am an OpenBSD / Windows user (all the VM servers I am running are OpenBSD). I haven't used FreeBSD a whole lot.

1) You get an operating system. One of the problems with Linux is the number of distros. If you want to know how to do something you've got to google around and look for a guide. In the BSDs it is just reading the man pages and figuring things out from there.

2) I've found with OpenBSD things either are documented and work or they don't work at all. The system is easy to understand whereas with Linux I tend to get overwhelmed with one distro doing it one way and another distro doing it another way.

3) Things are a bit difficult to set up (lots of reading docs), but nothing changes once you've got it running; that constant change is my major gripe with Windows since Windows 7.


Hah, that's a good point! I mainly work with CentOS and Ubuntu, but also sometimes Debian, RHEL and SLES. Not only do firewalls, init systems etc differ between distros, they also vary between versions - I often have to consult personal cheatsheets or Google.


> Why or when would I want to use BSD over Linux?

When you're tired of churn. Network interfaces are still configured with ifconfig (including all the wireless interfaces). When you want to see network statistics and open connections, that's still netstat.

It's a lot of effort to run proper head to head tests, so I don't know for sure if FreeBSD is faster than Linux, but I've rarely run into bottlenecks that I can blame on the kernel as opposed to hardware limitations (or, more frequently, application bottlenecks).

Edit to add some bottlenecks I recall seeing that are related to the kernel.

1) On FreeBSD, loopback is a lot more real than on Linux; packets traverse the full TCP stack. This can cause issues if you're running TLS in a separate process and you want millions of user connections to the host. Tuning helped, but ultimately, using UNIX sockets or running TLS in the real server process avoided the bottleneck.

2) I ran into a disk I/O bottleneck upgrading from FreeBSD 10 to 11 that I couldn't figure out. Because our company was acquired and the affected systems were being moved to a totally different environment, I didn't have the time to debug. Basic parameters were a write-heavy pattern on UFS with many SSDs (6 or 12); latency got really bad.

3) The TCP data structures are not set up well for outgoing connections. You can have millions of inbound connections no problem, and accepting connections is pretty much only rate limited by what your application can handle, most likely limited by a crypto handshake in modern protocols, or memory used. But outgoing connections are going to bottleneck on locking/touching the global tables, even if you're using RSS tables. Unfortunately, I don't remember the numbers, but it was something like tens of thousands of connections per second. Getting RSS alignment helped (but it's tricky: the client program has to compute the RSS hash and bind before connect), and putting in a modern hash function and using more of the connection parameters in the hash function helped, and changing how connections are checked helped a lot, but it could use a good refactor by a core networking person. That's not to say they've done poorly; the last time this had major changes was about 20 years ago, and everything works pretty well, until you hit connection rates that would have been inconceivable at the time. This was for an HAProxy service.


It just works, you don't have to deal with systemd, things are kept simple, it's rock solid, works quite well under load, the networking layer is performant, PF is a well designed firewall, the base system is very well integrated, and the ports tree is very extensive; only Debian offers more packages, I think. PostgreSQL also used to run a bit better on FreeBSD. Nowadays, thanks to pkg, installing and upgrading packages is easy. I have 11.3R installations that were incrementally upgraded from 8.0R. It feels like the OS is designed by greybeards who don't care too much about the latest and greatest hype, hence this article.


> It just works, you don't have to deal with systemd, things are kept simple, it's rock solid, works quite well under load, the networking layer is performant, PF is a well designed firewall

I don't mean to offend, but TBH there isn't anything terribly compelling there. I don't care either way for SysV/systemD, but distros still using SysV exist if it matters to you. CentOS et al are rock solid and work well under load, and have performant networking layers. FirewallD and UFW are also good firewall systems.


If it works for you, fine; for me systemd has been an annoyance at times, as in not predictable. Plus the mess of badly designed tools it brings with it (e.g. systemd-resolved, journald). Never been a fan of SysV either. We also run openSuSE, Ubuntu and Debian. FirewallD and UFW are just layers on top of a not-so-greatly-designed firewall setup tool (iptables). PF is very well integrated and can be easily used directly without any higher-level layers. You can even add ALTQ queuing rules in its config file.

I'm not trying to convince anyone, but FreeBSD has always worked great for me and has been a sane OS in every respect.


> distros still using SysV exist if it matters to you

As a BSD user, it wouldn't matter to xem at all. The BSDs never used the AT&T System 5 mechanism, nor its van Smoorenburg clone. It's you the Linux user that it matters to, and even then not nearly as much as you are implying it might matter to someone else. It pretty much hasn't mattered greatly to anyone apart from Linux, and before it Minix, users for 30 years.

There is a fallacy in discussions of systemd, which was called out by the Uselessd Guy years ago. It is the fallacy that only systemd and van Smoorenburg init/rc exist. This is not the case. It wasn't the case for Ubuntu and Fedora. It is especially not so when a BSD is the topic. The BSDs have a quite different family heritage in this area.

But then, AT&T Unix itself actually has a different history to what people who bandy about the erroneous "SysV/systemd" dichotomy think. The init/rc that Miquel van Smoorenburg cloned had actually already been superseded years before Linux was even invented. It wasn't used in the BSD side of the universe at all. But it was only a major mechanism in the AT&T side of the universe for just over half a decade, in the 1980s.

The BSD side of the universe could turn around and observe that they were right all along; were it not for the fact that the other side of the universe largely thought better of the mechanism as well, coming up with the SAF in 1988 and the likes of the SRC in 1991. (-: Of course, the BSD world wasn't just standing still all of those years, moreover, and had things like /etc/rc.local.d/ and Mewburn rc.

* https://web.archive.org/web/20190306213420/http://uselessd.d...

* http://jdebp.uk./FGA/rc.local-is-history.html

* http://jdebp.uk./FGA/inittab-getty-is-history.html

* http://jdebp.uk./FGA/run-levels-are-history.html

* http://jdebp.uk./FGA/unix-service-access-facility.html

* https://blog.darknedgy.net/technology/2015/09/05/0/


I have to say, I'm very confused by this, and I'm not really sure what you're trying to say.

> As a BSD user, it wouldn't matter to xem at all

I can only assume "xem" is some kind of "BSD thing"?

> It's you the Linux user that it matters to... AT&T Unix itself actually has a different history to what people who bandy about the erroneous "SysV/systemd" dichotomy think...

I did actually say I didn't care much either way, in part because I really didn't want to derail this thread into a sysv/systemd flamewar - I only wanted to know why I might use BSD over Linux.


Arch offers the most packages, bar none. You have the excellent base pacman package system, and then you have the AUR, which is easily sanitizable. You can quickly verify that even things from the AUR, in Arch Linux, won't do anything fancy in a bad way. Thinking Debian and *BSD have more packages than Arch is retarded. And once you AUR something (it could be anything, and everything is easy to verify as not doing any shenanigans), the final built package is handled by the internal Arch Linux package system, i.e. pacman.


FreeBSD could get a boost by switching to building with a C++ compiler, beginning to modernize their ancient codebase (red-black trees, really?), and inviting people with modern ideas to implement them.

The exokernel people know how to write good kernel-grade C++. Much of the code just needs to go; kernels can't keep up with the I/O needs of user-space programs, and need to learn to get out of the way. Linux has its io_uring thing which is vastly overcomplicated. FreeBSD could lead the way with universal kernel bypass, with only authentication, resource allocation, and permissions handled in ring zero.

Much more likely, though, is that somebody new will need to start from scratch.


What's wrong with red-black trees? :) Since you mention C++: you do realize that std::map will typically be implemented with one?


You want more fanout to use the entire cache line or even the entire virtual memory page. RB trees assume access to any address in memory has equal cost, which is extremely incorrect.


The red-black tree implementation typically used in *BSD development, <sys/tree.h>, is an intrusive data structure; tree nodes are embedded within the object you must dereference for key comparison. There's no need to optimize fanout because you've entirely removed that extra indirection.

As a general purpose data structure implementation for systems development, <sys/tree.h> is quite nice. However, for any particular task you can always optimize the data structure, or choose an entirely different data structure, to better fit the problem. Which is perhaps why the Linux kernel has so many different tree and hash implementations.
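For anyone who hasn't seen it, here is a minimal userland sketch of that intrusive style on a BSD system (the item type and keys are invented for the example; the RB_* macros are the real <sys/tree.h> interface):

    #include <sys/tree.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* The tree linkage lives inside the object itself: no separate node
     * allocation, and no extra pointer chase to reach the key in cmp. */
    struct item {
        RB_ENTRY(item) link;
        int key;
    };

    static int
    item_cmp(struct item *a, struct item *b)
    {
        return (a->key < b->key ? -1 : a->key > b->key);
    }

    RB_HEAD(item_tree, item);
    RB_GENERATE(item_tree, item, link, item_cmp)

    int
    main(void)
    {
        struct item_tree head = RB_INITIALIZER(&head);
        struct item *it, needle;
        int k;

        for (k = 0; k < 10; k++) {
            it = malloc(sizeof(*it));
            it->key = (k * 7) % 10;          /* insert in scrambled order */
            RB_INSERT(item_tree, &head, it); /* returns NULL unless key exists */
        }

        needle.key = 6;
        it = RB_FIND(item_tree, &head, &needle);
        printf("found %d: %s\n", needle.key, it != NULL ? "yes" : "no");

        RB_FOREACH(it, item_tree, &head)     /* in-order traversal */
            printf("%d ", it->key);
        printf("\n");
        return (0);
    }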


std::map and std::set are famously terrible, for that reason: https://youtu.be/fHNmRkzxHWs?t=2695


Indeed. C++ needs some better Standard containers. As it is, you need 3rd party library containers when you need optimal performance on ordered collections. Abseil and boost have alternatives.

The great advantage C++ has is that there is no temptation to open-code them. Library implementations can absorb immense optimization and testing effort, amortized over all uses, and delivered without compromise.


Horrible cache utilization.


Compared to what?


If you're not interleaving insertions and lookups, then a sorted vector can be really good.

If you need to interleave insertions and lookups, but don't need to traverse in sorted order, then a good hash table (not std::unordered_map) is normally the best option.

You only need a tree if you need traversal in sorted order and interleaved insertions and lookups, which is pretty uncommon. Even then you are almost always better off with a B-tree than a red-black tree.
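As a rough illustration of that trade-off (a toy sketch, not a benchmark): lookups on a sorted array are a cache-friendly binary search over contiguous memory, while every insert has to shift the tail of the array, which is what makes heavy interleaving tip the balance back toward B-trees or hash tables.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    int_cmp(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* Insert 'key' into a sorted array of 'n' ints, keeping it sorted:
     * O(log n) to find the slot, O(n) memmove to make room. */
    static void
    sorted_insert(int *arr, size_t *n, int key)
    {
        size_t lo = 0, hi = *n;

        while (lo < hi) {                 /* lower_bound-style binary search */
            size_t mid = lo + (hi - lo) / 2;
            if (arr[mid] < key)
                lo = mid + 1;
            else
                hi = mid;
        }
        memmove(&arr[lo + 1], &arr[lo], (*n - lo) * sizeof(int));
        arr[lo] = key;
        (*n)++;
    }

    int
    main(void)
    {
        int arr[16], keys[] = { 42, 7, 19, 3, 23, 11 }, needle = 19, *hit;
        size_t i, n = 0;

        for (i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
            sorted_insert(arr, &n, keys[i]);

        hit = bsearch(&needle, arr, n, sizeof(int), int_cmp);
        printf("lookup %d: %s\n", needle, hit ? "found" : "missing");

        for (i = 0; i < n; i++)           /* sorted traversal is a linear scan */
            printf("%d ", arr[i]);
        printf("\n");
        return (0);
    }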


Even with interleaved insertions and lookups, there are common scenarios that make a sorted vector still significantly more performant. I wrote about it when we published our open source SortedList<T> implementation for .NET and .NET Core [0], specifically comparing it to AVL and Red-Black trees.

[0]: https://neosmart.net/blog/2019/sorted-list-vs-binary-search-...


That's a good point, and a really interesting blog post.


Seems like these criteria miss the use case for mm_rb, one of the central red-black trees in linux.

If you need to be able to store intervals (i.e. virtual memory areas) and do lookups based on any address in the interval, not just the base address, I don't think a hash map is the best option.


IMO "lookups based on any address in the interval" requires "traversal in sorted order", although I could have probably been more precise in my terms.

I would be curious if anyone has ever profiled the impact of changing mm_rb to a B-tree. It might be very difficult if existing code that uses mm_rb depends on pointer stability, though.


I believe Splay trees are usually mentioned as an alternative.


Aren't splay trees terrible because they turn all reads into writes? (Which means the cache lines bounce between readers, instead of being shared as they would be with pure reads.)


Yeah, having reads rebalancing the tree in a multithreaded subsystem is probably not optimal. Might be that the Splay-trees have outstayed their welcome :)

FreeBSD’s tree.h used to have, iirc, both RB- and Splay-trees.


A B-tree set, for example.


std::set too, which really is just a map where the key and value are the same.


Netflix went in the opposite direction and contributed KTLS so we now have encrypted sendfile :)

but we also have netmap, which is like mmap for networking, pretty cool stuff


the http related syscalls in freebsd are so nice to work with. it's a small thing but it makes me happy whenever I get to use it


We have quite a few types of trees that are available in generic implementations. Red-black is not even commonly used. The VM has had a path-compressed radix tree for over a decade. We had more splay than red-black, really.


Universal kernel bypass for IO? Does it imply every userspace program should know how to communicate with every kind of IO device? No block layer? What about memory and caching?


Userspace libraries (which userspace programs could choose to use or not) would presumably be available.


That's not a bad idea. You'll need some sort of mechanism to make sure they don't step on each other's toes, but otherwise it's a sound concept, not unlike MS-DOS.


It seems some of the highest-performance enterprise solutions consider the MS-DOS way as well.

"The Coherent Accelerator Processor Interface (CAPI) is a general term for the infrastructure that provides high throughput and low latency path to the flash storage connected to the IBM POWER 8+ System. CAPI accelerator card is attached coherently as a peer to the Power8+ processor. This removes the overhead and complexity of the IO subsystem and allows the accelerator to operate as part of an application. In this paper, we present the results of experiments on IBM FlashSystem900 (FS900) with CAPI accelerator card using the "CAPI-Flash IBM Data Engine for NoSQL Software" Library. This library provides the application, a direct access to the underlying flash storage through user space APIs, to manage and access the data in flash. This offloads kernel IO driver functionality to dedicated CAPI FPGA accelerator hardware. The results indicate that FS900 & CAPI, together with the metadata cache in RAM, delivers the highest IO/s and OP/s for read operations. This was higher than just using RAM, along with utilizing lesser CPU resources."

https://arxiv.org/abs/1909.07166


OTOH, "As one step toward building high performance NVM systems, we explore the potential dependencies between system call performance and major hardware components (e.g., CPU, memory, storage) under typical user cases (e.g., software compilation, installation, web browser, office suite) in this paper. We find that there is a strong dependency between the system call performance and the CPU architecture. On the other hand, the type of persistent storage plays a less important role in affecting the performance."

https://arxiv.org/abs/1903.04075


What are your thoughts on DragonFlyBSD?

https://www.dragonflybsd.org/


Don't know much about it, other than that it has a version of variable expansion in symlinks that I liked on Apollo's Aegis, then Domain/OS, back in the day.

Aegis had a better read() syscall than Unix. You would give it a buffer, but it would return a pointer into its buffer cache if it could, and only copy to your buffer if not.


> Aegis had a better read() syscall than Unix. You would give it a buffer, but it would return a pointer into its buffer cache if it could, and only copy to your buffer if not.

You could do a similar hack with a Unix interface. Eg. the syscall could change the page table so that the caller's buffer now points to a COW page in buffer cache. Worst case you would need to copy 2*(PAGE_SIZE-1) bytes that are not page aligned.

This also has me wondering... If the kernel is handing out pointers to cache, it would need to do some MMU protections too, right? So the costs of COW pages would still likely need to be incurred with a "better" interface.
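For what it's worth, the closest thing the stock Unix interface gives you to "a pointer into the cache" today is mmap(2) rather than a modified read(); a minimal sketch (error handling mostly trimmed):

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        int fd;
        struct stat st;
        char *p;
        off_t i;
        size_t newlines = 0;

        if (argc < 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return (1);
        }
        fd = open(argv[1], O_RDONLY);
        fstat(fd, &st);

        /* Instead of read() copying into a private buffer, map the file and
         * read straight out of the page cache. The mapping is read-only, so
         * the protection question above is answered by the MMU: any write
         * through 'p' faults. */
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return (1);
        }

        for (i = 0; i < st.st_size; i++)
            if (p[i] == '\n')
                newlines++;
        printf("%zu lines, with no copy into a user buffer\n", newlines);

        munmap(p, st.st_size);
        close(fd);
        return (0);
    }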


Some of this might be possible with Dune, iommu or pointer tracking semantics.

http://www.scs.stanford.edu/~dm/home/papers/belay:dune.pdf

I had never heard of Aegis, how big is my blindspot and what do you recommend?


I gather that Aegis took its inspiration from Multics, which I have not studied, but is said to rely very heavily on memory mapping. Aegis ran on custom bit-slice re-implementations of the 68000, and later on 68020s. It mapped all the system libraries, shared, to the same address of all processes because the 68000 was bad at position-independent code. It integrated a token-ring network, and demand-paged over the network, in the early '80s. You could fork onto any machine on the ring. It was very advanced for its time. Some of its best features are not seen on current systems.


> You could fork onto any machine on the ring. It was very advanced for its time.

So a precursor to research operating systems like Mosix, Amoeba and Sprite. The Mosix literature doesn't mention it, but I think it is/was common to not mention work in industry. This is really cool, every time I dive into old research there is always something mind opening.

afaik, the 68k was actually good at PIC code, it just didn't have any memory protection until the 68020. On pages 2-13 through 2-17 of the 68k programmer's reference manual it outlines the program counter relative addressing modes.

https://www.nxp.com/files-static/archives/doc/ref_manual/M68...


Dune looks very, very interesting. Thank you for the reference.


Yes. That list is pretty good. LXC or Docker integrated with jail and bhyve would be awesome. Even just finishing BBR would be awesome.

ZFS is totally awesome. iSCSI can't yet do dual-path and dual-controller magic well, I believe, which is a bit of a shame.


I was a FreeBSD desktop user for many years, but lack of Docker support was one of my drivers for switching to Linux. Being able to run containers without a random Linux VM is just so much easier. The Linuxulator sort of did that on FreeBSD for a while, but it wasn't maintained and just rotted.


Why not use FreeBSD jails? Is it because your starting point is some pre-prepared docker format container?


Well, I'm just a regular joe user, but jails were a PITA and still are, even with iocage. Sure, getting a jail up and running with nothing in it is simple, but anything useful will need networking, and then it's not so fun anymore.

With Docker even a Windows schmuck like me could get a networked container up and running in a few minutes.


Because a huge amount of the world (tm) ships things in Docker, and a lot of dayjob work targets Kubernetes in GCP, which demands builds be delivered as Docker images.

on OSX this is "solved" by hosting Docker in virtualbox. It is possible this actually is the right method.


Yes, this was my problem.

OSX sort of solves it; you still have to split the machine memory between Docker and OSX manually. For example, if you want to run a big integration test that needs lots of RAM, you have to tweak it and then put it back. With Linux it's automatically managed.


Various people have said they want to work on Docker for FreeBSD over the years but nothing has come of it. There was a port years ago, but it was just before a major change (the containerd split) so it didn't get merged. It is not an enormous amount of work, but it is non-trivial.


I've been running Docker containers on FreeBSD using a bhyve Debian VM. Seems to work pretty well. That's the same way you can run Docker on Windows and Mac.


The fine-grained distinction between easy and simple, personified in this comment: it's understood how to do the thing, but it's too much for anyone to take on without a strong drive.


God damn, I wish I read more articles like that one. That guy is objective and not a fanboy, even though he works on FreeBSD.

We need more such critical thinking and insight. Kudos to him.


Came to say the same thing, clear writing and a great voice. Definitely something to emulate.


Good article, but FreeBSD top managers don't want to (or can't) change anything.

So, where can FreeBSD be used nowadays? 1) Desktop/Mobile - no, drivers/sensors problems. 2) Network/Web services in SOHO? Linux is way better. 3) Network/Web in a medium/large company? Maybe, for special network services. Well-tuned FreeBSD can beat well-tuned Linux easily, but requires skilled staff.

There is very little chance for FreeBSD to catch up with Linux and be useful for the masses.

P.S. I worked with FreeBSD as my default OS from 1996; in 2008 I switched to Linux (RH/Debian/Ubuntu).


> Well-tuned FreeBSD can beat well-tuned Linux easily, but requires skilled staff.

This doesn't seem obvious to me, there are a lot of tech companies with skilled staff that choose Linux and also deeply care about performance.


FreeBSD doesn't have 'top managers'. It is more democratic than the Linux organization. It means that people and companies contribute what they need or what they are personally motivated to work on. I also don't think there is a general consensus that we want or need to be as popular as Linux. FreeBSD supports products making in excess of $100bn a year. It is likely a much smaller number than Linux, but hardly irrelevant, and enough to keep the community active.


> Well-tuned FreeBSD can beat well-tuned Linux easily, but requires skilled staff.

This must be why 100% of the TOP500 sites use FreeBSD instead of Linux. Oh, wait ...

Sorry, didn't mean to come off as a Linux fanboi (which I might be) or to rekindle a flame war (in fact I have little experience with, but lots of sympathy for, FreeBSD), but the above was just too tempting.


In that list, nothing strikes me as a killer feature that I would leave either Linux or Windows for. I've actually gone from Windows (about 10 years) -> Ubuntu (6 years) -> Clear Linux (a few months if that) -> Windows 10 in the last week or so, because I realized that I value convenience and performance over whatever else is on offer from Linux, even though there are a few good tools I immediately missed from Linux (grep firstly, but at some point I'm going to set up WSL anyway). There's nothing compelling enough to even bring me back to Linux any time soon, so why would I bother with FreeBSD?

I develop software, and just want to get things done and the OS to stay out of my way. Windows 10 LTSC does that amazingly well. Sorry if I sound like a shill, I'm really not, I just grew tired of fighting silly little fights to get Ubuntu/x-distro to do what I want.


> In that list, nothing strikes me as a killer feature that I would leave either Linux or Windows for.

I would guess it’s mostly about keeping existing users, instead of leaking them to Linux.

> I've actually gone from Windows (about 10 years) -> Ubuntu (6 years) -> Clear Linux (a few months if that) -> Windows 10 in the last week

For me the whole point of Linux (or BSD) is not performance or lack of “bloat”, as much as being able to build and customize everything into working for me, into fitting my workflow.

I can’t have that on Windows or OSX. But everyone’s different: what works for me may not work for you, and that’s 100% ok.

You use whatever helps you get the job done.


What workloads offer better performance in windows 10 than linux?


The term 'workload' is itself slightly loaded, as it implies something constant, like a server maintaining performance over a given period. I did mention my use case is developing software, so I'm not exactly pushing this to its limits; however, programs seem to work a lot faster, which I know sounds anecdotal. I found a benchmark which did influence the switch back to Win10.

https://www.phoronix.com/scan.php?page=article&item=icelake-...

>if simply counting the number of first-place finishes, Windows 10 performed the best with coming in front 50% of the time to Clear Linux at 36% and Ubuntu 19.10 at 13%.


To me, my optimised compilation of liquorix on f2fs beats windows on my laptop outright. I lose bandwidth but gain lower latency. Even moving around windows explorer and searching by name is embarrassingly sluggish on my windows 10 install. Can't afford ms telemetry service and windows "defender" taking up half my resources whenever they please, either.


Yes! That's the spirit!


It's trivial to remove telemetry these days.


I seem to be getting a lot of downvotes for that comment for some reason; not sure why, but there are several programs that can disable telemetry. This is probably the best-known one.

https://wpd.app/


Removing telemetry completely isn't supported by MS


I also recommend W10 Privacy and Blackbird. Definitely turn off windows defender and all spectre/meltdown mitigations (inSpectre). They don't do anything either way.


[flagged]


People using the tech every day care about the perceived performance though, not the geometric mean.

You are factually correct but your conclusion doesn't improve the parent poster's everyday experience.


Web browsers are 20% faster, for example. OpenGL/Vulkan performance (the gap is closer than before but still exists). But the biggest gap is OpenCL.


can you elaborate on CUDA?

My understanding is that the NVIDIA drivers are a hassle to setup on linux, but once you do, the performance is just as good. Are you saying the performance is still worse if you can properly configure the cards?

I am considering a rig from System76 whose selling point is that their Pop!_OS fixes the driver config issues.


Actually my statement was baseless. After some googling, I can't find any evidence. Still, this holds for OpenCL.


I'm not sure where you developed that understanding. NVidia drivers and CUDA are trivial to set up on Linux (Ubuntu, Debian, Mint, CentOS/RHEL). If you use a different distro, you might have a little more work to do, but even that is running a single script that you pull down from NVidia.

Some distros maintain packages with 3-6 month update cycle (Ubuntu), so you don't even need to bypass the normal management paradigm. This is sometimes a problem for corporate controlled systems, where you are not allowed to install packages/drivers/etc.

For me, I've been running NVidia cards on Linux on desktop and laptop systems since 2006. If you can't use the command line, Linux systems will generally be somewhat more work than windows systems, but not by much.

To keep this in context of FBSD, I've used FBSD11 and 12 for a number of things, including development build targets, potential hypervisor based systems.

FBSD is an opinionated system. When you run into an FBSD opinion (say, where include directories and libraries sit), and this doesn't mesh well with a configuration/build system, using FBSD winds up being a fairly frustrating experience.

The positives about FBSD from my experience are, fundamentally, the bootloader is much better than grub/grub2, from a UX perspective. I know that's a low bar, but really, grub is terrible.

The negatives, and I am sure this will result in downvotes, but I am being open on this here, are some members of the community with a chip on their shoulders. You can see that here in the comments about "hey we can do X, and linux can't." I've confronted many of those (often but not always, incorrect) biases during my second to last job, from people who ought to know better. It was tiring having to deal with that crap.

FBSD has some interesting tech with it. (Puts on management hat) The hard reality is that unless this tech is so massively compelling, that companies can cast the remaining negatives as being less important to their individual value analysis, I don't expect FBSD to gain or even maintain market share. (Takes off management hat)

I've made the same points about SmartOS/Illumos. It doesn't matter how much you like something, if you can't show that it is compellingly better to a large swath of people/companies, chances are you just won't see wide spread adoption/replacement.

Moreover, a company's job is to make money for its owners. I've worked at companies that had some aspect of their engineering and management team blind to this. They thought that their job was to provide a home/advocacy base for their platform, rather than turn that platform into the absolute best version of itself. This wasn't FBSD, but the trajectory of this organization was sadly predictable. They had some great tech. But it was wrapped up in a culture that could not adapt. Similar to the FBSD people I indicated, they had a fairly massive chip on their shoulders, which prevented them from taking their real and valuable tech and making it widespread.

That was a shame.

So now, we have Windows and Linux, with the latter completely dominating clouds/containers and ML/AI, and only MSFT still pushing Windows at Azure. I don't see FBSD doing much more than being an appliance OS over the long haul.


That's interesting, I can't find any benchmarks showing windows browsers being faster.


I can't remember the original source.. Still: 10% faster according to phoronix https://www.phoronix.com/scan.php?page=article&item=windows1...

(clear Linux is still the fastest, as always, but is unlike all other distributions)


"I develop software, and just want to get things done" - exactly why i am a continuous windows user since win95(ie. ever). linux and (free)bsd always interested me but they were always impractical and eventually only a time wasters.


As a counter-point, I am much more productive on Linux than on Windows, and I don't need to deal with a new OS every 4 years with forced upgrades. I believe Windows 7 is going out of support soon, even though many people still run it as ... it does what they want, and they don't need to upgrade.

Meanwhile, my Linux/BSD environment only changes on one condition: when I want it to. While some internals change (i.e. systemd), much of the UI is very consistent.

This is a massive time-saver for me, because I don't want to deal with learning Windows 10 or whatever new Microsoft comes out with.


FreeBSD can hold its own against Linux. You can now use it as a fully functional desktop OS, no worries (GhostBSD for the uninitiated). The best part is the ports tree, which works better and more easily than the AUR.

Take your pick: https://www.phoronix.com/scan.php?page=search&q=FreeBSD

tl;dr optimised FreeBSD performs as well as, and even faster than, Clear Linux on lots of benchmarks - so easily faster than most distros.


I would love to just be able to have a stable package base where nothing changed for years except security updates, and have special repos for things that needed to change (like firefox or chrome). Eg, like what Ubuntu does with their distro and PPAs.

I personally hate the concept of a rolling distro, which is what the ports tree really is. Except for the things I care about, I don't want random stuff changing out from under me.

A ports upgrade failing 15 years ago led me to rage-install Linux and run Linux on my desktop for 10 years, because I just didn't have the time to deal with upgrade failures and X not working. Thankfully, now that we have ZFS boot environments (making rolling back easier), I'm back running FreeBSD. But just 2 days ago, I was left with a non-functioning system when a pkg upgrade renamed "startkde" to startplasma-x11 or something, and it took me nearly an hour to work out that it wasn't the nvidia driver crapping out, it was just that my 'exec startkde' in my .xsession was failing.

(Yes, I'm aware of the quarterlies, but that's about 8x too much change for me).


I ran into that same issue with startkde being renamed.

However, that's my fault for not reading the release notes beforehand. I wasn't frustrated with FreeBSD. I was frustrated with myself for not reviewing those release notes. Spending 5-10 minutes checking out those notes would have saved me the hour spent troubleshooting that problem. Now I always read the release notes before running updates/upgrades/etc. As a bonus, it's good for discovering new/updated features.

IMO, OSS documentation is one of the major areas where there's a large gradient when it comes to quality. When there are projects like FreeBSD and OpenBSD that write good documentation, it's definitely worth it to take advantage of it.


I believe the issues you (and I) have had with ports are not fundamental issues with ports, which is to say that they are actually illegal changes in release repositories and would have been reverted if called out during development and testing.


Always go back and read the UPGRADING document if you encounter something like this in the future. It will save you a lot of time.

You should use Ubuntu LTS if a major release every 6 months is too much for you.


That's my entire point. With FreeBSD ports, we have a rolling release. When I ran Ubuntu, I did run LTS. I could not care less about having the latest KDE. I'd actually be about 100x happier with the KDE from 10 years ago


Why not use this?

[1] http://trinitydesktop.org/


Nice, KDE before it jumped the shark. Sadly, there is no FreeBSD port or pkg for it.


Not sure which of those specifically you're referring to, but from a benchmark done in Jan 2019, Clear Linux has a decent lead over *BSD: https://www.phoronix.com/scan.php?page=article&item=dfly-fre...


I would not put too much weight into those benchmarks. Those micro-benchmarks are useful to detect performance regressions between commits in a same system, not to compare different systems. Aggregating micro-benchmarks is even less sensible.

A better type of benchmark is the more realistic stuff TechEmpower does, but they only test web frameworks, not OSes.


There are a number of fundamental problems with Phoronix benchmarks; one of them is building software in the wrong way (using vanilla sources, without the necessary patches), and often simply measuring the compiler and not the operating system itself.


"Das Balkenspiel ist schlechter Stil."

"The bargraph game is lame."

edit: To expand on this, there is/was one person who tried to tackle this in a meaningful way.

[1] https://web.archive.org/web/20070607182125/http://www.coyote...

The bitrotten remains of this can be found here: [2] https://github.com/Acovea/libacovea

I'm aware of academic research regarding HPC and superoptimizing, but no one has applied that to generalized distro testing, so far. (At least not in public.)

I think this should be a hook in some reproducible build system.

But everybody seems to have caved in against the complexity explosion this would involve. And the build times.

OpenOffice, Firefox, Chrome, KDE, Gnome....the HORROR!


> using vanilla sources, without neccessary patches

Why is that a problem? From a third-party benchmark point of view, testing the official tarball is the most relevant thing to test for a Linux-using audience. Different distros have different patches. Some don't even patch the software.


Because upstream often uses defaults that are fine for Linux, but not necessarily anything else. That's why one should use official ports/packages, instead of hacking things together like it's the '90s.


I, personally, and my businesses, have a tremendous amount of time and money invested in FreeBSD as a platform.

JohnCompanies[1] was started on FreeBSD, rsync.net runs FreeBSD exclusively, and Oh By[2] runs on FreeBSD.

FreeBSD is an operating system by, and for, the FreeBSD developers.

You may choose to deploy, and invest in, FreeBSD for your own purposes (as I have) but you need to understand what the development process is and how FreeBSD is "released" if you want to make meaningful investments of actual money into adopting it.

In short: the official position of FreeBSD is that -RELEASE is the only production release of FreeBSD and, technically, -STABLE and -CURRENT are not production ready. Yet at the same time all investment and development of FreeBSD by the actual developers is done with -CURRENT.

The result is that your legal, contractual, fiduciary, and even moral obligations to the customers you serve demand that you run only -RELEASE. And yet, any issues you have with -RELEASE will be difficult to resolve because the entire community is already 1-2 major versions ahead of you in their development, their workspaces, and even their personal machines. You will be met with incredulity when you insist that you need to run only production code, and your "current" problems with the "current" -RELEASE will not be addressed.

This makes it very difficult (although not impossible) to make any kind of long-term investments in FreeBSD.

Pointing to the big name firms that run FreeBSD, like Netflix, is a bit disingenuous as only they have the resources to, essentially, run their own forks of FreeBSD (which they do).

I have written about this in detail, twice:

First, in 2012[3] and then later in 2014[4]. The 2014 posting is probably more succinct and relevant here, but if you really want a deep dive in the culture and tendencies of FreeBSD development, read the 2012 thread.

[1] JohnCompanies, started in fall of 2001, was possibly the first "VPS" provider as we now think of it, although we called them "Server Instances" and did not coin the term "Virtual Private Server".

[2] https://0x.co

[3] https://lists.freebsd.org/pipermail/freebsd-hackers/2012-Jan...

[4] https://lists.freebsd.org/pipermail/freebsd-hackers/2014-Jun...


Developers pick their own priorities, as most are donating their time. Those that are not are following the wishes of their employers. People make an effort to ensure that bug fixes are MFC'd back to supported releases, and for security fixes we have a stronger policy. Otherwise it should be no surprise that we focus on future releases. The release process has a certain organizational cost for validation, and running releases too frequently burns out project volunteers.

Many companies who use Linux also effectively maintain their own forks that are gradually updated and tested. This is not a problem unique to FreeBSD. We unfortunately don't have a Red Hat equivalent, although there are now several companies offering paid FreeBSD support if you need that for your enterprise.

I think you also got quite a lot of very reasonable replies in your thread on hackers. You need to realize that your fundamental request is for people who are giving up their free time to do something interesting to change their plans so that you can run your enterprise while providing no help or financial incentive.

When there are reasonable proposals it may catch someone's eye and prompt action. I can't even really tell what your proposal is besides changing release frequency. What are _you_ willing to do to support that?


"You need to realize that your fundamental request is for people who are giving up their free time to do something interesting to change their plans so that you can run your enterprise while providing no help or financial incentive."

...

"I can't even really tell what your proposal is besides changing release frequency. What are _you_ willing to do to support that?"

...

I offered $50k to run 9.x up to 9.15:

"... I can contribute USD $10k per year that this course was followed, or $50k over five years. We can contribute some hardware, hosting and bandwidth as well."[1]

As I said in this old, old thread:

"I'm not a FreeBSD developer and I have little use for either CURRENT or STABLE. I'm saying that I need a major release to have an effective lifetime longer than two years."

... and in anticipation of your 2020 response, I direct you to the FreeBSD handbook, which explicitly states:

"This is still a development branch, however, and this means that at any given time, the sources for FreeBSD-STABLE may or may not be suitable for any particular purpose. It is simply another engineering development track, NOT A RESOURCE FOR END-USERS."[2]

FWIW, we have also offered and paid several FreeBSD development bounties since 2005.[3][4]

[1] https://0x.co/TCZM6C

[2] https://www.freebsd.org/doc/handbook/current-stable.html

[3] https://lists.freebsd.org/pipermail/freebsd-announce/2007-Ap...

[4] https://blog.kozubik.com/john_kozubik/2009/01/64bit-freebsd-...


You do understand that releases made from -STABLE are considered release quality, not development branches?


Not according to current FreeBSD documentation ... see link above.


You have confused the branch with the release.


Given your current frustrations with the project, could you also describe the benefits of using FreeBSD in your company that offset these problems?


Heh, I just had a chance to mention this in another thread, but I think the biggest thing holding the BSDs back is that they each repackage all the software.

Who said 1 distro, 1 kernel?! I fully believe NixOS/Nixpkgs should be able to support non-linux kernels, and the philosophies of their communities. We are always trying to bake in future assumptions, this is a great opportunity to do that.

Coalition without compromise. Truly.


If someone is interested, please do reach out. I'm really serious about seeing this happen. We got cross compilation done distro wide, now we need more things to drive us forward.


Do I understand you correctly? You are looking for Nix[OS] to be able to work with FreeBSD?


Yes. Nix runs on FreeBSD, so the first step is to cross-compile a pure stdenv with the BSD libc. The next step is native builds and the kernel. The final step is to make NixOS work with multiple init systems. Shouldn't be too bad!


It's a good TODO, and it's worth pointing out that those things are being gradually put into place. Great example is the Continuous Integration, https://ci.freebsd.org/. (Shameless plug: it actually runs Linux Test Project Linux binaries too!)


The first thing I would do is merge the BSD distros and have an installer that could pick configurations, like hardened or not. Add a cool name.

No reason so few folks should be pulling in different directions.


io_uring


Will be interesting to see how kqueue systems respond.


I noticed that Alexander had to grind his axe on his rejected bikeshed project. Re: sensors.

PHK is one of the developers who have made FreeBSD great. The more you grind your axe the more you get discounted.


> I noticed that Alexander had to grind his axe on his rejected bikeshed project. Re: sensors.

"Bikeshed" project? Who exactly is bikeshedding when functionality is lacking on FreeBSD 10+ years later because we don't like the configuration mechanism chosen for the previous work?

> PHK is one of the developers who have made FreeBSD great.

That doesn't mean there hasn't been dubious moments like this, or that they don't cause harm.

> The more you grind your axe the more you get discounted.

Menacing responses to criticism are not exactly indicative of good culture.



