As of today, Linux has far more developers. It has driver support for a wider array of hardware. It gets more and faster vendor driver support (Intel, AMD, Nvidia, etc.; even Microsoft) than the BSDs. It scales better on NUMA and on very high core count systems than any BSD. These are all legit reasons people use Linux instead of a BSD.
Then there's RHEL and Ubuntu. The BSD world doesn't have anything like either. RHEL is a huge boon to the Linux ecosystem — they take money for support contracts, and invest it in improving the GNU & Linux ecosystem for everyone. Ubuntu is a very beginner-friendly distribution that is serious enough for server use. On the FreeBSD side there's maybe TrueOS (née PC-BSD) but it doesn't attract anything like the audience that Ubuntu does. It also doesn't have Shuttleworth's funding.
Of course anything would do great with more funding/corporate backing.
1. For many years, the only way to install software was ports. Now, if you're a full-time sysadmin with time on your hands, that's great. But if you just need something up and running fast (and you don't know the software's internals, or whether you'll need perl's FLAG_ABC), it's horrible.
It just feels like Linux in the 90s (been there), where recompiling the kernel/XFree86 was a rite of passage into Linux hackerdom. Nowadays, most of the time it's just not worth it.
2. apt vs ports/pkg. This is actually the biggest thing keeping me on Debian - stable + backports.
If I'm running my server, I want things to be stable. Now, I know there's no other project the size of Debian that can backport security fixes to two-year-old (and sometimes four-year-old) software. But there's nothing like running apt update && apt upgrade and, 99% of the time, having everything update without a hitch.
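As a concrete sketch of that stable + backports setup (the release name and package are illustrative; substitute your own):

```shell
# Debian stable + backports: track stable, and opt in to newer packages
# only where you actually need them. Suite/package names are examples.
echo "deb http://deb.debian.org/debian bookworm-backports main" \
    > /etc/apt/sources.list.d/backports.list
apt update && apt upgrade                  # the usual no-hitch path
apt install -t bookworm-backports nginx    # backports are opt-in, per package
```

Backports never upgrade you automatically; you pull each package in explicitly with -t, so the rest of the system stays on stable.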
Yes, FreeBSD is more elegant (why couldn't GNU/RedHat have just modeled their tooling on ifconfig, rather than spreading it across ifconfig, ifup, ip, etc.?). Yes, FreeBSD's man pages are amazing (which is quite important, as there aren't as many FreeBSD blogs around). But if you're learning a new system (coming from Windows), Linux isn't that much harder to learn than FreeBSD.
They work like Oracle: you pay them, and they'll hold your hand, and (unlike Oracle) they release their software under an open-source license.
If you're a non-tech Fortune 500, that's very important.
Note, by the way, that those two distros have the vast majority of GNU/Linux installs.
This comes down to one of the key differences between Linux and BSD: BSD "owns" its userland—the people who develop the kernel, or some system utility, can literally decide to change something, and then do a global search-and-replace on all usages of that something across all consuming projects. Because all those projects are "part of" BSD in a very literal sense. You can decide that ifconfig(8) should work differently, and bam, there you go, now it works differently. Now the tools that call it and parse its output work differently, too. Everything works differently; but everything still works. Document the new behavior in the man(1) pages.
Linux, meanwhile, is in essence a giant Mexican standoff: nobody can change the interface of the thing they're responsible for, without potentially breaking something someone else is doing that they're not aware of at all. So Linux devs, rather than changing old interfaces for the better, just leave the old interfaces where they are in a sort of "legacy-compat" mode, and build entirely new interfaces that work the way they "should." (And then people start to depend on the details of the new interface, and it all happens again five years later.)
1. Linux doesn't care about ifconfig et al. It's Debian/RedHat/Arch/Gentoo/Slackware that do. And they can do the same grep across their codebases.
They don't, for the same reason BSD doesn't just run in and change things: lots of admins have scripts, not maintained by BSD, which depend on the existing behavior.
2. I would assume that the BSD ifconfig came first, so why didn't existing distros copy their system? Is it a BSD vs. SysV thing?
The new package system is much better, and more apt-like. Again, though, they're just ports pre-compiled on freebsd.org's build machines.
Most port options can be selected through an obvious curses interface that comes up when you make the port these days. The defaults are usually reasonable. As long as unix software is distributed with important compile-time options, I will enjoy how easily they are managed through ports.
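A sketch of that workflow (the port path is just an example):

```shell
# Build a port, choosing its compile-time options in the curses dialog.
cd /usr/ports/www/nginx   # any port directory works the same way
make config               # bring up the options dialog (defaults are sane)
make install clean        # fetch, build, install, then clean the work dir
make showconfig           # review the options you picked
```

The selected options are remembered, so rebuilding or upgrading the port later reuses them without re-prompting.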
Sure, if you're running a few servers. It doesn't help if you're running one though.
And it still takes time to compile and build.
If you want something special, why not set up poudriere and update during the night. Realistically, if you are running FreeBSD in production, you want a spare server to try new releases, so you can just as well use that box to build binary packages.
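A minimal poudriere sketch along those lines (the jail name, release version, and package list are illustrative):

```shell
# Build your own binary packages overnight on the spare box, then point
# pkg(8) on the production machines at the resulting repository.
poudriere jail -c -j builder -v 13.2-RELEASE   # create a clean build jail
poudriere ports -c                             # check out a ports tree
echo "www/nginx" >> /usr/local/etc/poudriere.d/pkglist
poudriere bulk -j builder -f /usr/local/etc/poudriere.d/pkglist
```

Schedule the bulk run from cron and you get nightly packages with your custom options, installed everywhere via plain pkg.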
Nowadays, FreeBSD does actually have more apt-like package management, with a fairly simple high-level interface that installs prebuilt binary packages (properly resolving dependencies among them, etc.), called pkgng. I recently tried it again, and it's nice, exactly what I was missing at the time. NetBSD's story here is good today as well, with pkgin (the high-level interface to pkgsrc) also being quite nice; nice enough that I use it on OSX despite never having even run NetBSD (so I'm not using it on OSX out of a preexisting love of it). But 15 years ago, apt was clearly better, at least for my uses.
 I use it on OSX via Joyent's packages, https://pkgsrc.joyent.com/
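For anyone who hasn't tried them, the two workflows look almost identical (the package name is just an example):

```shell
# FreeBSD's pkgng: binary packages with dependency resolution.
pkg update             # refresh the remote catalogue
pkg install nginx      # install a prebuilt package plus dependencies
pkg upgrade            # upgrade everything installed

# pkgin (pkgsrc's high-level front end, e.g. via Joyent's OSX packages):
pkgin update
pkgin install nginx
pkgin full-upgrade
```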
Also I think the FreeBSD handbook is awesome, even for newcomers. Is there something like that for Debian?
If you need new packages, you don't run stable.
You run stable if you need things, well, stable and "just works".
Also, there is a difference between new, like bleeding edge (e.g. Fedora), or newer (like Ubuntu or Debian Testing).
I don't know if *BSD "just works" either, but that is not what I am arguing here. I am making a point against your statement that Debian "just works."
You have stuff with a big amount of FreeBSD code in it:
MacOS, iOS, PlayStation 3/4, Cisco routers, Juniper routers, Android (though way less than in the rest)
You have a couple of big names using FreeBSD in big amounts or for important parts of their infrastructure:
Facebook (WhatsApp), Netflix, NetApp, VMWare, Trivago, DuckDuckGo, Flightaware, ...
And you have big amounts of code in other projects. Stuff like OpenSSH and many libraries.
I didn't include the usual suspects, such as non router hardware manufacturers (ARM, Intel, HP, Dell, ...) that use it a lot, but they also use other stuff.
In other words: FreeBSD's market share looks bigger if you count big companies. Smaller companies tend to stick with the most-used stuff, and that is simply Linux (among other things, for the legal reason already mentioned here).
"The Trouble with FreeBSD" @ 43:20
As containers eat the world you will see more and more Linux, IMO.
I've used both since Linux 0.96. For networking, BSD was far superior, but you could feel very early which one was going to win, so I switched once Linux was usable.
Here at home, within a six foot radius of me, I've got a nice new (waaaay overbuilt) workstation, a pair of ThinkPads, some cheap Dell laptop, and a rMBP. The Dell and rMBP are rarely used but I had to decide what to run on my three "primary" machines (and the couple of neglected machines in the garage).
I'd much prefer to run FreeBSD everywhere but suspend/resume on the laptops is a deal breaker for me. I could run FreeBSD on the workstation (which is always running) but then I've got two operating systems to keep up on instead of one, so instead I just run Linux on all of them.
At work, on the servers, my line of thinking is basically the same. I pretty much have to run Linux (over BSD) in a few cases, so do I run FreeBSD where I can and Linux everywhere else or just make things easier on myself and run Linux across the board? Since I'm already running Linux on all my personal machines it makes sense to use the same at work.
Lawsuits and subpar driver support in the 90s, basically.
> This is a problem faced by all operating systems - even new versions of Windows. Most of the time, users don't care about the total number of drivers, only if drivers exist for their hardware. There are some omissions in terms of driver support, but FreeBSD supports a wide range of network cards (including an increasing number of 802.11n chipsets), most sound cards, and AMD, Intel and nVidia GPUs.
> Device support is a constantly moving target because we can't tell hardware makers to just stop releasing new hardware for a few years while we catch up. New devices do take some time to support, although some manufacturers do provide drivers themselves, for example nVidia provide drivers for their GPUs, and Intel for their newest network and storage controllers. Other manufacturers provide significant help to FreeBSD driver writers, including Broadcom, JMicron, HP, Mellanox, Chelsio and Solarflare. If you find a device that isn't supported, please let the project know and also notify the manufacturer: the only thing that motivates hardware manufacturers to support any operating system is the knowledge that their customers want it.
This is much, much less of a problem now than it used to be, but I remember building a hobby box in the 90s: I wanted to run FreeBSD, but I would have had to be very, very careful about what hardware I put in it, so I ended up installing Debian. (Anecdotal, and possibly coincidence, but Debian had all the correct drivers.)
The net result is that nobody really switched back to BSD once those issues were (mostly) resolved, since too many devs were putting out software for Linux by that point. Driver support still lags on the BSDs (although nowhere near as badly as it used to!), so it's just easier for most people to use Linux.
In fear of losing hardware sales, Dell, HP, Sun (their x86 hardware unit), and IBM all made sure that their servers could also run Linux. This resulted in a lot of hardware support being added very quickly.
FreeBSD never had that "tornado" of uptake from major hardware vendors, and thus, support always lagged, if it was there at all.
At that time, the majority of Unixen were proprietary, so if you wanted your programs to run, you couldn't assume GNUisms or Linuxisms.
FreeBSD moves very carefully. New features are introduced when they are quite mature. For major changes, the old way of working is maintained for quite some time.
In the early nineties that showed in hardware support. If you wanted a real system, you got yourself a SCSI card. There were some really crap IDE controllers out there, and in the FreeBSD community nobody cared for them. So resources are one thing, but basically the FreeBSD community didn't want to spend time getting completely broken hardware sort-of working.
(For a long time, partitioning was also a twisted maze. The BSD partitioning scheme was combined with the MBR in weird ways. No problem for a system dedicated to FreeBSD, but tricky if you wanted to share the disk with Windows.)
The Linux community tried much harder to run on everything.
In the same way, the Linux community is much more into shiny and new. Color ls famously broke scripts because it emitted escape sequences even when you sent the output to a pipe.
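You can still reproduce the failure mode today by forcing color on (GNU ls; the temp path is illustrative):

```shell
# Forcing color makes ls write escape sequences even into a pipe, which
# is what broke naive scripts; --color=auto detects the pipe and doesn't.
mkdir -p /tmp/colordemo/subdir
ls --color=always /tmp/colordemo | od -c | head -n 2   # escape bytes visible
ls --color=auto   /tmp/colordemo | od -c | head -n 2   # plain text only
```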
By and large, a FreeBSD system looks less cool than a Linux one. So FreeBSD attracts the users who know they want stability above everything else.
In contrast to some opinions, I like this sometimes.
For example, would the world come to an end if FreeBSD came with neovim?
If the prompt out of the box showed pwd?
I know that some things are controversial (ahem systemd), but when learning a new system, little things matter and make your system popular.
(And, as a side rant, in contrast to some who like running Haskell on NixOS (which I actually like!) on an obscure chipset: popularity is good. If someone asked me which Unix to learn, I'd send them to Linux and not FreeBSD, since it's going to be much easier to find noob help online. Then that noob will go on to become a sysadmin, and he'll recommend Linux because he knows it and will be able to find others who do.)
For an end-user friendly BSD you may want to look at TrueOS (https://www.trueos.org/). They take FreeBSD and then add more sauce to provide a better user experience.
So it's making a comeback, at least in my 11,000+ person org. :D
hmmm.. Android, Chromebooks, smart TVs, routers, set-top boxes, NAS, etc.: these all ship Linux binaries, so I don't think it's a big issue.
If you do kernel space modifications which you want to keep as a 'competitive advantage', then going with something like FreeBSD instead of Linux would make perfect sense, otherwise I don't see why the license would matter much.
Too many letters in a *BSD, too scary. Linux is more toy-ish and thus more appealing as a name.
Plus the USL vs. BSD legal war, which shifted much of the community towards Linux.
Also the lawsuit with AT&T did not help.
1031 The FreeBSD Foundation
888 EMC / Isilon Storage Division
379 Microsoft OSTC
297 Solarflare Communications, Inc.
277 Dell EMC Isilon
221 DARPA, AFRL
170 Mellanox Technologies
144 Chelsio Communications
128 ABT Systems Ltd
90 Spectra Logic Corp
58 Smartcom - Bulgaria AD
45 Rubicon Communications, LLC (Netgate)
38 ScaleEngine Inc.
35 Citrix Systems R&D
GPL does not require contributions. GPL requires access to sources: not the change history, only the sources. Contributions happen because it's more profitable for a company (in the long run) to participate in the community and influence development than to be a passive actor.
My point is: the license has very little to do with market share in this case. It's more about being good enough and having a certain amount of luck when the right moment arrives.
The very first VPS provider, JohnCompanies, was built entirely on jail (and FreeBSD 4.x).
At the peak we had over a thousand FreeBSD jails running for customers all over the world.
In the end, fancy provisioning and fine-grained resource tuning (with products like Virtuozzo) won out. Although JC is still operating and still provides jail-based VPS.
The offsite backup infrastructure that was built for JC customers became a standalone company in 2006 and was named "rsync.net".
The term "VPS" had not been coined in mid-2001, so I made up the term "server instance", which didn't stick.
FreeBSD jails, Solaris Zones, OpenVZ, and Linux LXC were marketed as fundamental building blocks for improved consolidation and/or improved separation of concerns. These technologies could solve all these problems, but fundamentally they were sold as a transparent abstraction. "To end-users, jails operate and feel just like VMs or real hardware" they said.
Docker was different, docker was fundamentally sold as a higher-level product. Docker is not "just like real hardware", docker was a new way to think about deployment, a new way to think about pre-packaged building blocks.
Docker is operationally different.
The BSD community in general is anti-hype, vs. Docker having a for-profit company (dotCloud) behind it. Even its first version's website looked nice and had a lot of pretty graphics.
Marketing really does more than people think. Now, the following doesn't apply to Docker, but I've seen people with great technical understanding choose totally messy toy projects because they had a well-designed website with pretty images and lots of marketing.
I mean to some degree this also works for Docker in the sense that I've seen people having totally wrong expectations on what Docker does.
It's just what good marketing does: Giving people the impression that something is magic.
The BSD communities always had a strong no-hype stance. They didn't even have cool names, as they do now with bhyve, etc. They also had a strong "keep it simple" mindset, which counters the "there is magic behind it" effect. It's the sysadmin mindset of preferring boringness and no surprises.
The only other bigger open source project I know trying to be boring to some(!) degree is Go. But that only works, because you can always say "it's from Google" if you need to market it.
Of course there are others, but many of them are way less known.
The only other _somewhat_ new software I can think of that was considered cool without a big hype machine and/or a company pushing it from the beginning is Redis. Out of nowhere, I mean. Of course there's much cool stuff done by people who have already proven they can do amazing things.
But I am sure other people can come up with way more.
Separate kernel and OS is ideal for containers, IMO. What was thought to be a plus with BSD I believe turned out to be a weakness.
OpenVZ was popular enough to support the parent company Virtuozzo/Parallels/SWSoft (many name changes over the years) and the commercial Virtuozzo product is still sold: www.virtuozzo.com .
OpenVZ never being accepted into the mainline kernel is what eventually killed it, IMHO. Since the cgroups and other code underlying Docker/LXC etc. is in every kernel, it was only a matter of time until the in-tree default became accepted and then widely used.
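For a sense of what that in-tree code exposes, here's a hedged sketch of the cgroup v2 interface Docker/LXC build on (needs root and a mounted cgroup2 hierarchy; the group name and limit are illustrative):

```shell
# Create a cgroup, cap it at half a CPU, and move the current shell in.
mkdir /sys/fs/cgroup/demo
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max   # 50ms of every 100ms
echo $$ > /sys/fs/cgroup/demo/cgroup.procs          # move this shell into it
cat /proc/self/cgroup                               # confirm membership
```

Container runtimes do essentially this, plus namespaces for isolation, behind a friendlier front end.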
My only guess is that, like everything, Microsoft beats it for the reasons MS beats anything: legalese.
It is nice that there are multiple wrappers around bhyve - competition is good, and it shows a healthy community.
Take Google, which is rumored to be using the same kernel in their cloud, ChromeOS, and Android.
They could conceivably have the same kernel from IoT, wearables, phones, tablets, 2-in-1s, laptops, and TVs to the cloud.
Google now is using the container functionality in ChromeOS to enable Android. Now if they give access I can run my cloud service on a laptop or a tablet. Instead of spending a fortune for a Swift version and a copy in Java.
But I could also develop once and deploy everywhere. Google has containers like this on ARM and x86, and in their cloud on Power.
Now the containers are arch specific but not far from fixing that.
Google needs to allow a second SSD that is walled off from the system SSD and give us access for launching containers. We get such storage in something like the M3 with the rumored 16 GB Samsung Pros, but it is flash.
It is just not possible to do the same in BSD based on my very old experience. Has it changed?
FreeBSD jails were added in FreeBSD 4, which was released in 2000.
Docker (on Linux) is based on Linux cgroups, which were added in 2007.
Initial release of Docker: March 2013.
Initial support for Docker based on jails was added to FreeBSD in July 2015.
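For context, the primitive that timeline starts from is tiny; a minimal sketch on FreeBSD (run as root; the path and address are illustrative, and assume a base system extracted under the jail root):

```shell
# Start a throwaway jail and drop into a shell inside it.
jail -c name=test path=/jails/test \
     host.hostname=test.example ip4.addr=192.0.2.10 \
     command=/bin/sh
jls    # from the host: list running jails
```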
I really wish the Docker team had made FreeBSD a first-class citizen, considering the native ZFS support in FreeBSD. Currently the only thing Docker runs on natively is Linux. Even with the newest MacOS/Windows variants, it's still running in a hypervisor.
What is exposed inside Windows?
Also there is App Sandbox for writing safer applications, but it requires programming explicitly against it.
I guess the point I'm trying to make is that there will be no "native" Docker for MacOS. You'll always (or for the foreseeable future) have to start a Linux VM (maybe Windows?!) to host the processes.
As for Mac OS, I use it for Cocoa and iOS development, so no experience with containers as such.
But how is the author going to become rich and famous and be invited to all those conferences doing that? They need to drop this silliness and write the whole thing from the ground up! Get cracking, we want to see AT LEAST 30 000 lines of Go, or 10 000 of OCaml!