I don't use NetBSD, so perhaps I'm not in "the loop", and please forgive me if so, but:
- GCC 5.5 is only one major version higher than the oldest supported version; 6 or 7 would have been a better choice while you're banging out a whole new operating system release. 6 is still fully supported, and 8 was released early this year. I understand newer compilers can sometimes introduce regressions in the build process for something as large as an entire operating system, but surely they could have tested it and fixed any bugs that turned up?
- Similarly for Clang/LLVM 3.8.1 (Really? The oldest version of Clang that you can get on a modern Linux distro? Really?).
- OpenSSL 1.0.2 is only supported for another 17 months, 1.0.2k (the version they're shipping) is 3 versions older than the current 1.0.2o, and 1.0.2p is going to be released any day now. There are at least 4 CVEs patched between 1.0.2k and 1.0.2o; I hope they're applying those patches on top of 1.0.2k manually.
The comparison to "oldest clang/gcc you can get in a Linux distro" is not right; you can get a lot of different versions as packages. This refers to the base compiler, which is used to build everything. GCC 8.1 is available as a package, for example.
As for the base GCC versions, NetBSD is a little conservative when updating, but keep in mind that it's doing this on a lot of architectures, and problems arise. The complete transition to GCC 5.x was held back somewhat by a tricky mipseb-softfloat bug, and GCC 6.x (in -current) was held back by a VAX ICE. Newer GCC kills ARMv4-non-Thumb support.
If that is OpenSSL 1.0.2k it probably should be updated, yeah.
If it were free it wouldn't matter, but as you say it's held up progress on architectures people actually use.
- Copy existing compiler into gcc.old
- Add new compiler as "gcc"
- Switch architectures one by one
If one architecture is left behind it's not a big pressing issue; you have until the next GCC update. They're done once every 2-3 years hopefully, so the GCC VAX issue taking 6 months to resolve wasn't close to being a problem.
As for removing architecture support, as long as it builds without extreme intervention, the non-VAX crowd is going to leave it alone. The VAX code is mostly separate in its own directory so doesn't get in the way.
The people who care about VAX within NetBSD are very knowledgeable and it would be a shame to alienate them without a good reason. They've contributed a lot of non-VAX stuff too.
> USB stack rework, USB3 support added.
"The Linux kernel mainline contains support for USB 3.0 since version 2.6.31, which was released in September 2009". "FreeBSD supports USB 3.0 since version 8.2, which was released in February 2011.".
So they're getting USB 3.0 almost 10 years after it was released for Linux.
It was added to other operating systems as the hardware became more readily available, understandably it took some time. USB 3.0 is not simple to implement.
IIUC, macOS uses NetBSD's userland though it has FreeBSD bits in its XNU kernel, so there must be benefits to NetBSD beyond its portability that I am just not aware of. :)
Incidentally, hasn't Linux been ported (eventually) to more architectures than NetBSD has?
The kernel? Or a full-blown OS like Debian GNU/Linux? Official ports, or also unofficial ports?
If you're going for official Debian GNU/Linux ports, then no. If you mean Fedora (say you really wanna run Fedora), then no.
NetBSD isn't just a kernel; it is a full-blown, portable OS. The base system is practically the same on every port. So if you're familiar with NetBSD, the idea is that you can always run that familiar OS on another computer.
Linux and other FOSS software becoming popular makes using NetBSD a lot more convenient, not less. There's a top tier FOSS browser that will accept NetBSD related patches, several really good document editors, graphical tools. We can even take some driver code from Linux itself.
Indeed, you can't. xHCI subsumes all USB operation and doesn't need companion controllers, but presents a _very_ different interface. That's different from the USB1/2 transition where USB2 was a separate controller that could take over the ports from a USB1 controller, giving them back if a USB1 device is attached.
Some boards/chipsets do that with EHCI and xHCI to support operating systems without xhci drivers, but that's getting rarer.
I'm simultaneously disappointed that this is not (as far as I can tell) widely known information. It sounds like the kind of thing that could save you from a wild goose chase if you ever need to debug it.
As for it working so well, that's mostly on the EE side: they were _very_ careful to provide this hardware based routing with defaults that make sure that old OSes continue to work while new OSes (or new drivers for old OSes) can make use of the new controllers.
- USB-C power delivery
- USB-C alternate modes, such as DisplayPort and Thunderbolt 3
The USB 3.2 standard is backward compatible with USB 3.1/3.0 and USB 2.0. It defines the following transfer modes:
- USB 3.2 Gen 1x1 - SuperSpeed, 5 Gbit/s (0.625 GB/s) data signaling rate over 1 lane using 8b/10b encoding, the same as USB 3.1 Gen 1 and USB 3.0.
- USB 3.2 Gen 1x2 - SuperSpeed+, new 10 Gbit/s (1.25 GB/s) data rate over 2 lanes using 8b/10b encoding.
- USB 3.2 Gen 2x1 - SuperSpeed+, 10 Gbit/s (1.25 GB/s) data rate over 1 lane using 128b/132b encoding, the same as USB 3.1 Gen 2.
- USB 3.2 Gen 2x2 - SuperSpeed+, new 20 Gbit/s (2.5 GB/s) data rate over 2 lanes using 128b/132b encoding.
USB 3.2 is supported with the default Windows 10 USB drivers and in Linux Kernel 4.18.
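The payload bandwidth of those modes follows directly from the raw signaling rate, the lane count, and the line-code overhead (8b/10b costs 20%, 128b/132b only about 3%). A small sketch of that arithmetic:

```python
# Effective payload bandwidth for the USB 3.x transfer modes quoted above,
# computed from per-lane raw rate, lane count, and line-code overhead.

def effective_gbit(raw_per_lane_gbit, lanes, data_bits, total_bits):
    """Payload rate in Gbit/s after subtracting line-coding overhead."""
    return raw_per_lane_gbit * lanes * data_bits / total_bits

# Gen 1 lanes run at 5 Gbit/s with 8b/10b; Gen 2 lanes at 10 Gbit/s with 128b/132b.
print(effective_gbit(5, 1, 8, 10))      # Gen 1x1:  4.0 Gbit/s of payload
print(effective_gbit(5, 2, 8, 10))      # Gen 1x2:  8.0
print(effective_gbit(10, 1, 128, 132))  # Gen 2x1: ~9.7
print(effective_gbit(10, 2, 128, 132))  # Gen 2x2: ~19.4
```

This is also why the SuperSpeed+ modes moved to 128b/132b encoding: doubling the raw rate while keeping 8b/10b would have kept wasting a fifth of the link on line coding.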
A domino effect perhaps, or the result of finding a niche?
Often overlooked when discussing the performance impact of context switching: context switching also applies to the FPU. There are two modes in which the OS performs FPU context switching: lazy and eager.
“Lazy” FPU context switching leaves the previous context on the FPU until a different context issues an FPU instruction. This saves an unload on the FPU; since not all time slices require the FPU, you may see some performance gains under some application workloads.
“Eager” FPU context switching unloads the FPU context whenever a time slice is finished. On a new time slice, the FPU context is reloaded. While this constant reloading of context sounds more expensive, it is optimized in hardware and almost never noticeable on modern architectures.
By default, eager FPU is enabled in Linux. You can test its impact by passing the eagerfpu=on or eagerfpu=off boot flags.
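The tradeoff can be sketched with a toy save/restore counter. This is an illustrative model only (real kernels implement lazy switching with a hardware trap, e.g. the #NM exception raised when CR0.TS is set on x86), and the schedule and task names are made up:

```python
# Toy model of lazy vs. eager FPU context switching: count how many
# save/restore operations each strategy performs over a schedule.

def count_fpu_ops(schedule, uses_fpu, lazy):
    """Count FPU save/restore operations.

    schedule -- list of task ids in the order they receive time slices
    uses_fpu -- set of task ids that touch the FPU during their slice
    lazy     -- if True, defer the restore until a task actually uses the FPU
    """
    ops = 0
    owner = None  # whose context currently sits on the FPU
    for task in schedule:
        if lazy:
            # Lazy: do nothing at switch time; "trap" on first FPU use.
            if task in uses_fpu and owner != task:
                if owner is not None:
                    ops += 1   # save the previous owner's context
                ops += 1       # restore this task's context
                owner = task
        else:
            # Eager: save and restore unconditionally on every switch.
            if owner is not None:
                ops += 1       # save outgoing context
            ops += 1           # restore incoming context
            owner = task
    return ops

schedule = ["A", "B", "A", "B", "A", "B"]
print(count_fpu_ops(schedule, uses_fpu={"A"}, lazy=True))   # 1:  B never touches the FPU
print(count_fpu_ops(schedule, uses_fpu={"A"}, lazy=False))  # 11: pay on every switch
```

When only one task in the mix uses the FPU, lazy switching wins big; when every task uses it, the two converge, and eager's hardware-optimized paths (and immunity to the side channel mentioned below in this thread) take over.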
Kudos to the NetBSD team for enabling/disabling eager FPU based on FPU model instead. This approach makes more sense to me.
Actually this release enables eagerfpu on all Intel CPUs, because of CVE-2018-3665 (the lazy FPU state side-channel attack).
From the respective home pages, FreeBSD targets high performance and OpenBSD security, but it is not so clear to me what the focus of NetBSD is.
They differ in their development models, families of related operating systems (consider TrueOS, DragonFly, MirOS, et al.), ports and packages mechanisms, extent of kernel APIs, filesystems, and toolsets.
Just three out of many examples:
* NetBSD's and OpenBSD's "wscons" subsystem for kernel virtual terminals is significantly different to FreeBSD's "syscons"/"vt" subsystem.
* OpenBSD's packing list files for pkg are different to FreeBSD's manifest files for pkgng.
* On OpenBSD, /bin/sh is the Korn shell, which is also the superuser's standard interactive shell, and the base toolset contains things such as doas and rcctl. On FreeBSD, /bin/sh is the Almquist shell, the superuser's standard interactive shell is the C shell (and the "toor" user exists), things like sudo come from ports, and the command for manipulating rcconf files is sysrc. /bin/sh is the Almquist shell on NetBSD, and the PD Korn shell on MirOS.
> Why the name?
> The “BSD” in our name is an obvious recognition of our heritage as a derivative of 4.4BSD and 386BSD.
> Our contributors communicate primarily via email and Internet-based chat systems; many of us have never met each other in person. We also use a remote source code management system called CVS which enables a large number of developers to do independent work on the same source tree easily. We believe that the Internet was an enabling technology that made NetBSD possible. The “Net” in our name was thus chosen as a tribute to the Internet.
Also, just as an aside, NetBSD was released in 1993, long before network security had anywhere near the attention it has today.
describing remote software development as if it were a novelty
The base build system (build.sh), which is essentially Makefiles, is absolutely beautiful to work with; ditto for pkgsrc. They're "progressive" enough to include dtrace and work on neat security and kernel models, but have eschewed modern Linux-isms like ip(1) and systemd. Of BSD vs. Linux, my heart is definitely with the more traditional BSD. Of the BSDs, I feel Net is capable enough: simpler than Free, but more featureful than Open. The other interesting BSD would be DragonFly, which is really interesting, but I'm happy enough with Net that I'm not going to swap it out, and I don't need more (different) systems in my life right now.
I often cite Neil Young to describe my “ditch” computing.
One of my NetBSD machines is an UltraSPARC box. I've heard both NetBSD and OpenBSD devs say it's one of their favourite platforms, because being a big-endian 64 bit machine, it helps discover many false assumptions made in low level code.
I'd like to think there can be enormous payoffs to this kind of careful thinking, but I suspect it pays out sporadically and sometimes not at all. Such is the fate of any outlier or pioneer.
Either way, traveling "in the ditch" means you get a lot of weird looks from people cruising by on the latest bandwagon. :-)
NVidia on the other hand seems like a far harder problem than needing to patch some C code here and there. NVidia make the drivers that they make. AMD has been more open source friendly these days, which is good if you want 3D graphics, but doesn’t solve the CUDA issue.
I've got a lot of netbsd running machines. The one I use now is a Haswell CPU with nvidia graphics, and everything works well on it (even nvidia graphics!).
Not all machines are so good. I have a too new Dell XPS. It doesn't have graphical acceleration. But it was very cool to have the touchscreen working already (although with rough edges).
It's very easy to build and modify everything, it doesn't hide errors under a rug (syslog messages are legible, coredumps not disabled by default and so on), so I feel like I can tackle any problem I encounter on it.
Also, if you're the type that carries their long .xinitrc/.Xresources/.profile around, it's more comfortable than the friendly linuxes which require you to re-do everything but in dconf.
Openbsd user here:-)
Right now I'm running mwm, emacs, Firefox, NetBeans, a couple of PDF viewers and lots of xterms.
Someone once said:
I just need a bootloader for emacs :-)
I assume this means an x86 machine, correct?
OpenJDK 8 will build for arm as well but it doesn't have a JIT.
On the server side we had it on some old machine with a weird architecture just as a jump host.
The great thing is pkgsrc for package management. I believe you can use it on OS X by now, which was done by the people who make SmartOS.
I can't vouch for how optimized it is; but zfs is definitely there.
[Edit: the landisk platform covers various NAS boxes, so nothing of comparable obsolescence to pmax]
The EdgeRouter devices are MIPS and people are still using Loongson laptops.
Here's a tribute/introduction written by Derek Sivers:
As for my own experience, I don't know if I'm ready to give up Debian as my primary desktop OS, but I really like OpenBSD on my coding-or-surfing-on-the-couch laptop.
On the server side, I also use both (as VPS) for personal projects.
$PKG_PATH is set up but commented out in the .profile for root. I'm going for a light window manager and Seamonkey and a few bits and pieces to see how the land lies.
... Over 20 years ago. Over almost the exact same time period NeXTSTEP became OSX. Any similarities are probably more coincidental or due to on-going borrowing from each other than to do with the fact that one is forked from the other.
fmtutil [INFO]: Not selected formats: 29
fmtutil [INFO]: Total formats: 29
fmtutil [INFO]: exiting with status 0
pkg_add: no pkg found for 'tex-pgf-[0-9]*', sorry.
pkg_add: Can't install dependency tex-pgf-[0-9]*
pkg_add: Can't install dependency tex-pgfplots-[0-9]*
pkg_add: Can't install dependency tex-milog-[0-9]*
pkg_add: Can't install dependency texlive-collection-langgerman-[0-9]*
pkg_add: 1 package addition failed
For the most part you can just hit enter through the installer for any of them and have a working system. To set up a graphical environment you will need to do a little reading. I would generally say any of the BSDs are easier than Arch and Gentoo to set up but offer similar capabilities if you dig in. Slightly harder than Debian text installer, maybe 1:1 with slackware.
That was the gist of their complaint. I'm guessing that since these guys (one uses Arch) both worked their way through LF-Scratch last year, they're not up to more reading right now.
It's been a while since I installed NetBSD, but the last time, on-board sound wouldn't work without kernel driver patches. And hardware-accelerated OpenGL on a modern GPU was unsupported.
(That said, I used NetBSD as my desktop OS besides Slackware Linux from ~2001-~2005, and it is an exceptionally clean and nice Unix.)
I have only ever used NetBSD as a server system, so I have no idea what the hardware support on desktop systems is like.
1. Why did you choose it?
2. What are your impressions?
Why did I choose it? Because at the time, I was still living with my parents and needed to get Internet access working via a dial-on-demand ISDN card. The only OS for which I found a tutorial on how to do that was NetBSD.
What were my impressions? Compared to, say, Debian, NetBSD takes more effort on the side of the user/sysadmin. But once something was in place, it stayed in place. I also liked the clear distinction between the base system and third-party software installed via pkgsrc (which ends up in /usr/pkg). These days, NetBSD has binary packages for the most mainstream architectures, so it is less of a problem.
I liked the system - it was simple enough to get a realistic idea of what is going on under the hood, even for a Unix-newbie, but also powerful enough that it did not stop there. Also when a friend of mine got me a Sun SparcStation he had salvaged from his university's garbage pile, NetBSD was the only system I could get to boot on that machine.
NetBSD's strength is portability (its "ports", not to be confused with FreeBSD's ports collection). So any software written for it is going to hold portability in high regard. It's a good fit if you don't want to live in a monolithic world (example: the PC world went towards x86-32 in the 90s, then shifted to AMD64; ARM/ARM64 lead the embedded industry, though MIPS is also still used) and if you believe in postmarket options for devices shipped with a proprietary firmware/OS (though Linux is also strong in that regard). And if you like that RISC-V exists, software such as NetBSD helps its existence.
Yes, I had hardware on which, of recent software, basically only NetBSD ran; the alternatives were maybe a (broken/outdated) proprietary OS, or "Linux" (but some vague old port, or requiring patches, blah blah). It could be old/ancient hardware, proprietary firmware, obscure hardware. I ran NetBSD for example on the Sharp Zaurus, various SGI MIPS machines (on some of which Linux worked less well or not at all), DEC Alpha (at least a DS10L), and various Sun hardware. The original Sharp Zaurus firmware for that specific Zaurus was in Japanese, and the OpenBSD port I couldn't get to work. So I experimented with a lot of alternative firmware, mostly based on Linux.
Back then the package management was slightly different from the other two popular BSDs'. They work together a lot, but there are also fractures. For example, FreeBSD and NetBSD don't have unveil, but they do have PF these days. ZFS has been stable on FreeBSD for ages, yet on OpenBSD it isn't available and on NetBSD it's only in -CURRENT. Another difference could be the number of binary packages available, or the number of packages/ports (terminology differs per *BSD).
Another choice could be that the BSD license is more attractive than the more strict Linux-related licenses such as GPLv2 or GPLv3.
So yeah, in short, strong focus points of the 3 major BSDs:
Popularity (amount of software, size of community): FreeBSD
Security: OpenBSD
Portability: NetBSD
DragonFly: I've never used it, ever, so I can only speculate. I mean, I'm not sure the original focus still matters.
See also Wikipedia 
The strongest case today is probably for embedded use. build.sh is unlike any other OS build system, you can happily do development of a firmware on a Linux or OS X machine for instance and splat out results for a target device.
It's also one of the better choices for learning about OS development because of build.sh, small size, fairly clean codebase etc. And those skills are fairly transportable to Linux (which is a bit of a fire hose and probably not the easiest to start out on) or commercial operating systems like OS X or real time for a career path.
I learned a lot about linking and linker flags by working with it, since it didn't use the "standard way"; it used something simpler.
Is that still a thing? I've had netbsd on my weekend to-do list for years but just have never found enough time or motivation.
What's so great about netbsd?