The two things that loadable modules provide (runtime configuration and third-party proprietary code support) are not that valuable. I've always turned off loadable modules when I've built custom BSD kernels for servers, for exactly that reason. I get around the proprietary code issue by making sure the server I configure has hardware for which non-proprietary and/or source-available drivers exist. For servers that generally means just network and disk drivers, so it's pretty easy.
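For context, "turning off modules" just means compiling the drivers you need directly into a custom kernel. A rough sketch of the usual OpenBSD workflow on amd64 (CUSTOM is a made-up config name here; exact source-tree paths can vary by release):

```shell
# Sketch of a custom OpenBSD kernel build (assumed paths; CUSTOM is a
# hypothetical config name). Start from GENERIC and prune what you
# don't need -- everything left is compiled into the image.
cd /usr/src/sys/arch/amd64/conf
cp GENERIC CUSTOM        # copy the stock configuration
vi CUSTOM                # comment out drivers for hardware you lack
config CUSTOM            # generate the compile directory
cd ../compile/CUSTOM
make depend && make      # build the kernel
make install             # install the new kernel as /bsd
```

The tradeoff is exactly what's discussed above: a smaller, static image, at the cost of a rebuild and reboot whenever you want a driver you left out.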
Serial drivers, all the USB-related drivers, all the printer, network card, and graphics card drivers?
How big will the new kernel image be?
It also makes driver development and debugging much harder, as it requires a complete system reboot to test each change.
Also, what about filesystems, such as ZFS? Built in by default, disabled by default, or build and reboot into a different kernel every time I need something?
> Are they going to build all the supported HW device
> drivers all into the kernel image?
There has been a lot of both research and debate about various 'microkernel' strategies, where the core of the micro-kernel really only knows how to access the boot disk and load modules. Then, as hardware is probed and discovered, modules to support that hardware are loaded.
As a desktop machine kernel, micro-kernels allow changes in the hardware configuration to occur fairly easily, without a lot of user-visible fuss. But they do leave open the vulnerability that the module system can be exploited. Server systems generally change less frequently, and hardware changes beyond adding or removing a disk drive are often quite rare.
Things are always trade-offs.
I wonder if there is anyone doing mixed virtualization studies: using a hypervisor that is locked down for the lower layer and then booting the "user" machine for interactive use. This is a very powerful architectural choice and has been used for decades in IBM's mainframe OSes. Perhaps we'll see a consumer OS based on those principles at some point.
$ du -h /bsd
Or does OpenBSD not support those network filesystems either?
If smbd/nfsd are built in by default, wouldn't a single kernel image with Samba/NFS support compiled in present a larger attack surface, especially if I just want to run a simple LAMP stack?
Sorry for the OpenBSD noob questions. I'm just curious about the tradeoffs of the decision to turn off kernel modules completely.
They always have. No drivers were modules; modules were just there because NetBSD had them. Kernel modules were already not being used at all in OpenBSD, so they are removing an unused "feature".
Is there another mechanism?
If not, I see this as a step backwards.
LKM hasn't been worked on, improved, or maintained in any significant way, so... it went away.
There won't be any fanfare made about it officially, because the dev group didn't seem to care about it anymore. Anything un-needed or considered bloat in OpenBSD goes this route eventually, not with a bang but a whimper.
edit: Having looked at the article, it seems a bit sensationalized by Phoronix. Unfortunately the OpenBSD maintainers didn't care as much as this writer at Phoronix seems to have done.
OpenBSD is not Linux. The "story" is a non-event in OpenBSD world.