Very similar layout, but it used Objective-C instead of C++.
#define super IOService
#define fBlastIForgot [..]
Basically, think about the state of C++ and C++ compilers two decades ago. I wouldn’t be surprised if that factored into it.
As others have said, code size can also be an issue - it’s super easy to make templates produce insane amounts of code, especially with older compilers.
Eliminating code bloat with templates requires careful judgment about where to put type erasure and how to factor your code.
So even if the linker fails to de-dupe, you can still fix it manually pretty easily without giving up on templates entirely.
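To make that concrete, here's a minimal sketch of the usual factoring trick (names like `VecBase`/`Vec` are made up for illustration, nothing from libkern): all the real logic lives in one non-template class compiled once, and the template is just a thin layer of inline casts, so instantiating it for fifty types doesn't stamp out fifty copies of the machinery.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Non-template core: compiled exactly once, works on raw bytes
// plus an element size. This is where all the code weight lives.
// (Sketch only - it assumes trivially copyable elements.)
class VecBase {
public:
    explicit VecBase(size_t elemSize) : elemSize_(elemSize) {}
    ~VecBase() { std::free(data_); }
    void pushBytes(const void* src) {
        if (count_ == cap_) grow();
        std::memcpy(static_cast<char*>(data_) + count_ * elemSize_, src, elemSize_);
        ++count_;
    }
    void* atBytes(size_t i) { return static_cast<char*>(data_) + i * elemSize_; }
    size_t size() const { return count_; }
private:
    void grow() {
        cap_ = cap_ ? cap_ * 2 : 8;
        data_ = std::realloc(data_, cap_ * elemSize_);
    }
    void* data_ = nullptr;
    size_t elemSize_;
    size_t count_ = 0, cap_ = 0;
};

// Thin template veneer: per-type instantiations are only trivial
// inline casts, so there's almost nothing to duplicate.
template <class T>
class Vec : private VecBase {
public:
    Vec() : VecBase(sizeof(T)) {}
    void push(const T& v) { pushBytes(&v); }
    T& operator[](size_t i) { return *static_cast<T*>(atBytes(i)); }
    using VecBase::size;
};
```

The point being: even if your toolchain never merges identical instantiations, you chose where the duplication can happen, and it's confined to code that's basically free.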
But libkern predates C++11, so decisions made by that team at that time are largely obsolete and should be heavily re-evaluated rather than blindly followed.
You can’t really just say “update to a newer version of the language” when you have both API and ABI compatibility constraints.
For source you can deprecate APIs, etc so future versions would have an early warning the source changes would be necessary.
But that doesn’t help shipping kexts, for that you need ABI stability, which really puts the hammer on changing/updating the features that you use. Many of the C++ features cause exciting binary compatibility problems, and make it super easy to accidentally change the ABI :-/
That aside: yes, you totally can just update to a newer version of the language. If you're trying to maintain C++ ABI stability then your life is harder, yes, but it's no harder than it already is when you upgrade compilers or deal with people building with other compilers (and almost everyone ships C ABIs anyway to avoid this entire category of problems - extern "C" still works great in C++17). And you're still completely free to use newer features in the implementation itself, which doesn't impact API or ABI stability in the slightest.
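The extern "C" pattern mentioned above looks roughly like this (a hedged sketch - the `session_*` names and the opaque-handle design are invented for the example, not any real Apple API): clients only ever see a C struct pointer and plain functions, so the C++ behind it can be rewritten freely without touching the shipped ABI.

```cpp
#include <cstring>
#include <string>

// C++17 implementation details stay internal. Nothing in here is
// part of the shipped ABI; it can change in every release.
namespace impl {
struct Session {
    std::string name;  // free to use any C++ type internally
};
}  // namespace impl

// The stable surface: an opaque handle plus plain C functions.
// Mangling, vtable layout, and template churn never leak through it.
extern "C" {

typedef struct session_t session_t;  // opaque to clients

session_t* session_open(const char* name) {
    return reinterpret_cast<session_t*>(new impl::Session{name});
}

const char* session_name(session_t* s) {
    return reinterpret_cast<impl::Session*>(s)->name.c_str();
}

void session_close(session_t* s) {
    delete reinterpret_cast<impl::Session*>(s);
}

}  // extern "C"
```

Swap `std::string` for anything you like inside `impl::Session` and old client binaries keep working, because the only thing they link against is three C symbols and a pointer they never dereference.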
Just like everything else, it's a time/space tradeoff because it generates less optimal code in exchange for a smaller binary size.
These APIs are already deemphasized, so I wouldn’t be surprised if they were to deprecate/remove them altogether when they release the ARM version of macOS. They’ll probably do it with the update that introduces UIKit on macOS (as Craig Federighi said at this year’s WWDC) to divert the attention. Sneaky bastards, but their stuff still sucks the least ¯\_(ツ)_/¯
So in the future, we are going to tighten down access to the system hierarchy, the whole hierarchy down from /System and everything in there.
Another example of them sharing future plans was user space networking. I forget what year it was, but in the session they noted something about network kernel extensions (NKEs) going away and to use Network Extensions instead. NKEs weren't the best, but for Apple to spend all that effort recreating the 'same' thing in a new framework was odd. A visit to the labs and you were instantly told about the move to user space networking.
One last example. Apple ships a number of third-party mass storage kernel drivers in the default OS. Take a look at /Library/Extensions on a new install. This ensures that when you try to install or boot that new OS, you can see your drives. Apple likely needs to work with those third parties to make that happen.
I understand why it might appear sneaky but I don't think that's the case.
Still, disturbing that the URL includes "archive".
OTOH, they recently open sourced the iOS flip side of what's open on macOS. So who knows.
For example USB to Serial devices, or custom media devices, and more. I really don't expect kernel modules to go away.
I could see an argument where moving existing hardware kexts to user space is easier because IOKit uses the libkern C++ runtime. The OO design of IOKit may lend itself very nicely to the driver approach Barrelfish takes (http://www.barrelfish.org). The really hard one to move to user space would be third-party filesystems, mainly because of the dated VFS architecture used in *NIX systems. I could see Apple completely moving away from that at a future point too.
I sure hope you're right though. It hasn't happened yet thankfully!
I mean, who needs old existing drivers anyways on a new platform?
Surely C++ has way more implicit conversions than C, what with inheriting all of C's and having constructors default to converting?
I was referring to C implicit conversions that aren't valid in C++ code, like void* to other pointer types.
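Both points in miniature (a sketch; `makeBuf`, `Meters`, and `Feet` are invented names): the void* conversion that C allows and C++ rejects, next to the converting constructors that C++ adds unless you opt out with `explicit`.

```cpp
#include <cstdlib>

// C implicitly converts void* to any object pointer; C++ does not,
// which is why malloc calls need a cast when compiled as C++.
int* makeBuf(size_t n) {
    // int* p = malloc(n * sizeof(int));   // legal C, rejected by a C++ compiler
    int* p = static_cast<int*>(std::malloc(n * sizeof(int)));  // required in C++
    return p;
}

// Meanwhile C++ adds conversions C never had: any one-argument
// constructor is a converting constructor unless marked explicit.
struct Meters {
    double v;
    Meters(double v) : v(v) {}         // implicit: `Meters m = 3.0;` compiles
};

struct Feet {
    double v;
    explicit Feet(double v) : v(v) {}  // explicit: `Feet f = 3.0;` would not
};

double twice(Meters m) { return 2 * m.v; }  // callable as twice(3.0)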
Embedded development positions are not your typical developer positions, they require more intimate knowledge of system, of possible states and transitions, and of hardware internals. This results in less focus on actual coding skills.
Even if you are highly skilled, you won't be doing your company a favor if you're writing code that only 5% people can understand and contribute to. My understanding is that every team that writes C++ will restrict it to some subset to make the codebase more manageable.
OTOH I think namespaces can only lead to better and more modular architecture, I'm not sure where they should be avoided.
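For what it's worth, the modularity win is mostly that subsystem-local names stay short without colliding at global scope - a toy sketch (subsystem names invented):

```cpp
// Two subsystems can both own a `fits` without C-style prefixes
// like net_fits / disk_fits, and callers stay unambiguous.
namespace net {
    constexpr int kDefaultMTU = 1500;
    inline bool fits(int packetLen) { return packetLen <= kDefaultMTU; }
}

namespace disk {
    constexpr int kBlockSize = 4096;
    inline bool fits(int ioLen) { return ioLen % kBlockSize == 0; }
}
```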
Or maybe they just wanted developers to be "less creative".
I mean - I'm just guessing here, but I can at least see some technical reasons for these decisions.