Hacker News

I don't have anything to say technically, but this ignores the social elephant in the room, which is the interesting part of this story. We wound up with KMS because circa '07 AMD bought ATI and started to execute an open source strategy. That meant two of the three big graphics providers were open source friendly (ATI/AMD & Intel) vs one closed (Nvidia).

That was when the bell tolled for X11 (yes, yes, it is still tolling), as there was enough open source support to kill off UMS, which was really just a mechanism to keep binary drivers a safe distance from the kernel. With key components open sourced, the Linux graphics stack has been reorganising like some sort of enormous mudslide, with ripples and aftershocks that keep flowing out today, 15 years later. KMS was one part of that.

At some point Nvidia, the last interesting closed source holdout & throwback to a bygone era, will cave and open source their damn driver and we can close the book on one of the most interesting episodes in the history of the open source movement.




You're absolutely right about the KMS requirement for kernel drivers being available and the social aspects of that; when I wrote the article I sort of blithely assumed they were, because they have been for a long time (even in limited form, like nouveau). But KMS certainly wouldn't have happened without open source drivers that could do enough to require being in the kernel, and that was a product of surprising openness on the part of vendors.

I think it's too strong to say UMS was just a mechanism to keep binary drivers out of the kernel. As far as I know, XFree86 was doing UMS from its beginnings in the early 1990s, which was well before graphics vendors were paying attention to Linux or other free Unixes. There were probably a whole host of reasons that XFree86 used UMS, including that it wanted to be portable across the free Unixes (and not need to coordinate releases with any of them).

(I'm the author of the linked-to article.)


From what I remember at the time, there also was a culture shift from text-based first to graphics-based first. One of the complaints against UMS was that kernel oopses could not be displayed while X was running, because the kernel knew nothing about how to get text onto a graphics console, nor could it return the display to text mode by itself. Whether that was just a convenient rationalization or a driving factor, I don't know.

I also think that the widespread availability of EDID was a big factor in the change: what good does it do for the kernel to switch to graphics mode if it then requires extensive configuration before it can output anything useful? There may have been earlier interest in moving more graphics output control into the kernel, but manual scanline configuration would have prohibited deployment at large scale anyway.
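To illustrate the point about EDID: once the kernel can read a display's EDID block itself (on modern Linux it even re-exports it via sysfs, e.g. `/sys/class/drm/card0-HDMI-A-1/edid`), basic display identification needs no manual configuration at all. Here is a minimal, illustrative sketch (not from the thread) that decodes the three-letter PNP manufacturer ID packed into the first bytes of an EDID block:

```python
def parse_manufacturer(edid: bytes) -> str:
    """Decode the 3-letter PNP manufacturer ID from an EDID block.

    Bytes 8-9 hold three 5-bit letters (1 = 'A' .. 26 = 'Z'),
    packed big-endian after a reserved high bit.
    """
    # Every EDID block starts with this fixed 8-byte header.
    if edid[:8] != b"\x00\xff\xff\xff\xff\xff\xff\x00":
        raise ValueError("not a valid EDID header")
    word = (edid[8] << 8) | edid[9]
    # Extract the three 5-bit letter fields, most significant first.
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") + v - 1) for v in letters)
```

On a running system you could feed this the raw contents of a connector's `edid` file under `/sys/class/drm/`; for example, bytes `0x10 0xAC` at offsets 8-9 decode to Dell's PNP ID, "DEL".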


> At some point Nvidia, the last interesting closed source holdout & throwback to a bygone era, will cave and open source their damn driver and we can close the book on one of the most interesting episodes in the history of the open source movement.

Indeed, nVidia open sourced the kernel portion of their drivers a few weeks ago.

https://github.com/NVIDIA/open-gpu-kernel-modules

This is, at least for now, a somewhat limited release (it only supports Turing and newer, and is only really tested for Tesla etc., which shows where the pain point is), and it is not mainlined (yet? That is probably the long-term goal).


> Indeed, nVidia open sourced the kernel portion of their drivers a few weeks ago.

It's my understanding that this was more like an added shim layer in the kernel, and that it just communicates to the (still proprietary) code now in firmware. In other words the blob moved, but it's still there, just shimmed. Maybe somebody with more knowledge of this codebase can weigh in?

Certainly this pattern of responding to pressure to open source code by shimming the blob is quite common. It's very common in network gear that runs Linux, for example.


> It's my understanding that this was more like an added shim layer in the kernel, and that it just communicates to the (still proprietary) code now in firmware. In other words the blob moved, but it's still there, just shimmed.

There's a very important difference: that firmware runs on a separate processor, which, assuming a working IOMMU (and the firmware has to assume a working IOMMU, since the IOMMU is controlled by the main processor), can only influence the GPU itself. Having all the code running on the main processor be open has a large benefit, and gives a lot of control back to the user. It also allows for faster evolution of the platform (for instance, the whole EGLStream vs GBM debate would be less of an issue if the kernel code were open), and even for completely replacing the main processor itself (for instance, does the proprietary NVIDIA driver have a RISC-V version of its blob?).


Note that the NVIDIA firmware runs on RISC-V processors within their GPUs:

https://lwn.net/Articles/894893/


This is a severe misunderstanding of what the firmware is doing, or capable of doing. The kernel driver for NVIDIA handles memory transfers and DMA, command buffer submission & scheduling, watching and triggering fences, and register setting for some other components like the display controller. These responsibilities are the same for any GPU, firmware-enabled or not.

The userspace driver is where most of the secret magic is on any GPU, it's in charge of building the command buffers that are submitted. All the work to translate GL/Vulkan/Direct3D takes place in the userspace driver, and it's the majority of the codebase by far. The kernel driver being open-source means most of the hardware-communication stuff is upstream.

A combination of firmware and hardware interpret the command buffers and run the rest of the GPU pipeline. Firmware is used for power/clock management, managing the on-board GPU cache, queue balancing, running the front-end, and various other bits & bobs. None of this would replace any Linux or Windows driver code, it replaces dedicated hardware.


All the code running in the kernel with kernel privileges and visibility is at least open source now, though. The GPU firmware can't be audited, no, but the sandbox it's running in can be. Which is a huge improvement.


That's the thing, though: not only is it not an improvement, it's not even actually different than before. This pattern of a thin shim in the kernel is just veneer, to allow the vendor to claim they're now open source, and quit bugging them about it please.

I once put this exact argument to a network vendor's rep at a conference, after they'd introduced a really thin shim like this, with some fanfare. You know what his answer was, live on stage for all to hear?

"but it's open source"

If open sourcing proprietary code is the goal, as it has always been mine, then this sort of thing is very much a step backwards.


It is different, though. There's now zero Nvidia proprietary code running in ring 0. Does it meaningfully get you closer to a "true" open source driver? Not really. Is it still a significant improvement though? Absolutely.

This also opens the door for any other kernel (like the BSDs) to speak to Nvidia's firmware without needing to do any reverse engineering. It also gets Nvidia out of the way when it comes to adopting new stacks like DRM-KMS: they are no longer a blocker there, since that's all handled by the now open source "shim" (which is still more than a shim).


Don't worry, they've achieved both their goals: they lowered their maintenance costs by opening and mainlining the few kernel bits that depended on the constantly breaking kernel APIs and don't contain anything potentially interesting to competitors, and they got lots of 'shills' (probably the wrong word, but I don't know a better one) who didn't look into it too deeply and now spread the great news for free that Nvidia has fully opened its drivers and that the famous Linus gesture no longer applies.


It's a step forward for nouveau, because it can now use the same proprietary firmware (which is now redistributable) as the proprietary driver, which means it can reach the same clock speeds as the proprietary driver.

It's a step backward too, because the firmware is now signed and therefore unmodifiable, but that was a problem before the recent code release as well.


Intel runs all their stuff in a fat firmware blob and everyone considers it an "open strategy".


Can you be more specific?


> At some point Nvidia, the last interesting closed source holdout & throwback to a bygone era, will cave and open source their damn driver

I hope so, but I don't think it's likely. It's all about money, and it seems like for them there's no benefit and only downsides, such as making it easier for other companies to sue over patent infringements, making it a little easier to clone "their" technologies, having to rearrange their development processes, etc. (All of these things would be good for the world; my question is why it would be good for Nvidia.) Also, I don't think they really feel any pain at the moment. They've got their shim.



