Hacker News

I thought CPUs did have binary blob drivers, both in the BIOS and on the operating system.



A driver is always required in order for the OS to speak to hardware. That's essentially what a driver is: an interface between software and hardware.

With CPUs, the code that needs to run in the host OS (the driver) is fairly simple code that exposes the CPU hardware more or less as it is: practically speaking, everything the CPU itself can do, the compiler can output code to do directly. The output of the compiler (e.g. gcc) is sent more or less as-is to the CPU for execution.

With a GPU, we have a large chunk of proprietary code running in the host OS (the GPU driver), which provides a 3D API interface, such as OpenGL or Direct3D, to the GPU hardware. There is a huge difference between what is sent to this driver code, and what the driver sends to the GPU hardware.

Applications submit, e.g., OpenGL calls to this driver; the driver compiles these OpenGL calls into code that will execute on the GPU, and sends it to the GPU. So the GPU driver essentially acts as a closed source compiler that translates OpenGL/Direct3D into whatever intermediate language the GPU accepts, which the GPU may further compile into something its processors can execute.
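Loosely, the driver's role resembles runtime compilation of submitted source strings. A rough analogy in plain Python (this is not GPU code; the `pixel` function is just a stand-in for a shader, and the names are illustrative only):

```python
# Analogy only: the GPU driver receives shader source at runtime and
# compiles it to device code, much as Python can compile a source
# string to executable bytecode at runtime.
shader_like_source = """
def pixel(x, y):
    return (x * 0.5, y * 0.5, 0.0)  # stand-in for a fragment shader
"""

# "Driver" step: compile the submitted source into executable form.
code = compile(shader_like_source, "<submitted-shader>", "exec")
namespace = {}
exec(code, namespace)

# "Dispatch" step: run the compiled program on some input.
print(namespace["pixel"](1.0, 2.0))  # (0.5, 1.0, 0.0)
```

The point of the analogy is only that compilation happens at run time, inside a component the application doesn't control.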

So for a CPU, the compiler does most of the work, while for a GPU, the driver does most of the work, and actually functions as a closed source compiler. On top of that, the driver handles memory management, automatically allocating memory according to which OpenGL calls are executed.

Imagine if Intel provided a CPU that you could only use to execute Python code. It would require a closed source driver to work. This closed source driver would compile Python code submitted to it into some unknown instructions that execute on this CPU. It would also have exclusive control over an area of memory, portions of which it would automatically allocate to store data for Python variables. That's essentially what Nvidia and AMD offer, except the hardware is a processing unit that comes with its own RAM, and the language is OpenGL/Direct3D rather than Python.


Wasn't this approach taken for good reason, though? So that the effort to get code running on multiple GPUs stayed reasonable? Remember the days when games would only support select GPUs?

Sure, you could have a standard like i386. But isn't there a lot more innovation happening in GPU architectures, making this very difficult?


GCC supports waaaaay more different CPU architectures than these drivers do for GPU architectures. This argument has no merit. The only purpose for these closed architectures is vendor lock-in.


And every vendor has an HLSL and a GLSL shader compiler. You also usually don't ship ten different binaries for ten different architectures, which you would need to do for GPUs.


In this model I think you'd distribute something in an intermediate bytecode, like .NET does w/ MSIL, and then the target system would be responsible for precompiling.
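That ship-an-intermediate-form model can be sketched in Python, whose bytecode plays a role loosely similar to MSIL (the serialization step here uses `marshal`; a real GPU driver would JIT the intermediate form to native code instead of interpreting it):

```python
import marshal

# "Ship" step: compile source once into an architecture-neutral
# bytecode object and serialize it, as a vendor might ship MSIL
# or SPIR-V instead of per-architecture binaries.
source = "def add(a, b):\n    return a + b\n"
shipped = marshal.dumps(compile(source, "<shipped>", "exec"))

# "Target" step: the receiving system deserializes the intermediate
# form and executes it; this is where precompilation to the local
# architecture would happen in the GPU case.
namespace = {}
exec(marshal.loads(shipped), namespace)
print(namespace["add"](2, 3))  # 5
```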

Edit: just saw this:

https://news.ycombinator.com/item?id=9140001


This is the rhetoric GPU vendors will use to justify their behavior, of course, but $billions and modern technology can address this if AMD and Nvidia feel that it is important.


> Sure you could have a standard like i386.

We're not talking about standardizing hardware (the ISA), but the API: Vulkan. It really doesn't matter in which form your hardware processing unit comes, as long as its driver accepts SPIR-V and can execute it on said processing unit.
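SPIR-V is a fixed, documented binary format, which is what makes it workable as a standard interchange point: every module begins with a five-word header whose first word is the magic number 0x07230203. A minimal Python sketch of checking that header (the module bytes are hand-built for illustration, and little-endian word encoding is assumed):

```python
import struct

SPIRV_MAGIC = 0x07230203  # defined by the SPIR-V specification


def is_spirv(blob: bytes) -> bool:
    """Check the magic number in the first header word of a candidate module."""
    if len(blob) < 20:  # header is five 32-bit words
        return False
    (magic,) = struct.unpack_from("<I", blob, 0)
    return magic == SPIRV_MAGIC


# Hand-built header words: magic, version 1.0, generator 0, ID bound 1, schema 0.
fake_module = struct.pack("<5I", SPIRV_MAGIC, 0x00010000, 0, 1, 0)
print(is_spirv(fake_module))  # True
```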


Apropos your hypothetical scenario of Intel imposing restrictions: Intel has been considering [0] dropping OpenCL support on the Xeon Phi. Which is exactly that kind of asshole move. Not to say that Nvidia or AMD are good guys in this respect...

[0] https://plus.google.com/105097056044353520580/posts/81ddzBqq...


Are you referring to the microcode updates that operating systems apply on startup? Those are binary blobs, but they're only to correct CPU errata.


I'm not really sure what exactly they are. I know I've had to update my BIOS before to be able to use newer CPUs that still fit the old socket [1]. I know I've seen Windows updates for both the motherboard's chipset and for the CPU. But what difference any of that has made, I don't know.

[1] Actually, that particular instance was a complete pain in the ass, because I had bought all of the parts new, but the motherboard didn't have the latest BIOS on it, and I didn't have a compatible CPU on hand. This was also before the liberal return policies that online vendors now have, so I ended up just ordering a different motherboard completely.


You seem to be mixing up different concepts.

The BIOS runs outside of the OS and helps all the parts of your computer speak to each other.

This is why the process is a bit more involved than just running an update from your OS's update manager.

The "binary blob" that's referred to when talking about GPUs is the huge piece of software that you have to download and install in order to let your graphics card use all of its features. Some graphics cards won't even use a display's full resolution without the correct driver.

This is different to the CPU in that there is no big downloaded driver sitting between you and it.

If you really want to understand where everything sits, you should check out a book or search for information on Computer Architecture.


> You seem to be mixing up different concepts.

This is oversimplifying, but my understanding is that graphics programs like games send data and instructions as a job into an operating-system-managed queue for the CPU, which then has to quickly decide what to do with it. GPUs function as a sort of co-processor, in that the CPU offloads some or all of the instructions/data of jobs where GPU processing is indicated in the job instructions. This is why all GPU processing involves CPU overhead, and also why GPUs are somewhat bottlenecked by this whole process.

It's my impression that current trends like Mantle, DX12 and Vulkan are trying to reduce the amount of work the CPU has to do to process jobs intended for the GPU. But they can never really eliminate the role of the CPU, even if something like an ARM chip is embedded on the GPU board to handle most of the processing the CPU would have done.

tl;dr - CPU is what makes the computer happen but the GPU is like a dedicated graphics co-processor.

Feel free to correct me if I'm wrong.


I took the post to which I originally replied to be more concerned about the proprietary nature of the driver than about its size. The BIOS is, in essence, a simplistic set of drivers, even a basic operating system. My point was that the comparison to CPUs, with regard to not needing proprietary drivers, was not apt: on most systems there are certainly several layers of proprietary code between you and your CPU.


There is actually a binary blob for processors that the BIOS is responsible for loading. The biggest difference is that you aren't responsible for downloading it, as it is usually included with your motherboard (as the grandparent says). You can find CPU binary blobs in the open-source coreboot BIOS, much like you can find the GPU binary blobs in the Linux kernel.



