Everyone who thinks that writing great graphics drivers can be a spare time activity is delusional.
The fact that we have Android with Gralloc (which, compared to DRM, is, well, a joke), Ubuntu with Mir, others trying out Wayland, and folks still stuck on X11 makes this all so much more complicated than it needs to be. SteamOS is rather terrible in this regard too, which is a shame, because Valve is trying to do the right thing with Vulkan, but SteamOS is just not a well-put-together distro, at least right now.
It's not just a driver-model problem, it's the politics of it all. Short of Google adopting DRM instead of Gralloc (or Gralloc gaining all of DRM's features, effectively becoming DRM, and replacing it on the desktop), there's probably little chance of unifying all the drivers under one coherent umbrella.
I'm sad that there is no alternative to X that works with `legacy` graphics cards like mine (Radeon HD 4500). But to be honest, I'm personally just concerned about the already-mentioned security problems with Xorg.
Doesn't look like something you'd necessarily need to know about unless you were developing for the GPU on Linux.
Unfortunately, only the Free Software drivers fully support it; AMD is planning to switch its proprietary drivers to it (using their new amdgpu DRM driver), and Nvidia doesn't use it on desktop at all, but will likely at least implement PRIME buffer sharing for Optimus at some point.
There are various reasons for the current situation. For example, Nvidia's OpenGL drivers are simply the best (on both Windows and GNU/Linux), so they had little reason to play nice with others, especially given that their mode-setting code used to be a bit better than what DRM did; that's not the case anymore, and hasn't been for a while, but companies need financial incentives to rewrite parts of their working codebases.
And even if you get all of the desktop GPUs on top of DRM, you'll still have Android and mobile GPUs. Certain mobile GPUs do work with DRM, but that level of support is usually embarrassing and mostly done to be able to say "we did" rather than to actually support any useful features, which is not completely unreasonable, because DRM buys you almost nothing on Android right now, so why bother?
Now, having said that, WDDM, particularly 2.0, does do some things that DRM cannot. But DRM is not far behind (and is capable of certain things WDDM cannot do), and the real travesty is that not all GPU drivers on Linux (both GNU/Linux and Android) use it.
The perfect Linux (again, both GNU/Linux and Android) graphics stack would be: a DRM kernel driver (not only command submission but memory management and mode-setting), a Vulkan user-space driver (Vulkan will become basically what Gallium is right now, i.e. a common layer on top of which we'll implement other, more friendly graphics APIs), and Wayland.
DRM solves multi-GPU sharing, syncing, recovery, and memory management; gives us a central place to manage security (although with GPUs that's a sort of "interesting" topic which is a long discussion in itself); and does some rudimentary scheduling (which is, as mentioned previously, also "interesting"). Vulkan, as a side effect of the fact that it's so tight and will come with a conformance framework, will make drivers a lot more predictable and stable. And the Wayland FAQ explains why it's a neater choice than X11.
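To make the kernel-side piece of that concrete, here's a minimal sketch of poking a DRM/KMS device through libdrm; it only enumerates connectors, and the /dev/dri/card0 path is an assumption, not something a real compositor would hard-code:

    /* Minimal KMS enumeration sketch using libdrm (build with -ldrm).
       Illustrative only: a real compositor would go on to pick a mode,
       create framebuffers and call drmModeSetCrtc(). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* device node is an assumption */
        if (fd < 0) { perror("open"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "not a KMS device?\n"); close(fd); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
            if (!c)
                continue;
            printf("connector %u: %s, %d modes\n", c->connector_id,
                   c->connection == DRM_MODE_CONNECTED ? "connected" : "disconnected",
                   c->count_modes);
            drmModeFreeConnector(c);
        }

        drmModeFreeResources(res);
        close(fd);
        return 0;
    }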
Unfortunately, while Google will adopt Vulkan, I doubt they'll have enough good will and reason to drop SurfaceFlinger and Gralloc in favor of DRM and Wayland. So we won't likely get to that stack anytime soon.
Very smart, knowledgeable, wise, persistent, patient, clever, prudent people have been able to manage organizations with different concerns, priorities, and goals.
On a rolling release like Debian Testing, with well-supported hardware like the Intel Ivy Bridge integrated GPU, WebGL has always worked beautifully.
The trouble is that it's a sort of retrofitted check. It works surprisingly well, but being added to a 20+-year-old OS and a 40+-year-old design, it can't guarantee being correct. The traditional, historic process boundary on UNIX has been user accounts / UIDs. Desktop Linux needs to figure out a way to do what Android does, and run every application with a separate UID, including the ability to have private files that are unreadable to other applications.
The challenge is defining "application". Android, being a greenfield platform, could define the interaction between applications. On desktop Linux, it's obvious that, say, a web browser and a PDF viewer should be different applications. The two processes in a "find | grep" shell pipeline probably should count as the same application. In between, it's pretty blurry, because we've had 40 years to develop patterns that don't account for this isolation ever happening.
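To make "a separate UID per application" concrete, here's a rough sketch of a launcher dropping to a per-app UID before exec'ing the app; APP_UID, APP_GID, and the binary path are hypothetical, and real Android does a lot more than this (SELinux labels, namespaces, etc.):

    /* Sketch of a launcher running an app under its own UID, Android-style.
       APP_UID, APP_GID and the binary path are hypothetical. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define APP_UID 10042   /* hypothetical per-app UID */
    #define APP_GID 10042   /* hypothetical per-app GID */

    int main(void)
    {
        /* Drop the group first, then the user; once setuid() succeeds we can
           no longer get the old privileges back. */
        if (setgid(APP_GID) != 0 || setuid(APP_UID) != 0) {
            perror("drop privileges");
            return 1;
        }

        /* From here on, files owned by other app UIDs with mode 0700 are
           simply unreadable to this process. */
        execl("/opt/apps/pdfviewer/run", "pdfviewer", (char *)NULL);
        perror("execl");   /* only reached if exec failed */
        return 1;
    }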
Yes, that typically would be done by reusing libraries, not by starting processes, but modern browsers run a separate process per tab, and some run Flash in a separate process. Running a PDF viewer in a separate process would be a logical extension.
Unfortunately, it does absolutely NOTHING with regard to OpenGL isolation :( (That is, if you give programs permission to talk to the graphics card, which is denied by default...)
Have you looked at OpenGL virtualization e.g. Virgil (https://virgil3d.github.io/ , https://www.kraxel.org/blog/tag/virgl/) for that problem? The idea is to use Gallium to give them just enough access to get host 3D acceleration, but not direct access to the graphics card.
GO.COM contained no program bytes at all – it was entirely empty. However, even though GO.COM was empty, it was still a valid program file as far as CP/M was concerned (it had a directory entry and a file name ending in .COM), so the CP/M loader – the part of the OS whose job it is to pull programs off disk and slap them into the TPA – would still load it!
So, how does this help? Well, using the scenario above:
- the user exited WordStar
- the user ran DIR (or whatever else they needed) and at some future point would be ready to re-run WordStar
- the user now 'loaded' and ran GO.COM
- the loader would load zero bytes of the GO.COM program off disk into the TPA – starting at address 0100h – and then jump to 0100h – to run the program it just loaded [GO.COM]!
- result – it simply re-ran whatever was in the TPA when the user last exited to DOS – instantly
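A toy C sketch of the effect, with a pretend 64K memory array standing in for the TPA (this is not real CP/M code): loading an empty file copies nothing, so whatever the previous program left at 0100h is exactly what runs again.

    /* Toy model of the CP/M loader behaviour described above.
       mem[] stands in for the machine's RAM; nothing here is real CP/M code. */
    #include <stdio.h>
    #include <string.h>

    #define TPA_START 0x0100

    static unsigned char mem[0x10000];   /* pretend 64K address space */

    /* "Load" a .COM image into the TPA; a zero-length image changes nothing. */
    static void load_com(const unsigned char *image, size_t len)
    {
        if (len > 0)
            memcpy(&mem[TPA_START], image, len);
    }

    int main(void)
    {
        unsigned char wordstar[] = { 0xC3, 0x00, 0x20 };   /* stand-in for the last program run */
        load_com(wordstar, sizeof wordstar);
        printf("TPA after WordStar: %02X %02X %02X\n",
               mem[TPA_START], mem[TPA_START + 1], mem[TPA_START + 2]);

        /* GO.COM is zero bytes long: the "load" copies nothing, and jumping
           to 0100h would simply re-run whatever is still sitting there. */
        load_com(NULL, 0);
        printf("TPA after GO.COM:   %02X %02X %02X\n",
               mem[TPA_START], mem[TPA_START + 1], mem[TPA_START + 2]);
        return 0;
    }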
It also wasn't unusual to try and recover data from memory after a program crashed by launching a small program that dumped memory to disk. 100% reliable? No, but sometimes it was the best one had.
SAVE 0 X.COM
gdb, when using it to attach to an already running process.
And I agree with the author. X11 has aging issues.
The whole framebuffer handle management of Wayland is a huge step in the right direction, but I despise Wayland's input and inter-client communication model; not to say that X11 was any better, but whenever I code against Wayland, that part just feels wrong. Also, the Wayland devs' obsession with getting VSync right has its own share of issues (especially for low-latency graphics, like you need for VR).
- Clients only provide complete buffer updates rather than drawing onto the front buffer, fixing tearing for all non-OpenGL apps
- Each monitor can have its own set of buffers and sync signal (technically the Wayland protocol doesn't specify this, but it makes it possible, and Weston does it correctly)
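For anyone who hasn't touched the client API, here's a minimal sketch of what "complete buffer updates" looks like in practice; it assumes a wl_surface and wl_buffer already exist and skips all the setup a real client needs:

    /* Sketch of the Wayland commit model: the client hands the compositor a
       finished buffer and never scribbles on anything currently on screen.
       The surface, buffer and dimensions are assumed to exist already. */
    #include <stdint.h>
    #include <wayland-client.h>

    static void present_frame(struct wl_surface *surface, struct wl_buffer *buffer,
                              int32_t width, int32_t height)
    {
        wl_surface_attach(surface, buffer, 0, 0);          /* which buffer to show */
        wl_surface_damage(surface, 0, 0, width, height);   /* what changed         */
        wl_surface_commit(surface);                        /* apply atomically     */
    }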
Wayland, unfortunately, seems to be the type of project that has been a year or two away from usability since its first release in 2012. I suspect we'll be stuck with X11 for a while yet.
I used Wayland/Weston for a full work day about a year ago to try it out. It really was very nearly ready. I can't quite remember what problems I ran into, but I could do my work (as a web developer) with very few real problems. Probably it's time to look at it again.
"Bandwidth Limit Exceeded
The server is temporarily unable to service your request due to the site owner reaching his/her bandwidth limit. Please try again later."
Well, what it does is that the compositor (DWM) keeps a (small) backlog of presented window framebuffers (think GL SwapBuffers adding to the queue), and whenever a screen's VSync comes along it blits the most recent picture presented to it. This has the effect that if you have a window spanning multiple GPUs' outputs, they may not all show the same frame, because the display refresh frequencies will beat (even if you set the same refresh frequency on all outputs, different GPUs aren't driven by the same clock, so you'll get a little bit of clock skew; and video clocks are usually not of the low-drift type, because for video it doesn't matter if you're off by a few ppm). If you need synchronized video sync, vendors are happy to sell you genlock-able cards for $$$.
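A toy sketch of that "newest completed frame wins" policy, purely illustrative (this is not DWM's code, and the data structures are made up):

    /* Illustrative only: per output, at each vblank, show the newest frame the
       client has finished; older queued frames are simply skipped. */
    #include <stddef.h>
    #include <stdint.h>

    #define BACKLOG 3

    struct frame {
        uint64_t present_time;   /* when the client queued it */
        const void *pixels;      /* finished image, never drawn on after queueing */
    };

    struct window_queue {
        struct frame frames[BACKLOG];
        size_t count;
    };

    /* Called from each output's vblank handler. Each handler fires on that
       output's own clock, so two GPUs' handlers drift relative to each other
       and can pick different frames for the same window. */
    static const struct frame *pick_frame(const struct window_queue *q)
    {
        const struct frame *best = NULL;
        for (size_t i = 0; i < q->count; i++)
            if (!best || q->frames[i].present_time > best->present_time)
                best = &q->frames[i];
        return best;   /* NULL: nothing queued yet, keep showing the previous image */
    }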
In the end I spent £30 on a second GPU. Problem solved... ish. XRandR still didn't work, and GNOME3 switched to fallback mode, which is basically GNOME2, i.e., a totally different window manager and a totally different shell.
I have no opinion on the performance on Windows vs that on Linux (not a concern, hence the £30 GPU...) but this whole situation was certainly a lot less hassle on Windows ;)
I definitely use Linux primarily for other stuff these days, though, so it's not often I'm setting up a new graphics card with Linux anymore.
Because I did spend such hours back with Slackware 2.0.
It is quite possible that one still had to edit modelines by hand at that time, but it really wasn't long before the modeline DB came out and you could just plonk in canned values with a good expectation they would work untouched. Certainly after that time I never had any difficulty. About the only thing I ever did was follow instructions to modify a couple of numbers for monitors that had bizarre refresh rates.
I think that this would probably fall pretty close to the "20 years ago" time frame (August 1995). Slackware 3.0 came out in November of that year. 2.0 of the kernel came out in 1996 and I'm quite sure that I wasn't spending any time editing modelines then.
If you wanted total plug and play without any configuration at all, I think you would have to wait until 2004, when Ubuntu was first released. So while the original statement that it took 10 years to get video drivers working is completely wrong, I think it would be fair to say that it took 10 years to get an install experience equivalent to Windows on that front.
It wasn't just getting the monitor refresh rate working.
I started using 3D acceleration when Compiz/Beryl was released (2006), with an Nvidia card. I was playing World of Warcraft under Wine at that time too. The only problem I had was that when I updated the kernel I had to remember to recompile the drivers under Debian. I switched to Ubuntu for that reason (because then I didn't have to do anything).
Eventually I retired that and went with an Intel board. It worked 100% but was dog slow at the time (pre-2010 Intel drivers were quite good at 2D acceleration but abysmal for speed at 3D). Compiz worked perfectly, the only problem was a pretty big hit in frame rate for WoW.
I replaced that with a Radeon card, which worked perfectly under Catalyst. I started hearing rumours that the free software drivers were getting good performance numbers and switched to those. 2D was amazing (dramatically better than Catalyst) and 3D was slightly slower, but comparable.
Finally, I replaced that with a few laptops running Intel hardware again. By this time (2012) the 3D performance had dramatically improved, and I'm quite happy with it. Obviously not a gamer setup, but more than adequate for casual gaming.
Before 2006, I think you were pretty much stuck with Nvidia and proprietary drivers if you wanted 3D. Again, I never heard of anyone who had real problems other than having to recompile the drivers if you were on a distro that didn't do it for you.
I actually worked with Gavriel State at Corel before he started up TransGaming. I thought he was completely nuts to do it because at the time there was almost nothing that ran under Wine properly. That was 2001. Honestly, if you were trying to get 3D working on Linux, you were mostly doing it just to say you did it.
I'm not going to deny that some people had problems with video cards. I often had to jump through hoops to get random hardware at work to give me nice displays. But if you were careful to buy supported hardware, there were very few problems.
To be fair, I did a lot of Windows development in the late 90s and early 2000s and I had at least as many problems with video drivers under Windows. The difference was that on Linux I would have trouble with new hardware, while with Windows I would have trouble with old hardware. The other big difference is that with Linux, you could almost always get something to work so that you could hack on it. With Windows, getting to a point where you could download updated drivers could be unbelievably painful.
a. Graphics requires software and hardware to be integrated
b. Open Source communities don't work where you are driving designs in software and hardware - they are uncoordinated
I can't comment on (a). In terms of (b), you might be right about "hackers at night" open source. But a lot of open source is 'professional', so a graphics company could solve this by:
1. Graphics companies provide hardware and information under NDA to specific Open Source developers.
That gets you enablement but not necessarily a "design" as you can't direct their work.
2. The hardware companies hire Open Source developers.
That way you have either all, or a big chunk, of the developers in a specific area. As employees they will work together on the company's priorities.
3. The hardware companies set-up a foundation.
Then the foundation co-ordinates and works on the priorities of the members. There are sub-groups of the Linux Foundation that work like this - for ARM, Linaro is an example.
There's a meta-comment that talks about the economics of this area - that's a more significant challenge than the organisational ones.