
Maybe it’s time to talk about a new Linux Display Driver Model - epsylon
http://www.yosoygames.com.ar/wp/2015/09/maybe-its-time-to-talk-about-a-new-linux-display-driver-model/
======
doesnotmatter
Yea, that's cute, except that DRM can handle pretty much all of it. You can
split hairs and complain about GPU scheduling which is inherently rather
difficult, because scheduling at the command-queue level is a problem much
like the halting problem. The real issue isn't that we don't have the pieces
we need but rather that we can't get all the players to agree on using the
same ones. On Windows there is one entity (Microsoft) that can impose the WLK
(Windows Logo Kit) tests: unless you pass them you won't be certified. On
GNU/Linux, "a working driver"
can be anything from "not catching on fire on boot" through "actually brings
up display" to "oh, hey a textured triangle!". And I get it, everyone is
frustrated, because ultimately displaying a bunch of pixels seems trivial,
that is, until you mix in politics. You have NVIDIA, AMD, Intel and the
community at large all pulling in different directions. With GNU/Linux
graphics support having marginal effect on the bottom line there's little
incentive to deal with it. And you'd still miss a controlling entity that
could validate that "works on Linux" means anything but "compiles with some
random kernel release".

Everyone who thinks that writing great graphics drivers can be a spare time
activity is delusional. The fact that we have Android with Gralloc (which in
comparison to DRM is, well, a joke), Ubuntu with Mir, others trying out
Wayland and folks still stuck on X11 makes this all so much more complicated
than it needs to be (and SteamOS is rather terrible in this regard too, which
is a shame because Valve is trying to do the right thing with Vulkan but
SteamOS is just not a well put together distro, at least right now). It's just
not a driver model problem, it's the politics of it all. Outside of Google
adopting DRM instead of Gralloc (or Gralloc gaining all of the features of DRM
and effectively becoming DRM and replacing it on the desktop) there's probably
little chance of unifying all the drivers under one coherent umbrella.

~~~
FractalNerve
I am sorry, sir. What is DRM? The only thing flashing into my memory is
Digital Rights Management, but I have no idea what you could mean by DRM here.
Apparently I've managed to live under a rock, not knowing what DRM is, while
using Linux as my main OS since 2003-2004.

I'm sad that there is no alternative to X that works with "legacy" graphics
cards like mine (Radeon HD 4500). But to be honest, I'm personally just
concerned about the already-mentioned security problems with Xorg.

~~~
RussianCow
A quick Google search found "Direct Rendering Manager":
[https://en.wikipedia.org/wiki/Direct_Rendering_Manager](https://en.wikipedia.org/wiki/Direct_Rendering_Manager)

Doesn't look like something you'd necessarily need to know about unless you
were developing for the GPU on Linux.

~~~
doesnotmatter
Right, exactly. DRM is responsible for memory management, mode-setting, and
command submission, and it is capable of basically everything the original
article asks for.
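
As a quick way to poke at it: the kernel exposes each DRM driver instance as
device nodes under /dev/dri. A small sketch (the paths are standard, but a
headless box may simply have no devices):

```python
from pathlib import Path

def drm_nodes():
    """List the device nodes the DRM subsystem exposes under /dev/dri.

    cardN    - the full, privileged node (mode-setting + command submission)
    renderDN - an unprivileged render node (command submission only)
    """
    dev = Path("/dev/dri")
    return sorted(p.name for p in dev.iterdir()) if dev.is_dir() else []

# On a machine with a DRM driver loaded this prints something like
# ['card0', 'renderD128']; on a headless VM the list may be empty.
print(drm_nodes())
```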

Unfortunately, only the free-software drivers fully support it. AMD is
planning to switch its proprietary driver to it (via the new amdgpu DRM kernel
driver), and Nvidia doesn't use it on the desktop at all, though it will
likely at least implement PRIME buffer sharing for Optimus at some point.

There are various reasons for the current situation. For example, Nvidia's
OpenGL drivers are simply the best (on both Windows and GNU/Linux), so Nvidia
has had little reason to play nice with others, especially since its
mode-setting code used to be a bit better than DRM's. That hasn't been the
case for a while now, but companies need financial incentives to rewrite parts
of a working codebase.

And even if you get all of the desktop GPUs on top of DRM, you'll still have
Android and mobile GPUs to deal with. Certain mobile GPUs do work with DRM,
but that level of support is usually embarrassing, done mostly to be able to
say "we did" rather than to support any useful features. Which is not
completely unreasonable: DRM buys you almost nothing on Android right now, so
why bother?

Now, having said that, WDDM, particularly 2.0, does do some things that DRM
cannot. But DRM is not far behind (and is capable of certain things WDDM
cannot), and the real travesty is that not all GPU drivers on Linux (both
GNU/Linux and Android) use it.

The perfect Linux (again, both GNU/Linux and Android) graphics stack would be:
a DRM kernel driver (handling not only command submission but also memory
management and mode-setting), a Vulkan user-space driver (Vulkan will become
basically what Gallium is right now: a common layer on top of which we'll
implement other, friendlier graphics APIs), and Wayland.

DRM solves multi-GPU sharing, synchronization, recovery, and memory
management; it gives us a central place to manage security (although with GPUs
that's a sort of "interesting" topic which is a long discussion in itself) and
to do some rudimentary scheduling (which is, as mentioned previously, also
"interesting"). Vulkan, as a side effect of being so tight and coming with a
conformance framework, will make drivers a lot more predictable and stable.
And the Wayland FAQ explains why it's a neater choice than X11.

Unfortunately, while Google will adopt Vulkan, I doubt they'll have enough
goodwill and reason to drop SurfaceFlinger and Gralloc in favor of DRM and
Wayland. So we're unlikely to get to that stack anytime soon.

------
thrownaway2424
The thing about being able to read other processes' memory under Linux has
always driven me nuts. Linux is fairly secure _except_ for this one giant
gaping hole that nobody mentions.
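
Concretely, the hole is the ptrace family of interfaces: without extra
hardening, a process can read another same-UID process's memory, for instance
through /proc/<pid>/mem. A minimal sketch of the mechanism, reading our own
address space through that file (the very same interface works cross-process
whenever the ptrace access check passes):

```python
import ctypes

# Put a known byte string somewhere in our own address space.
buf = ctypes.create_string_buffer(b"secret")
addr = ctypes.addressof(buf)

# /proc/<pid>/mem exposes a process's memory as a plain file. We read our
# own here, but absent hardening (e.g. Yama), any same-UID process that
# passes the ptrace access check can do the same to another process.
with open("/proc/self/mem", "rb") as mem:
    mem.seek(addr)
    leaked = mem.read(6)

print(leaked)  # b'secret'
```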

~~~
rbanffy
Should be pretty straightforward to fix. I wonder why it isn't. Is someone
discussing the issue?

~~~
geofft
The Yama patchset has been in mainline for the last few years, and it
disallows non-root processes from reading each other's memory (except for
direct children).
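
The knob Yama adds is kernel.yama.ptrace_scope. A hedged sketch that reports
the current policy (the file is simply absent on kernels built without Yama):

```python
from pathlib import Path

# Meaning of each kernel.yama.ptrace_scope value, per the kernel's Yama docs.
SCOPE = {
    "0": "classic: any same-UID process may ptrace (and read memory of) another",
    "1": "restricted: only direct ancestors, or processes named via PR_SET_PTRACER",
    "2": "admin-only: attaching requires CAP_SYS_PTRACE",
    "3": "no attach: ptrace attach is disabled entirely",
}

path = Path("/proc/sys/kernel/yama/ptrace_scope")
scope = path.read_text().strip() if path.exists() else None
print(scope, "->", SCOPE.get(scope, "Yama not enabled in this kernel"))
```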

The trouble is that it's a sort of retrofitted check. It works surprisingly
well, but, being added to a 20+-year-old OS and a 40+-year-old design, it
can't guarantee correctness. The traditional, historic process boundary on
UNIX has been the user account / UID. Desktop Linux needs to figure out how to
do what Android does and run every application under a separate UID, including
the ability to have private files that are unreadable to other applications.

The challenge is defining "application". Android, being a greenfield platform,
could define the interaction between applications. On desktop Linux, it's
obvious that, say, a web browser and a PDF viewer should be different
applications. The two processes in a "find | grep" shell pipeline probably
should count as the same application. In between, it's pretty blurry, because
we've had 40 years to develop patterns that don't account for this isolation
ever happening.

~~~
timthelion
Perhaps you would be interested in my project, subuser.org

Unfortunately, it does absolutely NOTHING with regard to OpenGL isolation :(
(That is if you give programs permission to talk to the graphics card, which
is denied by default...)

~~~
geofft
Neato! Many years back I was playing with Xpra for this, but it was in the
days before either Docker or useful seccomp. Glad to see something making
progress here.

Have you looked at OpenGL virtualization e.g. Virgil
([https://virgil3d.github.io/](https://virgil3d.github.io/) ,
[https://www.kraxel.org/blog/tag/virgl/](https://www.kraxel.org/blog/tag/virgl/))
for that problem? The idea is to use Gallium to give them just enough access
to get host 3D acceleration, but not direct access to the graphics card.

~~~
timthelion
Looks interesting. But no, I have not tried it. It isn't really a priority for
me. I'm not a gamer, nor am I a climatologist, and while I do use CAD, I've
had other, simpler, problems to solve :D

------
tfranco
Still remember the days when one had to recompile the kernel to enable sound
in Doom. The fact that most hardware is supported today is a blessing.

And agree with the author. X11 has aging issues.

~~~
datenwolf
Or, for that matter, XFree86/Xorg and Xlib have aging issues. IMHO the
protocol is okay (it's not perfect). Maybe cut away some of the legacy cruft
and implement antialiased rendering for core primitives (yes, even if they are
just lines, arcs, and ellipses, those are still needed even on modern screens;
heck, with all the flat-style UIs currently the rage, on a HighDPI screen you
could probably disable XRender and nobody would notice).

The whole framebuffer-handle management of Wayland is a huge step in the right
direction, but I despise Wayland's input and inter-client communication model;
not that X11 was any better, but whenever I code against Wayland, that part
just feels wrong. Also, the Wayland devs' obsession with getting VSync right
brings its own share of issues (especially for low-latency graphics, like you
need for VR).

------
XorNot
How much of the compositor problems will be fixed by Wayland? I know a big
goal of Wayland is to address tearing and corruption.

~~~
ac29
Anecdotal, but these "compositor problems" are not inherent to Linux or X11. I
am extremely sensitive to things like tearing, and I can tell you it simply
doesn't occur on my Linux machines (though it does require enabling the
TearFree X11 option on my Intel GPU).
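
For reference, that option lives in an xorg.conf(.d) snippet for the
xf86-video-intel driver (the Identifier string is arbitrary):

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    Option     "TearFree" "true"
EndSection
```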

Wayland, unfortunately, seems to be the type of project that has been a year
or two away from usability since its first release in 2012. I suspect we'll be
stuck with X11 for a while yet.

~~~
mikekchar
The open-source Radeon driver also does a good job of avoiding tearing. I
think there are some pathological cases where it can still occur, but it is
pretty rare. Catalyst was terrible at avoiding tearing the last time I used it
(getting on to be three years ago now). I haven't looked into it, but it may
be that the tearing issue is not solvable without a KMS driver (i.e., no
binary-blob driver can do it).

I used Wayland/Weston for a full work day about a year ago to try it out. It
_really_ was very nearly ready. I can't quite remember what problems I ran
into, but I could do my work (as a web developer) with very few real problems.
Probably it's time to look at it again.

~~~
XorNot
Interesting to hear, I might give it a shot too now.

------
theodorejb
In case anyone else has been getting the "Bandwidth Limit Exceeded" error when
attempting to access the page, here's a link to Google's cached copy:

[http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...](http://webcache.googleusercontent.com/search?q=cache%3Ahttp%3A%2F%2Fwww.yosoygames.com.ar%2Fwp%2F2015%2F09%2Fmaybe-its-time-to-talk-about-a-new-linux-display-driver-model%2F&oq=cache%3Ahttp%3A%2F%2Fwww.yosoygames.com.ar%2Fwp%2F2015%2F09%2Fmaybe-its-time-to-talk-about-a-new-linux-display-driver-model%2F)

------
ryanmarsh
It was time to talk about a new Linux display driver model 15 years ago. 10
years ago I gave up on Linux for this reason and switched to OS X.

------
gkop
I ran into issue 3) and a variation of 4) recently while trying to accomplish
what should be a pedestrian task: running Xvfb on Heroku. X11 is a pain in the
ass to package; I spent a day on it and gave up.

------
pant
I am getting this error

"Bandwidth Limit Exceeded The server is temporarily unable to service your
request due to the site owner reaching his/her bandwidth limit. Please try
again later."

------
RX14
I am currently using two displays connected to my graphics card and one
connected to my iGPU on my Linux desktop at home, and it's really not that
hard to set up. Unfortunately the GPU does all the rendering and passes images
to the iGPU, but I believe this is the case on Windows too. There are also a
few monitor-positioning bugs with this setup.

~~~
ccrush
You misread the OP's point. He is saying Windows can do both. And vsync across
cards. That is quite a feat. If using separate outputs, this is probably
achievable on Linux with a bit of work. Also vsync with a compositor shouldn't
be impossible. However, if the processing is done on two graphics cards and
we output only on one of them then we need cooperation from the device
manufacturers to provide us drivers or implementation details on how to drive
their hardware muxers. Without that info, we're dead in the water.

~~~
datenwolf
> And vsync across cards. That is quite a feat.

Well, what it does is this: the compositor (DWM) keeps a (small) backlog of
presented window framebuffers (think GL SwapBuffers adding to the queue), and
whenever a screen's VSync comes along, it blits the most recent picture
presented to it. The effect is that if you have a window spanning multiple
GPUs' outputs, they may not all show the same frame, because the display
refresh frequencies _will_ beat (even if you set the same refresh frequency on
all outputs; between different GPUs they're not driven by the same clock, so
you'll get a little bit of clock skew, and video clocks are usually not of the
low-drift type, because for video it doesn't matter if you're off by a few
ppm). If you need synchronized video sync, vendors are happy to sell you
genlock-able cards for $$$.
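
The beat is easy to see in a toy model. A sketch (pure simulation; the skew is
exaggerated so the effect shows up quickly) of a window presenting frames at
60 Hz while two outputs each latch the newest presented frame at their own
vsync:

```python
def frames_shown(refresh_hz, present_hz=60.0, n_vsyncs=6000):
    """Frame id latched at each vsync: the newest frame presented by that time.

    Models a compositor that keeps presented buffers in a queue and, at each
    vsync, blits the most recent one (the DWM-style behaviour described above).
    """
    return [int(k * present_hz / refresh_hz) for k in range(1, n_vsyncs + 1)]

a = frames_shown(60.0)   # output whose clock matches the presenter exactly
b = frames_shown(59.95)  # second GPU's clock runs ~0.08% slow (exaggerated skew)
mismatches = sum(x != y for x, y in zip(a, b))
print(mismatches)  # nonzero: past some point the outputs show different frames
```

The two lists agree for the first thousand-odd vsyncs, then drift apart for
good, which is exactly the "windows spanning two GPUs show different frames"
symptom.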

------
rootlocus
Maybe it's time to upgrade the hosting plan.

------
lololololololol
Maybe it's time to pay for some more bandwidth

------
yarrel
Let's incorporate it into systemd. That will solve everything.

------
nikanj
New year, new Linux graphics driver model. CADT is alive and well.

------
Animats
Be grateful that graphics cards work on Linux at all. For over a decade, they
barely did.

~~~
jbuzbee
Depends on what you mean. I've been running Linux for 20+ years and have never
had a graphics card that I couldn't get to work. If you're talking about
acceleration, there have obviously been issues getting manufacturer support,
but as a non-gamer, that mattered little to me.

~~~
pjmlp
So no endless hours spent trying to get a config file that wouldn't set the
monitor in flames?

Because I did spend such hours back with Slackware 2.0.

~~~
mikekchar
Slackware 2.0 was released in August 1994. Even 1.0 (which is probably the
first distro to have XFree86) was released in July 1993. I used MCC before
that and I don't think X was available (though, to be fair, I can't really
remember when I first got X working on my machine). I distinctly remember
having to wait until mmap() was implemented before anyone could even think
about getting X to work. :-)

It is quite possible that one still had to edit modelines by hand at that
time, but it really wasn't long before the modeline database came out and you
could just plonk in canned values with a good expectation that they would work
untouched. Certainly after that time I never had any difficulty. About the
only thing I ever did was follow instructions to modify a couple of numbers
for monitors that had bizarre refresh rates.
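
For the curious, a canned entry from that database looked like this: the
standard VESA 1024x768 @ 60 Hz timing (dot clock in MHz, followed by the
horizontal and then vertical timing values):

```
Modeline "1024x768" 65.00  1024 1048 1184 1344  768 771 777 806  -hsync -vsync
```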

I think that this would probably fall pretty close to the "20 years ago" time
frame (August 1995). Slackware 3.0 came out in November of that year. 2.0 of
the kernel came out in 1996 and I'm quite sure that I wasn't spending any time
editing modelines then.

If you wanted total plug and play without any configuration at all, I
think you would have to wait until 2004, when Ubuntu was first released. So
while the original statement that it took 10 years to get video drivers
working is completely wrong, I think it would be fair to say that it took 10
years to have an equivalent install experience to Windows on that front.

~~~
pjmlp
If you wanted basic 2D stuff, yeah. But then there were the adventures with
hardware-accelerated 2D and the first batch of 3D accelerator cards, followed
by the first generations of integrated 2D/3D cards.

It wasn't just a matter of getting the monitor refresh rate right.

~~~
mikekchar
Personally, I never had any problems with that. I was admittedly careful to
buy supported hardware. 2D acceleration just worked as far as I could tell.
Nobody I knew ever had a problem with it at all.

I started using 3D acceleration when Compiz/Beryl was released (2006) with an
NVidia card. I was playing World of Warcraft under wine at that time too. The
only problem I had was when I updated the kernel I had to remember to
recompile the drivers under Debian. I switched to Ubuntu for that reason
(because then I didn't have to do anything).

Eventually I retired that and went with an Intel board. It worked 100% but was
dog slow at the time (pre-2010 Intel drivers were quite good at 2D
acceleration but abysmal for speed at 3D). Compiz worked perfectly, the only
problem was a pretty big hit in frame rate for WoW.

I replaced that with a Radeon Card which worked perfectly under Catalyst. I
started hearing rumours that the free software drivers were getting good
performance numbers and switched to that. 2D was amazing (dramatically better
than Catalyst) and 3D was slightly slower, but comparable.

Finally replaced that with a few laptops running Intel hardware again. By this
time (2012) the 3D performance had dramatically improved and I'm quite happy
with it. Obviously not a gamer set up, but more than adequate for casual
gaming.

Before 2006, I think you were pretty much stuck with Nvidia and proprietary
drivers if you wanted 3D. Again, I never heard of anyone who had real problems
other than recompiling the drivers if you were on a distro that didn't do it
for you.

I actually worked with Gavriel State at Corel before he started up
TransGaming. I thought he was completely nuts to do it because at the time
there was almost nothing that ran under Wine properly. That was 2001.
Honestly, if you were trying to get 3D working on Linux, you were mostly doing
it just to say you did it.

I'm not going to deny that some people had problems with video cards. I often
had to jump through hoops to get random hardware at work to give me nice
displays. But if you were careful to buy supported hardware, there were very
few problems.

To be fair, I did a lot of Windows development in the late 90s and early 2000s
and I had at least as many problems with video drivers under Windows. The
difference was that on Linux I would have trouble with new hardware, while
with Windows I would have trouble with old hardware. The other big difference
is that with Linux, you could almost always get _something_ to work so that
you could hack on it. With Windows, getting to a point where you could
download updated drivers could be unbelievably painful.

------
stevebmark
This is a good example of one of the dangers and failure points of open
source. When hardware integration matters, you're going to have sub-par
results if you don't have a strong relationship with the hardware
manufacturers. The best firmware is from companies that control both the
software and the hardware. Linux fundamentally cannot have this relationship.
Some things need to be very specifically controlled and designed, with
business interests driving progress. Open source doesn't help much with this.
Trying to build a system that works fluidly with the larger audience, when the
audience is independent corporations, is for better or worse a nightmare.

~~~
slgeorge
I think you're trying to say that:

a. Graphics requires software and hardware to be integrated

b. Open Source communities don't work where you are driving designs in
software and hardware - they are uncoordinated

I can't comment on (a). In terms of (b) you might be right for the idea of
"hackers at night" open source. But, a lot of open source is 'professional',
so a graphics company could solve this by:

1. Graphics companies provide hardware and information under NDA to specific
open-source developers. That gets you enablement but not necessarily a
"design", as you can't direct their work.

2. The hardware companies hire open-source developers. That way you have
either all, or a big chunk, of the developers in a specific area. As
employees, they will work together on the company's priorities.

3. The hardware companies set up a foundation, which then coordinates and
works on the priorities of its members. There are sub-groups of the Linux
Foundation that work like this; for ARM, Linaro is an example.

There's a meta-comment that talks about the economics of this area; that's a
more significant challenge than the organisational ones.

