
Former Nvidia Dev's Thoughts on Vulkan/Mantle - phoboslab
http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/?view=findpost&p=5215019
======
pavlov
Very interesting post.

 _So ... the subtext that a lot of people aren't calling out explicitly is
that this round of new APIs has been done in cooperation with the big engines.
The Mantle spec is effectively written by Johan Andersson at DICE, and the
Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a
couple guys at Valve into the fold._

This raises the question: what about DirectX 12 and Apple's Metal? If they
didn't have similar engine developer involvement, that's clearly a point
against DX12/Metal support in the long run.

Also, the end is worth quoting in its entirety to explain why the APIs are
taking such a radical change from the traditional rendering model:

 _Personally, my take is that MS and ARB always had the wrong idea. Their idea
was to produce a nice, pretty looking front end and deal with all the awful
stuff quietly in the background. Yeah it's easy to code against, but it was
always a bitch and a half to debug or tune. Nobody ever took that side of the
equation into account. What has finally been made clear is that it's okay to
have difficult to code APIs, if the end result just works. And that's been my
experience so far in retooling: it's a pain in the ass, requires widespread
revisions to engine code, forces you to revisit a lot of assumptions, and
generally requires a lot of infrastructure before anything works. But once
it's up and running, there's no surprises. It works smoothly, you're always on
the fast path, anything that IS slow is in your OWN code which can be analyzed
by common tools. It's worth it._

I wonder if/when web developers will reach a similar point of software layer
implosion. "Easy to code against, but a bitch and a half to debug or tune" is
an adequate description of pretty much everything in web development today,
yet everyone keeps trying to patch it over with even more "easy" layers on
top. (This applies to both server and client side technologies IMO.)

~~~
andrepd
About your last paragraph: I would say the web is even more important to
optimize and build low-level layers for than graphical game engines (believe
it or not :)).

The thing is, while gaming is seen as a "performance critical" application,
where it is natural to want to squeeze the maximum performance out of your
hardware, web development is just a behemoth of inefficiency, and _we think
that is the norm_, an inevitable drawback of developing for the web, an
inescapable trait of web applications. Web apps are dozens or even hundreds
of times slower than native applications, and we think nothing of it, because
it seems to be an inherent characteristic of the web about which we can't do
anything.

Your comment made me reconsider that. Maybe it doesn't have to be so. What if
there were a Vulkan for the web, instead of the layers upon layers of
inefficiency which are all the rage these days? What if I could run a
medium-complexity web page or app without it bogging down my computer several
times harder than a comparable native app would?

~~~
emilsedgh
Write your whole webapp in WebGL?

Edit: Seriously, how feasible is that? Creating a GUI framework on top of
WebGL and creating an abstraction layer (which the original article advises
against) would be a huge performance boost compared to current webpages,
wouldn't it?

~~~
RaleyField
WebGL should never have been invented, at least on current architectures.
It's a ticking bomb, precisely for the reasons this article explores: drivers
are complex and buggy black boxes that live in kernel space, so ring 0 access
is never far from reach for many exploits. WebGL has the potential to be an
exploit vector much more severe than what we've seen with other auxiliary web
technologies like JavaScript or Java applets. Why not confine the web to
(mostly) nicely behaved x86 VMs?

~~~
mreiland
Personally, I think browsers will become service layers that expose the
underlying OS in controlled and specific ways. Not a complete sandbox, but
not a free-for-all either.

We've started creeping that way with a lot of the newer HTML5 stuff, but we
have a long way to go.

------
lambda

      These are the vanishingly few people who have actually 
      seen the source to a game, the driver it's running on, 
      and the Windows kernel it's running on, and the full 
      specs for the hardware. Nobody else has that kind of 
      access or engineering ability.
    

One option is to release the source of the drivers, making it possible for
motivated engine developers to do this without explicit access to AMD
developers.

If they did this on Linux as well, developers would have access to the full
stack, and could learn how the lower levels work and track problems down with
something better than a whole lot of trial and error against a black box.

~~~
adrusi
It seems to me, based on this article, that providing support to major game
studios and releasing driver updates to accommodate their games is a major
revenue source for GPU manufacturers. I guess that's why only Intel releases
open source drivers: their GPUs aren't really expected to provide high
performance for games.

I guess with Vulkan/Mantle, the drivers are just trivial hardware abstraction
layers, so the GPU manufacturers no longer have to make up for the cost of
developing complex drivers by doing part of the work on every AAA title.

We might see Nvidia and AMD releasing open source Vulkan drivers, and if not,
it would at least be easier for reverse-engineering projects like Nouveau to
produce competitive drivers. At the very least, this means you wouldn't have
to run non-free code in ring 0 to use a discrete graphics card on Linux.

~~~
makomk
Yeah, one of the things this doesn't mention is that, to a certain extent,
these issues are self-imposed and the GPU vendors make money from them. In
particular, one of the big reasons OpenGL games are so standards-violating is
that NVidia consistently refuse to enforce the specs. So software is developed
against the NVidia driver's behaviour and breaks on AMD, Intel and open source
drivers that enforce the specs, then people blame this on the drivers being
crap and buy NVidia hardware. This was a big reason why Wine worked badly on
non-Nvidia hardware for a long time.

------
korethr
To me, this neatly explains why just about every performance comparison of
video drivers on Linux shows the proprietary drivers having an edge, even if
only a slight one.

I've never actually dived into the source code for the open source video
drivers, but I'm now curious how much time their devs have to spend
anticipating and routing around the brain damage of the programs calling
them. Do they similarly try to do the right thing even for apps that crash if
they don't get the wrong thing they asked for? Or is the attitude more akin
to 'keep your brain damage out of our drivers and go fix your own damned
bugs'?

~~~
sounds
I've spent a fair bit of time in the open source radeon and nouveau drivers
and I never noticed any workarounds in place solely to fix a broken client.

Open source driver developers don't have the time or resources to pull off a
stunt like that!

~~~
buster
Especially since an open source client can be fixed on the client side. No
reason to include nasty hacks for buggy clients.

------
gavanwoolery
Performance gains are worth any trouble IMO; here is why:

While at GDC I saw a DirectX 12 API demo (DX12 is more or less equivalent to
Vulkan from an end-goal perspective). On a single GTX 980:

DX11 was doing ~1.5 million drawcalls per second. DX12 was doing ~15 million
drawcalls per second.

This API demo will ship to customers, so I am pretty sure we can easily verify
if these are bunk figures. But a potential 10x speedup, even if under ideal
conditions, is notable.

~~~
malkia
That's great, and for a project like yours (Voxel Quest), it'll definitely
help.

I'm wondering, though, whether the demo also uses the CPU for other things:
physics, audio, collision, path-finding or some other form of AI, state
machines, game script, game logic. My point is that 10x might be possible (on
a 10-core CPU) if the CPUs are only used for graphics, but there are other
things that come into play... But even then, even if only half the CPUs are
used for graphics, it's still better.

The bigger question to me is how game developers on the PC market
(OSX/Linux included) would scale their games. You would need different assets
(level of detail? mip-mapped texture levels? meshes?) - but tuning this to
work flawlessly on many different configurations is hard...

Especially if there are applications still running behind your back.

E.g., you've allocated all CPUs for your job, only for some to be taken by a
background application: often a browser, a chat client, your bit-coin miner
or who knows what else.

~~~
tarpherder

      The bigger question to me is how game developers
      on the PC market (OSX/Linux included) would scale their
      games. You would need different assets (level of detail?
      mip-mapped texture levels? meshes?) - but tuning this to
      work flawlessly on many different configurations is
      hard...
    

This isn't really any different from what it has been until now. All AAA
games have different levels of detail for meshes/textures/post-processing,
etc. Even when not exposed to the user as options in a menu, these different
levels of detail exist to speed up rendering of, for example, distant objects
or shadows, where less detail is needed. DX12/Vulkan is not going to change
anything in that regard.

Doing a good PC port is not as easy as it may seem at first glance. Different
hardware setups and little control over the system cause lots of different
concerns that simply don't exist on consoles, which means nobody bothered
taking that into account when the game was originally built. These new APIs
will help though; the slow draw calls on PC are a pain compared to the
lightning-fast APIs on consoles (even Xbox 360/PS3!).

------
pothibo
> What has finally been made clear is that it's okay to have difficult to code
> APIs, if the end result just works.

So true and yet, we have all those crazy JavaScript frameworks trying to
abstract everything away from developers. There's a lesson in there.

~~~
venomsnake
You cannot abstract away complexity. This is a lesson people relearn every 2-3
years or so.

Of the JavaScript frameworks, I use only Backbone and jQuery: the first as a
simple router, the second for easy DOM traversal.

~~~
stupidcar
Hearing web developers discuss the dangers of abstraction is just about the
craziest thing in computer science. JavaScript and the DOM sit atop layer upon
layer upon layer of abstraction that hides an absolute mountain of complexity
around networking, memory management, hardware capabilities, parsing,
rendering, etc.

Believing you're somehow avoiding abstraction because you only use a couple
of additional libraries _on top_ of JS and the DOM is like insisting on a
99th-storey apartment instead of a 100th-storey one, because you prefer being
close to the ground.

You can by all means argue that all the big JS frameworks are poor
abstractions. But a sweeping statement like "you cannot abstract away
complexity" completely ignores the fact that web development as a field is
only _possible_ because of the successful abstraction of huge quantities of
complexity.

~~~
Svenstaro
Well, you have to take something as a reasonable base, otherwise the argument
becomes absurd quickly. We think of bare metal (CPU-level) as our default
base and take that for granted. But the CPU itself has many abstractions over
physics, multiple levels of caching, branch prediction, some built-ins and so
on. A modern CPU sits much deeper in the skyscraper you describe, but it's
still nowhere near ground level.

Similarly, since with web browsers we get what we get, we might as well
consider that our reasonable base for web development; it's not like anyone
is going to do that part differently any time soon.

------
MAGZine
I've been discussing this a fair amount with a colleague, as I'm curious to
see further uptake of Linux as a gaming platform. I think the biggest barrier
is that when games are written for D3D, all of a sudden your game cannot talk
to any graphics API outside of Windows.

More companies are starting to support OpenGL, but I'm curious why uptake is
so slow. It seems like poor API design may be part of it. I'd like to see
more games written with OpenGL support, and I think it's happening slowly but
surely. We even see weird hacks to add Linux support at this point: Valve
open-sourced a D3D -> OGL translation layer[0], though it hasn't been
supported since they dumped it from their source tree.

[0] https://github.com/ValveSoftware/ToGL

~~~
kevingadd
Having shipped multiple PC games with separate D3D/GL backends, and one game
with GL only, a contributing factor here is that OpenGL (at least on Windows)
still totally sucks.

To be fair, driver vendors have gotten a lot better. And the spec itself has
matured tremendously, with some great features that don't even have direct
equivalents in Direct3D.

Sadly as a whole the API still sucks, especially if you care about supporting
the vast majority of the audience who want to play video games. Issues I've
hit in particular:

Every vendor's shader compiler is broken in different ways. You HAVE to test
all your shaders on each vendor, if not on each OS/vendor combination or even
each OS/vendor/common driver combination. Many people are running outdated
drivers or have some sort of crazy GPU-switching solution.

The API is still full of performance landmines, with a half-dozen ways to
accomplish various goals and no consistent fast-path across all the vendors.

At least on Windows, OpenGL debugging is a horror show, especially if you
compare it with the robust DirectX debugging tools (PIX, etc) or the debugging
tools on consoles.

Threading is basically a non-starter. You can get it to work on certain
configurations but doing threading with OpenGL across a wide variety of
machines is REALLY HARD, to the point that sometimes driver vendors will tell
you themselves that you shouldn't bother. In most cases the extent of
threading I see in shipped games is using a thread to load textures in the
background (this is relatively well-supported, though I've still seen it cause
issues on user machines.)
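
A sketch of the conservative shape of that pattern (my construction, not
kevingadd's code; assumes GL headers and a render thread that owns the
context): keep every GL call on the render thread and move only the file I/O
and decoding to the worker, sidestepping shared-context driver bugs entirely.

      #include <mutex>
      #include <queue>
      #include <vector>

      struct DecodedImage { int w, h; std::vector<unsigned char> rgba; };

      std::mutex mu;
      std::queue<DecodedImage> ready;   // filled by the worker thread

      // Worker thread: file I/O and decoding only, no GL calls here.
      void loader(const char* path) {
          DecodedImage img = decode_png(path);   // hypothetical decoder
          std::lock_guard<std::mutex> lock(mu);
          ready.push(std::move(img));
      }

      // Render thread, once per frame: uploads stay on the GL thread.
      void pump_uploads() {
          std::lock_guard<std::mutex> lock(mu);
          while (!ready.empty()) {
              DecodedImage& img = ready.front();
              GLuint tex;
              glGenTextures(1, &tex);
              glBindTexture(GL_TEXTURE_2D, tex);
              glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.w, img.h,
                           0, GL_RGBA, GL_UNSIGNED_BYTE, img.rgba.data());
              ready.pop();
          }
      }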

OpenGL's documentation is still spotty in places and some of the default
behaviors are bizarre. The most egregious example I can think of is texture
completeness; texture completeness is a baffling design decision in the spec
that results in your textures mysteriously sampling as pure black. Texture
completeness is not something you will find mentioned or described _anywhere_
in documentation; the only way to find out about it is to read over the
entire OpenGL spec, because they shoved it somewhere you wouldn't think to
look when debugging texturing/shading issues. I personally tend to lose a day
to this every
project or two, and I know other experienced developers who still get caught
by it.
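
For readers who haven't hit it, the usual form of the trap (my minimal
sketch, not code from the post): GL's default minification filter wants
mipmaps, so a texture given only level 0 is "incomplete" and silently samples
as black.

      GLuint tex;
      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, pixels);
      // The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR,
      // which requires a full mipmap chain. Without the next line (or a
      // glGenerateMipmap call) the texture is incomplete and reads black.
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);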

I should follow with the caveat that Valve claims GL is faster than Direct3D,
and I don't doubt that for their use cases it is. In practice I've never had
my OpenGL backend outperform my Direct3D backend on user machines, in part
because I can exploit threads on D3D and I can't on GL.

~~~
dikaiosune
If Vulkan solves some of these problems, does it seem likely to you that the
tooling and community could grow up around it and shift the momentum?

~~~
kevingadd
Absolutely. Tooling & community around GL have improved over the past few
years, and Valve has been a part of that. If there's a concerted effort from
multiple parties to improve things, Direct3D could probably actually be
dethroned on Windows.

------
vehementi
"Former Nvidia dev" \- s/he did an internship there. Maybe too weighty a HN
title.

~~~
ertdfgcb
I'd agree, but they certainly know what they're talking about so I don't
really mind. They obviously did some form of software development at Nvidia if
only as an intern.

~~~
cpitman
Right, he's also worked as a game developer, has been a moderator on
GameDev.Net for many, many years, and led the SlimDX project, one of the only
remaining DirectX libraries for .NET. He knows what he is talking about.

------
alricb
For reference, here's a tutorial on drawing a triangle in Apple's Metal,
which follows some of the same design principles as Vulkan, Mantle and DX12:
http://www.raywenderlich.com/77488/ios-8-metal-tutorial-swift-getting-started

So, essentially, no validation, you have to manage your own buffers (with some
help in DX12 I think), you can shoot yourself in the foot all day long. But if
you manage to avoid that, you are able to reduce overhead and use
multithreading.

------
jwildeboer
Former intern that admittedly sucked at his job gets promoted to NVIDIA Dev by
HN moderators because clickbait. Srsly?

~~~
cpitman
I'm guessing the title came from the tweet that went around about it (which
Promit did correct, but that isn't going to matter when John Carmack retweets
it): https://twitter.com/josefajardo/status/574719821469777921

------
jokoon
What a horror show.

I was always interested in game programming, but never able to get interested
enough in graphics programming. I guess having a messy API is not an excuse,
but you can really sense that CPUs and GPUs have very different compatibility
stories, and that's maybe why it's not attracting enough programmers.

I still hope that one day there might be some unified compute architecture
and CPUs will become obsolete. Maybe a system can be made usable while
running on many smaller cores? Computers are used almost exclusively for
graphics applications nowadays; I wonder if having a fast single core with
fat caches really matters anymore.

~~~
usefulcat
> maybe why it's not attracting enough programmers

Game companies have, in general, never had difficulty attracting programmers,
especially young programmers. That's why they have such a reputation for low
pay and crappy work conditions: because they can.

------
byuu
The idea that nVidia and AMD are detecting your games, and then replacing
shaders, optimizing around bugs, etc should be absolutely terrifying. And vice
versa, the idea of having to fix every major game's incredibly broken code is
equally terrifying. And it's a huge hit to all of us indie devs, who don't get
the five-star treatment to maximize performance of our games. So overall, I'd
say this is a step in the right direction.

However! Having written OpenGL 3 code before, the idea that Vulkan is going
to be a lot _more_ complex frankly scares the hell out of me. I'm by no means
someone that programs massive 3D engines for major companies. I just wanted to
be able to take a bitmap, optionally apply a user-defined shader to the image,
stretch it to the size of the screen, and display it. (and even without the
shaders, even in 2015, filling a 1600p monitor with software scaling is a
_very_ painful operation. Filling a 4K monitor in software is likely not even
possible at 60fps, even without any game logic added in.)
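
For context, the GPU side of that job can be small once the plumbing exists;
one common shape is a single fullscreen triangle with the bitmap bound as a
texture (a sketch under my own assumptions, not byuu's actual code):

      // GLSL 3.3 shaders that stretch one texture across the screen,
      // using the fullscreen-triangle trick (no vertex buffer needed).
      const char* vs =
          "#version 330\n"
          "out vec2 uv;\n"
          "void main() {\n"
          "  vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);\n"
          "  uv = p;\n"
          "  gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);\n"
          "}\n";
      const char* fs =
          "#version 330\n"
          "in vec2 uv;\n"
          "out vec4 color;\n"
          "uniform sampler2D tex;\n"   // the uploaded bitmap
          "void main() { color = texture(tex, uv); }\n";
      // ...compile, link, bind the texture, then:
      glDrawArrays(GL_TRIANGLES, 0, 3);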

That took me several weeks to develop just the core OpenGL code. Another few
days for each platform interface (WGL/Windows, CGL/OSX, GLX/Xorg). Another few
days for each video card's odd quirks. In total, my simple task ended up
taking me 44KB of code to write. You may think that's nothing, but tiny code
is kind of my forte. My ZIP decompressor is 8KB, PNG decompressor is another
8KB (shares the inflate algorithm), and my HTTP/1.1 web server+client+proxy
with a bunch of added features (run as service, pass messages from command-
line via shared memory, APIs to manipulate requests, etc) is 24KB of code.

Now you may say, "use a library!", but there really isn't a library that tries
to do just 2D with some filtering+scaling. SDL (1.2 at least) just covers the
GL context setup and window creation: you issue your own GL commands to it.
And anything more powerful ends up being entire 3D engines like Unity that are
like using a jack hammer to nail in drywall.

And, uh ... that's kind of the point of what I'm doing. I'm someone trying to
_make_ said library. But I don't think I'll be able to handle the complexity
of all these new APIs. And I'm also not a big player, so few people will use
my library anyway.

So the point of this wall of text ... I really, really hope they'll consider
the use case of people who just want to do simple 2D operations and have
something official like Vulkan2D that we can build off of.

Also, I haven't seen Vulkan yet, but I really hope the Vsync situation is
better than OpenGL's "set an attribute, call an extension function, and cross
your fingers that it works." It would be really nice to be able to poll the
current rendering status of the video card, and drive all the fun new adaptive
sync displays, in a portable manner.
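
For reference, the dance being alluded to looks roughly like this on Windows
(WGL_EXT_swap_control; a sketch, and as noted there is no guarantee the
driver honors the request):

      // Swap interval is an extension: fetch the function pointer at
      // runtime and cross your fingers. (Needs <windows.h>; wglext.h
      // provides this typedef.)
      typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
      PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
          (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
      if (wglSwapIntervalEXT)
          wglSwapIntervalEXT(1);   // 1 = vsync on; the driver may ignore it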

~~~
jallmann
> The idea that nVidia and AMD are detecting your games, and then replacing
> shaders, optimizing around bugs, etc should be absolutely terrifying.

Actually it's par for the course, if you care about backwards compatibility.
Raymond Chen has many, many stories about the absolutely heroic lengths
Microsoft has gone to ensure that popular, yet broken, programs continue to
work smoothly between Windows upgrades [1][2][3][4][5].

On Depending on Undocumented Behavior:

[1] http://blogs.msdn.com/b/oldnewthing/archive/2003/12/23/45481.aspx

[2] http://blogs.msdn.com/b/oldnewthing/archive/2003/10/15/55296.aspx

Why Not Just Block Programs that rely on Undocumented Behavior?

[3] http://blogs.msdn.com/b/oldnewthing/archive/2003/12/24/45779.aspx

Who cares about backwards compatibility? (A lot of people):

[4] http://blogs.msdn.com/b/oldnewthing/archive/2006/11/06/999999.aspx

Hardware breaks between upgrades, too:

[5] http://blogs.msdn.com/b/oldnewthing/archive/2003/08/28/54719.aspx

Joel Spolsky's (somewhat dated) "How Microsoft Lost the API War" gives a good
overview of the above concerns:

[6] http://www.joelonsoftware.com/articles/APIWar.html

~~~
greggman
I'd argue there's also a bigger problem: there were never conformance tests
for OpenGL until recently. They finally wrote some for ES and then finally
started backporting them to OpenGL.

Worse, they rarely check the limits, only the basics, and they didn't check
anything related to multiple contexts. The point being, the drivers are/were
full of bugs on the edge cases.

Testing works. If there were tests that were relatively comprehensive and
that rejected drivers that failed the edge cases, they'd have gone a long way
toward mitigating these issues, because the devs' apps wouldn't have worked.

There's also just poor API design. Maybe poor is a strong word. For example,
uniform locations are ints, so some apps assume they'll be assigned in order,
then fail when they get to a machine where they're not. Another example:
you're allowed to make up resource ids. OpenGL does not require you to call
`glCreateXXX`; you can just call `glBindXXX` with any id you please. But of
course if you do that, some id you use may already be in use for something
else, so id=1 works on some driver but not some other.
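
A minimal sketch of that second pitfall (my example, not greggman's code),
which compatibility-profile GL permits:

      GLuint tex = 1;                      // made-up id, never requested
      glBindTexture(GL_TEXTURE_2D, tex);   // legal in compatibility GL, but
                                           // id 1 may already belong to
                                           // something else on this driver
      // The portable way: let the driver hand out ids.
      GLuint tex2 = 0;
      glGenTextures(1, &tex2);
      glBindTexture(GL_TEXTURE_2D, tex2);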

I'm excited about Vulkan, but I'm a little worried it's actually going to
make the driver bug issues worse. If using it in a spec-compliant way is even
harder than OpenGL, and your app just happens to work on certain drivers,
other driver vendors will again be forced to implement workarounds.

------
higherpurpose
From the looks of it, it seems Khronos's API may actually be significantly
better/easier to use than DirectX?

I haven't heard of DX12 getting overhauled for efficient multi-threading or
great multi-GPU support. DX12 probably brings many of the same improvements
Mantle brought, but Vulkan seems to go quite a bit beyond that. Also, I assume
DX12 will be stuck with some less than pleasant DX10/DX11 legacy code as well.

~~~
dietrichepp
It sounds like Vulkan is not going to be easy to use. If anything, it is going
to be harder to use.

For example, in OpenGL, you can upload a texture with glTexImage2D(), then
draw with glDrawElements(), then delete it with glDeleteTextures(). The draw
command won't have completed yet, but the driver will free the memory once
it's no longer being used.

It sounds like with Vulkan, you'll need to allocate GPU memory for your
texture, load the memory and convert your data into the right format, submit
your draw commands, and then you'll need to WAIT until the draw commands
complete before you can deallocate the texture. At every step you're doing the
things that used to be automatic. So it's harder to use, but you're dealing
with more of the real complexity from the nature of programming a GPU and less
artificial complexity created by the API.
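
Spelled out, the OpenGL flow described above is just three calls, with the
driver quietly handling the hazard (my sketch; `w`, `h`, `pixels` and
`count` stand in for real data):

      GLuint tex;
      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, pixels);           // upload
      glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0); // draw
      glDeleteTextures(1, &tex);  // "delete": the driver defers the real
                                  // free until the pending draw completes

In Vulkan, each of those hidden steps (allocation, format conversion, and
the wait before freeing) becomes the application's explicit job.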

~~~
jeremiep
This is already how we do it on most consoles. The CPU has to sync with the
GPU to know when the resources associated with draw commands are safe to
release. Having something like glTexImage2D is way too high level for these
graphics APIs and would be a luxury. Instead we get a plain memory buffer and
convert manually to the internal pixel format.

There is no waiting to free resources, however, unless either the CPU or GPU
is starving for work. We have a triple-buffering setup, and on consoles you
also get to create your own front/back/middle surfaces as well as implement
your own buffer swap routine. This provides a sync point where we can mark
resources as safe to release.
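
On PC, the nearest public analogue of that sync point is a fence per frame. A
sketch of the pattern (my construction using GL sync objects, since the
console APIs mentioned here are confidential):

      #include <deque>
      #include <vector>

      struct FrameGarbage {
          GLsync fence;                  // signaled when the frame's GPU work ends
          std::vector<GLuint> textures;  // resources retired during that frame
      };
      std::deque<FrameGarbage> pending;

      void end_frame(std::vector<GLuint> dead) {
          pending.push_back({glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0),
                             std::move(dead)});
          // Free anything whose frame the GPU has provably finished.
          while (!pending.empty()) {
              GLenum r = glClientWaitSync(pending.front().fence, 0, 0);
              if (r != GL_ALREADY_SIGNALED && r != GL_CONDITION_SATISFIED)
                  break;   // GPU not done yet; try again next frame
              for (GLuint t : pending.front().textures)
                  glDeleteTextures(1, &t);
              glDeleteSync(pending.front().fence);
              pending.pop_front();
          }
      }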

It's definitely more complexity on the engine side, but as mentioned in the
forum post, it makes everything much, much easier when you get to debug and
tune things. Also, having to implement (or maintain) all of that engine
infrastructure gives us a better perspective on how the hardware works and
how to optimize for it.

However, even with Vulkan or DirectX 12, I doubt NVidia or AMD will expose
the hardware internals that are critical for optimizing shader code. On
consoles we get profilers able to show metrics from the GPU's entire internal
pipeline, which makes it easy to spot why your shader is running slow without
having to send your source code to the driver vendor.

~~~
sounds
Do you think, since the Xbox One and PS4 both use an AMD GPU, that the
developer tools will improve on desktop PCs?

~~~
jeremiep
Hard to say, they are very different tools aimed at different audiences.
Console SDKs are behind huge paywalls and all their tools and documentation
are confidential. The developer networks even go as far as checking if the
requesting IP address is whitelisted.

I haven't had to profile an AAA title for the desktop so far and therefore
don't know much about the state of tools there. However, I heard only good
things about Intel's GPA.

------
zurn
It's telling about the whole graphics programming scene that this has been
such a well-kept secret from the public. The games you see are not really
running on open APIs; they are based on back-alley arrangements between
insiders, camouflaged as open API apps. I bet this post is very demoralizing,
e.g. to hopeful indie game devs or to people holding out hope for the driver
situation improving on Linux.

~~~
keypusher
Actually, I think this points to a potential area of success on Linux. One of
the points the post brings up is that there is only a very small group of
people who have access to the Microsoft kernel code, the driver code, and the
game code. On Linux, this pool is vast. With an open source kernel and open
source drivers, game developers can dig all the way through the stack to
understand the entire system. I would hope this leads to better APIs that
expose the nature of the system directly, instead of workarounds for specific
games imposed inside the driver itself.

------
vcarl
This is interesting; it's good to hear the opinion of somebody who's actually
programmed against the APIs. I think the increase in exposed complexity is
probably a good thing. The AAA studios have proved that they're able and
willing to throw engineering resources at tough problems, so letting them
interact directly with a lower level of code plays to their strengths.

I'm sure there's a counterargument that it raises the bar for indie game
devs, but when's the last time an indie game was programmed directly against
DirectX or OpenGL (barring WebGL)? This should let engine developers make
better use of their development time.

~~~
yoklov
> when's the last time an indie game was programmed directly against DirectX
> or OpenGL (barring WebGL)?

I don't work in indie, but my understanding is that this is still pretty
common, especially for games with any budget at all.

~~~
vcarl
I'm sure it still happens, but the popularity of Unreal, Unity, etc. says to
me that they likely wouldn't be too negatively affected.

------
faragon
That's evolution. The new graphics APIs are intended for engine developers,
not for application programming. Applications should use higher-level
engines, not low-level APIs.

~~~
shmerl
Not necessarily. If an application cares about performance, it can drop down
to the lower-level APIs to craft the parallelism model it needs.

------
ggchappell
> Part of the goal is simply to stop hiding what's actually going on in the
> software from game programmers. Debugging drivers has never been possible
> for us, which meant a lot of poking and prodding and experimenting to figure
> out exactly what it is that is making the render pipeline of a game slow.

This is really sad. Imagine if someone were pushing a new file API with the
justification that storage-device drivers were full of bugs.

I get a similar feeling whenever I read one of those " _CSS trick that works
on all browsers!_ " articles. Yes, it's nice that you can build a thing of
beauty on top of broken abstractions, but ....

------
backinside
How come the open source drivers didn't show much better performance?

~~~
pothibo
Because the documentation for the hardware is not open. You need to sign very
strict agreements to be able to look at the hardware documentation, which
means only a select few people have access to it, which means not much
support, etc. I can't remember exactly why I know this, but it was something
related to my Linux open source work back in the day.

~~~
wtallis
I think things have shifted a bit away from that. IIRC, AMD's pretty free with
the documentation for everything that isn't part of the DRM-enforcement chain,
so their open source drivers are mostly lacking in manpower. NVidia just won't
release the docs at all, so the open source drivers for them are reverse-
engineered and plagued by problems like an inability to get most GPUs out of
power-saving mode. I'm not sure how open Intel's documentation is, but their
only Linux driver is the open-source one that they do significant in-house
work on and it has often been competitive with the Windows driver.

~~~
pothibo
Good to know it's moving towards a more open environment. Also good to know
that my memory wasn't playing tricks on me. Thank you for the update.

------
anon4
Reminds me a bit of what I recall as a Unix principle: the API should be
designed so that the implementation is simple, even if that makes its use
complex.

