
OpenGL Is Broken - watermel0n
http://www.joshbarczak.com/blog/?p=154
======
fixermark
Quite a few of these issues (especially in the "Too many ways to do the same
thing" category) relate to OpenGL's age as an API, which is something
Direct3D's design was able to learn from and improve upon. OpenGL's origin was
as a vector-based drawing tool for CAD applications which was repurposed for
games; D3D was designed from the start for performant rendering, with games as
its specific target. This is demonstrated by some key features necessary for performant
games (clock synchronization to avoid 'tearing' comes immediately to mind)
that are core to the D3D spec and extensions to the OpenGL spec. There's also
a bit of a cultural issue; if you learn OpenGL from the 'red book,' you'll
learn the function-per-primitive API first, which is precisely the _wrong_
tool for the job in making a performant game. Really, the OpenGL API is almost
multiple APIs these days; if you're using one flow of how to render your
primitives, the other flow is downright toxic to your rendering engine (in
that it's going to muck about with your GL state in ways that you shouldn't
burn mental energy predicting).

Some of this is ameliorated by the OpenGL ES standard, which throws away a big
chunk of the redundancy. But I'm not yet convinced that OpenGL has gotten away
from its philosophical roots as a performance-secondary CAD API, which
continues to dog its efforts to grow to serve game development needs. The fact
that it's functionally the only choice for 3D rendering on non-Windows
platforms is more testament to the nature of the hardware-software ecosystem
of graphics accelerators (and the creative doggedness of game developers) than
the merits of the language.

~~~
gaustin
I dabbled with the Red Book aeons ago, but never got very far with 3D
programming. What would you suggest for learning OpenGL? Or is there some
other tool you'd recommend learning?

~~~
yoklov
This is the list I typically see given to newbies (on freenode ##OpenGL, and
/r/opengl) wanting to learn modern OpenGL.

1. Arcsynthesis's gltut
([http://www.arcsynthesis.org/gltut/](http://www.arcsynthesis.org/gltut/)) is
good and reasonably thorough. He explains things well but not always in the
order you'd like him to. At the end you will probably know enough to be able
to figure out the rest on your own as you need.

2. [http://open.gl/](http://open.gl/) is good but somewhat short. It also
goes in depth into creating/initializing a context with various APIs (SDL,
SFML, GLFW, ...). More of a good starting point than a complete guide.

3. [http://ogldev.atspace.co.uk/](http://ogldev.atspace.co.uk/) has a lot of
good tutorials on how to do more advanced techniques, as well as many beginner
level tutorials. I've never gone through them so I can't speak to their
quality, but I've heard good things.

4. [http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/](http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/) is also good, but focused on the shading language.

See /r/opengl and freenode's ##OpenGL channel for more. Both those places are
fairly newbie friendly (/r/opengl moreso than ##OpenGL, but as long as you
actually know your language of choice they're nice), so feel free to ask
questions.

------
raverbashing
I think what's broken is not OpenGL, D3D, etc.

What's broken is that the abstraction between graphics card and data (on the
screen) is too big

We haven't had drivers as troublesome and fat as these since the "softmodem"
days, and even then (Wi-Fi is also complicated).

It's too big of a gap.

In 2D graphics, you send graphical data and it is displayed. You may even
write it directly to memory after some setup.

Audio, same thing. Network, it's bytes to the wire. Disk drive, "write these
bytes to sector X" (yes, it's more complicated than that, but still).

With 3D, we have two APIs that have an awful amount of work to do between
getting the data and displaying it.

I'll profess my ignorance of the low-level aspects. I only know "GlTriangle",
OpenGL 101 kind of stuff, and I have no idea (1) how this is sent to the
video card, or (2) how it decides to turn that into what we see on the screen.

Compared to the other drivers this is a lot of work and a lot of possibilities
of getting this wrong.

Adding GPGPU stuff makes it easier in one aspect and more complicated in other
aspects. We don't have a generic way of producing equal results from equal
inputs (not even the same programming environment is available)

We don't have OpenGL, we have "this OpenGL works on nVidia, this other one
works on ATI, this one works on iOS, or sometimes it doesn't work anywhere
even though it might be officially allowed"

~~~
sliverstorm
To my understanding, the critical difference between framebuffer graphics and
3D API graphics is the processing! In a framebuffer scenario, the CPU does all
the rendering. Since the CPU is poorly suited to rendering 3D, we have a
coprocessor called a GPU, and the CPU has to feed the GPU work.

Because the GPU is cutting-edge, there is a certain amount of magic voodoo
required for top performance that needs to get abstracted away - maybe this
_particular_ model of GPU you have doesn't support some common instruction.
You don't want to handle that in your software; you want to hide it in the
driver.

Beyond that, the API is also there to make the GPU easier to use. OpenGL is a
mess, sure, but to my understanding most developers would pull their hair out
and give up if they had to program the GPU directly.

~~~
cousin_it
Isn't the GPU just a computer that happens to support more parallelism than
the CPU? Why not have a simpler API based on general-purpose operations like
map/reduce/scatter/gather? Then there would be no need to add new "cutting
edge" operations every year. I for one would be happy to use that instead of
OpenGL or DX.

~~~
overgard
> Isn't the GPU just a computer that happens to support more parallelism than
> the CPU?

Not really. It's like quantum mechanics compared to classical physics.

For instance, "branches" don't work like you'd expect. On a CPU you execute
one branch or the other. On a GPU, both branches execute, and then it just
throws away the half that shouldn't have run, which means you're bottlenecked
by whichever branch takes the longest. (Or something like that -- the details
escape me, but I do remember CUDA's branching doing weird things.) Point
being, GPUs are weird. It's nothing like programming a CPU at all.

~~~
geon
> On a GPU, you get things like both branches execute, but then it just throws
> away the half that shouldn't have run

It's not that weird. You don't really have thousands of parallel processors,
but a single processor operating on thousands of values. (Like SIMD on
steroids.)

Since all operations must be done identically on all values, a "branch" is
really doing both branches and recombining them with a mask of equally many
booleans - as you say "throwing away" the unwanted branch.

------
druidsbane
Well-reasoned response: [http://timothylottes.blogspot.com/2014/05/re-joshua-barczaks...](http://timothylottes.blogspot.com/2014/05/re-joshua-barczaks-opengl-is-broken.html)

~~~
mmarks
I nod my head in agreement with most of these "OpenGL is broken" articles.
I've worked on the OpenGL versions of Call of Duty, Civilization, and more for
the Mac. I think Timothy misses the real point on driver quality.

[https://medium.com/@michael_marks/opengl-for-real-world-game...](https://medium.com/@michael_marks/opengl-for-real-world-games-7d0f4d35891c)

------
espadrine
The "compiler in the driver" part of this post sounds awfully like the "asm.js
vs. NaCl" debate.

Sure, building an IR from scratch is fun. But making it truly cross-platform
and ready for many usages is really hard. Also, the GLSL source _is_ an IR
between the programmer's intent and the driver's behaviour. Code is just
another type of binary. It is just slightly harder to parse, but not by much;
without performance comparisons, a complaint about how hard it is to parse
code is invalid.

Feeding the driver GLSL can also yield much clearer error messages for
programmers. I can only imagine what kinds of error messages the IR compiler
would produce. Sure, hopefully, our cross-platform IR would be accepted by all
GPUs without pain, but that's improbable.

Regardless, starting from a clean slate is much harder than working our way
from the current state to an improved OpenGL. Just like few browsers are on
board with NaCl, few GPU makers would be on board with a brand new design.

~~~
fzltrp
> Sure, building an IR from scratch is fun. But making it truly cross-platform
> and ready for many usages is really hard.

It's not that hard, as long as it remains as close as possible to the source
language (i.e. GLSL). In other words, as the OP is advocating, an AST of the
shaders. This removes the cost of parsing the source code (but on the other
hand requires validating the AST, so it isn't exactly a complete gain, though
definitely progress compilation-wise). However, I suppose that what motivated
the choice of using GLSL source directly is the simplicity of the approach:
no need to build the GLSL scripts separately. When working with interactive
tools, that's a non-negligible comfort, imho. Another interesting aspect is
the ability to build the scripts dynamically, like people do with SQL. I
wonder if this approach is used by professional game studios.

------
pyalot2
This post is factually wrong, and misguided. Here's why:

#Preamble: You cannot run Direct3D anywhere except on Windows. Unless you
plan not to publish on Android, iOS, OSX, Linux, Steambox, PS4 etc., you will
have to target OpenGL, no matter how much you dislike it.

#1: Yes the lowest common denominator issue is annoying. However, in some
cases you can make use of varying features by implementing different
renderpaths, and in other cases it doesn't matter much. But factually wrong is
that there would be something like a "restricted subset of GL4". Such a thing
does not exist. You either have GL4 core with all its features, or you don't.
Perhaps author means that GL4 isn't available everywhere, and they have to
fall back to GL3?

#2: Yes driver quality for OpenGL is bad. It is getting better though, and I'd
suggest rather than complaining about OpenGL, how about you complain about
Microsoft, Dell, HP, Nvidia, AMD etc.?

#compiler in the driver: Factually this conclusion is completely backwards.
First of all, the syntactic compile overhead isn't necessarily what makes
compilation slow. GCC can compile dozens of megabytes of C source code in a
very short time. Drivers may not implement their lexers etc. quite well, but
that's not a failing of the specification. Secondly, Direct3D is also moving
away from its intermediary bytecode compile target, and is favoring delivery
of HLSL source code more.

#Threading: As author mentions himself, DX11 didn't manage to solve this
issue. In fact, the issue isn't with OpenGL at all. It's in the nature of GPUs
and how drivers talk to them. Again author seems to be railing against the
wrong machine.

#Sampler state: Again factually wrong information. This extension
[http://www.opengl.org/registry/specs/ARB/sampler_objects.txt](http://www.opengl.org/registry/specs/ARB/sampler_objects.txt)
allows decoupling texture state from sampler state, and it has been core
functionality since GL 3.3. The unit issue has not been resolved, however;
Nvidia did propose a DSA extension, which so far hasn't been taken up by any
other vendor. Suffice it to say, most hardware does not support DSA, and
underneath it's all texture units, even in Direct3D, so railing against
texture units is a complete red herring.
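For reference, the decoupling that sampler objects provide looks roughly like this. This is a fragment, not a runnable program: it assumes an active GL 3.3+ context, loaded GL function pointers, and an already-created `texture`:

```c
/* Fragment: assumes an active GL 3.3+ context and an existing `texture`.
 * A sampler object carries filtering/wrap state independently of the
 * texture, and is bound to a texture *unit*, not to the texture itself. */
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);   /* texture data on unit 0 */
glBindSampler(0, sampler);               /* sampling state, also on unit 0 */
```

The same texture can thus be sampled with different filtering on different units without duplicating the texture state.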

#Many ways to do the same thing: Again many factual errors. Most of the "many
ways" that author is railing against are legacy functions, that are not
available in core profile. It's considered extremely bad taste to run a
compatibility (with earlier versions) profile and mix and match various strata
of APIs together. That'd be a bit like using Direct3D 8 and 11 functionality
in the same program. The author seems to basically fail at setting up his GL
context cleanly, or doesn't even know what "core profile" means.

#Remainder: Lots of handwaving about various vaguely defined things and
objecting to condjmp in the driver, again, author seems to be railing against
the wrong machine.

Conclusion: Around 90% of the article is garbage. But sure, OpenGL isn't
perfect, and it's got its warts, like everything, and it should be improved.
But how about you get the facts right next time?

~~~
leorocky
> how about you complain about Microsoft, Dell, HP, Nvidia, AMD etc.?

These companies are businesses that need a business reason to support your
platform. Until more people are playing triple-A games on platforms that use
OpenGL, you can't really fault them for not spending money where it doesn't
make sense. Apple designs its own chips for its mobile devices, so I'd think
OpenGL on iOS would have better driver support.

~~~
pyalot2
OpenGL is the only thing you get on iOS. There is no Direct3D on iOS.
Likewise, it's the only thing you get on PS4, Steambox, OSX etc.

But that's not my issue, I acknowledge freely that OpenGL drivers are bad. I
just don't quite see how that's a failing of OpenGL, rather than the vendors
who actually implement the drivers.

~~~
kevingadd
PS4 doesn't use OpenGL. No game console I'm aware of has ever used OpenGL.
(The PS3 is the closest example, since it used to let you run Linux, so you
could run Mesa - but the GPU wasn't accessible to you.) I don't know why
people keep claiming that a given console runs OpenGL.

~~~
Mikeb85
PS4 doesn't use 'OpenGL', just a low level api and a higher level api that has
features suspiciously close to OpenGL 4.3...

Also uses Clang and a bunch of Unixy open source stuff...

~~~
kevingadd
Sure, but in practice this is not 'OpenGL' enough to count when talking about
OpenGL making ports trivial. (I say this as someone who recently shipped a
game w/a OpenGL renderer that has a PS4 port in the works - there are a
surprising number of differences!)

The core OpenGL feature set and API factoring are almost certainly things you
can expect to be similar on console platforms, at least where the hardware
matches. So in that sense 'It's OpenGL' is almost true!

------
npsimons
OpenGL might be broken, but Direct3D and DirectX are _not_ the solution.
Otherwise, you might as well just correct yourself and say "Windows gaming"
_not_ "PC gaming". And clinging to a different single vendor's proprietary
standard doesn't seem like a good idea either.

~~~
Tuna-Fish
Maybe that's why his blog post was full of links to Mantle.

------
VikingCoder
Reason #2 is chicken and egg.

I'm not excusing OpenGL for this fact, I'm just stating that if people cared
about the quality of OpenGL drivers and made purchasing decisions based on
that, then you bet your ass the manufacturers would make the OpenGL drivers
better.

------
jmpeax
I've written an in-house visualization program in OpenGL that runs on Mac and
Windows. These articles just make me laugh, especially the bit where they talk
about cross platform being a myth, then follow on with the virtues of DirectX.

~~~
lilsunnybee
The article specifies OpenGL being deficient for high-performance gaming, not
so much for other graphics computing tasks.

------
Mikeb85
So the solution he proposes at the end is to use Mantle? AMD has already
proven themselves incompetent (or are they only unwilling?) at implementing
OpenGL, unable to compete with either Intel or Nvidia, and now they want to
fragment graphics APIs? And this is the 'solution'?

As for OpenGL's issues - that's what happens when a spec gets old enough. But
the fact remains, it's the only graphics API that could be called 'universal',
they have modernized the spec, and despite all its failings, somehow it still
delivers better performance than DirectX...

------
CmonDev
And it will never be properly fixed due to backwards-compatibility
requirements - just hack-patched. Just like web (HTML/CSS/JS).

------
icambron
I don't know anything about graphics programming, but I couldn't make any
sense of this:

> While the current GL spec is at feature parity with DX11 (even slightly
> ahead), the lowest common denominator implementation is not, and this is the
> thing that I as a developer care about.

Isn't DX restricted to Windows, meaning its lowest common denominator
implementation is nothing at all?

~~~
MBCook
What I believe he means is that if you have an OpenGL driver that implements
the ENTIRE specification (correctly and in a performant manner), then you have
basically all the features of a modern DirectX 11 card available.

The problem is that many OpenGL drivers implement the base OpenGL specs and
then a couple of extensions here and there. Because of this you can't rely on
what's available and you end up with something that's more akin to a mix of
many previous versions of DirectX: some advanced capabilities but many basic
ones missing.

------
AshleysBrain
I think the difficult thing about OpenGL is that it is hard to learn. The core
profile of the latest version might be nice, but in practice there is still a
wide range of OpenGL versions in use, so you have to learn the various ways of
doing things across the OpenGL versions, or code against a crufty old version
which is the lowest common denominator. Then there are various driver issues,
platform-specifics around context creation, and so on. Overall it's a pretty
tough chestnut if you're going to use it directly instead of relying on an
engine/framework that has figured out lots of that already.

Mobile on the other hand seems decent - OpenGL ES 2+ seems to be a well-
designed clean and relatively minimal API with widespread support.

------
ksec
With the traction of the iOS ecosystem, I think Apple could have created its
own API, or even just reused Mantle or used it as a base for a new API. This
certainly wasn't possible when the Mac was in the minority, but now even if
Apple gets only 10% of the phone market there is still a huge userbase.

It would no longer be bound by OpenGL.

------
alariccole
Your site is broken.
[http://webcache.googleusercontent.com/search?q=cache:http://...](http://webcache.googleusercontent.com/search?q=cache:http://www.joshbarczak.com/blog/?p=154)

------
shmerl
What is the current stance of Nvidia and Intel on implementing Mantle support?
And what are the chances of mobile GPU makers doing the same? Is it easier to
make a better OpenGL 5, or to adopt Mantle across all GPU manufacturers?

~~~
corysama
Even if it is technically quite feasible, I would be seriously surprised if
Nvidia and Intel implement Mantle if only for political reasons. However, they
definitely will implement DX12 --which as far as I can tell is pretty much the
same thing as Mantle except explicitly multi-vendor. If Mantle was a tap on
OGL's door, DX12 is a full-on wake-up call.

Back when I worked in Windows/console games, my market demanded D3D (+
sony/nintendo's wacky custom APIs). Now I work in mobile and my customers
demand GLES. GL advocates used to cry "GL has the better tech! D3D only wins
because of politics! Boo!" It will be quite a turn if they switch tunes to
"D3D has better tech, but GL still wins because of politics! Yay!"

DX12 API Preview vid
[http://channel9.msdn.com/Events/Build/2014/3-564](http://channel9.msdn.com/Events/Build/2014/3-564)

DX12 API Preview slides
[http://view.officeapps.live.com/op/view.aspx?src=http%3a%2f%...](http://view.officeapps.live.com/op/view.aspx?src=http%3a%2f%2fvideo.ch9.ms%2fsessions%2fbuild%2f2014%2f3-564.pptx)

~~~
shmerl
DX12 is not the way forward because it remains MS-only and there is no
indication that MS is interested in opening it up. The way forward is either
creating a new open API which all manufacturers would support (Mantle, or
whatever), or seriously improving OpenGL, if that's possible.

------
zurn
> The GL model, placing the compiler in the driver, is WRONG

How does he figure OpenGL mandates this? OpenGL allows a (caching) GLSL
compiler to be part of the OS OpenGL support, leaving drivers to consume
bytecode or some other kind of IR.
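Related machinery already exists in core GL: with ARB_get_program_binary (core since GL 4.1), an application can itself cache the driver-compiled blob and skip the GLSL front-end on later runs. A rough fragment, assuming an active GL 4.1+ context and an already-linked `program`:

```c
/* Fragment: assumes an active GL 4.1+ context and a linked `program`.
 * First run: pull the driver-compiled blob out and cache it. */
GLint len = 0;
glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);
void *blob = malloc(len);
GLenum format;
glGetProgramBinary(program, len, NULL, &format, blob);
/* ...write `format` and `blob` to a cache file... */

/* Later runs: skip GLSL compilation entirely. */
glProgramBinary(program, format, blob, len);
/* The blob is driver- and hardware-specific; if the subsequent link
 * status check fails, fall back to compiling the GLSL source. */
```

The blob's format is opaque and vendor-specific, which is precisely why a shared OS-level compiler would need a standardized IR rather than these binaries.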

------
maaku
Your webserver is broken. Anyone have a cached link?

~~~
robin_reala
[https://webcache.googleusercontent.com/search?q=cache:http%3...](https://webcache.googleusercontent.com/search?q=cache:http%3A%2F%2Fwww.joshbarczak.com%2Fblog%2F%3Fp%3D154)

!cache in DDG will get that for you in future.

------
SteveDeFacto
I can't agree 100% with everything in the article but the part about GLSL is
spot on.

------
feistyio
As is your server.

------
Cocodyne1
OpenGL works fine for me, so it must be user/programmer error.

~~~
fixermark
I'm not sure if you're being sarcastic. On the chance that you're not, you may
very well be working in a space where you haven't had to either port your
OpenGL-utilizing app to another hardware platform with an OpenGL API, or you
haven't needed the "deeper magic" parts of OpenGL that become necessary when
you get close to the limits of hardware capabilities.

Which means I envy you, in short. ;)

~~~
Cocodyne1
Is that why I was downvoted?

Seems petty.

------
foxhill
theory: articles of the form "X is broken", "X is wrong", or some other
equally dramatic statement, all say almost exactly the same thing: nothing.

~~~
sdfjkl
While I hate link baiting as much as the next guy, I find myself agreeing with
most of the things he says about OpenGL.

