
Things that drive me nuts about OpenGL - nkurz
http://richg42.blogspot.com/2014/05/things-that-drive-me-nuts-about-opengl.html
======
jarrett
One of the biggest headaches for me is debugging. As the author says, the
facilities for reading state back out are often questionable. Even where
they're not, I'd rather not spend all my time rolling my own custom OpenGL
debugging tools. I'd love a cross-platform OpenGL debugger--even if it only
handled basic stuff.

For example, when nothing renders, I don't want to waste an hour, staring at
my code with no direction, until I realize I forgot a call to
glEnableVertexAttribArray. Instead, I'd like to boot up my trusty debugger and
go through a sane process of narrowing down the problem, like I do for just
about every other class of bug.
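
To make that concrete, here's roughly the setup code in question (a sketch;
`program` and `verts` are assumed to already exist):

    /* Typical vertex attribute setup -- all standard GL calls. */
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    GLint pos = glGetAttribLocation(program, "a_position");
    glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, 0);

    /* Omit this one call and you get a blank screen: no GL error,
       and nothing in the readable state points you at the problem. */
    glEnableVertexAttribArray(pos);

    glDrawArrays(GL_TRIANGLES, 0, 3);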

Also a sane way to debug shaders would be fantastic. The usual advice is to
write debug info out as color values. The fact that anyone considers that a
healthy debugging strategy just illustrates how far behind graphics
programming is in terms of developer friendliness.
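
For anyone who hasn't seen it, the technique amounts to hijacking the output
color (a sketch; `v_normal` stands in for whatever value you're trying to
inspect):

    /* GLSL has no printf, so you smuggle the value out as a color. */
    const char *debug_frag =
        "#version 120\n"
        "varying vec3 v_normal;\n"
        "void main() {\n"
        "    /* remap [-1,1] to [0,1] and eyeball the result */\n"
        "    gl_FragColor = vec4(v_normal * 0.5 + 0.5, 1.0);\n"
        "}\n";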

I don't know if it's better on other APIs. OpenGL is the only one I use,
because I never have occasion to develop Windows-only apps.

~~~
flohofwoe
If you're on an Nvidia GPU, check out Nvidia Nsight. And this is also one
problem with GL: where good debugging/profiling tools exist, they often only
work on one OS and/or for specific GPUs (I hope this is where VOGL will come
in and fix that).

Other than that, I don't even think of OpenGL as a single standard anymore.
It's more like "Nvidia GL", "AMD GL", "Intel GL", "Apple GL", etc... there is
a core set of functionality which works across all implementations (and which
could be cleaner), but if performance is more important than easy portability,
you need to implement driver-specific code paths anyway. Whether this is good
or bad, I haven't completely made up my mind. At least with GL you have an
environment where GPU vendors can experiment and compete through extensions.

~~~
jarrett
> if performance is more important than easy portability, you need to
> implement driver-specific code paths anyway

I think that's fine, because as you said, it encourages GPU vendors to
innovate. Plus, it probably affords more of an opportunity to squeeze every
last drop of performance out of each card.

On the other hand, I'd start to get upset if we had to write card-specific
code just to do the absolute basics. (Which isn't the case right now.) There
are tons of apps that need only a tiny fraction of the performance a GPU
offers. For those apps, it would be insane to have to maintain multiple
codebases just to, say, draw a box with a texture and Lambert reflectance.
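
For scale, the "absolute basics" here fit in a handful of shader lines (a
sketch in GLSL 1.20; the uniform and varying names are made up):

    const char *lambert_frag =
        "#version 120\n"
        "uniform sampler2D u_tex;\n"
        "uniform vec3 u_light_dir;  /* normalized, view space */\n"
        "varying vec2 v_uv;\n"
        "varying vec3 v_normal;\n"
        "void main() {\n"
        "    float ndotl = max(dot(normalize(v_normal), u_light_dir), 0.0);\n"
        "    gl_FragColor = texture2D(u_tex, v_uv) * ndotl;\n"
        "}\n";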

------
smw

      * GL extensions are written as diffs vs the official spec
      So if you're not an OpenGL Specification Expert it can be extremely
      difficult to understand some/many extensions.
    

This is spot on. I was thinking last week about writing a webapp that
displayed the spec with a list of extensions you could check to have them
merged in.

Then I decided I was yak shaving and went back to actually working on my
project.

~~~
frik
Something like this? [http://webglstats.com/](http://webglstats.com/)

~~~
smw
That's really useful in itself, but no, I mean something that lets you read
the actual OpenGL spec with the extension text patched in.

If you skim the text of an extension [1], you'll see an awful lot of this:

> Add a new section "Buffer Objects" between sections 2.8 and 2.9:

It'd be very nice to read those sections without having to have the full spec
open in another window. Especially because some _other_ extension you're using
might be modifying the text as well.

1:
[http://www.opengl.org/registry/specs/ARB/vertex_buffer_objec...](http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt)

------
jamesu
After spending a few months porting a Direct3D9 game to work in OpenGL 2.x,
this article certainly resonates with me. It's quite easy to get code working
nice and fast on one driver, while on another it stalls and drops down to 1
fps because you didn't anticipate that the driver developers designed their
version of the API around different performance characteristics.

Meanwhile in Direct3D9, the game still runs smoothly at 60+ fps on all major
drivers on most recent hardware. Granted, it's a bit of an apples-vs-oranges
comparison, but it certainly causes a lot of headaches, especially when you
need to go so far as to modify the art so it batches better.

There is also still a lot of conflicting information on how best to use
OpenGL. OpenGL 3.x certainly helped by consolidating a lot of stuff which was
in extensions, but in my case it's not much help, as I still have to put up
with the land of OpenGL 2.x.

~~~
yoklov
> OpenGL 3.x certainly helped by consolidating a lot of stuff which was in
> extensions, but in my case it's not much help, as I still have to put up
> with the land of OpenGL 2.x.

Ha, where I work we still get support tickets about our ancient GL 1.5
renderer from time to time. If only we could drop it.

And then I get home and see people on /r/gamedev suggesting that OpenGL 3.2
is outdated and not even worth supporting anymore. I even got downvoted for
saying that my less-than-three-year-old laptop ran GL 3.2. Maybe I'm just in
need of an upgrade...

------
azakai
> Mantle and D3D12 are going to thoroughly leave GL behind (again!) on the
> performance and developer "mindshare" axes very soon.

Huh? Performance, maybe, but how is "mindshare" being measured here?

DirectX "beat" OpenGL a long time ago - does the author claim OpenGL beat its
way back to the top? If so, I can only assume that was due to GL on mobile
platforms. But those mobile platforms - iOS and Android - still use GL, and
are growing? How can D3D12 and Mantle beat those when they don't even run on
those platforms, while Windows - the platform they do work on - is anyhow
already under DirectX control?

Furthermore, GL is seeing another area of growth through WebGL, which now
works even in Microsoft's browser.

Am I missing something? That mindshare statement seems completely off base.

~~~
corillian
He's referring to mindshare among professional (game) engine developers -
those people who try to get maximum rendering performance out of multiple
platforms. His statement is totally on target for reasons I enumerated in a
blog article on this subject back in December:
[http://inovaekeith.blogspot.com/2013/12/why-opengl-probably-isnt-graphics-api.html](http://inovaekeith.blogspot.com/2013/12/why-opengl-probably-isnt-graphics-api.html)

As for WebGL, Apple intentionally disables WebGL support on iOS - except in
iAds - so game devs can't circumvent the app store. Since WebGL is needlessly
restricted to the feature set of the OpenGL ES specification, its usefulness
is severely limited. WebGL also has many other problems, such as the fact
that JavaScript is slow as hell.

~~~
azakai
I read the blog post, but it seems to focus on why OpenGL isn't good - not on
measuring OpenGL's mindshare. Yes, perhaps Mantle and DirectX are better, and
that might in theory lead to more mindshare. But the fact remains that OpenGL
is standard on mobile and on the web, which should increase OpenGL's
mindshare.

Yes, WebGL is disabled on iOS. It's disabled on OS X desktop too, for now.
Hopefully that will change soon.

> JavaScript is slow as hell.

I wouldn't say that 67% of native speed is "slow as hell", and that's where
things currently stand. Perhaps you have a specific workload in mind that
happens to be slow - how did you measure?

~~~
corillian
The blog states that if the Mantle specification is adopted by other
manufacturers, particularly on mobile, then it will become the new
cross-platform standard and OpenGL will cease to be relevant. 67% of native
speed is astronomically slow in the world of real-time games, where improving
per-frame performance by 3ms is considered a huge win. Getting all of your
nice special effects and post-processing implemented while still running at a
smooth 60fps is very difficult - especially on mobile.

~~~
greggman
> Getting all of your nice special effects and post-processing implemented
> while still running at a smooth 60fps is very difficult

And has very little to do with CPU speed. There are some games where CPU
speed is important, but there's a large subset (I'd be willing to bet 95%) of
AAA games where only GPU speed is important, and they run just fine on
low-end CPUs.

~~~
corillian
The whole point of D3D12 and Mantle is to decrease driver overhead, improve
multi-threading support, and increase the number of per-frame draw calls that
can be made. Those are all CPU performance improvements, so I think it's safe
to say that CPU speed is quite important, or else AMD and Microsoft wouldn't
be going to the trouble.
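
The draw-call point is easy to see in code: the classic pattern burns one
driver round trip per object, which is exactly the CPU cost these APIs
attack. GL's partial answer today is batching, e.g. instancing (a sketch;
`u_model`, `model_matrices`, and the counts are assumed to exist):

    /* CPU-bound pattern: one uniform update + draw call per object. */
    for (int i = 0; i < n_objects; i++) {
        glUniformMatrix4fv(u_model, 1, GL_FALSE, model_matrices[i]);
        glDrawElements(GL_TRIANGLES, n_indices, GL_UNSIGNED_SHORT, 0);
    }

    /* One call for all objects; per-instance data is fetched on the
       GPU instead (GL 3.x / ARB_draw_instanced). */
    glDrawElementsInstanced(GL_TRIANGLES, n_indices, GL_UNSIGNED_SHORT, 0,
                            n_objects);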

------
tmurray
>> 20 years of legacy, needs a reboot and major simplification pass

This was attempted. It failed:
[http://en.wikipedia.org/wiki/OpenGL#Longs_Peak_and_OpenGL_3....](http://en.wikipedia.org/wiki/OpenGL#Longs_Peak_and_OpenGL_3.0_controversy)

~~~
ksec
Mainly because the CAD industry refused to move forward.

~~~
waps
Mainly because design by massive committee doesn't work. It gets you supremely
useless things like HTML5 canvas.

~~~
gmac
Can you explain the uselessness of HTML5 canvas? I've found some uses for
it...

------
wtallis
I think it's instructive to compare with OpenCL. It was designed to be very
similar to GL, but didn't have a backwards-compatibility legacy. It's still
not exactly _friendly_, and some of the nicer bits are optional, but it does
serve the purpose of being a hardware abstraction layer for parallel
processing. It strikes me as being many times simpler than GL, even though its
fundamental purpose is only a bit simpler.

~~~
greggman
OpenCL is actually much worse. It requires way more querying of what the
hardware can do in order to use it, rather than abstracting that away like
other GPGPU libraries do.

While OpenGL can be full of extensions, if you ignore the extensions and
stick to the base, it's pretty easy to write a program that doesn't have to
care what system it's on.
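
For instance, just picking a sane work-group size means a run of queries like
this (a sketch; error handling omitted):

    #include <CL/cl.h>

    /* The querying you must do before you can even pick launch
       parameters. */
    static void query_gpu(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        /* Tuning parameters you're expected to discover at runtime: */
        cl_uint compute_units;
        size_t max_wg_size;
        cl_ulong local_mem;
        clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(compute_units), &compute_units, NULL);
        clGetDeviceInfo(device, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                        sizeof(max_wg_size), &max_wg_size, NULL);
        clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                        sizeof(local_mem), &local_mem, NULL);
        /* ...and only then choose work-group sizes for this GPU. */
    }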

~~~
wtallis
OpenCL requires a lot of querying the hardware because that information
_can't_ be abstracted away without removing the ability to do effective
performance tuning. If it were possible to make effective use of a GPU given
a well-tuned BLAS, OpenCL wouldn't have been created in the first place.

OpenGL extensions are likewise absolutely necessary to make full use of the
hardware, but the extensions mess is way worse than the range of optional
OpenCL features.

~~~
greggman
You don't need _full use of the hardware_ most of the time. Sure, you get
some flashy extras, but few apps/games, if any, need those extensions to be
fun and beautiful. I know of no games that require extensions to run. Those
that use them are rarely all that much better with them than without.

So my point stands: you can generally use GL without worrying about the
hardware. Not so with CL.

~~~
angersock
_You don't need full use of the hardware most of the time_

If you're bothering to write something in OpenCL or another parallel
computation framework, you probably care about that stuff.

------
dkarapetyan
So is there a reason graphics programming is still in the dark ages? This
sounds like assembly programming, where you have N different add instructions
with slightly different semantics for how flags are set.

~~~
Hemospectrum
The primary reason is that these APIs are very thin abstractions over the
actual assembly-level instructions sent to the graphics card. It's certainly
possible to write a higher-level abstraction; it's been done many times, in
the form of game engines and so forth. The risk of doing so is that you narrow
down the possible types of visual effects you can implement, and historically,
the implementation of unique and original visual effects was a big part of the
way AAA games competed with each other.

In principle there's no longer any reason why it has to work this way, because
now we have shaders. These are a more domain-specific tool than a C API, and
can thus safely present large cross-sections of graphics card functionality in
a higher-level way without taking power away from the graphics programmer. So,
for example, people modding Minecraft can introduce their own visual effects
stages by hotloading shaders into the standard Minecraft graphics pipeline.

EDIT: I haven't talked about performance. For game developers, rendering a
frame quickly is as important as having total control over the output of the
rendering process. For the most part, building an API higher-level than OpenGL
also means dictating a particular scene graph structure, as with OS-specific
window system APIs. If the API imposes a certain way of organizing your scene
graph, this can have serious impacts on your game's performance, because the
scene graph traversal is optimized for one type of scene and you're using it
for another. I'm not sure I'm simplifying this explanation very well, but
that's the gist of it.

~~~
yoklov
I don't disagree with what you're trying to say, but this is my line of work
so I have a few nits, sorry.

> these APIs are very thin abstractions over the actual assembly-level
> instructions sent to the graphics card

If only this were the case... The drivers end up doing quite a bit, to the
point where most renderers spend most of their time waiting for the driver to
return. This is part of the reason Mantle was a big deal, and why DX12 and GL5
promise to be lower level.

> In principle there's no longer any reason why it has to work this way,
> because now we have shaders

Shaders don't replace all the fixed function parts of the GPU, and they don't
try to. Maybe someday they'll replace more of it, but there are still several
fixed function stages in the rendering pipeline.

Not to mention, setting GPU state will pretty much always be completely
independent of shaders, and required for many visual effects. D3D or Cg FX
files try to abstract this, but I don't think they're popular anymore.
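
For example, none of this is visible from inside a shader (a sketch of
standard state calls):

    /* Fixed-function state a shader can neither see nor set: */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* alpha blending */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);                             /* read-only depth */
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);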

> For the most part, building an API higher-level than OpenGL also means
> dictating a particular scene graph structure...

The general premise that any layer on top of the driver will limit the way
you structure the renderer is accurate, but scene graphs are an antipattern
inside a modern renderer. Plenty of engines use them for higher-level
organization, but keep them far away from the renderer. All that pointer
chasing murders the cache.

------
wolfgke
> 20 years of legacy, needs a reboot and major simplification pass

Since OpenGL 3.2
([http://en.wikipedia.org/wiki/OpenGL#OpenGL_3.2](http://en.wikipedia.org/wiki/OpenGL#OpenGL_3.2))
there has been a strict division between the core profile and the
compatibility profile. If you want a majorly simplified API, just request a
core context instead of a (default) compatibility context.
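
With GLFW 3, for example, requesting a core context is just a few hints (a
sketch; other context-creation libraries have equivalents):

    #include <GLFW/glfw3.h>

    glfwInit();
    /* Ask for a 3.2 core profile instead of the default compatibility
       context. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); /* needed on OS X */
    GLFWwindow *win = glfwCreateWindow(1280, 720, "core profile", NULL, NULL);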

~~~
pjmlp
It doesn't help when you need to target DX9 cards that only have OpenGL 2.1
drivers.

~~~
wolfgke
The only graphics cards from the last few years that don't support OpenGL >=
3.1 are Intel parts from before Sandy Bridge. Since Sandy Bridge, OpenGL 3.1
is supported (for which my remark also holds; the only difference is that
with OpenGL 3.0/3.1 you probably want to use a forward-compatible context
instead, a notion that only applies to those specific OpenGL versions), and
since Ivy Bridge, OpenGL 4.0 is supported.

Thus you only drop support for computers with an Intel-only GPU and a
processor older than Sandy Bridge. When I consider how many app developers
drop support for older iPad/iPhone models that are much more recent than
pre-Sandy-Bridge CPUs, and hardly anybody complains (the same is, of course,
true on Android), I really have difficulty understanding where the problem
is.

~~~
pjmlp
Game developers targeting the casual user market, where most owners have a
5+ year old computer running XP, think otherwise.

~~~
BystanderX
We're at the point where 5+ year old computers are running Vista. XP is
finally starting to phase out for game-capable computers.

~~~
pjmlp
True, I had forgotten how old Vista is.

However, it does not change the fact that many of those systems are only 2.1
capable.

------
paddlepop
Having only ever made OpenGL applications, I feel a transition to DirectX may
be in order.

~~~
tachyonbeam
Yeah, so your code can only run on Windows. That's clearly the future. You
should trust Microsoft, the company with the most vision.

~~~
bitwize
If Microsoft has the best API, you _should_ develop exclusively or
preferentially on Microsoft platforms.

Autodesk's entire product line migrated to a D3D-only rendering pipeline. Why?
Because D3D is the superior API.

~~~
pfranz
Do you have any info on "Autodesk's entire product line" migrating to
D3D-only?

I know many of Autodesk's apps run on Linux, Mac OS X, Windows, and iOS - so
I'm curious. The only reference I can find is a long reply from an Autodesk
Inventor developer [1], from what looks like around 2007, where OpenGL was
removed from their Windows-only app (Autodesk Inventor). It sounded like his
beef was with the sub-par driver support for the OpenGL spec among video
cards and not with the API itself (which I appreciate as an issue, but is
very different from this discussion about it being a poorer API).

[1]
[http://forums.autodesk.com/autodesk/attachments/autodesk/78/...](http://forums.autodesk.com/autodesk/attachments/autodesk/78/442801/1/autodesk_inventor_opengl_to_directx_evolution_788.pdf)

------
frozenport
>>The API should be simplified and standardized so using a 3rd party lib
shouldn't be a requirement just to get a real context going.

Nonsense! OpenGL is an evolving specification for the best of the best. Why
take away the tools that let games approach the theoretical performance of a
GPU? Removing features is unjustified if we know how to use them.

What the author needs is a wrapper, and many exist. For example, Qt will let
you write graphics code that runs on the desktop and on mobile.

Certainly nobody would call for the end of WinAPI just because that's how we
wrote the other APIs!

~~~
pubby
Simplification does not mean removal of performance or features. In fact, most
of the recommendations he makes (such as DSA) could perform better than
current OpenGL.
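
To make the DSA point concrete, here's a sketch using EXT_direct_state_access
(the extension available today; `buf`, `size`, and `data` are assumed):

    /* Classic bind-to-edit: you disturb a global binding point just to
       update an object, and the driver must track the current binding. */
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);

    /* Direct State Access: name the object explicitly, no binding churn. */
    glNamedBufferDataEXT(buf, size, data, GL_STATIC_DRAW);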

>Removing features is unjustified if we know how to use them.

This is not the path that OpenGL has taken, as shown by all the deprecations
and removals done in previous versions.

~~~
frozenport
>>shown by all the deprecations

Yes, but they also added a bunch of stuff. Going from #version 120 to
#version 430 feels like a whole different language.

------
twelvechairs
Some good points here, but what it misses is that generally the time wasted
on a crappy API is less than the time wasted having to learn a different API
for each platform. Performance is a bigger issue, and it will be interesting
to see whether OpenGL can improve here, or whether Mantle can develop on
other platforms (and be reasonably fast on them).

~~~
soup10
I disagree; crappy API design wastes an enormous amount of developer time and
effort. OpenGL is full of legacy cruft and has lots of room for improvement,
streamlining, and simplification (same with D3D12, for that matter). There is
a lot of needless complexity, hoop jumping, and wheel reinventing when it
comes to 3D graphics programming (not to mention countless common "gotchas"
and tricks of the trade that are not as well documented and easy to learn as
they could be).

~~~
twelvechairs
I don't disagree. The point I'm trying to make is just that using a
non-cross-platform API is likely to waste just as much time (in having to
port a program, or just learn something new for the next one), and that
performance for the end user is effectively more important than both of
these. The article is right, but it needs perspective. There's a reason for
OpenGL's success - it's just not because it's fun to use.

------
zurn
We should be much more worried about the higher level interfaces/languages
that app developers actually program to. They're stuck in the stone age. The
level of abstraction needs to be raised many notches and programmers need to
be freed from running around in circles after performance tricks.

~~~
flohofwoe
Nah, just use a higher-level framework or a 3D engine for this. You give up
some control and performance, but gain productivity. What we need at the
moment is less abstraction, because D3D's and OpenGL's abstractions don't fit
current GPUs very well (D3D works around this somewhat by breaking API
backward compatibility with each new release).

~~~
zurn
That's what I was talking about: those frameworks/engines aren't on a
trajectory that will lead away from the current graphics-programming
quagmire.

------
jbb555
I can't argue with most of this.

