
Key Advances in the OpenGL Ecosystem - Jarlakxen
https://www.khronos.org/news/press/khronos-group-announces-key-advances-in-opengl-ecosystem
======
exDM69
OpenGL is long overdue for a spring cleaning. A little less than 10 years ago
there was an initiative to create a new OpenGL API, code-named "Longs Peak",
which would have solved some of the obvious issues.

Right now there's a variety of versions of OpenGL out there, and they are
incompatible in subtle ways. To write "portable" graphics code, you need both
compile-time and runtime version checks for a variety of features. Some
restrictions in mobile versions make sense because of hardware requirements;
others are just plain ridiculous.

And the versions of OpenGL that vendors ship are very diverse. For years,
Nvidia and AMD have been the only ones to provide (at least almost) the latest
version of OpenGL (but only on Windows and Linux, not Mac). Other vendors are
lagging behind by several years.

I won't even start listing the obvious problems with the OpenGL API. Everyone
who is working with it knows that the API is ridiculous.

I'd like to see an API clean-up (i.e. a rewrite from scratch), a common shader
compiler frontend, a common shader binary format, and common tooling like
benchmarking and profiling tools. Perhaps even a software-emulated "gold
standard" implementation.

At the moment, writing practical OpenGL applications is miserable. It's quite
alright as long as you're working on a small project for your own enjoyment,
but once you have to start dealing with a variety of driver bugs from
different vendors and whatnot, it takes a lot of time to actually ship an
application.

~~~
rrradical
Would you mind clarifying your problems with the API? I've done a fair amount
of work with OpenGL (mostly ES), and I've found it to be a sensible low-level
target for my rendering engine. That's not to say I haven't run into problems,
but I assume any high performance graphics API has a learning curve. I don't
have experience with Direct3D or other APIs to compare with it though.

All of my complaints with OpenGL are related to, as jarrett mentioned, the
lack of debugging tools and the fiddly cross-platform and cross-version
issues.

~~~
hrjet
The craziest thing I found in the OpenGL API was that there was no way to bind
data to a primitive. You have to repeat it for every vertex of the primitive,
which sometimes also means that you can't share vertices between primitives.
This results in a 9x overhead in data for triangles!

Examples illustrating what I am saying:

[http://stackoverflow.com/q/6530700](http://stackoverflow.com/q/6530700)

[http://stackoverflow.com/q/6056679](http://stackoverflow.com/q/6056679)

[http://stackoverflow.com/q/23879737](http://stackoverflow.com/q/23879737)

~~~
exDM69
You can definitely "bind data to primitives" and share vertices between
triangles in OpenGL; that has been possible ever since the first version of
OpenGL.

Are you talking about legacy OpenGL-style glBegin/glVertex/glEnd? That has
never been the preferred way of drawing, and since OpenGL 1.5 (circa 2003),
vertex buffers have been the way to go.

If you want to share vertices between triangles, you should build an index
buffer and use the glDrawElements* functions.

~~~
hrjet
I meant to say "bind data to elements". It's been a while since I programmed
in OpenGL so the terminology is shaky.

I saw your other answer below too. I am not able to visualise how the
duplicate data could affect caching positively. But anyway, I don't have deep
knowledge about the hardware.

And I didn't know other 3D APIs had the same problem. Thanks for mentioning
that.

------
SynchrotronZ
The quotes on that page feature the best job title I have ever seen: "Aras
Pranckevičius, graphics plumber at Unity."

~~~
robterrell
BTW Aras is a great guy to follow on twitter:
[https://twitter.com/aras_p](https://twitter.com/aras_p)

~~~
Kronopath
Heh.
[https://twitter.com/aras_p/status/496027538460651521](https://twitter.com/aras_p/status/496027538460651521)

------
theandrewbailey
Direct State Access is finally in the core spec. That might be the last
missing piece of what OpenGL 3.0 was supposed to be.

------
DonHopkins
We've come a long way from the time that the HP technical support person asked
Steve Strassmann "Why would you ever need to point to something that you've
drawn in 3D?"

[http://www.art.net/~hopkins/Don/unix-haters/x-windows/disaster.html](http://www.art.net/~hopkins/Don/unix-haters/x-windows/disaster.html)

~~~
flohofwoe
In a twisted way, that's still true though :) Getting the content of the pixel
beneath the mouse still isn't trivial, since the CPU and GPU run in parallel
and the CPU is at least one frame ahead. In general it's better to only push
data to the GPU, and not try to read any data back with the CPU.

------
bitwize
Next Generation OpenGL: Key Advances That Will Bring OpenGL to Where Direct3D
Was Two Years Ago.

~~~
sharpneli
Not quite. It will be like Mantle.

Mantle prompted MS to rapidly announce DX12, so it's the exact same direction
everyone else is going in.

While DX is more pleasurable to use than OpenGL (I'm personally a pure OpenGL
dev, yet I acknowledge its weaknesses), it has similar limitations: a very
limited threading model and massive CPU overhead in draw calls.

The future of graphics APIs is Mantle-style command buffer generation from
multiple threads, and then sending those buffers to the GPU.

------
NINJATESTER
HI FACU :D!

