
Here are things that made learning OpenGL extremely painful for me:

- I came from doing graphics on the CPU where you were often handed a buffer of ARGB pixels and did whatever the heck you wanted to them. You could call OS or library routines to draw simple 2D shapes, text, or images, or you could write directly to the buffer if you were generating a pattern or writing a ray tracer. Understanding textures, framebuffers, 3D polygons etc. was really strange. (What do you mean I have to draw geometry if I just want to display a JPEG? WTF?)

- Understanding the state machine and having to hook up so much crap for simple operations. I can't tell you how many times I wrote, or even copied existing working code into a program, and it resulted in nothing on the screen, and no errors. It was almost always because I either hadn't enabled some state or hadn't bound some thing to some particular location. And there's no way to debug that other than stumbling around and asking vague questions online because you don't have any idea what's going on, unless you're already an expert and know what to look for.

- The complete lack of modern language features in the API, such as using enums instead of #defines. This means you can do stuff like call glBegin(GL_LINE) and if you don't check glGetError() (another stupid pattern!) you'll never realize it's supposed to be glBegin(GL_LINES), but you just won't see anything rendered.

- Having 20 different versions of each function for every combination of data type and size. (e.g. glColor3b, glColor3f, glColor4f, glColor4i, etc., etc.)
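The third point is easy to demonstrate without a GL context at all: the constants are plain #defined integers (values below copied from a standard gl.h), so the compiler can't tell a polygon mode from a primitive type. `beginPrimitive` is an illustrative stand-in for glBegin, not the real entry point:

```c
/* Values as they appear in gl.h. GL_LINE is a polygon mode for
 * glPolygonMode(); GL_LINES is a primitive type for glBegin().
 * Both are just ints, so nothing stops you mixing them up. */
#define GL_LINES 0x0001
#define GL_LINE  0x1B01

/* Stand-in for glBegin(): like the real thing, it accepts any integer.
 * Passing GL_LINE here compiles cleanly and fails only at runtime. */
static int beginPrimitive(unsigned int mode) {
    /* Real GL would record GL_INVALID_ENUM and silently render nothing. */
    return mode == GL_LINES; /* 1 = valid primitive, 0 = silent failure */
}
```

With a real enum type (or C++'s `enum class`), the first call would compile and the second would not; with #defines, both compile and one just draws nothing.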

The entire API is horrible. I'm sure it's all related to how SGI machines worked in the early 90s and how graphics hardware evolved since then, but it made the whole thing shitty to work with. We have a solution now, which is to use one of the new APIs. Vulkan honestly doesn't look much better. I've only dipped my toes into Metal, but it looks like it has solved some of these issues.

I almost wonder if libraries that are more specialized would be useful? Unity seems as hard to learn as OpenGL (and if you want to understand why things are going wrong, you probably have to understand OpenGL anyway). Something specific to image processing might be easier to understand for people doing that, for example. I'm sure there are other niches that would benefit from libraries that hide the complexity of OpenGL (or whichever library you build it on top of).




Use SDL or SFML. They both let you write RGBA into a buffer and throw it onto the screen. I think they both handle hardware stretching and scaling, too.
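The CPU-side model the parent misses is exactly what those libraries wrap: a raw pixel buffer you fill however you like, then hand off for display. A self-contained sketch of that buffer (SDL calls omitted; the 0xAARRGGBB packing and gradient pattern are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* A bare framebuffer: one 32-bit pixel per location, no GL state,
 * no geometry -- just memory you write to. */
typedef struct {
    int width, height;
    uint32_t *pixels; /* 0xAARRGGBB */
} Framebuffer;

static Framebuffer fb_create(int w, int h) {
    Framebuffer fb = { w, h, calloc((size_t)w * h, sizeof(uint32_t)) };
    return fb;
}

static void fb_put(Framebuffer *fb, int x, int y, uint32_t argb) {
    if (x >= 0 && x < fb->width && y >= 0 && y < fb->height)
        fb->pixels[(size_t)y * fb->width + x] = argb;
}

/* Generate a horizontal red gradient -- the kind of direct pixel
 * hackery described upthread. */
static void fb_gradient(Framebuffer *fb) {
    for (int y = 0; y < fb->height; y++)
        for (int x = 0; x < fb->width; x++) {
            uint32_t r = (uint32_t)(255 * x / (fb->width - 1));
            fb_put(fb, x, y, 0xFF000000u | (r << 16));
        }
}
```

In SDL2 you'd blit a buffer like this to the window each frame (e.g. via a streaming texture), and the library handles the scaling.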

> Having 20 different versions of each function for every combination of data type and size.

Welcome to C land.
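To spell that out: C has no function overloading, so every type/arity combination needs its own exported name, and each variant just converts into one canonical form internally. A toy sketch of the convention (`color3f`/`color3ub` are illustrative stand-ins, not the real GL entry points):

```c
#include <stdint.h>

/* Canonical internal color; each public variant normalizes into it. */
typedef struct { float r, g, b, a; } Color;

static Color current_color;

/* "3f" suffix: three floats in [0, 1]. */
static void color3f(float r, float g, float b) {
    Color c = { r, g, b, 1.0f };
    current_color = c;
}

/* "3ub" suffix: three unsigned bytes in [0, 255], rescaled to floats
 * the same way glColor3ub rescales. */
static void color3ub(uint8_t r, uint8_t g, uint8_t b) {
    color3f(r / 255.0f, g / 255.0f, b / 255.0f);
}
```

In C++ both would be one overloaded `color()`; in C, every combination needs its own suffixed symbol in the ABI.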

> I'm sure it's all related to how SGI machines worked in the early 90s and how graphics hardware evolved since then

Fixed pipeline would be, I suppose. The programmable pipeline is what you get when you have GL 1.x users reinvent the API. It's great for graphics experts and a barrier for everyone else.


> Vulkan honestly doesn't look much better.

What? Vulkan gives pretty direct access to hardware command buffers and the like. It's a pretty good API for low-overhead graphics.

> I've only dipped my toes into Metal, but it looks like it has solved some of these issues.

I looked into it, and couldn't even find proper C bindings. Isn't it also only available on some OS X and iOS devices?


I don't mean to sound dismissive, but a lot of what you just mentioned is because you are trying to use OpenGL in a way it wasn't intended.

OGL is meant for 3D work--if you want a framebuffer, an SDL_Surface or some native GUI element is probably a better fit for manual CPU hackery. The reason it feels so uncomfortable is that you're trying to use a refinery to bake a waffle. Textures, framebuffers, and polygons are central to how 3D graphics (and thus OGL) work--they take some effort to understand, but it's not wasted effort if you're doing 3D pipeline work.

If you're trying to do simple operations, you probably shouldn't be using OGL directly. For debugging, RenderMonkey was quite a good tool.
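A cheap in-code complement to a debugging tool is the wrap-every-call macro idiom, so silent failures at least leave a trail. Sketched here with a stubbed glGetError so it compiles without a GL context (in real code you'd include the GL headers and call glGetError directly):

```c
#include <stdio.h>

/* Stub standing in for glGetError() so this is self-contained; the real
 * one returns 0 (GL_NO_ERROR) or an error code, and clears the flag. */
static unsigned int fake_gl_error = 0;
static unsigned int glGetErrorStub(void) {
    unsigned int e = fake_gl_error;
    fake_gl_error = 0;
    return e;
}

static int gl_errors_seen = 0;

/* Wrap every GL call; on error, report which call failed and where. */
#define GL_CHECK(call)                                          \
    do {                                                        \
        call;                                                   \
        unsigned int err = glGetErrorStub();                    \
        if (err != 0) {                                         \
            gl_errors_seen++;                                   \
            fprintf(stderr, "%s failed: 0x%x (%s:%d)\n",        \
                    #call, err, __FILE__, __LINE__);            \
        }                                                       \
    } while (0)

/* Pretend GL call that records GL_INVALID_ENUM (0x0500). */
static void drawSomething(void) { fake_gl_error = 0x0500; }
```

It's verbose, but it turns "nothing on screen, no errors" into a file and line number.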

Lacking modern language features makes it easier to keep a clean C ABI. Plain #defines mean almost any language can bind to the API with little effort; enums, or even better strongly typed/templated arguments a la C++, would harm that portability.

The many variants on the APIs are actually quite useful under certain circumstances, and it is always pretty clear how they are used (and again, remember that this is meant for a C ABI).


I agree that a simpler environment (like Processing, mentioned above) would be helpful. That said, OGL is hardly bad for the reasons you listed.



