
I'd be interested in seeing an argument explaining why these API calls smother puppies, when the premise of the original article is that you can in fact offer them as an interface to the shiny new, better way of doing things without accidentally summoning Cthulhu. If he's wrong on that point, I'd like to see a clear explanation of why.



I'm not an OpenGL expert, but my understanding is that the fixed function pipeline (FFP) is like a set of big, generic shaders and state that everybody had to go through to actually do the work of displaying 3D graphics. You could write shaders a fraction of the size that do just the work you need, without paying the performance price for features you don't use. FFP-using code looks nice in tutorials but performs terribly outside of demos, never mind the complexity tax it imposes on implementations as a 3D graphics layer for people who don't understand how 3D graphics works.

Also, the old OpenGL API had immediate-mode functions which encouraged people to trickle in interleaved data and operations: the exact opposite of what 3D APIs need to run fast.

-----


The idea is very simple. With a fixed pipeline you have a constant pipe diameter that you cannot change.

Imagine that you plan to process 3 million vertices and to draw 6 million points on the screen (fragments), so you size your pipes for that.

Now, what happens when you need to update only 200 pixels but want to draw 30 million vertices onto them? You can't do that with a fixed pipeline.

What happens when you want to do 10 passes over the screen (60 million points) but your geometry is just textured quads of 4-8 vertices? You can't do that with a fixed pipeline either.

With a non-fixed pipeline you can put your compute units to work wherever you need them.

-----


Sure, here's my attempt at explanation:

OpenGL is a gigantic mess, one which only somewhat recently has started to get better. For those that don't know, its lineage goes back to IrisGL and big-iron Silicon Graphics machines. There's a wonderful recap of its history on Stack Overflow ( http://programmers.stackexchange.com/questions/60544/why-do-... )--long story short, design-by-committee and squabbling vendors (especially the CAD folks, whom I until recently counted myself among) resulted in bloated, sad, crufty APIs.

Having to maintain a codebase to mimic old OpenGL functionality, especially when in some cases it wasn't particularly well-defined/standardized, in addition to coming up with a small profile for new features on embedded systems, would present a nontrivial burden on the driver and hardware writers. Hell, even Intel has only recently gotten it somewhat right--and they've had the OS community via Mesa do most of the work for them (as I understand it)!

These aren't features that are hugely important, these aren't features that are game-changing; these are a lot of things that are simply obsolete or unnecessary. jwz laments the lack of quads support, so let's start there:

OpenGL 1.x supported the following primitive types: points, lines, line strips, line loops, triangles, triangle strips, triangle fans, quads, quad strips, polygons (see http://www.opentk.com/doc/chapter/2/opengl/geometry/primitiv... for examples). Several of these options are quite redundant, and supporting them is not really helpful. Moreover, several of them present interesting questions for a driver writer: what is the preferred way of decomposing quads or polygons? Strips? Fans? Discrete triangles again?

Sphere mapping has, I believe, been replaced with cube mapping. OpenGL ES 1.1 has cube mapping as an extension, but I don't know if Apple decided to implement it or not--such is part of the evil of OpenGL, this use of extensions.

1D texturing (and 3D texturing) was omitted, again presumably to make implementors' lives easier. To work around this, fill a whole 2D texture with a gradient, and clamp at the edges when sampling (glTexParameteri with a clamping wrap mode should do this, I think...?). Hopefully that would work. Only recently have 1D and 3D textures gotten really useful, for clever tricks in passing LUTs and such to the programmable shader pipeline; I think the older use for them was ghetto cel-shading and palette mapping--cool but not critical.

~

Anyways, the problem with requiring that the library writers support all that is again that they would have to create most of the OpenGL environment (which is terrible), and then map it onto their new environment (even more terrible), as well as develop the new environment. This is nuts.

It's similar to asking if people could write a portability layer atop Win32 to support Win16 to support old DOS system calls--anyone can do a subset of that and complain that "Hey, it's easy!" but to do it right (and you must do it right, or else somebody else will complain!) is very nontrivial.

For a more timely example, consider the issues folks have had getting people to move on to Python 3--and contrast that with what the Rubyists have accomplished by just moving fast and fixing things as they break.

Or think about the amount of time/money spent on keeping the COBOL infrastructure up, or supporting legacy VB6 installations.

Honestly, sometimes we should applaud vendors for Doing the Right Thing and trying to force users into fixing outdated code.

-----


> It's similar to asking if people could write a portability layer atop Win32 to support Win16 to support old DOS system calls--anyone can do a subset of that and complain that "Hey, it's easy!" but to do it right (and you must do it right, or else somebody else will complain!) is very nontrivial.

Didn't Microsoft actually do that? Isn't that how we have Win16 support in 32-bit Windows 7 today?

-----



