If you were involved in the OpenGL ES specification, you are an idiot (jwz.org)
316 points by cpeterso on June 19, 2012 | 213 comments

I respect jwz, but this is very much a step backwards, and while I appreciate his complaint about breaking existing APIs, the breakage here affects programs nearly 20 years old, targeted at a different platform.

OpenGL is a terrible, awful, crufty API, and the reason those methods were removed is that they are comically suboptimal. They do not reflect anything remotely like modern card capabilities, and their use directly causes harm to the ozone layer, kittens, and infants. I'm pretty sure that glBegin() gave a coworker cancer, and the matrix stack has claimed more lives than Kevorkian.

Building a shim to port over old OpenGL 1.3 apps is kind of like translating the Necronomicon into English--possible, of questionable utility, and likely to bring about insanity and demons.

As others have pointed out, OpenGL ES is not intended to be an extension of OpenGL--it was a chance to strip out a lot of the dumb cruft that had accumulated in the API. Most of the features he's complaining about are either bad practice or deserved to be gotten rid of entirely.

Compare the length of the API listings for GL 1.x, 2.x, and modern 3.x / 4.x. Remember that the whole thing is a hissing, clanking state machine, and that interactions between functions can be arcane--and threading presents additional issues.

Immediate-mode rendering with glBegin()/glEnd()/glVertex()/glNormal()/etc. is ugly. Any shim that collects that information still has non-trivial work stuffing it into a buffer, and the overhead of drawing anything with more than a few hundred triangles soon becomes absurd. Worse, this style of programming discourages storing geometry on the card, which causes additional inefficiency--and trying to use those calls remotely over X causes all kinds of problems, since GLX can barely do indirect rendering anyway.
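To illustrate the shim work described above, here's a minimal sketch of what such a layer has to do: buffer every glVertex-style call between begin/end, then hand the whole array off in one batched draw. All names here are illustrative, not from any real shim, and the actual GPU submission is left as a stub comment.

```c
#include <stddef.h>

/* Hypothetical immediate-mode shim: buffer vertices between
 * shim_begin()/shim_end(), then submit them in a single batched
 * draw.  A real shim would upload the array to a VBO and call
 * glDrawArrays() inside shim_end(). */
#define SHIM_MAX_VERTS 65536

typedef struct { float x, y, z; } Vert;

static Vert   shim_verts[SHIM_MAX_VERTS];
static size_t shim_count;

void shim_begin(void)
{
    shim_count = 0;
}

void shim_vertex3f(float x, float y, float z)
{
    if (shim_count < SHIM_MAX_VERTS) {
        shim_verts[shim_count].x = x;
        shim_verts[shim_count].y = y;
        shim_verts[shim_count].z = z;
        shim_count++;
    }
}

/* Returns how many vertices would be submitted to the GPU.
 * Real code: glBufferData(...); glDrawArrays(...); */
size_t shim_end(void)
{
    return shim_count;
}
```

Even this toy version shows the cost: every vertex takes a function call and a copy before the GPU sees anything, which is exactly the per-call overhead the retained-mode APIs avoid.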

Additionally, we now have extra vertex stream attributes available which are very flexible and don't map onto that model anymore. It's time to let go.


tl;dr: jwz is complaining about a fork of an API that removed cruft people depended on, but the cruft needed removing. :(

jwz makes it clear that his deeper complaint is not about removing the cruft per se, it's about removing vast swaths of the API and then continuing to call it "OpenGL". If they'd called it "MobileGL" or some other nonsense, he probably wouldn't have liked the result (based on this rant), but he also wouldn't have complained that they broke working code.

EDIT: 'continuing to call it "OpenGL"' -> 'using "OpenGL" in the name at all' Effectively, he's complaining about a form of false advertising.

They didn't call it "OpenGL". They called it "OpenGL ES". The ES is a significant part of the name. The fact that the name happens in part to contain the word "OpenGL" is not a promise of source compatibility, and even the most cursory glances at the documentation would have made it clear that the API has a different name because it is a different API.

It's incredibly daft to argue that part of a name amounts to a promise of backwards compatibility with a different API, in perpetuity. The X11 protocol isn't backwards compatible with that of X9 simply because they both share a name-fragment.

Right, but again: those are the parts of the API that are already discouraged and downright deprecated as of OpenGL 2.0. If you were developing an app targeting OpenGL 2.0, following what I understand are the best practices for that version of the specification, OpenGL ES will require little to no porting.

The problem then, arguably, is not that OpenGL ES is called OpenGL but that OpenGL 2.0 added an entirely unrelated pipeline to that used in OpenGL 1.3: you could then claim "how dare they reuse the name for what is now two APIs stuck into one library". However, they shared a lot of underlying conventions, and they were honest about bumping the major version number.

If anything, the fact that jwz is happy he proved--that you can build an OpenGL 1.3 emulation library over OpenGL ES--would argue to me to /not/ include the 1.x features as part of the standard, but to instead encourage third parties to distribute such libraries. The fixed function pipeline wasn't removed from the API as it is unimplementable, but because it is a ton of obsolete code that 2.x coders avoid anyway.

" … but to instead encourage third parties to distribute such libraries."

What _I_ took away from jwz's rant (and agree with) is that if providing backwards compatibility to existing users is something one guy can do in three days (including doing the research to find out exactly what's getting taken out of the API), then it seems entirely reasonable to expect a "well behaved" standard like OpenGL to have provided the 1.3 emulation library themselves. Second best would be to have a well-defined deprecation period with appropriate warnings to developers, which they also failed to do according to jwz, or did properly in OpenGL 2.0 according to you. Whichever of you is right there doesn't _really_ matter much, since it's arguing over whether they got the "second best" thing right, when they seem to have failed at the "right" thing.

It's not something that can be done in 3 days. Fixed function on top of shaders is a PITA. If you want any kind of speed you've got to generate shaders on the fly based on which features you've turned on or off. Otherwise you create an uber shader that's slow as shit.
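The on-the-fly shader generation mentioned above can be sketched as simple string assembly: look at the fixed-function state flags and emit only the GLSL the enabled features need. This is a hypothetical illustration (the function name and flag parameters are mine), not any driver's actual code.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical sketch: build a minimal GLSL ES fragment shader
 * from fixed-function-style state flags, so the generated shader
 * only pays for the features actually enabled. */
void build_fragment_shader(char *out, size_t cap,
                           int texturing_on, int fog_on)
{
    size_t n = 0;
    n += snprintf(out + n, cap - n,
                  "varying vec2 v_uv;\n"
                  "varying vec4 v_color;\n");
    if (texturing_on)
        n += snprintf(out + n, cap - n,
                      "uniform sampler2D u_tex;\n");
    if (fog_on)
        n += snprintf(out + n, cap - n,
                      "uniform vec4 u_fog_color;\n"
                      "varying float v_fog;\n");
    n += snprintf(out + n, cap - n,
                  "void main() {\n"
                  "  vec4 c = v_color;\n");
    if (texturing_on)
        n += snprintf(out + n, cap - n,
                      "  c *= texture2D(u_tex, v_uv);\n");
    if (fog_on)
        n += snprintf(out + n, cap - n,
                      "  c = mix(u_fog_color, c, v_fog);\n");
    snprintf(out + n, cap - n,
             "  gl_FragColor = c;\n"
             "}\n");
}
```

With dozens of interacting fixed-function states (texturing, fog, multiple lights, alpha test, ...), the combinatorics of this generation and the caching of compiled results are exactly the "PITA" being described.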

glBegin and glEnd are shit APIs given how GPUs work nowadays.

Worse, things like flat shading require generating new geometry on the fly.

Fixed function pipelines suck balls.

OpenGL ES 2.0 FTW!

Finally, someone who actually works in the CG industry replies. +1 to this. No one really uses fixed-function stuff these days; everything is shaders and vertex and index buffers. There are no fixed-function hardware units anymore--everything in the graphics pipeline is programmable and done in shaders. Even using fixed-function stuff on today's hardware forces the driver to compile a built-in shader. In the interest of keeping driver size small (for mobile), they force programmers to write their own shaders and throw away the fixed-function stuff that would bloat the driver and slow the shader compiler.

New code doesn't use fixed function stuff these days. JWZ's point is that there is more than new code. Legacy code also matters, e.g. CAD applications. Those have little use for shaders. Frankly, your point of view sounds very game-centric to me.

Both nVidia and ATI have committed to supporting these older APIs for the foreseeable future.

Old code doesn't just convert itself to using shaders and vertex and index buffers.

Also: old code isn't necessarily useless code.

Maybe not - but imagine the loss in hardware sales and ecosystem revenue if everyone ported old shitty games without re-writing them, causing batteries to die quickly and a poor user experience?

It was for the better of the industry. Boo-hoo. If it took him 3 days then he's a smart fucker. As someone with plenty of OpenGL AND OpenGL ES experience, I'd say it would have taken him just as much time to port his existing code.

And if that were the end of the story, I think we'd be able to call it a day. But everyone has this funny expectation that that old code should keep getting faster with newer GPUs, in spite of the fact that GPUs don't work the way those programs were designed to use them.

Getting modern GPU performance, or anything close to it, through the crufty old immediate-mode API code is like drawing blood from a stone. Eventually developers need to take some responsibility for the code they're maintaining and migrate to a more modern API. Even on the desktop they'll have to do this - when their customers ask for modern GPU features, they'll have to move to OpenGL 3, which doesn't have immediate-mode either.

> If you want any kind of speed you've got to generate shaders on the fly based on which features you've turned on or off. Otherwise you create an uber shaders that's show as shit.

I think jwz's point is that he prefers having his old code run very slow through a compatibility layer rather than having to port the same not-so-important old code over the new APIs.

He wants to trade developer time for execution time, something that may be very sensible in some cases (probably not in most, but for fancy screensavers...).

> I think jwz's point is that he prefers having his old code run very slow through a compatibility layer rather than having to port the same not-so-important old code over the new APIs.

If that's what you want, just write it yourself once (which he did) or use one of the many (subsets of) fixed-function pipelines running on OpenGL ES that others have made. The official 'Programming OpenGL ES 2.0' book even shows you how to do most of it, with example code included.

What jwz fails to recognize is that OpenGL ES does not only have to run on iPads, iPhones or other relatively high-powered mobile devices, but also on extremely low-powered devices with really small memory sizes (RAM and ROM) where every byte (code or data) counts. Compared to mobile devices at the time the first OpenGL ES API's were designed, an iPad could almost be considered a supercomputer. For OpenGL ES, small API size was one of the design constraints, simple as that.

Last but not least, OpenGL ES was supposed to become the industry standard for mobile 3D graphics, which means it needed strong industry support. Stuffing the API with loads of crap that almost nobody would use would drive up implementation costs for no good reason. Programmable shaders are called 'programmable' for a reason; if you want to do very specific stuff with them (such as emulating the fixed-function OpenGL pipeline), there is nothing preventing you from doing so.

The single point I can kind of agree with is that maybe they should not have used 'OpenGL' in the name of the API, because it suggests at least some form of compatibility with previous OpenGL versions. Confusing indeed, but not really worth the kind of rant in this article.

OpenGL 2.0 was released in September 2004 (Wikipedia).

OpenGL ES 1.1 (which jwz complains about here) was ratified and publicly released in August 2004 (going back to OpenGL ES 1.0 would only make the comparison worse, of course): http://www.khronos.org/news/press/khronos-group-announces-th...

That's a quite, um, impressive deprecation cycle, I suppose.

Seven years is not a long time. My toaster is older than that, and I like to think the people who made that would be embarrassed if those cheap moving parts had decayed so quickly.

You probably have enough horsepower at your disposal to emulate each and every computer you ever bought (simultaneously!) and run all that software forever. But instead we're going to require any tool you want to use to be rewritten half a dozen times over the course of your career alone. And why? Because fuck you, we just can't be bothered to start taking engineering seriously.

If bread had changed as much as GPUs have in the past seven years, your toaster would be obsolete, too.

This isn't about good engineering vs bad, this is about mature technology vs a rapidly developing field. Different characteristics beget different engineering trade offs.

Cars are switching from petrol to hybrid to electric motors, but roads still work... "Rapidly developing" is a red herring.

Bad analogy.

Car engines and transmissions are the heart of the vehicle, and those change frequently as well. Roads are a fundamental, static, landscape feature today. They're like telephone poles and fiber conduits, neither of which have changed much in recent times.

Software APIs change to match the features/needs of the users and developers. Part of this is based on the changing hardware, part on desired features. The hardware today is vastly different than when OpenGL 1.1/1.5 was available so why should we be constrained to use it in the same fashion?

In short, APIs shouldn't be static for now and forever, we'd only be limiting ourselves and ignoring the fact that sometimes things change and sometimes early decisions were wrong (or less effective than desired).

Roads still work because they are too expensive to replace. GPUs and bread are not, so your argument does not make sense.

Expensive things are more likely to work? I can tell you've not worked long in this industry, my friend.

I think that paraphrase was slightly closer to the opposite of what tinco said than what he did say.

>Cars are switching from petrol to hybrid to electric motors, but roads still work...

Yes, and this is what makes this a bad analogy.

He's got the whole analogy inverted. If roads were rapidly changing, we'd need dramatically different cars to handle the new roads.

Touché: I apparently wanted "OpenGL 1.5", not 2.0. This, in fact, undermines part of my argument regarding the major version number. Further, reading through the history and timeline a bit better, I am now concerned I was horribly misinformed. I would just ignore my comment.

So OpenGL ES (Embedded systems) was not different enough? I actually think OpenGL ES didn't go far enough in the initial spec.

ES 1.1 still kept a lot of the old fixed-function pipeline; only in 2.0 did they scrap it entirely, opting for a smaller, more modern API. Keeping it in 1.1 was, to me, the bad move: I think they should've made the shader-based pipeline from 2.0 part of the 1.1 spec.

Interesting. I hear what you are saying.

To contradict everything I've said elsewhere on compatibility - I'm actually in favor of the 1.0/1.1 OpenGL ES standards and would have liked WebGL to offer 1.1.

My self-deceiving justification in this instance is that plenty of platforms that can touch the web are still running hardware only capable of fixed function (e.g. the Intel 945GM).

Fixed function is easy to make safe, and offers a path to leverage 3D acceleration on these platforms that cannot be beaten in software. Sure, it's not as exciting as 3D with programmable shaders, but it is still useful 3D nonetheless.

Fixed function is no easier to make safe than shaders so that's a false assumption.

OpenGL ES 1.1 on JavaScript would be a joke. Do you really think JavaScript is up to tens of thousands of calls per model per frame?

The rest of the graphics world left OpenGL 1.x long ago. OpenGL 4.0 has none of the fixed function stuff in it anymore either.

Using fixed function features in 2012 is like using oldskool 80s BASIC with line numbers and no functions as your programming language.

It's time to move on.

I've delivered a paid contract that would say otherwise =)

(Not OpenGL ES1.1 but a similar narrow API. One frame latency penalty in my case as the calls enter a staging area for analysis one frame before tiling and dispatch).

Don't forget that the original iPhone also had fully sandboxed OpenGL ES 1.x.

So it's already been done.

Things like PCC (NaCl) and Xax (to a lesser degree) also reveal surprising results as you would know.

OpenGL ES 1.1 != OpenGL 1.1

OpenGL ES 1.1 does not have support for immediate mode, thus no glBegin/glEnd and no glVertex etc. It's just VBOs.

Do note that fixed function does not mean the same as immediate mode (which is what the glVertex etc. calls are all about).

Based on Wikipedia, the 945GM has a GMA950, which supports Pixel Shader 2.0. That should be enough for WebGL.

At that point, the issue is more likely to be driver quality. Allowing people to use weird fixed-function corner cases of the OpenGL spec would make problems more likely.

Are you kidding? The 945GM doesn't have fixed-function hardware! The entire 3D pipeline save for the shaders is implemented in _software_.

Intel didn't even add fixed-function hardware until the GMA X3000 series in the G/GM965 and GL960 chipsets. Hell, it looks like they removed it in everything since the i740.

But it's not OpenGL. It's OpenGL ES. Windows 8 is also not Windows 95.

> Windows 8 is also not Windows 95.

Windows 8 is backwards compatible with Win32 and even Win16. If it wasn't, there would be hell to pay. WinRT is a somewhat clean break with the Win32 legacy. But that's only required for Metro apps.

To be fair, 64-bit Windows 7 (and Windows 8) don't support Win16 (as I understand it, because you can't sneak real-mode code into x86 64-bit mode the way you can in x86 32-bit mode).

OTOH, let's be clear about what Win16 is. Win16 is an API that Microsoft deprecated in 1995 (not coincidentally, when Windows 95 was released). In other words, Microsoft took 14 years to go from deprecation to significant (partial) non-support.

One major reason there have been 32-bit versions of all Windows releases up to and including Windows 8 is so that corporations big and small will be able to seamlessly continue running their crusty Win16 and MS-DOS legacy applications. If not for that, many companies would not have been able or willing to upgrade. Microsoft could instead have taken a DOSBox or Rosetta style approach. Hopefully they will do that with some future release, so developers will no longer have to worry about 32-bit support.

So, I feel my original statement was accurate in both its letter and spirit.

Oh, I completely agree with the spirit of your statement. And my disagreement over the letter is over the interpretation of an unqualified "Windows 8" (or "Windows 7"), not about the substantive facts.

The reason I went into that level of detail is because I wanted to highlight the insanely long 14-year deprecation cycle. Is that a record for a deprecation that was eventually removed?

I think if you focus on the win{16,32} API, it doesn't do any justice to, for instance, the C library. FreeBSD has always supported binaries from various UNIX systems, and I think this heritage goes back a bit further than 1995. Also, Windows bundles a C library of its own which implements a significant subset.

Sure win{16,32} support a GUI with bells and whistles but that's because it evolved several years later specifically to support this newer mode of UI. It does not support 50-year old teletypes.

You misunderstand my point about length. I'm completely aware that there are plenty of APIs that have been supported far longer than Win<anything>. That's not what I'm talking about.

What I'm wondering is this: Has there ever been an API with a longer deprecation period than Win16? Remember, Microsoft announced the deprecation in 1995 (IIRC), but it didn't start to bite until 64-bit Windows mattered (you could draw the line at some server versions, Vista, or, as I do, at Windows 7). No matter how you slice it, that's a long time.

X11: 1987. Still works in 2012.

X11, not backwards compatible with X10 or X9.

OpenGL 4.2, backwards compatible to OpenGL 1.0 (1992)

OpenGL ES, not backwards compatible with OpenGL.

There's an obvious parallel here of APIs being compatible within themselves but not across boundaries between major architectural revisions intended to throw out cruft and target new environments.

> X11, not backwards compatible with X10 or X9.

That's why it's called X11 and not X10 ES.

This line of reasoning is utterly absurd. They are different APIs with different names. The specifics of the substrings they have in common and the format of the substrings that differ is utterly irrelevant.

OpenCL is a different API from OpenAL, Cocos-2d is an API for an entirely different language than Cocos-2d-x, the Cocoa API is wildly incompatible with Cocoa Touch. Horrors!

You determine whether or not two APIs are compatible (or even striving to be the same kind of API) by reading the documentation, not by applying stupid heuristics to common substrings in their names.

X11 wasn't deprecated in 1987, that's when it was released. So far as I know, X11 hasn't been deprecated.

Side-note: Win16 (called "the Windows API" back then) actually predates X11 (though not previous versions of X) since Windows 1.0 was released in November 1985.

Sure, MS took 14 years to deprecate this.

Which absolutely doesn't mean you should have waited 14 years to stop writing for Win16!

And I think that's what the rant is implying.

OK, Mac OS X Lion is not System 9. And no, Lion isn't backwards compatible with System 9, and there hasn't been hell to pay.

The difference is that tens of thousands of business users used System 9. The Microsoft platform probably had hundreds of millions.

The actual name of the OS changed to 'Mac OS' as of Mac OS 7.6. The 'System X' designation ended with System 7.5, sadly.

I'd be interested in seeing an argument explaining why these API calls smother puppies, when the premise of the original article is that you can in fact offer them as an interface to the shiny new better way of doing things, without accidentally summoning Cthulhu. If he's wrong on that point I'd like to see a clear explanation of why.

I'm not an OpenGL expert, but my understanding is that the fixed function pipeline (FFP) is like a set of big, generic shaders and state that everybody had to go through to actually do the work of displaying 3D graphics. You could write shaders a fraction of the size that do just the work you need, without paying the performance price for features you don't use. FFP-using code looks nice in tutorials but performs terribly outside of demos, never mind the complexity tax it imposes on implementations as a 3D graphics layer for people who don't understand how 3D graphics works.
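To make the comparison concrete: the core of the FFP's vertex stage is just a modelview-projection transform, which a tiny shader (or this CPU-side sketch, shown here purely for illustration) reproduces without the rest of the fixed-function state machine.

```c
/* The heart of the fixed-function vertex stage:
 * clip_pos = MVP * position, a 4x4 matrix-vector multiply.
 * Column-major layout, matching OpenGL's convention, so
 * element (row, col) lives at m[col * 4 + row]. */
void transform_vertex(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; row++)
        out[row] = m[row + 0]  * in[0]
                 + m[row + 4]  * in[1]
                 + m[row + 8]  * in[2]
                 + m[row + 12] * in[3];
}
```

A custom shader does exactly this plus whatever lighting or texturing the application actually needs, instead of the FFP's full menu of lights, fog, texgen and friends.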

Also the old OpenGL API had immediate mode functions which encouraged people to trickle in interleaved data and operations; the exact opposite of what 3D APIs need to run fast.

The idea is very simple. With a fixed pipeline you have a constant pipe diameter you cannot change.

Imagine that you plan to process 3 million vertices and draw 6 million points (fragments) on the screen, so you size your pipes for that.

Now, what happens when you need to update only 200 pixels but want to push 30 million vertices through? You can't do it on fixed hardware.

What happens when you want to do 10 passes over the screen (60 million fragments) but your geometry is just textured quads of 4-8 vertices? You can't do that on fixed hardware either.

With a non-fixed (unified) architecture, you can put your compute units wherever you need them.

Sure, here's my attempt at explanation:

OpenGL is a gigantic mess, one which only somewhat recently has started to get better. For those that don't know, its lineage goes back to IrisGL and big-iron Silicon Graphics machines. There's a wonderful recap of its history on Stack Overflow ( http://programmers.stackexchange.com/questions/60544/why-do-... )--long story short, design-by-committee and squabbling vendors (especially the CAD folks, whom I until recently counted myself among) resulted in bloated, sad, crufty APIs.

Having to maintain a codebase to mimic old OpenGL functionality, especially when in some cases it wasn't particularly well-defined/standardized, in addition to coming up with a small profile for new features on embedded systems, would present a nontrivial burden on the driver and hardware writers. Hell, even Intel has only somewhat gotten it right recently--and they've had the open-source community via Mesa do most of the work for them (as I understand it)!

These aren't features that are hugely important, these aren't features that are game-changing; these are a lot of things that are simply obsolete or unnecessary. jwz laments the lack of quads support, so let's start there:

OpenGL 1.x supported the following primitive types: points, lines, line strips, line loops, triangles, triangle strips, triangle fans, quads, quad strips, polygons (see http://www.opentk.com/doc/chapter/2/opengl/geometry/primitiv... for examples). Several of these options are quite redundant, and supporting them is not really helpful. Moreover, several of them present interesting questions for a driver writer: what is the preferred way of decomposing quads or polygons? Strips? Fans? Discrete triangles again?
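The decomposition in question is mechanical: each quad splits into two triangles sharing a diagonal. A sketch of the index-list conversion an application (or driver) would do, with a hypothetical helper name:

```c
#include <stddef.h>

/* Convert n quads (4 indices each) into 2n triangles
 * (6 indices each), splitting each quad along the 0-2
 * diagonal: (0,1,2) and (0,2,3).  tri_idx must hold
 * 6 * quad_count entries.  Returns the number of
 * triangle indices written. */
size_t quads_to_triangles(const unsigned *quad_idx, size_t quad_count,
                          unsigned *tri_idx)
{
    size_t out = 0;
    for (size_t q = 0; q < quad_count; q++) {
        const unsigned *v = quad_idx + 4 * q;
        tri_idx[out++] = v[0];
        tri_idx[out++] = v[1];
        tri_idx[out++] = v[2];
        tri_idx[out++] = v[0];
        tri_idx[out++] = v[2];
        tri_idx[out++] = v[3];
    }
    return out;
}
```

Note that for non-planar or non-convex quads, the choice of diagonal changes the rendered result, which is exactly the ambiguity a spec would otherwise have to pin down for every driver.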

Sphere mapping has, I believe, been replaced with cube mapping. OpenGL ES 1.1 has cube mapping as an extension, but I don't know if Apple decided to implement it or not--such is part of the evil of OpenGL, this use of extensions.

1D texturing (and 3D texturing) were omitted, again presumably to make implementors' lives easier. To work around this, fill a whole 2D texture with a gradient, and clamp on the edges when sampling (glTexParameteri with GL_CLAMP_TO_EDGE, I think, should do this...?). Hopefully that would work. Only recently have 1D and 3D textures gotten really useful, for clever tricks in passing LUTs and such to the programmable shader pipeline; I think the older use for them was ghetto cel-shading and palette mapping--cool but not critical.


Anyways, the problem with requiring that the library writers support all that is again that they would have to create most of the OpenGL environment (which is terrible), and then map it onto their new environment (even more terrible), as well as develop the new environment. This is nuts.

It's similar to asking if people could write a portability layer atop Win32 to support Win16 to support old DOS system calls--anyone can do a subset of that and complain that "Hey, it's easy!" but to do it right (and you must do it right, or else somebody else will complain!) is very nontrivial.

For a more timely example, consider the issues folks have had getting people to move on to Python 3--and contrast that with what the Rubyists have accomplished by just moving fast and fixing things as they break.

Or think about the amount of time/money spent on keeping the COBOL infrastructure up, or supporting legacy VB6 installations.

Honestly, sometimes we should applaud vendors for Doing the Right Thing and trying to force users into fixing outdated code.

> It's similar to asking if people could write a portability layer atop Win32 to support Win16 to support old DOS system calls--anyone can do a subset of that and complain that "Hey, it's easy!" but to do it right (and you must do it right, or else somebody else will complain!) is very nontrivial.

Didn't Microsoft actually do that? Isn't that how we have Win16 support in 32-bit Windows 7 today?

OpenGL is a terrible, awful, crufty API, and the reason those methods were removed is that they are comically suboptimal. They do not reflect anything remotely like modern card capabilities

But the guys who wrote it weren't idiots. They were in fact super smart engineers working at the cutting edge company of the day, SGI. And they made a philosophical call, which was that OpenGL should be an abstraction of geometry and an idealized rendering pipeline with just enough hardware-specific hackery in it to make it perform[1]. They did the best they could operating under the constraints of the state-of-the-art of the time and the resources they had available. And people still use OpenGL, decades later, and have done amazing things with it.

Isaac Newton said "If I have seen further, it is by standing on the shoulders of giants". Kids these days, talking about legacy technologies, would be wise to remember that.

[1] Whereas Microsoft believed in an abstraction of physical hardware, with just enough geometry in to make it useful.

That's not at all true. The original OpenGL API is very much a direct mapping of the original SGI graphics hardware. Most OpenGL calls on SGI machines were single CPU instructions feeding the data to the hardware, which fully implemented the whole OpenGL state machine.

It's just that modern GPUs work in completely different ways, so this kind of API is useless for them.

No, you have it backwards :-) SGI devised OpenGL and then implemented it in hardware, not vice versa!

Put some info about yourself into your profile so that graphics nerd colleagues (me) can learn about your very interesting work experience!

By the same reasoning, we should remove printf from libc. It's a terrible, awful, crufty API with threading issues, and its overhead on modern windowed systems is just horrible.

Come now, be reasonable.

If we released some sort of libc for embedded systems, specifying only fprintf(), your analogy would be valid.

In fact, the concept of "freestanding implementation" (as opposed to "hosted implementation", which is an implementation of the full standard) exists in C, and is sometimes used in embedded systems:

  a conforming freestanding implementation is only required
  to provide certain library facilities: those in <float.h>,
  <limits.h>, <stdarg.h>, and <stddef.h>; since AMD1, also
  those in <iso646.h>; since C99, also those in <stdbool.h>
  and <stdint.h>; and since C11, also those in <stdalign.h>
  and <stdnoreturn.h>
(source: http://gcc.gnu.org/onlinedocs/gcc-4.7.1/gcc/Standards.html)

So yes, a conforming (freestanding) C implementation without printf for embedded systems can exist.
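As an illustration, here's a small routine that would be legal under a freestanding implementation: it touches only <stdint.h> and <stddef.h>, so it needs none of the hosted library (and no printf). The function name is mine, purely for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* Decimal formatter legal in a freestanding C implementation:
 * only <stdint.h>/<stddef.h> are used, no hosted facilities.
 * Writes the decimal digits of v into buf (NUL-terminated;
 * buf must hold at least 11 chars) and returns the length. */
size_t format_u32(uint32_t v, char *buf)
{
    char tmp[10];      /* 2^32 - 1 has at most 10 digits */
    size_t n = 0;

    do {
        tmp[n++] = (char)('0' + v % 10);
        v /= 10;
    } while (v != 0);

    for (size_t i = 0; i < n; i++)  /* digits came out reversed */
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
    return n;
}
```

This is the kind of thing embedded code does all the time: rebuild the sliver of printf it actually needs, rather than link the whole hosted library.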

OpenGL ES is not designed to be OpenGL.

It has a different set of constraints. The point is to prune back the API for small devices--NOT to make migration of legacy code simple.

For sure it would be nice if it came with a client-side library to emulate OpenGL, to assist in migration where people don't care about footprint size or perf.

JWZ is a very smart guy and I respect his opinion, but he is coming from a narrow viewpoint and not considering the wider implications.

In my experience, backwards compatible APIs and languages are what makes development a pain going forward. This is not to say backwards compatibility should not be provided in some form - but ejecting it from the core is a sane decision.

Otherwise APIs and languages expand at an unfathomable rate. Imagine if every API or language you ever used had features both added and removed over time to make it better. JavaScript without the bad parts, for example.

An incremental only approach to design is non-design in my view.

Evolution both promotes and retires ideas.

>It has a different set of constraints. The point is to prune back the API for small devices

I came here to say something along these lines, but right now I'm limited on time so I don't have time to go into specific examples.

In OpenGL, there might be 5 ways to do something. Three of them are very much suboptimal, one way worked but was incredibly kludgy to write, and one was performant and pretty clean.

With OpenGL ES, they got rid of all the suboptimal and kludgy methods. The benefit today is if you write an OpenGL ES application, porting it to OpenGL nowadays is pretty easy. The other way around? Yes, that can be tremendously difficult. Honestly, I wish a lot more in OpenGL 3.x and 4.x was deprecated. Working with ES and the reduced extension hell is a big step up from the mess the full OpenGL API can be.

The point is to prune back the API for small devices

I don't disagree with the general sentiment that it was time to clean out the cruft in OpenGL, but I find this part of the argument to be a bit humorous. These "small, constrained devices" we're talking about are probably 10x faster than a goddamned Reality Engine.

I tend to agree that the API should have been renamed entirely once it was pared down this far, as 3Dfx did when they created Glide.

> I don't disagree with the general sentiment that it was time to clean out the cruft in OpenGL, but I find this part of the argument to be a bit humorous. These "small, constrained devices" we're talking about are probably 10x faster than a goddamned Reality Engine.

OpenGL ES 1.0 was released in early 2003, IIRC. The decision to exclude Immediate Mode from OpenGL ES 1.0 was made sometime prior to that.

2002-2003's typical mobile hardware was pretty damn weak, in particular in the areas of CPU cycles and memory bandwidth, which is where Immediate Mode really bites you in the ass. And the RAM/ROM sizes on most of these devices were small enough that every byte you could shave off the driver was a win for application writers, so there was little desire on the part of mobile graphic hardware vendors to spend memory budget on redundant features that an application could rebuild on top of lower level primitives if they so chose.

Core APIs and languages do not expand at an "unfathomable rate". How long has the Berkeley sockets API been with us? TCP/IP? Two's-complement arithmetic? Do you honestly think that those are going to go away for the sake of some vaguely hand-waved "wider implications" and "idea promotion"?

OpenGL has been, like it or not, the only open, widely-adopted, non-proprietary 3D graphics API around for quite some time now. Enabling it on mobile devices wasn't exactly a sea-change requiring tossing all compatibility with the past in order to make progress (especially not as mobile GPUs continue to get more powerful).

jwz's point was that this could have very simply been included as an optional compatibility layer, which he then went and did.

[edited to put in the "not" in the first sentence that my fingers skipped over, which kinda changed the whole argument]

But OpenGL ES does not enable OpenGL on mobile devices; that's the entire point of its existence. It enables OpenGL ES, which is intentionally designed to be a simplified subset.

If mobile device manufacturers feel that full-on OpenGL is appropriate for their device, then they are free to implement full-on OpenGL. JWZ should be complaining to the manufacturer, not the spec authors.

I believe that on modern hardware, OpenGL proper is simply OpenGL ES style features (and then some) with a software compatibility layer.

> In my experience, backwards compatible APIs and languages are what makes development a pain going forward

That's the exact opposite of my experience! Libraries that are constantly changing their APIs produce the vast majority of my work.

I think hermanhermitage meant that backwards compatibility makes development painful for the library developer.

I'm not knowledgeable about OpenGL at all, but how hard would it be to write a compatibility layer so older apps continue to work? It could be released as a third party shim.

Isn't that precisely what he did here? (note, I know nothing about nothing when it comes to graphics/OpenGL stuff)

I believe the author's original point is: why not provide the shim support as part of OpenGL ES in the first place? Stick a big red sticker on it saying "here be dragons", but it's obviously not an impossible task.

The funny thing is, his shim is actually useful for speeding up code (in theory, this may already be done) on normal OpenGL. (For anything using these interfaces)

Disclaimer: I've dabbled as a driver writer in a past life - but not OpenGL ES.

The problem is that a 100% compatibility layer is neither necessarily easy nor necessarily valuable. The makers of OpenGL ES don't want a lifetime of maintaining someone else's problem. Also, there is a line you cross where you lose hardware acceleration and the mapping breaks down.

Their charter is to make a new lightweight API that meets the needs of device manufacturers and low-level app developers. As soon as they adopt 100% compatibility at their core, or even offer an additional adaptation layer, they will be taking time and effort away from that focus.

In this instance any OpenGL shim is an Apple responsibility as they are the SDK and environment provider. Apple and Videologic need to nut that one out themselves.

As to a shim speeding up code: it essentially comes down to whatever impedance mismatch may occur between an application writer and the API. This is identical to buffered versus unbuffered IO, and the question of whose responsibility it is to filter idempotent operations.

When you look at a typical call stack, you'll see an application (potentially caching and filtering state), calling a library shim (potentially caching and filtering state), queuing and batching calls to a device driver (potentially caching and filtering state), dispatching to a management layer (potentially caching and filtering state), and so on, eventually getting to a graphics card processor (potentially caching and filtering state) and finally to a pipeline or set of functional blocks (which may have some idempotent de-duping as well).

Again, how this is communicated to the developer, or how it is structured, is an issue for the platform provider.

Apple can choose to say "we optimize nothing (i.e. add no fat, waste no extra cycles); it's up to you to dispatch minimal state changes", or "we optimize a, b & c; don't repeat that work, but maybe add optimizations for d, e & f". That's something they need to document and advise on for their platform. It's not part of most standards.

Warm fuzzies for calling us Videologic instead of Imagination or PowerVR. Your description of the layers between an application and execution on the graphics core on iOS is pretty good. There's nothing between driver and hardware though.

As for why OpenGL ES is different to OpenGL, it's documented in myriad places. The resulting API might be bad in many ways, but it was never designed to allow easy porting of OpenGL (at the same generational level). It was designed to be small, efficient and not bloated, to allow for small, less complicated drivers and execution on resource-constrained platforms. It mostly succeeds.

Long live mgl/sgl! The mention about hardware dedupe/filtering was more a hat tip to culling sub pixel triangles and early culling of obscured primitives that seems to happen on many chips these days :)

We tip our hat right back! It happens to be pixel-perfect for us in this context, and it's a large part of why we draw so efficiently. Oh, and I still have a working m3D-based system that plays SGL games under DOS!

There actually are PDFs out there for the various GPU IPs on how to write best for them (Adreno, PowerVR, etc.). Sometimes they even disagree, so using triangle strips with degenerate triangles to connect separate portions can be better than using all separate triangles on another, depending on their optimizations. Apple also has recommendations: http://developer.apple.com/library/ios/#documentation/3DDraw...

Although I don't recall off hand if any of them have mentioned sorting commands by state and deduping, which I suppose is one of the most basic optimizations for OpenGL * APIs.

> I believe the authors original point is why not provide the shim support as part of OpenGL ES in the first place?

OpenGL ES isn't intended to be the same API as OpenGL, despite the shared "OpenGL" in the name. It was a new API created with the idea that it would be based on the lessons learned from OpenGL, but be completely modern and not bogged down with the need for embedded driver authors to waste time implementing tons of legacy crap calls that nobody in their right mind should have been using for the last 10 years anyways. It uses the opportunity afforded by building a new API for a different target environment from normal OpenGL as an excuse to make all of the breaking changes that everybody would love to make in regular OpenGL if only there weren't so much legacy software that depended on the presence of deprecated, decade out-of-date practices.

That's why OpenGL ES never contained all of the immediate mode cruft from OpenGL, and OpenGL ES 2.0 throws out the fixed-function pipeline altogether.

Why didn't the Khronos Group define a shim to begin with? When your goal is to build a new API that throws out all of the shit legacy calls that are a bunch of pain to support for no benefit, what do you gain by then re-implementing all of those shit legacy calls again? Any number of people have built a fake immediate mode on top of OpenGL ES over the years; there's nothing new about what jwz did here. If you really want to write OpenGL ES as if it's 1998's OpenGL, there's nothing stopping you from doing so.

It sounds like it. He could just package it up a bit better, release it and render this whole discussion moot.

And OpenGL ES only existed for 5 years before someone came along who was pissed off enough to do it!

Edit: 5 years was OpenGL ES 2.0, can't seem to find a date for OpenGL ES 1.0, but suffice to say it was around for quite some time.

> And OpenGL ES only existed for 5 years before someone came along as was pissed off enough to do it!

Eh, he's hardly the first guy to do this. Appendix D of my copy of Graphics Shaders: Theory and Practice contains a simple reimplementation of Immediate Mode on top of VBOs for people with a burning desire to prototype their code as if it were 1998 again.

And in reality, in most modern OpenGL (non-ES) implementations, the actual hardware-backed bits basically look like the OpenGL ES API, and all of the legacy cruft is implemented in exactly the same kind of software shim.

Now that smartphones and tablets have respectable GPUs in them, is there any reason why they shouldn't implement the full OpenGL spec?

> Now that smartphones and tablets have respectable GPUs in them, is there any reason why they shouldn't implement the full OpenGL spec?

To what benefit?

OpenGL ES is basically OpenGL minus all of the bits you really really should have stopped using over a decade ago. Originally all of that crap was culled out because it was only realistic to write new software for such resource constrained devices anyways, so why burden driver authors and hardware with the need to support crap that should never be used anyways?

Now that mobile device CPU/GPUs are powerful enough to start being appealing as targets for porting OpenGL-based applications, I think the proper response is less "great, slather back on all of the deprecated legacy cruft from the desktop version of OpenGL" and more "for the love of god update your rendering pipeline to reflect the last 15 years of progress".

Afaik OpenGL ES 2.0 is only a subset of OpenGL 2.0, and doesn't have anything from later versions (3.x and 4.x). The fixed-function pipeline was removed in OpenGL 3.1 (core). See e.g. the OS X implementation of OpenGL.

So OpenGL ES is not OpenGL minus the legacy bits. It was that back in the day, but today it is a far smaller subset. Implementing OpenGL > 3.0 would not require implementing the fixed-function pipeline, and would benefit programmers using the latest and greatest features.

I thought the issue with immediate mode which prevented its inclusion (in ES) was that immediate mode is very inefficient for the CPU, resulting in increased battery drain on smartphones and tablets.

All rendering is in some capacity incremental. Sometimes you keep that (incrementally constructed) list of vertices around, of course.

If you look at the old immediate mode API, fundamentally, you're just passing in some floats that it copies into a buffer. This is not an expensive thing to do. It's not free, sure, but CPUs aren't bad at it. It's just some overhead compared to if you were to hand an entire buffer (in a known format) full of floats to the GPU at once. Some extra function calls, etc. If your app is only drawing a few thousand vertices, the overhead difference here is trivial... and if your app is drawing a million vertices, you won't be using immediate mode anyway.

Modern GPUs have to put the vertices in vram before they can render them.

That means immediate mode emulation is effectively this:

      a) copy all vertices from user code to some CPU buffer
      b) at draw time, copy the CPU buffer to VRAM
      c) ask the GPU to draw

Steps (a) and (b) are very expensive CPU-wise, and they waste memory.

vs OpenGL ES 2.0:

    at init time:
       put vertices in VRAM
    at draw time:
       ask the GPU to draw

An incremental only approach to design is non-design in my view.

Nearly all design is incremental. That's how design works. That's partly why chairs are still recognizably chairs, and other useful stuff like that.

Quite true. But there is another word there: "only".

I don't think jwz would have had a problem with it if they had called it EmbeddedGL or PhoneGL instead of trading on the name of OpenGL. Like jwz, I thought "Oh, it's OpenGL; I've got code already that does most of what I want," only to find none of that code worked.

Were people really surprised that Java ME, Java SE and Java EE were different APIs around the same core idea of write once, run everywhere?

So this whole rant was just about the name having a common substring of three or more characters?

I hope nobody tells JWZ what Linux was supposed to sound like.

true. I think I read your meaning wrong anyway.

God what flamebait, how is this near the top of the front page?

When your primary argument that the ES designers were idiots is lack of immediate mode, I'm sorry, you are the idiot. These are embedded systems with highly constrained resources and that immediate mode API is horrible for a lot of reasons:

* Requires tons of driver calls.

* Stupidly hard to optimize on the driver side when you have no idea just how many vertices or other per-vertex data are to follow your call to glBegin.

* Trivial to replace with a much better, and much, much higher performing vertex representation either through vertex buffer objects or simple calls to glVertexPointer/etc.

* Teaches beginners the Wrong Way of doing things -- you won't use this API for anything beyond a toy program as the last thing you would do is load an exported mesh from Max or Maya and then iterate through every vertex.

Having learned OpenGL initially with the ES 1.1/2.0 spec, then transitioning back to the desktop version, I couldn't believe how bloated the API had gotten. There is a reason they want to deprecate most of it and move to a spec that is similar to ES in its simplicity.

> God what flamebait, how is this near the top of the front page?

Because the source is widely respected and has an amazing demonstrated grasp of what good programming involves.

Eh, not any more. JWZ is a has-been, who admittedly did good things back in the 90's, but who lost touch with the industry when he became a hobby coder. This post is a clear demonstration of that - if he knew even a little bit about what he was talking about, he wouldn't make the outrageous claims he does. But no, he just wants to hang on to the toys he wrote 15 years ago and have them work with just a recompile on platforms that don't even remotely resemble the platforms he wrote them for back then. That's just absurd. And yes, I too complain (loudly) when I have to change code to accommodate changes in the platform (like how VS12 doesn't support WinXP), but in the end, deep down, I recognize that that's just how it has to go - and so should somebody who knows (or should know) much better than me, like JWZ.

That's just absurd.

It's not absurd at all. He's got a massive collection of amazing GL-based screensavers that a LOT of people have learned graphics programming from over the years. There are still contributions being made to this collection in 2012, and there have been consistent additions to it since the very early 90's. This is no toy collection.

Fact is, a lot of great OpenGL code could run on the iPad today, if only the false ideology of cutting 'archaic things' out of the ES profiles wasn't getting in the way. There are plenty of opportunities for OpenGL apps from decades ago to be re-targeted to the new platforms, if not for this problem - and jwz is right to point it out.

Anyone learning OpenGL from code as young as 5 years old (i.e., all fixed-pipeline code) is getting shafted, because what they learn is so out of touch with the state of the art in graphics technology. The same goes for the NeHe tutorials that whole generations have been brought up on. That's the whole point here: 3d graphics has moved on, and everybody working in it should too; of course, as with anything, there are always grumpy greybeards who feel that their way of doing things is Good Enough For Them, and therefore should be supported indefinitely.

The arguments on why fixed-pipeline OpenGL should be deprecated have been re-hashed several times convincingly in this thread, I don't have to repeat them here. I don't quite see why the state of the art in graphics should be held back because some people want to see spinning teapots on their iPads without having to learn something new. You seem to be implying that programs written in the past can't be made to work on new platforms; while it's true that it would need changes to the code, please show me the decades-old program in Objective C or Java you mention that you'd like to see working on your iPad without having to change the code.

I'd love to have Electrogig 3DGO on my iPad.

I hadn't heard of it, but a quick google shows that it's (or rather, was) a closed-source 3d modeler that was discontinued in the late 1990's. That is a different situation from the one under discussion; API backwards compatibility doesn't even apply to that product. That product would require a way to run native-code binaries, written for (presumably) Windows (or otherwise, some Unixes), unmodified on the iPad.

The point of my last sentence was that it's impossible, because that long ago Java, iPads, Cocoa etc. didn't even exist. For somebody who wants to port a C++ codebase from those days to a mobile platform, having to re-do the 3d rendering (which is only a very small part of any well-engineered large application anyway) in a programmable pipeline is the least of his worries.

Yes but in this case he's an idiot. Try asking Carmack. Any GPU programmer knows GL 1.x was utter crap. Good riddance.

jwz, if you've been following him, is inherently pragmatic. He's a follower of the philosophy that the computer, and by extension the frameworks and languages to program it, should be subservient to the programmer. They shouldn't tell you how to live your life or behave like a stubborn mule when, for whatever well intentioned reason, people decided to overhaul the spec everyone depended on.

I think his argument is that OpenGL ES should have had an "optimal" mode, where performance is best using the newer calls, and "compatible" mode, where if you don't care about performance and just want to port, you can get by. This appears to be what the jwzgl layer does.

Essentially he's just whining that porting his project is difficult because the API doesn't have the more beginner-friendly (but inefficient) drawing calls that used to be there. OpenGL ES is not designed to be beginner friendly; it's meant to give low-level access for high-performance graphics. It's perfectly reasonable to use an abstraction layer on top of that for simple drawing - it's not that hard, and a quick google would probably find many projects that have done so.

Personally, if you're going for simplicity rather than performance, there are much better ways to design the API than glBegin(). So all those calls are the worst of both worlds: awkward and slow.

OpenGL ES is for "embedded systems", which basically means phones. That means inefficient programming drains the battery. Sure, if jwz wants to punish his users, I guess that's his prerogative, but it seems like a good decision to provide an API that discourages bad practices.

Basically they decided to get rid of the cruft. OpenGL 1.x was designed in 1992? GPUs fundamentally changed in the mid 2000s and the decisions made for OpenGL 1.x no longer fit.

If you want 1.1 go contribute to this project. http://code.google.com/p/gles2-bc/

The iPad graphics subsystem absolutely destroys anything around when the OpenGL spec was released and many of these screen-savers were designed against hardware that's unbelievably slow compared to an iPad.

He's not making a game that's going to drain the battery in ten seconds flat, he's porting screensavers made in the late 1990s that were never heavy-duty to start with.

Sure. But OpenGL ES was not designed for the iPad. The decision to reject immediate mode was made in 2003, for the mobile devices of that time. And it is used in the industry for devices much less powerful than the iPad, even today.

It continues to be a popular choice for anything like the iPad, where there is no legacy software based on regular OpenGL, because it is a much cleaner and much easier to implement stack, and because if you are writing new software you should never be using all of that old deprecated cruft anyways.

First rule of optimization still applies to these battery-operated GHz/GB machines. If it's not a bottleneck, don't waste complexity and time on it.


GL 1.x is good for GLXGears and not that much more

(Yes, even though some games used it, etc, still...)

> These are embedded systems with highly constrained resources

Gigahertz quad-cores with several gigabytes of RAM are "constrained resources" nowadays? I agree with you though - it's flamebait. But it's also true.

These "embedded systems with highly constrained resources" are machines with 512MB+ of memory and monster CPU/GPUs. It's perfectly OK to write code for them that you haven't bled over to optimize the hell out of.

And JWZ just showed you don't need "tons of driver calls" unless you mean simple function calls that don't cross the kernel boundary.

It's not like there's this massive base of OpenGL code just waiting to be ported to embedded devices if there were just a few more API calls. Yes, there's glxgears, screensavers, and handful of open source games. You could probably get some old CAD programs to run on your Android.

But if we compare the massive number of newly written OpenGL ES applications to all the old OpenGL 1.x fixed pipeline apps, the latter seem insignificant.

As he so ably demonstrates, it's just not that hard to port the older apps either. This "All you engineers are idiots because I had to work THREE WHOLE DAYS to port this 20 year old code to an iPhone" just sounds childish and silly.

The OpenGL board made the right decision for ES.

Except he didn't port his code to OpenGL ES - he ported the missing API from OpenGL 1.3 to OpenGL ES.

If it takes one developer three days to implement the missing API, then perhaps it suggests that this API wasn't in any way damaging to the new system and could have been left in place originally?

It didn't take him three days to port all of the cruft from OpenGL 1.3 to Open GL ES; it took him three days to port the precise calls and precise semantics he relied upon in OpenGL 1.3 to OpenGL ES, and even then he punted on some of it.

The latter is a far, far cry from the former. The former is basically what a modern OpenGL (non-ES) driver does to support all of the legacy cruft on modern hardware. Recall that the OpenGL API is an API for a state machine. All of those layers of cruft built up over the years have incredibly arcane state interactions, are a bitch to support, and are why modern OpenGL (non-ES) drivers have gotten quite bloated.

Edit: to tackle the second part of your comment:

> perhaps it suggests that this API wasn't in any way damaging to the new system and could have been left in place originally?

The Immediate Mode calls he complains about not being there? They were "removed" (never introduced) way back in OpenGL ES 1.0, which IIRC originally debuted in 2003.

It's a pretty big stretch to argue that because reimplementing Immediate Mode on top of OpenGL ES doesn't have a crushing performance penalty on 2012's iPhone, it wouldn't have been a problem on what passed for mobile hardware back in 2003.

(which is ignoring the fact that even if the performance back then wouldn't have been garbage, taking the opportunity to make breaking changes and throw out cruft for an API that wasn't expected to host legacy apps was absolutely the right move)

* It costs money to document.

* It costs money to implement. Three days for a lone coder; much more than that for a careful, consistent development process.

* It costs money to develop regression and validation tests.

* It costs money to test, including collecting all the different hardware it needs to be tested on.

* It costs money to fix bugs in it.

* It costs money to answer questions from developers about it.

* These costs are multiplied by every future version that will continue to support it.

* These costs are multiplied by the many vendors that will be required to implement it.

* Some of these costs are multiplied by every language that will support bindings to the API.

* It costs code space in embedded devices.

* It costs as much as a vendor wants to spend trying to make it perform as fast as possible.

* It costs conceptual complexity in developers' heads.

* It costs developers when they inadvertently do things the old inefficient way.

* It costs reputation when somebody compares benchmarks of your product using the old inefficient way against another vendor's product using the new recommended way.

* Requirements to continually support it at the same level of performance constrain your choices when developing new hardware.

* Sometimes it costs precious die space which could be better used for other things.

* The inefficiency sometimes increases power consumption and reduces battery life.

* Users are disappointed if every version of new hardware doesn't run the old inefficient code even faster than the previous version.

So if it doesn't have a real need to be there in ES 1.0, then yes, it is actively damaging and should be thrown out in the rewrite.

While that's all true - it completely ignores the externalities. It cost Jamie 3 or 4 attempts and eventually 3 days work to get his unbroken code running on the new OpenGL version. What's the multiplier needed to account for the cost this change incurred for all the other developers who wrote code using OpenGL before ES?

Sure, maintaining backwards compatibility is costly for a project like OpenGL. But if you choose _not_ to maintain backwards compatibility, for whatever reason, and then someone shows that your reasoning is bogus by reimplementing the old API calls in 3 days, you should expect to get called "idiots". (And you then should either be sure enough in your convictions that you know jwz is wrong, or take it on the chin and say "Hey, we fucked _that_ one up. Mind if we include your code in our next release?")

This is a good question, and if you look a few posts up I started this comment tree by addressing it:

> if we compare the massive number of newly written OpenGL ES applications to all the old OpenGL 1.x fixed pipeline apps, the latter seem insignificant.

It was a design judgment they made and I think it was the right one.

> by reimplementing the old API calls in 3 days

JWZ did not produce a complete production-quality set of OpenGL 1.x compatibility APIs in 3 days. I'm going to guess without looking at his code that he didn't even write regression tests for the calls that he did implement.

What JWZ did was 3 days of possibly great coding, but that's only a tiny part of what needs to be done to ship new APIs in something like OpenGL.

> What's the multiplier needed to account for the cost this change incurred for all the other developers who wrote code using OpenGL before ES?

Let me ask you that - please list as many examples as you can think of of code written for OpenGL 1.x that would make sense to port to embedded devices.

I'll kick off the list:

* xscreensavers ...

> While that's all true - it completely ignores the externalities. It cost Jamie 3 or 4 attempts and eventually 3 days work to get his unbroken code running on the new OpenGL version. What's the multiplier needed to account for the cost this change incurred for all the other developers who wrote code using OpenGL before ES?

This is ignoring an awful lot of reality. Like, that OpenGL isn't a project that ships one software stack, but a standard managed by a group and voluntarily implemented by graphics vendors.

The Khronos group exists to negotiate between vendors and create a spec they're willing to implement, not to dictate to them a spec they hate and will ignore. There was a near riot when OpenGL (non-ES) 3.0 was announced; many vendors were seriously pissed off that it didn't drop Immediate Mode and the fixed pipeline, and that they had to sink enormous amounts of time into writing and maintaining a compatibility layer on top of modern GPUs for shit that had been deprecated for damn near a decade. That meant not just writing an Immediate Mode shim for a tiny subset of OpenGL 1.3 like JWZ (and like countless others have done in the past), but doing shit like generating shaders on the fly based on the current state of the emulated fixed-function pipeline, and making sure to cover all of the bizarre corner-case interactions between all of that old fixed-function garbage and threading and any new calls the program happened to mix in.

Vendors were ready to walk away from OpenGL over this. That's why the OpenGL 3.1 spec dropped the fixed pipeline like a hot potato and doesn't require that vendors ship a compatibility profile for the old crap.

Now, what if, back in 2003 when the decision to not include this stuff in OpenGL ES 1.0 was made, they'd released a spec requiring mobile vendors to do the same thing? What would the cost to JWZ and his project, and all other developers who wrote OpenGL before ES, have been if the mobile vendors had collectively told Khronos to fuck off? If OpenGL ES didn't even exist as a target for a shim, because there was no way in hell mobile vendors were going to ship a multi-meg driver that had to generate code on the fly, on the off chance anyone ever wanted to ship AutoCAD for hardware that barely had enough RAM to fit the driver?

Because there was certainly no guarantee, at the time, that OpenGL ES was going to become the standard for mobile graphics. It happened only because the standard that was proposed was something that the vendors were willing to ship.

> Sure, maintaining backwards compatibility is costly for a project like OpenGL. But if you choose _not_ to maintain backwards compatibility, for whatever reason, and then someone shows that your reasoning is bogus by reimplementing the old API calls in 3 days, you should expect to get called "idiots". (And you then should either be sure enough in your convictions that you know jwz is wrong, or take it on the chin and say "Hey, we fucked _that_ one up. Mind if we include your code in our next release?")

Oh come off it. He didn't reimplement ALL of OpenGL in 3 days. He reimplemented the tiny subset of calls, with the precise semantics (and only the precise semantics) his program needed, in 3 days. That's a far cry from shipping the entire behemoth that is a fully back-compatible OpenGL stack. And he's not the first guy to do this, or the 5th. I've got books on the shelf behind me with small Immediate Mode shims in their appendixes for people who want to do it "the old way". I've seen code to do it in forums and in repos. It's dead easy.

The Khronos group doesn't see this shit and think "damn, we fucked up". They think "Good. We gave vendors a decently small spec that they were willing to implement, allowing OpenGL ES to become the standard in mobile graphics, but we still managed to base it on well-chosen primitives that people like JWZ can use to build higher-level interfaces around if they choose".

He didn't implement OpenGL 1.3. He ported an OpenGL 1.3 app to OpenGL ES 2.0. There's a HUGE difference.

He ported the OpenGL 1.3 app to OpenGL ES by implementing the missing parts of the API that he needed inside the OpenGL ES API (albeit, as some people have pointed out, imperfectly).

That is not the same as rewriting an app to conform to the new API. There is, as you say, a HUGE difference.

And let's not forget he's also unwilling to help people out by uploading his OpenGL wrapper to github (or, by extension, any revision control system).


The source is available in http://www.jwz.org/xscreensaver/xscreensaver-5.16.tar.gz and he mentions that in his blog post.

How much more help are you expecting him to provide? Why would it be any more "helpful" for him to put it in some revision control system instead of just posting the source?

I meant the OpenGL wrapper part, not the entire xscreensaver code base. It's kind of hard to fork a tarball.

How hard is it to untar, take the header and do whatever you want with it (including putting it on github if you want)?

It's definitely a lower barrier to entry than requiring people to create a github account!

Edit: the point being, that it's not just "in the tarball": it's in a specific header file, and the blog post says which exact header file you want.

That is exactly what mercurial users say about code distributed on github. :)

Please respect the author's right to decide how to distribute his code.

the exact same criticism applies to you. why haven't you posted it to github yet?

Cause I don't care one bit. Unlike jwz who seems to care a hell of a lot.

You kind of miss the point. This is a philosophical argument ILLUSTRATED through OpenGL and a port. To quote, "thou shalt not break working code"

No, I understand that point. I just don't agree with it in this case.

* "thou shalt not break working code". That's not some Universal Code of Software Engineering, that's just something he made up.

* No one in their right mind would expect you could port 20 year old C using OpenGL 1.x code to an iPhone with no effort.

* There are plenty of reasons APIs can benefit from breaking changes. Security fixes, major architectural improvements, platform porting, even general evolution and modernization. The question is always one of cost/benefit, relative pain vs relative gain.

* Sorry JWZ, but your X screensaver project is actually not an overriding concern driving the evolution of OpenGL ES.

* I didn't mention it again because so many other have pointed this out, but this isn't even the same API. OpenGL ES is not the same thing as OpenGL 1.x.

* Changing the major version number is a common and accepted way to indicate breaking changes are present in an API. The OpenGL board went even farther and gave it a different name and a different versioning scheme specifically because it is different.

"Sorry JWZ, but your X screensaver project is actually not an overriding concern driving the evolution of OpenGL ES."

Seriously, if you think that's a valid response, then you did not understand his point. Repeating something does not make it any more valid an argument.

OK I re-read his post. I think I understand his point. He has a something of a valid point, but what he is raising are considerations, not overriding principles.

If he wants to write his code by treating these as "thou shalt not" type laws, that's fine. He can criticize others for not following the same principles. But calling a bunch of top graphics engineers "idiots" because they looked at a vastly different engineering problem with different priorities and arrived at a different design is rude and ignorant and it reflects poorly on JWZ.

JWZ makes this statement: "Your users have code that works."

He is simply wrong about this.

* No one had working OpenGL ES code before OpenGL ES was standardized. This is true by definition, and one cannot argue the definition of OpenGL ES with the OpenGL standardization body itself.

* Even if you accept the mistaken idea that the existing OpenGL 1.x codebase was supposed to be forwards compatible with ES, it's not true in the main. The amount of existing OpenGL 1.x code that could plausibly benefit from being ported to embedded devices is insignificant. Seriously, JWZ may have just ported most of it.

This is what I mean by "X screensaver project is actually not an overriding concern driving the evolution of OpenGL ES."

JWZ seems to think that his principles of software engineering are the only correct way to look at it. He goes so far as to say "If you don't agree with that, then please, get out of the software industry right now. Find another line of work. Please."

To support such absolute claims and broad sweeping statements he uses a toy porting project that takes three days. The vendors who participate in the OpenGL ARB are concerned with projects that take 3 years and APIs that persist for more than 20! Don't you think they actually might know a thing or two about APIs and engineering them for software? Don't you think they might know extremely well what the costs of failed backwards compatibility are?

But that's OK, everybody scratches their head for a bit trying to sort out the different flavors of graphics APIs. Even single-vendor Direct3D has similar issues. If this is the most confusing thing about OpenGL to him and he can do something useful with OpenGL ES in his first three days of messing with it, he is a really, really smart guy.

If your architectures are completely different, you're going to have to break some fucking code. That's just the way it is.

The article showed the opposite. That there was no need to break existing code.

It showed that some existing code with a shim containing a reimplementation of a small subset of OpenGL 1.3 could be made to run acceptably fast on a rather beefy example of 2012's mobile hardware.

It certainly did not show that there was no need to exclude those calls back in 2003 when they were originally not included in OpenGL ES.

Wasn't there? There were a number of functions he didn't port. He ended up with a subset of OpenGL 1.3. Maybe that subset wasn't worth the effort?

That's just what OpenGL needs right?

Yet another incomplete ad-hoc extended subset of OpenGL 1.x functionality lacking documentation, regression and performance tests, a stable and committed team of maintainers, ...

That is a noble goal, but it's one of hundreds of often-conflicting noble goals. It's not inherently wrong to design a toaster that fails to work as a refrigerator.

I don't think xscreensaver is a good example of valuable OpenGL 1.3 code that users want running on the iPhone. I think a person ought to have such an example before they start calling people idiots. Not to say that porting xscreensaver to the iPhone isn't cool, but it's only cool because it is anachronistic and whimsically impractical, like playing GameBoy games in a vintage arcade chassis in 2012.

There are a few things I couldn't figure out how to implement:

This after a year of percolation and three days (plus a few hours) of intense work by jwz. Given jwz's reputation as an insanely good programmer, I think this says a lot. As in, there's a nontrivial amount of work to do to claim near-compatibility with OpenGL 1.3, and an undetermined amount more work to claim true compatibility. Plus there may be more functionality from 1.4 and 1.5 to claim backwards compatibility with all of OpenGL 1.x. And all that to grant a (deservedly) dead desktop API new life on a mobile platform. No wonder they didn't bother.

Maybe the idiot is jwz.

There was a change in the technology that needed a change in the API. Not a single GPU uses a fixed pipeline anymore, so the API needs to change.

I see OpenGL as a testing platform that learned by throwing things at the wall and seeing what sticks. Companies create extensions, and if people find them useful they become part of the language, but in any API there is a need to remove what is not used anymore or what can be done much more efficiently in new ways.

Maintaining legacy code takes a lot of resources and is a pain in the ass to program, I can tell you (you have to emulate a lot of things that do not exist anymore, including BUGS and HACKS).

I can understand not going to the Apple extreme here (e.g. look at the Apple TV and you only see new digital ports), but the other extreme, don't touch anything for 20 years because it works, is equally nonsense.

I remember a brand new computer that hung for 5 seconds on startup because of a 1.44MB floppy drive the people who assembled the computer were afraid to remove. When I removed the thing it left a horrible hole, and I discovered the machine had the floppy check hardwired into the BIOS!!!

You buy a computer today and it has parallel, serial and PS/2 ports, with an outdated BIOS using 1981 timings they can't change so they don't "break legacy standards".

You buy a computer in 2012 with a 1920x1080 display and the first thing you see is a horrible black screen with fuzzy letters.

Nonsense. OpenGL needs to be way cleaner than it is now, where it's impossible to remove anything because some company in the consortium finds the feature "essential" because they have some legacy code they are too lazy to update.

> If there are old features that you would like to discourage the use of, then you mark them as obsolete -- but you do not remove them because thou shalt not break working code.
> If you don't agree with that, then please, get out of the software industry right now. Find another line of work. Please.

Arrogant and wrong. There was no working code in the first place, OpenGL ES is not OpenGL.

I, for one, am very glad with the changes the OpenGL board made. The glBegin/glEnd combo is so inane it should have been removed ages ago. Maintaining all these outdated codepaths carries its cost and clutters up documentation.

:) Back in 2008 I helped JWZ port daliclock to the iPhone and once that was working I proposed a port of xscreensaver. He couldn't imagine ever wanting screensavers on an iPhone at the time. Guess he changed his mind.

I also was the one that opened his eyes to the differences between OpenGL and OpenGL ES it seems. I apologize.

re: your username. Like retired basketball jerseys, shouldn't "quux" be reserved for Guy Steele?

Hmm, I didn't know that he uses quux too. I claim independent discovery.

Sure that's fair enough, but still, it's a bit like using Woz as a username...

Ruthless dropping of backward compatibility is a good thing in certain circumstances. I'm not saying this blog post is one of those cases, but for example Microsoft Windows pays a huge price in complexity, cruft, bloat and performance to obsessively maintain backward compatibility. Python 3 dropped backward compatibility and is experiencing significant pain for the decision, but once you get to Python 3 it arguably looks like a better, cleaner, more consistent language.


The thing that particularly bothers me about this post:

> People defend this decision by saying that they "had" to do it, because the fixed function pipeline is terribly inefficient on modern GPUs or some such nonsense. These people don't know what they're talking about, because the contour of the API has absolutely fuck-all to do with what goes over the wire.

As far as I can tell from what he's written, all he's done is forcibly enact the same inefficiencies that the paradigm shift away from immediate-mode rendering was intended to eliminate. The contour of the API in this case has a great deal to do with what goes over the wire: most critically, correctly-designed OpenGL ES programs will not retransmit static geometry to the GPU every single frame.

And as I understand it, what he's done is not retransmit static geometry every single frame, but use an array to send batches behind the scenes.

That is, the exposed API doesn't force any particular implementation.

And that makes a lot of sense.

If he's reimplemented the OpenGL 1 API, then he must be retransmitting geometry every frame, because the API is not sufficiently expressive to allow for retained state of that nature. He describes glBegin and glEnd as accumulating an array to batch out, but that batch gets re-accumulated and transmitted every single time the glBegin/End block is executed, i.e. every single frame.
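To put rough numbers on that difference — a back-of-the-envelope sketch, not a measurement of any real driver, and the function names are mine:

```c
#include <stddef.h>

/* Bytes handed to the driver over N frames for a mesh of `verts`
   vertices (3 floats each), comparing the emulated glBegin path
   (batch re-accumulated and re-sent every frame) with a retained
   vertex buffer (one glBufferData-style upload, then draws
   reference the GPU-side copy by handle). */

static size_t immediate_bytes(size_t verts, size_t frames) {
    return verts * 3 * sizeof(float) * frames;  /* resent each frame */
}

static size_t retained_bytes(size_t verts, size_t frames) {
    (void)frames;                      /* frame count doesn't matter */
    return verts * 3 * sizeof(float);  /* one-time upload */
}
```

For a modest 1000-vertex mesh at 60fps, that's ~720KB/s of redundant traffic for the immediate path versus a single 12KB upload, which is exactly the inefficiency the parent is describing.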

Question from ignorance: Couldn't you internally buffer the geometries and, say, send them to the GPU every 5 frames or so?

This rant should probably say "my iPhone could run a full OpenGL implementation, and instead I'm provided a subset. I proved that it could run full OpenGL by writing most of the missing parts".

"(OpenGL ES) is a subset of the OpenGL" according to http://en.wikipedia.org/wiki/OpenGL_ES

Was there ever any question over whether an iPhone could run a full OpenGL implementation?

Perhaps the people JWZ should be complaining about are those who decided to provide OpenGL ES instead of OpenGL proper on the iPhone in the first place.

People that don't study history are doomed to repeat it, or something like that. There's pretty much no reason to drop the old API calls; if you don't want them clogging stuff up, fine, make a separate library you have to throw a flag for or something.

Favorite quote: "If there are old features that you would like to discourage the use of, then you mark them as obsolete -- but you do not remove them because thou shalt not break working code.

If you don't agree with that, then please, get out of the software industry right now. Find another line of work. Please. "

This indicates a fundamental misunderstanding of OpenGL ES. It cannot break working code by not having OpenGL 1.0 features, because OpenGL ES is not a revision to or successor of OpenGL 1.0.

Note that OpenGL 4, which is a (distant) successor to OpenGL 1, does include OpenGL 1's features, marked as deprecated.

OpenGL > 3.1 does not require implementation of the compatibility profile, which includes the deprecated functionality. OS X doesn't implement it.

Awesome! I wasn't aware that had become optional.

I don't know much about OpenGL ES in particular, but backwards compatibility arguments are very tricky in general.

On the one hand you have all the people maintaining legacy apps and not interested in improving their code. They'll scream at you for breaking BC. On the other hand you have other people complaining that the library sucks due to all the old stuff and why they don't just remove all the crap.

I noticed this particularly in the context of PHP. People really hate some parts of the language (for good reason!) and commonly demand a big BC-breaking release that fixes all the bad parts. But every time something is fixed (obviously breaking BC in some way) there is a big outcry about wtf the developers have been thinking and whether they are all braindead - well, the usual stuff.

So really, before you start calling people idiots because they didn't keep compatibility with an older version (or here, even a completely different version), think again. There was probably a lot of thought put into the decision. It's not like people just say "Oh, let's drop this, just to make everyone change their code!"

My understanding is OpenGL ES is intentionally designed to be a subset of OpenGL so mobile devices can be simpler and therefore cheaper. It shouldn't be surprising mobile hardware and software is more limited. If deleting 80% of the platform features made your iPhone $50 cheaper and last another hour on battery, surely that's worth it?

Also, I believe the reason glVertex was removed is that it is a very inefficient way to draw. It wastes a lot of CPU time in "jump into the glVertex function, do a tiny amount of work, jump out, jump into glVertex, do a tiny amount of work...". It's to the extent that glDrawArrays is always faster: take a buffer, fill it with data as fast as memory can transfer it, then make a single function call to send all the data in one go, with negligible overhead. Interesting anecdote: at least in my work on 2D games, modern GPUs are so fast they can render a quad faster than you can write its vertices to a vertex buffer on the CPU. So performance is limited by the CPU overhead! This really makes a huge difference to the performance of some applications, and on mobile, performance is usually more of an issue. So by removing the old functions, you're forced to do it in a more efficient way, which may boost your framerate. Not so bad, huh? Unless, of course, you write a compatibility layer on top, which will reverse the performance gains.
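For illustration, here's roughly what "fill a buffer, then one call" looks like for a single quad — a hypothetical helper, not any real API. ES has no GL_QUADS, so the quad becomes two triangles whose six (x, y) vertices a single glDrawArrays(GL_TRIANGLES, 0, 6) would then submit:

```c
#include <stddef.h>

/* Expand an axis-aligned quad into the two-triangle vertex layout
   that glDrawArrays(GL_TRIANGLES, 0, 6) would consume.  `out` must
   have room for 12 floats (6 vertices * 2 components). */

static size_t quad_to_triangles(float x, float y, float w, float h,
                                float *out) {
    const float v[12] = {
        x,     y,      x + w, y,      x + w, y + h,  /* triangle 1 */
        x,     y,      x + w, y + h,  x,     y + h,  /* triangle 2 */
    };
    for (size_t i = 0; i < 12; i++) out[i] = v[i];
    return 6;  /* vertex count to pass to the draw call */
}
```

Filling a flat array like this is just a memory write, which is exactly why the CPU-side cost per quad is so low compared to six function calls into the driver.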

Another reason there isn't a compatibility layer is it isn't OpenGL's job: it's supposed to be a super thin layer on the hardware so you can interface to the capabilities of the hardware as efficiently as possible. I also expect writing a compatibility layer that is standards-compliant in the general case is extremely difficult - check out all the effort that went in to ANGLE, for example.

So I think it's just a misunderstanding of the purpose and design of the tools. In the future, I guess porting from OpenGL ES to desktop OpenGL will be a lot easier. My 0.02.

Part of the story here is that OpenGL ES isn't a new version of OpenGL, it's a different API for different platforms. It's nice if you can take old code and make it run on a mobile device easily, but overall that's probably a small consideration compared to having a clean API that performs well.

If we want to focus on immediate mode specifically, it arguably shouldn't be used even in regular old OpenGL. It's slower and results in overly verbose code. Most people seem to use immediate mode because that's what most of the tutorial examples use.

Maybe one way to tackle the switch from immediate mode -> vertex arrays (and maybe quads -> triangles) is just to make some macros that take old code blocks and generate new ones.

Are you sure it's fair to call it a different API? OpenGL ES is officially described as "well-defined subsets of desktop OpenGL" [1], which would suggest it has a great deal in common with desktop OpenGL.

[1] http://www.khronos.org/opengles/

He probably won't be very happy to know OpenGL ES 2.0 is not backward compatible with OpenGL ES 1.1.

I wasn't. I can understand getting rid of immediate mode; as jwz demonstrated it's not that hard to rewrite glBegin/glVertex code to use vertex buffers. But 2.0 has an absurdly steep learning curve if this tutorial is accurate: http://developer.android.com/resources/tutorials/opengl/open.... Multiple custom shaders to draw a single triangle, really?

The first hurdle is a little steep, yes. You need a shader to project your points, and a shader to texture your triangles.

But once you have those, it can scale to far more triangles.

Thing is, without shaders it's still fixed-function, which is bad.

It's common for hardware that supports the newer version to have a driver to support the older versions as well, though. Sort of just like the article did, but faking fixed function hardware using programmable hardware.

disclaimer: I write OpenGL ES implementations for living. I occasionally deal with some of the guys writing (parts of) the GL specs.

This article is a load of bollocks. It actually isn't the first article I read that complains about the removal of immediate mode in GLES. This article, like the others I've read, leaves me with the impression that the author is rather clueless.

The article essentially complains that a feature from a 20 year old API should have been included in an API designed 10 years ago. The glBegin/glEnd API was a horrible mistake from the start and should never have existed. It's a good thing it was removed from the GLES API. OpenGL 1.x and GLES 1.x are both deprecated, more than 10 years old, and should not be used any more. In any case, he's complaining about mistakes(?) made more than 10 years ago.

One major flaw in jwz's reasoning is the premise that something was removed. Immediate mode was indeed absent from the spec, but an implementation of GLES with immediate mode never existed. It was left out of the spec to avoid making GLES implementers waste time adding a legacy drawing API that is essentially useless and has awful performance.

GLES1 was a stripped down API for early mobile 3d applications that used rather primitive hardware or software.

There are lots of flaws in the GL(ES) API, but the removal of immediate mode wasn't one of them. If anything, they should have fixed/broken the API more and intentionally destroyed backward compatibility.

The committees designing the APIs are understaffed and have too much work on their hands. They're trying to reconcile the interests of hardware manufacturers, content creators and OEMs. Because of all these pressures, they're not really producing a great API, but at least we have some kind of well-specified standard. Calling these people idiots doesn't help anyone.

With modern 3D APIs, the vertex data is pushed into buffers in video memory. If you're writing 3D code in the 21st century, that's what you should do.

So a legacy feature that the author depended on was removed and rather than updating his code to run on modern software and hardware he re-implements it in software. He spends 3 days doing it, gets pissed off and writes a blog post.

This guy needs to get over himself. The world doesn't revolve around his pet project.

His rant is stupid for at least four reasons.

First, OpenGL ES is not regular OpenGL. If it were, it would just be called "OpenGL". It's a different thing, based on OpenGL, but targeted at mobile phones and "small" devices. It's almost like complaining that DirectX and OpenGL have different interfaces. They're different things, therefore they will be different. At the time of OpenGL ES's release there was no backwards compatibility to consider, because it was a new thing.

Second, few applications or games used the immediate mode drawing code in regular OpenGL. It's slow and inconvenient to use for the data formats used in real life. It might be great for the OpenGL equivalent of "Hello, World", but other than that nobody uses it.

Third, there was a lot of discussion about what should be in OpenGL ES, and he could have contributed his opinion when the spec was being drafted. Where was his outcry then, when he could have made an impact? Honestly, though, it probably would have been ignored, because immediate mode is so lame.

Fourth, immediate mode is deprecated even in regular desktop OpenGL, as of version 3.0.

If you were porting GL-based screensavers that were written a decade ago you'd probably be feeling the same way. You don't want to rewrite code that already works on an earlier version of the spec.

What earlier version? GL ES is NOT GL.

That's why it's bothering a lot of people that it has "GL" in the name at all.

Why? It's basically a subset of GL, what's wrong with that? It's a bit like complaining that XHTML Basic profile is incompatible with the full XHTML.

In a fashion, OpenGL ES is about as OpenGL as C# is C.

Next Week: JWZ tries to port xscreensaver to webgl.

Wasn't the whole point of the reduced API surface area in ES so that implementors only had to write the code that actually interfaces to the graphics hardware and let higher level libraries and engines deal with abstracting it appropriately for the task at hand?

Just some numbers, based on my hardware:

Lion OS X install on disk: 7GB

iOS 5.0.1 install: 1.7GB

My MacBook's RAM: 8GB

iPod Touch: 512KB

MB's disk size: 100GB

iPod Touch: 8GB

So: Maybe they removed 80% of the API in general because the iPhone/iPod touch is only 10-20% of a normal computer? Something had to go.

jwz is complaining about OpenGL 1.3 support. It might be worth asking yourself what computers were like when OpenGL 1.3 was released (August 2001).

P.S. I'm guessing your iPod touch has 512MB of RAM, not 512KB.

It actually has 128MB or 256MB of RAM: http://en.wikipedia.org/wiki/IPod_Touch#Models

> As with all things, the first 90% took the first 90% of the time, and then the second 90% took the second 90% of the time.

That math is totally consistent with my experience.

JWZ is so right.

The whole "FFP is slow" argument is completely wrong.

Most graphics drivers are broken and inefficient. It's just a lack of decent dedicated programming effort coupled with over-management at NVidia etc.

My own brush with Khronos group back when this was being done still makes me shudder.


Yeah, graphics drivers are broken and inefficient, and one reason is they have to implement ALL of the LEGACY, DEPRECATED SPEC in SOFTWARE, because some people's code could break.

Stealing this quote:

"As with all things, the first 90% took the first 90% of the time, and then the second 90% took the second 90% of the time."

Sorry to be that guy, but that quote is over 25 years old - http://en.wikipedia.org/wiki/Ninety-ninety_rule .

Which is of course a thinking error, he meant: "As with all things, the first 90% took the first 90% of the time, and then the remaining 10% took the second 90% of the time.".

This all makes me wonder: is it a hindrance to learn OpenGL even if you want to develop primarily for Android/iPhone? On one hand I would say yes, it is. On the other hand, it feels like a good idea to learn OpenGL in order to know what a rendering pipeline is, the GL state machine, etc. What do you guys think?

"I wrote this because you are all idiots."

Speaking as an idiot, why do I want to run a screensaver on my iPhone?

You don't, especially an ancient one written for a deprecated graphics API optimised for SGI Graphics workstations.

JavaScript is also not "backward" compatible with Java, and their names are similar.

So, what happens when all this less efficient drawing code now drains the batteries of devices more quickly?

Good effort and all, and it's pretty cool that you can do this, but why all this work to avoid improving something?

Speaking of which, is OpenGL ES 3.0 supposed to launch this year?

Speaking of idiocy, a screen saver for the iphone? Jeez!! What a waste of talent and battery.

I do agree with the sentiment that APIs should be deprecated over time.

I bet people would have less of a problem with it if it had been called "OpenGL SE" instead.

All I have to say about this post is "Thank goodness for Readable."

One committee, ten years. One jwz, three days.

This is the stuff "10X" is made of.

You're comparing completely incomparable tasks. JWZ did nothing but write a thin wrapper. The committee didn't omit such a wrapper because they couldn't accomplish it; they omitted it as a deliberate design decision.

Exactly. Forcing every embedded device that wants to do portable 3D to include megabytes of wrappers for over a decade of cruft wasn't worth it, when the alternative is people having to spend a few days' time if they can't find a suitable shim already written somewhere.

It took NASA several decades to build a rocket to the moon; I built a Diet Coke & Mentos rocket in 30 minutes. Guess I'm 1000 times the space scientist those morons at NASA are *rollseyes*.

jwz, how about porting the ~Aaron extension to Mac OS X now. Too bad SGI isn't around to hold a funeral like Apple did for OS 9 :)

I'm home sick right now with a mild headache and dizziness. I thought it wouldn't hurt to browse hacker news for a bit. Then that black/green color scheme hit me.

Wow ... this guy is just an arrogant idiot ...

You first need to understand that the OpenGL specification is not made to be easy to use, but to be efficient. And the API design is highly constrained by the hardware. You also need to understand that OpenGL 1.3 is FCKING 11 YEARS OLD!

Back in the day you had immediate mode. You first call glBegin, then make one call to glVertex for each vertex, and finally call glEnd. So if you want to render one frame of a cube, that's 1+6*4+1=26 calls, and for each call your CPU sends data to the graphics card. There are two huge problems with that: you have to transfer the same vertex data to the graphics card for every frame, and your CPU's processing power limits your graphics card's processing power (because CPU calls take way more time than the graphics card takes to render a vertex).

Why was it designed like that in the first place if it's so inefficient? Easy: there were no graphics cards. All the 3D rendering was done by the CPU.

So with the development of more powerful graphics cards, they updated the specification. With the new API you allocate vertex buffers directly in your graphics card's memory, and you just need to call one function to render the whole buffer. So you don't need to transfer all the vertex data every frame, and you don't need a gazillion function calls to render a cube. THIS WAS IN 2004!
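That call-count arithmetic can be generalized into a toy sketch (function names are illustrative, not GL's):

```c
/* Immediate mode costs glBegin + one glVertex per vertex + glEnd,
   every frame; a vertex buffer costs a handful of setup calls once
   (glGenBuffers/glBufferData), then one glDrawArrays per frame. */

static unsigned immediate_calls_per_frame(unsigned faces,
                                          unsigned verts_per_face) {
    return 1u + faces * verts_per_face + 1u;  /* begin + vertices + end */
}

static unsigned buffered_calls_per_frame(void) {
    return 1u;  /* just the draw call, after the one-time upload */
}
```

For the cube (6 quad faces, 4 vertices each) that's 26 driver calls per frame versus 1, and the gap only widens with real meshes of thousands of triangles.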

So when the Khronos consortium (just a reminder, the "idiots" are: Apple, AMD/ATI, Nvidia, Intel, Google, Id Software, ...) decided in 2007 to make a mobile version of OpenGL (ES stands for Embedded Systems), did they choose to base their API on something conceived in 1992 and dropped in 2004? OF COURSE NOT!

But why not keep it compatible? Well, it's a waste of time. In 2012, developers shouldn't be using functionality deprecated since 2004, especially when it's highly inefficient.

This guy needs to grow up, understand that OpenGL ES's main objective is not screensavers, understand that he sometimes needs to update his programming knowledge a bit, and acknowledge that he's not an expert in the very complex field of GPUs.

tldr; he's the idiot

Easy : there were no graphic card. All the 3D rendering was done by the cpu.

Err, what? SGI, who wrote the OpenGL spec, had rendering hardware before OpenGL existed...

Yeah you're right, there were graphics cards, my bad, I took a shortcut. But at the time (1992), OpenGL implementations were mostly on the CPU (except for SGI graphics cards, which were reserved for professionals). And graphics card processing power was not enough to make the CPU a bottleneck.

You missed a key point: with OpenGL 1.3 you don't need to follow the glBegin/glVertex/glEnd paradigm. You can create a display list or you can use vertex arrays. In fact the reason for including display lists in the first place was to make it so you didn't have to send one vertex at a time.

I don't agree with JWZ, regardless. OpenGL ES was a good simplification to the overall system. But he is right in that an optional compatibility layer that lives on top of OpenGL ES would have kept the original API valid for those who need it (and without slowing down the core of OpenGL ES).
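The display-list point above can be made concrete with a toy model (the `dl_*` names are mine, standing in for glNewList/glCallList; no real GL involved):

```c
#include <stddef.h>

/* Toy model of a display list: record a batch of vertices once,
   then "replay" it any number of times, so the per-frame cost is
   one call instead of re-sending every vertex. */

#define DL_MAX 256

static float  dl_store[DL_MAX * 3];  /* x, y, z per vertex */
static size_t dl_len;

static size_t dl_record(const float *verts, size_t count) {
    dl_len = count < DL_MAX ? count : DL_MAX;
    for (size_t i = 0; i < dl_len * 3; i++) dl_store[i] = verts[i];
    return dl_len;  /* stands in for the id glGenLists would return */
}

static size_t dl_call(void) {
    return dl_len;  /* one call replays the whole recorded batch */
}
```

Which is the parent's point: even the 1.x API already offered ways to avoid paying the per-vertex call cost every frame.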

Related interesting read, if you haven't seen it: the brouhaha a few years ago over OpenGL 3.0 preferring to support CAD/CAM programs over games.


I've heard that since OpenGL 3.2 most of the issues that caused that drama got resolved. Does anyone know details?

Edit: Reading parent's link, it looks like there were two complaints: First, OpenGL 1 era features haven't been completely removed--merely marked deprecated, as JWZ loudly advocates. Second (and this took up most of the article) OpenGL 3.0 was not feature-competitive with DX11, which is a bit silly, given that DX11 wasn't available at the time. As of the present day, and as far as I know for most of the recent past, the latest version of OpenGL has maintained feature parity with current hardware and therefore DirectX.

A spec that's not backwards compatible. Oh my! I've never seen that before. Get over it. Some software isn't backwards compatible. OpenGL ES != OpenGL. Get over it.

Someone send that post to the GNOME lists... hopefully they will be using Xfce and hence be able to read it. Then just hope they get the analogy.

If you use green text on a black background you are an idiot.

Green text on a black background is very readable in normal lighting, yet doesn't assault the eyes if you are reading in a darkened room.

In addition, there are still people using CRTs. Green on black is often clearer than white on black or black on white, especially in a small font, on CRTs because it is essentially monochrome, and so cannot suffer from fuzziness due to color misalignment.

FWIW, greenscreen (and amber) CRTs used green and amber phosphors; color alignment didn't come into it, right? As I understand it, green and amber because they were cheap.

Correct, alignment was not a factor for single-phosphor tubes.

The eye is not nearly as sensitive to red and blue, so those colors did need higher energies. But I doubt that it was because the phosphors were cheaper. Computer terminals and monitors were very expensive back then, but commercial color television made some of the common components (like phosphors) relatively cheap.

Green was found to be readable and pleasant. It was the most common for IBM equipment (such as the PC monochrome display).

Amber is very visible, especially in bright light. It was popular too.

White was popular with DEC terminals.

If I recall correctly, those green phosphors were long persistence, so they could have a lower refresh rate. Amber and white phosphors came along later when it didn't matter anymore because logic got faster. Some of the first terminals I used had core memory, so they still retained their contents when turned off.

Right. They were monochrome hardware, so color alignment was not a factor.

Unrelated but interesting. I always wondered if it was just nostalgia, preference, or some other reason for green text. It makes sense with RGB pixels of course. Now what about orange? That's not monochrome ...

Much respect to JWZ, but I don't agree here. I am not even sure what to say, except that he is wrong, so I will just leave it there. And no, I was not involved in any way with the GL ES specification.

The above comment is not particularly useful, interesting or relevant. Neither is this one.
