
Mozilla’s WebGL2: an experimental WebGL implementation based on OpenGL ES 3.0 - robin_reala
https://wiki.mozilla.org/Platform/GFX/WebGL2
======
agentultra
As a new contributor to Mozilla Firefox, I did some work earlier this year
implementing the OES_vertex_array_object extension. I just want to point out
that it was a very open and welcoming team to work with. jgilbert was
incredibly patient with me as I came up to speed on a large codebase to make
this patch. I think Mozilla has a very good community, and the technical
review/mentorship process is top-notch. I highly recommend contributing if you
are even the least bit interested.
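
For anyone curious what that extension actually exposes: on a WebGL 1 context
you request it at runtime and get OES-suffixed entry points for vertex array
objects. A minimal sketch (the canvas lookup and the attribute setup in the
middle are assumed):

    // The extension may be null if the browser/driver lacks it.
    var gl = document.querySelector('canvas').getContext('webgl');
    var ext = gl.getExtension('OES_vertex_array_object');

    // Record buffer/attribute bindings once into a VAO...
    var vao = ext.createVertexArrayOES();
    ext.bindVertexArrayOES(vao);
    // ...bindBuffer / vertexAttribPointer / enableVertexAttribArray here...
    ext.bindVertexArrayOES(null);

    // ...then restore all of that state with one call when drawing.
    ext.bindVertexArrayOES(vao);
    gl.drawArrays(gl.TRIANGLES, 0, 3);

In WebGL 2 (the subject of this post), the same entry points are promoted to
core as gl.createVertexArray() and gl.bindVertexArray().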

------
mtgx
Oh, this is exciting. I've been wondering when WebGL will switch to OpenGL ES
3.0. But is Google letting Mozilla do all the work? Are they helping with
this?

I also wonder if OpenGL 4.0 will arrive on mobile sooner than expected (2014
or 2015), since with Nvidia starting to support the full OpenGL in its mobile
chips, the others will fall quite a bit behind; I don't think it will be as
easy for them to support the full OpenGL on their GPUs. Nvidia achieved it
simply by reusing its PC architecture. None of the others have that advantage.
Even Intel barely got to support OpenGL 4.0 in its PC architecture, Haswell,
and they're probably years away from implementing the latest OpenGL 4.4.

~~~
angersock
" _Even Intel barely got to support OpenGL 4.0 in its PC architecture,
Haswell, and they 're probably years away from implementing the latest OpenGL
4.4._"

If memory serves, the Mesa folks usually have pretty great turnaround time on
new GL feature support--and as a side effect, Intel does too.

~~~
X4
And that's a huge pity! I'm not alone when I say that this entire situation is
depressing. Among other things, it harms cross-platform success (which means
several times less money than is possible; think about it, dear publishers).

So, do you have an idea on how to game it, to get more hardware vendor support
on Linux/Unix, so it becomes an actual "real alternative" to the DirectX
competitor, Microsoft? I don't believe it's just the license, because
AMD/Nvidia/Intel give out their proprietary drivers, but we all know that
those drivers still fall short so badly that using them is difficult.

~~~
simcop2387
Actually, Intel doesn't give out a proprietary driver, at least for new
hardware (did they for old?). It's all open source in Mesa now, which is one
of the big reasons that Mesa has been improving so much lately: they've been
pouring developers into it.

~~~
X4
It's actually the opposite: Intel gives out their driver for old hardware, but
for the recent GMA 500 they didn't do that yet. So you're 'almost' right about
Intel[1], but how does that make my entire statement wrong? AMD, Nvidia, and
many other graphics hardware vendors don't open-source their drivers, but that
is not the main problem. The problem is that they don't care about the
Linux/Unix platform enough to make their drivers actually stable or good
enough.

And yes, that's why I believe something needs to be done to get graphics
hardware vendors to do a better job with their Linux drivers.

1.
[http://en.wikipedia.org/wiki/Graphics_hardware_and_FOSS#Intel](http://en.wikipedia.org/wiki/Graphics_hardware_and_FOSS#Intel)

------
maaaats
A question: why are this and the standard WebGL both based on OpenGL _ES_, and
not OpenGL?

~~~
bryanlarsen
It's my impression that OpenGL ES is basically OpenGL without a bunch of
backwards-compatibility crap (immediate mode, the fixed-function pipeline)
that no modern program uses.
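
Concretely, ES dropped the old glBegin()/glVertex()/glEnd() immediate mode, so
even a single triangle goes through a buffer object. A minimal WebGL-flavoured
sketch (gl is assumed to be a context, and positionLoc an attribute location
from an already-linked program):

    // No immediate mode in ES/WebGL: upload vertices to a buffer...
    var buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER,
                  new Float32Array([0, 1,  -1, -1,  1, -1]),
                  gl.STATIC_DRAW);

    // ...describe its layout to the (mandatory) vertex shader...
    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

    // ...and draw.
    gl.drawArrays(gl.TRIANGLES, 0, 3);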

~~~
zackmorris
OpenGL ES was a way to pass the buck to developers so companies could save
money. Since the OpenGL fixed-function pipeline could have been implemented to
some degree on top of OpenGL ES (but never was), it forced developers to learn
the entirety of rendering, from modelview/projection matrices, to setting up
render buffers, to compiling shaders, and so on. The small one-page examples
that made OpenGL great are now project downloads and video tutorials and blog
posts full of errata and workarounds. To me, something of great value was lost
in the transition, probably forever.
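
To see how much boilerplate replaced those one-page examples: before anything
appears on screen, every ES 2 / WebGL program has to compile and link its own
shaders. A minimal sketch (vsSrc and fsSrc are assumed GLSL source strings):

    // Compile one shader stage, surfacing the driver's error log.
    function compile(gl, type, src) {
      var s = gl.createShader(type);
      gl.shaderSource(s, src);
      gl.compileShader(s);
      if (!gl.getShaderParameter(s, gl.COMPILE_STATUS))
        throw new Error(gl.getShaderInfoLog(s));
      return s;
    }

    // Link the two mandatory stages into a program object.
    var prog = gl.createProgram();
    gl.attachShader(prog, compile(gl, gl.VERTEX_SHADER, vsSrc));
    gl.attachShader(prog, compile(gl, gl.FRAGMENT_SHADER, fsSrc));
    gl.linkProgram(prog);
    if (!gl.getProgramParameter(prog, gl.LINK_STATUS))
      throw new Error(gl.getProgramInfoLog(prog));
    gl.useProgram(prog);

None of this existed in fixed-function GL, where glColor() and friends did the
equivalent work for you.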

What I see now are the great failures of OpenGL ES: having to specify line
widths in the shader; inconsistent support for points; not being able to
predict whether a shader will have one instruction too many and fall off the
fast path. I've learned it in great depth, but the mental pollution I had to
absorb in the process has caused lasting damage in the form of resentment and
exhaustion.

If it were up to me, I would scrap the whole thing and just give developers
full DSPs so they could write their own renderers in something like a hybrid
of Go and Matlab, but with exact specifications and basic hardware guarantees.

~~~
masklinn
> OpenGL ES was a way to pass the buck to developers so companies could save
> money. Since the OpenGL fixed-function pipeline could have been implemented
> to some degree on top of OpenGL ES (but never was)

Wasn't the fixed-function pipeline present in OpenGL ES 1.x and removed in ES
2.0, much as desktop OpenGL deprecated it across the 3.0 -> 3.1 transition?

~~~
zackmorris
Ya, you are right. I was startled to see I got downvoted for my comment, but
realized that there is quite a bit of context I skipped over. I've rewritten
things for OpenGL three times now: once for Mac OS, once for iOS ES 1 and once
for iOS ES 2. Then I scrapped it all and wrote some small wrappers so I could
use the same code between iOS and Mac OS. We have everything defined in
macros, for example MY_glGenFramebuffers(), to map similar code to
glGenFramebuffers() and glGenFramebuffersOES(). Unfortunately there are some
iOS-specific calls that tie a render buffer's backing store to a UIView's
memory, so it's difficult to make a perfect wrapper.
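
(A WebGL analog of that wrapper idea, since WebGL 2 promotes the OES entry
points to core; makeBindVAO is a hypothetical helper name, not anything from
the spec:)

    // One call site, whichever VAO entry point this context has:
    // core on WebGL2, OES-suffixed on WebGL1 with the extension.
    function makeBindVAO(gl) {
      if (gl.bindVertexArray)                    // WebGL2: core
        return function (vao) { gl.bindVertexArray(vao); };
      var ext = gl.getExtension('OES_vertex_array_object');
      if (ext)                                   // WebGL1: extension
        return function (vao) { ext.bindVertexArrayOES(vao); };
      throw new Error('no vertex array object support');
    }
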
It feels like most of my day-to-day work now is just slogging through subtle
issues with render and stencil buffers, and tweaking the shaders to even get
15 fps on older iOS devices. I get very little "actual work" done on any given
day with OpenGL anymore.

I don't think it's gone in the direction it was originally intended to go. I
just mean, I think OpenGL started with a scientific or mathematical approach,
and it's become kind of a catch-all, or lowest common denominator, for
wherever the industry is going. So for example, you really have to use one
giant texture and one shader to do all of your work, because context switches
are incredibly expensive on iOS (see the sketch below). This doesn't make
sense to me, though, because reading from a different texture address should
be practically a free operation. My hunch is that they left off some critical
piece of hardware, maybe a memory controller, or they didn't write a very good
driver, so changing textures or shaders has to move a lot of data in and out
of registers.

I realize this is for cell phones and the hardware really is puny, but I don't
think it would have been that expensive to give the hardware/drivers just that
little bit of extra power to take the burden off developers. So that's what I
meant about passing the buck. It's hard to explain my reasoning for all of
this, so I guess I came across as trollish.
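
The "one giant texture" trick above is just a texture atlas: pack every image
into one texture so nothing ever needs rebinding, and remap each sprite's
coordinates into its sub-rectangle. A hypothetical sketch (atlasUV and the
rect layout are made up for illustration):

    // Map a sprite's local (u, v) in [0,1] into its sub-rectangle
    // of the shared atlas texture; rect is {x, y, w, h} in [0,1].
    function atlasUV(u, v, rect) {
      return [rect.x + u * rect.w, rect.y + v * rect.h];
    }

    // e.g. a sprite occupying the top-left quarter of the atlas:
    atlasUV(0.5, 0.5, {x: 0, y: 0, w: 0.5, h: 0.5});  // -> [0.25, 0.25]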

------
kkowalczyk
Why isn't Mozilla working with standards bodies on this, and instead trying to
push their proprietary, developed-in-secret APIs on everyone?

Do they think they can just implement new web APIs without first discussing
them with the W3C and other browser vendors like Microsoft or Apple?

~~~
AshleysBrain
I think the intent is to get this standardised, and I'd assume they already
are working with standards bodies on this. Mozilla are very much in favour of
openness and I can't think off the top of my head anything they've actually
done which is intentionally proprietary.

