
Faster 3D rendering with WebGL 2.0 - DanielRibeiro
https://blog.chromium.org/2017/03/faster-3d-rendering-with-webgl-20.html
======
roschdal
Developers are welcome to help update Freeciv WebGL 3D to WebGL 2.0.
[https://play.freeciv.org](https://play.freeciv.org)

------
mixedbit
According to webglstats.com, WebGL is supported on 96% of browsers/devices. I
wonder what the maximum possible coverage achievable by WebGL2 with current
hardware is (if I understand correctly, some WebGL-capable devices will never
be able to support WebGL2 due to hardware limitations).

~~~
overgard
On PC there's probably no difference. WebGL 1 is a really low baseline -- it's
based on an ES spec from 2007. ES3/WebGL2 isn't super cutting edge. In terms
of mobile I don't think WebGL is that useful anyway (too much overhead), but
pretty much every high-end device newer than 2013 should support it.

~~~
iainmerrick
Too much overhead? Can you go into more detail?

Part of the point of OpenGL is to offload as much computation as possible to
the GPU. There's a lot of very cool stuff you can do with barely any CPU usage
at all, so even if you're using JavaScript it shouldn't make much difference.

(High-end games still need a lot of CPU power too, of course)

~~~
overgard
Well, two things. First off, because it has to be safe in the browser, there's
no direct memory access. Which means if you need to dynamically write CPU data
to buffers, WebGL is going to add a lot of overhead. The trend with newer APIs
(DX12/Metal/Vulkan) over the last few years has been direct access to avoid
API overhead, and it's unlikely the browser+JavaScript will allow for that
kind of memory usage (since it's unsafe). So that's one limitation that's
unlikely to be addressed.
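
To make that concrete, here's roughly what a per-frame dynamic upload looks
like (a minimal sketch, assuming `gl` is an existing WebGL context): every
update is a copy through the API, with no equivalent of persistently mapping
the buffer and writing into it directly the way Vulkan/D3D12/Metal allow.

```js
// CPU-side scratch data, rewritten each frame.
const positions = new Float32Array(3 * 1024);

const vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, positions.byteLength, gl.DYNAMIC_DRAW);

function uploadFrame() {
  // ...fill `positions` with this frame's vertex data...
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, positions); // full copy, every frame
}
```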

There are other limitations too. For example, Windows has no native OpenGL ES
support, so all your API calls get translated to DirectX (via ANGLE in
Chrome and Firefox), and now you're actually going through two APIs.

Another consideration is that, excluding pure viewers, you're likely to be
doing a lot of other expensive calculations in a 3D scene, and JavaScript is
poorly suited to that kind of work.

~~~
iainmerrick
_The trend with newer APIs (DX12/Metal/Vulkan) over the last few years has
been direct access to avoid API overhead_

Hmm, is that really true? I haven't used those latest APIs yet! I thought good
practice was still to keep the data on the GPU as much as possible, and avoid
fine-grained sharing with the CPU. There's plenty you can do with static data.

I agree that JS/WebGL is not quite there yet for big-name games, but it's
surely pretty close. And asm.js and eventually WebAssembly can help claw back
CPU performance.

Rather than technical concerns, as a game developer I'm more concerned by the
fact that nobody is willing to pay for anything on the web. If and when WebGL
goes mainstream for games, it'll be even more of an ad-encrusted "freemium"
race to the bottom than the various mobile app stores already are.

~~~
pjmlp
Yes, modern cards are migrating to a model where CPU and GPU have direct
access to the same memory region with zero copy.

Of course you need to synchronize access.

~~~
iainmerrick
_modern cards are migrating to a model where CPU and GPU have direct
access to the same memory region with zero copy_

Hmm, looking at the Metal docs, that's not quite how I read it!

Here's Apple's "best practices" doc
([https://developer.apple.com/library/content/documentation/3D...](https://developer.apple.com/library/content/documentation/3DDrawing/Conceptual/MTLBestPracticesGuide/TripleBuffering.html#//apple_ref/doc/uid/TP40016642-CH5-SW1)):

"Dynamic buffer data is typically written by the CPU and read by the GPU. An
access conflict occurs if these operations happen at the same time; [...] For
the processors to work in parallel, the CPU should be working at least one
frame ahead of the GPU. This solution requires multiple instances of dynamic
buffer data"

It's still zero copy, but they're not literally peeking and poking the same
buffer at the same time.

So WebGL "just" needs to offer zero-copy access to buffers from JavaScript.
That seems at least possible...? TypedArray would be the key component, and
that already exists. Whether all those pieces have been put together in the
right order yet, I dunno.

Anyway, low-overhead access to the GPU from JavaScript seems possible in
principle.
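
A sketch of how the pattern Apple describes might translate into WebGL terms
(illustrative names, not from any real codebase): rotate through a small ring
of buffers so the CPU never writes into one the GPU may still be reading.

```js
// Triple-buffering sketch: write into one buffer per frame while the GPU
// may still be consuming the previous two.
const ring = [gl.createBuffer(), gl.createBuffer(), gl.createBuffer()];
let frame = 0;

function uploadDynamic(data) { // data: a Float32Array rebuilt each frame
  const buf = ring[frame % ring.length];
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);
  frame++;
  return buf; // bind this buffer for this frame's draw calls
}
```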

~~~
pjmlp
Well, modern graphics cards and Apple devices don't belong in the same
sentence.

[https://developer.amd.com/wordpress/media/2013/06/1004_final...](https://developer.amd.com/wordpress/media/2013/06/1004_final.pdf)

> It's still zero copy, but they're not literally peeking and poking the same
> buffer at the same time.

Hence why I mentioned some synchronization is needed.

------
andrew3726
Relevant Talk from GDC17: "Reach the Largest Gaming Platform of All: The Web.
WebGL, WebVR and glTF" [0]

[0]
[https://www.youtube.com/watch?v=jLNtjujPhzY](https://www.youtube.com/watch?v=jLNtjujPhzY)

~~~
jabl
Another GDC17 talk with the other side of the coin:
[https://deplinenoise.files.wordpress.com/2017/03/webtoolspos...](https://deplinenoise.files.wordpress.com/2017/03/webtoolspostmortem.pdf)

(Not specifically anything to do with WebGL; more about using web tools vs.
native for a non-trivial application.)

------
blockoperation
I'm still waiting for Chromium to add a per-site switch for WebGL (just like
for JS, cookies, Flash, etc). Enabling WebGL globally seems like a disaster
waiting to happen, even with sanitisation and sandboxing.

------
supernumerary
A good place to start:
[https://webgl2fundamentals.org/webgl/lessons/webgl-fundamentals.html](https://webgl2fundamentals.org/webgl/lessons/webgl-fundamentals.html)

------
ionwake
Sorry for the ignorance, but:

1- Will devices remain backwards compatible with WebGL 1?

2- Will three.js be able to easily update to this version?

~~~
roschdal
> 1- Will devices remain backwards compatible with WebGL 1?

Yes.

> 2- Will three.js be able to easily update to this version?

[https://github.com/mrdoob/three.js/issues/9965](https://github.com/mrdoob/three.js/issues/9965)

~~~
ehsankia
I'm still not fully clear on #1. If your device only supports WebGL1, but
the site uses getContext('webgl2'), you'd get an error, right?

So if your application _can_ work with WebGL1, you should use that if you want
to reach the widest audience?

Also, is it all or nothing? Could certain parts of WebGL2 work and others not
be supported? Could you get away with using webgl2 if you don't use certain
advanced features?

~~~
sillysaurus3
Welcome to graphics programming! Enjoy your stay!

(Yes. To everything. Except "Could certain parts of WebGL2 work and others not
be supported?" -- the point of having a spec is that if your graphics driver
advertises support for WebGL XX, then you can be sure the host is capable of
running all the features of XX, short of bugs in the driver.)
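
For the mechanics of #1, a minimal sketch (note that, strictly speaking, a
failed context request yields null rather than a thrown error, so the usual
pattern is to probe and fall back):

```js
// Probe for WebGL 2 first; getContext('webgl2') returns null (it does
// not throw) when the device or browser can't provide it.
const canvas = document.querySelector('canvas');
let gl = canvas.getContext('webgl2');
let isWebGL2 = true;

if (!gl) {
  gl = canvas.getContext('webgl'); // fall back to the WebGL 1 feature set
  isWebGL2 = false;
}
```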

------
willvarfar
I liked the After the Flood demo
[https://playcanv.as/e/p/44MRmJRU/](https://playcanv.as/e/p/44MRmJRU/) linked
to in the post, but was disappointed by the lack of shadows cast by the leaves,
and that the blowing leaves intersected and passed through each other. It's
still very distinctly computer generated.

~~~
sillysaurus3
That's true of literally every computer-generated video. Put it next to
camcorder footage and it's no contest in terms of realism.

I devoted several years to this, and there are a bunch of reasons why that
ends up being the case. The main issue is, think of how many atoms there are.
Now realize the actual number of atoms is for all intents and purposes
infinitely larger than whatever you thought of.

Ok, so we try to approximate. Right away you run into problems. What we call
"light" is just a set of photons entering our eyes. Each photon has a
wavelength. Our eyes run a compression algorithm in the retina -- our brains do
not receive a "count" of the number of photons entering our eye. Instead, our
retina cells themselves are responsible for coming up with an average of all
the photons hitting them, and sending a signal to our brain: that's color.

Anyone who has any graphics experience whatsoever will know that it's an
extremely common operation to calculate lighting by multiplying a light's
color with an RGB texture. (The leaf's texture, say.) Now, tell me: What does
it mean to "multiply" two colors together?

Nothing! It's fake! It has no basis in reality whatsoever. It literally is an
astonishing approximation that _happens_ to work reasonably well.
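
Spelled out, the operation in question is just a per-channel product scaled by
N·L (a toy sketch in plain JS, not from any real engine; in practice this
lives in a fragment shader):

```js
// Classic Lambertian shading: texel color times light color times N·L.
function shadeDiffuse(texel, lightColor, n, l) {
  const ndotl = Math.max(0, n[0]*l[0] + n[1]*l[1] + n[2]*l[2]);
  return texel.map((c, i) => c * lightColor[i] * ndotl);
}

shadeDiffuse([0.2, 0.6, 0.1], [1.0, 0.9, 0.8], [0, 1, 0], [0, 1, 0]);
// -> [0.2, 0.54, 0.08]: numbers with no physical meaning, but they look OK
```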

But our brains know the difference, inchoately, when you look at a real video
vs a computer-generated video. You can tell that there's something almost
indescribably "off" about the CG video. And I suspect it's because the final
product consists of a series of approximations that work well in isolation but
are not quite perfect when it comes to simulating reality.

Combine that with a lack of detail in CG video due to the fact that there are
a gazillion atoms in real life and only a few million triangles in CG life,
and you will always end up with experiences that are very distinctly computer
generated.

Defeatist! Call me names! Say we can do better! Say that the latest advances
in graphics and raw computing horsepower will solve these concerns. Maybe they
will. But most importantly, I dare you to get so frustrated with our lack of
ability to generate truly realistic computer video that you take on the
problem yourself, and dive into the field with your whole heart. Surpass me.
Surpass what everyone thinks is possible. Question the fundamental assumptions
that the entire field is based on: that `N dot L x diffuse` is even slightly
reasonable. Create your own "diffuse" textures that store counts of photons
bucketed by wavelength, measured from a real-life data source, instead of RGB
colors. Try fifty things that everyone dismisses as unpromising, because
whatever the final solution looks like, it's going to be unlike anything we're
doing right now.

Cracking the problem of realistic video isn't going to happen with incremental
improvements of our current techniques. Remember this rule of thumb: A video
that's indistinguishable from real life will appear equally real to both
humans and animals. A cat, for example, would think your video is real, if
it's viewing your video on a monitor calibrated to cat eyes instead of human
eyes.

It's more productive not to take the field too seriously, and to fall in love
with the endless puzzles and challenges that graphics programming affords.
Once you accept that the game is to generate something that looks _cool_, not
something that looks _real_, everything falls into place.

~~~
JoshTriplett
I've seen images rendered using "path tracing"
([https://en.wikipedia.org/wiki/Path_tracing](https://en.wikipedia.org/wiki/Path_tracing))
which look literally photorealistic (indistinguishable from a photograph).

~~~
sillysaurus3
It's very important to avoid photos altogether when dealing with the question
of realism. Our brains process videos differently than still-frames. It's why
video compression is very different from image compression.

In other words, it's unfair to say that a given graphics technique looks real
because it generates photorealistic still-frames. Our target is videorealism,
not photorealism. I fell into this trap myself: it's so tempting to start with
still frames and think that results are encouraging just because they look
good. But string those still-frames together into a video and it'll be obvious
that it's artificial, assuming it's a rendering of a fairly complex scene,
which is equally important. But then you get into questions of whether the art
assets were carefully prepared to match the properties of a real-life object,
etc, which is why everybody starts with cubes and spheres, which also happen
to be objects that you never see in real life. (When's the last time you were
in a room composed of perfect cubes and spheres?)

~~~
dahauns
> It's why video compression is very different from image compression.

Is it? I was under the impression that I-frames in current video compression
formats are encoded rather similarly to current still-image formats. (JPG's
comparative lack of sophistication notwithstanding, but c'mon, it's old. :) )

~~~
sp332
Usually I-frames are a small minority of the frames. Most frames encode motion
information. So the decoder takes picture data from previous frames and moves
it according to the motion vectors. Then, if the result was deemed too
inaccurate by the encoder, there will be a little more picture data for the
current frame. Having accurate motion vectors is more important than having
perfect color reproduction.
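
As a toy sketch of what the decoder does with those vectors (illustrative
names, single-channel frames for simplicity):

```js
// Copy a block from the previous frame, shifted by the motion vector,
// then add whatever residual the encoder decided was worth storing.
function decodeBlock(prevFrame, width, mv, residual, bx, by, size) {
  const out = new Uint8ClampedArray(size * size);
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const src = (by + y + mv.y) * width + (bx + x + mv.x);
      out[y * size + x] = prevFrame[src] + residual[y * size + x];
    }
  }
  return out;
}
```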

------
cr0sh
First off - this is a great thing to hear; I'm glad that WebGL is advancing.

That said, there are a few voices here bemoaning the fact that this update
still isn't up to par with the latest native API abilities or whatnot.

These kinds of comments remind me of complaints that artificial
intelligence isn't useful because it can't do x/y/z (i.e., something that a
human can do), and so is worthless.

Never mind the fact that when x/y/z tasks are demonstrated, the goalposts are
moved to a/b/c...

I see the same thing here with WebGL. Personally, I feel this is unfair.
Instead of complaining about what it can't do, one should work with it and
push it to do what it can do. I daresay that in the hands of talent, the
current WebGL could easily do things that would almost be considered
"impossible". By talent, I mean those who really push the capabilities of
hardware and software, pulling tricks and such to maximize the apparent
capability (demoscene).

Even with that taken off the table, though - WebGL (even 1.0) can still do
things and represent ideas and worlds in enough detail to be fun for a game or
other use. I feel that those who complain will always complain. Instead, those
who can - use it! So what if it isn't perfect; I dare say that Infocom's Zork
is a better game than many of today's triple-A FPS titles! Graphics aren't
everything; if you know what you are doing, you can make a compelling and fun
game today without them - there are plenty of recent IF titles to attest to
that.

I sometimes wonder if the complaints aren't excuses or something; that
developers are complaining about WebGL because they don't want to move to the
platform of the web browser (for some reason), so any excuse not to develop on
the platform is a valid excuse? Perhaps. For me, what I see in WebGL is
extreme promise; in fact, it holds that promise right now, with WebGL 1.0 - if
more people would just develop for it.

Maybe I'm just biased, having grown up with computers from a time when the
idea of 3D graphics was either simple wire-frames or other slow software
rendering - or you dropped many thousands to tens-of-thousands of dollars on
hardware to get output that wouldn't even be worthy of the old-school Quake
engine. For me, what I see with WebGL blows me away, despite it not being
"realistic". It's realistic enough for me to see compelling action and stories
developed with it.

...and with native API access? Well - what little I have seen (because I run
*nix and you don't see much there) - it's amazing. One day it will be on the
browser too. But what we have already available, in the right hands, should be
more than enough. So quit complaining, and get to creating.

/personally wish I had the talent to work in this field...

~~~
merlish
> I dare say that Infocom's Zork is a better game than many of today's
> triple-A FPS titles! Graphics aren't everything;

Interesting point of view, certainly. But the majority of game purchasers do
care about graphics. Great graphics suggest (often incorrectly, I'll admit!)
great production quality, and in the minds of many this is a 1:1 mapping to
the monetary value of a product.

So selling a game that's accessible via a URL is a bit of a barrier on the
commercial front. I guess itch.io works here.

On the technical/noncommercial side, for higher-fidelity games I'd argue that
guaranteeing a stable framerate (and not having the rest of the browser chug
while the game's running) and being able to use multi-threading would be a
great help.

Once WebAssembly gets threading support, I'd expect to start seeing more
intensive games. Web workers are... okay, but hardly convenient. I genuinely
believe multi-threading support is more important than a better graphics API.
(Typical engines cross-compiled from C++ then basically only have to deal with
targeting mobile-quality graphics, with a few edges knocked off and some
workarounds.)

As for Zork: You don't need WebGL for Zork. (Indeed, embedding text in a WebGL
scene (as opposed to overlaying it) is very hard work.)

So WebGL is a suitable technology when you need interactive 3D that can't be
prerendered but don't need it to be impressive; when you're happy with results
from a decade ago at worse performance. It's not a bad prospect for me as a
prospective indie developer, but the tooling is noticeably more difficult than
in native solutions.

And the WebGL API is much harder to use than glBegin(), glVertex3f()...,
glEnd() from the bad old days... but that's just the influence of OpenGL ES.
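
For a sense of the gap, here's roughly what one flat triangle costs (a sketch
assuming `gl` is a context and `program` is an already-compiled shader
program; the shader setup itself is several more steps):

```js
// Buffer creation, upload, attribute plumbing, draw -- versus three
// glVertex3f() calls in old immediate-mode GL.
const verts = new Float32Array([0, 1, 0,  -1, -1, 0,  1, -1, 0]);

const vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);

const loc = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```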

Suggested path: write something in C against bgfx (
[https://bkaradzic.github.io/bgfx/examples.html#metaballs](https://bkaradzic.github.io/bgfx/examples.html#metaballs)
), then pray to kripken. And if Emscripten doesn't work, just put some binaries
on itch.io. Even the subset of gamers interested in indie games expects to
download binaries.

--

Edit: You say you wish you had the talent to work in this field. If you're a
fan of IF, you certainly have the technical talent to do that :) And writing
is fun, but extremely time-consuming. Good grief it's time-consuming.

If you can deal with some mildly heinous gameplay, consider buying & playing
Sunless Sea. The writing is transcendental, and the universe is brilliant.

------
uegwqhugehoq
Does WebGL 2 have geometry shaders?

~~~
jsheard
Nope, and it doesn't have compute or tessellation shaders either. Web 3D has a
way to go before it catches up with native APIs.

~~~
pjmlp
Every time I see something done with it versus what native code can do on the
exact same hardware, I reach the conclusion that it is only useful for DX 9
class graphics or just plain prototyping.

~~~
goldfire
DX 9 class graphics are great for a lot of applications. Not every project has
to be at the cutting edge of rendering technique to achieve its goals.

~~~
pjmlp
Which is already a waste of resources on a DX10-level card, a class of
hardware released alongside Windows Vista almost 10 years ago.

When will we get a 3D Web API that can do DX11 class graphics?

And I am already leaving out DX12- and Vulkan-level cards.

------
NickBusey
The fact that the Chromium blog requires JS to display some text and one image
is kind of ridiculous.

~~~
uwu
I disabled scripts and the article is visible in a noscript element for me.

It's like in this meme:
[https://i.imgur.com/TUEABrP.jpg](https://i.imgur.com/TUEABrP.jpg)

The majority of users don't go out of their way to disable scripts, so I can
totally understand it when site owners refuse to waste time (= money) catering
to the needs of the small fraction of users who do.

