
Real-Time Ray-Tracing in WebGPU - Schampu
https://maierfelix.github.io/2020-01-13-webgpu-ray-tracing/
======
hinkley
I read Practical Parallel Rendering (1st Edition: 2002) quite a long time ago,
on someone's recommendation. There's quite a substantial section on how to
build and manage effective job queues that's worth a read even if you don't do
any CGI.

But there's also a thesis in there: given that scene descriptions grow in size
much faster than screen resolution increases, there should be a tipping point
where ray-tracing is more efficient than rasterization. I don't think they
expected it to take quite this long though.
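
A toy back-of-the-envelope version of that tipping point (illustrative
numbers only, and the reply below pushes back on the premise): assume
rasterization cost grows roughly linearly with triangle count N, while ray
tracing costs roughly P·log N for P pixels.

    // Hypothetical cost model: raster ~ N, ray tracing ~ P * log2(N).
    // Unitless "work" numbers, purely illustrative.
    const P = 1920 * 1080; // one primary ray per pixel
    for (const N of [1e6, 1e7, 1e8, 1e9]) {
      const raster = N;            // O(N) in scene size
      const rt = P * Math.log2(N); // O(P log N)
      console.log(`N=${N}: ${rt < raster ? "ray tracing" : "rasterization"} is cheaper`);
    }
    // Under this naive model the crossover lands between 1e7 and 1e8 triangles.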

~~~
corysama
"Practical Parallel Rendering" is a great book for anyone. It really is
"Parallel Work Distribution Options: The Book".

The argument that "Ray tracing is logarithmic in scene complexity while
rasterization is linear. Therefore, ray tracing will win eventually!" ignores
the fact that rasterizers also use hierarchical graphs to maintain logarithmic
complexity just like ray tracers do. You could make the same argument in
reverse by comparing a well-designed rasterization-based system against a
naive ray tracer that brute-forces every triangle against every pixel.
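
To make the contrast concrete, here's a toy TypeScript sketch (not from the
book; triangle intersection is stubbed out as a box test, and it's an "any
hit" query like a shadow ray):

    // Toy "any hit" query, first brute force, then with a BVH.
    type Vec3 = [number, number, number];
    interface AABB { min: Vec3; max: Vec3 }
    interface Tri { bounds: AABB }  // stub: a triangle is just its box here
    interface Node { bounds: AABB; kids?: [Node, Node]; tris?: Tri[] }

    // Slab test: does the ray (origin o, precomputed 1/direction) hit the box?
    function hitsBox(o: Vec3, invDir: Vec3, b: AABB): boolean {
      let tmin = -Infinity, tmax = Infinity;
      for (let i = 0; i < 3; i++) {
        const t1 = (b.min[i] - o[i]) * invDir[i];
        const t2 = (b.max[i] - o[i]) * invDir[i];
        tmin = Math.max(tmin, Math.min(t1, t2));
        tmax = Math.min(tmax, Math.max(t1, t2));
      }
      return tmax >= Math.max(tmin, 0);
    }

    // The strawman: O(N) per ray, every triangle tested against every ray.
    function anyHitBrute(o: Vec3, invDir: Vec3, tris: Tri[]): boolean {
      return tris.some(t => hitsBox(o, invDir, t.bounds));
    }

    // With a BVH: prune entire subtrees whose bounds the ray misses, ~O(log N).
    function anyHitBVH(o: Vec3, invDir: Vec3, n: Node): boolean {
      if (!hitsBox(o, invDir, n.bounds)) return false;   // skip the whole subtree
      if (n.tris) return anyHitBrute(o, invDir, n.tris); // small leaf: test directly
      return anyHitBVH(o, invDir, n.kids![0]) || anyHitBVH(o, invDir, n.kids![1]);
    }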

The difference is really a focus on local vs. non-local data. Rasterizers
focus on preparing data ahead of time so that it can be directly indexed
without searching. Ray tracers focus on making global searching as fast as
possible. Rasterizers do more work at the start of a frame (rendering shadow,
reflection, ambient occlusion maps). Ray tracers do more work in the middle of
the frame (searching for shadowing/reflecting/occluding polygons).

It's wonderful that we finally have both accelerated in hardware. HW ray
tracing still has a very long way to go. Currently, budgets are usually less
than 1 ray per pixel of the final frame in real-time apps! Figure out how to
use that effectively! :D But it still opens up many new possibilities.

~~~
pixelpoet
I (lycium) wrote a bunch on this topic in a recent thread on reddit's
r/hardware, with many similar points:
[https://www.reddit.com/r/hardware/comments/enn41z/when_do_yo...](https://www.reddit.com/r/hardware/comments/enn41z/when_do_you_see_fully_path_traced_aaa_games_being/)

------
ArtWomb
>>> Recently I began adapting an unofficial Ray-Tracing extension for Dawn,
which is the WebGPU implementation for Chromium

Wow, very impressive! I believe this is only available for macOS Chrome
Canary, with the enable-unsafe-webgpu flag toggled on. But we are starting to
see more example code.

[https://github.com/tsherif/webgpu-examples](https://github.com/tsherif/webgpu-examples)

This is the first engine specifically targeting RTX that I've seen so far,
though. Starting to feel like the future, with real-time hardware-accelerated
rendering capabilities in the browser ;)

Do you mind my asking what you plan to build with it?

~~~
Schampu
Hey, thanks for your comment.

The Ray-Tracing Extension is currently only available for Windows and Linux.

My next plan is to implement the extension in Dawn's D3D12 backend, so I can
build Chromium with my Dawn fork and have Ray-Tracing available directly in
the browser (at least for myself) :)

------
darknoon
Hey, I really appreciate your work on these bindings. I have done a lot of
work with Metal on iOS, and found it frustrating to start over from scratch
when trying to combine graphics and deep learning (CUDA on Linux). It would be
awesome to see a future where you can write high-performance GPU-driven apps
in a cross-platform way with js/webgpu/wasm and slap in platform-specific UI,
without bundling all of Unreal or Unity. Running in a browser would be an
optional convenience.

Anecdotally, I was trying to connect PyTorch's CUDA tensors to the GL textures
that Electron/Chrome uses to render in a Canvas without going through CPU
memory, but couldn't figure out where to inject my code. Chromium's GPU code
is quite a maze. Perhaps a smarter person will be able to accomplish that.

~~~
modeless
You may be interested in this Chromium fork that Intel is working on which
adds a machine learning API, loosely modeled on Android's NNAPI:
[https://github.com/otcshare/chromium-src/commits/webml](https://github.com/otcshare/chromium-src/commits/webml)

It's not likely to be standardized as is, but the code demonstrates how to
integrate something like this into Chromium. There's a Web ML community group
that's working to figure out what could be standardized in this area.
[https://webmachinelearning.github.io/](https://webmachinelearning.github.io/)

------
CountHackulus
Now if only Apple would get off their high horse and allow SPIR-V so that the
WebGPU standard can move forward.

~~~
macawfish
Totally! I share your frustration.

And I'm cautiously optimistic that the recent conversation the GPU Web working
group had with the Khronos liaison will spur some SPIR-V progress.

[https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...](https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_253vEtKxuUkTGvxpuv8Ac/edit)

The meeting notes also reveal a clue as to why Apple might be pushing WSL so
hard:

 _> MS: Apple is not comfortable working under Khronos IP framework, because
of dispute between Apple Legal & Khronos which is private. Can’t talk about
the substance of this dispute. Can’t make any statement for Apple to agree to
Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say
whether we’re (Apple) happy with that._

 _> NT: nobody is forced to come into Khronos’ IP framework._

~~~
om2
We sincerely think a text-based language is better for the web. It’s honestly
weird that anyone thinks a binary format is a good webby choice. Both our
browser engine people and our GPU software people agree.

Khronos basically said in that meeting that it would be fine to fork SPIR-V,
which would solve Apple’s and Microsoft’s issues with their IPR framework.
We’ve also discussed using a textual form of the SPIR-V format. We’ve offered
all sorts of compromises. It’s Google that isn’t willing to budge, even
stating in a WebGPU meeting that they never even considered what compromises
would be acceptable to them. Encourage Google to be open to meeting in the
middle and maybe we will get somewhere.

~~~
mantap
I do agree that a text-based format is better in many cases. A bytecode
format would suck for debugging. People want to type their shaders directly
into a browser window and have compilation happen in less than a frame:
in-browser shader editors have things like colour pickers where you drag a
value around and the text changes, so every edit has to recompile within a
frame. Emulating this without browser support would require a lot of work.
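
To make that loop concrete, here's a sketch against the current WebGPU API,
where `createShaderModule` takes shader source as a string (the helper and
element names are made up):

    // Recompile on every edit; keep the last good pipeline if compilation fails.
    function attachLiveRecompile(device: GPUDevice, editor: HTMLTextAreaElement,
                                 onPipeline: (p: GPUComputePipeline) => void) {
      editor.addEventListener("input", async () => {
        // createShaderModule returns immediately; errors surface asynchronously,
        // so a half-typed shader never stalls the frame.
        const module = device.createShaderModule({ code: editor.value });
        const info = await module.getCompilationInfo();
        if (info.messages.some(m => m.type === "error")) return; // keep last good one
        onPipeline(device.createComputePipeline({
          layout: "auto",
          compute: { module, entryPoint: "main" },
        }));
      });
    }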

But that text-based format should be GLSL, because that's what everybody's
shaders are already written in for WebGL, and obviously there will be a
transition period where both WebGL and WebGPU have to be supported (which is
easy, since most people use a library such as Babylon or Three).

Having a text-based language that is not GLSL is pointless IMO. You get the
drawbacks of both a bytecode format (you need to ship a compiler with the page
to compile GLSL into WSL) and a textual one.

As an outsider, supporting both GLSL and SPIR-V seems like the most logical
option.

------
91edec
Are there any good WebGPU tutorials?

I've been wanting to play around with graphics programming for a while, and
the web is such a perfect platform thanks to its cross-platform compatibility
and lower barrier to entry.

~~~
cogman10
It's barely got any support ATM.

[https://github.com/gpuweb/gpuweb/wiki/Implementation-Status](https://github.com/gpuweb/gpuweb/wiki/Implementation-Status)

I doubt you'll see a lot of good tutorials for it until it starts landing in
stable versions of browsers (or even in nightly builds, tbh).

While it's starting to show its age, WebGL has a lot of tutorials, and you can
start working with it today.
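
If you do start with WebGL, the entry cost really is tiny; this much already
runs in any page with a canvas:

    // Minimal WebGL "hello": clear the canvas to a colour (no shaders needed yet).
    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const gl = canvas.getContext("webgl");
    if (!gl) throw new Error("WebGL not supported");
    gl.clearColor(0.1, 0.2, 0.3, 1.0); // RGBA in [0, 1]
    gl.clear(gl.COLOR_BUFFER_BIT);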

~~~
Keverw
I wonder if people will use WebGPU directly or go through a higher-level
framework/library such as Babylon. Babylon looks like it has WebGPU support,
though not everything seems to be supported yet; I haven't personally played
with it.

~~~
Jasper_
I'm of the opinion that WebGPU should try not to be a friendly API on its own,
but instead practically mandate a framework on top. This is the philosophy
that the newer APIs like Vulkan/D3D12 adopt: that you should have a higher-
level renderer driving it that's able to reason about your entire scene graph.
I've suggested as much to the WG before, who roughly seem to agree.

[0]
[https://github.com/gpuweb/gpuweb/issues/171](https://github.com/gpuweb/gpuweb/issues/171)

~~~
flohofwoe
I don't agree. The Vulkan and D3D12 APIs didn't have to be so
programmer-hostile; they're just badly designed APIs.

Thankfully, WebGPU took a lot of inspiration from the Metal API, and less from
Vulkan and D3D12, and thus is usable without a "sanity layer".
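
As a rough illustration (standard WebGPU calls, nothing project-specific):
getting a usable device is this short, with no instance creation, queue-family
selection, or descriptor-pool plumbing.

    // All the ceremony WebGPU needs before you can create buffers and pipelines.
    async function init(): Promise<GPUDevice> {
      const adapter = await navigator.gpu.requestAdapter();
      if (!adapter) throw new Error("WebGPU not available");
      return adapter.requestDevice(); // sane defaults; no manual queue setup
    }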

~~~
pjmlp
That sanity layer ended up being yet another way to foster adoption of
middleware engines.

