
From 0 to GlTF with WebGPU: The First Triangle - Twinklebear
https://www.willusher.io/graphics/2020/06/15/0-to-gltf-triangle
======
ArtWomb
Hi! Thanks for this tutorial. I notice you work with Ingo Wald at the fabled
Utah graphics program. I just recently linked Ray Tracing Gems right here on
HN and am working my way through it. Very insightful ;)

Am curious if your group investigates using WebGPU in contexts other than
rendering? Web scale GPU compute clusters, wgpu on native, scientific
simulations, ai research ... as just a few possible examples?

~~~
Twinklebear
Yeah! I've started to look at some WebGPU compute applications with other
students in my group, and I think there could be some cool use cases, like the
ones you mention. It sounds a bit odd, but WebGPU on native (by directly
using Dawn or wgpu-rs) is actually pretty compelling as a cross-platform low-
level graphics API.

What's really cool is that compute and rendering using WebGPU can get near-
native level performance. So a lot of scientific applications (which typically
rely on more FLOPs/parallel processing) can be implemented in WebGPU compute
without sacrificing much performance. I'm not sure how many simulations would
be ported to WebGPU, since they usually end up targeting large scale HPC
systems and CUDA, but for visualization applications I think the use case is
pretty compelling, especially for portability and ease of distribution. On the
compute side, I implemented a data-parallel Marching Cubes example:
[https://github.com/Twinklebear/webgpu-experiments](https://github.com/Twinklebear/webgpu-experiments),
and found the performance is on par with my native Vulkan version. You can try
it out here:
[https://www.willusher.io/webgpu-experiments/marching_cubes.html#vol=Skull](https://www.willusher.io/webgpu-experiments/marching_cubes.html#vol=Skull).
There is a pretty high first-run
overhead, but try moving the slider around some to see the extraction
performance after that. WebGPU for parallel compute, combined with WebASM for
serial code (or just easily porting older native libs), will make the browser
a lot more capable for compute heavy applications. You could also combine
these more capable browser clients with a remote compute server, where the
server can do some heavier processing while the client can do medium scale
stuff to reduce latency or work on representative subsets of the data.
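
For anyone curious what "WebGPU compute" looks like from JavaScript, here is a
minimal sketch of setting up and dispatching a compute pass, written against
the current WebGPU spec (the shader string, buffer contents, and the 64-thread
workgroup size are illustrative assumptions, not taken from the linked repo):

```javascript
// Pure helper: number of workgroups needed to cover `count` items
// when each workgroup processes `workgroupSize` of them.
const workgroupCount = (count, workgroupSize) =>
  Math.ceil(count / workgroupSize);

// Sketch: upload `inputData` (a Float32Array), run a WGSL compute
// shader over it, and submit the work to the GPU queue.
// Assumes an environment exposing `navigator.gpu` (e.g. a browser
// with WebGPU enabled, or a native Dawn/wgpu binding).
async function runCompute(shaderCode, inputData) {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Storage buffer initialized with the input data.
  const buffer = device.createBuffer({
    size: inputData.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(inputData);
  buffer.unmap();

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: {
      module: device.createShaderModule({ code: shaderCode }),
      entryPoint: "main",
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Record and submit the compute pass.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(workgroupCount(inputData.length, 64));
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```

The same dispatch structure (bind data, set pipeline, dispatch N/64
workgroups) is the skeleton that a pass like Marching Cubes' per-voxel
classification would hang off of.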

As for AI, people have started looking at compiling ML tools to WebGPU +
WebASM:
[https://tvm.apache.org/2020/05/14/compiling-machine-learning-to-webassembly-and-webgpu](https://tvm.apache.org/2020/05/14/compiling-machine-learning-to-webassembly-and-webgpu),
with nice results, also getting to near-native GPU
performance.

------
ozten
Great article, thanks!

FYI: I have two GPUs but on all my browsers your article says "Error: Your
browser does not support WebGPU"

AMD Radeon Pro 5300M + integrated Intel UHD Graphics 630, MacBook Pro
(16-inch, 2019), macOS 10.15.5 (19F101)

[https://get.webgl.org/](https://get.webgl.org/) works fine in all my
browsers.

~~~
donmccurdy
WebGPU is a much newer API than WebGL — from the first section of the article,
it sounds like you'll need to use Chrome Canary for now.

~~~
ozten
Thanks! Installed Canary and enabled WebGPU with
chrome://flags/#enable-unsafe-webgpu

