
A Taste of WebGPU in Firefox - feross
https://hacks.mozilla.org/2020/04/experimental-webgpu-in-firefox/
======
thewebcount
I love how much this is like Metal! I've been really getting into Metal lately
and I find it so much easier to use than I did OpenGL.

The one glaring exception is binding. It's the thing I hate most about OpenGL.
I don't really have to think about it in Metal at all. The fragment shader
just takes a texture or buffer as a function argument and I can use it just
like I'd use any other C++ object or struct in the shader. From the calling
side, I just say, here's the texture that goes with that argument (by name in
the code, not by some random ID assigned at runtime). No figuring out which
texture unit to use, or which texture target. No cases of accidentally
forgetting to enable a texture unit or bind a texture to it, or using the
wrong texture target. All of that fiddling around makes OpenGL such a pain in
the ass to use. But it seems like WebGPU kept some of that nastiness. Why?

~~~
kvark
The binding model of WebGPU is nothing like OpenGL's. I don't know where you
got that idea, especially considering the article has a dedicated section
describing this.

Metal's binding model works well because each buffer can contain references
to other resources (`MTLArgumentBuffer`), so you can change whole packs of
resources at once. But argument buffers are not implementable on other APIs.
We chose Vulkan's descriptor set-like model as the least common denominator.
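
To make the descriptor set-like model concrete, here is a sketch in
TypeScript. The descriptor shapes follow the early-2020 WebGPU draft and may
change; `device`, `passEncoder`, and the calls in the comments are
placeholders, since no real GPU is touched here — only the plain descriptor
objects are built.

```typescript
// Binding-model values per the 2020 draft spec (redeclared here so the
// sketch runs outside a browser; in a real page GPUShaderStage is global).
const GPUShaderStage = { VERTEX: 0x1, FRAGMENT: 0x2, COMPUTE: 0x4 };

// A bind group layout declares, up front, which resources a group holds and
// which shader stages can see them -- roughly a Vulkan descriptor set layout.
const layoutDescriptor = {
  entries: [
    { binding: 0, visibility: GPUShaderStage.FRAGMENT, type: "sampled-texture" },
    { binding: 1, visibility: GPUShaderStage.FRAGMENT, type: "sampler" },
    {
      binding: 2,
      visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
      type: "uniform-buffer",
    },
  ],
};

// With a real device, the whole pack of resources is then swapped at once:
//   const layout = device.createBindGroupLayout(layoutDescriptor);
//   const group  = device.createBindGroup({ layout, entries: [/* ... */] });
//   passEncoder.setBindGroup(0, group);

console.log(layoutDescriptor.entries.length); // 3 bindings in one switchable group
```

There is no per-texture-unit fiddling: the shader references binding slots in
the group, and the application swaps whole groups, not individual resources.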

------
MaxBarraclough
Will this API be opt-in, or can we expect another security disaster like with
WebGL?

[https://news.ycombinator.com/item?id=16457791](https://news.ycombinator.com/item?id=16457791)
, [https://www.contextis.com/en/blog/webgl-a-new-dimension-for-browser-exploitation](https://www.contextis.com/en/blog/webgl-a-new-dimension-for-browser-exploitation)
, [https://www.contextis.com/en/blog/webgl-more-webgl-security-flaws](https://www.contextis.com/en/blog/webgl-more-webgl-security-flaws)

 _Edit_ I should have given a quote from one of the articles covering the
issue, so here's one:

> anyone running Firefox 4 with WebGL support is vulnerable to having
> malicious web pages capture screenshots of any window on their system

~~~
notyourday
Browsers should be run in a VM, via a fake GPU which is mapped to a window
(possibly the root window) of the main desktop. Giving code fetched from the
internet in real time full access to hardware, especially complex hardware, is
lunacy. I'm shocked that neither Google nor Mozilla has internalized this
lesson yet.

~~~
MaxBarraclough
The browser should simply disable WebGPU by default. It's already intended to
be sandboxed, but given what happened with WebGL, I'm not convinced we can
trust it. There's nothing to gain by faking a GPU.

 _Edit_ For clarity, I'm saying you may as well just disable WebGPU. A CPU-
based simulated GPU would indeed improve security, but its performance would
be so atrocious that there would be no point in having it compared to the
existing drawing APIs.

~~~
JoshTriplett
> There's nothing to gain by faking a GPU.

Moving applications that can do _anything_ into sandboxed pages in a browser
that can't do arbitrary things to the user's system is a win. The more capable
the browser is, the more things that once would have been downloaded
applications can become safely sandboxed pages.

~~~
MaxBarraclough
As I said, it's already intended to be sandboxed for security. As I've
clarified in an edit, my point was that CPU-based virtual GPUs perform so
poorly that they aren't worth implementing.

------
touchpadder
I hope it will bring real performance gains, not just an easier API that gets
abstracted away in most apps.

I have 2 projects using Three.js and pure WebGL:
[https://bad.city](https://bad.city) (multiplayer browser game engine +
editor) and [http://fonted.io](http://fonted.io) (shader toy + fonts)

~~~
kvark
Right. I think that if, by 1.0, we aren't able to show significant performance
gains in a wide sense (latency, frame stability, and actual frame time), it
will be a failure.

------
dchyrdvh
I know some people have coded up raytracers with GLSL fragment shaders, but
raytracers are massively parallel algorithms that need no shared state (the
entire scene is tiny in those examples). Would this new API enable shared
state (to some extent)? For example, someone may want to numerically solve the
differential equations of fluid flow. This usually involves dealing with big
lattices where parts of the lattice can be updated in parallel, but at the
edges some data sharing is needed. Same question for multiplying big matrices,
as many of those numerical methods reduce the task to multiplying huge
matrices.

~~~
kvark
We are exposing writable storage buffers and textures in all shader stages
(there is a caveat discussion [1] on the topic, though), which means you can
write data out directly. Sharing data between shader threads can be done
efficiently in compute shaders (via shared workgroup memory), which are
included in the core of WebGPU.

P.S. vange-rs [2] is written on wgpu-rs and uses ray-tracing in fragment
shaders for the terrain. I hope to run it in the browser one day!

[1] https://github.com/gpuweb/gpuweb/issues/639
[2] https://github.com/kvark/vange-rs
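
To make the shared-memory point concrete, here is a plain-TypeScript sketch
(CPU code, not shader code) of the pattern a compute shader would use for such
a lattice update: each workgroup stages its tile of the lattice, plus a
one-cell halo on each edge, into fast shared workgroup memory, and every
thread then reads its neighbours from that local copy. The names, sizes, and
the 1-D heat equation are illustrative, not from the WebGPU spec.

```typescript
const WORKGROUP_SIZE = 4; // threads per workgroup (illustrative)

// One explicit step of the 1-D heat equation, organised the way a compute
// shader would be: per-workgroup tiles with a one-cell halo in "shared" memory.
function heatStep(u: number[], alpha = 0.25): number[] {
  const next = u.slice();
  for (let base = 0; base < u.length; base += WORKGROUP_SIZE) {
    // "Shared workgroup memory": the tile plus one halo cell on each side.
    const shared: number[] = [];
    for (let j = -1; j <= WORKGROUP_SIZE; j++) {
      const i = Math.min(Math.max(base + j, 0), u.length - 1); // clamp edges
      shared.push(u[i]);
    }
    // (a real shader would need a workgroup barrier between load and use)
    for (let t = 0; t < WORKGROUP_SIZE && base + t < u.length; t++) {
      const s = t + 1; // this thread's index into the shared tile
      next[base + t] =
        shared[s] + alpha * (shared[s - 1] - 2 * shared[s] + shared[s + 1]);
    }
  }
  return next;
}

// A hot spike diffuses outward: [0, 0.25, 0.5, 0.25, 0]
console.log(heatStep([0, 0, 1, 0, 0]));
```

Only the halo cells cross tile boundaries, which is exactly the limited data
sharing the parent comment asks about; tiled matrix multiplication uses the
same staging trick with square tiles of both operands.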

~~~
gpugpugpugpu
> writable storage buffers and textures in all shader stages

Beware the behaviour of writes will not be consistent between vendors or
shader stages, and may be surprising.

~~~
kvark
Yes, we are fully aware of that. A line must be drawn somewhere between the
work that we can make portable and the work that is platform-dependent.
Enforcing any ordering on storage writes is completely infeasible.

------
suyash
This is not working for me on Safari, even after turning on the experimental
feature. When will this land in regular Safari? Excited to see WebGPU
possibilities.

~~~
kvark
Safari implements a large surface of the API ([1]), but they don't accept
SPIR-V shaders. When the WGSL design is complete and browsers support it, it
will be time for truly cross-browser applications.

[1] [https://webkit.org/blog/9528/webgpu-and-wsl-in-safari/](https://webkit.org/blog/9528/webgpu-and-wsl-in-safari/)

------
AtlasBarfed
Now the cryptominer JS scripts can REALLY get some performance while you are
on their site!

And of course it's another way to do unique profiling of users.

------
kart23
Would this enable bitcoin mining using gpu in a browser?

~~~
Widdershin
It's already possible, see [https://gpu.rocks/](https://gpu.rocks/).

Would likely just make it faster as you wouldn't have to read the results out
of a canvas, although I haven't checked what the pure data API for WebGPU
looks like.

After a bit more searching:
[https://bitcointalk.org/index.php?topic=27056.0](https://bitcointalk.org/index.php?topic=27056.0)

------
kimown
Can I debug shaders in WebGPU like JavaScript in devtools?

------
lisamillercool
WOW! Finally WebGPU. Web Games will become a thing now!

------
WhyNotHugo
I really dislike when these articles say "works on Linux" (reminds me of
people who say "works on PC" when it actually only runs on a single OS).

Some Linux setups run Xorg, others run Wayland (with the world moving towards
the latter). Yet these new features (especially things related to rendering!)
often only work on older Xorg setups, not on Wayland.

For any Linux users out there: these updates only work if you're still
running Xorg.

~~~
kvark
I don't know why it wouldn't work on Wayland. We are currently using
"software" presentation, which is a CPU roundtrip, as described in the
article. This means we don't depend much on the actual Linux compositor. As
long as WebRender can run on it, and you have Vulkan support, WebGPU will be
available. If you are having issues, please file a bug!

~~~
WhyNotHugo
Setting `gfx.webrender.all = true` makes sites like Google Maps render as just
plain black (i.e. no image). I'll try to debug in further detail and open a
bug when I have clear reproduction details.

~~~
kvark
That's worrying! The Mozilla gfx team would be interested in fixing that, as
we are trying to roll out WebRender to more platforms.

------
gmueckl
So this WebGPU stuff is pretty much a JavaScript binding to Vulkan. The design
is almost 1:1 identical. Good luck securing that! I find bugs in the standard
Vulkan validation layer pretty regularly, even after it has been in
development for a couple of years. Sure, some of the stuff is pretty hard to
check, like device memory addresses. But just this week I had the validation
layer blow up spectacularly because I simply dared to give it an
unrealistically high index for a vertex binding, and the layer attempted to
resize its internal state-tracking array for vertex bindings to accommodate
that.

Vulkan has another fun aspect to it: the application doesn't have full control
over the layer loading mechanisms in the mandatory driver wrapper. A user's
system can be configured to load arbitrary validation layers in various ways
and the application cannot prevent any of that. It can only ask for more
layers to be loaded. This is great for development, but it also means that
browsers can't generally assume that the Vulkan API does what they think it
does. There might be a little extra in there.

~~~
enos_feedler
Is this unique to Vulkan, or would the same hold true for DX12 or Metal? If
it's unique, perhaps WebGPU could steer Vulkan closer to the others in this
regard. I don't think WebGPU is meant to be a Vulkan binding exactly, but
rather a push for secure and safe low-level GPU computing across platforms. If
Vulkan isn't a good enough implementation layer, it will just need to evolve.

~~~
gmueckl
Vulkan itself is what it is now: an abstraction of (reasonably) modern GPUs at
a very low level. It is certainly very useful in that it allows applications
to use GPUs in ways that are very efficient and simply impossible with older
APIs, especially OpenGL. The flipside is that this flexibility and the
intentional thinness of the abstraction make it hard, and actually
undesirable, to check the validity of API usage in actual production use.
That's why the debug layer is optional and not part of the Vulkan loader.

I need to check out a few things, but if WebGPU prevents access to a few hairy
bits of Vulkan like explicit synchronization and memory management (the Vulkan
app has to provide that), it might be securable. There are still some benefits
over OpenGL left like the entire pipeline state being expressed in pipeline
objects.

I don't know about DX12 or Metal in practice, so I will not comment on that.

