Raw WebGPU (alain.xyz)
99 points by ingve on Jan 11, 2020 | hide | past | favorite | 41 comments

I'm extremely excited for WebGPU. Support for compute shaders and efficient batching of tasks such as setting shader uniforms are going to be game changers with respect to the power and performance of 3D graphics and GPGPU in web browsers. Right now, Chrome Canary on Windows crashes for me when I try WebGPU, unfortunately.

Without push constants or persistently mapped uniform buffers, it's a long stretch to call our way of setting uniforms "efficient"...

I've lost track of the many buffer management proposals up in the air. What's the current motion? Would love to see devh's range tracking proposal brought back from the dead, but that's probably unlikely for MVP at this stage.

Current motion is maybe we'll get a Queue::setBufferSubData and setTextureSubData, or not. And even if we get them, it's problematic to allow the copy to be direct if this part of the buffer isn't used (while some other part is used) by the GPU.

I'm excited about having the option to generate portable SPIR-V shaders from a wide range of languages (GLSL, HLSL, Julia, Rust, OCaml and others).

I understand that SPIR-V is an excellent, flexible intermediate representation... I really hope it ends up as the shader language of WebGPU.

(Apple is trying to sabotage SPIR-V and it'll be really disappointing if they succeed.)

If by "Apple is trying to sabotage" you mean "Apple has no plans to support", https://webkit.org/blog/8482/web-high-level-shading-language... goes into some detail about why SPIR-V bytecode isn't a good fit for them specifically, and reasonable (IMHO) arguments for why SPIR-V isn't a good fit for the web in general.

As SPIR-V is an intermediate language, I am curious how you imagine this would affect you. Mind elaborating?

Let me offer a rebuttal to that page.

> First, SPIR-V was not written with security as a first principle, and it’s unclear whether it can be modified to satisfy the security requirements of the web.

At the time of this post's authoring, Google already had a set of SPIR-V restrictions to make it suitable for the web [0]. The only response from Apple I heard was "it doesn't have tests", but, again, it had more than WSL at the time [1].

[0] https://github.com/gpuweb/spirv-execution-env/blob/master/ex... [1] https://github.com/KhronosGroup/SPIRV-Tools/blob/master/test...

> Forking the SPIR-V language means developers would have to recompile their shaders, possibly being forced to rewrite their source code anyway.

They might have to rewrite some small portions of their shader code to target WebGPU, or run some preprocessing tools to validate it. I don't think this was ever a problem. This is basically saying "if they have to change it anyway, why not rewrite it in an entirely new language", but most realistic shader code would be unaffected.

> Additionally, browsers would still be unable to trust incoming bytecode, and would be required to validate programs to make sure they are not doing anything insecure.

Browsers need to validate WSL/WHLSL too for things like out-of-bounds array accesses.
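To make the point concrete, here's the kind of index-clamping transform a validator can apply so untrusted shader code can't read outside a buffer. This is an illustrative sketch only, not actual browser code; the name safeRead is invented:

```javascript
// Illustration of "robust buffer access"-style validation: rather than
// rejecting a shader outright, the compiler can rewrite every indexed
// read so the index is clamped into bounds before use.
function safeRead(array, index) {
  // Force to int32, then clamp into [0, length - 1].
  const clamped = Math.min(Math.max(index | 0, 0), array.length - 1);
  return array[clamped];
}
```

Both a bytecode format like SPIR-V and a textual language like WSL/WHLSL need an equivalent pass; the input format doesn't remove the requirement.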

> And since Windows and macOS/iOS don’t support Vulkan, the incoming SPIR-V would still need to be translated/compiled into another language.

This is also true for WSL/WHLSL. But SPIR-V has a leg up here, as existing community cross-compile tools like SPIRV-Cross exist [2].

[2] https://github.com/KhronosGroup/SPIRV-Cross/

The argument that we have against WSL/WHLSL is that we do not trust the developers of these specifications to correctly understand the GPU programming environment, and involving IHVs in WSL/WHLSL's design is a waste of time, when they have already produced SPIR-V over many years.

That was a really helpful response, and I'm more informed for having read it. Thank you!

Regardless of the advantages of SPIR-V as an intermediate format, it's kind of unfortunate for step one of using WebGPU to be "choose a shader compiler and integrate it into your build system" (or alternately delivering a shader compiler on every page load).

That seems like a big usability regression over WebGL. It would sure be nice if there was some human readable format that was guaranteed to be available, even if all it did was compile to SPIR-V and submit it to a lower level API. Maybe that's too hard to specify adequately.

I'd personally be completely satisfied if they decided to compromise on letting you use either WSL/whatever-shader-language and SPIR-V. I just want to be able to use SPIR-V because I'm tired of high level shader language lock-in preventing me from:

1. using existing libraries written in other languages

2. using features that the graphics hardware has supported for years simply because the language/API has been abandoned

I agree that it could be convenient to have a readily available human writable shader language, but don't think it should replace SPIR-V. WSL could be that language for all I care. It already compiles to SPIR-V.

Yeah that seems like it would be ideal since it would satisfy both use cases.

I’m just using OpenGL right now and hoping to port to WebGPU (and the corresponding native libraries) when it comes out. If the performance is adequate maybe I’ll skip porting to Vulkan altogether.

So to dynamically load shaders I’d prefer not to have to ship a separate shader compiler on every platform. I also worry that a separate shader compiler implementation (used mostly offline) might not have as good portability/security/performance as something that’s part of a web standard and implemented directly by browser vendors, although that’s not guaranteed I guess.

Of course this doesn’t matter to game engines that always compile shaders up front and already support SPIR-V. But a lot of existing WebGL code isn’t like that. The rest of the API doesn’t look too complex, so if there was an easy way to input shaders (of course in addition to SPIR-V) I think it would go a long way to making it a total replacement for existing WebGL code.

I'll start off by sharing why I'm excited about SPIR-V:

There's much more incentive for people to write SPIR-V backends and compilers for existing languages than yet another hacky transpiler to an arbitrary, opinionated high level graphics shader language that may or may not be around long and only works in the web browser.

SPIR-V could very soon make it possible to more or less directly use libraries from existing language ecosystems. When I'm doing a project that needs GPU acceleration, I definitely want to tap into existing libraries! Especially existing math/science libraries. I don't want Apple to tell me what high level language constructs I should need to get stuff done. I would much prefer instead to start with a high level language that has good application specific libraries already written, documented, tested and refined.

A personal example: I dream of GPU accelerated sparse Clifford Algebra computations. Clifford Algebra is amazing, but writing an efficient Clifford Algebra library is hard work. I don't want to wait around for some slow, mediocre Clifford Algebra library written in WSL or HLSL. I also don't really want to write one myself. I want to use one that's already written in e.g. C++, Python, Rust, Haskell or Julia. As a bonus, I could use it in combination with other libraries in that language's ecosystem. SPIR-V could make this real.

As for Apple, I use the word "sabotage" because I've seen those PR blog posts Apple put out, as well as some pretty cringy W3C gpuweb meeting notes. To be clear, I'm not trying to point fingers at individual developers or Apple users. I just don't like how Apple as a corporation has engaged the W3C process so far. That blog post would make it sound like Apple is leading the working group and presenting sensible decisions, when really there isn't consensus that people want a high level shader language that Apple designed and conveniently already has tooling for. What they didn't acknowledge until later is that because of a private IP dispute they're mostly just reluctant to work with Khronos directly, and appear to have some high level Apple legal aversion to Vulkan/SPIR-V. Rather than being up front about that, they went and made a whole new language that they claim everyone will love, and effectively launched a minor PR campaign against SPIR-V.

From my point of view, many of the arguments in the blog post you shared are just not convincing. A few seem downright disingenuous.

Take this quote for example:

> "However, that turned out not to be true; Asm.js, which is JavaScript source, is still faster than WebAssembly for many use cases".

But in the link they share to back that up, we basically see that Safari's WebAssembly implementation is slow. So that means bytecode might not be more performant than asm.js/high level language in certain cases? No, Safari is just slow.

I think that the most honest argument against SPIR-V in that post is the one in favor of using a high level, more human friendly language so that people can see what a shader does. I definitely understand why people buy that one and think it's a fair point. I used to feel the same way about WebAssembly, until I realized that WebAssembly would allow people to leverage existing language ecosystems in writing stuff for the web. From where I stand, that's way more human friendly.

As far as security goes: it's also completely possible to write obfuscated or misleading code in a high level language like WSL. Similarly, with the right tooling and a little practice, it's very possible to understand what IR bytecode does.

> "The language needs to be well-specified."

SPIR-V is extremely well specified, and well specified on a hardware level. It's like the most well specified. I just don't understand that argument. It's tricksy. Later on they're talking about how there isn't a web execution environment yet for SPIR-V. Well yeah, if Apple would just get onboard, that's what the working group would sit down and figure out. They'd have to do that anyway with their high level shading language _and_ duplicate the effort of SPIR-V in specifically coming up with a list of low level features that work for vulkan/metal/directx.

There's plenty of evidence that on a strategic level, Apple just isn't a big fan of cross platform tooling (or cross-platform anything for that matter). Vulkan has the potential to be a real and useful cross platform GPU/TPU/accelerator API, and might pull loyal developers away from Apple's proprietary APIs. Apple is and has been all about vendor and platform lock-in for the longest time. I would know. At one point in time, I had a subscription to Mac Addict magazine. In quitting the Mac, I battled through all the annoying ways that Apple kept me locked into the platform. They do the same thing with developers.


Some relevant GPU Web meeting notes

Someone named Tom discovers WSL:


Apple people make their case for WSL:



The GPU Web group prepares for a meeting with the Khronos liaison:


The GPU Web group meets with the Khronos liaison:


An interesting detail here:

> MS (Apple): Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that.

> NT (Khronos): nobody is forced to come into Khronos’ IP framework.

Apple shares feedback from developer outreach they did:


It is sabotage. They are doing it due to their unwillingness to admit that they were wrong in not supporting Vulkan to begin with. If they support SPIR-V, there will be more pressure on them to support Vulkan proper.

Forgive my ignorance of GPU programming. I'm genuinely curious -- how does this differ from WebGL?

This is great and I would love to try this out. Unfortunately, support for Linux seems to be lacking? It’s definitely one of the web technologies I’m very excited about.

You can play with the native implementations on Linux for now, such as wgpu and Dawn, with an ability to build for the web in the future.

Hey everyone, I'm the original author here, thanks for reading! WebGPU's pretty exciting stuff, a modern graphics API for the Web.

Feel free to let me know if you have any suggestions/improvements! I'll keep an eye on the thread though.

Are there any examples of using webgpu for remote desktop applications?

this sounds cool, but why do we need it? why can't the web directly expose opengl?

OpenGL kind of sucks, tbh. I know because I work full time with it. There are so many things in OpenGL that are either inefficient, outdated, or plain stupid. You also can't directly expose it anyway because of poor support on Windows and macOS. The former requires users to install proper drivers that they may not have, and the latter has deprecated it and, to my knowledge, never properly supported the latest versions.

So whatever you do in the browser, it would need some kind of translation layer to another API anyway. On Windows that's usually DirectX via ANGLE. Even today, whatever you do with WebGL will actually be translated to DirectX calls. On macOS, WebGL is probably translated to Metal right now. Same with Vulkan.

On macOS WebGL in all browsers goes through OpenGL, same as on Linux and Android. On Windows, WebGL generally calls into ANGLE-on-D3D, though both Firefox and Chrome support using "native GL" experimentally as well.

I'll be the first to agree that OpenGL is kind of garbage, but I will say that there's a lot of value in having a stable API over decades. I like platforms that stay backward compatible long-term, like web tech.

So given that, I don't really see anything wrong per se with webGL, apart from it being ugly, and I think we should just put up with that or abstract it.

It would also be great if webGPU becomes a big thing and we can just use old webGL code through a compatibility layer, either in JS or built into browsers.

it makes sense to me that people would want a cross platform way to write GPU code (I thought that was the point of opengl). I don't see why the web is a special case of this.

probably you can tell I don't work with gpus I'm just curious about them

Good question! Maybe with my side project, I can shed some light on the state of OpenGL exposed through the web:

I maintain a Matrix code rain project: https://github.com/Rezmason/matrix . The effect is basically a compute shader with post-processing steps applied on top of it. It's built on top of WebGL via ThreeJS, and I can attest that its cross-browser, cross-device support is currently terrible.

That's because ThreeJS's (otherwise nice!) compute shader implementation is written on top of WebGL 1's fragment shaders, rendering to floating point buffers enabled through the "oes_texture_float" extension.

This extension's implementation has been plagued by spec mistakes, made possible (I believe) by ambiguities and tacit assumptions about differences between the speccing process of OGLES 2 and WebGL 1 that proved incorrect. Fixing it has frayed implementors' constitutions and made the API more complicated; WebGL 2 incorporates this extension's behavior into its core, but isn't widely adopted, because (in my opinion) asking a browser maker to double down on their investment in WebGL again after the aforementioned debacle is a tall order. Or maybe there are security implications. Or maybe the only Mobile Safari programmers available decided OpenGL is a weird-shaped API to expose through JavaScript, and have lost interest. One can only speculate.

Anyway! In the meantime, to do any sort of GPU compute across all major browsers (including mobile), you're way better off enabling the "oes_texture_half_float" extension instead. Half floats, or Float16s, are common in the mobile graphics sphere, but not on the desktop, where most web development takes place; it's entirely understandable why a web developer might try to use "oes_texture_float", get it working on their desktop machines, fail to see it working in Mobile Safari, assume that the extension isn't supported, and call it a day. And even if they DO find the Stack Overflow answers that'd prompt them to switch to "oes_texture_half_float", JavaScript has no native Float16Array data view, so they have to download a slow JavaScript implementation or roll their own float-to-half-float converter.
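That hand-rolled converter is smaller than it sounds. A sketch (the name floatToHalf is mine; it truncates rather than rounds to nearest, which is usually acceptable for texture data):

```javascript
// Convert a JS number to its IEEE 754 binary16 bit pattern (as a uint16),
// by reinterpreting the float32 bits and repacking sign/exponent/mantissa.
function floatToHalf(value) {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = value;
  const x = u32[0];
  const sign = (x >>> 16) & 0x8000;            // sign bit moves to bit 15
  let exp = (x >>> 23) & 0xff;                 // 8-bit float32 exponent
  let mant = x & 0x7fffff;                     // 23-bit float32 mantissa

  if (exp === 0xff) {                          // Inf or NaN
    return sign | 0x7c00 | (mant ? 0x200 : 0);
  }
  exp = exp - 127 + 15;                        // rebias 127 -> 15
  if (exp >= 0x1f) return sign | 0x7c00;       // overflow -> signed Inf
  if (exp <= 0) {                              // subnormal or zero
    if (exp < -10) return sign;                // underflow -> signed zero
    mant = (mant | 0x800000) >> (1 - exp);     // restore implicit bit, shift
    return sign | (mant >> 13);
  }
  return sign | (exp << 10) | (mant >> 13);    // normal case, truncated
}
```

Feed the resulting uint16s into a Uint16Array and upload it with the HALF_FLOAT_OES texture type.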

This is the state of things with WebGL.

WebGL 2 is pretty widely implemented. I think Safari is the only remaining laggard now that MS switched to Chrome based Edge?

It's true that WebGL support has been slow to mature in browsers, as will WebGPU be I fear...

WebGL 2 practically does not exist on Android. Implementation ranges from "horribly broken" to "barely usable". I've catalogued this before:


Interesting. Looking at the Chrome tests for WebGL 2 conformance (https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te...), it would look like Android is no worse than other platforms wrt known failing test cases (with links to relevant bugs). Firefox seems similar: https://hg.mozilla.org/mozilla-central/file/tip/dom/canvas/t...

(The conformance suite itself is maintained by Khronos, so there should in theory be less bias than in vendors' own tests.)

Chrome on Android passes things through to the system OpenGL driver. This is almost certainly a case of terrible platform drivers, but there's nothing I can really do about that.

It'll still have to go through ANGLE or another GL sandboxing layer to be safe, and there are other OpenGL-backed Chrome platforms (Mac, Linux) that also have driver bugs. If vendor GL driver bugs hurt more on Android, maybe it's just a lack of the testing that would otherwise lead browsers or ANGLE to add workarounds. (There are amazing heaps of driver-bug workarounds in ANGLE that rewrite malfunctioning GL calls.)

edit: I guess in theory they might just rely on the Android sandbox without going through the ANGLE GLES implementation, since that should nominally be already hardened for untrusted code. But that sounds too risky from a security POV given how a lot of Android devices in practice don't get timely security updates etc.

Assuming the "directly" wasn't the most important part of your comment, that is WebGL. WebGPU is designed to be more like Vulkan/Metal (an initial prototype was done by Apple and has since been renamed WebMetal).

WebGPU was prototyped by all parties, not just Apple. See https://en.m.wikipedia.org/wiki/WebGPU

No. It doesn't matter for the context, but if you are going to try to correct this, from the very link you pasted:

> On February 7, 2017, Apple's WebKit team proposed the creation of the W3C community group to design the API. At the same time they announced a technical proof of concept and proposal under the name "WebGPU", based on concepts in Apple's Metal.[5][6][7] The WebGPU name was later adopted by the community group as a working name for the future standard rather than just Apple's initial proposal.[2] The initial proposal has been renamed to "WebMetal" to avoid further confusion.[8]

> The W3C "GPU for the Web" Community Group was launched on February 16, 2017. At this time, all of Apple, Google, and Mozilla had experiments in the area, but only Apple's proposal was officially submitted to the "gpuweb-proposals" repository.[9][10][11] Shortly after, on March 21, 2017, Mozilla submitted a proposal for WebGL Next within Khronos repository, based on the Vulkan design.[12][13]

You must be reading this wrong. The quoted text says:

  1. WebGPU name was taken from Apple
  2. Apple initiated the working group in W3C (by refusing to work with Khronos).
What this does NOT say is:

  - Apple's prototype was the basis for the API we develop

Untrusted JS can't be given direct access to native graphics APIs because they are not hardened against malicious code. (Or even benign code, it sometimes feels like...) Hence WebGL/WebGPU.

this was my first thought, but I don't know what kind of malicious code a GPU can run

GPUs effectively have general-purpose compute now and they can DMA bytes into and out of system memory. At a basic level, sneaky on-GPU code run by an attacker could grab pixels off your screen (text from your emails, etc), but it's also possible for the GPU-based attack code to harvest data from system memory or even mess with data in order to attack a process running on the CPU.

Graphics APIs are defined and implemented in memory-unsafe languages (C/C++) on the CPU side, and will read from and write to memory based on data and pointers you pass to them, with no provisions for untrusted data. -> instant pwnage

(Not to say GPU code execution is safe, but you wouldn't need it to ruin the user's day.)

First of all, your code passes through the driver, so you can exploit vulnerabilities there.

Even after that, the GPU can do DMA requests, so you can abuse that to read/write arbitrary memory locations.

ok makes sense. didn't know it could access arbitrary memory

> Raw

> TypeScript


Is "raw" reserved only for chemical elements now? Or maybe atomic nuclei only?

