An In-Depth Look at WebGPU (unzip.dev)
85 points by asebold on Feb 7, 2023 | hide | past | favorite | 57 comments


For an "in-depth look" that was surprisingly shallow ;)

I was hoping for a bit of discussion about the trade-offs that WebGPU had to accept to create an as-thin-as-possible wrapper around Vulkan, D3D12 and Metal, while at the same time catering to web developers and guaranteeing the safety requirements of the web.


This was the first time I had ever heard of WebGPU (I didn't write the article), so I thought it was pretty in-depth. Perhaps I should have called it an "introduction" instead.


Thanks for including Unzip nonetheless! Yeah, it's more of an intro :)


A better place to start than this article for devs who know WebGL/OpenGLES would be https://toji.github.io/webgpu-gltf-case-study/ by one of the Google Chrome devs. I also really liked https://surma.dev/things/webgpu/ which was my first in depth intro into rendering via WebGPU.

I would also suggest https://web.dev/gpu-compute/ which is a good intro to WebGPU compute, but it was out of date and had broken APIs the last time I tried it. But it has useful theory if you've never used GPU compute before.


Thanks for the recommendations, adding to the extra section!


I'm curious if there are technical benefits to interfacing with WebGPU via JS or via WASM (Rust, specifically)?

I am currently teaching myself Rust + Bevy with the intent of shipping some proof-of-concept game following the ECS paradigm rendered using WebGPU. This is going fine and I am excited to learn the tech, but my bread-and-butter is JavaScript and I only desire targeting the web. At time of writing, Bevy's (WebGL) examples don't run on Android devices which is a bit concerning.

There is some compelling JS tooling out there for rendering - Babylon, AFrame, UseGPU, etc, but they all have their issues.

Babylon: not designed with ECS in mind, same as ThreeJS, declarative programming can be achieved through plugins, but it's fragile and significantly less performant than if built natively into the framework.

AFrame: powered by Three, but ECS-first and I assume it does a good job at it. Not practical for 2D rendering and really intends to be VR-first with a nod to non-VR 3d.

UseGPU: truly what I would like to be using as it's declarative by design, but it's so new that 2D Sprite is still on the TODO list. I'm not skillful enough in low-level graphics programming (yet!) to assist in the development.

I really want to be ready for this technological shift. I fully intend to build something as complex as RimWorld that runs in your browser. I'm taking baby steps, as quickly as I can, to get there, but am not sure what tools to be adopting to best prepare.

At the moment I'm betting on Bevy because it has a lot of wind behind its sails, its all-in on ECS, and I suspect avoiding GC for compute-heavy gaming will be beneficial. I don't know what interfacing to WebGPU via JS gets me aside from a potentially more rapid prototyping environment. I'd love to hear others' takes on this to ensure I don't burn months looking in the wrong direction.


> I'm curious if there are technical benefits to interfacing with WebGPU via JS or via WASM (Rust, specifically)?

In general WASM vs (well-written) JS shouldn't matter much for most scenarios, but there's one important drawback when using WebGPU via WASM: any buffer mapping operation needs an extra copy to get the data in and out of the WASM heap (there's currently no way to directly access a separate JS ArrayBuffer object from WASM). It's not a showstopper of course, but something to keep in mind when shuffling a lot of data between the CPU side and WebGPU.
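The extra copy can be sketched with plain JS objects (no WebGPU calls involved; a plain ArrayBuffer stands in for a real `getMappedRange()` result):

```javascript
// Data produced on the WASM side lives inside WebAssembly.Memory.
const wasmMemory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const wasmHeap = new Float32Array(wasmMemory.buffer, 0, 4);
wasmHeap.set([1, 2, 3, 4]);

// GPUBuffer.getMappedRange() hands back a *different* ArrayBuffer, which
// WASM can't view directly -- so the bytes have to be copied across.
// Here a plain ArrayBuffer stands in for the mapped range.
const mappedRange = new ArrayBuffer(16);
new Float32Array(mappedRange).set(wasmHeap); // the unavoidable extra copy

console.log(Array.from(new Float32Array(mappedRange))); // [1, 2, 3, 4]
```

The copy is small per call, but it scales with the amount of data moved per frame, which is exactly the "shuffling a lot of data" case above.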

PS: IMHO the biggest downside of using WebGPU natively is that it comes with a built-in shader compiler and/or cross-compiler to translate either WGSL or SPIR-V to the shader dialects used by the various backend APIs. This adds a couple of complex dependencies and quite a bit of binary size (much more than "just" the WebGPU implementation). It would be nice if the native WebGPU implementations allowed moving all that shader compile/transpile work offline, so the WebGPU API could be fed backend-specific shader blobs.


I believe both Dawn and wgpu are planning on supporting SPIR-V ingestion for native use cases.


Yep, but at least for the D3D12 and Metal backends the input SPIR-V needs to be translated to HLSL (or maybe directly to DXIL?) and MSL. Not sure if the Vulkan backends accept the input SPIR-V directly or whether some translation happens there too.


wgpu supports both a SPIR-V passthrough mode [0] and a regular SPIR-V mode where it will get some transformations added to it (e.g. bounds checks are added on backends without VK_KHR_robustness). I don't believe Dawn has a similar passthrough mode, only a SPIR-V -> transform -> SPIR-V path.

[0] https://docs.rs/wgpu/latest/wgpu/struct.Features.html#associ...


As a general rule of thumb, WASM (and therefore Rust) will perform:

- better for compute heavy workloads which don’t need to interop with JS and JS-only APIs

- roughly the same or worse for mixed workloads with highly optimized interop

- worse for JS heavy workloads, no matter how much you optimize

Partly this is because interop has a steep cost that negates the perf benefits in such workloads. Mostly it’s because JS runtimes have very good performance characteristics, beating that is a big feat.

There are certainly performance focused mixed native/JS projects with great potential. Almost all of them are either very light on interop or very heavy on deep understanding of both the native language’s performance advantages and that of the JS runtime (which is often itself very runtime specific).

If you have a lucky workload that doesn’t need to cross interop boundaries much, the world is your oyster. If you have any other workload, you’re probably better off working with whatever you feel most comfortable to deliver and optimize for your use case.
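The interop boundary in question can be seen with a minimal hand-assembled WASM module (standing in for a real Rust-compiled one; the bytes just export an i32 `add`). Every call to `add` below crosses the JS/WASM boundary:

```javascript
// A minimal WASM module exporting add(i32, i32) -> i32, hand-assembled
// so the sketch is self-contained (a real module would come from rustc).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 0 locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;

// Each call crosses the JS<->WASM interop boundary: cheap per call in
// modern engines, but still an optimization barrier for the JIT.
console.log(add(2, 3)); // 5
```

A compute-heavy workload would make one such call per frame and do all the work inside WASM; an interop-heavy one makes thousands, which is where the rule of thumb above bites.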


I thought I had read something in the past ~6 months implying that performance gap had become less of an issue, but I'd have to dig for a bit to find it. I understand that I should always expect crossing the WASM boundary to be slow.

I'm basically trying to build something that's like a colony sim meets idleRPG. You define a strategy once per day, submit it, and the colony sim chews on the instructions and plays forward. The user can just observe and devise a plan for adjusting their strategy the next day. I think this makes it an especially good candidate for WASM as I can simulate hundreds of semi-intelligent entities and don't ask for responsive user input.

I think there's also benefits with garbage collecting, right? I can avoid GC entirely with WASM but would inherently be subject to it with JS which then pushes me into a more imperative/mutable style of coding in an effort to preserve object lifetimes. I'd really rather stick to a declarative style if I can get away with the performance.
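The imperative style that avoids GC in JS usually means pre-allocating and reusing objects. A hypothetical per-frame object pool (names are mine, not from any library) looks like:

```javascript
// Hypothetical object pool: reuse vectors instead of allocating fresh
// ones each frame, so the GC has nothing to collect mid-simulation.
class Vec2Pool {
  constructor() { this.free = []; }
  acquire(x, y) {
    const v = this.free.pop() ?? { x: 0, y: 0 }; // allocate only when the pool is empty
    v.x = x; v.y = y;
    return v;
  }
  release(v) { this.free.push(v); } // hand the object back for reuse
}

const pool = new Vec2Pool();
const a = pool.acquire(1, 2);
pool.release(a);
const b = pool.acquire(3, 4); // same object recycled, no new allocation
console.log(a === b, b.x); // true 3
```

This is exactly the mutable, lifetime-managing style mentioned above; Rust/WASM gets the same effect without the ceremony leaking into application code.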


Calling between JS and WASM isn't really a performance bottleneck anymore in itself [1] (but it is an 'optimization boundary' for the compiler, the same way that a DLL call would be in a native application).

[1] https://hacks.mozilla.org/2018/10/calls-between-javascript-a...


One drawback I ran into with wgpu.rs was that targeting WASM builds would result in a much weaker set of GPU features on the browser than a native Rust build. I don't know if this affects Bevy or your specific use cases, but it annoyed me enough that I decided to only focus on native wgpu development instead of supporting both native & the browser.


Thanks for the feedback. Indeed, I'm not of the impression that I will be able to perfectly compete with native, but I'm hoping WebGPU is still a marked improvement over WebGL2. I am heavily prioritizing accessibility as I'm trying to build a game that users play for only a couple of minutes each day. So a really, really low barrier to entry is important: I do not see my player base being willing to open Steam for a couple of minutes of gameplay, but I do think they'll open a tab. I'm OK losing polish in exchange.


In my experience, avoiding GC is necessary to hit a consistent 60 FPS. GC pauses will mean randomly dropped frames.


There is one clear benefit to me. Targeting native platforms allows the use of RenderDoc or equivalent. This is of tremendous help when debugging or profiling.

I've been working on a realtime softbody simulation in WebGPU for a while. After trying it in JS, C++ and Rust, wgpu is by far the best experience I've had with WebGPU.


If you're okay with voxels, check out veloren.net

ECS based game engine in Rust, wgpu rendering, and I've been really impressed with the code quality. I've been trying to find something to inspire me to really go all in with Rust, and I think I found it with Veloren


Thanks for the suggestion! I will check it out. Voxels are definitely interesting :)

At first glance, this does look like it will be a great resource. Even just having high-quality examples of how to configure Cargo and other boilerplate will save me a lot of hours.

Doesn't look like they target WASM at all, though. I wonder what the main concerns were/are?


I know WASM is used for the scripting system. WASM isn't really on my radar right now, so I can't offer advice, but I do know there are at least good Rust resources regarding WASM. Also, the Veloren team is really responsive on Discord, so that might be worth looking into.

The rendering systems are fully separated from game logic systems, so it's at least got that going for it


For something like RimWorld WebGL 1 should be more than enough, no? I don't think it's doing anything complex graphically. As a bonus, you'll be able to run on a lot more devices.


Honestly, I am not sure, lol. I guess my thoughts were:

- Dwarf Fortress has existed since forever and neither DF nor RimWorld nor anything remotely similar seems to exist as browser-first software. Clearly it's not due to graphics limitations, but perhaps market economics?

- WebGPU isn't just about graphical performance. It's also about GPGPU. I was thinking some of the in-game algorithms would become more viable to run in this environment.

I'm learning a few pieces of tech at the same time, though. It might be the case that I find that avoiding GC with Rust/WASM and gaining memory locality perf benefits with ECS address the main limiting factors. I think I'm just a bit caught up in the hype of WebGPU coming out, too, and so am assuming it must be relevant to my solution. It might not be!


Another player in the WebGPU field you may want to add is https://usegpu.live/


Added! thank you!


This does not seem particularly in-depth.


Exactly. Seems OP found a fun new tool and decided to go all in.

The euphemisms are also questionable.

> Everything seems to be moving to the web; I can see how many desktop-related headaches (like installing software) will become irrelevant over time

I've heard this since dial up days.

WebGPU may be a natural progression, sure. But it's not the game changer. (Bad pun)


The most "game changing" feature of WebGPU might be that it finally brings compute shaders to the web. The other improvements are more or less incremental, but (compared to WebGL2) still badly needed and long overdue.


I'm the author, not the poster. Unzip is more of an intro summary of a concept so the title is indeed a bit misleading ;)


Hi HN, I'm the author but not the poster; I didn't plan on posting here yet.

PS: Unzip is a summary/intro, not an "in-depth article". I try to summarize concepts. But thanks to whoever posted, appreciated either way.

AMA


It's quite late into the text that this mentions WebGPU isn't shipping yet, and it doesn't mention at all that the spec isn't ready. And in fact, Safari only recently caught up to WebGL 2, many years after the spec, and new extensions are getting specced and implemented all the time, etc. Shiny-chasing danger here vs actually shipping something.


What I hate about the 3D Web APIs is how out of date they happen to be.

WebGL 2.0 is locked to OpenGL ES 3.0. When work was rebooted to finally catch up with the OpenGL ES 3.1 set of APIs, with an already existing prototype from Intel [0] for compute, Google dropped support for it because anyone who cares should migrate to WebGPU instead [1]. Two years later, still no WebGPU.

So WebGL 2.0 is stuck at what we could consider iPhone 5S-level 3D capabilities, with no known game capable of matching Infinity Blade 3 [2].

In a similar vein, when WebGPU finally comes out this year (they were targeting 2022), we will have an API designed as an MVP 1.0, with the capabilities Metal/DirectX 12/Vulkan had at version 1.0 in 2015, while asking everyone to rewrite their GLSL shaders in WGSL.

No wonder that with fast Internet connections, game studios are more interested in adopting pixel-streaming technologies using the latest 3D APIs than in investing in pure Web technologies.

Usually the community talks are all about 3D models for e-commerce shops, tile rendering for online maps, and the return of Flash-like tooling.

Something like Nanite or mesh shaders on WebGPU? It will never happen.

[0] - https://registry.khronos.org/webgl/specs/latest/2.0-compute/

[1] - https://bugs.chromium.org/p/chromium/issues/detail?id=113199...

[2] - https://www.polygon.com/2013/9/10/4715534/infinity-blade-3-c...


If mobile support is important, the Vulkan 1.0 feature set is basically all you can hope for unfortunately, with or without WebGPU inbetween.

For WebGPU it was the right decision to select the base feature set for the biggest possible reach. Desktop-GPU extensions can still be added on top when the need arises.


Android Baseline profile is Vulkan 1.1.

Just like it happened with WebGL, I bet WebGPU will stay at 1.0, and maybe around 2032 we might eventually get WebGPU 2.0 with the native capabilities available in 2023.


How many Android versions does this go back though? (I can only find the baseline profiles for 2021 and 2022).


The ones that matter, since Vulkan became a required API on Android: Android 10 onwards.

https://source.android.com/docs/core/graphics/implement-vulk...


You are painting WebGL2's progress as much rosier than it actually is ;)

For instance the WebGL2 compute extension was put on hold exactly because WebGPU solves this better and is "just around the corner".


Didn't mean to imply WebGL2 is getting feature parity. But WebGPU for cross-browser apps is easy to beat as long as it's unfinished; it needs to be bugfixed to a usable state and laggard implementations need to catch up, which can take years based on how it went with WebGL.


That's fair, but if you're looking at say, coding up many many lines of Vulkan boilerplate for a new project, it does kinda make sense to look into WebGPU


Looking into middleware is a much more sane option.


WebGPU's native libraries are essentially middleware to abstract over the common subset of Vulkan, D3D12 and Metal.


That is a wrapper library; middleware means richer tooling and a better development experience.

At the very minimum, something like Ogre3D or Open Inventor.


...which is then often too opinionated and bloated for specific use cases. E.g. nobody in their right mind would use Unity for a simple 3D viewer application.


You would be surprised.

No one outside AAA studios cares about Vulkan, and Khronos had to come up with ANARI to try to move visualization companies out of OpenGL/DirectX into Vulkan.


Nobody in their right mind would use 'raw' Vulkan either for a simple 3D viewer. But that's exactly why thin wrapper libraries are important.


A simple 3d viewer should just use WebGL. Low level APIs are targeted at large game engine developers.


Author here. I wrote at the beginning of the text that it isn't fully supported yet :)

"WebGPU was first published in mid-2021 (super new!), so it isn’t fully supported on browsers (also shows how you can enable it) yet".

My main goal is to bring attention to this new technology; I don't intend, nor do I have any incentive, to recommend people use it in production. Sorry if that wasn't completely clear.


What do you mean by "Published in mid-2021"? It's still in draft and is undergoing changes. From the Github repo it seems the first (presumably public) commits were in 2019: https://github.com/gpuweb/gpuweb/commit/c325725d5e2b50dcd9e5...

Edit: I imagine it comes from here: https://www.w3.org/standards/history/webgpu - the first time a working draft was published on the w3c site, instead of only at the github repo, was then. But is it a meaningful date to mention?


Don't know the current status, but the stated target is to ship in April with Chrome M113.

https://groups.google.com/a/chromium.org/g/blink-dev/c/VomzP...


Is anyone here using WebGPU Native in production? In what state is it right now?


Does anyone know whether wgpu (the rust library) is ready for production, and if not when it might be? The 0.x version number suggests it isn't. I'm aware that WebGPU, and browser support for it, is still in development and subject to change (and that wgpu is the library underlying that), but I'm more interested in using wgpu in native rust programs than in browsers/wasm, and I'm not sure if I need to wait until the spec is finalised and browsers officially support it to do so.


> The 0.x version number suggests it isn't

In Rust land a 0.x version number only really suggests that the library has an unstable API (expect breaking changes on version bumps). Plenty of battle-tested libraries that are used in production at huge scale (like the `hyper` HTTP crate, for example) still have 0.x versions.

I'm not an expert in graphics programming, but my understanding is that wgpu is solid and ready for production if it covers your use case, but there are still things it can't do yet. As long as you're willing to deal with breaking changes, I'd say go for it.


>"WebGPU is an abstraction for modern graphics APIs such as Direct3D 12, Metal, and Vulkan"

Which means that someone who wanted to write a graphics API and/or graphics-API abstraction layer would do well to study WebGPU (in addition to other graphics APIs and abstraction layers).


We have had GPU access in the browser for more than a decade now. Are there actually any broadly used applications for it? Everything I come across appears to be some kind of tech demo that lags behind what native PC/console games could do 20 years ago...


Figma. Definitely broadly used.

There's also a whole universe of Web-based games (see Facebook). In the VR niche I maintain https://moonrider.xyz/ which has had 100k MAUs sustained for 2+ years. Not sure if it qualifies as large scale, but it's definitely non-negligible.


Ever heard of Google Maps and Google Earth?


Fair enough. Those fulfill any definition of "mass usage". But was that it?


I guess Figma uses WebGL alongside WASM, but I'm not sure. There are some pretty successful gaming niches, like Facebook Instant Games, where pretty much all games use WebGL via some sort of middleware, even though most are simple 2D puzzles. In general, that those web 3D engines (three.js, babylon.js, PlayCanvas, pixi, etc.) have existed for so long must mean there's some sort of demand for them.



