Aras is right, but the elephant in the room is still shitty mobile GPUs.
Most of those new and fancy techniques don't work on mobile GPUs, and probably won't for the foreseeable future. Vulkan should actually have been two APIs: one for desktop GPUs and one for mobile GPUs. Those new extensions are doing exactly that: splitting Vulkan into two more or less separate APIs, one that sucks (but works on mobile GPUs) and one that's pretty decent (but only works on desktop GPUs).
WebGPU cannot afford such a split. It must work equally well on desktop and mobile from the same code base (with mobile being actually much more important than desktop).
I think it's unrealistic expectation management to say that desktop and mobile must or should be equal. There are plenty of web application use cases one would like to run on a desktop that are irrelevant for mobile, for many other reasons as well. E.g. think of editing spreadsheets.
WebGPU says the baseline should be what is supported on both desktop+mobile, and that extensions (in the future) should enable the desktop-only use cases.
Others seemingly argue that mobile should be ignored entirely, that WebGPU shouldn't work there, or that it should only work on bleeding-edge mobile hardware.
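To make that baseline-plus-extensions model concrete, here's a minimal sketch in TypeScript (assuming the standard WebGPU JS API and the @webgpu/types typings; the fallback logic itself is hypothetical): you ask the adapter whether an optional feature exists and only request it if it does, falling back to the baseline path otherwise.

    // Minimal sketch: feature detection against the WebGPU baseline.
    // "timestamp-query" is one of the spec's optional features; whether
    // it's available varies by GPU and browser.
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");

    const wanted: GPUFeatureName[] = ["timestamp-query"];
    const requiredFeatures = wanted.filter((f) => adapter.features.has(f));

    // Only request features the adapter actually reports; requesting an
    // unsupported feature would reject the requestDevice() promise.
    const device = await adapter.requestDevice({ requiredFeatures });

    if (device.features.has("timestamp-query")) {
      // desktop-ish path: GPU timing queries
    } else {
      // baseline path that works everywhere
    }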
This is an odd analogy. We should reduce the API space for mobile so devs don't make mobile spreadsheets? I mean...what is this arguing exactly? UX is different, sure, but how does that translate into something this low level?
Can you explain what the split is supposed to be? I'm fairly confused because mobile GPUs (tile based) are creeping into the desktop space. The Apple Silicon macs are closer to tile based mobile GPUs than traditional cards.
What APIs are supposed to be separate, why, and what side of the fence is the M1 supposed to land on?
In places where Vulkan feels unnecessarily restrictive, the reason is mostly some specific mobile GPU vendor which has some random restrictions baked into their hardware architecture.
AFAIK it's mostly not about tiled renderers but about resource binding and shader compilation (e.g. shader compilation may produce different outputs based on some render states, and the details differ between GPU vendors, or bound resources may have all sorts of restrictions, like alignment, max size or how shader code can access them).
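As a concrete example of the alignment point (a sketch only, using the standard WebGPU limits API; the byte sizes are made up): portable code can't hardcode a uniform-buffer offset alignment, because the minimum differs across hardware. It has to query the device limit and pad to it.

    // Sketch: respecting a hardware-dependent alignment restriction.
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter!.requestDevice();

    // Often 256 on mobile hardware, sometimes smaller on desktop -
    // query it, never assume it.
    const align = device.limits.minUniformBufferOffsetAlignment;

    const perObjectBytes = 64;             // made-up payload size
    const stride = Math.ceil(perObjectBytes / align) * align;

    const uniforms = device.createBuffer({
      size: stride * 100,                  // room for 100 objects
      usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
    });
    // Object i then binds at dynamic offset i * stride, which is
    // guaranteed to satisfy the device's alignment requirement.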
Apple's mobile GPUs are pretty much the cream of the crop and mostly don't suffer from those restrictions (and any remaining restrictions are part of the Metal programming model anyway). Even on Metal there are quite a few differences between iOS and macOS, which even carried over to ARM Macs - although I don't know if those are just backward-compatibility requirements to make code written for Intel Macs also work on ARM Macs.
It's mostly on Android where all the problems lurk though.
Ah ok, so it's not so much the mobile architecture as the realities of embedded GPUs and unchanging drivers, compared to the more uniform Nvidia/AMD desktop drivers.
This is a real problem but I'm not sure splitting the API is a solution. If a cheap mobile GPU has broken functionality or misreports capabilities, I'm not sure the API can really protect you.
Uh, no; it's power and heat management - so battery life and fire risk - that limits SFF. It would be good for mobile devices to have external GPU/battery attachments via a universal connector; this would boost the efficacy of those devices. You may not always need the boost provided by the umbilical, but when you do, just put it outside the machine and connect it when needed.
> with mobile being actually much more important than desktop
How so?
I always thought the more common use case for GPU acceleration on the mobile web was 2D games (Candy Crush etc.). Even on low-end devices these are already plenty fast with something like Pixi, no?
We live in a bubble where we don't notice it, but desktop as a platform is... not dying exactly, but maybe returning to 90s levels of popularity. Common enough, but something tech-minded people use, and not necessarily for everybody. Mobile is rapidly becoming the ubiquitous computing paradigm we all thought desktop computers would be. In that world, WebGPU is much more important on mobile than on desktop.
I genuinely think personal computing has been severely hamstrung over the past decade+ due to the race to be all-encompassing. Not everything has to be for everyone. It's ok to focus on tools that only appeal to other people in tech. It really is.
A Chromebook, internally, is more a mobile device than a "real" computer. Plenty of high school kids today will own their first real computer when they go to college. Until then, most of their computing is done on their iPhone or iPad, and perhaps their school-issued Chromebook.
We see this issue with kids of that generation entering the workforce with a lack of basic computer skills, or with CS students in college who need the concept of a hierarchical file/directory structure explained to them.
> A chromebook, internally, is more a mobile device than a "real" computer
How is that? And if so, how am I typing this on an Intel i5 Chromebook with 16 GB of RAM that is hosting a Linux VM? If upgradeability is the issue, Framework's Chromebook is completely upgradeable.
In general, WebGL has more CPU overhead under the hood than WebGPU, so the same rendering workload may be more energy efficient when implemented with WebGPU, even if the GPU is essentially doing the same work.
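One concrete mechanism behind that (a sketch, not a benchmark; `pipeline`, `bindGroup` and `vertexBuffer` are assumed to exist already): WebGPU's render bundles let you encode and validate a set of draw calls once, then replay them every frame with almost no per-draw CPU work - something WebGL has no equivalent for.

    // Sketch: pre-recording draws in a render bundle to cut CPU overhead.
    declare const device: GPUDevice;
    declare const pipeline: GPURenderPipeline;
    declare const bindGroup: GPUBindGroup;
    declare const vertexBuffer: GPUBuffer;

    const rbEncoder = device.createRenderBundleEncoder({
      colorFormats: ["bgra8unorm"],        // must match the render pass
    });
    rbEncoder.setPipeline(pipeline);
    rbEncoder.setBindGroup(0, bindGroup);
    rbEncoder.setVertexBuffer(0, vertexBuffer);
    rbEncoder.draw(3);
    const bundle = rbEncoder.finish();     // validated once, up front

    // Per frame: replay instead of re-encoding and re-validating.
    function drawFrame(pass: GPURenderPassEncoder) {
      pass.executeBundles([bundle]);
    }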