
Trilinear filtering requires an image pyramid. Without downsampling to create that pyramid, you can't even do trilinear sampling, so your argument strikes me as odd and circular. It's like telling the developers of APIs such as ID3D11DeviceContext.GenerateMips to simply use ID3D11DeviceContext.GenerateMips instead of developing ID3D11DeviceContext.GenerateMips. Also, I never took this article to be about 3D rendering and utilizing mip maps for trilinear interpolation; it reads more like it's about 2D image scaling.

Have you never downscaled and upscaled images in a non-3D-rendering context?


> Have you never downscaled and upscaled images in a non-3D-rendering context?

Indeed, and I found that leveraging hardware texture samplers is the best approach even for command-line tools which don't render anything.

A simple CPU implementation in C++ is just too slow for large images.

Apart from a few easy cases like the 2x2 downsampling discussed in the article, SIMD-optimized CPU implementations are very complicated for non-integer or non-uniform scaling factors, as they often require dynamic dispatch to run on older computers without AVX2. And despite the SIMD, they are still a couple of orders of magnitude slower than GPU hardware, while delivering only a barely observable quality win.
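
To make the hardware-sampler approach concrete, here is roughly what it looks like in CUDA (an illustrative sketch only, not code from the article: the function names are mine, error checking is omitted, and for downscale factors beyond about 2x you would still want repeated passes or a mip chain, since a single bilinear tap undersamples):

    #include <cuda_runtime.h>
    #include <vector>
    #include <cstdint>

    // Resample the source texture at the destination's pixel centers.
    // The hardware sampler does the bilinear filtering for free.
    __global__ void downsampleKernel(cudaTextureObject_t src, uchar4* dst, int dstW, int dstH) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= dstW || y >= dstH) return;
        float u = (x + 0.5f) / dstW;
        float v = (y + 0.5f) / dstH;
        float4 c = tex2D<float4>(src, u, v);
        dst[y * dstW + x] = make_uchar4((unsigned char)(c.x * 255.0f), (unsigned char)(c.y * 255.0f),
                                        (unsigned char)(c.z * 255.0f), (unsigned char)(c.w * 255.0f));
    }

    // srcPixels: tightly packed RGBA8 of size srcW x srcH; returns dstW x dstH RGBA8.
    std::vector<uint8_t> downscale(const uint8_t* srcPixels, int srcW, int srcH, int dstW, int dstH) {
        // Upload the source image into a CUDA array so the texture unit can sample it.
        cudaChannelFormatDesc fmt = cudaCreateChannelDesc<uchar4>();
        cudaArray_t srcArray;
        cudaMallocArray(&srcArray, &fmt, srcW, srcH);
        cudaMemcpy2DToArray(srcArray, 0, 0, srcPixels, srcW * 4, srcW * 4, srcH, cudaMemcpyHostToDevice);

        cudaResourceDesc res{};
        res.resType = cudaResourceTypeArray;
        res.res.array.array = srcArray;

        cudaTextureDesc tex{};
        tex.addressMode[0] = cudaAddressModeClamp;
        tex.addressMode[1] = cudaAddressModeClamp;
        tex.filterMode = cudaFilterModeLinear;      // hardware bilinear interpolation
        tex.readMode = cudaReadModeNormalizedFloat; // uchar4 -> float4 in [0, 1]
        tex.normalizedCoords = 1;

        cudaTextureObject_t srcTex = 0;
        cudaCreateTextureObject(&srcTex, &res, &tex, nullptr);

        uchar4* dstDev;
        cudaMalloc(&dstDev, sizeof(uchar4) * dstW * dstH);

        dim3 block(16, 16);
        dim3 grid((dstW + 15) / 16, (dstH + 15) / 16);
        downsampleKernel<<<grid, block>>>(srcTex, dstDev, dstW, dstH);

        std::vector<uint8_t> out(size_t(dstW) * dstH * 4);
        cudaMemcpy(out.data(), dstDev, out.size(), cudaMemcpyDeviceToHost);

        cudaDestroyTextureObject(srcTex);
        cudaFreeArray(srcArray);
        cudaFree(dstDev);
        return out;
    }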


OpenCL isn't nice to use and lacks tons of quality-of-life features. I wouldn't use it even if it were twice as fast as CUDA.


Sure is, but there is nothing stopping AMD or Intel from building a working alternative to CUDA, so how is it anti-competitive? The problem with OpenCL, SYCL, ROCm, etc. is that the developer experience is terrible: cumbersome to set up, difficult to get working across platforms, lacking major quality-of-life features, and so on.


One example of how they might be abusing their monopoly is by forcing data centers to pay an inflated price for hardware that is similar to consumer GPUs: https://github.com/DualCoder/vgpu_unlock


If this is abuse, I have bad news for you about AMD and their arbitrary restriction of ROCm on consumer GPUs.


The magic is that CUDA actually works well. There is no reason to pick OpenCL, ROCm, SYCL or others if you get a 10x better developer experience with CUDA.
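
Much of that developer-experience gap is simply that CUDA is single-source: device code sits in the same C++ file as the host code, and a kernel launch is one line, with no kernel strings, platform/context/queue setup, or manual argument binding. A minimal sketch (nothing project-specific, purely an illustration):

    #include <cuda_runtime.h>
    #include <cstdio>

    // Device code lives right next to the host code that launches it.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory: no explicit host/device copies needed for a quick tool.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y); // one-line kernel launch
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]); // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The OpenCL equivalent needs platform/device/context/queue setup, a program build step, and explicit buffer and kernel-argument management before the first kernel even runs.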


OpenCL is an alternative to CUDA just like Legos are an alternative to bricks. The problem with OpenCL isn't even the performance, it's everything. If OpenCL were any good, people could use it to build similarly powerful applications on cheaper AMD GPUs.


OpenCL is just a spec. It's up to companies to implement it in a successful way or not. There is no reason in and of itself that OpenCL can't compete with CUDA on performance. The fact that Apple's Metal, which is pretty good, is actually implemented with a private OpenCL system is proof that the spec is not to blame.


Well, performance isn't the issue. Like the parent said, the problem is mostly that CUDA is such a remarkably mature API and everything else isn't. You might be able to reimplement all the PCIe operations using OpenCL, but how long would that take, and who would sponsor the work? Nvidia simply does it, because they know the work is valuable to their customers and will add to their vertical integration.

OpenCL isn't bad, and I'd love to see it get to the point where it competes with CUDA as originally intended. The peaceable sentiment of "let's work together to kill the big bad demon" seems to be dead today, though. Everyone would rather sell their own CUDA-killer than work together to defeat it.


> The fact that Apple's Metal, which is pretty good, is actually implemented with a private OpenCL system is proof that the spec is not to blame.

I don't understand. If OpenCL was so good, why did Apple create Metal instead of just using OpenCL?


For similar reasons why Microsoft created DirectX: it allows them to have a system where the software and hardware are more tightly integrated than with a cross-platform spec. It also allows them to situate the API within the context of the operating system and the other languages used on Apple platforms, making things easier on developers. And at least in that regard, they certainly succeeded. Metal is probably the easiest GPU API offered on any platform. Not necessarily the most powerful, but it's almost trivial to hand-write a compute kernel and spin it up.


AMD's first attempt at displacing CUDA-as-a-runtime was called AMD APP (Accelerated Parallel Processing), so this layer did indeed exist. It just sucked as badly as the rest of AMD's slop.

https://john.cs.olemiss.edu/heroes/papers/AMD_OpenCL_Program...

Bonus points: the rest of the software libraries intended to compete with the CUDA ecosystem are still online in the “HSA Toolkit” GitHub repo. Here’s their counterpart to the Thrust library (last updated 10 years ago):

https://github.com/HSA-Libraries/Bolt

Nvidia had multiple updates in the last year, the last time I checked. That's the problem.


Link seems to have gone dead since earlier, lol, but:

https://en.wikipedia.org/wiki/AMD_APP_SDK


It's currently unparalleled when it comes to realism, as in realistic 3D reconstruction of the real world. Photogrammetry only really works for nice surface-like data, whereas gaussian splats also work for semi-volumetric data such as fur, vegetation, particles and rough surfaces, as well as for glossy/specular surfaces, volumes with strong subsurface scattering, and generally stuff with materials that are strongly view-dependent.


This seems like impressive work. You mention glossy / specular. I wonder why nothing in the city (first video) is reflective, not even the glass box skyscrapers. I noticed there is something funky in the third video with the iron railway rails from about :28 to :35 seconds. They look ghostly and appear to come in and out. Overall these three videos are pretty devoid of shiny or reflective things.


Immunosuppressants are very commonly used for various diseases that are caused by overreactions or undesired reactions of your immune system. Glucocorticoids, for example, are very widespread for all sorts of stuff (rashes, asthma, allergies, inflammations, ...). Monoclonal antibodies are also getting popular as a way to treat allergic reactions by means such as "killing" your IgE antibodies (They're basically antibodies that, in this instance, are used against your own body's antibodies).


> No one wants that.

I very much do want that, since the WebGPU API is far easier and nicer to use than Vulkan or OpenGL. Also, it makes apps much easier to distribute over the web, and web apps are much more secure to use than native apps. Unfortunately, WebGPU is way too limited compared to desktop APIs.


I'm really looking forward to 2034, when WebGPU features will catch up to 2024.


By that time, it might even get supported by Chrome for Linux.


Why is Linux support taking a while? I figured the underlying graphics subsystem on Android is Vulkan, right? Wouldn't that also be the main graphics subsystem on Linux these days?


Testing/validation and bugfixing. Just having Vulkan isn't enough to enable it by default; everything actually has to work right. Even on Android this is only for specific types of devices. You should be able to force-enable it on Linux right now though. It's just not guaranteed to be GA quality.


I tried and tried and tried, but Chrome 120 always applies an undocumented Origin Trial that disables WebGPU.


You know we’ll have all moved onto Romulan by then, leaving Vulkan and Metal behind - including WebGPU.

In seriousness though, WebGPU in Chrome with “non-free” Linux GPU drivers should work, no?

Edit: I see it’s still behind a flag


about:flags in Chromium

search for "accel"

Disable the blacklist for your GPU.


Which is exactly why WebGL never really took off for games the way Flash did, versus native games or, these days, streaming.

Having drivers installed is not enough, as the browser lords decided the computer isn't worthy of playing games.


Flash was buggy crap which made lots of older computers burn cycles like crazy and had zero accessibility for the blind. It deserved to die.


Usually the only ones complaining are Linux users.

The rest enjoyed the games, to the point that Flash is being brought back thanks to WebAssembly, with Unity and Flutter as spiritual successors.


That doesn't stop the disablement done through --origin-trial-disable-feature=WebGPU, and I have yet to figure out how to drop that without recompiling Chrome.


Afaik it is possible only in unstable and beta, not in stable. That's why GP mentioned Chromium, not Chrome.

Which fully supports pjmlp's point in the sibling comment.


On ChromeOS.


On devices by selected vendors, and only on selected models.


What features?


Personally, the ones I'm most looking forward to:

* Subgroup operations

* Push constants (available in wgpu, but not WebGPU the spec/web impl)

* u64 + atomic image ops

* Mesh shaders

* Raytracing

* Binding arrays / descriptor indexing + device buffer addresses


Similar. I've done experiments with subgroups suggesting approximately a 2.5x speedup for sorting (using the WLMS technique of Onesweep). Binding arrays will be very helpful for rendering images in the compute shader. A caveat is that descriptor indexing is not supported on moderately old phones like the Pixel 4, but it is on the Pixel 6. I somewhat doubt device buffer addresses will be supported, as I think the security aspect is complicated (they resemble raw pointers), but it's possible they'll figure out how to do it.
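
For anyone wondering why subgroups help sorting so much: in CUDA terms (where WLMS comes from, and WGSL subgroups expose the same kind of primitives), a ballot plus a popcount lets a warp collapse what would be 32 separate atomics into one. A hedged sketch of the basic warp-aggregated-increment building block (the names are mine; it assumes a 1D block and that all active threads target the same counter):

    #include <cuda_runtime.h>
    #include <cstdio>

    // All active threads in the warp elect the lowest active lane as leader; the leader
    // does one atomicAdd for the whole warp, everyone else derives its slot from the ballot.
    __device__ int warpAggregatedInc(int* counter) {
        unsigned mask = __activemask();
        int lane = threadIdx.x & 31;
        int leader = __ffs(mask) - 1;                  // lowest active lane
        int rank = __popc(mask & ((1u << lane) - 1));  // my position among active lanes
        int base = 0;
        if (lane == leader) base = atomicAdd(counter, __popc(mask));
        base = __shfl_sync(mask, base, leader);        // broadcast the reserved base index
        return base + rank;
    }

    __global__ void fillSlots(int* counter, int* slots, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) slots[i] = warpAggregatedInc(counter); // each thread gets a unique slot
    }

    int main() {
        const int n = 1 << 16;
        int *counter, *slots;
        cudaMallocManaged(&counter, sizeof(int));
        cudaMallocManaged(&slots, n * sizeof(int));
        *counter = 0;
        fillSlots<<<(n + 255) / 256, 256>>>(counter, slots, n);
        cudaDeviceSynchronize();
        printf("reserved %d slots with roughly %d atomics instead of %d\n", *counter, n / 32, n);
        return 0;
    }

Digit ranking in a radix-sort pass is essentially this pattern applied per digit bucket, which is where most of the speedup comes from.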


Looking back at WebGL's decade-long adoption, and WebGPU being based on 2015 features, that is pretty much spot on.


In 2034 it'll be as dead as Flash because of security issues.


Not really, that is not a problem with WebGPU. The worst you can do is crash the tab. With an unstable graphics driver you might even be able to crash the system, but that's hardly a security issue, only an annoyance.


Historically any time an attack surface as big as WebGPU has been exposed, "the worst you can do is crash the tab" has not ever been true.

Also note that for an unstable graphics driver, the way you usually crash the system is by touching memory you shouldn't (through the rendering API), which is definitely something that could be exploited by an attacker. It could also corrupt pages that later get flushed to disk and destroy data instead of just annoy you.

Though I am skeptical as to whether it would happen, security researchers have come up with some truly incredible browser exploit chains in the past, so I'm not writing it off.


WebGL has been around for more than a decade and didn't turn out to be a security issue, other than occasionally crashing tabs. Neither will WebGPU be.


By exposing vulnerable graphics drivers to arbitrary web code, WebGL has allowed websites to take screenshots of your desktop (https://www.mozilla.org/en-US/security/advisories/mfsa2013-8...) and break out of virtual machines (https://blog.talosintelligence.com/nvidia-graphics-driver-vu...), to use two examples I found via a web search.


Very curious what you see as the problems with WebGPU currently. I’ve been tinkering with it slowly as it has a bit of a learning curve.


OP is right. WebGPU is targeted towards the lowest common denominator, which is fairly old mobile phones. It therefore doesn't support modern features and is basically an outdated graphics API by design.


Care to give an example? From my viewpoint as a WebGPU user, we consistently get access to new GPU features with every refresh. E.g: https://developer.chrome.com/blog/webgpu-io2023

You just have to set limits correctly when you initialize a GPU instance in order to have access to the new features.


Mesh shaders, raytracing, DirectStorage, GPU work graphs, C++ features in shading languages: some of the post-2015 features that aren't coming to WebGPU any time soon.


The features you mention are mostly applications of compute shaders, which can be written perfectly well in WebGPU, as WebGPU supports buffer-to-buffer compute shaders when the underlying GPU supports them.

I've personally implemented mesh shaders in my own project, and there are plenty of examples of WebGPU real-time raytracers out there. DirectStorage I had to google, and it looks like a Windows DMA framework for loading assets directly into GPU RAM? That's not even in scope for WebGPU and would be handled by platform libraries. Linux has supported it for ages.

Seriously, I get the impression from your posts that you really don't have any experience with WebGPU at all, and are basing your understanding off of misinformation circulating in other communities. Especially with your continued nonsensical statements about not supporting "post-2015 features." 2015-era chipsets are a minimum supported feature set, not a maximum.

Please just take the L and read up to inform yourself about WebGPU before criticizing it more.


I bet those implementations of yours weren't done in WebGPU actually running in the browser; otherwise, I would greatly appreciate being corrected with a URL.

DirectStorage started as a Windows feature, is actually quite common in game consoles, and there is ongoing work to expose similar functionality in Vulkan.

Yes, I do have WebGPU experience and have already contributed to BabylonJS a couple of times.

Maybe I do actually know one or two things about graphics APIs.


WebGPU is not a browser API. It is a lower level API with official, supported implementations in C++ and Rust with zero browser dependencies. See for example:

https://dawn.googlesource.com/dawn

https://eliemichel.github.io/LearnWebGPU/

Yes, it is exposed to JavaScript applications by browsers that support it, much like WebGL exposes OpenGL ES. But that's a separate thing.


Apart from the examples given by the other user: 64-bit integers and their atomics, which are the bread and butter of efficient software rasterizers such as Nanite, and of point clouds, which can be rendered multiple times faster with 64-bit atomics than with the "point-list" primitive; subgroup operations; bindless; sparse buffers; printf; timestamps inside shaders; async memcpy; copies without the need for an intermediate copy buffer; and so much more. One of the worst parts is that they're adopting the limitations of every target language, but not the workarounds that may exist in them. For example, WebGPU actively prohibits buffer aliasing and mixing atomic and non-atomic access to memory because of Apple's Metal Shading Language, but Metal supports those via workarounds! I mean... seriously? That actually makes WebGPU even worse than the lowest common denominator.
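
To make the 64-bit-atomics point concrete (in CUDA terms, since WGSL has no equivalent): the usual trick is to pack the depth into the high 32 bits and the color into the low 32 bits of one 64-bit word per pixel, so a single atomicMin resolves visibility. A rough sketch, assuming the points are already projected to screen space with positive depth (names and setup are mine):

    #include <cuda_runtime.h>
    #include <cstdint>

    // One 64-bit word per pixel: depth in the high 32 bits, RGBA8 color in the low 32 bits.
    // atomicMin on the packed value keeps the closest point without read-modify-write races.
    // 64-bit atomicMin requires compute capability 3.5+.
    __global__ void splatPoints(const float3* pts, const uint32_t* rgba, int numPoints,
                                unsigned long long* framebuffer, int width, int height) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numPoints) return;

        float3 p = pts[i]; // assumed already projected: x, y in pixels, z = positive depth
        int x = (int)p.x, y = (int)p.y;
        if (x < 0 || x >= width || y < 0 || y >= height) return;

        // For positive floats the bit pattern is monotonic, so integer min == float min.
        unsigned long long packed =
            ((unsigned long long)__float_as_uint(p.z) << 32) | rgba[i];
        atomicMin(&framebuffer[y * width + x], packed);
    }

    void renderPointCloud(const float3* d_pts, const uint32_t* d_rgba, int numPoints,
                          unsigned long long* d_framebuffer, int width, int height) {
        // Clear to "infinitely far": every byte 0xFF gives the maximum packed value.
        cudaMemset(d_framebuffer, 0xFF, sizeof(unsigned long long) * width * height);
        splatPoints<<<(numPoints + 255) / 256, 256>>>(d_pts, d_rgba, numPoints,
                                                      d_framebuffer, width, height);
        // A second pass (not shown) would unpack the low 32 bits of each pixel into the final image.
    }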

One of the turning points for the worse was the introduction of WGSL. Before WGSL, you could do lots of stuff with SPIR-V shaders because they weren't artificially limited. But with WGSL, they went all in on turning WebGPU into a toy language that only supports whatever last decade's mobile phones support. I was really hopeful for WebGPU because, UX-wise, it is so much better than anything else, far better than Vulkan or OpenGL. But feature-wise, WebGPU is so limited that I had to go back to desktop OpenGL.

In one way, though, WebGPU really has become a true successor to WebGL: it is a graphics API that is outdated on arrival.

