I think this is a very impressive implementation in WebGL.
However, I believe the same approach could easily have 2x to 4x the frame rate (or 2x to 4x more battery lifetime) if it was using compute shaders, but those aren't available on WebGL. So I'd count this as an example of why WebGL will not replace "proper" desktop OpenGL anytime soon.
Also, it appears to be reflecting by the same amount everywhere, which makes it look more like glue than like glass. Typically, glass has a Fresnel reflection, meaning that it reflects more strongly the shallower the angle between the viewing direction and the surface. That's why glass bottles usually reflect at the edges (which curve away from you) but are fully refractive in the center (which is facing you).
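For reference, this is typically done with Schlick's approximation of the Fresnel term in the fragment shader. A minimal GLSL sketch (a generic illustration, not this demo's actual shader; the varyings and uniforms are made-up names):

```glsl
precision mediump float;

varying vec3 vNormal;          // surface normal
varying vec3 vViewDir;         // direction from the surface point towards the camera

uniform vec3 uReflectedColor;  // hypothetical: color sampled from an environment-map reflection
uniform vec3 uRefractedColor;  // hypothetical: color seen "through" the glass

void main() {
  vec3 N = normalize(vNormal);
  vec3 V = normalize(vViewDir);

  // Schlick's approximation: F0 is the reflectance at normal incidence,
  // roughly 0.04 for glass (index of refraction ~1.5).
  float F0 = 0.04;
  float fresnel = F0 + (1.0 - F0) * pow(1.0 - max(dot(N, V), 0.0), 5.0);

  // Grazing angles (bottle edges) -> mostly reflection;
  // surfaces facing the camera -> mostly refraction.
  gl_FragColor = vec4(mix(uRefractedColor, uReflectedColor, fresnel), 1.0);
}
```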
So the hard part of this isn't computing the reflection color (that can be done with an environment cubemap and some math expressions), but sorting the transparency. This demo uses "depth peeling", which is a fancy term for rendering the model in several passes along the view direction, where each pass uses the depth buffer from the previous pass to peel off the next transparent layer, and the previous framebuffer content is used to accumulate the result.
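For what it's worth, a single peeling pass boils down to something like this (a generic sketch of the technique, not this demo's actual code; uPrevDepth and uViewport are made-up names):

```glsl
precision highp float;

uniform sampler2D uPrevDepth;  // hypothetical: depth texture captured by the previous peel pass
uniform vec2 uViewport;        // framebuffer size in pixels

void main() {
  vec2 uv = gl_FragCoord.xy / uViewport;
  float prevDepth = texture2D(uPrevDepth, uv).r;

  // "Peel away" everything already captured in earlier passes: keep only
  // fragments that lie strictly behind the previously recorded depth.
  if (gl_FragCoord.z <= prevDepth + 1e-5) {
    discard;
  }

  // ...shade this layer (tint, reflection, refraction) and write it out;
  // a later step blends the peeled layers together.
  gl_FragColor = vec4(0.0); // placeholder
}
```

With regular depth testing on top of that, each pass ends up holding the next-nearest surface, so after a few passes the front-most transparent layers are sorted and ready to composite.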
The whole purpose of this trick is to abuse the rasterizer! I struggle to see how compute shaders would make this faster.
The problem is security, as usual. You can use OpenGL to exploit a host system, therefore all interactions have to be whitelisted and checked. And I know compute shaders can be crafted to bring your PC to a grinding halt.
WebGPU will be a lovely security nightmare. The underlying hardware is inherently insecure. It's fast, not safe. Safety is simply not a feature of graphics operations.
Yep. Browsers could protect against this much better by profiling the fragment shader instructions and the size of data buffers, and by requiring the render loop to have a delay. But then we get into managed GL territory and it's no longer really GLES, it's a library that gives you some features of GLES.
If I buy and download a game from Steam, I trust it to not contain malware. That's why I allow that game to run with very little protection.
If I visit a random website, I have to be prepared for the worst. And if I visit any website that finances itself with ads, I can assume that they will behave like offensive attackers trying whatever they can to collect more information about me in ways that I do not want.
As such, WebGL needs a very strong sandbox around it, to prevent rogue websites from causing harm. For desktop games, that trust issue does not arise, because they have a different business model.
That's why in my opinion, desktop OpenGL will always remain faster than WebGL.
WebGL does have a strong sandbox, just like HTML/CSS/ES do. But desktop OpenGL has a strong sandbox too (by API design, process control, device memory paging protection, etc.), so I'm not sure why you think the threat models are any different; it's just as bad if a game secretly hacks your computer as if a web site does. Desktop OpenGL development is dying anyway, so it doesn't really matter how fast it is compared to WebGL in the future, but I don't believe it is faster than WebGL due to any sandbox differences. Whatever speed differences exist are there due to features, design, and the speed of the host language.
Maybe yes, but then that's only market share lost to DirectX. I don't see the triple-A video game market shrinking anytime soon.
As for the Sandbox, Microsoft recently disabled GPU virtualization because, apparently, sandboxing a GPU is really difficult. For video games, that sandbox is not used in practice. You can access pretty much all the GPU memory that you want.
As for the different threat model, a video game tends to have a clear distributor. A website is more anonymous and, hence, inherently less trustworthy.
You know that I know that you know that this isn't really true, lol.
> Yet the linked document has been updated just a few weeks ago.
But that's just because they come from the same WebGL spec repo. The 2.0-compute spec indeed hasn't been updated for a year. I imagine it was halted because WebGL 2 never saw great adoption to begin with. Though even Apple / WebKit is finally adding it, because I think they realized that WebGPU is going to take who knows how many more years to pan out. What a disappointment.
Uh... it has Fresnel reflection. You can see that the curved surfaces that don't face the camera reflect more of the mostly white environment; it's even clearer when you adjust the reflectionFactor.
In the past, I have supervised cross-platform Unity and UE4 game projects. I even found and fixed a mobile GPU heat death bug :) So that 2x to 4x is just my personal experience.
Some things like GPU pixel shaders tend to be magically slower on WebGL, even if you send the same raw HLSL/GLSL source code. The reason appears to be that due to security concerns, the same shader source code is compiled differently by WebGL than by desktop OpenGL.
Also, many smart tricks like DMA from the SSD bus to the GPU are inherently very unsafe, so WebGL sandboxing simply doesn't allow them. The result is that you need to copy the data twice instead of once, and if memory bandwidth is a bottleneck (it usually is for triple-A), that can easily kill performance.
In my humble experience, this is exactly the main reason why WebGL hasn't taken off for web games the way Flash did.
The average consumer doesn't get why a graphics card that plays their game collection just fine struggles with an Amiga-500-style game in their browser.
Wait, are you saying Flash was fast compared to WebGL??
Which Amiga 500 like browser games are you thinking of? Would you link to one that demonstrates WebGL dramatically underperforming compared to a desktop?
This seems quite exaggerated to me. @fxtentacle gave some specific reasons, but even the 2x-4x estimate seems overstated for the average shader, and pixel shaders are only a small fraction of a typical game's run time. The 2x-4x claim is relative and lacking specifics. It could happen, especially with tiny shaders, but on average I don't believe it, and it's easy to verify on ShaderToy, for example, that typical WebGL shader perf is fine.
There are far bigger reasons consumers don't go to individual web sites for their games than the difference in perf between WebGL and desktop OpenGL. Just to mention two that can each separately account for it: 1) asset loading in the browser over the internet every time you play is awful, and 2) a web site is not a distribution channel -- most people making games don't also have the capacity to market, publish, and host their own games, and most consumers are already looking for games on Steam and other app stores. Throw in browser UI restrictions and the lack of game controller support compared to native apps on top of that. It's really easy to see that WebGL game adoption has nothing to do with perf.
Hahaha, I appreciate the humor. If it was so great, why didn't Unreal 4 support it? And when did web games on 3D Flash ever have high adoption? Your earlier claim was that WebGL wasn't used as much as Flash, but the only Flash games that saw high adoption on the web were 2D and had nothing to do with UE3's Flash player support.
“Why did Adobe decide to EOL Flash Player and select the end of 2020 date?
Open standards such as HTML5, WebGL, and WebAssembly have continually matured over the years and serve as viable alternatives for Flash content. Also, the major browser vendors are integrating these open standards into their browsers and deprecating most other plug-ins (like Adobe Flash Player). By announcing our business decision in 2017, with three years' advance notice, we believed that would allow sufficient time for developers, designers, businesses, and other parties to migrate existing Flash content as needed to new, open standards”
Compute shaders in WebGL would be great. But why would they improve the performance in this case? Right now it's implemented as a fragment shader, which to me seems appropriate here.
Abusing the framebuffer contents to do convincing-unless-you-stare-closely-at-them reflections is a time-honored tradition stretching back to the early 2000s. Nintendo pulls off this effect a couple of times in this shot:
From the username & submission history, I'm guessing you're the author of noclip.website & the associated youtube videos explaining various graphics effects in SMG/WW. Love your content!
Is that website supposed to work on mobile? I get a couple of microscopic icons on a black background and a message about a non-responsive script after a few minutes.
That was even cooler, agreed. Although on Firefox Mobile I need to turn down the gui1_x parameter to almost zero, otherwise I get these weird red polygon artifacts. And I get 15-ish fps. Chrome on the same device is flawless at 60fps.
Is this actually refracting through the glass? The distortion doesn’t seem quite correct.
I wonder if this is actually a clever cheat. Perhaps it’s actually not transparent, but a mirror surface and the cube map for reflections is actually just the skybox inverted. So instead of actually looking through the glass sculpture, it’s reflecting the inverted skybox.
Of course it's not a real, physically accurate refraction. That could only be done via raytracing (or maybe some obscure depth-peeling technique, which would be super slow).
It isn't about inverting the skybox; it's done the same way as reflection mapping, except that instead of reflection ray directions you use refraction ray directions.
The reason these can look off is that the environment image is flat, mapped onto a cube or sphere, has no depth, and effectively sits infinitely far away. Sometimes cubes or spheres with a defined size are used to give a little more plausibility.
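A minimal GLSL sketch of that idea (my reading of it, not this demo's shader; the names are made up): the lookup is exactly the same as environment reflection mapping, only the direction comes from refract() instead of reflect().

```glsl
precision mediump float;

uniform samplerCube uEnvMap;  // hypothetical environment cubemap
varying vec3 vNormal;
varying vec3 vViewDir;        // from the surface point towards the camera

void main() {
  vec3 N = normalize(vNormal);
  vec3 I = -normalize(vViewDir);            // incident direction: camera -> surface

  // Reflection mapping would use reflect(I, N); here the ray is bent instead.
  vec3 refrDir = refract(I, N, 1.0 / 1.5);  // air -> glass, IOR ~1.5

  // The cubemap has no depth and effectively sits infinitely far away,
  // which is exactly why the distortion can look slightly "off".
  gl_FragColor = vec4(textureCube(uEnvMap, refrDir).rgb, 1.0);
}
```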
I am impressed. After a graphics card failure I'm running on Intel® HD Graphics 4600. It's more than adequate for work, but in this demo the frame rate started at 21 fps and dropped to 15 fps.
Same results on my Galaxy S10 (60fps) and Core i7 laptop (<20 fps). I'm pretty sure there is a software explanation for this, instead of the phone CPU being more powerful.
I was trying to find anything specific about the iPhone 12 GPU and I only found the same "Apple 4-core GPU" description.
I was not able to find any benchmark comparison either.
I found this, which is very unspecific:
"We didn't see as big a leap in graphics performance. On the 3DMark Wild Life test, the iPhone 12 hit 39 frames per second, while the iPhone 11 Pro Max scored an even higher 42 fps. But when we switched to the off-screen version of the test, the iPhone 12 Pro notched a higher 51 fps to the iPhone 11 Pro's 42 fps."
So unless you have more details than I do, the basic assumption you should make is that it's much easier to put much more silicon into an MBP than into a smartphone, and the idea that an MBP would lose against a smartphone would be very weird and very unrealistic.
And this has nothing to do with whether someone likes Apple or ARM or whatever.
It is also not very easy to compare something like this if you don't know whether there is a feature-set difference, or whether one GPU can drive 2-3 displays while the other only works with one display.
Simply by comparing my graphics demos running on my 13" MBP versus a recent iPhone or iPad Pro. The i-devices are usually slightly ahead (okay, my MBP is 5 years old by now, but progress has been slower on the Mac side than on the iPhone side).
It might not just be about fillrate though, I'm also seeing much higher drawcall throughput with Metal on iOS devices versus running Metal on Macs with Intel GPU. I guess the entire graphics stack is much better optimized on iOS.
The CPU alone is probably more than 100% faster: 4522 vs. 2091, and 2091 is the CPU value from the 2017 model because I was not able to find the 2015 one.
Let's see how it looks.
I don't care whether it's ARM or x86, as long as the MacBook finally fixes its performance/overheating issue when driving a 4K display, and as long as I can build and run x86 Docker images.
Strangely, only ~50 fps in Google Chrome on my ultrabook laptop -- which has a Ryzen 7 4700U with Vega 7 graphics.
The Ryzen 7 4700U's integrated Vega graphics is far more powerful than Intel's UHD and Iris, and even more powerful than Nvidia's MX line of GPUs (MX150, MX250, etc).
Strange that it only got ~50 fps when someone else on this thread with Intel graphics got 60 fps.
It's not doing the same rendering on iOS as on the desktop. The desktop version allows the body to be seen through the arm while the mobile version doesn't (looks really like a variant of a cube map on mobile).
Portability, and the strong sandbox, are what make OpenGL in the browser special. This loads instantly without installation on an iPhone, an Android phone, a Windows PC, a Mac, an Xbox, a PlayStation, a Nintendo Switch, a Tesla, etc.
Even if you could write a native 3D app that runs on all of those (probably the only sensible way would be to use Unity or similar) you'd be stuck with a long and slow installation step instead of an instant load experience. Even worse, you'd be at the mercy of each platform's capricious gatekeeper. Have fun passing certification requirements for all those consoles, buying a Mac to build your iOS binaries, paying the associated developer program fees, and good luck getting your app on Tesla and the long tail of smart fridges etc.
> probably the only sensible way would be to use Unity or similar
You're going to need to explain this one.
> you'd be stuck with a long and slow installation step instead of an instant load experience
A 20mb app takes the same amount of time to download whether it's a zip file or a WebGL app, but the browser one will be much slower, much more error-prone, have fewer features, will likely require an internet connection every time you want to use it, and isn't guaranteed to be available indefinitely (and is usually non-trivial to download and play locally).
> Even worse, you'd be at the mercy of each platform's capricious gatekeeper. Have fun passing certification requirements for all those consoles, buying a Mac to build your iOS binaries, paying the associated developer program fees, and good luck getting your app on Tesla and the long tail of smart fridges etc
And this one isn't very accurate considering you can't run a WebGL app on any (non-jailbroken) game console. Idk about Tesla or smart fridges though.
The porting effort for a native app using 3D graphics APIs to run on all platforms is simply enormous, and completely out of the question for small demos like this one or really anything but the most profitable apps (and even many of those choose to use Unity or another cross platform framework rather than bear the cost themselves). Not only do the platforms differ in their supported APIs, they all have different bugs and quirks. In contrast, it is trivial to write a WebGL app with no framework and run it on practically any platform. Compatibility is dramatically higher.
> A 20mb app takes the same amount of time to download whether it's a zip file or a WebGL app
This is demonstrably false if you count the time that matters, which is time from intent to use the app to actually using the app. Just time installing and launching a 20mb app from the Play Store or iOS app store or Steam or even plain zip file (though in reality most apps require an installer) vs. loading a 20 MB web page. Also count the number of clicks required, and as a bonus count the permissions you need to feel comfortable granting for the app to run. It's partially technical and partially due to platform conventions, but the difference is not small.
> you can't run a WebGL app on any (non-jailbroken) game console
This is false. Xbox supports WebGL. So does Oculus Quest. You are right that PlayStation and Switch don't though, I was wrong about that (though PlayStation does use WebGL internally, and Switch has a webview that supports WebGL, just not a user accessible browser app). Yes, Tesla's web browser supports WebGL, as do Samsung and LG and Sony smart TVs and fridges and tons of other similar devices.
> A 20mb app takes the same amount of time to download whether it's a zip file or a WebGL app
You can do a lot more with that when so many dependencies are taken care of: no window creation, no extension loading, no audio layer, no image loading, no dynamic image loading, no libc or libc++, no 'Visual Studio runtime'.
You just click a link in the browser and it runs instantly, instead of downloading an unsigned executable from a website you never heard of, which is then blocked by Windows SmartScreen, Apple Gatekeeper or doesn't run on your machine at all because you're using Linux.
> I never really get what’s so special about OpenGL in the browser vs on the desktop vs on a console.
Nothing in particular, besides being seamlessly integrated in a web page with existing web standards as opposed to having to download and run an executable perhaps?
It's just a cool demo. And it's interesting to see the browser being a universal runtime for stuff that's traditionally been out of its domain. Not everything has to be state of the art mind blowing brand new breakthroughs to be interesting.
I work for a 3D anatomy company and the biggest advantage is compatibility. You can embed 3D content using an iframe just like video. For example here's Healthline using our content to explain knee anatomy.
Thanks! Check out https://human.biodigital.com if you want to play with it yourself (free signup). You can edit the models and create your own presentations.
Browsers traditionally have not been the place where you can run expensive graphical computations; it's an environment not originally made for them. I think comparing it to desktop and/or console, which are much more specialized for that, rather sells it short.
For those wondering, Guan Yu was a general of legendary prowess during the Three Kingdoms period of Ancient China. He is deified and represents loyalty and justice. Ironically, in Hong Kong, both the police and the triad worship him.
That glaive (more correctly "guandao") has a name in the Romance of the Three Kingdoms, btw: 青龍偃月刀 (Green Dragon Crescent Blade), also known as 冷艷鋸 (Frost Fair Blade).
It's interesting that the frame rate in Firefox seems to be at least double what I get in Chrome. It's always nice to see Firefox beat the competition. Great work, Firefox team!
Getting a lot of black instead of refractions, on Firefox and Falkon (Chromium-based) on Linux. AMD RAVEN (DRM 3.39.0, 5.9.2-arch1-1, LLVM 10.0.1) (0x15d8)
Unfortunately, at least on Linux, only the NVIDIA drivers are guaranteed to render any OpenGL correctly.
During the last year I have not tried the AMD and Intel drivers again to see how much they might have improved recently.
However, in previous years I could always find examples of OpenGL programs that were rendered incorrectly, with various annoying artefacts, when using either the AMD or the Intel drivers.
Sadly I'm on Linux with the latest NVidia driver, and I'm seeing the black regions too, on both Chrome and Firefox. It looks great on my Windows laptop on both the Intel integrated GPU and the Nvidia mobile GPU.