Rendering photo-realistic glass in the browser (domenicobrz.github.io)
391 points by anonytrary on Nov 2, 2020 | 112 comments



I think this is a very impressive implementation in WebGL.

However, I believe the same approach could easily have 2x to 4x the frame rate (or 2x to 4x more battery lifetime) if it was using compute shaders, but those aren't available on WebGL. So I'd count this as an example of why WebGL will not replace "proper" desktop OpenGL anytime soon.

Also, it appears to be reflecting by the same amount everywhere, which makes it look more like glue than like glass. Typically, glass has Fresnel reflection, meaning it reflects more strongly the shallower the angle between the viewing direction and the surface. That's why glass bottles usually reflect at the border (which is curved away from you) but are fully refractive in the center (which is facing you).
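
For reference, Schlick's approximation is the usual cheap stand-in for the full Fresnel equations in real-time rendering. A minimal sketch (plain TypeScript rather than GLSL, with illustrative names and an assumed f0 for glass, not taken from the demo):

    // Schlick's approximation of Fresnel reflectance.
    // cosTheta: cosine of the angle between the view direction and the surface normal.
    // f0: reflectance at normal incidence (~0.04 for typical glass).
    function schlickFresnel(cosTheta: number, f0: number = 0.04): number {
      const m = 1 - Math.max(0, Math.min(1, cosTheta));
      return f0 + (1 - f0) * m ** 5;
    }

    console.log(schlickFresnel(1.0));  // facing you head-on: ~0.04, almost fully refractive
    console.log(schlickFresnel(0.05)); // grazing silhouette of a bottle: ~0.78, mostly reflective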


So the hard part of this isn't computing the reflection color (that can be done with an environment cubemap and some math expressions), but sorting the transparency. This demo uses "depth peeling", which is a fancy term that just means rendering the model in a few different frustum slices along the view direction, and using the previous framebuffer content to accumulate reflection.

The whole purpose of this trick is to abuse the rasterizer! I struggle to see how compute shaders would make this faster.
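
Not the demo's actual code, but a rough sketch of the pass structure depth peeling implies on top of the rasterizer (the helper name is hypothetical):

    // Rough sketch of a depth-peeling loop (hypothetical helper, not the demo's real code).
    // Each pass re-rasterizes the mesh but keeps only fragments behind the depth captured
    // in the previous pass, so transparent layers are extracted and blended in order.
    declare function captureDepthTexture(gl: WebGL2RenderingContext): WebGLTexture;

    const NUM_LAYERS = 4;

    function renderDepthPeeled(
      gl: WebGL2RenderingContext,
      drawMesh: (prevDepth: WebGLTexture | null) => void,
    ): void {
      let prevDepth: WebGLTexture | null = null;
      for (let layer = 0; layer < NUM_LAYERS; layer++) {
        // The fragment shader discards fragments at or in front of prevDepth,
        // "peeling" away everything already rendered in earlier layers.
        drawMesh(prevDepth);
        // Capture this layer's depth for the next pass; its color is blended
        // into the accumulation framebuffer.
        prevDepth = captureDepthTexture(gl);
      }
    }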


The problem is security, as usual. You can use OpenGL to exploit a host system, therefore all interactions have to be whitelisted and checked. I know compute shaders can be crafted to bring your PC to a grinding halt.

WebGPU will be a lovely security nightmare. The underlying hardware is inherently insecure. It's fast, not safe. Safety is simply not a feature of graphics operations.


You can already do that with fragment shaders. I've already caused my computer to crash and reboot by going a bit too crazy with the fragment shader.


Yep. Browsers could protect against this much better by profiling the fragment shader instructions and the sizes of data buffers, and by requiring the render loop to have a delay. But then we get into managed GL territory and it's no longer really GLES; it's a library that gives you some features of GLES.


Which has been the whole "portability" story of OpenGL: it is similar only in name and some common operations, and that is about it.

In big applications it is like coding to multiple 3D APIs anyway.


I can already imagine the tech support scams


I suppose it depends on your definition of 'anytime soon', but adding compute to WebGL 2.0 is something people are working on - https://www.khronos.org/registry/webgl/specs/latest/2.0-comp...


In my opinion, the main issue is trust.

If I buy and download a game from Steam, I trust it to not contain malware. That's why I allow that game to run with very little protection.

If I visit a random website, I have to be prepared for the worst. And if I visit any website that finances itself with ads, I can assume that they will behave like offensive attackers trying whatever they can to collect more information about me in ways that I do not want.

As such, WebGL needs a very strong sandbox around it, to prevent rogue websites from causing harm. For desktop games, that trust issue does not arise, because they have a different business model.

That's why in my opinion, desktop OpenGL will always remain faster than WebGL.


WebGL does have a strong sandbox, just like HTML/CSS/ES do. But desktop OpenGL has a strong sandbox too (by API design, process control, device memory paging protection, etc.), so I'm not sure why you're thinking the threat models are any different; it's just as bad if a game secretly hacks your computer as if a web site does. Desktop OpenGL development is dying anyway, so it doesn't really matter how fast it is compared to WebGL in the future, but I don't believe it is faster than WebGL due to any sandbox differences. Whatever speed differences exist are due to features, design, and the speed of the host language.


"Desktop OpenGL development is dying anyway"

Maybe yes, but then that's only market share lost to DirectX. I don't see the triple-A video game market shrinking anytime soon.

As for the Sandbox, Microsoft recently disabled GPU virtualization because, apparently, sandboxing a GPU is really difficult. For video games, that sandbox is not used in practice. You can access pretty much all the GPU memory that you want.

As for the different threat model, a video game tends to have a clear distributor. A website is more anonymous and, hence, inherently less trustworthy.


It is not dying for CAD/CAM and visualisation folks that don't want to bother with Vulkan boilerplate for nothing, on their workloads.


Strange, I could've sworn I've read recently that the WebGL2 compute effort had been halted because WebGPU is (more or less) around the corner.

Yet the linked document has been updated just a few weeks ago.


> WebGPU is (more or less) around the corner.

You know that I know that you know that this isn't really true, lol.

> Yet the linked document has been updated just a few weeks ago.

But that's just because they come from the same WebGL spec repo. The 2.0-compute spec indeed hasn't been updated for a year. I imagine it was halted because WebGL 2 never saw great adoption to begin with. Though even Apple / WebKit is finally adding it, because I think they realized that WebGPU is going to take how many more years to pan out. What a disappointment.

https://github.com/KhronosGroup/WebGL/blob/master/specs/late...


If I am not mistaken, Google is the one doing the contributions to WebKit.


Uh... it has Fresnel reflection. You can see that the curved surfaces that don't face the camera reflect more of the mostly white environment; it's even clearer when you adjust the reflectionFactor.


What’s the basis for your 2x-4x speed difference estimate?


In the past, I have supervised cross-platform Unity and UE4 game projects. I even found and fixed a mobile GPU heat death bug :) So that 2x to 4x is just my personal experience.

Some things like GPU pixel shaders tend to be magically slower on WebGL, even if you send the same raw HLSL/GLSL source code. The reason appears to be that due to security concerns, the same shader source code is compiled differently by WebGL than by desktop OpenGL.

Also, many smart tricks like DMA from the SSD bus to the GPU are inherently very unsafe, so WebGL sandboxing simply doesn't allow them. The result is that you need to copy the data twice instead of once, and if memory bandwidth is a bottleneck (it usually is for triple-A), then that can easily waste a lot of performance.


In my humble experience this is exactly the main reason why WebGL hasn't taken off for Web games as Flash did.

For the common consumer, they don't get why a graphics card that plays their game collection just fine struggles with an Amiga 500-like game in their browser.


Wait, are you saying Flash was fast compared to WebGL??

Which Amiga 500 like browser games are you thinking of? Would you link to one that demonstrates WebGL dramatically underperforming compared to a desktop?

This seems quite exaggerated to me. @fxtentacle gave some specific reasons, but even the 2x-4x estimate seems over-stated for the average shader, and pixel shaders are only a small fraction of a typical game’s run time. The 2-4x claim is relative and lacking specifics. It could happen, especially with tiny shaders, but on average, I don’t believe it, and it’s easy to verify using ShaderToy for example. (Also plenty easy to see just by visiting ShaderToy that typical WebGL shader perf is fine.)

There are far bigger reasons consumers don’t go to individual web sites for their games than the difference in perf between WebGL and desktop OpenGL. Just to mention two that can each separately account for it, 1) asset loading in the browser over the internet every time you play is awful, and 2) a web site is not a distribution channel -- most people making games don’t also have the capacity to market, publish, and host their own games, and most consumers are already looking for games on Steam and other app stores. Throw in browser UI restrictions and lack of support for game controllers, vs native apps on top of that. It’s really easy to see that WebGL game adoption has nothing to do with perf.


Yes I am.

Unreal Engine 3 Support for Adobe Flash Player - Unreal Tournament 3

https://www.youtube.com/watch?v=UQiUP2Hd60Y

WebGL and WebAssembly still trying to catch up with 2011.


Hahaha, I appreciate the humor. If it was so great, why didn't Unreal 4 support it? And when did web games on 3D Flash ever have high adoption? Your earlier claim was that WebGL isn't used as much as Flash was, but the only high-adoption Flash games on the web were 2D and had nothing to do with UE3's Flash player support.

“Why did Adobe decide to EOL Flash Player and select the end of 2020 date?

Open standards such as HTML5, WebGL, and WebAssembly have continually matured over the years and serve as viable alternatives for Flash content. Also, the major browser vendors are integrating these open standards into their browsers and deprecating most other plug-ins (like Adobe Flash Player). By announcing our business decision in 2017, with three years’ advance notice, we believed that would allow sufficient time for developers, designers, businesses, and other parties to migrate existing Flash content as needed to new, open standards”

https://www.adobe.com/products/flashplayer/end-of-life.html


Because of the iPhone and the people pushing for plugin-free browsers, delaying progress for a decade.


Lol, Flash 3D is built on OpenGL and DirectX, which are still here. What “progress” was delayed? Adobe chose to kill Flash because of WebGL.


Compute shaders in WebGL would be great. But why would they improve the performance in this case? Right now it's implemented as a fragment shader, which to me seems appropriate here.


I found this one by the same person even more amazing. It shows the background image being refracted through the glass: https://domenicobrz.github.io/webgl/projects/glass-absorptio...


Abusing the framebuffer contents to do convincing-unless-you-stare-closely-at-them reflections is a time-honored tradition stretching back to the early 2000s. Nintendo pulls off this effect a couple of times in this shot:

https://noclip.website/#smg/HeavensDoorGalaxy;ShareData=AY,m...


From the username & submission history, I'm guessing you're the author of noclip.website & the associated youtube videos explaining various graphics effects in SMG/WW. Love your content!


Is that website supposed to work on mobile? I get a couple of microscopic icons on a black background and a message about a non-responsive script after a few minutes.


That was even cooler, agreed. Although on Firefox Mobile I need to turn down the gui1_x parameter to almost zero, otherwise I get these weird red polygon artifacts. And I get 15-ish fps. Chrome on the same device is flawless at 60fps.


Very clean and at least 60 fps in Chrome on a 3 year old iPad. Amazing.


Note that you're really complimenting WebKit here: alternative browsers on iDevices are just reskinned Safari.


I did not know that.


I see the polygon artifacts on chrome on android.


I get the same red polygons and low fps on Firefox and Chrome on my phone.


Is this actually refracting through the glass? The distortion doesn’t seem quite correct.

I wonder if this is actually a clever cheat. Perhaps it’s actually not transparent, but a mirror surface and the cube map for reflections is actually just the skybox inverted. So instead of actually looking through the glass sculpture, it’s reflecting the inverted skybox.


Of course it's not a real, physically accurate refraction. That could only be done via raytracing (or maybe some obscure depth-peeling technique, which would be super slow).


It isn't about inverting the skybox; it is done in the same way as reflection mapping, but instead of reflection ray directions you use refraction ray directions.

The reason these can look off is that the environment image is flat, mapped onto a cube or sphere, has no depth, and effectively sits infinitely far away. Sometimes cubes or spheres with a defined size are used to give a little more plausibility.
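
As a concrete illustration of that lookup (plain TypeScript rather than GLSL, vectors as tuples): the direction used to sample the environment cubemap is just the Snell's-law refracted ray, which is the same math as GLSL's built-in refract(I, N, eta).

    type Vec3 = [number, number, number];

    const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    // Same math as GLSL's refract(I, N, eta): I = incident view direction (unit length),
    // N = surface normal (unit length), eta = ratio of refractive indices
    // (e.g. 1.0 / 1.5 when going from air into glass).
    function refractDir(I: Vec3, N: Vec3, eta: number): Vec3 {
      const cosI = dot(N, I);
      const k = 1 - eta * eta * (1 - cosI * cosI);
      if (k < 0) return [0, 0, 0]; // total internal reflection: no refracted ray
      const s = -(eta * cosI + Math.sqrt(k));
      return [eta * I[0] + s * N[0], eta * I[1] + s * N[1], eta * I[2] + s * N[2]];
    }

In a fragment shader you would then sample the cubemap along that direction to get the "background seen through the glass", which is why it looks plausible but has no real depth.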


Am I really naive to be so impressed by this running at 60fps on my Intel Iris Plus 645 graphics?

This is a potato-tier GPU and I can't make the frame rate drop below 60 with this demo.


Yes, because you should compare it to what a similar DirectX 12, OpenGL 4.6, or Vulkan 1.1 demo would achieve in FPS.

https://www.techpowerup.com/gpu-specs/iris-plus-graphics-645...


I am impressed. After a graphics card failure I'm running on Intel® HD Graphics 4600. It's more than adequate for work, but in this demo the frame rate started at 21 fps and dropped to 15 fps.


45FPS on my 1050Ti, so it beats me.


60 FPS on iPhone SE (1st gen) zoomed in or out. Cool.


60fps on my iPhone X, impressive!


My MBP was absolutely crawling but my iPhone 12 had no problem keeping 60fps. Crazy. That gives me a lot of hope for the new Apple Silicon macs.

Edit: But the main link definitely doesn't work well on iPhone.


Same results on my Galaxy S10 (60fps) and Core i7 laptop (<20 fps). I'm pretty sure there is a software explanation for this, instead of the phone CPU being more powerful.


This has nothing to do with the processor in the system. If anything it is a bug that makes the rendering very inefficient on MacOS / your browser.

The Intel processor in your MBP is still leaps more powerful than the processor in your phone.


> The Intel processor in your MBP is still leaps more powerful than the processor in your phone.

Maybe the CPU part, but definitely not the integrated GPU (and this demo is all GPU).


Where do you get that assumption from?

I was trying to find anything specific about the iPhone 12 GPU and I only found the same Apple 4-core GPU.

I was not able to find any benchmark comparison either.

I found this, which is very unspecific: "We didn't see as big a leap in graphics performance. On the 3DMark Wild Life test, the iPhone 12 hit 39 frames per second, while the iPhone 11 Pro Max scored an even higher 42 fps. But when we switched to the off-screen version of the test, the iPhone 12 Pro notched a higher 51 fps to the iPhone 11 Pro's 42 fps."

So unless you have more details than I do, the basic assumption you should make is that it's much easier to put much more silicon in an MBP than in a smartphone, and an MBP losing against a smartphone would be very weird and very unrealistic.

And this has nothing to do with whether someone likes Apple or ARM or whatever.

It is also not very easy to compare something like this if you don't know whether there is a feature-set difference, or whether one GPU can drive 2-3 displays while the other only works with one display.


> Where do you get that assumption from?

Simply by comparing my graphics demos running on my 13" MBP versus a recent iPhone or iPad Pro. The i-devices are usually slightly ahead (okay, my MBP is 5 years old by now, but progress has been slower on the Mac side than on the iPhone side).

It might not just be about fillrate though, I'm also seeing much higher drawcall throughput with Metal on iOS devices versus running Metal on Macs with Intel GPU. I guess the entire graphics stack is much better optimized on iOS.


The CPU alone is probably more than 100% faster: 4522 vs. 2091, and 2091 is the CPU value from the 2017 model because I was not able to find the 2015 one.

Let's see how it looks.

I don't care if it's ARM or x86, as long as the MacBook finally fixes its performance/overheating issue when running a 4K display, and as long as I can build and run x86 Docker images.


In sustained multithreaded tasks, maybe, depending on how much money you paid for either and when you bought it.


Were you using Safari on your MBP? The performance seems to be better in Safari than in Chrome or Firefox.


MBP has a lot more pixels to push. If you scale your browser down, you'll see the framerate go up a bunch.


Strangely, only ~50 fps in Google Chrome on my ultrabook laptop -- which has a Ryzen 7 4700U with Vega 7 graphics.

The Ryzen 7 4700U's integrated Vega graphics is far more powerful than Intel's UHD and Iris, and even more powerful than Nvidia's MX line of GPUs (MX150, MX250, etc).

Strange that it only got ~50 fps when someone else on this thread with Intel graphics got 60 fps.


It's not doing the same rendering on iOS as on the desktop. The desktop version allows the body to be seen through the arm while the mobile version doesn't (looks really like a variant of a cube map on mobile).


Zoom in and the frame rate drops to under 20 fps.


Probably the GPU and the GPU drivers matter most.

On Linux Chromium browser with an NVIDIA GPU, using the NVIDIA Linux driver, the FPS is pinned to 60 no matter what zooms and rotations are done.


I tried on Chromium on Linux too, but it's probably a driver issue. My machine has a Radeon Pro card with just the amdgpu driver that's bundled with kernel v5.4.


Zoomed in on iPhone X, still getting 60 fps


I agree. It is...um...smooth as glass on my iPad. Much more so than the OP.


I never really get what’s so special about OpenGL in the browser vs on the desktop vs on a console. Also, not looking photo realistic to me tbh.


Portability, and the strong sandbox, are what make OpenGL in the browser special. This loads instantly without installation on an iPhone, an Android phone, a Windows PC, a Mac, an Xbox, a PlayStation, a Nintendo Switch, a Tesla, etc.

Even if you could write a native 3D app that runs on all of those (probably the only sensible way would be to use Unity or similar) you'd be stuck with a long and slow installation step instead of an instant load experience. Even worse, you'd be at the mercy of each platform's capricious gatekeeper. Have fun passing certification requirements for all those consoles, buying a Mac to build your iOS binaries, paying the associated developer program fees, and good luck getting your app on Tesla and the long tail of smart fridges etc.


This is all wrong.

> probably the only sensible way would be to use Unity or similar

You're going to need to explain this one.

> you'd be stuck with a long and slow installation step instead of an instant load experience

A 20mb app takes the same amount of time to download whether it's a zip file or a WebGL app, but the browser one will be much slower, much more error-prone, have fewer features, will likely require an internet connection every time you want to use it, and isn't guaranteed to be available indefinitely (and is usually non-trivial to download and play locally).

> Even worse, you'd be at the mercy of each platform's capricious gatekeeper. Have fun passing certification requirements for all those consoles, buying a Mac to build your iOS binaries, paying the associated developer program fees, and good luck getting your app on Tesla and the long tail of smart fridges etc

And this one isn't very accurate considering you can't run a WebGL app on any (non-jailbroken) game console. Idk about Tesla or smart fridges though.


> You're going to need to explain this one.

The porting effort for a native app using 3D graphics APIs to run on all platforms is simply enormous, and completely out of the question for small demos like this one or really anything but the most profitable apps (and even many of those choose to use Unity or another cross platform framework rather than bear the cost themselves). Not only do the platforms differ in their supported APIs, they all have different bugs and quirks. In contrast, it is trivial to write a WebGL app with no framework and run it on practically any platform. Compatibility is dramatically higher.

> A 20mb app takes the same amount of time to download whether it's a zip file or a WebGL app

This is demonstrably false if you count the time that matters, which is time from intent to use the app to actually using the app. Just time installing and launching a 20mb app from the Play Store or iOS app store or Steam or even a plain zip file (though in reality most apps require an installer) vs. loading a 20 MB web page. Also count the number of clicks required, and as a bonus count the permissions you need to feel comfortable granting for the app to run. It's partially technical and partially due to platform conventions, but the difference is not small.

> you can't run a WebGL app on any (non-jailbroken) game console

This is false. Xbox supports WebGL. So does Oculus Quest. You are right that PlayStation and Switch don't though, I was wrong about that (though PlayStation does use WebGL internally, and Switch has a webview that supports WebGL, just not a user accessible browser app). Yes, Tesla's web browser supports WebGL, as do Samsung and LG and Sony smart TVs and fridges and tons of other similar devices.


> A 20mb app takes the same amount of time to download whether it's a zip file or a WebGL app

You can do a lot more with that when so many dependencies are taken care of. No window creation, no extension loading, no audio layer, no image loading, no dynamic image loading, no libc or libc++, no 'visual studio runtime'

> much more error-prone,

What is that based on?

> have fewer features,

Why would that be true?


And that native app could likely pwn your machine, especially on Windows where it's the norm to ask for admin to install almost anything.


I believe the Switch doesn't have a general web browser, because that wouldn't be reliably family friendly.


But that's all just OpenGL... nothing special about someone using it in the browser.

There's nothing special about being able to use a webcam in the browser either, right? The demo just isn't technically impressive.

The point you're making is "kudos to the browser developers"...


Isn't the point almost always "kudos to the developers?"


Then either link to webkit.org, opengl.org, or maybe the examples page of threejs.org :-)


You just click a link in the browser and it runs instantly, instead of downloading an unsigned executable from a website you never heard of, which is then blocked by Windows SmartScreen, Apple Gatekeeper or doesn't run on your machine at all because you're using Linux.


> I never really get what’s so special about OpenGL in the browser vs on the desktop vs on a console.

Nothing in particular, besides being seamlessly integrated in a web page with existing web standards as opposed to having to download and run an executable perhaps?


Same can be said for any browser technology.

What's so special about this specific post?


It's just a cool demo. And it's interesting to see the browser being a universal runtime for stuff that's traditionally been out of its domain. Not everything has to be state of the art mind blowing brand new breakthroughs to be interesting.


I guess if you have to ask, you wouldn't understand


Hey I got this new game I'd like to show you, just download this exe and run it. Or here's a link, run it safely in a sandbox environment.


I work for a 3D anatomy company and the biggest advantage is compatibility. You can embed 3D content using an iframe just like video. For example here's Healthline using our content to explain knee anatomy.

https://www.healthline.com/human-body-maps/knee#1

This makes it possible for all the online learning companies to integrate 3D into their courseware without requiring any additional download.
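
A minimal sketch of what that embed boils down to (the URL below is a placeholder, not our real embed endpoint):

    // Embedding a hosted 3D viewer the same way you'd embed a video.
    // The src below is a placeholder, not the real embed endpoint.
    const frame = document.createElement("iframe");
    frame.src = "https://example.com/3d-viewer/knee";
    frame.width = "800";
    frame.height = "600";
    frame.allowFullscreen = true;
    document.body.appendChild(frame);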


Just wanted to say this is a really cool use case, and it works pretty well too. Neat.


Thanks! Check out https://human.biodigital.com if you want to play with it yourself (free signup). You can edit the models and create your own presentations.


Browsers traditionally have not been the place where you can run expensive graphical calculations in an environment not originally made for them. I think it's quite devaluing to try to compare it to desktop and/or console, which are much more specialized for that.


accessible.


Is that... Guan Yu?

It is a little refreshing to see something other than the Utah Teapot for once.


Yep, I'm pretty sure that's Guan Yu.

For those wondering, Guan Yu was a general of legendary prowess during the Three Kingdoms period of Ancient China. He is deified and represents loyalty and justice. Ironically, in Hong Kong, both the police and the triad worship him.

https://en.wikipedia.org/wiki/Guan_Yu


I used to work at a newspaper in Hong Kong and we had a statue of Guan Yu as well...

Anecdotally, the triad worships Guan Yu with black shoes while the police worships him with red shoes.


> the triad worships Guan Yu with black shoes while the police worships him with red shoes.

That's a detail that I did not know!


Yup, that glaive is unmistakable. I knew that hours of playing Dynasty Warriors would pay off.


That glaive (more correctly "guandao") has a name in the Romance of the Three Kingdoms, btw: 青龍偃月刀 (Green Dragon Crescent Blade), also known as 冷艷鋸 (Frost Fair Blade).

https://en.wikipedia.org/wiki/Green_Dragon_Crescent_Blade

EDIT: Guan Yu's horse also has a name: 赤兔 (Red Hare)

https://en.wikipedia.org/wiki/Red_Hare


Erich Loftis has been working on a path tracer for three.js for some time: https://github.com/erichlof/THREE.js-PathTracing-Renderer

The glass examples also look very promising.


That's pretty neat.

It's interesting that the frame rate in Firefox seems to be at least double what I get in Chrome. It's always nice to see Firefox beat the competition. Great work, Firefox team!


I'm getting 1fps in Chrome vs 20 in Firefox on Intel integrated graphics, remarkable difference!


Not on mobile, quite the opposite.


Getting a lot of black instead of refractions, on Firefox and Falkon (Chromium-based) on Linux. AMD RAVEN (DRM 3.39.0, 5.9.2-arch1-1, LLVM 10.0.1) (0x15d8)


Same here, on both Firefox and Chromium using an RX580.


Unfortunately, at least on Linux, only the NVIDIA drivers are guaranteed to render any OpenGL correctly.

During the last year I have not tried the AMD and Intel drivers again, to see how much they might have improved recently.

However, in previous years I could always find examples of OpenGL programs that were rendered incorrectly, with various annoying artefacts, when using either the AMD or the Intel drivers.


Sadly I'm on Linux with the latest NVidia driver, and I'm seeing the black regions too, on both Chrome and Firefox. It looks great on my Windows laptop on both the Intel integrated GPU and the Nvidia mobile GPU.



Does it look good to you? On Firefox the contours are horribly aliased.


Same here on Firefox/Linux.


Neat! What does extintionCol1Ra do? It causes the glass to turn into a multicolored jellyfish-looking thing, which is cool.


Truncated from "extintionCol1Random".


It doesn't work on my iPad with Safari.



Love it. Opened the website, was amazed by how good it looks and suddenly my RTX 2080 started screaming.


0.5 FPS on iPhone SE (1st gen) and it doesn’t look like glass at all.


Does the other one posted here work on your phone? https://domenicobrz.github.io/webgl/projects/glass-absorptio...

I have an XR and the OP is slow/doesn’t really work but this one works and looks great at 60fps. Curious if it’ll work on an SE too.


Runs smooth on my Pixel 3 but I had to hard-reset my Pixel 4a after like three frames (which took like 2 minutes ...)

Using Chrome on both.


Wow, that's incredible. One of those things that blows you away so hard you can hear your heartbeat.


This artist is a great developer.


Nothing on Safari iOS



Who else read the title and thought it would render photo-realistic glasses as filters on your face? :-)



