I wasn't familiar with PlayCanvas until now and I think I found my weekend project.
I always wondered how much performance you could gain by implementing that model directly in silicon and ditching GPU shaders instead. The great problem of modern GPUs is that they are designed as multipurpose parallel computers used for all kinds of things, not as dedicated GPUs. Since Moore's law is essentially dead, this could be the only way forward.
If you assume the whole world is plastic, then yes! But the microfacet model is not the ultimate physical model: GGX-Smith and the Epic/Disney PBR model are compromises in a lot of cases, and you interact with hundreds of surfaces every day that can't be modeled accurately with them. Fuzzy or hairy surfaces are often modeled with a microfiber model instead! Epic's approximations (e.g. the split-sum approximation) can't easily handle anisotropic surfaces like brushed metal. And fleshy surfaces like skin, wax, paper, fruit pulp, etc. all require some form of subsurface scattering beyond the basic Lambertian diffuse.
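For concreteness, the microfacet specular term everyone is compromising around is small enough to write out. Here's a minimal sketch of a Cook-Torrance style GGX specular BRDF (GGX distribution, Smith-Schlick geometry, Schlick Fresnel with a scalar F0), using the common UE4-style roughness remappings; the function name and parameterization are illustrative, not any particular engine's API:

```python
import math

def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Cook-Torrance style specular: D * G * F / (4 (n.l)(n.v)).
    Scalar f0 (~0.04 for dielectrics); all dot products assumed > 0."""
    a = roughness * roughness            # Disney remapping: alpha = roughness^2
    a2 = a * a
    # GGX / Trowbridge-Reitz normal distribution function
    d_denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    D = a2 / (math.pi * d_denom * d_denom)
    # Smith geometry term with Schlick-GGX, UE4's k remapping for analytic lights
    k = (roughness + 1.0) ** 2 / 8.0
    G = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * \
        (n_dot_v / (n_dot_v * (1.0 - k) + k))
    # Schlick Fresnel approximation
    F = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    return D * G * F / (4.0 * n_dot_l * n_dot_v)
```

Note everything here is isotropic by construction: roughness is a single scalar, so brushed metal and other anisotropic surfaces are already out of reach without extending the model.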
> I always wondered how much performance you could gain by implementing that model directly in silicon and ditch GPU shaders instead.
Very little of the shader code being run is the microfacet BRDF stuff, and that's ALU, so it's the fast bit! The expensive stuff is texture fetching for e.g. incident light and shadows, and that's all done by artists and graphics engineers above the BRDF layer. The BRDF just tells you how to respond to incoming light, not where it is or where it comes from. And it requires you to supply things like albedo color and normal direction, which are where shaders spend a lot of their time.
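A toy CPU "fragment shader" makes the split visible; all names here are made up for illustration, but the shape is the point: the fetches feeding the BRDF are memory accesses, while the BRDF itself is a handful of arithmetic ops on the fetched values (here just a Lambertian term to keep it short):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sample(texture, uv):
    # Stand-in for a GPU texture fetch (nearest-neighbor lookup). In hardware
    # this is a memory access with filtering, and it dominates shader cost.
    w, h = len(texture[0]), len(texture)
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return texture[y][x]

def shade_fragment(albedo_tex, shadow_tex, normal, light_dir, uv):
    # Memory-bound part: fetch the material and lighting inputs.
    albedo = sample(albedo_tex, uv)   # per-channel base color
    shadow = sample(shadow_tex, uv)   # scalar visibility term
    # ALU-bound part: the BRDF is just arithmetic on those inputs.
    n_dot_l = max(dot(normal, light_dir), 0.0)
    return [c / 3.14159 * n_dot_l * shadow for c in albedo]

# 1x1 white albedo, fully lit, light straight down the normal:
color = shade_fragment([[[1.0, 1.0, 1.0]]], [[1.0]],
                       (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.5, 0.5))
```

Baking the two arithmetic lines into silicon wouldn't touch the fetches, which is why it buys so little.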
Realistic rendering has not solidified in any way. It is arguably in a much greater state of flux and advancement these days, with lots of useful research coming from game and game engine companies. Beyond that, it is unlikely that hardcoding some model into a GPU would buy much speed; those transistors could be used instead for flexible computation. A lighting model needs lots of floating-point math, which is already what shaders are great at. Shaders were invented to 'shade' fragments; that's why they are called shaders. They were only gradually adopted as kernels that are run on other things.
The good news about Moore's Law is that the massively parallel model of GPUs has meant that they have continued to scale at tremendous rates.
They've had specialized silicon for rasterization and image sampling since the beginning. Video decoding came pretty early. Recently, new hardware features have been added for machine learning and ray tracing.
But the huge leaps in general-purpose GPU compute are currently leading us back to the stylistic freedom of the earliest days of 3D, when software rasterization was first being worked out.
I saw a great quote about the UE5 demo being crazy because "The ray tracing is done in hardware and the rasterization is done in software!", which is a total reversal of the past 20 years.
Apparently they also don't believe that much in Java and Kotlin for such tooling.
All their game related efforts are now focused on C++ or middleware like Unity.
As for other libraries, maybe something based on LWJGL.