PlayCanvas launches glTF 2.0 Viewer Tool (playcanvas.com)
54 points by ovenchips 7 days ago | 17 comments

I had always used this[1] as a way of double-checking my glTF output during game jams, but this looks really nice! It's nice to have more variety in this space. Blender's glTF output works nicely with three.js and other WebGL libraries.

[1] https://gltf-viewer.donmccurdy.com
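For anyone doing the same sanity check in code, loading a Blender export with three.js is only a few lines. A minimal sketch, assuming a bundler setup and a placeholder file name (the import path varies between three.js versions):

    import * as THREE from 'three';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const scene = new THREE.Scene();
    const loader = new GLTFLoader();

    loader.load(
      'export-from-blender.glb',          // placeholder file name
      (gltf) => {
        scene.add(gltf.scene);            // the parsed node hierarchy
        console.log(gltf.scene.children); // quick sanity check of the export
      },
      undefined,
      (err) => console.error('glTF failed to load:', err)
    );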


Is Draco the renderer? I'm looking for the PBR renderer here.

Draco is a compression technology (https://github.com/google/draco) available in glTF files. PlayCanvas is both the engine and the renderer in this viewer.

Draco is a mesh compression tool. It quantizes and rearranges vertices and indices for smaller file sizes.
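To give an idea of where Draco sits in practice: with three.js you wire a decoder into the glTF loader and meshes compressed with the KHR_draco_mesh_compression extension decode transparently. A rough sketch; the decoder path and file name are placeholders:

    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
    import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

    const dracoLoader = new DRACOLoader();
    dracoLoader.setDecoderPath('/draco/');   // assumed location of the Draco decoder files
    const loader = new GLTFLoader();
    loader.setDRACOLoader(dracoLoader);

    loader.load('compressed-model.glb', (gltf) => {
      // gltf.scene is ready to use; the compression is invisible past this point
    });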

This looks great! It seems like a nice evolution for those familiar with tools like Unity and PhaserJS.

I wasn't familiar with PlayCanvas until now and I think I found my weekend project.


The glTF material model is essentially all you need for realistic rendering.

I always wondered how much performance you could gain by implementing that model directly in silicon and ditching GPU shaders altogether. The great problem of modern GPUs is that they are designed as multipurpose parallel computers used for all kinds of things, not as dedicated graphics processors. Since Moore's law is essentially dead, this could be the only way forward.


> The glTF material model is essentially all you need for realistic rendering.

If you assume the whole world is plastic, then yes! But the microfacet model is not the ultimate physical model; GGX-Smith and Epic/Disney PBR are compromises in a lot of cases, and you interact with hundreds of surfaces every day that can't be modeled accurately with it. Fuzzy or hairy surfaces are often modeled with a microfiber model! Epic's approximations (e.g. the split-sum approximation) can't easily handle anisotropic surfaces like brushed metal. And fleshy surfaces like skin, wax, paper, fruit pulp, etc. all require some form of subsurface scattering beyond the basic Lambertian diffuse.
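For anyone who hasn't seen it spelled out, the heart of that metal/rough microfacet model is a handful of analytic terms. A minimal sketch of just the GGX (Trowbridge-Reitz) normal distribution term, using the common alpha = roughness^2 remapping (plain TypeScript rather than shader code, purely for illustration):

    // "D" term of the Cook-Torrance microfacet BRDF used by glTF's
    // metal/rough material model.
    function distributionGGX(nDotH: number, roughness: number): number {
      const alpha = roughness * roughness;  // perceptual roughness -> alpha
      const alpha2 = alpha * alpha;
      const denom = nDotH * nDotH * (alpha2 - 1) + 1;
      return alpha2 / (Math.PI * denom * denom);
    }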

> I always wondered how much performance you could gain by implementing that model directly in silicon and ditching GPU shaders altogether.

Very little of the shader code that's being run is the microfacet BRDF stuff, and that's ALU, so it's the fast bit! The expensive stuff is texture fetching for things like incident light and shadows, and that's all handled by artists and graphics engineers above the BRDF layer. The BRDF just tells you how to respond to incoming light, not where it is or where it comes from. And it requires you to supply things like albedo color and normal direction, which are where shaders spend a lot of their time.
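To make that concrete: the BRDF layer is just a pure function of values that have to be fetched or computed elsewhere. A toy sketch of the simplest case (Lambertian diffuse), in TypeScript for readability; the inputs are placeholders for what a real shader reads from textures and the lighting setup:

    // Given surface albedo, incoming light color and the cosine of the
    // light angle (N·L), return the light reflected toward the viewer.
    function lambertianDiffuse(albedo: number[], lightColor: number[], nDotL: number): number[] {
      const cosTerm = Math.max(nDotL, 0);  // no contribution from behind the surface
      return albedo.map((c, i) => (c / Math.PI) * lightColor[i] * cosTerm);
    }

    // Hypothetical inputs: in a real shader these come from texture lookups
    // and the scene's lights, which is where the cost actually is.
    const outgoing = lambertianDiffuse([0.8, 0.2, 0.2], [1.0, 1.0, 1.0], 0.7);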


Some of these features (sheen, anisotropy, subsurface scattering) are being added to the format with changes like https://github.com/KhronosGroup/glTF/pull/1726. But realtime implementations can get a lot of mileage out of a simpler metal/rough PBR workflow, too.
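For context, that base metal/rough workflow is just a small block in the glTF JSON; the values below are made up for illustration:

    {
      "materials": [
        {
          "name": "example_metal",
          "pbrMetallicRoughness": {
            "baseColorFactor": [0.9, 0.9, 0.9, 1.0],
            "metallicFactor": 1.0,
            "roughnessFactor": 0.4
          }
        }
      ]
    }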

First, if Moore's law (transistor count growth) is dead, why have faster GPUs with more transistors been released?

Realistic rendering has not solidified in any way. It is arguably in a much greater state of flux and advancement these days, with lots of useful research coming from game and game engine companies. Not only that, it is unlikely that hardcoding some model into a GPU would buy much speed, and those transistors could instead be used for flexible computation. Such a model would need to do lots of floating point math, which is already what shaders are great at. Shaders were invented to 'shade' fragments; that's why they are called shaders. They have gradually been adopted as kernels that run on other things.


I'm no expert, but I think a lot of the GPU speedup has come through massively parallel execution rather than processor/bus/memory speed. Nvidia's Quadro RTX 8000, for example, has 4,608 CUDA cores, 576 Tensor cores, and 72 RT cores. I don't know what all those do, but they are certainly doing a lot of it in parallel!

First, massively parallel execution takes more transistors, and Moore's law is about more transistors. Second, memory bandwidth has increased significantly.

Material models for real-time rendering have made tremendous strides in the past ten years. But they are far from done :)

The good news about Moore's Law is that the massively parallel model of GPUs has meant they have continued to scale at tremendous rates.

They've had specialized silicon for rasterization and image sampling since the beginning. Video decoding came pretty early. Recently, there have been new hardware features for machine learning and ray tracing.

But the huge leaps in general purpose compute on GPUs are currently leading us back to the stylistic freedom of the earliest days of 3D, when software rasterization was first being worked out.

I saw a great quote about the UE5 demo being crazy because "The ray tracing is done in hardware and the rasterization is done in software!" which is a total reversal of the past 20 years.


Does anyone know of a good library or implementation to view glTF models on Android using OpenGL and Java?

https://google.github.io/filament/ from Google has become very impressive over the past two years. And it has the most beautiful documentation you could ask for :)

Have you checked out this PBR-based renderer in Java using OpenGL ES? https://github.com/rsahlin/graphics-by-opengl

There was Sceneform, heavily publicised a couple of Google I/Os ago, but then it joined the list of dropped Google projects.

https://github.com/google-ar/sceneform-android-sdk

Apparently they also don't believe that much in Java and Kotlin for such tooling.

All their game-related efforts are now focused on C++ or middleware like Unity.

As for other libraries, maybe something based on LWJGL.



