Ever: Exact Volumetric Ellipsoid Rendering for Real-Time View Synthesis (half-potato.gitlab.io)
80 points by alphabetting 41 days ago | 14 comments



It’s great how the field of real-time volumetric rendering is alive with so many options, and how new yet simple approaches keep being invented.

Reminds me of the mid-1990s, before everyone basically agreed that lots of triangles of any size are the right model representation. There were NURBS and metaballs and Reyes micropolygons and who knows what else… Even Nvidia’s first hardware accelerator chip used quadratic patches instead of triangles (just before Microsoft made triangles the standard in Direct3D).

Looking forward to seeing where this settles in a couple of years! The intersection of video and 3D is about to get super interesting creatively.


Maybe something like this:

https://arxiv.org/pdf/2405.16237


We’ve all known where it’s going since 1873. What matters is the fun along the way.


What are you referencing?


As @GistNoesis is being somewhat gnomic, I believe that they are referencing

https://en.wikipedia.org/wiki/A_Treatise_on_Electricity_and_...

Written in 1873 by James Clerk Maxwell.

They also reference Genesis 1:3: "Let there be light".


Genesis 1:3 (Unknighted James Clerk Version).


The main selling point of Gaussian Splatting over NeRF-based methods is rendering (and training) speed and efficiency. This does come at the cost of physical correctness as it uses splatting instead of a real radiance field.

This method tries to move back to radiance fields, but with a primitive-based representation instead of neural nets. Rendering performance seems to be quite poor (30fps on a 4090), and rendering quality improvements seem to be marginal.

I'm not quite sure I understand where this fits in when NeRFs and 3DGS already exist at the opposite ends of the correctness-speed tradeoff spectrum. Maybe somewhere in the middle?


Primitive-based representations are a lot easier to manipulate (e.g. animate) than NeRFs. They can also be a lot more efficient/scalable when there's a lot of empty space, for example.


I think they're suggesting that their method is still faster than most NeRF based methods? They have slightly worse image quality compared to ZipNeRF while getting 72x the frame rate. Or perhaps I'm misunderstanding...


Neat, the concept is so elegantly obvious:

- your dataset is already an analytic representation of stretched spheres. Just assume each one is a hard-edged shape of plasma with uniform density throughout its volume.

- for each pixel, perform a ray intersection against each spheroid; the thickness of spheroid the ray passed through is precisely how much light the spheroid contributes to your pixel (obviously also multiplied by solid angle). See the intersection sketch after this list.

- since it’s light, there is no occlusion, so just stack the contribution from all the ellipsoids together and you’re done.

- since you are rendering a well-defined shape without self-occlusion, there is no random popping in and out no matter the viewing angle.

The computation is practically equivalent to rendering CSG shapes, except even easier since you only ever add and never occlude/subtract. It also scales with rat racing hardware directly.
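
To make the intersection step concrete, here's a minimal NumPy sketch (my own illustration with made-up names, not the paper's code): map the ray into the ellipsoid's local frame, where the ellipsoid becomes the unit sphere, and solve the resulting quadratic; the gap between the two roots is the chord length.

    import numpy as np

    def ellipsoid_chord_length(ray_o, ray_d, center, rotation, scales):
        """Length of a unit-direction ray's path through one ellipsoid.

        ray_o, ray_d : world-space ray origin and unit direction, shape (3,)
        center       : ellipsoid center, shape (3,)
        rotation     : 3x3 rotation matrix (ellipsoid frame -> world)
        scales       : semi-axis lengths, shape (3,)
        Returns 0.0 if the ray misses the ellipsoid.
        """
        # Map into the ellipsoid's local frame, where it is the unit sphere.
        M = np.diag(1.0 / scales) @ rotation.T
        o = M @ (ray_o - center)
        d = M @ ray_d

        # Solve |o + t*d|^2 = 1 for t. Because ray_d is unit length in
        # world space, t still measures world-space distance along the ray.
        a = d @ d
        b = 2.0 * (o @ d)
        c = o @ o - 1.0
        disc = b * b - 4.0 * a * c
        if disc <= 0.0:
            return 0.0  # the ray misses (or just grazes) the ellipsoid
        root = np.sqrt(disc)
        t_enter = (-b - root) / (2.0 * a)
        t_exit = (-b + root) / (2.0 * a)
        # Clamp to the part of the ray in front of the camera.
        return max(t_exit, 0.0) - max(t_enter, 0.0)

(A real renderer would batch this over all pixels and primitives on the GPU, but the per-ray math is just this quadratic.)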


I read this:

> since it’s light, there is no occlusion, so just stack the contribution from all the ellipsoids together and you’re done.

and then I scratched my head at how you can possibly do a credible rendering of any real scene without occlusion, contemplated that the images in the paper absolutely had occluded objects, and then read a bit more and figured it out:

Each ellipsoid has a “density,” which is a single number indicating the degree to which it absorbs light coming from behind it. [0] And this formulation allows the integral along a path from infinity to the camera to be exactly evaluated. So there is occlusion! It just happens to work correctly even when ellipsoids overlap.

[0] It’s slightly more complicated, but not much. The raw density scales a term in the integral, but this results in a poorly behaved gradient, so the trained parameter is more or less the opacity when looking through the center of the ellipsoid along its shortest axis.
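
For anyone curious how “exactly evaluated” can work, here's a sketch under stated assumptions (constant density per ellipsoid, density-weighted color blending; all names are mine, and the paper's actual formulation may differ): sort the ray's boundary crossings, and between consecutive crossings the total density is constant, so each segment's transmittance is a closed-form exponential rather than a sampled estimate.

    import numpy as np

    def render_ray(crossings, bg_color=np.zeros(3)):
        """Exact emission/absorption integration along one ray when the
        density field is a sum of constant-density ellipsoids.

        crossings: list of (t, sigma, color, entering) tuples, one per
        boundary crossing: at distance t the ray enters (True) or exits
        (False) an ellipsoid with density sigma and RGB emission color.
        """
        crossings = sorted(crossings, key=lambda e: e[0])
        total_sigma = 0.0          # density over the current segment
        sigma_color = np.zeros(3)  # sum of sigma_i * color_i for active ellipsoids
        transmittance = 1.0
        radiance = np.zeros(3)
        t_prev = None
        for t, sigma, color, entering in crossings:
            if t_prev is not None and total_sigma > 0.0:
                # Density is constant on (t_prev, t), so this segment's
                # opacity is an exact exponential -- no sampling needed.
                alpha = 1.0 - np.exp(-total_sigma * (t - t_prev))
                segment_color = sigma_color / total_sigma  # density-weighted
                radiance += transmittance * alpha * segment_color
                transmittance *= 1.0 - alpha
            sign = 1.0 if entering else -1.0
            total_sigma += sign * sigma
            sigma_color += sign * sigma * np.asarray(color)
            t_prev = t
        return radiance + transmittance * bg_color

Occlusion falls out of the transmittance term: everything behind an opaque segment gets multiplied toward zero, even where ellipsoids overlap.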


I'm actually in the market for some rat racing hardware. Anything you'd recommend that won't be obsolete in six months?


I always assumed this was already how those were rendered, because that's kinda obvious, and raymarching is a standard technique for real-time volumetric rendering.




