Hacker News

Also says:

> Every shot in Piper is composed of millions of grains of sand, each one of them around 5,000 polygons.

I'm guessing the article is just confused. The important point, I think, is about using real geometry for sand particles rather than the beach being a surface with a displacement map.




But with instancing. So they could have a few dozen grain types at 5K tris each, with the total count coming from instances, so there's no need to load 5 billion triangles into memory.
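A minimal sketch of what that instancing scheme could look like (all names and counts here are illustrative assumptions, not Pixar's actual numbers or code): only a few unique grain meshes are stored, and every grain in the shot is a lightweight record pointing at one of them plus a transform.

```python
import numpy as np

# Hypothetical counts for illustration only.
NUM_GRAIN_TYPES = 24       # a few dozen unique grain meshes
TRIS_PER_GRAIN = 5_000     # each mesh ~5K triangles
NUM_INSTANCES = 1_000_000  # grains actually placed in a shot

# Shared geometry: each mesh stored exactly once (here as random triangle soup).
grain_meshes = [np.random.rand(TRIS_PER_GRAIN, 3, 3) for _ in range(NUM_GRAIN_TYPES)]

# Per-instance data: just a mesh index and a 4x4 transform,
# not a private copy of 5K triangles.
instance_mesh_ids = np.random.randint(0, NUM_GRAIN_TYPES, size=NUM_INSTANCES)
instance_transforms = np.tile(np.eye(4), (NUM_INSTANCES, 1, 1))

unique_tris = NUM_GRAIN_TYPES * TRIS_PER_GRAIN
effective_tris = NUM_INSTANCES * TRIS_PER_GRAIN
print(f"triangles stored:   {unique_tris:,}")    # 120,000
print(f"triangles rendered: {effective_tris:,}") # 5,000,000,000
```

The gap between the two numbers is the whole point: the renderer only ever holds the unique meshes in memory, while the scene behaves as if billions of triangles were present.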


What does that look like from a data structure perspective? Each grain has a bounding box and you only look at the geometry for that grain if a ray crosses the box? (I'm way, way out of my wheelhouse here, if it's not already obvious.)


Reduced to the absurdly simple version, I think that is pretty much how it works. There is an acceleration structure that is traversed for each ray to reduce the search space for ray-geometry intersections (probably a bounding volume hierarchy of some sort), and thanks to instancing only a very small subset of all the grains of sand actually needs to be stored and processed. I imagine the sand is modeled with some particle system where each particle is itself a small scene of sand grain models, with its own BVH and so on, and the ray tracer somehow reuses whatever happens inside one of them for every other particle under the same lighting conditions.

It's probably way, way more complicated than that in practice, though. Extremely interesting stuff.
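To make the bounding-box idea above concrete, here is a toy sketch (my own simplified construction, not the actual renderer's logic): each instance gets a world-space axis-aligned bounding box, and a ray only descends into an instance's real geometry when the standard "slab test" says the ray crosses that box. A real tracer would arrange these boxes into a BVH so that whole groups of misses are rejected at once.

```python
import numpy as np

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: the ray hits the box iff its entry and exit
    intervals along all three axes overlap at some t >= 0."""
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.minimum(t1, t2).max()  # latest entry across axes
    t_far = np.maximum(t1, t2).min()   # earliest exit across axes
    return t_far >= max(t_near, 0.0)

# A ray shooting roughly down +z (no zero components, so inv_dir is safe).
origin = np.array([0.0, 0.0, -5.0])
direction = np.array([0.01, 0.01, 1.0])
inv_dir = 1.0 / direction

# Ten hypothetical grain instances: five near the ray's path, five far off it.
on_axis = [[0.0, 0.0, float(z)] for z in range(5)]
off_axis = [[3.0, 0.0, float(z)] for z in range(5)]
centres = np.array(on_axis + off_axis)
half = 0.4  # each grain's box is 0.8 units across

# Only boxes the ray crosses survive; only their meshes get intersected.
candidates = [i for i, c in enumerate(centres)
              if ray_hits_aabb(origin, inv_dir, c - half, c + half)]
print(candidates)  # → [0, 1, 2, 3, 4]
```

The expensive per-triangle intersection work then happens only for the surviving candidates, which is how millions of instanced grains stay tractable per ray.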



