
Maybe NeRFs can be used as an intermediate step to reconstruct the scene, and then extract the surfaces and their materials into more conventional representations like meshes, textures, refractive indices, etc. I guess the main benefit is that it fills in the undersampled areas of a scene, whether that's an occlusion, a reflection angle, or something else.

My main problem with them is that all the data seems unstructured and interdependent, unlike pixels, voxels, or similar formats where you can clearly extract and manipulate parts of the data and know what they mean. To use your photograph example, a digital photo is a simple grid of colored points, and it's easy to change them individually. A regular 3D scene is a collection of well-defined vertices, triangles, materials, etc., that is then rendered into a digital photo using an easy-to-describe process. A NeRF, on the other hand, seems to be more like: enter camera pose => magic => inferred image.
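For intuition, here's a minimal sketch of what that "magic" step looks like mechanically. The toy `field` function below is a made-up stand-in for the trained MLP, not the actual model: the real network maps a 3D point and view direction to (color, density), and an image is produced by numerically integrating those values along each camera ray.

    import numpy as np

    def field(xyz, view_dir):
        # Stand-in for the trained MLP: maps 3D points (and a view
        # direction) to per-point (rgb, density). Toy example: an
        # orange sphere of radius 0.5 centered at the origin.
        density = np.where(np.linalg.norm(xyz, axis=-1) < 0.5, 10.0, 0.0)
        rgb = np.broadcast_to([1.0, 0.5, 0.2], xyz.shape)
        return rgb, density

    def render_ray(origin, direction, n_samples=64, near=0.0, far=3.0):
        # Volume rendering quadrature: sample points along the ray,
        # then alpha-composite the samples front to back.
        t = np.linspace(near, far, n_samples)
        pts = origin + t[:, None] * direction
        rgb, sigma = field(pts, direction)
        delta = np.diff(t, append=t[-1] + (t[1] - t[0]))
        alpha = 1.0 - np.exp(-sigma * delta)
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
        weights = trans * alpha  # contribution of each sample
        return (weights[:, None] * rgb).sum(axis=0)

    # One ray through the sphere: returns roughly the sphere's color.
    print(render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0])))

So the stored format is "just" network weights, which is why individual parts can't be edited the way a pixel or a vertex can; the well-defined operations are queries like the one above.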

Maybe I'm overthinking it and it doesn't have to be as general as our current formats; maybe a binary blob that can represent a static scene is fine for plenty of applications. But it feels needlessly complicated.



The project page at https://www.matthewtancik.com/nerf shows the 3D volume can be sampled directly: "We can also convert the NeRF to a mesh using marching cubes."
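A hedged sketch of that conversion, assuming you can query the trained model's density at arbitrary points (the `density` function below is a toy stand-in, not the real model): sample the field on a regular grid, run marching cubes on the grid, and you get an ordinary triangle mesh.

    import numpy as np
    from skimage import measure  # pip install scikit-image

    def density(xyz):
        # Stand-in for querying the trained NeRF's density output.
        return np.where(np.linalg.norm(xyz, axis=-1) < 0.5, 10.0, 0.0)

    # Sample the field on a regular grid over the region of interest.
    n = 64
    axis = np.linspace(-1.0, 1.0, n)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    sigma = density(grid.reshape(-1, 3)).reshape(n, n, n)

    # Extract the iso-surface where density crosses a chosen threshold.
    verts, faces, normals, _ = measure.marching_cubes(sigma, level=5.0)
    verts = verts / (n - 1) * 2.0 - 1.0  # voxel indices -> world coords

    # Write a minimal Wavefront OBJ so conventional tools can open it.
    with open("nerf_mesh.obj", "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for tri in faces:
            f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")

Materials are harder: the color output is view-dependent, so baking it down to textures needs extra assumptions beyond this geometry-only step.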



