Neural Radiance Fields (NeRF) on their own will alter VFX fundamentally, even before you factor in generative models like Stable Diffusion 2.0 (now with Depth!).
NeRF lets you pick ANY smooth camera path in a 3D space built up with nothing more than a mobile phone. That's today.
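To make that concrete, here is a minimal sketch of what "pick any smooth camera path" means in practice: generate look-at camera poses along a smooth orbit, then feed each pose to a trained NeRF's renderer. `render_view` and `trained_nerf` are hypothetical stand-ins for whatever your NeRF implementation exposes; only the pose math is shown for real.

```python
# Sketch: a smooth orbital camera path as a sequence of camera-to-world poses.
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    # Build a 4x4 camera-to-world matrix looking from `eye` toward `target`.
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    # Columns: camera x (right), y (up), z (-forward), and position.
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = right, true_up, -forward, eye
    return pose

# 120 poses circling the scene at a fixed radius and height.
radius, height = 4.0, 1.5
poses = [
    look_at(np.array([radius * np.cos(t), height, radius * np.sin(t)]))
    for t in np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
]
# frames = [render_view(trained_nerf, p) for p in poses]  # hypothetical renderer
```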
These two phenomena will start to merge and the impact will be as transformative to the software side of media creation as the switch to digital from celluloid was for hardware. There was a time some still remember when not everyone had a camera in their pocket.
NeRF only works with static scenes, and it needs accurate camera poses and a clean scene with clear edges and well-controlled exposure. Outdoors, any tree with leaves adds a lot of noise and blur; indoors, a naturally lit scene tends to push the exposure too high. It is certainly a breakthrough, but given those limitations, even "just" static scene reconstruction is not yet solved.
Note also that NeRF is not an "AI" model in the generative sense; it is more like fancy, compact storage for objects, images, or scenes.
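That "compact storage" framing is easy to see in code. Below is a minimal sketch (assuming PyTorch; `TinyNeRF` and the layer sizes are illustrative, not the original implementation): the entire scene is encoded in the weights of a small MLP that maps a 3D point and view direction to color and density, which volume rendering then turns into pixels.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    # Map each coordinate to sin/cos features at increasing frequencies,
    # as in the NeRF paper, so the MLP can represent fine detail.
    freqs = 2.0 ** torch.arange(num_freqs)           # (num_freqs,)
    angles = x[..., None] * freqs                    # (..., dims, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                 # (..., dims * 2 * num_freqs)

class TinyNeRF(nn.Module):
    # (x, y, z) + view direction -> (r, g, b, sigma). The whole scene lives
    # in these weights, which is why a NeRF reads like compact storage.
    def __init__(self, pos_freqs=10, dir_freqs=4, hidden=256):
        super().__init__()
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        self.trunk = nn.Sequential(
            nn.Linear(3 * 2 * pos_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)            # density from position only
        self.color = nn.Sequential(                  # color also depends on view dir
            nn.Linear(hidden + 3 * 2 * dir_freqs, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, viewdir):
        h = self.trunk(positional_encoding(xyz, self.pos_freqs))
        sigma = torch.relu(self.sigma(h))
        d = positional_encoding(viewdir, self.dir_freqs)
        rgb = self.color(torch.cat([h, d], dim=-1))
        return rgb, sigma

# Query 1024 random points along some rays; training regresses these outputs
# against captured pixels via volume rendering along each camera ray.
model = TinyNeRF()
xyz = torch.rand(1024, 3)
dirs = nn.functional.normalize(torch.rand(1024, 3), dim=-1)
rgb, sigma = model(xyz, dirs)
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```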
Three years ago, our SOTA image generation models had very strong limitations compared to today's, which shows how difficult it can be to see when and where the next breakthroughs will come. NeRFs are not there yet, but we may not be far away.