How fast could this run in a modern CPU with a modern GPU?


GPUs are almost irrelevant for VFX work at this scale due to the memory requirements. The render nodes used for Interstellar had 156GB of RAM, and a decade later the biggest GPUs still don't have that much memory (unless you count Macs I suppose but the existing software ecosystem is very CUDA-centric).

Small VFX shops do typically render on GPUs nowadays, but the high end is still dominated by CPU rendering.


As a matter of fact, Pixar's render farm is mostly based on CPU rendering.


That goes for all the biggest players - Pixar, WDAS, Dreamworks, ILM, WETA, Framestore... all do their final rendering on CPUs. Some of them have adopted a hybrid workflow where artists can use GPU rendering to quickly iterate on small slices of data though.


> The render nodes used for Interstellar had 156GB of RAM, and a decade later the biggest GPUs still don't have that much memory

NVIDIA H200 GPU has 141 GB of memory, and the AMD Instinct MI300X GPU has 192 GB of HBM3 memory.


I was thinking of their graphics cards which currently max out at 48GB, but true, their HPC accelerators do have a lot more memory now. In the case of Nvidia they would be a trade-off for rendering though since their HPC chips lack the raytracing acceleration hardware which their graphics chips have.

Besides, in the current climate I don't think VFX companies would be eager to bid against AI companies for access to H100/H200s; the VFX companies don't have infinite venture-capital goldrush money to burn...


There's a nice recreation on Shadertoy:

https://www.shadertoy.com/view/lstSRS

No idea if it's doing the same simulation though.


So beautiful, thanks for sharing.


It's a particle simulation, so a lot of it is memory management.

I can't remember the actual size of the simulation in terms of number of particles, but it was using something like two 150TB fileservers to store it.

With NVMe and cheap RAM, it would probably be at the point where it'd be useful on a GPU.
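For a rough sense of scale, here's a back-of-envelope sketch of how many particles ~300TB of cached simulation output could hold. All the numbers (bytes per particle, frame count) are illustrative assumptions, not figures from the actual production:

```python
# Back-of-envelope: how many particles fit in ~300 TB of cached sim output?
# All constants below are illustrative assumptions.

BYTES_PER_PARTICLE = 8 * 6            # assume fp64 position + velocity (x, y, z each)
FRAMES = 24 * 60 * 2                  # assume ~2 minutes of footage at 24 fps
STORAGE_BYTES = 2 * 150 * 10**12      # two 150 TB fileservers

particles_per_frame = STORAGE_BYTES // (BYTES_PER_PARTICLE * FRAMES)
print(f"~{particles_per_frame:,} particles per cached frame")
```

Under these assumptions it works out to roughly two billion particles per frame, which is why a full per-frame cache won't fit in any single GPU's memory, but streaming frames off fast NVMe starts to look plausible.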



