GPUs are almost irrelevant for VFX work at this scale due to the memory requirements. The render nodes used for Interstellar had 156GB of RAM, and a decade later the biggest GPUs still don't have that much memory (unless you count Macs, I suppose, but the existing software ecosystem is very CUDA-centric).
Small VFX shops do typically render on GPUs nowadays, but the high end is still dominated by CPU rendering.
That goes for all the biggest players - Pixar, WDAS, Dreamworks, ILM, WETA, Framestore... all do their final rendering on CPUs. Some of them have adopted a hybrid workflow where artists can use GPU rendering to quickly iterate on small slices of data though.
I was thinking of their graphics cards, which currently max out at 48GB, but true, their HPC accelerators do have a lot more memory now. In Nvidia's case they'd be a trade-off for rendering, though, since their HPC chips lack the ray-tracing acceleration hardware that their graphics chips have.
Besides, in the current climate I don't think VFX companies would be eager to bid against AI companies for access to H100/H200s; the VFX companies don't have infinite venture-capital goldrush money to burn...
It's a particle simulation, so a lot of the work is memory management.
I can't remember the actual size of the simulation in terms of number of particles, but it was stored on something like two 150TB fileservers.
With NVMe and cheap RAM, it would probably be at the point where it'd be useful on a GPU.
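To make the "mostly memory management" point concrete, here's a minimal out-of-core sketch: stream an on-disk particle array through memory in fixed-size chunks instead of loading it all at once, which is the basic trick when the dataset dwarfs RAM (or VRAM). Everything here is hypothetical and tiny for demonstration; the real datasets described above were hundreds of TB, and production tools use far more sophisticated spatial structures.

```python
# Out-of-core particle processing sketch: compute the bounding box of a
# flat float32 xyz particle file while only paging in one chunk at a time.
# All names and sizes here are illustrative, not from any real pipeline.
import os
import tempfile
import numpy as np

def chunked_bbox(path, n_particles, chunk=1_000_000):
    """Axis-aligned bounding box of (n_particles, 3) float32 data on disk,
    touching at most `chunk` particles of memory at a time."""
    parts = np.memmap(path, dtype=np.float32, mode="r",
                      shape=(n_particles, 3))
    lo = np.full(3, np.inf, dtype=np.float32)
    hi = np.full(3, -np.inf, dtype=np.float32)
    for start in range(0, n_particles, chunk):
        block = parts[start:start + chunk]  # only this slice is paged in
        lo = np.minimum(lo, block.min(axis=0))
        hi = np.maximum(hi, block.max(axis=0))
    return lo, hi

# Tiny demo: 10k random particles written to a temp file.
tmp = os.path.join(tempfile.mkdtemp(), "particles.f32")
data = np.random.default_rng(0).uniform(-1.0, 1.0,
                                        size=(10_000, 3)).astype(np.float32)
data.tofile(tmp)

lo, hi = chunked_bbox(tmp, 10_000, chunk=2_048)
print(lo, hi)
```

The same chunked pattern is what makes NVMe plus a GPU plausible: you'd stream slices over PCIe and reduce on-device instead of in NumPy.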