
Is that really all? We regularly run multi-TB memory clusters for big data processing and ML. I imagined it would be much bigger than that.

To put that in perspective, 24x 64 GB nodes is 1.5 TB.

> 24x 64 GB nodes is 1.5 TB

Your calculation suggests you mean system RAM, but it's 1.5 TB of GPU VRAM, not RAM. And that figure assumes 64-bit precision, which is likely wrong; at 32-bit it would be roughly 750 GB.
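The arithmetic above can be sketched out; the node count and per-node capacity come from the thread, while the parameter precisions (8 bytes for fp64, 4 bytes for fp32) are standard sizes assumed for illustration:

```python
# Cluster capacity: 24 nodes, 64 GB of GPU VRAM each (figures from the thread).
nodes = 24
gb_per_node = 64
total_gb = nodes * gb_per_node
print(total_gb)  # 1536 GB, i.e. ~1.5 TB

# If a model's memory footprint was quoted assuming 64-bit (8-byte)
# parameters, the same parameter count stored at 32-bit (4 bytes)
# needs half the memory.
fp64_footprint_gb = total_gb
fp32_footprint_gb = fp64_footprint_gb * 4 // 8
print(fp32_footprint_gb)  # 768 GB, i.e. ~750 GB
```

This is why the estimate halves: memory for weights scales linearly with bytes per parameter, so dropping from 64-bit to 32-bit precision cuts the footprint in two.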


My understanding is that all the memory has to be GPU memory, with proper interconnects. Still not that crazy, all things considered.
