Hacker News
Intel Unveils Aurora Specifications: 21,248 CPUs and 63,744 GPUs (wccftech.com)
18 points by ashvardanian on May 22, 2023 | 3 comments



Intel and AMD both need to make LLM inference tools plug-and-play with their GPUs.

Otherwise, Nvidia remains the default choice when selecting a GPU for local LLM inference.

Aurora is working with OpenAI and Azure, which is a very positive step in this direction.


Interesting that they're using Intel GPUs, and ~20 petabytes of RAM. Wow.

FTA: "the Aurora supercomputer is outfitted with 10.9 PB of DDR5 system DRAM, 1.36 PB of HBM capacity through the CPUs, and 8.16 PB of HBM capacity through the GPUs. The system DRAM achieves a peak bandwidth of 5.95 PB/s, the CPU HBM achieves a peak bandwidth of 30.5 PB/s and the GPU HBM achieves a peak bandwidth of 208.9 PB/s. For storage, the system is equipped with a 230 PB DAOS capacity that runs at a peak bandwidth of 31 TB/s & is configured in a total of 1024 nodes."
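The parent's "~20 petabytes of RAM" figure checks out against the quoted capacities; a quick arithmetic sketch (values taken from the article excerpt above):

```python
# Memory capacities quoted from the article, in petabytes (PB).
ddr5_system = 10.9  # DDR5 system DRAM
cpu_hbm = 1.36      # HBM capacity through the CPUs
gpu_hbm = 8.16      # HBM capacity through the GPUs

# Sum the three pools to get the total addressable memory.
total_pb = ddr5_system + cpu_hbm + gpu_hbm
print(f"Total memory: {total_pb:.2f} PB")  # ~20.42 PB
```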


I mean, this is as much an advertisement that Intel makes GPUs which perform "as well" as Nvidia's as it is anything else.





