Announcing New AMD EPYC-based Azure Virtual Machines (microsoft.com)
94 points by fniephaus 62 days ago | 4 comments



How do they provide that enormous number of cores (80,000)? Are they talking about distributed computing? If not, how do they connect all the cores to the remaining peripherals?


It says "For MPI Workloads", meaning they are indeed talking about distributed computing.

The sentence before says the systems are interconnected with 200 gigabit InfiniBand.

So they aren't talking about the number of cores in a single system, but rather in a cluster with a high-speed, low-latency interconnect.

MPI: https://en.wikipedia.org/wiki/Message_Passing_Interface

InfiniBand: https://en.wikipedia.org/wiki/InfiniBand
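
For anyone unfamiliar with MPI: each process (a "rank") typically runs on its own core, often on a different machine, and the processes coordinate by passing messages over the interconnect. So the 80,000 cores are the total ranks across a cluster, not cores in one box. A minimal sketch in C (generic MPI, not anything Azure-specific):

    /* hello.c: compile with `mpicc hello.c -o hello`,
       run with e.g. `mpirun -np 4 ./hello` */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total ranks in the job */

        printf("rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }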


200 gigabit is where PCI Express 4.0 is sorely needed: two such ports need more bandwidth than a PCIe 3.0 x16 slot can provide, so they are using two such slots. See the top of http://www.mellanox.com/related-docs/prod_adapter_cards/PB_C... (I know I linked the Ethernet card; the bandwidth situation is the same.)

> Socket Direct technology is enabled by a main card housing the ConnectX-6 and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a 350mm long harness
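
To put rough numbers on the bandwidth claim (line-rate arithmetic only, ignoring PCIe protocol overhead beyond the 128b/130b encoding):

    /* Back-of-the-envelope PCIe x16 throughput, in Gb/s. */
    #include <stdio.h>

    int main(void) {
        /* PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s per lane.
           Both generations use 128b/130b encoding. */
        double gen3_x16 = 16 * 8.0 * 128.0 / 130.0;  /* ~126 Gb/s */
        double gen4_x16 = 16 * 16.0 * 128.0 / 130.0; /* ~252 Gb/s */

        printf("PCIe 3.0 x16: ~%.0f Gb/s\n", gen3_x16);
        printf("PCIe 4.0 x16: ~%.0f Gb/s\n", gen4_x16);
        return 0;
    }

A single 200 Gb/s port already outruns the ~126 Gb/s a Gen3 x16 slot can carry, which is why Socket Direct spreads the card across two slots; a Gen4 x16 slot (~252 Gb/s) would handle it on its own.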


Wow, that's really cool! I haven't seen anything like that in a long time (CrossFire & SLI excepted).

How often do companies bond x16 links together like this? What else benefits from it?




