
Announcing New AMD EPYC-based Azure Virtual Machines - fniephaus
https://azure.microsoft.com/en-us/blog/announcing-new-amd-epyc-based-azure-virtual-machines/
======
option_greek
How do they provide such an enormous number of cores (80,000)? Are they talking
about distributed computing? If not, how do they connect all the cores to the
remaining peripherals?

~~~
mcpherrinm
It says "For MPI Workloads", meaning they are indeed talking about distributed
computing.

The sentence before says the systems are interconnected with 200 gigabit
InfiniBand.

So they aren't talking about the number of cores in a single system, but
rather in a cluster with a high-speed, low-latency interconnect.

MPI:
[https://en.wikipedia.org/wiki/Message_Passing_Interface](https://en.wikipedia.org/wiki/Message_Passing_Interface)
InfiniBand:
[https://en.wikipedia.org/wiki/InfiniBand](https://en.wikipedia.org/wiki/InfiniBand)
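
To make that concrete, here is a minimal MPI sketch in plain C (illustrative
only, nothing Azure-specific): each rank does its own share of the work on its
own node's cores, and only the final combine step crosses the interconnect,
which is why the headline core count describes a cluster rather than a single
machine.

    /* Minimal MPI sketch: every rank computes locally, one Allreduce
     * crosses the fabric to combine the results. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks across all nodes */

        double local = (double)rank;   /* stand-in for real local computation */
        double global = 0.0;

        /* The only step that touches the interconnect (InfiniBand here). */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks, combined result %f\n", size, global);

        MPI_Finalize();
        return 0;
    }

You build it with an MPI wrapper (e.g. mpicc) and launch it across however
many nodes the scheduler hands out; the per-rank work never leaves a node,
and only the reduction travels over the fabric, so a fast, low-latency
interconnect is what makes the cluster behave like one big machine.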

~~~
chx
200 gigabit is where PCI Express 4.0 is sorely needed: two such ports need
more bandwidth than a PCIe 3.0 x16 slot can provide, so they are using two
such slots. Look at the top of
[http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-6_EN_Card.pdf](http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-6_EN_Card.pdf).
(I know I linked the Ethernet card, but the bandwidth situation is the same.)

> Socket Direct technology is enabled by a main card housing the ConnectX-6
> and an auxiliary PCIe card bringing in the remaining PCIe lanes. The
> ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and
> connected using a 350mm long harness
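
Rough numbers behind that, as a quick sketch (the per-lane rates and the
128b/130b encoding are the published PCIe figures, the 200 Gb/s is the port
speed mentioned above; the rest is plain arithmetic):

    /* Back-of-the-envelope: usable x16 slot bandwidth vs. a 200 Gb/s port. */
    #include <stdio.h>

    int main(void) {
        double gen3_lane = 8.0 * 128.0 / 130.0;   /* PCIe 3.0: ~7.88 Gb/s per lane */
        double gen4_lane = 16.0 * 128.0 / 130.0;  /* PCIe 4.0: ~15.75 Gb/s per lane */

        printf("PCIe 3.0 x16: ~%.0f Gb/s usable\n", gen3_lane * 16);  /* ~126 */
        printf("PCIe 4.0 x16: ~%.0f Gb/s usable\n", gen4_lane * 16);  /* ~252 */
        printf("A 200 Gb/s port overruns a Gen3 x16 slot but fits in a Gen4 x16,\n"
               "hence the two-slot Socket Direct arrangement quoted above.\n");
        return 0;
    }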

~~~
HeWhoLurksLate
Wow, that's really cool! I haven't seen anything like that in a long time
(CrossFire & SLI excepted).

How often do companies gang x16 links together like this? What else benefits from it?

