
Rosetta: The Engine Behind Cray’s Slingshot Exascale-Era Interconnect - blopeur
https://fuse.wikichip.org/news/3293/inside-rosetta-the-engine-behind-crays-slingshot-exascale-era-interconnect/
======
matt2000
This might be a super dumb question, but are supercomputers worth it? Meaning
anything that requires custom hardware instead of just groups of coordinated
commodity hardware. It seems you can get maybe 5x (10x?) current performance
but at a far greater (100x?) cost multiple. Or is this extra spend on custom
supercomputer hardware what effectively sponsors the research that allows
Moore's law to continue?

~~~
monocasa
The custom hardware isn't in the processor chips, it's in the interconnect. In
a lot of ways they are groups of coordinated commodity hardware. It's the
coordination that's the secret sauce and what you're paying for. Not every
problem works great in the tree interconnects you see in data centers, so
these systems are big hypercubes. Also the latency is more like PCIe than TCP.
In fact PCIe is closer to Infiniband than original PCI in a lot of ways.
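(To make the hypercube point concrete: in a hypercube, each node's address is a bit string, and nodes are linked iff their addresses differ in exactly one bit, so the worst-case hop count grows only as log2 of the node count. A toy sketch, with illustrative node counts, not Slingshot's actual topology:)

```python
from math import log2

def hypercube_diameter(n_nodes):
    """Worst-case hop count in a hypercube of n_nodes (a power of two):
    one hop per differing address bit, so the diameter is log2(N)."""
    return int(log2(n_nodes))

def hypercube_hops(a, b):
    """Minimum hops between two nodes = Hamming distance of their
    binary addresses, since each hop flips exactly one address bit."""
    return bin(a ^ b).count("1")

print(hypercube_diameter(1024))        # 1024 nodes -> 10 hops worst case
print(hypercube_hops(0b0000, 0b1011))  # addresses differ in 3 bits -> 3 hops
```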

~~~
dekhn
I recently did an analysis, and it looks like for most supercomputers only
about 15% of the total price is the interconnect. I was surprised; I expected
it to be more. Also, most supercomputers aren't hypercubes; they're using
things like the flattened butterfly.

~~~
monocasa
I'm really curious what process you were able to use to separate the component
costs on systems like these. The manufacturers go out of their way to
obfuscate that information.

------
convolvatron
it looks like 'hpc ethernet' is really just a classic cray memory network
design with hop-by-hop retransmission and rigid flow control.

so really a port can fall back to supporting ethernet? maybe it would have
been easier to just put an adapter in there?

just wondering what the real meat is. i do think convergence with
non-supercomputer systems is a great idea and should help quite a bit with NRE.

