

Lightweight Azure InfiniBand Cluster Setup - amckenzie
http://blog.rescale.com/?p=1533

======
insaneirish
InfiniBand needs to die, rather than continue being the walking corpse it is
today.

Ethernet won.

(Please spare me the technical dissertation on IB's advantages; I've worked on
large systems that simultaneously used IB and Ethernet, and IB was/is a
continual nightmare.)

~~~
knappe
So nice of you to cut off any discussion while making a controversial point.

I had this same debate the other day talking with one of the people who worked
on the Janus [0] buildout. The conclusion was that Ethernet hasn't won. If you
need the performance, InfiniBand is your choice.

[0] https://www.rc.colorado.edu/node/212

~~~
itchyouch
Having worked with both IB and 10G Ethernet, the latency numbers provided by
the Azure IB setup are in line with what a 10G kernel-bypass Ethernet setup
can do. IB's main performance advantage comes from userspace message
processing rather than from any inherent advantage of IB over Ethernet.
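
That userspace path is the verbs API: the application drives the NIC through
libibverbs, and per-message sends and receives happen without a syscall. A
minimal sketch, assuming libibverbs is installed (link with -libverbs); it
only enumerates RDMA devices, since a full send/receive path would also need
a queue pair and registered memory:

    /* Enumerate RDMA devices via the userspace verbs library.
       This is just the entry point; data-path calls (posting work
       requests to a queue pair) go through the same library and
       never enter the kernel per message. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }
        for (int i = 0; i < num; i++)
            printf("RDMA device: %s\n", ibv_get_device_name(devs[i]));
        ibv_free_device_list(devs);
        return 0;
    }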

Kernel-bypass Ethernet also doesn't require recompiling or rewriting against
a new networking stack. Plain ol' socket programs in any language can get a
boost by enabling kernel bypass.
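
For example, the client below is plain BSD sockets with nothing
bypass-specific in it. With a stack like Solarflare's OpenOnload you launch
the same unmodified binary under the interposing library (onload ./client,
or LD_PRELOAD=libonload.so) and the socket calls are serviced in userspace.
Sketch only; the address and port are placeholders:

    /* Ordinary TCP client using the plain BSD socket API.
       Run as-is it goes through the kernel; launched under a
       kernel-bypass interposer (e.g. OpenOnload's onload launcher),
       the same calls are intercepted and handled in userspace. */
    #include <stdio.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7777);            /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "ping";
        send(fd, msg, sizeof msg - 1, 0);       /* payload only, no NUL */
        close(fd);
        return 0;
    }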

I'd say that IB had the first-mover advantage and is now entrenched in HPC,
but if HPC is its only niche, it remains to be seen whether that market is
big enough to support IB on its own.

