

Author response to “HPC is dying, and MPI is killing it” objections - cing
http://dursi.ca/objections-continued/

======
nemothekid
I read the first post, and now I'm reading the second, and it's difficult to
parse because I'm not in academia.

As far as I can tell, MPI is actually a standard/protocol for doing
distributed computation, much like Hadoop & Spark. (Confused as to why I've
never heard about it before - does it get absolutely 0 use outside of
academia? Why?)

If I have this right, then a more general title is "Specialized
high-performance machines are on the way out, commodity cluster computing is
the future"

~~~
tanderson92
It gets an incredible amount of use in the physical sciences and applied maths
communities, and that's not only in academia. It's an extensive standard for
message passing that allows distributed (non-shared-memory) computations.
Multiple implementations of the standard exist. It is so widely used that
entire software suites (Trilinos and PETSc, both developed largely by the DOE
rather than in academia) are built on top of its message-passing framework.
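
For anyone who hasn't seen it, here's a minimal sketch of what MPI message
passing looks like in C. This is just an illustration, not from the article;
it assumes an MPI implementation such as Open MPI or MPICH is installed,
compiled with mpicc and launched with something like `mpirun -np 2 ./hello`:

    /* Minimal MPI point-to-point example: rank 0 sends an int to rank 1.
     * No shared memory is involved; each rank is a separate process,
     * possibly on a different machine. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */

        if (rank == 0) {
            int payload = 42;
            /* Send one MPI_INT to rank 1 with message tag 0. */
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload;
            /* Receive one MPI_INT from rank 0 with tag 0. */
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }

        MPI_Finalize();
        return 0;
    }

Libraries like PETSc and Trilinos wrap this kind of explicit send/receive
plumbing in higher-level abstractions, which is a big part of why they exist.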

The author's point is that because it is such a large standard, it is
difficult to grasp, and he would rather use something else.

His opinion notwithstanding, things are unlikely to change significantly, at
least in the near future: just today we saw an announcement of a massive new
specialized cluster being built at Argonne. You can be sure most application
programming on it will be done with MPI, not Spark. Commenter jeffsci on the
original post is on point.

------
simonbyrne
Discussion of the previous article:
https://news.ycombinator.com/item?id=9335441

