
Uncomplicate: Number crunching in Clojure - tosh
http://uncomplicate.org
======
neutronicus
I work on high-performance physics simulation, and I have some critiques of
this post. My number one question when I see something like

> High Performance Computing and GPGPU in Clojure: access the supercomputer on
> your desktop

is "what is your story for performing large computations across several
networked nodes communicating via MPI" and even more so "what is your story
for interacting with job schedulers like Slurm and Torque".

Following this link and clicking around leads me to believe that these
libraries aren't really for that at all, and that they're mostly for doing
high-performance computation on a single node (possibly in concert with
another Clojure library for dealing with MPI). Which is important and also
relevant to my interests! However, I'd encourage the author not to use the
word "supercomputer" and to be much more explicit, because I think a lot of
the people who click on links that say "high-performance parallel computing in
Clojure" will care about the same stories I do and will have a similar
definition of "supercomputer" to mine.

These pages certainly put your best foot forward, but they don't show much
awareness of who your potential users are or what information they're looking
for.

~~~
agibsonccc
Not sure the JVM world is really even targeting MPI. As the author of a
competing library with traction in the big data world (we are mainly used on
large Spark/Flink mixed CPU/GPU clusters), I'd actually make the reverse pitch
and ask: why should we care about HPC?

Usually the folks using MPI and the like know Python or C++ and are fine with
that ecosystem, which is already well served, often directly by vendors like
NVIDIA and Intel.

~~~
neutronicus
Ha. Myopia on my part, I guess.

I am, in fact, a Python / C++ user who probably isn't switching to Clojure any
time soon (unless I switch jobs).

I still think that these posts are a little light on what exactly the
library's cluster story is. Your library, for instance, says "Distributed"
right there on the landing page, which already lets me know, at a glance, what
I'm looking at.

~~~
agibsonccc
Yeah! Totally get it. "Distributed" is an overloaded term, though, which is
why I called out the "JVM version", which is more focused on commodity boxes.

I should note that ours also isn't "GPU first" but more "GPUs as an add-on to
your cluster for deep learning", which is how the resource managers typically
used in JVM land, like Mesos and YARN, treat it.

------
thom
This doesn't even mention Bayadera, his Bayesian data analysis environment:

[https://github.com/uncomplicate/bayadera](https://github.com/uncomplicate/bayadera)

------
skardan
ClojureCUDA, ClojureCL (as in OpenCL) and Neanderthal are libraries developed
by Dragan Djuric. Watch his talk from EuroClojure 2016 in Bratislava:

[https://www.youtube.com/watch?v=bEOOYbscyTs](https://www.youtube.com/watch?v=bEOOYbscyTs)
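For anyone curious what the single-node "number crunching" these libraries target looks like, here is a minimal Neanderthal sketch. This is my own illustration, not from the post; it assumes Neanderthal's native CPU backend (which needs a native BLAS, e.g. Intel MKL, installed) and uses function names from its `uncomplicate.neanderthal.core` API as I understand it:

```clojure
;; Minimal single-node example: BLAS-backed dense linear algebra.
(require '[uncomplicate.neanderthal.core :refer [mm dot]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

;; 2x2 double-precision matrices; dge fills entries column by column.
(def a (dge 2 2 [1 2 3 4]))
(def b (dge 2 2 [5 6 7 8]))

(mm a b)                      ; matrix multiply via optimized native BLAS
(dot (dv 1 2 3) (dv 4 5 6))   ; => 32.0
```

The GPU story is analogous: the same core operations run on GPU-backed vectors and matrices via ClojureCL/ClojureCUDA, but everything stays on a single node, which is exactly the scope question raised upthread.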

