
Scientific computing’s future: Can any coding language top a 1950s behemoth? - privong
http://arstechnica.com/science/2014/05/scientific-computings-future-can-any-coding-language-top-a-1950s-behemoth/
======
pwang
So was this just a commissioned piece on Julia? Julia is a very interesting
new development with a very sharp dev community - but I find it laughable
that an article about de-throning FORTRAN barely makes even a passing
reference to Python, which is _currently_ sweeping through universities and
scientific research labs across the world.

We won't solve tomorrow's problems with yesterday's perspective. Efficiency of
compute is only one portion of the problem, and so much of the article fixates
on things close to the machine, instead of recognizing that the real barrier
to innovation in scientific computing is _computational literacy_ among
scientists.

IMHO, Haskell and Clojure are DOA in this regard. They may be great for
software developers, but not so much for geneticists and astrophysicists and
nuclear chemists. Python's language, libraries, and community ethos are a very
compelling mix, and have already crossed the chasm into the "mainstream" of
scientific computing, even if it's still early days.

~~~
whyenot
> I find it laughable that any article about de-throning FORTRAN barely even
> makes a passing reference to Python

Python is just glue. It's not possible to do _any_ numerical heavy lifting in
Python itself: NumPy and SciPy are not written in Python; all the important
stuff is written in C and/or Fortran. The same goes for R. It's a great
language for exploratory programming, thanks to the REPL, but the heavy
lifting is not done in R; it's done in another language. Python will never
displace Fortran or C for numerical code. It _may_ replace another language
(R, Matlab, ...) for the _user interface_. The power of Julia, and to a
lesser extent Clojure and Haskell, is that they have both the wonderfully
interactive REPL _and_ can do the heavy work that is traditionally done in
Fortran / C. Common Lisp is another possibility, but it already had its day
in the sun.
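The glue point can be sketched in a few lines: the Python layer mostly orchestrates, while the actual loop runs in compiled code. Below, a pure-Python inner product sits next to the same computation via `np.dot`, which dispatches to compiled (C/Fortran/BLAS) routines - the function name `dot_pure_python` is just illustrative:

```python
import numpy as np

def dot_pure_python(xs, ys):
    """Inner product with the loop executed by the Python interpreter."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

xs = [float(i) for i in range(1000)]
ys = [float(i) for i in range(1000)]

# Same result either way, but np.dot never runs its inner loop in Python:
# the heavy lifting happens in compiled code behind the call.
slow = dot_pure_python(xs, ys)
fast = float(np.dot(np.asarray(xs), np.asarray(ys)))
assert abs(slow - fast) < 1e-6
```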

~~~
pwang
There is work underway to bridge this gap. The fundamental bet my company is
making is essentially the same one Julia makes: higher-level languages
combined with dynamic compilation are the only route to performance in our
modern world of heterogeneous hardware.

My company sells a Python-to-GPU and Python-to-x86 compiler. It matches or
beats hand-rolled C and C++ in some cases, and is loads easier to deal with
when it comes to CUDA work. It's still in its infancy, but the approach is,
to my mind, definitely validated.

If you think about it, why _should_ C be the speed king? Most programmers
cannot reason about cache coherency to save their lives, and that's the
dominant cost in real-world performance. Many of the big numerical codes
follow a handful of high-level patterns of data movement; if a compiler has
greater visibility into the structure of both the data and the algorithm, it
has an easier time parallelizing and optimizing than if it has to rely on the
programmer to "lower" the algorithm to a layer that's just a hair above RTL.
"C is just portable assembly" and all that.
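One concrete instance of the data-movement point: summing a 2-D array row-by-row versus column-by-column computes the same value, but the traversal order determines whether memory is touched contiguously - exactly the kind of transformation (loop interchange) a compiler can apply automatically if it can see the algorithm's structure. Python lists of lists don't have C's flat layout, so this is a sketch of the access-pattern idea, not a benchmark:

```python
def sum_row_major(grid):
    """Traverse each row contiguously: cache-friendly for row-major storage."""
    total = 0
    for row in grid:
        for v in row:
            total += v
    return total

def sum_col_major(grid):
    """Same reduction, strided access: jumps to a different row each step."""
    total = 0
    for j in range(len(grid[0])):
        for i in range(len(grid)):
            total += grid[i][j]
    return total

grid = [[i * 10 + j for j in range(10)] for i in range(10)]
# Identical result; only the memory-access pattern differs, which is what
# a structure-aware compiler is free to rewrite.
assert sum_row_major(grid) == sum_col_major(grid)
```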

Additionally, FWIW, if you really want to pick nits, a lot of NumPy is
actually _not_ written in C directly, but in a custom macro template system
that generates C code for each of the core types (int8, uint8, int16, etc.
etc.). So even for the low-level guts, code generation (albeit via a very
simple mechanism) is the current approach.
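That stamp-out-a-kernel-per-dtype scheme can be sketched with a plain string template: one generic kernel body, instantiated once per concrete C type. The template syntax below is made up for illustration and is not NumPy's actual `.c.src` notation:

```python
# Toy version of generating one C kernel per dtype, in the spirit of
# NumPy's template-driven code generation (syntax here is illustrative).
TEMPLATE = """\
static void add_{name}({ctype} *out, const {ctype} *a, const {ctype} *b, long n)
{{
    for (long i = 0; i < n; i++) {{
        out[i] = a[i] + b[i];
    }}
}}
"""

DTYPES = [("int8", "signed char"), ("uint8", "unsigned char"),
          ("int16", "short"), ("float64", "double")]

generated = "\n".join(TEMPLATE.format(name=name, ctype=ctype)
                      for name, ctype in DTYPES)
print(generated.count("static void"))  # one kernel per dtype → 4
```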

