Fortran: The Ideal HPC Programming Language (acm.org)
24 points by xel02 on June 23, 2010 | 5 comments



As someone who recently began to write Fortran for a living: no. The minute you get away from FORmula TRANslation and into simulation logic, the weaknesses of the language become hugely apparent. The complacent "What do you know! Fortran actually is perfect! HIGH-FIVE" mentality in HPC is a little frustrating to someone new to the field.


I had the questionable pleasure of using Fortran 90/95 at university, for HPC of course. Ignoring its history, I would have said it was a reasonable stab at an array-manipulation DSL, but I think it's terrible for structuring code of any kind. I haven't read the full article, but it seems to throw around numbers on the order of 1,000 LOC, which is the size of undergraduate homework projects these days, not serious simulations. As soon as you go beyond a simple grid of scalar values, you desperately yearn for something better.

I always figured people stuck with Fortran because they literally hadn't used anything else other than maybe bash scripting or MATLAB since they switched from CDC 6600 assembly. I find it surprising that people researching languages would reach this conclusion. Okay, Fortran is a first-class citizen in MPI and OpenMP land, but still.

Another thing that struck me: considering the only substantial contemporary use of Fortran is in HPC, the compilers do a terrible job of optimisation. Function inlining, even of trivial built-ins like the dot product, doesn't seem to work reliably. (This was with Intel and/or Portland, IIRC.)


Optimization is typically a strength of Fortran. Do you recall which compiler switches you were using for the respective compilers, and do you have a code snippet? I have access to many compilers and would be happy to take a look at the code gen you're seeing versus what is expected.


I graduated almost 4 years ago, so this would have been even longer ago, and I can't remember the exact situation. I think I was for some reason performing dot products on the 3-vector rows of one or two Nx3 (or 3xN?) matrices, possibly to get magnitudes. Using the dot_product() intrinsic (I think that's the one) was substantially slower than a simple multiply/accumulate loop: the latter was successfully unrolled, while the former produced an actual function call, which of course meant no unrolling. I'm pretty sure this was -O2 on an Intel compiler.

This was in some exercise in optimising an existing algorithm's code. I remember scoring better than the lecturer's version despite missing the uninlined dot product. :)
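
A minimal sketch of the pattern being described, assuming real arrays and an illustrative size (the program, variable names, and Nx3 layout are my guesses, not the original exercise code): the dot_product intrinsic applied to each 3-vector row versus a hand-written multiply/accumulate loop.

    ! Hypothetical reconstruction, not the original exercise code.
    program dot_compare
      implicit none
      integer, parameter :: n = 1000000
      real, allocatable  :: a(:,:), mags_intrinsic(:), mags_loop(:)
      integer :: i, j
      real    :: acc

      allocate(a(n,3), mags_intrinsic(n), mags_loop(n))
      call random_number(a)

      ! Version 1: the dot_product intrinsic on each 3-vector row.
      ! Whether this gets inlined or becomes a call depends on the
      ! compiler and flags (e.g. -O2 on ifort, per the comment above).
      do i = 1, n
        mags_intrinsic(i) = dot_product(a(i,:), a(i,:))
      end do

      ! Version 2: explicit multiply/accumulate over the 3 components,
      ! which the commenter reports the compiler unrolled successfully.
      do i = 1, n
        acc = 0.0
        do j = 1, 3
          acc = acc + a(i,j) * a(i,j)
        end do
        mags_loop(i) = acc
      end do

      ! Sanity check that both versions agree.
      print *, maxval(abs(mags_intrinsic - mags_loop))
    end program dot_compare

Timing the two loops (e.g. with system_clock) at -O2 on the Intel and Portland compilers would show whether the intrinsic still costs a function call in current releases.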


"Ideal" probably needs to be approached with as much caution as "never."



