
Due to its legacy, Fortran (like COBOL) is still heavily used in its niche application, i.e. numerical computation. The irony is that some of the languages touted as Fortran replacements, Matlab, Python, and to some extent Julia, still rely on libraries written in Fortran (together with assembly) because of their superior performance.

D shows that you can stay in one language and still perform very fast. With its numerical library (Mir) you can get both productivity and performance that is even better than OpenBLAS (which the Matlab, NumPy, and Julia linear algebra routines are based on) [1].

[1] http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...




Note that a lot of the Julia ecosystem is using Julia-based BLAS implementations now (of which there are multiple):

https://discourse.julialang.org/t/ann-paddedmatrices-jl-juli...

https://github.com/MasonProtter/Gaius.jl

https://github.com/mcabbott/Tullio.jl

These libraries have been incorporated in various spots in the machine learning, scientific machine learning, differential equation, and other libraries because they often outperform BLASes like OpenBLAS. While the core language still links to OpenBLAS by default, this is up for debate for future versions of Julia given these developments (and work is being done in the standard library to better support swapping in different BLAS implementations).
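
For a sense of what these look like in use, here's a minimal sketch with Tullio (following its README; exact APIs may shift between package versions):

    using Tullio, LoopVectorization  # loading LoopVectorization lets @tullio emit SIMD loops

    A = rand(Float32, 256, 256)
    B = rand(Float32, 256, 256)

    # Einstein-notation matmul: expands to multithreaded, vectorized Julia loops,
    # with no Fortran BLAS call involved.
    @tullio C[i, j] := A[i, k] * B[k, j]

    C ≈ A * B  # matches the default OpenBLAS-backed `*`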


How much is "a lot" and how much of a debate is there?

As far as I can tell from a quick glance, Gaius.jl is no longer maintained, and both it and Tullio still consistently lose out to MKL across a wide range of matrix sizes and usually lose out to OpenBLAS on larger matrices. Moreover, most victories for Tullio seem to be very machine-dependent, with different users reporting OpenBLAS still beating Tullio even on smaller matrix sizes.


>As far as I can tell from a quick glance, Gaius.jl is no longer maintained, and both it and Tullio still consistently lose out to MKL across a wide range of matrix sizes and usually lose out to OpenBLAS on larger matrices.

It has mostly been replaced by PaddedMatrices.jl.

>Moreover, most victories for Tullio seem to be very machine-dependent, with different users reporting OpenBLAS still beating Tullio even on smaller matrix sizes.

On kernels which OpenBLAS directly implements. But Tullio is building more general kernels and has a pretty big win on tensor calculations that are not just one GEMM.
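
For example (a sketch along the lines of Tullio's README; the index ranges are inferred from the shifts), a small 2-D correlation is one @tullio expression but not one GEMM:

    using Tullio

    A = rand(100, 100)
    K = rand(3, 3)

    # "Valid" 2-D correlation: a single fused loop nest,
    # no im2col-plus-GEMM rewrite needed.
    @tullio C[i, j] := A[i+k-1, j+l-1] * K[k, l]

    size(C)  # (98, 98) -- Tullio worked out the shifted ranges of i and j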


The goal of Tullio.jl is certainly not to beat MKL at its own game. But it can still be 4-5 times faster on some permutedims + matmul operations, by fusing them.
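
A sketch of the kind of fusion meant here (2-D for brevity; the bigger wins are on permutations BLAS can't absorb into its transpose flags):

    using Tullio, LoopVectorization

    A = rand(500, 500); B = rand(500, 500)

    # Unfused: permutedims materializes a transposed copy, then BLAS gemm runs.
    C1 = permutedims(A) * B

    # Fused: one loop nest indexes A transposed in place, no intermediate array.
    @tullio C2[i, j] := A[k, i] * B[k, j]

    C1 ≈ C2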

Besides handling operations which aren't standard kernels, handling weird number types efficiently would be nice. (Being able to compile lighter-weight Julia images without BLAS libraries might be nice too, for other purposes.)
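
Since @tullio lowers to ordinary Julia loops (using SIMD intrinsics only where the element type allows), it already runs for element types no Fortran BLAS kernel covers, e.g. exact rationals:

    using Tullio

    A = rand(1:10, 4, 4) .// 7
    B = rand(1:10, 4, 4) .// 3

    @tullio C[i, j] := A[i, k] * B[k, j]  # C is a Matrix{Rational{Int}}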


I'd be hugely surprised if they beat tuned-for-local-machine OpenBLAS or BLIS. Beating pre-compiled OpenBLAS is pretty trivial; I've seen interpreted code do that.


OpenBLAS supports dispatching based on architecture. On Skylake-X, it was using SKYLAKEX kernels, not NEHALEM.
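
You can check which core a runtime-dispatching OpenBLAS picked from within Julia; the config string includes the selected target (the exact call depends on the Julia version):

    using LinearAlgebra
    BLAS.openblas_get_config()  # Julia <= 1.6: e.g. "... DYNAMIC_ARCH ... SkylakeX MAX_THREADS=32"
    # BLAS.get_config()         # Julia >= 1.7 (libblastrampoline) equivalent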



