
Futhark 0.9.1 released – now with CUDA backend - Athas
https://futhark-lang.org/blog/2019-02-08-futhark-0.9.1-released.html
======
pdimitar
Interesting. I wonder if there's a curated list of languages that have GPU
backends? I think it's high time for most mainstream languages to add GPUs to
their concurrency / parallelism story.

~~~
pjmlp
Off the top of my head: Haskell, Java, .NET, Rust, C, C++, Fortran, D, Julia.

~~~
nostrademons
So LLVM?

~~~
pjmlp
Actually SPIR and PTX, and eventually OpenCL as well.

~~~
nostrademons
But is it a SPIR/PTX backend for LLVM (and any language that targets it), or
is it a direct backend for Haskell/Rust/Java/C# and all the other
implementations that you listed?

~~~
pjmlp
It's a mix of SPIR/PTX backends for LLVM and proprietary toolchains.

Java:

[https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/...](https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.80.doc/docs/gpu_developing_cuda4j.html)

[http://www.jcuda.de/](http://www.jcuda.de/)

[https://clojurecuda.uncomplicate.org/](https://clojurecuda.uncomplicate.org/)

Haskell:

Accelerate, [http://www.acceleratehs.org/](http://www.acceleratehs.org/)

Obsidian, [https://github.com/svenssonjoel/Obsidian](https://github.com/svenssonjoel/Obsidian)

.NET:

Hybridizer, [http://www.altimesh.com/hybridizer-essentials/](http://www.altimesh.com/hybridizer-essentials/)

Alea GPU, [http://www.quantalea.com/](http://www.quantalea.com/)

NMath Premium, [https://www.centerspace.net/nmath-premium/](https://www.centerspace.net/nmath-premium/)

D:

DCompute, [https://github.com/libmir/dcompute](https://github.com/libmir/dcompute)

Julia:

JuliaGPU, [https://github.com/JuliaGPU](https://github.com/JuliaGPU)

------
olodus
Last year I had the wonderful opportunity not only to play around with Futhark
but also to attend a lecture from Troels himself when I took a course in
parallel functional programming. I really like the language and am really
interested in what it could do in the future. When I last used it there were
still a few kinks to work out, but from what I've seen of the recent releases,
more and more of them are being ironed out. I love the direction it is going.
Best of luck for the future of the project.

------
4thaccount
Anyone use Futhark for genetic algorithms or other evolutionary computing uses
yet?

~~~
sevensor
I've done a lot of work with GAs, and in my experience the effort is better
spent parallelizing the model rather than the evaluations. (You can use a fast
model for lots of other stuff besides optimization.) However, if you already
have a compact model that's fast to evaluate yet difficult to optimize, then
you might have a good candidate for this approach.
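A minimal pure-Python sketch of that division of labor (the toy sphere model, the function names, and all parameters are my own illustration, not sevensor's setup): the GA loop stays serial and cheap, and every candidate is funneled through a single batch model evaluation. That one function is the part you would make parallel, e.g. as a data-parallel map over the population in Futhark or on a GPU.

```python
import random

def model_batch(population):
    # Stand-in "model": negative sphere function (optimum 0 at the origin).
    # In practice this is the expensive simulation, and this whole function
    # is what you would hand to the GPU: one independent evaluation per
    # candidate, i.e. a single data-parallel map over the population.
    return [-sum(x * x for x in cand) for cand in population]

def step(population, rng, mutation=0.1):
    # Truncation selection: keep the better half as parents.
    fitness = model_batch(population)
    ranked = [c for _, c in sorted(zip(fitness, population), reverse=True)]
    parents = ranked[: len(ranked) // 2]
    # One-point crossover plus Gaussian mutation.
    children = []
    while len(children) < len(population):
        a, b = rng.sample(parents, 2)
        cut = rng.randrange(1, len(a))
        child = [g + rng.gauss(0, mutation) for g in a[:cut] + b[cut:]]
        children.append(child)
    return children

rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(40)]
first_best = max(model_batch(pop))
for _ in range(60):
    pop = step(pop, rng)
best = max(model_batch(pop))
```

Note that nothing in the GA loop itself needs to be parallel; as long as `model_batch` is fast, the whole optimization is fast, and the same fast model can be reused for anything else you want to do with it.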

~~~
4thaccount
The problems I'm interested in have over 12k constraints and need to solve
pretty fast. They're among the hardest mixed-integer programming (MIP)
problems out there, and I'd love to see how well a GA compares on a real
model, but it will take me a long time to create a comparable GA model.

~~~
sevensor
12k constraints? Without a major change in formulation, you're almost
certainly way better off with convex optimization. As long as the problem is
convex, anyway.

------
fenollp
Points for helping against monopolies. I have been following the project for
some years and am very happy to see it grow, and with a functional front end
at that!

Now, where are Futhark's GEMM implementation and the benchmarks against
cuDNN?

~~~
twtw
cuBLAS does GEMM.

