
Writing extendable and hardware-agnostic GPU libraries - simondanisch
https://techburst.io/writing-extendable-and-hardware-agnostic-gpu-libraries-b21c145a8dad
======
m_mueller
Very interesting. How does this deal with data regions? Do you have an array
type that explicitly resides on the GPU, i.e. to pass it to a CPU routine like
file I/O you need to wrap a cudaMemcpy? Or does it manage the device state?

~~~
simondanisch
> type that explicitly resides on GPU

Yes, that's what we do! More domain-specific types will come, e.g. views,
lazy arrays, and sparse arrays.

> or does it manage the device state?

It aims to manage most of the device state! Some more complex GPU interactions
will still need to drop back to manual management, but the hope is to fold
more and more of that into a high-level API as the need arises.

------
fulafel
How does the current Julia GPU support compare to other high-level GPU
languages like Futhark and Halide?

~~~
simondanisch
First of all, Julia is a proper multi-purpose programming language! I consider
that important, since I want to learn one language and then do most of my
very different projects in it.

I haven't used Futhark or Halide, but I know they do some quite interesting
compiler optimization.

We plan to use Julia's metaprogramming and code-generation capabilities to
enable similar optimizations. Hopefully we can publish some new insights
soon!

From a look at Halide's and Futhark's examples, I'd say that it will be more
pleasant to write generic GPU kernels in Julia, and that Julia enables a more
interactive workflow.
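A hedged sketch of what "generic" means here: in Julia the same broadcast expression works for any array type, so one definition can serve both CPU arrays and GPU arrays (the `CuArray` lines assume a CUDA backend such as CuArrays.jl is available):

```julia
# One generic definition, written once, with no hardware-specific code:
saxpy(a, x, y) = a .* x .+ y

x = rand(Float32, 1024)
y = rand(Float32, 1024)
saxpy(2f0, x, y)                # runs on the CPU for plain Arrays

# With a GPU backend loaded, the identical call compiles a GPU kernel
# (assumption: CuArrays.jl or a similar GPUArrays backend):
# using CuArrays
# dx, dy = CuArray(x), CuArray(y)
# saxpy(2f0, dx, dy)            # returns a CuArray; ran on the device
```

The dispatch happens on the array type, which is what lets a library stay hardware-agnostic: backends supply the array type, user code supplies the generic kernels.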

