
Diffeq.jl v6.4: Full GPU ODEs, Neural ODEs with Batching on GPUs, and More - ViralBShah
http://juliadiffeq.org/2019/05/09/GPU.html
======
eigenspace
Wow, that's really cool. I feel like DiffEq is one of those libraries that
seems to leverage every one of Julia's cool features that I can think of.

Making differential equations 'just work' on a GPU is super cool. In almost
any other language, it'd be impossible to get this level of code reuse, where
a user can bring their own differential equation that doesn't even know about
GPUs and have the library solve it on the GPU simply by passing a GPUArray as
the initial condition.

________________________________

Also, since the blog post doesn't seem to link to it, here's a link to the
actual DifferentialEquations.jl repo:
[https://github.com/JuliaDiffEq/DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl)

~~~
The_rationalist
Well, OpenMP or OpenACC are still simpler.

~~~
gugagore
Do you mean they are simpler to use? Because those are just models of parallel
computation. A little like saying Turing machines are simpler than Python.

------
microcolonel
Which GPUs? I'm guessing by the "CuArrays" identifier that this is CUDA-only;
it'd be nice if they said _NVIDIA_ GPUs instead of getting my hopes up. :'-(

~~~
ChrisRackauckas
Well, the implementation is generic over the array type, so if someone creates
an array type which builds broadcast kernels with OpenCL, then it would work
with OpenCL. Sadly, CLArrays.jl never got updated to Julia v1.0, so until
someone does that it's just a dream. However, DifferentialEquations.jl will be
waiting, ready for that to happen :).

~~~
dnautics
I'm honestly surprised that AMD doesn't get more aggressive about developing
towards Julia as a deep learning target; it would be way easier for their devs
to optimize for performance parity there than in, say, TensorFlow's spaghetti
code. IIRC, the raw FLOPS performance of AMD GPUs is even with or better than
Nvidia's, but their unoptimized libraries kick it in the shins (I could be
misremembering, though).

~~~
ViralBShah
There are folks working on the AMDgpu target for Julia, and I see that some
early work has already happened in terms of building LLVM and such. I suspect
the AMD toolchains are not as strong - but with the right contributors, it
should be possible to leverage the work done on the Nvidia GPUs for the AMD
GPUs and even other accelerators.

~~~
celrod
As someone who owns a couple of Vega (AMD) GPUs, I'm watching a few of
jpsamaroo's GitHub repos.

In particular:
[https://github.com/jpsamaroo/HSAArrays.jl](https://github.com/jpsamaroo/HSAArrays.jl)

I don't know how far along it is (no documentation yet), but those are
definite steps.

------
adamnemecek
I've been writing julia a bunch lately. It's a joy to use.

~~~
amrrs
Were you a Python convert? Could you share your transition journey, like how
much time it took or what difficulties you faced?

~~~
speps
In my own recent experience, Julia is awesome, but the documentation could
sometimes be clearer (e.g. missing examples for each method, missing cross
references). Also, when searching for details on some feature, it's hard to
figure out which version someone is talking about; some versions (0.6, for
example) were around for ages and most forum posts refer to them, but some of
that material is now deprecated or replaced.

Otherwise, I would probably use it for everything now given the performance
and the very good FFI story.

~~~
ViralBShah
Please file issues when you find something is less than perfect. That's the
first step towards making it better!

------
skdotdan
Noob question, having played around with CUDA and NVCC a bit: in languages
such as Julia or Python, why does every algorithm have to be specifically
adapted to be able to run on GPUs? Couldn't algorithms be written from higher-
level building blocks? E.g. implement map, reduce, and scanl in CUDA/OpenCL,
and have a default parameter like backend='cpu' that can be set to 'gpu'.

~~~
krastanov
Julia does exactly that (even better than setting a flag: it is based on
subtypes). Algorithms are usually written against `AbstractArray`. The usual
RAM/CPU array (i.e. the type `Array`) is a subtype, and the CUDA array stored
on the GPU is also a subtype. Most general-purpose code does not care which
type you use.
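A minimal sketch of that dispatch story (the `sumsq` function here is my own illustrative example):

```julia
# Written once against the abstract type; no mention of any backend.
sumsq(x::AbstractArray) = sum(abs2, x)

a = Float32[3.0, 4.0]
sumsq(a)              # runs on the CPU with a plain Array

# using CuArrays
# g = cu(a)           # a CuArray is also an AbstractArray...
# sumsq(g)            # ...so the same method runs on the GPU
```

No `backend` flag is needed: the concrete type of the argument selects the implementation of `sum` and friends.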

~~~
ViralBShah
Definitely look at the whole JuliaGPU ecosystem, and the CUDAnative.jl
package. There are two big questions. First, how do you get code generation
for targets such as GPUs from a dynamic, high-level language like Julia? That
is what CUDAnative.jl achieves. The second question is what abstractions Julia
should present to programmers so that increasingly large codebases can
automatically leverage GPUs. Packages such as CuArrays.jl are early answers,
but there is much work to be done here.
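For a sense of what CUDAnative.jl enables, here's a sketch of a hand-written kernel, adapted from the standard CUDAnative vector-add example. It requires an NVIDIA GPU and a working CUDA install, so take it as illustrative rather than verified:

```julia
using CUDAnative, CuArrays  # NVIDIA-only

# A kernel written in plain Julia, compiled to PTX by CUDAnative.
function vadd!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = cu(ones(Float32, 1024))
b = cu(ones(Float32, 1024))
c = similar(a)
@cuda threads=256 blocks=4 vadd!(c, a, b)  # launch the kernel on the GPU
```

CuArrays.jl sits one level above this: broadcast expressions like `c .= a .+ b` generate such kernels automatically, which is what lets array-generic libraries run on the GPU without any hand-written kernels.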

------
pepijndevos
I'm contemplating writing a fun electromagnetic simulator, and so far I had my
eyes on Rust and/or Futhark, with some prototypes in Python and even Matlab,
but now I may need to consider Julia too...

------
Certhas
Incredible work. Thank you, everyone, for the great release. This massively
improves our use case (sparse matrix equations) and I am looking forward to
playing with it.

~~~
ChrisRackauckas
Oh boy, just wait until next release! Sparse matrices are one of our big
focuses for the summer:
[https://github.com/pkj-m/SparseDiffTools.jl/issues/2](https://github.com/pkj-m/SparseDiffTools.jl/issues/2)

------
mlevental
this happens literally every single time I try to hop on the Julia bandwagon:
run example -> example breaks.

1\. fresh install of Juno (1.1.0)

2\. add latest packages

3\. something goes wrong. this time it's

    
    p = destructure(model)
    

from the example page, which gives

    
    UndefVarError: destructure not defined
    top-level scope at none:0
    

[https://i.imgur.com/8fCpVrR.png](https://i.imgur.com/8fCpVrR.png)

how the hell does anyone get anything done when the codebase is shifting this
fast!?

~~~
ChrisRackauckas
That's my bad. Somehow an earlier draft of the post was used; it has since
been updated with working code. I guess that's what happens when you lose
internet and push to a GitHub repo from your phone :P.

~~~
mlevental
:) well thanks!

edit:

UndefVarError: JacVecOperator not defined

:(

also this

[https://gist.github.com/makslevental/fe9144d196cade685d333cd...](https://gist.github.com/makslevental/fe9144d196cade685d333cd862c012c0)

I realize it's just a warning, but it shows another undef error.

~~~
ChrisRackauckas
Are you on the latest versions? See what running a package update does. If
that doesn't work, file an issue with your package Manifest so we can track
this down. Thanks.

------
vsskanth
Anyone tried using Julia's solvers to run FMI ModelExchange binaries?

~~~
ChrisRackauckas
Not yet, but we are looking into it. I'm hoping to get some funding for this,
since it's a no-research software dev project though, which would be great for
a just-out-of-undergrad student looking to transition to software dev, hence
need money to pay such a student.

~~~
vsskanth
Thank you.

