
How does this compare/contrast with OpenCL.jl [1]?

When working in Julia, what are the benefits of tying oneself to CUDA (and giving up accelerated execution on on-die graphics or AMD GPUs) -- or does NVIDIA just not work reliably/well with OpenCL?

[1] https://github.com/JuliaGPU/OpenCL.jl




OpenCL.jl is purely the runtime part, i.e. it still requires you to write the OpenCL kernel code by hand, after which you can use the Julia wrapper to compile, manage, and launch that code.
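For reference, this is roughly what that looks like: the kernel itself is a string of OpenCL C that gets compiled at runtime. A minimal vector-add sketch, adapted from the OpenCL.jl README (the exact API may have changed since):

    using OpenCL

    # The kernel is plain OpenCL C, passed around as a string.
    const sum_kernel = "
       __kernel void sum(__global const float *a,
                         __global const float *b,
                         __global float *c)
        {
          int gid = get_global_id(0);
          c[gid] = a[gid] + b[gid];
        }
    "

    a = rand(Float32, 50_000)
    b = rand(Float32, 50_000)

    device, ctx, queue = cl.create_compute_context()

    # Copy the inputs to the device, allocate the output.
    a_buff = cl.Buffer(Float32, ctx, (:r, :copy), hostbuf=a)
    b_buff = cl.Buffer(Float32, ctx, (:r, :copy), hostbuf=b)
    c_buff = cl.Buffer(Float32, ctx, :w, length(a))

    # Compile the OpenCL C source, then launch the kernel.
    p = cl.Program(ctx, source=sum_kernel) |> cl.build!
    k = cl.Kernel(p, "sum")
    queue(k, size(a), nothing, a_buff, b_buff, c_buff)

    r = cl.read(queue, c_buff)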

My project also provides compiler support for lowering Julia to CUDA assembly, so you don't need to write CUDA code yourself. On top of that, my runtime contains (proof-of-concept) higher-level wrappers that make it easier to call CUDA kernels, upload data, etc.
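To give an idea, the same vector addition with the kernel written in Julia looks roughly like this (a minimal sketch; the package name, @cuda macro, and CuArray type are illustrative stand-ins, not necessarily the exact names my wrappers use):

    using CUDA  # illustrative package name

    # The kernel is ordinary Julia code; it gets compiled down to
    # CUDA assembly (PTX) instead of being written in CUDA C.
    function vadd(a, b, c)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(c)
            c[i] = a[i] + b[i]
        end
        return nothing
    end

    a = CuArray(rand(Float32, 1024))  # upload via the array wrapper
    b = CuArray(rand(Float32, 1024))
    c = similar(a)

    # Launch: the macro compiles vadd for the GPU and schedules it.
    @cuda threads=256 blocks=4 vadd(a, b, c)

    Array(c)  # download the result

The point being that there is no separate kernel language: the same function could, in principle, be compiled for the CPU or for another back-end.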

Concerning tying yourself to the NVIDIA stack: it's still the most mature and versatile toolchain, which is why I picked it in the first place. My long-term plan was to switch over to SPIR (or some other cross-vendor stack) as soon as possible. At that point, porting user code to the new back-end would (theoretically) not require much effort, since the kernels are written in Julia rather than CUDA C (except for the runtime interactions, of course).


Thank you for emphasising the part about kernels being Julia code -- I missed that entirely!

As for NVIDIA/CUDA being more mature -- that was what I feared. It seems to be a common sentiment in the discussions I've seen comparing OpenCL and CUDA.



