Show HN: PlaidML, open source deep learning for any GPU
52 points by hedgehog 8 months ago | 16 comments
Our company Vertex.AI has been working on this for a while, but this is the first public release. We're starting by using PlaidML to bring OpenCL support to Keras, with more frameworks and platforms coming. Yes, this means you can use your AMD GPU for deep learning dev. Sorry, no Mac or Windows support yet, although the brave can try building from source (it should work).



Congrats! Is there a short summary of how your approach differs from existing stacks (like TensorFlow or NNVM)?

Thanks! Our functional goals are similar to NNVM (deep learning for all devices) but I don't know much about their technical approach. I think some of the developers are here in Seattle actually so hopefully we'll meet them at some point.

As far as differences vs TensorFlow, Keras, etc, we're not aiming to replace the developer-facing Python APIs. You can run Keras on top of PlaidML now and we're planning to add compatibility for TensorFlow and other frameworks as well. The portability (once we have Mac/Win) will help students get started quickly. It's much easier to write kernels in our Tile language than raw OpenCL or CUDA so we think this will help speed up research as well. On the throughput side we already outrun cuDNN in some cases and it's likely we can give Volta users a boost once we add support for its tensor unit.
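To make the "run Keras on top of PlaidML" point concrete: Keras picks its backend from the KERAS_BACKEND environment variable, so switching an existing script over is roughly the following (a hedged sketch; the backend module name `plaidml.keras.backend` is taken from the PlaidML release instructions and should be checked against your installed version):

```python
import os

# Point Keras at the PlaidML backend shim *before* importing keras.
# The module path is assumed from the PlaidML docs, not invented here.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# from keras import backend as K  # keras would now load PlaidML underneath
```

Existing Keras model code then runs unchanged; only the backend selection differs.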

To expand on this a bit, NNVM is mostly a graph serialization format and graph optimizer with a cuda/cudnn (and now TVM) backend. In this respect NNVM is very similar to XLA. Our approach handles both full graph optimization (though we have a lot of work to do there) and kernel creation and optimization through an intermediate language called Tile. TVM seems somewhat derivative of our approach, though it lacks a reasonable mechanism for optimizing kernels.

PlaidML and Tile are able to create efficient kernels for just about any architecture. This approach reduces dependencies and ensures that new hardware will just work.
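For readers unfamiliar with Tile: an op is written as a single index contraction, and the backend derives loop bounds and a tuned GPU kernel from that one line. As a rough illustration (the Tile syntax in the comment is from memory of the PlaidML docs, not verbatim), here is what a matmul contraction means, spelled out in plain Python:

```python
# In Tile, a matmul is roughly:
#   function (A[M, K], B[K, N]) -> (C) { C[i, j : M, N] = +(A[i, k] * B[k, j]); }
# i.e. "C[i, j] accumulates A[i, k] * B[k, j] over the reduction index k".
# The plain-Python equivalent of that contraction:

def matmul_contraction(A, B):
    M, K = len(A), len(A[0])
    N = len(B[0])
    C = [[0] * N for _ in range(M)]
    for i in range(M):          # output index i : M
        for j in range(N):      # output index j : N
            for k in range(K):  # reduction index k, combined with "+"
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul_contraction([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

The point of the notation is that the author writes only the contraction; the index ranges, loop order, tiling, and vectorization are left to the compiler.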

We intend to have NNVM and TensorFlow backends in the future. The Keras backend is only 2000 lines of code (thanks to Tile).

As the owner of a Mac with an AMD GPU, I'm very excited to see support coming soon. Been wanting to get into ML but been pretty discouraged with all of the popular libraries being CUDA-only.

Actually, if you're adventurous, you can clone it from GitHub and build it on your Mac with Bazel, but your experience may suck in terms of performance (that's why it's not officially released yet). Since you have an AMD GPU, though, it will probably work well.

You would need to:

    bazel build -c opt plaidml:wheel plaidml/keras:wheel

and then

    sudo pip install bazel-bin/plaidml/*.whl bazel-bin/plaidml/keras/*.whl

The Mac build should be coming in the next two weeks. We're just tweaking compiler parameters to make sure Intel GPUs work as well as they should.

Great! I look forward to that announcement.

Pretty good perf on AMD compared to CuDNN! What conv kernels are you using?

Is PlaidML for inference only?

Nice! It's all there in the source; the kernels are generated at runtime. Training works, but has some problems with memory consumption that we will fix.

Sweet! If you can really get 16-bit training going at 26 TFLOPS with Vega 64, then you've done something AMD themselves can't seem to pull off.

Yeah we'll be able to as soon as their OpenCL driver supports it or we write a direct ROCm backend. We have one in the lab now -- we definitely have room to improve its perf. We'll be looking at that a lot more in the future.

Nice! I will try it. Did you test the inference on Intel GPUs?

We have done some preliminary tests, but we need to tweak the configuration before we formally support them. When we officially release macOS support (should happen in the next two weeks), we will also support Intel GPUs.

Good for you, but AGPL is basically a nonstarter for me.

What's your application? AGPL is compatible with research and education; for closed-source commercial use we have other options available.

I'm just not willing to put time into learning an AGPL system, especially since I already have NVIDIA cards.
