Nvidia has a near-total monopoly on deep learning hardware and tooling. With the possible exception of Google (and maybe Facebook), virtually all serious academic researchers are training their models on Nvidia hardware with Nvidia's proprietary CUDA toolkit. Using anything else is currently unthinkable. Amazon and Nvidia have even teamed up to make CUDA training cheap (in the short term) for EC2 users.
I'd love to be able to switch to OpenCL, but there's too much momentum and too little perceived benefit when your lab already owns four (very expensive) Titan X cards.