Hopefully we'll get vendor independent GPGPU support before that, I'm tired of Nvidia lock in. There's nothing free or open about software that only works on one company's hardware.
That's great, but I'm hoping that NumPy will incorporate something like this because that will better ensure that the APIs remain compatible in the future, and that they will get continued support.
(I can't convince my boss to use any library unless it has a reasonable guarantee of long-term support.)
It is confusing to see scientists use vendor-specific tools over vendor-neutral ones, promoting one corporation over the rest. IMHO this is against the spirit of academia. I will personally not use PyTorch or any tool based on CUDA.
I don't think scientists would hold up scientific progress to take a hard-line stance like you are presenting; but that's even irrelevant as PyTorch supports CPUs already (AMD, Intel, ..?) and AMD/PyTorch are close to supporting AMD GPUs, too, if that's your jam. https://github.com/pytorch/pytorch/issues/10670
Proprietary or not is orthogonal to the scientific question. In the case of PyTorch, you can run with or without CUDA, or even port it to e.g. OpenCL. The vendor hardware is only incidentally Nvidia.
That said, many projects are compute bound and the quality may be limited by speed of hardware, thus vendor dependent.
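To make the "with or without CUDA" point concrete, here's a minimal device-agnostic sketch (the tensor shapes are arbitrary, chosen just for illustration): the same PyTorch code runs on an Nvidia GPU when one is present and falls back to the CPU otherwise.

```python
import torch

# Pick CUDA if available, otherwise CPU; nothing below changes either way.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 3, device=device)  # example input batch
w = torch.randn(3, 2, device=device)  # example weight matrix
y = x @ w  # matrix multiply executes on whichever device was selected

print(y.shape)  # a (4, 2) result regardless of backend
```

The `torch.device` indirection is the standard way to keep research code portable across backends, so the hardware dependence stays in one line.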
If the vendor-specific tool facilitates the user's work, why not use it? Academia is just another kind of job, and proprietary tools are used in plenty of professions. Their use here is not that different from other jobs.
It was scientists that started the whole accelerated computing thing. We tried everything that you could do a matrix multiply on: CellBE on the PlayStation, shader languages on GPUs... It was the success with GPUs in the early days that prompted NVIDIA to do CUDA. Yes, I wish NV weren't proprietary! On the other hand, it's the best thing going right now. There are other hardware projects in the works that may compete in the future.
do you object to the point of a boycott for even interoperating with CUDA?
because PyTorch is still quite pleasant to use in CPU mode. it should also have support for AMD chips quite soon (and AMD's CUDA-equivalent, ROCm, at least appears to be open source).
I understand what you're saying. On the idealistic side in the battle between idealism and pragmatism, you're trying to negate the premise that the other side's instruments are even useful. If you can't be certain what the closed product does, how can it be a reliable source of data?
Put another way: if your goal is to not ever harm your child, how can you responsibly feed them applesauce or let doctors administer them medicine when you can't be certain what went into the making of the applesauce or of the medicine?
With a very rigorous standard, you can't. With the most rigorous standard, you can't even if the chain of trust has a single link, and you'd only use food or medicine that you, yourself, produced. But we know that few, if any, people use such rigorous standards, and that, if they did, they'd be much worse off. It's not a perfect analogy for software or hardware, but it's certainly a salient one. With your standards, it's malpractice all the way down[0].