Wonder how long AMD's hand-crafted upscalers can keep up with Nvidia's neural networks. Probably not long, and AMD is adding a tensor core equivalent in their next gen: https://videocardz.com/newz/amd-adds-wmma-wave-matrix-multip...
They already can't keep up (DLSS has always been better), but not having to buy a new Nvidia card for FSR is a massive selling point for the technology. Even the Steam Deck supports it.
I don't know which game implements it, but the Nintendo Switch's system license screen lists FSR as one of the libraries it uses. So I guess even the Nintendo Switch supports it.
How do AMD GPUs compare with Nvidia for deep learning workloads? Configuration is tough enough on your own hardware with Nvidia; I have zero experience with AMD cards and suspect the challenge would be greater. I do note that I saw AMD support pretty early on last week when Stable Diffusion was released.
Nvidia may pull some pricing shenanigans for the rest of 2022 and into 2023, according to this video[1] I watched. I would be thrilled to see some real competition for workloads other than gaming; Nvidia just dominates there as far as I can tell.
Just don't bother with AMD if you want wide support for deep learning. While it is getting better, CUDA is king and will remain so for the foreseeable future. I had an AMD GPU and gave up and bought an Nvidia one for ML workloads. The gaming features, like RTX and DLSS, are also superior to AMD's equivalents.
I just set up PyTorch on a machine the other day, and while I don't own an AMD GPU, I saw that they offer a ROCm version on their "getting started" page [1], and it doesn't seem to be any more difficult than the other options.
I could imagine it being an issue if you run other frameworks, or if you compile custom (C++) modules for PyTorch, but frankly that has become exceedingly rare (at least for me).
Would love to hear your or anyone else's experiences, since I, like the sibling comment, don't want to shovel any more money to Nvidia.
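For anyone curious, the ROCm path on that page boils down to pointing pip at a different wheel index. A minimal sketch (the rocm version tag in the URL changes between releases, so check the page for the current one):

    # ROCm variant of the install command from PyTorch's "getting started"
    # page (the rocm5.1.1 tag is illustrative):
    #   pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
    import torch

    # The ROCm build reuses the torch.cuda API, so code written against
    # CUDA usually runs unchanged on AMD hardware.
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # reports the AMD GPU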
ROCm (AMD's answer to CUDA) is already pretty great on supported GPUs. For me the problem at this point is that not many of their consumer GPUs have "official" support for it (although unofficially things are a bit better).
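For what it's worth, the widely shared unofficial workaround is to make the ROCm runtime treat the card as a supported ISA via the HSA_OVERRIDE_GFX_VERSION environment variable. A sketch assuming an RDNA2 consumer card (whether it works depends on the exact GPU):

    import os

    # Unofficial: report the card to ROCm as gfx1030 (officially supported
    # RDNA2). Must be set before torch loads the runtime, i.e. before import.
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

    import torch
    print(torch.cuda.is_available())  # True if the override took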
Installing ROCm is a bit of a pain: there is little packaging, so you have to build it yourself, and as a fellow commenter says, support is uncertain (and it does _not_ work on APUs).
Look at who's running Stable Diffusion on Nvidia and who's running it on AMD: if you're on AMD, you're kind of on your own.
AMD doesn't support SR-IOV on consumer GPUs; you can only do VFIO passthrough.
Open source SR-IOV support used to be accessible via GIM, but it only supported GPUs up to Tonga, and the closest it came to a consumer GPU was the W series of workstation cards.
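For reference, VFIO passthrough on Linux roughly means binding the whole GPU to the vfio-pci driver so a VM can claim it exclusively. A rough sketch, with placeholder PCI IDs (use the ones lspci reports for your card's video and audio functions; details vary by distro):

    # Find the GPU's vendor:device IDs:
    lspci -nn | grep -i amd

    # /etc/modprobe.d/vfio.conf -- bind both functions to vfio-pci at boot
    # (placeholder IDs):
    options vfio-pci ids=1002:73bf,1002:ab28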
Very easy, actually. This is not officially documented, but with a recent enough kernel you don't have to install anything: grab the official ROCm container and it'll just work. For an example with Stable Diffusion, see https://github.com/AshleyYakeley/stable-diffusion-rocm/blob/...
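If you go that route, the invocation is roughly the one from ROCm's container docs (image tag illustrative; the device flags are what expose the GPU to the container):

    docker run -it \
      --device=/dev/kfd --device=/dev/dri \
      --group-add video \
      --security-opt seccomp=unconfined \
      rocm/pytorch:latest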