Hacker News
It's time to upscale FSR 2 even further: Meet FSR 2.1 (gpuopen.com)
60 points by WithinReason on Sept 8, 2022 | 19 comments



Wonder how long AMD's hand-crafted upscalers can keep up with Nvidia's neural networks. Probably not long, and AMD is adding a tensor-core equivalent in their next gen:

https://videocardz.com/newz/amd-adds-wmma-wave-matrix-multip...


They already can't keep up (DLSS has always been better), but not having to buy a new Nvidia card to use FSR is a massive selling point for the technology. Even the Steam Deck supports it.


I don't know which game implements it, but the Nintendo Switch's system license list includes FSR as one of the libraries it uses. So I guess even the Nintendo Switch supports it.


The Steam Deck supports FSR 1, which is basically unrelated to FSR 2 and DLSS: FSR 1 is a purely spatial upscaler, while FSR 2 and DLSS are temporal.


The Steam Deck also supports FSR 2, just not globally the way it does FSR 1.

For FSR 2 you need game-specific data (motion vectors, depth, and per-frame jitter offsets from the engine), so it depends on game developers adopting it.


What is the cost (in both time and money) of having to run every patch and update through Nvidia's upscaling service?


That isn’t a thing in DLSS 2. Per-game training was DLSS 1; DLSS 2 uses a single generalized network.


It can even be implemented in console versions of the game.


How do AMD GPUs compare with Nvidia for deep learning workloads? Configuration is tough enough on your own hardware with Nvidia. I have zero experience with AMD cards and suspect the challenge would be greater. I did notice support for AMD pretty early on last week when Stable Diffusion was released.

According to this video[1] I watched, Nvidia may pull some pricing shenanigans for the rest of 2022 and into 2023. I would be thrilled to see some real competition for workloads other than gaming; Nvidia just dominates, as far as I can tell.

[1] https://www.youtube.com/watch?v=15FX4pez1dw


Just don't bother with AMD if you want wide support for deep learning. While it is getting better, CUDA is king and will remain so for the foreseeable future. I had an AMD GPU, gave up, and bought an Nvidia one for ML workloads. The gaming features, like RTX and DLSS, are also superior to AMD's equivalents.


Out of interest, when did you try this?

I just set up PyTorch on a machine the other day, and while I don't own an AMD GPU, I saw that they offer a ROCm build on their "getting started" page [1], and it doesn't seem to be any more difficult than the other options.

I could imagine that if you run other frameworks, or if you compile custom (C++) modules for PyTorch, it could be an issue, but frankly that has become exceedingly rare (at least for me).
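
For what it's worth, the sanity check on a ROCm build is the same as on CUDA. A minimal sketch, assuming the ROCm build of PyTorch from that page:

    import torch

    # ROCm builds of PyTorch expose the HIP backend through the regular
    # torch.cuda API, so existing CUDA code paths work unchanged.
    print(torch.version.hip)          # a version string on ROCm builds, None on CUDA builds
    print(torch.cuda.is_available())  # True if a supported AMD GPU is visible

    if torch.cuda.is_available():
        x = torch.randn(1024, 1024, device="cuda")
        print((x @ x).sum().item())   # the matmul runs on the AMD GPU via HIP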

Would love to hear your or anyone else's experiences, since I, like the sibling commenter, don't want to shovel any more money to Nvidia.

[1]: https://pytorch.org/get-started/locally/


ROCm (AMD's answer to CUDA) is already pretty great on supported GPUs. For me the problem at this point is that not many of their consumer GPUs have "official" support for it (although unofficially things are a bit better).
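
The usual unofficial workaround is to tell the runtime to treat your card as the nearest officially supported gfx target via the HSA_OVERRIDE_GFX_VERSION environment variable. A sketch, with the caveat that the override value below assumes an RX 6800/6900-class (gfx1030-adjacent) card; the right value for your GPU may differ:

    import os

    # Must be set before torch first initializes the HIP/HSA runtime.
    # "10.3.0" makes ROCm treat the card as gfx1030; pick the value
    # matching the closest officially supported GPU to yours.
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

    import torch
    print(torch.cuda.is_available())  # True if the override took effect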


Installing ROCm is a bit of a pain: there is little packaging, so you have to build it yourself, and, as the sibling comment says, support is uncertain (it does _not_ work on APUs).

Search for who's running Stable Diffusion on Nvidia versus who's running it on AMD: if you are using AMD, you are kind of on your own.

Finally, some models ship custom CUDA code (e.g. https://github.com/sniklaus/3d-ken-burns ), which won't run on AMD without porting.


Just tried to set up Stable Diffusion on Windows/AMD tonight.

ROCm doesn't support Windows. PyTorch doesn't support OpenCL. WSL2 can't plug into HIP, even though it's apparently hiding in Windows somewhere.

Nvidia would in theory just work.

Granted, AMD supports SR-IOV in its consumer GPUs, so when I have a moment I may set up a full Hyper-V VM and pass the GPU through.

Lastly, most AI researchers simply assume you are running CUDA and don't develop with portability in mind.


AMD doesn’t support SR-IOV on consumer GPUs; you can only do VFIO passthrough.

Open-source SR-IOV support used to be available via GIM, but it only supported GPUs up to Tonga, and the closest you got to a consumer GPU was the W series of workstation cards.


There is a DirectML build of PyTorch that should work on AMD.
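
Assuming the torch-directml plugin (rather than the older pytorch-directml fork), usage looks roughly like this; the device shows up separately instead of pretending to be CUDA:

    import torch
    import torch_directml  # pip install torch-directml

    # DirectML is exposed as its own device rather than as "cuda",
    # so tensors and models have to be moved to it explicitly.
    dml = torch_directml.device()
    x = torch.randn(4, 4).to(dml)
    print((x @ x).device)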


I just don't want to support nvidia :(


Very easy, actually. This is not officially documented, but with a recent enough kernel you don't have to install anything: grab the official ROCm container and it'll just work. For example, for Stable Diffusion see https://github.com/AshleyYakeley/stable-diffusion-rocm/blob/...
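
Once inside the container (assuming /dev/kfd and /dev/dri are passed through, which is how the ROCm images are normally run), a quick check that the GPU is visible:

    import torch

    # Enumerate the AMD GPUs visible to the container; on ROCm builds
    # they show up through the standard torch.cuda interface.
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))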


Looking forward to the Digital Foundry video.



