> This is (partly) outdated. MPS (Metal Performance Shaders) is now (since torch 2.x) fully integrated into standard PyTorch releases; no external backends or special torch versions are needed.
Not sure what you're referring to — the link I provided shows how to use the "mps" backend / device from the official PyTorch release.
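For anyone following along, a minimal sketch of what "using the mps device" looks like in a stock PyTorch build (assumes PyTorch >= 1.12 for `torch.backends.mps`; falls back to CPU on non-Apple hardware):

```python
import torch

# Pick the "mps" device when Apple's Metal backend is available,
# otherwise fall back to CPU so the same script runs anywhere.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Tensors created with device=... live on that device; ops stay there.
x = torch.ones(2, 2, device=device)
y = (x + x).cpu()  # move the result back to host memory
print(y.sum().item())  # 8.0
```

No special install step is needed — this is the same `torch` package from the official release channel.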
> lots of energy goes into developing architectural work-arounds in order to limit the copying between graphics HW and CPU memory
Does this remark apply to PyTorch running on NVIDIA platforms with unified memory, like the Jetsons?