
I don't believe so... and it looks like neither Apple nor Google/TensorFlow does:

- https://developer.apple.com/documentation/mlcompute

- https://blog.tensorflow.org/2020/11/accelerating-tensorflow-...

For a more personal take on your answer, do consider the rest of the world. For example, Ryzen is very popular in India. Discrete GPUs are unaffordable for that college student who wants to train a non-English NLP model on a GPU.




How does that contradict my point? Apple has to support the M1, since their M1-based SoCs don't support dGPUs (yet), so the M1 is all there is.

Besides, Apple is the most valuable company in the world and has exactly the kind of resources AMD doesn't have.

> For example, Ryzen is very popular in India. Discrete GPUs are unaffordable for that college student who wants to train a non-English NLP model on a GPU.

Well, Google Colab is free, and there are many affordable cloud-based offerings as well. Training big DL models is not something you'd want to do on a laptop anyway, and small models can be trained on CPUs, so that shouldn't be an issue.
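To make the "small models train fine on CPUs" point concrete, here is a toy sketch: a logistic-regression classifier trained with plain-Python SGD, no GPU and no DL framework assumed. The dataset and hyperparameters are invented for illustration only.

```python
# CPU-only training sketch: logistic regression on a toy 2D dataset,
# using only the Python standard library (no GPU libraries assumed).
import math
import random

random.seed(0)

# Toy dataset: points with y > x are class 1, the rest are class 0.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1 if y > x else 0 for x, y in data]

w1, w2, b = 0.0, 0.0, 0.0  # model parameters
lr = 0.5                   # learning rate

def predict(x, y):
    """Sigmoid of the linear score -- probability of class 1."""
    z = w1 * x + w2 * y + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
for epoch in range(100):
    for (x, y), t in zip(data, labels):
        p = predict(x, y)
        w1 -= lr * (p - t) * x
        w2 -= lr * (p - t) * y
        b  -= lr * (p - t)

accuracy = sum((predict(x, y) > 0.5) == bool(t)
               for (x, y), t in zip(data, labels)) / len(data)
```

A few hundred samples and a hundred epochs like this finish in well under a second on any laptop CPU; it's the multi-million-parameter models where the GPU question actually bites.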

Inference is fast enough on CPUs anyway, and if you really need to train models on your Ryzen APU, there are always other libraries, such as TensorFlow.js, which is hardware-agnostic since it runs on top of WebGL.

That's why I don't think this is a big deal at all, especially given that Intel still holds over 90% of the integrated-GPU market and doesn't even have an equivalent to ROCm. Again, a niche within a niche, no matter where you look.



