
Exactly.

It's easy to see in retrospect, but hard in prospect: the original paper [1] on GPU acceleration of neural networks reports a measly 20x speedup. Assuming a bit of cherry-picking on the authors' side to get the paper published, the readership will have taken the real-world speedup to be even less. But this triggered a virtuous cycle of continuous improvements at all levels, which has been dubbed "winning the hardware lottery" [2].

[1] K.-S. Oh, K. Jung, GPU implementation of neural networks. Pattern Recognition, 2004.

[2] S. Hooker, The Hardware Lottery. https://arxiv.org/abs/2009.06489
