Measuring the Algorithmic Efficiency of Neural Networks (arxiv.org)
37 points by grlass 11 days ago | 3 comments





I'm curious about the opportunities for co-design of hardware and algorithms, especially now that we're building more ASICs for neural networks - they're an increasingly common workload.

Things like grouped convolutions were introduced in AlexNet as a practical engineering workaround for limited GPU memory, but ended up offering a nice set of cost/accuracy trade-offs.
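A minimal PyTorch sketch of that trade-off, with hypothetical layer sizes (not AlexNet's actual dimensions): splitting a convolution into groups cuts parameters and FLOPs roughly by the group count while keeping the output shape.

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes, chosen only for illustration.
in_ch, out_ch, k = 256, 256, 3

dense = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1)               # groups=1
grouped = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1, groups=2)   # split across 2 "GPUs"

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(grouped))   # grouped version has roughly half the weights

x = torch.randn(1, in_ch, 32, 32)
assert dense(x).shape == grouped(x).shape  # same output shape, lower compute cost
```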

Perhaps algorithms will move too fast for dedicated hardware to be worthwhile, but some primitives should stay relevant long enough to be baked into whatever hardware we use - see Nvidia's Tensor Cores, which now also include things like structured-sparsity support.
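For a sense of what that sparsity primitive looks like, here's a toy NumPy sketch of the 2:4 structured-sparsity pattern that Ampere-generation Tensor Cores accelerate (keep the 2 largest-magnitude weights out of every 4). This only illustrates the pattern itself, not how the hardware or Nvidia's libraries actually apply it.

```python
import numpy as np

def prune_2_of_4(w):
    """Zero the 2 smallest-magnitude weights in every group of 4 (the 2:4 pattern)."""
    w = w.copy()
    groups = w.reshape(-1, 4)                         # assumes total size divisible by 4
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]  # indices of the 2 smallest per group
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(w.shape)

w = np.random.randn(8, 8).astype(np.float32)
sparse_w = prune_2_of_4(w)
print((sparse_w == 0).mean())  # 0.5: half the weights removed, in a hardware-friendly layout
```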


Just use "Huang's Law": AI capability more than doubles roughly every two years due to advances in both software and hardware. Translating that capability to real world autonomy and drug design and cosmological simulation and not just ImageNet will be the actual measure ;)

The larger story today is Azalia Mirhoseini's paper in Nature: deep RL for next-gen chip floorplanning in AI accelerators. Future doubling rates of two hours rather than two years may be on the horizon.

https://www.nature.com/articles/d41586-021-01515-9


For the chip floorplanning part, isn't Moore's law a prediction about transistor counts (rather than layout)?


