> Go back a few decades and you'd see articles like this about CPU manufacturers struggling to improve processor speeds and questioning if Moore's Law was dead. Obviously those concerns were way overblown.
Am I missing something? I thought the general consensus was that Moore's Law did in fact die:
https://cap.csail.mit.edu/death-moores-law-what-it-means-and...
The fact that we've still found ways to speed up computations doesn't contradict that.
We've mostly done that by parallelizing and applying different algorithms. IIUC that's precisely why graphics cards are so good for LLM training - they have highly parallel architectures well suited to the problem space.
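For a concrete (if toy) illustration of why that matters - this is just a hypothetical sketch of mine, not anything from the article - the dominant operation in LLM training is a big matrix multiply, and every output element is an independent dot product you can hand to a separate core:

    # Toy sketch (assumed sizes, not from the article) of why LLM
    # training maps so well onto parallel hardware: the core op is a
    # matrix multiply, and every output element is independent of the
    # others, so thousands of them can run at once on GPU cores.
    import numpy as np

    d_model, batch = 256, 64                   # made-up, toy-sized dims
    x = np.random.randn(batch, d_model)        # activations
    w = np.random.randn(d_model, d_model)      # one weight matrix

    def matmul_naive(a, b):
        # The "serial" view: each out[i, j] depends on nothing else,
        # which is what makes the work embarrassingly parallel.
        out = np.zeros((a.shape[0], b.shape[1]))
        for i in range(a.shape[0]):
            for j in range(b.shape[1]):
                out[i, j] = a[i, :] @ b[:, j]
        return out

    # The "parallel" view: hand the whole thing to optimized kernels
    # (multithreaded BLAS on a CPU, CUDA kernels on a GPU).
    y = x @ w
    assert np.allclose(y, matmul_naive(x, w))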
All that seems to me like an argument that LLMs will hit a point of diminishing returns, and maybe the article gives some evidence we're starting to get there.
The article you pointed out says the end came in 2016 - eight years ago.
My point is that those types of articles have been popping up every few years since the 1990s. Sure, at some point these sorts of predictions will be proven correct about LLMs as well. Probably in a few decades.