So your case for optimism here is that something that today takes ~10^22 floating point operations (based on an estimate earlier in the thread) to execute will be running on phones in 25 years? Phones that currently run at O(10^12) FLOPS. That means ten orders of magnitude of improvement for it to run in a reasonable amount of time? It's a similar scale-up to going from ENIAC (~500 FLOPS) to a modern desktop (5-10 teraFLOPS).
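
To make the arithmetic explicit, here's a rough sketch (the ~10^22 figure is just the thread's estimate, not a measured number):

    # Back-of-the-envelope: time to run ~1e22 floating point
    # operations on a ~1e12 FLOPS phone.
    total_ops = 1e22                      # thread's estimate, in FLOPs
    phone_rate = 1e12                     # current phone, in FLOPS
    seconds = total_ops / phone_rate      # 1e10 seconds
    print(seconds / (3600 * 24 * 365))    # ~317 years
    # Getting that down to ~1 second is the ten orders of magnitude.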

That sounds reasonable to me because the compute cost for this level of reasoning performance won't stay at 10^22 FLOPs and phones won't stay at 10^12 FLOPS. This reasoning breakthrough is about 3 months old.

I think expecting five orders of magnitude improvement from either side of this (inference cost or phone performance) is insane.

I don't, because most of the improvement will be algorithmic, not physical.

By a factor of 10,000,000,000? (That's the ten orders of magnitude needed, since the physical side is out.) Algorithmic improvements will make this 10 billion times cheaper?
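
For scale, a hypothetical illustration of what "algorithmic only" would require over the 25-year horizon:

    # If the full 1e10 factor came from algorithms compounding
    # annually over 25 years, the required per-year gain would be:
    factor = 1e10
    years = 25
    print(factor ** (1 / years))   # ~2.5x efficiency gain, every year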