We are a lot closer to the Landauer limit than I thought (twitter.com/realgeorgehotz)
47 points by mutant_glofish on Aug 13, 2023 | 15 comments



He's assuming transistors got 100x more efficient in 13 years, but they just haven't. CPUs of similar architectures got about 5x more efficient in the very best case (an embarrassingly parallel workload, where advances in core count and instruction-level parallelism mean effectively lower switching rates).

For GPUs it's a similar story: a 780 Ti does 5.5 TFLOPS at 250 W, a 4080 Ti does 66 TFLOPS at 400 W, a 7.5x increase in FLOPS per watt. Certainly not 100x, and again a lot of that comes from much more efficient GPGPU architectures.
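The 7.5x figure follows directly from the quoted specs (taking the TFLOPS and wattage numbers in the comment at face value, not independently verified):

```python
# Rough check of the GPU efficiency comparison above.
def flops_per_watt(tflops, watts):
    """Peak FLOPS per watt at rated board power."""
    return tflops * 1e12 / watts

gtx_780ti = flops_per_watt(5.5, 250)   # ~2.2e10 FLOPS/W
rtx_4080ti = flops_per_watt(66, 400)   # ~1.65e11 FLOPS/W
print(rtx_4080ti / gtx_780ti)          # ~7.5x efficiency gain
```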

TSMC themselves claim ~30-40% improvements in transistor efficiency per generation these days, which fits nicely with a 4-5x improvement overall. Multiplying out their claimed per-generation efficiency gains, you get a 4.85x improvement from 28nm to 3nm.
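The compounding works out as follows. This is illustrative only: it assumes a flat ~37% gain per node over the five full nodes between 28nm and 3nm, whereas TSMC's actual per-node claims vary between roughly 30% and 40%:

```python
# Compounding a hypothetical ~37% per-node efficiency gain over five nodes,
# e.g. 28nm -> 16nm -> 10nm -> 7nm -> 5nm -> 3nm.
gain_per_node = 1.37
nodes = 5
print(gain_per_node ** nodes)  # ~4.8x, in line with the ~4.85x figure above
```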

I think we'll be hitting the limits of what silicon can do 3-4 orders of magnitude before Landauer's limit.



The original link works fine with no login requirement or anything?


Original link doesn't work without javascript


It doesn't show the whole thread, Nitter does.


> the whole thread, Nitter does

(Sorry, tangential practical question.) Nitter has many elisions. Do you know how to get them all expanded (to see all the original posts)?


I'm sure George could solve this in a 12 week internship.


Yes, he could definitely use that time to find someone else who's been working on it for a long time and steal the glory for himself, as he is known to do.



What's that chart measuring? Energy due to gate capacitance? Power in divided by transistor count and frequency?

Almost a decade and a half is quite a while for this sort of thing; has it actually stayed on the same trajectory?


Your brain works remarkably close to that limit. (Maybe 3-4x?) Likewise cell nuclei.


You don't need to worry; let the pros (TSMC) handle it.


I’m curious about the division of innovation between ASML and TSMC.

I know TSMC does incredible R&D, process engineering, HVM, etc., but wouldn't this land in the ASML bucket?


If I'm understanding correctly, the OP is claiming that our current efficiency of computation is relatively close (within 1-2 orders of magnitude) to a theoretical lower bound, from physics and information theory, on how much energy is required to flip a bit.

i.e. making things ever smaller (even if we thought it possible) might, sometime soon, no longer improve power efficiency per operation.
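A back-of-the-envelope sketch of that bound, reusing the 4080 Ti figures quoted upthread (400 W, 66 TFLOPS; the per-FLOP comparison is my own illustration, not a claim from the thread):

```python
import math

# Landauer's bound: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                   # Boltzmann constant, J/K
T = 300.0                            # room temperature, K
landauer_j = k_B * T * math.log(2)   # ~2.9e-21 J per bit erased

# Energy per FLOP for the GPU figures quoted upthread.
joules_per_flop = 400 / 66e12        # ~6e-12 J

# Raw gap per FLOP; note that one FP32 FLOP erases many bits internally,
# so the gap per bit operation is several orders of magnitude smaller.
print(joules_per_flop / landauer_j)  # ~2e9
```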

Since ASML's technology (to the layman) seems to be mostly wrapped up in making things smaller, accurately, I can't see how this would be ASML's problem to solve, if we believe that the Landauer limit can't be broken.

The onus would be on the architectural side: finding more efficient ways of getting the higher-level results we care about, reducing the number of minimum-energy operations required to get to those results.

Or maybe I'm misunderstanding something?


The way I understand this matter: TSMC runs a process in which ASML's machine is a tool, an important one, that needs to run TSMC's process reliably.

Both the process and the tool need to innovate to enable each other to get the most out of both.



