He's assuming transistors got 100x more efficient in 13 years, but they just haven't. CPUs of similar architectures got about 5x more efficient in the very best case (an embarrassingly parallel workload, where advances in core count and instruction-level parallelism effectively mean lower switching rates).
For GPUs it's a similar story - a 780 Ti does 5.5 TFLOPS at 250W, a 4080 Ti does 66 TFLOPS at 400W, a 7.5x improvement in performance per watt. Certainly not 100x, and a lot of even that comes from much more efficient GPGPU architectures.
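A quick sanity check on that perf-per-watt ratio, using the figures quoted above (approximate vendor numbers):

```python
# Rough perf-per-watt comparison using the TFLOPS and board-power
# figures quoted above (approximate vendor numbers).
tflops_old, watts_old = 5.5, 250    # 780 Ti
tflops_new, watts_new = 66.0, 400   # newer card

eff_old = tflops_old / watts_old    # TFLOPS per watt
eff_new = tflops_new / watts_new

print(f"old: {eff_old * 1000:.0f} GFLOPS/W")   # 22 GFLOPS/W
print(f"new: {eff_new * 1000:.0f} GFLOPS/W")   # 165 GFLOPS/W
print(f"ratio: {eff_new / eff_old:.1f}x")      # 7.5x
```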
TSMC themselves claim roughly 30-40% transistor efficiency improvements per generation these days, which fits nicely with a 4-5x cumulative improvement: multiplying their per-generation claims together gives about a 4.85x improvement from 28nm to 3nm.
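Compounding a 30-40% per-generation gain over the roughly five major node transitions between 28nm and 3nm neatly brackets that ~4.85x figure (the per-generation factors below are illustrative, not TSMC's exact claims):

```python
# Compound per-generation transistor-efficiency gains.
# Assuming ~5 major node transitions from 28nm to 3nm
# (e.g. 28 -> 16 -> 10 -> 7 -> 5 -> 3); factors are illustrative.
generations = 5
low = 1.30 ** generations    # 30% per generation -> ~3.71x
high = 1.40 ** generations   # 40% per generation -> ~5.38x
print(f"{low:.2f}x to {high:.2f}x cumulative improvement")
# The quoted 4.85x sits comfortably inside that range.
```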
I think we'll be hitting the limits of what silicon can do 3-4 orders of magnitude before Landauer's limit.
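For reference, the Landauer limit at room temperature works out to about 3e-21 J per bit erased. Comparing that to the per-flop energy of the GPU figures above is only a loose gauge of headroom, since a single flop involves many bit-level operations, but it shows the scales involved:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, 2019 SI)
T = 300.0            # room temperature, K

# Landauer limit: minimum energy to erase one bit of information.
landauer = k_B * T * math.log(2)
print(f"Landauer limit: {landauer:.2e} J/bit")   # ~2.87e-21 J

# Very rough: energy per FP32 op for a 66 TFLOPS card at 400 W.
# (One flop is many bit operations, so this overstates the gap.)
joules_per_flop = 400 / 66e12
print(f"per flop: {joules_per_flop:.1e} J")      # ~6.1e-12 J
```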
Yes, he could definitely use that time to find someone else who's been working on it for a long time and steal the glory for himself, as he is known to do.
If I'm understanding correctly, the OP is claiming that our current efficiency of computation is relatively close (within 1-2 orders of magnitude) to a theoretical lower bound of how much energy is required to flip a bit (physics and information theory).
i.e. making things ever smaller (even if we thought it possible) might soon stop improving power efficiency per operation.
Since ASML's technology (to a layman) seems to be mostly about making things smaller, accurately, I can't see how this would be ASML's problem to solve, if we accept that the Landauer limit can't be broken.
The onus would be on the architectural side: finding more efficient ways of getting the higher-level results we care about, reducing the number of minimum-energy operations required to reach them.
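A toy illustration of that architectural lever (hypothetical example, not from the comment): the same result computed with a better algorithm can need orders of magnitude fewer primitive operations, which is exactly what matters once each operation carries a fixed energy floor.

```python
# Same result, very different operation counts: summing 1..n.

def sum_loop(n):
    # O(n): one addition per element.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    # O(1): Gauss's formula, a multiply and a halving.
    return n * (n + 1) // 2

n = 1_000_000
assert sum_loop(n) == sum_closed_form(n)
# A million additions replaced by ~2 operations - the kind of win
# that reduces energy per result even when energy per op is fixed.
```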