
Since this thread is young and full of expertise, I'm going to ask a question about a nuance of Moore's Law that's been bugging me for a while. (I study CS and currently have very little background in physics and electrical engineering.)

Over the past few years I've read engineers talk about the manufacture and design of CPUs in terms of an XX-nanometer process, where XX is some ever-smaller integer - presumably representing the feature size, and hence (inversely) the density of transistors in the CPU, or something along those lines.

But for many decades we've known that there are fundamental physical limits to computation as we know it, with Moore's law predicting a smooth exponential curve - doubling every 18 months - until we hit that point.
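
Just to pin down what "doubling every 18 months" means numerically, here's a toy calculation (a sketch, not anyone's actual roadmap - the 1971 starting count is only illustrative, and the real industry cadence has been closer to two years):

    # Moore's law as an exponential: N(t) = N0 * 2**(t / T),
    # where T is the doubling period in years.
    N0 = 2300   # transistors on the Intel 4004 in 1971 (illustrative starting point)
    T = 1.5     # assumed 18-month doubling period

    for year in range(1971, 2021, 10):
        t = year - 1971
        print(f"{year}: ~{N0 * 2 ** (t / T):,.0f} transistors")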

My question is: why does it have to be a smooth increase? Don't we have enough computing power and understanding now to leapfrog straight to the physical limits using standard, run-of-the-mill CPU architecture? After all, we can manipulate individual atoms, and I think we have transistors made of just a few atoms.

Why do we have to go through the 20-nanometer processes, then presumably through the picometers, until finally stopping at some point maybe in the femtometers (where the lengths of many atomic nuclei can be expressed as single-digit integers)? That's development Moore's Law presumably predicts will take multiple decades to reach.
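
To put rough numbers on "multiple decades", here's a naive back-of-envelope. It assumes each density doubling shrinks linear feature sizes by sqrt(2), that node names track feature size (they don't exactly), and an 18-month doubling period:

    import math

    # How many Moore's-law doublings (and years) separate a 20 nm
    # process from the ~0.1 nm atomic scale?
    start_nm = 20.0      # current-ish process node from the question
    atomic_nm = 0.1      # rough size of an atom

    linear_shrink = start_nm / atomic_nm     # 200x in linear dimension
    density_gain = linear_shrink ** 2        # 40,000x in transistor density
    doublings = math.log2(density_gain)      # ~15.3 doublings
    years = doublings * 1.5                  # ~23 years at 18 months each

    print(f"linear shrink: {linear_shrink:.0f}x")
    print(f"density gain:  {density_gain:,.0f}x")
    print(f"doublings:     {doublings:.1f}")
    print(f"years at 18mo: {years:.0f}")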

Why can't we build processors using modern CPU paradigms at the atomic scale sooner? Is it mass-production issues, or some physical bottleneck? Do we need to keep slowly developing ever-so-slightly denser processes at great expense, or can we skip the interval and develop straight at the atomic scale?

I just feel like at some point there should be a step function straight to the physical limits of processors as we know them. Why hasn't this happened?




First, you should know that computing is done with electrons, not nucleons. That means the fundamental limit is at a length scale of the size of atoms (i.e. electron orbits) and not nuclei. Atoms are about 0.1 nm in size, whereas nuclei are about 0.00001 nm = 10 fm across. That's a difference of four orders of magnitude, so it's pretty important.
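
For concreteness, a trivial check of that gap:

    import math

    atom_m = 1e-10      # ~0.1 nm, rough atomic diameter
    nucleus_m = 1e-14   # ~10 fm, rough nuclear diameter
    print(math.log10(atom_m / nucleus_m))   # 4.0 -> four orders of magnitude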

Second, your intuition is off. The jump from "being able to manipulate X in a laboratory" to "being able to build useful devices with X on a large scale" is large. We've been able to handle individual protons and neutrons with the right equipment too, but that doesn't mean we're close to building computers out of them.


Much of this depends on your time scale, of course. If you stretch out the time granularity just a bit, we really did make a pretty rapid jump akin to a step function.

Indeed, I wonder if historians will look back and drop all references to Moore's law, and just recognize that from 1950-2050 we developed the whatever-the-end-game-is transistor, and pretty much ignore all the interesting (to us) developments that got us there.


After reviewing http://en.wikipedia.org/wiki/MOSFET#Difficulties_arising_due... it makes sense to just point you there as a starting point.

tl;dr: Current Intel processes are already pushing against the physics-imposed limits on feature size; every process shrink Intel achieves represents a fundamental change in the way semiconductors are produced.



