
Are processors pushing up against the limits of physics? - ub
http://arstechnica.com/science/2014/08/are-processors-pushing-up-against-the-limits-of-physics/
======
Artemis2
A metric that really puts things in perspective is the following: take a
common consumer CPU clocked at 3.4 GHz. That means it executes 3,400,000,000
cycles per second. Divide the speed of light by this number, and you obtain
approximately 0.088 m.

During the time your standard desktop CPU takes to finish a cycle, light
travels only about 9 centimeters.
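
For reference, the arithmetic as a quick Python check (the 3.4 GHz figure is
just the example value above):

    c = 299_792_458      # speed of light in m/s
    clock_hz = 3.4e9     # 3.4 GHz consumer CPU
    print(c / clock_hz)  # ~0.088 m, i.e. roughly 9 cm per clock cycle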

~~~
getsaf
Not much perspective to be had here unless you are assuming a single photon's
distance traveled.

The analogy breaks down when you consider thousands, millions, or billions of
photons traveling simultaneously; then you can measure in miles.

A single photon's travel distance doesn't mean much in this context.

~~~
vanderZwan
As far as I know, those 9 centimetres per cycle are still the fundamental
limit on how far _information_ can travel.

~~~
Artemis2
That's really what makes me worry about how far we can push our processors. I
don't think Moore's law will hold much longer for CPUs, unless we manage to
get very good quantum computers very soon.

------
Jweb_Guru
The answer to this question is yes. People like to talk about how the real
issue is economics, not physics, but the inability to make chips much smaller
economically is very much a reflection of physical constraints on the
processes we're currently using to manufacture chips (particularly the
lithographic process). As of right now there's no clear successor to those
processes that's going to let chips get smaller for cheaper.

~~~
wlievens
Isn't ASML still pursuing EUV?

------
tedsanders
Right now, the laws of economics are a bigger problem than the laws of
physics. Field-effect transistors have been shown to work at 5 nm and even
3 nm. However, the new lithography technologies needed to reach those
resolutions cheaply are nowhere near ready.

~~~
idlewords
I heard a rule of thumb that fab costs go up about 40% with each reduction in
size. Is that accurate?
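
Taken at face value, that rule compounds quickly across nodes. A
back-of-the-envelope Python sketch, assuming the 40% figure (not real fab
data):

    cost = 1.0  # normalized fab cost at the starting node
    for shrink in range(1, 6):
        cost *= 1.4
        print(f"node shrink {shrink}: {cost:.2f}x cost")
    # Five shrinks in, the fab costs ~5.4x what it did at the start.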

~~~
_username_
Intel paid ASML ~$4.1 billion to deliver 10nm lithography. That was ~1.5
years ago
([http://www.intc.com/releasedetail.cfm?ReleaseID=690165](http://www.intc.com/releasedetail.cfm?ReleaseID=690165)).
ASML's stock is a good indicator of what's happening in this business.

Intel has 14nm up and running, which is 16% better on SRAM cell size than
TSMC's 16nm.

------
Symmetry
Not even close, but there's good evidence they're pushing the limits of
silicon transistors.

------
grondilu
In June of this year HP announced its plan to build "The Machine". Regardless
of how feasible their project is, I think they are right to point out that
memory is the current bottleneck in computer engineering. We don't need faster
processors. Focusing on the size of transistors, which are already insanely
small when you think about it, may be a mistake.

------
DiabloD3
The question I want to ask is: are generic CPUs now fast enough that people no
longer need faster CPUs? I have an i7-4771 on my desktop (bought instead of
the 4770K because I wanted TSX... thanks, Intel ;), and I can't really imagine
much use for an even faster CPU unless I'm gaming or doing heavy compute work.

~~~
userbinator
No, I don't think the things most people do with computers _should_ require
any faster hardware; the problem is that software is often written to require
ever more resources, under the false assumption that processing speed and
memory are "infinite" or close to it.

The exponential growth that started many decades ago has promoted a culture of
_extreme_ waste. From the earliest notions of "premature optimisation", and
the rise of structured programming and OOP with its many-layered abstractions,
to the latest trend of ultra-high-level frameworks and the web-application
movement, there is this ever-present notion that "abstractions and computing
power are free, but programmer time is expensive". Although opposition to this
seems to have increased in recent years, it's still a prevalent attitude and
is still taught in many schools. People are forced to upgrade their hardware
frequently (with the associated waste and manufacturing costs) just so they
can run the latest versions of software - often to do the same things, at the
same speeds, as before. It's probably not too far a stretch to say that
software on average is now a few orders of magnitude larger and slower than it
should be.

This trajectory resembles the early history of the car industry - fuel was
initially cheap, so manufacturers (and consumers) paid little attention to
fuel efficiency, but the oil shortages that began in the 70s forced some
pretty rapid changes as people became aware that what they were doing was not
sustainable. Interest in efficient hardware has grown a lot recently, which is
good, but the other half of the equation, software, is just as important. Thus
I think anyone who still believes that mantra about programmer time, when
working on software intended for a large number of users, sounds as absurd as
someone in the car industry saying "engineer time is expensive, but fuel is
cheap". Processors may be getting limited by the laws of physics, but I don't
think many programmers have reached the limits of their brainpower yet. :-)

~~~
collyw
> The exponential growth that started many decades ago has promoted a culture
of extreme waste.

In fairness, I look back at what I was working on ten years ago, and I am
currently writing my company's database and web front end on my own, thanks
to the layers of abstraction (scripting language, web framework, improvements
in database technology). This database does more than what a team of 5 could
accomplish ten years ago. You can't deny that sort of efficiency is
beneficial to many (most?) organisations.

~~~
claudius
It depends. For in-house software, I can fully understand that saving a few
developer hours may well be enough to justify buying beefier CPUs for the
couple of users who need them.

On the other hand, if thousands of people need better CPUs to load (e.g.)
imgur comments instantly, instead of having a task that should be essentially
trivial grind the browser to a halt, things may look different.

------
m_mueller
Accelerator-based computing is a sign that this is already happening.
Shrinking everything down no longer brings the big speedups in
performance-per-watt by itself, so what chip manufacturers do is pack as many
ALUs as they can onto the same die area, curbing in the process a lot of the
built-in management features that our software programming models have been
built upon over the decades. Hardware is still getting faster at Moore's-law
rates, but _only_ given constantly adapting software, i.e. "The Free Lunch Is
Over"[1].

[1] [http://www.gotw.ca/publications/concurrency-
ddj.htm](http://www.gotw.ca/publications/concurrency-ddj.htm)
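
A toy Python illustration of that "constantly adapting software" point
(numpy's vectorized form stands in for hardware with wide ALUs; the array
size is arbitrary):

    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Naive per-element loop: the old programming model, which leaves the
    # hardware's vector units idle.
    slow = [x + y for x, y in zip(a, b)]

    # The same arithmetic, restructured so the runtime can feed the SIMD
    # units; dramatically faster on the same chip.
    fast = a + b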

------
lnanek2
They say we can't get smaller than an atom, but electrons are smaller than an
atom, and we don't even have to use just their charge - we can also use
properties like spin and momentum to get more values out of them, i.e.
spintronics. Then of course there are photons as well. The article itself
mentions that we now etch features smaller than the wavelength of the light
used. Sometimes you need a big read-write head or something, but then you can
just push magnetic domains past it on a wire, etc., so that isn't necessarily
the size of a unit of computation in the device if we move beyond
transistors.

~~~
Xcelerate
I agree with you there. It's not unreasonable to me that people might devise a
way to push against the Planck limit in terms of the space that computation
takes up.

------
tsotha
Depends on exactly what you mean. You can keep adding cores until the cows
come home. And I don't really buy the "we don't know how to use all those
extra cores" argument. Multi-threaded code isn't the rocket science it's
portrayed to be in the press.

One thing that may become practical is die stacking, depending on what they
can do about extra heat.
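
For the embarrassingly parallel case, it really is straightforward - a
minimal Python sketch (the work function and job sizes are made up for
illustration):

    from concurrent.futures import ProcessPoolExecutor

    def work(n):
        # Stand-in for an independent, CPU-bound task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        # Each job can run on its own core; no shared state, no locks.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(work, jobs))
        print(sum(results))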

~~~
badsock
Amdahl's law is a SOB:
[http://en.wikipedia.org/wiki/Amdahl%27s_law](http://en.wikipedia.org/wiki/Amdahl%27s_law)
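
For reference, Amdahl's law bounds the speedup from n cores at
S(n) = 1 / ((1 - p) + p / n), where p is the fraction of the program that can
be parallelized. A quick Python illustration (the 95% figure is an arbitrary
example):

    def amdahl_speedup(p, n):
        """Upper bound on speedup with parallel fraction p and n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelized, speedup saturates near 20x.
    for n in (2, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 1))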

~~~
tsotha
Except that's not how multiple cores are normally used.

------
agumonkey
And now people are rethinking processor architecture:
[http://millcomputing.com/docs/](http://millcomputing.com/docs/)

------
bradshaw1965
I love how the engineers in "Halt and Catch Fire" are always talking about
"the laws of physics".

------
hatethis
No. Every single time someone supposes that a piece of technology is
approaching its limits, the answer is no. Technology will continue to improve
and advance as we make new discoveries. The simple fact is we do not know the
future, so pretending we know what things will be like in 20 or 50 years is
pointless. What's with this obsession with taking today's technological
knowledge and assuming that things won't drastically change? Nobody should
ever pretend to know what the future holds.

~~~
saintgimp
That's a very unreasonable point of view. Sure, it's true that in the past 200
years we've had an explosion of technological progress, but there's absolutely
no reason to assume it can continue for an infinite amount of time. It's
rather more likely that over time we'll discover the lines that separate "hard
but possible" from "theoretically impossible given the physics of this
universe". The only real question is when that will happen for any given area
of technology.

Asserting that technology will just continue to improve and advance _forever_
simply makes you sound like a stockbroker in 2007.

~~~
lotsofmangos
In terms of the limits of computer development, we have a way to go yet before
we make something like a Matrioshka brain.

