
Transistors Will Stop Shrinking in 2021, Moore’s Law Roadmap Predicts - dbcooper
http://spectrum.ieee.org/tech-talk/computing/hardware/transistors-will-stop-shrinking-in-2021-moores-law-roadmap-predicts
======
Nokinside
Moore's law was the result of constant development at the lowest level of
integrated circuits: lithography and chemistry.

Moore's law made most higher-level hardware architecture innovation almost
meaningless. Developing new kinds of parallel computers (Cray, Thinking
Machines) was a dead end when conventional computers made from Intel chips
reached the same level in a few years without specialized software.

Things have already changed. First specialized graphics processors and now
GPGPU are the first break from the norm. GPGPU technology is one level up
from lithography and silicon chemistry: it's a different architecture and a
different programming paradigm. This will be the first time Intel is not going
to shake it off with a faster CPU.

We should expect more higher-level hardware innovation. Integrating GPU
functions into DRAM chips might be the next step.

~~~
rasz_pl
Micron has had a RAM-integrated state machine for some time now; nothing came
of it :(

[http://www.micronautomata.com/](http://www.micronautomata.com/)

They might be missing the boat by not releasing a cheap entry-level hardware
platform, a ~$20 DDR3/4 stick.

~~~
agumonkey
Considering how often the RAM bottleneck is mentioned in computer
performance talks, I'm still surprised simple logic / arithmetic instructions
haven't hit mainstream RAM modules.

~~~
lomnakkus
I'm obviously just speculating and handwaving wildly here[1], but ISTM that --
_currently_ -- the limitation lies mainly in programming languages and
semantics. AFAICT all of the non-traditional machine models require radically
different programming paradigms[2] and we still aren't _quite_ at that point
where gaining more performance is painful _enough_ that the necessary
research/innovation will happen. That's assuming that it's even possible.
Obviously, any model of computation places some limitations on what can happen
and all information transfer (thus computation) is still limited by the speed
of light. Some things are just going to remain out of reach forever and not
even Quantum Computers are going to help.

[1] As I suppose we all are. C'mon we're talking about predicting the future
here.

[2] If you want a practical example, look at the PS3's Cell processor. I can't
claim to have done any 'native' programming for the PS3, but I've heard (from
very clever people that I trust) that it was quite alien and required a
radically different program structure from what one is used to. And that
architecture isn't even _that_ different from 'normal' architectures.

~~~
agumonkey
Naive thought: compilers could emit RAM-local instructions instead of
register spills and the like. Maybe nothing fancy, but I'd think there could
be a nice class of operations - a block swap, or a block increment - that
could be generated without too much burden in terms of semantics.
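
Something like this, roughly, in C++-ish terms - dram_block_inc is purely
hypothetical, just to give the idea a name:

    #include <cstddef>
    #include <cstdint>

    // Today: every word streams through the CPU's caches just to be
    // bumped by one, then streams back out to memory.
    void block_inc_cpu(std::uint64_t* buf, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            buf[i] += 1;
    }

    // The hypothetical RAM-local version: the compiler recognizes the
    // loop above and emits one command to the memory module, which
    // increments the block in place with no data movement. No such
    // instruction exists on mainstream hardware; this stub just falls
    // back to the CPU loop.
    void dram_block_inc(std::uint64_t* buf, std::size_t n) {
        block_inc_cpu(buf, n);  // stand-in for an in-DRAM operation
    }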

------
ghshephard
Note that they bury an important concept deeper in the article:

 _The new report embraces these trends, predicting an end to traditional
scaling—the shrinking of chip features—by the early 2020’s. But the idea that
we’re now facing an end to Moore’s Law “is completely wrong,” Gargini says.
... If a company really wanted to, Gargini says, it could continue to make
transistors smaller well into the 2020s, “but it’s more economic to go 3-D.
That’s the message we wanted to send.”_

~~~
tim333
Moore's law has always been down to economics - it was worth the industry's
investment to develop more transistors per chip area year after year. If that
stops, then it stops. Computers will keep getting faster and more powerful,
though, by whatever means make economic sense.

~~~
ghshephard
Right - but the subtle, easy-to-miss message here isn't that it's no longer
possible, or even economical, to increase the density of transistors absent
other opportunities, but that with the advent of 3D technologies, increasing
the number of transistors per chip can be done _more efficiently_ using 3D
approaches. It may be that once that approach has run its course, the industry
switches back to increasing density again.

~~~
ci5er
Ultimately you hit a 2D density wall. I'd imagine it's hard to get denser
than one bit per several atoms. A silicon atom itself is 0.2nm, so ... not
that far off from where we already are. That pretty much maxes out the
thin-film 2D approach, and to pack in more, we'll need to go vertical. And to
do that, we'll need to create novel fabrication technologies (unless they come
up with some photolithography magic that I can't imagine right now), and,
problematically, they'll need to deal with heat.

This is not to say that I am disagreeing with you on the switch-back-to-2D
front, but I don't think we have much further to go on the 2D-density front.

~~~
out_of_protocol
Use 0.54nm in your calculations - that's the silicon lattice constant (the
spacing of the crystal's repeating cell).
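
Back-of-envelope, with the node sizes below just picked for illustration:

    #include <cstdio>

    int main() {
        // Rough estimate only: how many lattice cells fit across one
        // minimum feature. Node sizes are illustrative, not exact.
        const double lattice_nm = 0.543;              // Si lattice constant
        const double features_nm[] = {22.0, 14.0, 7.0};
        for (double f : features_nm)
            std::printf("%5.1f nm ~ %4.1f lattice cells wide\n",
                        f, f / lattice_nm);
        // 14 nm is only ~26 cells across - "one bit per several atoms"
        // really isn't many doublings away.
        return 0;
    }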

~~~
ci5er
Yup. Depending on, well, a number of factors related to doping and oxides and
the like, usually even more. I'm just shooting for the right number of zeros
in an overly optimistic boundary/wall hypothetical! :-)

------
heisenbit
This says transistors will stop shrinking five years from now. Computers
stopped getting higher clock speeds a while ago; more integration and wider
data paths have compensated to some degree. The alternatives discussed, like
going vertical, sound fairly exotic to my ears for an industry that has been
shrinking, shrinking and shrinking the nodes as its main mode of technical
progress.

The potential impact is game-changing. I really wonder how this is going to
shape where investments in IT go.

~~~
userbinator
_I really wonder how this is going to shape where investments in IT go._

Hopefully we'll start to see even more emphasis on optimisation, possibly a
revived interest in asm while compilers catch up... and then we'll realise
just how efficient software can really be.

~~~
ant6n
Or C++N, which gives you abstractions that aren't very expensive.

I wish we had better, higher-level, cross-platform access to SIMD.
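
The closest thing today is probably compiler vector extensions; here's a
sketch of the flavor (GCC/Clang-specific, not standard C++):

    #include <cstddef>
    #include <cstring>

    // GCC/Clang vector extensions: v4f is four packed floats, and the
    // compiler lowers arithmetic on it to SSE/AVX or NEON as available.
    typedef float v4f __attribute__((vector_size(16)));

    // y = a*x + y over n floats, four lanes at a time.
    void saxpy(float a, const float* x, float* y, std::size_t n) {
        const v4f va = {a, a, a, a};              // broadcast the scalar
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            v4f vx, vy;
            std::memcpy(&vx, x + i, sizeof vx);   // unaligned-safe loads
            std::memcpy(&vy, y + i, sizeof vy);
            vy = va * vx + vy;                    // element-wise vector math
            std::memcpy(y + i, &vy, sizeof vy);
        }
        for (; i < n; ++i)                        // scalar remainder
            y[i] = a * x[i] + y[i];
    }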

~~~
maga
Your wish will soon be granted: SIMD is a stage 3 proposal for JavaScript,
already implemented in nightly versions of FF and Edge, and coming to Chrome
as well.

[https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SIMD](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SIMD)

------
azeirah
As a computer engineering student, I don't understand how adding more
transistors to a CPU makes it faster; GPUs, sure. If I wire up an 8-bit adder
circuit, adding more transistors will definitely not make it process those
bits faster.

~~~
pharrington
If you had more transistors, you could make a 16-bit adder circuit!

The difficulty is in getting all the features you want within the space
provided while adhering to the laws of physics.

~~~
mtanski
Or 8x16-bit or 8x32-bit adders, which is SIMD.
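
A software sketch of both ideas (illustrative only - real adders are gates,
not code):

    #include <cstdint>

    // More transistors buy width: chain two 8-bit adds (with carry)
    // into one 16-bit add.
    std::uint16_t add16(std::uint8_t a_lo, std::uint8_t a_hi,
                        std::uint8_t b_lo, std::uint8_t b_hi) {
        unsigned lo = a_lo + b_lo;                // carry out lands in bit 8
        unsigned hi = a_hi + b_hi + (lo >> 8);
        return (std::uint16_t)(((hi & 0xFF) << 8) | (lo & 0xFF));
    }

    // ...or run eight independent 8-bit adds side by side (SWAR, the
    // software cousin of a SIMD adder). The high bit of each lane is
    // handled via XOR so no carry ripples into the next lane.
    std::uint64_t add8x8(std::uint64_t a, std::uint64_t b) {
        const std::uint64_t H = 0x8080808080808080ull;  // lane high bits
        return ((a & ~H) + (b & ~H)) ^ ((a ^ b) & H);
    }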

------
LittleSpider
It seems the discussion in the article uses two different definitions of
Moore's Law: the number of transistors per "space" (and, by proxy, the size of
each transistor) and overall computing power per "space."

When Mr. Moore put his law out there 50+ years ago, perhaps the number/size of
transistors was effectively a proxy for computing power, or, perhaps more
likely, no one saw a difference, or cared to see a difference, between the
two. Since Mr. Moore is still around, I would love to have his feedback about
this - like Woody Allen's character pulling in Marshall McLuhan in Annie Hall,
I guess.

Enough of the aside, though: since 1965, much has been learned on the hardware
and software sides, industries have grown and shrunk, and we are now able to
have debates that - at least somewhat - bifurcate the issue of size/density of
hardware from the issue of the somewhat more intangible computing power.

~~~
Zardoz84
Moore was talking explicitly about transistors, not computing power.

~~~
LittleSpider
Agreed that he said that in his 1965 paper; research indicates he was asked
what would happen in the semiconductor industry (which I see as "the hardware
side" - he was Director of R&D at Fairchild at the time). From several
accounts I have read, it was after Mr. Moore revisited his thoughts in 1975
that David House made reference to an 18-month doubling of _performance_ of
circuits.

This leads to my question about what the thinking was (or wasn't) 10+ years
earlier. Was there any awareness, in 1965, of a difference between transistor
count and operating performance, or did this awareness only come about later
as the industry matured? I tend to think that without any history to define a
difference, they would have been viewed as one and the same in 1965. I
theorize that at the time, Mr. Moore - a young, 35-36 year old Director of
semiconductor R&D, and undoubtedly quite intelligent - treated the question as
solely about circuits when he was asked to write an article about future
projections in the semiconductor industry, and wasn't necessarily taking a
wider view of overall performance (however performance would have been defined
at the time, if it was considered anything different from transistor
size/count/density).

------
kylehotchkiss
What in consumer computing needs much more power than we have now? It
seems like the next steps are really battery life and covering more of the
planet with reliable, fast internet service.

~~~
hughperkins
machine learning

------
Rexxar
Does anybody know why the 2015 projection gets to 10nm faster than the 2013
one? They took on some delay at 14/16nm, but they still want to keep the 10nm
target for 2021?

------
pier25
How long until quantum computing is available in consumer equipment?

~~~
sandworm101
You will have to redefine what "consumer equipment" means. I don't see
in-home cryo-chilled machines being a thing anytime soon. But perhaps such
things will be available at co-location facilities and easily accessed from
conventional home machines.

~~~
pier25
I meant laptops, mobile devices, etc.

Sure, these things are huge today, but we now have in our pockets more power
than was available in a Cray supercomputer from the 80s.

~~~
sandworm101
Quantum computing isn't just a faster tech. It's a very different means of
answering very specific questions. I cannot see how it would ever be useful
for something like a GPU running a game, or a CPU running a macro on a large
spreadsheet file. I could see a use for a quantum computing module attached to
a computer, much like a graphics card, but I cannot see how a quantum
processor would fill the role of a CPU. Given that, I don't see a market for
it in everyday computing.

~~~
pier25
Could you expand on that? Is it a matter of efficiency, like running graphics
calculations on a CPU?

