
The Moore's Law blowout sale is ending - eksith
http://www.itworld.com/hardware/385701/moores-law-blowout-sale-ending-broadcoms-cto-says
======
ghshephard
Whenever I'm wondering whether Moore's law is still holding, I check this
trailing indicator:
[http://www.top500.org/statistics/perfdevel/](http://www.top500.org/statistics/perfdevel/)

If that (logarithmic) chart starts to flatten - particularly for device #500
on the list (more price-sensitive than the entries at the top) - then we're
running into real obstacles.

Let's check back in 2020 and see if Device #500 is running at 10 Petaflops.
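
A rough sketch of the growth rate that prediction implies (assuming #500 sits
at ~100 teraflops in 2013, per the figures later in this thread):

```python
# Sketch: what sustained doubling period turns ~100 teraflops (2013)
# into 10 petaflops by 2020? Assumes pure exponential growth.
import math

start_flops = 100e12    # assumed: system #500 at ~100 TFLOPS in 2013
target_flops = 10e15    # the 10 PFLOPS prediction for 2020
years = 2020 - 2013

growth = target_flops / start_flops         # 100x overall
doublings = math.log2(growth)               # ~6.6 doublings
print(f"Implied doubling period: {12 * years / doublings:.1f} months")
# ~12.6 months, i.e. the prediction assumes a Moore's-law-like pace holds
```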

~~~
fiatmoney
Most (all?) of those machines are clusters that scale mostly by throwing in
nodes. Even if you have completely static technology, you'll see an
exponential trend as long as you're scaling with economic growth by adding
nodes.

~~~
ghshephard
Completely agree with you, which is why system #500 is more interesting than
the "Nation State" systems in the first hundred positions or so. System #500
is going to be more closely aligned with the growth in technology.

Also - economic growth [1] since 2001 has averaged 3.78%/year, for a total of
about 61% growth in 12 years. If computers had progressed in line with
economic growth, then I would expect, all else equal, systems to be about 61%
more powerful.

In that same period, system #500 went from 100 gigaflops to 100 teraflops -
approximately 1000x more powerful.
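
A sketch of that comparison (growth figure from [1]; the compounded total
depends on whether you count 12 or 13 annual steps):

```python
# Sketch: GDP-growth baseline vs. the observed Top500 #500 improvement.
gdp_rate = 0.0378                    # average annual US GDP growth [1]
gdp_growth = (1 + gdp_rate) ** 12    # ~1.56x over 12 years (~1.62x over 13)
flops_growth = 100e12 / 100e9        # 100 GFLOPS -> 100 TFLOPS = 1000x

print(f"GDP baseline:  {gdp_growth:.2f}x")
print(f"Observed #500: {flops_growth:.0f}x")
print(f"Gap: ~{flops_growth / gdp_growth:.0f}x beyond economic growth alone")
```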

So - I agree that we'll see an exponential trend regardless of whether
technology improves; it's just that the curve is going to flatten
(particularly on a log10 scale).

[1] [http://www.multpl.com/us-gdp-growth-rate/table/by-year](http://www.multpl.com/us-gdp-growth-rate/table/by-year)

------
tedsanders
Here's my impression of the future of CMOS:

(1) Scaling will be technically feasible for many more years. (Intel's roadmap
goes to 5 nm, and researchers have already made transistors as small as 3 nm.)

(2) However, even if scaling is technically feasible, it may not be
financially feasible. Quadruple patterning may get you smaller feature sizes,
but at a very high cost. The alternative to quadruple patterning, extreme
ultraviolet (EUV) lithography, is also quite expensive, but at least there's
some hope that its costs will drop.

(3) Only if scaling is both technically and financially feasible will we
continue to scale. But even then, the benefits of scaling will diminish (as
they've been diminishing for years). Traditional scaling hasn't done much for
silicon performance in years: recently, most improvements in CMOS have come
from straining the silicon with germanium. And the other benefit of scaling,
lower cost, is also ending. Shrinking transistors gets you more chips per
exposure/wafer, but at the same time it raises the cost of each
exposure/wafer. We're finally reaching the point where the rising cost of
each exposure/wafer matches the savings from scaling.
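
A minimal sketch of that crossover (all numbers invented for illustration,
not real fab data):

```python
# Sketch: cost per transistor stops falling once per-wafer cost rises as
# fast as density. All numbers are illustrative, not real fab data.
nodes = [
    # (node, relative transistor density, relative wafer cost)
    ("28nm", 1.0, 1.00),
    ("20nm", 1.9, 1.45),   # double patterning pushes wafer cost up
    ("14nm", 3.6, 2.60),
    ("10nm", 6.8, 5.10),   # quadruple patterning / more mask steps
]

for name, density, wafer_cost in nodes:
    print(f"{name}: {wafer_cost / density:.2f}x relative cost per transistor")
# Once wafer cost grows as fast as density, cost/transistor flatlines:
# scaling still works technically but stops paying for itself.
```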

In light of all this, it's hard for me to see performance per dollar (which
is not technically Moore's law, I grant you) continuing to improve at the
rapid pace of the past, even if technologies like EUV lithography, III-V
semiconductors, optical interconnects, and whatever else end up panning out.

~~~
yaantc
I fully agree. A lot of people say Moore's law is transistor density doubling
every 18 months (or 2 years; the period has varied). But this misses the
economic angle that was present from the start: Moore talked about the
density of the least-cost process.
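
That "least cost" point can be sketched as a U-shaped curve: at a fixed
process generation, cost per component falls with integration until yield
losses push it back up, and each new generation moves the minimum. A toy
model (every parameter below is invented for illustration):

```python
# Toy model of Moore's "least cost" point: cost per component is U-shaped
# in components-per-chip. All parameters are invented for illustration.
import math

def cost_per_component(n, yield_scale):
    # Per-chip cost amortized over n components, divided by yield;
    # yield decays as chips get denser/larger for a given process.
    chip_cost = 1.0 + 0.001 * n              # bigger chip, higher raw cost
    yield_frac = math.exp(-n / yield_scale)  # toy yield model
    return chip_cost / (n * yield_frac)

for gen, yield_scale in [("gen 1", 1_000), ("gen 2", 4_000)]:
    best = min(range(100, 20_000, 100),
               key=lambda n: cost_per_component(n, yield_scale))
    print(f"{gen}: least-cost integration ~{best} components/chip")
# Each generation shifts the cost minimum toward higher integration;
# that moving minimum, not raw density, is what Moore described.
```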

So Moore's law stops either when we cannot cram more transistors into a given
area, or when we cannot increase density without raising the cost per
transistor. From what I've read, I agree it's the economic variant that will
end Moore's law. We'll have ways to shrink and to increase performance, but
it's not clear there will be enough of a market to justify the rising costs.

A lot is riding on the continuation or end of Moore's law, so we should expect
a lot of spin on this issue. I note that Intel claims they will keep
decreasing the cost per transistor until 10 nm at least, which is not the
case for other fabs. But those other fabs are more transparent about costs
than Intel, and Intel said they will produce a low-cost modem+AP chip at...
TSMC. Mmmm...

We have very interesting times ahead of us. Not only in silicon: a lot of
progress in all areas has been fueled by cheap computing power, and a halt in
silicon progress would be felt everywhere. Case in point: I'm in
telecommunications. Every big step was enabled by more computation per watt.
That's what enabled the move from single-channel TDM to CDMA to OFDMA with
wider bands. If we can't increase the amount of computation per watt
exponentially, then don't expect an exponential increase in wireless
throughput either. It will progress, like everything else, but at a slower
pace.
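
A rough sketch of why computation scales with bandwidth in OFDMA (parameters
are illustrative and only loosely LTE-like, not standard-exact):

```python
# Sketch: FFT workload of an OFDMA receiver vs. channel bandwidth.
# Parameters are illustrative, loosely LTE-like, not standard-exact.
import math

subcarrier_spacing = 15e3     # Hz (assumed)
symbols_per_second = 14_000   # assumed OFDM symbol rate

for bandwidth_mhz in [1.4, 5.0, 10.0, 20.0]:
    n_subcarriers = bandwidth_mhz * 1e6 / subcarrier_spacing
    fft_size = 2 ** math.ceil(math.log2(n_subcarriers))
    ops_per_fft = 5 * fft_size * math.log2(fft_size)  # rule-of-thumb cost
    mops = ops_per_fft * symbols_per_second / 1e6
    print(f"{bandwidth_mhz:5.1f} MHz: FFT size {fft_size:5d}, "
          f"~{mops:,.0f} Mops/s for the FFT alone")
# Wider bands mean bigger FFTs at the same symbol rate, so the receiver's
# arithmetic budget grows a bit faster than linearly with bandwidth.
```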

------
beloch
The end of Moore's law will probably come at about the same time true human-
like AI appears, i.e. it seems like it will always be just a few decades in
the future. For example, here's an Economist article (sadly paywalled)
predicting that the end was nigh in 1995!

[http://www.highbeam.com/doc/1G1-17272668.html](http://www.highbeam.com/doc/1G1-17272668.html)

If you extrapolate current designs out then, yes, a size barrier is
approaching. So, we should expect new designs to start appearing. For example,
non-planar designs offer a lot of potential to reduce average trace-length,
which could reduce power dissipation and increase speeds in turn. There are
huge challenges to solve, but it is reasonable to expect that they will be met
so long as the demand is there. In this light, Moore's law is almost a self-
fulfilling prophecy. The demand for computational resources grows
exponentially, so a way to meet that demand is always found.

~~~
tedsanders
Just because people were wrong in the past doesn't mean people are wrong now.
This time, I really do think it's different.

Historically, scaling provided a few benefits. As transistors got smaller,
they went faster and used less power. They also got cheaper. The first
benefit, going faster and using less power, dried up back in the early 2000s
when leakage currents became a problem. This is why clock speeds have been
stuck around 3 GHz for so long. The second benefit, cost reduction, is now
disappearing. The cost per transistor actually went up with the last node!
These days, lithography costs dominate all else. The only way for chips to
start getting cheaper is if EUV pans out (so far it's not looking great, but
we'll see) or if we radically change the fabrication process and turn
transistors sideways (this requires extreme etch control).
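
That early-2000s break is essentially the end of Dennard scaling (the thread
doesn't name it, but the leakage story matches). A sketch of the arithmetic
with idealized scaling factors:

```python
# Sketch: why leakage ended the free speed/power wins (Dennard scaling).
# Dynamic power ~ C * V^2 * f; numbers are idealized per-unit-area factors.
def power_density(c, v, f, area):
    return (c * v**2 * f) / area

base = power_density(c=1.0, v=1.0, f=1.0, area=1.0)
s = 1.4  # one ~0.7x linear shrink per node

# Classic Dennard scaling: C and V shrink while f rises -> flat power density
dennard = power_density(c=1/s, v=1/s, f=s, area=1/s**2)

# Post-2000s reality: leakage keeps V nearly fixed, so raising f would blow
# up power density; the fix was to stop raising f (hence the ~3 GHz plateau)
leaky = power_density(c=1/s, v=1.0, f=s, area=1/s**2)

print(f"Dennard: {dennard/base:.2f}x power density per node (flat)")
print(f"V stuck: {leaky/base:.2f}x power density per node (grows)")
```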

Even if it is technically feasible to go smaller, unless EUV costs go down it
may not be financially feasible.

~~~
nly
> or if we radically change the fabrication process and turn transistors
> sideways

This reminded me so much of this:

[https://www.youtube.com/watch?v=xb_PyKuI7II](https://www.youtube.com/watch?v=xb_PyKuI7II)

------
harshreality
Hopefully the end of cost-effective process shrinks will mean programmers
start cleaning up the mess of hugely inefficient everyday software.

The party may be winding down for full x86-64 cores, but there's still some
room for improvement by increasing the efficiency of instructions on those
cores and adding new instructions for common tasks (like hashing-acceleration
instructions, SSE9000, etc.). Intel already does that with each new processor
family.

There's a large one-time improvement available for computation-heavy workloads
by integrating many simpler high-performance, low-power cores into everyday
machines (like the Epiphany architecture:
[http://www.adapteva.com/](http://www.adapteva.com/)). Intel seems to think
only servers will go that route (Xeon Phi), but I don't know about that.

~~~
klibertp
> start cleaning up the mess of hugely inefficient everyday software

I kind of wish for it too, mainly because it's so enjoyable and fun to spend
hours really grokking the code and optimizing it, instead of duct-taping
another feature onto the top of a similarly messy pile of other features.

Things like Go, D, Nimrod, Felix, OCaml and Haskell could help too.

That said, I don't see this happening in the near future: on the business side
of things, optimization starts to matter only when it's too late to change
the technology. What's probably going to happen is one-click, transparent (to
the application logic) clusters, where performance issues are solved by
adding nodes. It happens right now with AWS and others, but it will be much
easier and better in the future, once again making optimization unnecessary.
For a while.

------
hyp0
I like Kurzweil's back-extrapolation over non-transistor computing hardware [jpg]
[http://upload.wikimedia.org/wikipedia/commons/c/c5/PPTMoores...](http://upload.wikimedia.org/wikipedia/commons/c/c5/PPTMooresLawai.jpg)
(IC, transistor, tube, relay, mechanical)

His future predictions can be fanciful, but his data from the past is solid.
He thinks it could be nanotubes next. But it seems fair to say it could be as
surprising to us as vacuum tubes were to people using relays...

------
aquaroris
I wonder what the post-Moore's law world will look like.

The past couple centuries have been dominated by consumers and businesses
automating and digitizing ever more parts of their lives as computers become
more capable of handling them at the same low cost. (And by the power the
Internet gives us to share information.)

As creating faster chips becomes harder, institutions will start to invest
more in other promising fields and technologies. I wonder if the next few
decades will be dominated by developments from an entirely new discipline and
culture.

~~~
ekianjo
> The past couple centuries

?? Centuries? You mean twenty/thirty years, probably? We didn't really have
computers until about 50 years ago, and even then they were not widespread.

You don't have to wonder what the post-Moore's-law world will look like. Look
at other industries once they mature. The car industry is one example:
there's not that much innovation that goes into making a car better in terms
of performance. I mean, a car from 30 years ago may be less comfortable, but
it still drives pretty well as long as it's maintained. What makes the
difference then is not performance anymore but whatever added value you bring
to it: durability, reliability, service, etc... No doubt sooner or later
electronics makers will have to innovate in different directions as well.

~~~
bluedino
Cars are miles ahead of where they were in 1983 (pardon the pun). Compare
everything from a family sedan to a sports car of the current era to then.
Tires, creature comforts, horsepower, emissions, fuel economy, safety...

~~~
ekianjo
The comparison is moot. You can't drive 100 times faster in a car from 2013
than in one from 1983. With computers, you saw that huge a progression in a
matter of 10 years. What you have in your pocket now is close to what used to
run servers 10 years ago in terms of power (or maybe exceeds it). If
anything, your argument strengthens my point that cars have stopped evolving
in performance, and that car manufacturers now work on other attributes.

------
draugadrotten
_The suit is back!_ Seeing this exact headline about a blowout sale all over
the web [1] at once reminds me once again of Paul Graham's submarine:
[http://www.paulgraham.com/submarine.html](http://www.paulgraham.com/submarine.html)

[1] [https://www.google.com/search?q=Moore%27s+Law+Blowout+Sale+I...](https://www.google.com/search?q=Moore%27s+Law+Blowout+Sale+Is+Ending&ie=utf-8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&gws_rd=cr&ei=HqqhUvT3KouO4gTuv4DwCQ)

------
pushedx
"You can't build a transistor with one atom." Actually, you can, it's called a
single electron transistor (SET), and is a hot research area right now.

~~~
2muchcoffeeman
[http://luciano.stanford.edu/~shimbo/set.html](http://luciano.stanford.edu/~shimbo/set.html)
[http://en.wikipedia.org/wiki/Coulomb_blockade#Single_electro...](http://en.wikipedia.org/wiki/Coulomb_blockade#Single_electron_transistor)

Looking at the pictures, these devices are not made from single atoms. They
consist of multiple atoms but only require a few _electrons_ to function.

------
Bsharp
> _After another three generations or so, chips will probably reach 5nm, and
> at that point there will be only 10 atoms from the beginning to the end of
> each transistor gate, he said. Beyond that, further advances may be
> impossible. "You can't build a transistor with one atom," Samueli said._

I don't get it - did people assume that chips could get infinitely small?

Seems pretty obvious that Moore's Law would fail at some point.

Disclaimer: I didn't know about this law before today, so I'm probably
ignorant of something.

~~~
tedsanders
Anyone with knowledge of transistors knows that transistors cannot scale
forever. I imagine this point is emphasized for people who know nothing about
transistors except their amazing history of scaling.
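
As a rough sanity check on the article's "10 atoms" figure (a sketch using
the silicon lattice constant, glossing over real gate-stack geometry):

```python
# Sketch: roughly how many atoms span a 5 nm gate length?
# Uses the silicon lattice constant; real gate stacks are messier.
si_lattice_nm = 0.543    # silicon unit-cell edge, ~0.543 nm
gate_length_nm = 5.0

unit_cells = gate_length_nm / si_lattice_nm
print(f"~{unit_cells:.0f} unit cells across a 5 nm gate")   # ~9
# Counting roughly one atomic plane per lattice constant gives order-10
# atoms, consistent with the "10 atoms from beginning to end" quote.
```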

------
Joeboy
Can somebody explain how and why this will hurt consumers like me? As long as
I can still buy some general-purpose computing device with 2013 desktop
performance that I can plug a screen and keyboard into and run/write software
on, the end of the upgrade treadmill seems pretty welcome. Am I missing
something?

~~~
archangel_one
Yes - that the progress so far has allowed other form factors, like
smartphones. If it continues, it may open up new possibilities that wouldn't
be feasible with the current level of chip technology.

~~~
Joeboy
So my loss is that I won't be able to buy hypothetical new devices that I
don't currently know I need. This seems more like a problem for manufacturers
than for me.

~~~
Sambdala
This hypothetical new device was the Internet not too long ago, and the PC not
too long before that.

------
walid
I don't see this as a major problem, since my nearly 4-year-old computer
still works fine for most of my computing needs. Computer chips are so
powerful now that most workloads don't demand much more than older computers
could already deliver.

~~~
ChuckMcM
Hmm, consider that you haven't spent any money on a computer in 4 years. Look
at the 'health' of the computer companies like HP, Dell, Acer, etc.

The problem here is a bit more insidious than you might assume at first
glance. The reason you can buy a motherboard for $100 which has close to a
thousand individual components soldered to a 16 layer board is because a
manufacturer invested tens of millions of dollars in a highly automated
manufacturing line that could turn out millions of them a month, and all the
connectors on them had injection molds made because billions of connectors
would be needed, and testing companies built amazing bed-of-nails testers that
can run 25,000 tests on that board.

When I was at NetApp our partners built systems for us (so called ODMs) to our
specifications but in the thousands, not the millions, and our motherboards
that had a similar complexity to the ones that Gigabyte or ASUSTek could sell
cost us $800 each. Smaller runs, those in the hundreds, are like $2,500 to
$3,000 each.
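
A sketch of the amortization arithmetic behind those price points (the dollar
figures below are invented to roughly match the ratios above, not actual BOM
data):

```python
# Sketch: why similar boards cost ~$100, ~$800, or ~$3,000 depending on
# volume: fixed costs (tooling, molds, testers) amortize over the run.
# All dollar figures are invented to roughly match the ratios above.
scenarios = [
    # (label, units in the run, fixed costs for the run, per-unit build cost)
    ("consumer volume", 10_000_000, 20_000_000,  80),  # automated line
    ("ODM, thousands",       5_000,  3_000_000, 200),  # shared line, setup
    ("small run",              300,    700_000, 400),  # mostly hand work
]

for label, units, fixed, marginal in scenarios:
    print(f"{label:>15}: ~${marginal + fixed / units:,.0f} per board")
```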

The lack of buying is taking the oxygen out of the system. These guys are
cutting costs where they can, but there will come a time in your lifetime
when buying a cost-effective "PC"-class machine built out of parts will be
very, very hard to do; they just won't have the volume. A new TV with similar
smarts? Still cheap. But a desktop machine?

They're called economies of scale; they work when there is scale, and when
there isn't, they don't work so well. Factories in China are starting to
switch over to making systems for server farms, not individuals. Amazon,
Google, and Microsoft might buy 50,000 at a time. You and I, not so much.

We are on the leading edge of that change. I have a sense that it is going to
be a fairly dramatic change.

~~~
wycx
However, look at the direction desktop hardware is going. Integration of more
and more functionality into SoCs is the future for all but high-performance
desktop computing.

The more components that are integrated into the chip, the less there needs to
be on the motherboard. Even better, all the high speed stuff can move onto the
chip, making motherboard design easier. How long until SoCs come with RAM on
die? Could you get away with a 4-layer motherboard in that case? So whilst you
may lose economies of scale on the motherboard side, I think you will gain
them back via integration; however, you will lose the ability to customise.

~~~
walid
> So whilst you may lose economies of scale on the motherboard side, I think
> you will gain them back via integration; however, you will lose the ability
> to customise.

True, but if the SoC were advanced enough, it could make customization
unnecessary. After all, a phone or tablet or laptop or desktop or server is
just a computer with some processing power and memory. Old supercomputers can
now fit on a Samsung Galaxy Gear. Add more of the same to the SoC, or make
SoCs of varying specs. Who needs customization in such an environment?

------
ithkuil
[http://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit](http://en.wikipedia.org/wiki/Three-dimensional_integrated_circuit) ?

------
CurtMonash
Company CEO says he'd prefer not to slash prices every year. News at 11.

