
The Moore's Law free lunch is over. Now welcome to the hardware jungle. - gioele
http://herbsutter.com/welcome-to-the-jungle/
======
JoshTriplett
Moore's Law says absolutely nothing about performance, contrary to what this
article repeatedly implies. Moore's Law says that the number of transistors in
an integrated circuit doubles roughly every two years, and that has fairly
consistently held true and continues to do so.

For years, those transistors went into increasing CPU speed. Now, additional
transistors go into building more CPU cores, or more execution units. Either
way, Moore's Law still holds.

~~~
gamble
This is technically true, but it happens that clock speeds and MIPS have also
increased at a geometric rate over long periods of time. It's a bit harder to
characterize because processor architecture changes have a discontinuous
effect, but on average MIPS doubled every 36 months. [1]

[1]
[http://www.meaningprocessing.com/personalPages/tuomi/article...](http://www.meaningprocessing.com/personalPages/tuomi/articles/TheLivesAndTheDeathOfMoore.pdf)

~~~
waitwhat
Except that isn't Moore's Law.

~~~
joe_the_user
No, it's not, but it is what people have often taken to be one of the main
effects of Moore's Law, and it is the phenomenon whose end the article is
discussing the implications of. And those implications are important even if
they aren't directly related to Moore's Law.

------
duggan
_"But that’s pretty much it – we currently know of no other major ways to
exploit Moore’s Law for compute performance, and once these veins are
exhausted it will be largely mined out."_

This is an awfully big post on Moore's Law not to include any mention of
memristors. [1]

[1] <http://www.hpl.hp.com/news/2010/apr-jun/memristor.html>

~~~
sliverstorm
The man is discussing paradigm shifts. While memristors will be pretty awesome
if they ever come online in commercial production, I'm not aware of any major
paradigm shifts they will cause (or prevent)...?

~~~
duggan
I've only a casual interest in the area, but these are some things I read
around the time of the announcement:

 _"Memristive devices could change the standard paradigm of computing by
enabling calculations to be performed in the chips where data is stored rather
than in a specialized central processing unit. Thus, we anticipate the ability
to make more compact and power-efficient computing systems well into the
future, even after it is no longer possible to make transistors smaller via
the traditional Moore’s Law approach."_

– R. Stanley Williams, senior fellow and director, Information and Quantum
Systems Lab, HP

 _"Since our brains are made of memristors, the flood gate is now open for
commercialization of computers that would compute like human brains, which is
totally different from the von Neumann architecture underpinning all digital
computers."_

– Leon Chua, professor, Electrical Engineering and Computer Sciences
Department, University of California at Berkeley.

<http://www.hp.com/hpinfo/newsroom/press/2010/100408xa.html>

It just seems odd to go into such depth on transistor density and CPU/memory
architectures (and potential future architectures) without mentioning
memristors.

I agree that utilization of cloud resources will be an increasingly
fundamental component of modern device architecture, but - at the risk of
sounding hyperbolic - if memristors live up to the promise, we're talking
about supercomputers the size of the human brain. [1]

[1] <http://www.physorg.com/news190483253.html>

------
loup-vaillant
> _Note that the word “smartphone” is already a major misnomer, because a
> pocket device that can run apps is not primarily a phone at all. It’s
> primarily a general-purpose personal computer that happens to have a couple
> of built-in radios for cell and WiFi service […]_

I love this. It explains both why locking down your customers' iPhones is evil
(assuming it is Wrong™ to sever freedom 0 from a computer), _and_ why people
accept it (somehow they act as if it is not a "real" computer).

------
ars
A common misconception:

Moore's law says nothing about the speed of a CPU!

It talks about how many transistors there are on a chip. So Moore's law is not
ending and has not changed (so far).

------
ChuckMcM
Sigh. There is a huge elephant in this room, it was the beast of Christmas
past. Specifically, for years Microsoft colluded with Intel to make systems
which consumed more memory and CPU power such that an 'upgrade' cycle would be
required.

Have you ever wondered how a machine less than 1/100th the speed of the
machine on your desk computed the bills, interest, and statements for millions
of credit card users? The IBM 370 that computed and printed those statements
for MasterCard back in the late 70's had a whole lot of I/O channels.

I would love to see it become relevant again for developers to assume that the
computer they are targeting will be roughly the same speed as today's, and
that all of their 'features' have to be implemented with no loss of speed in
the overall system. There is a lot of room for optimization; nobody has
seriously attacked that problem yet, because nobody has needed to: people who
optimized were left in the dust by people who could assume the next generation
of machines would be fast enough to make bloated code good enough.

We haven't done anything about Amdahl's law, though, and of course the thing
that gives us parallelism is an interconnect between compute nexii (nexuses?).
I was hoping there would be some insights along those lines in the article,
but I was disappointed.
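
To make the Amdahl's law point concrete, here's a minimal sketch (my own
illustration with made-up numbers, not anything from the article) of how
quickly the serial fraction caps the speedup no matter how many cores the
interconnect ties together:

    // Minimal sketch of Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
    // where p is the fraction of the work that parallelizes and n is the
    // core count. The numbers below are illustrative only.
    #include <cstdio>

    double amdahl_speedup(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        const double p = 0.95;                   // assume 95% parallelizable
        const double cores[] = {2, 8, 64, 1024};
        for (double n : cores) {
            std::printf("%6.0f cores -> %5.1fx speedup\n",
                        n, amdahl_speedup(p, n));
        }
        // Even with infinite cores the limit here is 1 / (1 - p) = 20x,
        // which is why the serial part (and the interconnect) still matters.
        return 0;
    }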

~~~
xyzzyz
_Specifically, for years Microsoft colluded with Intel to make systems which
consumed more memory and CPU power such that an 'upgrade' cycle would be
required._

That does not seem right. Recent Ubuntu releases don't perform any better on
older machines than Windows 7 does, and I seriously doubt that there's a pact
between Canonical and Intel as well.

~~~
klipt
It's really just a generalization of

<http://en.wikipedia.org/wiki/Parkinson%27s_law>

"Work expands so as to fill the time [or other resources] available for its
completion."

~~~
ChuckMcM
I think this is a large part of it. When you look at something like Compiz,
you say "well, that is nice and cool and all, but it's not really contributing
to the work being done." It's eye candy that can be done because we have a GPU
sitting there otherwise idle.

There is a story, probably apocryphal at this point, that a long time ago in
what seems like a different universe, an engineer working on the Xerox Star
system was looking into why it was bogging down. It was reported there were
nearly 800 subroutine calls between a keystroke and a letter being rendered on
the screen. They managed to cut that number in half and the performance
improved by a third. Mostly it was abstractions.

People always find uses for every available CPU cycle, as predicted; now we're
entering a time when you will need to optimize something to get more cycles.

------
wglb
I would tend to agree with Knuth's suggestion that multicore stuff and the
"hardware jungle" is a symptom of a lack of imagination on the part of
hardware designers.

~~~
rbanffy
As a hardware engineer myself, I must agree. We have only explored a tiny
subset of all possible configurations with our computer designs, held back,
perhaps, by the need to be binary-compatible with a hardware/software
architecture that was obsolete in the mid 80's.

It's time to do better.

~~~
sounds
ARM seems to do better than Intel.

Perhaps an FPGA (using the term more broadly to include any field-
reprogrammable logic array) can become more competitive in performance per
watt?

~~~
jgw
That can't really happen. FPGAs are just programmable ASICs.

(Also, I'm not sure if you're suggesting that ARM cores are by their nature
field-programmable. They're not, they're just IP cores that can be integrated
into larger designs).

~~~
Kliment
I think sounds is saying that a runtime-reconfigurable processor could be the
next step forward, adapting itself to whatever task it has to do. I don't see
much promise for speed improvements from this, but it is a cool idea.

------
mark-r
Naturally there are always applications that can use more raw power, but the
article focuses on those segments and I think that's misguided. The real
growth powered by Moore's Law in the immediate future will be processors that
are lower power and cheaper, not faster. The success of the iPad has proven
that faster processors are not what the public is clamoring for right now.
Mainstream tasks really hit diminishing returns after two cores. Not that
processors won't get faster, just that the transistor budget won't get used to
that end exclusively.

~~~
jiggy2011
True; another factor is that we can now offload more stuff to "the cloud",
which is especially useful on handhelds.

As we get better connectivity everywhere it might be that we actually see a
regression in performance (or at least staying static) on many mobile devices
in exchange for better battery life.

~~~
mark-r
I'm pessimistic about the future of the cloud on mobile devices, because we'll
reach a bandwidth limit. As the frequencies fill up, the pipes are going to
get more sluggish over time, unlike the trends we see in other tech sectors.

------
akg
Has anyone looked at lock-free data structures and algorithms in projects with
large amounts of concurrency? The approach looks promising but it's unclear
how practical it is.
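
For anyone unfamiliar with the term, here is a minimal sketch of the idea (a
Treiber-style stack using C++11 atomics; my own illustration, not production
code), with comments noting where the real practical difficulty lies:

    // Sketch of a Treiber-style lock-free stack using C++11 atomics.
    // Nodes are deliberately never freed: that sidesteps the ABA and
    // use-after-free problems at the cost of leaking memory, and safe
    // reclamation (hazard pointers, epochs, etc.) is exactly the part
    // that makes real lock-free structures hard in practice.
    #include <atomic>
    #include <utility>

    template <typename T>
    class LockFreeStack {
        struct Node {
            T value;
            Node* next;
        };
        std::atomic<Node*> head{nullptr};

    public:
        void push(T value) {
            Node* node = new Node{std::move(value), head.load()};
            // Retry until head is swung from our snapshot to the new node;
            // on failure, compare_exchange_weak reloads head into node->next.
            while (!head.compare_exchange_weak(node->next, node)) {
            }
        }

        bool try_pop(T& out) {
            Node* old = head.load();
            while (old && !head.compare_exchange_weak(old, old->next)) {
                // 'old' is refreshed with the current head on each failure.
            }
            if (!old) return false;
            out = std::move(old->value);
            return true;  // 'old' is intentionally leaked (see note above).
        }
    };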

~~~
duggan
I think for tackling a subset of these problems (well, _circumventing them_ )
ZeroMQ is generating a good deal of interest.

<http://www.zeromq.org>

<http://www.zeromq.org/blog:multithreading-magic>
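
The core trick the second link describes is that threads share nothing and
only pass messages. A minimal sketch of that pattern over in-process sockets,
using the plain libzmq C API (my own illustration; the endpoint name is
arbitrary and error handling is omitted), might look like this:

    // Threads exchange messages over inproc PUSH/PULL sockets instead of
    // locking shared state: no mutexes, no shared data structures.
    #include <zmq.h>
    #include <cstdio>
    #include <cstring>
    #include <thread>

    int main() {
        void* ctx = zmq_ctx_new();

        // Bind before spawning the worker so the inproc endpoint exists.
        void* push = zmq_socket(ctx, ZMQ_PUSH);
        zmq_bind(push, "inproc://work");

        // Worker thread: pulls work items over an in-process socket.
        std::thread worker([ctx] {
            void* pull = zmq_socket(ctx, ZMQ_PULL);
            zmq_connect(pull, "inproc://work");
            char buf[64];  // messages here are known to fit
            for (;;) {
                int n = zmq_recv(pull, buf, sizeof(buf) - 1, 0);
                if (n < 0) break;
                buf[n] = '\0';
                if (std::strcmp(buf, "stop") == 0) break;
                std::printf("worker got: %s\n", buf);
            }
            zmq_close(pull);
        });

        // Main thread: pushes messages to the worker.
        const char* msgs[] = {"task-1", "task-2", "stop"};
        for (const char* m : msgs) {
            zmq_send(push, m, std::strlen(m), 0);
        }

        worker.join();
        zmq_close(push);
        zmq_ctx_destroy(ctx);
        return 0;
    }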

~~~
plainOldText
I was thinking about the same thing. My view: build applications on top of
heterogeneous nodes (in the cloud) and then couple them all together via
ZeroMQ.

------
ZephyrP
"For over a decade prophets have voiced the contention that the organization
of a single computer has reached its limits and that truly significant
advances can be made only by interconnection of a multiplicity of computers in
such a manner as to permit cooperative solution."

This now-pithy statement was written by the famous Gene Amdahl in 1967, a time
when computers ran at speeds dwarfed by today's digital clocks, but it also
gives us insight into a time when people were already dealing with the same
problems we deal with today in developing faster and faster CPUs.

The truth of the statement may be something that functional programming or
parallelism advocates don't want to hear: the so-called Parallelism Revolution
will never come, at least not in its current incarnation.

The end of serial advancement, and thus the parallelization revolution, was
"supposed" to happen in the 80s, and despite considerable advances in the
methodologies of parallelization, it did not come. The 90s brought us
standards and technologies like MPI, which standardized procedures for
developing cooperative computing solutions, but still it did not come. The
2000s sought to simplify the very act of programming by bringing its ideas
back to the realm of pure mathematics, representing programs as mathematical
descriptions of time and work itself; with languages like Haskell and ML we
sought to build machines that model math, and thus the parallel nature of
computation within the universe itself.

I feel it myself, the sublime glitter of gold locked inside the idea of
parallel computation; it is irresistible to a curious individual. To feel as
if all the power of the world is in your hands in this moment (as opposed to
20 years from now), to wipe away the frailty that underlies all of computation
today; we would all like to be able to lift a trillion billion bytes into the
heavens.

There are only two problems.

The first problem lies squarely within our own human inadequacies, and it
could be argued that this is where parallelism fails deepest. It is certainly
true that parallelization is complex, but like all things, abstractions over
that complexity are necessary, and designing the abstractions in such a way
that they are understandable to 'mere mortals' is a greatly undervalued aspect
of technology today. So I would posit that, as a result of insufficient desire
to establish simplified abstractions of parallelization, to most programmers
ideas like parallelism remain in the domain of machine learning and condensed
solids analysis: a kind of electronic black art, used only by those with
sufficient training to know what horrors they might wreak upon the world were
they to make some trivial programming mistake. As a result (ceteris paribus!)
serial power will always be valued more highly than parallel computational
capacity, which many have claimed to be the predominant driver of the
commercial development of scientific ideas.

The second problem is more controversial, but I think time will prove it so:
computers have managed, and will continue to manage, to get faster at an
alarming rate. Regardless of our preconceptions about the mechanics of
computation, I believe it is reasonable to say that computers will continue to
get faster at exponential rates, even after the so-called quantum limits of
computation come into play. This is reasonable for the same reason the normal
distribution manifests itself in disparate natural phenomena: the Central
Limit Theorem. Sutter himself admits that people have been using the exact
same logic to claim the beginning of the end for the past 60-70 years (before
'real' computers even existed); I fail to see how he justifies his reasoning
after making this enlightened point.

~~~
jerf
Your argument would be a lot more compelling if computers were, you know,
_getting faster_. The idea that they might someday stop getting faster is out
of date, in the sense that they stopped getting faster at least five years
ago, and that's being _very_ conservative.

That people decades in the past were wrong doesn't do anything about the fact
the people _one_ decade in the past were _right_. Computers have _already_
stopped getting faster at an "alarming rate", it's a past event, it's not
speculation. They're still improving and there's still some room for
improvement, but we've already fallen off the exponential curve and I don't
anticipate getting back on it anytime soon.

~~~
jquery
I'm not sure what you're talking about, because even for single-threaded
applications, my current computer is a couple orders of magnitude faster than
the computer I had 5 years ago.

~~~
jerf
No, it's not. Not in the same price range it's not. Not on the same tasks it's
not. 100 times faster? You need to sell that beast for some real cash because
you've got something nobody else does.

The only way that can be true is if you didn't realize your 2007 computer was
continuously in swap.

Edit: Oh, sorry, read further down the thread, wherein your secret definition
of "orders of magnitude" is revealed. Even then it's not true; the only place
you're getting 4x speed improvements in single threading for the same price is
either the very bottom of the market (maybe) or in certain benchmarks that
carefully test only certain aspects of the single thread performance. It's
certainly not across the board.

~~~
jquery
You made the claim that computers had stopped getting faster. Do you stand by
that statement? Please define what you mean by speed.

------
artsrc
What will drive performance is human ingenuity and innovation.

How can we tell whether the slower rate of performance increase has more to do
with decreased marginal utility than with physical limits?

Computers now use less power, and laptops can run for up to 10 hours without a
charge.

Rather than investing silicon in technologies that make things harder, perhaps
we can improve performance by making them easier.

Maybe computers can be more garbage-collection friendly, run high-level
languages at full speed, etc. Perhaps the pendulum needs to swing towards
Lisp-machine-style architectures.

~~~
nickik
Getting away from x86 would be a start :)

Azul is basically a modern Lisp (well, Java) machine, but it's kind of the
same idea. There is an awesome discussion between Cliff Click from Azul and
Dave Moon (one of the guys who worked on the Lisp machine):
[http://www.azulsystems.com/blog/cliff/2008-11-18-brief-conve...](http://www.azulsystems.com/blog/cliff/2008-11-18-brief-conversation-david-moon)

Another direction we should go in is security: we should have a trusted
computing base. Some awesome stuff is being done by DARPA:
<http://www.crash-safe.org/papers> (one of the Lisp machine guys is working
there too).

This would allow developers to focus more on algorithms and speed. All that
said, we still have to deal with the multicore problem :)

------
etaty
Is Hacker News broken?
[http://www.hnsearch.com/search#request/submissions&q=her...](http://www.hnsearch.com/search#request/submissions&q=herbsutter.com&sortby=create_ts+desc&start=0)

Since when can the same URL be used for multiple stories?

------
GigabyteCoin
A very well put writeup. Bravo.

By the second paragraph I felt informed and eager to keep reading. Thank you.

~~~
wladimir
Yes, it is a very thought-provoking article, similar to his "The Free Lunch Is
Over" from 2005.

The upcoming jungle sounds like a great adventure. Many interesting challenges
ahead. Though it's very, very hard to get rid of the sequential mindset, we'll
really have to think in new ways.

~~~
GigabyteCoin
It's a good thing us humans like to think ;)

------
Devilboy
People have been warning developers to 'get ready for multithreading' for at
least a decade and somehow everything is mostly the same. Mostly because of
abstractions (e.g. on GPUs) and also because the USERS of our software are
ALSO getting parallelised! So we're back to one thread for one user, since
most of the time you really don't want to do loads of work for one user
request. Cases where parallelism matters (graphics, data stores, query
engines) are already pretty solid on multithreading anyway.

Meh.

~~~
wladimir
_somehow everything is mostly the same_

Because it's really hard to let go of the "cosy" single-threaded, sequential
model. I expect there will always be a place for it, as it gives important
guarantees. Just like mainframes still exist (the article mentions this too -
"different parts of even the same application naturally want to run on
different kinds of cores"). Also, it may be that the free lunch is extended
with graphene or other radically different semiconductor technology (except
for quantum computing as it will also need a complete rethinking of software).

Heterogeneous, parallel computing exists _in addition to_ the sequential model
and won't replace it. I do expect cases where parallelism matters to grow as
AI (voice recognition, human language recognition, driverless cars, etc)
becomes more prevalent.

