
The future of computing: After Moore's law - martincmartin
http://www.economist.com/news/leaders/21694528-era-predictable-improvement-computer-hardware-ending-what-comes-next-future
======
Animats
The use of highly parallel GPUs as general purpose compute elements is a major
trend. Most graphics boards have more compute power than the main CPU they
serve. The Titan and Summit supercomputers have most of their compute power in
the GPUs. The limits of GPU parallelism haven't been reached yet. Machine
learning can be done on GPUs, and it's currently the biggest compute-hungry
workload in mainstream use. That technology isn't near a ceiling.

As machine learning/AI/neuron emulation becomes more useful, we'll see
hardware specialized for that. It's not yet clear what shape that hardware
will take. Look for "works fine, but is slow on GPUs" results to lead to such
hardware, rather than "build it and they will come" projects like the Human
Brain Project.

There's still more improvement possible in storage devices. With 20TB SSDs
expected within four years, things are looking good in that area. For compute
elements, cooling limits transistor density; storage tends not to be
heat-dissipation limited.

~~~
pixl97
>With the 20TB SSD drive expected in 4 years,

4? Samsung is shipping 16TB SSDs (to a very few select partners) _now_! I
suspect in 4 years we'll be closer to 50TB.

~~~
eikenberry
I would say 20TB at scale with a decent storage/$ ratio in 4 years sounds
about right.

------
ghaff
It's a pretty good piece. The main thing it skips over is the economic
implications of transistor costs no longer declining. Sure, Google can add a
more or less arbitrarily large number of servers, but what are the
implications of the likely reality that those servers aren't improving in
price/performance to the degree that they once were?

At Hot Chips a couple of years back Robert Colwell, who was director of the
microsystems technology office at DARPA at the time, had a very interesting
presentation on where things were going. One of the things that stuck with me
at the time was his contention that there are lots of ways to improve
performance etc. over time but CMOS was really pretty special.

"Colwell also points out that from 1980 to 2010, clocks improved 3500X and
micro architectural and other improvements contributed about another 50X
performance boost. The process shrink marvel expressed by Moore’s Law
(Observation) has overshadowed just about everything else."

[http://bitmason.blogspot.com/2016/01/beyond-general-purpose-in-servers.html](http://bitmason.blogspot.com/2016/01/beyond-general-purpose-in-servers.html)

~~~
jacobr1
Even if we don't produce smaller circuits, we might still produce cheaper
ones. There is plenty of room left to reduce fab costs, especially since we
basically replace fabs every few years. Consider what would happen if Intel
spent its time iterating solely on cost. We have also made headway on power
reduction, and I could see further improvements there. In aggregate, even if
we don't see greater chip density, I could imagine, say, AWS compute power per
dollar continuing Moore's trend for some time.

~~~
nickpsecurity
That's true. Seeing mask sets and EDA tooling get cheap at 90nm or 65nm could
open up serious possibilities. The laptop still serving me well runs a CPU
done on 65nm with other chips from even older nodes.

------
pedalpete
I was recently thinking about Moore's Law and whether it is truly coming to an
end. My initial reaction is that the number of transistors doubling doesn't
matter in itself; what matters is whether our compute power is doubling.

This led me to think that maybe Moore's Law is looking at the wrong metric,
and that there may be a more fundamental law about increasing capacity. Some
evidence for this is the drop in prices of cloud computing and the increases
in network capacity. If chips stopped getting more powerful, in theory we
might not notice, as the capability of the entire network continues to
increase at the familiar rate.

However, this raises the question: did this increased capacity actually begin
with Moore's Law and computer processing in general? Or is there an
overarching law of progress which has always existed? To test this, I'd need
to go back and look at the growth of industrialization. I suspect that as the
capabilities of the factories slowed, our network speed (rail and sea, then
road and then air) increased. As the network speed reached a plateau, we began
to find places where we could manufacture more cheaply, thereby increasing the
output per cost. Does the same growth exist in agriculture? What major
industry does not fit?

There is some evidence that a law similar to Moore's law exists beyond
technology alone. What effect does that have on our understanding of why
things grow?

[edit: this was an excerpt from my YC application, answering the question
about what you have discovered]

~~~
mtdewcmu
I think technological progress is more like a logistic curve, with
exponential-like growth at the beginning and then a leveling off. Look at
technologies that have already had time to mature. The speed of airplanes grew
tremendously while jet engines and wing shapes were undergoing heavy
refinement, but then it leveled off and hasn't really budged in decades.

~~~
gozur88
Technologies can take leaps and bounds, too, though. If you view the modern
computer as an evolution of the abacus the flat part of the curve was the
first few thousand years. It's possible quantum computing, say, or neural nets
will be leaps of that order.

Or maybe not. It's impossible to predict true breakthroughs.

~~~
mtdewcmu
>> If you view the modern computer as an evolution of the abacus

I'm not sure what value that has for predicting the future. You could view the
airplane as the evolution of the chariot. Or not. Does it make a difference?

~~~
gozur88
It does, because otherwise you'd conclude that technology never changes
quickly. My point was that while it's true we're probably reaching the point
of diminishing returns for etched-silicon circuits, that's not the end of the
line. I'd be surprised if we didn't have ubiquitous quantum computers in
thirty years or so, unless something even better came along.

------
wolfram74
While we're reaching the physical limits of classical chip design, do we have
any idea what the limits are on the algorithm side of things? According to a
few reports, as much speedup has come from software as from hardware.

[http://www.johndcook.com/blog/2015/12/08/algorithms-vs-moores-law/](http://www.johndcook.com/blog/2015/12/08/algorithms-vs-moores-law/)
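
As a concrete (if trivial) example of what a software-side speedup looks like
(my own illustration, not taken from the linked post), here is the same
quantity computed with a quadratic algorithm and with a linear-time one:

    import timeit

    # Sum of all pairwise products of a list, computed two ways.
    def pairwise_products_naive(xs):
        # O(n^2): multiply every ordered pair explicitly.
        return sum(x * y for x in xs for y in xs)

    def pairwise_products_fast(xs):
        # O(n): uses the identity sum_i sum_j x_i*x_j = (sum x)^2.
        s = sum(xs)
        return s * s

    xs = list(range(2000))
    assert pairwise_products_naive(xs) == pairwise_products_fast(xs)
    print(timeit.timeit(lambda: pairwise_products_naive(xs), number=3))
    print(timeit.timeit(lambda: pairwise_products_fast(xs), number=3))

Same answer, orders-of-magnitude difference in work, no new hardware required.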

~~~
ars
> do we have any ideas what the limits are on the algorithm side of things?

I've always believed that humans do not have the ability to program things
smarter than themselves: because we do not understand our own intelligence, we
have no way to reproduce it.

At the time, I said the only alternative I could think of was to make random
permutations and pick the best one, and go from there. But I offered this as a
ridiculous suggestion that no one would actually do.

But then Google actually did exactly that with their Go machine!

So, that's what I think: The future of computing will be based on randomness
and the job of the programmer will be to guide it, but not program it
directly. (Can you imagine programming a webpage this way? Or writing a book
this way?)
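
As a toy illustration of the "make random permutations and pick the best one"
idea (my own sketch; it is not how AlphaGo actually works, which combines
neural networks with tree search), here is a random-mutation hill climb toward
a target string:

    import random
    import string

    TARGET = "hello world"
    ALPHABET = string.ascii_lowercase + " "

    def score(candidate):
        # Higher is better: number of positions matching the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # Randomly replace one character.
        i = random.randrange(len(candidate))
        chars = list(candidate)
        chars[i] = random.choice(ALPHABET)
        return "".join(chars)

    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    while score(best) < len(TARGET):
        candidate = mutate(best)
        if score(candidate) >= score(best):
            best = candidate

    print(best)  # eventually prints "hello world"

The "programmer" here only supplies the scoring function; the random search
does the rest, which is the sense in which you guide it rather than program it
directly.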

~~~
stochastician
(Totally self-promoting here) I did a lot of work along this line in my PhD --
stochastic architectures for probabilistic computation.
[http://ericjonas.com/pages/circuits.html](http://ericjonas.com/pages/circuits.html)
There's an increasing amount of interest in this space.

~~~
nickpsecurity
Your work is interesting. The concepts remind me of work on analog
implementations of neural circuitry. See, the brain is likely a general-
purpose, analog computer with digital-like parts. So analog implementations
were thought to offer improvements, and they did, with one wafer-scale method
I didn't see coming.

[http://yann.lecun.com/exdb/publis/pdf/boser-92.pdf](http://yann.lecun.com/exdb/publis/pdf/boser-92.pdf)

[http://www.kip.uni-heidelberg.de/Veroeffentlichungen/download.cgi/4713/ps/1856.pdf](http://www.kip.uni-heidelberg.de/Veroeffentlichungen/download.cgi/4713/ps/1856.pdf)

It would be interesting to see someone combine the principles of your work
with analog implementations on a decent process node. Yours is kind of like a
hybrid between properties of analog and digital cells. The real thing might be
even more effective albeit harder to automate. There's some analog EDA but
it's almost always custom work.

------
dev1n
I like PG's idea [1] of writing a compiler that can take ordinary code and run
it across multiple cores, letting you treat the cores as if they were wired in
series rather than in parallel (think batteries).

[1]:
[http://paulgraham.com/ambitious.html](http://paulgraham.com/ambitious.html)
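
pg's point is that the compiler should do this for you automatically. As a
rough sketch of what programmers currently do by hand (my own illustration,
not pg's proposal), here is the explicit version using Python's
multiprocessing module:

    from multiprocessing import Pool

    def work(n):
        # A CPU-bound toy task; an automatically parallelizing compiler
        # would ideally spread work like this across cores for us.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [10000000] * 8
        with Pool() as pool:      # defaults to one worker per CPU core
            results = pool.map(work, inputs)
        print(sum(results))

A sufficiently smart compiler would, in effect, generate something like this
from ordinary sequential code.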

~~~
seiji
That's the holy grail of The Cloud: just write a description of what you want
and what you want happens. DWIM programmatic casting.

I think pg originated his "sufficiently smart compiler" startup idea in his
PyCon talk. You can find it online somewhere. The other takeaway from his talk
was: just lie to customers about it being automated, manually farm out the
parallelize-all-the-code tasks to workers/interns/turks while saying it's
"automatic," then eventually figure out how to automate it yourself later so
you don't need pesky humans in the loop.

~~~
vkou
I hope he was being facetious... That exposes your customer base to an
_enormous_ amount of risk.

~~~
seiji
Not really facetious. The whole "do things that don't scale (grit! be
resilient! schlep!)" ethos is pretty central to the modern cult-of-pg startup
dogma.

------
danjoc
While "cloud" may be a big part of the future of computing, I think the
relationship painted between that and the end of Moore's law is tenuous at
best.

I believe a more plausible link exists between the end of Moore's law and the
rise of open hardware as explained in this article:

[http://www.eetimes.com/document.asp?doc_id=1321796](http://www.eetimes.com/document.asp?doc_id=1321796)

TL;DR

If eight-year-old hardware is almost as fast as today's hardware, there is
ample time to reverse-engineer competitive open hardware.

~~~
bcrack
The article makes a very interesting point, and there might definitely be an
opportunity for more competitive open hardware. At the same time, it feels
kind of sad that it would take a technical constraint, rather than a change in
culture, for this to happen.

------
kumarski
Limitless exponential growth doesn't happen in the physical world; growth
happens on S-curves.

Black phosphorus anyone?

~~~
selimthegrim
Care to elaborate?

~~~
chipsy
The logistic function (or "S-curve") describes systems that grow roughly
exponentially at first and then slow down and saturate. With respect to new
technologies, it's been observed that adoption rates and most measurable
improvements follow a logistic function.
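For reference, in the standard parameterization the logistic function is

    f(t) = \frac{L}{1 + e^{-k(t - t_0)}}

where L is the ceiling (carrying capacity), k the growth rate, and t_0 the
midpoint: well before t_0 the curve grows roughly like an exponential, and
well after t_0 it flattens out toward L.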

For example, people did not go from buying 1 car to buying 10 cars and then
100 cars - most of us hit saturation somewhere between 1 and 2, and stayed
there. Similarly, the average speed of our cars did not increase at an
increasing rate except in the earliest years of automotive technology; we have
a "highway maximum" between 50 and 70 MPH, and a "technical maximum" in the
high 200's range for production cars which do not rely on rockets or other not
street-legal tricks (a quick search brings up the Hennessey Venom GT at 270.49
MPH). Likewise applied to fast moving vehicles as a whole, including
supersonic aircraft and rockets, we've already brushed up against vehicle
weight and power density limits that slow the rate of improvement in
acceleration.

Applied to Moore's Law measures, that suggests an "endless sunset" period
ahead of us where we'll still get more doublings of semiconductor density, but
they'll come increasingly slowly as more fundamental innovations become
necessary to realize them.

What we don't know is whether there is a logistic curve on technology as a
whole. Belief in the Singularity is premised on this not being the case.

~~~
daveguy
I am of the strong opinion that most technological advancement that is
mistaken for endless exponential improvement (e.g. Moore's law) actually
follows a physical sigmoidal curve. I also agree that "the singularity" is a
belief-based proposition, and I personally think it amounts to techno-woo.

However, there is one interesting argument for the continuation of
technological advancement beyond what we might call "singularity" levels
today: if technology maintains an exponential trajectory for another century
or so, through a few more breakthroughs, then we would have some amazing tech.

So tech advancement need not be exponential forever -- as long as we are still
early enough in the sigmoidal curve that more exponential (and linear)
advancement is still to come. With biotech, quantum computing, and the
algorithmic side of AI, I think we still have quite a bit of advancing to do.
That said, the singularity stuff is still ridiculous woo-woo.

------
poseid
Moore's Law also enables a new kind of hardware (open-source) and the internet
of things:
[http://thinkingonthinking.com/hardware-cost-reductions/](http://thinkingonthinking.com/hardware-cost-reductions/)

------
graycat
Research by K. Ebcioglu on _very long instruction word_ (VLIW) architectures
shows that 24-way VLIW can get a 9:1 speedup on ordinary code.

~~~
kevinnk
Compared to what? What does ordinary code mean?

VLIW has been around for a while (Itanium is probably the most famous "general
purpose" example) and has failed to gain traction outside of GPUs and DSP (ie
not "ordinary code").

~~~
graycat
> Compared to what?

A single core processor, of course.

> What does ordinary code mean?

Say, a simple random sample of all the other code being run.

> VLIW has been around for a while

Yup. I never claimed otherwise.

> Itanium is probably the most famous "general purpose" example

Yup.

> and has failed to gain traction outside of GPUs and DSP (ie not "ordinary
> code").

Yup.

Still, yet again, over again, one more time, once again, VLIW can get 9:1
speedup.

Why mention this? Because the OP was talking about the challenges of getting
faster computing. Well, if you want faster computing, one approach that,
indeed, works on general-purpose code and gives a ballpark 9:1 speedup is, and
may I have the envelope please [drum roll, please], VLIW. Really new? Nope.
Tried with Itanium? Yup. Works? Yup.

Bottom line -- we still have 9:1 speedup available to us.

Maybe an objection is that you pay a factor of 24 in transistor count and
electrical power but get only a factor of 9 in performance.

Not everyone knows about VLIW.

Where did I say something wrong?

~~~
TheOtherHobbes
Citing a paper from the mid 90s isn't a credible answer in 2016.

All VLIW does is move a lot of on-chip logic off-chip into the compiler. This
only works for a small set of computing tasks - which is why the closest thing
we have to VLIW today lives in GPUs. And why Itanium was nicknamed Itanic.

It's a non-starter for general computing because as soon as you start dealing
with real-time conditions that the compiler can't optimise for in advance, the
speed advantage turns into a speed penalty.

~~~
over
> Citing a paper from the mid 90s isn't a credible answer in 2016.

I think it's unfair to generalize from VLIW to everything published 20+ years
ago. Plenty of the things discovered back then, or earlier, are still
applicable and in production today. Your compiler picked most of its low
hanging fruit ages ago. Lots of PhD theses are rehashing old ideas, often
unknowingly. Results are results; what matters more is whether there's a
relevant context, as you've highlighted.

------
sounds
The article is clickbait. Moore's law may slow down, but its end will not be
predicted by the Economist.

For example, a quote from the article:

"Moore’s law was never a physical law, but a self-fulfilling prophecy—a
triumph of central planning"

The physics and triumphs of engineering were all about physical law; "The end
of Moore's law" was always just around the corner because of physics.

No central planning led Intel to invest their billions in R&D.

Central planning can neither force nor halt additional refinements in
transistor density or alternate ways to compute.

~~~
Retric
Moore's law has been slowing down for a long time. In 1965 he wrote a paper
predicting a doubling every year. In 1975 that was revised to a doubling
roughly every two years, with 18 months (as suggested by someone else) being
the accepted target for quite a while.

Intel failed to keep up with the two-year cadence back in 2012.

 _In 2015 Brian Krzanich, CEO of Intel, announced that "our cadence today is
closer to two and a half years than two.” This is scheduled to hold through
the 10 nm width in late 2017._
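
To put the cadence in perspective, here is a rough back-of-the-envelope
comparison (my own illustration) of how much a fixed doubling period compounds
over a decade:

    # Growth factor after a span of years, given a fixed doubling period.
    def growth(years, doubling_period):
        return 2 ** (years / doubling_period)

    for period in (1.0, 2.0, 2.5):
        factor = growth(10, period)
        print("doubling every {} years -> {:g}x after 10 years".format(period, factor))

    # doubling every 1.0 years -> 1024x after 10 years
    # doubling every 2.0 years -> 32x after 10 years
    # doubling every 2.5 years -> 16x after 10 years

Going from a one-year to a two-and-a-half-year cadence is the difference
between roughly three orders of magnitude per decade and barely more than one.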

