
The End of Moore’s Law - kristianc
https://rodneybrooks.com/the-end-of-moores-law/
======
megaman22
I dunno, it's very different than things were. I built my current desktop in
2011, and aside from an SSD and upgrading from a 560 to a 970 graphics card,
haven't made any changes. I'm running everything I want, multiple VMs all the
time, Visual Studio, games at max graphics. Ten years ago, using a six year
old machine meant you wouldn't have a snowball's chance in hell of running new
software, but this one is chugging along admirably, and probably will for the
foreseeable future. Unless everything moves to Electron, but if that happens,
it really is the end times.

~~~
runeks
> Ten years ago, using a six year old machine meant you wouldn't have a
> snowball's chance in hell of running new software [...]

Do you have some evidence for this claim? I'm not saying I disagree -- indeed,
I suspect it might be true -- but I'd appreciate some evidence to back it up.

~~~
electrichead
I don't know if that needs any evidence if the person was simply around back
then. Basically this is saying that 2001 hardware wouldn't be able to run the
new software in 2007, which, at least to my recollection, is absolutely true.
Not only is there 32-bit vs 64-bit, we also started to see multiple cores rather
than the standard single core of 2001.

~~~
ghostbrainalpha
I agree with you that this claim doesn't need evidence, but for anyone
interested:

Starcraft 2 came out in 2007. These are the original system requirements.
[https://www.lifewire.com/starcraft-ii-wings-of-liberty-requi...](https://www.lifewire.com/starcraft-ii-wings-of-liberty-requirements-813007)

I remember receiving the game as a gift and having to tell my grandma that my
single core PC wouldn't be able to play it. I think my PC was only 3 years old
at the time, not even 6.

~~~
Brockenstein
I think you're misremembering. Starcraft 2: Wings of Liberty was released in
2010. I played it on release on my Core 2 Quad/GeForce 8800 GTS machine that I
built in 2009.

Blizzard announced they were making SC2 in 2007.
[https://en.wikipedia.org/wiki/StarCraft_II:_Wings_of_Liberty...](https://en.wikipedia.org/wiki/StarCraft_II:_Wings_of_Liberty#Development)
There wasn't even a closed beta until early 2010.

Regardless, the minimum requirements you posted list single-core CPUs.

------
thechao
> Once you get down to 5nm features they are only about 20 silicon atoms wide.

This is a complete misunderstanding of modern processes.

It’s conflating _feature size_ with some measure like “wire width”. Feature
size is best thought of as the tightest corner radius you can cut in a piece
of wood. Actual logic (and wires) are far larger than this. There’s actually a
shocking amount of room “down there”, still. The problem is that lithography
probably won’t scale down after some magic number (say 2 or 3 nm), and we’ll
have to do something else clever. It’s been a while since I looked into this,
but IBM’s smallest AFM-created transistor is ~10 atoms, which is something
like 4 _orders of magnitude_ (or more!) smaller than, say, 10nm.

~~~
notheguyouthink
Really ignorant question, but does size reduction affect heat dissipation at
all?

~~~
tlb
Yes. Most of the power goes into charging and discharging the capacitance
inherent in each wire and transistor gate every time a signal changes. When
you shrink by 2x, the capacitance goes down by about 4x (because it's
proportional to area) and power goes down 4x also.

(It's not exactly square-law, because the sides of wires have capacitance too,
but at least at larger sizes it was close.)

See
[https://en.wikipedia.org/wiki/Dennard_scaling](https://en.wikipedia.org/wiki/Dennard_scaling)
for details.
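
A rough back-of-the-envelope sketch of that square-law intuition, assuming
dynamic switching power P = C * V^2 * f and (as a simplification) holding
voltage and frequency fixed; full Dennard scaling also lowers the voltage:

    # Dynamic power of a switched node: P = C * V^2 * f.
    # Following the area-proportional approximation above, a 2x linear shrink
    # cuts capacitance by ~4x, so at fixed V and f power drops by ~4x too.
    def dynamic_power(capacitance_f, voltage_v, frequency_hz):
        return capacitance_f * voltage_v ** 2 * frequency_hz

    C, V, f = 1e-15, 1.0, 3e9                 # 1 fF node, 1 V, 3 GHz (illustrative)
    p_before = dynamic_power(C, V, f)
    p_after = dynamic_power(C / 4, V, f)      # 2x shrink -> ~4x less capacitance
    print(p_before / p_after)                 # ~4.0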

------
waivek
One point that I wish the article had expanded on is the impact on software
development. In a recent speech, John Carmack talks of how the power of the
desktop will never reach mobile.

[https://m.youtube.com/watch?v=vlYL16-NaOw&t=14m20s](https://m.youtube.com/watch?v=vlYL16-NaOw&t=14m20s)

This is incredibly exciting. I see the end of Moore's law as the return of the
era of performance-oriented software development. The past decade was ruled by
software methodologies which optimised for developer time at the cost of
performance. Moore's law was the crutch on which such latency-ridden software
limped into userland.

With the stabilization of hardware performance, developers can once again put
dedicated time and effort into crafting performant software with the assurance
that hardware gains will not render their efforts useless in 2 years.

~~~
BoiledCabbage
> This is incredibly exciting. I see the end of Moore's law as the return of
> the era of performance-oriented software development.

And I see this as the exact opposite. Optimizing for developer time is one of
the largest productivity enhancements we have. A true/pure positive feedback
loop in technological progress. Optimizing for performance is getting little
real gain for large human cost.

The end game of optimizing for developer efficiency is that we can design and
implement "anything" we can think of in almost no time flat. The end game of
optimizing for performance is that it takes significantly longer to develop a
program just so it can run on the equivalent of hardware from 5 years ago.

An analogy: it's early in human writing. People have trouble expressing
average thoughts. We can either teach society to have a better grasp on
writing and expressing complex ideas precisely. Or we can teach people to
write really small because we're running out of paper.

The positive feedback loop of improved development is enormous.

~~~
waivek
If I may, a rebuttal.

The software methodology that has evolved over the last two decades has
focused on development time. As such, there has not been much focus/research
on performance-first methodologies. I see no reason that developer time cannot
be optimized in a performance-first methodology; it's just that a developer-time
oriented methodology has years of fine-tuning behind it.

An example of this is video game development. If you look at some talks given
by Mike Acton, you can see that it is possible to ship a game while optimizing
for developer time. A triple-A game is arguably one of the most intricate and
complex projects under the label of "Software Engineering". I see no reason
why we can't learn from such sub-fields and apply it to general software
development.

I also disagree with you saying that "Optimizing for performance is getting
little real gain for large human cost". Not optimizing for performance has
led to the creation of unresponsive text editors, slow word processors and a
plethora of other latency-ridden software. This in turn affects the user and
unfortunately alters their expectations for software.

There is no technological reason why Visual Studio 2017 should not run on a 5
year old laptop, yet that is the state we are in.

~~~
BoiledCabbage
> If I may, a rebuttal.

Of course - when it's performing well, it's what this site shines at.

> I see no reason that developer time cannot be optimized in a performance-
> first methodology

Well, clearly it can, but it won't be nearly as much of a priority, just like
performance isn't as much of a priority when focusing on developer time. It's
not clear what you're specifically rebutting in this section. We're both
agreeing that developer productivity will improve significantly less in this
model.

An unresponsive text editor gives at most, what, a 3x slowdown in performance.
Very little of a programmer's time is actually inputting text or data. Now
let's take the other extreme to show a point: building a distributed,
transactional, relational, data-driven application from scratch (because our
tools wouldn't exist, since people would have been busy optimizing for
late-90s hardware) would be a 1,000x loss of developer productivity.

The scaling factor of productivity just utterly dwarfs human interface
performance.

> There is no technological reason why Visual Studio 2017 should not run on a
> 5 year old laptop, yet that is the state we are in.

True, but almost everyone would rather have a VS 2017 that doesn't run on a 5
year old laptop than have the best available product right now be Visual Studio
2005-era functionality, highly optimized so it can still run on a 5 year old laptop.

~~~
waivek
If text editors slowed down by 3x, I'd be happy. The latency I am complaining
about is orders of magnitude higher than 3x. This is the most rigorous
benchmarking that I am aware of:

[https://pavelfatin.com/typing-with-pleasure/](https://pavelfatin.com/typing-with-pleasure/)

Each of those tables shows that the difference in performance is not 3x but at
least an order of magnitude, if not more. I think this is where our
disagreement stems from.

Your next point is valid; it would not make sense to develop such an
application from scratch. However, I'm not talking about rebuilding from
scratch, merely prioritizing performance. This can be done by learning from
Data-Oriented Design, where developers give a lot of importance to
performance-critical issues such as cache locality. This can be done without
rewriting our tools from scratch; it just means that the developer will have
to understand how memory works. It's not a tools issue, it's a knowledge issue.
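
As a toy sketch of the structure-of-arrays layout that Data-Oriented Design
favours (hypothetical particle data; in CPython the win comes largely from
avoiding per-object indirection via numpy, but the contiguous-memory idea is
the same cache-locality argument):

    import numpy as np

    N = 1_000_000

    # "Array of structures": one Python object per particle, fields scattered
    # around the heap, so iterating chases a pointer per particle.
    particles_aos = [{"x": float(i), "y": 0.0, "mass": 1.0} for i in range(N)]
    total_x_aos = sum(p["x"] for p in particles_aos)

    # "Structure of arrays": each field is one contiguous block, which is what
    # caches (and SIMD units) like to stream through.
    xs = np.arange(N, dtype=np.float64)
    ys = np.zeros(N)
    masses = np.ones(N)
    total_x_soa = xs.sum()

    assert total_x_aos == total_x_soa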

Your last point on Visual Studio makes me want to re-iterate the core of my
argument: Performance and developer time optimizations need not be mutually
exclusive.

~~~
BoiledCabbage
> Performance and developer time optimizations need not be mutually exclusive

Completely agreed. But since they're not mutually exclusive, why aren't you
happy with the status quo? As stated, we already can work on some perf
improvements while focusing on developer productivity. I believe the reason
you aren't is that, logically, you'd rather the balance be shifted more towards
performance. Which makes sense, but it is illustrative of why saying "they
aren't mutually exclusive" isn't a counterpoint. We're both discussing where
the focus should be - not that the only options are all or nothing. If
non-mutual-exclusivity were the whole answer, you wouldn't be happy to see
this change and a shift towards performance.

> Each of those tables show that the difference in performance is not 3x but
> at least an order of magnitude, if not more. I think this is where our
> disagreement stems from.

Agreed. But taking Sublime Text as an example, I see the fastest IDE/editor is
roughly 7x faster across various tests, and the slowest IDE/editor is roughly 7x
slower. Even combining both, we see at most a 50x change in technical measures,
not in human productivity. The majority of the time people are sitting
waiting. I think most people would argue that it doesn't take a user 50x as long
to write a program in Eclipse/IDEA as it does in GVim. So that technical
slowdown, while annoying, has a significantly lower impact on user performance.

And my argument isn't that users shouldn't have to know about performance, or
that performance is bad; it's that resources spent on performance are resources
not spent on productivity.

Here is an alternate view: Writing most software improves developer
performance (text editors, communication tools, file system drivers,
languages, compilers...). That better software ecosystem allowed developers to
then write more / better software.

Over some timescale we can approximate this as P(t2) = P(t1) * e^(r*t), where
P(x) is developer productivity at time x, t is a timespan, and r is the
"rate of return", or productivity improvement, from writing software. Saying a
developer now needs to spend 30% of their time on performance means we now
have P(t2) = P(t1) * e^(0.7*r*t). Due to exponential growth this means
significantly less productivity over the long term. This is no different from
annually getting 20% returns versus 13% returns in the stock market. It sounds
like a little, but over a long time scale (or a high scaling rate) it adds up
significantly, meaning that right now in 2017 we'd be using decade-old or older
technology (if the peak-performance slowdown had hit in some year in the past).
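
To make that compounding concrete (illustrative numbers only; the rate and the
timespan are made up for the sake of the arithmetic):

    import math

    # Compounding developer productivity: P(t) = P(0) * e^(r * t).
    # Diverting 30% of effort to performance scales the exponent by 0.7.
    r, years = 0.20, 20                          # hypothetical 20%/year improvement
    full_focus = math.exp(r * years)             # ~55x productivity after 20 years
    with_perf_tax = math.exp(0.7 * r * years)    # ~16x productivity after 20 years
    print(full_focus, with_perf_tax, full_focus / with_perf_tax)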

Overall I appreciate the debate and enjoy hearing different opinions. In this
one, I feel we may just have different values here in the software space. I
feel that in an ideal world a developer would spend zero time on performance. Since
we don't live in an ideal world we have to spend some, but every bit of time
spent on performance is time not spent on building a product. So we should
want to minimize the amount of engineering time focused on performance. My take
is that you feel differently.

~~~
waivek
I think your last sentence sums it up perfectly. It does seem that we have
different views on software. Rebutting your points would just rehash my above
comment.

The only new point I'd like to add is one that requires imagination. Imagine
if we had adopted a performance-oriented mindset 20 years ago instead of today,
and had solved the developer productivity conundrum that we both care about so
much. What a wonderful state software would be in, where IDEs can be
installed in seconds, each action is instant and the engineer is completely
intimate with every aspect of their baby. This depth of knowledge offers a
certain creativity that results in amazing things. It lets engineers rise
above what is good for the business and gives them the freedom to push
software to its utter limit. That's evolution. That's progress. That's
engineering.

PS: The reason I quoted John Carmack specifically in my first comment is that
he is such an engineer.

------
larkeith
Oh goody, it's time for the annual 'End of Moore's Law' article.

[http://www.sciencedirect.com/science/article/pii/S0375960102...](http://www.sciencedirect.com/science/article/pii/S0375960102013658)

~~~
tzahola
Reminds me of IPv4 address space exhaustion articles of the past.

~~~
PakG1
That's gotta finally happen SOME day, no?? I speak seriously.... Of course, by
the time it happens, I wouldn't be surprised if we come up with a solution
that people like better than IPv6?

~~~
notheguyouthink
Out of curiosity, what do people dislike about IPv6? I know next to nothing
about networking, but all I've _observed from people_ against IPv6 is that
it looks weird. Its downfall seems to be that we're so embedded in IPv4.

So ignoring IPv4, what is the problem with IPv6?

I was hoping we were still slowly making the switch, in the same way that the
modern web took ages but eventually moved to updating browsers more than once a
decade, so that now it's reasonable to use modern JS features. I didn't give up
hope on the "modern" web features... should I give up hope on IPv6?

~~~
pjc50
It's above most people's memorisation threshold, more than just looking weird.
But the real limitation seems to be that nobody wants to make a change which
has no benefit for them - and _so long as you can still access all you need_,
there's no benefit to switching over to IPv6.

And ISPs are holding us up too.

------
zwischenzug
I recently wrote this article on a book from 1965 called 'Electronic
Computers':

[https://zwischenzugs.com/2017/11/11/towards-a-national-compu...](https://zwischenzugs.com/2017/11/11/towards-a-national-computer-grid-electronic-computers-1965/)

In there is an image from 1965 of a graph similar to Moore's Law:

[https://zwischenzugs.files.wordpress.com/2017/11/20171105_11...](https://zwischenzugs.files.wordpress.com/2017/11/20171105_1137192-e1510393753138.jpg?w=840)

I'd be interested to know if anyone has the first edition of this book, or
whether there were other similar graphs floating around at this time.

Note that the image also shows a storage graph alongside computing power. Does
anyone know if this graph was continued, or extended into a 'law' also?

~~~
BoiledCabbage
That's roughly the same rate as Moore's law. 100x every 10 years.

------
scarface74
It may not be the end of Moore's Law, but the increase in resources needed to
run modern software has definitely slowed down.

In 2017, my mom is using my old 2006-era Core Duo 1.66GHz Mac Mini running
Windows 7 with only 1.5GB of RAM. She mostly uses it for tutoring when she
doesn't want her students on her main computer. It still runs Chrome and
Office surprisingly well. It has plenty of USB 2 ports, Bluetooth and Gigabit
Ethernet.

I can't imagine getting any use in 2006 out of the computer I used in 1995 - a
PowerMac 6100/60 with a 60MHz PPC 601 processor and a 66MHz 486DX2 DOS
Compatibility Card.

My Plex server is a 2008-era Core 2 Duo 2.66GHz Dell laptop running Windows 10
with 4GB of RAM.

------
juanmirocks
The end of Moore's Law doesn't necessarily have to imply a deceleration in
technological evolution, as Ray Kurzweil and others have pointed out. In their
view, Moore's Law is just one single phase in the "law of accelerating
returns".

~~~
ChrisSD
Which is what this article also argues. As we can no longer rely on Moore's
exponential increases, we will start looking elsewhere to evolve technology. A
new "golden age", as he calls it.

Maybe that'll mean looking beyond von Neumann for general-purpose
architecture. Maybe it'll mean designing a lot more specialised architectures,
similar to how the GPU is specialised for graphics processing (and useful for
some other tasks) or how most mobile devices come with specialised processors.
Maybe it'll mean something completely different.

The future of computing is exciting again.

~~~
hyperpallium
Like peak oil. But there might be a (seemingly) interminable lull.

Meanwhile, I like the idea of physically massive GPUs if they can't get
smaller (also doing ML, CFD, etc.).

------
no_gravity
The author argues that the number of meaningful operations a device can do per
second will not keep growing exponentially. The reason he gives is that while
we can add more parts that work in parallel, this has limited benefit:

    The speed up starts to disappear as silicon is left idle
    because there just aren’t enough different things to do.

I think there will be enough 'things to do'. Looking at real world
applications of more processing power like AI and simulations, I expect them
to be perfectly parallelizable.

~~~
ianhowson
> I expect them to be perfectly parallelizable

Unfortunately, the real world does not meet your expectation. You're talking
about a class of problems called 'embarrassingly parallel':
[https://en.wikipedia.org/wiki/Embarrassingly_parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel)

Only a very tiny proportion of problems fit into that category. AI and
simulations tend to be more easily parallelizable, but CPU-CPU communication
is slow at any scale and imposes a limit on how much parallelism can be
exploited.

The original statement is accurate:

> The speed up starts to disappear as silicon is left idle because there just
> aren’t enough different things to do

Efficient parallel algorithms require a problem that can be divided into
smaller independent components and solved separately. Not every problem can be
divided in this way. Crypto algorithms are the classic example; there's no way
to perform round N+1 without first performing round N, and this is very much
by design.
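
A minimal sketch of that serial-dependency point, using iterated SHA-256 as a
stand-in for "round N+1 needs round N" (hashing many independent blocks, by
contrast, parallelizes trivially):

    import hashlib

    # Inherently serial: each round's input is the previous round's output,
    # so extra silicon can't start round N+1 before round N has finished.
    def iterated_hash(seed: bytes, rounds: int) -> bytes:
        digest = seed
        for _ in range(rounds):
            digest = hashlib.sha256(digest).digest()
        return digest

    # Embarrassingly parallel by contrast: every block is independent, so this
    # map could be spread across as many cores as are available.
    def hash_blocks(blocks):
        return [hashlib.sha256(b).digest() for b in blocks]

    print(iterated_hash(b"seed", 1000).hex())
    print(len(hash_blocks([bytes([i]) for i in range(8)])))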

~~~
eleitl
> Only a very tiny proportion fits into that category

Actually, it is exactly the other way round: most problems in the real world
are about local communications of cells in a 3-dimensional lattice, including
relativistic limits on communication. So an ideal architecture for that is a
3D cellular automaton. With bigger cells, you've got nodes on a 3D mesh
(torus) - incidentally, the topology of most supercomputers.
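
A toy sketch of that local-communication structure: a nearest-neighbour stencil
update on a periodic lattice (2D here for brevity, and purely illustrative).
Because each cell only reads its immediate neighbours, the grid can be split
into chunks that exchange nothing but thin boundary layers:

    import numpy as np

    def step(grid):
        # New value of each cell depends only on itself and its 4 neighbours;
        # np.roll with wrap-around makes the lattice a torus.
        return 0.2 * (grid
                      + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
                      + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))

    grid = np.random.rand(128, 128)
    for _ in range(10):
        grid = step(grid)
    print(grid.mean())   # the averaging update preserves the mean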

~~~
wott
> _most problems in the real world are about local communications of cells in
> a 3 dimensional lattice._

Uh?

~~~
njarboe
Multi-cellular life, fluid dynamics, etc. The physical world generally.

------
qznc
Here is a graph from Wikipedia data until the AMD 8-core Ryzen:
[https://twitter.com/azwinkau/status/869461530324107264](https://twitter.com/azwinkau/status/869461530324107264)

As soon as a plot like this shows a "stop", lots of possibilities for research
(and funding thereof) will pop up. Physical restrictions predict this stop
within the next few years.

~~~
eleitl
Except that Intel's own data show that the constant doubling no longer
applies. I have no idea what he means by Wikipedia data. I see no source there.

~~~
wott
> _I have no idea what he means by Wikipedia data._

I imagine it is this:
[https://en.wikipedia.org/wiki/Transistor_count](https://en.wikipedia.org/wiki/Transistor_count)

------
perseusprime11
How should we think about the AI/ML stack moving onto GPUs and custom chips?
Maybe Moore's law as envisioned is ending, but it is probably changing in new
ways to help scale the jobs and tasks we have, based on the type of task.

------
amelius
If only Moore's law applied to clock speed instead of some useless measure
like number of transistors.

~~~
qznc
That is more like Dennard scaling [0], but that one stopped around 2001. You
could also watch Koomey's law [1].

[0]
[https://en.wikipedia.org/wiki/Dennard_scaling](https://en.wikipedia.org/wiki/Dennard_scaling)
[1]
[https://en.wikipedia.org/wiki/Koomey%27s_law](https://en.wikipedia.org/wiki/Koomey%27s_law)

~~~
Clubber
Moore's law is about the number of transistors.

_Moore's law is the observation that the number of transistors in a dense
integrated circuit doubles approximately every two years._

[https://en.wikipedia.org/wiki/Moore%27s_law](https://en.wikipedia.org/wiki/Moore%27s_law)

------
TrickyRick
Does anyone have a summary?

~~~
ci5er
Learn to read?

EDIT: I try not to denigrate the illiterate, but as the first commenter on
this thread (you), I've gotta go "gosh!" Please tell me how I am wrong. (It's
possible that I am both wrong and having a bad day.)

