
Moore’s Law is not Dead - ricw
https://www.tsmc.com/english/newsEvents/blog_article_20190814.htm
======
tomxor
I've seen this interpretation before in HN comments and I think it is an
unnecessary distortion of Moore's Law:

Note the historical context in which the observation was originally made (1965),
when doubling was achieved through process size reduction. The effect this had
on speed was twofold: first, operating frequency would increase roughly in
proportion (about 40% per shrink); second, and more uniquely to the time, since
predecessors' transistor counts were so limited, there was significant room for
improvements in instruction implementation and specialization as available
transistors increased.

Although it's implied, Moore also explicitly stated this in his 1965 paper:

> [...] In fact, shrinking dimensions on an integrated structure makes it
> possible to operate the structure at higher speed for the same power per
> unit area. [1]

This effect was later formalized as Dennard scaling in 1974. [2]

Transistor count increases in recent years have had very little to do with
Dennard scaling or improving individual instruction performance, and everything
to do with improving some form of parallel compute by figuring out how to fit,
route and schedule more transistors at the same process size, which does not
have the same _effect_ Moore was originally alluding to.
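
To make the distinction concrete, here is a minimal sketch of the
constant-field scaling relations (the k = 1.4 linear shrink per generation is
the textbook assumption, not a figure from Moore's paper):

    # Classic constant-field (Dennard) scaling: shrink all linear
    # dimensions by k (~1.4x per generation, i.e. sqrt(2)).
    k = 1.4  # assumed linear shrink factor per generation

    density   = k ** 2   # ~2x transistors per unit area
    frequency = k        # ~1.4x clock -- the ~40% mentioned above
    # Dynamic power per device: P ~ C * V^2 * f, with C and V both ~ 1/k
    power_per_device = (1 / k) * (1 / k) ** 2 * k  # = 1 / k^2
    power_per_area   = power_per_device * density  # = 1.0, i.e. constant

    print(round(density, 2), frequency, round(power_per_area, 2))  # 1.96 1.4 1.0

That constant power-per-area term is exactly what broke down in the mid-2000s.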

[1]
[https://drive.google.com/file/d/0By83v5TWkGjvQkpBcXJKT1I1TTA...](https://drive.google.com/file/d/0By83v5TWkGjvQkpBcXJKT1I1TTA/view)

[2]
[https://en.wikipedia.org/wiki/Dennard_scaling](https://en.wikipedia.org/wiki/Dennard_scaling)

~~~
kurthr
I'll link to the wikipedia graphic of Moore's Law:
[https://upload.wikimedia.org/wikipedia/commons/8/8b/Moore%27...](https://upload.wikimedia.org/wikipedia/commons/8/8b/Moore%27s_Law_Transistor_Count_1971-2018.png)

It makes the point that Moore's law is dead. You can see the slope change in
2006 with the Dual-Core Itanium 2. So it died after Dennard scaling collapsed
(and after some of the fastest growth ever), and although transistor count
continues to increase, it does so noticeably more slowly... I fear it may slow
further still, since costs continue to increase but profits don't (as fast).
When the CFO figures out it's not worth investing in scaling, you just won't
build new fabs (unless the government says so).

To put some numbers to this (topline growth):

1971 TMS1000 (8k) to 1979 M68000 (70k): 2.6 yr/double

1979 M68000 (70k) to 2003 Itanium 2 Madison 6M (400M): 1.9 yr/double

2003 Itanium 2 Madison 6M (400M) to 2006 Dual-Core Itanium 2 (1.6B): 1.5 yr/double

2006 Dual-Core Itanium 2 (1.6B) to 2018 GC2 IPU (24B): 3.1 yr/double
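
Those figures fall out of a one-line formula; a quick sketch using the counts
above as inputs:

    from math import log2

    def yr_per_double(y0, n0, y1, n1):
        # Years per transistor-count doubling between two chips.
        return (y1 - y0) / log2(n1 / n0)

    print(yr_per_double(1971, 8e3,   1979, 70e3))   # ~2.6
    print(yr_per_double(1979, 70e3,  2003, 400e6))  # ~1.9
    print(yr_per_double(2003, 400e6, 2006, 1.6e9))  # 1.5
    print(yr_per_double(2006, 1.6e9, 2018, 24e9))   # ~3.1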

Personally, I think an era of backend improvements (wafer stacking etc.) that
combine processes (memory/logic/flash) into locally interconnected 2.5D
packages is possible, if there remains sufficient investment, and that in turn
could drive some improvements to performance and pricing. However, that relies
on ever more complex system design and integration... and brings heat
dissipation issues. I don't see full 3D scaling or quantum compute on the 3-5
year horizon where big new investments will be required.

(edit: formatting, commentary, and qualification)

~~~
Veedrac
That looks like a pretty straight line to me, TBH. You're doing more
subdivision than is fair to the data. I did my own graph of just density,
which is also pretty clearly straight.

[https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrh...](https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrhppW7PT6GCfnrVGhxhLA5PVw/edit#gid=1105655921)

~~~
kurthr
Yours is noticeably straighter than the Wikipedia page. I see that you are
plotting density and not #transistors?

Is that what Moore's law says? Or does die size count? If you plot total
#transistors, your fitted slope will plummet in the later years and you'll see
the change post-2006.
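
To illustrate why this matters, a toy sketch with made-up growth figures: total
count is density times die area, so a stalled area bends the total-count line
even while the density line stays straight.

    # Total transistors = density * die area (12-year window, made-up numbers).
    density_growth = 2 ** (12 / 2)  # density doubling every ~2 years -> x64
    area_growth    = 1.5            # assumed near-stalled die-size growth
    print(density_growth * area_growth)  # x96 in total count...
    # ...versus the x256 that a 1.5 yr/double total-count trend would give.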

~~~
perl4ever
The Wikipedia chart looks pretty straight to me. What I think you are calling
"a change in slope" looks to me more like a period in which there were a bunch
of outliers, and those outliers formed a kind of hump.

------
petra
Instead of an ad, here's a good description of the situation:
[https://semiengineering.com/why-scaling-must-continue/](https://semiengineering.com/why-scaling-must-continue/)

TL;DR: the old Moore's law is dead, but our new systems (AI, GPUs...) still
need a huge number of transistors, beyond what can be made on a single chip
today, so scaling is still valuable.

~~~
velox_io
I thought about upgrading my GPU recently (an RX 480 8GB from 2016), and I was
disappointed to find that even the RX 5700 XT is only double the performance of
an RX 580/RX 480 (they're essentially the same GPU) [0]. It also costs double,
and this is THREE years later. (It turns out the RX 500 series has hardly
dropped in price since release, ~£200.)

So while Moore's law is still ticking along (albeit a little slower), and
transistors are still shrinking (7nm vs 14nm), it doesn't really affect the
price: instead of being able to buy 4x the performance (transistors) for a
similar price, nothing has changed. It seems pricing is no longer based on
cost; it appears to be based on what the market will pay.

Strangely, it doesn't seem to be the same with CPUs. Over the same period, AMD
doubled core/transistor count between Zen 1 and Zen 2 while staying at the same
price [3].
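
To put the GPU complaint in one number, a back-of-envelope sketch using the
rough prices and performance ratios above (all of them assumptions):

    # Perf per pound: RX 480 (2016) ~£200 at baseline perf,
    # RX 5700 XT (2019) ~double the price at ~double the perf.
    perf_per_pound_2016 = 1.0 / 200
    perf_per_pound_2019 = 2.0 / 400
    print(perf_per_pound_2019 / perf_per_pound_2016)  # 1.0 -> zero gain in 3 years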

[0][https://www.guru3d.com/articles_pages/msi_radeon_rx_5700_xt_...](https://www.guru3d.com/articles_pages/msi_radeon_rx_5700_xt_evoke_review,4.html)
[1][https://en.wikipedia.org/wiki/List_of_AMD_graphics_processin...](https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Radeon_RX_400_Series)
[3][https://en.wikipedia.org/wiki/Ryzen](https://en.wikipedia.org/wiki/Ryzen)

~~~
wyxuan
This is a little misguided. The only reason AMD is so cheap on the consumer
side is that they genuinely want to take a crack at Intel's market share. With
GPUs, their only competitor is Nvidia, and as long as they are cheaper than
Nvidia at reasonable performance, they are fine. GPUs are extremely profitable,
while CPUs aren't as profitable (yet).

~~~
velox_io
I have a feeling AMD wants some of Nvidia's market share too. In fact, the
reason Nvidia recently released all those 'Super' cards was to counter AMD's
release; it's also rumoured that AMD had to reduce their prices as a result.

In many ways, Nvidia is acting like Intel did. AMD simply couldn't compete on
performance, and this allowed the incumbents to get cosy (Intel's high-end for
consumers stayed at 4 cores for a decade; Nvidia's prices have been steadily
rising for a while).

Also, a lot has happened in the last few years: RAM prices skyrocketed, and
Bitcoin mining drove up demand for AMD GPUs. However, both of those have been
over for a while now.

The RX 480 was priced very aggressively; the $199 price grabbed some headlines
(the 8GB card cost a little more). Given the die shrink and three years (plus
Ryzen's core counts doubling), you'd be forgiven for expecting the RX 5700 to
come in closer to $199 than to its actual $349 (the XT is $399), keeping in
mind that these new cards aren't much over double the performance.

It's no secret that shrinking fabrication is getting harder and more expensive
(it's hurting Intel right now). Still, price per transistor has dropped
noticeably for CPUs and SSDs, and even for DRAM to an extent, but not for
GPUs...

Even demand for machine learning doesn't explain the price hike (GPUs are now
being artificially handicapped, just as professional graphics have been,
shifting demand to the even more expensive Tesla cards).

------
dahart
In the Wikipedia article on Moore’s Law, Gordon Moore himself is quoted in 2015
as saying that the transistor-count interpretation of Moore’s law will be dead
within 10 years.

If something has changed and it’s not dead, I’d love to hear more about the new
processes that are making more transistors possible. I work at a chip maker
now, but I’m a software guy. My understanding of the problem is that we’ve
reached the optical limits of resolving power for the lithography processes.
Trace widths are a tiny fraction of the wavelength of light, and chip area is
so large we don’t have lenses that can keep the projection in focus at the
edges. While there is theoretical potential to get smaller, since gates are
still many atoms across, actually building smaller gates has real physical
barriers, or so I’m told.
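
For a sense of the optics, the standard Rayleigh criterion gives the minimum
printable half-pitch; the k1, NA, and wavelength figures below are typical
published values, not anything specific to TSMC:

    # Rayleigh criterion: resolution ~ k1 * wavelength / NA.
    # Sub-wavelength features come from pushing k1 and NA (immersion,
    # multi-patterning), or from shorter-wavelength EUV light.
    def min_half_pitch_nm(k1, wavelength_nm, na):
        return k1 * wavelength_nm / na

    print(min_half_pitch_nm(0.28, 193.0, 1.35))  # DUV immersion: ~40 nm
    print(min_half_pitch_nm(0.30, 13.5, 0.33))   # EUV: ~12 nm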

I’d love to hear more about the manufacturing processes in general, and more
specifically whether something really has changed this year. Does TSMC have a
new process allowing larger dies or smaller traces, or is this article mostly
hype?

~~~
AstralStorm
Well, electron-beam lithography (EBL) is possible, one step beyond EUV; it is
slow, and it is used to make masks today.

Extremely hard to make economical, though.

------
cbarrick
This article is complete fluff.

There's no _real_ discussion of Moore's law. No new revelations about chip
design. You say workloads need to exploit parallelism these days to see further
performance gains? No shit. Putting memory closer to the logic cores is a good
idea? Duh. Hell, the author makes the common mistake of conflating AI with ML,
because it's clearly illegal for a businessman in any industry not to buzz
about "AI".

> by Godfrey Cheng, Head of Global Marketing, TSMC

Yeah, this is fluff.

~~~
perl4ever
I agree. Seems like almost every article I see about an interesting topic
fails to address the core of whatever it's supposed to be covering. If Moore's
law does not address speed increase, that's fine; we've heard that before. If
this guy thinks that the shrinkage will continue, despite some people's
expectations, he should have said _something_ about how and why. Otherwise,
just don't publish the article because you have nothing to say.

It's like documentation these days that is all autogenerated boilerplate.

------
boyadjian
It depends on what you call Moore's Law. For me, Moore's Law was essentially
that the cost per transistor was divided by two every 18 months. Nowadays,
that's no longer true. But of course, it doesn't mean that progress has
stopped.

~~~
Fnoord
You wrote:

> [..] divided by two every 18 months [..]

Wikipedia wrote:

> [..] whose 1965 paper described a doubling every year in the number of
> components per integrated circuit [..]

Godfrey Cheng, Head of Global Marketing, TSMC wrote:

> [..] The number of transistors in an integrated device or chip doubles about
> every 2 years. [..]

(Even if we ignore the difference on the cost versus number of transistors or
components.)

12 months, 18 months, 24 months. _How long is it?_ I have no clue as to the
answer. It seems to me we first need to agree on the definition of a law before
we can discuss it.
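
Whichever it is, the period matters enormously once it compounds; a quick
sketch:

    # What each claimed doubling period compounds to over a decade.
    for months in (12, 18, 24):
        doublings = 120 / months
        print(f"{months} months: x{2 ** doublings:,.0f} per decade")
    # 12 months -> x1,024;  18 months -> x~102;  24 months -> x32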

~~~
boyadjian
18 months or 24 months, that is not very important. I was essentially talking
about the fact that the cost per transistor was steadily going down, which has
almost stopped now.

~~~
tim333
Is there any data showing the cost per transistor no longer going down? They
seem to be able to pack an awful lot onto chips these days, the current record
apparently being Samsung's 1TB memory chip with 2 trillion transistors.

~~~
mrep
I cannot find a source for this outside of HN, but I have read here that the
R&D costs to build each next generation of fabrication plants double per
iteration, which limits the total number of sustainable independent chipmakers.
Currently, only 3 are left (TSMC, Samsung and Intel), and they all might give
up within a few generations when it gets too expensive to scale down further.
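
If that rumored doubling held (it is unsourced, and the baseline below is made
up), the compounding would be brutal:

    # Fab cost if it doubled each generation from an assumed $10B baseline.
    cost_bn = 10  # made-up starting cost, $B
    for gen in range(1, 6):
        cost_bn *= 2
        print(f"+{gen} generation(s): ${cost_bn}B")
    # Five generations out, the bill is 32x today's.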

------
baybal2
About the industry at large: lots of changes ahead.

Take a look at this:
[https://www.hotchips.org/program/](https://www.hotchips.org/program/)

For the first time in a long while, they gave the entire first day to non-semi
companies: Amazon, Google, Microsoft.

Nobody could've imagined the industry turning this way a decade ago.

~~~
rrss
> entire first day to non-semi companies: Amazon, Google, Microsoft

They've given the entire first day to the semiconductor divisions of these
hulking behemoths.

------
ksec
It is not the density that I worry about. Judging from TSMC's investor notes
and technical presentations, they don't see any problem with 2nm or maybe even
1nm. 3nm is currently scheduled for 2022, and 2nm for 2024. So it isn't so much
about the technical feasibility as about achieving those nodes within budget.

The problem is that somewhere in the next 5 years we may see cost per
transistor stop decreasing, i.e. your 100mm2 die would be double the price of
the previous 100mm2 die, assuming the node scaling doubled density.
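
A toy calculation of that scenario (normalized, made-up numbers):

    # Density doubles, but so does the price of the same 100mm^2 die,
    # so cost per transistor stays flat.
    old_transistors, old_price = 1.0, 1.0  # normalized baseline (assumption)
    new_transistors, new_price = 2.0, 2.0  # 2x density at 2x die price
    print((new_price / new_transistors) / (old_price / old_transistors))  # 1.0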

At that point the cost of processors, whether CPU, GPU or other, becomes
expensive and the market contracts, which will slow the foundries' push for
leading nodes. We could see hyperscalers each designing their own processors to
save cost, and we'd be back to the mainframe era, where these hyperscalers have
their own DCs, CPUs, and software, serving you via the Internet.

~~~
sitkack
And they will get 7nm yields up and wafer costs down. It is ultimately about
the economic cost of computation, which is still dropping. Smaller node sizes
are not the only way to get more transistors and/or reduce total costs.

------
ffwd
Can someone here talk about what desktop CPUs might look like for the consumer
in 2029?

Like, I'm a gamer in 2029 and I'm looking for the equivalent of today's Intel
Core i7 or AMD Ryzen. How much faster will it be? How different will it be from
today? Etc.

~~~
SECProto
I'm a gamer in 2019 and I'm using an i5 2500k processor from 2011. It works
well enough for the games I run. I might upgrade if I scrounge up $700 for
cpu/mb/ram.

The point I'm making is that the processor doesn't matter much for gaming. An
older computer can easily outperform a new one if it has better components
where it matters for gaming and casual use (SSD and GPU).

~~~
ffwd
Yeah, good point, I shouldn't have mentioned gaming. I guess what I'm looking
for is the general desktop workstation situation 10-20 years down the line,
rather than gaming specifically. A workstation for visual arts, music,
streaming, Linux, programming, whatever; and especially whether there will be a
point in the 2020s or '30s where you'll have long pauses between CPU releases
if they don't improve as fast, etc.

~~~
xbmcuser
I think 10-20 years down the line will see us move back to dumb terminals with
servers over the internet. Microsoft has already moved to subscriptions for
Windows and Office (corporate use). All the major companies are looking at game
streaming (home use). Schools are moving towards Chromebooks (education). The
rest of the uses for PCs, like media consumption, have already moved to smart
devices and smartphones/tablets. So apart from enthusiasts with money to burn
on PCs, the actual use case for the PC will be gone, and hence the current form
of the PC will be gone.

~~~
AstralStorm
Well, when you can put computing power on par with a whole cloud in a single
box, which you can nowadays, the equation changes again.

The cloud is limited by Amdahl scaling like everything else, but distances and
latencies between nodes are higher than in a single box. So instead you should
think of time sharing, like in the ancient days of supercomputers. Most tasks
require neither the massive parallelism of a cloud or cluster nor the
computational resources of a supercomputer.
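
For reference, Amdahl's law is the bound in question; a minimal sketch (the
95%-parallel fraction is just an illustrative assumption):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
    # p = parallelizable fraction, n = number of workers.
    def speedup(p, n):
        return 1 / ((1 - p) + p / n)

    p = 0.95  # assumed 95% parallel workload
    for n in (8, 64, 1024):
        print(f"{n} workers: {speedup(p, n):.1f}x")
    # Caps near 1/(1-p) = 20x no matter how many nodes the cloud adds.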

If it's a question of ownership or price... why not make it free? (Ignore this
junk; make LibreOffice etc. better.)

Just think of it as a PC that gets smaller. You cannot quite go super small
unless you replace input and output devices. That will require more
understanding of our own sensorium.

------
doener
Previous discussion:
[https://news.ycombinator.com/item?id=20712889](https://news.ycombinator.com/item?id=20712889)

~~~
dahart
That would be more helpful if there were more than 3 opinion comments.

------
crb002
We have a long way to go in packing RAM next to the CPU to reduce
speed-of-light latency.
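
Some rough numbers on that (simple propagation arithmetic; the 3 GHz clock and
the distances are assumptions):

    # Light travels ~30 cm/ns in vacuum (roughly half that in typical
    # interconnect), so distance alone costs cycles.
    c_cm_per_ns = 30.0
    cycle_ns = 1 / 3.0  # assumed 3 GHz CPU -> ~0.33 ns per cycle
    for distance_cm in (1, 5, 10):  # on-package vs. across-the-board DIMM
        round_trip_ns = 2 * distance_cm / c_cm_per_ns
        print(f"{distance_cm} cm: >= {round_trip_ns / cycle_ns:.1f} cycles")
    # A 10 cm round trip is ~2 cycles at best, before any DRAM access
    # time -- hence stacking memory (HBM) right next to the die.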

