
Another Step Toward the End of Moore’s Law - furcyd
https://spectrum.ieee.org/semiconductors/devices/another-step-toward-the-end-of-moores-law
======
est31
Moore's law itself may be dead, but there is still additional potential to
explore. Right now, CPUs are mostly two-dimensional, with only a few layers
stacked on top of each other compared to their extent in the other two
dimensions.

Compare this to a human brain, which also has a sulcated layering system for
the grey matter, but whose white-matter connectome turns it into a fully
three-dimensional object. We can't yet produce such a thing by technological
means.

The CPUs we produce have to be deployed in datacenters and computers with
some distance between them, mostly for heat-dissipation reasons. We can't just
stack N CPUs onto and next to each other until we reach an object the size of
a brain; it would not be practical to cool that thing. Compare this to human
brains, which require much less energy (about 20 watts on average).

Moore's law is dead, but it's about the size of transistors. Human axons have
diameters of hundreds of nanometers; even the synaptic cleft, the gap of
extracellular space at a synapse, is approximately 20 nanometers wide. Compare
this to the 5 nanometers we have now. We don't need size improvements to model
brains; we need improvements in power consumption, heat dissipation, and
electrical losses. CPUs are also still very expensive, so we need to
manufacture them more cheaply as well. Neither really requires node sizes to
shrink further.

Sadly, there was a related law, Dennard scaling, which was precisely about
power consumption, but it has been dead for about 13 years.
[https://en.wikipedia.org/wiki/Dennard_scaling](https://en.wikipedia.org/wiki/Dennard_scaling)
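
To make concrete what Dennard scaling promised, here's a small illustrative sketch (my own idealized model, not from the thread): shrink feature sizes and supply voltage by a factor k while raising clock frequency by k, and power density stays constant. That guarantee is what broke down.

```python
# Illustrative sketch of classic Dennard scaling (idealized model).
# Dynamic power per transistor is roughly C * V^2 * f.

def scaled_power_density(k):
    """Relative power density after scaling all dimensions by 1/k."""
    c = 1.0 / k   # capacitance shrinks with feature size
    v = 1.0 / k   # supply voltage shrinks with feature size
    f = k         # clock frequency rises
    power = c * v * v * f    # per-transistor power: ~1/k^2
    area = (1.0 / k) ** 2    # transistor area: ~1/k^2
    return power / area      # power density: constant

print(scaled_power_density(2.0))  # -> 1.0: twice the speed, same heat per mm^2
```

Once supply voltages could no longer be lowered with each node (leakage took over), frequency stopped scaling too, and power density started to climb instead.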

~~~
bognition
To be fair, growing a brain is also a VERY expensive process. To do it
properly takes decades and can cost a lot of money.

Luckily it looks like humanity has finally figured out how to do it at scale
/sarcasm/

~~~
rstuart4133
> To be fair growing a brain is also a VERY expensive process.

Nah, you are born with 3 times the number of synapses you will have when you
are an adult. The expensive part is training that network, not growing it.

------
noobiemcfoob
I, for one, welcome the end of our Moore's Law overlords.

This race to increase various aspects of performance has long let the fields
of computer architecture, and the software built on top of it, be lazy about
optimization. When there's no more juice to squeeze from that fruit, I know
there is much more to be found elsewhere.

/I can't wait for my non-von Neumann SoC/FPGA hybrid to use for... I'll figure
it out then, I'm sure.

~~~
StavrosK
Do you think that "FLOPS/OPS/whatever per year" will slow down or speed up now
that Moore's law is ending? I.e., do you think that the
hardware/architecture/software combo will accelerate more now, or less?

If you think it will accelerate more while hardware gains drop to zero, why
were we leaving these _absolutely massive_ gains on the table? This means
that, if a new processor gave us double the speed, we could get more than
double _just from software alone_ and get the hardware doubling _on top of
that_.

If you think it will decelerate, why are you happy Moore's Law ends? We're
worse off than we were before, even though we aren't "lazy" now.

This approach seems to me a bit like saying "Oh I'm happy I lost my job, now I
can finally make money by looking for pennies on the ground all day, which the
job made me too lazy to do".

~~~
UnFleshedOne
It is also possible we are in a local optimum and there are huge performance,
efficiency, or other kinds of gains available, but realizing them requires
investments in radically different architectures, and those are not economical
while existing approaches can still be improved.

~~~
noobiemcfoob
This is exactly my perspective. Existing ISAs have a de facto stranglehold on
the mindset around computer architecture because they could provide a
consistent foundation while the underlying implementation got better and
better faster than anyone could build something on top of it.

There have been plenty of small experiments in other architectures but never
enough of a mindshare investment to really evaluate viability and performance
of alternatives. There are costs to a more diversified architecture eco-
system, sure, but I fully believe we will find more value in that space than
we have so far taken advantage of.

ASIC implementation of a PLC, anyone?

------
femto113
"After all, there aren’t many numbers left between 5 and 0."

Math purists may roll their eyes at this statement, and clearly, if you are
obsessed with integers, you can just switch to picometers. But I wonder if it
is time to start counting atoms. The diameter of a silicon atom is about 0.21
nanometers, so a 5nm process is dealing with features only about 20-30 atoms
wide.

(edit: found more accurate diameter number)
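
The atom-counting arithmetic above is easy to sanity check (taking the ~0.21 nm diameter quoted in the comment as given, not as an authoritative value):

```python
# Rough atoms-across estimate for a 5 nm feature, using the diameter
# figure from the comment above (~0.21 nm per silicon atom).
feature_nm = 5.0
atom_diameter_nm = 0.21
atoms_across = feature_nm / atom_diameter_nm
print(round(atoms_across))  # -> 24, squarely inside the quoted 20-30 range
```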

~~~
tombert
Wow, this puts things in perspective for me. I knew 5nm was small but I didn't
really realize _how_ small until we started getting into "I-have-more-dollars-
in-my-wallet-than-this-thing-is-atoms-wide" territory.

~~~
tyleo
One thing I’m always amazed by - and it’s partly due to my lack of
understanding of physics at this scale - is that when a processor with
components this small is dropped a few feet, they aren’t reduced to
smithereens. It’s hard for me to contemplate that whatever is holding these
few atoms together is so resilient.

~~~
keldaris
If you find that amazing, contemplate for a moment how nuclear fission and
fusion work and the amounts of energy involved there. Apart from a few
comparatively exotic things (black holes, neutron stars, etc.), some of the
largest force densities that exist are all around us in atomic nuclei (of
course, those forces also decay very quickly with distance, but that's a
different matter).

------
chr1
What ways to get faster CPUs remain after this? Nanoscale vacuum-channel
transistors [1] seem promising, as they can work at terahertz frequencies, but
they do not look anywhere close to production.

[1] [https://en.m.wikipedia.org/wiki/Nanoscale_vacuum-channel_tra...](https://en.m.wikipedia.org/wiki/Nanoscale_vacuum-channel_transistor)

~~~
falcrist
I remember hearing about research into germanium as a replacement for
silicon, or compound III-V semiconductors [1].

Nanoscale devices seem like an interesting development, but all of these
technologies seem like they're a decade or more away from mainstream use.

[1]
[http://www.sandia.gov/%7Ejytsao/WCS.pdf](http://www.sandia.gov/%7Ejytsao/WCS.pdf)

~~~
holy_city
I think you may mean gallium based compounds, not germanium. Germanium
semiconductors are only good for sounding like Hendrix. The RF biz debated
sticking with Si or moving to "wide bandgap" materials like Gallium Arsenide
(GaAs) or Gallium Nitride (GaN), because the wider bandgap means the devices
have much higher breakdown voltages and better power density. That means you
can run them at a higher baseband and lower power without a die shrink.

It's my understanding they're important for 5G devices, and they've been in
the wild for a while. But the kinds of circuits they implement are things like
demodulators and filters, not so much something with the transistor density of
a microprocessor. I think the fab processes are well over a decade away from
reaching the feature sizes of contemporary Si, even if hypothetically they
could run at ridiculous clock rates and low TDP.

~~~
Junk_Collector
SiGe is a serious competitor to GaAs and InP. It doesn't typically have the
power density of Ga derived microcircuits, but has good channel noise
performance plus linearity and can operate at higher frequencies than GaAs as
long as you don't need high power density. GaN has amazing power
characteristics but has all kinds of issues with signal purity that are still
being worked out. Eventually, I believe GaN will eat everything's RF lunch
just out of sheer power efficiency gains with the last hold out being cell
tower base stations, where spectral purity trumps efficiency by a mile. InP
gets used into the 100GHz+ territory.

These processes are mostly larger like you say, but they are also used in high
end data converters (ADCs and DACs) and some processes are in the mid to low
10's of nanometers now. Which is almost exactly where silicon was a decade
ago.

Silicon, of course, isn't used because of its performance, but rather because
silicon is the easiest of the semiconductors to manufacture and get good
yields out of. It is also comparatively very low cost. These factors have led
to the lion's share of research dollars being spent there as well.

------
tntn
I think Moore's law has spoiled a generation on what technology can do. It is
almost ridiculous how much better lithography has gotten in the last few
decades, and IMO it has resulted in a lot of people thinking that exponentials
are a) common, and b) automatic. Neither is true.

There are few, if any, other exponentials in the history of humanity that
worked as consistently for as long with as massive of impacts on technology,
and TBH I'm doubtful that we will see anything like it again.

Moore's law worked because loads of engineers and scientists made it their
life's work, but everyone watching seems to think it just happened.

There was one magical exponential that changed the world, and it's coming to
an end.

There's certainly a lot that can be done in architecture and software, but
most applications probably aren't worth the effort.

------
trimbo
I recommend watching the Turing Award lecture by Patterson and Hennessy, which
covers this topic.

Spoiler: the way out is "Domain Specific Architectures".

[https://www.youtube.com/watch?v=3LVeEjsn8Ts](https://www.youtube.com/watch?v=3LVeEjsn8Ts)

~~~
tambourine_man
Which is sad, in a way. There was something amazing about a general purpose
computer that next generations may not be able to appreciate.

~~~
tempguy9999
General purpose chips aren't going to disappear, not even slightly. They're
just going to offload specific work to circuitry that it's best suited for.

I regret that we're about to hit the wall, but I have long thought that we're
going to start moving back to analogue and probabilistic algorithms, and in
one way, having half-width floats is a step in that direction. I expect it to
go further.

(All this is my opinion and may be wrong).
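
On the half-width floats point, here's a quick stdlib-only sketch (my own example, not from the thread) of how much precision 16-bit floats give up: values closer together than about one part in a thousand collapse to the same number near 1.0.

```python
import struct

def to_half(x):
    """Round-trip a Python float through IEEE 754 half precision ('e')."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_half(1.0004))  # -> 1.0: the difference is below half's resolution
print(to_half(1.001))   # -> 1.0009765625: snapped to the nearest half value
```

The spacing between half-precision values near 1.0 is 2^-10, roughly 0.001, which is tolerable for probabilistic or signal-like workloads but not for exact arithmetic.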

~~~
joquarky
> They're just going to offload specific work to circuitry that it's best
> suited for.

Like the Amiga? That actually worked out pretty well. I look forward to it.

~~~
tambourine_man
Poe’s law at work here for me

[https://en.m.wikipedia.org/wiki/Poe%27s_law](https://en.m.wikipedia.org/wiki/Poe%27s_law)

------
Causality1
The end of Moore's law doesn't bother me too much from a practical point of
view. My phone and computer are about as fast as I need them to be. What does
bother me, though, is that it's an inescapable reminder that there are hard
limits to everything, and sooner or later we're going to hit them. The
fact we're hitting this limit so fast is damaging my hope for a whiz-bang sci-
fi future. I didn't expect to be alive when we invented strong AI and perfect
3D projection systems and retinal display screens and all the rest, but I did
hope we eventually would. Slamming into the end of per-volume and per-watt
performance before the year 2100 makes me think maybe those things will never
happen at all.

~~~
sixothree
It is a strange and sad feeling to come to the realization that everything is
finite.

~~~
tim333
With possible exceptions for space, time, prime numbers and the like.

~~~
sixothree
For all we know space is finite and there is a countable number of stars in
the sky.

~~~
tim333
Yeah hard to know really on that one.

------
ars
$100million per lithography machine!

A bit of a side topic, but can you imagine if there were fewer people in the
world? Computing power would never improve because it would be too expensive.

I wonder what other technologies could be developed if only there were more
people (i.e. a larger market), but are too expensive right now.

People always like to talk about the downsides of population, but the high
tech world we live in now could not exist otherwise.

~~~
dfrage
And there wouldn't be enough smart people.

These lithography machines are completely wild: the part that fails the most
spits out a tiny droplet of molten tin. That's hit with a laser, and the
resulting plasma emits what's called extreme ultraviolet (EUV), which is very
close to the softest X-rays. Most of the power released is wasted bouncing off
a series of mirrors, and then wild stuff happens as these energetic photons
hit the semi-optional mask cover (pellicle), the mask, and then the chip:
[https://en.wikipedia.org/wiki/EUV_lithography](https://en.wikipedia.org/wiki/EUV_lithography)

------
sanxiyn
"After all, there aren’t many numbers left between 5 and 0." For some reason,
I found this sentence very funny.

~~~
0xffff2
Probably because it's both patently untrue and still informative in a way. It
would be more precisely stated as "there aren't many atoms left between 5(nm)
and 0". If my math is right, 5nm is only a width of about 25 silicon atoms.

~~~
patrick5415
Yes, you would think the IEEE would know there are uncountably many numbers
between 0 and 5.

~~~
dougmwne
IEEE surely does know. I'm sure the point is to imply the marketing
ridiculousness. After 1nm they'll need a new vanity metric.

~~~
tim333
1nm is about 5 atoms so I guess the next metric would be the number of atoms.
Single atom features would be kind of cool but not buildable with present
technology.

------
_Microft
Plotting the cost per unit of compute power over time gives a different picture:

[https://sv.wikipedia.org/wiki/Teknologisk_singularitet#/medi...](https://sv.wikipedia.org/wiki/Teknologisk_singularitet#/media/File:Moore%27s_Law_over_120_Years.png)

In this view, the show is far from over. It even starts to look like
super-exponential progress. The latest data points show GPUs instead of CPUs,
but this is most likely OK, since GPUs are the suitable technology for easily
parallelized workloads like machine learning applications.

------
dmix
Anyone know if there's a longer version of this chart which goes back farther
in time?

[https://spectrum.ieee.org/image/MzMwNzU1Ng.jpeg](https://spectrum.ieee.org/image/MzMwNzU1Ng.jpeg)

------
moflome
Anyone care to confirm that Intel Custom Foundry is still up & running?

~~~
blu42
Do you mean at 14nm? I guess once they successfully migrate their actual
production line to 10/7nm, whenever that might be..

------
ahartmetz
But think of all the interesting work ahead! :)

I kind of welcome the death of the "just wait for hardware to improve"
approach to optimization. I find computers interesting _because_ the field is
so fresh and you have to figure out so many things on your own and through
communication with people who are still alive. Due to the death of Moore's
Law, it's going to stay that way for a while longer, and the end result is
going to be more varied.

------
sytelus
Nice article with a bit of low-level detail. We are indeed encountering the
problem that light's wavelength is no longer small enough for us! Hence the
use of 13.5nm EUV with multiple patterning to get to 5nm. Only two foundries
are left in the entire world which can do such a process (Intel and others are
years behind).

~~~
dfrage
> Intel and others are years behind

Not clear anyone other than Intel is trying, but Intel's situation is much
more complicated. They tried for a "10nm" node that would stay purely optical
for its whole life, more aggressive than TSMC's initial "7nm" node, and
catastrophically failed. There are clearly some top-level management issues
there, a big problem for Intel for decades, and it's extremely worrying that
they let them destroy a generation of their crown jewel.

But that doesn't tell us much about their "7nm" node very roughly equivalent
to Samsung and TSMC's "5nm" nodes, except those companies have a lot more real
world EUV experience, but it's not always good to be a pioneer. Intel _could_
conceivably get back in the game in about the same time frame as these two
companies exit "5nm" risk production, we just don't know. All we know is that
they're buying EUV machines and installing them in multiple fabs.

------
Darmody
Stupid question. Could trinary processors play a role here?

I keep hearing people talking about them and their benefits over the binary
system.

~~~
layoutIfNeeded
>I keep hearing people talking about them and their benefits over the binary
system.

Who are these people?

