
A Bright Future for Moore’s Law? - lelf
https://venturebeat.com/2020/01/07/a-bright-future-for-moores-law/
======
ksec
Interesting this is coming from Intel.

Considering 14nm was nearly a year late, and 10nm is nearly 3-4 years late
depending on which schedule you look at. Especially when you consider that
the vast majority of Intel's shipments in 2020 will still not be on 10nm:
most of the server and desktop roadmap, as well as the H-series laptop
parts, are still on 14nm++++.

Did they just restart their Moore's Law counter in 2020?

And anyone reading and watching the tech space for the past decades might
notice the sudden increase in Intel articles, information leaks, and PR
pieces in the past 12 months, just when AMD is doing well. I am not
entirely sure this is a coincidence.

~~~
emodendroket
Apple has also been making mobile chips that rival x86 in some benchmarks,
right?

~~~
ksec
"Rivals" might not be the right word: within the 5W profile, Apple, using
TSMC's leading-edge node, has been making chips that _exceed_ Intel x86 in
_many_ benchmarks at a _significant_ cost saving.

------
hurricanetc
>At its simplest level, Moore’s Law refers to a doubling of transistors on a
chip with each process generation.

That is an interesting way of moving the goalposts. The actual observation is
that the transistors will double "about every two years."

With Intel moving from a tick-tock to a tick-tick-tick-tick-tick-tick-tick-
maybetock, I can see why they want to redefine Moore's Law to reference
their new reality.

~~~
ramzyo
Yeah, wow, that's egregious. And now the vague "system scaling" also keeps
this Intel-definition of "Moore's Law" alive and well. Clearly Intel are
setting up their own hoops to jump through.

> System scaling improvements are the gains that help us incorporate new types
> of heterogeneous processors via advances in chiplets, packaging, and high-
> bandwidth chip-to-chip interconnect technologies

------
majewsky
> Moore’s Law refers to a doubling of transistors on a chip _with each process
> generation_.

Yeah, I can see why Intel would want you to believe that.

------
Nokinside
Moore's law (1965) was that the number of transistors per single chip
doubles every year. The revised law (1975) is that it doubles every two
years.

You can achieve that by increasing chip area or by increasing density. In
the near future, also by stacking transistors on top of each other on the
same chip.

Packaging multiple chips into one is not part of Moore's law. AMD gets
better yield by quitting the race, and that looks like the real end of
Moore's law. Density increases are still going on, but they can't keep
Moore's law going by themselves.

Just putting more and more chips in a package would be a silly measure for
Moore's law; an arbitrarily large circuit board can be packaged in epoxy.
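
To make the two versions concrete, here's a rough sketch (Python; the
baseline is the ~2,300 transistors of the 1971 Intel 4004, and the numbers
are purely illustrative):

    # Illustrative: transistor counts under the 1965 (doubling every year)
    # and 1975 (doubling every two years) versions of Moore's law.
    # Baseline: the Intel 4004 (1971), ~2,300 transistors.
    BASE_YEAR, BASE_COUNT = 1971, 2_300

    def moore(year, doubling_period_years):
        """Predicted transistors per chip in a given year."""
        return BASE_COUNT * 2 ** ((year - BASE_YEAR) / doubling_period_years)

    for year in (1975, 1985, 2000, 2020):
        print(f"{year}: yearly doubling ~{moore(year, 1):.2e}, "
              f"two-year doubling ~{moore(year, 2):.2e}")

The two-year version lands within an order of magnitude of today's biggest
chips; the yearly version overshoots reality by many orders of magnitude,
hence the 1975 revision.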

~~~
marcosdumay
> Moore's law (1965) was that the number of transistors per single chip
> doubles every year.

Not exactly. It was the count at the chip size with the cheapest cost per
component.

You can't fix it by trading area for yield, nor by creating more expensive,
denser processes. Chip manufacturers are currently doing both, which is
good, obviously, but it won't bring the kind of evolution we used to get.
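
To see why trading area for yield runs out of road, here's a minimal
sketch using the textbook Poisson yield model, yield = exp(-area *
defect_density). The wafer cost and defect density below are made-up
illustrative numbers, not any real process:

    import math

    # Cost per good die vs. die area under a Poisson yield model.
    WAFER_COST = 10_000.0      # dollars per wafer (assumed)
    WAFER_AREA = 70_000.0      # mm^2, roughly a 300mm wafer
    DEFECT_DENSITY = 0.001     # defects per mm^2 (assumed)

    def cost_per_mm2_good(die_area):
        dies_per_wafer = WAFER_AREA / die_area      # ignores edge losses
        yield_fraction = math.exp(-die_area * DEFECT_DENSITY)
        cost_per_die = WAFER_COST / (dies_per_wafer * yield_fraction)
        return cost_per_die / die_area

    for area in (50, 100, 200, 400, 800):
        print(f"{area:4} mm^2 die: ${cost_per_mm2_good(area):.3f} per good mm^2")

The cost of good silicon per mm^2 grows exponentially with die area, which
is why big monolithic dies get expensive fast, and why trading area for
yield (chiplets) helps cost but doesn't, by itself, double transistors per
chip.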

~~~
zozbot234
I assume that the "cheapest chip cut" is also improving over time, if only in
the sense of cost per unit of transistor area. That's perhaps the facet of
Moore's law that's going to be comparatively easiest to keep going.

~~~
marcosdumay
Yes, the cheapest die size (in mm²) has been increasing since the
beginning.

But it has always been a minor gain. Doubling it every few years will stop
being useful very quickly.

------
riskneutral
This is sponsored content from Intel about a topic that threatens Intel.
Honestly, there should just be an indicator on HN to flag sponsored
content.

~~~
tompccs
The author discloses that he works at Intel. It was actually an informative
article about the state of the art in CMOS.

------
clarry
I don't know that it sounds bright. It sounds like there are developments
to be expected in the short term (the way the article presents it, it
sounds like it's just supposed to be good PR for Intel), but it's not clear
these tricks scale the way process shrinks used to scale.

Can you stack three, four, five, ten transistor layers on a chip? And how do
you power & cool that stack?

~~~
grogers
Cooling may be an issue for stacking tons of CPU cores, true. But further
increases in cache, RAM, and SSD sizes (which may not affect power/cooling
as much) will also have benefits.

~~~
clarry
My understanding is that caches are about as power hungry as the rest of
the core, and consume a lot of space on the chip, which is why you find
large high-TDP desktop & server chips with huge L3s and mobile chips with
rather tiny caches. And huge caches are getting problematic in terms of
latency too; if you're doing high-performance stuff, you might just have to
treat L3 the way you used to treat RAM as far as cost of access goes. AIUI
we just don't have much to gain on that front unless we can keep actually
shrinking the process.
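
If you want to see that cost-of-access gradient yourself, here's a rough
pointer-chasing sketch (Sattolo's algorithm builds one big cycle so every
load is dependent and cache-hostile; Python's interpreter overhead blunts
the absolute numbers, but the jump when the working set falls out of cache
usually still shows):

    import random
    import time

    def cyclic_perm(n):
        """Sattolo's algorithm: a random permutation that is one n-cycle."""
        perm = list(range(n))
        for i in range(n - 1, 0, -1):
            j = random.randrange(i)          # j < i keeps it a single cycle
            perm[i], perm[j] = perm[j], perm[i]
        return perm

    def ns_per_access(n, steps=1_000_000):
        perm, idx = cyclic_perm(n), 0
        start = time.perf_counter()
        for _ in range(steps):
            idx = perm[idx]                  # dependent load, no prefetching
        return (time.perf_counter() - start) / steps * 1e9

    for n in (1 << 12, 1 << 16, 1 << 20, 1 << 23):
        print(f"~{n * 8 >> 10:7} KiB of pointers: {ns_per_access(n):5.0f} ns/access")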

RAM and SSD sizes would mainly benefit from becoming cheaper (although even
that's not a given if we're stacking layers and effectively multiplying the
area -- with possible defects -- by number of layers) but that unfortunately
doesn't translate to performance. Both do also run into thermal limits if you
try to push performance. Alternatively you can add lanes for more bandwidth,
but that doesn't help with latency, and you need more silicon on the CPU to
actually handle it.

Right now I do not have any performance problem that I could solve by throwing
more RAM or SSD at it (but I could just walk into a store and buy either and
have enough for years to come; I can't walk into a store and buy a CPU that's
fast enough to not be a bottleneck for years to come). Where there are
bottlenecks, they are due to CPU execution speed or I/O bandwidth & latency.

------
Quarrelsome
I thought Moore's Law effectively ended over a decade ago when we switched
to multi-core chips?

This ruined all sorts of single-threaded architectures that relied on those
performance increases over time, and it meant programmers could no longer
realise performance gains from newly released chips without reworking their
code to accommodate the change.

~~~
clarry
Moore's Law is about transistor count, not clock frequency.
[https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moor...](https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moore's_Law_Transistor_Count_1971-2018.png)

~~~
coldtea
That's the "technically correct" version nobody cares about.

What people cared about, and referred to as "Moore's law" was the cheap
automatic doubling of performance (mostly due to clock frequency increases).

That's long gone with multicore...

~~~
Nokinside
Hey, just because laymen don't care does not mean there aren't people who
actually care, use the terminology correctly, and discuss it.

You should never be proud of dumbing down concepts on Hacker News. This is
not a consumer forum.

~~~
coldtea
> _Hey, just because laymen don't care does not mean there aren't people
> who actually care, use the terminology correctly, and discuss it._

What matters is the way the grandparent meant it, since that's what we are
discussing here, not the etymology or lexicographical definition of the term.
That said...

> _You should never be proud of dumbing down concepts on Hacker News. This
> is not a consumer forum._

Thanks for the sanctimonious sermon.

If one actually did their research, they would find that it's not just some
"laymen don't care" thing: industry insiders, pundits, and others have used
Moore's law in several different ways over the decades, regardless of the
original observation by Moore.

Also note that the law has long been linked with performance increases.
Wikipedia: "Moore did not predict a doubling 'every 18 months'. Rather,
David House, an Intel colleague, had factored in the increasing
_performance_ of transistors to conclude that integrated circuits would
double in _performance_ every 18 months."

Note also that, despite the name, it's neither a physical "law" nor a
scientific or technical term or lofty concept to be closely guarded from
laymen. It's just a casual, ad-hoc statistical observation.

~~~
Nokinside
Everyone can go and check what was said:
[http://www.eng.auburn.edu/~agrawvd/COURSE/E7770_Spr07/READ/G...](http://www.eng.auburn.edu/~agrawvd/COURSE/E7770_Spr07/READ/Gordon_Moore_1975_Speech.pdf)

It's all about complexity per chip, with complexity meaning the number of
components per chip.

~~~
coldtea
Not what TFA said; what "Quarrelsome" said that started this subthread:

"I thought Moore's Law effectively ended over a decade ago when we switched to
multi-core chips? This ruined all sorts of single-threaded architectures that
relied on these performance increases over time and resulted in programmers
being less able to realise performance gains in released chips without having
to rework their code to accommodate the change."

To which someone pedantically replied that Moore's Law is not about
performance increases, but transistor count -- to which I replied that that's
just one way it has been used (and not the most popular one either, or the
more pertinent to the grandparent's question).

------
mikehollinger
Moore's law is dying. The transistor and clock speed improvements we used
to see in regular cycles (33 - 66 - 133 - 266 - 433 - 1 GHz!) haven't
happened in a while, and throughput improvements have come from
architectural cleverness (some of which, you might say, contributed to
Spectre/Meltdown et al.).

A really, really interesting corollary goes like this:

- Moore's law delivers a doubling of compute capacity every 2 years

- capacity yields efficiency improvements in compute capacity per human
  being

- more efficiency per human yields net productivity gains per human

- net productivity gains drive economic growth

- therefore Moore's law, or something like it, is a critical driver of
  economic growth

This is a scary implication, and it's why, as an industry, we're highly
incented to come up with something to keep feeding the masses. If we don't,
a key driver of worldwide economic growth will die.

It's interesting that the article says they'll keep delivering on the
promise ("We have to") but repaints the picture to say we're obligated to
do so via more exotic software and hardware architectures. So expect to see
more purpose-built compute in all arenas.

~~~
thrower123
We're going to have to relearn how to wring every bit of performance out of
the chips we have. I'm bearish on the current trend of ever more bloated
runtime environments; stuffing an entire isolated instance of Chrome into a
container to run a todo-list app never should have made sense, and it
especially does not in an environment where compute and memory are once
again precious.

I hope that betting on Rust over JavaScript will be the winning play over
the next decade, but I'll probably be wrong.

~~~
randcraw
Certainly many kinds of apps (e.g. web-, GUI-, and disk- or DB-based) won't
care that Dennard scaling has essentially stopped on a uniprocessor. But in
other domains uniprocessor speed is still their life's blood: graphics,
AR/VR, data-parallel number crunching, secure computing, and increasingly,
I suspect, non-deep-NN forms of AI (like search).

As clock rates fall further behind the venerable curve of CPU clock-rate
doubling, I too wonder how long the 10x to 30x slowdown imposed by
interpreted languages can last. It's lovely to write code a bit faster
using a REPL. But if that code's slow runtime, or its likelier troubles
with portability or error recovery, diminishes its utility, then that
bargain is Faustian.
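
That 10x-30x figure is easy to reproduce in miniature. A toy comparison of
a pure-Python loop against the same reduction done by the C-implemented
builtin (exact ratios vary a lot by workload and machine):

    import time

    N = 10_000_000
    data = list(range(N))

    start = time.perf_counter()
    total = 0
    for x in data:            # interpreted: bytecode dispatch per element
        total += x
    t_interp = time.perf_counter() - start

    start = time.perf_counter()
    total_c = sum(data)       # same reduction, but the loop runs in C
    t_builtin = time.perf_counter() - start

    assert total == total_c
    print(f"pure Python {t_interp:.3f}s vs builtin {t_builtin:.3f}s "
          f"(~{t_interp / t_builtin:.0f}x)")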

I see some of that already after talking to a few startups in need of
low-level coding abilities. These days there seems to be a deficit of folks
proficient in fare like the GCC toolchain, that is, at the levels of
assembler, binary objects and libraries, linkers, and device drivers.

Given the proclivity of CS academia for the past 20 years to ground its
instructional concepts mostly at levels higher than binary (relying on
libraries, OOPL objects, and pseudocode), I wonder if this skills gap is
likely to grow into a real obstruction -- especially now, when we can
likely least afford it.

------
MayeulC
A very interesting point I heard in a seminar* about Moore's law is that it
was, before anything else, a commercial roadmap: at that point, Gordon
Moore was probably more business-oriented than research-oriented, and such
predictable improvements are above all useful for planning investments. So
it seems the semiconductor industry tried very hard to remain on that
roadmap all these years (and obviously failed recently).

* by Jean-Pierre Raskin (UC Louvain), on what the semiconductor industry might look like tomorrow, and how to make it more responsible.

------
albertzeyer
[https://en.m.wikipedia.org/wiki/Betteridge%27s_law_of_headli...](https://en.m.wikipedia.org/wiki/Betteridge%27s_law_of_headlines)

So I guess the answer is no.

~~~
blowski
As a rule, Betteridge's Law applies to controversial statements where the
author wants to provoke people but doesn't have enough evidence to say it
affirmatively. I don't think that applies here.

~~~
Tenoke
> Betteridge's Law applies to controversial statements where the author
> wants to provoke people but doesn't have enough evidence to say it
> affirmatively

That applies perfectly here though, no?

The statement that Moore's law isn't dead is controversial, the author
doesn't have enough evidence to prove the point, and this is mostly a PR
piece, as others have pointed out.

------
Kaiyou
I always wondered if we could get more powerful computers faster if we worked
on it 24/7 and if not, why not.

~~~
ksec
You might want to rephrase your question, as it is not clear what you are
asking.

~~~
Kaiyou
Why couldn't we have had computers as fast as today's a decade ago? What's
different now, and why couldn't we have gotten there faster?

~~~
ksec
There has never been any other product in modern history, or possibly in
all of human history, that has advanced as fast as transistor fabrication.
Moore's law delivered a doubling of transistors every 2 years for 50 years,
from 1965 to 2015. It wasn't until then that we hit a hiccup.
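
For scale: 50 years at one doubling every 2 years is 25 doublings, i.e.

    >>> 2 ** (50 // 2)   # 25 doublings
    33554432

a roughly 33-million-fold increase in transistors per chip.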

~~~
Kaiyou
Sure, but I've always wondered why it couldn't have been even faster.

