Moore’s Law is not Dead (tsmc.com)



I've seen this interpretation before in HN comments and I think it is an unnecessary distortion of Moore's Law:

Note the historical context in which the observation was originally made (1965), when doubling was achieved through process size reduction. The effect this had on speed was twofold: first, operating frequency would increase roughly in proportion (by about 40%); second, and more unique to that time, since predecessors' transistor counts were so limited, there was significant room for improvements in instruction implementation and specialization as available transistors increased.

Although it's implied, Moore also explicitly stated this in his 1965 paper:

> [...] In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area. [1]

Later, this effect was more explicitly defined as Dennard scaling in 1974 [2].

Transistor count increases in recent years have very little to do with Dennard scaling or improving individual instruction performance, and everything to do with improving some form of parallel compute by figuring out how to fit, route and schedule more transistors at the same process size, which does not have the same effect Moore was originally alluding to.

[1] https://drive.google.com/file/d/0By83v5TWkGjvQkpBcXJKT1I1TTA...

[2] https://en.wikipedia.org/wiki/Dennard_scaling
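
To make the Dennard-scaling contrast concrete, here is a minimal Python sketch of the classical scaling relations (illustrative only; the 0.7x linear shrink per generation is the textbook assumption, and the numbers passed in are made up):

  # Classical Dennard scaling: shrink linear dimensions by k (~0.7/generation).
  # Capacitance and voltage scale with k, delay scales with k, so frequency
  # scales ~1/k while power density per unit area stays roughly constant.
  def dennard_generation(freq_ghz, transistors, power_w, k=0.7):
      return {
          "frequency_ghz": freq_ghz / k,         # ~1.4x faster (Moore's "higher speed")
          "transistors": transistors / (k * k),  # ~2x in the same die area
          "power_w": power_w,                    # same area -> roughly same power
      }

  print(dennard_generation(freq_ghz=1.0, transistors=50e6, power_w=30.0))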


The article, and the parent comment, share a common theme of "moving the goal posts" by re-defining (or at least attempting to re-contextualize) Gordon Moore's original conjecture.

There isn't anything wrong with that, but when TSMC's head of Global Marketing tries to do it, it makes me think that TSMC doesn't get it.

The argument "That isn't what Gordon meant, if you look at what he really said..." attempts to dismiss four decades of what every engineer expected it to mean, and nobody was correcting them because, well, why should they? It was working.

It wasn't until it started obviously failing to be true that semiconductor companies started arguing in favor of an interpretation they could meet, rather than admit that Moore's law was dead, as pretty much any engineer actually building systems would tell you. Somewhere there must be a good play on the Monty Python parrot sketch where Moore's law stands in for the parrot and a semiconductor marketing manager stands in for the hapless pet shop owner.

It is really hard to make smaller and smaller transistors, and the laws of physics interfere. Further, it's really hard to get the heat out of a chip when you boost the frequency. Dennard and others have characterized those limits more precisely, and as we hit those limits, progress along that path slows to a crawl or stops. Amdahl pretty famously characterized the limits of parallelism, and we are getting closer to that one too, even for things that are trivially parallelized like graphics or neural nets.

The fear semiconductor companies have is clear: if the only way to improve performance is better software, then you don't need new chips, and an idle fab, ready to do a wafer start, costs nearly as much as an active one.


Unless they figure out how to make even more advanced nanotechnology (MEMS comes to mind; it's hard too, as are analog systems), or make it less power intensive.


I'll link to the wikipedia graphic of Moore's Law: https://upload.wikimedia.org/wikipedia/commons/8/8b/Moore%27...

It makes the point that Moore's law is dead. You can see the slope change in 2006 with the Dual-Core Itanium 2. So it died after Dennard scaling toppled (and some of the fastest growth ever), and although transistor count continues to increase it is noticeably slower... I fear that it may go slower still, since economic costs continue to increase but profits don't (as fast). When the CFO figures out it's not worth investing in scaling, you just won't build new fabs (unless the gov just says so).

To put some numbers to this (topline growth):

1971 TMS1000 (8k) to 1979 M68000 (70k): 2.6 yr/double

1979 M68000 (70k) to 2003 Itanium 2 6M (400M): 1.9 yr/double

2003 Itanium 2 6M (400M) to 2006 Dual-Core Itanium 2 (1.6B): 1.5 yr/double

2006 Dual-Core Itanium 2 (1.6B) to 2018 GC2 IPU (24B): 3.1 yr/double
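
If you want to check those doubling times yourself, here is a minimal sketch (using the rounded transistor counts listed above; results are approximate):

  from math import log2

  # (year, transistor count) pairs, roughly as in the list above
  points = [(1971, 8e3), (1979, 70e3), (2003, 400e6), (2006, 1.6e9), (2018, 24e9)]

  for (y0, n0), (y1, n1) in zip(points, points[1:]):
      years_per_doubling = (y1 - y0) / log2(n1 / n0)
      print(f"{y0}-{y1}: {years_per_doubling:.1f} yr/double")
  # 1971-1979: 2.6, 1979-2003: 1.9, 2003-2006: 1.5, 2006-2018: 3.1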

Personally, I think that an era of backend improvements (wafer stacking etc.) to combine processes (memory/logic/flash) into locally interconnected 2.5D is possible, if there remains sufficient investment, and that in turn could drive some improvements to performance and pricing. However, that relies on ever more complex system design and integration... and brings its own heat dissipation issues. I don't see full 3D scaling or quantum compute on the 3-5 year horizon where big new investments will be required.

(edit formatting commentary and qualification)


That looks like a pretty straight line to me, TBH. You're doing more subdivision than is fair to the data. I did my own graph of just density, which is also pretty clearly straight.

https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrh...


Yours is noticeably straighter than the Wikipedia page. I see that you are plotting density and not #transistors?

Is that what Moore's law says? Or does die size count? If you plot total #transistors your fitted slope will plummet in the later years and you'll see the change post-2006.


The Wikipedia chart looks pretty straight to me. What I think you are calling "a change in slope" looks to me more like a period in which there were a bunch of outliers, and those outliers formed a kind of hump.


I think what Moore's law is depends on who you ask. Density makes sense and seems to be holding true, but so does cost per transistor, which hasn't held up.


I’m not sure I agree with your point about the Itanium2 specifically on this graph. It looks like the Itanium2 is an anomaly on the high side of the hockey stick to me, rather than being the rule.


Since this chart doesn't show transistors/mm2, you kinda have to use the peaks and assume they are close to the reticle/density limits. You could look at the lower ones or the "average", but that's not indicative of what is possible. In fact, both die size and reticle size have been growing, which makes the pre-300mm-wafer data points overperform... little of that happened in the last 15 years, which has not helped continue the trend, and that avenue is over since GPUs have been at max reticle for years now. I'm guessing that the die size increase was originally about cooling.

(edits: corrected grammar and a double negative, added cooling comment)


> and although transistor count continues to increase it is noticeably slower.

Yes, I would be surprised if the rate manages to be even linear - it can never be exponential as it was with process shrinkage because now the opposite is happening - we are increasing the total die size, but without changing the frequency. The first problem of this method is dealing with increased propagation delay between distant blocks, requiring different architecture and scheduling (assuming we just accept parallel computing as the only way forward). Then there is the much harder problem to sustainably solve: power consumption and heat dissipation, requirements that grow in proportion to the transistor count.

... all of these issues were naturally avoided with process shrinkage: propagation delay was reduced in proportion and power density per unit area remained roughly the same, so up to the physical limits of the material it was a truly sustainable window of exponential growth.
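
A toy sketch of that contrast, assuming dynamic power scales roughly as N * C * V^2 * f (the 0.7 shrink factor and the scaling assumptions are illustrative, not measurements):

  # Double the transistor count two ways: a Dennard-style shrink (C and V scale
  # with k, frequency with 1/k) versus simply laying down twice the transistors
  # on a bigger die at the same node and frequency.
  def relative_power(n_ratio, c_scale=1.0, v_scale=1.0, f_scale=1.0):
      return n_ratio * c_scale * v_scale ** 2 * f_scale

  k = 0.7  # assumed linear shrink factor per generation
  print(f"shrink:     {relative_power(2, c_scale=k, v_scale=k, f_scale=1/k):.2f}x power")
  print(f"bigger die: {relative_power(2):.2f}x power")
  # shrink ~1.0x power at higher frequency; bigger die 2.0x power at the same frequency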


My key point about Moore's law is that it's about being able to 'throw new hardware' at a problem. If you're changing the game by saying, hey we've now got 1000 cpu chiplets that can compute way more and all you have to do is parallelize all your software--different game, different name.


> My key point about Moore's law is that it's about being able to 'throw new hardware' at a problem. If you're changing the game by saying, hey we've now got 1000 cpu chiplets [...]

You're ignoring the physics. You can do all you want with new hardware designs, larger dies, more CPUs, etc., but ultimately they cannot deliver the same exponential gains that came with process shrinkage, because power consumption and heat dissipation problems increase with every extra transistor if the process size remains the same.

I'm not saying gains cannot be made without shrinking, but they will be incremental - at most linear, over a short window - not exponential, and they will come with another cost (more power consumption). For Moore's Law to be revived, look to material science for answers.


> You're ignoring the physics. You can do all you want with new hardware designs, larger dies, more CPUs, etc., but ultimately they cannot deliver the same exponential gains that came with process shrinkage, because power consumption and heat dissipation problems increase with every extra transistor if the process size remains the same.

He's not ignoring the physics, he's saying Moore's law (the "throw more hardware at the problem" version) is dead because of physics...


Tl;dr: Moore's Law is simply about the doubling of transistors; we got performance and reduced CPU cost along with it for the first 30-40 years. But somehow the world interprets the performance and cost as Moore's Law.


What matters concerning what a term means is how the world interprets it, not what it was etymologically, or what was in the mind of the person who coined it.


Instead of an ad, here's a good description of the situation: https://semiengineering.com/why-scaling-must-continue/

Tl;dr - old Moore's law is dead, but our new systems (AI, GPU...) still need a huge number of transistors, beyond what can be made on a single chip today, so scaling is still valuable.


I thought about upgrading my GPU recently (an RX 480 8GB from 2016) and was disappointed to find that even the RX 5700 XT is only double the performance of an RX 580/RX 480 (they're essentially the same GPU) [0]. It also costs double, and this is THREE years later. (It turns out the RX 500 series has hardly dropped in price since release, ~£200.)

So while Moore's law is still ticking along (albeit a little slower) and transistors are still shrinking (7nm vs 14nm), it doesn't really affect the price: instead of being able to buy 4x the performance (transistors) for a similar price, nothing has changed. It seems pricing is no longer based on cost; it appears to be based on what the market will pay.

Strangely, it doesn't seem to be the same with CPUs. During the same period, AMD has doubled core/transistor count between Zen 1 and Zen 3 while staying at the same price [3].

[0] https://www.guru3d.com/articles_pages/msi_radeon_rx_5700_xt_... [1] https://en.wikipedia.org/wiki/List_of_AMD_graphics_processin... [3] https://en.wikipedia.org/wiki/Ryzen


This is a little misguided. The only reason that AMD is so cheap on the consumer side is that they genuinely want to take a crack at Intel's market share. With GPUs, their only competitor is Nvidia, and as long as they are cheaper than Nvidia at reasonable performance, they are fine. GPUs are extremely profitable while CPUs aren't as profitable (yet).


I have a feeling AMD wants some of Nvidia's market share too. In fact, the reason Nvidia recently released all those 'Super' cards was to counter AMD's release; it's also rumoured that AMD had to reduce their prices as a result.

In many ways, Nvidia is acting like Intel did. AMD simply couldn't compete on performance, so this allowed them to get cosy (Intel's high-end for consumers stayed at 4 cores for a decade; Nvidia's prices have been steadily rising for a while).

Also, a lot has happened in the last few years: RAM prices skyrocketed, Bitcoin mining drove up demand for AMD GPUs. However both of those have been over for a while now.

The RX 480 was priced very aggressively; the $199 price grabbed some headlines (the 8GB card cost a little more). Given the die shrink and three years (plus the Ryzen cores doubling), you'd be forgiven for expecting the RX 5700 to come in closer to $199 than the $349 it launched at (the XT is $399); keep in mind that these new cards aren't much over double the performance.

While it's no secret that shrinking fabrication is getting harder and more expensive (it's hurting Intel right now), the price per transistor has still dropped noticeably for CPUs and SSDs, and even DRAM to an extent, but not for GPUs...

Even demand for machine learning doesn't explain the price hike (GPUs are now being artificially handicapped, just as professional graphics have been, shifting demand to the even more expensive Tesla cards).


Wasn't this also true 3 years ago? I.e. it only explains profit margins, not the absence of a price/performance drop. And, well, they have the same number of competitors in both CPU and GPU, and I think even their market shares are very similar, no?


I only wrote about the GPUs initially (as I had been looking at them in the last week). Then I noticed the comparison to CPUs, which even involves the same company. It just seems strange that two similar markets are functioning very differently. Increased fabrication costs don't explain what is going on.


Gordon Moore himself is quoted in 2015 as saying that the transistor count interpretation of Moore’s law will be dead within 10 years, in the Wikipedia article on Moore’s Law.

If something changed and it’s not dead, I’d love to hear more about the new processes that are making more transistors possible. Working at a chip maker now, but I’m a software guy. My understanding of the problem is that we’ve reached the optical limits of resolving power for the lithography processes. Trace widths are a tiny fraction of the wavelength of light, and chip area is so large we don’t have lenses that can keep the projection in focus at the edges. While there is theoretical potential to get smaller due to gates being many atoms across, actually building smaller gates has real physical barriers, or so I’m told.

I’d love to hear more about the manufacturing processes in general, and more specifically whether something really has changed this year. Does TSMC have a new process allowing larger dies or smaller traces, or is this article mostly hype?


Well, electron-beam lithography (EBL) is possible, one step beyond EUV; it is slow and is used to make masks today.

Extremely hard to make economical though.


This article is complete fluff.

There's no _real_ discussion of Moore's law. No new revelations about chip design. You say workloads need to exploit parallelism these days to see increased performance gains? No shit. Putting memory closer to the logic cores is a good idea? duh. Hell, the author makes the common mistake of conflating AI with ML, because it's clearly illegal for a businessman in any industry to not buzz about "AI".

> by Godfrey Cheng, Head of Global Marketing, TSMC

Yeah, this is fluff.


I agree. Seems like almost every article I see about an interesting topic fails to address the core of whatever it's supposed to be covering. If Moore's law does not address speed increase, that's fine; we've heard that before. If this guy thinks that the shrinkage will continue, despite some people's expectations, he should have said something about how and why. Otherwise, just don't publish the article because you have nothing to say.

It's like documentation these days that is all autogenerated boilerplate.


It depends what you call Moore's Law. For me, Moore's Law was essentially that the cost per transistor was divided by two every 18 months. Nowadays, it's not true anymore. But of course, it doesn't mean that progress has stopped.


You wrote:

> [..] divided by two every 18 months [..]

Wikipedia wrote:

> [..] whose 1965 paper described a doubling every year in the number of components per integrated circuit [..]

Godfrey Cheng, Head of Global Marketing, TSMC wrote:

> [..] The number of transistors in an integrated device or chip doubles about every 2 years. [..]

(Even if we ignore the difference on the cost versus number of transistors or components.)

12 months, 18 months, 24 months. How long is it? I have no clue with regards to the answer. It seems to me we first need to agree on the definition of a law before we can discuss it.


> Moore also affirmed he never said transistor count would double every 18 months, as is commonly said. Initially, he said transistors on a chip would double every year. He then recalibrated it to every two years in 1975. David House, an Intel executive at the time, noted that the changes would cause computer performance to double every 18 months.

From https://www.cnet.com/news/moores-law-to-roll-on-for-another-... (a source for the Wikipedia page).


18 months or 24 months, this is not very important. I was essentially talking about the fact that the cost per transistor was steadily going down, which is almost finished now.


Is there any data on the cost per transistor not going down? They seem to be able to pack an awful lot onto chips these days, the current record apparently being Samsung's 1TB memory chip with 2 trillion transistors.


I cannot find a source for this outside of HN, but I have read here that the R&D costs to build the next generation's fabrication plants double with each iteration, which limits the total number of sustainable independent chipmakers. Currently, only 3 are left (TSMC, Samsung and Intel), and they all might give up within a few generations when it gets too expensive to scale down further.


It's quite easy to compute:

https://en.wikipedia.org/wiki/Transistor_count

2019 - Epyc Rome: 32B transistors

2009 - Six-core Opteron 2400: 0.9B transistors

(32/0.9)^(1/5) ≈ 2.04 per two-year period (ten years, five periods), so it seems that Moore's law is actually working well.
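
The same arithmetic as a quick sketch, treating the ten years as five two-year periods (counts taken from the Wikipedia page linked above):

  n_2009, n_2019 = 0.9e9, 32e9       # Six-core Opteron 2400 vs Epyc Rome
  periods = (2019 - 2009) / 2        # five two-year periods
  print((n_2019 / n_2009) ** (1 / periods))  # ~2.04: roughly doubling every 2 years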


Rome exists in a gray area between "single integrated device" and "9 integrated devices tightly coupled together."


I don't see what difference 12/18/24 makes, as it's still exponential regardless.


The big difference is the impact of the law.

If it were a doubling only once every 100 years, it would still be exponential.
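
To put rough numbers on how much the period matters even though every case is exponential, compare growth over a single decade (simple arithmetic, not data):

  for months in (12, 18, 24, 1200):  # 1200 months = doubling once per century
      growth = 2 ** (120 / months)   # growth over 10 years (120 months)
      print(f"doubling every {months} months -> {growth:,.1f}x per decade")
  # 12 -> 1024x, 18 -> ~101.6x, 24 -> 32x, 1200 -> ~1.07x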


About the industry at large: lots of changes ahead.

Take a look at this: https://www.hotchips.org/program/

For the first time in a long while, they gave the entire first day to non-semi companies: Amazon, Google, Microsoft.

Nobody could've imagined the industry turning this way a decade ago.


> entire first day to non-semi companies: Amazon, Google, Microsoft

They've given the entire first day to the semiconductor divisions of these hulking behemoths.


It is not the density that I worry about. Judging from TSMC's investor notes and technical presentations, they don't see any problem with 2nm or maybe even 1nm. 3nm is currently scheduled for 2022, and 2nm for 2024. So it isn't so much about the technical side as about achieving those nodes within budget.

The problem is that somewhere along the next 5 years we may see the cost per transistor stop decreasing, i.e. your 100mm2 die would be double the price of the previous 100mm2 die, assuming the node scaling doubled density.
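
A hypothetical illustration of that break-even point (all numbers below are invented, purely to show the mechanism):

  # If a new node doubles density but the 100mm2 die also costs twice as much,
  # cost per transistor stops falling even though density scaling continues.
  def cost_per_transistor(die_cost_usd, transistors):
      return die_cost_usd / transistors

  old_node = cost_per_transistor(die_cost_usd=50.0, transistors=10e9)   # hypothetical old node
  new_node = cost_per_transistor(die_cost_usd=100.0, transistors=20e9)  # 2x density, 2x price
  print(old_node == new_node)  # True: no economic gain from the shrink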

At that point the cost of processors, whether CPU, GPU or others, becomes expensive and the market contracts, which will slow down the foundries' push for the leading node. We could see hyperscalers each designing their own processors to save cost, and we'd be back to the mainframe era, where these hyperscalers have their own DCs, CPUs, and software serving you via the Internet.


And they will get the yields in 7nm up and wafer costs down. It ultimately is about the economic cost of computation, which is still dropping. Smaller node sizes are not the only way to get more transistors and/or reduce total costs.


Can someone here talk about what desktop CPU's might look like for the consumer in 2029?

Like, I'm a gamer in 2029 and I'm looking for the equivalent of todays Intel Core i7 or AMD Ryzen. How much faster will it be? How different will it be from today? Etc.


Major changes will include the introduction of more 3D packaging, which among other things probably means much larger caches. I wouldn't be shocked if last-level caches reached, say, half a gigabyte or more if things go well, though they could certainly not go that well.

Depending on how much companies try to milk servers, we should see commoditization of large and fast non-volatile memories. This has many impacts; we might see laptops with hundreds of GB of effective RAM, and filesystems, with appropriate software to handle it, when run from these devices should approach the performance of in-memory data structures.

Cores themselves will probably be a fair bit wider, but struggle even more than they currently do to utilize their full throughput. Typical programs will run faster mostly because of the increased cache size and (potentially) vastly faster IO; optimized programs will benefit more from the actual throughput increase.

Hopefully chips will have continued increasing their core counts—it's certainly possible, if the will is there. Programmers will follow through after a delay; programming for parallelism is only done when cores are numerous. I could imagine 8 core 24 thread being fairly standard, for instance.

GPUs will just continue to get better, as density and memory and packaging allows them to do so. Neural network accelerators will be a lot better than today, and will very much appreciate the larger, faster memories that the future offers.

The x86 tax continues to grow as competition from Apple's CPUs (though not necessarily direct competition) results in cores wider than x86 can handle without more and more caching and other expensive optimizations.

Mill Computing finally starts work on their FPGA.


Unless something changes dramatically, single-thread performance will probably not be much better - a 50-100% speedup at most, I'd guess.

On the other hand, game developers are only just starting to spread the workload across more than one or two cores. Far Cry 5 is a pretty unfun game to actually play, but if you look at benchmarks you'll see that it actually gets some decent speedup when you go beyond four cores. Ultimately it's still bottlenecked by that one main thread, though. If the trend continues, gaming might actually benefit from the current race to cram in as many cores as possible.

A final random thought: I wonder if we will start to see big.LITTLE-style architectures on x86 desktop parts. Maybe 2-4 big cores optimized for very high clocks paired with 16+ smaller, more efficient cores for parallel tasks.


I'm a gamer in 2019 and I'm using an i5 2500k processor from 2011. It works well enough for the games I run. I might upgrade if I scrounge up $700 for cpu/mb/ram.

The point I'm making is that the processor doesn't matter much for gaming. An older computer can easily outperform a new computer if it has better components where it matters for gaming and casual use (SSD and GPU).


Yeah. My six year old 4670K is over half as fast as a brand new three hundred dollar CPU even without its significant overclock. It's hard to exaggerate how unimpressive the processor gains of the last five years have been.


Yeah, good point; I shouldn't have mentioned gaming. I guess what I'm looking for is the general desktop workstation situation 10-20 years down the line, rather than gaming specifically. A workstation for visual arts, music, streaming, Linux, programming, whatever - and especially whether there will be a point in the 2020s or '30s where you'll have long pauses between CPU releases if they don't improve as fast, etc.


I think 10-20 years down the line will see us move back to dumb terminals with servers over the internet. Microsoft has already moved to subscriptions for Windows and Office, i.e. corporate use. All the major companies are looking at game streaming, i.e. home use. Education is moving towards Chromebooks. The rest of the uses for PCs, like media consumption, have already moved to smart devices and smartphones/tablets. So apart from enthusiasts with money to burn on PCs, the actual use case for PCs will be gone, and hence the current form of PCs will be gone.


Well, when you can put computing power on par with a whole cloud in a single box, which you can nowadays, the equation changes again.

The cloud is limited by Amdahl scaling like everyone else, but distances and latencies between nodes are higher than in a single box. So instead you should think of time sharing, like in the ancient days of supercomputers. Most tasks require neither the massive parallelism of a cloud or cluster nor the computational resources of a supercomputer.

If it's a question of ownership or price... why not make it free? (Ignore this junk; make LibreOffice etc. better.)

Just think of it as a PC that gets smaller. You cannot quite go super small unless you replace input and output devices. That will require more understanding of our own sensorium.


I'm a developer. I don't spare much expense on my work computer - for example, I sit in front of six QHD IPS panels. I also started on a 286 where you were woefully behind the times if your CPU was more than about a year old.

I last upgraded my CPU when it was nearing 6 years old, and I could barely tell the difference. My current CPU is nearing 5 years old, and if it wasn't for some of the enticing stuff AMD's recently put out, a CPU update wouldn't even be on my mind at all yet - in fact I was surprised to recently notice it had been more than 4 years since I last bought a workstation CPU.

I expect I'll buy 2 workstation CPU updates between now and 2030, 3 at most - and even that would take something special dropping in the early-to-middle of my normal timeline.



That would be helpful if there were more than 3 opinion comments.


We have a long way to go in packing the RAM next to the CPU to reduce speed of light latency.
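
A back-of-the-envelope sense of scale (assuming signals propagate at roughly half the speed of light on real interconnect, which is only an approximation):

  c = 3e8                  # m/s, speed of light in vacuum
  signal_speed = 0.5 * c   # rough propagation speed in PCB traces (assumption)
  distance_m = 0.05        # ~5 cm from the CPU to a DIMM slot, one way (assumption)
  cycle_ns = 1 / 4.0       # one clock cycle at 4 GHz, in nanoseconds

  round_trip_ns = 2 * distance_m / signal_speed * 1e9
  print(f"{round_trip_ns:.2f} ns round trip, ~{round_trip_ns / cycle_ns:.1f} cycles at 4 GHz")
  # ~0.67 ns, i.e. a few cycles lost to propagation alone before the DRAM even responds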




