Hacker News
Ask HN: What is likely to improve with 10/7nm CPUs?
52 points by gameswithgo on July 30, 2018 | 18 comments
I've done a bit of googling and haven't found any clear, non-marketing explanations of this. When Intel and AMD start shipping 10nm CPUs, what is likely to improve? Will clock rates go up any? Will power use go down? Or will we simply be able to cram more cores onto a CPU? Can L1 cache size increase?



If you believe Intel's tech brief, the move to 10nm vs their current 14nm+++ will:

1) Have somewhat lower peak performance, i.e. peak clock rates are lower and won't surpass 14nm+++ until 10nm++.

2) Have ~2.7x higher density

3) Have ~45% lower power usage

My thoughts: #1 and #3 are intertwined; lower peak performance doesn't mean that products will actually be clocked lower. In a lot of cases the operating frequency is limited by thermals, so cutting power usage can mean higher frequencies in practice. Adding in #2, what I think we'll see is that single-threaded performance isn't going to improve much except in thermally constrained situations. We'll see CPUs with similar core counts/performance as today but at notably lower power, and CPUs with similar clock speeds/power as today with ~50% more cores (or transistors in general).

EDIT: One other possibility on the single-threaded front: higher density can make tighter timings possible. E.g. they could do an architecture refresh that cuts the number of cycles for executing slower instructions, or possibly shorten cache latency, etc.


Is the lower power usage at the same clock rate?

Is the same clock rate at the lower power usage?

These questions sound logically equivalent, but to marketing they may not be. The values for 1, 2, and 3 might be taken from three different processors.


Not much. Clock wall, power wall, and latency wall are here.

The only things improving are memory size, bandwidth, and the number of cores.

Now, we are still very far from every workload being at its theoretical maximum throughput (which would be achieved with an ASIC).

Over the next few years, the interesting things won't come from Intel or AMD. The generic processor killing all competition has reached an end. Intel could once just wait until the next technology node to thwart any attempt at adapting to a corner of the market. This is no longer the case; with so much money at stake, expect a Cambrian explosion.

It will come from specialized workloads getting an accelerator (think integrated TPUs, NICs with FPGAs, etc.); it will come from tighter chip packaging/integration (think close-access DRAM a few μm from the CPU with through-silicon vias, early filtering with predicate pushdown directly in the SSD controller, etc.).

Now, the code will have to adapt; and I posit that the languages best suited for this should have:

* An easy way to control the memory layout

* An easy way to insert special-purpose instructions

* A compiler infrastructure flexible enough to be able to coerce code into the accelerators

* An easy way to serialize (possibly half-executed) code that should be transported closer to the data


>Not much. Clock wall, power wall, and latency wall are here.

This. I will add one more: the price wall. The cost of designing a CPU and the cost of the leading node are increasing, while scale and ASPs are not. The cost of building a new fab is increasing, the cost of fab equipment is increasing, and the cost of design tools is increasing; they could be double what they were 10 years ago, while PC unit volume has been mostly flat or declining for a number of years. As this trend continues, somewhere around TSMC's 5nm and 3nm nodes there will be fewer players able to afford them. For Intel that means selling smaller dies (higher margin) and trying to recoup their investment ASAP.

For AMD there are lots of improvements, mostly catching up with Intel. Zen 2 will increase IPC, and 7nm allows them to double the core count AGAIN. Zen 3 with 7nm+ will have DDR5, PCI-E 5?, and other improvements.

So the 10/7nm cycle will dramatically increase core count, thanks to AMD.

Unfortunately DRAM and SSD are still expensive, and it seems the Cloud Sector has infinite appetite for DRAM and SSD.


SSDs aren't still expensive. Prices have gone way down lately...


>SSDs aren't still expensive. Prices have gone way down lately...

Not expensive; maybe that is not the word I should have used. But according to [1], its price is about the same as 18 months ago, and if you look longer term, prices haven't dropped much compared to 24 to 36 months ago. And I was speaking in the context of servers, in terabytes of memory and petabytes of storage.

[1] https://pcpartpicker.com/trends/price/internal-hard-drive/


Have you seen the Burst Compiler in Unity3d?


Just looked it up, and I am sort of failing to find a proper resource. Is it about using SIMD capabilities?


There is a video by Mike Acton about it somewhere. But yes, the current iteration lets you use a subset of the usual language (C#) and will SIMD it, but I believe it's intended to be able to send the same stuff off to a GPU if desired as well.


All of the above can happen. Smaller transistors give you room to put more transistors on a chip or to reduce power, and one can also do a mix; so vendors choose what is going to happen. In the competition between Intel and AMD, I see Intel winning on only one point: single-thread performance. So to compete better, AMD will do its utmost to improve their single-thread performance.


Not much for you, I'm afraid. L1 size is likely not to change significantly, to maintain access latencies. Frequencies are not going to increase, because of power. Core capabilities will not change in any significant way either. The best bet is more LLC.


Don't smaller transistors reduce power consumption?


Yes, but for the previous few generations of transistor manufacturing, transistors' power consumption has not scaled down as well as their size. This is known as the failure of Dennard Scaling[0].

[0] https://en.wikipedia.org/wiki/Dennard_scaling
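For reference, the classic scaling result (a standard textbook derivation, not from the linked article): scale linear dimensions and voltage by 1/κ, so per-transistor power shrinks exactly as fast as density grows.

```latex
% Dennard scaling with linear-dimension factor \kappa:
% dimensions, V \sim 1/\kappa; C \sim 1/\kappa; f \sim \kappa
P_{\text{transistor}} \propto C V^2 f
  \sim \frac{1}{\kappa}\cdot\frac{1}{\kappa^2}\cdot\kappa
  = \frac{1}{\kappa^2},
\qquad \text{density} \sim \kappa^2
\;\Rightarrow\; \text{power density} \sim \text{constant}.
```

The failure mode is that V can no longer scale down with κ (threshold-voltage and leakage limits), so power density now rises with each shrink.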


Power leakage is an increasingly large problem nowadays. (Because yes, semiconductors are nowadays suffering from observable quantum effects.)


Yeah but power consumption increases with something like the 4th power (heh) of frequency so there's not much you can do even with really low power transistors.


According to this article on Wikipedia, it's a linear relationship: https://en.wikipedia.org/wiki/CPU_power_dissipation

Edit: maybe this takes into account that when you increase frequency you might also have to increase the voltage to keep the processor stable? But a 4th-power effect seems quite extreme.


Yes, but just like Moore's law, Dennard scaling is on the edge of having its easy gains mined out.


AMD has come out and said that they will skip 10nm in favor of 7nm chip designs, and as for what will improve, it is likely everything.

Overall performance per watt is up, in addition to raw clock-speed ceilings and core counts, and consequently we will see low-power mobile devices (i.e. laptops, tablets) receive impressive performance uplifts.

That said, Intel is currently floundering, so we will see AMD take a much larger slice of the pie across all device profiles.



