
Intel’s first 10nm Ice Lake CPUs revealed: here’s your decoder ring - Tomte
https://www.theverge.com/2019/8/1/20748224/intel-first-10nm-ice-lake-11-cpu-processor-laptop-decoder-ring
======
theshadowknows
I really wish I knew and understood more about the things that govern CPU
performance (not just the physics but also the problems people are trying to
solve). For example, it seems that CPUs have been stuck at around 2 GHz for a
decade now, despite getting harder and more expensive to make. Are we really
seeing benefits worth what I'm sure is billions of dollars of research, given
these seemingly incremental performance gains? Or am I just so far off the
mark that it's ludicrous?

~~~
zlynx
Not sure what you mean about being stuck at 2 GHz. My current Dell laptop and
the previous one both use 45 W CPUs and can hold a sustained 3.2 GHz while
compiling with 8 threads.

That immediately drops as soon as I start using the iGPU or even the Nvidia
dGPU, because of power limits and thermal limits.
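
(Not specific to that machine, but if you want to watch the throttling happen
yourself, a rough sketch like the one below - assuming a Linux box with the
usual cpufreq sysfs files - will log per-core clocks while you kick off a
compile and then start loading the GPU.)

    #!/usr/bin/env python3
    # Rough sketch: sample per-core clock speeds once a second on Linux.
    # Assumes the cpufreq sysfs interface is present (it usually is).
    import glob, time

    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))

    while True:
        mhz = [int(open(p).read()) / 1000 for p in paths]  # kHz -> MHz
        print("avg %4.0f MHz   min %4.0f   max %4.0f" %
              (sum(mhz) / len(mhz), min(mhz), max(mhz)))
        time.sleep(1)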

On the desktop, my 5960X from 2014 sustains 3.3 GHz and overclocks to 4.1 GHz
all-core, if I want to bother. A Ryzen 1700X does 3.7 and the Ryzen 3900 seems
to do 4.2 pretty easily. So there's no 2 GHz limit there. Intel is even
better, with many desktop chips able to do 5 GHz.

~~~
bluedino
They aren't desktop CPUs, but both the SPARC M8 and POWER8 CPUs ran at 5 GHz.

------
leetbulb
Are margins on CPU binning getting smaller? I've always wondered: how likely
is a CPU to fail after some time if it was enabled for X, Y, Z features when
it should really have been some other variant and only barely managed to pass
QC for its bin?

I suppose my question is more about materials engineering... is it possible
the CPU could degrade in a way that causes it to fail? Or is the enabled
feature set, once it has passed QC, basically good forever?

~~~
baybal2
> Are margins on CPU binning getting smaller?

Very likely so for Intel, if their yields are as low as rumoured and they
don't have room to manoeuvre with binning.

In that case, they will struggle to supply even the baseline model, and most
of their output will end up going into it.
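
A toy illustration of that reasoning (every number below is invented): if the
achievable clock of each die is roughly normally distributed, a worse process
mean pushes almost all dies out of the upper bins and into the baseline one.

    # Toy binning model -- every threshold and yield figure here is invented.
    import random

    def bin_counts(mean_ghz, n=100_000, sigma=0.25):
        # Hypothetical bins: >= 4.0 GHz top SKU, >= 3.7 mid, >= 3.4 baseline.
        counts = {"top": 0, "mid": 0, "base": 0, "reject": 0}
        for _ in range(n):
            f = random.gauss(mean_ghz, sigma)
            if f >= 4.0:
                counts["top"] += 1
            elif f >= 3.7:
                counts["mid"] += 1
            elif f >= 3.4:
                counts["base"] += 1
            else:
                counts["reject"] += 1
        return counts

    print("healthy process:", bin_counts(4.0))
    print("struggling one :", bin_counts(3.6))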

------
dis-sys
when you can't beat your competitor on performance, or on performance per
dollar or per watt, just upgrade your naming scheme to confuse everyone.

well done!

------
baybal2
> [The] main performance difference between the 12W and 15W will be seen on
> multithreaded applications, which are more power limited. 3W of extra power
> will be used to increase average frequency into higher performance. We have
> seen between 5-15% performance increase on some benchmarks such as Spec06,
> SYSmark and 3DMARK.

So, nearly no gain. A disappointment.
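
That figure is roughly what the textbook scaling would predict. As a crude
back-of-the-envelope (assuming dynamic power goes as C * V^2 * f and that
voltage rises roughly linearly with frequency in this range, so power goes
roughly as f^3): 15 W / 12 W = 1.25, and the cube root of 1.25 is about 1.08,
i.e. roughly 8% more sustained frequency.

    # Crude back-of-the-envelope: P ~ C * V^2 * f, and V rises roughly
    # linearly with f in the DVFS range, so P ~ f^3. Illustrative only.
    p_low, p_high = 12.0, 15.0                  # TDPs in watts
    freq_gain = (p_high / p_low) ** (1 / 3) - 1
    print(f"expected sustained frequency gain: ~{freq_gain:.1%}")  # ~7.7%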

------
pascoej
Sounds like we might finally get more than 16 GB of RAM on the 13-inch
MacBook Pro. Intel has been dragging its feet on supporting LPDDR4 for way
too long.

