Intel’s first 10nm Ice Lake CPUs revealed: here’s your decoder ring (theverge.com)
24 points by Tomte on Aug 1, 2019 | 14 comments


I really wish I knew and understood more about the things that govern CPU performance (not just the physics but also the problems people are trying to solve). For example, it seems that CPUs have been stuck at around 2 GHz for a decade now despite getting harder to make and more expensive. Are the seemingly incremental performance gains we're seeing really the payoff from what I'm sure is billions of dollars of research? Or am I just so far off the mark that it's ludicrous?


Not sure what you mean about stuck at 2 GHz? My current Dell laptop and the previous one used 45 W CPUs and can hold a sustained 3.2 GHz while compiling with 8 threads.

That immediately drops as soon as I start using the iGPU or even the Nvidia dGPU, because of power limits and thermal limits.

On the desktop, my 5960x from 2014 sustains 3.3 GHz and overclocks to 4.1 all-core, if I want to bother. Ryzen 1700x does 3.7 and the Ryzen 3900 seems to do 4.2 pretty easily. So there's no 2 GHz limit there. Intel is even better, with many desktop chips able to do 5 GHz.


They aren't desktop CPUs, but both the SPARC M8 and POWER8 ran at 5 GHz.


Higher frequency transistors require stronger electric fields. Stronger electric fields consume more power and produce more heat. CPUs are very small so that latency between components stays low, but the tradeoff is that they have very little surface area through which to shed heat. The ultimate governor of CPU performance is how much heat can be dissipated from the chip during standard operation; high-end chips have been able to hit 7+ GHz with liquid nitrogen cooling for years.[0] Higher electric fields and smaller transistors also lead to gate leakage, where electrons tunnel across the potential barriers that are supposed to prevent conduction. Fighting that requires all sorts of microarchitecture tricks, while at the same time wasting more power and producing more heat.

[0] https://www.engadget.com/2017/06/04/gskill-hwbot-overclockin...
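
As a very rough illustration of why the heat wall bites: dynamic switching power scales roughly with C * V^2 * f, and pushing frequency higher generally also means raising voltage. The capacitance and voltage numbers in this sketch are made up purely for illustration, not real chip figures:

    # Rough sketch of dynamic CPU power: P ~ C * V^2 * f (leakage ignored).
    # Capacitance and voltage values below are illustrative guesses, not chip data.
    def dynamic_power(c_farads, volts, freq_hz):
        return c_farads * volts ** 2 * freq_hz

    base = dynamic_power(1e-9, 1.0, 3.0e9)   # ~3 GHz at a nominal 1.0 V
    turbo = dynamic_power(1e-9, 1.3, 5.0e9)  # ~5 GHz, assuming it needs ~1.3 V

    print(f"power at 5 GHz vs 3 GHz: {turbo / base:.1f}x")  # ~2.8x power for ~1.7x clock

So the power (and heat) bill grows much faster than the clock, which is exactly the dissipation problem described above.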


Well, we've seen decent IPC gains from Intel and, finally, massive gains from AMD; clock speed increases are less of an issue as we get more threads packed into smaller packages with higher IPC.

The desktop side of things doesn't really have a problem with clock speed, since cooling is less of a constraint; it's the thin-and-light progression of mobile that's holding things back.

Personally I wish people would accept a bit more heft and noise from their laptops so that we could get a better selection of plain chunkier laptops. I still use and love my X201 Thinkpad, but sometimes I think about saving the money to make it a bit more powerful[1].

Small gaming laptops pack a lot of punch if you are willing to accept some fan noise and flashy exteriors.

[1] https://liliputing.com/2018/03/x210-mod-turns-classic-lenovo...


More like 5 GHz... or about 2 inches, in terms of how far light travels in one clock cycle at that speed.
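
For anyone who wants to check that figure, a quick sketch using the vacuum speed of light (signals on real interconnects propagate noticeably slower):

    # How far light travels in one clock cycle at a given frequency.
    C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s

    for ghz in (2, 5):
        period_s = 1 / (ghz * 1e9)              # seconds per cycle
        inches = C_M_PER_S * period_s / 0.0254  # metres -> inches
        print(f"{ghz} GHz: ~{inches:.1f} inches per cycle")
    # 2 GHz -> ~5.9 in, 5 GHz -> ~2.4 in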


All the gains are in multicore performance. There just isn't anything that needs that much performance, really. The biggest performance problem everyday users might encounter is a memory leak causing the computer to swap.

Other than for real-time video encoding and C++ compilation, our CPUs are definitely fast enough.


https://www.maketecheasier.com/why-cpu-clock-speed-isnt-incr...

This article from last year details some of the challenges in increasing clock speed while decreasing transistor size.


Energy efficiency is one important factor limiting the clock rate. (Otherwise, 4 GHz does not sound extreme these days.)


Are margins on CPU binning getting smaller? I've always wondered something: how likely is a CPU to fail after some time if it's been enabled for X, Y, Z features when it should've actually been some other variation and just barely managed to pass QC for its bin?

I believe my question is more about materials engineering... is it possible the CPU could degrade in a way that causes it to fail? Or is the enabled feature set, once it has passed QC, basically solidified forever?


> Are margins on CPU binning getting smaller?

Very likely so for Intel, if their yields are as low as rumoured and they don't have room to manoeuvre with binning.

In that case, they will struggle to supply even the baseline model, and will end up putting most of their output into it.


when you can't beat your competitor on performance, or on performance per dollar or watt, just upgrade your naming scheme to confuse everyone

well done!


> [The] main performance difference between the 12W and 15W will be seen on multithreaded applications, which are more power limited. 3W of extra power will be used to increase average frequency into higher performance. We have seen between 5-15% performance increase on some benchmarks such as Spec06, SYSmark and 3DMARK.

So, nearly no gain. A disappointment.
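
A rough back-of-the-envelope on the quoted numbers, assuming the TDP figures map directly to sustained package power (a simplification):

    # 12 W vs 15 W parts: power budget increase vs quoted performance uplift.
    base_w, bumped_w = 12, 15
    power_increase = (bumped_w - base_w) / base_w   # 0.25 -> +25% power
    for perf in (0.05, 0.15):                       # quoted 5-15% uplift
        print(f"+{perf:.0%} perf for +{power_increase:.0%} power "
              f"-> perf/W {(1 + perf) / (1 + power_increase):.2f}x")
    # perf per watt actually falls to ~0.84-0.92x of the 12 W part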


Sounds like we might finally get more than 16GB of ram on the 13 inch MacBook Pro. Intel has been dragging their feet on supporting LPDDR4 for way too long.



