I really wish I knew and understood more about the things that govern CPU performance (not just the physics but also the problems people are trying to solve). For example, it seems that CPU clock speeds have been stuck at around 2 GHz for a decade now, even as chips get harder to make and more expensive. Are we really seeing benefits worth what must be billions of dollars of research from these seemingly incremental performance gains? Or am I so far off the mark that the question is ludicrous?
Not sure what you mean about being stuck at 2 GHz? My current Dell laptop and the previous one both use 45 W CPUs and can hold a sustained 3.2 GHz while compiling with 8 threads.
That immediately drops as soon as I start using the iGPU or even the Nvidia dGPU, because of power limits and thermal limits.
On the desktop, my 5960X from 2014 sustains 3.3 GHz and overclocks to 4.1 GHz all-core, if I want to bother. The Ryzen 1700X does 3.7 GHz and the Ryzen 3900 seems to do 4.2 GHz pretty easily. So there's no 2 GHz limit there. Intel is even better, with many desktop chips able to do 5 GHz.
Running transistors at higher frequencies requires stronger electric fields. Stronger electric fields consume more power and produce more heat. CPU dies are kept small so that latency between components stays low, but the tradeoff is that they have very little surface area through which to shed heat. The ultimate governor of CPU performance is how much heat can be dissipated from the chip during normal operation; high-end chips have been able to hit 7+ GHz under liquid nitrogen cooling for years.[0] Stronger electric fields and smaller transistors also lead to gate leakage, where electrons tunnel across the potential barriers that are supposed to prevent conduction. Fighting that requires all sorts of microarchitecture tricks, while wasting still more power and producing more heat.
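To put rough numbers on that: dynamic switching power scales roughly as P ≈ C·V²·f, and higher frequencies generally demand higher voltage, so power grows much faster than linearly with clock speed. Here's a back-of-the-envelope sketch in Python; the capacitance and voltage/frequency pairs are made-up illustrative values, not measurements from any real chip:

```python
# Rough CMOS dynamic-power model: P = C * V^2 * f (ignores leakage).
# All constants below are illustrative guesses, not real chip data.

def dynamic_power(c_farads, v_volts, f_hz):
    """Switching power of CMOS logic."""
    return c_farads * v_volts ** 2 * f_hz

C = 1e-9  # hypothetical effective switched capacitance

# Pushing frequency usually means pushing voltage up with it.
for volts, f_ghz in [(1.0, 2.0), (1.2, 3.5), (1.4, 5.0)]:
    watts = dynamic_power(C, volts, f_ghz * 1e9)
    print(f"{f_ghz:.1f} GHz @ {volts:.1f} V -> {watts:.1f} W")
```

Going from 2.0 GHz to 5.0 GHz in this toy model costs roughly 5x the power for 2.5x the frequency, which is exactly the mismatch the heat-dissipation limit punishes.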
Well, we've seen decent IPC gains from Intel and, finally, massive gains from AMD. Clock speed increases matter less as we pack more threads into smaller packages with higher IPC.
The desktop side doesn't really have a clock speed problem, since cooling is less constrained; it's the thin-and-light trend in mobile that's holding things back.
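As a rough model of why that still adds up to progress: single-thread speed is roughly IPC × clock, and total throughput multiplies that by core count, so IPC and core-count gains can outpace a flat clock. A toy comparison in Python; the core counts, IPC, and clock figures are illustrative guesses, not benchmark data:

```python
# Toy model: throughput ~ cores * IPC * clock (GHz).
# Both "chips" below are hypothetical; numbers are illustrative only.

def throughput_gips(cores, ipc, clock_ghz):
    """Billions of instructions per second, idealized."""
    return cores * ipc * clock_ghz

old = throughput_gips(cores=4, ipc=1.5, clock_ghz=3.5)
new = throughput_gips(cores=12, ipc=2.2, clock_ghz=3.8)

print(f"old: {old:.0f} GIPS, new: {new:.0f} GIPS, "
      f"{new / old:.1f}x with almost no clock speed change")
```

That works out to roughly 21 vs 100 GIPS, a ~4.8x gain with the clock barely moving.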
Personally I wish people would accept a bit more heft and noise from their laptops so that we could get a better selection of plain chunkier machines. I still use and love my X201 ThinkPad, but sometimes I think about saving up the money to make it a bit more powerful[1].
Small gaming laptops pack a lot of punch if you are willing to accept some fan noise and flashy exteriors.
All the gains are in multicore performance. There just isn't anything that needs that much performance, really. The biggest performance problem your everyday users might encounter is a memory leak causing the computer to swap.
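And even when software is multithreaded, Amdahl's law caps the benefit by whatever fraction of the work is serial, which is part of why extra cores don't feel faster in everyday use. A quick illustration in Python; the 75% parallel fraction is an arbitrary example, not a measured workload:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n the core count.
# p = 0.75 is an arbitrary illustrative choice.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 16, 64):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.75, cores):.2f}x")
```

With a quarter of the work serial, even 64 cores top out below 4x.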
Other than for real-time video encoding and C++ compilation, our CPUs are definitely fast enough.
Are margins on CPU binning getting smaller? I've always wondered something: how likely is a CPU to fail over time if it shipped with features X, Y, and Z enabled when it really belonged in some other bin and only barely passed QC for the one it got?
I suppose my question is more about materials engineering... could the CPU degrade in a way that eventually causes it to fail? Or is the enabled feature set, once it passes QC, basically solid forever?
> [The] main performance difference between the 12W and 15W will be seen on multithreaded applications, which are more power limited. 3W of extra power will be used to increase average frequency into higher performance. We have seen between 5-15% performance increase on some benchmarks such as Spec06, SYSmark and 3DMARK.
Sounds like we might finally get more than 16 GB of RAM on the 13-inch MacBook Pro. Intel has been dragging their feet on supporting LPDDR4 for way too long.