
Intel 10nm Chip Gets Mixed Reviews - JoachimS
https://www.eetimes.com/document.asp?doc_id=1334991
======
std_throwaway
Have these tests been run with or without the security patches? Since
Spectre/Meltdown/etc. it has become increasingly difficult to compare numbers.
Lately Intel's IPC advantage over AMD has melted away completely because of
these patches. It will probably take some time before we can judge whether
these new IPC gains are solid or whether they have been bought with new
compromises.

EDIT: To clarify: If you compare both "out of the box" then the old one would
be unsafe while the new one hopefully is safe because it comes with
hardware/firmware patches built in. If you compare both patched, the old
architecture gets a large performance hit compared to when it was launched.
With time the patches usually evolve and change the performance
characteristics, so you have to be careful about which OS or firmware revision
you are using. Whichever way you do it, it's not an easy apples-to-apples comparison
anymore unless you're talking about a very specific use case at a very
specific point in time.
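
For what it's worth, on Linux the active mitigation state is exposed under
sysfs, so a benchmark run can at least record what it was measured against. A
minimal sketch (assuming a Linux host; the exact set of entries varies with
the kernel version):

    # Record kernel version and per-vulnerability mitigation status alongside
    # benchmark results, so numbers can be compared apples-to-apples later.
    # Assumes a Linux host; entries under this directory vary by kernel version.
    import json
    import platform
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigation_snapshot() -> dict:
        status = {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}
        return {"kernel": platform.release(), "mitigations": status}

    if __name__ == "__main__":
        print(json.dumps(mitigation_snapshot(), indent=2))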

~~~
userbinator
I see the whole Spectre/Meltdown/etc fiasco as an interesting tradeoff: you
can have higher performance if you don't care about those side-channel
attacks, which is what a lot of applications like HPC are going to do anyway
because they don't run untrusted code. That still gives Intel an advantage.

~~~
AsyncAwait
If the mitigations need to be explicitly disabled, which they do, not many
people are going to do it. Not much of an advantage if you ask me.

~~~
ajross
Big datacenters have large and talented engineering staff and routinely
customize their machines and firmware heavily. Consumers aren't going to do
it, that's true (and relevant to the article: all the Ice Lake parts mentioned
are consumer chips). But on a per-revenue basis, most of the market is
amenable to this kind of thing.

~~~
posixplz
Big data centers are also the most likely to be executing customer input. They
almost certainly have all side-channel mitigations applied.

~~~
deelowe
Not every physical die is running security sensitive code. In fact, most
aren't.

~~~
penagwin
Sure, the datacenter infrastructure won't require the mitigations, but every
single multi-tenant die will.

And I'm assuming my $5/mo DO droplet isn't on its own dedicated die....

~~~
ajross
> every single multi-tenant die will.

To be fair though, those chips are a comparatively small part of the
datacenter market. Most of them are sitting in IT closets, or per the example
above are running HPC workloads on bare metal. Cloud services are the sexy
poster child for the segment, but not that large in total.

------
hectormalot
I don't think the title is doing Intel justice in this case. The graphics
performance went up substantially, as did the IPC. Intel decided to trade off
part of the IPC improvements for lower power consumption. Combined, Ice Lake
seems like an improvement across all dimensions to me.

Perhaps we've just been spoiled with the leaps that AMD has been making
recently.

~~~
w0utert
> Intel decided to trade-off part of the IPC improvements for a lower power
> consumption

It's probably not a deliberate trade-off; their 10nm process is simply not
good enough yet to get high yields when pushing clock speed. Their 14nm
process was excellent in this regard after they optimized it for so long, and
it is _the_ main advantage 9th-generation Intel parts still have compared to
Ryzen 2. It's not a surprise they have to take a step back in that regard
until 10nm improves.

~~~
kristofferR
Did you mean Ryzen 3000/Zen 2?

~~~
w0utert
You're right, I keep getting confused by this ;-)

------
alluro2
The relative IPC performance is impressive, since with (a big surprise) much
lower base clocks (e.g. ~1.0-1.2GHz vs ~1.8GHz of the previous gen) and
slightly lower boost, these are getting slightly lower, equal, or slightly
better results in CPU tests (not looking at the GPU). That said, it seems like
they just couldn't clock them higher and still fit the better graphics within
the same TDPs, and overall performance is completely unimpressive compared to
8th gen and, for the top-level parts, even worse. OK, obviously they focused
on the GPU and the results are great compared with HD Graphics, but I didn't
see any comparison with Iris, and I doubt it looks that good.

The 3700U is currently a mediocre mobile CPU, between the 8th gen i7 and i5.
Its base clock is 2.3GHz. But that's the Zen+ architecture, and its graphics
performance is already on the Iris Pro level. If AMD can get the same IPC,
clock, and TDP improvements for mobile Zen 2 as on the desktop, where the
clocks have not been reduced by 30% like Intel did, I think Ice Lake won't be
able to compete at all, from what we've seen so far. Of course, there is much
more nuance there in terms of heat/power envelopes and how it all plays out
while boosting, but it definitely doesn't look very good for Intel based on
this...
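
As a rough back-of-the-envelope, treating single-thread throughput as roughly
IPC x clock (a big simplification that ignores memory effects and boost
behaviour; the numbers below are made-up placeholders, not benchmark results):

    # Toy model: single-thread performance ~ IPC * clock.
    # Shows how an IPC uplift can be cancelled out by a lower clock.
    # The figures are illustrative placeholders, not measured results.
    def relative_perf(ipc_gain: float, old_clock_ghz: float, new_clock_ghz: float) -> float:
        """Performance of the new part relative to the old one (1.0 = parity)."""
        return (1.0 + ipc_gain) * new_clock_ghz / old_clock_ghz

    # e.g. a hypothetical 18% IPC gain running at 3.9 GHz instead of 4.6 GHz
    print(f"{relative_perf(0.18, old_clock_ghz=4.6, new_clock_ghz=3.9):.2f}")  # ~1.00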

------
greatjack613
Intel has been digging this hole for a while. Due to the delay of 10nm, they
have just shipped rebadged 14nm chips with higher and higher clock speeds.
There's no way a brand-new node can match these clocks with decent yields, so
even with a healthy IPC increase, they are still fighting an uphill battle.

~~~
techntoke
One thing that Intel has that AMD doesn't is graphics integrated into nearly
all their chips and virtual GPU for KVM. If AMD can add something similar then
Intel will be decimated, especially if AMD really starts to take over laptops.
Unfortunately I don't even think Ryzen 3rd gen laptops exist right now.
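
For reference, the vGPU-for-KVM piece (GVT-g) goes through the kernel's
mediated-device framework: you carve a vGPU out of the host iGPU by writing a
UUID into a sysfs node and then hand that mdev device to QEMU. A rough sketch,
assuming GVT-g is enabled, the iGPU sits at PCI address 0000:00:02.0, and the
vGPU type name used here actually exists on the machine (list the directory to
see the real ones):

    # Sketch: create an Intel GVT-g vGPU instance via the mediated-device
    # sysfs interface. Assumes the i915/GVT-g modules are loaded, the iGPU is
    # at 0000:00:02.0, and the chosen vGPU type exists; must run as root.
    import uuid
    from pathlib import Path

    MDEV_TYPES = Path("/sys/bus/pci/devices/0000:00:02.0/mdev_supported_types")
    VGPU_TYPE = "i915-GVTg_V5_4"  # hypothetical type name; check the directory listing

    def create_vgpu() -> str:
        vgpu_uuid = str(uuid.uuid4())
        (MDEV_TYPES / VGPU_TYPE / "create").write_text(vgpu_uuid)
        # Hand it to QEMU with:
        #   -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid>
        return vgpu_uuid

    if __name__ == "__main__":
        print("created vGPU", create_vgpu())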

~~~
tempguy9999
I don't see why on-chip graphics support is necessary. An off-chip,
independent graphics chip would surely suffice as redrawing the screen is
loosely coupled with computation (describe screen, send to GPU, 60 times a
second).

Other than some extra power draw needed to couple the 2 chips together, which
I _assume_ is minimal, splitting computation from rendering seems like a very
good idea - where am I wrong? Would the extra monetary cost be significant -
if so roughly by how much?

~~~
djsumdog
I recently built this AMD-based development rig with an ITX board and the
cheapest GPU I could find:

[https://penguindreams.org/blog/louqe-ghost-s1-build-and-revi...](https://penguindreams.org/blog/louqe-ghost-s1-build-and-review/)

I would have much rather left the GPU out entirely and used the space for
something else, but the high-end Ryzens don't have any graphics support. I'd
have to go down to the APU parts, where I'd trade off power.

There's little point in even having HDMI/DisplayPort outs on the board itself;
they're unusable except for a small subset of APUs.

------
mafuyu
Anandtech articles looking at the Ice Lake uarch and performance:

[https://www.anandtech.com/show/14514/examining-intels-ice-la...](https://www.anandtech.com/show/14514/examining-intels-ice-lake-microarchitecture-and-sunny-cove)

[https://www.anandtech.com/show/14664/testing-intel-ice-lake-...](https://www.anandtech.com/show/14664/testing-intel-ice-lake-10nm)

I'm personally still excited about these chips for laptops. Lower power and
higher IPC mean same-ish performance as the previous generations, but with
better battery life and thermals. Plus you get better turbo boost, better
graphics, built-in support for TB3, WiFi 6, etc. Seems perfect for something
like the Surface Pro. The Core uarch is getting dated, yeah, but Intel is
going for breadth and better integration here and it looks compelling.

------
tuananh
Intel Xeon is getting slaughtered as well.

[https://www.tomshardware.com/news/amd-epyc-7742-vs-intel-xeo...](https://www.tomshardware.com/news/amd-epyc-7742-vs-intel-xeon-benchmarks,40089.html)

~~~
std_throwaway
Those aren't 10nm Xeons.

~~~
ch_123
While that is true, it is also important to keep in mind that Intel is quite
conservative with the Xeons, meaning that they lag behind the consumer chips
by one tick/tock cycle (or whatever they call it these days).

IOW - by the time we see 10nm Xeons hit the market, AMD will most likely be on
the next iteration of the Zen architecture.

~~~
mort96
I believe they call it the tick-tock-tock-tock-tock-tock cycle; shrink,
microarchitecture, optimization, optimization, rebranding, rebranding.

------
iopq
Oh well, better Lake than never

------
gameswithgo
Overall there are many positive changes for the laptop domain. Integrated
Thunderbolt and a better GPU mean cost, power, and space savings for many
laptops.

------
bitL
Perfect for a NUC-based 1080p SteamBox...

------
raxxorrax
I am actually not that interested in CPU performance increases, but the lower
frequency could be advantageous for thermal properties, which are often a
problem in mobile devices.

They might also be more energy efficient, which I think is the most relevant
advantage Intel has over its competitors. So I don't really get the
impression that the new chips don't perform well.

Spectre probably knocked off Intel's performance advantage, but is CPU
performance really our current bottleneck?

~~~
llampx
Why would you say that the lower frequency would be better for thermals and
power consumption? It would only be better than the same chip at a higher
clock rate. If Intel could increase the clock and maintain the power
consumption, that's what they would do. From that it follows that these chips
consume more power at a lower clock rate.

~~~
dcbadacd
> Why would you say that the lower frequency would be better for thermals and
> power consumption?

Because transistors have a capacitance that one has to drive every cycle: a
higher frequency requires more voltage to switch fast enough, and more voltage
means higher switching power (dynamic power scales roughly with capacitance x
voltage^2 x frequency) as well as bigger leakage currents, which means more
heat and power consumption.
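
A rough back-of-the-envelope using the textbook dynamic-power model
(P ~ activity x C x V^2 x f) and assuming, purely for illustration, that the
required voltage rises roughly linearly with frequency near the top of the
curve:

    # Toy model of CMOS dynamic power: P ~ activity * C * V^2 * f.
    # If voltage has to track frequency, power grows roughly with f^3.
    # Illustrative placeholder numbers only.
    def relative_power(f_ratio: float, voltage_tracks_frequency: bool = True) -> float:
        """Power at f_ratio x the baseline clock, relative to baseline power."""
        v_ratio = f_ratio if voltage_tracks_frequency else 1.0
        return (v_ratio ** 2) * f_ratio

    print(relative_power(1.2))  # ~1.73x power for a 20% higher clock
    print(relative_power(0.8))  # ~0.51x power for a 20% lower clock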

