Intel 10nm Chip Gets Mixed Reviews (eetimes.com)
120 points by JoachimS 73 days ago | 96 comments



Have these tests been run with or without the security patches? Since Spectre/Meltdown/etc. it has become increasingly difficult to compare numbers. Lately Intel's IPC advantage over AMD has been completely melted away by these patches. It will probably take some time until we can judge whether these new IPC gains are solid or whether they have been bought with new compromises.

EDIT: To clarify: If you compare both "out of the box", then the old one would be unsafe while the new one hopefully is safe because it comes with hardware/firmware patches built in. If you compare both patched, the old architecture takes a large performance hit compared to when it was launched. Over time the patches usually evolve and change the performance characteristics, so you have to be careful about which OS or firmware revision you are using. However you do it, it's not an easy apples-to-apples comparison anymore unless you're talking about a very specific use case at a very specific point in time.


I see the whole Spectre/Meltdown/etc fiasco as an interesting tradeoff: you can have higher performance if you don't care about those side-channel attacks, which is what a lot of applications like HPC are going to do anyway because they don't run untrusted code. That still gives Intel an advantage.


Intel lost me when they released Windows 10 microcode that killed my overclock by rendering the multiplier ineffective. I can get it back by removing the file from the system folder, but it feels dodgy as hell.

Also, that 6850K CPU / Asus X99 Gaming Strix combo, which I bought just after release, is the worst I've ever experienced; so many problems, including frying the CPU when using the XMP settings that were then being promoted as easy (took me 2 replacements to figure it out).

A 3900X or 3950X is coming as soon as possible. I am peeved about upgrading so early (it'll be only 3.5 years), but I've had it. The old 2600K PC is still ticking along very nicely at 4.4 GHz.


Those Sandy Bridge chips just overclocked amazingly. Slap on an aftermarket cooler (Hyper 212) and you could easily get a 4.4 GHz all-core boost, and hit higher on water.


Yes! I built the 2600k in 2011, put on a very chunky Noctua NH-D14, found a stable clock and it's just been running like that ever since. Not a single BSOD.

The 6850K was such a massive letdown. Yes, when it worked the extra performance was great, but it has been anything but stress-free. Thus: the new Ryzens look simply astonishing, at a much lower power / heat cost to boot.


For people who purchase their own systems and run single-tenant on bare metal, sure. Cloud providers offering shared infrastructure got their lunch eaten a bit here.


Between side-channel attacks and the steady improvement in rowhammer techniques, the mantra of "there is no cloud, just someone else's computer" deserves a renaissance.

Tech promoters have spent a lot of time and energy explaining that anyone saying that is just an idiot who doesn't understand that cloud is the future (e.g. 1, 2). But the basic insight was never wrong, and the people saying it knew just fine what they were talking about. 'The cloud' means giving up physical-layer control, essentially by definition. That's a real tradeoff people ought to make consciously, and it's one that lost some ground lately.

[1] https://www.zdnet.com/article/stop-saying-the-cloud-is-just-...

[2] https://www.techrepublic.com/article/is-the-cloud-really-jus...


With a dedicated server one can have the same isolation in a cloud as with a server in a basement.


I think that's just a question of how "cloud" is defined.

Certainly a server in a datacenter can be as isolated as a server in the basement. And unless your threat model involves governments, a reputable hosting company having physical access to the box shouldn't be much scarier than having it in your office.

But lots of people (including those cloud-hyping articles I linked) claim that dedicated servers, even with virtualization, are just "remote hosting". Their standard for 'cloud' is basically "computing as a utility", with on-demand provisioning and pooled resources. I know some huge companies have attempted "private clouds" that provision on-demand from within a dedicated hardware pool, but I think most smaller projects have to choose between on-demand and dedicated.


IBM Cloud has fast provisioning of bare-metal servers, and you can use your own image stored in object storage as well (disclaimer: I work there). Fast provisioning uses bare metal that is already built and racked. There is a limited selection of options compared to non-fast-provision bare metal, where you select every component. Last I heard it takes about 30 minutes for a fast provision, so it's not instant, but it's still pretty cool.


If the mitigations need to be explicitly disabled, which they do, not many people are going to do it. Not much of an advantage, if you ask me.


Big datacenters have large and talented engineering staff and routinely customize their machines and firmware heavily. Consumers aren't going to do it, that's true (and relevant to the article: all the Ice Lake parts mentioned are consumer chips). But on a per-revenue basis, most of the market is amenable to this kind of thing.


Big data centers are also the most likely to be executing customer input. They almost certainly have all side-channel mitigations applied.


Not every physical die is running security sensitive code. In fact, most aren't.


Sure the datacenter infrastructure won't require the mitigations, but every single multi-tenant die will.

And I'm assuming my $5/mo DO droplet isn't on its own dedicated die....


> every single multi-tenant die will.

To be fair though, those chips are a comparatively small part of the datacenter market. Most of them are sitting in IT closets, or per the example above are running HPC workloads on bare metal. Cloud services are the sexy poster child for the segment, but not that large in total.


Gamers are absolutely going to tweak every setting to increase performance of their machines.


The trouble is gamers are susceptible to it. They're running all kinds of untrusted code (JavaScript, custom game levels created by other users) as well as receiving untrusted game data from other users in multiplayer games, which commonly then goes through a data parser optimized for performance over security.


While perhaps true in theory, has there ever been a known case where game DATA from a multiplayer game was able to exploit a remote system and, say, obtain root access?

Stuff like rowhammer is very different vs something like a SQL injection on a website.



Plenty of gamers download mods that straight up execute code on their machines. They also download game tools that are just code.

They also run games that are just not secure. I know one that was storing user credentials in plain text in the registry (where no special permissions are needed to access it)


They also download pirate games from sketchy Russian sites.


In HPC, software is often optimized for the specific machine it is running on.


How many of these security leaks still need software patches for Sunny Cove? I thought Intel implemented hardware mitigations for pretty much all of them in the 10th generation chips?


Some security issues require software patches, but those CPUs also include hardware improvements which aim to reduce the overhead to a negligible level (a minimal sketch of the typical software mitigation is at the end of this comment).

According to Anandtech [0] only Spectre V1 requires pure software mitigation.

[0] https://www.anandtech.com/show/14664/testing-intel-ice-lake-...
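For reference, the usual Spectre V1 software fix is just clamping the array index so a mispredicted bounds check can't steer a speculative load out of bounds. A minimal C++ sketch, similar in spirit to the Linux kernel's array_index_nospec(); the names are illustrative, and real implementations use architecture-specific sequences so the compiler can't turn the mask back into a branch:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Branchless index clamp: all-ones mask when idx < size, all-zeros otherwise,
    // so even a mispredicted bounds check cannot produce an out-of-bounds load.
    static inline std::size_t clamp_index(std::size_t idx, std::size_t size) {
        std::size_t mask = std::size_t{0} - static_cast<std::size_t>(idx < size);
        return idx & mask;
    }

    static std::uint8_t table[256];

    std::uint8_t load_checked(std::size_t idx, std::size_t size) {
        if (idx < size)                           // architectural bounds check
            return table[clamp_index(idx, size)]; // speculation-safe access
        return 0;
    }

    int main() {
        std::printf("%d\n", load_checked(10, sizeof table));
    }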


This sounds very promising. But it also sounds like something that PR would write. I think we'll have to wait some time until independent researchers get their hands on these chips and give them a thorough testing.


I'm also taking that with a grain of salt since AT has a habit of not challenging Intel that often, and only issuing some weak response even when it turns out it was PR. Of course this may not be the case this time.


I read somewhere that it would take several years and generations for them to be fixed on the hardware side.


Some "hardware patches" are microcode changes that also result in performance degradation.


Maybe, but that's hardly relevant when comparing performance of these chips to e.g. Ryzen 2, as there is no way to disable these mitigations on Ice Lake anyway, it's simply what you get out of the box.


The big gotcha is that meaningful mitigation of ZombieLoad on affected chips requires disabling Hyper-Threading (the Hyper-Threading-based version of the attack cannot be fixed in microcode or software), but Intel has taken the position that this isn't actually necessary on normal consumer machines. So when Intel says it has hardware fixes for all this stuff, it's not clear whether that means it has actually fixed it.
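On Linux you can at least see what the kernel thinks the current state is: it exposes per-vulnerability mitigation status under /sys/devices/system/cpu/vulnerabilities/, and the mds entry also reports whether SMT is still a problem (e.g. "Mitigation: Clear CPU buffers; SMT vulnerable"). A minimal C++ sketch that just dumps those files:

    #include <fstream>
    #include <iostream>
    #include <string>

    // Linux-only: print the kernel's reported mitigation status for a few
    // well-known speculative-execution issues. Wording varies by kernel and CPU.
    int main() {
        const std::string base = "/sys/devices/system/cpu/vulnerabilities/";
        for (const char* name : {"mds", "spectre_v1", "spectre_v2",
                                 "meltdown", "l1tf"}) {
            std::ifstream f(base + name);
            std::string status;
            if (std::getline(f, status))
                std::cout << name << ": " << status << '\n';
            else
                std::cout << name << ": (not reported by this kernel)\n";
        }
    }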


I don't think the title is doing Intel justice in this case. The graphics performance went up substantially, as did the IPC. Intel decided to trade-off part of the IPC improvements for a lower power consumption. Combined, Ice Lake seems like an improvement across all dimensions for me.

Perhaps we've just been spoiled with the leaps that AMD has been making recently.


> Intel decided to trade-off part of the IPC improvements for a lower power consumption

It's probably not a deliberate trade-off; their 10nm process is simply not good enough yet to get high yields when pushing clock speed. Their 14nm process was excellent in this regard after they optimized it for so long; it is the main advantage 9th-generation Intel parts still have compared to Ryzen 2. It's not a surprise they have to take a step back in that regard until 10nm improves.


> Intel decided to trade-off part of the IPC improvements for a lower power consumption

That's not a given. The "more performant under the same power envelope" chips in the lineups all have more cache, more vector extensions, SMT unlocked, and more Turbo Boost, but no actual qualitative improvements.

Some even argued that Ice Lake is less power efficient than the latest Skylake derivatives.


Did you mean Ryzen 3000/Zen 2?


You're right, I keep getting confused by this ;-)


I don't think the title is doing Intel justice in this case.

The chip is getting mixed reviews.


I don't think it's really a tradeoff in the normal sense; it's very likely, from what we know, that Intel's 10nm is just incapable of high clocks, at least for now.


So, Intel produced new APUs for low-end/middle class laptops. Great, if that's what they were aiming for.


And high-end ultrabooks.

Personally I'm more excited about the prospect of less-crippled ultrabooks than about a few percent bump in single-thread performance.


Just understand that 15"+ laptops are never getting Ice Lake.


Graphics are "improved substantially" from a low base.

Just before the smartphone revolution happened, Microsoft decided to make a GPU-centric graphics architecture (WDDM, which Wayland imitates) and Intel decided to make an integrated GPU that was just barely adequate to run Windows Vista.

Intel's plan has always been to capture as much of a PC's BOM for themselves as possible, so they hoped to vanquish NVIDIA and ATI, and might even have succeeded if it weren't for cryptocurrency and deep learning.

The trouble is that Intel has been pursuing phoneishness for the last ten years instead of performance, systematically erasing the reasons why you would buy a PC instead of a phone. They've tried to sell phone chips in China where there is no free press and they can keep people from talking about the benchmark results, how slow their phone is, how hot it gets, etc.

Intel's idea has been that gaming is playing Candy Crush and they've let AMD steal their fire by making the CPU/GPU SoC for the XB1 and PS4 -- PC gaming has converged with console gaming in many ways, but the common denominator is that Intel integrated graphics is tolerable for the most casual of casual games, and even the recent performance improvements get you to 2 frames per second in League of Legends as opposed to 1. Intel is entirely AWOL when it comes to GPGPU on their integrated graphics but it just isn't worth the effort with their low performance parts.


> They've tried to sell phone chips in China where there is no free press and they can keep people from talking about the benchmark results

Chinese gov blocking specific keywords and network ranges is one thing. Intel censoring reviews is an extraordinary claim... could use some extraordinary evidence.


> graphics performance went up substantially

Maybe this is due to design improvements unrelated to the process node?


And much increased area dedicated to graphics


This was my thought. I wish they had spent that area on giving me more cores. But whatever.

I hope AMD starts using more than one chiplet for laptops. Until they do, an AMD laptop is limited to 4 cores.


Zen 2 compute chiplets are 8 cores; the 12/16-core desktop parts have two of them. Right now the Ryzen 3000 APUs are still using the older Zen(+) cores with Vega graphics on a single monolithic die, hence the limit. The next-gen APUs will likely feature the Zen 2 core chiplet plus a Navi GPU and give you 8 cores on mobile.


How would you feel about a 10-core Comet Lake?


It was not a trade-off, it was a last-minute fix to take attention away from the CPU deficit.


The relative IPC performance is impressive, since with (a big surprise) much lower base clocks (e.g. ~1.0-1.2 GHz vs ~1.8 GHz for the previous gen) and slightly lower boost, these are getting slightly lower, equal, or slightly better results in CPU tests (not looking at the GPU). That said, it seems like they just couldn't clock them higher and still fit the better graphics within the same TDPs, and overall performance is completely unimpressive compared to 8th gen, and for the top-level parts even worse (a rough back-of-the-envelope sketch of the IPC-vs-clock arithmetic is at the end of this comment). OK, obviously they focused on the GPU and the results are great compared with HD Graphics, but I didn't see any comparison with Iris, and I doubt it looks that good.

The 3700U is currently a mediocre mobile CPU, between an 8th-gen i7 and i5. Its base clock is 2.3 GHz. But that's the Zen+ architecture, and the graphics performance is already on the Iris Pro level. If AMD can get the same IPC, clock, and TDP improvements on mobile Zen 2 as on the desktop, where the clocks have not been reduced by 30% like Intel did, I think Ice Lake won't be able to compete at all, from what we've seen so far. Of course, there is much more nuance in terms of heat / power envelopes and how it all behaves while boosting, but it definitely doesn't look very good for Intel based on this...
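Back-of-the-envelope only: sustained single-thread throughput scales roughly with IPC × clock, so a sizable IPC gain can be mostly eaten by a clock deficit. The numbers in this sketch are illustrative placeholders, not measured Ice Lake figures:

    #include <cstdio>

    // Rough model: relative throughput ~ (IPC ratio) * (sustained clock ratio).
    // Both ratios below are illustrative, not measured values.
    int main() {
        double ipc_gain    = 1.18;      // hypothetical ~18% IPC uplift
        double clock_ratio = 2.3 / 3.0; // hypothetical sustained-clock drop
        std::printf("relative throughput: %.2f\n", ipc_gain * clock_ratio);
        // prints ~0.90: a large IPC gain mostly cancelled by the lower clock
    }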


Intel has been digging this hole for a while: due to the delay of 10nm, they have just shipped rebadged 14nm chips with higher and higher clock speeds. No way a brand-new node can match those clocks with decent yields, so even with a healthy IPC increase, they are still fighting an uphill battle.


Not to mention consumer expectations. We've been hearing "just wait until 10 nm" for so long that I think a lot of people have been expecting a big generational leap.


One thing that Intel has that AMD doesn't is graphics integrated into nearly all their chips, plus a virtual GPU for KVM. If AMD can add something similar then Intel will be decimated, especially if AMD really starts to take over laptops. Unfortunately, I don't think Ryzen 3rd-gen laptops even exist right now.


It’s a timing mismatch: Intel always releases their mobile parts first; with AMD it’s the other way around.


Sure, AMD doesn't have mobile chips on Zen 2, but that's not far down the road and AMD does have a decent Zen+ mobile chip and it's being built into relatively high-end laptops (Lenovo T series, for example). I'm guessing AMD will release mobile chips on Zen 2 before Intel can fix their issues with 10nm, which will make for an interesting 2020.


I don't see why on-chip graphics support is necessary. An off-chip, independent graphics chip would surely suffice as redrawing the screen is loosely coupled with computation (describe screen, send to GPU, 60 times a second).

Other than some extra power draw needed to couple the 2 chips together, which I assume is minimal, splitting computation from rendering seems like a very good idea - where am I wrong? Would the extra monetary cost be significant - if so roughly by how much?


I recently built this AMD-based development rig with an ITX board and the cheapest GPU I could find:

https://penguindreams.org/blog/louqe-ghost-s1-build-and-revi...

I would have much rather left the GPU out entirely and used the space for something else, but the high-end Ryzens don't have any graphics support. I'd have to go down to the APUs, where I'd trade off CPU performance.

There's little point to even having HDMI/DisplayPort outs on the board itself; they're unusable except for a small subset of APUs.


13" or smaller laptops don't really have room for a discrete GPU (I supposed they could fit one by shrinking the battery, but why would you do that) and IGPs are generally "free" compared to ~$50 for a cheap discrete GPU. Logically the GPU is a separate unit whether it's on the same chip as the CPU or a separate chip.


Anandtech articles looking at the Ice Lake uarch and performance:

https://www.anandtech.com/show/14514/examining-intels-ice-la...

https://www.anandtech.com/show/14664/testing-intel-ice-lake-...

I'm personally still excited about these chips for laptops. Lower power and higher IPC mean same-ish performance as the previous generations, but with better battery life and thermals. Plus you get better turbo boost, better graphics, built-in support for TB3, WiFi 6, etc. Seems perfect for something like the Surface Pro. The Core uarch is getting dated, yeah, but Intel is going for breadth and better integration here and it looks compelling.


Intel Xeon is getting slaughtered as well.

https://www.tomshardware.com/news/amd-epyc-7742-vs-intel-xeo...


Those aren't 10nm Xeons.


They are Intel's latest and greatest processors in the respective product lines. 10 nm Xeons belong in a somewhat hypothetical future; comparing actual AMD products with Intel roadmaps and projections wouldn't be fair.


While that is true, it is also important to keep in mind that Intel is quite conservative with the Xeons, meaning that they lag behind the consumer chips by one tick/tock cycle (or whatever they call it these days)

IOW - by the time we see 10nm Xeons hit the market, AMD will most likely be on the next iteration of the Zen architecture.


I believe they call it the tick-tock-tock-tock-tock-tock cycle; shrink, microarchitecture, optimization, optimization, rebranding, rebranding.


I think now it's process-architecture-optimize (but the 10nm took so long that they just kept optimizing and Sunny Cove became a process and architecture change).


10 nm Ice Lake Xeons are coming in H1 2020.


Generally Xeons lag a process node behind the desktop lineup.


Oh well, better Lake than never


Overall there are many positive changes for the laptop domain. Integrated Thunderbolt and a better GPU mean cost, power, and space savings for many laptops.


Perfect for a NUC-based 1080p SteamBox...


I am actually not that interested in CPU performance increases but the lower frequency could be advantageous for thermal properties, which are often a problem in mobile devices.

They might also be more energy efficient which I think is the most relevant advantage Intel has against their competitors. So I don't really get the impression that the new chips don't perform well.

Spectre probably knocked off Intel's performance advantage, but is CPU performance really our current bottleneck?


The reason none of the reviewers are talking about how the CPU performs thermally is that Intel didn't allow them to. See page one of the Anandtech review for instance: https://www.anandtech.com/show/14664/testing-intel-ice-lake-... Not that they could have got terribly meaningful results anyway, since this was a fairly large and thermally unconstrained test system with the fan locked at 100%. (Since the kinds of systems this CPU would be used in are very thermally limited, this means the performance figures may not be terribly representative of the real world either.)


Huh, I had hoped these test NDAs were a thing of the past. But yeah, tests would probably not be conclusive in that case.

Still, it's the topic that would interest me most, more so than CPU performance.


Not when you need to make up some lost momentum and the competition is already running with it.

Intel keeps doing these paper launches and early announcements meant to keep them a little in the spotlight. They haven't been having a good time for a while now. Some websites take it with a grain of salt; some still don't, even after being repeatedly used to tout a nonexistent horn.


When you do something, the CPU works at _full_ speed for a given duration, which is what shows up as % utilization over time. If you make a processor half as fast, all CPU-bound tasks will take twice as long.


> Spectre probably knocked off Intel's performance advantage, but is CPU performance really our current bottleneck?

this is what my computers have to say about that :

https://i.imgur.com/2MGzz58.png

https://i.imgur.com/h2d8FO8.png


Links are broken


No, they aren't. (That said, they're completely uninformative and I have no idea what the heck these tiny unlabeled ASCII graphs are trying to convey. The ability to draw a lot of | characters quickly, perhaps?)


Look up "htop output" on Google image search.


I was expecting a before & after set of graphs.


It's my CPU usage when building stuff: 100% on all of my cores.


Quite a sight to see. I'm surprised that the build process is able to efficiently use up all of those cores. Plus I see you've got 64 GB memory, quite a beast of a machine. Care to share some more hardware specs and some details about the build process? (Compiler, language, purpose?)


Sure - the CPU is an i7-6900K, so 8 cores / 16 threads, overclocked to 4 GHz. The software (https://ossia.io) is C++17, built with cmake / ninja / clang / lld, so an incremental rebuild is between 1 and 5 seconds for most files (which is still waaay too slow for quick edits), but a full build is 5-7 minutes depending on the optimization options, debug mode, LTO, etc.

Also, most of the RAM here is not taken by the build process but by Firefox tabs & a Windows VM.


Hugely interesting bit of nerd porn, thanks for the reply.


Why would you say that the lower frequency would be better for thermals and power consumption? It would only be better than the same chip at a higher clock rate. If Intel could increase the clock and maintain the power consumption, that's what they would do. From that it follows that these chips already hit their power budget at a lower clock rate.


> Why would you say that the lower frequency would be better for thermals and power consumption?

Because transistors have a capacitance that has to be driven, a higher frequency requires a higher voltage, and a higher voltage means more switching power and bigger leakage currents, which means more heat and power consumption.
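The usual first-order model for the switching part is P ~ a * C * V^2 * f (activity factor, switched capacitance, supply voltage, clock). Since reaching a higher clock generally needs a higher voltage, power grows much faster than linearly with frequency. A tiny sketch with purely illustrative voltage/frequency points:

    #include <cstdio>

    // First-order CMOS dynamic power: P ~ a * C * V^2 * f.
    // The voltage/frequency pairs below are illustrative, not real Ice Lake data.
    int main() {
        double aC = 1.0;                        // lump activity factor * capacitance
        double p_low  = aC * 0.75 * 0.75 * 1.2; // ~0.75 V at 1.2 GHz
        double p_high = aC * 1.00 * 1.00 * 1.8; // ~1.00 V at 1.8 GHz
        std::printf("power ratio: %.1fx\n", p_high / p_low);
        // ~2.7x more dynamic power for a 1.5x clock increase in this example
    }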


Good question; that was just an assumption. I thought the improved manufacturing process would limit power consumption and that this was the reason for the decreased clock. But yeah, there could be completely different reasons for that.


[flagged]


That's a bit of a silly statement. And misogynist to boot.

Hardware is cheap. Programmers are not. It makes much more economic sense to have programmers "use layers upon layers" and buy extra CPU, than to have a couple of x86 assembly gurus hand craft every opcode of your application.


And force your users to buy extra CPU*

Development time may be more expensive than hardware if you're developing internal software for a small-ish company, or something extremely niche, but less so when the bloat saves some time for your relatively small team in exchange for bringing massive pain to the thousands/millions of your users.

I personally refuse to use "bloated" software even when there's no alternative, but it's a drop in the ocean. Some of the folks I know can't take the same stance simply because they don't know that a text messenger doesn't have to eat half of your machine resources and slurp the battery to zero in half an hour of usage. For them it's just something you have to deal with.


Had to look that term up. I have nothing against women. The opposite. I just don’t like layers upon layers of fakery to conceal problems.

Same with programmers.

It doesn’t make sense to make 1 billion people buy more hardware because of a small bunch of programmers.

It has nothing to do with handcrafting asm. All the virtualization, virtual machines, own little patchy frameworks which are unnecessary, the virtual DOM, the actual DOM. Just to show some text or a button on the screen which doesn’t adhere to the OS interface guidelines and probably doesn’t support screen readers, color schemes, font changes, DPI changes, proper behavior, keyboard navigation, or scriptability.

Yes, I like things and people lean.


> Hardware is cheap. Programmers are not.

I hate this line of reasoning. Yes it is true, as long as you keep things reasonable. But many use this principle to go beyond the reasonable.

Slack probably has cost much more in electricity alone than it would have taken to build a more efficient client.


Maybe. But I just program cake face for myself!

But seriously, if programmers didn't have several software layers to rely upon, a simple software tool would cost millions and would require an army of engineers or a few years to be completed.

Programmed a scroll bar for a UI toolkit once. That isn't as trivial as it seems and takes a while. A scrollbar...


I've never implemented one, so I'm sure it's more complex than it looks.

So you piqued my curiosity. What are some examples of those non-trivial things?


It has been quite a while... determining the length of the content you are scrolling and the size of your viewport was a challenge, as was catching every event that could change the position of the viewport, or handling input in general (the core of it is roughly the proportional mapping sketched at the end of this comment).

It was implemented in C for a small display on a debugging controller for a µC. It took quite a while even without any dynamic content, for something we take for granted. And in the end it was still pretty clunky.

I believe that browsers still disallow customizing scrollbars, and there are countless examples of people having built their own scrollbars to replace them. Many of those are quite wonky, so that calms me down at least.
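For what it's worth, the geometric core is just keeping the thumb's length proportional to the viewport/content ratio and its offset proportional to the scroll position. A minimal C++ sketch with made-up names and example pixel values:

    #include <algorithm>
    #include <cstdio>

    struct Thumb { int size; int pos; };

    // Map content length, viewport size and scroll offset onto a scrollbar track.
    // All names and the example values are illustrative.
    Thumb thumb_geometry(int content_px, int viewport_px, int scroll_px,
                         int track_px, int min_thumb_px = 16) {
        // Thumb length mirrors how much of the content the viewport shows.
        int size = std::max(min_thumb_px,
                            track_px * viewport_px / std::max(content_px, 1));
        // Thumb position mirrors how far the viewport is scrolled.
        int max_scroll = std::max(content_px - viewport_px, 1);
        int pos = (track_px - size) * std::min(scroll_px, max_scroll) / max_scroll;
        return {size, pos};
    }

    int main() {
        Thumb t = thumb_geometry(/*content*/ 4000, /*viewport*/ 800,
                                 /*scroll*/ 1600, /*track*/ 800);
        std::printf("thumb: %d px long at offset %d px\n", t.size, t.pos);
    }

And that's before any of the hard parts mentioned above: invalidation, hit testing, dragging, wheel events, keyboard input, and so on.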


Interesting.

I guess it's one of those things we take for granted now and thus you think that it can't be that hard to create... and then you try to do so and you realize how complex some things really are.

Thanks for the details!


Not all abstractions cost anything; they're simply useful. In addition, IDEs, compilers, and other parts of the toolchain pump up the output quality significantly compared to what would otherwise be achievable at a given time.

Programming doesn't exist in a vacuum. It has real business constraints, and you simply cannot make a perfect program. You must make the right compromises in order to be successful.



