Intel Announces 9th Gen Core CPUs (anandtech.com)
217 points by Tomte on Oct 8, 2018 | 219 comments



Once again it's amazing what Intel finds in the couch cushions when pushed by AMD. i7 consumer chips went from 4-core to 8-core since Ryzen launched. Where was this innovation before? Oh yeah, why would they bother. Ryzen is the best thing to happen to CPU architecture in years, even if you don't buy one.


Intel isn't innovating outside of process. They've been on the same µarch for most of this decade. They've gotten rid of hyper-threading for most of their desktop parts with this "generation" and have taken the "moar cores" approach to progress.

Sure, Intel's clocks are slightly higher than before, and their parts use less electricity, but they've been standing still in virtually all other areas.


Higher clock speeds and lower power consumption are not exactly trivial improvements when the process technology hasn't changed significantly...


The process has changed significantly. They are on their 4th iteration since 2012, and that has provided all their performance increases.


Indeed, the speed increases seem superficial - they probably could have achieved those speeds in earlier iterations but couldn't be bothered to do so.


Judging by how well earlier generations overclocked (when not artificially restricted), they had this clock headroom pretty much the entire time.


Skylake-X is quite innovative. The mesh-design was never-before seen, and AVX512 is an excellent instruction set. Sure, Skylake-X is expensive as all hell, but it's still innovation.

AMD managed to innovate at a price point most people can afford. But it's a bit disingenuous to ignore the advances that Skylake-X has brought forward.

--------

Skylake-client is very similar to Haswell, but still has a larger reorder buffer, wider decode, and a better branch predictor. It doesn't lead to much better clocks, but IPC is still up over the last few generations.


> and AVX512 is an excellent instruction set

I do like AVX512 in principle, but in practice the runtime cost of using it is so high that I've offloaded anything highly parallel to other hardware (e.g. GPGPU) -- there's always enough work to keep the CPU busy anyway.

Have you found AVX512 great in practice? I'd love to hear about it!


I can't say I've used AVX512 personally.

But what I can say is that AVX and AVX2 are rather limiting. And I want the new instructions that are found only in AVX512.

Not even to run 512-bit computations mind you. I want to run 128-bit computations in AVX512. You can use XMM registers in AVX512 ya know!!

The main benefits of AVX512 are the mask registers, scatter instructions (AVX / AVX2 have gather, but no scatter), and an extension all the way to 32 XMM / YMM / ZMM registers.

True, using ZMM registers causes severe downclocking issues. But XMM (128-bit) and YMM (256-bit) registers are still sufficient for CPU-based tasks.
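
For a concrete picture, here's a minimal sketch of what the mask registers buy you on plain XMM vectors. It assumes a CPU with AVX512F + AVX512VL and GCC/Clang with -mavx512f -mavx512vl; the values are made up for illustration:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        __m128i a = _mm_setr_epi32(1, 2, 3, 4);
        __m128i b = _mm_setr_epi32(10, 20, 30, 40);

        /* Mask register: which 32-bit lanes of a are greater than 2? -> 0b1100 */
        __mmask8 k = _mm_cmpgt_epi32_mask(a, _mm_set1_epi32(2));

        /* Masked add: lanes selected by k get a+b, the rest fall back to a. */
        __m128i c = _mm_mask_add_epi32(a, k, a, b);

        int out[4];
        _mm_storeu_si128((__m128i *)out, c);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 1 2 33 44 */
        return 0;
    }

Doing the same with AVX2 means juggling an all-ones/all-zeroes compare vector and a blend, so the mask registers are a real ergonomic win even at 128 bits.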

-----------------

Practical, pragmatic uses of XMM registers include:

* Cuckoo Hashing (See: https://github.com/stanford-futuredata/index-baselines/blob/...) -- Have 8 bins per hash value for a total of 16 bins. Use SIMD to perform 8x comparisons at once (see the sketch just below this list).

* Huge number of database applications: http://www.cs.columbia.edu/~orestis/sigmod15.pdf

Bloom Filters are the most obvious "SIMD" data-structure, with applications to databases and many other tasks. Sorting networks are best implemented in SIMD.

* See this discussion: https://news.ycombinator.com/item?id=16171806

In effect, you have Base64 encoding / decoding that is faster than a PCIe x16 slot. You can encode / decode Base64 using AVX registers faster than you can even pipe data to the GPU.
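
Going back to the Cuckoo Hashing bullet: the core trick is a single 128-bit compare per bucket. A rough sketch (my own toy layout with 16-bit keys, not the code from the linked repo; plain SSE2 plus a GCC/Clang builtin):

    #include <emmintrin.h>   /* SSE2 */
    #include <stdint.h>
    #include <stdio.h>

    /* Probe one 8-slot bucket with a single vector compare. */
    static int find_in_bucket(const uint16_t bucket[8], uint16_t key) {
        __m128i slots  = _mm_loadu_si128((const __m128i *)bucket);
        __m128i needle = _mm_set1_epi16((short)key);      /* broadcast the key     */
        __m128i eq     = _mm_cmpeq_epi16(slots, needle);  /* 8 compares at once    */
        int mask = _mm_movemask_epi8(eq);                 /* 2 bits per 16-bit hit */
        if (mask == 0)
            return -1;                                    /* not in this bucket    */
        return __builtin_ctz(mask) / 2;                   /* index of first match  */
    }

    int main(void) {
        uint16_t bucket[8] = {11, 22, 33, 44, 55, 66, 77, 88};
        printf("%d\n", find_in_bucket(bucket, 55));   /* 4  */
        printf("%d\n", find_in_bucket(bucket, 99));   /* -1 */
        return 0;
    }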

---------

True, the GPU is the ultimate SIMD processor. But in many cases, the SIMD task is too fast to go to the GPU. In particular, the Base64 encoder was encoding 20GB/s, while PCIe can only transport 15.6GB/s!! The CPU is done before the data even gets to the GPU.

Ditto with latency-sensitive code, like Cuckoo Hashing. SIMD speeds up the overall hash table, but you're only parallel by 8x. There's no point in offloading a Cuckoo Hash to the GPU.

In effect, you look for "small" parallelism of size 8 to 16 or so, and that's where AVX / AVX2 / AVX512 really shines. It's too expensive to move to the GPU, but you still get a HUGE speedup when you process it on the CPU.


> True, the GPU is the ultimate SIMD processor. But in many cases, the SIMD task is too fast to go to the GPU. In particular, the Base64 encoder was encoding 20GB/s, while PCIe can only transport 15.6GB/s!! The CPU is done before the data even gets to the GPU.

That seems like a good case for using integrated graphics that has access to the CPU's memory controller and so can access data directly without copying it over PCIe.

Though if you're running that kind of volume you may have PCIe as the bottleneck in any case to access the data on whatever storage/network device.


> Skylake-X is quite innovative. The mesh-design was never-before seen

The Xeon Phi (KNL) used a mesh topology and the KNL was released before Skylake-X.


Is the removal of HT part of the security concerns? I wonder what security fixes are in places compared to the last generation (or do they still depend on the Win/Lin/Mac kernel workarounds?)


Product segmentation. It's still available in the i9, which costs a little over $100 more than the non-HT 8-core i7.


It's important to mention that the $100 also buys a 33% larger L3 cache.


It seems like SMT would create considerably more cache pressure, as well as requiring more decode and instruction cache hardware. I have been assuming it actually requires more die area, and is not (always) just disabled for segmentation. Maybe there is someone here who knows more about architecture? :)


Binning.

So you do all the lithography and vapor deposition on a wafer. That wafer has ~100 physical processors on it (100 just to make the rounding easier). You split them into individual chips, and you start testing.

Say on ~10 of them, Hyper-Threading, all the cache cells, and all the magic virtualization stuff work. These become some prosumer Xeon-type deal.

On another ~10, Hyper-Threading and all the cache cells work. These are your i9s.

On another ~30, there's no Hyper-Threading and only some cache cells work. These are your i7s.

For the rest, there's no Hyper-Threading, only some cache cells work, and wow, only 4 physical cores work. These are your i5s and i3s (kind of).

The idea is that, yeah, whole parts of a CPU are defective or underperforming. So they just get disabled and the chip is "binned" as a lower-tier CPU of the same microarchitecture. All of these get sold at a hefty markup to offset the ~$13B Intel spends on R&D each year. Yes, their margins are... amazing.


Generally true but in practice there are almost no CPUs where the only thing that is broken is hyperthreads. All CPU companies will disable fully-functional chips to make that segment of their product lineup.

Also, the reality is that the consumer market is a massive beneficiary of this whole scheme. The server market is effectively subsidizing consumer processors to the tune of billions, if consumers had to pay full freight on their dies prices would be several times higher than they are.


    if consumers had to pay full freight on their dies prices
     would be several times higher than they are. 
This is doubtful. Intel's margins are amazing because they're a vertically integrated monopoly. They make the wafers, own the fabs, cut their own masks, etc. Very literally, sand comes in one end and chips come out the other.

The fundamental cost of producing an SoC with the same die area as a Xeon on the same node is likely roughly equal (or I imagine the fabs-for-hire would go out of business). So saying Intel -needs- the server market to subsidize their consumer line is a total lie.

Intel puts an extreme markup on their server-class processors, and a milder markup on the consumer segment.


It's definitely there but disabled.


That would certainly jibe with the discussions about disabling HT in the OpenBSD community.

Looks like they could just be trying to brute-force their way past the performance lost to the speculative-execution mitigations simply by adding more cores.


Well, if I were Intel and had developed some new technology that is better than my current lineup, I would not immediately release it. I would release it when the time is right. Maybe drip-feed it into the market, at times when AMD mounts a challenge, for example.


No argument.

But I'm a developer on Linux who uses a couple of languages that compile to machine code. Which means that I don't care if AMD has something twice as fast, I'm going to buy Intel for one reason alone: their performance counters support running rr.

That's at least a factor of two improvement in my productivity right there.

Last I heard, Ryzen's perfcounters weren't deterministic enough for rr, so it's dead to me.


For others who aren't as knowledgeable, could you give a quick rundown of rr and what causes this massive AMD penalty? I've never heard of anything that amounts to "switch to Intel for double the performance".


I bought 2


Yep, and look at the price point - that's very competitive.


Not really....

Core i9-9900K 8/16 cores, 3.6-5.0 GHz - $488

Core i7-9700K 8/8 cores, 3.6-4.9 GHz - $374

Ryzen 7 2700X 8/16 cores, 3.7-4.3 GHz - $320

Easy pick for me, Intel isn't even a consideration.


You should see the prices here in the UK, we can get a Ryzen 2700X 8C/16T for £300 or an Intel 9900k 8C/16T for £600.

The Intel is probably faster... but enough to justify literally double the money? I seriously doubt it.


>The Intel is probably faster...

Only in specific IPC heavy workflows.


The AMD can probably be built with ECC RAM while the Intel cannot, as well. I hope that's a trick AMD pushes them to enable on the i7s.


When was the last time your computer crashed because of a cosmic ray?


The problem isn't a crash, the problem is silent data corruption. I have a system with 128 gigs of ECC and it catches a soft bit flip or two about every 6 months.


Most people don't have that amount of RAM. What are the chances that a soft bit flip will cause silent data corruption that matters for a user with, say, 8 GB of RAM?


The answer to the question you've asked depends heavily on the particular user and their work, of course. If the user is just playing games, who cares? But maybe they're doing financial calculations or compiling software that lots of users run or maybe they're doing aircraft structural analysis.

In any case, bit flips are much more common than was suspected: https://arstechnica.com/information-technology/2009/10/dram-...

I believe strongly that ECC should be standard, because you can't safely assume that your users are doing worthless work. Apple got this right on (non-Mini) desktops a long time ago. Not yet on laptops, unfortunately.


If you’ve got files you care about then care about bit rot. I want ECC so my ZFS volumes don’t silently corrupt.


That study is faulty. The intern who did that study didn't know that Google would buy DRAM chips that failed the manufacturers' QA, put them on DIMMs themselves, and retest them at lower frequencies and with ECC turned on. When they already have to be tolerant of any node failing because of their scale, they can start playing fast and loose with this sort of thing if it makes financial sense.

EDIT: At -3 so far, does anyone want to explain the downvotes? I saw the google slides first hand, and there are comments from 2009 in that article saying the same thing.


Didn't downvote it either, but: "the intern" is a well-respected CS professor; the paper had two Google authors whom one would expect to have knowledge of Google's oddities. https://ai.google/research/pubs/pub35162

Your comment provided no substantiation of your claim, merely hand-waving, while casting aspersions on someone else's work.


All ram should be ecc. No one would accept this incorrectness BS if the precedent hadn’t been set by the monopolist.


It's really the memory manufacturers who are enforcing and profiting from the ECC shakedown though, AFAICS.


Intel probably gets more from selling a Xeon.


I didn't downvote, but I'd guess that they (a) want an authoritative reference to the story you're sharing, or (b) figure the point is moot because these errors still happen, even if less often than depicted by the study, and that ought to be enough to justify using ECC.

Like I said though, just a guess.


If you think bit flipping is rare, check out this write-up: http://dinaburg.org/bitsquatting.html

Bitsquatting: DNS Hijacking without exploitation

When bit-errors occur they can change memory content. Computer memory content has semantic meaning. Sometimes, that meaning will be a domain name. And applications utilizing that memory will use the wrong domain name.
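
A quick sketch of why that matters (a toy program, not from the write-up; the domain is hypothetical): enumerate every single-bit flip of a stored hostname and keep the ones that are still hostname-shaped, i.e. the addresses a bitsquatter could register.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *domain = "example.com";   /* hypothetical target */
        size_t len = strlen(domain);

        for (size_t i = 0; i < len; i++) {
            for (int bit = 0; bit < 8; bit++) {
                char flipped[64];
                strcpy(flipped, domain);
                flipped[i] ^= (char)(1 << bit);           /* the memory error */
                unsigned char c = (unsigned char)flipped[i];
                if (isalnum(c) || c == '-' || c == '.')   /* still hostname-ish? */
                    printf("%s\n", flipped);
            }
        }
        return 0;
    }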


Nice point. As with all things that follow a distribution, given the large number of machines out there this error will occur.


Run a filesystem that checksums data (e.g. ZFS) on a system without ECC RAM for a few weeks and then come back to us... I've done it, the results were surprising, and I replaced the RAM with ECC.


> Most people don't have that amount of RAM.

Most people don't buy desktops with 16 or 32 or 64-thread processors designed to maximize throughput.

Those who do tend to want to max out how much RAM they can shove in their box.


As a counter-example, my desktop has had 128 GB of RAM for two years. I've never needed that much; 64 GB would have been enough for the number of VMs and containers that I run. But I couldn't pass up a good deal. I got the RAM for $600.


A real sign of the times that I had a tough time finding 128 GB RAM kits (on a short search - this should be easy).


Bit flip in the dirty cache that gets written to disk? Depends on what data structure was corrupt...


Most bit flips won't cause a full-on crash, but may corrupt data in funny ways, or affect unused memory. I think people underestimate how common they are [1]

[1]https://ai.google/research/pubs/pub35162


Your computer? Not sure. Computers in general? A noticeable fraction of Firefox crash reports are due to single-bit memory flips.


Huh, this is really interesting. Can you go more into this? How do you know the cause of the error? What kinda numbers are we talking here?

Also, are cosmic rays really the main source of single bit flips, as opposed to just bad RAM maybe?


Firefox crash reports include the contents of registers and a few instructions around the instruction pointer. If the crash is in compiled code, not JIT code, you also know where in the binary you were and can get symbols from a symbol server, then compare what the instructions should be to what they actually were.

Some of the time the instructions don't match up, indicating corruption _somewhere_.

For the specific case of crashes in JIT-generated code, the contents of registers and the instructions can be related in various ways (e.g. if you have a jmp instruction the register better contain your code location). And if you know where your code locations might be (because you're a JIT, and are generating the code and aligning it in memory yourself) and the register with the code location looks like the sort of address you would end up with but with one extra low bit set, say...

I am having trouble right now finding the bug report where some of the JIT engineers were analyzing crashes in jitcode, but about 1/3 of those were due to bitflips if I recall correctly. What that means in terms of absolute numbers (or numbers per user-hour, which would be even more useful), I don't know.

Note, by the way, single-bit flips can be a consequence of a bad memory chip, not just of cosmic rays.
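
To make the "one extra low bit set" heuristic concrete, this is roughly the kind of check you can run over crash dumps (a sketch with made-up addresses, not Mozilla's actual tooling):

    #include <stdint.h>
    #include <stdio.h>

    /* A faulting jump target that differs from a known jitcode address by
     * exactly one bit smells like a bit flip rather than a logic bug. */
    static int is_single_bit_flip(uint64_t expected, uint64_t actual) {
        uint64_t diff = expected ^ actual;
        return diff != 0 && (diff & (diff - 1)) == 0;   /* exactly one bit set */
    }

    int main(void) {
        uint64_t jit_addr = 0x7f3a12340000ULL;          /* hypothetical jitcode address */
        printf("%d\n", is_single_bit_flip(jit_addr, jit_addr | 1));     /* 1: low bit flipped */
        printf("%d\n", is_single_bit_flip(jit_addr, jit_addr + 0x30));  /* 0: two bits differ */
        return 0;
    }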


Thanks! Incredibly interesting.


As AMD shareholder I am glad you see it that way :)


Full disclosure: I am also an AMD shareholder :)

I am not fanboying AMD here either. The facts for me are clear: AMD is going to continue to eat Intel for breakfast for the next 2 years at least. Intel isn't going to be competitive until they release a modular chip design, which, although they haven't announced they are working on one, I am 100% certain they are. Why am I certain? First, they now have Jim Keller, who led the design of the Ryzen chip. Second, they acquired NetSpeed, which has IP around modular CPU design (likely a play to ensure they don't get sued by AMD when they release a modular chip).

I know a lot of people are looking at Intel right now, thinking it is at a bargain price and a good time to buy. I think they have a lot further to fall. When the chips that Jim Keller is working on are about to hit the market, that is when it will be time to buy Intel. Until then I have no interest in anything Intel has to offer.


Would be really interested if you could provide a source or explanation on modular chips. Couldn't find anything with a couple of searches


Sure I would be happy to.

The tech is called "infinity fabric", you can look it up there is lots out there about it. It basically allows AMD to make several smaller dies and have them function together as one CPU. Here is an excellent video that explains it all (and also goes into why this is such a huge advantage that allows AMD to have significantly better yields with their wafers)

https://www.youtube.com/watch?v=ucMQermB9wQ


Not an easy pick for me. Intel has a ~16% performance advantage. The pricing is bad, but they charge it because they can. If I wanted to buy the best CPU, I would probably spend the 1.5x more. If I wanted to save money, it's another story.


The upper-end of Intel's offerings start to get into AMD's Threadripper price range.

AMD is offering a 16-core / 32-thread 4.4GHz monster for $899, and a 32-core / 64-thread for $1799.

If you wanted the best under $5000 (total cost of a computer), it seems like AMD Threadripper is the best. If you wanted the best below $1000, it seems like AMD Ryzen is the best.

Only if you are single-thread bound (5GHz clocks!!), AVX2 or AVX512 bound, or PEXT / PDEP bound (Stockfish 9) should you consider Intel. But otherwise, AMD is offering more performance at all price points up to the EPYC 7601 (~$3000 CPU: 32-core/64 threads / 8-memory controllers / 64 PCIe lanes direct to CPU, support for dual-socket).

--------

With that being said, I'm definitely interested in Intel's Xeon Silver platform. If Intel pushed dual-socket better, then they would be cheaper AND faster. IMO, it's a bit weird that Intel isn't taking advantage of their dual-socket solutions to counter AMD Threadripper (I mean... Threadripper really is just a dual-socket or quad-socket NUMA setup combined into a single socket).

As it is, Xeon Silver is hard to find and seems to be ticking up in price, unfortunately. Their nominal prices are actually quite good, although their clocks are kinda low. Xeon Silver really seems like Intel's price/performance champ (even if it's still a bit more expensive than Threadripper or EPYC).


Really depends on what you're doing. If you're doing a lot of builds or running virtual machines, AMD is the clear winner here. No contest.

If you're doing primarily single-threaded things (e.g. gaming) then the Intel chips will give you better performance.


Gaming has not been single-threaded for a while, even on mobile.


Gaming has multiple threads, but is often single-thread bound due to Amdahl's law.

Let's say you can split up your threads into AI, Physics, Rendering, Networking, etc. But let's say Physics dominates: then your game is still single-thread bound. You only get faster if you make your physics faster.

You can split your game up into work queues, thread pools, and such. Except not everyone is up to date with the latest techniques yet. Furthermore, thread pools aren't always cache friendly and may hamper your performance. (If Core 1 starts on something and Core 4 finishes it, you have to transfer all the data out of Core 1's L1 and L2 caches before Core 4 can continue working on it.)

So it's not exactly an easy thing to program.
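
To put numbers on the Amdahl's law point, here's a tiny back-of-the-envelope calculation (the 60% figure is an assumption for a hypothetical physics-bound game, not a measurement):

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n),
     * p = parallel fraction of the frame, n = core count. */
    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        double p = 0.60;   /* assume 60% of frame time parallelizes */
        for (int n = 2; n <= 16; n *= 2)
            printf("%2d cores -> %.2fx speedup\n", n, amdahl(p, n));
        /* The ceiling is 1 / 0.4 = 2.5x, no matter how many cores you add. */
        return 0;
    }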


Newer AAA games/3D drivers lead to a pretty nice workload split over multiple cores, really. Battlegrounds was coded in a rush and inexpertly so, but it still lights up 4 cores pretty well. Probably in part due to drivers and Unreal Engine helping out even if the game programmers aren't explicitly working at it.


Oh yeah, SOME games are certainly multithreaded well. Doom, Battlegrounds, and more. DX12 apparently helped a lot with that.

But many games are still single-thread bound (despite being multithreaded). As such, Gamers should still prefer single-thread performance.


I would say not just SOME but MOST, if you look at recent releases. As of PS4/XB1 single threading isn't really viable for an AAA title. PC ports are generally seeing the benefits of that.


People have been saying that for 5 years (which is, in fact, how long XB1/PS4 have been on the market). Single-threaded performance is still the dominant factor in gaming performance.

Games make use of up to 6 threads, but single-threaded performance still determines overall framerate to a large extent. Games are not exempt from Amdahl's law: as long as you've got enough threads to offload work, it comes down to the single-threaded portion of the workload. The faster you can run the main game loop the faster the game runs.


Games that are single-thread bound are not maximising their potential on the common multi-core CPUs. Not many games strictly require sequential computation of the numerous things a game has to calculate.

Gamers who prefer cutting-edge performance should prefer multi-thread performance.


In almost all cases, modern processors have enough multi-threaded performance to play optimized (multi-threaded) games acceptably. It's the single-threaded ones that will be challenging to run, and it's those challenging tasks that you should optimize your system around.

Optimizing for MT-heavy games is the gaming equivalent of premature optimization - everybody can run Doom at a million FPS, but Fallout 4 or PUBG shit all over every system and you'll be begging for every frame you can get.

You can "prefer" whatever you want but developers don't care. This site of all places knows that time-to-market is what really matters. If you want to play those titles, you have to deal with it. Either optimize for the shitty titles, accept that you're going to be losing a fairly significant amount of frames (the 2700X is behind as much as 30% in some titles), or don't play those titles.


I'm not talking about getting the same work done in different ways, i.e. multi- vs single-threaded.

I mean that if the software is implemented in the correct manner, "more can be done" by utilizing multiple threads.


I'm playing WoW and Factorio and both games are single-core limited. So while it might be true for most games, it depends on user, I'm not really interested in games, I just have few I'm playing for years and I want them to run fast.


It's worth remembering that WoW dates to 2004.


Maybe AAA games can scale beyond 2 or even 4 cores, but historically (and probably still true on the low end) games were heavily dependent on 1-2 core perf.


Game engines are multi-threaded, but there is still a critical path. And past a certain number of cores (typically 4 in modern games), you won't get any improvement by adding more.

That's because once you are down to the critical path, the only way to go faster is by improving single-core performance.

There is a reason why Core i5s are so popular with gamers. They have just enough cores not to limit a game engine, excellent single-thread performance, and they are affordable.


Same for AMD. Not even a consideration without thunderbolt.


We should see it sooner or later. Thunderbolt is royalty-free.

But what application are you needing more than 10Gbps externally?


> But what application are you needing more than 10Gbps externally?

Laptop docking. Running a 4K monitor over a USB Type-C cable leaves you with only the USB 2.0 lanes available for data. (And 5K is impossible.)


All of the damn external disks and devices I bought for video editing.


Ryzen is still stuck at SSE width: AVX2 instructions are split in two and run at SSE speed.

[EDIT - removed AVX-512 claim that was wrong]

Overall perf is likely a lot better with the i9 for most workloads.


The i9-9900k does not support AVX-512, since it's still based on the Coffee Lake uarch. ARK confirms: https://ark.intel.com/products/186605

It's true that Coffee Lake has a performance edge over Ryzen when running AVX/AVX2 code though.


"SSE speed" is the most meaningful term I've ever seen.

Ryzen doesn't have 256-bit wide units, so 256-bit instructions take 2x as long.

The benefit of that is that Ryzen doesn't throttle the clock speed of the whole chip when executing AVX2 instructions :P

Intel's widening of units has brought a giant downclocking problem: https://blog.cloudflare.com/on-the-dangers-of-intels-frequen...


(least meaningful of course)


What do you think is more profitable to sell: one CPU with 8 cores, or 2 CPUs with 4 cores?

What do you think is more profitable to sell: one CPU with many PCIe lanes that you can attach 8 GPUs to (8 NVIDIA GPUs per 1 Intel CPU), or more CPUs with fewer PCIe lanes (8 NVIDIA GPUs per 2 Intel CPUs)?


I mean, he is saying the exact same thing you are saying - Intel kept making quad-core CPUs for ages because... they could. They had the tech to make CPUs with more cores for years - my workstation at work 5 years ago was using an 8-core/16-thread Xeon chip. The only reason it wasn't available branded as an i7 was that they wanted to keep it on the more expensive enterprise platform.


Yeah, I wouldn't buy a Ryzen, but I'm glad competition is forcing Intel to put the work in again. They sat on their laurels for a really long time.


Dunno... this looks like the bare minimum Intel thought they could get away with. Sure, they added a couple cores but you give up HT on many models, no Spectre/Meltdown architectural fixes (ok, might still be early to expect that), iGPU stagnant, cutting back on cache size, every variant still significantly more expensive than Ryzen counterpart. Be curious to see how the overall performance comes out. It doesn't look so much like they're trying to compete as they are trying to not compete with their server processors. I'll probably be going Ryzen when I'm next in the market.


Without taking sides, people tend to forget that Intel chips clock higher and consume less power. For instance, the 9900K clocks at 4.7 GHz on all 8 cores when not running AVX-heavy instructions. A single core goes up to 5 GHz (!!!). The Ryzen runs at 3.6 GHz with a single-core turbo of 4.0 GHz.

That said, the next-gen Ryzen is rumored to be a 4.5 GHz chip on turbo. So things are definitely picking up steam.

If you look at the cost difference, it is placed pretty much dead on, ~$130 higher than the 8-core Ryzen. People trash AMD/Intel like it's their home-town sports team. I find that everywhere, including on HN.

Let's wait for the benchmarks and specific load comparisons to see if the price difference makes sense. The 8700K is ridiculously fast and it beats the hell out of Ryzen's 1600X.


Not forgetting any of that, but none of it is really new Intel information in this generation. (Note that the current Ryzen top end is 4.3GHz though, which is relevant from the standpoint that AMD is increasing clocks faster than Intel... great, MHz wars part II.) Sure, if anything about the previous Intel generation put them over the top for you, nothing changes. However, there's not much in this new generation that really stands out as a significant improvement.


Yeah, this looks very underwhelming. I also think you're letting them off too easy with "too early to expect a Spectre/Meltdown fix".

This is a critical security bug that was discovered over a year ago. Intel just ignoring the problem and casually releasing yet another (underwhelming) upgrade that doesn't address it at all...

The fact anyone accepts that just goes to show how low our expectations have dipped when it comes to Intel.


Not letting them off, just trying to set realistic expectations. I'm not in chip design, but everything I've seen on the subject suggests that 2-3 years from design to fabrication is the general (minimum) ETA on architectural fixes. The testing needs to be extensive or they risk releasing a worse problem to the world.


They were aware of these bugs for (at least) 16 months. They could delay releasing new chips until they addressed the problem, even if it took them another year. They did not. Instead, they released a chip with a widely known critical vulnerability. That's on them.


Meltdown is already fixed in software. So their hardware fixes will just improve performance in some workloads, and those workloads are typical for servers. I know that on my PC the Meltdown fixes did not affect performance in any noticeable way.


It's on you if you buy it.


Spectre was announced 5 months ago. Even if Intel had a fully laid out and tested new design, with new masks and any changes to the production pipeline the instant it was dropped, just fabbing the chips I've been told has over 6 months lead time.


Spectre was indeed "announced" 5 months ago, but it was disclosed to Intel on June 1st 2017. That's over 16 months ago.

See for example https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit... :

> Spectre proper was discovered independently by Jann Horn from Google's Project Zero and Paul Kocher in collaboration with Daniel Genkin, Mike Hamburg, Moritz Lipp and Yuval Yarom. Microsoft Vulnerability Research extended it to browsers' JavaScript JIT engines.[4][20] It was made public in conjunction with another vulnerability, Meltdown, on January 3, 2018, after the affected hardware vendors had already been made aware of the issue on June 1, 2017.

There were also stories they were aware of the problem even before, and basically ignored it. If you check out that history section, there were multiple public presentations about the feasibility of an attack for years before the practical exploit was discovered. The only way Intel didn't at least suspect it is if it was very sloppy, or didn't care at all.


And I bet they'll continue to ignore it until people forget about it.


Low? How many instances are there of chip bugs being fixed in under a year? It's common across vendors for tape-out to be around a year before chips enter the market. This is just the reality of microchip production: the design is set in stone a long time before the parts hit shelves.


More damning is the fact that Intel does have a fix in Whiskey Lake. So why aren't these 8-core desktop CPUs using Whiskey Lake? https://www.anandtech.com/show/13301/spectre-and-meltdown-in...


The i7 lost HT but gained 2 real cores, which will be faster in practice.

The i7 has the same amount of cache as its predecessor and the i9 has more.

9-series has hardware fixes for Meltdown variant 3 as well as the L1 terminal fault fix.


If too many people don't buy Ryzen/Threadripper even though the price/performance is very competitive [1] then not much will change, effectively.

I will only buy Intel if I'm forced to by other people (think: employer, laptop). By my own choice, I will buy Ryzen/Threadripper whenever I can. And I will do so gladly, knowing I give my money to the David who's fighting Goliath and giving me a much better bang for the buck at the same time. Win/win.

[1] https://www.phoronix.com/scan.php?page=article&item=amd-athl...


Funny, I am the opposite. I wouldn't buy an Intel at this point.


Why wouldn't you buy a Ryzen?


Top-end video gaming; those crazy 99th-percentile, 120Hz 4K HDR guys.

Workloads limited by a single thread.

Overclockers.

That's about all I see.


Hackintosh


it seems that there is a community working on that https://amd-osx.com/


AMD Hackintoshes are wonky as hell, since you need a custom kernel. It's far more trouble than it's worth.


  Core Generation | Microarchitecture   | Process Node | Release Year
 -----------------|---------------------|--------------|--------------
  2nd             | Sandy Bridge        | 32nm         | 2011
  3rd             | Ivy Bridge          | 22nm         | 2012
  4th             | Haswell             | 22nm         | 2013
  5th             | Broadwell           | 14nm         | 2014
  6th             | Skylake             | 14nm         | 2015
  7th             | Kaby Lake           | 14nm+        | 2016
  8th             | Kaby Lake-R         | 14nm+        | 2017
                  | Coffee Lake-S       | 14nm++       | 2017-2018
                  | Kaby Lake-G         | 14nm+        | 2018
                  | Coffee Lake-U/H     | 14nm++       | 2018
                  | Whiskey Lake-U      | 14nm++       | 2018
                  | Amber Lake-Y        | 14nm+        | 2018
                  | Cannon Lake-U       | 10nm         | 2017*
  9th             | Coffee Lake Refresh | 14nm%        | 2018

* Single CPU For Revenue

% Intel '14nm Class'


Why did they introduce so many confusingly named microarchitectures for 8th gen...


Because 10nm is horribly delayed. Skylake was going to be replaced by Cannon Lake, but instead they introduced a bunch of Skylake flavours (and one horrible chip based on Cannon Lake which was just a marketing stunt).


Are you referring to the 8086k?


Na, the i3 8121U [1] is actually a 10nm part but it is worse than Intel's 14nm parts.

[1] - https://en.wikichip.org/wiki/intel/core_i3/i3-8121u


Marketing. New names make it appear as though progress is made.


They've always been there. He's skipping all the mobile/embedded/high-GPU variants for the predecessor to make things look worse for 8th gen.

There have always been U, H, Y, R, and sometimes C variants for most of those prior generations. Each die gets a codename, just like now.


because they are failing


I assume they’re waiting for a Friday the 13th to announce Crystal Lake?


One important niche where Intel is ripe to lose their lead is deep learning workstations. The issue at hand: they don't put high enough PCIe lane counts on all but the highest end desktop processors. It's unclear if these processors will address that (I can't find lane counts on a quick google search).

Let's say you want to run 4 GPUs off your CPU for a powerful workstation or small server (e.g. for a research group). There's debate over whether you need to run cards in x8 or x16 mode for most deep learning applications, so let's say 8x as a conservative choice. That means you need 32 lanes (or 16 for just two cards). But your drive and other peripherals might take some lanes too. So 40 looks like a safer number. Easy enough to find that on a motherboard...
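
The lane budget from that paragraph, written out (the peripheral numbers are assumptions, not the spec of any particular board):

    #include <stdio.h>

    int main(void) {
        int gpus          = 4;
        int lanes_per_gpu = 8;   /* x8 as the conservative choice above */
        int nvme_lanes    = 4;   /* one NVMe drive                      */
        int misc_lanes    = 4;   /* 10GbE NIC, capture card, etc.       */

        int needed = gpus * lanes_per_gpu + nvme_lanes + misc_lanes;
        printf("lanes needed: %d\n", needed);   /* 40 */
        printf("mainstream Intel (16 lanes): %s\n", needed <= 16 ? "fits" : "short");
        printf("Threadripper 1900X (64 lanes): %s\n", needed <= 64 ? "fits" : "short");
        return 0;
    }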

Most of the mid-high range Intel desktop CPUs only have 16 lanes total. Base model Threadripper (1900x) which is price-competitive with those has 64. You can go to Xeon but that (AFAIK) can be problematic in several ways for midrange or mixed use workstations (no integrated graphics, less mobo selection for the needed features).

I think this is pretty important. If enough researchers go to Ryzen then math libraries will get written there and the lead that libs like MKL provide could be nullified[0]. This will filter into the server market, where CPU is used even in deep learning production deployments (e.g. for inference servers). And having lower end processors is important because there are lots of independent researchers doing important work in the field who don't have huge budgets, as well as academics who can't get budget and might be spending out of pocket on their workstation.

[0] I'll admit I don't know enough about the math hardware in Ryzen vs Intel to say if this is possible.


Threadripper CPUs might be price competitive with LGA1151 CPUs, but X399 motherboards start at more than the lower-end CPUs; combined you really should be comparing to Intel's HEDT X299 platform, which starts at about the same overall price for motherboard+CPU. And Intel's HEDT chips all have 44 lanes this year thanks to Threadripper.

More importantly for deep learning (this price difference is lost in the cost of GPUs), AFAIK all X399 motherboards for Threadripper are set up as 16x/8x/16x/8x, whereas there are X299 motherboards with PCIe switches that give you 16x/16x/16x/16x with neighboring GPUs sharing bandwidth. For deep learning, each individual GPU having 16 lanes of bandwidth is more important than having to share with another GPU.


I think you're right about this for many people but not all.

Intel stepping up the lane counts in HEDT is definitely a good reactionary move but won't affect the budget-constrained scenarios I discussed until a few years from now because you can just buy a couple generations back. I might be wrong but I believe many X299 chips before gen 9 had 16-24 lanes?

I also don't think your point about the price difference being insignificant is universally applicable, because buying extra GPUs over time is quite common, and having an extensible box with 1-2 GPUs at the start is potentially a good move. As for the PCIe switches vs the X399 layout, IMO this depends on the assertion that 16x is X% better than 8x. That depends on use case, and analyses of the penalty you get at 8x vary, but many people would take a 10-15% penalty down the road @ 4 cards (really 5-7.5% because it only affects 2/4 cards) to save money now.

I built a box with Intel because I had the capital, and most people at an industry job probably should, but if you read forum and mailing list threads many people are faced with this economic decision and are going TR - I see quite a few academics doing this. Math libraries are a good thesis topic :)


What percentage of Intel users care about running deep learning systems? And of those users, how many care so little that they wouldn't get a GPU for it? Both Intel and AMD processors can never come close to GPUs for deep learning operations like matrix-vector multiplication.


Well, in the article it does say 24 PCIe lanes. It's a pity that it's not more (I would also like to see how much memory these chipsets/CPUs can support).


It's funny how about 4 years ago, when I was building my last gaming rig, people were saying don't bother with AMD unless you're getting a GPU.

Oh how the tables have turned.

Speaking of which, time for a new gaming rig...


A 4 year old gaming CPU should still be plenty today. It's great that we're getting more cores, but they aren't much faster than the old ones, and many games haven't figured out how to use more than about 4.

GPUs? I have a few hundred old unplayed games on Steam that I'm working through. See you next gen!


I would hope games know how to use more than 4 cores seeing as the current crop of consoles all have 8-core APUs, even if one or two of those cores are reserved for the OS/network stack.


XB1 and PS4 have been out with their 8-core APUs for 5 years now, hasn't changed the importance of single-threaded performance. Amdahl's law still applies.


Just because it uses more cores it doesn't mean the game is interesting to play.


I'm still running an i7-4790K I got around when it came out (5 years ago)... getting kind of anxious to build a new PC, not because I need anything faster. Upgraded to NVME drive and GTX 1080 a couple years ago.

Will probably do a new system in the spring with my income tax return. Leaning towards R7, maybe Threadripper, currently. They're finally getting faster enough to justify the cost.

aside: switched to hackintosh for my OS last year, been very happy. Hoping the osx-amd support isn't too difficult when I upgrade.


I am running an i5-2500k from 2011 as my desktop. Just dropped an RX 580 in as my graphics card, and it's running great for my light gaming purposes (Overwatch, 1080p/60Hz screen). I think I could get away with using it for 1440p or 1080p/120Hz pretty comfortably.


> but they aren't much faster than the old ones

Each year the new CPU is about 20% faster than the old one. After 4 years, it adds up (1.2^4 ≈ 2). A top Intel CPU will now deliver twice the single-thread performance of a 5-year-old one at the same price.


That's demonstrably not true. A 4-year-old 4790K stands literally right next to 8th-gen CPUs in terms of both single- and multi-core performance. A rather large leap happened when the platform moved to DDR4, but CPU "performance" has hugely stagnated. I don't know where you're getting the 20% from, but it's definitely not been the case in the last 5 years.


As a 4790K owner, I couldn't agree more. I just got around to doing a mild overclock and honestly the chip isn't holding me back even gaming 4k@60fps.


Same here. I'm going to wait for Zen2 before moving on. 8c/16t at 4.5+ GHz will be a compelling upgrade.


4k@60 is GPU bottlenecked, not CPU bottlenecked. It's when pushing for 144fps that you'll start to be CPU bottlenecked in e.g. Battlefield multiplayer or Assassin's Creed. (Also, having a 4-core/8-thread CPU helps quite a bit vs 4-core/4-thread now that games have started pushing CPU limits again, especially for the 1% and 0.1% lows that cause stutter.)


I've literally just upgraded from a 4670K (4c/4t) to the 4790K (4c/8t) because Forza Horizon 4 was stuttering every now and then; the CPU was basically at 100% load all the time. Switching to a CPU within the same architecture but with more threads has fixed all issues completely.


> the new CPU it's about 20% faster then the old one

Source? I suspect you're being shown a few choice benchmarks which demonstrate some improvement, perhaps where something became newly vectorizable...but realistic and consistent benchmarking doesn't show nearly that level of improvement.

One area in which you may benefit is in peripherals - an M.2 SSD and DDR4 RAM will help, and those aren't available on 5 year old motherboards. Also, power savings may be found with newer systems if you're in a laptop form factor where that matters.

But let's not pretend that single-threaded performance has doubled in 5 years. Here's one article that goes back quite a ways....and 2012 SPECint comes in at about 60,000 while 2017 comes in at 83,600.

https://www.karlrupp.net/2018/02/42-years-of-microprocessor-...


I doubt that number, too. Our CPU-bound workload runs "only" 20% faster on an i7-6xxx than it did on an i7-920 (calculated for equal clocks; due to the higher clock it was actually about 60% faster, which caused me to do the math because I feared a regression). That's far away from 20% per generation.

But in one thing you err: After a small custom BIOS mod, my home PC now boots from nvme - with a Z68 board (Sandy Bridge era), the module sitting in a PCIe adapter card. Was just a matter of adding some UEFI module.


Yeah, my mobo had an NVMe M.2 slot, but it was PCIe x2, so the x4 card (most seem to be) didn't work. That's why I went with an adapter.

The main thing lately is that when I'm running a lot of containers/VMs for workloads it tends to bog down a little. Outside of that and 4K 3D, I don't notice it ever. But the new stuff is only just starting to get better enough that I may upgrade next year sometime.


M.2 SSDs are available on Haswell (Xeons at least) via a PCIe board. That's more than 4 years old.


I'm using a PCIe x4 card adapter for my NVME drive, runs great.


> Speaking of which, time for a new gaming rig...

I had the same thought coming into this thread - what a great time to be in the market for a new machine. I welcome this competition; I want AMD to get better in the CPU game, but will gladly scavenge the spoils of this.


How do you want them to get better? Single-core frequency at the top end is the only advantage I believe Intel has at this point. AMD is cheaper, has more I/O, and more cores at roughly equivalent speeds. If you are building a $10k gaming rig, I understand, but in general I don't see why you wouldn't go with AMD unless you need a better TDP. To that point, I expect Intel to continue to rule the laptop market.

Edit: clarified that I don't see Intel not ruling laptop market.


They could look to beat the i5-8400 in gaming perf. Don't really care though since gaming is an afterthought for the ryzen/threadripper plan to own the server market.


Gaming is the only space where Intel has an advantage over Ryzen. Ryzen is better or as good in the rest of the workloads already.


What's holding AMD back on TDP? I'm in the market for a new PC fairly soon - probably SFF desktop rather than laptop, but I'd still prefer to keep fan noise, heat and electricity usage down.


TDP is quite high on Ryzen chips, and they auto-overclock. The 2nd gen will boost as high as it can at safe voltages, provided thermals and power are in check.


I understand that it's high, I'm asking why. Are they not interested in the laptop market? Does Intel have some secret sauce they can't replicate?


Bigger transistors and more cores. https://youtu.be/ucMQermB9wQ describes this in more detail.


Likely. AMD uses GlobalFoundries to fab their CPUs. GF's fabs aren't as advanced as Intel's, and ramping up clocks (thus more TDP) is the only way AMD can stay competitive on single-thread performance.


If you had a top of the line Intel CPU from 4 years ago, today it'd still be competitive with AMD's top CPUs, at least for gaming.

If you had a top of the line AMD CPU from 4 years ago, you'd have a nice toaster.

AMD fans have gotten louder than ever (and quieter, heh), but the fact remains, AMD can barely compete at the top end as far as gaming goes.

For a short while, AMD had finally met up with Intel at the top end of gaming CPUs (i7 8700k vs Ryzen 7 2700x).

Now the i9 9900K clobbers that comparison.

And yes, some people will immediately yell no fair because the 9900k will cost up to 200$ more, but if you're building a PC to last years, who cares?

My gaming PC has a 4790k that's been running at 4.9 Ghz for over 4 years now. I upgraded the GPU to a newer card, Vega 64 (would have been a 1080ti but all the 38" ultrawides only support Freesync), and that 4 year old CPU isn't a bottleneck yet.

Saving 200$ back then would have been the difference between needing a new motherboard, CPU and RAM today, easily a justified investment.


I know where you're coming from; I did the same just two gens earlier (i7-2600, R9 290, WQHD) and am quite happy with that. But that was back then... Today the situation is different, since the two can actually compete.


In what way does the 2700x compete with the 9900k?

My whole comment is about how, if you're trying to build a PC that lasts, you can easily justify the only drawback, the $200 price increase.


I game on my FX-8350 with latest titles with no problems.


I mean, technically?

I remember a recent review asking whether you could still use an FX chip for modern games, and the answer was essentially: technically yes, if you only play at 1080p, have a very beefy GPU (which will then be gimped by the FX) and are careful with settings.

The 7700k was getting almost double the FPS in the same rig...

Meanwhile, benchmark videos show the 4790k within at most 10 FPS of the 7700k in almost every single game tested, and the gap is never the difference between playable and struggling to exceed 60 FPS, even at higher resolutions.


Do most gamers need the latest hardware or care that much about framerates, or do they care more about actually playing games? I'd suggest the latter.


Why the false dichotomy? I’m sure it sounds pretty to you but it’s pretty silly...

Do most gamers who shelled out dollars for the top of the line gaming CPUs of their day care about framerates? Yes.

Does that mean they don't care about "actually playing games"? Apparently you think so...


I work a lot with lower-to-mid-end office desktops, i3 sort of level. Last year I bought my daughter a Ryzen 3 mobo and processor for £140, and I was blown away by how much better it was than the work machines. I didn't do any benchmarking, but in terms of feel, it felt so much faster. Btw, £140 is a bargain-basement price for a CPU and mobo in the UK, yet this was a regular price for a relatively new setup.

I am not really being offered Ryzens at work though. I get sent lots of Dell and HP offers and they are all Intel atm. I wonder if this is about to change?


Did you add an SSD?


Nope, like for like, 5400 RPM HDD.


All I really want to know is, are they still backdoored? Can we get them without Intel ME/AMT?


Of course they are. The advantage is that you get a full MINIX install on newer MEs!


Hardware mitigations for Meltdown and Spectre are the real heroes here, and the main reason I've been holding off on building a new machine. Once I can find the ninth gen stuff without being bled dry on cost, I'll be building with these.


From the article:

"What makes this a little different are the eight-core products. In order to make these, Intel had to create new die masks for the manufacturing line, as their previous masks only went up to six cores (and before that, four cores). This would, theoretically, give Intel a chance to implement some of the hardware mitigations for Spectre/Meltdown. As of the time of writing, we have not been given any indication that this is the case, perhaps due to the core design being essentially a variant of Skylake in a new processor. We will have to wait until a new microarchitecture comes to the forefront to see any changes."


> The new desktop processors include protections for the security vulnerabilities commonly referred to as "Spectre", "Meltdown" and "L1TF". These protections include a combination of the hardware design changes we announced earlier this year as well as software and microcode updates.

> * Speculative side channel variant Spectre V2 (Branch Target Injection) = Microcode + Software

> * Speculative side channel variant Meltdown V3 (Rogue Data Cache Load) = Hardware

> * Speculative side channel variant Meltdown V3a (Rogue System Register Read) = Microcode

> * Speculative side channel variant V4 (Speculative Store Bypass) = Microcode + Software

> * Speculative side channel variant L1 Terminal Fault = Hardware

[1] https://www.bleepingcomputer.com/news/security/spectre-and-m...


I must've read a different article than the one linked here, which said these were the first generation with those mitigations. :/


That would be great if true, do you have a link?



That's interesting. Would love to hear something definitive, as I've been holding off on all new computer purchases until something changes, or until you can get a powerful AMD chip in a high quality laptop like the Dell XPS line.


It's not really certain if they fixed any security related issues with this 9th gen.


Likely not. I have a strong feeling they will have to do a redesign from the ground up, which they haven't done here, to fix their security issues.


In the article they rightfully suspect there are no real hardware mitigations. And I don't see how it would even be possible to speed up the process so much. We have to wait for at least a couple of years.


maybe give AMD a shot!


We have Ryzen now..


Yup. Oh wow, Intel you have an 8 core 16 thread chip for $488. Which probably has the same security flaws as their current chips, so say goodbye to the hyperthreading, since that should be disabled. Or I can get the Ryzen 7 2700X for $320. Gee, which one should I choose?


And if you want to go up, $100-200 more buys you a Threadripper 1950X, which is 16-core/32-thread, has twice the memory channels and more than twice the PCIe lanes...

Edit for clarification: $100-200 over the 9900K price.


You lose a lot of single core performance, and TR4 motherboards are also a bit pricier than their AM4 counterparts.


Well, higher-end Z370 boards are not much cheaper, and I expect the new Z390 to close the gap even more.


ThreadRipper is great if your tasks are always multi-threaded, but if you're doing anything that sees more benefit from single-core performance (like a lot of games, though this is changing), you're better off with something else.


Don't forget just how many simultaneous processes run on a stock OS install these days.

More cores/threads are good at least to the point where you have as many hardware threads as software threads.

Even a pile of single-threaded apps running simultaneously benefits from multiple hardware threads.


Please note that hyperthreading may not have the same benefits in AMD and Intel.

From https://www.cs.cmu.edu/~pavlo/papers/p2209-kersten.pdf

"The only significant difference between the two platforms is that, although both platforms offer 2-way Simultaneous Multi-Threading (SMT), Intel’s hyper-threading implementation seems to be much better. On the Skylake, we see a performance boost from hyper- threading for all queries. On the AMD system, the benefit of SMT is either very small, and for some queries the use of hyper-threads results in a performance degradation."


Most synthetic benchmarks have shown AMD's SMT scales far better than Hyper-Threading. It might be the paper's domain that's more suited to HT. Moreover, they appear to be using AVX instructions, which are arguably faster, but don't provide an accurate picture of hyperthreading performance.


Yeah, take a look at this: https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd...

> On average, both Xeons pick up about 20% due to SMT (Hyperthreading). The EPYC 7601 improved by even more: it gets a 28% boost on average.

These results are the same as I've seen everywhere except the quote you pulled out. Zen's SMT is more effective than Intel's HT.


As we now know, this is because Intel cheats. That's why Intel's SMT is vulnerable to whatever that recent Spectre/Meltdown variant was called and AMD's is not.


Oh wow, Intel, you have the fastest gaming CPU ever for $488, up to 50% more performance over your last-generation CPU.

Or I can get a CPU that traded blows with your last-generation CPU for $320.

I'll take my $500 CPU, please and thank you.

If you're stretching out of your comfort zone to afford a 2700X, that's one thing, but the moment you're the type of consumer comparing the 9900k to a 2700X, it's not even a contest. Everyone accepts PC parts are a case of diminishing returns; the 9900k is delivering beastly, even if diminishing, returns for what it costs.


Reading about a new Intel ME (11) makes me slightly anxious and worried. Reading about Intel ME plus an integrated WiFi MAC and integrated SDXC makes me very anxious and worried, as I am right now. In the past, yanking an Intel NIC out of a machine and replacing it with a different one knocked out some Intel ME "features"; that's something that can hardly be done with an integrated WiFi MAC, I'm afraid.


It'll be interesting to see how far away from the official $488 price the 9900K will end up. Due to the limited supply, the prices for the existing high-end Intel processors are not competitive anymore right now. The 8700K costs around 490€ here, which is something like a 150€ premium compared to previous prices.

I would suspect that the 9900K will also end up being priced way above the official price.


I can't believe they're dropping hyperthreading on the i5/i7 models. Depending on what kind of inter-generational process improvements we see and how those translate to performance boosts, it seems possible that the 8700k could have higher multicore performance than the 9700k, especially given that maxing out 8+0 physical cores will stress the turbo thermals more than 6+6.

But for single core perf/gaming... these things are going to be unmatched. Stock turbo frequency of 4.9 ghz? You could OC to over 5ghz with ease.

I think I'm beginning to understand Intel's market positioning with these. They know that for most consumers, even including gamers, single core perf dominates everything. So if they can drop some multicore perf by killing hyperthreading, and trade that for a higher turbo, that's a huge win for their bread and butter. And they'll sell you the i9 or Xeon if you need the higher core counts, though maybe they recognize that the HEDT segment is already being dominated by AMD so its back to the labs to R&D a strategy for that.


The new i7 is without HT and has a lower base clock; shouldn't the i7-8700K be faster overall, then?


The 9700k has 8 cores and a higher turbo clock. It should hit the turbo on single-threaded work and has more cores for multithreaded work. HT is workload dependent, but I don't think there are a lot of workloads where it's better than adding 33% more cores.

(Cache per core reduction might be negative though)


No. HT has at best 30% improvement. And worst is it hurts ;)

The boost clocks are higher, and you're probably going to be running at those clocks more often than not when you actually care about performance.

There may be some workloads where having more cores actually matters, but HT isn't all that big of a performance increase.


Base clock is meaningless now. HT performance depends on the application (sometimes HT cores mean only a few percent of improvement, or even a performance decrease; it depends on what the "main" core does, as they share resources). The new 9700K is not something I would upgrade to (from an 8700K), but it's surely the way to go if you're upgrading from an older system and sticking with Intel.


No, it's not. We see linear improvement in C# compilation speed with clock speed.


Big deal. If you're chasing new CPUs for more GHz, you're a dog chasing its tail.

It would be nice to see some low-voltage CPUs that aren't reserved for OEMs only. Unfortunately, this is one area where AMD hasn't pressured Intel (AMD's offerings are all > 100W).


Ryzen CPUs are 65W and 95W; Threadrippers are over 100W.


That is the ugliest CPU box on the market


Have you seen AMD's wankery? This is all due to the awful taste of the "Gamer" crowd. Gaming performance, "VR Ready" on every product, a big retail experience, etc.: the whole RGB enchilada.

The marketing team does whatever the customers demand and value.


As a gamer, this is pretty rude.

I go to an event called PDXLAN twice a year. It was previously a 550-person event; for this next one and future ones we've expanded into a new venue and will have 800 gamers present, with the option of further expanding to 3,200 in the future. It's one of the largest bring-your-own-computer events in the USA, sponsored by nVidia, AMD, Intel, and several other companies that produce hardware for gamers.

We RGB our cases and equipment because it's fun to personalize your stuff. There are over 500 people there, and seeing so many unique setups is entertaining. We've had someone make their computer look like a boombox [0], a centipede [1], or Rey's Speeder from Star Wars [2], complete with custom painted mouse and keyboard.

You might think it's all silly and stupid, but we get a lot of entertainment out of it.

[0] https://www.facebook.com/photo.php?fbid=1330926013706204&set...

[1] https://www.facebook.com/photo.php?fbid=10209758136100415&se...

[2] http://www.pdxlan.net/forums/viewtopic.php?p=216438&sid=28fa...


If that's your thing, that's fine, but all too often it means the 'gamery' aesthetic is the only option if you want high-end hardware.

I'm a professional software engineer who likes to game but doesn't wish to outwardly identify with it. I like interior design, and I like nice, clean hardware that fits in with my flat.

This excludes me from having the top-end Acer Predator monitor, because it's made to look like a throbbing red quantum spaceship component, which it isn't; it's just a damn display, and it would look ridiculous.

If you want a laptop with decent GPU hardware, most of the options have all these edges and lights, dressed up to look like some kind of weapon. If I need to bring my laptop to a meeting, I can't have that and be taken seriously.

It just feels childish, and it's hard to believe that the number of people like me isn't enough for manufacturers to cater to.


We are trying to build a startup around this - contact us at fermi.design@gmail.com if you are interested.


Are these all the versions of the i9? Do they have a version with more L3 cache?


The rest of the i9s are in the HEDT platform, and yes, they have more L3: https://www.anandtech.com/show/13402/intel-basin-falls-refre...


Very nice. Pricey, but nice. Thank you!


I was half hoping that Intel would differentiate themselves from AMD by increasing frequency instead of playing the number-of-cores game. Or is a further push in frequency meaningless?


Stock clocks are up and the solder is reportedly allowing a slight increase in peak OC.

Frequency is mostly determined by process node and can't be increased without a die shrink. In fact, early 10nm nodes will actually be slower than what Intel can do on their highly-refined 14nm node. Jokes aside, 14+++ is actually amazingly fast.


I wonder if the Mac Mini will get an upgrade.


This was my first thought as well, since Apple is coming out with (supposedly) "pro-focused" new Minis very shortly.


Why do they call all their first table columns AnandTech? That's annoying.


I'm confused. I just got a Dell with an i9-7940X; it has 14 cores, for 28 logical CPUs. Why is this processor not mentioned?


The Intel product lines go iX-YZ00(x), where X = product line, Y = generation, Z = spec, and the trailing (x) is present for chips that can be overclocked. So yours is a top-product-line, 7th-gen unlocked chip.
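
If it helps, here's a rough sketch of that decoding in Python (the field names are mine, and plenty of real SKUs, e.g. mobile U/H parts, F/T suffixes, or Xeons, won't fit this simple pattern):

    import re

    # Crude parser for consumer names like "i9-7940X" or "i7-9700K":
    # brand tier, generation digit(s), 3-digit SKU, optional suffix (K/X = unlocked).
    PATTERN = re.compile(r"i(?P<tier>[3579])-(?P<gen>\d{1,2}?)(?P<sku>\d{3})(?P<suffix>[A-Z]*)$")

    def decode(model):
        m = PATTERN.match(model)
        if not m:
            return None
        return {
            "tier": "i" + m.group("tier"),
            "generation": int(m.group("gen")),
            "sku": m.group("sku"),
            "unlocked": any(s in m.group("suffix") for s in ("K", "X")),
        }

    print(decode("i9-7940X"))  # {'tier': 'i9', 'generation': 7, 'sku': '940', 'unlocked': True}
    print(decode("i7-9700K"))  # {'tier': 'i7', 'generation': 9, 'sku': '700', 'unlocked': True}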


Thank you for the explanation! I completely lost track of Intel processor branding after the "Core"/"Core 2"/"Core 2 Duo" era, and these model names have been a complete mystery.


The 7940X is a year-old, $1,300, seventh-generation CPU; this article is about 9th-generation CPUs.


I bought an Intel chip a few months ago because 7nm Ryzen isn't out yet and I use a high-refresh-rate display. At this point, it's blatantly obvious Intel is playing catch-up, with worse SMT, a worse node size, and a less flexible architecture.


Node sizes are not directly comparable and haven't been for years. It's all marketing BS. Please stop perpetuating this myth. Intel's 14nm process is still better than everyone else's and will be until at least the release of TSMC 7nm.


TSMC's 7nm is in volume manufacturing. It's used for the A12 SoC in the latest iPhones.


Yet another 14nm CPU... so don't expect any noticeable performance boost. Even the low-voltage iPhone XS CPU has the same single-core performance as top-notch Intel workstation processors... Apple is already using 7nm, but Intel needs at least 1-2 more years to get their 10nm manufacturing process working. This is what happens when a tech company cares about nothing but high margins.


> Even the low-voltage iPhone XS CPU has the same single-core performance as top-notch Intel workstation processors...

Yeah, right. If that were the case, I'd rig up a compile farm with iPhones.


I have proof for you. Please check the single-core performance here:

iPhone XS: a score of 4794 https://browser.geekbench.com/ios_devices/56

The newest Xeon E-2176M: a score of 4900 https://browser.geekbench.com/processors/2500

The difference is only about 2%...
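
For what it's worth, the arithmetic on the two scores quoted above:

    # Relative gap between the two Geekbench single-core scores above
    xeon, iphone = 4900, 4794
    print((xeon - iphone) / xeon * 100)  # ~2.2%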


I never ran the test myself; I just watched a couple of YouTube videos. It looks like the test duration is a couple of seconds. Being fast for a couple of seconds is not the same as being fast for a long time. And there are other, much faster single-core CPUs on Geekbench. Of course, today's cellphone CPUs are "little wonders", but they are (of course) not the same as a desktop CPU.


That's a low-power laptop CPU, not a workstation CPU.


Actually, this CPU is among the 15 fastest single-core CPUs ever, including all desktop processors: https://www.cpubenchmark.net/singleThread.html



That's a benchmark for a CPU that isn't even released yet and is planned for Q4 2018...


It is already Q4 2018, and it releases next week.



