Intel is reducing server chip pricing in attempt to stem the AMD tide (tomshardware.com)
400 points by rbanffy on Sept 14, 2021 | 330 comments



It's amazing how far Intel has fallen.

The 10nm debacle exposed how far they've fallen behind on fabs to the point that they're outsourcing to TSMC. Like, how humiliating must that be?

Intel completely missed the mobile revolution. They had a stake in that race but sold it (i.e. XScale).

Intel's product segmentation is bewildering. They've also kept features "enterprise" only to prop up high server chip prices, to the detriment of computing as a whole, most notably ECC support.

And on the server front, which I'm sure is what's keeping them in business now, they face an existential threat in the form of ARM.

Intel had clearly shifted to a strategy of extracting as much money as possible from their captive market. I'm not sure the price cuts here are necessarily about AMD so much as that their previously captive market now has more options in general.

How the mighty have fallen.


I think one more issue is support. If I want a chip from TI, Analog Devices, etc., I fill out a web form and get a sample. If I want to talk to an engineer, I place a phone call. If I want to order a dozen of a part, I go to Digikey. If I want a datasheet, it's online.

Intel won't give you the time of day unless you're HP or Dell. That's optimal for capitalizing on old markets, but it means it's never in new markets. It always starts at a disadvantage. It's not that Intel never has chips startups want to use; it's that it's impossible to engineer with most of them.

By the time a product has enough marketshare for Intel to care, they need to displace an existing supplier.

This means they could never really diversify outside of PCs.


This is a point where I would have to disagree. While their early access programs are generally restricted to larger customers, you can apply to join other schemes (called Docs and Docs+ as far as I remember) where they will assign you an account manager and a dedicated platform application engineer to help you with your design-in process.

I worked at a small start-up producing COM-HPC boards for companies who wanted to keep their servers in-house, as opposed to using cloud infrastructure. We weren't purchasing any more than maybe 500 CPUs of their upcoming platform. Despite that, they supplied 1:1 tech support, reference schematics/layouts, a reference validation platform on which to test our design, and thousands of documents including product design guides and white papers. This all came about by just contacting Intel's developer account support and filling in a few forms.

We also produced the same product with AMD hardware and the difference was night and day. Say what you will about Intel's production difficulties and roadmaps, their engineering support is years ahead of AMD's.


I wasn't comparing to AMD.

I've had few enough interactions with AMD that I can't pass judgement, but the few I've had were consistent with your assessment. AMD was a complete black hole. My interactions with Intel were lightyears ahead of AMD.

But Intel, in turn, was lightyears behind Analog, Linear, Maxim, TI, and most other vendors I've dealt with (this was before Analog gobbled Linear and Maxim up).


XMG (a gaming laptop brand) even publicly announced that AMD would not meet their request for validation samples of Ryzen 5800 and 5900 CPUs. CPUs that have been launched and are shipping to other customers already.

https://www.reddit.com/r/XMG_gg/comments/n4i3x2/update_threa...


If AMD is at 100% production capacity, why would they want to increase demand? Surely supplying validation samples could only hurt AMD in that situation (technical costs, disappointing the customer when the customer wants to shift to production).


> why would they want to increase demand?

1. It's not mostly about the demand, but about maintaining good working relations with systems manufacturers.

2. Increased demand is not a daily thing. Positive reviews and manufacturer interest would likely hold for a while, affecting the next production planning cycle or what-not.

3. Counteract effects quelling demand.

4. They could theoretically avoid letting prices drop if demand is strong.


A business can always use more demand, even if all they use it for is raising prices.


It really depends on who the targeted customers are. I remember inquiring on some TI lines and being told by the rep that unless you’re a customer anticipating 1M+ units, that chip really isn’t available.


Is that one thousand, or one million?


This was a while ago, but I recall it as one million.


Intel and AMD are both like that, and it makes me wonder how much space they have opened up for ARM. I would love a small x86 SoC if it came with the same level of support that an NXP or TI ARM chip has, but they don't.


> Intel and AMD are both like that, and it makes me wonder how much space they have opened up for ARM.

Arguably, this is what led to the creation of ARM. Acorn wanted to make a computer with a 286, but Intel ignored them, so they decided to build their own RISC based CPU, the "Acorn RISC Machine".


It would make a good movie or Netflix series.


I wonder how a small outfit like UDOO manages to design around an AMD embedded part then. The boards are out there and they work, but I have no idea how the negotiations happened.


My impression is that if you are an open source project (especially one with a few already existing designs), you can actually get some design support from large companies. This is especially true if you either meet the right person in marketing at those companies or know someone on the inside. The Raspberry Pi uses chips from a very user-hostile company (Broadcom) because it started as a side project by a few engineers at Broadcom.


> If I want to talk to an engineer, I place a phone call. If I want to order a dozen of a part, I go to Digikey. If I want a datasheet, it's online.

I notice you didn't list Broadcom... And bullshit can you call an engineer. Submit a support case through some online portal maybe. Zero chance they are giving you a direct line to their engineers.


Yah they are all like that. In the ARM space, outside of really low end devices and the RK3399, they won't even give you minimal register docs for standard devices. I had problems at the previous place trying to build a PCIe device where the minimum to even get the most minimal of documentation was 100k units. Sure, you could buy the parts from Digikey, but they were useless because the public docs were little more than footprints and high-level whitepaper-like feature matrices.


And everything you detailed there is solely a management issue.

They could devote a market segment to support that as a long term emerging market support aspect of their business, but it's clear that short term hit-strike-price-for-execs has been the dominant management mode for quite some time.


XScale was an awesome processor. Not only was it competitive, but it was 100% completely open and documented.


I used it in a design. What I remember (it was a while ago) was the lack of OS and driver support from third party software houses. It was a mistake to use it.


That's a shame. I only used it at a hobby level and found the Linux kernel to be surprisingly stable on it (before Linux really ran on a lot of phones). This was just before Palm started going all-in on Linux/WebOS.

With time I think it could have been a real contender.


> And on the server front, which I'm sure is what's keeping them in business now, they face an existential threat in the form of ARM.

HPC and inertia. Lots of inertia.


HPC still has a lot of Intel, because these systems run for ~5 years. But if I look at the Top500, there are systems with AMD Rome (and Milan and Naples). There are systems with IBM POWER9 (and POWER7), and the fastest system in the list is of course running Fujitsu A64FX. And there are exotic systems with Vector Engine, Marvell ThunderX2, Hygon Dhyana or Sunway.

And while Xeon Phi (and predecessors) used to be very popular, the accelerator market is now dominated by Nvidia (mostly Volta, but also Ampere and Pascal) and AMD Vega.

Actually only two of the top 10 systems (#7 in China and #10 in Texas) rely on Intel. And upcoming systems also feature a wild mix of architectures and vendors. So way less inertia than you might think.


Intel has soundly lost HPC to NVidia at this point.

Not only because of NVidia GPUs, but also because NVidia bought Mellanox (who makes those fancy InfiniBand NICs that those supercomputers use).

Intel's Xeon Phi didn't work out so hot. They're working on Intel Xe (aka: Aurora Supercomputer), but Aurora has been bungled so hard that Intel's losing a lot of reputation right now. Intel needs to deliver Aurora if they want to be taken seriously.


A lot of stuff in HPC still doesn't utilize GPUs (because the problem is not amenable to GPU architectures, or because of laziness / lack of funding and interest), so at least for commercial deployments with a diverse set of solvers I'd say CPUs remain important. Intel might be unable to outperform their competitors at this time, but they have more than enough money to be temporarily cheaper to (seemingly) make up for that.


>The 10nm debacle exposed how far they've fallen behind on fabs to the point that they're outsourcing to TSMC. Like, how humiliating must that be?

Every chip Intel buys from TSMC is a chip not made by its competitors. Doing this is extremely useful for Intel to the point I wonder why TSMC agreed in the first place.

After all, eventually Intel will improve their fabs, and then it's the non-Intel players that will order from TSMC. Why hamper TSMC's future customers? Intel must have offered a lot of money.


>I wonder why TSMC agreed in the first place.

Right now we don't have confirmation of what, when, and what volume of TSMC capacity Intel will use. Intel making GPUs on TSMC makes lots of sense. After all, their GPU team is vastly more familiar with the TSMC ecosystem.

Other than that, most of it is just rumours.


Didn't Apple already lock up the 'good stuff' from TSMC?


It was reported Apple bought most of the immediate TSMC next-gen node production. This does indirectly help Intel, since Apple is not a direct competitor. However, even TSMC's current node is very competitive, and the core logic still holds:

Any fab time Intel buys is a chip AMD can't make. Since Intel is so much larger, they can significantly hurt AMD merely by outbidding them, and eventually still make profit. This is so effective I'm not sure this should have been allowed...


Apple and Intel have both contracted the 'good stuff' from TSMC, to GP's point.

https://asia.nikkei.com/Business/Tech/Semiconductors/Apple-a...


I read an interview a few years ago in which the CEO at the time was boasting "Intel chips are so well known, even college students know them very well". What the article writer added was "Intel is losing experienced engineers, replacing them with cheap students". Not sure how true that is, but if it is true then it explains the past 5-8 years.

I saw it happen with a former large company (top dog in the manufacturing solutions area) where, in the middle of a project, all experienced people in the USA were replaced with dozens of new hires in India. Half of the people in the USA were fired; they were competent but considered too expensive. Talking to them, I found it had been the norm for many years: now only some sales and management people are in the USA, everyone else is in India. This causes companies to lose their competitive edge in engineering. Cost was not really a problem, and competing only on cost is meaningless; you lose to India and China on that ground alone.


Having been with Intel for almost two decades, I finally moved to AMD for the very first time recently and I'm glad I did. Intel is being called the "Toothpaste Company" for a reason. It has deliberately slowed down its innovation since it gained a performance advantage over AMD with Core, for over a decade now. Between iterations there were not many changes; it just kept adding fancy instruction sets such as AVX-512 that are useless to most if not all ordinary users. It's a shame that I bought it, actually. But over time I gradually realized that the only occasions I used that fancy stuff were benchmarking new systems. So those fancy things mean nothing to me other than showing off to friends.


For those who (like me) didn't get the "toothpaste company" reference - it seems to be a reference to Intel trying to squeeze every last bit of performance out of an old architecture (as one would squeeze every bit of toothpaste out of a tube), rather than innovating with new architectures and technologies.

It's hard to figure out exactly where the toothpaste reference originated, but at least one source makes it sound like it was a mis-translation of materials published by AMD. See https://www.hardwaretimes.com/amd-takes-a-jab-at-intel-we-do...


It has a bit of a double meaning.

Starting with the Ivy Bridge (3rd) generation, Intel switched to using thermal paste between the core and heat spreader instead of solder on socketed desktop processors. Presumably this was done as a cost savings measure.

This caused a marked increase in core temperatures and thermal throttling. Enthusiasts discovered that you could remove, or "delid", the heat spreader and replace the "toothpaste" with higher quality paste or liquid metal to drastically improve temperatures (15-20°C) and improve overclocking headroom.

Edit: This event is commonly reflected on to showcase Intel's greed at a time where they dominated the market. It wasn't until the i9-9900k that Intel went back to soldering heatspreaders for consumer CPUs, at which point they were forced to because they were being challenged by AMD.


Cost saving would've been to get rid of the IHS entirely. Their mobile chips work fine without them, I don't really understand why they're a thing for desktop processors.

AMD uses them too, so there must be a reason... is it because they're afraid of improper installation breaking them? That's on the user.

The weight of the desktop heatsinks? Small changes to latch design should suffice. Or you can have a metal spacer around the chip with the die exposed, kinda like GPUs do.

I've replaced many laptop chips and even ran some on desktops with no issues.


> is it because they're afraid of improper installation breaking them?

Yes. This was an issue back in the Athlon Thunderbird days.

"It's on the user" doesn't work as an argument when all of your large desktop/server OEMs notice a large uptick in failure rate post-assembly.


Looking back it seems so barbaric.

I remember how they briefly tried those black foam sticker pads in the corners of the substrate before acquiescing and using the IHS.

At some point they realized they could do better than a heatsink mounting system that involved trying to balance a heavy metal object on a small pedestal while trying to hook a tensioned spring to a clip you couldn't see by exerting tremendous downward force with a flathead screwdriver. I guess those motherboard return rates finally got to them.


I always wondered why that mounting mechanism even existed. Would've thought it would get scrapped on the drawing board but maybe no one in the design pipeline ever put a screwdriver through their motherboard.


It was probably all part of Intel's strategy to sell more chips. It's hard to repair a gouged motherboard and not worth the time to recover the chips soldered into it. After the introduction of the IHS and new cooling solutions the motherboard market became unprofitable, that's why Intel had to exit it. /s


Only as barbaric as the ~50dB, 4krpm tiny fans on enthusiast coolers in those days.


I don't know if there's any truth to this, but I heard that there were also issues that could arise more easily with electrically conductive thermal paste and that there was essentially fraud going on where lower end SKUs were being passed off as higher end units. That being said, that seems like something that would only affect the consumer used market.


> Cost saving would've been to get rid of the IHS entirely.

The IHS itself is a cost saving measure.

When Intel and AMD first introduced flip chips, they didn't have the IHS and the heatsink was balanced on top while you tensioned a spring. If you rocked the heatsink in any direction you would (not could) crush an edge or corner of the chip and likely kill the CPU.

The IHS protected the chip and reduced the failure/return rate.




> Cost saving would've been to get rid of the IHS entirely. Their mobile chips work fine without them, I don't really understand why they're a thing for desktop processors.

Because there's a huge difference between running 5 watts sustained through something the size of your fingernail, and 100 watts sustained. That heat has to go somewhere and there's 20x more of it on a desktop part, so it requires way more integrated cooling to not immediately thermally throttle.


The IHS is needed to prevent the die from being cracked or chipped during heatsink mounting, hence fewer RMAs.


I think the parent meant an anecdote I've heard many times, in slightly different ways. It goes like this: a major toothpaste company was having a meeting, trying to increase sales. Many solutions were tried: new flavors, advertising, none had much effect.

On a whim, a director asks the guy serving coffee:

  - Jack, what would you do to increase sales?
  - Have you tried increasing the hole on the toothpaste?
There might be some truth to this. Toothpaste tubes used to be metal in the 60s and you were supposed to punch a hole on the front of it with the back of the cover cap. That hole was a lot smaller than the ≈1cm-wide opening in the plastic tubes of today. It was also much easier to squeeze out the very last gram by folding the tube.


I had also heard a marketing-related point about toothpaste: toothpaste advertisements, and all marketing imagery of toothpaste on a toothbrush, almost always show absurdly larger amounts of toothpaste than is effective or appropriate for brushing teeth, trying to increase consumption by increasing waste.


Another explanation could be that advertisers are trying to increase the visibility of the product being sold.


It's amazing how far backwards we went from a sustainability perspective when you consider that likely no one has this issue front and center now the way they did in the early industrial days.

We used reusable metals and glasses much more. Now everything is plastic.


On the other side, just take a "Tragerl" of beer (German beer crate with 20x0.5l):

- It weighs much more than a crate of 20x0.5l aluminium cans or plastic bottles

- it is more voluminous: glass bottles have way thicker walls and they need plastic spacers to prevent the bottles from knocking into each other, whereas cans and PET bottles can be shrinkwrapped just fine

- the return logistics are simpler: glass bottles and the crates have to be returned to the brewery to be refilled, whereas PET bottles and aluminium cans enter the normal, regional recycling stream

The switch to plastics has saved lots of money and environmental pollution in logistics. What was missed though was regulating recycling capabilities of plastics - compound foils are impossible to separate, for example - and mandating that plastics not end up in garbage, e.g. by having a small deposit on each piece of plastic sold.


> The switch to plastics has saved lots of money and environmental pollution in logistics

Ah, but this is debatable!

https://www.wri.org/insights/planes-trains-and-big-automobil...

"Trains move 32% of goods in the United States, but generate only 6% of freight-related greenhouse gas emissions. Meanwhile trucks account for 40% of American freight transport and 60% of freight-related emissions."

From the beginning of the industrial period we relied on rail and boat for logistics, and buggies for last mile deliveries; until the advent of affordable, mass-produced vehicles and the interstate system, this didn't change much. Our reliance on plastics, combined with airplanes and trucks for logistics, results in much greater pollution in my view.

Granted, coal was the primary fuel source for steamboats and steam engines, but sail still was common until iron boats became widespread, and still more economical for cross-sea transportation.

All this to say, as an amateur historian, in my view this all comes to a precipice between the late 1950s and early 1960s, with the completion of the interstate highway system in the US and DuPont proliferating plastics in the 1960s.


That's an interesting point that without the interstate highway system (which had many benefits) we might be using rail a lot more than we are currently and therefore emitting less CO2.

Another way of looking at it is that we could consider the interstate highways only half-complete, and that the important part that was never built was an electrical delivery system for the cars and trucks that use it, so they can recharge their batteries without even stopping. It's what we would have been forced to build if fossil fuels weren't plentiful and cheap and we still wanted to use cars and trucks for our main transportation. We could have built that in the 70's in response to the oil crisis, and we could've had 50 years of electric vehicles by now, and it could have worked even using awful lead-acid batteries if cars didn't have to go more than twenty miles or so between electrified road sections.

Building the same thing now would be a lot easier. Battery technology is good enough that it would only be needed at regular intervals on the major freeways, and we can pair the electrified road sections with cheap solar power where it makes sense to do so.


> electrical delivery system for the cars and trucks that use it, so they can recharge their batteries without even stopping.

> and we can pair the electrified road sections with cheap solar power

I don't know. More cars on the road in general is just a bad idea IMO. Traffic, noise, accidents, parking lots, Fast and the Furious movies...

Alternatively we can use a system of transport that can carry a whole neighbourhood in one go, is electrified and can be built underground like a billionaire suggested we do for cars. It can be automated and sorta self driving too, can hit 180km/h without too much of a fuss. And we've been building them for almost 200 years.

Wouldn't that make more sense?


More trains would be good. In the U.S. that's a hard sell, though. People do road trips in their cars for vacation in part because it's so convenient to be able to bring a whole carload of food, luggage, and camping gear with you. And there's a lot of places trains don't go. How many national parks have rail service?

Replacing trucks for long-haul would be good, but you'd have to accept slower deliveries. (I wonder if Amazon ever ships things by train?) I expect it's less of an uphill battle to just figure out how to make the things people are already doing more energy efficient and emissions-free than it is to tell them to completely change what they're doing. Admittedly, that does come with the risk of getting stuck in a local optimum. I just think of all that diesel being burned to push wheeled boxes around the country and I'm appalled at the unnecessary waste. Those fossil fuels could just as well have stayed in the ground.


> People do road trips in their cars for vacation in part because it's so convenient to be able to bring a whole carload of food, luggage, and camping gear with you.

On a decent rail infrastructure you can run car-carriers like in the Euro Tunnel between the UK and Continental Europe (https://en.wikipedia.org/wiki/Eurotunnel_Shuttle). These things are big enough to accommodate cars and even buses, with people being able to walk around outside of their car.


>in the United States.

Fun fact, Europe moves most of its freight by road: https://www.eea.europa.eu/data-and-maps/figures/road-transpo...

Compare with the US: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQ13vD9... (A screenshot from this PDF: https://www.kth.se/polopoly_fs/1.87118.1550154619!/Menu/gene... )


Another big thing is cleaning - maybe someone put paint thinner, bleach or some acid in their used beer bottle before returning it?

It could even be an accident (e.g. someone turning in old beer bottles found somewhere), but you still have to account for that when cleaning all the beer bottles before refill.


> the return logistics are simpler

Umm, explain to me exactly how it is simpler to recycle a set of PET bottles than to transport a crate of glass bottles? It is infinitely more costly and complex, and involves multiple industries.

As for aluminum cans, it's perhaps less of an ordeal, but still you only recycle between 1/3 and 2/3 of the material:

https://www.container-recycling.org/index.php/calculating-al...

I believe you are only thinking about the logistics directly experienced by the end consumer... which is part of the problem with disposable consumption goods.


Aluminum is pretty great for recycling. And plastic bottles can work okay, but most types of plastic use are going to end up in the garbage.


> toothpaste tubes used to be metal in the 60s and you were supposed to punch a hole on the front of it with the back of the cover cap

I'm definitely too young to remember anything from the 1960s, but you can still buy tomato paste in tubes like that. Neat.


Anchovy paste too.


Many medicines have the same tube style.


Reminds me of the Alka-Seltzer campaign "plop plop fizz fizz." One tablet was enough, but there wasn't any harm in consuming two. So they just told people to take two tablets. https://www.snopes.com/fact-check/double-bubble/


What tangible benefits are you getting from choosing AMD now? Honest question as I'm curious if there's another benefit besides price (which Intel is fighting now).


Most stuff eventually turns into cost so I'll ignore "costly" effects like heat/power and performance per dollar and focus on max performance scale for workloads and other unique differences not achievable via just throwing more money at the alternative.

Per socket performance scaling is higher for equivalent tier sockets. At hyperscale that goes back into the price benefit (buy and maintain less physical data center) but for an individual server workload or individual user that also turns into a performance benefit, particularly for non NUMA aware workloads on the server side and just plain availability of such core counts for performance on the desktop or workstation side.

PCIe-wise you get about twice the lanes (128 total on AMD) of even a 40-core 8380 in the base 8-core model of Epyc or a Threadripper workstation CPU.

A place Intel still wins is total NUMA scaling. For a NUMA aware app like SAP HANA Intel can scale to 8 sockets while AMD currently tops out at 2 so you can reach about 2x as many total threads that way.


I had a sister comment which wasn't as thorough as yours so I deleted it. It's worth adding though that for mobile applications, power consumption isn't just a cost factor, since better efficiency means you can have tighter packaging, get more battery life, not roast your lap as much, have quieter cooling, etc.


Good point regarding batteries


For non-NUMA aware workloads with high inter-core coordination (for example, a write heavy database workload) Intel will still perform much better because the cross-chiplet latency of EPYC chips is very high. Going through the IO die and to another chip is about as expensive as going to main memory.

Hyperscalers are running web servers which is a different story. But if you're running web servers you might be better off with Graviton in perf/$.


Though there is the effect that clusters of eight cores on EPYC have faster access to each other than on Intel.
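
As a rough illustration of why that topology matters, a minimal sketch of pinning two tightly-coupled threads onto cores assumed to share a CCX/L3 (assuming Linux with glibc pthreads; the core ids are placeholders you'd look up via `lscpu -e` or /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list):

  // Sketch only: pin two communicating threads to two cores assumed to share a CCX/L3.
  // Assumes Linux + glibc; core ids 0 and 1 are placeholders for your actual topology.
  #include <atomic>
  #include <pthread.h>
  #include <sched.h>
  #include <thread>

  static void pin_to_core(std::thread& t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
  }

  int main() {
    std::atomic<long> counter{0};
    // A deliberately contended workload: its cost depends heavily on whether the
    // two threads share an L3 or have to hop across the IO die.
    auto work = [&counter] { for (int i = 0; i < 1000000; ++i) counter.fetch_add(1); };
    std::thread a(work), b(work);
    pin_to_core(a, 0);  // same CCX (placeholder ids)
    pin_to_core(b, 1);
    a.join(); b.join();
    return 0;
  }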


Awesome, thanks for the extra details.


They are faster for many practical applications, at every price point.

Maybe it's just me, but all my performance-sensitive applications are heavily multithreaded. AMD CPUs simply have more cores. The benefit from Intel-only AVX-512 doesn't quite cut it. Besides, not all apps are actually optimized to leverage AVX-512, and C++ compilers aren't either.
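
To illustrate the last point: AVX-512 only helps when code explicitly opts in, usually via hand-written intrinsics plus a runtime dispatch check, which most ordinary applications never do. A minimal sketch of what that opt-in looks like (assuming a recent GCC or Clang on x86; the function names are made up for the example):

  // Sketch: runtime dispatch between a scalar path and a hand-written AVX-512 path.
  // Compilers rarely generate this automatically, which is why most apps see no benefit.
  #include <immintrin.h>

  __attribute__((target("avx512f")))
  static void add_avx512(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 16 <= n; i += 16) {
      __m512 va = _mm512_loadu_ps(a + i);
      __m512 vb = _mm512_loadu_ps(b + i);
      _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar tail
  }

  static void add_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
  }

  void add(const float* a, const float* b, float* out, int n) {
    if (__builtin_cpu_supports("avx512f")) add_avx512(a, b, out, n);
    else                                   add_scalar(a, b, out, n);
  }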


> kept adding fancy instruction sets such as AVX-512 that are useless to most if not all ordinary users

I'm not a microprocessor expert, but this seems like one of the reasons RISC has so much potential in the future. It seems like x86 is just weighed down with so much cruft.


Intel is NOT competing against AMD only. In the past couple of years, we’ve seen a number of big tech companies developing their own chips. Focusing on AMD would be quite myopic from a strategic pov. This market is only getting more competitive. Either you compete on performance or price.


After purchasing an M1, i'm starting to realize how viable ARM is as a main platform. Nearly everything I want to run on it has a natively built version, and runs great on it. I could easily move anything I've built to a server running ARM with little frustration. I think that may be a bigger part of the coming future.


Before the M1 my only exposure to ARM had been low-power SBCs and Android devices, and the experience was mediocre in the "just works" department. Poor hardware support, and a lack of proprietary software support. Performance was also lacking. Apple's tight integration and high-end CPUs have resulted in a vastly better experience, but I want to have more options than just macOS and MacBooks. I think we're trending in the right direction, but it's going to be a while (5 years IMO) before we see anything approaching competitive to the M-series chips from major market players. If Microsoft could fix their frankly horrid x86 compatibility on aarch64 devices, things would speed along nicely I think.


I have tried buying Microsoft ARM computers for the last two generations now: both the Surface Pro X with the Qualcomm SQ1 and SQ2, as well as a Yoga Book 5G.

Windows performance on these platforms is so trash, you feel like going back ten years on ultrabooks. Even their own apps are not optimized, and some, like Visual Studio, didn't even run.

Compare that to the built-in x86 emulation on the Apple M1: it performs close to native on a $1000 MacBook Air.

Microsoft definitely has different priorities, like how to change settings for a user without their permission or how to hide settings so users have less choice. The Windows experience has been downhill since Win7.


Downhill compared to Win7, yes. Downhill ever since, no. Windows 8 was worse than 10.


I agree. But it seems like Microsoft is back to old habits in Win11 with browser settings etc., having different standards for Edge vs. others. HN discussion: https://news.ycombinator.com/item?id=28225043


I vaguely remember seeing a video demonstrating an M1 device virtualizing Windows ARM faster than it ran on Surface ARM hardware. Kind of reminds me of how an Amiga of the era could be set up to virtualize(?) Mac OS faster than a contemporary hardware Mac could.


They want to be apple so bad, but they also want to be google so bad, not realizing that the overlapping set is empty.

You can't be a user-friendly, privacy-first luxury appliance and a data-driven ad sell-out at the same time.

One dream has to die for the fish to fry.


It's tempting to blame the user hostility on corporate shortsightedness or dysfunction, but I wonder if MS has a long-term plan here?


> If Microsoft could fix their frankly horrid x86 compatibility on aarch64 device

I don't think Microsoft is the real problem there, though.

NT was developed to be portable and was working on architectures other than x86 in the beginning.

So it was interesting when I heard things about "Windows on ARM" half a decade ago--and then the Surface RT. The RT was crap, but it did have real Windows NT working on non-Intel ARM, as was the OS on their Windows Series 10 phones or whatever.

So Microsoft is already there on an OS level. It's the big software vendors that have to be corralled to switch somehow (Autodesk, Adobe, etc.) Honestly .NET overall was probably at least in part Microsoft trying to get developers on something more CPU-agnostic to reduce dependence on x86.


I'm not so optimistic. There are some technical things Microsoft did poorly when going from x86 to x86-64, which in my opinion delayed the transition of a lot of software by a decade. And this is with processors that can run both instruction sets natively, where no actual software emulation was required.

To give some context (this started with Windows Server 2003 64-bit and is still how it works in Windows 11): Instead of implementing fat binaries like OS X did, they decided to run old x86 applications in a virtualized filesystem where they see different files in the same logical path. This results in double the DLL hell nightmare, with lots of confusing issues around which process sees which file where. For many usecases around plugins, this made a gradual transition impossible. (Case in point: The memory hungry Visual Studio is currently still 32-bit. Next release will hopefully finally make the switch.)

Also, it’s surprising how much stuff in Windows depends on loading unknown DLLs into your process, like showing the printer dialog. So you run into these problems all the time.

Have they learned their lesson? It doesn’t look like it. Last I checked, x86 on ARM uses the exact same system as x86 on x86-64. If they ever emulate x86-64 the same way, that’s triple DLL hell right there. And I don’t think they’ll get a decade to sort things out this time around.
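
For anyone who hasn't run into it, this is WOW64 file system redirection in action. A minimal sketch (build as a 32-bit executable on 64-bit Windows to see the effect; the path is just an example):

  // Sketch: a 32-bit process asking for System32 is silently handed SysWOW64,
  // unless it explicitly disables redirection for the current thread.
  #include <windows.h>
  #include <cstdio>

  static void try_open(const wchar_t* path, const char* label) {
    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, 0, nullptr);
    std::printf("%s: %s\n", label, h != INVALID_HANDLE_VALUE ? "opened" : "failed");
    if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
  }

  int main() {
    // Redirected: this actually opens the copy under C:\Windows\SysWOW64.
    try_open(L"C:\\Windows\\System32\\kernel32.dll", "with redirection");

    PVOID old = nullptr;
    if (Wow64DisableWow64FsRedirection(&old)) {
      // Now the same logical path resolves to the real 64-bit System32.
      try_open(L"C:\\Windows\\System32\\kernel32.dll", "redirection disabled");
      Wow64RevertWow64FsRedirection(old);
    }
    return 0;
  }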


Microsoft announced ARM64EC. It’s an ABI for ARM64 that is similar to x64. They say it allows mixing x64 and ARM64 DLLs in the same process.

https://blogs.windows.com/windowsdeveloper/2021/06/28/announ...


Cool - perhaps that opens the way for a x64+ARM big.LITTLE processor, with a few hot fast x64 AMD cores (big) and a lot of slow efficient ARM cores (little).


I think it's not related. If you need to run ARM code on a performance x64 core, an ARM-to-x64 emulator is needed.

I believe this is to make it easier for applications like DAWs (which often use native plugins, some of which aren't updated well) to port to ARM.


I very nearly want them to double down on this disastrous strategy so in 3-5 years we’ll all be saved from Windows by an MS-run Linux distro (with windows theming, naturally) that just runs Wine+some MS internal goodies for backwards compat. It’s really not that different from Apple’s approach with Rosetta 2 in M1.


It’s crazy that this now aligns with Microsoft’s goals and could conceivably happen.

Microsoft has the capacity to realize that the value of Windows is not the codebase, but the compatibility. They could let the Linux subsystem swallow Windows and wrap Windows itself inside it.

However, I believe we’ll continue to see their colocation system instead, where Windows and Linux are both wrapped inside a system managing both.


... Windows subsystem for Windows? (Although I guess maybe wow64 was that already)


Internet Explorer became Chromium under the hood (MS Edge)

Windows might be fully Linux under the hood one day!

WSL2 is one of the early bridges across the divide.


What you described is actually closer to Apple's strategy for moving from Mac OS 9 to Mac OS X, with a virtual machine for running classic apps on the new OS.


Microsoft-made Linux distribution finally making Linux on the desktop happen, did somebody wish for it on a monkey paw?


Parent's point was that Apple made the switch without having to get software vendors on the project, due to excellent emulation of x86 on their ARM.


Apple has more control over developers - these idiots pay them money for the "privilege" of developing on it. And Apple started by deprecating support for all 32-bit apps. That forced many developers to refactor or port their code. The x86 emulation support will end in the near future and will force the remaining developers onto the ARM platform.


Rosetta 1 was supported for 6 years. I don't think that's too bad.


> I don't think Microsoft is the real problem there, though.

> [...]

> So it was interesting when I heard things about "Windows on ARM" half a decade ago--and then the Surface RT. The RT was crap, but it did have real Windows NT working on non-Intel ARM, as was the OS on their Windows Series 10 phones or whatever.

In this specific case, Microsoft is the real problem: Microsoft deeply locked down the Surface RT; you needed a jailbreak to run unsigned applications on it.


> NT was developed to be portable and was working on architectures other than x86 in the beginning.

NT itself yes, but the userland? Not in the slightest. Apple provided Rosetta runtime translation at each arch transition, MS did not. As a result, no company even thought about switching PCs over to ARM which meant that there also was no incentive for the big players you mentioned to port their software over to RT.


M1 is great but not everyone wants a SoC. I like the ability to swap out parts in my PC build.


And the freedom to run full featured alternative OSes on the bare machine.


All the M1 machines have built in RAM, right ? And GPU as well.


Depending on the next iteration of Apple silicon, I am seriously considering a Mac Mini farm for compute-heavy tasks.


Would that really be a competitive option for your use case over something like graviton?

It is kind of a wild state of affairs that as good a chip as M1 isn't available as commodity hardware.


I’ve been eyeing Graviton for our server work loads. On paper it’s price competitive.


We’ve been moving more and more to it. It works, and surprisingly well. It’s not quite up there for absolute single thread performance in our experience, but price/perf is excellent.

edit: really, I’m just waiting for Graviton 2 Fargate support, and then I’ll be able to move a lot of workloads.


The M1 is the only ARM part with memory ordering like x86. This allows them to translate x86 into Arm without worrying about differences in when writes become visible to other threads.

None of the server parts have that. But by the time you do run your code on an Arm server, most of the bugs will be worked out.

The M1 is special.
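
A rough illustration of the ordering difference (C++ standing in for what translated machine code does; names like `data`/`flag` are just for the example): with plain stores and loads, x86's TSO guarantees the write to `data` is visible before the write to `flag`, while a weakly ordered ARM core may expose them in either order, so a naive x86-to-ARM translator has to insert barriers everywhere unless, as on the M1, the hardware offers a TSO mode.

  // Sketch of the classic message-passing reordering. Relaxed atomics compile to
  // plain loads/stores; x86 keeps the two stores in order (TSO), while a weakly
  // ordered ARM core may let the flag become visible before the data, so the
  // assert below can fire there. (The compiler may also reorder relaxed ops; the
  // point here is the hardware ordering an x86 translator has to preserve.)
  #include <atomic>
  #include <cassert>
  #include <thread>

  std::atomic<int> data{0};
  std::atomic<int> flag{0};

  void writer() {
    data.store(42, std::memory_order_relaxed);
    flag.store(1, std::memory_order_relaxed);   // may be seen first on weak ordering
  }

  void reader() {
    while (flag.load(std::memory_order_relaxed) == 0) { /* spin */ }
    assert(data.load(std::memory_order_relaxed) == 42); // can fail without TSO
  }

  int main() {
    std::thread t1(writer), t2(reader);
    t1.join(); t2.join();
    return 0;
  }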


Same. I’ve been loving my M1 Mac Mini for almost a year. Cool, silent, fast and compact.


I think we should look at one step further - RISC-V. Open source is the best way to ensure consumers don't get shafted by someone doing the Intel model again or Apple keeping M1 limited to their devices.


You seem to be making the classic mistake of thinking that a given RISC-V processor is open source. The standard is open, the processor's source "code" (design) doesn't have to be.

This does not mean RISC-V use wouldn't be a good thing, as it prevents a whole boatload of legal issues, but it just isn't what a lot of people seem to think it is.

ARM could end up being a better ISA in the very high-clock high-IPC domain, it remains to be seen.


I accept that I should have been more careful in my wording. I intended to comment that having an open source ISA that you could create processors for is a huge step in creating competition.

If someone wants to compete with Intel they would have a hard time even if they make an excellent processor, since they are unlikely to get an x86 license. With ARM you have to pay a licensing fee, and control of the ISA still rests with a private company.

With RISC-V you can make your own processor and have a good shot in the market. You will also have a chance to propose/comment on future ISA changes.


It is sad to see this sort of misinformation continue to spread like the plague. Especially on HN.


Open source alone is not enough. Look at Chromium, where a single company controls what gets decided.


Well I think it's clear that Apple will keep M1 to themselves. But I would imagine other vendors will come out with Arm offerings to compete.

I agree a truly open-source option would be desirable.


> we’ve seen a number of big tech companies developing their own chips

Fortunately, only a handful of companies have the resources to do that.

Unfortunately for Intel, those handful of companies are the biggest and only customers for large scale server farms.


Intel IS competing against AMD mainly in the server space now. Of course at some point ARM and RISC-V servers will become mainstream, but it will take years. Intel is taking action now and it's aimed directly at AMD.


ARM is already there in the cloud, cf AWS Graviton. With Apple's M1 in the laptop/desktop (mac mini) space, ARM and its superior power/performance ratio is a significant contender for mainstream compute now.


I know a few VPS distributors, and I've heard pretty mixed things about ARM's viability in the server space. Not only is it pretty expensive relative to x86, it's also pretty slow: you won't be getting SIMD instructions like AVX, which are huge in the server space. The only thing ARM has going for it is low IPC, but I really fail to see many applications where you could benefit from that, much less one where it would be worth the price premium over x86.

Maybe in 5 or 10 years, ARM will be viable. But by then, we'll all be rocking RISC-V CPUs because someone realized that accelerating for specialized workloads isn't a crock of shit when 90% of your workload is video decoding.


Maybe read a bit about Graviton? All Intel/AMD instances in AWS have Graviton processors to handle network/disk IO unless it's a very old instance type, and large amounts of AWS's own services run on it as well.


The rumor is that Graviton instances are being sold at below cost for Amazon to put negotiating pressure on Intel/AMD.

And that M1 only looks as good as it does because of Apple's de facto monopoly on TSMC 5nm. That AMD cores are more than competitive at the same node.


Too soon do people forget that there were budget Zen 2 SiPs whooping the M1's ass back in 2019. It was at the expense of a slightly higher power draw, but that's a price I'm entirely willing to pay for a full-fledged x86 chip. I reckon that I'll accept no substitutes until RISC-V hits the mainstream in a more major way.


Now if only I could buy a motherboard and a fast ARM CPU for my workstation and install NixOS on it, I'd be willing to try it out.


Because in the past no one could justify competing with Intel. But with the Xeon parts carrying huge profit margins, and companies like Apple tending to buy only the high-margin parts for their devices, the business people realized that it was cheaper to produce their own. Which is outrageous, if you think about it, given the amount of engineering investment required to build a competitive product. The idea that a slice of the customer base has decided that the market is so broken that the financials work better by avoiding Intel says they are way past the too-greedy stage.


Yeah, Custom ARM / RISC-V chips or even ASIC/FPGAs could start threatening x86/AMD64 for datacenters and "clouds" sooner than we think.


ARM maybe. But I'm not convinced that the ARM-alliance (Fujitsu, Apple, Ampere, Neoverse) is quite as unified as you might think. Apple has no apparent goals for cloud/servers, Fujitsu seems entirely focused on the Japanese market, and Ampere Altra isn't reaching critical mass (Amazon prefers a Neoverse rather than joining forces with Ampere / using Altra).

As long as the ARM-community is fragmented, their research/investments won't really be as aligned as Xeon and/or EPYC servers.

HiFive / RISC-V aren't anywhere close to the server-tier.


> As long as the ARM-community is fragmented

Why does this matter? If popular OS distributions consistently target ARM-based CPUs, with a sufficient number of packages (esp. development-support-related) working on them, then who cares about fragmentation? An organization could buy systems with ARM chips and software will basically "just work".

Same argument for consumer PCs, although there you have the MS Windows issue I guess.


Motherboard costs, motherboard designs, motherboard support.

The more fragmented your community, the harder it is for software to work consistently across all of them. Intel vs AMD has plenty of obscure issues (see "rr" project, and all the issues getting that debugging tool to work on AMD even though it has the same instruction set).

Sound, WiFi, Ethernet, southbridges, northbridges, PCIe roots. You know, standard compatibility issues that having a ton of SKUs just naturally makes more difficult. Having a "line" of southbridges / consistent motherboards does wonders for compatibility (fix the BIOS/UEFI bug in one motherboard, fix it for all) in Intel/AMD world.

But just as AMD has AMD-specific motherboard bugs, and Intel has Intel-specific motherboard bugs... I'd expect Graviton to have its share of bugs that are inconsistent with Apple M1 or Ampere Altra.


What about graviton? Isn't it already competitive with x86 on price/performance?


Amazon doesn't offer graviton in the open market. You can only get those chips if you buy AWS.

Graviton is a standard N1 neoverse core, which is slightly slower than a Skylake Xeon / Zen2 EPYC. There's hope that N2 will be faster, but even if it is, we don't really have an apples-to-apples comparison available (since Amazon doesn't sell that chip).

The most likely source of Neoverse cores is the Ampere Altra, which is expected to have N2 cores shipping eventually. As usual though: since Ampere has lower shipping volume than other companies, the motherboards are very expensive.

x86 (both Intel and AMD) have extremely high volumes: so from a TCO perspective, its hard to beat them, especially when you consider motherboard prices into the mix.


These companies are already not paying anything close to the list price, though.


It's not about the price but the ability to create chips that fit your needs. For example: YouTube is now building its own video-transcoding chips.

The biggest cost of making chips is the foundry, and the foundry ecosystem has reduced the cost to where everyone can go fabless and just outsource to a foundry like TSMC or Samsung.

https://arstechnica.com/gadgets/2021/04/youtube-is-now-build...


And still they are developing their own ARMs.


Lol, this stinks of PR BS.

It's Intel's vertical integration that has hamstrung its chip design for about half a decade. The 10nm transition was an unmitigated disaster, and because of it Intel has haemorrhaged technical dominance and has only really maintained market dominance due to entrenched and slow-moving decision cycles within the data centre space and, to a lesser extent, the consumer market.

Intel will likely stabilise over time but they won't enjoy the market dominance they had for most of the last decade.


Not just technical dominance — we’ve all heard of lead Intel engineers hired away to Apple/Google/Amazon/etc during this period of stagnation. How many senior engineers, staff engineers, and low-level talent in general has Intel bled in the last 5-7 years? How many of them have moved to Qualcomm, TSMC, Apple, Google, etc? At this point, I wonder if Intel is even capable of fixing their technical problems since most of their talent abandoned the sinking ship long ago.


Talent comes and goes. If other companies can hire away talent, Intel can hire them back too. If you pay enough, people will come. Intel currently seem quite willing to pay.


It doesn't sound like Intel has been competitive on salaries for quite some time. Last time this came up, it sounded like Intel thought they could get away with Oregon salaries, meanwhile they're now competing with Cupertino salaries, among others. I suspect that they've had a captive audience for so long that they don't have a 'playbook' for a seller's market. All of the people who remember are either fat and happy or long gone.

Love of the work is one thing, but if you can love the same work somewhere else for 40% more you'd better think pretty hard about the wisdom of staying.


That's a change then. Before, they used to aim for paying at about the 50th percentile. So Google and other companies would literally pay twice or more as much in salary in comparison.


Yeah I would assume the cause and effect are reversed. They lost technical dominance but still had a road map that counted on all of those people still being there and on the ball.


Intel needs better chips (especially ones with lower energy consumption), not lower prices. If Intel lowers the price they will have less cash for R&D, and this is the trouble for Intel now: not very competitive chips. And we are back again at the beginning of the circle.

I hope they will make an internal review of their offices/laboratories/whatever; it's not a price issue with the chips, it's a performance and technical issue.


Don't assume that increased R&D spending leads to better products and/or reduced time to market, at least quickly.

There are consistent signs of technical decline at Intel, and "reviewing" underperforming units into oblivion is likely to drain away talent and destroy more value faster.

EDIT: other comments point out that Intel is sitting on an awful lot of cash. It can be safely assumed that Intel is spending as much as possibly useful on R&D and that their results are limited by talent and strategic choices, not by cheapness.


The US taxpayer will backstop Intel no matter what, purely because of the fab business.


Intel's profit is double AMD's revenue. Cash flow is not their problem.


Can you elaborate on what exactly their problem is then?


I'm reasonably bullish on intel, but Jobs explained the problem well https://www.youtube.com/watch?v=NlBjNmXvqIM When tech companies become a dominant force in their market they lose their ability to innovate, and it's very hard to change culture once that's happened.


There have been big leadership shakeups at Intel over the past few months (see the CEO change). Long term, if they execute well, they should be back in a good position technically. In the meantime, their only option to offer competitive perf/$ is to lower cost.

AMD managed to recover with a much smaller budget than Intel. I don't think that lower margins for a couple of years will prevent a recovery long term.


Quite sure their bean counters have done the math. Intel, like any company, aims for max profitability given market conditions - i.e. Intel is only dropping prices because it maximizes their profit.


Bean counters are often short-sighted, focusing on quarterly reports to keep shareholders happy.


I've never had time for Intel creating 400 different CPUs just to create artificial market segmentation and force people into a more expensive CPU. Why is there an i3, i5, i7, i9 - ahh, right, because then you can try to justify charging incrementally more for each additional feature. Oh you want turbo boost? Sorry that's an i5! Oh you want hyperthreading/SMT? Nope, next model up. Oh you want ECC? That's a "workstation" feature, here's an identical xeon with nothing new other than ECC!

Just STOP. EVERY CPU they make should support ECC in 2021. Give me an option for with or without GPU, and with or without 10Gbe - everything else should be standard. Differentiate with clock speed, core count, and a low power option, and be done with it.


It’s worth keeping in mind that the silicon lottery is very much a thing at these nanometer sizes. So some market segmentation has to exist. If Intel threw away every chip that had one of the four cores come out broken, they’d lose a lot of money and have to raise prices to compensate. By fusing off the broken and one of the good ones, they can sell it as a two core SKU.

Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation. But we can’t make every CPU support everything without wasting half good dies.


> Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation.

I think even this is a bit unfair. Intel's segmentation is definitely still overkill, but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour.

Most of the cost (especially for intel) is going to be upfront costs like R&D on the chip design, and the chip foundry process. I don't think it's unreasonable for Intel to be able to sell an artificially gimped processor at a lower price, because the price came out of thin air in the first place.

The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.


> The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.

I’m not sure that this is really fair to call broken. This sort of fine granularity market segmentation allows Intel to maximize revenue by selling at every point along the demand curve, getting a computer into each customer’s hands that meets their needs at a price that they are willing to pay. Higher prices on the high end enables lower prices on the low end. If Intel chose to split the difference and sell a small number of standard SKUs in the middle of the price range, it would benefit those at the high end and harm those at the low end. Obviously people here on HN have a particular bias on this tradeoff, but it’s important to keep things in perspective. Fusing off features on lower-priced SKUs allows those SKUs to be sold at that price point at all. If those SKUs cannibalized demand for their higher tier SKUs, they would just have to be dropped from the market.

Obviously Intel is not a charity, and they're not doing this for public benefit, but that doesn't mean it doesn't have a public benefit. Enabling sellers to sell products at the prices that people are willing/able to pay is good for market efficiency, since otherwise vendors have to refuse some less profitable but still profitable sales.

It is unfortunate though that this has led to ECC support being excluded from consumer devices.


Without knowing what the silicon lottery distribution actually looks like we can't really say that.

> "... but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour."

Yes, you could choose to amortize it over every product but then you're selling each CPU for the same price no matter which functional units happen to be defective on a given part.

Since that's not a great strategy (who wants to pay the same for a 12 core part as a 4 core part because the amount of sand that went into it is the same?) you then begin to assign more value to the parts with more function, do you not? And then this turns into a gradient. And eventually, you charge very little for the parts that only reception PCs require, and a lot more for the ones that perform much better.

Once you get to diminishing returns there's going to be a demographic you can charge vastly more for that last 1% juice, because either they want to flex or at their scale it matters.

Pretty soon once you get to the end of the thought exercise it starts to look an awful lot like Intel's line-up.

I think what folks don't realize is even now, Intel 10nm fully functional yields are ~50%. That means the other half of those parts, if we're lucky, can be tested and carved up to lower bins.

Even within the "good" 50% certain parts are going to be able to perform much better than others.


> So some market segmentation has to exist. If Intel threw away every chip that had one of the four cores come out broken, they’d lose a lot of money and have to raise prices to compensate.

Except in the case with the Pentium special edition 2 cores and i3 parts, Intel actually designed a separate two core part that wouldn't have the benefit of re-enabling cores among hobbyists.

And then there's the artificial segmentation by disabling Xeon support among consumer boards... even though the Xeon branded parts were identical to i7s (with the GPU disabled) and adding (or removing) a pin on a socket between generations even though the chipset supports the CPU itself (and the CPU runs on the socket fine with an adapter.)

Intel definitely did everything they could to make it as confusing as possible.


It's just the behavior of a monopolist: they are making their product line as efficient as possible by milking every last penny out of every single customer.

In a truly competitive ecosystem features that have additional cost would be the only ones that actually cost more, and artificial limits wouldn't work because the vendor with less market share would just throw them in for free.

So you would expect product segmentation along the lines of core counts, DRAM channels, etc., but not really between, for example, high-end desktop and low-end server, because there would be a gradual mixing of the two markets.

And it turns out the market is still competitive, because Arm and AMD are driving a bus through some of those super-high-margin products that are only artificially differentiated from the lower-end parts by the marketing department, or by some additional engineering time that actually breaks functionality in the product (ECC, locked multipliers, IOMMUs, 64-bit MMIO windows, etc.).


Apple produces one A series chip for the iPhones every year. How does that work?


Look at the Apple A12X. They disabled a GPU core in it for the iPad, and then in the A12Z they enabled that core. This was likely to help with yields. Then with the M1 chips they decided to sell a 7-GPU-core version of the chip in the base-level MacBook Air and save the 8-core version for the higher trims.

Even Apple is susceptible to it. But Apple doesn't sell chips, they sell devices and they can eat the cost for some of these. For example if a chip has 2 bad cores instead of selling a 6 core version Apple is probably just scrapping it.


Having no margin of error on these SKUs would be terminally dumb, but having tight error bars isn't necessarily a bad thing.

Being able to sell bad batches of product takes some of the sting out of failure, and past a certain point you're just enabling people to cut corners or ignore fixable problems. If the tolerance is only 1 bad core, and I think I have a process improvement that will reduce double faults but costs money to research and develop, aren't I more likely to get that funding?


The M1 (ok, in the 7 and 8 GPU core configurations) is in the MacBook Air, MacBook Pro, iPad, iMac, and Mac mini...


All of those devices perform exactly the same, as Apple has chosen the same power/thermal set point for all of them. This is going to start to look a lot different in coming years when the larger MacBook Pro transitions - I expect 2-3 more models there. Then when the Mac Pro transitions I expect another 2-3 models there.

We'll start to see high-binned next-gen Apple Silicon parts moving to the MacBook Pro, and Mac Pro, and lower-binned parts making their way down-range.


Another commenter (dragontamer) pointed out elsewhere in the thread that Apple might be doing what Sony did for the PS3 (since Sony also made custom chips that had to perform identically in the end product): the strategy Sony took was to actually make better chips than advertised for the PS3, and disable the extra cores. That means that if one of the cores is broken, you can still sell it in a PS3; you were going to disable it anyway. Yields go up since you can handle a broken core, at the cost of some performance for your best-made chips since you disable a core on them.

That could make sense for Apple; the M1 is already ~1 generation ahead of competitors, so axing a bit of performance in favor of higher yields doesn't lose you any customers, but does cut your costs.

Plus, they definitely do some binning already, as mentioned with the 7 vs 8 core GPUs.
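Rough binomial math shows why advertising one fewer core than the die physically has helps so much; the per-core defect rate below is purely an assumption for illustration.

    # Back-of-envelope: per-core defect probability p is assumed, not measured.
    from math import comb

    def sellable_fraction(cores, max_bad, p):
        """P(at most max_bad of `cores` cores are defective)."""
        return sum(comb(cores, k) * p**k * (1 - p)**(cores - k)
                   for k in range(max_bad + 1))

    p = 0.05  # assume a 5% chance that any given core is bad
    print(f"all 8 cores must work: {sellable_fraction(8, 0, p):.1%}")  # ~66%
    print(f"7 of 8 is good enough: {sellable_fraction(8, 1, p):.1%}")  # ~94%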


We know from die shots that the M1 chips aren't disabling CPU cores, or any GPU cores other than the 7 vs 8 binning.


Baseless speculation: perhaps they do actually throw away chips? They only really target a premium market segment so perhaps it's not worth it to their brand to try and keep those chips.


You’ll likely see them in other products like lower end tablets or the Apple TV where lasering a core or two doesn’t matter.


Turns out the Apple tax means you're also buying the three chips thrown away to produce your one...


Waste is a factor in all produced goods. The price of every fish you eat takes into account dealing with bycatch. Your wooden table's price accounts for the offcuts. It's the nature of making (or harvesting, or whatever) things.


Waste is an inherent inefficiency.

In silicon manufacturing, the inefficiency is actually pretty low specifically because of the kind of binning that Intel and AMD do, that GP was complaining about. In a fully vertically integrated system with no desire to sell outside, the waste is realized. In a less integrated system the waste is taken advantage of.

In theory capitalism should broadly encourage the elimination of waste - literally every part of the animal is used, for instance. Even the hooves make glue, and the bones make jello.


That's not really an Apple tax though, that's a cost of doing business tax. It's not like Intel and AMD and everyone else aren't effectively doing the same exact thing.


Intel and AMD __literally__ sell those broken chips to the open marketplace, recouping at least some of the costs (or possibly getting a profit from them).

Apple probably follows the same strategy the PS3 did: create a 1-PPE + 8-SPE chip, but sell it as a 1-PPE + 7-SPE chip (assume one breaks). This increases yields, and it means that all 7-SPE and 8-SPE chips can be sold.

6-SPE chips (and below) are thrown away, which is a small minority, especially as the process matures and the reliability of manufacturing increases over time.


I can confirm that the 5000 series desktop Ryzen has issues with turbo boost: basically, if you disable turbo and stay on the base clock then everything is fine, but with turbo (CPB) enabled you get crashes and BSODs. I had this problem at work on my new workstation with a Ryzen 5900X. We RMAed it and the new CPU works fine. From what I read it's a pretty common problem, but it's strange that no one talks about it.


Can the turbo boost maximum frequency value be lowered a little in the BIOS to try and alleviate the problem?


I think yes, but if you buy a CPU, you look at the advertised speeds and you expect to get them in your machine. From what I researched, to achieve the advertised clock frequencies you need to increase the voltage to make it more stable. Some people reported silicon degradation after increasing voltages (it worked fine for a week and then the problems returned).

You can read more info on this forum post: https://community.amd.com/t5/processors/ryzen-5900x-system-c...


I am very interested in AMD's latest lineup (and bought a 5500U laptop that performs super well so far) but I am aware that on the PC front things can be a bit rockier and not always stable so such comments and articles help a lot.

Thank you.


Apple sells a 7 core and 8 core version of their M1 chips. Maybe Intel and AMD ship CPUs with even more cores disabled but it's not like Apple doesn't do this at all.


Apple doesn't sell chips at all. Next.


What are you talking about? Apple certainly sells chips -- you just have to buy an entire computer to get it.


Next?


There's no way they throw away that much revenue. Not even Apple is that committed to purity. I'm sure they have a hush-hush deal with another company to shove their chips in no-name microwave ovens or something.


Funny story about microwaves: there are basically only two main manufacturers. They're both in China, and you've never heard of them. But if you look at various brands in the US and take them apart, you'll see the only difference is the interface. The insides are literally the same.

The only exception to this are Panasonic microwaves.

https://www.nytimes.com/wirecutter/reviews/best-microwave/

Granted, a microwave with a half broken M1 in it would be awesome.


> The insides are literally the same.

To be fair, is there anything particularly revolutionary that could be done with a microwave (short of "smart" features)? They all function the same: shoot specific frequency energy into the (possibly rotating) chamber. It would make sense that the guts are just a rebadged OEM part.


I'd have guessed that brands would be able to differentiate themselves on even heating, how well the reheating sensors work, etc. But I guess not.


It's not that much revenue because the marginal cost of an individual chip is very low. Given that Apple has plenty of silicon capacity, throwing away say 5-10% of chips that come off the line is likely cheaper than trying to build a new product around them or selling them off to some OEM who needs to see a bunch of proprietary info to use them.


No way; the half-busted chips go into low-cost products like the iPhone SE. It costs little to accumulate and warehouse them until a spot in the roadmap for a budget device arises.


The SE series uses the same chips, but cheaps out in other ways by going with older body, older camera, older screen.


It doesn't; you still can't game with it.


> By fusing off the broken and one of the good ones, they can sell it as a two core SKU.

Fuse off the broken one? Sure, makes sense.

Fuse off a good one? That's arguably amoral and should be discouraged.

Three cores can be better than two. Let the consumer disable the runt core if they need.


That's not amoral. It's missing a market opportunity, but conflating that with morality is an interesting way of looking at it.

Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something. They will suffer when someone else offers what you want and you leave. That's the point of markets and competition.


Maybe 'amoral' is a bit strong, but I think there is something wrong with an economic system where producers destroy wealth, rather than distribute all that is produced.

If it's wrong for the government to pay farmers to burn crops during a depression, then it's wrong for a monopoly to disable chip capabilities during a chip shortage.


I think you're framing the supply chain in a very personal (strawman) way.

The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.

The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) because the maker needs to recoup or save resources to focus on finishing and shipping the working (full-core) products. They aren't driving their trucks of finished products into the ocean.


> The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.

Destroying wealth is not an appropriate market mechanism for dealing with disequilibrium. Producers should either lower the price to meet the market or hold inventory if they anticipate increased future demand. However, the latter may be harder to do in the CPU business because inventory depreciates rapidly.

Intel has hitherto been minimally affected by market pressures because they held an effective monopoly on the CPU market though that is fast changing.

So, there is nothing necessarily "efficient" about what Intel is doing. They're maximising their returns through price discrimination at the expense of allocative efficiency.

> The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) perhaps because they need to recoup or save resources to focus on finishing and shipping the working (full core) products. They aren't driving their trucks of finished products into the ocean.

That may be true in some cases, but not in others. I'm speaking directly to the case where a component is deliberately modified to reduce its capability for the specific purpose of price discrimination.


> Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something.

This is itself a moral claim. You may choose to base your morals on capitalism, but capitalism itself doesn't force that moral choice.

> That's the point of markets and competition.

And the point of landmines is to blow people's legs off, but the existence of landmines does not morally justify blowing people up. Markets are a technology and our moral framework should determine how we employ technologies and not the other way around.


So, if I had changed to preface with "In today's western society, it is generally accepted that ... ", we'd be on a level playing field? That's reasonable.


You could make that claim, but I disagree that it is generally accepted that companies destroying products is a morally good thing.

I don't know anyone in western society who thinks things like planned obsolescence are to be admired.


Wait till you find out that two people side by side on an airplane may pay 10x or more difference in ticket price, for the same ride.


Wait until they find out about the fact that all new BMWs come with heated seats, but you need to pay a monthly subscription to have them enabled.


Such as some airlines having a “business/first class” that’s nothing but “board before the plebs”


No, the scenario is that there are massive price differences even for the same class of seats. Traditionally, the major long haul airlines sold seats weeks/months in advance at rates that were basically losing money but made almost all of their per flight profit on last minute bookings at higher rates. These were usually business flights, but not necessarily (not usually, even) business class.

Business models for budget airlines (RyanAir, etc.) are a bit different but that's not relevant here.


Amoral? Why? They advertise a two core part, you pay for a two core part, you get a two core part. Completely fair.


Because if they're capable of making plenty of good 4-cores but have more demand for 2-cores so are cutting good 4c, they should just make the 4-cores a little cheaper. But maybe they already do this.

Anyways, agreed ECC should be standard, but it requires an extra die and most people can do fine without it, so it probably won't happen. But an ECC CPU option with clearly marketed consumer full ECC RAM would be nice. DDR5 is a nice step in this direction but isn't "full" ECC.


I don't know if mobile cores factor into the same process, but if you have a lot of demand for 2-core systems in cheap laptops that can't supply the power or cooling for a 4-core part, then having more 4-core parts doesn't help, even if they're cheaper.


To be pedantic, you mean immoral right? It's bad, they shouldn't waste usable resources just to fit their marketing scheme.

Amoral means that it is not moral or immoral.


Just to note, AMD does every single thing you blame Intel for.

AMD recently dicked b350/x370 chipset owners by sending motherboard manufacturers a memo telling them not to support Zen 3 (5000 series) Ryzen CPUs on their older chipsets.[1] This was after AsRock sent out a beta BIOS which proved that 5000 series CPUs worked fine on b350 chipsets. Today, AsRock's beta BIOS still isn't on their website and it's nearly a year after they put it out.

Also, Ryzen APU CPUs do not support ECC. Only the PRO branded versions. Which only exist as A) OEM laptop integration chips, or B) OEM desktop chips which can only be found outside North America (think AliExpress, or random sellers on eBay).

It's more accurate to say AsRock supports ECC on Ryzen. And sometimes Asus. They are also incredibly cagey about exactly what level of ECC they support.

Ryzen only supports UDIMMs. Not the cheaper RDIMMs. There are literally 2-3 models of 32GB ECC UDIMMs on the market. One of which is still labeled "prototype" on Micron's website, last I checked. Even if your CPU supports ECC, it takes the entire market to bring it to fruition. If no one is buying ECC (because non ECC will always be cheaper), then the market for those chips and motherboards won't exist. Want IPMI on Ryzen? You're stuck with AsRock Rack or Asus Pro WS X570-ACE. Go check the prices on those. Factor in the UDIMM ECC. It's not cheaper than Xeon.

[1] https://wccftech.com/amd-warns-motherboard-makers-offering-r...


>AMD recently dicked b350/x370 chipset owners by sending motherboard manufacturers a memo telling them not to support Zen 3 (5000 series) Ryzen CPUs on their older chipsets.[1] This was after AsRock sent out a beta BIOS which proved that 5000 series CPUs worked fine on b350 chipsets. Today, AsRock's beta BIOS still isn't on their website and it's nearly a year after they put it out.

And they stated their reasoning: The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard, including: VRM configuration, memory trace topology, and PCB layers

Which is entirely reasonable, and accurate if you look at the quality of the average X370 motherboard compared to 400+.

And no, AMD does not do everything I described. Which Ryzen model doesn't have SMT? I see it on the 3, the 5, the 7, and the 9. Which model doesn't have turbo boost? I see it on the 3, the 5, the 7, and the 9.

As for ECC: I don't believe I said they're perfect, but it's a heck of a lot better than what Intel has to offer...


> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard, including: VRM configuration, memory trace topology, and PCB layers

So AMD told you that? And yet you don't call that market segmentation? Come on now. Lose the double standard already. AsRock (and I think Asus or Gigabyte?) has proven the b350/x370 chipset works fine with 5000 series CPUs. People have tested it and are using it just fine. VRMs are up to the motherboard. Why are you letting AMD dictate what motherboard manufacturers want to support here?

> look at the quality of the average X370 motherboard compared to 400+

Uh, what? The x370 is at a higher tier than b450. There are many b450 boards that are straight garbage (and let's be honest, garbage MBs stretch across all chipsets). The difference between a b350 and b450 is vanishingly tiny.

I'm baffled that people really think 300/400/500 series matter. You can run Zen 1 on b550/x570 despite AMD not wanting you to. You can't claim VRM/memory trace/PCB there. The only real limitation that I can tell is physical BIOS RAM capacity.

> Which Ryzen model doesn't have SMT?

The Ryzen 3, of course. Not that I meant literally all the steps Intel took AMD also took. But what the hell do you think the "X" series of Ryzen chips are? Or Threadripper and EPYC? It's all market segmentation. The Ryzen 5 is just the 7 with cores disabled. Why are you picking certain features as "segmentation" over others? It makes no sense.

> As for ECC: I don't believe I said they're perfect, but it's a heck of a lot better than what Intel has to offer...

How? Just so you know I spent literally months researching everything I've stated in this thread just so I could put together a Ryzen system with ECC. With Xeon I could have been done in a day.

Gigabyte allows ECC RAM to operate, but forces it into non-ECC mode thereby working as normal RAM. Good luck figuring out what MSI is doing. Asus, who the hell really knows. Their website spec sheet lists "ECC supported" and the manual for each specific motherboard says something entirely different.


They took the right to choose for myself away from me, for my own good! Like abortion, an Apple genius telling me I should buy a new device because replacing the battery will cost the same, or Tesla charging $15K for a broken battery cooling pipe; is that what you are saying?

>VRM configuration

New CPUs have the same TDP.

>memory trace topology

worked fine with previous CPUs at speed X

>and PCB layers

see above

>> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard

Funny you say that; AMD didn't think so before the backlash: https://www.itworldcanada.com/article/amd-zen-3-processors-w...

AMD Zen CPUs are full SoCs nowadays. What they call a "chipset" is just a PCIe-connected northbridge. Everything important is integrated inside the CPU: PCIe, RAM, USB 3.0, SATA, HD Audio, even RTC/SPI/I2C/SMBus and LPC are on die. You can make a perfectly functional system with just an AMD CPU alone.

How about AMD Smart Access Memory totally requiring a 500-series chipset despite being just a fancy marketing name for standard PCI Express Resizable BAR support? It had already been shipping, disabled, for 2 prior generations before being announced as a 5000-series exclusive. Oh, with enough uproar even that crumbles a little bit https://www.extremetech.com/computing/320548-amd-will-suppor... but it is still tied to the "chipset" while being implemented entirely inside the CPU.

Or that time X470 was going to support PCIe 4, but then it was made X570 exclusive, despite the fact that the "chipset" doesn't even touch the lanes between the CPU and the slots.

Oh, but the BIOS size limit, we can't support all the CPUs on the same motherboard (like they did in the Socket A days)... in a 16MB BIOS chip? Please.


The worst part is that adding ECC support should only increase the price of RAM by about 13%, which, given that the RAM modules are about $50-$100 on most builds, works out to $7-$13 added to the total cost of the machine. Every machine should come with ECC. It's such cheap insurance. But because the chip manufacturers have to make more money by artificially segmenting the market, almost nobody runs ECC on home machines.
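For anyone wondering where the ~13% comes from: a conventional side-band ECC DIMM is 72 data bits wide instead of 64, i.e. one extra DRAM chip per eight, so the memory itself costs roughly an eighth more (ignoring the extra premium that comes from lower ECC volumes). A quick sanity check with the prices from the comment:

    # Side-band ECC: 72 data bits instead of 64, i.e. one extra DRAM chip per 8.
    extra = 72 / 64 - 1  # 0.125, ~12.5% more DRAM per module
    for module_price in (50, 100):
        print(f"${module_price} of RAM -> roughly ${module_price * extra:.2f} extra for ECC")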


13% is huge in a low-margin, highly competitive field. The price difference comes down more to economies of scale and less to artificial segmentation.


It is 13% of one of the cheaper components. Back in the 80s when all memory was expensive there was something of an excuse, but today we are needlessly accepting the possibility of silent corruption over the multi-year lifetime of the machine to save the price of a couple of coffees. And worse, we make it really expensive and difficult for people who do want to reduce their risk by artificially segmenting the market.


Back in the 80s the need for ECC was much less because the gates were physically bigger and there was much less overall memory. Back then the chance of your computer having a bit flip was like one in a million per year; now, with gigabytes of memory, it's a near-100% chance per year.
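Back-of-envelope only, since published DRAM soft-error rates vary by orders of magnitude between studies: assuming something on the order of 1 FIT per megabit (one failure per 10^9 device-hours), a figure in the ballpark of older sea-level estimates, the expected flips per year scale linearly with capacity.

    # Assumed rate: ~1 FIT per megabit (1 failure per 1e9 device-hours).
    # Real-world numbers vary wildly between studies; treat this as illustrative.
    FIT_PER_MBIT = 1.0
    HOURS_PER_YEAR = 24 * 365

    def expected_flips_per_year(gigabytes):
        megabits = gigabytes * 1024 * 8
        return megabits * FIT_PER_MBIT * HOURS_PER_YEAR / 1e9

    for gb in (0.001, 16, 64):  # ~1 MB (an 80s machine), 16 GB, 64 GB
        print(f"{gb:>6} GB -> ~{expected_flips_per_year(gb):.4f} expected flips/year")

With those (assumed) numbers, a 1 MB machine sees effectively nothing, while a 16-64 GB machine sees one to a few flips per year.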


But it's 13% of a tiny component of an overall system. Not the same as 13% of the total cost.

Sure, if you're only buying memory modules, maybe you would go for the $7 savings. But as part of an overall system, nobody is even going to notice.


I notice, so there goes your argument down the drain.


For RDIMMs, it's fair that they don't "implement" support in the memory controller, because they don't sell chips made from the same silicon that need to support RDIMMs.

Intel's "disabling" of ECC is a different situation. They implement ECC in the silicon, enable it for Xeon, and disable it for Core i.


>OEM desktop chips which can only be found outside North America (think AliExpress, or random sellers on eBay).

Lenovo offered Pro Series Ryzen APU small form factor PCs. Like the Lenovo ThinkCentre M715q with a 2400GE. I believe HP offered them as well with the 2400GE at some point.


By desktop I meant non-integrated/embedded: a standalone CPU you could buy and plop into any standard ATX/mATX/ITX motherboard.

But even if you have a Pro embedded, it doesn't mean you get ECC. My Lenovo ThinkPad has a PRO 4750U. But they solder on one non-ECC DIMM. So it's rather pointless. Plus, it's SODIMM. So that's yet another factor at play when choosing RAM.

The only real exception that I know of is the recent 5000G APUs may support ECC. But this seems to be borderline rumor/speculation at this point. Level1Techs made the claim on YouTube and were supposed to have a follow up. Not sure if that ever happened.


Yeah, I've switched to AMD Ryzen 5000 for my dedicated servers. They're faster and cheaper than Xeon, and they support ECC, which was the only reason I needed Xeon previously.


Higher end Ryzens along with NVMe make for great high performance CI local worker nodes.


Fun-fact: Intel's 12th gen desktop CPUs will no longer have AVX-512. Well, I mean, the cores do have it, but it's disabled in all SKUs. So to do any AVX-512 development and testing at all you will need an Intel Xeon machine in the future.


Market segmentation both raises and lowers prices. I don't think it is inherently bad. The low cost of entry level chips is only viable because of the high cost of premium chips. It is also critical in getting more viable chips out of your wafers, as defective parts of the silicon can be disabled and the chip placed in a lower SKU.

If you eliminate the market segmentation practices, then the price of the small number of remaining SKUs will regress to the mean. This may save wealthy buyers money as they get more features for less cash, but poor buyers get left out completely as they can no longer afford anything.

I do agree that Intel takes this to an absurd degree and should rein it in to a level more comparable to AMD. With ECC being mandatory in DDR5, I would expect all Intel chips to support it within a few years.


I agree in principle, but it's pretty obvious that this would be bad for their profit margins and as a consequence wouldn't happen.

After all, making your consumers buy the more expensive versions of your product just because they need one of its features is a sound business decision.

Otherwise people will use the cheaper and lower-end versions if those provide the features they need - like I'm currently using 200GEs for my homelab servers, because I don't require any functionality beyond what the low-power 2018 chip provides.


Well, now they are losing to AMD, so what does that tell you about it being a sound business decision?


They aren't losing to AMD because of market segmentation, they are losing because their fabs are way behind TSMC.


Hence they can no longer afford to do the market segmentation.


They are. The Intel segmentation was too restrictive. AMD started offering "server" grade features on desktop parts.


> they are losing because their fabs are way behind TSMC.

I don't believe it is merely an execution problem.

AMD has out-innovated Intel. Evidence being the pivot to multi-core, massively increased PCIe, better fabric, chiplet design, and design efficiency per wafer, among others.

Why did this happen?

> Two years after Keller's restoration in AMD's R&D section, CEO Rory Read stepped down and the SVP/GM moved up. With a doctorate in electronic engineering from MIT and having conducted research into SOI (silicon-on-insulator) MOSFETS, Lisa Su [1] had the academic background and the industrial experience needed to return AMD to its glory days. But nothing happens overnight in the world of large scale processors -- chip designs take several years, at best, before they are ready for market. AMD would have to ride the storm until such plans could come to fruition.

>While AMD continued to struggle, Intel went from strength to strength. The Core architecture and fabrication process nodes had matured nicely, and at the end of 2016, they posted a revenue of almost $60 billion. For a number of years, Intel had been following a 'tick-tock' approach to processor development: a 'tick' would be a new architecture, whereas a 'tock' would be a process refinement, typically in the form of a smaller node.

>However, not all was well behind the scenes, despite the huge profits and near-total market dominance. In 2012, Intel expected to be releasing CPUs on a cutting-edge 10nm node within 3 years. That particular tock never happened -- indeed, the clock never really ticked, either. Their first 14nm CPU, using the Broadwell architecture, appeared in 2015 and the node and fundamental design remained in place for half a decade.

>The engineers at the foundries repeatedly hit yield issues with 10nm, forcing Intel to refine the older process and architecture each year. Clock speeds and power consumption climbed ever higher, but no new designs were forthcoming; an echo, perhaps, of their Netburst days. PC customers were left with frustrating choices: choose something from the powerful Core line, but pay a hefty price, or choose the weaker and cheaper FX/A-series.

>But AMD had been quietly building a winning set of cards and played their hand in February 2016, at the annual E3 event. Using the eagerly awaited Doom reboot as the announcement platform, the completely new Zen architecture was revealed to the public. Very little was said about the fresh design besides phrases such as 'simultaneous multithreading', 'high bandwidth cache,' and 'energy efficient finFET design.' More details were given during Computex 2016, including a target of a 40% improvement over the Excavator architecture.

....

>Zen took the best from all previous designs and melded them into a structure that focused on keeping the pipelines as busy as possible; and to do this, required significant improvements to the pipeline and cache systems. The new design dropped the sharing of L1/L2 caches, as used in Bulldozer, and each core was now fully independent, with more pipelines, better branch prediction, and greater cache bandwidth.

...

>In the space of six months, AMD showed that they were effectively targeting every x86 desktop market possible, with a single, one-size-fits-all design. A year later, the architecture was updated to Zen+, which consisted of tweaks in the cache system and switching from GlobalFoundries' venerable 14LPP process -- a node that was licensed from Samsung -- to an updated, denser 12LP system. The CPU dies remained the same size, but the new fabrication method allowed the processors to run at higher clock speeds.

>Another 12 months after that, in the summer of 2019, AMD launched Zen 2. This time the changes were more significant and the term chiplet became all the rage. Rather than following a monolithic construction, where every part of the CPU is in the same piece of silicon (which Zen and Zen+ do), the engineers separated the Core Complexes from the interconnect system. The former were built by TSMC, using their N7 process, becoming full dies in their own right -- hence the name, Core Complex Die (CCD). The input/output structure was made by GlobalFoundries, with desktop Ryzen models using a 12LP chip, and Threadripper & EPYC sporting larger 14 nm versions.

...

>It's worth taking stock with what AMD achieved with Zen. In the space of 8 years, the architecture went from a blank sheet of paper to a comprehensive portfolio of products, containing $99 4-core, 8-thread budget offerings through to $4,000+ 64-core, 128-thread server CPUs.

From https://www.techspot.com/article/2043-amd-rise-fall-revival-...

[1] https://en.wikipedia.org/wiki/Lisa_Su


The secondary features (PCIe, ECC) and tertiary features (chiplets) wouldn't have mattered if Intel had delivered 10nm in 2015.

It's a harsh truth, but nodes completely dominate the value equation. It's nearly impossible to punch up even a single node -- just look at consumer GPUs, where NVidia, the king of hustle, pulled out all the stops, all the power budget, packed all the extra features, and leaned harder than ever on all their incumbent advantage, and still they can barely punch up a single node. Note that even as they shopped around in the consumer space, NVidia still opted to pay the TSMC piper for their server offerings. The node makes the king.


Thanks! I had no idea about any of this. Very informative.


Exactly. It seemed like a sound business decision because it gave them measurably more money in their pocket over a short period of time. They don't appear to have taken into account that they left the door open for competition. It wasn't just prices that left them vulnerable, but it sure didn't help.

AMD should never have been able to get back in the game.


I agree but this is a game you can play with your customers when they actually want what you’re selling and you have market power. When you’re losing ground and customers are leaving the shop, it’s time to cut the bullshit and give people what they want.


>I agree in principle, but it's pretty obvious that this would be bad for their profit margins and as a consequence wouldn't happen.

The only reason it hasn't happened is because they had no legitimate competition until recently. In a healthy market they would have been forced to do so long ago. Capitalism and "market forces" only work where competition exists.


Yes please. ECC support by now should come by default, both in CPU support and in motherboards, RAM chips etc.

At least AMD Ryzen supports it, but the fact that one has to spend a lot of time to research through products, specs, forums and internet chats to figure out a good CPU, m/b & RAM combination that works is cumbersome, to say the least.


The "reason" is yield management combined with inventory management.

The i3 through i9 are generally the exact same silicon. But yields are always variable. If you went by the raw yield, the actual i9s per wafer might be only 10%-20%, which would not be economically viable.

So designed into EVERY Intel product (and generally every other semiconductor company's products) are "fuses" and circuitry that can re-map and re-program out failed elements of the product die.

So a failed i9 can AND DOES become i7, i5, or i3. There is no native i3 processor. The i3 is merely an i9 that has 6 failed cores or 6 "canceled" cores (for inventory/market supply management). Same goes for i5 and i7. They are "semi-failed" i9s!

This is how the industry works. Memories work in similar ways for Flash or DRAM: there is a top-end product which is designed with either spare rows or columns as well as half-array and 3/4-array map-out fuses. Further there is speed binning with a premium on EMPIRICALLY faster parts (you can NOT predict or control all to be fast - it's a Bell curve distribution like most EVERYTHING ELSE in the universe)

With this, nominal total yields can be in the 90% range. Without it, pretty much NO processor or memory chip would be economically viable. The segmentation is as much created to support this reality OF PHYSICS and ENGINEERING as it is to maximize profits.

So generally, to use your example, a non-ECC processor is a regular processor "who's" ECC logic has failed and is inoperable. Similar for different cache size versions - part of the cache memory array has failed on smaller cache parts.

So rather than trash the entire die, which earns $0 (and actually costs money to trash), it has some fuses blown, gets packaged, and becomes a non-ECC processor, which for the right customer is 100% OK, so it earns something less than the ECC version but at an acceptable discount.

When I worked at Intel, we had Commercial, Industrial and Military environmental grades, plus extra ones for "emergencies", e.g. parts that completed 80% of military qual and then failed; hence the "Express" class part.

We also had 10 ns speed bins which create 5-7 bins, and then the failed half- and quarter-array parts meant 3 more. So 4x7x3 = 84 possible products just for the memory parts I worked on.

For processors you could easily have separate categories for core failures, for ECC failures, for FPU/CPU failures. That takes you up to 100-200 easy. If you are simultaneously selling 2-3 technology generations (tick-tock or tick-tick-tock), that gets you to 500-1000 easy.
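The combinatorics multiply out exactly as described; a trivial sketch, with the category names taken from the comment (the 4 environmental grades include the "Express" class):

    # Multiplying out the categories described above.
    from itertools import product

    env_grades = ["Commercial", "Industrial", "Military", "Express"]
    speed_bins = [f"speed bin {i}" for i in range(1, 8)]   # 7 speed bins
    array_cfgs = ["full array", "half array", "quarter array"]

    skus = list(product(env_grades, speed_bins, array_cfgs))
    print(len(skus))  # 4 x 7 x 3 = 84 possible memory SKUs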

This is about "portfolio effect" to maximize profits while still living with the harsh realities that the laws of physics impose upon semiconductor manufacturing. You don't rely on a single version and you don't toss out imperfect parts.

BTW how do you think IPA and sour beers came about?? Because of market research? Or because someone had a whole lot of Epic Fail beer brew that they needed to get rid of??

It was the latter originally, plus inspired marketing. And then people realized they could intentionally sell schlock made with looser process controls and make even more money!


> So generally, to use your example, a non-ECC processor is a regular processor "who's" ECC logic has failed and is inoperable.

But no high performance mainstream desktop Intel CPU supports ECC [0]. Meanwhile AMD doesn't have any that lack it.

What gives? Surely Intel's ECC logic doesn't have such a huge defect ratio that Intel can't have even a single regular mainstream part with ECC.

At work I need a fairly low-performance CPU with decent integrated graphics. Intel's iGPUs would be great were it not for the lack of any parts with ECC. Never mind that finding a non-server Intel motherboard with ECC support would restrict the choice such that there'd likely be none with the other desired features.

[0] https://ark.intel.com/content/www/us/en/ark/search/featurefi...


Ok - I was with you until the IPA part. :)

IPA came about because hops are a natural preservative and they needed to ship the beer all the way to India from England.

Sour beer is just air-fermented beer, a la sourdough bread. It is actually harder to make sour beer than "normal" beer (it does not come out of the failure of normal beer fermentation either).

Sorry for being pedantic. :)


> So generally, to use your example, a non-ECC processor is a regular processor "who's" ECC logic has failed and is inoperable.

I find that particular statement, very hard to believe.


Right, I'm doubtful that the die area consumed by the chip's ECC circuitry would fail often enough to support a "non-ECC" manufacturing bin.


I really appreciated this explanation, thank you


ECC support is an actual +10-20% cost in materials for the motherboard and DIMM manufacturers. Also, ECC errors are basically non-existent on desktop/laptop workloads. ECC is worth the extra cost in servers, but for desktops and laptops, the market got it right.


A consumer PC should see a single bit error roughly once a week. That’s hardly non-existent


According to who? I checked the edac module for a year on my work machine, and it never detected a single error. I know I'm just one anecdote, but I doubt I'm that lucky.
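For anyone who wants to check their own box: on Linux, if an EDAC driver is loaded for your memory controller (and the RAM actually reports ECC events; on typical non-ECC consumer machines these files simply won't exist), the corrected/uncorrected counters are exposed in sysfs. A small sketch of reading them:

    # Reads the standard Linux EDAC sysfs counters, if present.
    from pathlib import Path

    base = Path("/sys/devices/system/edac/mc")
    if not base.exists():
        print("no EDAC memory controllers registered (driver not loaded, or no ECC)")
    else:
        for mc in sorted(base.glob("mc*")):
            ce = (mc / "ce_count").read_text().strip()  # corrected errors
            ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
            print(f"{mc.name}: corrected={ce} uncorrected={ue}")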



Worst case those just trash some family photos of a dead relative. Hardly anything important.

/s


I don't want to pay more for cheap CPUs such that they have ECC. High prices on ECC subsidize cheaper parts without ECC.


In theory Intel could use profits from Xeons to subsidize consumer chips, but I doubt they actually are. In practice you only see that happen in highly competitive commodity markets where the profit margin on consumer grade models is razor thin (e.g. SSDs). Intel's profit margin on their consumer chips is not particularly small, and AMD wasn't a significant competitive threat until a year or two ago.


Except that if the cheaper chips have ECC, they probably couldn't go up much in price — that price is limited by how much people (who don't care about ECC anyway) are willing to pay. So if prices for the low end went up, people (like you) would instead go without (meaning Intel doesn't get your money), or try to get second hand (Intel doesn't get your money), or go with AMD (Intel doesn't get your money). But Intel would really like to have your money, or at least generally more money.


Intel would like to make the same profit per wafer as before. Any savings you get as someone who wants ECC would get added, weighted by the fraction of volumes, to the price of chips in my class. No thanks.


The more expensive chips subsidize the cheaper ones. If they put ECC in low-end models, they would have to charge more for them, because fewer people would buy the high-end models.

Also, there's some cross contamination between price point and market segment here. Nobody just buys a CPU, they buy a CPU wrapped in a laptop. So Intel's real customers are laptop manufacturers, not you. So the low-end chips have to appeal to a model that the laptop vendors want to introduce. That takes the form of thin & light laptops (or low-energy-usage "green" desktops for office workers).

Adding ECC support adds heat and cost and die size. All things the thin & light market do not want under any circumstances.


This is very common across many industries. It doesn't cost much more to manufacture a sports car vs a sedan, but the price is very different.

No product is based on the price of manufacture, it's based on the price people are willing to pay.


Let’s say it costs 5 billion to design a car (it goes as high as 6 billion) and another 2-3 billion to create all the molds and custom tooling and change over a factory. If you sell 10 million cars, that overhead costs $800 per car. If you sell only 1 million, that’s $8,000 per car. Some sports cars sell even fewer units than that. This is the biggest reason prices are higher.
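The same amortization arithmetic, just written out:

    # Fixed design + tooling cost spread over very different unit volumes.
    fixed_cost = 5e9 + 3e9  # design plus tooling, using the comment's upper figures
    for units in (10_000_000, 1_000_000):
        print(f"{units:>10,} cars -> ${fixed_cost / units:,.0f} of overhead per car")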


It is a bit much that ECC is only available on Xeons, as ECC is incredibly cheap in terms of circuitry. Glad to see AMD is including it on mid-range products.


And similarly with memory speed segmentation in the Xeon line. I'm kicking the tires on an Ice Lake 8352V, and I was disappointed (but not at all surprised) to learn that it runs its 3200 memory at 2933.


While I agree with your general sentiment, I don't agree that you should expect Intel to hand out features for free. That's what competition is for.


> Give me an option for with or without GPU, and with or without 10Gbe

In what capacity is 10Gbe included as a CPU feature? I’ve only ever used PCIe cards.


So these days 10Gb PCIe and 10GbE are essentially the same thing at the low-level silicon/pins/wires level: the bit packing/unpacking/signalling stuff has a whole lot in common, and they're all sort of converging on some superset of hardware SerDes. The higher-level hardware is still different (Ethernet MACs vs PCIe, etc.), of course.


Business features are tied to software/hardware features. They want a piece of your business.


Isn't this always their strategy? But now they just hand out the discount code more easily to everyone.


It's one of the tools in Intel's tool belt; they have used shadier tools in the past (an aggressive sales force, manipulating benchmarks), which was probably the cause of AMD's earlier fall.


This is exactly the purpose of competition; it shouldn't be news.


It's news because it's fun to finally see intel face the heat of competition after so many decades.


I disagree. This should be news because it's as close as we'll get to an official declaration that Intel acknowledges its time as the world's leading chip manufacturer is over, and that the crown is nowadays firmly on AMD's head.

Intel's long history of using unethical tricks to preserve their market share while avoiding competing on price also makes this a historic turn of events.


> This should be news because it's as close as we'll get to an official declaration that Intel acknowledges its time as the world's leading chip manufacturer is over, and that the crown is nowadays firmly on AMD's head.

You're being premature here. Intel still makes more profit in one quarter than AMD makes in revenue over multiple years. Intel still puts out more than 10x as many CPUs in one quarter as AMD does in a year.


I am sure Intel and AMD realize that being x86 is no longer the advantage it used to be.


For the eventual consumers of the servers at the discounted prices: are we going to see the price decrease benefits?

If, say, GCP/AWS/Azure decide to build a DC in a new region, and they go blue primarily because of the discounts, would the pricing end up being slightly lower than otherwise?

I can understand that electricity, cooling and other costs would have an influence; but I'm wondering whether performance & price per watt end up being passed or recouped downstream.


Runtime costs are generally higher for comparable Intel CPUs due to electricity usage… so I would not expect any cost reduction passed on to consumers in the scenario you describe.


Exactly. Energy usage and density are major costs. Data centers throw away perfectly good hardware because at some point if you factor in density and energy usage that CPU might be worth less than zero.


> and they go blue primarily because of the discounts, would the pricing end up being slightly smaller than otherwise?

I don't know about the others, but AWS already has different prices for ec2 for AMD vs Intel vs ARM. It's not a case of "going blue", they'll support anything that people will pay them for. Pricing tends to be dictated more by power usage than by hardware cost.

For non-directly-ec2 backed services (like ECS and s3, as opposed to say RDS) I'd guess they'd go all in on ARM regardless for the power savings.


I noticed that Azure is charging significantly less for Ice Lake even though the MSRP is the same.


What’s happening with respect to any class-action lawsuits against Intel for the performance-damaging Spectre / Meltdown mitigations?

I had expected these lawsuits to be significant, yet I haven’t heard much about them.


I haven't heard much about them in a long time either but even if I had I wouldn't be expecting anything significant. Meltdown affected Intel, IBM, and ARM processors while spectre affected any processor that used branch prediction up until that point. Both were patched the best they could be on all target platforms via combinations of microcode and kernel patches.

Significant class action suits tend to result from intentionally hidden, long-running fraud or discrimination, such as Enron, the tobacco settlements, or Volkswagen. Even if Intel and every other manufacturer were found negligent for some part of Spectre/Meltdown, it wasn't an industry-wide, multi-decade conspiracy to defraud.


At a time when there is a shortage of chips across the globe, isn't it a good idea to increase production and diversify into other verticals? Any experts here who can put in some thoughts?


Increasing production takes years.


> As seen in renowned system distributor Puget Systems' statistics, AMD has risen from a 5% share in systems sold since June 2020, up to a dominating 60% as of June 2021.

Wow, maybe this stat is misleading or only referring to some small segment of the market, but if not, that is an incredible loss for Intel in just a single year.


Puget Systems is a smallish boutique seller that builds you the best computer for a certain workload given some benchmarks. It's niche, but they are an indicator of what is better for the workloads of their customers.


German retail distributor https://www.techspot.com/news/90718-amd-smashing-intel-retai...

" full 85 percent of processors sold by Mindfactory in May 2021 were from AMD, leaving just 15 percent of the pie for Intel."


The only surprise is that it took so long; in fact, I'd argue Intel isn't doing it enough. Intel is losing market share and has some power-hungry chips, but financially it's doing well. It makes a lot of sense to compensate via price.

As I keep saying, Intel is very far from dead. A company does not need to have the topmost-performing chips to do well, any more than AMD/TSMC needed to in the past. Especially not in this seller's chip market.

It just means Intel needs to invest more for some more time and will lose some market share. If in 3-5 years Intel has not improved its processors in a non-marginal way, then they'll be in trouble.


The foundry ecosystem forced Intel's hand. We're going to see more and more companies developing their own chips and outsourcing to TSMC and Samsung for production.

Intel's chips no longer fit what the market needs. See the Apple M1, YouTube's own video-transcoding chips, AWS's Graviton, and Google's own chip for the Pixel 6.

We have reached the point where an off-the-shelf chip isn't going to fit the problem we are trying to solve. The ability to make a custom chip that fits your product/bottleneck is more important than the price, and the foundry ecosystem is reducing the cost of custom chips.

I hope Intel IDM 2.0 can take off. We need more foundries that can do high-end nodes.


Interesting strategy. Based on my interactions with AMD, the work we're seeing materialize today was planned 5-7 years ago. I worked with the GPU team in Florida, and they laid out at a high level how AMD planned to attack Intel at the business and consumer level. I'm not sure if it's viable, but when Intel is hiring back old engineers and slashing prices it makes me think they lack a long-term plan.


This may increase Amazon's margins, but if you're on AWS, you won't get a price cut just because Amazon got a price cut.


Not sure if price reduction will do the trick. Intel is behind on the product side.


IMO, it has been mainly a price game between AMD and Intel for quite some time.


Looking forward to those cheap Xeons then. I'm eyeing an HP Gen 10 Plus and trying to find out if it's better than a Ryzen Pro build in the same price range.


I got an i3-10100F the other day for a mere €80. For a four-core CPU that sounds like a steal to me. Way to go, Intel.


Congratulations, you paid for a new motherboard and RAM just to reach the level of performance from 6 years ago. A used i7-4790 is ~$80, and the motherboard and DDR3 RAM would cost half of their new counterparts.


Yeah, right. DDR3 memory is more expensive than DDR4, even used, and runs at half the frequency. A six-year-old mobo doesn't support Thunderbolt or USB 3.1 connectors. An i7-4790 runs hotter and doesn't support more than 32GB, while a 10th-gen i3 goes up to 128GB.


Is that a good bang for the pound even after the price drop? (cost/performance ratio)


The processor is very efficient: four cores at 3.6GHz, and it can support up to 128GB of RAM. Even AMD can't beat that.


The Ryzen 3100 can do the same, at the same price point, and was released 5 months earlier than the 10100F.

Good for Intel for finally catching up?


Good luck finding a 3100 at less than 100 euro though.


Why would you want 128GB of RAM without ECC support, something totally mainstream at AMD?


I have ECC RAM on my servers. I don't care for it on my desktop rig.


Sure, but if you don't have a GPU to go with it, it won't even boot. The non-F version costs twice that.


Sigh.

1. It's "according to DigiTimes", so you should have stopped reading right there.

2. Intel has already been discounting their server-side CPUs since they started combating Zen 2.

3. It is actually noted in Intel's quarterly report that server margins are under pressure.

4. Did I mention DigiTimes?


In anti-trust terms, would this not be "dumping"?


Or maybe you could call it "being forced to return to reasonable prices now that their monopoly suddenly has competition again"?

Well, at least that's what I remember comments on HN cheerfully proclaiming would happen back when Ryzen & Threadripper were launched.


“Reasonable price” has always been whatever the market will bear. This is true for all companies.


No. Dumping is selling below the cost of production, with the intent of driving someone else out of a market. This is just normal price competition. It's a good thing.


A monopoly becomes illegal when it negatively impacts consumers.

If a company is able to lower costs to better compete with another company taking market share, wouldn't that imply that:

1. They had a de facto monopoly in the sector that allowed them to price above the fair market value.

2. They harmed consumers by pricing above fair market value.


No, it simply means that the market conditions changed. The price could very well have been a fair market value before, and still is a fair market value after; and the delta of these prices reflects the impact of the new conditions.


They are also doubling down on fab tech to catch up with TSMC. Markets working as intended.


Well... I find that a bit of a stretch. I'd rather say we happen to be lucky.

What if TSMC was a company that was about as good as Intel on specs and price. Would the market be "working" then?

A new player might come along. That new player would need to have 20 billion dollars to play along, though.


I don't disagree in the abstract. Massive entry costs and all sorts of structures can and do obstruct the "as intended" mechanics a lot of the time. It's a struggle to term the revenue dynamics of an Alphabet, FB or JPM as "markets" at all.

Chips though... chips are a market and it is working as a market. IMO chips are a rare example of Real economics in the modern economy, as opposed to the intangible-only economy that used to be mostly banking. A notable feature of the chip market is the persistent demand, the ability to demand/consume more computing than chip manufacturers can produce.

Compare to cars, say, 100 years ago. Most people didn't have one yet. Demand could keep up with supply, markets grew fast, and they also made consistent efficiency gains. Eventually though, the market saturates. People have cars and just need periodic replacements. The market isn't growing. People still want lower prices, or shinier cars. If they get lower prices, the market will shrink. Efficiency gains in mature markets can degrow a market, if demand is saturated. If car factories become twice as efficient, we'll probably have fewer of them. Our demand is not that flexible.

Same thing happened with smartphones and laptops, to a degree. They do what they do well and we only need one each.

In order to have a learning curve anything like Moore's law, the chip market has to grow every year. That requires a lot of demand, to offset all the efficiency gains. I don't think a lot of markets have the demand potential to support a Moore's law. In this scenario, market's working pretty well.


>That new player would need to have 20 billion dollars of money to play along though.

From what I understand, quite a few national governments are at least looking into setting up their own local chip plants since semiconductors have become a critical industry. On that scale, $20b is not a huge speed hump.


All of these projects AFAIK involve enticing a company like TSMC to build new plants, not building their own competitor in the market. I don't think there's any appetite in Europe to invest tens of billions in building their own chip industry.


> appetite in Europe to invest tens of billions in building their own chip industry.

Why would they? They build the machines that create the chips. TSMC would merely manage the chip-building orders and the consumables. In case of a national emergency, I doubt that any government would have qualms about nationalising the factories.


> That new player would need to have 20 billion dollars of money to play along though.

In 2020 Uber's net income rose to losing only about 7 billion dollars. And they are competing for a market far less interesting and defensible than advanced semiconductors.

Competition would arise.


When their new fab is ready they might have an even better advantage. Instruction-set-wise I believe Intel holds a small advantage as well, yet I still haven't seen any incredible benchmark where AVX-512 pays off in performance. I just built 2 Gen3 EPYC servers for my homelab (waiting on delivery), but if Intel pulls off a nice surprise with the upcoming Sapphire Rapids with CXL at the right price, I will be willing to sell one of the servers and switch to Intel. Optane isn't available for EPYC, but I think CXL will provide more pmem availability.


This will make it harder for them to invest in production technology, which will make it harder for them to catch up to TSMC. It might be the only move they can make, but that doesn't make it a great one.


Intel has $24.8 billion in cash.

https://www.macrotrends.net/stocks/charts/INTC/intel/cash-on...

Intel dropping their prices and thus revenue temporarily should not affect their ability to compete at all. They're not THAT badly mismanaged to the point they're out of cash.


... and dropping prices doesn't necessarily mean dropping revenue or even profit.

Intel's per chip profit may drop, but if they sell more because of lower prices, they may actually increase their overall profit.

It is really hard to tell without knowing Intel's current profit margin and the increase in number of chips sold from this maneuver (if any).
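As a sketch of that break-even arithmetic, with an assumed gross margin (not Intel's actual figure): the higher the margin, the smaller the volume increase needed to make up for a given price cut.

    # margin and price_cut are fractions of the selling price; margin is assumed.
    def volume_multiplier(margin, price_cut):
        return margin / (margin - price_cut)

    margin = 0.55  # assumed for illustration, not Intel's actual margin
    for cut in (0.10, 0.20, 0.30):
        print(f"{cut:.0%} price cut -> need {volume_multiplier(margin, cut):.2f}x the unit volume")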


Intel drank the management consultant Kool-Aid like many large pharmaceutical corps, relying more on financial engineering than on their research pipeline to compete. The flip side is that they have lots of money to splash around. Companies like Pfizer, for example.


Did they? I thought they made a heavy bet that hasn’t paid off (and maybe never will).


So, what if Intel drops $0.8B on a risk-taker who learns from what IBM did to make Watson and doesn't make those mistakes? What new thing could come from that? A new category of product is what Intel should look for around the corner(s).


You're right about the 10nm (7nm) process development. They focused on shrinking the wrong parts and ended up with an inferior product. Instead of changing direction, they doubled down.


Was this a case of a leader refusing to be wrong, or engineers thinking "we almost have this, give us another shot"?


I have no idea, but isn’t it the usual real-world case of “it’s complicated, and it’s both”?


A lack of capital is hardly the reason for Intel falling behind TSMC. If it were, they wouldn't have lost their lead in the first place, and TSMC wouldn't have been able to overtake them.


This move is to attract AMD customers back to Intel, so while in the short term it could hurt revenue, longer term it may mean increased profits and therefore offer more room to invest. There's also the potential for increased sales at the lower pricing, which will still have a profit attached. So I doubt that overall this will have much impact on investment.


Is a discounted chip price going to sway people enough to offset the hotter core (therefore pricier in terms of power and cooling), and limited (in comparison to EPYC) IO?


I don't think money is the root of Intel's troubles.


Is this not just the efficient market at work? Simplistically: Intel's chips aren't as good as AMD's, so it has to drop prices.

And for future investment Intel still has ~ $24 billion cash on hand as of June 2021 [0]

[0] https://www.macrotrends.net/stocks/charts/INTC/intel/cash-on...


> Simplistically: Intel's chips aren't as good as AMD's, so it has to drop prices.

Shouldn't you replace AMD with TSMC in that sentence, unless you meant design instead of chips? AMD doesn't manufacture chips.


It is true that the main reason why the AMD chips are better than the Intel chips is that the TSMC 7 nm manufacturing process is significantly better than the Intel process used for the Ice Lake Server chips.

Nevertheless, the AMD designers must be praised for making the right design choices year after year for the last half-decade, choices which were needed to fully exploit the characteristics of modern CMOS processes.

On the other hand the Intel designers appear to have lived in a fantasy land, where they had absolutely no idea about how their future manufacturing processes will behave, even if in their case the required information should have come from another division of the same company, not from different foundry companies, like in the case of AMD.

Once again, Intel was not able to switch in time their style of design, to be in sync with the advance of CMOS technology.

During 2003 - 2008, Intel needed 5 years to follow AMD and switch to CPUs with integrated memory controllers and now, during 2016 - 2021, Intel required again 5 years to follow AMD in the transition to the use of multiple interconnected chiplets instead of large monolithic chips.


By "AMD's chips", they clearly meant chips marketed and sold under AMD's brand. If you're trying to make the point that TSMC deserves the real credit for competing with Intel, then just say that.


You don't think it's important to make the distinction between chip design and manufacturing?

Intel has clearly failed with regards to manufacturing new nodes, but is the chip design really that bad when they could compete for a long time with a large node disadvantage?


They almost never competed in the dictionary sense of the word, historically. They have plenty of shady tactics to gain more market share. Assuming that the "free market" works -- or that it even exists -- is way too charitable and optimistic of you.


It's an interesting point, but I buy from Intel or AMD - I don't buy from TSMC. Intel and AMD supply me the product and set the pricing.

As a simple consumer, I perhaps don't know about their upstream suppliers (granted the HN crowd will absolutely know...).


If you own a cellphone or console, you're likely buying from TSMC. If you're buying AMD, then you are buying TSMC.

Intel is unusual in that they still manufacture their high-end chips in house; cutting-edge fabs are simply mind-bogglingly expensive. So basically everyone else outsources, and if you're outsourcing high-end chips you might as well buy from the best if you can.


TSMC are spending about that every year for the next 3 years on production improvements. Chip fabrication is so expensive to develop that I'm concerned $24bn is nowhere near enough to build a 5nm process.


Intel has $24bn available in cash. One could assume Intel also has $FOO x 24bn available in borrowing power.


Money can't buy a node process. Intel struggled with 10nm for so long, and 7nm is delayed, while TSMC is now 5nm production-ready and developing 3nm.


Isn't a new fab around $20bn?


You usually don't buy the construction of a new fab with cash.



