The 10nm debacle exposed how far they've fallen behind on fabs to the point that they're outsourcing to TSMC. Like, how humiliating must that be?
Intel completely missed the mobile revolution. They had a stake in that race but sold it (i.e. XScale).
Intel's product segmentation is bewildering. They've also kept features "enterprise" only to prop up high server chip prices, to the detriment of computing as a whole, most notably ECC support.
And on the server front, which I'm sure is what's keeping them in business now, they face an existential threat in the form of ARM.
Intel had clearly shifted to a strategy of extracting as much money as possible from their captive market. I'm not sure price cuts here are necessarily about AMD so much as about their previously captive market now having more options in general.
How the mighty have fallen.
Intel won't give you the time of day unless you're HP or Dell. That's optimal for capitalizing on old markets, but it means it's never in new markets. It always starts at a disadvantage. It's not that Intel never has chips startups want to use; it's that it's impossible to engineer with most of them.
By the time a product has enough marketshare for Intel to care, they need to displace an existing supplier.
This means they could never really diversify outside of PCs.
I worked at a small start-up producing COM-HPC boards for companies that wanted to keep their servers in-house, as opposed to using cloud infrastructure. We weren't purchasing any more than maybe 500 CPUs of their upcoming platform. Despite that, they supplied 1:1 tech support, reference schematics/layouts, a reference validation platform on which to test our design, and thousands of documents including product design guides and white papers. All of this came about just by contacting Intel's developer account support and filling in a few forms.
We also produced the same product with AMD hardware and the difference was night and day. Say what you will about Intel's production difficulties and roadmaps, their engineering support is years ahead of AMD's.
I've had few enough interactions with AMD that I can't pass judgement, but the few I've had were consistent with your assessment. AMD was a complete black hole. My interactions with Intel were lightyears ahead of AMD.
But Intel, in turn, was lightyears behind Analog, Linear, Maxim, TI, and most other vendors I've dealt with (this was before Analog gobbled Linear and Maxim up).
1. It's not mostly about the demand, but about maintaining good working relations with systems manufacturers.
2. Increased demand is not a daily thing. Positive reviews and manufacturer interest would likely hold for a while, affecting the next production planning cycle or what-not.
3. Counteract effects quelling demand.
4. They could theoretically avoid letting prices drop if demand is strong.
Arguably, this is what led to the creation of ARM. Acorn wanted to make a computer with a 286, but Intel ignored them, so they decided to build their own RISC based CPU, the "Acorn RISC Machine".
I notice you didn't list Broadcom... And bullshit you can call an engineer. Submit a support case through some online portal, maybe. Zero chance they are giving you a direct line to their engineers.
They could devote a market segment to support that as a long term emerging market support aspect of their business, but it's clear that short term hit-strike-price-for-execs has been the dominant management mode for quite some time.
With time I think it could have been a real contender.
HPC and inertia. Lots of inertia.
And while Xeon Phi (and predecessors) used to be very popular, the accelerator market is now dominated by Nvidia (mostly Volta, but also Ampere and Pascal) and AMD Vega.
Actually only two systems (#7 in China and #10 in Texas) of the top 10 systems rely on Intel. And upcoming systems also feature a wild mix of architectures and vendors. So way less inertia than you might think.
Not only because of NVidia GPUs, but also because NVidia bought Mellanox (who makes those fancy InfiniBand NICs that those supercomputers use).
Intel's Xeon Phi didn't work out so hot. They're working on Intel Xe (aka: Aurora Supercomputer), but Aurora has been bungled so hard that Intel's losing a lot of reputation right now. Intel needs to deliver Aurora if they want to be taken seriously.
Every chip Intel buys from TSMC is a chip not made by its competitors. Doing this is extremely useful for Intel to the point I wonder why TSMC agreed in the first place.
After all, eventually Intel will improve their fabs, and then it's the non-Intel players that will order from TSMC. Why hamper TSMC's future customers? Intel must have offered a lot of money.
Right now we don't have confirmation on what, when, and at what volume Intel will use TSMC. Intel making GPUs at TSMC makes lots of sense; after all, their GPU team is vastly more familiar with the TSMC ecosystem.
Other than that, most of it is just rumours.
Any fab time Intel buys is a chip AMD can't make. Since Intel is so much larger, they can significantly hurt AMD merely by outbidding them, and eventually still make profit. This is so effective I'm not sure this should have been allowed...
I saw it happen at a former large company (top dog in the manufacturing-solutions area) where, in the middle of a project, all the experienced people in the USA were replaced with dozens of new hires in India. Half of the people in the USA were fired; they were competent but considered too expensive. Talking to them, I found it had been the norm for many years: now only some sales and management people are in the USA, everyone else is in India. This causes companies to lose their competitive edge in engineering. Cost was not really the problem, and competing only on cost is meaningless; you lose to India and China on that ground alone.
It's hard to figure out exactly where the toothpaste reference originated, but at least one source makes it sound like it was a mis-translation of materials published by AMD. See https://www.hardwaretimes.com/amd-takes-a-jab-at-intel-we-do...
Starting with the Ivy Bridge (3rd) generation, Intel switched to using thermal paste between the core and heat spreader instead of solder on socketed desktop processors. Presumably this was done as a cost savings measure.
This caused a marked increase in core temperatures and thermal throttling. Enthusiasts discovered that you could remove, or "delid", the heat spreader and replace the "toothpaste" with higher quality paste or liquid metal to drastically improve temperatures (15-20°C) and increase overclocking headroom.
Edit: This event is commonly reflected on to showcase Intel's greed at a time when they dominated the market. It wasn't until the i9-9900K that Intel went back to soldering heat spreaders for consumer CPUs, at which point they were forced to because they were being challenged by AMD.
AMD uses them too, so there must be a reason... is it because they're afraid of improper installation breaking them? That's on the user.
The weight of the desktop heatsinks? Small changes to latch design should suffice. Or you can have a metal spacer around the chip with the die exposed, kinda like GPUs do.
I've replaced many laptop chips and even ran some on desktops with no issues.
Yes. This was an issue back in the Athlon Thunderbird days.
"It's on the user" doesn't work as an argument when all of your large desktop/server OEMs notice a large uptick in failure rate post-assembly.
I remember how they briefly tried those black foam sticker pads in the corners of the substrate before acquiescing and using the IHS.
At some point they realized they could do better than a heatsink mounting system that involved trying to balance a heavy metal object on a small pedestal while trying to hook a tensioned spring to a clip you couldn't see by exerting tremendous downward force with a flathead screwdriver. I guess those motherboard return rates finally got to them.
The IHS itself is a cost saving measure.
When Intel and AMD first introduced flip chips, they didn't have the IHS and the heatsink was balanced on top while you tensioned a spring. If you rocked the heatsink in any direction you would (not could) crush an edge or corner of the chip and likely kill the CPU.
The IHS protected the chip and reduced the failure/return rate.
Laptop cooler: https://guide-images.cdn.ifixit.com/igi/4h3FmQQNHsITcHTq.med...
Because there's a huge difference between running 5 watts sustained through something the size of your fingernail and 100 watts sustained. That heat has to go somewhere, and there's 20x more of it on a desktop part, so it requires way more cooling to not immediately thermally throttle.
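Back-of-the-envelope, with a die area that's just an order-of-magnitude guess (not a spec), the gap looks like this:

    // Rough numbers only: ~5 W mobile vs ~100 W desktop over a die area on the
    // order of 1 cm^2. The heat flux the cooler must remove scales by the same 20x.
    #include <iostream>

    int main() {
        const double mobileWatts = 5.0, desktopWatts = 100.0;
        const double dieAreaCm2 = 1.0;  // ~100 mm^2, order-of-magnitude guess

        std::cout << "Mobile heat flux:  " << mobileWatts / dieAreaCm2  << " W/cm^2\n";
        std::cout << "Desktop heat flux: " << desktopWatts / dieAreaCm2 << " W/cm^2\n";
        std::cout << "Ratio: " << desktopWatts / mobileWatts << "x more heat to remove\n";
        return 0;
    }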
On a whim, a director asks the guy serving coffee:
- Jack, what would you do to increase sales?
- Have you tried increasing the hole on the toothpaste?
We used reusable metal and glass much more. Now everything is plastic.
- It weighs much more than a crate of 20x0.5l aluminium cans or plastic bottles
- it is more voluminous: glass bottles have much thicker walls and they need plastic spacers to prevent the bottles from crashing into each other, whereas cans and PET bottles can be shrinkwrapped just fine
- the return logistics are simpler: glass bottles and the crates have to be returned to the brewery to be refilled, whereas PET bottles and aluminium cans enter the normal, regional recycling stream
The switch to plastics has saved lots of money and environmental pollution in logistics. What was missed though was regulating recycling capabilities of plastics - compound foils are impossible to separate, for example - and mandating that plastics not end up in garbage, e.g. by having a small deposit on each piece of plastic sold.
Ah, but this is debatable!
"Trains move 32% of goods in the United States, but generate only 6% of freight-related greenhouse gas emissions. Meanwhile trucks account for 40% of American freight transport and 60% of freight-related emissions."
From the beginning of the industrial period we relied on rail and boats for logistics, and buggies for last-mile deliveries; until the advent of affordable, mass-produced vehicles and the interstate system, this didn't change much. Our reliance on plastics, combined with airplanes and trucks for logistics, results in much greater pollution in my view.
Granted, coal was the primary fuel source for steamboats and steam engines, but sail still was common until iron boats became widespread, and still more economical for cross-sea transportation.
All this to say, as an amateur historian, in my view this all comes to a head between the late 1950s and early 1960s, with the completion of the interstate highway system in the US and DuPont proliferating plastics in the 1960s.
Another way of looking at it is that we could consider the interstate highways only half-complete, and that the important part that was never built was an electrical delivery system for the cars and trucks that use it, so they can recharge their batteries without even stopping. It's what we would have been forced to build if fossil fuels weren't plentiful and cheap and we still wanted to use cars and trucks for our main transportation. We could have built that in the 70's in response to the oil crisis, and we could've had 50 years of electric vehicles by now, and it could have worked even using awful lead-acid batteries if cars didn't have to go more than twenty miles or so between electrified road sections.
Building the same thing now would be a lot easier. Battery technology is good enough that it would only be needed at regular intervals on the major freeways, and we can pair the electrified road sections with cheap solar power where it makes sense to do so.
> and we can pair the electrified road sections with cheap solar power
I don't know. More cars on the road in general is just a bad idea IMO. Traffic, noise, accidents, parking lots, Fast and the Furious movies...
Alternatively we can use a system of transport that can carry a whole neighbourhood in one go, is electrified and can be built underground like a billionaire suggested we do for cars. It can be automated and sorta self driving too, can hit 180km/h without too much of a fuss. And we've been building them for almost 200 years.
Wouldn't that make more sense?
Replacing trucks for long-haul would be good, but you'd have to accept slower deliveries. (I wonder if Amazon ever ships things by train?) I expect it's less of an uphill battle to just figure out how to make the things people are already doing more energy efficient and emissions-free than it is to tell them to completely change what they're doing. Admittedly, that does come with the risk of getting stuck in a local optimum. I just think of all that diesel being burned to push wheeled boxes around the country and I'm appalled at the unnecessary waste. Those fossil fuels could just as well have stayed in the ground.
On a decent rail infrastructure you can run car-carriers like in the Euro Tunnel between the UK and Continental Europe (https://en.wikipedia.org/wiki/Eurotunnel_Shuttle). These things are big enough to accommodate cars and even buses, with people being able to walk around outside of their car.
Fun fact, Europe moves most of its freight by road: https://www.eea.europa.eu/data-and-maps/figures/road-transpo...
Compare with the US: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQ13vD9... (A screenshot from this PDF: https://www.kth.se/polopoly_fs/1.87118.1550154619!/Menu/gene... )
It could even be an accident (e.g. someone turning in old beer bottles found somewhere), but you still have to account for that when cleaning all the beer bottles before refill.
Umm, explain to me exactly how it is simpler to recycle a set of PET bottles than to transport a crate of glass bottles? It is infinitely more costly and complex, and involves multiple industries.
As for aluminum cans, it's perhaps less of an ordeal, but still you only recycle between 1/3 and 2/3 of the material:
I believe you are only thinking about the logistics directly experienced by the end consumer... which is part of the problem with disposable consumption goods.
I'm definitely too young to remember anything from the 1960s, but you can still buy tomato paste in tubes like that. Neat.
Per socket performance scaling is higher for equivalent tier sockets. At hyperscale that goes back into the price benefit (buy and maintain less physical data center) but for an individual server workload or individual user that also turns into a performance benefit, particularly for non NUMA aware workloads on the server side and just plain availability of such core counts for performance on the desktop or workstation side.
PCIe-wise you get about twice the lanes (128 total on AMD) of even a 40-core 8380 in the base 8-core model of an Epyc or a Threadripper workstation CPU.
A place Intel still wins is total NUMA scaling. For a NUMA aware app like SAP HANA Intel can scale to 8 sockets while AMD currently tops out at 2 so you can reach about 2x as many total threads that way.
Hyperscalers are running web servers which is a different story. But if you're running web servers you might be better off with Graviton in perf/$.
Maybe it's just me, but all my performance-sensitive applications are heavily multithreaded. AMD CPUs simply have more cores. The benefit from Intel-only AVX-512 doesn't quite cut it. Besides, not all apps are actually optimized to leverage AVX-512, and neither are C++ compilers.
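For what it's worth, a minimal sketch of checking both at run time (GCC/Clang on x86; __builtin_cpu_supports is their builtin, and this only tests the AVX-512 foundation subset):

    #include <iostream>
    #include <thread>

    int main() {
        // Non-zero if the CPU reports the AVX-512 "foundation" instructions.
        const bool avx512f = __builtin_cpu_supports("avx512f");
        // Hardware threads available to a multithreaded workload.
        const unsigned threads = std::thread::hardware_concurrency();

        std::cout << "AVX-512F available: " << (avx512f ? "yes" : "no") << "\n";
        std::cout << "Hardware threads:   " << threads << "\n";
        // More threads help any parallel workload; AVX-512 only helps if the
        // hot loops were actually written or compiled to use it.
        return 0;
    }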
I'm not a microprocessor expert, but this seems like one of the reasons RISC has so much potential in the future. It seems like x86 is just weighed down with so much cruft.
Windows performance on these platforms is so trash, you feel like you're going back ten years on ultrabooks. Even their own apps are not optimized, and some, like Visual Studio, didn't even run.
Compare that to the built-in x86 emulation on the Apple M1, which performs close to native on a $1,000 MacBook Air.
Microsoft definitely has different priorities, like how to change settings for a user without their permission or how to hide settings so users have less choice. The Windows experience has gone downhill since Win7.
You can't be a user-friendly, privacy-first luxury appliance and a data-driven ad sell-out at the same time.
One dream has to die for the fish to fry.
I don't think Microsoft is the real problem there, though.
NT was developed to be portable and was working on architectures other than x86 in the beginning.
So it was interesting when I heard things about "Windows on ARM" half a decade ago--and then the Surface RT. The RT was crap, but it did have real Windows NT working on non-Intel ARM, as was the OS on their Windows Series 10 phones or whatever.
So Microsoft is already there on an OS level. It's the big software vendors that have to be corralled to switch somehow (Autodesk, Adobe, etc.) Honestly .NET overall was probably at least in part Microsoft trying to get developers on something more CPU-agnostic to reduce dependence on x86.
To give some context (this started with Windows Server 2003 64-bit and is still how it works in Windows 11): Instead of implementing fat binaries like OS X did, they decided to run old x86 applications in a virtualized filesystem where they see different files in the same logical path. This results in double the DLL hell nightmare, with lots of confusing issues around which process sees which file where. For many usecases around plugins, this made a gradual transition impossible. (Case in point: The memory hungry Visual Studio is currently still 32-bit. Next release will hopefully finally make the switch.)
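A minimal sketch of that redirection, assuming the Windows SDK and a 32-bit build running on 64-bit Windows (these are standard Win32 calls, not anything specific to the poster's setup):

    #include <windows.h>
    #include <iostream>

    int main() {
        wchar_t sysDir[MAX_PATH];  // reported to every process as ...\Windows\System32
        wchar_t wowDir[MAX_PATH];  // ...\Windows\SysWOW64, where 32-bit file accesses land

        GetSystemDirectoryW(sysDir, MAX_PATH);       // the logical path the process is told
        GetSystemWow64DirectoryW(wowDir, MAX_PATH);  // the redirection target for 32-bit code

        BOOL isWow64 = FALSE;
        IsWow64Process(GetCurrentProcess(), &isWow64);  // are we under the WoW64 layer?

        std::wcout << L"Reported system directory: " << sysDir << L"\n"
                   << L"32-bit redirection target: " << wowDir << L"\n"
                   << L"Running under WoW64: " << (isWow64 ? L"yes" : L"no") << L"\n";
        return 0;
    }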
Also, it’s surprising how much stuff in Windows depends on loading unknown DLLs into your process, like showing the printer dialog. So you run into these problems all the time.
Have they learned their lesson? It doesn’t look like it. Last I checked, x86 on ARM uses the exact same system as x86 on x86-64. If they ever emulate x86-64 the same way, that’s triple DLL hell right there. And I don’t think they’ll get a decade to sort things out this time around.
I believe this is to make it easier for applications like DAWs (which often use native plugins, some of which aren't updated well) to port to ARM.
Microsoft has the capacity to realize that the value of Windows is not the codebase, but the compatibility. They could let the Linux subsystem swallow Windows and wrap Windows itself inside it.
However, I believe we’ll continue to see their colocation system instead, where Windows and Linux are both wrapped inside a system managing both.
Windows might be fully Linux under the hood one day!
WSL2 is one of the early bridges across the divide.
> So it was interesting when I heard things about "Windows on ARM" half a decade ago--and then the Surface RT. The RT was crap, but it did have real Windows NT working on non-Intel ARM, as was the OS on their Windows Series 10 phones or whatever.
In this specific case, Microsoft is the real problem: Microsoft deeply locked down the Surface RT; you needed a jailbreak to run unsigned applications on it.
NT itself yes, but the userland? Not in the slightest. Apple provided Rosetta runtime translation at each arch transition, MS did not. As a result, no company even thought about switching PCs over to ARM which meant that there also was no incentive for the big players you mentioned to port their software over to RT.
It is kind of a wild state of affairs that as good a chip as M1 isn't available as commodity hardware.
edit: really, I’m just waiting for Graviton 2 Fargate support, and then I’ll be able to move a lot of workloads.
None of the server parts have that. But by the time you do run your code on an Arm server, most of the bugs will be worked out.
The M1 is special.
This does not mean RISC-V use wouldn't be a good thing, as it prevents a whole boatload of legal issues, but it just isn't what a lot of people seem to think it is.
ARM could end up being a better ISA in the very high-clock high-IPC domain, it remains to be seen.
If someone wants to compete with Intel they would have a hard time even if they make an excellent processor, since they are unlikely to get an x86 license. With ARM you have to pay a licensing fee, and control of the ISA still rests with a private company.
With RISC-V you can make your own processor and have a good shot in the market. You also get a chance to propose or comment on future ISA changes.
I agree a truly open-source option would be desirable.
Fortunately, only a handful of companies have the resources to do that.
Unfortunately for Intel, those handful of companies are the biggest and only customers for large scale server farms.
Maybe in 5 or 10 years, ARM will be viable. But by then, we'll all be rocking RISC-V CPUs because someone realized that accelerating for specialized workloads isn't a crock of shit when 90% of your workload is video decoding.
And that M1 only looks as good as it does because of Apple's de facto monopoly on TSMC 5nm. AMD cores are more than competitive at the same node.
As long as the ARM-community is fragmented, their research/investments won't really be as aligned as Xeon and/or EPYC servers.
HiFive / RISC-V aren't anywhere close to the server-tier.
Why does this matter? If popular OS distributions consistently target ARM-based CPUs, with a sufficient number of packages (esp. development-support-related) working on them, then who cares about fragmentation? An organization could buy systems with ARM chips and software will basically "just work".
Same argument for consumer PCs, although there you have the MS Windows issue I guess.
The more fragmented your community, the harder it is for software to work consistently across all of them. Intel vs AMD has plenty of obscure issues (see "rr" project, and all the issues getting that debugging tool to work on AMD even though it has the same instruction set).
Sound, WiFi, Ethernet, southbridges, northbridges, PCIe roots. You know, standard compatibility issues that having a ton of SKUs just naturally makes more difficult. Having a "line" of southbridges / consistent motherboards does wonders for compatibility (fix the BIOS/UEFI bug in one motherboard, fix it for all) in Intel/AMD world.
But just as AMD has AMD-specific motherboard bugs, and Intel has Intel-specific motherboard bugs... I'd expect Graviton to have its share of bugs that are inconsistent with Apple M1 or Ampere Altra.
Graviton is a standard Neoverse N1 core, which is slightly slower than a Skylake Xeon / Zen 2 EPYC. There's hope that N2 will be faster, but even if it is, we don't really have an apples-to-apples comparison available (since Amazon doesn't sell that chip).
The most likely source of Neoverse cores is the Ampere Altra, which is expected to have N2 cores shipping eventually. As usual though: since Ampere has lower shipping volume than other companies, the motherboards are very expensive.
x86 (both Intel and AMD) has extremely high volumes, so from a TCO perspective it's hard to beat them, especially when you factor motherboard prices into the mix.
The biggest cost of making a chip is the foundry, and the foundry ecosystem has reduced the cost to the point where anyone can be fabless and just outsource to a foundry like TSMC or Samsung.
It's Intel's vertical integration that has hamstrung its chip design for about half a decade. The 10nm transition was an unmitigated disaster, and because of it Intel has haemorrhaged technical dominance and has only really maintained market dominance due to entrenched and slow-moving decision cycles within the data centre space and, to a lesser extent, the consumer market.
Intel will likely stabilise over time, but they won't enjoy the market dominance they had for most of the last decade.
Love of the work is one thing, but if you can love the same work somewhere else for 40% more you'd better think pretty hard about the wisdom of staying.
I hope they will make an internal review of their offices/laboratories/whatever; it's not a price issue with the chips, it's a performance and technical issue.
There are consistent signs of technical decline at Intel, and "reviewing" underperforming units into oblivion is likely to drain away talent and destroy more value faster.
EDIT: other comments point out that Intel is sitting on an awful lot of cash. It can be safely assumed that Intel is spending as much as possibly useful on R&D and that their results are limited by talent and strategic choices, not by cheapness.
AMD managed to recover with a much smaller budget than Intel. I don't think that lower margins for a couple of years will prevent a recovery long term.
Just STOP. EVERY CPU they make should support ECC in 2021. Give me an option for with or without GPU, and with or without 10Gbe - everything else should be standard. Differentiate with clock speed, core count, and a low power option, and be done with it.
Does this excuse Intel’s form of market segmentation? No. They almost certainly disable, for example, hyperthreading on cores that support it - just for the segmentation. But we can’t make every CPU support everything without wasting half-good dies.
I think even this is a bit unfair. Intel's segmentation is definitely still overkill, but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour.
Most of the cost (especially for intel) is going to be upfront costs like R&D on the chip design, and the chip foundry process. I don't think it's unreasonable for Intel to be able to sell an artificially gimped processor at a lower price, because the price came out of thin air in the first place.
The point at which this breaks is when Intel doesn't have any real competition and uses segmentation as a way to raise prices on higher end chips rather than as a way to create cheaper SKUs.
I’m not sure that this is really fair to call broken. This sort of fine granularity market segmentation allows Intel to maximize revenue by selling at every point along the demand curve, getting a computer into each customer’s hands that meets their needs at a price that they are willing to pay. Higher prices on the high end enables lower prices on the low end. If Intel chose to split the difference and sell a small number of standard SKUs in the middle of the price range, it would benefit those at the high end and harm those at the low end. Obviously people here on HN have a particular bias on this tradeoff, but it’s important to keep things in perspective. Fusing off features on lower-priced SKUs allows those SKUs to be sold at that price point at all. If those SKUs cannibalized demand for their higher tier SKUs, they would just have to be dropped from the market.
Obviously Intel is not a charity, and they’re not doing this for public benefit, but that doesn’t mean it doesn’t have a public benefit. Enabling sellers to sell products at the prices that people are willing/able to pay is good for market efficiency, since otherwise vendors have to refuse some less profitable but still profitable sales.
It is unfortunate though that this has led to ECC support being excluded from consumer devices.
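A toy illustration of that demand-curve argument, with entirely made-up buyer counts and willingness-to-pay numbers:

    #include <iostream>

    int main() {
        // Three hypothetical buyer groups for what is physically the same die:
        // (number of buyers, willingness to pay in $). Purely illustrative.
        const int groups = 3;
        const int buyers[groups]  = {100, 300, 600};
        const int willing[groups] = {500, 300, 150};

        // Best single price: try each willingness-to-pay level as the one price.
        int bestSingle = 0;
        for (int i = 0; i < groups; ++i) {
            int revenue = 0;
            for (int j = 0; j < groups; ++j)
                if (willing[j] >= willing[i]) revenue += buyers[j] * willing[i];
            if (revenue > bestSingle) bestSingle = revenue;
        }

        // Segmented SKUs: each group buys the SKU priced at its willingness to pay.
        int segmented = 0;
        for (int i = 0; i < groups; ++i) segmented += buyers[i] * willing[i];

        std::cout << "Best single-price revenue: $" << bestSingle << "\n";  // 150000
        std::cout << "Segmented revenue:         $" << segmented  << "\n"; // 230000
        return 0;
    }

With these made-up numbers the revenue-maximizing single price is the lowest one, so the vendor leaves money on the table at the high end; segmentation recovers it without pricing out the low end.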
> "... but it's worth bearing in mind that the cost of the product is not just the marginal cost of the materials and labour."
Yes, you could choose to amortize it over every product but then you're selling each CPU for the same price no matter which functional units happen to be defective on a given part.
Since that's not a great strategy (who wants to pay the same for a 12 core part as a 4 core part because the amount of sand that went into it is the same?) you then begin to assign more value to the parts with more function, do you not? And then this turns into a gradient. And eventually, you charge very little for the parts that only reception PCs require, and a lot more for the ones that perform much better.
Once you get to diminishing returns there's going to be a demographic you can charge vastly more for that last 1% juice, because either they want to flex or at their scale it matters.
Pretty soon once you get to the end of the thought exercise it starts to look an awful lot like Intel's line-up.
I think what folks don't realize is even now, Intel 10nm fully functional yields are ~50%. That means the other half of those parts, if we're lucky, can be tested and carved up to lower bins.
Even within the "good" 50% certain parts are going to be able to perform much better than others.
Except in the case of the Pentium special edition 2-core and i3 parts, where Intel actually designed a separate two-core part that wouldn't have the benefit of re-enabling cores for hobbyists.
And then there's the artificial segmentation by disabling Xeon support among consumer boards... even though the Xeon branded parts were identical to i7s (with the GPU disabled) and adding (or removing) a pin on a socket between generations even though the chipset supports the CPU itself (and the CPU runs on the socket fine with an adapter.)
Intel definitely did everything they could to make it as confusing as possible.
In a truly competitive ecosystem features that have additional cost would be the only ones that actually cost more, and artificial limits wouldn't work because the vendor with less market share would just throw them in for free.
So you would expect product segmentation along the lines of core counts, dram channels, etc but not really between for example high end desktop/low end server because there would be a gradual mixing of the two markets.
And it turns out the market is still competitive because Arm and AMD are driving a bus through some of those super high margin products that are only artificially differentiated from the lower end parts by the marketing department or some additional engineering time that actually breaks functionality in the product (ecc, locked multipliers, iommu's, 64-bit MMIO windows, etc).
Even Apple is susceptible to it. But Apple doesn't sell chips, they sell devices and they can eat the cost for some of these. For example if a chip has 2 bad cores instead of selling a 6 core version Apple is probably just scrapping it.
Being able to sell bad batches of product takes some of the sting out of failure, and past a certain point you're just enabling people to cut corners or ignore fixable problems. Having a tolerance of 1 bad core means if I think I have a process improvement that will reduce double faults but costs money to research and develop, aren't I more likely to get that funding?
We'll start to see high-binned next-gen Apple Silicon parts moving to the MacBook Pro, and Mac Pro, and lower-binned parts making their way down-range.
That could make sense for Apple; the M1 is already ~1 generation ahead of competitors, so axing a bit of performance in favor of higher yields doesn't lose you any customers, but does cut your costs.
Plus, they definitely do some binning already, as mentioned with the 7 vs 8 core GPUs.
In silicon manufacturing, the inefficiency is actually pretty low specifically because of the kind of binning that Intel and AMD do, that GP was complaining about. In a fully vertically integrated system with no desire to sell outside, the waste is realized. In a less integrated system the waste is taken advantage of.
In theory capitalism should broadly encourage the elimination of waste - literally every part of the animal is used, for instance. Even the hooves make glue, and the bones to make jello.
Apple probably does the same strategy PS3 did: create a 1-PPE + 8-SPE chip, but sell it as a 1-PPE + 7-SPE chip (assume one breaks). This increases yields, and it means that all 7-SPE + 8-SPE chips can be sold.
6-SPE-chips (and below) are thrown away, which is a small minority. Especially as the process matures and reliability of manufacturing increases over time.
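To see why budgeting for one dead SPE helps, here's a toy binomial yield calculation (the per-SPE yield is a made-up illustrative number, not Sony's):

    #include <cmath>
    #include <iostream>

    // Probability of exactly k good units out of n, each independently good
    // with probability p (binomial distribution).
    double binom(int n, int k, double p) {
        const double comb = std::tgamma(n + 1) / (std::tgamma(k + 1) * std::tgamma(n - k + 1));
        return comb * std::pow(p, k) * std::pow(1.0 - p, n - k);
    }

    int main() {
        const int    n = 8;    // SPEs physically on the die
        const double p = 0.9;  // assumed per-SPE yield, purely illustrative

        const double all8     = binom(n, 8, p);                   // sellable only if you demand 8
        const double atLeast7 = binom(n, 8, p) + binom(n, 7, p);  // sellable as the 7-SPE product

        std::cout << "Dies with all 8 SPEs good:    " << all8 * 100     << "%\n";  // ~43%
        std::cout << "Dies sellable as 7-SPE parts: " << atLeast7 * 100 << "%\n";  // ~81%
        return 0;
    }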
You can read more info on this forum post:
The only exception to this is Panasonic microwaves.
Granted, a microwave with a half broken M1 in it would be awesome.
To be fair, is there anything particularly revolutionary that could be done with a microwave (short of "smart" features)? They all function the same: shoot specific frequency energy into the (possibly rotating) chamber. It would make sense that the guts are just a rebadged OEM part.
Fuse off the broken one? Sure, makes sense.
Fuse off a good one? That's arguably amoral and should be discouraged.
Three cores can be better than two. Let the consumer disable the runt core if they need.
Businesses don't owe you a product (before you pay for it) any more than you owe them loyalty after you pay for something. They will suffer when someone else offers what you want and you leave. That's the point of markets and competition.
If it's wrong for the government to pay farmers to burn crops during a depression, then it's wrong for a monopoly to disable chip capabilities during a chip shortage.
The problem is just one of "efficiency". The production is not perfectly aligned with where people are willing to spend money. A purely efficient market exists only in theory / textbooks / Adam Smith's Treatise.
The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) perhaps because they need to recoup or save resources to focus on finishing and shipping the working (full core) products. They aren't driving their trucks of finished products into the ocean.
Destroying wealth is not appropriate the market mechanism to deal with disequilibrium. Producers should either lower the price to meet the market or hold inventory if they anticipate increased future demand. However, the latter may be harder to do in the CPU business because inventory depreciates rapidly.
Intel has hitherto been minimally affected by market pressures because they held an effective monopoly on the CPU market, though that is fast changing.
So, there is nothing necessarily "efficient" about what Intel is doing. They're maximising their returns through price discrimination at the expense of allocative efficiency.
> The chips that roll off a fab are not done. They aren't "burning crops". Perhaps they are abandoned (not completed) perhaps because they need to recoup or save resources to focus on finishing and shipping the working (full core) products. They aren't driving their trucks of finished products into the ocean.
That may be true in some cases, but not in others. I'm speaking directly to the case where a component is deliberately modified to reduce its capability for the specific purpose of price discrimination.
This is itself a moral claim. You may choose to base your morals on capitalism, but capitalism itself doesn't force that moral choice.
> That's the point of markets and competition.
And the point of landmines is to blow people's legs off, but the existence of landmines does not morally justify blowing people up. Markets are a technology and our moral framework should determine how we employ technologies and not the other way around.
I don't know anyone in western society who thinks things like planned obsolenscence are to be admired.
Business models for budget airlines (RyanAir, etc.) are a bit different but that's not relevant here.
Anyways, agreed ECC should be standard, but it requires an extra memory chip per module and most people can do fine without it, so it probably won't happen. But an ECC CPU option with clearly marketed consumer full-ECC RAM would be nice. DDR5 is a nice step in this direction but isn't "full" ECC.
Amoral means that it is not moral or immoral.
AMD recently dicked b350/x370 chipset owners by sending motherboard manufacturers a memo telling them not to support Zen 3 (5000 series) Ryzen CPUs on their older chipsets. This was after AsRock sent out a beta BIOS which proved that 5000 series CPUs worked fine on b350 chipsets. Today, AsRock's beta BIOS still isn't on their website and it's nearly a year after they put it out.
Also, Ryzen APU CPUs do not support ECC. Only the PRO branded versions. Which only exist as A) OEM laptop integration chips, or B) OEM desktop chips which can only be found outside North America (think AliExpress, or random sellers on eBay).
It's more accurate to say AsRock supports ECC on Ryzen. And sometimes Asus. They are also incredibly cagey about exactly what level of ECC they support.
Ryzen only supports UDIMMs. Not the cheaper RDIMMs. There are literally 2-3 models of 32GB ECC UDIMMs on the market. One of which is still labeled "prototype" on Micron's website, last I checked. Even if your CPU supports ECC, it takes the entire market to bring it to fruition. If no one is buying ECC (because non ECC will always be cheaper), then the market for those chips and motherboards won't exist. Want IPMI on Ryzen? You're stuck with AsRock Rack or Asus Pro WS X570-ACE. Go check the prices on those. Factor in the UDIMM ECC. It's not cheaper than Xeon.
And they stated their reasoning:
> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard, including: VRM configuration, memory trace topology, and PCB layers
Which is entirely reasonable, and accurate if you look at the quality of the average X370 motherboard compared to 400+.
And no, AMD does not do everything I described. Which Ryzen model doesn't have SMT? I see it on the 3, the 5, the 7, and the 9. Which model doesn't have turbo boost? I see it on the 3, the 5, the 7, and the 9.
As for ECC: I don't believe I said they're perfect, but it's a heck of a lot better than what Intel has to offer...
So AMD told you that? And yet you don't call that market segmentation? Come on now. Lose the double standard already. AsRock (and I think Asus or Gigabyte?) has proven the b350/x370 chipset works fine with 5000 series CPUs. People have tested it and are using it just fine. VRMs are up to the motherboard. Why are you letting AMD dictate what motherboard manufacturers want to support here?
> look at the quality of the average X370 motherboard compared to 400+
Uh, what? The x370 is at a higher tier than b450. There are many b450 boards that are straight garbage (and let's be honest, garbage MBs stretch across all chipsets). The difference between a b350 and b450 is vanishingly tiny.
I'm baffled that people really think 300/400/500 series matter. You can run Zen 1 on B550/X570 despite AMD not wanting you to. You can't claim VRM/memory trace/PCB there. The only real limitation that I can tell is physical BIOS flash capacity.
> Which Ryzen model doesn't have SMT?
The Ryzen 3, of course. Not that I meant literally all the steps Intel took AMD also took. But what the hell do you think the "X" series of Ryzen chips are? Or Threadripper and EPYC? It's all market segmentation. The Ryzen 5 is just the 7 with cores disabled. Why are you picking certain features as "segmentation" over others? It makes no sense.
> As for ECC: I don't believe I said they're perfect, but it's a heck of a lot better than what Intel has to offer...
How? Just so you know I spent literally months researching everything I've stated in this thread just so I could put together a Ryzen system with ECC. With Xeon I could have been done in a day.
Gigabyte allows ECC RAM to operate, but forces it into non-ECC mode thereby working as normal RAM. Good luck figuring out what MSI is doing. Asus, who the hell really knows. Their website spec sheet lists "ECC supported" and the manual for each specific motherboard says something entirely different.
>VRM configuration
New CPUs have the same TDP.
>memory trace topology
worked fine with previous CPUs at speed X
>and PCB layers
>> The average AMD 400 Series motherboard has key technical advantages over the average AMD 300 Series motherboard
funny you say that, AMD didn't think so before the backlash https://www.itworldcanada.com/article/amd-zen-3-processors-w...
AMD Zen CPUs are full SoCs nowadays. What they call the "chipset" is just a PCIe-connected northbridge. Everything important is integrated inside the CPU: PCIe, RAM, USB 3.0, SATA, HD Audio, even RTC/SPI/I2C/SMBus and LPC are on die. You can make a perfectly functional system with just an AMD CPU alone.
How about AMD Smart Access Memory totally requiring a 500-series chipset despite being just a fancy marketing name for standard PCI Express Resizable BAR support? Already shipping, disabled, for 2 prior generations before being announced as a 5000-series exclusive. Oh, enough uproar and even that crumbles a little bit https://www.extremetech.com/computing/320548-amd-will-suppor... but it's still linked to the "chipset" while implemented entirely inside the CPU.
Or that time X470 was going to support PCIe 4, but then it was made X570 exclusive. Despite the fact the "chipset" doesn't even touch the lines between the CPU and the slots.
Oh, but the BIOS size limit, we can't support all the CPUs on the same motherboard (like they did in Socket A days)... in a 16MB BIOS chip? Please.
Sure, if you're only buying memory modules, maybe you would go for the $7 savings. But as part of an overall system, nobody is even going to notice.
Intel's "disabling" ECC is different situation. They implements ECC for the silicon, enable for Xeon, disable for Core i.
Lenovo offered Pro Series Ryzen APU small form factor PCs. Like the Lenovo ThinkCentre M715q with a 2400GE. I believe HP offered them as well with the 2400GE at some point.
But even if you have a Pro embedded, it doesn't mean you get ECC. My Lenovo ThinkPad has a PRO 4750U. But they solder on one non-ECC DIMM. So it's rather pointless. Plus, it's SODIMM. So that's yet another factor at play when choosing RAM.
The only real exception that I know of is the recent 5000G APUs may support ECC. But this seems to be borderline rumor/speculation at this point. Level1Techs made the claim on YouTube and were supposed to have a follow up. Not sure if that ever happened.
If you eliminate the market segmentation practices, then the price of the small number of remaining SKUs will regress to the mean. This may save wealthy buyers money as they get more features for less cash, but poor buyers get left out completely as they can no longer afford anything.
I do agree that Intel takes this to an absurd degree and should rein it in to a level more comparable to AMD. With ECC being mandatory in DDR5, I would expect all Intel chips to support it within a few years.
After all, making your consumers buy the more expensive versions of your product just because they need one of its features is a sound business decision.
Otherwise people will use the cheaper and lower-end versions if those are all they need - like I'm currently using 200GEs for my homelab servers, because I do not require any functionality that the low-power 2018 chip doesn't provide.
I don't believe it is merely an execution problem.
AMD has out-innovated Intel. Evidence: the pivot to multi-core, massively increased PCIe, better fabric, the chiplet design, design efficiency per wafer, among others.
Why did this happen?
> Two years after Keller's restoration in AMD's R&D section, CEO Rory Read stepped down and the SVP/GM moved up. With a doctorate in electronic engineering from MIT and having conducted research into SOI (silicon-on-insulator) MOSFETS, Lisa Su had the academic background and the industrial experience needed to return AMD to its glory days. But nothing happens overnight in the world of large scale processors -- chip designs take several years, at best, before they are ready for market. AMD would have to ride the storm until such plans could come to fruition.
>While AMD continued to struggle, Intel went from strength to strength. The Core architecture and fabrication process nodes had matured nicely, and at the end of 2016, they posted a revenue of almost $60 billion. For a number of years, Intel had been following a 'tick-tock' approach to processor development: a 'tick' would be a new architecture, whereas a 'tock' would be a process refinement, typically in the form of a smaller node.
>However, not all was well behind the scenes, despite the huge profits and near-total market dominance. In 2012, Intel expected to be releasing CPUs on a cutting-edge 10nm node within 3 years. That particular tock never happened -- indeed, the clock never really ticked, either. Their first 14nm CPU, using the Broadwell architecture, appeared in 2015 and the node and fundamental design remained in place for half a decade.
>The engineers at the foundries repeatedly hit yield issues with 10nm, forcing Intel to refine the older process and architecture each year. Clock speeds and power consumption climbed ever higher, but no new designs were forthcoming; an echo, perhaps, of their Netburst days. PC customers were left with frustrating choices: choose something from the powerful Core line, but pay a hefty price, or choose the weaker and cheaper FX/A-series.
>But AMD had been quietly building a winning set of cards and played their hand in February 2016, at the annual E3 event. Using the eagerly awaited Doom reboot as the announcement platform, the completely new Zen architecture was revealed to the public. Very little was said about the fresh design besides phrases such as 'simultaneous multithreading', 'high bandwidth cache,' and 'energy efficient finFET design.' More details were given during Computex 2016, including a target of a 40% improvement over the Excavator architecture.
>Zen took the best from all previous designs and melded them into a structure that focused on keeping the pipelines as busy as possible; and to do this, required significant improvements to the pipeline and cache systems. The new design dropped the sharing of L1/L2 caches, as used in Bulldozer, and each core was now fully independent, with more pipelines, better branch prediction, and greater cache bandwidth.
>In the space of six months, AMD showed that they were effectively targeting every x86 desktop market possible, with a single, one-size-fits-all design. A year later, the architecture was updated to Zen+, which consisted of tweaks in the cache system and switching from GlobalFoundries' venerable 14LPP process -- a node that was under license from Samsung -- to an updated, denser 12LP system. The CPU dies remained the same size, but the new fabrication method allowed the processors to run at higher clock speeds.
>Another 12 months after that, in the summer of 2019, AMD launched Zen 2. This time the changes were more significant and the term chiplet became all the rage. Rather than following a monolithic construction, where every part of the CPU is in the same piece of silicon (which Zen and Zen+ do), the engineers separated the Core Complexes from the interconnect system. The former were built by TSMC, using their N7 process, becoming full dies in their own right -- hence the name, Core Complex Die (CCD). The input/output structure was made by GlobalFoundries, with desktop Ryzen models using a 12LP chip, and Threadripper & EPYC sporting larger 14 nm versions.
>It's worth taking stock with what AMD achieved with Zen. In the space of 8 years, the architecture went from a blank sheet of paper to a comprehensive portfolio of products, containing $99 4-core, 8-thread budget offerings through to $4,000+ 64-core, 128-thread server CPUs.
It's a harsh truth, but nodes completely dominate the value equation. It's nearly impossible to punch up even a single node -- just look at consumer GPUs, where NVidia, the king of hustle, pulled out all the stops, all the power budget, packed all the extra features, and leaned harder than ever on all their incumbent advantage, and still they can barely punch up a single node. Note that even as they shopped around in the consumer space, NVidia still opted to pay the TSMC piper for their server offerings. The node makes the king.
AMD should never have been able to get back in the game.
The only reason it hasn't happened is because they had no legitimate competition until recently. In a healthy market they would have been forced to do so long ago. Capitalism and "market forces" only work where competition exists.
At least AMD Ryzen supports it, but the fact that one has to spend a lot of time to research through products, specs, forums and internet chats to figure out a good CPU, m/b & RAM combination that works is cumbersome, to say the least.
The i3 through i9 are generally the exact same silicon. But yields are always variable. If you took the raw yield the actual i9 per wafer might only be 10%-20% which would not be economically viable.
So designed into EVERY Intel product (and generally every other semiconductor company's products) are "fuses" and circuitry that can re-map and re-program out failed elements of the product die.
So a failed i9 can AND DOES become i7, i5, or i3. There is no native i3 processor. The i3 is merely an i9 that has 6 failed cores or 6 "canceled" cores (for inventory/market supply management). Same goes for i5 and i7. They are "semi-failed" i9s!
This is how the industry works. Memories work in similar ways for Flash or DRAM: there is a top-end product which is designed with either spare rows or columns as well as half-array and 3/4-array map-out fuses. Further there is speed binning with a premium on EMPIRICALLY faster parts (you can NOT predict or control all to be fast - it's a Bell curve distribution like most EVERYTHING ELSE in the universe)
With this, nominal total yields can be in the 90% range. Without it, pretty much NO processor or memory chip would be economically viable. The segmentation is as much created to support this reality OF PHYSICS and ENGINEERING as it is to maximize profits.
So generally, to use your example, a non-ECC processor is a regular processor whose ECC logic has failed and is inoperable. Similar for different cache size versions - part of the cache memory array has failed on smaller cache parts.
So rather than trash the entire die which earns $0 (and actually costs money to trash), it has some fuses blown, gets packaged and becomes a non-ECC processor which for the right customer is 100% OK so that it earns something less than the ECC version but at an acceptable discount.
When I worked at Intel, we had Commercial, Industrial and Military environmental classes, plus extra ones for "emergencies": e.g. parts that completed 80% of military qual and then failed - hence the "Express" class part.
We also had 10 ns speed bins, which created 5-7 bins, and then the failed half- and quarter-array parts meant 3 more. So 4x7x3 = 84 possible products just for the memory parts I worked on.
For processors you could easily have separate categories for core failures, ECC failures, and FPU/CPU failures. That takes you up to 100-200 easily. If you are simultaneously selling 2-3 technology generations (tick-tock or tick-tick-tock), that gets you to 500-1000 easily.
This is about "portfolio effect" to maximize profits while still living with the harsh realities that the laws of physics impose upon semiconductor manufacturing. You don't rely on a single version and you don't toss out imperfect parts.
BTW how do you think IPA and sour beers came about?? Because of market research? Or because someone had a whole lot of Epic Fail beer brew that they needed to get rid of??
It was the latter originally, plus inspired marketing. And then people realized they could intentionally sell schlock made with looser process controls and make even more money!
But no high-performance mainstream desktop Intel CPU supports ECC. Meanwhile AMD doesn't have any that lack it.
What gives? Surely Intel's ECC logic doesn't have such a huge defect ratio that Intel can't have even a single regular mainstream part with ECC.
At work I need a fairly low-performance CPU with decent integrated graphics. Intel's iGPUs would be great were it not for the lack of any parts with ECC. Never mind that finding a non-server Intel motherboard with ECC support would restrict the choice such that there'd likely be none with the other desired features as well.
IPA came about because hops are a natural preservative and they needed to ship the beer all the way to India from England.
Sour Beer is just air fermented beer ala Sourdough Bread. It is actually harder to make Sour Beer than "normal" beer (it does not come out of the failure of normal beer fermentation either).
Sorry for being pedantic. :)
I find that particular statement, very hard to believe.
Also, there's some cross contamination between price point and market segment here. Nobody just buys a CPU, they buy a CPU wrapped in a laptop. So Intel's real customers are laptop manufacturers, not you. So the low-end chips have to appeal to a model that the laptop vendors want to introduce. That takes the form of thin & light laptops (or low-energy-usage "green" desktops for office workers).
Adding ECC support adds heat and cost and die size. All things the thin & light market do not want under any circumstances.
No product is priced based on the cost of manufacture; it's priced based on what people are willing to pay.
In what capacity is 10Gbe included as a CPU feature? I’ve only ever used PCIe cards.