I agree in principle, but it's pretty obvious that this would be bad for their profit margins and as a consequence wouldn't happen.
After all, making your consumers buy the more expensive versions of your product just because they need one of its features is a sound business decision.
Otherwise people will use the cheaper, lower-end versions if those cover what they need - like I'm currently using 200GEs for my homelab servers, because that low-power 2018 chip provides all the functionality I require.
> they are losing because their fabs are way behind TSMC.
I don't believe it is merely an execution problem.
AMD out-innovated Intel. Evidence: the pivot to multi-core, massively increased PCIe connectivity, better fabric, chiplet design, and design efficiency per wafer, among others.
Why did this happen?
> Two years after Keller's return to AMD's R&D section, CEO Rory Read stepped down and the SVP/GM moved up. With a doctorate in electrical engineering from MIT and having conducted research into SOI (silicon-on-insulator) MOSFETs, Lisa Su [1] had the academic background and the industrial experience needed to return AMD to its glory days. But nothing happens overnight in the world of large-scale processors -- chip designs take several years, at best, before they are ready for market. AMD would have to ride the storm until such plans could come to fruition.
>While AMD continued to struggle, Intel went from strength to strength. The Core architecture and fabrication process nodes had matured nicely, and at the end of 2016, they posted a revenue of almost $60 billion. For a number of years, Intel had been following a 'tick-tock' approach to processor development: a 'tick' would be a process refinement, typically in the form of a smaller node, whereas a 'tock' would be a new architecture.
>However, not all was well behind the scenes, despite the huge profits and near-total market dominance. In 2012, Intel expected to be releasing CPUs on a cutting-edge 10nm node within 3 years. That particular tick never happened -- indeed, the clock never really tocked, either. Their first 14nm CPU, using the Broadwell architecture, appeared in 2015, and the node and fundamental design remained in place for half a decade.
>The engineers at the foundries repeatedly hit yield issues with 10nm, forcing Intel to refine the older process and architecture each year. Clock speeds and power consumption climbed ever higher, but no new designs were forthcoming; an echo, perhaps, of their Netburst days. PC customers were left with frustrating choices: choose something from the powerful Core line, but pay a hefty price, or choose the weaker and cheaper FX/A-series.
>But AMD had been quietly building a winning set of cards and played their hand in February 2016, at the annual E3 event. Using the eagerly awaited Doom reboot as the announcement platform, the completely new Zen architecture was revealed to the public. Very little was said about the fresh design besides phrases such as 'simultaneous multithreading', 'high bandwidth cache,' and 'energy efficient finFET design.' More details were given during Computex 2016, including a target of a 40% improvement over the Excavator architecture.
....
>Zen took the best from all previous designs and melded them into a structure focused on keeping the pipelines as busy as possible; doing this required significant improvements to the pipeline and cache systems. The new design dropped the sharing of L1/L2 caches, as used in Bulldozer, and each core was now fully independent, with more pipelines, better branch prediction, and greater cache bandwidth.
...
>In the space of six months, AMD showed that they were effectively targeting every x86 desktop market possible, with a single, one-size-fits-all design. A year later, the architecture was updated to Zen+, which consisted of tweaks to the cache system and a switch from GlobalFoundries' venerable 14LPP process -- a node licensed from Samsung -- to an updated, denser 12LP system. The CPU dies remained the same size, but the new fabrication method allowed the processors to run at higher clock speeds.
>Another 12 months after that, in the summer of 2019, AMD launched Zen 2. This time the changes were more significant and the term chiplet became all the rage. Rather than following a monolithic construction, where every part of the CPU is in the same piece of silicon (as Zen and Zen+ did), the engineers separated the Core Complexes from the interconnect system. The former were built by TSMC, using their N7 process, becoming full dies in their own right -- hence the name, Core Complex Die (CCD). The input/output structure was made by GlobalFoundries, with desktop Ryzen models using a 12LP chip, and Threadripper & EPYC sporting larger 14nm versions.
...
>It's worth taking stock of what AMD achieved with Zen. In the space of 8 years, the architecture went from a blank sheet of paper to a comprehensive portfolio of products, ranging from $99 4-core, 8-thread budget offerings through to $4,000+ 64-core, 128-thread server CPUs.
The secondary features (PCIe, ECC) and tertiary features (chiplets) wouldn't have mattered if Intel had delivered 10nm in 2015.
It's a harsh truth, but nodes completely dominate the value equation. It's nearly impossible to punch up even a single node -- just look at consumer GPUs, where NVidia, the king of hustle, pulled out all the stops, all the power budget, packed all the extra features, and leaned harder than ever on all their incumbent advantage, and still they can barely punch up a single node. Note that even as they shopped around in the consumer space, NVidia still opted to pay the TSMC piper for their server offerings. The node makes the king.
Exactly. It seemed like a sound business decision because it gave them measurably more money in their pocket over a short period of time. They don't appear to have taken into account that they left the door open for competition. It wasn't just prices that left them vulnerable, but it sure didn't help.
AMD should never have been able to get back in the game.
I agree but this is a game you can play with your customers when they actually want what you’re selling and you have market power. When you’re losing ground and customers are leaving the shop, it’s time to cut the bullshit and give people what they want.
>I agree in principle, but it's pretty obvious that this would be bad for their profit margins and as a consequence wouldn't happen.
The only reason it hasn't happened is because they had no legitimate competition until recently. In a healthy market they would have been forced to do so long ago. Capitalism and "market forces" only work where competition exists.