AMD server CPUs capture highest market share gains from Intel in 15 years (hothardware.com)
328 points by giuliomagnifico on May 9, 2021 | 115 comments



And, in my opinion, it's only going to get worse if Intel doesn't sober up and address the elephant in the room: Hyperthreading.

Without serious reform to the design of the Intel x86 chip to eliminate and refactor what basically amounts to a performance-before-safety feature, Intel is going to see the lion's share of performance hits. Eventually people will tire of writing backflip code to sidestep pitfalls in the Intel HT design and simply return the responsibility to Intel, where it's existed since day one.

It could be argued Intel's mouthpiece has already lost its ability to convince major datacenter and cloud customers that HT is even remotely safe as a continued investment. Intel needs a new x86.


Epyc also has hyperthreading. Intel's security woes have more to do with a deep corporate philosophy than anything else. Meltdown happened because Intel works from the premise that you can cheat (not respect protection domains) as long as nobody sees it; AMD respected the protection domains in its implementation even where the difference isn't architecturally visible. I know my assessment is a bit inflammatory, but that is what it is.


What exactly is the connection between hyperthreading and safety?


Modern CPUs have a very large bag of tricks to make the code they execute run fast. These tricks are usually designed for some optimal workload, and (especially on x86) real code can't actually keep the backend busy. Solution: duplicate part of the CPU to feed the backend with more work. This can get you a performance win without the cost of a whole core.

All good so far: for (say) HPC, where you have people with guns guarding the machines, you can turn on all the performance. The issue is that as CPUs have become more complicated, it's increasingly easy to attack the internal state of the processor. This is already possible in a single thread (e.g. the prototypical Spectre and Meltdown implementations), but imagine what you can do sitting next door to a completely different "core". The processor maintains separation of structure but not of state, so you can use (say) timing side channels to extract information about what your neighbour is up to.
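
To make the timing side channel concrete, here is a minimal sketch of the flush+reload style probe such attacks are built on: time a memory access and infer from the latency whether the cache line has been touched. This is only an illustration under assumed conditions (x86-64, GCC/Clang intrinsics, a made-up shared buffer), not any particular published exploit.

    /* Minimal flush+reload timing probe (illustration only).
       Assumes x86-64 with GCC/Clang; build with: cc -O0 probe.c */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    static char shared_line[64];   /* stand-in for a cache line shared with a victim */

    static uint64_t timed_access(volatile char *p) {
        unsigned int aux;
        uint64_t t0 = __rdtscp(&aux);   /* timestamp before the load */
        (void)*p;                       /* the access being timed */
        uint64_t t1 = __rdtscp(&aux);   /* timestamp after the load */
        return t1 - t0;
    }

    int main(void) {
        _mm_clflush(shared_line);                   /* evicted: "victim has not touched it" */
        uint64_t cold = timed_access(shared_line);
        uint64_t warm = timed_access(shared_line);  /* cached: "victim touched it" */
        printf("cold: %llu cycles, warm: %llu cycles\n",
               (unsigned long long)cold, (unsigned long long)warm);
        return 0;
    }

The cold/warm gap is the signal; real attacks repeat this against memory whose cache state depends on a secret, and SMT just gives the attacker a much closer seat.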

FWIW these issues are more of a problem for cloud vendors than for you or me, as they usually require pretty specific knowledge of the hardware being attacked and even then are not easy to pull off.


Pretty much nobody in the HPC sphere really uses hyperthreading, it's disabled in the BIOS on pretty much every HPC cluster I've ever used.


I don’t know why this insane rule of thumb still exists in the HPC world. True, some compute-bound jobs that are not designed to make use of HT will be hurt by running in an HT environment. But not all HPC work is purely compute bound. For a scientific field, it is ironic that someone perpetuates this blind rule. Do an A/B test with and without HT for the workload at hand and decide based on the results.

As an example, in our test suite, we found that we get almost an 80% boost by turning on HT.

The right rule is “HT may help or hurt your performance depending on workload. Test and decide”.


Well, it's not such a bad idea for most cluster admins (me being one at a large UK University). Most of our compute time comes from a small number of very popular MPI parallelised applications, all of which suffer from worse (or at best equivalent) performance with hyperthreading.


Why is this an A/B test and not a regular test?


Point is you can use it without worrying about security.


HT enables some side-channel attacks due to its architecture of "let's allow two threads to share the same pipeline and use the unused parts to allow more work to go through".

In practice this allows some attacks to run on the same pipeline as the target process and nibble out the required information slowly but surely.

IIRC FreeBSD disabled HT out of the box at the kernel level for this reason.

I'm tired and it's late here. I may have worded some things wrong or plainly misremembered them. Please feel free to correct me.


OpenBSD did, not FreeBSD


FreeBSD did not disable it by default, but you can disable it with this:

    % sysctl -d machdep.hyperthreading_allowed
    machdep.hyperthreading_allowed: Use Intel HTT logical CPUs


Doesn’t AMD also provide hyper-threading of sorts?

My Ryzen has 6 cores and 12 threads. Isn’t that pretty much the same?


Something about the AMD implementation is notably harder to exploit, though last I looked it wasn't clear what that was. It's held up much better, and a lot of people have looked at it.

There are still exploits, but they were almost impossible to pull off in the field last I checked, whereas Intel's have been demonstrated on live systems handling production load.


It's difficult to say why AMD's implementation has held up better. One interesting thing I noted looking at wikichip's info [0] was that Zen calls out that Cache is tagged by thread. I wonder whether Intel's HT did/does the same or not.

[0] https://en.wikichip.org/wiki/amd/microarchitectures/zen#Simu...


I heard somewhere that AMD speculated a bit less and cleaned up a little after a miss, which made exploitation a lot more difficult. I'm sorry I don't recall the source, might have been an interview with one of AMD's engineers.


The fundamental difference is that AMD respects protection domains even when speculating, a thing Intel is notorious for not doing. Intel: speculate without checking access rights and squash the results afterward if a violation happened. That gains some points in benchmarks but makes side-channel exploits very interesting. AMD: speculate only if the access is allowed. No need to squash afterward, and no speculation violates security. That costs some points in benchmarks but makes side-channel exploits far less interesting.


It's simply SMT or Simultaneous multithreading on AMD.

Hyperthreading was Intel's branding of SMT, if I understand this correctly.

Mine has 12 and 24. :)


It's simply SMT or Simultaneous multithreading on AMD.

Hyper-Threading was Intel's branding of SMT, if I understand this correctly.

My Ryzen has 12 and 24. Simply amazing what they did. :)


Apologies for the dupe; this was the most recent version, I thought I hit edit to correct the capitalisation of HT, but I obviously didn't.


I disagree. If you have the whole machine to yourself (colo / bare-metal hosting), the vast majority of workloads you run won't run any untrusted code.


For data center operators the choice between AMD and Intel in this space comes down to power efficiency.

Ignoring the performance improvements for real workloads, the #1 driving factor is performance per watt. By going with AMD that metric improves ~2x over comparable Intel offerings.

There are facilities where physical space to expand is available but the building has effectively run out of additional power that can be provided by the utility company.

Scaling up on performance per watt is the last frontier.


Even if the facility can supply more power, as electricity prices go up and CPU cycle prices go down, the power becomes the primary driver of the cost to operate the chip over its lifetime. Already electricity costs are much greater than CPU purchase costs over the lifespan.


The comment in general may be true but I don't see electricity costs being "much greater than CPU purchase costs over the lifespan" at least not for a data center operator.

The AWS M4 EC2 instance uses "2.3 GHz Intel Xeon® E5-2686 v4 (Broadwell) processors or 2.4 GHz Intel Xeon® E5-2676 v3 (Haswell) processors" [0], so the CPUs are about 4-6 years old and may approach 7 years in service. A comparable processor, the Intel Xeon E5-2670 v3, costs about $1600 [1] in bulk. The average US home electricity price is ~0.13 $/kWh [2]; this would probably be much lower for a data center operator. The mentioned processors probably consume ~150 W at full load, which will probably not be the case most of the time. Anyway, let's see:

CPU cost over its lifetime may be on the order of 1600 USD, which is quite cheap, as today's top processors usually cost between 4000 and 8000 USD. [3] The electricity cost would probably be: 24 hours * 365 days * 7 years * 0.13 $/kWh * 0.15 kW ≈ 1200 USD, which seems about right.
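
As a quick sketch, the same back-of-the-envelope numbers in code (all inputs are the assumptions stated above):

    /* Lifetime electricity cost vs CPU purchase price, using the figures above. */
    #include <stdio.h>

    int main(void) {
        double years     = 7.0;      /* assumed service life */
        double price_kwh = 0.13;     /* USD per kWh (household rate; DC rate likely lower) */
        double load_kw   = 0.150;    /* 150 W, generously assumed to run at full load 24/7 */
        double hours     = years * 365.0 * 24.0;
        double elec_cost = hours * load_kw * price_kwh;   /* ~1196 USD */
        double cpu_cost  = 1600.0;                        /* bulk price estimate above */
        printf("electricity: ~%.0f USD, CPU: ~%.0f USD\n", elec_cost, cpu_cost);
        return 0;
    }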

So the electricity cost per CPU comes out significantly lower than the purchase price of even a rather cheap processor, despite assuming very generous CPU load and electricity prices and a rather low CPU cost. This of course doesn't include the power drawn by the mainboard, RAM, NIC, power conversion, cooling and other hardware needed to operate a modern data center. On the other hand, the purchase cost of that hardware wasn't included in the calculation either.

Of course, I am not the only one to do the calculations. James Hamilton from AWS has done them too: https://youtu.be/kHW-ayt_Urk?t=333

[0] https://aws.amazon.com/ec2/instance-types/
[1] https://ark.intel.com/content/www/us/en/ark/products/81709/i...
[2] https://www.eia.gov/todayinenergy/detail.php?id=46276
[3] https://www.anandtech.com/show/16529/amd-epyc-milan-review


What's stopping more people from moving cloud workloads to AMD? Unless you're running a rickety legacy application using Intel features you'll instantly save money by moving to AMD.

Both my work and all my side projects have moved to AMD instances where available, except for some legacy on-prem stuff.


> What's stopping more people from moving cloud workloads to AMD?

It's not just about CPU-bound workloads but also things that are heavily I/O dependent (typically on bare-metal hardware that you own, rather than instances you rent somewhere). There are lots of networking and storage things that would be performance-bottlenecked on an Intel CPU with fewer PCIe lanes. Having a 16-core CPU around $950 that has 128 PCIe 4.0 lanes is very useful for many things.

And not just EPYC but also the single-socket Threadripper parts, which are used in both higher-end workstations and some types of servers.


^^ this is huge. I was looking at CPUs for an ML build recently and Intel is out of the question. Chips with enough lanes to hit full speed on 4 GPUs + SSD cost twice as much as AMD.

This may even apply to high-end gaming machines. As soon as you have two SSDs or video cards you will exceed the link budget on most Intel CPUs and everything slows down.

It also happens with routers: 10 gig NICs plus attached SSD storage, and you're over the link budget again.

Intel's stupid market segmentation is biting them in the rear


Not just 10Gbps (such as the Intel card which is four 10Gbps SFP ports in one slot), but single and dual 100GbE per slot like this:

https://www.intel.com/content/www/us/en/products/docs/networ...

In calculating the bandwidth and pci-e bus throughput needed, a single 100GbE port is full duplex, so one has to budget about 210Gbps per port.

The funny thing is that some of the best 100GbE NICs for x86-64 servers on the market right now are Intel, but are best used on an AMD platform...


Nah, PCIe is also full duplex, so you don't need to double it like that. But Mellanox has dual-port 200GbE cards. Here's the PCIe 3.0 version: https://cdn11.bigcommerce.com/s-uxkkta8o/images/stencil/1280... and here's the PCIe 4.0 version: https://cdn11.bigcommerce.com/s-uxkkta8o/images/stencil/1280...

Yes, the 3.0 version needs two PCIe x16 slots.


What? Why do you think the Intel 100GbE NIC is good?

We've been quite happy with Mellanox and Chelsio 100GbE NICs. The latest from each can do in-line HW TLS offload, which is a killer feature for us. No Intel NIC can do that.

IMHO the last good Intel NIC was the 10GbE "ixgbe" NIC. The design of the NIC was so tight as to be almost beautiful.

The recent 40GbE parts (and the 10GbE parts based on the 40GbE chipset) and the new 100GbE NIC have the feel of being designed by committee, with endless features of questionable value stuffed in, consuming power and chip area.


Primarily the state of its Linux driver, and rock solid support in VyOS (derived from Debian stable). For a router, TLS offload isn't so much of a consideration when the system isn't doing anything at layers 4-7 in the OSI model.

If I had to make a perhaps overly broad generalization, I see more Chelsio and Mellanox NICs used in end point servers, and more Intel used in DIY whitebox network equipment.


That's fair; I come at this from a CDN perspective where end system performance is most important.

Do you see any benefit from the fancy features? Can it source/sink min sized frames at 100GbE? (144Mpps) ?


Intel's E810-based NICs require only a PCIe 3.1 x16 slot. 16 lanes will accommodate the 100GbE port just fine. Theoretical PCIe throughput for 16 lanes is around 252 Gbps (both directions combined). The 800 series NIC chipset is just four 25Gb Ethernet lanes stitched together. PCIe 4 won't help this NIC much.
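
For reference, a rough sketch of where that ~252 Gbps figure comes from (PCIe 3.x runs at 8 GT/s per lane with 128b/130b encoding; the 252 number is the sum of both directions):

    /* Theoretical PCIe 3.x x16 link throughput, ignoring protocol overhead. */
    #include <stdio.h>

    int main(void) {
        double gt_per_lane = 8.0;            /* PCIe 3.x: 8 GT/s per lane */
        double encoding    = 128.0 / 130.0;  /* 128b/130b line coding */
        int    lanes       = 16;
        double per_dir = gt_per_lane * encoding * lanes;  /* ~126 Gbps each way */
        printf("x16: ~%.0f Gbps per direction, ~%.0f Gbps both directions\n",
               per_dir, 2.0 * per_dir);
        return 0;
    }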


Your throughput calculation is based on a single-port 100GbE NIC working okay at full-duplex bidirectional line rate in a PCIe 3.0 x16 slot, which is true. Let's say that we budget 220-230 Gbps of throughput per optical transceiver, such as if you were to max out a 100GbE link in both directions with iperf3 for testing. But the Intel E810 card has two ports on it, so your bandwidth needs are going to be in the range of 500 Gbps.

Also, the cumulative number of PCIe 3.0 or 4.0 lanes in a system is a big consideration if you want to have, for instance, four dual-port 100GbE NICs all talking to one CPU, or some mixture like three dual-port 100GbE NICs plus one or two 4-port 10GbE NICs.

Where the lower end of the Intel server CPU offerings really falls flat is not having anything close to 64 or 128 PCIe lanes at a reasonable price.


The throughput of Intel's Columbiaville NICs is 100G max. When the 2nd port is used, the total bandwidth is split in half, so there will be two 50G links. The 2nd port is usually in standby mode when the 1st port is used as a 100G link. So, two active ports on an Intel 100G NIC do not result in 200G throughput.


This is correct. The ASICs on these cards cannot do 200G, even if PCIe allowed it. The only one I'm aware of that can is the Mellanox CX6 Dx.


I don't understand why you are combining TX and RX. PCIe, fiber, and the switch/NIC are all full duplex, so you don't need to add the two directions together. A dual-port NIC, even with PCIe 4, is unlikely to be able to hit 200G (single direction) unless it's using very large packet sizes.

Since you're limited by the 100G negotiated speed, your NIC will never send more than 100G with iperf on a single port.


100% this. With NVMe and higher-bandwidth NICs, fast and plentiful PCIe lanes are increasingly important, and Intel has just been so far behind on this (or gated equivalence at hilarious price differentials from AMD, like having to go to $4k Intel parts and beyond to match AMD prosumer desktop parts).


Just try to attach an 8-GPU A100 unit to a 2P Intel system, then try to add fast InfiniBand with NVMe for scratch, and the whole platform just chokes.


There's a surprising amount of lock-in for corporate customers. They'll be using proprietary software which is only certified for Intel (for reasons which can sometimes be real). They could be using the Intel compiler which as far as I know still pessimizes on AMD. Also it's not necessarily cheaper to move your workloads to AMD if that involves buying new hardware (if on-prem) and retesting everything or having to go to all your software suppliers to check if they support AMD.

Personally I switched to AMD hardware a couple of years ago and haven't looked back, but corporations don't do that.


I suspect large customers are going to AMD for a quote, and then taking the quote straight to Intel for them to beat. No transition headache and price benefits with that approach.

But those are on-prem customers, what about cloud customers? In the SOA world, surely greenfield stuff won't be using any of that proprietary Intel software.


Intel's margins are dropping fast, so you're probably right that something is going on.

> Intel said its gross margin, the percentage of revenue remaining after deducting the cost of production, was 55.2%, down more than five percentage points from the same period in 2020. This is a key indicator of the strength of its manufacturing and product pricing. Intel has historically delivered margins above 60%.

https://www.msn.com/en-us/news/technology/intel-falls-most-i...


If AMD are being used to drive down prices, it has a good overall effect though, doesn't it?

And if AMD get the feeling they are being used this way, offering quotes with no margin will make for painful days at Intel.


Intel’s margins are almost certainly higher than AMD’s.


They probably were 4 years ago, but I'm not so sure now. Intel's processors require about 2x as much silicon per core (14nm vs 7nm), so if they are priced similarly per core, Intel is probably getting squeezed.


The silicon costs are a very tiny portion of their cost of goods sold.

Electricity, water and even just plain packaging of the CPU will likely cost more than the difference in silicon use between 14 and 7 nm.


1. The cost difference between 14 nm and 7 nm is a whole new fab at twice the density (assuming transistor count remains the same). Asset costs really matter to Intel.

2. If the constraint on the number of packages you can sell is the number of chips you can produce, then the packaging cost of the chip is not so relevant (assuming packaging is not a constraint on production). If you can halve the chip area on the same production node, you can double production of packages, which can make a huge difference to profits (assuming Intel is a high margin business with high demand and that demand elasticity is in their favour etcetera).

Disclaimer: I am not in the industry, but what you say just seems wrong without even arguing that the cost of the silicon for Intel dominates packaging costs.


Intel's 14nm fabs and tooling are now fully amortized, and yields are going to be very much higher than on 7nm/5nm nodes.

193i steppers are also dirt cheap, and most of their old fabs can also be modified for 14nm/10nm production, whereas EUV tools are 180 tonne behemoths that require overhead cranes and/or physical disassembly of the plant to move.


I was surprised that Intel doesn't seem to have bought more of the EUV machines after they helped fund their development. Do you have any insights?


You're assuming that the cost of a fab (machinery, labour, whatever) is proportional to wafer size rather than transistor count. That seems pretty dubious.


The difference isn't in material usage. Larger dice translate into lower yield per wafer and less throughput for a fab.


Own fab vs outsourced fab: it's a significant difference in how they calculate the cost. Reducing die size matters more for AMD, but not so much for Intel.


I think you’ve hit the nail on the head here. I also think you may have answered your own question.

Cloud customers may be taking volume quotes to Intel from AMD to see what they can/will do for them on price. I don’t see why they wouldn’t do that, what with their (way out of the normal range) buying power.


Cloud is skipping AMD and going straight to ARM.

I suspect AMD is getting a bigger chunk of a smaller pie as ARM makes headway.


AWS is the only large scale provider with ARM. There are some smaller scale and VPS options but those are more for personal or small biz workloads or low overhead things.

We moved stuff to AMD cores on Google Cloud and saw a roughly 15% reduction in utilization (and thus cost) for the same workload. Those are just Zen2 too, not Zen3 yet.


Who is currently offering ARM VPSes to small businesses? I only knew about Graviton


I stand corrected... I no longer see any. For a while Scaleway and a few others were offering them but I checked and they're gone. They were too early IMHO.


Who except for Amazon is doing this currently? Or are you forecasting a long term trend.


Google would not be talking about it, but they've been rolling their own silicon for more than a decade.


Google was using POWER9 internally a couple years ago. Probably not cost efficient right now.


Exactly. I moved some workloads to AMD, but with Graviton2 out now I’m going straight to it where I can.


I run infrastructure for my startup and previously was “locked in” because of AWS reserved instances that can come with multi-year tenancies. However, recently I committed to some new-generation AMD instances, though Graviton (ARM) was a serious contender.


> What's stopping more people from moving cloud workloads to AMD? Unless you're running a rickety legacy application using Intel features you'll instantly save money by moving to AMD.

Nothing. Unless you're explicitly targeting AVX-512, you don't miss anything. Just move the systems and continue where you left off.

Furthermore, the first Epyc sold so fast that it was virtually impossible to buy in large quantities, since Dropbox, FB and Google just bought out the whole production, IIRC (we weren't able to buy it, and no big IT vendor sold it under their generally available server lines).


We've had a company wide call to switch to AMD instances wherever possible. It's generally a straight 20% cost savings and most instances are usually overprovisioned anyway.


By "capturing" a client, many companies unwittingly become captive to the client.

By having clients who can't run away from you, you often limit yourself to the tech you can sell to them, instead of pursuing new superior alternatives.

Cisco, Oracle, SAP, IBM — all great examples of this.


For those of us running VMware:

Licensing is part of it. VMware moved to per-core licensing a while ago and away from per-socket.

The other part is some limitations with mixed-architecture clusters. Things like DRS and vMotion will be gimped.

The third part is lead times.


Can you be more specific? Or provide a citation?

VMware Workstation 16 Pro still has a flat price, regardless of core count [1].

[1]: https://store-us.vmware.com/vmware-workstation-16-pro-542417...



Thank you for the reference.


That's a consumer product


For the workloads I manage, the AMD instances are the same or slower per $ than comparable Intel instance families (m5/m5a and r5/r5a). Those use the 1st-gen Epyc AFAIK.

Regardless, when third-generation Epyc Milan rolls out to AWS this year or next, the wave of movement to AMD will be massive.

For c5a in us-east-1, it's simply not offered in all the AZs we've been assigned.


I can’t mix and match my VMware clusters with Intel and AMD; you lose features like vMotion.


Not the case for everyone but I’ve had instances (and again, I recognize this is an edge case) where AMD was more expensive due to per-core software licensing.


It's a quite common theme actually and I'm very curious how it plays out for the companies insisting on per-core licensing in the long term. I mean, if the customers have no choice, they'll stay, but I bet they're looking into some exit strategy in order to stay competitive.


We have been slowly changing over to AMD instances on Azure, it's not a huge savings but it's enough to justify it unless you have a specific need for certain CPU types or licensing.


I remember they used Bulldozer Opterons for VMs in the early days. It was slow.


Personally I run a Plex server, and that one piece of hardware I’ve kept Intel because of QSV (Quick Sync Video). AMD has no answer for that. That one workload is 100% in Intel’s wheelhouse.


I actually already switched my Kubernetes cluster to the new AMD CPUs in GCP and it's running fantastically. Couldn't be happier with the performance.


At least on AWS the AMD CPUs seem to be clocked pretty low, so assuming the IPC is similar to Intel, the performance/cost benefit of AMD seem pretty small and comes with downsides for tasks that benefit from single core performance.


Server CPUs everywhere are lowly-clocked compared to client, except some specialty ones (F series for AMD, Xeon-W among other lines for Intel).

High clocks come at the detriment of power efficiency, so they are avoided when possible.


An M5 is a "3.1 GHz Intel Xeon® Platinum 8175M" ($0.192 for xlarge)

while an M5a is an "AMD EPYC 7000 series processors with an all core turbo clock speed of 2.5 GHz" ($0.172 for xlarge)

which is only about 10% cheaper. But the claimed clock is 20% lower. Different IPC or sustained clock might shift the balance a bit, but it seems unlikely that AMD wins decisively on performance/cost.


"AMD EPYC 7000 series" is so vague as to be almost meaningless... so we need more info there.

The 8175M's base clock is 2.5 GHz, with an all-core turbo of 3.1 GHz and a single-core turbo of 3.5 GHz.

> assuming the IPC is similar to Intel

Using your assumption as a criterion... the Platinum 8175M should perform the same as an AMD EPYC 7763, whose base clock is also 2.5 GHz with a 3.5 GHz top turbo speed. This is the only EPYC part that keeps nearly the exact same clocks as the Platinum at every stage.

But we know the IPCs aren't equal, so I don't even know why you'd mention that when discussing your comparison when it's so fatally flawed from the start.

Even using something as rudimentary as Passmark highlights the difference.

8175M Single Thread Rating on Passmark: 1903

EPYC 7763 Single Thread Rating on Passmark: 2639

So, clock for clock where the speed stages are identical, AMD wins.

Going through the list of every Epyc 7000 series part I could find, the one that turbos at or near 2.5 GHz is the 7551... a first-gen part that only hits 2.55 GHz on its all-core turbo.

EPYC 7551 Single Thread Rating: 1813.

Performance ratio of single-thread rating, EPYC 7551 vs Platinum 8175M: ~0.953

Cost ratio, EPYC 7551 vs Platinum 8175M: ~0.896

Looks like AMD is still the more cost effective solution here.
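
As a sketch, the same comparison in code, using the Passmark scores above and the m5/m5a xlarge prices quoted upthread ($0.192 vs $0.172 per hour):

    /* Perf ratio vs price ratio: EPYC 7551 (m5a) vs Xeon Platinum 8175M (m5). */
    #include <stdio.h>

    int main(void) {
        double epyc_st   = 1813.0, xeon_st  = 1903.0;  /* Passmark single-thread ratings */
        double m5a_price = 0.172,  m5_price = 0.192;   /* USD per hour, xlarge */
        printf("perf ratio:  %.3f\n", epyc_st / xeon_st);      /* ~0.953 */
        printf("price ratio: %.3f\n", m5a_price / m5_price);   /* ~0.896 */
        return 0;
    }

Performance drops by less than price does, which is the basis for calling the AMD instance more cost effective here.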


The posts listing a concrete model mentioned the EPYC 7571 which has a single thread passmark of 1934. I believe first gen EPYC had an IPC pretty close to Intel.

On the other hand, if I take the all core passmark result divided by cores (since you pay per-core), it's 26659/24 = 1110 vs 27445/32 = 857.

They seem close enough that it's not possible to predict which one is better without actually benchmarking on AWS, since it'll depend on the clock rate they're able to sustain in AWS's setup.

In any case, I don't see any significant cost savings potential for AMD on AWS, which was the point of my original post.


I think you may be comparing single-core turbo against all-core turbo speeds. ark.intel.com doesn't list the 8175M, but all the similar models have 2.1GHz all-core turbo with single-core turbo around 3.7 or 3.8 GHz. Other sources list the 8175M as having a base frequency of 2.5GHz and single-core turbo of 3.5GHz.


It's one of the reasons the so-called desktop CPUs are so popular for certain applications. [0]

[0] https://jan.rychter.com/enblog/cloud-server-cpu-performance-...


For me, it costs money to make sure that my CPU-intensive workloads run the same or better; the gains probably aren't huge, so it's just not worth it.


A trend likely to continue in the short term, but remember that AMD still cannot produce enough product to fulfill demand.


Indeed, but the race between TSMC and Intel to scale up 200+ MTr/mm^2 processes should benefit us all. Both AMD and Apple have good micro-architectures to produce at TSMC, and Intel is mostly behind on process, so if we can keep that race going maybe we can still have ~2x performance every ~2 years for a while longer. I'm looking to switch laptops and was surprised to find out that after 5 years and as many generations I can be right on track for that performance growth.


> maybe we can still have ~2x performance every ~2 years for a while longer

At no point in the past 20 years did we even come close to that.

You're probably thinking of Moore's Law, which refers to transistor count, not performance.


I'm thinking of actual benchmarks. Here's the one I've been looking at:

https://www.cpubenchmark.net/compare/AMD-Ryzen-7-PRO-4750U-v...

The top CPU choice for my current Lenovo T460s was the i7-6600U. Exactly 5 laptop generations later the top CPU choice for the T14sG2 is the Ryzen 7 5850U. The ratio of performance between those two is 5.82. That's 42% per year or almost exactly 2x every 2 years on the same form factor and TDP. Last year's 4750U was at 4.4x after 4 years.

Intel has only been able to do 1.6x every 2 years in the same comparison. Their single core performance is comparable but they've only been able to deliver 4 cores versus AMD's 8.
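
For the curious, the annualized rate behind those numbers (a 5.82x jump over 5 laptop generations, treated as 5 years):

    /* Annualized performance growth implied by a 5.82x jump over 5 years.
       Build: cc growth.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double ratio = 5.82, years = 5.0;
        double per_year = pow(ratio, 1.0 / years);   /* ~1.42, i.e. ~42% per year */
        double per_two  = per_year * per_year;       /* ~2.02, i.e. ~2x every 2 years */
        printf("%.2fx per year, %.2fx per 2 years\n", per_year, per_two);
        return 0;
    }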


CPUMark is a flawed benchmark, but... in 2004 the average score was 385 on desktop. In 2012, it was 4626. So that's 12x in 8 years, vs. 16x from doubling every 2 years. I'd say we used to come close.

For the past 5 years, it's been doubling about every 3 years.
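
A quick check of those figures under the same compound-growth assumption:

    /* Doubling time implied by CPUMark going from 385 (2004) to 4626 (2012).
       Build: cc cpumark.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double growth   = 4626.0 / 385.0;            /* ~12x over 8 years */
        double per_year = pow(growth, 1.0 / 8.0);    /* ~1.36x per year */
        double doubling = log(2.0) / log(per_year);  /* ~2.2 years per doubling */
        printf("%.1fx total, doubling every %.1f years\n", growth, doubling);
        return 0;
    }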


~


> 2004: 385

> 2005: 770

> 2006: 1540

You are doubling every year, not every two years.


Ah, you're right! Thanks


I would think the chiplet strategy that AMD used is also a benefit compared to Intel's more monolithic design when it comes to yields.


I am curious how much of the process delay in Intel came from having chip design and manufacturing under one umbrella.

If AMD wanted the latest process they had to deal with the limits TSMC said they could achieve. Intel's chip team could simply push back on their manufacturing team to improve reliability via leadership.

If Intel had gone with chiplets would 10nm have been delayed?


What makes you think so?


I've been wanting to build a new desktop with a 5950x ever since it came out, and have been completely unable to find anybody who has it in stock except for scalpers at a 30%+ markup. This is fairly well known information.


I think that everyone building their own computer has to compromise on something right now. I couldn't find any ECC memory when I built my machine last year, so I live without it. It sucks, but global pandemic and all. Next time around it will be better. As much as we hate scalpers driving up the price of computer parts, the cost does translate to availability. So you can pay $2000 and have the latest thing right now, and maybe by the time the price drops $500 because of increased availability you will have earned more than $500 from the increased productivity. Or maybe no amount of performance will make you $500, and you just want a $35 Raspberry Pi to cut your losses.

(Once you're willing to spend $2000, you might as well just get a 3970 and have twice as many cores, if you can tolerate higher latency in exchange for higher throughput. I have a 3970 and definitely benefit from the throughput more than I would benefit from lower latency, even if it's quite noticeable. For example, some games are bottlenecked by the CPU, which is annoying. But, if you want consistent 360 fps in every game, you're spending an infinite amount of money for no financial gain anyway, so the cost-based reasoning goes out the window.)


> (Once you're willing to spend $2000, you might as well just get a 3970 and have twice as many cores, if you can tolerate higher latency in exchange for higher throughput. I have a 3970 and definitely benefit from the throughput more than I would benefit from lower latency, even if it's quite noticeable. For example, some games are bottlenecked by the CPU, which is annoying. But, if you want consistent 360 fps in every game, you're spending an infinite amount of money for no financial gain anyway, so the cost-based reasoning goes out the window.)

I also love my 3970X, but this really depends how much of what you do is limited by single-thread speed and how much can actually make use of > 32 threads for sustained periods of time. Remember the 5950X is 20-30% faster in single-thread benchmarks.


Seems to be in stock at at least three places [1] here in Norway, at a relatively reasonable price given the $799 price from AMD.

But yeah, of those that have it in stock there's like 1-5 units at each place, and those out of stock don't have a firm date on next delivery...

[1]: https://prisguiden.no/produkt/amd-ryzen-9-5950x-469899


Few of them actually have it in stock on the real page, and I doubt that any of them actually have stock. None of the retailers on this Swedish website have them in stock, and it's been like this since release.

Both CPUs and GPUs from AMD are hard to get hold of; at least you can get CPUs sometimes. Most of the production seems to go to consoles and server CPUs.

https://www.prisjakt.nu/produkt.php?p=5588367


Not sure what your use case is, but I bought a used 3950X for $633 (with tax) in January to use for a home server build. You could buy an X570 mobo and a used 3950X for now, and upgrade to the 5950X if you decide you need it.


MicroCenter[0] has been receiving stock for in-store purchase only.

Antonline[1] has them in stock for shipping in a bundle with a Lenovo gaming monitor.

[0]: https://www.microcenter.com/product/630282/amd-ryzen-9-5950x...

[1]: https://www.antonline.com/Lenovo/Computers/Computer_Displays...


Buy from Europe. Seriously, most stores will ship to the US. The ones without an English UI have better stock.

I have been able to buy everything I've looked for at normal retail prices from European stores.


This is just the general chip shortage problem, right?


There are two flavors of the chip shortage problem:

1. TSMC's leading-edge process is the hottest (coolest, really ;) process, and there's just not enough capacity to meet all the demand for mobile, high-end desktop, GPUs, etc. This predates the main shortage we're talking about. Cryptocurrency mining is a contributor to this one, too.

2. There's a broad shortage of parts made on less cutting-edge processes. Causes: disruption to production from COVID, disruption from automakers churning orders, increased demand for consumer products, and speculation/hoarding.

Of course, #2 made #1 even worse.


For AMD it's also that Zen 3, RDNA2 and the new PlayStation and Xbox consoles all launched at the end of last year and are all competing for the same wafers.


Just look at the prices for the Ryzen lineup. They've gone up by something like 100% in the last year because AMD can't meet demand.


Pretty sure Intel started the pandemic to cause this chip shortage so they could hurt AMD. /s


[flagged]


It will go down if I understood the recent market trends correctly.


It will go up if I understood the recent market trends correctly.


It will change if I understood the recent market trends correctly.


It will stay the same...


I predict it will do none of these things



