Probably because they're server CPUs that can be sold for more. It's too bad if this is true; I really want more PCIe lanes, though Threadripper's 128 lanes are overkill. I'd be fine with 40.
Other than that, a 5950X has plenty of performance for all my imaginable use cases.
Silicon supply and engineering resources are a major issue for HEDT platforms.
Threadripper uses a massive amount of high-quality silicon; Epyc has much better margins.
Supporting HEDT is a living hell for Intel/AMD. Users pair all kinds of consumer hardware with otherwise enterprise-grade gear, and addressing every little PCIe device issue while catering to a very, very small market has very little ROI.
Not really; that's not how Threadripper works, thanks to the chiplet design. It's not necessarily "high quality silicon", since it has a higher TDP budget. But regardless, remember that in this case the 5800X and 5950X are also competing with Epyc for that same 8c chiplet silicon.
For something like Intel's HEDT, yes, you're absolutely correct. That involves HEDT-specific binning, and it eats into Xeon-W profits very directly. But for Threadripper it's not nearly as clear cut. In fact, it seems more like silicon that failed to validate for Epyc, since it uses half the IO die of Epyc (for the non-Pro Threadripper, anyway).
Otherwise you're talking about AMD just taking Ryzen-quality chiplets, putting more of them on a substrate, and selling them at a huge markup.
Take the Threadripper 3970X as a simple example. It's 4x 8c Zen 2 chiplets that can hit a peak of 4.5 GHz at a 280 W TDP. Meanwhile, the Ryzen 3950X is 2x 8c Zen 2 chiplets that hit a peak of 4.7 GHz at a 105 W TDP. So the 3970X is 2x the silicon of the 3950X, but lower-quality silicon, and AMD charged more than 2x for it.
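To make that concrete, here's a back-of-the-envelope sketch using only the figures above. TDP is a crude proxy since it also feeds the IO die and uncore, so treat it as illustrative:

    # Per-chiplet power budget vs. peak boost, from the figures above.
    # TDP also covers the IO die and uncore, so this is only a rough proxy.
    parts = {
        # name: (8c chiplets, TDP in watts, peak boost in GHz)
        "Threadripper 3970X": (4, 280, 4.5),
        "Ryzen 9 3950X": (2, 105, 4.7),
    }
    for name, (chiplets, tdp, boost) in parts.items():
        print(f"{name}: {tdp / chiplets:.1f} W per chiplet, {boost} GHz peak")
    # ~70 W per chiplet on the 3970X vs ~52.5 W on the 3950X, yet it boosts
    # lower -- consistent with chiplets binned below Ryzen's best.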
That's the brilliance of chiplets and Threadripper. It's Ryzen-class chiplets with half of an Epyc IO die (the IO die being 12nm GloFo means it isn't competing for expensive leading-edge capacity anyway). It's binning they're already doing for their primary product lines, not a specialized HEDT-specific process.
Sure, but I'd be paranoid about not having ECC RAM, whereas Threadripper supports it officially. But I'm sure there's a curated list of Ryzen motherboards that do in fact support ECC...
The Gigabyte B550 VISION motherboards claim to support ECC both on the box and on the "key features" page[1]:
Reliability
ECC Memory
To protect against data corruption
Error Correction Code (ECC) memory corrects errors in your data as it passes in and out of memory to ensure reliability for critical applications.
I just tried out a Vision D-P (B550), and the IOMMU groups were terrible: all PCIe cards were in one big group, so good luck passing specific hardware through to a VM. It might not matter in a workstation, or it might be fixed by a BIOS update, but shrug.
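For anyone who wants to check their own board before committing to passthrough, here's a minimal sketch that walks sysfs and prints each IOMMU group (Linux with the IOMMU enabled; no extra packages needed):

    #!/usr/bin/env python3
    # List each IOMMU group and the PCI devices in it, straight from sysfs.
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {', '.join(devices)}")

For clean VFIO passthrough you want the device you're handing to the VM in a group by itself, or at least not sharing one with anything the host still needs.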
I went with an Asus instead. The Asus manual says "ECC memory support varies by CPU", which is questionable for the 5700G APU I went with, and I didn't feel like digging to find out whether harder-to-obtain ECC UDIMMs would be beneficial. (This was for a router / personal server. My workstation is ECC on a libreboot/KGPE.)
Is it all-AMD? If it is, could you link to your setup? I am looking to build an all-AMD workstation this year and I need all the PCIe lanes that I can get.
This would usually be a tangent, but it's quite apropos to the main topic. My workstation is the last amd64 hardware without ME/PSP. It's a few generations old: dual Opteron 6380s with 112 GB of RAM. I couldn't get the memory to train with 8 sticks of RAM, but it's solid and reliable with 7. Last time I stuck a Kill-a-watt on it, I think it draws around 140W at idle. An alternative to it would be something like the Raptor Talos.
It's got 5 usable PCIe slots (including the physically flipped "PIKE" one) and, I think, a total of 40 PCIe lanes across both sockets, which I believe you could theoretically split out with the maximum amount of bifurcation, since the BIOS is Free Software.
My new Ryzen build is for a personal server that sits at a lower trust level and targets around 30 W draw. B550 motherboards generally have around 28 PCIe lanes going to expansion slots (20 from the CPU, 8 from the chipset). The best you can do for slot count is to bifurcate the main "graphics" x16 into 3 x4 slots (maybe 4 with a non-APU) using a "VROC" card, but the BIOS has to support that (Gigabyte and Asus generally seem to, but check the manual). I believe X570 motherboards have a few more PCIe lanes coming from the chipset.
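If you do go the bifurcation route, one way to confirm what each slot actually trained at is to read the negotiated link width and speed out of sysfs. A minimal sketch (Linux; devices without these attributes are skipped):

    #!/usr/bin/env python3
    # Print the negotiated PCIe link width/speed for every device that has one.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        width = dev / "current_link_width"
        speed = dev / "current_link_speed"
        if width.exists() and speed.exists():
            print(f"{dev.name}: x{width.read_text().strip()} "
                  f"@ {speed.read_text().strip()}")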
If you really need more PCIe lanes, I hear Threadripper/EPYC is the way to go. But I don't have any personal experience. If you just need more PCIe slots, you can find PCIe x1 -> four slot switch-based splitters inexpensively on ebay/aliexpress.
I see. Thanks a bunch; definitely bookmarking your comment. I have a few more hardware requirements, but your setup might actually turn out awesome for a server.
> Last time I stuck a Kill-a-watt on it, I think it draws around 140W at idle
Another tangent (sorry!): do you know of good power meters that don't have to sit right at the outlet? I really need several, but I want them on extension leads so I can mount them on my wall and monitor them in real time, instead of sticking my ass in the air while crawling under my desk (where all the outlets are) just to see the measurements.
I'd probably just get an appropriate power strip and some Kill-a-watts. Alternatively, you could look at smart devices with energy monitoring. The TP-LINK HS110 connects to wifi and measures current, power, and total energy with a community-documented local-network protocol [0], but it has been discontinued and prices shot through the roof. The replacement is the KP115, but I have no idea whether it still uses the same protocol. And I have no idea how accurately any of these devices handle weird (e.g. non-resistive) current waveforms.
[0] it wants to connect to "cloud" as well, but works fine without giving it Internet access.
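For what it's worth, the HS110's protocol is simple enough to query yourself: per community reverse engineering (e.g. the softScheck tplink-smartplug write-up), it listens on TCP port 9999 for JSON payloads obfuscated with an autokey XOR (initial key 171) and prefixed with a 4-byte big-endian length. A minimal sketch; the address is a placeholder for your plug's:

    #!/usr/bin/env python3
    # Query a TP-LINK HS110's energy meter over the LAN.
    import json, socket, struct

    def encrypt(plain: bytes) -> bytes:
        key, out = 171, bytearray()
        for b in plain:
            key ^= b          # cipher byte = previous cipher byte XOR plaintext
            out.append(key)
        return bytes(out)

    def decrypt(cipher: bytes) -> bytes:
        key, out = 171, bytearray()
        for c in cipher:
            out.append(key ^ c)
            key = c
        return bytes(out)

    def query(host: str, cmd: dict) -> dict:
        payload = encrypt(json.dumps(cmd).encode())
        with socket.create_connection((host, 9999), timeout=5) as s:
            s.sendall(struct.pack(">I", len(payload)) + payload)  # 4-byte BE length prefix
            (length,) = struct.unpack(">I", s.recv(4))
            data = b""
            while len(data) < length:
                data += s.recv(length - len(data))
        return json.loads(decrypt(data))

    # "192.168.1.50" is a placeholder; substitute your plug's LAN address.
    print(query("192.168.1.50", {"emeter": {"get_realtime": {}}}))

Whether the KP115 speaks the same dialect is exactly the open question above, so I'd verify that before buying several.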
Thank you. Any observations on whether the CPU performs worse? It's a widely circulated meme that AMD CPUs hugely benefit from the fastest RAM you can put in, whereas ECC is a bit slower.
ECC is not inherently slower. Registered modules are slower (by the added latency of the register), but ECC-or-not and registered-or-not are two orthogonal features of a DIMM, and in theory all four combinations are possible. That said, unregistered ECC is somewhat rare (and for some reason often ridiculously expensive), and registered non-ECC probably isn't manufactured at all (it wouldn't make much sense).
I honestly don't care if my machine miscalculates; I don't run any important workloads. All content is in some cloud, all code and configuration are on GitHub, and by design everything is hashed and stored in multiple locations.
EDIT: miscalculates as rarely as a computer normally does
Cheers, though I'd say I don't like that ECC isn't the standard. Nothing but market segmentation stops ECC from going mainstream, other than that it would make a lot of server gear redundant for many applications. But as I said, it really doesn't matter for my use case. For something that should stay online for extended periods, though, go ECC.