Is it too much to ask that tech journalists proofread their articles before hitting publish?
> AMD unveils world's most powerful desktop CPUs
This article contains zero benchmarks, or anything other than the core count and TDP, to substantiate the claim that these are the "world's most powerful desktop CPUs".
Can we please stop giving ZDNet impressions for this tripe?
The Anandtech article from 4 days ago on the Threadripper announcement at least included AMD's slide deck. 
Is there something wrong with Intel management/culture that's allowing other firms to run rings around them?
Could they have seen it coming? Absolutely, and they could have had a new architecture ready earlier or adjusted their process plans to de-risk, but again, multiple years of uncontested dominance in their main business tends to make things that aren't next quarter top-line a lower priority. And I think that really is at the heart of the issue with their projects failing so frequently. Intel's pursued anti-competitive practices for decades, and when it works, they coast along without urgency, letting marketing call the shots, and when it doesn't they use their economic moat to rush a solution.
But this time, with everything running late, Intel is going to face several quarters ahead without having a real response on the CPU front. They are not going anywhere, though. Their foundry channel is needed to meet global demand for new chips - a good deal will still find buyers. They still have plenty of existing agreements to work with, and can negotiate their way through a lean patch. And they have their new GPU stuff coming soon which should hopefully give them a piece of a different market. There will be plenty of stress to re-org along the way, though, so Intel could come out of this looking very different.
That's how competition works. You are constantly making decisions, each of which could bury you, even if you were a total winner only yesterday.
Intel bet on its own fabs and a single-die design. AMD outsourced fabrication and focused on a chiplet design. And AMD gained the benefit.
Though Intel still has some advantages, and will most likely move to a chiplet design soon, so we should still see very tight competition.
This is what happens when an incumbent company delays new products because of mismanagement or to maximize profits. (See also, Apple)
A CPU isn't like a database; it's easy for customers to switch to a competitor.
Weren't AMD CPUs a good alternative to Intel for most consumers? If I remember correctly, Athlon, Phenom, and FX were pretty good for the price. Or are you talking about earlier times?
edit: "good alternative" perf/price-wise; yes, Intel used to have the advantage in raw performance, but their pricing made AMD CPUs look very interesting for most consumers.
So between about 2010 and the introduction of Ryzen in 2017, there really was no alternative to Intel if you were building a PC that was either fast or energy efficient. In terms of performance, the 1000 & 2000 series Ryzens roughly attained parity with Intel, while the 3000 series now pulls ahead in many multithreaded and some singlethreaded disciplines, and definitely has better performance-per-watt under load.
Ryzen still draws significantly more power than Intel's CPUs when idle though - the PCIe 4.0 transition has counteracted the improvements from the 7nm shrink in this regard, and the APU variants lagging the GPU-less SKUs by almost a year in terms of process and microarchitecture means you also have to power budget for a discrete GPU, which adds a few idle watts.
(The most power-efficient-at-idle Ryzen build I'm aware of is the ASRock DeskMini A300, which uses no chipset, combined with a low-end APU - this draws about 7W when idle. In terms of performance and expandability, this system isn't any better than a NUC, which draws somewhat less power when idle. You can't buy a full-size A300/X300 motherboard though, for some reason, even though this would actually be quite an interesting setup.)
The highly reputable c't Magazine used it in one of their recommended PC builds recently. They tend to be very thorough and fussy about issues, so I suspect they didn't run into whatever issue you're talking about. I believe they did mention that you need to watch out for BIOS support for the 3000 series CPUs (and ask the seller to update it if necessary and you don't have an older AM4 APU handy), as with any motherboards that predate release of a particular CPU you're trying to use.
Then Intel Core came out in 2006 and it was game over for AMD until Zen was introduced in 2017.
Between 2006 and 2016 AMD looked half-dead most of the time.
Projects fail. It's expected. The only way to have no failing projects is not to create experimental projects.
But AMD is definitely catching up and that's a good thing.
Yet in the typical use case (desktop PC, most cores idle most of the time), power consumption will be far lower.
What really matters is the graph of "how much computation can I get out of this thing in 10ms bursts, 1 second bursts, and 10 minute bursts?", with a specific cooling setup.
¹: it needs to be noted that in recent years, Intel's TDP has been getting "much more relative" than AMD's
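That burst framing can be sketched as a toy model (all numbers here are made up for illustration, not specs of any real chip): a CPU boosts at full power until its thermal headroom is exhausted, then throttles to whatever the cooling setup can remove continuously.

```python
# Toy model: boost until thermal headroom runs out, then throttle.
# All parameters are hypothetical, chosen only to illustrate the idea.
def burst_work(duration_s, boost_w=200.0, sustained_w=140.0,
               headroom_j=600.0):
    """Total joules of compute budget delivered over `duration_s`.

    boost_w:      package power during boost
    sustained_w:  power the cooler can dissipate indefinitely
    headroom_j:   thermal capacity available before throttling
    """
    # Headroom drains at the rate boost power exceeds cooling capacity.
    boost_time = headroom_j / (boost_w - sustained_w)
    if duration_s <= boost_time:
        return duration_s * boost_w
    return boost_time * boost_w + (duration_s - boost_time) * sustained_w

for t in (0.01, 1.0, 600.0):
    print(f"{t:>7.2f}s burst: average {burst_work(t) / t:.0f} W of budget")
```

With these invented numbers, short bursts get the full 200 W boost, while a 10-minute run averages out close to the 140 W the cooler can sustain - which is why the same CPU scores very differently depending on burst length and cooling.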
The marketing communication is amusing: "Optimized for water cooling": https://images.anandtech.com/doci/15062/AMD%20Fall%20Desktop...
Threadrippers are at least HEDT; I think this TDP is not unusual. Paradoxically, AMD's marketing is likely more reliable when they specify that the i9-9920X wall power consumption is up to ~300W (https://images.anandtech.com/doci/15062/AMD%20Fall%20Desktop...), rather than Intel's own spec (https://www.intel.com.au/content/www/au/en/products/processo...).
That's inaccurate. Air cooling solutions are readily available for Threadripper.
Water cooling systems simply move heat to a more convenient place to air cool it away (thus the radiators and fans at the destination). The only limiter on air cooling (and not water) is weight/size on the CPU socket (or amount of air moving over the fins of the heatsinks).
Some examples of retail air coolers that can cool it out of the box already:
Now the hypothetical ceiling on water cooling is undeniably higher. You could literally use a jet engine as a fan to cool the water, or send it through liquid nitrogen. But we aren't there yet at 280 watts. At the current wattage, air cooling remains viable, with some trade-offs (e.g. huge air coolers, requiring huge cases, plus strong in-case airflow).
You definitely don't "need water cooling," yet. At least not for threadripper.
You want a steep temperature gradient at the limited chip surface to maximize heat flux. Pumping a liquid with high heat capacity to it steepens the gradient where you need it. Cooling the liquid on the other hand can happen over a much larger surface area with shallower gradients and thus lower heat flux.
Note that while the heat pipes in air coolers do use convection, they do so passively; liquid cooling solutions are active.
> You want a steep temperature gradient at the limited chip surface to maximize heat flux.
This is true of both, and is really just a convoluted way of saying "if you don't convect heat fast enough, it won't work." Yeah, undeniably true...
> Pumping a liquid with high heat capacity to it steepens the gradient where you need it.
Where you need it is at the surface of the CPU package. Both solutions provide adequate heat capacities at that location, that's why they fundamentally work. One uses heat pipes and fins directly above with fans/air carrying the heat away, the other moves the heat, and does the same elsewhere.
> Cooling the liquid on the other hand can happen over a much larger surface area with shallower gradients and thus lower heat flux. Note that while the heat pipes in air coolers do use convection they do so passively, liquid cooling solutions are active.
This is a basic explanation of how water cooling works, it doesn't really add any specific insight into the problem scope.
Water cooling systems allow heat to be moved away from the chip's surface more efficiently than pointing a fan at it can. Once moved away, the heat can be dissipated by air cooling a much larger radiating surface.
I'm disappointed AMD have chosen to confuse people with their marketing material. I guess these are expensive high-end CPUs aimed at the type of enthusiasts who see the idea of requiring water-cooling as a positive.
Laptop manufacturers have always used the cheapest possible cooling, but since Sandy Bridge, it has become painfully obvious that the TDP is severely understated.
Thermal throttling has become acceptable, and while it's impressive that Intel CPUs last years at ~90 degC, I'd rather they be honest with their actual requirements to get the full performance out of a given chip.
Many laptops go over 90 degrees even at the guaranteed base speeds (no TurboBoost), in everyday use, not even multihour rendering or anything.
Large numbers of cores have never been useful for gaming, but overall thread performance has. Right now the major bottleneck for high end gaming is GPUs because display pixel counts have gone through the roof with high frame rate VR and 5k monitors, but keep in mind that the current most powerful graphics card for games (Nvidia 2080 Ti) is over a year old, still costs over $1000, and is not about to be replaced yet because, just like Intel, NVidia hasn't had competition at the top in a long time. If AMD can bring this success to the graphics space, maybe that will change too.
I suspect it won't be long until the empire strikes back in this case though: NVIDIA also uses TSMC for fabbing its GPUs, so they too will soon be shipping 7nm GPUs, with all the intrinsic advantages of that node. Of course, pricing of those new products will hopefully be strongly influenced by AMD's resurgence. (And, supposedly, Intels re-entry into the GPU market, but I remain skeptical this will make much of a dent for a few years yet.)
The main reason for this is most likely that all current-gen game consoles are using 8 CPU cores, so this is basically the "baseline" for games right now. If the next generation of game consoles increases CPU core count, the core usage for games running on PC will most likely also go up accordingly.
Does anyone have any concrete experience?
I was hoping to build a workstation with ECC memory, but it appears only the EPYC CPUs have certified support for ECC.
The Xeon W appears to be the most cost effective for guaranteed ECC memory.
64GB DDR4-2666 ECC UDIMM runs perfectly on it, and the kernel reports ECC support.
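For anyone wanting to reproduce the "kernel reports ECC support" check: on Linux, the EDAC subsystem exposes a sysfs directory per memory controller it is monitoring. A rough sketch (heuristic only - the path is the standard EDAC location, but an empty result can also just mean the EDAC driver isn't loaded):

```python
from pathlib import Path

def ecc_active(edac_root="/sys/devices/system/edac/mc"):
    """Rough check for ECC error reporting on Linux: the EDAC
    driver creates one mcN directory per memory controller it
    watches. Absence may also mean the driver isn't loaded."""
    root = Path(edac_root)
    return root.is_dir() and any(root.glob("mc[0-9]*"))

if __name__ == "__main__":
    print("ECC error reporting active:", ecc_active())
```

`dmesg | grep -i edac` is the other quick sanity check, since the driver logs which controllers it attached to at boot.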
Day-one memory support was pathetically bad with Threadripper, but it has gotten better. It improved even more with Threadripper 2000.
If Ryzen 3000 is any guide, the gaping memory compatibility gulf between Intel and AMD will probably be closed to a slight gap by the time TR 3000 releases.
As far as I'm aware, every X399 motherboard vendor supports and warranties ECC support on the platform, but only if you use memory on their QVL at its rated speeds.
sTRX4 will likely be no different.
As far as cost-effectiveness, a W-3175X ($2,978.80) and Dominus Extreme ($1,868.74) is indeed $552.45 less expensive than a Supermicro MBD-H11SSL-NC with EPYC 7702P ($5399.99), but the 7702P is about 37% faster multi-threaded, and about 13% faster single-threaded.
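Checking that arithmetic (using only the prices and the ~37% multi-threaded figure quoted above), the EPYC still comes out ahead in throughput per dollar despite the higher sticker price:

```python
# Prices and perf deltas as quoted in the comment above.
intel_platform = 2978.80 + 1868.74   # W-3175X CPU + Dominus Extreme board
epyc_platform  = 5399.99             # EPYC 7702P + Supermicro H11SSL-NC

savings = epyc_platform - intel_platform
print(f"Intel platform costs ${savings:.2f} less")   # $552.45

# At ~37% faster multi-threaded, relative MT throughput per dollar:
mt_per_dollar_ratio = 1.37 * intel_platform / epyc_platform
print(f"EPYC MT throughput per dollar: {mt_per_dollar_ratio:.2f}x Intel's")
```

So the ~11% platform-price saving on the Intel side is outweighed by the ~37% throughput gap, at least for multi-threaded work.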
But also no hurry to release it, as even the 24 core wipes the floor with anything Intel has to offer.
* in synthetic benchmarks that use every last core
edit: my bad, I mistook this for a technical audience
But a Zen 2 core at 4.5 GHz is also faster than a Skylake core at 4.5 GHz (about the peak boost on Intel HEDT) in single-threaded tasks.
Multicore performance gets more and more important with time.
No? Above 2-4 cores it's just marketing hype, like it was ten years ago already, and today if you need parallel processing you have vastly more powerful resources to tap.
Sure, this AMD CPU might be faster at 3D rendering, video encoding, or other tasks that use 4+ cores, and that might be useful if you truly can't move them onto a GPU. But the truth is that at the top end, many-core architectures are getting squeezed out by streaming processors, and the workloads with a hard dependency on a many-core single-node general processor are rarer by the day.
It's not only about 3D. We're talking everyday programmer productivity, partially achieved by less waiting around when compiling your projects.
> the Linux kernel build process
For what it's worth, I migrated from a consumer-grade i7 CPU to a workstation-grade Xeon and the differences in compile speeds in my everyday projects (less than 400 files) are still significant and very noticeably in favour of the Xeon.
Linux kernel compilation is just a benchmark that multiplies such differences between CPUs. Thus the buyers can -- hopefully -- make a more informed decision.
It also has tooling for "watching" test directories and auto re-running tests when certain files are changed -- which utilises incremental compilation under the hood, and there is indeed no visible compilation performance difference there.
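Back-of-the-envelope, Amdahl's law captures both observations: a full build with many independent translation units keeps gaining from more cores, while a one-file incremental rebuild gains essentially nothing. The 95% parallel fraction below is an illustrative assumption, not a measurement of any particular project:

```python
def amdahl_speedup(cores, parallel_frac):
    """Ideal speedup when `parallel_frac` of the build (compiling
    independent translation units) scales with cores and the rest
    (configure, the one hot file, linking) stays serial."""
    return 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)

# A full build that is ~95% parallelizable keeps gaining with cores,
# while a one-file incremental rebuild (~0% parallel) gains nothing.
for n in (4, 8, 24, 64):
    print(f"{n:>2} cores: {amdahl_speedup(n, 0.95):4.1f}x full build, "
          f"{amdahl_speedup(n, 0.0):.1f}x incremental")
```

It also shows the diminishing returns: going from 4 to 64 cores only takes this hypothetical build from ~3.5x to ~15x, because the serial tail increasingly dominates.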
To be fair, Ice Lake/Sunny Cove seems to introduce a significant IPC improvement. It's currently stuck in low-power dual core CPUs due to Intel's 10nm yield woes. (allegedly quad cores have launched too, but I haven't seen any reviews of devices based on those?) But presumably it will eventually percolate up the product stack and give desktop processors a solid boost too.
So that particular fight doesn't look anywhere near over just yet. (Zen+ to Zen2 was also a decent IPC bump, so no reason to believe AMD have hit a wall in that regard either - I'm sure they're scrambling to make another leap in Zen3/4 that compares favourably with the Skylake->Sunny Cove one to preempt Intel retaking the crown.)