And that's just cheating anyway. Intel's libraries refuse to use some instructions on AMD's CPU that the CPU supports, and instead downgrade to slower legacy versions. Cite.
There are two problems here.
The first problem is that AMD does not provide versions of those libraries tuned for AMD hardware. AMD's investment in software for its own hardware is close to zero, and without software, hardware is irrelevant. AMD has this problem when competing against Nvidia in the GPU space, and against Intel in the CPU space.
The second problem is that buyers of AMD products demand that Intel release its software optimized for AMD products as well, which is just nonsensical. First, Intel is not required to do this and has no reason to do so - if this software doesn't work for your hardware, either buy Intel, or use something else. Second, these people don't expect AMD to provide these libraries, and aren't willing to provide them themselves, so... combined with AMD's culture of not investing in software, nothing will happen.
That link is 10 years old, but these issues are at least 10 years older. That is, AMD has had 20 years to fix this, and nothing has happened.
I mean, this might sound like captain obvious, but Intel and Nvidia have super good profilers for their hardware (VTune and Nsight). AMD provides uProf, which sucks, so if that is what their engineers are using for their libraries, it isn't a surprise that they don't perform well.
When writing optimized code you typically check for CPU feature flags, and profile to verify that the optimized path actually is faster.
In Intel's case, libraries like MKL/IPP don't decide which implementation to use based on the available feature flags but on the CPUID vendor string, and fall back to slow versions even if the CPU supports all the features required for the optimization.
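For reference, feature-flag dispatch is the uncontroversial version everyone here is describing; a minimal sketch using the GCC/Clang builtins (the two kernels are just stand-ins), which behaves identically on Intel and AMD because it never looks at the vendor string:

    #include <stdio.h>

    static void do_work_avx2(void)    { puts("AVX2 path"); }    /* stand-in optimized kernel */
    static void do_work_generic(void) { puts("generic path"); } /* stand-in scalar fallback */

    int main(void)
    {
        __builtin_cpu_init();   /* harmless here; only strictly needed before constructors run */
        if (__builtin_cpu_supports("avx2"))
            do_work_avx2();     /* taken on any CPU advertising AVX2, Intel or AMD */
        else
            do_work_generic();
        return 0;
    }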
There's nothing stopping Intel from providing fast libs that are optimized for their hardware, profiled on their systems, and utilize all the secret sauce they want while still being more or less "fast" on AMD.
But all that said, those libs aren't especially compelling when there are alternatives that may be a bit slower on Intel but kick ass on AMD, now that a non-zero segment of users is transitioning to the better-value processors.
The CPU feature flags indicate if a feature is available, but don't indicate if it's advisable or fast -- that's why Intel built a table for their hardware, and they'd need a table for AMD hardware too, except they don't care, because it's not their hardware. AMD or someone could build that table and patch it in, or try to convince Intel to include it, but expecting Intel to build it themselves is wishful thinking at best.
They very much do care, and that is amply evident. Every other compiler uses feature detection or developer intention (e.g. if I say use AVX, use it or crash). Intel actively and intentionally -- with intent -- sabotages performance on AMD devices.
This is where the market decides, however, and it's why the intel compilers and libraries are fringe products, and their software division is effectively a disaster. If you have a specific HPC setup with Intel processors maybe you'll cross compile with them, but most simply steer far clear of them. For a while Intel sold them as a software vendor -- not as some sort of coupled processor support -- and many learned they aren't trustworthy.
ICC has tons of built-in functions like _mm_div_epi32, _mm256_log10_ps, _mm_sin_ps and many others. These are not hardware intrinsics. These are library functions. Other compilers don’t need to feature detect because unlike Intel’s they don’t have these functions in their standard libraries: no exponents/logarithms, no trigonometry, no integer divides, nothing at all on top of the hardware instructions.
I mostly use VC++ and I had to implement these functions more than once. Funny enough, AMD's SSEPlus helped even on Intel, even though AMD abandoned the library a decade ago.
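For anyone curious what "implement these functions" ends up looking like, here is a rough sketch of a stand-in for ICC's SVML _mm_div_epi32 on compilers that don't ship it (the helper name is mine; the transcendental ones like _mm256_log10_ps are of course much more work than this):

    #include <emmintrin.h>   /* SSE2 */
    #include <stdint.h>

    /* Hypothetical replacement mimicking SVML's _mm_div_epi32:
       per-lane signed 32-bit division done with scalar divides. */
    static inline __m128i my_mm_div_epi32(__m128i a, __m128i b)
    {
        int32_t va[4], vb[4], vr[4];
        _mm_storeu_si128((__m128i *)va, a);
        _mm_storeu_si128((__m128i *)vb, b);
        for (int i = 0; i < 4; ++i)
            vr[i] = va[i] / vb[i];      /* scalar divide per lane */
        return _mm_loadu_si128((const __m128i *)vr);
    }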
I can see MKL not automatically picking the fastest path, but they should allow you to manually pick.
In fact they do; this flag lets you pick the AVX2 libs, and speed increases up to 6.6x in my tests with matlab.
Err... Correct me if I'm wrong, but wasn't the check literally just a string match on "GenuineIntel"?
No, not at all.
They want to run the Intel-optimized version. But instead it checks the CPUID vendor string and runs a deoptimized version.
So the answer to your question is "they don't need to", but your question has nothing to do with the problem.
Intel's response was roughly "We don't trust CPU flags, so we have kernels for each specific Intel chip, and a generic kernel for non-Intel chips"
"[Update 2019:] The newest versions of Gnu and Clang C++ compilers are now optimizing better than the Intel compiler in my tests."
Good reason to avoid Intel compiler even more.
While Intel isn't REQUIRED to do so, it absolutely is a sensible thing to do. If my customers need something, I want to provide it to them. If my customers want me to add a feature that makes their products run well on competitors' CPUs, then it's in my best interest to make that happen, if I can get the other CPU maker to give me the data I need. This makes my direct customer happy, makes me look good for prospective customers, and builds loyalty to my brand. It also makes my customer happy by making THEIR customers happy, which keeps them customers of mine.
Yes, it's a weird mindset for a company that employs a lot of software engineers.
When there are only two viable options in a marketplace, offering improvements to make your direct competitor's product better than yours is shooting yourself in the foot.
Your point about happy customers would make sense if Intel was selling all manner of service contracts and extraneous engagements with its customer base, but is it?
I'm pretty sure the chip business is speed / cost and that's all.
What's ironic is that AMD knows better than any tech company I can think of exactly what the rewards are for coming second in the chip game.
However, ICC does support several advanced optimisations whereby it literally schedules instructions based on expert knowledge of the architecture's cycle latencies and number of ports. LLVM and GCC have similar tables for overall stats, which I think AMD contributes to, but ICC can also factor in memory and cache bandwidth/latency to schedule instructions; this by definition would need to be Intel-specific, unless AMD were happy to give Intel this information.
So some of the optimisations are by definition Intel-only.
The question really is how fair should the fall-backs be.
The benefits of those optimizations may or may not be Intel-only, but the machine code emitted by the compiler is not Intel-only.
Keep in mind that ICC supports dynamic code paths for different -march targets, so in theory you could have code for Intel 6th Gen, 7th Gen, 8th Gen, etc. all dynamically switchable at run time.
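GCC (and newer Clang) can get you similar runtime dispatch without ICC; a minimal sketch using GCC's target_clones attribute (the dot-product is a made-up example), where the resolver picks the best clone from the CPU's feature flags rather than its vendor string:

    #include <stddef.h>

    /* Three clones are emitted; glibc's ifunc mechanism selects one at load time. */
    __attribute__((target_clones("default", "avx2", "avx512f")))
    double dot(const double *a, const double *b, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; ++i)
            s += a[i] * b[i];           /* auto-vectorized differently per clone */
        return s;
    }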
Given how a modern high-perf processor works, I don't expect that at all. Moderately bad performance in some extremely rare cases, and comparable performance in most, is WAY more probable. Better performance in some cases is also possible.
If you don't know the microarchitecture well enough, you can also just look at benchmarks of Zen processors. They mostly run code not tuned for them, and for some loads I expect the code was actually tuned for Skylake, given that all desktop Intel processors have been on the Skylake microarch for a non-trivial number of years. It performs well on Zen.
Whether it's to do with cache associativity, different cache / instruction latencies or number of ports between CPUs or things like false-sharing, it happens enough that it's worth doing per-CPU optimisations.
One of the reasons Zen 2 runs code likely not directly optimised for it so well is that AMD's microarch is now quite similar to Core (at least within the chiplets).
Previously, with things like Bulldozer, that was most definitely not the case, and you needed quite different instruction scheduling code to perform even moderately well on AMD machines of 5/6 years ago.
That's actually what matters and drives the market (the capability to run very diverse general-purpose loads with reasonably high efficiency). If expert hand-tuning for each microarch (and actually for the whole computer around it) were required, Cell-like things would have won, coherency would be low, and so on.
You don't have that; the whole Earth uses x64 for general-purpose high perf, or other ISAs where the microarch is now very similar for everybody. Oh yes, you will still find some differences and tuning points here and there, but the general principles are all the same, and as a rule of thumb any precise tuning will be as fragile as the gains it gives are impressive. Robust good-enough perf is WAY more important. You don't want to fine-tune all your programs again just because you went from 2 memory channels to 4.
I mean, it is well known that we are at the point where random code-offset changes give more perf diff than fine assembly adjustments in some cases, and there is even tooling that tries diverse layouts to empirically find out whether the perf difference between two snippets is down to very indirect factors or one is intrinsically better than the other. And except for VERY specific applications, there is absolutely no economic sense in trying to squeeze the last bits of perf out of a specific computer by hand-tuning everything at a low level. Those modern beasts do it dynamically largely well enough, in the first place.
Now I understand that this art still exists and TBH I sometimes even practice it myself (moderately), and if you actually are going to deploy some cool tricks to a fleet of similar servers, that can make sense. But in the long run, or even just the medium run, the improvements of the CPUs are going to "optimize" better than you do, like usual. So while I'm not saying "don't micro-optimize ever", I insist that Zen 2 is extremely decent, very similar to Skylake (not just Core; even the state-of-the-art Skylake improvements of it - Ice Lake doesn't really count yet), even if it also has drawbacks and advantages here and there, and the general-purpose "enthusiast" benchmarks simply reflect that. And some of the loads in those benchmarks are actually programmed by quite expert people too.
Now if you have really, really precise needs, fine, but that's kind of irrelevant; you could also say "oh but all of that is crap, I need a shitload of IO and my mainframe is really cool for that, plus I've been an expert on this family for 20 years and know all the details of where it shines". Yeah, that can be an existing situation too. But not really where the fight is playing out right now, for most people.
The large majority of the code people actually run isn't written by Intel or AMD. We use benchmarks for comparison because it gives people an idea of the relative performance of the hardware. Reviewers are not going to test the hardware against every individual piece of line-of-business software or in-house custom code that people will run on the hardware in real life, so instead you get a sample of some common categories of computer code.
Optimizing the benchmark for one architecture and not the other is cheating because it makes it no longer a representative sample. Real third party code is generally not going to enable instructions only for Intel and not AMD when they exist (and improve performance) on both, it will either use them on both or not at all.
That is just not true; they work on the open-source toolchains like LLVM and GCC, and hopefully things will catch up. And as far as I can tell, things are getting much better in 2019.
Intel has software engineers for pretty much every application. AMD has very few software engineers.
It would probably need to be a lot of money, though.
But yes, they are generally way behind in this area, and in my domain (genomics) this is a serious barrier to adoption. I've been waiting for years for good-enough AMD-optimized linear algebra libraries, but distribution of Linux binaries statically linked to Intel MKL is still the obvious best choice as of November 2019, and that's a shame.
Depending on the benchmark, I've seen up to 6.6x improvement in matlab on the AMD Rome/Zen2 chips
Voting with wallets counts, people! If you want to see more AMD laptops, buy them!
My current laptop, an HP Pavilion 2000, has an AMD APU as well, but it's from like a decade ago and it's been showing its age for quite a while, and I needed something newer.
Ryzen 5 3550H, BTW
It's not that difficult to port Embree to other architectures, at least for the 4-wide stuff: you just need intrinsics wrappers, and a bit of emulation for the instructions PowerPC doesn't support.
afaict, the thing uses multi-templated, indirected, hardcoded asm inlines instead of intrinsic calls - it is not "that difficult", but it is by no means simple, and they've done precisely zero favours to anyone trying. They've really gone out of their way, with Embree, to make it crazy hard, if not impossible, to fully activate the built-in SSE/MMX-to-AVX compatibility shim headers GCC ships with, where they even can be.
This helps quite a bit with Matlab (which uses MKL) as well.
For Blas/Lapack like libraries look for Blis and libFLAME.
Gcc-9.2.0 isn't bad, but if you want Zen2/Rome/Ryzen 3xxx optimizations I'd recommend AMD's AOCC, which is a tuned version of LLVM (including C, C++, and Fortran).
If you need FFTs look at amd-fftw.
For basic low-level math (pow, log, exp, exp2 and friends) look at the AMD Math Library (libM).
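The nice thing about BLIS (and OpenBLAS, for that matter) is that code written against the standard CBLAS interface doesn't care which implementation you link; a minimal sketch, assuming whichever library you install provides cblas.h (for BLIS that means building its CBLAS compatibility layer):

    #include <cblas.h>   /* from MKL, OpenBLAS, or BLIS's BLAS/CBLAS layer */

    int main(void)
    {
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {0};

        /* C = 1.0 * A*B + 0.0 * C, with 2x2 row-major matrices */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);
        return 0;
    }

Only the link line changes between implementations, which makes comparing MKL vs BLIS on a given Threadripper/Epyc box fairly painless.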
It may be not-nice for Intel to use this form of "DRM" to lock their high-performance compiler to Intel chips, but they don't owe AMD users a high-performance compiler.
Now, if vendors are shipping pre-compiled software that has the enabled-for-Intel-only fast paths and don't offer AMD-optimized versions, those vendors are misbehaving (intentionally or not).
If AMD have made different choices about AVX implementation, then benchmarking becomes difficult.
Intel benchmarks for sustained AVX512 load (HPC measuring contests) cannot be used to extrapolate for normal mixed loads (single or short bursts of AVX512 instructions).
Edit: are there better links on the true costs of AVX512?
Also see: https://blog.cloudflare.com/on-the-dangers-of-intels-frequen...
From the AnandTech review, 3D particle test was showing AVX512 effect nicely:
The 10980XE had a 3.9x per-core speedup compared to the 3970X when using AVX512.
So for some scientific computing purposes (maybe game physics?) AVX512 is worth it.
Not game physics, as it puts the CPU in a lower speed regime, so it'd have negative implications for the rest of the game's performance. So far, the fact that AVX512 requires this lower speed (due to thermals) is an implementation detail, and it could be expected that newer processes (Intel's 10 or 7 nm?) would allow AVX512 tasks to run at full speed.
Until that happens, and everyone has AVX512 (because it'd be a massive fail to have a game that requires you to have a HEDT Intel processor to play), it'd be a nice gimmick to have on very specific tech demos, and performance sensitive scientific code that you know will run on a certain machine with certain characteristics.
Game programmers will put time into making slow CPUs faster. Outside tech demos or hardware marketing tie-ins no budget is allocated to making yet more spare capacity.
AVX-512 isn't just wider execution units, it's different types of instructions, particularly some that fill in holes in the existing sets of instructions. Once it starts to be widely available, it will get used, and will eventually be a requirement, just like AVX has.
Ice Lake is introducing AVX-512 on Intel mobile, Tiger Lake will introduce it on desktop, presumably Zen3 will be introducing it on AMD.
There are also various libraries that leverage metaprogramming to do similar things. I don't think you understand what game devs are willing to do, to get a few more polygons and pixels on the screen!
Totally depends on the trade-offs. You can write your whole game in assembly, target very specific hardware, and maybe beat the optimizing compiler (doubtful). But at what cost? Time spent on that could be spent on making more games.
Normal up to date hardware handles games just fine, as long as they are not using some abysmal and poorly parallelized engines. Modern CPUs with more cores are also helping that, especially after Ryzen processors opened the gates for it.
That said, desktops can apply whatever optimizations they want. Denuvo uses AVX in their DRM, which is also not a thing on console, so presumably they will eventually incorporate AVX-512.
It's also relatively common to write multiple versions of SIMD code (or use tools like ISPC or metaprogramming) to leverage whatever SIMD instruction set a CPU has, such as the DOTS system in Unity. Games will happily leverage AVX-512 as soon as a fair number of desktop CPUs support it.
Let's say that, for whatever reason, you have a vector struct containing 3 doubles, and another 64 bits of arbitrary data. Now, if you want to add those vectors together, keeping the arbitrary data of one element, that's quite difficult to do with AVX2. In AVX512, you can just set the bits of the mask to zero to exclude them from the operation, making it trivial.
What? That's just a _mm256_setzero_pd (set the whole register to zero), _mm256_maskload_pd (load the 3 doubles, ignoring the 4th load), and then _mm256_add_pd (4x double-precision add).
For more details: https://software.intel.com/sites/landingpage/IntrinsicsGuide...
AVX had mask instructions, but they took up all 256-bits. AVX512 mask instructions are exciting because they only use 1-bit per mask. A 64-bit mask can cover 64-bytes (aka: 512-bits) of masking.
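A rough sketch of both approaches for the 3-doubles-plus-payload case described upthread (struct layout and function names are mine; the 256-bit masked form needs AVX-512VL):

    #include <immintrin.h>

    /* AVX-512VL: one masked add; mask 0b0111 touches only lanes 0-2,
       lane 3 (the arbitrary payload) passes through from a. */
    __m256d add_xyz_avx512(__m256d a, __m256d b)
    {
        return _mm256_mask_add_pd(a, 0x7, a, b);
    }

    /* AVX2: add everything, then blend the payload lane back from a. */
    __m256d add_xyz_avx2(__m256d a, __m256d b)
    {
        __m256d sum = _mm256_add_pd(a, b);
        return _mm256_blend_pd(sum, a, 0x8);   /* lane 3 taken from a */
    }

Not a huge difference for a single add, but once you chain several masked operations the AVX2 version keeps accumulating extra blends.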
It's useful for crypto. I haven't measured myself, but I expect AVX-512 roughly doubles the throughput of ChaCha20. (Not only do you have 2x as many lanes, you also have new bit rotation instructions.) Something similar applies if you use a tree hash like KangarooTwelve.
Whether your application is actually bottlenecked on crypto performance is another question of course.
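To make the rotation point concrete, a hedged sketch (the constant is illustrative, not a full ChaCha20 quarter-round): AVX-512F has a genuine 32-bit rotate instruction, while AVX2 has to synthesize it from two shifts and an OR:

    #include <immintrin.h>

    __m512i rotl16_avx512(__m512i x)
    {
        return _mm512_rol_epi32(x, 16);                   /* single vprold */
    }

    __m256i rotl16_avx2(__m256i x)
    {
        return _mm256_or_si256(_mm256_slli_epi32(x, 16),
                               _mm256_srli_epi32(x, 16)); /* shift, shift, or */
    }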
Nobody is even thinking about leveraging this stuff without understanding things like this anyway.
I'm not saying you can't find something in a game to speed up with AVX512, but that you wouldn't want to do that in the first place.
Cloudflare's chip where they saw the problems, was the Xeon Silver 4116: https://en.wikichip.org/wiki/intel/xeon_silver/4116#Frequenc...
Note that for most AVX workloads, it's fine. AVX2 sees a very minimal clock speed drop until almost all cores are actively engaged in the work. It's also worth mentioning that since Haswell days, the CPU groups cores doing AVX work together, to reduce the likelihood of the downclock impacting non-AVX work (I am somewhat curious what the impact on L1 / L2 caching is from that).
AVX-512 is where it can hurt, but it really depends on the workload. Lots of AVX-512 instructions and you're fine, the throughput of the AVX-512 instructions is higher than scalar or AVX2 instructions, even with the down-clock.
The important thing to note here, is that Cloudflare went with what amounts to a bargain basement server CPU. It's almost on the absolute bottom end of the range, and it's a chip not designed for AVX workloads (or indeed anything resembling high performance work). Just take a look at the product brief from when the family was launched (page 8): https://www.intel.com/content/dam/www/public/us/en/documents...
If they'd actually taken a chip range designed for the workload, even at the bottom end of that range, just $100 or so more than the one they did choose: https://en.wikichip.org/wiki/intel/xeon_gold/5115#Frequencie...
Notice that it can handle a fair level of AVX-512 instructions before it down-clocks, and even then it takes a while before the down-clocking amount is significant, and it can handle significant AVX2 workloads before the maximum frequency gets affected at all (at the point where AVX2 starts causing it to down-clock, you'd be more than reaping the benefits of the faster instructions).
For just a few hundred more dollars: https://en.wikichip.org/wiki/intel/xeon_gold/6126#Frequencie..., you can be utilising all the cores doing AVX-512 instructions and still be faster than the not-designed-for-the-workload chip that Cloudflare cheaped out on.
Note: These extra costs on CPUs are negligible when you take into account depreciation, server lifetime, etc. The biggest lifetime cost of a rack of servers in production is never the cost of the servers. It's your other OpEx. Cheaping out on server processors is a perfect example of what AWS employees call "Frupidity": Stupidity through Frugality. (Frugality is a leadership value in AWS, and sometimes it leads to the most phenomenally stupid things happening.)
Shift up in to the slightly more premium range, towards the kinds of chips you're likely to be running on in cloud platforms: https://en.wikichip.org/wiki/intel/xeon_platinum/8124#Freque...
You can be using all cores with AVX-512 instructions, and see a drop of only 300Mhz, on the entry level platinum chip from that same year as Cloudflare's chip.
Also, if you actually need tons of processors, availability might be an issue.
Now in the end, I'm not familiar enough with Cloudflare's needs and I know that pretty much anybody can make mistakes, so it is possible that they should have gone with something like the 6126 instead of the 4116. But the 8124? Hm, less probable.
For "just a few hundred more dollars", you got two less cores, but notably better AVX-512 scaling.
5118 release price was $1273, https://en.wikichip.org/wiki/intel/xeon_gold/5118. That keeps you in the 12 core range, still only a hundred dollars more than the option they went with.
Also bear in mind those are RRPs, which Cloudflare likely isn't paying, especially at the scale they're operating at.
The crux of the point was that they were apparently surprised when a chip that wasn't designed for the kind of workload they decided to use it for, didn't perform well.
It's like buying small car and then being surprised that it doesn't have the same hauling power as a truck.
Frustratingly, everyone has taken it as gospel that it means AVX-512 is entirely crap and just going to hurt you.
It definitely has made optimising a bit more complicated. You could certainly argue that you might need to have the runtime profiling the code to figure out if using AVX-512 is harmful or not.
Note that a CPU can switch fairly quickly between different clock speeds. It's not instantaneous but it's pretty quick.
I'd encourage CPU pinning if you can (pinning each VM to a specific core). If you're depending on oversubscribing, that won't be possible, but presumably you'd already be expecting more of an impact from noisy neighbours anyway.
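For what it's worth, the mechanism under VM/vCPU pinning on Linux is just plain CPU affinity on the vCPU threads; a minimal sketch (the wrapper name is mine, and hypervisor tooling like libvirt exposes the same thing declaratively):

    #define _GNU_SOURCE
    #include <sched.h>

    /* Pin the calling process/thread to a single logical CPU. Returns 0 on success. */
    int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return sched_setaffinity(0, sizeof(set), &set);
    }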
Looks like Microsoft has already got 1000+ cores on Windows: https://techcommunity.microsoft.com/t5/Windows-Kernel-Intern...
Can we get Bruce Dawson one of those? I wonder how many more bugs he'll run into.
Anyone? Got a spare one lying around?
See the diagram on page 6, they have a custom routing chip to link up 8 boards over UPI.
Knight's Landing supports 4-wide threading per core so you get 256 threads, which is exactly what it shows in task manager under "logical processors".
The HP 32 socket chassis (8x4 socket boards) seems to be the answer.
Oh yeah, especially because core affinities in Windows get all wonky once you go above 64 threads.
Can you elaborate? I haven't noticed any particular "wonkiness" happening?
Annoying if a processor group spans two NUMA regions leaving just a few processors to other side...
Hypervisor developers are livid about this across all cloud providers.
In practical terms the baseline cost of a Windows OS is rather high, so if you have high-density compute throughput then you get more for your money.
i.e. you pay a license cost per CPU, and you pay for at minimum 1 physical CPU core and 1GiB of memory per Windows machine.
I/O layer overhead in Windows is considerable. As any Windows kernel driver developer knows, passing IRPs (I/O request packets) through a long stack does not come for free. Not just drivers for filesystems, networking stacks, etc. and devices, but there are usually also filter drivers. IRPs go through the stack and then bubble back up.
Starting threads and especially processes is also rather sluggish. As is opening files.
There's no completely unified I/O API in Windows. You can't consider SOCKETs (partially userland objects!) as I/O subsystem HANDLEs in all scenarios and anonymous pipes for process stdin/stdout redirection are always synchronous (no "overlapped I/O" or IOCP possible).
For compute Windows is fine, all this overhead doesn't matter much. But I don't understand why some insist using Windows as a server.
But when someone pays me for making Windows dance, so be it. :-) You can usually work around most issues with some creativity and a lot of working hours.
Visual Studio is a great IDE... well, the debugger isn't amazing unless you're dealing with a self-contained bug (I often find myself using windbg instead). Basic operations (typing, moving the caret, changing tab, find, etc.) are slow, at least on my Xeon Gold workstation.
The last Sun chips (Niagara / Ultrasparc Tx) also had pretty high count, IIRC they had 64 threads / socket, and were of course multi-socket systems. At 1.6GHz they were clocked pretty low for 2007 though.
Anyway, each of those lights was "on" when a core was utilized, and "off" when a core was off.
Although this particular bout of verbalised obscenity did just make me change my default browser search engine to DuckDuckGo, as Google crossed out 'supercomputer' as missing from its query, which was annoying since it was 50% of my sodding query and, technically, the only correct bit. Google search is totally crap these days.
Oh, sorry for the typo. I just fixed it. It used to say "CX-5", I must have been thinking about Mazda cars or something when I wrote the post...
While it retained the LEDs, I don't think they had the same 1:1 correspondence to cores as the previous models: the CM-1 and CM-2 had up to 65,536 cores (the CM-2 also had an FP coprocessor per 32 cores, for an additional 2,048 special-purpose chips), whereas the CM-5 had "only" 1,056 processors.
(sum of per-core % utilization) / (number of cores)
The "Processes" tab improves on this slightly by adding a single decimal digit, but this display still becomes less and less useful as core count increases because its precision remains inversely proportional to the number of (virtual) cores in the system.
This change means that Chrome's task manager can show CPU usage percentages that are higher than 100%, but that is fine. The percentage is simply the percentage of a logical processor that is being used and 200% means that two logical processors are being used. Simple, easy to understand and explain.
[EDIT: 256 threads in htop on ARM] https://www.servethehome.com/wp-content/uploads/2018/05/Cavi...
In the "HEDT" space, the media coverage and forum chitchat is mostly for users who by these cards for decorative purposes, not work purposes. People buying for real work need realistic test measurements on realistic workloads.
I guess it's driven by the types of hardware they normally review - their audience is interested in game benchmarks, so they might as well report them.
If you're just streaming casually this doesn't matter at all. If this is your day job, though, you probably want all the quality & control you can get, and NVENC isn't quite there.
And entry-level TR was never $250. That's the EOL "we need to dump old inventory" fire sale price.
The 2990WX is pretty bad for most use cases. It's only good for rendering.
The 24-core 3960x probably is better due to lower latency, and better balanced I/O.
Reinforcement learning might be a good use case for 2990WX.
> Reinforcement learning might be a good use case for 2990WX.
Hmm, the 2990wx is better than the 2950x for that task, but the 3960x has 256-bit AVX2 vectors. Since the 2990wx only has 128-bit AVX2 vectors, I would place my bets on the cheaper, 24-core 3960x instead.
Doubling the SIMD-width in such a compute-heavy problem would be more important than the +12 cores that the 2990wx offers.
EDIT: The 3960x also fixes the latency issues that the 2990wx has, so it's acceptable to use the 3960x in general-use scenarios (aka: video games). The latency issue made the 2990wx terrible at playing video games.
Yeah, no one is buying these HEDTs for "purely" gaming tasks, but any "creative" individual who does video rendering during the week, but plays Counterstrike on the weekend, needs a compromise machine that handles both high-core counts AND high-clock speeds for the different workloads.
No they weren't. I certainly wasn't. There was no reason at all to believe TR3 would be a price drop. Ryzen 3000 wasn't and neither was X570. If the mainstream platform parts didn't get a price drop why would the HEDT halo products? Particularly since new generations are almost never price drops, especially without any competition?
> instead we got overpriced TRX40 (compared to high-end x399)
X399 at launch ranged from $340 to $550. TRX40 at launch ranges from $450 to $700. Yes there was a bump there, but there is also overlap in pricing, too. You are getting PCI-E 4.0 instead along with a substantially higher spec'd chipset. You're also getting in general a higher class of board quality & construction. Similar to the X570 vs. X470 comparison.
> but it's still an untypical situation
Untypical in that they are actually a lot faster generation over generation, sure. Untypical in that they are priced similarly or slightly higher? Not really. That's been the status quo for the last decade or so. The company with the halo product sets the price. The company in 2nd place cuts prices in response. AMD has the halo, so they were never going to price-cut it.
For x399 users TRX40 is underwhelming as it just feels like "pay for the same stuff again" if you want to use new CPUs.
Halo boards are always stupidly overpriced. X570 tops out at $1000, too. That's a terrible way to judge a platform's costs.
> PCIe 4's usefulness is questionable (basically just for 100 Gigabit networking right now).
Not true at all. It's more bandwidth to the chipset, meaning you can run double the PCI 3.0 gear off of that chipset than you could before without hitting a bottleneck (well actually 4x since the number of lanes to the chipset also doubled...). That means more SATA ports. More M.2 drives. More USB 3.2 gen 2x2.
> For x399 users TRX40 is underwhelming as it just feels like "pay for the same stuff again" if you want to use new CPUs.
Not disagreeing on that, but that's very different from TRX40 being "overpriced vs X399." Just because it's not worth upgrading to the new platform doesn't make the new platform overpriced vs. the old one.
Not necessarily the case in practice since that would require some sort of chipset or active converter exposed by the motherboard to mux 3.0 lanes to bifurcated 4.0 lanes. A 3.0 x4 device still needs those four lanes to get full speed so in a PCI-e 4.0 setting you’ll actually be using up four of the PCIe 4.0 lanes, but inefficiently.
This is a new market segment. If you want a fast cpu, the Ryzen 7 and 9 series are completely fine if you want that price range!
The exact same price range as you're used to still exists.
On the other hand, people have been used to paying exorbitantly for Xeon processors, like $2000-5000 per CPU, so this is a breath of fresh air.
There was no reason to kill TR4. It could have been a "legacy" board with PCIe 3.0 support, like X470 is for the desktop socket.
AMD just killed TR4 because they wanted everyone to buy new boards. The classic Intel move.
(meanwhile Intel put a new generation of chips on X299, while also putting out a compatible X299X socket that increases lane count. Intel doing it right for once, AMD doing it wrong for once.)
I know video games don't really get bandwidth bottlenecked, but all you gotta do is perform a "Scan" or "Reduce" on the GPU and bam, you're PCIe bottlenecked. (I recommend NVidia CUB or AMD ROCprim for these kinds of operations)
CUB Device-reduce is extremely fast if the data is already on the GPU: https://nvlabs.github.io/cub/structcub_1_1_device_reduce.htm.... However, if the data is CPU / DDR4 RAM side, then the slow PCIe connection hampers you severely.
I pushed 1GB of data to device-side reduce the other day (just playing with ROCprim), and it took ~100ms to hipMemcpy the 1GB of data to the GPU, but only 5ms to actually execute the reduce. That's a PCIe bottleneck for sure. (Numbers from memory... I don't quite remember them exactly but that was roughly the magnitude we're talking about.) That was over PCIe 3.0 x16, which seems to only push 10GB/s one-way in practice. (~15GB/s in theory, but practice is always lower than the specs.)
Yeah, I know CPU / GPU have like 10us of latency, but you can easily write a "server" kind of CPU-master / GPU-slave scheduling algorithm to send these jobs down to the GPU. So you can write software to ignore the latency problem in many cases.
Software can't solve the bandwidth problem however. You gotta just buy a bigger pipe.
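Back-of-the-envelope on those numbers: 1GB at the ~10GB/s you measured is ~100ms, so the copy-vs-kernel ratio is exactly what the link budget predicts, and even a PCIe 4.0 x16 slot at roughly double the bandwidth would only cut the copy to ~50ms against a 5ms reduce. The bigger pipe helps, but the transfer still dominates unless the data can stay resident on the GPU.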
A good point for people looking to use this in their home server like I am. It's going to be really hot if you're getting your money's worth out of it.
At some point there’s not an efficient way to cool these processors when the ambient room temperature rises too high. Anyone have suggestions for that? I’m in a weird position where power is cheap but I can’t AC my garage (where it’s located).
Super ghetto but actually pretty efficient at bringing the server's temperature down.
The cores start to throttle at 70C, which is far hotter than even the hottest garage.
Thermal dissipation, however, is proportional to delta-T (the temperature difference between the chip and the ambient air).
A 25C ambient gives a delta-T of 45C, but a 35C ambient gives a delta-T of only 35C: since 35/45 ≈ 0.78, your 35C garage cuts the cooler's capacity, and thus sustained performance, by at least 22% compared to 25C.
Note that Intel chips can go all the way to 95C or 100C, making cooling a far easier problem. So you're already dealing with a lower Delta-T when you use these AMD chips that throttle at 70C. (To be fair: AMD and Intel's measurement systems are a bit different, so we can't compare them directly. But IIRC, most people have a slightly easier time cooling Intel chips).
AMD spent the extra few $ for a good thermal connection between the silicon and the heat spreader.
The new Threadripper and Epyc chips are also physically quite large. It's much easier to cool the heat spreader with 8 hot spots spread across the chip than a single smaller hot spot.
So generally it's easier to get 180 watts out of a Threadripper/Epyc chip than the equivalent Intel. In fact some of the highest-end chips are not considered feasible to cool on air: it's either water cooling or throttling, like in the new Intel-based Frontera (don't confuse that with the AMD-based Frontier cluster).
Basically keeping a dual socket Xeon 8280 cool without throttling is not practical on air.
Finding an AIO with a waterblock that will cover a Threadripper is very difficult though. It's one of the reasons why I'm sticking with an overclocked 12 core Ryzen.
AFAIK the Enermax TR4 is the only AIO that covers the TR4 but it has mold issues that cause it to degrade over a few months. It has 2 stars on Newegg right now.
For the first month or two after TR1 was released in 2017, it was true that the existing AIO blocks didn't cover the entire heat spreader, but they did cover all of the chiplets inside the heat spreader.
sTR4-specific AIO blocks came out within a handful of months. This hasn't been a problem for over a year.
Even if you left it on 24/7, it would take you a year and a half to make up the difference in price between the $2000 AMD and the $3000 Intel. If we assume both chips have identical performance, the Intel would cost $4350 to run over 5 years vs $3480 for the AMD (assuming 24/7 at all cores). The total cost of ownership for that Intel chip over 5 years seems to be around 2x that of the AMD chip.
I'd note we've been talking about 32 cores 24/7. If you're using 4 cores and 8 threads, you will be using half the power (about 140 watts). If your system is idling, it's using a bit less than 50 watts.
Let's say you leave your system on 24/7. You work 8 hours, play 4 hours (4 cores, 8 threads constantly running), and it idles the other 12 hours. We'll assume that you somehow max all cores for your entire workday and max 4 cores during play. You'd be well under half that price per month.
Change all-out time to a more reasonable, but still strenuous 4 hours with 8 moderate hours and shutting off your computer at night will get you down to less than $20 per month.
In CA there's also the issue of not having electricity in the first place sometimes though. So power _consumption_ seems kind of secondary.
These chips don't run at a fixed frequency. They dynamically adjust based on thermal limits and power limits.
You only have to burden most of the cores/ALUs to hit maximum power. Any load above that threshold uses the same amount of power, as frequencies nudge down to compensate.
The power limit is configured in the BIOS, and can be disabled, but with these massive chips they default to throttling down and capping power consumption so that you aren't forced to go crazy with the cooling system (and motherboard and PSU capacity).
Personally I'm not that interested in extreme power at the cost of extreme heat. I'm more interested in enough power with as little heat as possible. I find it hard to figure out which processor is the best fit for that these days, but I guess this one isn't it.
Pioneer had the best bang-for-the-buck the last time I checked.
I live in Idaho and keep my 24U rack in my office, total idle draw on it is now around 4A@110V and considering power tops out at $0.11/KWh during the summer there's no real reason to colo it somewhere else.
The downside is of course the price at >$5k for a 24U.
TL;DR: a short "geothermal" ground loop.
Consider adding Linux as the underlying platform for some of your tests. Not only is it usually easier to script repeatable developer-oriented workloads, but your reviews will be more comprehensive as well. The Linux scheduler is quite different and more advanced (as illustrated by the vastly different Gen 2 Threadripper performance on Windows vs Linux), and your articles have not been capturing this.
I can see why Linux is not yet the mainstream for gamers, but this OS should be more popular among the HEDT crowd.
For example in articles like this where you include older chips are you just listing the numbers from previous benchmarks, or are all running on the same patch level/OS?
And golem.de (link in german, see table at the end of the page): https://www.golem.de/news/threadripper-3970x-3960x-im-test-a...
Pair one of these Threadrippers with a pair of the fastest NVMe SSD's out there, with plenty of high performance DDR4 memory and you've got a near supercomputer from recent past in terms of performance.
On large projects you'll run OOM quickly if you're compiling thousands of files at once. All the context switches also slow down compilation.
Wouldn't count on it. I can't remember the specifics, but we're coming up on the end of the vague time range that AMD said they'd support the AM4 socket for. They talked about how difficult and expensive it was to get the interposer to connect the new chiplets to the old pins on the socket. I think Lisa Su also implied in an interview that AM4 is nearing its end. In any case, DDR5 is coming in the next year or two, which would necessitate a new socket, so I'd guess one more generation on AM4 at most.
My goal is a decent build < 1000. I already have a case/power supply. Just getting CPU, Mobo, RAM basically. Keeping my r9 270x amd gfx card from 2013. Normal cooling. My RAM can go higher than 3200, but price range 3200 seems best bang for buck, I figure when it comes down I could upgrade to 128GB @ 4400 and eventually the cpu to whatever the highest end am4 socket allows, and a better graphics card and get a few more years out of my system.
I've never spent more than 800 on a system. I guess one man's 'wow this is cool' is another's 'no, that's lame' lol.
The Threadrippers are the same CPU cores, just more of them, so I'm sure they do well also.
Even with a fast SSD on my lowly i7, I often wind up sitting at IO or lock contention instead of actual CPU bottlenecks (although it could be argued faster CPU = faster lock release = faster compilation).
On one of these systems I'm fairly sure you could just whack the entire thing in /tmp and -j64 it, the compile artifacts aren't that big either.
I do have 2 super SFF HP boxes that only take NVME in the M.2 drive so have one on hand but it isn't installed at the moment.
15000MB/sec, available now for $3119 AUD.
15 seconds to compile and run a Linux kernel of the time (64MB RAM required!).
So there's that. I'd give them a break with the fact they also had to produce the review for Intel's launch on the same day: https://www.anandtech.com/show/15039/the-intel-core-i9-10980...
On the other hand bench results have to be comparable and relevant (in time). Which is easy when you run the same still widely played GTA V year after year on every new CPU. But comparing compilation time for kernel version 3.11 (released at the same time as GTA V) seems a lot less relevant today.