The AMD Threadripper 2 Teaser: Pre-Orders Start Today, Up to 32 Cores (anandtech.com)
188 points by srinikoganti on Aug 6, 2018 | 118 comments



At the end of the day we still have the third-party me_cleaner to disable the proprietary secret coprocessor on Intel chips, while AMD chips still have their equivalent, the PSP, with no first- or third-party means to disable it.

Until such time as I can get an equivalent tool to stop the hardware spyware built into the CPU, I can have no enthusiasm or motivation to buy AMD chips. Not to say I want to buy Intel parts - they have nothing to do with third-party efforts to nullify their backdoor - but if I were buying a chip tomorrow it would be a begrudging Intel purchase, just for me_cleaner.


What percentage of Intel owners have run the me_cleaner software? Probably infinitesimally small.

In fact, if AMD wants to dominate they don't need to care about your particular use case at all. They just need to produce the fastest x86 chips at the cheapest price.


I think the problem is that me-cleaner is a 3rd party solution, I'm not sure I would trust it not to brick my CPU. If AMD solved this, it would be a huge differentiator in my eyes, and I think it would further boost their sales. That said, I suspect there is some deal with 3 letter agencies that doesn't allow them to do that. It doesn't make sense to go against their customers like that otherwise.


For a desktop - if you're running Linux (or maybe FreeBSD?) - then the POWER9 based Talos workstations seem like a reasonable alternative:

https://www.raptorcs.com/TALOSII/

Pricey, but they don't have the ME/PSP problem.


Lisa Su sounded like she was on board with opening the PSP source when she was on Reddit. It seems that she only learned later that she can't do that.


The Platform Security Processor (and the code to make it work) is licensed from ARM. AMD did hire a 3rd party to audit the PSP after that thread on Reddit blew up. The results indicated the need for a rewrite, so in 5 to 6 years hopefully AMD will own the IP to their new PSP.


Really? Are the audit results or an announcement about them public?


The audit results aren't public, primarily due to them not being good: https://www.reddit.com/r/Amd/comments/6o2e6t/amd_is_not_open...


Where are you seeing the results of this audit?


You can disable PSP on some motherboards AFAIK. My motherboard has an option that at least seems like it disables PSP.


That option makes the PSP invisible to the running OS, but no one knows what is left running on the PSP.


My MSI X399 Gaming Pro Carbon AC motherboard had an option to disable the PSP, but it was removed in the latest BIOS update. The latest one has an option to turn on SVM (Secure Virtual Machine, needed for virtualization). I can either run VMs _or_ have the PSP disabled...


Enabling virtualization is pretty simple; someone could probably write a little EFI app to do it and chainload from there.


What is the model?


You're relying on a third-party community effort to disable their proprietary coprocessor, which neither company seems to have the will to do themselves.

The best outcome would be to have AMD provide a first-party, auditable option to disable it; otherwise the community will have to do it themselves, which will probably take longer if there are fewer people using them. Until then, the main focus should be to buy the best-performing product, because that's the only real differentiator.


Given Intel's struggles around their process shrink, AMD may have come up with the perfect product at the perfect time. You can already see their effect with Intel finally adding more cores to their chips to try and stay competitive.

I do think that Intel will manage to get silicon out a bit earlier than their current end of CY19, but if they don't, TR and Ryzen may be the default go-to for manufacturers.


> Given Intel's struggles around their process shrink, AMD may have come up with the perfect product at the perfect time.

Yes -- AMD got lucky, this time. I sincerely doubt they (or anyone else) had any idea how much Intel would struggle with their 10nm when they started on Zen in 2012. Back in 2013 Intel said they would have Cannon Lake out in 2015 (!) https://www.theregister.co.uk/2015/07/16/intel_10nm_14nm_pla... and today, mid 2018, the only released Cannon Lake CPU is a sad little dual core with the graphics chip disabled (i3-8121U) selling in "very limited quantities".

65nm: 2006

45nm: 2007

32nm: 2010

22nm: 2012

14nm: 2014

10nm: 2019 (?)


It is time to give AMD a chance. Intel is plagued by problems with the 10nm process technology resulting in no significant innovation on their CPUs' performance or power consumption for over two years now. And just recently they pushed their 10nm products even further back to the second half of 2019. AMD might have the first 7nm CPUs out by then. Of course process technologies are not easily comparable between fabs but it is still crazy to see Intel starting to fall behind in process technology - a game they dominated for decades.


> It is time to give AMD a chance. Intel is plagued by problems with the 10nm process technology resulting in no significant innovation on their CPUs' performance or power consumption for over two years now.

The difference between 14nm and 14nm+++ is pretty close to a full node step...

> And just recently they pushed their 10nm products even further back to the second half of 2019. AMD might have the first 7nm CPUs out by then. Of course process technologies are not easily comparable between fabs but it is still crazy to see Intel starting to fall behind in process technology - a game they dominated for decades.

The numbers used for process 'sizes' don't really hold any meaning anymore and can't be directly compared. Intel's 14nm is 'better' in almost every metric than other fabs' 10nm processes. Intel's 10nm process is expected to be similarly comparable to other fabs' 7nm.

Intel isn't really behind on process tech, but the competition has closed the gap considerably.


> Intel's 10nm process is expected to be similarly comparable to other fabs 7nm

No, Intel's 10nm is worse than TSMC's 7nm in every metric. Not significantly so, but it is so.

It also seems to be viable whereas Intel's hasn't been proven yet (the only released product is a mobile chip with the iGPU disabled due to defects and low yields).


If they aren't behind on process tech, then they aren't using it properly. The numbers are there. All tech aside, AMD is building competitive chips at lower price points. Intel has to pull a rabbit out of the hat if it wants to stay in the spotlight.


Yes but only because Intel hit the wall first.

They still have tons of money and can spend more time breaking through that wall.

Good for us anyway. Don't mind getting my Intel CPUs cheaper.


> Yes but only because Intel hit the wall first.

I fail to see how that's relevant. What matters is what products are available in the market and how much they cost, and AMD's offering is both cheaper and more performant than anything Intel managed to put together.


There's also Samsung hurting Intel's image a lot. They became the most profitable silicon company (due to loads of memory) and they are head to head with Intel in transistor process (nearly EUV-ready).


I'm very excited for AMD and their market successes lately. I never wanted a world controlled by Intel, and even if all my computers are Intel at the moment I feel the world benefits from this competition.

That being said... Processor pre-orders? I had no idea that was a thing. Hope you get a QA discount. It doesn't talk about sockets, but for the sake of pre-orderers I hope it's compatible with original Threadripper boards. edit: Actually it does, and they mention it's compatible with existing mainboards too. I missed that this was multiple pages.


What is strange about pre-ordering hardware that might initially be in short supply?

Pre-orders don't make sense in the case of downloadable games, where there is no scarcity.


Pre-ordering a product that hasn't been reviewed and benchmarked means you could get something much worse than you expect. For example, there may be apps that can't take advantage of 32 cores so you could spend $1,800 to get no benefit.


Not to mention that with so many cores, if they want to read and write to disk, the disk is going to be a huge bottleneck.


Right, they could just pay the same money to get a 10-core processor from Intel.


You get bonuses and you get to annoy gamers.

That's one good reason to pre-order, plus some bonuses that tend to be lame.


Are you implying you make poor financial decisions because you feel that it annoys other people?


> It doesn't talk about sockets but for sake of preorderers I hope it's compatible with original threadripper boards.

AMD has already committed to using the same socket until 2020.


This alone is a great reason to run AMD...


2020 is currently just 18 months away. From a consumer's POV that means close to nothing. Heck, I assume they wouldn't be able to change within that time frame even if they wanted to.


The commitment was made over a year ago when Ryzen was first released. Basically it was a guarantee that if you were going to upgrade your processor in the next 3 years you wouldn't have to get a new motherboard. A moderate upgrader would get 1 upgrade out of this deal; an aggressive upgrader (gets every new chip) would get 3 upgrades. Someone who is conservative and only plans to upgrade every 5 years or so would see no benefit. However, Intel seems to change the socket every. damn. time.


> Someone who is conservative and only plans to upgrade ever 5 years or so would see no benefit.

I disagree based on my personal experience. The CPU upgrade path for my current desktop computer ended roughly a year after I bought it, even though that CPU generation had been introduced very recently by Intel. I got a relatively high-end CPU, so upgrading to anything but the highest-end CPUs of that generation doesn't really make sense from a performance point of view. Yet those still cost a surprising amount of money today and offer worse performance than the midrange options of newer generations.

Simply put, extending the lifetime and thus upgrade path out by at least 150% means that the options available 5 years down the line will be 150% "better", even though no CPUs for that socket might even be in production anymore at that time. For me that seems like a significant advantage.


It's "compatible" in the sense that it plugs in and runs, but the 32-core processors are going to push the VRMs on all existing boards right to the limit, even at base clocks. After all, it's at least 70% more power than the first gen, and probably higher (most 2000-series Ryzens bust through their official TDP pretty readily).

Turbos are going to be pretty iffy under sustained load, let alone Precision Boost and XFR, which are (iirc) enabled by default.

https://www.youtube.com/watch?v=aKe7CnZT9ZE

AMD is shipping all the review kits with an unspecified "add-on VRM cooling kit" to try and help that, and there are second-gen X399 boards coming out with 16 phases (up from 6-8 in the first gen).


Bloody hell, this processor has more cores than the dual socket Xeon HPCs that my lab bought 2 years ago. Amazing. Honest question: Isn't the memory throughput an issue when you try to feed 32 hungry processors?


Threadripper uses quad-channel DDR4-2666 memory. That gets you 80 GB/s memory bandwidth, which is possible because the processor sits in an enormous 4094-pin TR4 socket. Your Xeons were probably in 2011 pin sockets with DDR3-1866 resulting in 70 GB/s memory bandwidth. One Threadripper has more than double the pin count and slightly more total memory bandwidth than your dual Xeon machine.

Also, your Intel setup has a 25 GB/s QPI bus between the processors. Threadripper runs an Infinity Fabric bus, with 42 GB/s between each die internally (aggregate 170 GB/s). While a couple of those internal dies are not directly connected to the memory, you should have little trouble sharing meals among those hungry processors.
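
For anyone who wants to check those figures, here's the standard back-of-the-envelope (theoretical peak; the numbers above look slightly derated, which is fair for real hardware):

    # Theoretical peak DDR bandwidth: channels * transfers/s * 8 bytes per 64-bit transfer
    def peak_gb_per_s(channels, mega_transfers_per_s):
        return channels * mega_transfers_per_s * 8 / 1000

    print(peak_gb_per_s(4, 2666))  # quad-channel DDR4-2666 -> ~85 GB/s theoretical
    print(peak_gb_per_s(4, 1866))  # quad-channel DDR3-1866 -> ~60 GB/s per socket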


Actually, with this release Threadripper 2 will support DDR4-2933, which should increase the memory bandwidth and the throughput of the Infinity Fabric if I'm not mistaken (I am pretty sure Infinity Fabric runs at the RAM speed).


Most people overclock RAM using XMP settings and hit 3200 MT/s easily (which also overclocks the infinity fabric).

I've seen some people hit 3466 MT/s on higher-end, more expensive RAM on AMD's 12nm stuff (Ryzen 2). So that's probably the practical limit.

3200 MT/s on Hynix dies is probably a reasonable expectation.


> enormous 4094-pin TR4 socket

A lot of these pins aren't used on TR, since you only get half the memory controllers you'd have with EPYC.


That's true. A staggering number of them are power delivery to start with - you don't waste anywhere near half the socket by dropping 4 memory controllers (!) but yeah, it's not EPYC.


Do you know whether Threadripper is affected by NUMA issues? Otherwise this could be an additional selling point for applications that need a lot of memory (e.g. databases, where you do not want the memory latency to be unpredictable depending upon which socket you are running on)


It is. Only 1/2 of the dies are connected directly to memory controllers.

For actual production workloads you want EPYC, which has a memory controller for each die, meaning it's 8-channel.
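
If you're curious how this looks in practice, a quick Linux-only sketch (assumes the usual sysfs layout; numactl --hardware reports the same thing):

    import os

    # On Linux, each NUMA node appears as /sys/devices/system/node/nodeN
    base = "/sys/devices/system/node"
    for entry in sorted(os.listdir(base)):
        if entry.startswith("node") and entry[4:].isdigit():
            with open(os.path.join(base, entry, "cpulist")) as f:
                print(entry, "-> cpus", f.read().strip())

On a 32-core TR you'd expect four nodes, two of them with no local memory.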


Depends on the arithmetic intensity. Higher intensity means less memory bandwidth needed to keep the cores happily crunching.


For what it's worth, Broadwell E5v4 Xeons (which have been out a little over 2 years) use a 2011-3 socket and support DDR4-2400.


Well, it's got four fairly fast memory channels but yes, you'll probably tend to be more memory limited with this guy - though the 64 MB of L3 should help in many cases.


For TR, kind of. In the newer models only half the cores have direct access to a DDR channel. This shows in memory-bound benchmarks, but EPYC (the server line) alleviates this problem by having 8 (!!) memory channels instead of the consumer TR's 4.

Ultimately it depends on how cache-friendly your workload is.

The 32c/64t part is kind of a marketing gimmick selling binned EPYCs. If you're doing memory-bound HPC, consider the 16c/32t model for roughly equal performance at a lower price.


EPYC fixes the problem by having 8 channels. All cores have direct access to DDR.


Do we know if the 16c TR (2950x) will be full bandwidth or not? I'm assuming it'll still be halved due to die count/arrangement?


2950X is using two dies with 2 memory channels each, basically an update of the original Threadripper with Zen+ dies.


That's what I was hoping to hear. Thanks!


It's the unfortunate downside to these. Two of the dies don't have direct access to memory, but it seems like AMD didn't want to do a refresh of the pinouts and force socket updates. I suspect this will be left for the Zen 2 version of Threadripper. That being said, I imagine in most circumstances the performance should still be pretty good.


Yes, hopefully the motherboard manufacturers take that seriously.


While I can see Intel's leadership in single-core performance with 6 cores (an overclocked 8700K can hit 5+ GHz, and future flagships will probably be even faster with 8 cores), for high-core-count workstations AMD is the winner, hands down. Competition is good.


AMD has always had great OC performance but my impression is that they have been power hogs. Is that incorrect or has it changed?


They had their ups and downs. The Athlon 64 was a masterpiece in both performance and power consumption, in part because they made power management an integral part of the platform, not just for laptops. At the time it was released Intel was busy pushing the Pentium 4 to its limits, making it a great replacement for central heating during winter.

Then Intel was on top of the game again with the Core architecture, and while Bulldozer (IIRC) improved in that regard, Intel was still ahead, especially as Bulldozer sucked in pretty much every other regard. Ryzen seems to be about on par with current Intel CPUs (performance per watt), depending on the benchmark.


> Intel was on top of the game again with the core architecture

Funny thing is that Core, AFAIK, came from a separate laptop chip branch. For 64 bits Intel was betting on Itanium, which crashed and burned like iAPX-432 did before. And so Intel once again was forced to stay on the 8080 treadmill ...


Threadripper was born out of side-projects too.

"They worked on it in their spare time and it was really a passion project for about a year before they sought the green light from management, which is quite unusual – it was something they really cared about."


Yup. It was a little "side project" by their team in Haifa that turned out much better than expected. So the Israelis saved Intel. (Dramatization mine)


Zen (both Ryzen 2000 and 1000) is on a fairly low-power node, optimized for efficiency rather than clocks. Zen is pretty much comparable to Intel these days, if not slightly better, but runs into a voltage wall much quicker. If you force your way into that wall they do pull a fair bit of power.

The Ryzen 2000 chips have a smarter SpeedStep-type functionality than Intel does; it pretty much squeezes out all the OC performance from the chip while maintaining normal power management, while Intel (and Ryzen 1000) needs to manage voltages on a processor-wide basis, which eats more power.


They haven't always been power hogs. Nowadays they have a clear advantage. You don't want to know how much heat a Xeon machine with 32 cores will have to dissipate.


CPU-wise they are holy-shit close. GPU-wise, not so much.


Does this generation still lack the kind of performance counters that are needed for rr?


Go AMD Go.

Ryzen is default choice now for me and my brother. This pushes even more people toward DEFAULT RED.


Roughly linear pricing per core? What is this wizardry?


> Roughly linear pricing per core? What is this wizardry?

It's called "marketing", and it's definitely not limited by the laws of physics as we know them.

You too can learn the ways of marketing, but it comes at a very steep price: your eternal soul. (JK, that's sales.)


I think the modularity of the manufacturing process helps too.


Modularity helps cutting production costs, but that's just a lower bound. Prices are set based on marketing strategies and on how much the company is able to get for a product.


With a monolithic die that lower bound is not linear though: the bigger the die, the higher a chance of defects and the rarer the parts that don't need to have sections fused off.

With a modular design the lower bound is, indeed, linear, allowing AMD to severely undercut Intel.
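
A toy version of that argument, using the standard Poisson yield model (the defect density here is made up for illustration):

    import math

    # Poisson yield: fraction of defect-free dies = exp(-area * defect_density)
    def die_yield(area_mm2, defects_per_mm2):
        return math.exp(-area_mm2 * defects_per_mm2)

    d0 = 0.001  # hypothetical defects per mm^2
    print(f"one 800 mm^2 monolithic die: {die_yield(800, d0):.0%}")  # ~45%
    print(f"each 200 mm^2 chiplet: {die_yield(200, d0):.0%}")        # ~82%
    # Cost per good die scales with 1/yield, so four small dies stay close
    # to 4x the cost of one small die, while the big die gets much worse.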


That pricing is actually more reasonable than I was expecting. I might get myself a 32 core quad GPU workstation. Unthinkable even just a year ago.


Good god! Why on earth would you need a 32-core quad-GPU workstation?


Rendering scales fairly linearly with hardware. If I'm rendering with Arnold for example, it can saturate all my cores pretty well and gives back an almost linear rendering time reduction.

Same with GPU systems like redshift, alternatively I can dedicate half the machine to rendering and the other half to continued work.
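
That "almost linear" behavior is just Amdahl's law with a tiny serial fraction; a sketch (the 99%-parallel figure is an assumption, not a measured number):

    # Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / cores)
    def speedup(cores, parallel_fraction):
        return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

    for cores in (8, 16, 32):
        print(cores, "cores ->", round(speedup(cores, 0.99), 1), "x")
    # 8 -> 7.5x, 16 -> 13.9x, 32 -> 24.4x: close to linear, slowly flattening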


Does 16+ threads actually contribute to render times with Redshift?

Unless you're using Houdini, I think you'll get the same result using Redshift with 16 threads. Houdini is the only DCC that uses more than one thread to prep the scene.


Deep learning. CPUs do data augmentation, GPUs do linear algebra. I already have a Core i9, but it's “only” 10 cores, and it's struggling to keep up with the GPUs.


Deep learning: once you've got enough GPUs, you end up bottlenecking on the CPU.


Cinema4D and Blender are both very CPU intensive as well as taxing on the GPU.


Blender 2.8's Eevee is reducing rendering time by a factor of 100, with a modest reduction in quality. It's basically using game-rendering techniques, but instead of catering to realtime, it'll take a couple of seconds to produce a frame. It's almost entirely GPU-driven, so in the future there may not be a need to go full Threadripper unless you use Cycles daily.


If you want to use Cycles for animations, you'll probably need the full Threadripper.

Cycles is easily 5+ minutes a frame, maybe 20+ minutes on a Threadripper if you have an interior scene or something hard to light. Even with Cycles' denoiser, you need a lot of rays to get a decent animation.

Even a short 10-second animation through Cycles will take multiple days to render on a Threadripper.
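
The arithmetic behind "multiple days", assuming 24 fps:

    frames = 10 * 24              # 10-second animation at 24 fps
    minutes_per_frame = 20        # hard-to-light interior scene, per above
    hours = frames * minutes_per_frame / 60
    print(hours, "hours")         # 80 hours, i.e. more than 3 days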


Because it's awesome.


Self driving car simulation. :) LIDAR by itself scales nicely across any hardware you have.


Small to mid-scale CFD.


For my purposes? Editing and recompressing 110 Mbps H.264 3840x2160 30fps video into lower-bitrate H.265 formats.

It can take a very long time just to process a 5-minute video.
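
For reference, this kind of job is basically one ffmpeg call; a minimal sketch assuming ffmpeg is built with libx265 (the filenames and CRF value are just examples):

    import subprocess

    # Re-encode 4K H.264 to H.265 at a lower bitrate; libx265 will happily
    # saturate every core you give it. CRF ~28 is a common starting point.
    subprocess.run([
        "ffmpeg", "-i", "input_2160p_h264.mp4",
        "-c:v", "libx265", "-preset", "medium", "-crf", "28",
        "-c:a", "copy",           # leave the audio untouched
        "output_2160p_hevc.mp4",
    ], check=True)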


For the same reason Apple is selling an 18-core Mac Pro for $20,000 or whatever the price is now. Some people need that power. It's just that with AMD, you can get more of it and for a tiny fraction of the cost of Apple's Mac Pro.


That's just abusive pricing. If you're stuck with Apple software then I guess you have no choice. I have a 10-core 128GB RAM workstation that's two years old now; it cost me half as much as a MacBook Pro. I need a new laptop soon, but the difference between the i9 Dell and the i9 MacBook is $800 CAD. Plus the Dell is for sure going to work better with a Linux install, in terms of drivers, which is better for me for work than macOS. I won't be buying Apple.


He made that number up. The most expensive possible iMac Pro is ~$13.5K (see here: https://www.apple.com/shop/buy-mac/imac-pro/3.2ghz-1tb).

Note that most price comparisons show that the iMac Pro is actually quite the deal.


iMac Pro is cost-effective on the Intel platform. But AMD's Threadripper is the topic of discussion here, and I think for most people... those 32-cores for $1799 is just a way better deal.

Do remember that this 32-core Threadripper offers quad-channel memory, 64x PCIe lanes, and ECC support (!!). So the Threadripper offers pretty much everything a professional wants.

The only downside to Threadripper is that its multi-threaded performance is similar to a quad-socket / quad-CPU design (~200ns to ~300ns latencies to memory, depending on how many "hops" and NUMA nodes and stuff...). So audio professionals and gamers, who are (IIRC) latency-bound, may want to stick with Intel.

But almost everyone else is bandwidth bound (video editing, 3d rendering, compilers, web servers, LT Spice...). So most professionals would probably prefer a Threadripper.


Threadripper is the broader topic of the thread. Down here, the question was whether Apple actually charges $20K for a particular computer. They don't, and given the particular hardware choices, the iMac Pro is a great deal. The ideal choice of hardware is a separate topic.


Still heavily marked up. 32GB ECC isn't $800. Current nVidia GPUs will run circles around those Radeons - even more so with Turing.


Yeah, it's like $500. Of course there's a markup on it. Apple doesn't make money by assembling parts at stock prices and then sending it to you.

It's the case that building a comparable computer (in terms of the specific iMac Pro hardware) is very similarly priced BUT requires labor, the purchase of an OS and additional peripherals: https://youtu.be/SONKIJd8xRM?t=187

That said, your point that Apple could've chosen better hardware at its price is reasonable but off topic.


He did not. He is talking about the Mac Pro, not the iMac Pro. The former goes from $3,000 to $4,000 USD: https://www.apple.com/shop/buy-mac/mac-pro

[update] Sorry, he said $20,000. So you are right.


Maxed out Mac Pro is $6K...


Curious that there's quad-channel DDR4. AMD's similar EPYC processors that have four chips on the module are octa-channel (because there's a dual-channel controller on each chip). Won't these therefore perform worse than they should?


Ultimately they're limited by the number of pins that the socket they use has, even if the silicon could support more channels. They want to be able to maintain socket compatibility and they want to have some SKUs with just 2 working dies so 4 channels it remains.


This is great. Paying twice as much for a processor with twice as many cores makes a lot of sense in this case. I'm glad it wasn't more.

I've got my costs down for a TR machine and it comes in at around $4K CAD.


An image says x series is for gamers. Are modern games that CPU intensive? Don't most major game engines depend almost entirely upon the GPU?


A non-trivial amount of the game is still on the CPU. All game logic is on the CPU. AI, physics, networking, audio, etc... are all CPU as well (GPU physics never really happened beyond particle effects).

Even for rendering it's still the CPU preparing the commands and doing a first-pass curation to minimize rendering load.

That said, in this case it's not so much running the games better, as yes the GPU is usually the bottleneck. It's more for gamers that also stream or similar, which is an increasingly popular thing. You can GPU offload it, but that hurts your game performance and GPU encoding quality tends to be relatively fixed and relatively poor. CPU encoding is more flexible and higher quality, and if you've got an extra 8 cores sitting around idle anyway might as well use those instead of eating into the GPU power budget.

Also you've got the PCI-E lanes to run things like SLI or multi-GPU along with RAID NVME drives. Threadripper has 60 lanes. Something like an i7-8700k only has 16 PCI-E lanes - you can't even run a single GPU and an NVME drive without multiplexing your PCI-E lanes.
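
A quick lane-budget sketch to make that concrete (devices and counts are illustrative: x16 per GPU, x4 per NVMe drive):

    # Compare a mainstream CPU's lanes against HEDT for a 2-GPU, 2-NVMe build
    devices = {"GPU 1": 16, "GPU 2": 16, "NVMe 1": 4, "NVMe 2": 4}
    needed = sum(devices.values())  # 40 lanes
    for platform, lanes in (("i7-8700K", 16), ("Threadripper", 60)):
        verdict = "fits" if lanes >= needed else "must share/multiplex"
        print(f"{platform}: {lanes} CPU lanes for {needed} needed -> {verdict}")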


IME gamers are those who most likely are willing to shell out noticeably more money for something which they perceive may potentially improve their game performance.

Statistically I’m pretty sure most of them aren’t too scientific about their assumptions.

Hence the huge market for lots of HW which no normal person would buy, explicitly targeted at gamers.

TLDR: reality doesn’t matter as long as the x CPUs are faster :)


Gamers don't buy new CPUs hoping they might be faster, that's ridiculous.

They check benchmarks and see what kind of gains they can expect over what they currently have.


Software behaves like an ideal gas: it will always expand, if given more space. Modern games do a lot of stuff on CPUs that would have seemed insanely inefficient to do 10 years ago, simply because gamers have cores to spare.


Soon AAA games will be released on electron.


That TR 1900X SKU looks like an Intel killer for its target audience.


Not really, it's kind of a weird entry in the lineup. Due to the dual-NUMA layout of TR it's essentially a pair of 4-cores smushed together, which is essentially the worst of all worlds. Desktop users will be better off using the 2700X (same number of cores but on a single NUMA node, and higher clocks), high-core-count users will be better off with the 1920X/1950X or the 2920X/2950X.

The sole advantage is that it's a lot of PCIe lanes for the money, so it can make sense for storage builds that want to address a lot of NVMe, or GPU compute builds that need very little CPU horsepower.

Also, like all TR boards, the motherboards are extortionate. They start at about $350 and go up from there. And that doesn't even buy you a futureproof system - the higher core counts on the TR 2000 series mean that first-gen boards likely won't be able to turbo the new processors, you're running base-clocks.

https://www.youtube.com/watch?v=aKe7CnZT9ZE


> The sole advantage is that it's a lot of PCIe lanes for the money, so it can make sense for storage builds that want to address a lot of NVMe, or GPU compute builds that need very little CPU horsepower.

That's what I was thinking. The 1900x looks useful for quad-GPU Redshift rendering setups.

It's certainly not "mainstream". The 1920X / 2920X is far better bang for your buck. But the pricing from AMD scales very well. If you're willing to spend bigger bucks on cooling, the 2990WX is great. But a 250W TDP is going to be a tough one to design around.

Custom cooling is probably the answer. Enermax's TR4 AIO turned out to be all sorts of awful, unfortunately. And Noctua's air coolers cap out at 180W TDP designs.


> They start at about $350 and go up from there

That's an excellent deal compared to the Talos POWER9 boards :)


And you can already buy it today (AMD 1xxx chips are Zen 1, already on the market).


I hope AMD hits hard the low end, too.


Most chips sold these days are server chips or mobile (cellphone or notebook). I hope these gains filter down to the mobile SKUs.


It may be of no concern to many, but 3D artists constantly express problems with filling DIMM slots when using Threadrippers. Supposedly, 128 GB RAM is only reliable on a couple of motherboards. I just constantly read these stories on forums. (And to be sure, these aren’t situations where users were mixing RAM sets)


I've seen that as well. I believe it boils down to a few AMD motherboard integrators that have rushed out products that are marginal - they work ok for simple use cases, but maxing them out reveals some edge cases they didn't work out.

The solution, of course, is to name them. Can you share which motherboards you've seen people complaining about?


AMD's memory controller has worse compatibility compared to Intel's. AMD's 1st Gen Ryzen was known to have issues with Hynix dies.

Remember: DDR4 goes straight to the CPU these days. I'm fairly certain that motherboard makers can make a simple wire connection between the DDR4 pins and the CPU itself without much issue.

https://community.amd.com/thread/217871

The AMD Community knows to avoid Hynix and to buy Samsung. Hynix did improve with some BIOS updates (in particular: increasing V_soc voltage to 1.1 and other tricks). Update your BIOS if you have any issues.

Zen+ dies have better DDR4 compatibility.


Hasn't the latest BIOS fixed most of the issues?

AMD's memory controller is actually from Rambus.

The good thing is that as more EPYCs move into datacenters, there will be lots more memory testing done for AMD.


Imagine what'll happen once AMD starts competing with Nvidia (for AI stuff).


Does anyone in here know if Zen 2 is going to implement AVX properly? Threadripper 2 will not; all AVX2 instructions will operate at ~SSE speeds, as on past AMD chips.


Zen 1 already implements AVX2 properly.

It doesn't have 256 bit wide units, so it's 2 clock cycles instead of 1… but that's actually good. Intel's 256 bit units cause horrible downclocking issues: https://blog.cloudflare.com/on-the-dangers-of-intels-frequen...

What even is "SSE speeds"?!
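
To put the width difference in numbers, a rough peak-FLOPS sketch (assumes 2 FMA pipes per core and counts an FMA as 2 flops; the clocks are illustrative, not measured):

    # Peak double-precision GFLOPS = cores * GHz * FMA_pipes * (vector_bits / 64) * 2
    def peak_gflops(cores, ghz, fma_pipes, vector_bits):
        return cores * ghz * fma_pipes * (vector_bits / 64) * 2

    # Zen 1 cracks a 256-bit AVX2 op into two 128-bit uops but keeps its clocks;
    # Intel executes it natively at 256 bits but drops to a lower AVX2 clock.
    print(peak_gflops(8, 3.7, 2, 128))  # Zen-style core, no AVX clock offset
    print(peak_gflops(8, 3.2, 2, 256))  # Intel-style core at its AVX2 clock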


At least AMD has consistent AVX performance over prolonged periods of time. Intel starts throttling and downclocking so hard because of heat output that if you are doing any type of mixed workload, it's better to skip the AVX instructions, since the rest of the workload will suffer due to the extreme performance degradation.


Where did you get this idea from?



