At the end of the day we still have the third-party me_cleaner to disable the proprietary secret coprocessor on Intel chips, while AMD chips still have their equivalent, the PSP, with no first- or third-party means to disable it.
Until such time as I can get an equivalent tool to stop the hardware spyware built into the CPU, I can have no enthusiasm or motivation to buy AMD chips. Not that I want to buy Intel parts - they have nothing to do with the third-party efforts to nullify their backdoor - but if I were buying a chip tomorrow it would be a begrudging Intel purchase, just for me_cleaner.
What percentage of Intel owners have run me_cleaner? Probably infinitesimally small.
In fact, if AMD wants to dominate they don't need to care about your particular use case at all. They just need to produce the fastest x86 chips at the cheapest price.
I think the problem is that me_cleaner is a third-party solution; I'm not sure I would trust it not to brick my CPU. If AMD solved this, it would be a huge differentiator in my eyes, and I think it would further boost their sales. That said, I suspect there is some deal with three-letter agencies that doesn't allow them to do that. It doesn't make sense to go against their customers like that otherwise.
The Platform Security Processor (and the code that makes it work) is licensed from ARM. AMD did hire a third party to audit the PSP after that thread on Reddit blew up. The results indicated the need for a rewrite, so in 5 to 6 years AMD will hopefully own the IP to its new PSP.
My MSI X399 Gaming Pro Carbon AC motherboard had an option to disable the PSP, but it was removed in the latest BIOS update. The latest one has an option to turn on SVM (Secure Virtual Machine, needed for virtualization). I can either run VMs _or_ have the PSP disabled...
You're relying on a third-party community effort to disable their proprietary coprocessor, which neither company seems to have the will to do themselves.
The best outcome would be for AMD to provide a first-party, auditable option to disable it; otherwise the community will have to do it themselves, which will probably take longer if fewer people are using their chips. Until then, the main focus will be to buy the best-performing product, because that's the only real differentiator.
Given Intel's struggles around their process shrink, AMD may have come up with the perfect product at the perfect time. You can already see their effect with Intel finally adding more cores to their chips to try and stay competitive.
I do think that Intel will manage to get silicon out a bit earlier than their current end-of-CY19 target, but if they don't, TR and Ryzen may become the default go-to for manufacturers.
> Given Intel's struggles around their process shrink, AMD may have come up with the perfect product at the perfect time.
Yes -- AMD got lucky, this time. I sincerely doubt they (or anyone else) had any idea how Intel would struggle with their 10nm when they started on Zen in 2012. Back in 2013 Intel said they would have Cannon Lake out in 2015 (!) https://www.theregister.co.uk/2015/07/16/intel_10nm_14nm_pla... and today, mid 2018, the only released Cannon Lake CPU is a sad little dual core with the graphics chip disabled (i3-8121U) selling in "very limited quantities".
It is time to give AMD a chance. Intel is plagued by problems with the 10nm process technology resulting in no significant innovation on their CPUs' performance or power consumption for over two years now. And just recently they pushed their 10nm products even further back to the second half of 2019. AMD might have the first 7nm CPUs out by then. Of course process technologies are not easily comparable between fabs but it is still crazy to see Intel starting to fall behind in process technology - a game they dominated for decades.
> It is time to give AMD a chance. Intel is plagued by problems with the 10nm process technology resulting in no significant innovation on their CPUs' performance or power consumption for over two years now.
The difference between 14nm and 14nm+++ is pretty close to a full node step...
> And just recently they pushed their 10nm products even further back to the second half of 2019. AMD might have the first 7nm CPUs out by then. Of course process technologies are not easily comparable between fabs but it is still crazy to see Intel starting to fall behind in process technology - a game they dominated for decades.
The numbers used for process 'sizes' don't really hold any meaning anymore and can't be directly compared. Intel's 14nm is 'better' in almost every metric than other fabs' 10nm processes. Intel's 10nm process is expected to be similarly comparable to other fabs' 7nm.
Intel isn't really behind on process tech, but the competition has closed the gap considerably.
> Intel's 10nm process is expected to be similarly comparable to other fabs 7nm
No, Intel's 10nm is worse than TSMC's 7nm in every metric. Not significantly so, but it is so.
It also seems to be viable whereas Intel's hasn't been proven yet (the only released product is a mobile chip with the iGPU disabled due to defects and low yields).
If they aren't behind on process tech, then they aren't using it properly. The numbers are there. Process aside, AMD is building competitive chips at lower price points. Intel has to pull a rabbit out of the hat if it wants to stay in the spotlight.
I fail to see how that's relevant. What matters is what products are available in the market and how much they cost, and AMD's offering is both cheaper and more performant than anything Intel managed to put together.
There's also Samsung hurting Intel's image a lot. They became the most profitable silicon company (due to loads of memory) and they are head to head with Intel on transistor process (nearly EUV-ready).
I'm very excited for AMD and their market successes lately. I never wanted a world controlled by Intel, and even if all my computers are Intel at the moment I feel the world benefits from this competition.
That being said... Processor pre-orders? I had no idea that was a thing. Hope you get a QA discount. It doesn't talk about sockets, but for the sake of preorderers I hope it's compatible with original Threadripper boards. Edit: actually it does, and they mention it's compatible with existing motherboards too. I missed that this was multiple pages.
Pre-ordering a product that hasn't been reviewed and benchmarked means you could get something much worse than you expect. For example, there may be apps that can't take advantage of 32 cores so you could spend $1,800 to get no benefit.
2020 is currently just 18 months away. From the consumer's POV that means close to nothing. Heck, I assume they would not be able to change within that time frame even if they wanted to.
The commitment was made over a year ago when Ryzen was first released. Basically it was a guarantee that if you were going to upgrade your processor in the next 3 years you wouldn't have to get a new motherboard. A moderate upgrader would get 1 upgrade out of this deal; an aggressive upgrader (who gets every new chip) would get 3 upgrades. Someone who is conservative and only plans to upgrade every 5 years or so would see no benefit. However, Intel seems to change the socket every. damn. time.
> Someone who is conservative and only plans to upgrade every 5 years or so would see no benefit.
I disagree based on my personal experience. The CPU upgrade path for my current desktop computer ended roughly a year after I bought it, even though that CPU generation had been introduced very recently by Intel. I got a relatively high-end CPU, so upgrading to anything but the highest-end CPUs of that generation doesn't really make sense from a performance point of view. Yet those still cost a surprising amount of money today and offer worse performance than the midrange options of newer generations.
Simply put, extending the lifetime and thus upgrade path out by at least 150% means that the options available 5 years down the line will be 150% "better", even though no CPUs for that socket might even be in production anymore at that time. For me that seems like a significant advantage.
It's "compatible" in the sense that it plugs in and runs, but the 32-core processors are going to push the VRMs on all existing boards right to the limit, even at base clocks. After all, it's at least 70% more power than the first gen, and probably higher (most 2000-series Ryzens bust through their official TDP pretty readily).
Turbos are going to be pretty iffy under sustained load, let alone Precision Boost and XFR, which are (iirc) enabled by default.
AMD is shipping all the review kits with an unspecified "add-on VRM cooling kit" to try and help that, and there are second-gen X399 boards coming out with 16 phases (up from 6-8 in the first gen).
Bloody hell, this processor has more cores than the dual socket Xeon HPCs that my lab bought 2 years ago. Amazing.
Honest question: Isn't the memory throughput an issue when you try to feed 32 hungry processors?
Threadripper uses quad-channel DDR4-2666 memory. That gets you 80 GB/s memory bandwidth, which is possible because the processor sits in an enormous 4094-pin TR4 socket. Your Xeons were probably in 2011 pin sockets with DDR3-1866 resulting in 70 GB/s memory bandwidth. One Threadripper has more than double the pin count and slightly more total memory bandwidth than your dual Xeon machine.
Also, your Intel setup has a 25 GB/s QPI bus between the processors. Threadripper runs an Infinity Fabric bus, with 42 GB/s between each die internally (170 GB/s aggregate). While a couple of those internal dies are not directly connected to memory, you should have little trouble sharing meals among those hungry processors.
Actually, with this release Threadripper 2 will support DDR4-2933, which should increase the memory bandwidth and the throughput of the Infinity Fabric, if I'm not mistaken (I'm pretty sure Infinity Fabric runs at the RAM speed).
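A quick back-of-the-envelope check of those figures, as a rough sketch: peak DRAM bandwidth is just channels * transfer rate * 8 bytes per 64-bit transfer (a theoretical peak that real workloads won't reach):

```python
# Peak DRAM bandwidth: channels * transfer rate (MT/s) * 8 bytes per 64-bit transfer.
def peak_bandwidth_gbs(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000  # GB/s

print(peak_bandwidth_gbs(4, 2666))  # quad-channel DDR4-2666 -> ~85 GB/s (the ~80 GB/s figure above)
print(peak_bandwidth_gbs(4, 2933))  # quad-channel DDR4-2933 -> ~94 GB/s for Threadripper 2
```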
That's true. A staggering number of those pins are power delivery to start with - you don't waste anywhere near half the socket by dropping 4 memory controllers (!) - but yeah, it's not EPYC.
Do you know whether Threadripper is affected by NUMA issues? Otherwise this could be an additional selling point for applications that need a lot of memory (e.g. databases, where you do not want the memory latency to be unpredictable depending upon which socket you are running on)
Well, it's got four fairly fast memory channels but yes, you'll probably tend to be more memory limited with this guy - though the 64 MB of L3 should help in many cases.
For TR, kind of. In the newer models only half the cores have direct access to a DDR channel. This shows in memory-bound benchmarks, but Epyc (the server line) alleviates this problem by having 8 (!!) memory channels instead of the consumer TR's 4.
Ultimately it depends on how cache friendly your workload is.
The 32c/64t part is kind of a marketing gimmick selling binned Epycs. If you're doing memory-bound HPC, consider the 16c/32t model for roughly equal performance at a lower price.
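Roughly speaking, that recommendation comes down to peak bandwidth per core; a quick sketch with theoretical numbers (illustrative only, ignoring caches and NUMA placement):

```python
# Peak memory bandwidth per core: channels * MT/s * 8 bytes, divided by core count.
def gbs_per_core(channels: int, mts: int, cores: int) -> float:
    return channels * mts * 8 / 1000 / cores

print(gbs_per_core(4, 2933, 32))  # 32c Threadripper 2: ~2.9 GB/s per core
print(gbs_per_core(4, 2933, 16))  # 16c Threadripper 2: ~5.9 GB/s per core
print(gbs_per_core(8, 2666, 32))  # 32c Epyc, 8 channels: ~5.3 GB/s per core
```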
It's the unfortunate downside to these. Two of the dies don't have direct access to memory, but it seems like AMD didn't want to do a refresh of the pinouts and force socket updates. I suspect this will be left for the Zen 2 version of Threadripper. That being said, I imagine in most circumstances the performance should still be pretty good.
While I can see Intel's leadership in single-core performance with 6 cores (an overclocked 8700K can hit 5+ GHz, and future flagships will probably be even faster with 8 cores), for high-core-count workstations AMD is the winner, hands down. Competition is good.
They had their ups and downs. The Athlon 64 was a masterpiece in both performance and power consumption, in part because they made power management an integral part of the platform, not just for laptops. At the time it was released, Intel was busy pushing the Pentium 4 to its limits, making it a great replacement for central heating during winter.
Then Intel was on top of the game again with the Core architecture, and while Bulldozer (IIRC) improved in that regard, Intel was still ahead, especially as Bulldozer sucked in pretty much every other regard. Ryzen seems to be about on par with current Intel CPUs (performance per watt), depending on the benchmark.
> Intel was on top of the game again with the Core architecture
Funny thing is that Core, AFAIK, came from a separate laptop chip branch. For 64 bits Intel was betting on Itanium, which crashed and burned like the iAPX 432 did before it. And so Intel was once again forced to stay on the 8080 treadmill...
"They worked on it in their spare time and it was really a passion project for about a year before they sought the green light from management, which is quite unusual – it was something they really cared about."
Yup. It was a little "side project" by their team in Haifa that turned out much better than expected. So the Israelis saved Intel. (Dramatization mine)
Zen (both Ryzen 2000 and 1000) is on a fairly low-power node, optimized for efficiency rather than clocks. Zen is pretty much comparable to Intel these days, if not slightly better, but runs into a voltage wall much quicker. If you force your way into that wall they do pull a fair bit of power.
The Ryzen 2000 chips have smarter SpeedStep-type functionality than Intel does; it pretty much squeezes all the OC performance out of the chip while maintaining normal power management, while Intel (and Ryzen 1000) need to manage voltages on a processor-wide basis, which eats more power.
They haven't always been power hogs. Nowadays they have a clear advantage. You don't want to know how much heat a Xeon machine with 32 cores will have to dissipate.
Modularity helps cutting production costs, but that's just a lower bound. Prices are set based on marketing strategies and on how much the company is able to get for a product.
With a monolithic die that lower bound is not linear, though: the bigger the die, the higher the chance of defects and the rarer the parts that don't need to have sections fused off.
With a modular design the lower bound is, indeed, linear, allowing AMD to severely undercut Intel.
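To see why the chiplet lower bound stays closer to linear, here is a rough sketch using a simple Poisson yield model; the die areas and defect density are made-up illustrative numbers, not AMD's or any fab's actual figures:

```python
import math

def zero_defect_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with zero defects under a simple Poisson yield model."""
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.2  # assumed defect density in defects/cm^2 (illustrative)

monolithic = zero_defect_yield(4.0, D0)  # one big ~400 mm^2 die:  ~45% yield
chiplet = zero_defect_yield(1.0, D0)     # one ~100 mm^2 chiplet:  ~82% yield

print(f"monolithic die: {monolithic:.0%}")
print(f"single chiplet: {chiplet:.0%}")
# Four perfect chiplets have the same combined probability (~45%), but a bad chiplet
# only costs you that chiplet - it can be discarded or binned into a cheaper SKU -
# whereas a defect on a monolithic die can cost you the whole part.
```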
Rendering scales fairly linearly with hardware. If I'm rendering with Arnold for example, it can saturate all my cores pretty well and gives back an almost linear rendering time reduction.
Same with GPU systems like Redshift; alternatively, I can dedicate half the machine to rendering and the other half to continued work.
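For a sense of how close to linear that is, here is an Amdahl's-law sketch; the 2% serial fraction is an assumed example, not a measured Arnold or Redshift number:

```python
# Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / n_cores)
def speedup(n_cores: int, serial_fraction: float = 0.02) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for n in (8, 16, 32):
    print(n, round(speedup(n), 1))
# 8  -> ~7.0x
# 16 -> ~12.3x
# 32 -> ~19.8x (even a tiny serial portion bites harder as core counts climb)
```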
Does 16+ threads actually contribute to render times with Redshift?
I know
Unless you're using Houdini, I think you'll get the same result using Redshift with 16 threads. Houdini is the only DCC that uses more than one thread to prep the scene.
Deep learning. CPUs do data augmentation, GPUs do linear algebra. I already have a Core i9, but it's "only" 10 cores, and it's struggling to keep up with the GPUs.
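As a minimal sketch of that split, assuming PyTorch/torchvision (the dataset is a stand-in and the worker count is arbitrary): the CPU worker processes run the augmentation pipeline while the GPU handles the math.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# CPU-side augmentation: every transform here runs in CPU worker processes.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

# FakeData stands in for a real dataset; num_workers is the knob that eats CPU cores.
dataset = datasets.FakeData(size=10_000, image_size=(3, 256, 256), transform=augment)
loader = DataLoader(dataset, batch_size=128, num_workers=16, pin_memory=True)

for images, labels in loader:
    if torch.cuda.is_available():
        images = images.cuda(non_blocking=True)  # the GPU does the linear algebra from here
    # ... forward/backward pass would go here ...
    break
```

If the workers can't prepare batches as fast as the GPU consumes them, the GPU sits idle, which is exactly where more cores help.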
Blender 2.8's Eevee is reducing render times by a factor of 100, with a modest reduction in quality. It's basically using game rendering techniques, but instead of catering to realtime it'll take a couple of seconds to produce a frame. It's almost entirely GPU driven, so in the future there may not be a need to go full Threadripper unless you use Cycles daily.
If you want to use Cycles for animations, you'll probably need the full Threadripper.
Cycles is easily 5+ minutes a frame, maybe 20+ minutes even on a Threadripper if you have an interior scene or something hard to light. Even with Cycles' denoiser, you need a lot of rays to get a decent animation.
Even a short 10-second animation through Cycles will take multiple days to render on a Threadripper.
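The arithmetic behind "multiple days" (frame rate and per-frame time are assumed round numbers):

```python
seconds, fps = 10, 24           # a 10-second animation at 24 fps
minutes_per_frame = 20          # a tough interior scene, even on a Threadripper
frames = seconds * fps          # 240 frames
hours = frames * minutes_per_frame / 60
print(frames, hours, round(hours / 24, 1))  # 240 frames, 80.0 hours, ~3.3 days
```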
For the same reason Apple is selling an 18-core Mac Pro for $20,000 or whatever the price is now. Some people need that power. It's just that with AMD, you can get more of it and for a tiny fraction of the cost of Apple's Mac Pro.
That's just abusive pricing. If you're stuck with Apple software then I guess you have no choice. I have a 10-core 128GB ram workstation that's two years old now, it cost me half as much as a MacBook Pro. I need a new laptop soon, but the difference between the i9 Dell and the i9 MacBook is $800 CAD. Plus the Dell is for sure going to work better with a Linux install, in terms of drivers, which is better for me for work than MacOS. I won't be buying Apple.
The iMac Pro is cost-effective on the Intel platform. But AMD's Threadripper is the topic of discussion here, and I think for most people... those 32 cores for $1,799 are just a way better deal.
Do remember that this 32-core Threadripper offers quad-channel memory, 64 PCIe lanes, and ECC support (!!). So Threadripper offers pretty much everything a professional wants.
The only downside to Threadripper is that its memory behavior under multi-threaded load is similar to a quad-socket / quad-CPU design (~200ns to ~300ns latencies to memory, depending on how many "hops" and NUMA nodes and stuff...). So audio professionals and gamers, who are (IIRC) latency-bound, may want to stick to Intel.
But almost everyone else is bandwidth bound (video editing, 3d rendering, compilers, web servers, LT Spice...). So most professionals would probably prefer a Threadripper.
Threadripper is the broadest topic in the thread. Down here, the question was whether Apple actually charges $20K for a particular computer. They don't, and given the particular hardware choices, the iMac Pro is a great deal. The ideal choice of hardware is a separate topic.
Yeah, it's like $500. Of course there's a markup on it. Apple doesn't make money by assembling parts at stock prices and then sending it to you.
It's the case that building a comparable computer (in terms of the specific iMac Pro hardware) is very similarly priced BUT requires labor, the purchase of an OS and additional peripherals: https://youtu.be/SONKIJd8xRM?t=187
That said, your point that Apple could've chosen better hardware at its price is reasonable but off topic.
Curious that there's quad-channel DDR4. AMD's similar Epyc processors that have four chips on the module are octal-channel (because there's a dual-channel controller on each chip). Won't these therefore perform worse than they should?
Ultimately they're limited by the number of pins that the socket they use has, even if the silicon could support more channels. They want to be able to maintain socket compatibility and they want to have some SKUs with just 2 working dies so 4 channels it remains.
A non-trivial amount of the game is still on the CPU. All game logic is on the CPU. AI, physics, networking, audio, etc... are all CPU as well (GPU physics never really happened beyond particle effects).
Even for rendering it's still the CPU preparing the commands and doing a first-pass curation to minimize rendering load.
That said, in this case it's not so much running the games better, as yes the GPU is usually the bottleneck. It's more for gamers that also stream or similar, which is an increasingly popular thing. You can GPU offload it, but that hurts your game performance and GPU encoding quality tends to be relatively fixed and relatively poor. CPU encoding is more flexible and higher quality, and if you've got an extra 8 cores sitting around idle anyway might as well use those instead of eating into the GPU power budget.
Also, you've got the PCIe lanes to run things like SLI or multi-GPU along with RAID NVMe drives. Threadripper has 60 lanes. Something like an i7-8700K only has 16 PCIe lanes - you can't even run a single GPU and an NVMe drive without multiplexing your PCIe lanes.
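A rough lane budget shows how quickly 16 lanes run out; the device list below is a hypothetical build, not any specific product's requirements:

```python
# Typical PCIe lane appetite per device (illustrative).
devices = {
    "GPU (x16)": 16,
    "second GPU (x16)": 16,
    "NVMe SSD (x4)": 4,
    "second NVMe SSD for RAID (x4)": 4,
    "10GbE NIC (x4)": 4,
}

wanted = sum(devices.values())
print(f"lanes wanted: {wanted}")                        # 44
print(f"i7-8700K CPU lanes: 16 (short by {wanted - 16})")
print(f"Threadripper lanes: 60 ({60 - wanted} to spare)")
```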
IME gamers are the ones most likely to shell out noticeably more money for something they perceive might improve their game performance.
Statistically I’m pretty sure most of them aren’t too scientific about their assumptions.
Hence the huge market for lots of HW which no normal person would buy, explicitly targeted at gamers.
TLDR: reality doesn’t matter as long as the x CPUs are faster :)
Software behaves like an ideal gas: it will always expand, if given more space. Modern games do a lot of stuff on CPUs that would have seemed insanely inefficient to do 10 years ago, simply because gamers have cores to spare.
Not really; it's kind of a weird entry in the lineup. Due to the dual-NUMA layout of TR, it's essentially a pair of quad-core dies smushed together, which is the worst of all worlds. Desktop users will be better off using the 2700X (same number of cores but on a single NUMA node, and higher clocks), and high-core-count users will be better off with the 1920X/1950X or the 2920X/2950X.
The sole advantage is that it's a lot of PCIe lanes for the money, so it can make sense for storage builds that want to address a lot of NVMe, or GPU compute builds that need very little CPU horsepower.
Also, like all TR boards, the motherboards are extortionate. They start at about $350 and go up from there. And that doesn't even buy you a futureproof system - the higher core counts of the TR 2000 series mean that first-gen boards likely won't be able to turbo the new processors; you'll be running base clocks.
> The sole advantage is that it's a lot of PCIe lanes for the money, so it can make sense for storage builds that want to address a lot of NVMe, or GPU compute builds that need very little CPU horsepower.
That's what I was thinking. The 1900x looks useful for quad-GPU Redshift rendering setups.
It's certainly not "mainstream". The 1920X/2920X is far better bang for your buck. But AMD's pricing scales very well. If you're willing to spend bigger bucks on cooling, the 2990WX is great. But a 250W TDP is going to be a tough one to design around.
Custom cooling is probably the answer. Enermax's TR4 AIO turned out to be all sorts of awful, unfortunately, and Noctua's air coolers cap out at 180W TDP designs.
It may be of no concern to many, but 3D artists constantly express problems with filling DIMM slots when using Threadrippers. Supposedly, 128 GB RAM is only reliable on a couple of motherboards. I just constantly read these stories on forums. (And to be sure, these aren’t situations where users were mixing RAM sets)
I've seen that as well. I believe it boils down to a few AMD motherboard integrators that have rushed out products that are marginal - they work ok for simple use cases, but maxing them out reveals some edge cases they didn't work out.
The solution, of course, is to name them. Can you share which motherboards you've seen people complaining about?
AMD's memory controller has worse compatibility compared to Intel's. AMD's 1st Gen Ryzen was known to have issues with Hynix dies.
Remember: DDR4 goes straight to the CPU these days. I'm fairly certain that motherboard makers can make a simple wire connection between the DDR4 pins and the CPU itself without much issue.
The AMD Community knows to avoid Hynix and to buy Samsung. Hynix did improve with some BIOS updates (in particular: increasing V_soc voltage to 1.1 and other tricks). Update your BIOS if you have any issues.
Does anyone in here know if Zen2 is going to implement AVX properly?
Threadripper 2 will not; all AVX2 instructions will operate at ~SSE speeds, as on past AMD chips.
At least AMD has consistent AVX performance over prolonged periods of time. Intel starts throttling and downclocking so hard because of heat output that, if you are doing any kind of mixed workload, it's better to skip the AVX instructions, since the rest of the workload will suffer from the extreme performance degradation.