Well isn't that a kick in the pants.
AMD had a similar upper hand against Intel with their Athlon processor 20 years ago. Intel's transition to the 180 nm process was delayed, and AMD's K7 Athlon was the superior microarchitecture. Intel's response was to lower profit margins until AMD was on the ropes again.
This time AMD has a better chance, but don't count out Intel. AMD's success depends on their profit margins when competing against Intel. If Intel forces AMD's profits close to zero, it can spoil AMD's technical win.
But that would be retroactive and wouldn't help AMD. What helps AMD in advance is the data center landscape. How would Intel force Google, Microsoft, and Amazon to use Intel server processors in the custom servers for their cloud data centers? Intel can compete on price (though price dumping could alert regulators again), but it has zero leverage over those three. Thus we see https://news.ycombinator.com/item?id=20643604.
And smaller OEMs should be hard to pressure given the history, and hard to mislead given the big companies validating AMD's processors.
Regular vendors for desktops and laptops are already offering AMD, so there seems to be limited danger there.
Also, Intel is operating under a consent decree and FTC supervision until October 29, 2020. The FTC would come down hard on direct violations of the decree.
And it looks like they finally brib^^convinced the right people and have a chance of getting it cancelled: https://www.nytimes.com/2017/09/06/business/intel-eu-antitru...
But the consent decree is new to me. I guess the FTC has been slightly more active than I gave them credit for.
Betting against TSMC, not a very wise move at the moment.
>If Intel force AMD's profits close to zero, it can spoil AMD's technical win.
That assumes Intel will take a loss in order to hurt AMD, and I am not sure predatory pricing is legal in the US. AMD is not pricing their products to undercut Intel; AMD is pricing them as they are because their chiplet strategy allows them to do so while still retaining industry-level margins.
I am pretty sure Intel can afford to lose billions, just like with their contra revenue, but this time around the sums would be so great I am not sure their investors would be happy with it, not to mention Intel's already declining margins.
Keep in mind AMD's dies are tiny compared to Intel's. The CPU dies, the 7nm parts, are a mere 74 mm². They are going to be getting fantastic yields on them as a result, and can trivially allocate them to what's selling well. And it's the same die for their entire stack - consumer & enterprise.
Meanwhile the 28-core Xeons are a monstrous ~694 mm². Even though the 14nm process is mature, that's still a hugely expensive chip to make due to yields and capacity. You can only fit so many of those rectangles on the circular wafer.
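To make the yield intuition concrete, here's a minimal Python sketch using the classic zero-defect Poisson yield model plus a common dies-per-wafer approximation. The defect density and 300mm wafer size are illustrative assumptions, not published TSMC or Intel figures:

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
        # Common approximation: circle area over die area,
        # minus an edge-loss term for partial dies at the rim.
        r = wafer_diameter_mm / 2.0
        return int(math.pi * r**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

    def poisson_yield(die_area_mm2, defects_per_mm2):
        # Fraction of dies expected to have zero defects.
        return math.exp(-die_area_mm2 * defects_per_mm2)

    D0 = 0.001  # defects/mm^2 -- an assumed, illustrative defect density

    for name, area in [("74mm2 chiplet", 74.0), ("694mm2 Xeon", 694.0)]:
        gross = dies_per_wafer(area)
        y = poisson_yield(area, D0)
        print(f"{name}: {gross} dies/wafer, {y:.0%} yield, ~{gross * y:.0f} good dies")

With these made-up numbers the small die gets ~877 candidates per wafer at ~93% yield, while the big die gets ~76 at ~50% - a huge gap in cost per good die.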
But it seems Intel has been unable to keep up for quite some time now. They were the leader, and the 10nm and 7nm processes seem to have completely stopped them in their tracks relative to the competition.
I am kind of interested in what actually happened. There is probably an interesting book to be written on Intel's troubles with getting the 10nm node to work.
I've had enough of the issues with my 6850k.
The delay of the 10nm process and patching over it with 14nm+ and 14nm++ didn't stop anything else they do in process technology. They have accelerated the path to 7nm (similar to TSMC's 5nm).
ps. Intel plans to launch their first discrete GPU in 2020 using 7nm process. That's going to be interesting.
It is 10nm in 2020, definitely not 7nm. That is scheduled for 2021, if they can make it on time.
I feel like they would have burned a lot less goodwill if they hadn't straight up lied for multiple years about their maturity and timeline.
I've been holding out on building a desktop because I could go a while without one, but my patience is wearing very thin after waiting months, and now having to wait even longer just to get CPUs in stock in the first place.
Intel's advantage is OEMs and sheer output volume. The hyperscale infrastructure folks are going to shore up AMD financially while Intel has problems, and maybe after two years or so of this nonsense AMD will be much more of a serious default choice, but as of this moment it isn't a slam dunk for AMD at all.
You're confusing bandwidth and latency. Ryzen 3000 launched a month ago, the 3900x outperformed expectations, so the initial shipments sold out faster than expected. You can't magic up stock out of thin air, so there's an inevitable lag between retailers reporting unusually high demand and AMD being able to deliver sufficient stock.
The bandwidth question is much more important and bodes very poorly for Intel. TSMC's 7nm process is stable, providing excellent yields and has plenty of available capacity; they're already in risk production for 5nm, which is expected to free up substantial capacity at 7nm/7nm+ into 2020. Intel's 10nm has been a complete debacle and (despite the Ice Lake launch) is still blighted with sub-par yields.
Yields in terms of getting lots of operational dies, sure, but one of the underlying problems here is chip quality. They have lots of chiplets but most of those chiplets are not fast enough to hit the clocks advertised for the 3900X.
In fact, even a lot of the chiplets sold as 3900X are not fast enough to hit the clocks advertised for the 3900X, and the same holds elsewhere throughout the range. A lot of people are finding that their chips boost to 50-200 MHz below the advertised frequency.
Essentially, the boost algorithm now takes chip quality into account when determining how high it will boost. And most of the chips have silicon quality that is too poor to hit the advertised clocks, even on a single core, even under ideal cooling, etc etc.
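To illustrate (this is a hypothetical sketch, not AMD's actual Precision Boost algorithm - the quality metric and derating numbers are invented):

    def effective_boost_mhz(advertised_boost, silicon_quality, temp_c, cores_active):
        # Hypothetical: cap the advertised boost by per-chip silicon
        # quality in addition to the usual temperature/load limits.
        # silicon_quality is an invented 0.0-1.0 metric; real firmware
        # works from per-part fused voltage/frequency curves.
        clock = advertised_boost
        clock -= int((1.0 - silicon_quality) * 400)  # poorly-binned die: up to -400 MHz
        if temp_c > 75:
            clock -= 100                             # thermal derating
        clock -= 50 * max(0, cores_active - 1)       # multi-core derating
        return clock

    # A mediocre die misses a 4600 MHz advertised boost by ~160 MHz,
    # even single-core under good cooling:
    print(effective_boost_mhz(4600, silicon_quality=0.6, temp_c=60, cores_active=1))  # 4440

The point being: once per-chip quality feeds into the boost cap, two chips with identical labels can legitimately top out at different clocks.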
Thus, AMD has the somewhat dubious honor of being the first company to make the silicon lottery apply not just to overclocking, but to their stock clocks as well. They really wasted no time before shifting to anti-consumer bullshit of their own; all they had to do was advertise the chips as being 200 MHz lower and everyone would have been happy, but they wanted to advertise clocks the chips couldn't hit.
And again, the underlying problem is chip quality - a lot of these chiplets can't boost to 4.3 or 4.4 GHz let alone 4.7. AMD simply can't yield enough 4.7 GHz chiplets to go around, even if the chiplets are nominally functional. The process may be "stable and providing excellent yields" but it's not stable and well yielding enough to meet AMD's expectations.
That's a major reason they're now introducing 3700 and 3900 non-X variations - that will allow them to reduce clocks and satisfy demand a bit better.
New architecture, new chipset, bound to have some release issues. Intel is on its second or third refresh of their Skylake architecture from 2015, all ironed out.
More like the fourth, I think.
So to be clear, older AGESA isn't a magic bullet that is letting all chips "easily hit their rated boost clocks".
It could be cleaned up somewhat in future AGESA releases, and silicon quality will definitely go up over time.
Do you have some sort of source for this claim?
A lot of reviewers have noted similar things, but they are often working with single samples and didn't want to make too much of a stink without more data; the problem is widespread, though. Out of all of der8auer's CPUs, only one hit its advertised boost clocks, and it was one of the lower-end CPUs with a less ambitious target to hit.
It may be a problem with early AGESA firmware, and silicon quality will definitely go up over time, but at least at this point in time AMD has certainly falsely advertised the clocks these CPUs are capable of achieving.
As it stands, this mine has some dead canaries.
> ...but one of the underlying problems here is chip quality
> ...every forum pretty much
That's hearsay, not actual evidence. And Intel has a reputation for dirty play.
It is unclear whether Intel is truly disadvantaged in throughput for any appreciable length of time. We saw what happened to Intel after the disastrous Prescott release years ago - they worked on the Core architecture and its follow-up Core 2, which put AMD in a pretty serious rut for the past decade. Your point about the 10nm offering launching and _still_ being lackluster is the big, big problem for Intel's short-term competitiveness.
I'm experiencing the opposite. There have been so many deals on the 3900X that I have to constantly tell myself I don't need an upgrade.
This doesn't necessarily completely invalidate my point, though - AMD's distribution clearly needs some work when one region is drowning in 3900X processors and a very wealthy metro area has none in retail channels.
This strategy is quite widely used in many other industries as well.
Although it has been launched for less than a month and demand is actually through the roof (I have never seen reviews this pro-AMD, not even in the AMD Athlon 64 days). So I think supply is simply a little tight while TSMC works hard to catch up.
I hope this letter finds you in good health. Since I've seen that you are offering AMD 3900X at a discount, I'd like to inform you that I am not like those pleebs and would therefore like you to pay in full MSRP. Please let me know where I can send the goats.
I have the honor to be your obedient customer.
B&H Photo is a reputable site, and they are advertising the 3900X at $499. Now whether they will ship anytime soon...
In fact Intel had supply issues with their 9900K at launch too, for at least a month or two they were often out of stock at the major retailers. If getting ahold of a 3900X is still tough by next month then maybe that's cause for concern.
I would agree that Intel is more mature on the BIOS side of things. AMD usually has launch issues that need to be ironed out with UEFI updates. But, if the past few launches are any indication, they've always got things fixed.
Next month, the 3950X will be the one hard to get a hold of.
These chips launched on schedule on 7/7, almost exactly one month ago. If you've been exasperatedly waiting for months it's not AMD's fault.
The "literally no reason" part is okay, what you need to do is adjust "at the moment" to "once we're out of the launch window".
I have one and there is no BIOS issue. I got the cheapest X570 mobo and everything is great.
There's no business reason on desktop or server for Intel for sure but there's so much inertia here which AMD needs to counter, it'll take years.
Intel is down on the floor until 2021-2022, when their 7nm (which is a smaller node than TSMC's 7nm) begins to ship, because a) there's no reason to believe 10nm will actually ship in quantity, and b) there's every reason to believe that even if it does, it won't be great - the first iteration of a process never is, and 14nm is so fine-tuned by now that it's better in performance per watt, which makes Ice Lake look stupid. 7nm is said to be a totally separate, independent development and not a fine-tune of the (dead) 10nm.
Intel has $12B cash on hand though, so don't expect them to just go out with a whimper. If their profits go down a little for 2-3 years, they will live. The stock price didn't crash, with good reason. AMD had a net loss for seven consecutive quarters before turning a profit in Q2 2016; Intel won't even turn unprofitable for a similar period of time, they'll just have a little less profit. And, again, they have a decent-sized war chest to draw on if necessary.
The chip business is a slow business. In 2012, Intel said they would ship 10nm chips in 2015. https://www.crn.com/news/components-peripherals/240007274/in... This is about the same time AMD re-hired Jim Keller. AMD saw their window in 2015 when Intel's 10nm didn't ship, threw away K12 in a hurry, and brought Zen to market in 2016 -- surely they didn't expect they would have a five-year run during which Intel couldn't put up a fight.
The fun will start in 2021, when TSMC is expected to have a refined 5nm process (they call it 5nm Plus), which you can bet AMD will use against Intel's 7nm.
The hilarious upside of this is that it will drive down Intel's chip prices as they attempt to compete, improving Mac margins.
Warehouse-scale computing has similar budgets and timeframes. You don't decide how to re-build this month's 10k machines based on this month's benchmarks. You made the decision as far back as the supply chain required you to do it, maybe a year or more.
I'm sure that AMD's sales team has been telling their big customers about this generation's performance improvements for a while. But with their history, decision makers are going to discount the story a bit until they can see it in production silicon.
So the next few months' movements in AWS and the like will all depend on the extent to which their decision makers were convinced many months ago.
Like it or not, Google's stamp of approval carries a tremendous weight in this industry. And if that's not good enough for you, Microsoft and AWS are stepping up their deployments of EPYC as well.
As a result, a lot of smaller companies will now require a much lower standard of due diligence when approving an EPYC Rome deployment.
The signs are there, Lenovo has called them T480 and A480 last year, T490 and T495 this year, indicating these are very close.
Also, software is very often the largest cost for these systems. It's not hard to find yourself paying $100k a month for an Oracle license. A one-time expense of $50k for one piece of hardware vs another is barely a blip.
And in fact that software is often priced based on hardware spec. So if you have 4x as many cores on EPYC, you will pay more in software costs on a monthly basis as well. That, or the software will simply refuse to use them until you buy an upgrade, meaning those extra cores sit there doing nothing.
It's counterintuitive to people whose experience is building a gaming desktop at home, but hardware expenses are not necessarily a big part of total cost of ownership for enterprise operators.
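A quick back-of-the-envelope sketch of that dynamic. The license fee, lifetime, and hardware prices are made-up round numbers for illustration, not actual Oracle (or any vendor) pricing:

    LICENSE_PER_CORE_MONTH = 1000  # $/core/month -- assumed
    LIFETIME_MONTHS = 36           # assumed service life

    def total_cost(hardware_cost, cores):
        # One-time hardware cost plus per-core license fees over the lifetime.
        return hardware_cost + cores * LICENSE_PER_CORE_MONTH * LIFETIME_MONTHS

    # A cheaper box with 4x the cores can easily cost far more overall:
    print(total_cost(hardware_cost=100_000, cores=64))  # 2,404,000
    print(total_cost(hardware_cost=150_000, cores=16))  #   726,000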
That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.
It works fine if you shut everything down and reboot the system, but that is often undesirable.
The whole point of the feature is that the VM can be migrated around different physical hardware without having to interrupt service. It just suddenly is running on a different host instance. But it has to be the same type of processor... or at least the same feature set. "Close" is not good enough, it needs to be a 1:1 match.
You can manually disable features until you have found the lowest common denominator between the feature sets of the different processors. But obviously the more types of processors you have in your cluster, the more problematic this is. In very few clusters will you find servers of mixed types, you buy 10,000 of the same server and operate them as a "unit". You don't just add in servers after the fact, sometimes you don't even replace failed servers.
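As a rough sketch of what finding that lowest common denominator looks like on Linux hosts (parsing the "flags" line of /proc/cpuinfo; the flag lists here are shortened examples, and real tooling such as libvirt's CPU-model baselining does this more carefully):

    # Shortened, illustrative /proc/cpuinfo excerpts for two host types.
    intel_host = "flags\t\t: fpu sse sse2 avx avx2 avx512f"
    amd_host   = "flags\t\t: fpu sse sse2 avx avx2"

    def cpu_flags(cpuinfo_text):
        # Extract the CPU feature flags from a /proc/cpuinfo dump.
        for line in cpuinfo_text.splitlines():
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
        return set()

    # Only features present on *every* host are safe to expose to a VM
    # that may be live-migrated anywhere in the cluster.
    common = cpu_flags(intel_host) & cpu_flags(amd_host)
    print(sorted(common))  # ['avx', 'avx2', 'fpu', 'sse', 'sse2'] -- no avx512f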
And that hardware decision will have been made years ago, very often. The server market is hugely inertial, it's nothing like you putting together a build one evening and then going out and buying parts and putting it together.
I think you can do that on multisocket systems:
A very long time ago, I worked for a then very large company that sold servers. Plain standard 80486 based servers.
My job was to drive around and drop off these servers for evaluation at prospective customers, who would compare them against 80486 offerings from a different vendor.
Your argument about them all being fundamentally the same would be even stronger: it’s the same CPU.
And yet, customers did not take chances and would go through the eval motions. Because their business relied on it.
Now imagine that at a scale of thousands.
Claiming “they are fundamentally the same” is not wrong, but you don’t care about the fundamentals only. You care about the whole picture and you don’t take chances.
Very conservative corporate customers could wait a short time for good BIOS corrections and sufficient supply for all the parts (not only CPUs) they need before shopping for AMD servers, but they would be buying different hardware from the same established suppliers even if they went with Intel.
That sentence above is a contradiction in terms.
A conservative corporate customer spends many months on evaluations. There is no such thing as “a short time.”
There's a lot more incentive to explore EPYC than there was a day ago.
And exactly what kind of risk are you talking about?
If switching is as easy as you claim it is, you can still do so next year or the year after.
Your comments in this discussion are very small scale, retail oriented.
It's not a huge advantage, but I'm not sure I would go so far as to call it foolish to buy Intel at this point. If your only serious workload is gaming, Intel seems like the obvious choice to me. You can actually get a decent all-core overclock on the Intel parts, which leads to a significant performance lead in esports titles.
To call that a "significant" performance lead is silly.
Also, the "only at 1080p" meme is not really true for some esports titles. Counter-Strike is so CPU-bound that it really doesn't matter what resolution you play at.
My next desktop will be AMD+Nvidia. Now if only I could avoid the Nvidia tax for deep learning...
The main difference is that on Intel you get that magical-sounding 5GHz number by overclocking, but it's not actually much higher than stock (4.7GHz on the 9900K is the "all-core turbo").
Even formerly die-hard intel gaming shops (Linus Tech Tips and similar) are recommending AMD for most gaming rigs nowadays.
Ryzen 9 3900x: $499 and sold out
And here’s gaming performance: Far Cry 5 shows a 25fps advantage for Intel. And yes, I have an RTX 2080 Ti.
So for a large group of people, Intel still makes sense.
And my reply was in response to “literally no reason to buy an Intel“ which is clearly just not true.
On top of which, given the history, those building gaming machines will assume Intel's next 10nm CPUs will still outshine AMD's in gaming for the foreseeable future.
Intel is a juggernaut because even if it's not shipping the best, it's always shipping something on time, every time.
Intel is in their current situation because their 10 nm process is years late, and is still not able to manufacture high-performance parts like server CPUs in any meaningful quantity for a price that the market would bear. They've also had severe shortages over the past year, which has resulted in orders being delayed for weeks or months.
And of course, there's the matter of their products basically being warmed-over refreshes of a nearly 5-year-old architecture (because their new architectures are dependent on 10 nm), which has resulted in comically lopsided performance in AMD's favor, in basically every objective metric that matters.
In addition, some specific software (e.g. the Android emulator on Windows) doesn't exist for AMD CPUs.
Looks like AMD expected Intel would actually start to fight back a few years ago when AMD started on the Zen and Rome designs, and AMD has been running full steam ahead since then. Meanwhile, in reality, Intel dropped the ball and was too slow to react, and now AMD has basically leapfrogged them. What a time to be alive.
That being said, if AMD had never made Ryzen, you can bet your bottom dollar we would have been left "enjoying" 6-core hyperthreaded Ice Lake desktop CPUs and 16-24 core server CPUs next year, at the prices AMD is now charging for 12-core and 48-core chips.
As a result a bunch of the greybeards left, so for 10nm a bunch of institutional knowledge was completely missing and they had to learn everything again the hard way.
Ah the P4 days, what glory that was.
I think the situation is a bit different this time around though as AMD’s bet on TSMC is paying off in a major way while Intel continue to flounder in the fab space.
Going back a bit further, AMD spinning off GlobalFoundries and then shopping around for fabs on the open market is definitely looking like a very good decision with hindsight. GF has also, since then, run into problems rolling out a next-gen node, and eventually cancelled their 7nm. Hard to say how much you can credit AMD for foresight there vs getting lucky, but being manufactured by TSMC vs. in-house has worked out well.
I don't think this was obvious at the time. Some people thought it was a good move (obviously including the decision makers), but a good number of pundits interpreted AMD giving up on a proprietary in-house fab and relying on commercially available facilities as basically AMD throwing in the towel on being able to compete head to head with Intel as an integrated chip designer/manufacturer, relegating them to more the budget space. To be fair, at the time (2009), TSMC processes were behind Intel's, so you would've had to predict TSMC catching up and surpassing Intel.
I kind of want to see an analysis of the minimum viable volume of product to justify a new fab process going back over the years. Today it's just not feasible, compared to Noyce et al who could do it in their lab.
Intel's own competitive analysis figured competition in servers "is likely to be the most intense in about a decade" (https://www.techpowerup.com/256842/intel-internal-memo-revea...). Sounds about right.
The AMD presentation had announcements from Azure and GCP but not Amazon.
How soon Intel gets Ice Lake ready for servers seems pretty relevant here.
- A slide in AMD's presentation suggested 64C would cost $7k, but that's for the top two-socket-capable version. If you "just" want 64C, not 2x64C, there is a $4.4k single-socket 64C part. Or if the RAM/PCIe capacity of two sockets is useful, you can get two 32C parts, and those start at $2k each. Makes pricing look better at 64 cores.
- Since they turned on most everything on the I/O die for all parts, seems possible to build boxes with lots of RAM and I/O but as little as 8C or 16C ($500 or $1k) of CPU. Of course, balance tends to be nice, but the ability to make it as lopsided as you want could be relevant for applications that are very RAM/IO heavy (caching), or if you're running a commercial DB where you pay by the core. Neat.
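Putting those list prices side by side (a minimal sketch using the figures quoted above):

    # Per-core list prices from the figures above.
    configs = {
        "1x 64C, 2P-capable ($7k)": (7000, 64),
        "1x 64C, 1P part ($4.4k)":  (4400, 64),
        "2x 32C ($2k each)":        (2 * 2000, 64),
    }

    for name, (price, cores) in configs.items():
        print(f"{name}: ${price / cores:.0f}/core")
    # -> $109/core, $69/core, and $62/core respectively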
> Ice lake promises 18% higher IPC, eight instead of six memory channels and should be able to offer 56 or more cores in reasonable power envelope as it will use Intel's most advanced 10 nm process. The big question will be around the implementation of the design, if it uses chiplets, how the memory works, and the frequencies they can reach.
I just put together an EPYC (Naples) 3201 system (8 cores, no SMT, 2133MHz DDR4), and my circa-2012 Xeon E3-1230v2 (4 cores/8 threads, 1333MHz DDR3) is still faster because of its higher clock rate. More interestingly, the EPYC peaks at 45W at the outlet while the Xeon only hits ~55W. The EPYC advertises a 30W TDP, but IIRC the Xeon advertised a 65W TDP for the chip, so Intel substantially outperforms its own spec.
I don't regret building the 3201, and I'm looking forward to the next generation of EPYC embedded. But Intel still has superior design skills when it comes to power consumption and clock rates. I'd expect Intel to keep pressing this advantage, especially because at this point it's all they've got left.
 Anybody know when it's coming out? Are they gonna wait for Zen 3?
No they aren't and no they didn't.
Obviously this review of the ultra high end is not focused on single thread performance because that'd be insane. But in segments where it does matter, like consumer, it was not glossed over at all.
And if you look at the comparison table AMD didn't really trade cores for clocks. Both the Epyc and the Xeons are all in that mid 2ghz base frequency range.
> But Intel still has superior design skills when it comes to power consumption and clock rates.
Except no not really. Clock rate yes, but AMD has an IPC advantage now so it's not entirely clear cut. And you only get that clock rate advantage on the consumer chips anyway.
Power consumption is not at all in Intel's favor, either.
I principally meant glossed over in discussion in threads like this. The review has a whole section on single-threaded performance, and Xeon comes out ahead:
It doesn't just matter in consumer land. It matters in some high-end workloads as well. I was replying to someone who lamented the poor performance of EC2 EPYC instances. Depending on your workload, they are poor performers.
I was very careful in my statement when I said that Intel has superior skill in designing for lower power or high clock rate. I did not say that that particular skill results in Xeons having a generally better performance, cost/performance, or power/performance profile. What I'm saying is that they can use that skill to anchor themselves at the very low power and very high clock rate niches to slow their decline.
AMD will continue to close the gap even in those respects. If I had put an EPYC 3251 up against my Xeon E3-1230 it would have smoked it at roughly the same power draw because doubling the core count absolutely matters. I'm not disputing that. And Zen 2- or Zen 3-based 3201 probably would out perform the 1230 as well without double the core count, though notably no such CPU exists at the moment. But people are underestimating what Intel still has going for them. Intel's strategy at this point will be to slow their market loss to buy them time to retool and counter, just like when AMD had the lead 15 years ago. Intel has some leverage to slow that loss.
And as some others have pointed out, Intel also has a huge product lineup. Do you know how many systems you can find with an EPYC 3000 series embedded chip? Only two: Supermicro in a mini-ITX form factor and Congatec on a COM Express Type 7 module. So, basically one as far as most people are concerned. OTOH, Intel has more SKUs in that market space than I can even be bothered to investigate, some of which are equal to or better than the EPYC in power, performance, and cost. It'll take many more years for AMD to begin to displace Intel in those spaces. Again, that gives Intel time, breathing room, and cash flow.
 I chose the EPYC because of Intel's poor security track record, and to support AMD. If the only thing that mattered to me was power, performance, or cost (individually or together) then a Xeon D would have been a smarter choice. EPYC Rome would likely change that, but there is no Zen 2/EPYC Rome embedded yet, nor any hint of one. I'm beginning to think it won't happen until Zen 3/EPYC Milan.
Is there a hypothetical point at which it becomes cheaper to spend additional capital in order to remove a perfectly-good Intel server in favor of a new AMD server (considering potential savings on cooling+power+space)?
Edit: Okay, since some people are downvoting my joke, a more serious answer -- if your server farm is running out of space, and you're currently looking at renting additional space to accommodate your growth, EPYC Rome will take up substantially less space for a given level of performance. This is one case where it may make sense to instantly replace.
Another is to test EPYC Rome on a limited basis, to evaluate potential for a larger scale replacement in the future -- I think many companies in the space are going to fall into this camp.
Hence in the near future there will be a modest increase in EPYC Rome uptake, followed by a massive increase in the medium term.
If AMD's chips offer better performance/watt, that would make them attractive. The capital cost of purchasing a server tends to be a smaller number than the cost of the power to run and cool it over its lifetime.
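A rough sanity check on that claim; every figure here is an illustrative assumption (wall draw, PUE, electricity rate, service life), not a measurement:

    SERVER_DRAW_KW = 1.0   # average wall draw of a dual-socket node -- assumed
    PUE = 1.6              # facility overhead (cooling etc.) -- assumed
    RATE_PER_KWH = 0.12    # $/kWh -- assumed
    YEARS = 5              # service lifetime -- assumed

    hours = YEARS * 365 * 24
    lifetime_power_cost = SERVER_DRAW_KW * PUE * hours * RATE_PER_KWH
    print(f"${lifetime_power_cost:,.0f} for power + cooling over {YEARS} years")  # $8,410

Under those assumptions the lifetime power bill lands in the same ballpark as the purchase price of a midrange dual-socket server, so a meaningful perf/watt edge moves real money.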
However, what also matters is vendor support. Lots of companies have contracts for pricing and support on servers, and they won't necessarily change to save even a large amount of money on one generation of hardware. So for adoption of Zen2 CPUs in the data center, it will be critical for AMD to get the big names on board.
The next generation of the Zen chipset has already been designed.
EPYC Rome is likely priced such that Xeons would have to be sold at a loss to match its price/performance.
People can't buy interesting ideas, and businesses aren't going to change system vendors overnight.
Thanks for mentioning immersion cooling as a potential solution to this. As a shameless plug: at Submer (https://submer.com) we are working very hard to make immersion cooling no longer exotic, solving the problems we saw in the previous state of the art and helping with some of the biggest issues in the data center industry: cooling, power, density, real estate costs, data center location, DC power distribution, TCO and more.
Exciting times... we are seeing a lot of trends pointing towards our immersion cooling solution and stealth traction from big names.
AMD could have clearly charged more for these EPYC Rome chips, but they priced low for a reason -- to grab as much market share as humanly possible as quickly as possible, and I believe they will do so.
AMD priced these chips so low because they know Intel is going to fight back with deep discounts. AMD also knows they can manufacture more cheaply for any given level of performance.
It shouldn't surprise anyone if AMD is offering prices that Intel could only counter by selling Xeons at a loss.
Furthermore, some orgs do not put a price on security. With Intel's poor security track record in the recent past, it's no surprise that Google hopped on the EPYC train.
So at least one of the following two are going to play out over the next year: 1) AMD takes massive server market share. 2) INTC bleeds red to slow the loss of market share.
Seems like a slam dunk from AMD.
I expect to see many more AMD-based EC2 instance types on AWS.
> We are also not allowed to name because Intel put pressure on the OEM who built it to have AMD not disclose this information, despite said OEM having their logo emblazoned all over the system. Yes, Intel is going to that level of competitive pressure on its industry partners ahead of AMD’s launch.
Here is the entire section:
> We are going to present a few data points in a min/ max. The minimum is system idle. Maximum is maximum observed through testing for the system. AMD specifically asked us not to use power consumption from the 2P test server with pre-production fan control firmware we used for our testing. We are also not allowed to name because Intel put pressure on the OEM who built it to have AMD not disclose this information, despite said OEM having their logo emblazoned all over the system. Yes, Intel is going to that level of competitive pressure on its industry partners ahead of AMD’s launch.
Where standard/8 AI was unplayable on my FX8350.
I'm a little puzzled by this; we've been able to buy Ryzen and Epyc stuff from Dell in our university for a while. The biggest problem has been the nvme ssds..
The way I see it, CPU performance for most of my needs has reached a tipping point. Unless something unexpected happens, performance per dollar in the next few years is only going to increase. I would not be surprised to see 128 cores / 256 threads in a single socket by 2021/2022.
The question in my head now is: when will DRAM prices drop to the point where I can have a 64-core EPYC server with 4TB of memory and call it a day? While there are some insanely large datasets, for possibly 90% of web DBs I doubt the database is 4TB large, and it could all be in memory. But even at $10/GB, which is already very low for a 256GB DIMM, 4TB is like $40K.
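For reference, the arithmetic behind that figure (using the $10/GB number above):

    PRICE_PER_GB = 10        # $/GB, the figure quoted above
    CAPACITY_GB = 4 * 1024   # 4 TB
    print(f"${PRICE_PER_GB * CAPACITY_GB:,}")  # $40,960 -- roughly the $40K quoted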
> ...first generation of EPYC, ... attaching each one to two memory channels, resulting in a non-uniform memory architecture (NUMA).
> 2nd Gen EPYC, ... solved this. The CPU design implements a central I/O hub through which all communications off-chip occur.
Well, it's "solved" in that all memory accesses are now uniformly a bit slower, since they all have to go through the new I/O hub. Is this a correct reading?
> The CCDs consist of two four-core Core CompleXes (1 CCD = 2 CCX). ... those CCX can only communicate with each other over the central I/O die. There is no inter-chiplet CCD communication.
What is the communication for? Presumably the MESI (or whatever AMD uses) cache-coherence traffic, and possibly the sync instructions (CAS, atomic increment) too. Anything else I'm missing?
(edit: bloody love that you can click the 'Print This Article' button and it becomes a single long web page. Webbyness as god intended).
Yes. But the new EPYC chips have doubled their L3 cache, and that new memory-access hub has stupidly high bandwidth.
The larger L3 cache mitigates the latency problems, while the memory-access hub has more than enough memory-bandwidth to feed all the cores.
(Seriously - I'm about ready to buy a used Zen1 workstation.)
The percentages for "A vs B" seem to me to be (score_A / score_B - 1) * 100.
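I.e., assuming that reading of the charts, a one-liner reproduces it:

    def vs_percent(score_a, score_b):
        # Percentage by which A leads B: (A/B - 1) * 100.
        return (score_a / score_b - 1) * 100

    print(vs_percent(120, 100))  # 20.0 -> "A is 20% faster than B"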
AT just can't catch a break. Any reviews out there that tested heavy workloads? These are the interesting ones on a CPU like this one.
Not sure about desktops.
Dell is in that game too.
What AMD needs for consumer computers is to get into OEM laptop builds.
Many folks are posting that there is "no reason" to buy Intel. Actually, Intel is going to keep market share if you can't actually buy these great AMD products in a prebuilt configuration.
Will they show up eventually? Sure. But it can take a surprising amount of time for something like this to materialize (I'm hearing murmurs of issues with heat, BIOS, etc.). I hope AMD is working closely with whoever is most ready and has a long track record with them (Lenovo comes to mind) to get these to market in a configuration that a business can buy.
Even the first generation Ryzen CPUs are amazing for multitasking and general computing workloads. Zen 2 is quite a bit further beyond that, and is priced highly competitively. That's something that can translate into increased productivity along with savings, but it's going to take some time before OEMs are really ready to take the plunge rather than just dipping their toes in.