I'm worried about AMD. As far as I know, the enthusiast PC builders/gamers these chips are aimed at are a niche market; most volume is in OEM laptops (Dell, HP, Lenovo) that companies buy in bulk for their workforce, and that's where AMD has zero presence while Intel has a death grip on those OEMs.
Can AMD still recover in this market?
Everyone I know building a new PC is using an AMD chip. It's cheaper for the performance. In addition, if the gains (both energy and performance) can be moved to the mobile CPUs, then most people would switch there as well.
In terms of OEMs, they have some supercomputers being built with their CPUs, Google is using their GPUs for its game streaming service, and I suspect with that confidence placed in AMD we'll see even more. Just recently, Lenovo also released the ThinkPad T495, which has an AMD chip.
I suspect the move to AMD will be slow but steady until Intel has an answer for this. And even if Intel has an answer (10nm), they have had several large security bugs. I suspect many companies and cloud providers will move to, or at least explore, a more diverse set of chips just because of that. This means more people trying the AMD chips, at the very least.
It would've been nice to get the product launched in time for the fall, but hopefully we'll see it launch soon.
I wanted to get one as well; however, I might hold off until the next gen, as there’s a chance of a Ryzen 2 chip and USB 4. And maybe a bigger battery ;)
I’d be curious how the builds using the 3700U are doing, though. They pack a punch.
If I'm looking at user-submitted benchmarks properly, there's only a very slight difference compared to the 3500U. Seemingly not worth the upgrade, or am I mistaken about this?
But the laptop parts (3700U) are still old-gen CPUs, not the ones everybody is happy about this year. Hopefully in 2020 we get some decent 7nm options with Thunderbolt, HiDPI screens and LPDDR4.
I have a 2400G and it isn't cutting it anymore. And I don't need a gaming video card to work with Visual Studio at 4K@60.
The 3rd-gen Ryzen parts with an iGPU aren't out yet.
The onboard graphics support is great. Now I can use my PCIe slots for a 10Gb SFP+ backbone and some other stuff.
I doubt that any gamer needs more than 4-6 cores, but single-threaded performance is very important.
I am getting a 3600 to replace the 1700X in my gaming build, and replacing the 2400G in my workstation build with that 1700X and a GT 1030.
This is no longer true, thankfully, due to the influence of current-gen consoles.
Games like Assassin’s Creed Origins tend to be heavily CPU-limited with fewer than 8 fast cores. Core count is really starting to matter in gaming.
I realize that this future-proofs you for getting a PCIe 4.0 GPU of your choice much later, when you have the budget for it.
Of course, don't expect it to give you ultra level graphics for that small die size and heat dissipation.
I'm left wanting a heterogeneous next-gen system using shared HBM for everything.
Intel currently sells the i7-8809G. It packs onto one substrate 4GB of HBM2, a 24CU Vega M GPU (14nm), and a quad-core CPU (3.7-4.7GHz, 14nm), with a TDP of 95W and an MSRP of $546. If the substrate and HBM cost $100, that leaves an MSRP of $446 for the quad-core CPU plus the 24CU GPU.
AMD currently sells the 2400G for $130 (launch MSRP for the 3400G is $150). The 1300X (100MHz slower, but double the cache) costs $110, giving us around $20 for the 11CU GPU (the 2200G can be had for about $90, indicating the GPU price may actually be lower than that).
Let's build a next-gen APU for both desktop and gaming laptops.
The HBM3 spec was finalized almost 3 years ago. Estimates for Vega put 2 HBM2 stacks (8GB) at around $150, with a substrate adding another $25. We'll actually work within the constraints of HBM2 for the moment; just realize that with HBM3 you should get 2x the RAM for around the same price (die shrinks and advances in 2.5D chip construction), along with lower voltages, higher speeds per pin, and potentially fewer stacks needed. We'll use HBM because it has enough bandwidth, uses simpler memory controllers, and cuts latency in HALF vs DDR [latency source](http://www.pdsw.org/pdsw-discs17/slides/stocksdale-pdsw-disc...).
We'll need 16GB of HBM2 at a cost of $300 (remember, that should become 32GB of HBM3, or costs closer to $150 for 16GB). If 11CU costs $20, we'll assume $100 for 40CU (adding around a 25% markup for the larger chip, though 36CU and 24CU lasered versions could greatly reduce that). That puts us up to $400. A 3600 6-core CPU is selling at $200. We'll add another $50 to the MSRP just because, for a final price of $650.
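To tally those assumptions up in one place (all figures are the rough estimates above, not quotes):

      $300  16GB HBM2 + substrate
    + $100  40CU GPU (scaled from the ~$20 11CU figure, with ~25% markup)
    + $200  6-core Zen 2 CPU (3600-class)
    +  $50  extra margin
    = $650  target MSRP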
That is $100 more than the MSRP of the i7-8809G, but it has a TON of advantages: 2 more CPU cores and almost 2x the GPU power (with access to more RAM if it needs it). There's no need to spend another $100-200 on DDR4, and CPU performance will be better due to the reduced latency. Dropping RAM from the motherboard and getting rid of all those traces should reduce motherboard cost and size. Finally, at 7nm, this should all fit inside a 95W TDP suitable for both desktops and gaming laptops (which usually pair 35-45W CPUs with 50W+ GPUs).
I doubt that’s the cheapest option though.
But it’s up to the manufacturers to decide what ports to include.
PS: That said, there does seem to be a premium associated with DisplayPort. Are you sure you are not also paying that premium when looking at Intel motherboards?
That's largely false for modern engines.
I’m not a gamer. I don’t give a shit about any GPU performance I couldn’t have on Intel integrated graphics 10 years ago.
I just need compute, and I hate having to pay $50-100 extra for a completely overspecced GPU that will idle at 0.0000000001% utilization.
It just doesn't make sense for AMD to waste a lot of very expensive die area on a crappy GPU that most of their customers will never use, for the benefit of a very small minority of customers who resent paying extra for a cheap dGPU. The Zen 3 APUs should offer up to 8c/8t or 8c/16t with a fairly capable Vega GPU, which is a far more compelling value proposition for the vast majority of consumers.
Stuttering sucks. I went with the 15" with dGPU on my new MBP and the problem went away.
I'm not sure if it was part of their business strategy to overwhelm, but they sure did, and it didn't make them any more likable; on the contrary...
Intel's deathgrip rests on four factors:
- Intel's "deal sweeteners" (preferential pricing et al), which they can't afford to do forever.
- Intel's ability to provide the whole stack (chipsets->boards).
- Corporate IT's sclerotic processes (not always a bad thing, BTW, though personally I find it stifling).
- Server chip architecture.
It will all come down to money. If AMD parts hit good performance/cost/power-consumption points, you'll get AMD laptops (is Thunderbolt 3 silicon available for AMD? I don't know).
If they beat Intel on server performance, why wouldn't everybody switch? AMD doesn't appear to support the full AVX instruction set (no AVX-512), but I'm not sure how many people really use that.
Corporate IT's process is mostly about money, holistically (does it give us volume pricing? Does it reduce support cost?) as corporate IT is always a cost center. So they'll switch if it's worth it.
So you're right that Intel will continue to be dominant for a (relatively) long time, but for the reasons above that's more about market hysteresis than fundamental superiority, which Intel still has in certain places but which has eroded mostly across the board for a long time. And the kicker of hysteretic phenomena is when you reach the critical point you can't really get back.
That is so far from the truth. Corporate IT is all about not getting fired and doing no wrong. It has very little to do with performance, power, or cost. Your CTO could excuse not buying AMD by claiming it's not stable; that was the reason AMD didn't get any traction in the old days.
There is huge brand stickiness in corporate purchasing, and people are willing to pay a premium for it. That is why everyone wants these clients.
From what I have seen, this rule is mostly about software, which is vastly more complex than hardware for typical IT departments. So 'not getting fired' is mostly about software. E.g. our company still buys very expensive licenses from IBM, Oracle, SAP and so on. As far as hardware goes, however, it's super crappy HP laptops: despite devs complaining about disks crashing, freezing, and myriad other problems, everyone still keeps getting those.
This is the only AM4 motherboard that I’ve seen so far with Thunderbolt 3:
It’s “available” but hasn’t been adopted yet by most motherboard manufacturers. Maybe we’ll see laptops with TB3 when the 4000-series mobile chips come out.
Now that AMD has a solid foothold in the enthusiast mindshare with Ryzen and Threadripper, it's time for them to strengthen the EPYC lineup and seriously compete with Xeon.
I’ll be honest, I was worried for them a few years ago, but they’re still in business and have a killer new chip so of course they can recover.
If Zen 2 is as good as it seems, it might make some inroads with volume OEMs.
Also, Oak Ridge’s new exascale supercomputer will use AMD EPYC:
I also suspect there's a market positioning long game in controlling console chips. Every AAA title and major engine for the next 5-10 years is going to target 8-thread AMD CPUs, so you'd expect lazy ports to perform well on Ryzen for free.
Buying 600 a year minimum gives you some pull. Hopefully more companies are asking these questions.
There are a few things to keep in mind. The vast majority of the $1000+ laptop market belongs to Apple, and most other laptop CPUs are low-price and comparatively low-margin. AMD is already competing in that area with their APUs on GF 12nm, but that 12nm wafer capacity first and foremost goes to the I/O die for Zen 2, especially EPYC, which is where all the money is made.
Their next-gen 7nm APU will be going head to head against Intel's 10nm Ice Lake. So while Zen 2 can enjoy its performance lead on the desktop, it is highly likely to lose against Ice Lake in mobile. And Ice Lake is supposed to have laptops on shelves by this Christmas, whereas I don't expect the 7nm APU to be announced until next CES, with products shipping in Q2 2020.
Sometimes I do wish AMD could work out a deal with Apple to use their CPUs / APUs in Apple's product line. It would have a halo effect across the industry: if it is good enough for Apple, it is good enough for everyone else.
This seems likely to be extremely overstated or false just looking at overall market share.
Eh? You make this a statement, but without a source I really fail to believe it.
I would guess that most laptop purchases are by business, and I would guess that most of those are over $1000ea, and I would further guess the overwhelming majority of those are not Macbooks.
New Ryzen Notebooks are appearing. A friend of mine has one and is pretty happy with it (apart from some touch screen issues on Linux, which have been fixed in the latest kernel).
I don't know much about AMD's OEM strategy, but it seems like they're making some progress.
What incentives can Intel still offer OEMs to go with them?
I've held back buying an Acer Swift for this reason alone.
As all technology does.
From the conclusion:
"When it comes to gaming performance, the 9700K and 9900K remain the best performing CPUs on the market"
"Ultimately, while AMD still lags behind Intel in gaming performance, the gap has narrowed immensely, to the point that Ryzen CPUs are no longer something to be dismissed if you want to have a high-end gaming machine. Intel's performance advantage is rather limited here – and for the power-conscientious, AMD is delivering better efficiency at this point – so while they may not always win out as the very best choice for absolute peak gaming performance, the 3rd gen Ryzens are still very much a very viable option worth considering."
This isn't to take anything away from AMD, and people will finally consider them now as a serious alternative, but I would not call this a "curb stomping".
A curb stomp would be Intel vs pre-Ryzen AMD
I guess it depends on where you're defining 'curb stomped'. As a whole, I think AMD is besting Intel here.
The Ryzen 7 3700x actually beat the 9900K in one game benchmark, and it's so close in performance that its much lower price is going to be attractive.
The Ryzen 9 will also outperform the 9900K in basically everything as soon as you start Twitch streaming.
And then for every other task not related to gaming you're completely destroying the 9900K.
All with better thermals, a usable stock cooler, and potentially cheaper motherboards saving you money over Intel.
And this is with first week BIOS and drivers.
LinusTechTips, that used to somewhat favor Intel, did their benchmarks with current patches to everything: https://www.youtube.com/watch?v=z3aEv3EzMyQ
I can't say they're wrong to be excited, though. When's the last time a processor launch really mattered to anyone? The improvements to compile time alone have me sold, though it will be tough to pick between the 12-core and 16-core parts.
The biggest component missing in these reviews is where and when I can find these things on shelves. Especially as someone not terribly concerned with gaming performance.
The short answer is they are available now, if they don't get sold out from your preferred retailer.
I didn’t really plan to fire off a purchase today, but I did stop into my local computer store and they did not have any yet. So it seems supply may be at least a bit limited for now.
Off topic: do people still buy components in retail stores and not online? I can understand wanting to shop in person for clothes, food, things you need to take a look at in real life first. But for electronics, especially components, it doesn't matter.
Heavy stuff can be expensive to ship if you can’t get it through Amazon Prime. And shipping anything with Li-ion batteries is restricted to ground shipping.
I still have many of my electronics shipped, but yeah, it’s actually not a wash. Especially if you’re buying used stuff; I find eBay is often a ripoff.
The last two times I went into a retail store for technology were when I wanted flash drives sooner than Amazon would turn them around for me, and when my laptop catastrophically failed and I discovered the old one I use as a fallback was also broken.
I have lived in places where there really were no computer parts stores anymore. I would be fine buying components online -- I'm not opposed to it -- it's just that I like to go shop in physical stores while there are still physical stores left where I can do that. :)
And not all of the A-series were awful, but it seems those were the favored ones for OEMs to put in the very popular $300 Ultrabook competitors. I'd also hazard a guess that if we went back and benchmarked the Intel CPUs of that time with the mitigations of today, they wouldn't have fared better than AMD's.
We've come a long way from the world before smartphones, parallelism, SoC, and affordable flash, and I'm just glad we're finally here.
Except until recently, AMD GPU performance was a bag of wet farts for both industry and gaming.
If the Chromium compilation benchmark (which reportedly was screwed up and will be added later) shows similar improvements, I think this generation will finally convince me to replace my Ivy Bridge box.
edit: I just saw the Phoronix benchmarks, which show LLVM compilation going from 547 seconds on 2700X to 406 seconds on 3700X. I'm not sure whether those were run with the same chipset/RAM/storage versus the 2700X result being an older one pulled out of the database, though.
As for compiling, Gamers Nexus' review of the six core 3600 had it absolutely trashing other CPUs (33% faster than the 9900k, 11% faster than the 2700x) in their GCC self compilation test.
Programming those would possibly require a new paradigm.
GPU cores run the same operations across a large number of "threads" (32, 64, etc.) - they're not really threads.
When you have branching code running on a GPU, the threads are split into the branches, and each branch gets executed separately.
CPU threads run independently of each other (there's often locking for access of shared data, but that's not what I'm talking about).
Even when multiple threads on a single CPU core are running (SMT) they're still not performing the same operation.
Per thread (that's not quite what they are, but a more appropriate term escapes me), there is, and (unless it's a very stripped-down CPU) always will be, a far greater silicon overhead for CPUs compared with GPUs.
That wasn't the best description of the way things work, but I'm rather tired and not a hardware guy.
I'm not an expert on GPU hardware, but in my reading on this I've seen it said that if there is a branch in your code, all cores take that branch even if it is a no-op for many of them; thus all branches are taken by all cores when the code is not coherent. Hence GPU parallelization is quite a different programming paradigm to really take advantage of, and it is more challenging to do properly than truly independent CPU threads that do not affect each other when they branch.
Ideally all "threads" of the wavefront take the same side of a branch, so it can skip the not-taken side of the branch. Wavefront divergence is when a single wavefront has threads that take different sides of a branch, so the whole wavefront runs both sides and masks out the results based on branch direction per thread.
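To make that concrete, here's a minimal Haskell sketch that simulates the mask-and-merge behaviour on the CPU (the 32-lane wave and the two branch bodies are made-up examples, not any real ISA): both sides of the branch are computed for every lane, and a per-lane mask selects which result survives.

    -- Simulate one 32-lane wave executing `if even x then x*2 else x+1`.
    -- Both sides run for ALL lanes; a per-lane mask picks the result,
    -- which is why divergent waves pay for both sides of a branch.
    waveBranch :: [Int] -> [Int]
    waveBranch lanes = zipWith3 pick mask thenSide elseSide
      where
        mask     = map even lanes   -- per-lane branch direction
        thenSide = map (* 2) lanes  -- "taken" side, computed everywhere
        elseSide = map (+ 1) lanes  -- "not taken" side, also computed everywhere
        pick m t e = if m then t else e  -- the hardware masking step

    main :: IO ()
    main = print (waveBranch [0 .. 31])  -- one wave's worth of lanes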
Also, sorry about the barely prompted and completely unjustified wall of rambling text - this is something that I really should've given more consideration to already.
That's a programming model that has the potential to go very wrong, very fast if you don't think in depth about what you're doing.
Branches within branches are going to be very problematic (with exponentially decaying throughput), although multiple branches at the same level are handled very cleanly.
To be honest, I don't know how I thought that it might be rescheduling things instead.
The sort of rescheduling that I seem to have been thinking of, could only make a difference in cases where the batch size exceeds the wave size, and my guess is that the factor of difference would need to be large to make an impact.
In the base case, NOPs and rescheduling would perform identical operations - so all the scheduler would get you is hardware overhead (the time overhead could be mitigated when running on a single wave).
The scheduler would also introduce latency since different waves in the same batch would need to wait for each other before rescheduling could proceed.
You'd cause problems for your memory layout - references to values from threads that have branched away would probably need to be treated as undefined behaviour.
You'd also need to introduce a stack, per thread, to keep track of the wave histories - allowing you to unsort and reschedule again at the ends of branches.
All this to run parallel execution on what seem to be uncommonly large batch sizes (I believe AMD just dropped their wave size from 64 to 32 - I don't know if this is because 32 is a Goldilocks batch size, or if it's simply to achieve better performance on Nvidia-optimised applications).
Perhaps these should be called threads, but certainly not cores - it's a single core running the same operations on different data across multiple thread-ish things.
I'm not sure if there's a more technically accurate terminology for quite what they are.
If you're interested in an earlier mass-market phase of moving general-purpose computation off the von Neumann CPU approach, check out the PlayStation 3's "Cell" architecture. Devs really struggled with it, and Sony went back to von Neumann for the PS4.
CPU cores spend a lot of transistors minimizing latency: branch prediction, micro-op fusion, sophisticated multi-level caches. GPUs are fine without most of that (they do have caches, but much simpler ones); they spend the majority of their transistors and electricity on ALUs. Instead of fighting latency, GPU cores embrace it and optimize for bandwidth on massively parallel workloads: they have cheap hardware threads, so they switch threads instead of stalling the pipeline.
The programming paradigm to enable this exists: it's just pure functional programming. But people are intimidated by Haskell.
We need more investment in functional programming before the dream of a 1000-core computer can be realized.
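For what it's worth, the basic case in Haskell is genuinely small. A minimal sketch using the `parallel` package (Control.Parallel.Strategies), where the only parallelism annotation is the strategy itself; the `fib` workload here is just a stand-in for any pure computation:

    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- A pure (side-effect-free) function: evaluating it on many cores
    -- cannot change the result, which is what makes the parallelism safe.
    fib :: Int -> Integer
    fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (sum results)
      where
        -- Evaluate the list in parallel, in chunks of 4, across however
        -- many cores the runtime gets (compile -threaded, run +RTS -N).
        results = map fib [25 .. 32] `using` parListChunk 4 rdeepseq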
Parallel programming in Haskell is hard. GHC's call-by-need lazy evaluation and the ubiquitous use of monads impose a lot of sequential execution.
There has been good research around parallel FP programming languages, but it was mostly in the 90s (Sisal, pH).
Whether more cores can be used is ultimately down to the problem, not any language. Some problems are naturally parallelizable; some just aren't.
FP maybe exposes a bit more parallelism, but it may also introduce overheads such as less efficient cache use. FP is not the solution; it may be part of the solution.
The Radeon RX 5700 is also amazing and seems to be better than the new/price-reduced RTX 2060 Super.
I am wondering what are the chances of Apple choosing AMD in near future.
3rd-gen Ryzen seems like a very strong improvement for general computing.
That really was unexpected, unlike the CPUs which we already knew would be good.
The 5700 gets 36.0 fps in Shadow of the Tomb Raider 4k, the Vega 56 gets 30.2. That's a 19% improvement.
Also, the Vega 56 was originally priced at $399. So the price:performance has definitely improved since Vega 56 was launched.
It should be interesting to see the results of those tests with SMT disabled.
Basically, if the higher clock of the 9700K can be put to use in a single-threaded application, then that chip can still come out on top. But in enough scenarios, the Ryzen 3700X and 3900X are quite competitive.
"They are with Spectre and Meltdown mitigations. They are not new enough results to include anything for Fallout/ZombieLoad."
Not sure if including the latter mitigations would make AMD beat Intel in single-threaded performance.
Firefox and Chrome use a half-dozen or so threads for parsing and compiling JS code so it loads quickly. Firefox's Quantum work adds a lot of threads for rendering as well. In addition, service workers, web workers, and Wasm all spawn additional processes which serve the same purpose if you aren't sharing memory directly.
That said, the Phoronix Selenium tests show AMD ahead by quite a bit ([source](https://www.phoronix.com/scan.php?page=article&item=ryzen-37...)).
Only in the narrow sense that processes are not threads. See `Window > Task Manager` in Chrome.
I’ll be using this soon to build a MAME arcade cabinet, probably along with the new GPU.
One area where Intel has the advantage is the engineering desktop. I use a couple of programs that are compiled with Intel compilers, and one that uses AVX-512. Will Intel compilers optimize for AMD?
Are these really, y'know, launched launched? I can't find any 3000 series CPUs for sale on either Amazon or Newegg... Who's stocking them?
It's not a paper launch at all.
Also, AMD got stomped in the system performance testing (those tests seem like they might be difficult to unpack into individual reasons, given their breadth). Why would that be the case?
And WOW. Any vectorised workloads written for AVX-512... Besides the larger operand sizes, the instruction set changes seem to matter hugely. I did a small test using AVX-512 to effectively scan a hash table for the keys, and it made a huge difference too.
Still considering the top Ryzen chip for a sorely needed new desktop, but I don't play games, so it would mostly be a cost thing.
If you don't game, I see absolutely no reason not to buy Ryzen, but you might not need to buy an X570 mobo.
They are rerunning all tests due to a mobo firmware upgrade showing non-trivial improvements in turbo/boost mode.
There is still a chance that Intel will be beaten, even without taking the ZombieLoad etc. patches on Intel into consideration.
How much of their innovation is due to TSMC vs AMD's in-house research?
One of Intel's big strengths is their fabs. But perhaps that's becoming another drag on execution?
Edit: I understand some would disagree with what I said, but I can't see what else would explain why the 3800X with one CPU chiplet at 3.9GHz base freq would be the same 105W as the 3900X with two chiplets at 3.8GHz. And by "beefiness" I mean the overall performance potential, including possible overclocking headroom.
AMD does much better than Intel, as long as the peer is in the same CCX.
For the case where it is not, Zen 2 has improved considerably relative to Zen/Zen+.
"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."
Nationalistic flamewar is particularly unwelcome here.
> I have a bias that the company with the best technology should not be working with those that would love to destroy American values
I presume you mean China? China would "love to destroy American values" then?
TBH, I'm not sure if you're buying into the latest "red danger" FUD by the US gov and tabloid media, or if you work for them...