We're 2-3 weeks away from global release. People buying components NOW are exactly the sort of people AMD should be working their socks off to stop from buying Intel parts. But instead of any firm performance figures, all we have is rumour.
The only reason I can imagine AMD is still keeping the press under embargo is that these Zen chips still can't compete where it matters, and if AMD lets that be known, everybody will go back to Plan A and buy an i7 6900K.
Err, for the people that matter, they probably are.
I'm not sure why you would think otherwise.
Certainly you realize their large purchasers vastly dwarf people frequenting tech sites.
They 100% have done benchmarking/testing with, say, their top N chip purchasers, well before now, and already either have firm commitments to buy, or firm commitments to pass.
Maybe not. But in any case, none of the 8-digit purchasers I know care about when the review shows up on AnandTech :)
We're at the pointy end, now.
Anyway, I think the Internet is already primed for Intel still to have the edge on single-core perf. Every AMD-controlled benchmark so far has been amply caveated as such by the coverage, with good doses of scepticism, just like your post. Nobody's expecting parity IMO, they're just expecting something competitive, at the right price. And the price is half.
Intel's -E line CPUs are in a world of their own.
Either way, that's not necessarily the point; it's that we shouldn't still be guessing how these CPUs are actually going to perform at this point.
This much hype won't end well if this isn't the second coming.
On the leaked pricing, some are expressing cynicism, but remember that AMD pays big penalty fees to Global Foundries if it doesn't meet long term contractual volume requirements. So it is perfectly feasible that it might favour volume over margins, since the end-result P&L to the company might be similar, but with the former strategy it gets market share for "free".
I'll give you that hype is at fever pitch though.
There was a great article back then too about why Apple and Dell weren't going all in on AMD chips, and whilst sweeteners from Intel are part of it, a lot of it had to do with Dell's or Apple's typical volume far outpacing what AMD could reliably manufacture.
Because Intel paid them to not use AMD chips.
I would think, and hope, that the situation is better now. Back then, AMD was manufacturing their chips in house, now they're manufactured by GlobalFoundries and TSMC.
"This flexibility comes at a cost though. Not unlike past years where AMD has paid GlobalFoundries a penalty under take-or-pay, AMD will be paying the foundry for this new flexibility. GlobalFoundries will be receiving a $100M payment from AMD, spread out over the next 4 quarters. Meanwhile starting in 2017, AMD will also have to pay GlobalFoundries for wafers they buy from third-party foundries. This in particular is a notable change from past agreements, as AMD has never paid a penalty before in this fashion. Ultimately this means AMD could end up paying GlobalFoundries two different types of penalties: one for making a chip at another fab, and a second penalty if AMD doesn’t make their wafer target for the year with GlobalFoundries."
"Along with all of the above, in exchange for the latest agreement AMD is making one more payment in the form of a stock warrant...The warrant will give Mubadala the option to buy up to 75 million AMD shares at a currently below-market price of $5.98/share, so long as they continue to own less than 20% of AMD. AMD is valuing the warrant at $235 million, which will bring the total one-time-costs of the latest agreement to $335M."
So Globalfoundries was behind Intel, Samsung and TSMC on 14nm, and they have chronic yield problems which made it hard for AMD to compete. I understand that most contracts for large orders have lots of stipulations and minimum purchase agreements but this agreement seems unwise. Why tie your entire business to one fab when their fortunes change year by year and being early can be such a competitive advantage? Moving between fabs is a huge amount of work yes, but now they have to do that work to get on TSMC and also pay Glofo for the privilege.
Regardless of that AMD is a much smaller company than Intel (~9k employees vs ~100k employees) so it is quite a remarkable feat that they have managed to stay in the competition for so long.
Either way, I predict both will slowly be eaten by GPU-driven processing and ARM desktops. It's going to be a very tough secular cycle for CISC chip makers in the coming decade.
I think these press releases, rumors, and leaks are doing exactly that. On the one hand, the manufacturing and sales channels are working toward a target release date. On the other hand, the marketing team is dribbling out little details to delay customers' purchasing decisions. It seems kinda standard practice to me. The fact that you're sick of waiting (so am I!) is evidence that it's working :)
However they do pay 3rd parties to run parallel benchmarks on both their silicon and competitors, and get access to detailed reports. And probably they will buy silicon off the market when it's available commercially.
What percentage of CPU sales are people buying boxed components for building/upgrading their home computers?
Define 'real workload' here. Are we talking about a workload where the compiler specifically optimizes for Intel, or where the executable chooses unoptimized paths if it finds anything other than Intel processors inside?
Because I've found that when it comes to raw x86, no SSE anything or other nonsense; pure, unadulterated original x86, AMD simply owns.
Who cares, though? Is that something that matters?
Do you mean the 8086's instruction set? Many of those instructions are now implemented in microcode; sure, you can use them, but they're going to be dog slow. Turns out, nobody uses them.
Do you mean the i586 instruction set? Your system calls are going to be slow, since you have to trigger an interrupt.
Do you just mean 32-bit x86? That covers a large set of incompatible extensions among different vendors, and is of rapidly decreasing relevancy.
Do you just mean scalar instructions? That's really just hamstringing yourself: a lot of performance-sensitive applications involve manipulating vectors, and SIMD is a fairly easy and effective win.
But my definition of a real workload really just means a spectrum of different sorts of computation, so we can see how the various subsystems compare.
They don't really work much faster when the basic stuff in the core architecture is gimped (low-count ADD/MULTIPLY units? In this day and age of cheap silicon? Really?)
"AMD said its upcoming Zen x86 core fits into a 10 percent smaller die area than Intel’s currently shipping second-generation 14nm processor. Analysts and even Intel engineers in the session said the Zen core is clearly competitive though many confidential variables will determine whether the die advantage translates into lower cost for AMD."
We need to compare apples to apples. AMD's Zen chips have no IGP, while most Intel chips are shipping with an Intel GPU.
If you look at an Intel die shot, about half of the area is taken by the GPU. 
Since the current batch of Zen SKUs lacks any IGP, I'd be worried if they weren't able to come up with a die that's smaller than Intel's with an IGP...
Thanks, this wasn't obvious from the linked article.
* Cloud computing has an absurd market cap.
* Ryzen has some features that specifically target cloud computing:
* * More PCIe lanes than a Xeon. These are basically useless for desktop/gaming (as SLI doesn't scale beyond 2 cards). This shines for cloud computing because you can fit more GPUs in a blade. There's also ML to consider.
* * Virtualization security features. The chip can automatically encrypt pages, mitigating some of the virtualization attacks that we have seen recently.
* * More compute per watt. Cost is nice to have (which Ryzen has), but cooling is one of the main concerns when selecting chips for high density.
Intel's brand power is a waste of time and money to compete with. Unless AMD can sort something out with Apple (for example), Intel are going to continue dominating the desktop market - irrespective of which chip is actually superior. It's incredibly likely that only enthusiasts would buy a Ryzen - who AMD has catered for with XFR. Essentially, ignoring IGP is likely a strategic decision because it's irrelevant for the markets that AMD can easily attack.
Guess you weren't around in the days of Quad VooDoo 2 GPU rigs. They most certainly did scale. nVidia and AMD's implementation? Nope. They screwed the pooch with that one and will never recover until they learn that the older way was indeed better.
And if your dataset or code is GPU-capable, then no, SLI is quite useful here.
"Only matters for gaming" Yea, as if there weren't a bunch of other datasets that could utilize matrix acceleration.
Sounds totally irrelevant.
Here's what this means. Intel has about 10% advantage in process technology. Yet despite that, AMD's chip is still 10% smaller than Intel's chip. That gives AMD an effective 20% architectural advantage over Intel for similar performance. This is why Intel is now scrambling to "streamline" its architecture as soon as possible (but won't be able to do it for the next 3 years or so).
A smaller, more "efficient" chip (in terms of performance/die area) means AMD can sell it significantly cheaper and still make as much profit as Intel does on that chip. But from the looks of it, AMD is going to price its equivalent chips at around half or less of Intel's chips, which will make AMD chips a complete no-brainer.
Add to this the fact that Intel will begin to make Xeons rather than its mainstream chips on new process nodes, at first, which means a delay of 6-12 months for the mainstream chips using Intel's latest process technology, and AMD looks to have a tremendous opportunity to steal a ton of market share from Intel over the next few years.
No, the conclusion you should be drawing is "AMD is using ~20% fewer transistors per core. What did AMD not implement that Intel did, and how badly does that hurt Ryzen's performance?"
It is probably a good call on AMD part though, as applications that are sensitive to AVX performance are still rare and if they can get a cheaper core which is faster on integer loads it is a good tradeoff.
The key is the "similar performance" part.
Intel can just as well sacrifice 10% of their die too and get the same, or better, performance as AMD.
Does it make sense? No. Does it make AMD's chip in any way better? Only if it's smaller AND faster / more efficient (which it is not). And even if it was, it would only matter if AMD could sustain it -- which it hasn't proven it can.
Besides that, a smaller area is also good for yield (% of chips that test out OK out of all fab'ed, in case random defects occur in any of the many processing steps.)
> Intel has a double precision IPC of 16 FLOPs per Clock with Skylake as well as 2x 256 bit FMA whereas Zen only has 8 FLOPs per clock and 2* 128 bit FMA.
FMA=Fused multiply-add. It remains to be seen whether dollar for dollar AMD matches Intel or not -- it's likely to be very application dependent.
I'm waiting for the day when the CPU designers learn from GPUs and just give us full OpenCL-style kernel support, such that you can use both multicore and vectorization by just scattering the kernel call, including branching and early returns inside the kernels. I actually think that the higher-core-count Xeons would still be quite competitive with GPUs if that were the case, but as it stands even the Intel MICs are a pain to program effectively. Doing all this with OpenMP is IMO just not a good solution; it's very hard figuring out where things go wrong.
I do fully agree that SIMT is a much easier programming model than SIMD.
Note that the 0xa7 core also uses 256-bit AVX. That's multiple CPU threads and vector instructions.
Anyway, it's an interesting technique. An extension of chip-on-module packaging where instead of having a circuit board made of FR4 you have a tiny PCB made of multilayer silicon. This allows fast connections between chips made with different processes (CPU/DRAM/Flash are somewhat incompatible), and joining small chips together into large ones to improve yield.
I originally submitted it with a headline describing the 14nm AMD Zen core being 10% smaller than the 14nm Intel current-gen core (and I accidentally linked to page 1 of the article instead of page 2.)
At an ELI5 level, FPGAs are reconfigurable blocks of digital logic gates. So a CPU-connected chunk of FPGA fabric could be reconfigured on-the-fly to create task-specific CPU instructions.
A smart C compiler might be able to detect an ultra-common 16-instruction block of code and synthesize it into FPGA logic (effectively collapsing it down to a single instruction as far as the CPU is concerned).
edit: as pjc50 points out, Intel isn't making any FPGA/x86-64 hybrids just yet. But I'd bet good money that they're on Intel's roadmap after the Altera acquisition.
The use of multi-die technology lets Intel build transceivers that run at a higher speed than the main die - 28GHz SERDES on a 1GHz FPGA.
GIGABYTE GA-X150M-PRO ECC
Kingston 16GB ECC DIMM KVR21E15D8/16
Pentium G4600 CPU
Toys arrive in a couple of days. Total outlay was $220.
When I first built my storage box, in 2009, the AMD CPUs all supported ECC DRAM, but I could not find a motherboard-chipset-BIOS setup that would actually implement it.
You sure? Around that time I plumped down a Phenom X4 and ECC RAM on an ASUS board, and I thought it supported ECC - the chipset was AMD too.
It's okay as a NAS that mostly sits idle and occasionally serves up unencrypted data at GigE speeds or less. For more demanding tasks, it's woefully underpowered.
Of course it always depends on the use-case, but for most people at home it's sufficient. I use it as a Minecraft and media server.
Still, a box is a box :P
The closest to a classic HP Microserver is the SuperServer 5028A-TN4, although that one uses the Atom chips that have been recalled.
There's also the much more powerful SuperServer 5028D-TN4T.
They also have a bunch of older models whose names escape me, but can be found by searching for "supermicro mini tower".
I thought it referenced AMD originally?
Like EETimes articles (and to be fair, they're not alone in the practice), this article is basically just a collation of manufacturer press releases.
To hammer this point home, or perhaps to reach a 500 word target, the author concludes by discussing a new Mediatek mobile SOC completely unrelated to either Intel's FPGAs on page 1 or the AMD Zen cores on the top half of page 2.
edit: oh, derp. nevermind. there's a second page: http://www.eetimes.com/document.asp?doc_id=1331317&page_numb...
edit edit: so apparently page 2 can't be linked to. whatever. bottom of the article content has a "next page" link.
Games or other carefully tuned programs that deliberately lay out data in memory may not be much affected, I guess.
The main reason Intel is getting better single-thread performance is the higher clock speeds. I think the graphs even suggest that Zen is getting higher IPC.
I really doubt Intel can push the clock speeds much higher. Intel really needs to find more IPC somewhere, otherwise future versions of Ryzen (Zen+, Zen++) will overtake the Intel chips as AMD refines the Zen design to improve IPC and clock speeds at the same time.
It could be AMD trying to steal market share.
That plus your point about TCO could be a nice way of competing with Intel.
I'm sure there is little profit in that business model, but it would keep the workers busy.
If you read past the marketing speak it is made to allow software to run without the user being able to poke at it. I assume this is for DRM, but who knows what is running there.