Intel Shows 2.5D FPGA at ISSCC (eetimes.com)
256 points by mrb 131 days ago | 121 comments



Getting a bit bored of leaks, press releases and even specs. The proof of a CPU is in how it benchmarks across real workloads.

We're 2-3 weeks away from global release. People buying components NOW are exactly the sort of people AMD should be working their socks off to stop buying Intel parts. But instead of any firm performance figures, we have rumour.

The only reason I can imagine AMD is still keeping the press under wraps is that these Zen chips still can't compete where it matters, and if AMD lets that be known, everybody will go back to Plan A and buy an i7 6900K.


"People buying components NOW are exactly the sort of people AMD should be working their socks off to stop buying Intel parts."

Err, for the people that matter, they probably are. I'm not sure why you would think otherwise.

Certainly you realize their large purchasers vastly dwarf people frequenting tech sites.

They 100% have done benchmarking/testing with, say, their top N chip purchasers, well before now, and already either have firm commitments to buy, or firm commitments to pass.


Nitpick: I'd like to point out that there's significant overlap between the two groups. To wit, I read hardware sites (from the realworldtech forum to AnandTech and friends) and I've made 8-digit server hardware investment choices for work. This being said, the actual product focus of a typical core i7...K review and of the mid-range Xeons we've been purchasing is obviously not the same, despite the similarity in architecture. We do get to do our own benchmarking, though. ;)


"Nitpick: I'd like to point out that there's significant overlap between the two groups."

Maybe. Maybe not. But in any case, none of the 8-digit purchasers I know care about when the review shows up on AnandTech :)


They're the people they should have been talking to months before release. And in all likelihood, they were.

We're at the pointy end, now.


An i7 6900K costs more than 1000 dollars. The top-spec AMD chip which is supposed to be within spitting distance, also 8/16, is less than half that price. It's going to take a real benchmark letdown for these AMD chips not to fly out the door. The floating point (AVX-256) will not be competitive, but the rest of the chip will have a tough time NOT being a success at the leaked prices, even if it proves to be 20% or so slower. Recall that for one of the huge markets, gaming, the Vulkan and DX12 APIs make much better use of multithreaded code, so core counts suddenly matter much more.

Anyway, I think the Internet is already primed for Intel still to have the edge on single-core perf. Every AMD-controlled benchmark so far has been amply caveated as such by the coverage, with good doses of scepticism, just like your post. Nobody's expecting parity IMO, they're just expecting something competitive, at the right price. And the price is half.


Sorry, I confused myself with all these benchmarks. I meant the i7 7700K @ £340. That's the mainstream performance option at the moment, and that's what they have to compete against at that end of the market.

Intel's -E line CPUs are in a world of their own.

Either way, that's not necessarily the point; it's that we shouldn't still be guessing how these CPUs are actually going to perform at this point.

This much hype won't end well if this isn't the second coming.


The 7700K is a 4-core/8-thread part. AMD appears to be pricing its 8-core/16-thread chip at 480 USD, which is about 390 GBP. So if the leaked pricing is anywhere near reality (and I'll admit that's contentious), then it's a no-brainer in favour of AMD.

On the leaked pricing, some are expressing cynicism, but remember that AMD pays big penalty fees to Global Foundries if it doesn't meet long term contractual volume requirements. So it is perfectly feasible that it might favour volume over margins, since the end-result P&L to the company might be similar, but with the former strategy it gets market share for "free".

I'll give you that hype is at fever pitch though.


I would choose superior performance with 4 cores, because I expect my workload to be mostly single-core, so 4 more cores won't make me that happy. But for those who can load all 8 cores, it's an awesome deal.


If the pricing leak is true, then I think we can expect a price drop on Intel processors on the release date. We'll have to wait for a true price/performance comparison until after then.


I think the issue might more be related to yields and distribution. Even when AMD were outpacing Intel (many years back) they struggled to meet demand and manufacture enough chips.

There was a great article back then too about why Apple and Dell weren't going all in on AMD chips, and whilst sweeteners from Intel were part of it, a lot of it had to do with Dell or Apple's typical volume far outpacing what AMD could reliably manufacture.


> There was a great article back then too about why Apple and Dell weren't going all in on AMD chips

Because Intel paid them to not use AMD chips.

https://www.extremetech.com/computing/184323-intel-stuck-wit...


> they struggled to meet demand and manufacture enough chips.

I would think, and hope, that the situation is better now. Back then, AMD was manufacturing their chips in house; now they're manufactured by GlobalFoundries and TSMC.


The situation is better except for the protection money AMD has to pay GloFo:

"This flexibility comes at a cost though. Not unlike past years where AMD has paid GlobalFoundries a penalty under take-or-pay, AMD will be paying the foundry for this new flexibility. GlobalFoundries will be receiving a $100M payment from AMD, spread out over the next 4 quarters. Meanwhile starting in 2017, AMD will also have to pay GlobalFoundries for wafers they buy from third-party foundries. This in particular is a notable change from past agreements, as AMD has never paid a penalty before in this fashion. Ultimately this means AMD could end up paying GlobalFoundries two different types of penalties: one for making a chip at another fab, and a second penalty if AMD doesn’t make their wafer target for the year with GlobalFoundries."

"Along with all of the above, in exchange for the latest agreement AMD is making one more payment in the form of a stock warrant...The warrant will give Mubadala the option to buy up to 75 million AMD shares at a currently below-market price of $5.98/share, so long as they continue to own less than 20% of AMD. AMD is valuing the warrant at $235 million, which will bring the total one-time-costs of the latest agreement to $335M."

http://www.anandtech.com/show/10631/amd-amends-globalfoundri...

So GlobalFoundries was behind Intel, Samsung and TSMC on 14nm, and they have chronic yield problems, which made it hard for AMD to compete. I understand that most contracts for large orders have lots of stipulations and minimum purchase agreements, but this agreement seems unwise. Why tie your entire business to one fab when their fortunes change year by year and being early can be such a competitive advantage? Moving between fabs is a huge amount of work, yes, but now they have to do that work to get on TSMC and also pay GloFo for the privilege.


GlobalFoundries was spun off from AMD's fabrication units, so I don't think they got an immediate advantage with them.

Regardless of that, AMD is a much smaller company than Intel (~9k employees vs ~100k employees), so it is quite a remarkable feat that they have managed to stay in the competition for so long.


I speculate that they could have been extinguished years ago by Intel, if it weren't for Intel's fear of anti-competition and monopoly rulings.

Either way, I predict both will slowly be eaten by GPU-driven processing and ARM desktops. It's going to be a very tough secular cycle for CISC chip makers in the coming decade.


> We're 2-3 weeks away from global release. People buying components NOW are exactly the sort of people AMD should be working their socks off to stop buying Intel parts. But instead of any firm performance figures, we have rumour.

I think these press releases, rumors, and leaks are doing exactly that. On the one hand, the manufacturing and sales channels are working toward a target release date. On the other hand, the marketing team is dribbling out little details to delay customers' purchasing decisions. It seems kinda standard practice to me. The fact that you're sick of waiting (so am I!) is evidence that it's working :)


Isn't it typical to lift the NDAs exactly on the launch date? They're rather consistent here IMHO.


If AMD reveals the details now, Intel will react with better price/perf before you can buy an AMD chip.


About time Intel got some competition again. I got the 6-core 6850K last June for my personal use (previously a 2600K); its predecessor still clocks and performs better due to the poor overclocking headroom of the new one.


I'd be very surprised if Intel doesn't have running Ryzen silicon in their Labs right now.


As I used to work for Intel, I can tell you: absolutely not. This would expose Intel to far too great a risk at the antitrust level.

However, they do pay third parties to run parallel benchmarks on both their silicon and competitors', and they get access to detailed reports. And they will probably buy silicon off the market when it's available commercially.


"People buying components NOW are exactly the sort of people AMD should be working their socks off to stop buying Intel parts."

What percentage of CPU sales are people buying boxed components for building/upgrading their home computers?


Given what Intel already did with the Pentium G4560 perhaps it would be wise to keep everything under wraps until release rather than give Intel 3 weeks of 'play time' to ruin the launch.


What does i7 6900K have to do with FPGA?


Looks like this thread has the wrong comments, or a mod changed the link. Originally this was an article about upcoming AMD CPUs.


Thank you, this was confusing the hell out of me. The FPGA looks like a beast, though.


"The proof of a CPU is in how it benchmarks across real workloads."

Define 'real workload' here. Are we talking about a workload where the compiler specifically optimizes for Intel, or where the executable chooses unoptimized paths if it finds anything other than Intel processors inside?

Because I've found that when it comes to raw x86, no SSE anything or other nonsense; pure, unadulterated original x86, AMD simply owns.
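
For the curious, here is a minimal sketch (using GCC's cpuid.h) of reading the vendor string that a dispatcher can branch on instead of checking actual feature bits; the surrounding program is illustration only:

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    /* CPUID leaf 0 returns "GenuineIntel" or "AuthenticAMD" in
       EBX:EDX:ECX. A dispatcher that branches on this string rather
       than on feature flags is exactly what penalizes AMD even when
       the AMD chip supports the same instructions. */
    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        printf("vendor: %s\n", vendor); /* e.g. GenuineIntel */
        return 0;
    }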


"Because I've found that when it comes to raw x86, no SSE anything or other nonsense; pure, unadulterated original x86, AMD simply owns."

Who cares, though? Is that something that matters?


It is something that matters for those of us who are competent and capable of programming without needing to use proprietary extensions to the underlying architecture.


"Unadulterated original x86" doesn't really exist, and hasn't mattered for decades at least.

Do you mean the 8086's instruction set? Many of those instructions are now implemented in microcode; sure, you can use them, but they're going to be dog slow. Turns out, nobody uses them.

Do you mean the i586 instruction set? Your system calls are going to be slow, since you have to trigger an interrupt.

Do you just mean 32-bit x86? That covers a large set of incompatible extensions among different vendors, and is of rapidly decreasing relevancy.

Do you just mean scalar instructions? That's really just hamstringing yourself: a lot of performance-sensitive applications involve manipulating vectors, and SIMD is a fairly easy and effective win.


Other nonsense? These additional instruction sets aren't just a gimmick, they exist to do more work faster.

But my definition of 'real workload' really just means a spectrum of different sorts of computation, so we can see how various subsystems compare.


"These additional instruction sets aren't just a gimmick, they exist to do more work faster."

They don't really work much faster when the basic stuff in the core architecture is gimped (low-count ADD/MULTIPLY units? In this day and age of cheap silicon? Really?)


You don't trust the presentation demo AMD made a few weeks ago?


No. I trust independent comparisons a lot more than I would ever trust the company making the product.


Following that event, a French magazine tried to replicate the benchmarks and said their results were similar to AMD's claims. So far no lies have been uncovered.


I think the benchmarks shown by AMD can be trusted, but they are very, very cherry-picked; for example, the demonstration they did when they released the RX 480.


I've found out the hard way that the RX 480 is actually a counterexample to my earlier point. I'm happy with the performance but was misled about the state of Linux support.


I should have linked to the second page: http://www.eetimes.com/document.asp?doc_id=1331317&page_numb...

Relevant quote:

"AMD said its upcoming Zen x86 core fits into a 10 percent smaller die area than Intel’s currently shipping second-generation 14nm processor. Analysts and even Intel engineers in the session said the Zen core is clearly competitive though many confidential variables will determine whether the die advantage translates into lower cost for AMD."


> AMD said its upcoming Zen x86 core fits into a 10 percent smaller die area than Intel’s currently shipping second-generation 14nm processor.

We need to compare apples to apples. AMD's Zen chips have no IGP, while most Intel chips are shipping with an Intel GPU.

If you look at an Intel die shot, about half of the area is taken by the GPU. [0]

Since the current batch of Zen SKUs lacks any IGP, I'd be worried if they weren't able to come up with a die that's smaller than Intel's with an IGP...

[0] https://www.techpowerup.com/215333/intel-skylake-die-layout-...


The i7-6900k doesn't have an IGP. Besides which, they are already comparing just the areas of the cores (and their L2 cache).


> they are already comparing just the areas of the cores (and their L2 cache)

Thanks, this wasn't obvious from the linked article.


I strongly doubt AMD are specifically targeting the desktop/laptop CPU market.

* Cloud computing has an absurd market cap.[1]

* Ryzen has some features that specifically target cloud computing:

* * More PCIe lanes than a Xeon. These are basically useless for desktop/gaming (as SLI doesn't scale beyond 2 cards). This shines for cloud computing because you can fit more GPUs in a blade. There's also ML to consider.

* * Virtualization security features. The chip can automatically encrypt pages, mitigating some of the virtualization attacks that we have seen recently.

* * More compute per watt. Cost is nice to have (which Ryzen has), but cooling is one of the main concerns when selecting chips for high density.

Intel's brand power is a waste of time and money to compete with. Unless AMD can sort something out with Apple (for example), Intel are going to continue dominating the desktop market - irrespective of which chip is actually superior. It's incredibly likely that only enthusiasts would buy a Ryzen - who AMD has catered for with XFR. Essentially, ignoring IGP is likely a strategic decision because it's irrelevant for the markets that AMD can easily attack.

[1]: http://www.forbes.com/sites/alexkonrad/2015/06/18/byron-deet...


"SLI doesn't scale beyond 2 cards"

Guess you weren't around in the days of quad Voodoo2 GPU rigs. They most certainly did scale. Nvidia and AMD's implementation? Nope. They screwed the pooch with that one and will never recover until they learn that the older way was indeed better.


SLI only matters for gaming - which is outside of the scope of my argument. Regardless, it looks like 3Dfx SLI wasn't that great[1] and I did own a Diamond Monster 3D II.

[1]: http://www.tomshardware.com/reviews/diamond-monster-3d-ii,52...


Diamond made crap hardware. Creative Labs' cards performed much better.

And if your dataset or code is GPU-capable, then no, SLI is quite useful here.

"Only matters for gaming" Yea, as if there weren't a bunch of other datasets that could utilize matrix acceleration.


Summit Ridge is literally a desktop CPU. The server CPU line, Naples, will come later.


I haven't read the article (I know...), but AMD is specifically talking about the 'core' part of the microprocessor (there are 4 or 8 of them on a Ryzen CPU). The core includes the L2 cache (as it is part of the core per se), but not the L3, IGP and other uncore parts.


>AMD said its upcoming Zen x86 core fits into a 10 percent smaller die area than Intel’s currently shipping second-generation 14nm processor.

Sounds totally irrelevant.


It's actually super-relevant.

Here's what this means. Intel has about 10% advantage in process technology. Yet despite that, AMD's chip is still 10% smaller than Intel's chip. That gives AMD an effective 20% architectural advantage over Intel for similar performance. This is why Intel is now scrambling to "streamline" its architecture as soon as possible (but won't be able to do it for the next 3 years or so).

A smaller, more "efficient" chip (in terms of performance/die area) means AMD can sell it significantly cheaper and still make as much profit as Intel does on that chip. But from the looks of it, AMD is going to price its equivalent chips at around half or less of Intel's chips, which will make AMD chips a complete no-brainer.

Add to this the fact that Intel will begin to make Xeons rather than its mainstream chips on new process nodes, at first, which means a delay of 6-12 months for the mainstream chips using Intel's latest process technology, and AMD looks to have a tremendous opportunity to steal a ton of market share from Intel over the next few years.


>Here's what this means. Intel has about 10% advantage in process technology. Yet despite that, AMD's chip is still 10% smaller than Intel's chip. That gives AMD an effective 20% architectural advantage over Intel for similar performance.

No, the conclusion you should be drawing is "AMD is using ~20% fewer transistors per core. What did AMD not implement that Intel did, and how badly does that hurt Ryzen's performance?"


Apparently two full 256-bit ALUs capable of 4-cycle MAD and 14-cycle division. Those probably take a lot of space.

It is probably a good call on AMD's part though, as applications that are sensitive to AVX performance are still rare, and if they can get a cheaper core which is faster on integer loads it is a good tradeoff.


>Here's what this means. Intel has about 10% advantage in process technology. Yet despite that, AMD's chip is still 10% smaller than Intel's chip. That gives AMD an effective 20% architectural advantage over Intel for similar performance.

The key is the "similar performance" part.

Intel can just as well sacrifice 10% of their die too and get the same, or better, performance as AMD.

Does it make sense? No. Does it make AMD's chip in any way better? Only if it's smaller AND faster / more efficient (which it is not). And even if it was, it would only matter if AMD could sustain it -- which it hasn't proven it can.


Maybe the implication is that a 10% smaller die means fewer transistors per chip, and therefore lower failure rate per chip, meaning that they get more working chips out of each wafer or something? And maybe that makes them less expensive, so that they could undercut Intel?


Chip fabrication costs usually scale directly with the mm2 of wafer area the chip takes up.

Besides that, a smaller area is also good for yield (the % of chips that test out OK out of all fabbed, given that random defects can occur in any of the many processing steps).
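
As a rough illustration, here's a minimal sketch assuming the classic Poisson yield model; the defect density below is made up purely for illustration, since real fab numbers are confidential:

    #include <math.h>
    #include <stdio.h>

    /* Poisson yield model: Y = exp(-D0 * A), where D0 is the defect
       density (defects/mm^2) and A is the die area (mm^2).
       D0 here is an invented number, not real fab data. */
    int main(void) {
        double d0 = 0.002;     /* assumed defects per mm^2 */
        double big = 200.0;    /* hypothetical die area, mm^2 */
        double small = 180.0;  /* the same die, 10% smaller */
        printf("yield at %.0f mm^2: %.1f%%\n", big, 100.0 * exp(-d0 * big));
        printf("yield at %.0f mm^2: %.1f%%\n", small, 100.0 * exp(-d0 * small));
        return 0;
    }

Under that model, a 10% smaller die gets a few percent more good chips per wafer on top of the ~10% more dies that physically fit.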


There's a lot that goes into CPU performance. Go to http://wccftech.com/ryzen-smaller-die-intel-zen-architecture... and search for "Table courtesy of the Linley Group" without quotes to see an extremely interesting table: http://cdn.wccftech.com/wp-content/uploads/2017/02/Zen-Doubl...

> Intel has a double precision IPC of 16 FLOPs per Clock with Skylake as well as 2x 256 bit FMA whereas Zen only has 8 FLOPs per clock and 2* 128 bit FMA.

FMA = fused multiply-add. It remains to be seen whether, dollar for dollar, AMD matches Intel or not -- it's likely to be very application-dependent.
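
Those figures follow from simple per-core arithmetic; a back-of-envelope sketch, counting a fused multiply-add as two FLOPs:

    #include <stdio.h>

    /* Peak double-precision FLOPs per clock per core:
       FMA units * (vector bits / 64 bits per double) * 2 ops per FMA */
    int main(void) {
        int skylake = 2 * (256 / 64) * 2; /* 2x 256-bit FMA -> 16 */
        int zen     = 2 * (128 / 64) * 2; /* 2x 128-bit FMA ->  8 */
        printf("Skylake: %d, Zen: %d DP FLOPs/clock\n", skylake, zen);
        return 0;
    }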


Skylake might have 2x 256 bit FMA, but it still only has 1x 256 bit non-fused add, the same as Zen.


You can of course use the FMA as an add unit (with a slightly longer latency). IIRC Kaby Lake has the same latency (4 cycles) and throughput (2 x 256 bits) for add, mul and mad.
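
Something like this, a sketch with FMA3 intrinsics; multiplying by 1.0 routes the addition through the FMA unit:

    #include <immintrin.h>

    /* a * 1.0 + b == a + b: an add executed on the FMA unit.
       Compile with -mfma; the latency is the FMA's (about 4 cycles
       on Skylake/Kaby Lake) rather than the dedicated adder's. */
    static inline __m256d add_via_fma(__m256d a, __m256d b) {
        return _mm256_fmadd_pd(a, _mm256_set1_pd(1.0), b);
    }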


Different workloads will benefit in different ways from the trade-offs. I don't imagine many applications will miss AVX-512.


It's actually quite difficult to use AVX512 and have it pay off significantly unless you design something to specifically take advantage of it.


Exactly. IMO doing GPGPU is actually easier than porting code to use vectorisation, since GPU cores are quite a bit more capable (they allow branching/context switches, just with a performance penalty) than the CPU's vector units (vector code AFAIK completely breaks if you try to do an early return on one lane).

I'm waiting for the day when CPU designers learn from GPUs and just give us full OpenMP kernel support, such that you can use both multicore and vectorization by just scattering the OpenMP kernel call, including branching and early returns inside the kernels. I actually think that the higher-core-count Xeons would still be quite competitive with GPUs if that were the case, but as it stands even the Intel MICs are a pain to program effectively. Doing this all with OpenMP is IMO just not a good solution; it's very hard figuring out where things go wrong.


Note that AVX512 actually addresses many of the issues that made SIMT code so inefficient on CPUs. It's not just widening the vectors, but also adding predication/masking etc.

I do fully agree that SIMT is a much easier programming model than SIMD.
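
For instance, a minimal sketch assuming an AVX-512F target: with mask registers, a per-lane condition no longer forces a scalar fallback or an early return out of the vector code.

    #include <immintrin.h>

    /* Take sqrt only in lanes where x >= 0; the other lanes pass
       through unchanged instead of breaking the vectorization. */
    static inline __m512d sqrt_nonneg(__m512d x) {
        __mmask8 m = _mm512_cmp_pd_mask(x, _mm512_setzero_pd(), _CMP_GE_OQ);
        return _mm512_mask_sqrt_pd(x, m, x); /* masked sqrt */
    }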


Interesting, TIL. How good is the support for OpenCL on AVX512 processors?


So far that's only Knights Landing and friends, but they do seem well supported.


Have you looked into Xeon Phi? They have up to 288 threads. With a 256-thread Xeon Phi we got good speed-ups using up to 128 threads with Folding@home's latest GROMACS core (0xa7). With higher thread counts it failed to improve. In the long run, massively parallel CPUs could outpace GPUs precisely because of their flexibility.

Note that the 0xa7 core also uses 256-bit AVX. That's multiple CPU threads plus vector instructions.


I haven't tried MICs personally, but AFAIK you need vectorization to match current Pascal-generation GPU performance with Knights Landing, which is where my comment applies. I don't doubt that you can get good speedups when this already applies to your code, but if you start from naive CPU code you'll have a lot of work to do, which IMO is similar to the work needed to port to GPGPU.


That is true. CPUs are catching up to GPUs in some ways. Intel is doing its best to take this market from NVidia. The future will tell.


> just give us full OpenMP kernel support, such that you can use both multicore and vectorization by just scattering the OpenMP

oops. s/OpenMP/OpenCL


Skylake doesn't have AVX512. Still, those two 256-bit ALUs do take a lot of space and at least partially explain the size difference.


OT, but did your 'n' key fall off?


I'm slightly confused why everyone's talking about CPUs and ignoring the FPGA!

Anyway, it's an interesting technique: an extension of chip-on-module packaging where, instead of a circuit board made of FR4, you have a tiny PCB made of multilayer silicon. This allows fast connections between chips made with different processes (CPU/DRAM/flash are somewhat incompatible), and joining small chips together into large ones to improve yield.


The HN headline was changed by a moderator, I suppose to comply with the guideline that the submitted headline should match the article's title.

I originally submitted it with a headline describing the 14nm AMD Zen core being 10% smaller than the 14nm Intel current-gen core (and I accidentally linked to page 1 of the article instead of page 2.)


Can you ELI5 how putting an FPGA on a CISC die will ultimately improve performance? Does it affect both multi-core and single-core benchmarks?


Adding an FPGA onto a desktop CPU probably doesn't speed up any existing benchmark tests, but it does have a lot of potential.

At an ELI5 level, FPGAs are reconfigurable blocks of digital logic gates. So a CPU-connected chunk of FPGA fabric could be reconfigured on-the-fly to create task-specific CPU instructions.

A smart C compiler might be able to detect an ultra-common 16-instruction block of code and synthesize it into FPGA logic (effectively collapsing it down to a single instruction as far as the CPU is concerned).

edit: as pjc50 points out, Intel isn't making any FPGA/x86-64 hybrids just yet. But I'd bet good money that they're on Intel's roadmap after the Altera acquisition.


It doesn't; these are two unrelated announcements in the same article!

The use of multi-die technology lets Intel build transceivers that run at a higher speed than the main die - 28GHz SERDES on a 1GHz FPGA.


Seems like the title was changed from last night. It was about how AMD's new processors took 10% less die space than Intel chips.


So the latest AMD cores seem like they might be more competitive... does anyone know which AMD processors are likely to support ECC memory? My one big gripe with Intel CPUs is that they currently only support ECC memory for non-consumer chips. I run a personal ZFS cluster and am more concerned about data integrity / cost than I am about pure CPU performance.


Intel does actually support ECC memory in some of their low end consumer chips: https://ark.intel.com/search/advanced?s=t&MarketSegment=DT&E...


I am re-building my ZFS array with consumer parts.

GIGABYTE GA-X150M-PRO ECC
Kingston 16GB ECC DIMM KVR21E15D8/16
Pentium G4600 CPU

Toys arrive in a couple of days. Total outlay was $220.

When I first built my storage box, in 2009, the AMD CPUs all supported ECC DRAM, but I could not find a motherboard-chipset-BIOS setup that would actually implement it.


> AMD CPUs all supported ECC DRAM, but I could not find a motherboard-chipset-BIOS setup

You sure? Around that time I plunked down a Phenom X4 and ECC RAM on an ASUS board, and I thought it supported ECC - the chipset was AMD too.


I think that's why so many people go with the older HP microservers. Super cheap, nice form factor, just add ECC RAM and HDDs.


I have an HP MicroServer N40L that I bought several years ago, and it's almost a doorstop now. Its CPU (dual-core 1.5 GHz AMD Turion II Neo) is slow and doesn't support AES-NI. It maxes out at 8 GB of RAM (16 GB of RAM if the stars align and it likes the RAM you bought). It has one GigE port, and SATA ports are limited to 3 Gbps (SATA II). Expansion is limited to an eSATA port, USB 2.0 ports, a low-profile PCIe 2.0 x1 slot, and a low-profile PCIe 2.0 x16 slot.

It's okay as a NAS that mostly sits idle and occasionally serves up unencrypted data at GigE speeds or less. For more demanding tasks, it's woefully underpowered.


I have one of those. Finding the RAM is pretty easy; I'm running 16GiB of ECC RAM. Cheap, too, because it's DDR2 if I remember correctly. You can put a NIC in the PCIe slot. The SATA ports are fine for spinning disks. You can use the drive bay, which is also SATA.

Of course it always depends on the use-case, but for most people at home it's sufficient. I use it as a Minecraft and media server.


That'd be adequate for a simple NAS (Samba/DAV) and maybe running a 24/7 torrent client with a simple web interface, but that's it, yeah.

Still, a box is a box :P


Underpowered? With that much hardware I could run 32 Q3 and UT99 servers at once without blinking.


Yes! I'm dreading the day when my N36L dies. So far I've only had to replace the PSU fan. And it's weird that no other manufacturer has anything like it. Compact case, not too noisy, houses 4 disks, runs FreeNAS.


Supermicro has a number of servers with a similar form factor.

The closest to a classic HP Microserver is the SuperServer 5028A-TN4, although that one uses the Atom chips that have been recalled.

There's also the much more powerful SuperServer 5028D-TN4T.

They also have a bunch of older models whose names escape me, but can be found by searching for "supermicro mini tower".


That looks promising, thanks for the tip!


If you aren't worried about power consumption, you're probably better off with a Xeon class processor that's a few generations old. Still very fast and capable, if a bit hot.


I think that most AMD processors support ECC.


The catch appears to be finding a motherboard to do it.


Was this headline changed?

I thought it referenced AMD originally?


Yes, it originally linked to the second page of this article, which is subtitled "Zen squeezes x86 area and power" and has a completely different topic. The HN submission received the editorialized title "AMD Zen core is 10% smaller than Intel's current gen, and has 2x larger L2" which is why the top comment (https://news.ycombinator.com/item?id=13642442) references AMD as the focus of the article.

Like EETimes articles (and to be fair, they're not alone in the practice), this article is basically just a collation of manufacturer press releases.

To hammer this point home, or perhaps to reach a 500 word target, the author concludes by discussing a new Mediatek mobile SOC completely unrelated to either Intel's FPGAs on page 1 or the AMD Zen cores on the top half of page 2.


The AMD news is on the second page. Changing the title here to match that of the linked article, while in keeping with HN policy, has ruined the utility of the link.


Link seems to go to "Intel Shows 2.5D FPGA at ISSCC" instead?

edit: oh, derp. nevermind. there's a second page: http://www.eetimes.com/document.asp?doc_id=1331317&page_numb...

edit edit: so apparently page 2 can't be linked to. whatever. bottom of the article content has a "next page" link.


Your link takes me to page 2. Running Chrome with uBlock Origin; not sure why you're being redirected to page 1.


¯\_(ツ)_/¯ it seems to be working now. maybe strange cookies / site bugs / who-knows-what.


Maybe a big L2 cache is nice, but overall performance is all that really matters. I suspect the figures we have seen are from running the chip hot with a fancy cooler. Gamers won't be able to overclock much, and regular people will have heat and noise to deal with. Just a hunch; we'll see in a few weeks. I'm going to buy a Ryzen setup if they are not awful.


A big L2 cache will tend to lead to good overall performance, in that it will positively impact a wide range of programs, especially ones that adhere to common OOP or functional programming practices (pointer hopping, virtual functions, linked lists, etc.).

Games or other carefully tuned programs that deliberately lay out data in memory may not be much affected, I guess.
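
The kind of code that does benefit is a dependent-load chain like this toy sketch; every hop stalls for cache latency, so the more of the structure that stays resident in L2, the better:

    #include <stddef.h>

    struct node { struct node *next; long payload; };

    /* A serial chain of loads: the CPU cannot fetch node N+1 until
       node N's 'next' pointer arrives, so cache latency dominates
       and out-of-order execution can hide very little of it. */
    long sum_list(const struct node *n) {
        long s = 0;
        while (n != NULL) {
            s += n->payload;
            n = n->next;
        }
        return s;
    }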



Why do both of these feature only the physics benchmark? The second link illustrates that Intel still wins in single core performance.


I don't think there's any question that Intel will win in single-thread performance; AMD has never claimed otherwise. They will, though, manage to deliver many more cores while being only a tiny bit behind in single-core, which is a very big difference from e.g. Bulldozer.


Yeah, Intel will remain the single-thread king. But I feel like Intel has backed themselves into a corner here.

The main reason Intel is getting better single-thread performance is the higher clock speeds. I think the graphs even suggest that Zen is getting higher IPC.

I really doubt Intel can push the clock speeds much higher. Intel really needs to find more IPC somewhere, otherwise future versions of Ryzen (Zen+, Zen++) will overtake the Intel chips as AMD refines the Zen design to improve IPC and clock speeds at the same time.


Intel wins on single thread… on the desktop line (4 cores max). Summit Ridge is competing against the overpriced "enthusiast" lineup (Haswell-E/Broadwell-E), which doesn't clock nearly as high as Skylake/Kaby Lake.


Personally, I'm hoping for ECC support on the AMD consumer chips, because Xeon pricing right now is just a total and utter rip-off. For the growing in-memory database world this would be huge.


The link points at something else. Even the edited link below...


Groxx has the correct link


And consumes half the watts!


AMD really needs to start competing on price. Match Intel and Nvidia performance, at 75% TCO.


Given early benchmarks are showing that the Ryzen 7 1700X ($390) is performing similarly to the Intel Core i7-6900K ($1050), they definitely are going in that direction.

http://hothardware.com/news/amds-ryzen-7-1700x-delivers-i7-6...


If that's true, they are vastly undercutting the market, and I do see these cheap prices staying for the next gen.

It could be AMD trying to steal market share.


Which is what they need to do. They need a "rapid" increase in both market share and relevance, and the two are tied together. They can delay making a lot of profits until later. But it looks like AMD's architecture is significantly more cost-effective than Intel's, so even if they have to sell the chips at "only" 70% of Intel's price later on, for the same performance, they could still make significant profit on those chips.


I think they could carve out a niche by being the chip maker that produces a trustworthy computing environment that supports blob-free computing without the equivalent of Intel Management Engine.

That plus your point about TCO could be a nice way of competing with Intel.


That's not a big enough niche to pay for fabs.


AMD are also pushing their custom fabrication. I believe all the current gaming consoles use custom AMD chips.

I'm sure there is little profit in that business model, but it would keep the workers busy.


AMD has their own thing called TrustZone[1].

If you read past the marketing speak it is made to allow software to run without the user being able to poke at it. I assume this is for DRM, but who knows what is running there.

[1] https://www.amd.com/en-us/innovations/software-technologies/...


Yes indeed. The current landscape (at least as I glean from the Core Boot people) is that AMD is at least as bad if not worse than Intel.


Isn't this essentially the direction they're going? They've been very friendly to open source lately, and their new processors and graphics cards are hitting the 75% TCO.


That's pretty much what they do.


What do you mean by "start"? They already do so, and their FX CPUs are cheaper than equivalently performing Intel CPUs.


This is exactly what AMD has done for the past decade on their CPUs.



