AMD Debuts New 12- and 16-Core Opteron 6300 Series Processors (techpowerup.com)
83 points by brokenparser on Jan 22, 2014 | 29 comments



I imagine these will be popular for CPU-based mining, such as Primecoin?

I've heard there's a shortage of Xeon L5639 servers (6-core CPUs, usually ex-lease) because people are using them to mine cryptocurrencies. I tried a Primecoin mining calculator and it claimed mining was pretty profitable (I don't think the calculator is very accurate, however).

According to https://bitcointalk.org/index.php?topic=255782.msg3025191#ms... a dual-L5639 server does 3.77730295 chains per day, which equates to 19 XPM, or 58.55 USD a day. I've seen these servers being rented for $85/month, so it could be worth it.

I can't find much about mining with the 6300 series, but I'd guess it would be around the same, so if you built a mining box for under $1500, you could break even in under a month.
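
A rough sketch of the arithmetic, using only the figures quoted above (the XPM yield, USD value, and prices are claims from the linked post, not independently verified):

  # Back-of-envelope Primecoin profitability from the figures above.
  usd_per_day = 58.55       # claimed USD value of ~19 XPM/day
  rent_per_month = 85.0     # quoted rental price for such a server
  hardware_cost = 1500.0    # hypothetical mining-box budget

  monthly_profit_rented = usd_per_day * 30 - rent_per_month
  breakeven_days_owned = hardware_cost / usd_per_day
  print(monthly_profit_rented)  # ~1671 USD/month, if the claimed rate held
  print(breakeven_days_owned)   # ~25.6 days to recoup the hardware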


Price/performance-wise the AMD processor is in the lead, but if price is not an issue Intel still leads. Also check the Memorycoin performance database: http://agran.net/memorycoin2_calc.html


Does price/performance count when you're talking about directly earning money from the computation?


In that case, performance per watt is more important. You still have to account for the upfront cost, but first-order effects (ongoing power costs) tend to dominate zero-order effects (the purchase price) over time :)
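
A toy illustration of that point (all numbers hypothetical, just to show zero-order vs first-order costs):

  # Zero-order cost (purchase price) is paid once; first-order cost (power)
  # accumulates linearly with time and eventually dominates.
  upfront = 1500.0             # hypothetical hardware cost, USD
  watts = 200.0                # hypothetical draw at the wall
  usd_per_kwh = 0.12           # hypothetical electricity price
  yearly_power = watts / 1000 * 24 * 365 * usd_per_kwh
  print(yearly_power)          # ~210 USD/year; exceeds the upfront cost in ~7 years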


It factors into the length of the breakeven (on the hardware investment) period, which can be a big deal. Not just because return is slower, but also because in a market as volatile as Bitcoin, who knows if a projected 6-month breakeven period will ever actually break even? Breaking even sooner dramatically reduces capital investment risk.

(A low price-to-performance ratio, or conversely a high performance-to-price ratio, will mean a shorter breakeven period)


It's not profitable. The calculation is based on 9-chains per day, but chains-per-day figures are defined relative to the chain length required to obtain a block, i.e. floor(difficulty). At the moment you need a chain of length at least 10 to get a block.

The 10-chain rate for this processor is probably more in the range of 0.06-0.09 chains per day, and at those rates it's not profitable, or only slightly profitable, if you have to pay for energy.
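
A sketch of that conversion; the extension probability is backed out of the 0.06-0.09 figure above (roughly 1 in 50), not a measured Primecoin constant:

  # Converting the quoted 9-chain rate to an expected 10-chain rate.
  chains9_per_day = 3.77730295
  p_extend = 1.0 / 50.0         # assumed odds a 9-chain extends to a 10-chain
  chains10_per_day = chains9_per_day * p_extend
  print(chains10_per_day)       # ~0.076, inside the 0.06-0.09 range quoted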


> I imagine these will be popular with CPU based mining such as primecoin?

Huh, my first thought was VMs. I guess I'm just not one of the cool kids anymore.



Note that these are MCMs, with multiple semiconductor dies in one package, so the large heatspreader is unsurprising.


Comes with a single PCIe 3.0 x16 link on die, good for 16 GB/s. Kaveri was AMD's first PCIe 3.0-capable chip; good to see that in server-land too.
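
For reference, the 16 GB/s figure (per direction) falls straight out of the link parameters:

  # PCIe 3.0 x16 bandwidth per direction, back of the envelope.
  lanes = 16
  transfers_per_s = 8e9         # PCIe 3.0 runs at 8 GT/s per lane
  encoding = 128.0 / 130.0      # 128b/130b line encoding overhead
  gbytes = lanes * transfers_per_s * encoding / 8 / 1e9
  print(gbytes)                 # ~15.75 GB/s, i.e. the "16 GB/s" quoted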

There are a couple of people out there for whom 8 GB/s (PCIe 2.0 x16) just wasn't enough: dual-port FDR InfiniBand and storage controllers are right at that threshold.

Apparently the PCIe competes with one of the HyperTransport channels? I'm kind of under the impression you're limited to 2P if you use the on-chip PCIe, but I'm not 100% sure on that.


> Comes with a single PCIe 3.0 x16 link on die, good for 16 GB/s.

This is the area where Intel is just killing it with their E5 chips, along with being able to write directly to the L3 cache from I/O (DDIO). (I have no idea if AMD does this.)

The E5 is so good that it lets you do entirely different architectures from what came before it. Total game changer.


> The E5 is so good that it lets you do entirely different architectures from what came before it.

As an example: Luke Gorrie is one person actively doing this, by talking directly to Ethernet controllers via DMA from user space. Here he is in a 30-minute talk about exploiting 512 Gbit/s of PCIe in his project, Snabb Switch. He's even written a 10 Gbit/s Intel Ethernet driver in Lua. The idea, as far as I can tell, is that you can turn a common Xeon server into a very low-latency, zero-copy, multi-gigabit, software-defined, layer 2 network appliance.

https://cast.switch.ch/vod/clips/26uo9i576i/

https://github.com/SnabbCo/snabbswitch/wiki


Intel seems stuck at 2P, and HyperTransport still has massively lower latency, but 80 GB/s worth of PCIe lanes is huge. As big as main-memory throughput huge! Hence DDIO, which you reference, which lets I/O write to cache and skip the historic data path through main memory. AFAIK AMD doesn't have anything equivalent, and they only have 16 lanes on chip: the rest come out of I/O hubs.
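
The 80 GB/s figure follows from the lane count, since dual-socket E5 systems expose 40 PCIe 3.0 lanes per socket:

  # Aggregate PCIe bandwidth of a dual-socket Xeon E5 box, per direction.
  lanes = 2 * 40                          # 40 PCIe 3.0 lanes per socket
  per_lane = 8e9 * (128.0 / 130.0) / 8    # ~0.985 GB/s per lane
  print(lanes * per_lane / 1e9)           # ~78.8 GB/s, the "80 GByte/s" quoted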

I'd love to see someone actually try and use all that Intel PCIe IO and report on how utilized those pipes can get. Perhaps someone wants to send the PacketShader people a box loaded with GPUs? That'd be great, thanks!

http://shader.kaist.edu/packetshader/


Cool project! I wonder if you'd get similar performance from CPUs if you used Intel's ISPC compiler[0] with the same GPU algorithms. I've found that GPU algorithms often perform substantially better on plain old CPUs as well, IMO because they use memory bandwidth more effectively.

I too would like to see how far those PCI Express busses can be pushed. :)

BTW, we're adopting Intel's DPDK[1] approach to get massive packet-processing performance on a single machine. So far we're liking it, but we'll see, as it's not in production yet.

[0] http://ispc.github.io/ [1] http://www.intel.com/content/www/us/en/intelligent-systems/i...


I don't follow hardware too closely, but I'm under the impression that the new processors have ridiculously complicated architectures now: integrated graphics on die, PCIe bridges, write-through caches... I remember back in the day it was Processor / Northbridge / Southbridge. Is that still the case? In which direction are they going? System-on-a-chip?


What I find most interesting is the TDP per core at these speeds, with both parts being in the 99W range. My old Core 2 Duo 2.4GHz affair draws 65W for only 2 cores at comparable raw clock speeds. So for 6-8x more cores it's only an extra 34W or so. OK, comparing yesteryear to today's cutting edge is unfair, but the point is that while clock speeds have not changed and core counts have gone up, the increases have been pretty fairly balanced off with power savings. And that's excluding the extra power-saving levels and options added in that 7-odd-year timeframe.
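
Spelled out per core (crude, since TDP isn't measured draw and ignores uncore, turbo, and process differences):

  # Per-core TDP comparison using the figures above.
  print(65 / 2)    # Core 2 Duo:           32.5 W per core
  print(99 / 16)   # 16-core Opteron 6300: ~6.2 W per core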

Whilst electricity demand is still increasing, I do wonder when we hit a tech-usage apex at which these power savings become measurable at the demand level. That said, I still wonder how much electricity was used by the SETI client project in the search for extraterrestrial life out in space, all while adding to the carbon footprint of our own `intelligent` planet. I still wish the processing behind bitcoins were based on actually useful computational units - there's a market in that, to me - but cluster/cloud computing and the trust involved in running data on external, unknown systems is a hurdle there. For medical/research work it becomes viable, though, and if BOINC were to reward blocks of work with a virtual currency akin to Bitcoin, then I and many others would be happier, I suspect.

But it's certainly good to see AMD still pushing products out to the market and keeping Intel in check, albeit in a relay race where the baton is being passed on to ARM. That's a more open and diverse area, though, so good times; however AMD plays the future, I thank them for being there.


AMD stock has been tanking (down 10% in pre-market today). Any opportunities here?

(Tanking is a bit strong... it's down from a recent high of 4.50 not long ago)


They just announced their financials, which is probably why it has been dropping.

Of course, the financials matched the projections, but the stock market is funny.


I feel like they have a strong portfolio going forward, though: diversifying into ARM, Southern Islands out there conquering the consumer compute space, and a focus on heterogeneous compute.

They are only in trouble in the spaces where their x86 processors have to compete with Intel. They don't have their own fabs anymore, they aren't a process node ahead like Intel, and in terms of raw revenue they are an order of magnitude smaller. That isn't a fight they can win, and they have been stuck in this low-cost, no-margins commodity CPU space for a while.


They are definitely a potential opportunity. To my understanding, many analysts are disturbed by the falling gross margin, because in Intel's space high gross margin is important. But according to some articles I've read, lower gross margin is a natural part of the semi-custom work the company is doing now, like the Xbox and PlayStation chips.

An article that explains the shift:

http://www.fool.com/investing/general/2013/12/30/can-amds-se... ("Semi-custom could be the answer")

So, depending on what you believe, the stock could be rather undervalued.


If AMD gets back to 2.50, then it's a strong buy. Today's drop just takes it back to where it was a month ago.


This seems right.


Piledriver, boring :/


Someone who knows what they're talking about. It may be boring, but the fact of the matter is that AMD doesn't have the resources to make a Steamroller-based Opteron.

Intel continues to make new architectures and revisions of their chips, but AMD is still stuck on a several-year-old architecture. Granted, this one has a solid price/performance ratio... but being stuck on Piledriver is a real downer for AMD's server line.


I'm not sure it's such a bad thing. Intel already does something similar with their server line: Ivy Bridge-EP came out after Haswell was released, and Sandy Bridge-EP came out after Ivy Bridge was released. This could very well be a way for AMD to use the consumer line to get the architecture out the door, and then hand it over to the server team for fine-tuning.

In fact, it could give AMD a good way to get back into the performance desktop market as well: given the leaks about the architecture having three sets of PCIe 3.0 lanes for single/dual-processor machines, I can very much see AMD leapfrogging in terms of 3+-way CrossFire rigs for people who want ridiculous gaming machines.

But, as you mentioned, the difficult part is seeing whether AMD has the engineers to properly exploit all the groundwork they've laid for ridiculously parallel systems. I'm hoping so, because I like the direction AMD's going; I'm just hoping it's not a case of too little, too late.


AMD did not "choose" this strategy; they were forced into it. With a $1 billion shortfall in 2012, and a shortfall of millions of dollars in 2013 (despite cutting tons of staff and selling off their headquarters), AMD is strapped for cash, and their strategy proves it.

It's probably the best AMD can do for the moment. It will take them several years to build up the staff and resources to once again compete against Intel in the high-end CPU market, and the time is not now.

On the other hand, AMD is pushing very interesting technology in the form of APUs, which honestly look like the future of general-purpose computing. APUs are good enough to serve as the primary CPU/GPU hybrid for the Xbox One and PS4... and while their desktop/laptop APUs aren't quite as powerful, the concept has been proven.

Anyway, Bulldozer was years ago. AMD showed the world 12- and 16-core devices at lower GHz, but people prefer to buy Intel's 4- or 6-core devices with higher GHz and IPC. It may be a few years, or even a decade, before another high-core-count CPU push makes sense.

The current market still prefers high-IPC devices at higher GHz. Single-threaded performance is king in current games. Only when games and applications take advantage of massive numbers of cores should AMD bet on heavy multi-core boxes again.



Why is this review over one year old? Are they just releasing more cores of the same architecture?


The old one is codenamed Abu Dhabi; this new one is Warsaw. Looks like the parent was too quick to post.

According to [1], they are about the same, but more energy efficient.

[1] http://www.kitguru.net/components/cpu/anton-shilov/amd-16-co...





