
AMD launches Kaveri processors aimed at starting a computing revolution - mactitan
http://venturebeat.com/2014/01/14/amd-launches-kaveri-processors-aimed-at-starting-a-computing-revolution/
======
pron
AMD is doing some interesting work with Oracle to make it easy to use HSA in
Java:

* [http://semiaccurate.com/2013/11/11/amd-charts-path-java-gpu/](http://semiaccurate.com/2013/11/11/amd-charts-path-java-gpu/)

* [http://www.oracle.com/technetwork/java/jvmls2013caspole-2013...](http://www.oracle.com/technetwork/java/jvmls2013caspole-2013527.pdf)

* [http://developer.amd.com/community/blog/2011/09/14/i-dont-al...](http://developer.amd.com/community/blog/2011/09/14/i-dont-always-write-gpu-code-in-java-but-when-i-do-i-like-to-use-aparapi/)

* [http://openjdk.java.net/projects/sumatra/](http://openjdk.java.net/projects/sumatra/)

The intent is that the GPU will be used transparently by Java code employing
Java 8's streams (bulk collection operations, akin to .NET's LINQ), in
addition to more explicit usage (compiling Java bytecode to GPU kernels).
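
For a concrete picture, here is a minimal sketch (my own, assuming plain
Java 8 and no Sumatra-specific API) of the kind of bulk stream operation this
targets. Today the parallel stream runs on the fork/join thread pool;
Sumatra's goal is for the JVM to compile the same lambda to a GPU kernel with
no source change:

    import java.util.stream.IntStream;

    public class StreamSketch {
        public static void main(String[] args) {
            int n = 1_000_000;
            int[] in = IntStream.range(0, n).toArray();

            // A data-parallel bulk operation with no GPU-specific code:
            // exactly the shape of code Sumatra intends to offload.
            int[] out = IntStream.range(0, n)
                                 .parallel()
                                 .map(i -> in[i] * in[i])
                                 .toArray();

            System.out.println(out[n - 1]);
        }
    }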

~~~
jerven
This is one of the things that makes Java 8 really exciting. I think it is
going to show where JIT languages shine in comparison to AOT-compiled ones:
multi-architecture code in one program, without the developer needing to jump
through hoops.

Graal/JVM (like PyPy) is a really nice way to bring many languages to advanced
VMs. See for example node.jar/Nashorn ("fast JS on the JVM") or Topaz and
Truffle (Ruby on PyPy and Graal/JVM).

~~~
pjmlp
Not only that, replacing HotSpot with Graal will also reduce the amount of C++
code in the standard JVM, moving toward the goal of a production-quality meta
VM.

~~~
mikevm
Speaking of Java, does anyone know if there are any plans to add "real"
generics to Java?

~~~
pjmlp
It is part of the wish list for post-Java 8, as presented at JavaOne 2013, but
the wish list came with no guarantee about what the exact focus will be.

------
ChuckMcM
This reaffirms for me that we really need AMD to keep Intel from falling
asleep at the wheel. I was certainly intrigued by what I saw in the Xbox One
and PS4 announcements, and being able to try some of that tech out will be
pretty awesome.

It is fascinating to me how FPUs were "always" co-processors but GPUs only
recently managed to get to that point. Having GPUs on the same side of the
MMU/cache as the processors is pretty awesome. I do wonder, if that continues,
what it means for the off-chip GPU market going forward.

~~~
rhubarbquid
In older PCs the FPU was a separate chip.

~~~
ChuckMcM
That it was (a separate chip), but what is perhaps less well known is that
Intel also produced an _I/O_ co-processor chip called the 8089. That chip was
interesting to write code for, as it tried to offload various I/O operations.
At the time, Intel was on something of a "systems" tear, building their own
small systems to compete with other computer vendors like DEC, Motorola, and
TI.

The 8089 was a total flop relative to its development cost, and Andy Grove
declared that Intel would not do any more graphics or I/O processor chips. (I
was the systems validation engineer on the 82786 at the time; so much for my
project!) As it turned out, I think it was just too early for a specialized
co-processor.

~~~
DougMerritt
I wouldn't say too early. There were a fair number of temporarily successful
co-processors of many different sorts in that general era; aside from floating
point (and smaller markets for FFT chips and boards, and some efforts at
vector co-processors), the most famous were probably the multimedia
co-processors in the Atari ST and the Amiga (Jay Miner et al.).

If that's too late for your tastes, look at the nearly universal support chips
for the 8080/Z80, like DMA controllers. Possibly one could count UARTs and
PIOs too.

But Moore's Law kept killing them, and eventually it became common wisdom that
it would, so they faded.

Over a long time period, external devices that offload the primary CPU come
and go under different guises, as Ivan Sutherland observed as early as 1968.

Intel's 8089 and its relatives may have been partially motivated by the
success of I/O processors in the mainframe and supercomputer world in the
1960s and later.

P.S. Hi Chuck :)

------
pvnick
Among other things, this has lots of applications for molecular dynamics
(computational chemistry simulations) [1]. Previously you had to transfer data
over to the GPU, which is no big deal if you're dealing with small data sets
and are only compute-limited, but with bigger data sets it becomes a problem.
Integrating the GPU and the CPU means they both have access to the same
memory, which makes parallelization a lot easier (see the sketch after the
links below). If, as someone else here said, AMD is partnering with Oracle to
abstract the HSA architecture behind something more high-level like Java [2],
then you don't need to go learn CUDA or Mantle or whatever GPU language gets
cooked up just to use that hardware.

I'm personally hoping that not only will we get to see more effective
medicines in less time, but that some chemistry research professors will also
get to go home sooner to spend time with their kids.

[1]
[http://www.ks.uiuc.edu/Research/gpu/](http://www.ks.uiuc.edu/Research/gpu/)

[2] [http://semiaccurate.com/2013/11/11/amd-charts-path-java-gpu/](http://semiaccurate.com/2013/11/11/amd-charts-path-java-gpu/)
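
To make the transfer cost concrete, here is a toy sketch (my own; the force
formula is a placeholder, not real molecular dynamics) using the Aparapi
library mentioned elsewhere in this thread. In Aparapi's explicit mode the
host-to-GPU copies are spelled out, and those put/get calls are exactly the
cost that a shared-memory APU removes:

    import com.amd.aparapi.Kernel;
    import com.amd.aparapi.Range;

    public class ToyForces {
        public static void main(String[] args) {
            final int n = 1_000_000;
            final float k = 0.5f;                // toy spring constant
            final float[] pos = new float[n];    // particle positions
            final float[] force = new float[n];  // computed forces

            Kernel kernel = new Kernel() {
                @Override public void run() {
                    int i = getGlobalId();
                    force[i] = -k * pos[i];      // placeholder force, not real MD
                }
            };

            kernel.setExplicit(true);            // manage buffer transfers by hand
            kernel.put(pos);                     // host -> GPU copy: the cost HSA removes
            kernel.execute(Range.create(n));
            kernel.get(force);                   // GPU -> host copy
            kernel.dispose();
        }
    }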

~~~
blah32497
Can you give examples of medicines developed thanks to computational chemistry
simulations?

I'm rather ignorant about this area of research, but it has always seemed
kind of fruitless to me? Sort of like high-throughput screening: a ton of
resources and computation are dedicated because it sounds like a good idea...
but ultimately there is very little to show for it all.

Way too little signal and way too much noise.

~~~
akiselev
[http://en.wikipedia.org/wiki/Drug_design#Rational_drug_disco...](http://en.wikipedia.org/wiki/Drug_design#Rational_drug_discovery)

Scroll down to Examples.

~~~
sp332
Why not just link there?
[https://en.wikipedia.org/wiki/Drug_design#Examples](https://en.wikipedia.org/wiki/Drug_design#Examples)

------
amartya916
For a review of a couple of the processors in the Kaveri range:
[http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600...](http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k)

~~~
xentronium
The AnandTech article is infinitely more helpful than the one in the OP.
Hopefully it floats to the top.

~~~
DonGateley
Not too likely. It's aimed at a relative handful of pretty narrow specialists.

------
AshleysBrain
I have a question: previous systems with discrete GPU memory had some pretty
insane memory bandwidth, which helped them be way faster than software
rendering. Now the GPU and CPU share memory. Doesn't that mean the GPU is
limited to slower system RAM speeds? Can it still perform competitively with
discrete cards? Or is system RAM now as fast as discrete-card memory? If so,
does that mean software rendering is hardware-fast as well? Bit confused
here...

~~~
wmf
Yes, these APUs are limited to fairly slow DDR3 RAM. It has been suggested
that making the GPU part of the chip bigger wouldn't help, because it would be
bandwidth-limited. There are a couple of possible solutions to this: the PS4
uses GDDR5, so it's more like a traditional GPU with a CPU added on; Intel's
Iris Pro uses a large, fast L4 cache; and the Xbox One has some fast graphics
RAM on chip. AMD will need to do something to increase memory bandwidth if
they want to sell processors for more than $200.

------
networked
This is an interesting development indeed. In light of
[http://images.anandtech.com/doci/7677/04%20-%20Heterogeneous...](http://images.anandtech.com/doci/7677/04%20-%20Heterogeneous%20Compute%20Software.jpg)
I wonder if we'll soon see a rise in cheap, low-power dedicated servers meant
for GPU-accelerated tasks (e.g., for an image host to run accelerated
ImageMagick to resize photographs). Do you think this would be viable in terms
of price/performance?

And in case you were, like me, wondering how much the new AMD CPUs improve on
their predecessors' single-thread performance, you can find some benchmarks at
[http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600...](http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/10).

------
tommi
Kaveri means 'Buddy' in Finnish. I guess the CPU and graphics are buddies in
this case.

~~~
reactor
Also a river in India
([http://en.wikipedia.org/wiki/Kaveri](http://en.wikipedia.org/wiki/Kaveri)).

------
GigabyteCoin
Any initial insights as to whether this new CPU/GPU combo will play any nicer
with Linux than previous AMD GPUs?

Setting up Catalyst and getting my ATI Radeon cards to work properly is
probably my least favorite step in setting up a Linux computer.

~~~
Glyptodon
Personally, I find installing the AMD driver on common Linux distros pretty
easy: one command to build it into a package for your distro, then one command
to install it. Reboot and you're good.

Sure, you probably can't fix the screen tearing. And their VDPAU equivalent
isn't the greatest. But getting up and running? It's always been really easy.

~~~
GigabyteCoin
I can install the open source ATI package on Arch Linux in a single command:

    pacman -S xf86-video-ati

...but it doesn't actually work. The card doesn't get used to its full
potential. Which two commands are you talking about?

~~~
Datsundere
Open source drivers don't have full 3D support yet.

Also, you can change your power profiles and specify a few custom configs.

But typically the open source drivers are still not as good as the proprietary
ones regarding 3D support.

------
anonymfus
Die shot: [http://i.imgur.com/Unb9ng0.jpg](http://i.imgur.com/Unb9ng0.jpg)

~~~
dh_imu
As someone totally unfamiliar with hardware: can someone explain what exactly
I'm looking at here? What do the holes and different colored sections mean?
~~~
sliverstorm
Teal squares on right: CPUs

Teal rectangles on right (in between squares): L2

Orange mass on left: GPU

Bluish rectangle on bottom: DDR interface(?). Possibly L3; I forget if these
actually have L3.

~~~
Narishma
They don't have L3 cache.

------
jcalvinowens
This is interesting, but my experience is that Intel's CPUs are so
monumentally superior that it will take a lot more than GPU improvements to
make me start buying AMD again.

Specifically, I'm dealing with compile workloads here: compiling the Linux
kernel on my Haswell desktop CPU is almost a 4x speedup over the AMD Bulldozer
CPU I used to have. I used to think people exaggerated the difference, but
they don't: Intel really is that much better. And the Haswells have really
closed the price gap.

~~~
higherpurpose
I'm actually expecting Nvidia and Apple to catch up to Intel on CPU
performance before AMD does, and I think it will happen within a year or two,
after they switch to 16nm FinFET. One reason this can happen is that Intel has
largely stopped focusing on increasing CPU performance since Sandy Bridge.
They mainly focus on power consumption and increasing GPU performance these
days, which obviously leads to a compromise on CPU performance.

~~~
XorNot
You need to appreciate the economics to realize why it won't happen: a new
chip foundry for these types of processors costs something like $5 billion,
and that price has only gone up over the years (since we keep wanting to put
more and more mass-produced nanotechnology into these things).

Every single time you change a process in any way, millions of dollars of
equipment - minimum - is ripped out, retooled, and replaced. And that's fine,
because this industry is all about economies of scale, but it means Intel has
a huge advantage: they can build more chips. As in, they can convert several
fabrication lines to build chips and simply have more out the door and on the
market than their competitors, which means they can afford a price drop that
other people can't - because those others still need to pay for the upkeep,
running costs, and loans for building their fab plants in the first place.

Intel is focusing on power and GPU because that's where the gains are to be
had and what the market needed, and because they have to: current-gen high-end
CPUs have a higher thermal output density than a stove hotplate. Power use had
to drop to have any hope of running higher performance into the future, and
anyone hoping to compete has the exact same problems to contend with. And
since new battery technology isn't happening, mobile has to find power savings
on the demand side.

~~~
hershel
TSMC, which is where Nvidia and others make their chips, already has a 16nm
factory running.

~~~
cdash
You probably won't see anything on their 16nm node until some time in
mid-to-late 2015, and it should be pointed out that it is not a real node
shrink but a fake one: they use the same 20nm node but have introduced FinFET.
While this gets them increased energy efficiency, it does not give them the
other benefit of a node shrink (increased transistor density), which means
there will be fewer chips made per wafer, driving up the cost.
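
To put toy numbers on the wafer point (hypothetical die areas, ignoring edge
loss and yield): dies per wafer scale inversely with die area, so a node that
doesn't shrink the die gets none of the per-chip cost reduction a genuine
shrink would bring:

    public class DiesPerWafer {
        public static void main(String[] args) {
            double waferArea = Math.PI * 150.0 * 150.0; // 300mm wafer, area in mm^2
            double sameDie = 150.0;   // hypothetical 20nm die; FinFET alone doesn't shrink it
            double shrunkDie = 75.0;  // what a genuine density doubling would allow
            System.out.printf("no shrink: %.0f dies/wafer, true shrink: %.0f dies/wafer%n",
                              waferArea / sameDie, waferArea / shrunkDie);
        }
    }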

------
transfire
Hey, they finally built an Amiga-on-a-chip!

~~~
vanderZwan
I never owned an Amiga, but I thought one of its key features was its
modularity? Wouldn't an Amiga-on-a-chip inherently be a contradiction, then?

~~~
exDM69
[http://en.wikipedia.org/wiki/Original_Amiga_chipset](http://en.wikipedia.org/wiki/Original_Amiga_chipset)

The major difference between an Amiga and a PC (which was essentially "CPU
only") was that the Amiga had a chipset with some units that were somewhat
programmable.

------
dmmalam
This could be an interesting solution for a compact Steam box; it's
essentially very similar to the hardware in the PS4 and Xbox One, though I
wonder if the lack of memory bandwidth would hurt performance noticeably.

------
jjindev
"AMD says Kaveri has 2.4 billion transistors, or basic building blocks of
electronics, and 47 percent of them are aimed at better, high-end graphics."

This sentence would have been so much better off if they'd just punted on the
weak explanation of "transistor" and left it to anyone unsure to look it up.

------
malkia
Old ATI chips were named Rage. Kaveri seems to be a river in India... but it
would have been much cooler if it were named Kolaveri, which according to my
poor translation skills means rage in one of India's languages (possibly
Tamil).

And then there is the song... :)

~~~
teho
Kaveri is also Finnish for a friend.

~~~
Sharlin
Friend, yes, but colloquial, so a more accurate translation might be "buddy".
To a Finn, it's rather amusing to see it used as a processor codename.

~~~
malkia
_Aver_ (plural _averi_) means friend in Bulgarian too. (I'm Bulgarian, but
more commonly people would use "priatel" or "drugar".)

~~~
Sharlin
The words may be etymologically related. The word _kaveri_ has two suggested
etymologies. The first is that it's originally a variation of _toveri_ (more
formal; it also means "friend", but with a leftist connotation, like
"comrade"), which in turn is a fairly straightforward loan from the Russian
_товарищ_ (_tovarištš_), probably closely related to the Bulgarian _aver_.
Interestingly, the _ka-_ in _kaveri_ may originate from the Swedish _kamrat_,
quite clearly related to the English "comrade".

The other hypothesis is that _kaveri_ is from the Yiddish _חבֿר_ (_khaver_),
which is a direct loan from Hebrew.

~~~
malkia
Thanks! I had forgotten about tovarish (and Russian was the first foreign
language I learned in school).

------
higherpurpose
I wish Nvidia would join HSA already, and stop having such a Not Invented Here
mentality.

~~~
zanny
I wish Nvidia would join Mesa already, and stop having such a Not Invented
Here mentality.

I wish Nvidia would drop CUDA and focus on OpenCL already, and stop having
such a Not Invented Here mentality.

I wish Nvidia would use Miracast already, and stop having such a Not Invented
Here mentality (with regards to their proprietary game streaming).

I wish Nvidia would push eDP (/ DisplayPort 1.4 variable refresh) instead of
their in-house proprietary G-Sync already, and stop having such a Not Invented
Here mentality.

I wish Nvidia would standardize unencumbered PhysX, and stop having such a Not
Invented Here mentality.

~~~
sharpneli
"I wish 3dfx would drop Glide and focus on OpenGL already, and stop having
such a Not Invented Here mentality" \- Nvidia in the end of 90's.

I still remember their rhetoric about open standards and how they are good for
consumer and that's why we should purchase their GPU's and that's why game
developers shouldn't use just Glide.

Somehow I feel like I was tricked by them. As I was in my early teens back
then I was somewhat naïve and thought they were serious. Oh how wrong I was.

------
annasaru
Nice name. A majestic river in South India:
[https://en.wikipedia.org/wiki/Kaveri](https://en.wikipedia.org/wiki/Kaveri)

------
grondilu
« The A-Series APUs are available today. »

It's nice to read a tech article about new tech that is available _now_, and
not at some unknown point in the future.

------
rbanffy
Are there open-source drivers or will the driver builders have to reverse
engineer the thing?

~~~
ashf79
AMD releases public documentation [1] and employs several full-time open
source driver developers.

[1] [http://developer.amd.com/community/blog/2013/10/18/amd-gpu-3...](http://developer.amd.com/community/blog/2013/10/18/amd-gpu-3dcompute-documentation/)

~~~
rbanffy
Is that documentation sufficient for a full-featured open source driver on par
with their proprietary ones?

I ask because, for the past decade or so, I've been using Intel CPUs and GPUs
exclusively for their excellent Linux support. If AMD can provide the same or
a better level of support, I'd consider switching.

~~~
zanny
These new chips aren't supported in their Mesa driver, no. They probably will
be in a few months, albeit in a buggy state where some display outputs may not
work and the HSA isn't used at all.

And since it's radeonSI-based, you don't have OpenGL past 3.1 or OpenCL, and
you won't likely ever see the bullet-point features like Mantle or TrueAudio.

Though, on the other side of the aisle, Intel just got support for OpenGL 3.3
in their driver, and they don't support OpenCL at all on their IGP parts.

The only real toss-up between them when comparing GPU freedom is that Intel
uses wholly FOSS drivers while AMD ships proprietary boot firmware that they
have staunchly opposed getting rid of.

Then again, AMD supports coreboot on all their chipsets and doesn't use
proprietary signed microcode payloads on their CPUs. And even in the driver
space, Intel ships firmware blobs for their wireless NICs, so they aren't
saints there either.

_And_ Intel pushed UEFI, which is such a colossal PITA that it makes me angry
on any board I've dealt with it on. And even when Google pressures them into
coreboot support on _some_ boards, only for Chromebooks, they still use
firmware blobs to obfuscate the chipset anyway.

Intel is pushing Wayland forward, mostly for Tizen, but they are still paying
a lot of Wayland devs, which is a good thing. AMD participates in kernel
development, but not nearly as much as Intel. Then again, Intel is an order of
magnitude larger as a company and has wiggle room in their budgeting, since
they dominate the industry so thoroughly with their ISA stranglehold, so I
have to give AMD some credit there.

In the end, neither company is "great" for open source while the other is bad.
They both do good and evil in the ecosystem (unlike Nvidia, where publishing
2D documentation is supposed to be good enough). I try to support AMD when I
can, if I have an "A or B without preference" choice, since they are the
underdog. Also, they produce a lot more open standards - they pushed OpenCL,
they are supporting eDP for variable-refresh screens, etc. - whereas Intel
keeps making proprietary technologies only for their own stuff, like Smart
Connect or Rapid Storage.

Though some recent AMD technologies like TrueAudio and Mantle haven't been
open at all, so once again, it is a toss-up.

------
ck2
AMD needs to die-shrink their R9 chip to 20nm or less and put four of them on
a single PCIe board.

They'd make a fortune.

~~~
zanny
Well, first, they need to spend several hundred million dollars to build a
20nm fab since nobody else has one.

Then they need to change quantum mechanics so they could cool four 300W,
~300mm² die packages on one PCB without liquid nitrogen or liquid epeen.

Sounds pretty expensive.

~~~
nhaehnle
Even 14nm fabs exist:
[http://en.wikipedia.org/wiki/14_nanometer](http://en.wikipedia.org/wiki/14_nanometer)

Vendors are pretty tight-lipped about yield, though, and your other points
stand as well, obviously.

------
Torn
> It is also the first series of chips to use a new approach to computing
> dubbed the Heterogeneous System Architecture

Are these not the same sort of AMD APU chips used in the PS4? I.e., do the PS4
chips already have HSA?

According to the following article, the PS4 has some form of Jaguar-based APU:
[http://www.extremetech.com/extreme/171375-reverse-engineered...](http://www.extremetech.com/extreme/171375-reverse-engineered-ps4-apu-reveals-the-consoles-real-cpu-and-gpu-specs)

------
fidotron
This is great progress, and the inevitable direction for compute-heavy
workloads. Once the ability to program the GPU side becomes commonplace, the
CPU starts to look a lot less important and more like a coordinator.

The question is, what are those compute-bound workloads? I'm not persuaded
that there are many of them anymore; the real bottleneck for some time, with
most problems, has been I/O. This even extends to GPUs, where fast memory
makes a huge difference.

Lack of bandwidth has ended up being the limiting factor for every program
I've written in the last 5 years, so my hope is that while this is great for
compute now, the programming models it encourages us to adopt can help us work
out the bandwidth problem further down the road.

Still, this is definitely the most exciting time in computing since the
mid-80s.

~~~
weland
> The question is, what are those compute bound workloads? I'm not persuaded
> that there are too many of them anymore, and the real bottleneck for some
> time with most problems has been I/O. This even extends to GPUs where fast
> memory makes a huge difference.

The bottlenecks in the problems themselves shouldn't be underestimated,
though. Some types of problems are intrinsically difficult or outright
impossible to reformulate so as to take advantage of vectorized processing.

That being said, there are a lot of other problems that do lend themselves to
it. I'm quite enthusiastic about this.

------
ebbv
All of Intel's recent mass-market chips have had built-in GPUs as well. That's
not particularly revolutionary. The article itself states that "9 out of 10"
computers sold today have an integrated GPU. That 9 out of 10 is Intel, not
AMD.

The integrated GPUs make sense from a mass-market, basic-user point of view.
The demands are not high.

But for enthusiasts, even if the on-die GPU could theoretically perform
competitively with discrete GPUs (which is nonsensical if only due to thermal
limits), discrete GPUs have the major advantage of being independently
upgradeable.

Games are rarely limited by the CPU anymore once you reach a certain level.
But you will continue to see improvements from upgrading your GPU, especially
as monitor resolutions move from 1920x1200 to 2560x1440 to 3840x2400.

~~~
Sanddancer
AMD's APUs have CrossFire support built in. So if/when your graphics needs get
to the point where you need more oomph, you can add a discrete GPU and take
advantage of both the on-die and the plug-in GPU.

~~~
cdash
Well, not exactly. AMD calls this Dual Graphics, and it only works with
certain graphics cards. In the case of Kaveri, this means something like two
entry-level R7-series cards that use DDR3 memory instead of the GDDR5
traditionally found on more powerful graphics cards.

This makes it almost pointless in my opinion, as you can get a discrete card
that is more powerful on its own than the combination for not much more money.

------
higherpurpose
> AMD now needs either a Google or Microsoft to commit to optimizing their
> operating system for HSA to seal the deal, as it will make software that
> much easier to write.

I'd say this is perfect for Android, especially since Android deals with three
architectures at once: ARM, x86, and MIPS (which will probably see a small
resurgence once Imagination releases its own MIPS cores on a competitive
manufacturing process). And AMD is already creating a native API for the JVM,
so it's probably not hard to do the same for Dalvik. It would be nice to see
support for it within a year. Maybe it would convince Nvidia to support it,
too, with their unified-memory Maxwell-based chip next year, instead of trying
to do their own thing.

~~~
zanny
I also don't get why AMD needs Google or MS to do anything. Do they mean
getting HSA into Java / C#? Because it seems to me that getting a GPU to do
HSA just requires the drivers and library infrastructure (libgl, libcl) to use
it.

Does AMD even have Android drivers, or are they just using their Mesa or
Catalyst ones? Even then, why not just contribute HSA support to the kernel /
their drivers?

------
vanderZwan
Here's something that confuses me, and maybe someone with better know-how can
explain this:

1: The one demo of Mantle I have seen so far [1] says they are _GPU_ bound in
their demo, even after underclocking the CPU.

2: Kaveri supports Mantle, but claims to be about 24% faster than Intel HD
graphics, which is decent, but hardly in the ballpark of the powerful graphics
cards used in the demo.

So combining those two, aren't these two technologies trying to pull in
different directions?

[1] Somewhere around the 26 minute mark:
[http://www.youtube.com/watch?v=QIWyf8Hyjbg](http://www.youtube.com/watch?v=QIWyf8Hyjbg)

~~~
sliverstorm
I think you may have misunderstood the purpose of Mantle and that demo. Or he
may have explained it poorly in the video.

Saying _We are GPU bound even when we underclock the processor_ is attempting
to illustrate how cheap Mantle makes issuing tons of instructions to the GPU.
Mantle doesn't make the GPU faster, it makes submitting tasks to the GPU
faster in terms of CPU-time.

~~~
cdash
I'm pretty sure his point was that an integrated GPU has no need for this,
because it is the limiting factor and not the CPU. This will probably not be
true in the future, as CPUs are not getting much faster; instead, more cores
are being used, which is one of the problems Mantle is intended to help with.

~~~
sliverstorm
Even if the integrated GPU is the limiting factor, it still saves you CPU
cycles to spend on other things, and has the additional bonus of simply being
compatible with code that you would run on discrete GPUs. 'Twould be awful if
games built with Mantle just couldn't run on APUs.

------
codereflection
It's really nice to see AMD getting back into being a game changer.

------
jsz0
The problem I see with AMD's APUs is the GPU performance. Even if it's twice
as fast as Intel's GPUs, both Intel's and AMD's integrated GPUs are totally
adequate for 2D graphics, low-end gaming, and light GPU computing, and both
require a discrete card for anything more demanding. IMO AMD is sacrificing
too much CPU performance: users with very basic needs will never notice that
the GPU is 2x faster, and people with more demanding needs will be using a
discrete GPU either way.

~~~
cdash
To put the gap in perspective: a $100 dual-core Intel i3 has more CPU
performance than these quad-core chips.

------
sharpneli
This looks really cool. However, it suffers from the same issue as their
Mantle API: the actual interesting features are still just hype, with no way
for us to access them.

Yes, the hardware supports them, but until the drivers are actually out (HSA
drivers are supposedly due in Q2 2014) nothing fancy can be done. It'll
probably be the end of 2014 before the drivers are performant and robust
enough to be of actual use.

------
rch
> the power consumption will range from 45 watts to 95 watts. CPU frequency
> ranges from 3.1 gigahertz to 4.0 gigahertz.

I was fairly dispassionate until the last paragraph. My last Athlon (2003-ish)
system included fans that would emit 60dB under load. Even if I haven't gotten
exactly the progress I would have wanted, I have to admit that consumer kit
has come a long way in a decade.

------
hosh
I'm a bit slow on the uptake ... but does this remind anyone of the Cell
architecture? How different are those two architectures?

~~~
m_mueller
Cell neither used x86 for the main cores, nor had sufficient industry
standards and tooling (OpenCL, LLVM, OpenGL, DirectX...) ready for the
accelerator part. AMD's new offering is fully intended for the mass market,
while Cell was a strange mixture of HPC architecture and PS3 processor. I'd
say AMD has a significantly higher chance of success; these new chips should
be pretty much a no-brainer for mid-range media/gaming PCs. If they can scale
it down to a much lower TDP, it could also become interesting for 'Surface Pro
class' (if you can call that a class) tablets.

------
erikj
The wheel of reincarnation [1] keeps spinning. I hardly see anything
revolutionary behind the barrage of hype produced by AMD's marketing
department.

[1] [http://www.catb.org/jargon/html/W/wheel-of-reincarnation.htm...](http://www.catb.org/jargon/html/W/wheel-of-reincarnation.html)

------
noonereally
"Kaveri" is name of one of major river in India. Must have involved ( or
headed) by Indian guy.

[http://en.wikipedia.org/wiki/Kaveri](http://en.wikipedia.org/wiki/Kaveri)

~~~
sliverstorm
I suppose the graphics cards could only have been headed by a Pacific Islander
then?

------
belorn
Will the APU and graphics card cooperate to form a multi-GPU setup with a
single output? It sounds as if it could make a more effective gaming platform
than a CPU-and-GPU combo.

~~~
zokier
Yes:

[http://www.amd.com/us/products/technologies/dual-graphics/pa...](http://www.amd.com/us/products/technologies/dual-graphics/pages/dual-graphics.aspx)

Of course, it has all the issues of a multi-GPU setup, so YMMV.

------
devanti
Hoping to see AMD return to glory days it hasn't had since the Athlon XP.

------
dkhenry
So we finally get to see what HSA can bring to the table.

~~~
venomsnake
Is there support at the OS level for that? Something that rewrites existing
binaries on the fly and parallelizes where possible? Is that even possible?

~~~
metrix
There is no OS-level support, and there is nothing that rewrites existing
binaries, but I am hoping that programming languages themselves (Ruby, Python)
get optimizations built in. An example would be hash lookups in Ruby: why
couldn't the GPU do this for us in certain use cases? You could see large
performance increases for all apps written in the language, with no code
changes needed from thousands of developers.

~~~
Arelius
> An example would be hash lookups in Ruby

You mean for a hash table? I don't think you'll be seeing that any time soon.
Hash computation will almost certainly be faster on the primary CPU than the
scheduling and waiting overhead alone. And the GPU isn't particularly good at
the pointer chasing required for the rest of the lookup.
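
To illustrate the contrast with a hedged sketch (my own toy example, in the
style of the Aparapi library linked upthread): a single lookup can never
amortize a kernel dispatch, but hashing a large batch of keys in one launch is
the shape of work that can:

    import com.amd.aparapi.Kernel;
    import com.amd.aparapi.Range;

    public class BatchHash {
        public static void main(String[] args) {
            final int n = 1_000_000;
            final int[] keys = new int[n];    // stand-in for a real batch of keys
            final int[] hashes = new int[n];

            // One launch hashes all n keys, so the dispatch overhead is paid
            // once; the bucket walk (pointer chasing) would stay on the CPU.
            Kernel kernel = new Kernel() {
                @Override public void run() {
                    int i = getGlobalId();
                    int h = keys[i];
                    h ^= (h >>> 20) ^ (h >>> 12);          // HashMap-style bit mix
                    hashes[i] = h ^ (h >>> 7) ^ (h >>> 4);
                }
            };
            kernel.execute(Range.create(n));
            kernel.dispose();
        }
    }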

~~~
metrix
Scheduling and waiting overhead? What waiting?

~~~
Arelius
I assumed that in most cases, when you deal with a hash table, you want some
data returned. That's perhaps not true in the case of adding to the hash
table, but if you don't need the result you can just add it to a queue and do
it on any old thread, since it's clearly not performance-sensitive.

------
adrianwaj
I wonder how well they can be used for mining scrypt.

------
X4
Want to buy, now! Can someone give me a hand choosing a motherboard or
something that allows using about 4 to 8 of these APUs?

------
lispm
So the next computing revolution is based on more power hungry chips for
gamers?

------
imdsm
How do I get one?

~~~
zokier
Buy one?
[http://www.newegg.com/Product/Product.aspx?Item=N82E16819113...](http://www.newegg.com/Product/Product.aspx?Item=N82E16819113359)

~~~
sliverstorm
Holy cow, I am not used to a product launching this close to the announcement.

~~~
zokier
Kaveri was _announced_ a year ago:

[http://www.engadget.com/2013/01/07/amd-temash-kabini-richlan...](http://www.engadget.com/2013/01/07/amd-temash-kabini-richland-kaveri-apu/)

~~~
sliverstorm
I mean, so close to the release announcement. I swear, sometimes I hear
"Such-and-such a product has been released!" and then you can't buy it for 6
months.

