

End Of The Line for AMD FX CPUs - steve19
http://www.techpowerup.com/mobile/195355/vishera-end-of-the-line-for-amd-fx-cpus-roadmap.html
Now Intel has no competition for high end CPUs, which means price increases for us.
======
programminggeek
This is a good thing for the average computer, because the average machine will
have a discrete-class GPU on the CPU. That means potentially millions more
gaming rigs to sell game software on. AMD has done its best work over the
years when it pushed hard in an important area that Intel couldn't match (for
a while).

This will make the average CPU a better processor for more users.

------
loser777
This news isn't a huge surprise, but it still saddens me. It seems that I
started getting into x86 hardware just as the CPU battle/Moore's law started
dying.

My first real build had an Athlon 64 X2, and back then I foolishly assumed
that the closeness of the competition was going to last (this was just before
Conroe/Core 2 launched). I decided to upgrade to a Core 2 Quad a few years
later, and an Ivy Bridge chip about a year ago. None of these upgrades,
however, feel quite as potent as that first time when I switched from a
Prescott Celeron to that Athlon 64 X2.

Sure, synthetic CPU benchmarks show that Moore's law isn't quite dead yet, but
single-threaded performance is just not the battleground it was a decade ago. I
feel that somewhere along the line, people stopped pushing the envelope. Have
we gotten to the point where people have run out of ideas to push their
hardware with? Perhaps it's because physical limitations are being reached.
Still, I can't help but fantasize about what the world would be like if the
x86 market were as competitive as it was a decade ago.

~~~
alexwright
This is just a common misconception about what Moore said and what his law
means. Hint: it has nothing to do with clock speed, or even directly with the
performance of your CPUs.

_Moore's law is the observation that, over the history of computing
hardware, the number of transistors on integrated circuits doubles
approximately every two years._ That is still very much the case:
[http://en.wikipedia.org/wiki/File:Moore%27s_law_graph.svg](http://en.wikipedia.org/wiki/File:Moore%27s_law_graph.svg)

------
maaku
The article presents this like it's a bad thing. APU-like architectures are
the future of computing, and I'm glad that AMD will be focusing on them.

Now if only we had really good GPGPU frameworks to compile generic code to
OpenCL and make use of those extra cores for non-graphics, non-numerical
applications.

~~~
fiatmoney
Those extra cores are designed to be good for streaming parallel data
computations, not "generic code" - you can't even run a hash table
performantly on a GPU due to the random pointer accesses.

~~~
pandaman
Random pointer access is slow everywhere. However, massively parallel GPU
code will likely beat multi-threaded CPU code there.

A GPU can switch threads on a cache miss, something few CPUs can do. Even the
CPUs that can (this is called SMT; for example, popular Intel Pentium/Core
CPUs have at most 2 hardware threads per core, and Xeon Phi has 4) only switch
between 2-8 threads. GPUs have dozens of threads per core and, often, more
cores than CPUs. Combined with the fact that a GPU thread is much wider than a
CPU thread, this allows a GPU to saturate memory bandwidth even on
pointer-chasing code. The same code causes a CPU to stall and underutilize its
memory bus (as soon as the few threads hit a cache miss there is nothing to
do; software multithreading cannot switch between threads that are waiting for
data).

Of course, as I said, it's still not efficient even on a GPU (most of the
bandwidth is used to move unneeded data, since you only need one word from
each cache line), and you still need to write massively parallel code.
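
To make the pattern concrete, here is a minimal CUDA-style sketch (the kernel
name, array names, and launch parameters are all made up for illustration):

    // Each thread walks its own chain of dependent indices: pure pointer
    // chasing, so essentially every load misses cache. With ~1M threads
    // resident, the warp scheduler switches to another warp whenever a
    // load is outstanding, which keeps the memory bus busy anyway.
    __global__ void chase(const int *next, int *out, int steps)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        int idx = tid;
        for (int i = 0; i < steps; ++i)
            idx = next[idx];   // dependent load; this thread stalls, others run
        out[tid] = idx;
    }

    // chase<<<4096, 256>>>(d_next, d_out, 1000);  // about a million threads

The equivalent loop on a CPU with only a handful of hardware threads has
nothing to overlap those misses with, which is the stall described above.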

~~~
weland
> However, a massively parallel GPU code will likely beat multi-thread CPU
> code there.

Yes, but if you actually try to do it, you'll find that "massively parallel"
code can be written only for certain problems that lend themselves to being,
well, "massively parallel".

I've seen people struggling with GPGPUs ever since they appeared (at a certain
point in my career I was involved in heavy number-crunching research). Six
years later, there are still a lot of real-life problems that can't be put in
a GPGPU-useful framework; in fact, coming up with good models so that you can
reason about _whether_ a given problem would ever lend itself to being solved
like that is one of the more important areas of progress for the scientific
community.

Look, GPGPUs are cool and all, and useful for certain classes of problems, but
let's not do the whole Itanium thing all over again, please.

~~~
pandaman
Indeed, there are problems that cannot be solved in a parallel fashion. If you
are stuck with such a problem, you are screwed no matter what your
architecture is.

A whole lot of real-life stuff is either parallel or does not need heavy
computing. This is why people started building massively parallel computers
well before the last 6 years[0].

[0]
[http://en.wikipedia.org/wiki/History_of_supercomputing](http://en.wikipedia.org/wiki/History_of_supercomputing)

~~~
weland
> A whole lot of real-life stuff is either parallel or does not need heavy
> computing. This is why people started building massively parallel computers
> well before the last 6 years[0].

Exactly. A lot of the GPGPU pitch is easily refuted with some hindsight.

------
steve19
With no competition for high end desktop CPUs we can expect price increases.

~~~
voltagex_
and RAM, and hard drives, and graphics cards... I suspect 2014 will be my last
high-end desktop purchase.

~~~
melling
And in 2015 we will start replacing our desktops with convertible mobile
devices. Slide your tablet or phone next to a keyboard, mouse, and monitor and
voilà, you've got a desktop. I'm really hoping we're completely wireless on
the monitor too.

~~~
pekk
Assuming I am willing to fork over a lot more money and use much less powerful
hardware running a glorified phone OS, all for the sake of having a smaller
computer.

~~~
girvo
Dell Venue 8 Pro...

Already we have tablets running x86 with good enough battery life, more power
than their ARM counterparts, and a full OS with external hardware support.

Sure, it's not completely there yet, but considering it's $399 today for
something that is pretty damned close (close enough that my girlfriend just
ordered one to use for most of her primary computing, coupled with the active
digitizer for sketching; basically, a netbook), it's only a matter of time
before this is completely doable!

------
Cyph0n
Too bad. I recently built a PC and chose the 8-core AMD FX-8320 over the
4-core Intel i5. When it comes to multi-tasking, and even medium gaming, I
definitely haven't regretted the purchase.

~~~
zokier
I can't really imagine anyone regretting a mid-range CPU purchase these days,
no matter which one they picked.

Just to comment on your specific purchase: it would seem that your 8320 fairly
closely matches the corresponding Intel CPU (the 3470) in performance, with
AMD having a slight lead on multi-threaded benchmarks and Intel on
single-threaded ones. But the major difference between the two is that the
Intel CPU does this while consuming 50-ish watts less power (77W vs. 125W
TDP). Of course that doesn't matter to everyone, and TDPs are not really
directly comparable, but it is still something to note.

------
gaadd33
How does a leaked slide of their desktop roadmap indicate that they will only
make APUs now? Wouldn't it be likely that they will still make server
processors?

I wouldn't be surprised if the ultra-high-end desktop CPU market isn't that
large, so they are assuming that people who want that sort of power can just
buy server processors.

------
unethical_ban
I'm not sure this is the end of AMD, though.

Are they not going to make any server processors?

Can APUs not be used for high-end gaming? Can the GPU portion of the
processor not be looked at the same way FPUs used to be seen?

------
wila
Surprised everybody is in on the term APU, I had to look it up :)

[https://en.wikipedia.org/wiki/AMD_Accelerated_Processing_Uni...](https://en.wikipedia.org/wiki/AMD_Accelerated_Processing_Unit)

excerpt: _The Accelerated Processing Unit, formerly known as Fusion, is a
marketing name for a type of microprocessor from AMD designed to act as a CPU
and graphics accelerator (GPU) solution on a single chip._
So I'd personally say not a very big surprise.

------
anon4
I wonder if I'll be able to use the built-in GPU while having a discrete
nVidia graphics card. I like AMD processors (plenty fast for me, cheaper than
Intel), but I'll never stick an AMD graphics card in my machine again (yes, I
run Linux).

------
digitalzombie
No DDR4 RAM?

I heard it was coming out in 2014.

This is good; I can't wait for the HSA programming model.

~~~
wmf
Intel is getting DDR4 (only for servers I think) in 2014. AMD stays several
years behind Intel.

------
chipsy
I am using an AMD APU laptop right now, and I welcome this future.

~~~
kfcm
I'm curious about the APUs' handling of heavy floating-point work. I do a lot
of scientific/statistical computing work, and the FX series does have new
extensions for just this type of work.

Of course, the best of that series also comes with a 220W TDP.

~~~
Narishma
APUs (at least the desktop and laptop ones) use the same types of cores as the
FX series, so they should perform about the same.

------
zokier
Does anyone sell Opteron-based workstations? Xeon-based ones are common
enough, but I can't recall seeing AMD-based systems from any major OEM.

~~~
bluedino
Microway sells Opteron 6300-based systems (aka Piledriver), but it's really
been a while since AMD's chips have been competitive in per-core performance.

There was a while when people would choose AMD because they could get enough
more cores for the price that the performance would make up for it, and you
could get more memory in a system than in a similarly priced Intel one. But
those workloads weren't as common as AMD would have liked.

