

Why are we still using CPUs instead of GPUs? - why-el
http://superuser.com/questions/308771/why-are-we-still-using-cpus-instead-of-gpus

======
tfm
I find this a really interesting area.

As a programmer rather than a gamer, I'd tended to have this background
awareness of GPUs getting more and more powerful, and saw all the Amazing GPU
Stunts for {SETI,Folding,..}@home or Crack Your Neighbours' WiFi Using Just A
Perlin Renderer. I had this vague idea that the graphics card was this generic
powerhouse that just needed a little love to get going. The C-like syntax of
your typical GL shader language is particularly encouraging in this regard
(not to suggest for a moment that Looking Like C is a generally positive
attribute).

Recently I have been giving it a little further inspection, and of course it
is not quite so cut and dried. Given the Single Instruction, Multiple Data
(SIMD) nature of the hardware, you land somewhere pretty good, roughly between
a programmable DSP and a Full Gory Petaflop Gigacluster.

BUT. You get:

- No recursion, but maybe we can just be careful and collapse everything into
loops (see the first sketch after this list).

- No memory management (yay, maybe), but that makes sense since we're trying
to execute everything in parallel. Perhaps we can squeeze our dynamic
programming into a local 4x4 matrix? (Second sketch below.)

- Expensive branching, particularly if it's data-dependent branching which
goes different ways in different kernels (third sketch below). This definitely
makes sense if you think of the GPU as a hungry horde of soldiers marching
along in lockstep, slaying the data; then somebody has to stop to handle an
edge case and everyone else waits around ... I'd kinda envisaged it more as an
N-microthread model, with a bunch of workers churning through vertices and
fragments as they become available. Probably it is time to take a break from
web services until I get over this, or develop an SOA GPU In The Cloud
(GPUaaS?).

- &c. &c.
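
For the recursion point, here's a minimal sketch of the usual workaround.
GLSL (certainly the GLSL ES you get with WebGL) simply refuses recursive
calls, so anything recursive gets rewritten with explicit loop state, and a
fixed upper bound on the loop is the price of admission. The names and the
bound here are mine, not from any particular codebase.

    // A recursive fib(n) will not compile in GLSL: recursion is forbidden.
    // The same computation collapsed into a loop with explicit state.
    // GLSL ES 1.00 also wants the loop bound known at compile time,
    // hence the fixed trip count plus an early break.
    const int MAX_ITER = 32;

    int fibIterative(int n) {
        int a = 0;
        int b = 1;
        for (int i = 0; i < MAX_ITER; ++i) {
            if (i >= n) break;
            int next = a + b;
            a = b;
            b = next;
        }
        return a;   // fib(0) = 0, fib(1) = 1, fib(5) = 5, ...
    }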
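
And for the memory point: scratch space in a shader is whatever fixed-size
locals you declare up front, so the dynamic programming really does get
squeezed into something 4x4-ish. A toy sketch (assuming desktop GLSL or
GLSL ES 3.00, where local arrays can be indexed with a variable), building
one row of Pascal's triangle in place:

    // No malloc, no free: the DP table is a fixed-size local array whose
    // length is baked in at compile time. Computes C(n, k) for n < MAX_N.
    const int MAX_N = 8;

    float binomial(int n, int k) {
        float row[MAX_N];
        for (int i = 0; i < MAX_N; ++i) row[i] = 0.0;
        row[0] = 1.0;                       // row 0 of Pascal's triangle
        for (int i = 1; i < MAX_N; ++i) {
            if (i > n) break;
            // update the row in place, right to left
            for (int j = MAX_N - 1; j > 0; --j) {
                row[j] += row[j - 1];
            }
        }
        return row[k];
    }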
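
The branching point is the one worth internalising. Everything in a SIMD group
marches together, so a data-dependent if means some lanes idle while others
work; the common dodge is to compute both sides and blend, which looks
wasteful and is frequently faster. A rough sketch (the shading maths is made
up, the shape of the transformation is the point):

    // Divergent version: fragments that take different sides of this
    // data-dependent branch serialise within their SIMD group.
    vec3 shadeDivergent(vec3 color, float luminance) {
        if (luminance > 0.5) {
            return pow(color, vec3(2.2));   // "expensive" path
        }
        return color * 0.5;                 // "cheap" path
    }

    // Branchless version: both paths are evaluated for every fragment and
    // blended with step()/mix(), so the whole group stays in lockstep.
    vec3 shadeBranchless(vec3 color, float luminance) {
        vec3 expensive = pow(color, vec3(2.2));
        vec3 cheap     = color * 0.5;
        return mix(cheap, expensive, step(0.5, luminance));
    }

Whether the branchless form actually wins depends on the hardware and on how
coherent the data is, which is exactly the sort of judgement call that makes
this feel like a new language rather than a new library.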

So, obviously there are some tasks which are well suited for this, and idioms
which are well supported. Other tasks and other habits don't fare very well at
all. Maybe an amazing recursive raytracer which was optimised for C is not
going to survive a transition to GLSL without some very clever trickery, but
perhaps there are subtasks which can be handed off instead.

The very interesting work being done on WebGL (WebKit, Firefox, Opera
thankyouverymuch) seems to have the potential to bubble the GPU to the surface
where it can be used and abused with a very small activation energy. I think
it would be nice to have some shader skills in the toolkit just in case the
right problem comes along.
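
To give a sense of how small that activation energy is, a complete WebGL 1
fragment shader is about this big (the uniform names are whatever your
boilerplate passes in; these are just my guesses at the usual ones):

    // GLSL ES 1.00 fragment shader for WebGL 1: a time-varying gradient.
    // Runs once per pixel, all pixels in parallel.
    precision mediump float;

    uniform vec2  uResolution;   // canvas size in pixels
    uniform float uTime;         // seconds since page load

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        vec3 color = vec3(uv, 0.5 + 0.5 * sin(uTime));
        gl_FragColor = vec4(color, 1.0);
    }

Compile it, attach it to a full-screen quad, and you have a GPU program
running in a browser tab.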

TL;DR: GPU programming is a lot more like learning a new programming language
than I thought.

------
omi
Because it works in all cases to sell the product, you dummy.

