
Graphics Chips Help Process Big Data Sets in Milliseconds - robabbott
http://www.technologyreview.com/news/520021/graphics-chips-help-process-big-data-sets-in-milliseconds/
======
digitailor
It's cool that it's essentially a consumer-participatory GPU database, but
every time I read about GPU crunching I'm underwhelmed. The power waste and
cost are absurd. FPGAs have become as capable as, or more capable than, GPUs
at far higher efficiency. I understand that this platform is
about technology accessibility... but having casual computer users burning
their graphics card 24/7 seems cute but irrelevant to current methods of
supercomputation. I find a lot of people who care about efficient natural
resource usage don't think twice about very wasteful computing. Those watts
are a whole lot of coal.

~~~
yetanotherphd
Someone correct me if I'm out of line here, but I think it would be
appropriate to mention your personal interest in FPGAs (which I discovered
from your comments lower down).

On the point itself, "accessibility" is probably understating the nature of
the problem. Actually being able to program for a platform, and how easily you
can do it, is a huge issue. Issues of "accessibility" apply as much to huge
companies with giant server farms (note how little even GPU computing is used
by them) as they do to tinkerers.

~~~
digitailor
You're not out of line, and I haven't tried to hide the fact that I'm now
professionally interested; in fact, that's all I'm blabbing about below. But
that's no coincidence: problem spaces like the one this cool project
addresses are exactly why I got into FPGAs. I rejected GPUs as a platform
completely for the reasons I list, and that's how I came to be interested in
FPGAs. I am not an EE and it was a hard decision to make the switch to
pursuing this, and I'm sharing that experience in the hope of
inspiring/meeting others. As well as trying to open myself up to input/advice
from people more experienced than I.

~~~
yetanotherphd
Fair point, as often happens I forgot to consider the possibility that your
preference for FPGAs caused you to get a job selling an FPGA platform and not
vice versa.

In any case, my own viewpoint is that CUDA and OpenCL are the best that huge
efforts by well-financed, technically sophisticated groups have produced. GPU
computing offers advantages in flops/watt and flops/dollar, and
yet is still not widely utilized because of the increased programming
complexity.

Given this, I think it will be very hard for a small group to compete using a
completely new architecture. On the other hand, you are starting with a blank
slate and you are controlling the entire stack, so I hope you can use that to
your advantage.

------
polskibus
I wonder how it compares to:
[http://wiki.postgresql.org/wiki/PGStrom](http://wiki.postgresql.org/wiki/PGStrom)

(in terms of architecture, not performance benchmarks)

------
robabbott
In addition to GPU, Cray has recently added the Intel Phi coprocessor to its
XC30 Cascade supercomputers
([http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-
newsA...](http://investors.cray.com/phoenix.zhtml?c=98390&p=irol-
newsArticle&ID=1860101&highlight=)). I think that this supports the argument
that certain problems are better handled on traditional processors than on
GPGPU platforms.

~~~
pbsd
The Xeon Phi resembles a GPU much more than it does a CPU. The original
project, Larrabee, was at one point meant to be a GPU that competed with
NVIDIA and ATI.

~~~
zurn
It's a multicore x86. It resembles GPUs only in the sense that it sits on its
own PCIe board and has a lot of cores, but the programming model is just x86.
It would have been very different from the NVidia/AMD competition if it had
ended up as a GPU.

One of the biggest obstacles to GPGPU exploitation is being at the mercy of
each vendor's proprietary software stack and the resulting fragmentation & lack
of openness. It's like the pre-PC era, without Unix... Larrabee might have
helped. Now Xeon Phi is a niche product.

~~~
pbsd
It is an x86 with very weak in-order cores, which instead have very large
(512-bit) vector units. Very much like an NVIDIA "streaming multiprocessor".
If you try to program the Xeon Phi anything like a general-purpose CPU, you
will not get within 5% of its peak --- the programming model is essentially
that of a GPU. The one thing that makes the Xeon Phi more general-purpose than
current GPUs is the cache-coherence across cores.

~~~
zurn
They are programmed the way shared memory parallel machines have been used
since time immemorial in the HPC world. Hundreds of cores is normal there. The
same Intel software stack (OpenMP, Threaded Building Blocks, parallel Fortran
etc) is used, same one that Intel markets for regular HPC.

Yeah, you have slower cores and the SIMD is twice as wide, but its _main
selling point_ is that you get to keep the regular programming model - unlike
with NVidia or AMD. Along with benefits that come with it (mature software,
openness, etc).

See e.g. Intel's Xeon Phi Programming Guide for an intro:
[http://software.intel.com/sites/default/files/article/330164...](http://software.intel.com/sites/default/files/article/330164/an-
overview-of-programming-for-intel-xeon-processors-and-intel-xeon-phi-
coprocessors.pdf)

------
justin66
Has the guy released any code yet?

~~~
robabbott
I haven't seen any releases yet. One article says he plans to open source it
within the next year.

~~~
tmostak
Hey, MapD author here (Todd Mostak). I'm working on a release now - hopefully
it will be out by the end of the year! Would really appreciate help from
anyone interested in the project - todd@map-d.com

~~~
peatmoss
As a side note, I was excited to see that there was some geospatial support in
there. That's certainly piqued my interest! How much of a focus was spatial
support?

