

Virginia Tech unveils HokieSpeed, a powerful new supercomputer for the masses - alexobenauer
http://www.vtnews.vt.edu/articles/2012/01/010512-engineering-hokiespeed.html

======
hiptobecubic
I don't understand these articles that assume you understand what "two
2.40-gigahertz Intel Xeon E5645 6-core... and two NVIDIA M2050/C2050 448-core
.... which reside on a Supermicro 2026GT0TRF motherboard" means, but don't
already know what "CPU" and "GPU" stand for.

------
yardie
As a Hokie, this is good, but SystemX was way more revolutionary. At the time,
supercomputers weren't built on consumer hardware, and the fact that it hit the
top 5 (3?) with a bunch of G5s was like a shockwave. Basically, COTS
supercomputers had finally arrived when SystemX came out.

~~~
wmf
Too bad System X was a publicity stunt that was never used for science AFAIK.

~~~
yardie
I left in '03, so AFAIK the researchers using SystemX were coming from ICAM and
the Bioinformatics department. The professor I worked under (as a gopher) had
a 4-way G4 cluster doing protein synthesis. He was very excited to get
processor time on this supercomputer.

But whatever, some publicity stunt. They were apparently stupid enough to waste
money upgrading and expanding it while "no one was using it."

------
FeministHacker
A bit of engineer's rough estimating:

The system was built in 2003, 8 years ago. Assume Moore's law of performance
doubling every 2 years: expected improvement of 16x. Actual increase in peak
performance: 22x, in 1/4th the size.

Given that HokieSpeed also occupies a quarter of System X's footprint, that's
an amazing increase in peak performance.
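That estimate can be checked in a couple of lines (years and doubling period as
above; the 22x figure is the article's reported peak-performance increase):

```python
# Rough Moore's-law check: System X (2003) vs HokieSpeed (2011).
years = 2011 - 2003        # 8 years between the two systems
doubling_period = 2        # assume performance doubles every 2 years
expected = 2 ** (years / doubling_period)   # 16x expected
actual = 22                                 # reported peak-performance gain
print(f"expected {expected:.0f}x, actual {actual}x")
```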

There's only one problem - that speed increase appears to owe a lot to the use
of GPGPU. As I understand it, whilst research into GPGPU for HPC* is a hot
area at the moment, the scale of the actual benefits it offers is still a
matter of debate (especially when considering costs and power consumption).

~~~
scott_s
From my perspective, the biggest limitation in using GPUs for more general
purpose computations is the communication latency. I published a paper that
came to that conclusion:
<http://people.cs.vt.edu/~scschnei/papers/debs2010.pdf>

In short, parallelism is not enough to get benefit from using GPUs. You need
parallelism _and_ data reuse.
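A back-of-envelope model of that point (the numbers are illustrative stand-ins,
not figures from the paper): offloading only pays once each byte shipped over
PCIe is reused enough times on the device.

```python
# Toy model: GPU offload pays off only with enough data reuse.
# Illustrative numbers: ~PCIe 2.0 x16 effective bandwidth and the
# double-precision peak of a Fermi-class M2050 (515 GFLOPS).
pcie_bandwidth = 8e9     # bytes/s over the host-device link
gpu_flops = 515e9        # double-precision operations/s
data_bytes = 1e9         # 1 GB shipped to the device

transfer_time = data_bytes / pcie_bandwidth
ratios = {}
for flops_per_byte in (1, 10, 100):   # arithmetic intensity = data reuse
    compute_time = data_bytes * flops_per_byte / gpu_flops
    ratios[flops_per_byte] = compute_time / transfer_time
    print(flops_per_byte, round(ratios[flops_per_byte], 2))
```

With these made-up numbers, compute time only exceeds transfer time at roughly
65 flops per byte, which is the sense in which parallelism alone is not enough.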

------
dfc
I am not trying to be difficult but how is a $1.4M anything for the masses?
Are they allowing the public to run jobs on the machine? Is there anything new
about HS that brings SC500 power closer to my home office?

~~~
jrappleye
If by 'SC500' you mean a supercomputer that has a high ranking on the Top500
list, it's already here:
<http://arstechnica.com/business/news/2011/11/amazons-cloud-is-the-worlds-42nd-fastest-supercomputer.ars>

~~~
dfc
I meant how does the VT system bring SC500 to the masses. As in the
"supercomputer for the masses"?

~~~
jrappleye
Sounds like they're interested in improving GPU development tools, and how to
utilize them for tasks they're good for versus what a traditional CPU is good
for. From <http://www.wired.com/wiredenterprise/2011/12/vt-supercomputer/>:

"The Virginia Tech team is figuring out how to best to farm out computing jobs
so that the GPUs and CPUs do what they do best, without ever going idle, and
without spending too much time communicating with one another.

It’s not easy, but they’re using HokieSpeed to build tools for designing and
compiling software so that it can be tweaked to run fast on these systems.
They’ve also built what they call an “automated runtime system,” which works
with the supercomputer’s operating system to speed things up even further."

I think the 'supercomputing for the masses' part comes from the fact that the
peak performance of a relatively low-cost GPU is large compared to a CPU, but
actually taking advantage of that performance is still difficult. They're
aiming to change that with improved software development tools. At least,
that's what I gathered from the article.

Here's a link to research they're doing on GPU computing:
<http://synergy.cs.vt.edu/>
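The work-farming idea in that quote can be caricatured with a shared work
queue; this is just a toy sketch with made-up per-task times, not VT's actual
runtime system:

```python
# Toy heterogeneous scheduler: a "CPU" worker and a "GPU" worker pull
# from one shared queue, so neither sits idle while tasks remain.
import queue
import threading
import time

tasks = queue.Queue()
for i in range(10):
    tasks.put(i)

results = []
lock = threading.Lock()

def worker(name, seconds_per_task):
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return                       # queue drained, worker exits
        time.sleep(seconds_per_task)     # stand-in for real work
        with lock:
            results.append((name, task))

# pretend the GPU is 4x faster on these particular tasks
threads = [threading.Thread(target=worker, args=("cpu", 0.04)),
           threading.Thread(target=worker, args=("gpu", 0.01))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))   # every task ran, on whichever worker was free
```

The faster worker naturally absorbs most of the queue, which is the
no-idle property the quote describes.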

------
Yhippa
From Feng: “The next frontier is to take high-performance computing, in
particular supercomputers such as HokieSpeed, and personalize it for the
masses.”

This is one of my favorite things about technology. Just five years ago I was
drooling over an Intel C2D chip and looking forward to quad-core chips.
Nowadays that's standard technology in smartphones. I'm amazed by the graphics
performance of the new Asus Transformer Prime with the Tegra 3 chip, which has
up to five cores.

The biggest problem with packing more power into smaller spaces is battery
consumption. Maybe there's no silver bullet for that, only more efficient
software and hardware usage.

