
A supercomputer at home - theoneill
http://badrix.wordpress.com/2008/07/14/a-supercomputer-at-home/
======
hugh
That was confusing and badly written ("This was not the first time I get
entrepreneurial ideas."), so I've read it several times and I still can't
figure out what he's on about.

Is his idea that somebody might want to pay to use his three-CPU,
inhomogeneous, linked-by-ethernet cluster of useless old machines?

Or does he want to expand it to a bigger inhomogeneous cluster of useless old
machines, until he figures out that he's going to need a much bigger air
conditioner (somewhere around the 16 CPU mark)?

~~~
gaius
Realistically the compute-density of anything but made-for-purpose kit at the
moment makes clustering desktop PCs at any scale economically infeasible. We
think in units of computational power per kilowatt (including cooling) and per
tonne of floor loading nowadays.

The only play in the desktop space is hoovering up unused cycles on desktops
that an organization happens to own anyway, primarily for interactive use.

Also, FORTRAN's strong presence in numerical computing is very, very common
knowledge. Someone who learnt that only yesterday is unlikely to have much
experience with large-scale computation... certainly not enough to pull off
the democratization of supercomputing by himself.

------
bbgm
The home-built "cluster" is nothing new. A lot of academics have been doing
this for years, sometimes in their labs, sometimes at home. The small cluster
is pretty much dead (as per Chris Dagdigian). If I wanted to do a lot of
number crunching, the way to go would be a dedicated 8-core machine, likely
with some kind of accelerator (GPU or other). If I wanted more juice, it gets
tricky: a serious number-crunching cluster needs fast interconnects, etc.,
which are not cheap.

The fact that he is not familiar with how much parallel code is written in
Fortran is a little troubling. I wonder how much he knows about Infiniband,
high-performance storage, cooling, etc., the tricks that lead to real
high-teraflops to sub-petaflop performance, because if I am not getting that,
then his offering has limited use.

------
Tichy
2 P3s and 1 P4 is not a supercomputer.

I bought a quad-core PC with 4GB RAM for 600€ from Dell recently. At that
price, the time spent gluing your own rotting hardware together is a very
questionable investment.

If really interested in "Supercomputing", I suppose the Cell processor (PS3)
or GPUs are the way to go...

Also, energy consumption might be important. Probably the old CPUs don't
really look so good in that respect.

~~~
manvsmachine
I totally agree. He's trying to retrofit hardware that is ridiculously
outdated for this purpose. And why bother, when a few thousand dollars will
get the average Joe a good-sized PS3 cluster, which has proven effective for
near-supercomputing applications?

------
martey
This article was somewhat interesting, but I have to wonder whether building
your own cluster at home would be more cost-effective (considering paying for
hardware replacement, electricity, etc.) than just using Amazon S3/EC2.

Also, if I was one of the author's prospective customers, I would be more
likely to put my trust in Amazon than some guy with 3 computers in his
basement.

~~~
SingAlong
Maybe he has a plan to come up with something like Engine Yard. Every great
drawing starts with a dot.

------
iamwil
I've had the inclination to do this sort of thing, but the more hardware you
have, the more likely it is that some component on some machine will fail. As
one person, you might end up spending all your time replacing or throwing
away broken parts just to keep the cluster up.

The other thing is paying for the electricity of a couple of boxes running
all the time, regardless of how slow they are at any larger scale.

This sort of thing is good as a learning experience, imo, but if one's hoping
to scale it up, expect to run into lots of infrastructure problems along the
way.

------
vizard
I wonder if a GPU might be more suitable for his task. Even the 8800GTX is
known to do single-precision FFTs at more than 55 Gflops, which is an order
of magnitude more than even contemporary CPUs, let alone a P4.

~~~
DarkShikari
One danger of the term "gigaflop" is how it is measured, and what you use to
measure it. Also note that CPUs get a lot better when you start using SIMD
code instead of scalar.

One classic example of the danger of the word "gigaflop" is the exhaustive
motion search. If we define a single mathematical operation as a "flop"
(technically an iop, since this is integer math), then using Sequential
Elimination, an optimized exhaustive-search algorithm, an 8-core Core 2
system can crank out over 2.7 teraflop-equivalents of processing.
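To make that op-counting concrete, here is a rough sketch (my own Python, not anyone's actual encoder code) of how a single 16x16 SAD, the inner loop of an exhaustive motion search, is typically counted:

```python
def sad_16x16(a, b):
    """Scalar reference: sum of absolute differences over a flattened 16x16 block."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Counting each subtract, abs, and add as one "flop" (really an iop):
# 256 subtractions + 256 absolute values + 255 additions per block.
OPS_PER_SAD = 256 + 256 + 255  # 767

def iop_equivalents(sads_per_second):
    """Convert a SAD throughput into 'teraflop-equivalents' under that convention."""
    return sads_per_second * OPS_PER_SAD
```

Under this convention, a 2.7 teraflop-equivalent figure corresponds to roughly 2.7e12 / 767, about 3.5 billion SADs per second.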

~~~
vizard
Sorry for replying a bit late. For FFTs, flops are measured in a standardized
way: for an FFT of length N, the flop count is taken to be 5 N log N no
matter how the FFT is actually computed. So in the case of the FFT, you
really just specify a length N and measure the time.
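A minimal sketch of that convention, using a toy pure-Python radix-2 FFT (the function names are mine, not from any library):

```python
import cmath
import math
import time

def fft(x):
    """Toy recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def measured_gflops(n=4096, reps=5):
    """Rate an FFT by the standard 5*N*log2(N) convention, not by actual ops executed."""
    x = [float(i % 7) for i in range(n)]
    start = time.perf_counter()
    for _ in range(reps):
        fft(x)
    elapsed = time.perf_counter() - start
    return (5 * n * math.log2(n) * reps) / elapsed / 1e9
```

A pure-Python FFT will of course score far below FFTW; the point is only that the numerator is fixed by N, so any implementation can be compared on the same scale.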

For CPUs, using FFTW (one of the fastest FFT libraries, and one that does
take advantage of SIMD), the numbers usually do not exceed 5-6 gflops,
particularly for larger lengths.

OTOH, the above 55 gflops figure is also somewhat misleading, since it does
not include the transfer time of data between RAM and GPU. Actual throughput
is somewhere around 20 gflops. On one particular project using FFTs, I got
around 15 gflops on a GPU including transfer time, while testing several FFT
libraries I never got above 3 gflops on a 2.4GHz quad-core using all four
cores. The lengths were big enough not to fit into cache, which reduces CPU
performance considerably.

------
biohacker42
Supercomputers at home will be common soon. Intel is going down the
multi-core road; I'm typing this on a machine with two 4-core CPUs. Soon
we'll have not just 8 but 16 and then 64, etc., cores in a sub-$2K machine
you can buy from Dell or HP or whoever.

~~~
gaius
Arguably supercomputers at home _are_ common, and have been for years. How
does your machine stack up against a Cray Y-MP?

(That Cray got 330 megaflops per core; Macs are peaking at around 1400 per
core, each machine with 8 of them.)
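The arithmetic behind that comparison, taking the per-core figures quoted above at face value, is simple enough:

```python
# Peak figures as quoted in the comment above; both assume all 8 cores busy.
cray_ymp_mflops_per_core = 330
mac_mflops_per_core = 1400
cores = 8

cray_peak = cray_ymp_mflops_per_core * cores  # 2640 Mflops
mac_peak = mac_mflops_per_core * cores        # 11200 Mflops

print(f"Mac/Cray peak ratio: {mac_peak / cray_peak:.1f}x")  # prints "Mac/Cray peak ratio: 4.2x"
```

Peak numbers flatter both machines, of course, but on paper the desktop wins by about 4x.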

~~~
wmf
The term "supercomputer" becomes meaningless unless its definition is
periodically revised. IMO these days you need over 1 TFLOPS to qualify.

------
Anon84
You might also find this tutorial interesting

<http://www.linux.com/articles/49654>

------
lupin_sansei
If he just wants to make money from old computers, he could install MythTV on
them, add tuner cards, and sell subscriptions to a remote MythTV box. People
overseas might pay for that.

Or break the machines up and sell each part on eBay?

------
rw
"who would want a sumpercomputer you might ask?"

 _I_ want a sumpercomputer!

------
SingAlong
Hmm... Interesting.

Just a couple of days back I was wondering if I can run my web apps from home
with a P3 and a P4. The max internet speed in my city is 2 Mbps. Will it be
enough if I have two 2 Mbps connections and bridge them? Is that enough to
run a Twitter clone for my classmates (and also my news app)?

But security is what bothers me. I plan to use Linux, and I have been using
Slackware on my desktop for a month (I refer to the docs a dozen times a
day). Is deep knowledge of Linux internals necessary to run a web server?

~~~
Tichy
Go for it.

------
wmf
P3s and P4s? Congratulations, you're killing the planet.

