

$99 Raspberry Pi-sized “supercomputer” touted in Kickstarter project - suprgeek
http://arstechnica.com/information-technology/2012/09/99-raspberry-pi-sized-supercomputer-touted-in-kickstarter-project/

======
archangel_one
Ugh. They keep using the term "gigahertz", and it does not mean what they
think it means. Having 16 800MHz boards does _not_ mean that you have anything
running at 13GHz; it's nonsensical to combine it that way, as any high school
science student could tell them.

This doesn't necessarily invalidate the product, but it really does put me off
it. If they're going to market a product like that it's going to be to a
pretty techy crowd and they should get their terminology right.

~~~
UnoriginalGuy
While I agree that 16x800 MHz is more accurate than 13 GHz, I wouldn't say
they got the terminology "wrong."

A lot of companies use combined totals to describe how fast/big/etc.
something is. It isn't even a recent development.

Just for one example, if you go to Dell's store right now you'll find
computers with 8 GB of RAM, but what they fail to tell you is that that's 2x4
GB sticks rather than a single 8 GB stick. So it is actually the "total" RAM
in the machine.

Now you might argue that for technical reasons there is a difference between
how RAM and CPU cores are exposed to the underlying software, but that in
itself is only true in some infrastructural designs and not in others
(e.g. sometimes two cores pretend to be one, sometimes a single core pretends
to be two).

~~~
wheels
The RAM example is terrible. Accumulating RAM is application-transparent.
Processor accumulation is not.

Two small shovels aren't the same as one big shovel, but two small bags of
dirt are basically the same as one big bag of dirt. If you're going to
continue with the memory analogy, this would be like combining the graphics
memory and general purpose RAM into a single figure.

~~~
backprojection
So a better analogy would be to quote total FLOPS.
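Unlike clock rates, peak FLOPS figures do combine across cores: the usual convention is cores x clock x FLOPs issued per cycle per core. A minimal sketch in Python (the 2 FLOPs/cycle figure is an illustrative assumption, e.g. a fused multiply-add unit, not a spec quoted in the article):

```python
# Peak-FLOPS arithmetic as vendors quote it. The flops_per_cycle
# value below is an illustrative assumption, not a published spec.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak = cores x clock (GHz) x FLOPs per cycle per core."""
    return cores * clock_ghz * flops_per_cycle

# 16 cores at 0.8 GHz, assuming 2 FLOPs/cycle:
print(peak_gflops(16, 0.8, 2))  # prints 25.6
```

Note this is a theoretical ceiling; sustained throughput depends on memory bandwidth and how well the workload parallelises.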

~~~
addandsubtract
It would've been better to call it a hexa-core 800 MHz machine.

~~~
flurpitude
Isn't hexa-core six cores? Whereas this has sixteen.

~~~
archangel_one
Yes, probably it should be a hexadecicore or something along those lines.

~~~
Arelius
I pause at the term "hexadecicore", and as we go up it just gets more
complicated... what would you call a 32-core machine? That seems like an
overly complicated and confusing approach to the problem.

~~~
archangel_one
Well, I was obviously playing on "hexadecimal" which isn't great itself; it's
a mish-mash of Latin and Greek, and should probably be "deca" instead of
"deci" anyway. Regardless, it can't be worse than those stupid resolution
acronyms like WUQXVGA which are, for some inexplicable reason, still managing
to survive.

------
EwanToo
It's just misleading nonsense to call this a supercomputer, with or without
the quotes around it.

I dug out this article from 2007 about building a 26 gigaflops "supercomputer"
called the Microwulf Supercomputer which cost under $2500 [1].

And according to Intel [2], the Intel i5-650 processor can do 25.6 GFLOPs.

This is just the worst kind of kickstarter marketing.

1 - <http://www.geekologie.com/2007/09/supercomputer-does-26-gigaflop.php>

2 - <http://download.intel.com/support/processors/corei5/sb/core_i5-600_d.pdf>

~~~
bryanlarsen
There are GPUs that can do a teraflop, so yes, 26 gigaflops is terribly
unimpressive. There are some areas that can't be easily handled by a GPU, but
those are mainly bandwidth-limited, which wouldn't be improved by this
architecture.

The defining feature of this proposal is their 70 gigaflops/watt number, which
is impressive. But that doesn't make it a suitable target for hobbyists.
Hobbyists who want to play with supercomputing for cheap should just use their
existing video card.

~~~
astrodust
Even a relatively commodity card costing $100 can do 1 teraflop, with the
higher-end cards already pushing beyond 8 teraflops.

The days of a gigaflop being an impressive unit of measurement are surely
over.

------
chubot
I don't get why it has only 1GB of RAM to go with all those cores. Doesn't
that drastically limit the potential applications?

It doesn't sound like they have compelling applications in mind -- they're
kind of throwing it out there to see what people will do with it. But there
are only a few things that need so much computation and so little memory.
Even a lot of scientific computing today is more like "big data" than "big
compute".

Also, it seems obvious that you'll want to use more than one together, so some
info about network connectivity would be useful.

~~~
UnoriginalGuy
It is a compute device. I presume the use would be any scenario where you want
a very fast, very concurrent calculation engine, but one that doesn't depend
on an extremely large data set.

One random example I can think of would be cracking encryption and/or hashes
(without the use of rainbow tables or similar).

Your entire comment essentially seems to be "I don't think a usage scenario
exists, because I cannot think of any, so why didn't they make it a 'large
data' device instead of a compute device?!", which seems more a limitation of
yourself than of the device itself.

~~~
chubot
No, my comment is: what are some of those applications? If your answer is that
it's for running quadratic or exponential algorithms on medium-size data sets,
then that explains a lot.

The relative scarcity of applications explains why they are using Kickstarter
for funding. If it had more applications, people with deep pockets would be
falling over themselves to give them money.

There's nothing wrong with this -- the page is just so full of marketing-speak
that it's hard to tell.

------
ukoki
Sounds good. I have an issue with the "45GHz" claim though - it's kind of like
saying your collection of cars can go 2000 mph - you're not actually getting
anywhere faster.

~~~
abcd_f
The car analogy is off.

Going from A to B is exactly the kind of task that can't be solved in
parallel. On the other hand, if you merely needed to cover N miles in total,
then a pool of cars would work just fine.

~~~
josephlord
Many computing problems can't be parallelised, and in almost all of them there
are dependency and synchronisation issues which mean a single double-speed CPU
is almost always better than two normal ones. An exception might be that when
a buggy programme gets locked in a tight loop, it can only take half the
resources.

To use another car analogy: quoting clock speed is like quoting the cars' max
RPM instead of actually measuring performance. And then they have totalled the
max RPM figures as if they had a 90,000 RPM engine.

~~~
alexchamberlain
Conversely, there are many problems, most notably in the low-level
mathematics, which can be parallelised. Matrix multiplication for example...
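Matrix multiplication parallelises because each output row depends only on the corresponding row of A plus all of B, with no dependencies between rows. A toy sketch using only the Python standard library (a thread pool here purely to show the data independence; CPython's GIL means real speedup would need processes or native code):

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_row(row, b):
    # One output row: needs only this row of A plus all of B.
    return [sum(a_ik * b[k][j] for k, a_ik in enumerate(row))
            for j in range(len(b[0]))]

def parallel_matmul(a, b, workers=4):
    # Rows are independent, so they can be farmed out to a pool
    # with no synchronisation between tasks.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: matmul_row(row, b), a))

print(parallel_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```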

~~~
josephlord
Yes, there are infinitely* many problems that can be parallelised, but even in
the best case two processors will only match a single processor running at
double the speed.

[http://home.wlu.edu/~whaleyt/classes/parallel/topics/amdahl....](http://home.wlu.edu/~whaleyt/classes/parallel/topics/amdahl.html)

I should have referenced Amdahl's law in the previous post, but I didn't want
to misspell it and couldn't be bothered to look it up.

*Infinitely many problems exist with both parallel and serial computing requirements. In practice there will almost always be some additional cost to a parallel implementation, even if it is insignificant, such as higher start-up cost and coordination at shutdown.
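Amdahl's law in one line, for concreteness: with serial fraction s, the speedup on n processors is 1/(s + (1-s)/n), capped at 1/s no matter how many processors you add. A quick Python illustration:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: speedup = 1 / (s + (1 - s) / n)."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_procs)

# Even a 5% serial portion keeps two processors short of 2x...
print(amdahl_speedup(0.05, 2))       # ~1.905
# ...and caps any number of processors at 1/s = 20x:
print(amdahl_speedup(0.05, 10**9))   # ~20.0
```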

~~~
mclarkk
Amdahl's Law did throw a damp cloth over the fire of enthusiasm about parallel
computing...for a while. But then we got Gustafson's Law:

<http://en.wikipedia.org/wiki/Gustafson%27s_Law>

Gustafson and Barsis pointed out that Amdahl assumed a fixed input size. If
instead you grow the problem size along with the number of processors, the
speedup from parallelism grows indefinitely. Of course, like Amdahl, they
assume perfect load balancing and don't factor in communication overhead. But
parallel still beats the pants off of serial if we want to keep solving larger
and larger problems.
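Gustafson's scaled speedup is S(n) = n - s(n - 1), where s is the serial fraction of the scaled workload. A quick sketch showing it growing with n rather than flattening at Amdahl's 1/s bound:

```python
def gustafson_speedup(serial_fraction, n_procs):
    """Gustafson's law: scaled speedup = n - s * (n - 1)."""
    return n_procs - serial_fraction * (n_procs - 1)

# With a 5% serial fraction, scaled speedup keeps growing with n
# instead of saturating at 20x as the fixed-size Amdahl bound would:
for n in (2, 16, 1024):
    print(n, gustafson_speedup(0.05, n))  # ~1.95, ~15.25, ~972.85
```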

------
ajdecon
Calling this a "supercomputer" is a misnomer (though it's a sexy one), because
it makes people expect real high performance. As many have pointed out, a
single GPU could beat this thing easily in terms of performance. It's really a
dev platform for shared-memory parallel computing, mimicking some types of
supercomputer architectures... which isn't to say it's not awesome.

I've seen similar projects in the distributed-memory space (tiny clusters like
Limulus[1], MicroWulf[2], or LittleFe[3]). These things are great for
educational purposes, and the low cost makes them a lot of fun for classes and
workshops. LittleFe, for example, supports distributed-memory and GPGPU
programming and teaches you about clusters, and the educational program at the
Supercomputing conference lets some educators build one and take it home for
free.

But for professional work, I think a workstation-class PC, maybe with a GPU,
is always going to win out.

[1] <http://limulus.basement-supercomputing.com/>

[2] <http://www.calvin.edu/~adams/research/microwulf/>

[3] <http://littlefe.net/>

------
norswap
> The Kickstarter page went live today. A pledge of $99 guarantees supporters
> a 16-core board by May 2013, while a pledge of $499 guarantees delivery by
> February. The current hardware is in the prototype phase.

 _sigh_ There are no guarantees on Kickstarter.

------
jlebrech
So if those boards have the same performance as an i5 CPU (25 GigaFlops), can
they be clustered together to reach a higher total? Say, would 10 of them
produce 250 GigaFlops?

------
femto
Has anyone tried building something with a GreenArray [1]? 144 computers per
chip, $20 per chip with an MOQ of 10. I'm not affiliated with them; it just
sounds like an interesting chip, and I'm curious whether there is a reason not
to buy some for experimentation.

[1] <http://www.greenarraychips.com/>

------
s_henry_paulson
Hopefully not a silly question, but would it make sense to mine bitcoins with
something like this, or are GPUs still more effective?

~~~
EwanToo
This has less floating-point power than a single mid-range Intel processor,
let alone the high-end GPUs people use for bitcoin mining.

~~~
wmf
Bitcoin mining is pure integer, actually.

------
duskwuff
Looks like their prototype is based on a Xilinx Zynq dev board from Digilent:

<https://www.digilentinc.com/Products/Detail.cfm?Prod=ZEDBOARD>

No idea what's hiding on the FMC daughterboard, though.

------
ralph
See also <http://news.ycombinator.com/item?id=4583263> for comments, though it
links straight to Kickstarter rather than to Ars Technica.

------
icelancer
Interesting, but a laughably high funding goal. No chance they hit $750k.

------
swansong
Absolutely loving the 4MB jpeg on the homepage.

Seriously though, this sounds amazing :)

------
cgayle
For $99, I would give this a try.

