
Parallella, A $99 Supercomputer Running Ubuntu - vilgax
http://www.ubuntuvibes.com/2012/09/parallella-99-supercomputer-running.html
======
montecarl
This is hyperbole. A supercomputer in 2012 is a computing platform that can do
at least tens or maybe hundreds of teraFLOPS. What is described here is a
dual-core ARM platform with some sort of vector-like co-processor.

Even a huge grid of these probably wouldn't qualify as a supercomputer.
Gigabit Ethernet has much too high a latency to be a viable interconnect for
tightly coupled parallel problems.

Don't get me wrong, it's neat and I'd love to see some benchmarks, but it's
not a SUPERcomputer.

~~~
pi18n
I'd like to see what instructions are available on the co-processors. It seems
like it would be great for physical simulations and other embarrassingly
parallel things. I guess there's some need for things in between regular CPUs
and expensive computing grids. Offhand, master's and PhD candidates might have
simulations for which this would be useful and for which other platforms would
be overkill or too expensive.

~~~
wmf
_I guess there's some need for things in between regular CPUs and expensive
computing grids._

Like GPUs?

~~~
dfc
Yes GPU-like devices _that consume 5 watts_.

~~~
wmf
If you cut down a GPU to only 64 "cores" and only 700 MHz, it may very well
consume only 5 W, e.g. AMD Brazos/Kabini.

~~~
dfc
Come on, are you being difficult on purpose? How are you going to boot the
graphics card? Access network or disk resources?

~~~
wmf
I'm pretty sure the Adapteva chip can't do that either; it requires a host
processor just like a GPU does.

~~~
vidarh
... which is on the board.

------
microtherion
As a rule of thumb, I avoid buying supercomputers from people who state their
performance figures in "GHz".

~~~
gavanwoolery
Definitely a good point, but maybe the project creators are trying to appeal
to a broader audience (one that only understands GHz as a measure of CPU
power)? I'm sure they know that GHz does not directly correlate to
performance. Just playing devil's advocate, I have nothing riding on their
project, but I would like to see someone shake up the chip industry (if
possible).

------
SamuelKillin
1\. "it's laggy running ubuntu - what's the big deal". It's running Ubuntu
with the dual-core ARM CPU on board, not the Epiphany chips. The guy is
demonstrating that the boards they are shipping allow for a user friendly
environment for which you can jump onto and use Eclipse to write code for the
multicore chips. This has NOTHING to do with the multi-core chips themselves,
and has no intent of demonstrating the power of the "supercomputer" part of
the board.

2\. "Why would you use this if you could just use a GPU - they're really
parallel right??" - GPUs are very very different beasts to CPUs. They are
great at what they do, but they are tailored for very specific problems. Look
up SIMD. A tonne of general purpose programs which need, for example, a simple
'if statement' quickly break down under SIMD.

3\. "This will be great for mining bitcoins" - yeah. but you can do it on a
GPU so stick to that. As far I can see (and why I backed the project), this
board will be great for those problems which are not immediately or easily
implementable as a wavefrontable algorithm for the GPU. I'm hoping you can
just write a c program utilising pthreads which will be run on the Epiphanies
cores
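
To make the SIMD point in (2) concrete, here's a toy sketch of my own (not
from the article or Adapteva's code). On a SIMD/GPU-style machine a
data-dependent branch usually gets predicated: every lane evaluates both
sides and the unwanted result is masked off, so a rare-but-expensive branch
costs far more than it would on an independent core.

    /* Toy illustration (mine, not theirs): why a data-dependent "if"
       is awkward under SIMD. */

    static float expensive(float x) {
        return x * x * x - x;          /* stand-in for a costly branch */
    }

    /* Scalar/MIMD version: each element only does the work its own
       branch actually needs. */
    void scale_scalar(const float *in, float *out, int n) {
        for (int i = 0; i < n; i++) {
            if (in[i] > 0.0f)
                out[i] = in[i] * 2.0f;     /* cheap path  */
            else
                out[i] = expensive(in[i]); /* costly path */
        }
    }

    /* What a SIMD machine effectively does: evaluate BOTH paths for
       every lane and select the wanted result with a mask. If the
       costly path is rare, most of that work is thrown away. */
    void scale_predicated(const float *in, float *out, int n) {
        for (int i = 0; i < n; i++) {
            float cheap  = in[i] * 2.0f;
            float costly = expensive(in[i]);
            out[i] = (in[i] > 0.0f) ? cheap : costly;
        }
    }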

------
anonymouz
"Making parallel computing easy to use has been described as "a problem as
hard as any that computer science has faced". With such a big challenge ahead,
we need to make sure that every programmer has access to cheap and open
parallel hardware and development tools."

But the real challenge is in parallelizing the algorithms, reducing data
dependencies, and so on. I can get my feet wet with parallel processing on a
multi-core PC just fine; making a program run efficiently in parallel is an
entirely different challenge, and I don't see how this platform can help me do
that.
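
As a toy illustration of what I mean (my own example; it has nothing to do
with this board in particular): the first loop below splits across cores
trivially, while the second has a loop-carried dependency and stays serial
until you restructure the algorithm itself.

    /* My own toy example of the algorithmic side of the problem. */

    /* Embarrassingly parallel: every iteration is independent, so the
       loop can be split across any number of cores with no
       communication. */
    void square_all(const double *in, double *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = in[i] * in[i];
    }

    /* Loop-carried dependency: iteration i needs the result of i-1, so
       the loop is serial as written. Making it parallel means changing
       the algorithm (e.g. into a parallel prefix sum), not the
       hardware. */
    void running_sum(const double *in, double *out, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++) {
            acc += in[i];
            out[i] = acc;
        }
    }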

~~~
forgottenpaswrd
"I can get my feet wet with parallel processing on a multi-core PC just fine"

No, you can't, unless you pay several orders of magnitude more than $99.
Affordable multi-core today means 2 or 4 cores at most. You can get your feet
wet with a graphics card, though. I did, and that's why I'm backing this
project.

Parallel computing is a different paradigm from serial; in fact it's almost
the opposite: instead of one big central memory, you program for small
distributed blocks of memory. Designing with that in mind can mean something
like 200x faster than not doing so.

Once you have a parallel design you can move it to different platforms, or
even to hardware (which is parallel by nature), very easily. But you need a
platform that is flexible (more so than FPGAs) and close enough to software
tools for easy testing, and this one is great for that.
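
As a rough sketch of what I mean by programming for small blocks of memory
(my own generic example, nothing Epiphany-specific; the tile size and the
memcpy-as-DMA are just stand-ins): you stream the data through a buffer sized
for a core's local memory instead of assuming one big flat address space.

    /* Generic sketch (mine, not Epiphany-specific): work on tiles that
       fit a small per-core buffer instead of one big shared memory. */
    #include <string.h>

    #define TILE 1024  /* pretend this is all the local memory a core has */

    void process_tiled(const float *in, float *out, int n) {
        float local[TILE];                    /* per-core scratch buffer */
        for (int off = 0; off < n; off += TILE) {
            int len = (n - off < TILE) ? (n - off) : TILE;
            memcpy(local, in + off, len * sizeof(float));   /* "DMA" in  */
            for (int i = 0; i < len; i++)                   /* compute   */
                local[i] = local[i] * 2.0f + 1.0f;
            memcpy(out + off, local, len * sizeof(float));  /* "DMA" out */
        }
    }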

------
neurotech1
This was covered by <http://news.ycombinator.com/item?id=4583263>

I'm cautiously optimistic the $99 board will succeed.

------
icelancer
It's interesting enough and I applaud the effort, but a $750k funding goal is
ludicrous. Places like Penny Arcade didn't even crack $600k with an infinitely
bigger audience.

~~~
wisty
$750k is peanuts to many of the people who will be interested in this. The
problem is, it's not a B2C sale, it's the kind of thing where you need a few
salesmen.

Hit up universities, research groups, Boeing, Ford, the NSA. Tell them it
won't just save them costs, but help train the next generation of modellers.

At the very least, they need resources (like a PPT deck) for _internal_
advocates to use.

------
army
The FLOPS/watt ratio might be higher than that of regular servers or desktop
machines ( _might_ be; it's not clear), but for a lot of small-scale homebrew
deployments it's not realistic to expect linear scaling (given the problems,
the algorithms, and the available development time and expertise) - in
practice, decent single-core performance is important.

I.e. in many cases, if you have the development and ops expertise to get
stuff to scale to many cores, then you probably have the budget for more
serious hardware.

------
rocky1138
I watched a 20-minute interview, watched a 45-second video of a person using
Ubuntu (performance was laggy, to be honest), and read numerous articles.

There's only one question unanswered, and it's the most important one: What
can we DO with this thing?

It's not about the hardware, it's about the software! Show me demos of things
that are not possible without this hardware and I'll be impressed. Show me how
this new $99 multicore solution will offer new experiences and I'll be
interested.

------
comex
I'm inclined to really like this, even if the CPUs aren't really open, but one
of the videos is a bit odd:

[https://dl.dropbox.com/u/1237941/vlcsnap-2012-09-29-01h48m13...](https://dl.dropbox.com/u/1237941/vlcsnap-2012-09-29-01h48m13s97.png)

Never mind the dubious use of pure C rather than SIMD instructions... why are
they doing benchmarks with a function that has all the arguments marked
volatile!?
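
For anyone wondering why the volatile part matters, here's a sketch of my own
(not their actual code): volatile-qualified parameters have to be re-read
from memory on every use and can't be kept in registers or optimized, so the
loop mostly times redundant loads and stores instead of the arithmetic.

    /* Sketch of the problem (mine, not their code). With volatile
       parameters the compiler must reload a, b and c from their stack
       slots on every use, so the loop measures memory traffic that no
       real kernel would generate. */
    float mac_volatile(volatile float a, volatile float b,
                       volatile float c, int n) {
        for (int i = 0; i < n; i++)
            c += a * b;               /* every access goes to memory */
        return c;
    }

    /* The same loop without volatile can be register-allocated (or even
       folded down by a decent compiler), which is exactly why a fair
       benchmark needs more care than this. */
    float mac_plain(float a, float b, float c, int n) {
        for (int i = 0; i < n; i++)
            c += a * b;
        return c;
    }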

~~~
wmf
This chip doesn't have SIMD; in theory that makes it easier to program.

------
ck2
Advantages over multi-gpu core?

Other than the obvious of running a standard OS...

~~~
forgottenpaswrd
"Advantages over multi-gpu core?"

Well, multi-GPU debugging is terrible. You need different cards (you can't
use the one that powers the display), and there is only one company that
counts there, Nvidia.

Nvidia is married to Microsoft, and the only intuitive tool you can use for
debugging is Windows-only, with no Mac or Linux support.

No UNIX support in a pro tool is a big no-no for me.

Another problem is that it evolved from graphics, so you have to use graphics
concepts whether you need them or not.

The good side of doing that is that we can take advantage of the economies of
scale of game tech to get good prices.

The bad side is that you can't use it as a stand-alone tool for whatever you
want, like chemistry or physics problems.

~~~
verroq
I take it you've never used CUDA or OpenCL before? Because everything you said
is complete bullshit.

>Well, multi-GPU debugging is terrible. You need different cards (you can't
use the one that powers the display), and there is only one company that
counts there, Nvidia.

You can run computations on the same card as the display. You can compile to
software emulation to debug logic code.

>Nvidia is married to Microsoft, and the only intuitive tool you can use for
debugging is Windows-only, with no Mac or Linux support.

CUDA works on Windows and Linux; not sure how good the Mac support is.

>Another problem is that it evolved from graphics, so you have to use
graphics concepts whether you need them or not.

You don't need to understand any graphics concepts. It's parallel programming
concepts you need.

------
alexchamberlain
With appropriate peripherals, would this make a great router platform?

~~~
foxhill
perhaps if you frequently perform computational fluid dynamics on it.

