This might seem impractical, but many companies, including Facebook, are examining "wimpy clusters" (an actual term). Many ARM-based devices run at roughly 1/7 the speed of an i7, and that gap is closing every day.
The Raspberry Pi strikes me as a particularly ill-suited platform choice for a cluster. I'm not going to complain about using ARM; that's a fair enough choice. But couldn't they have at least chosen an ARM board that actually has Ethernet on board instead of connected over USB? There are Pogoplugs with a faster ARM processor and built-in GigE for $16 or $22 on Amazon.
For the cost of a $35 Raspberry Pi, you can rent a considerably more powerful "small" EC2 spot instance for 7 months, or a more comparable "micro" for 16 months. Much longer if you don't need it running 24/7, of course.
Granted, that's no substitute if you need something with physical I/O ports, but if you want to learn how to set up a real compute cluster, EC2 is a more realistic environment.
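To put rough numbers on that comparison, here's the back-of-the-envelope arithmetic implied by the figures above; it assumes 24/7 usage and 30-day months, and real spot prices fluctuate:

    # What hourly spot price do those break-even figures imply?
    # Assumes 24/7 usage and 30-day months; real spot prices fluctuate.
    PI_COST = 35.0          # dollars for one Raspberry Pi
    HOURS_PER_MONTH = 24 * 30

    for name, months in [("small", 7), ("micro", 16)]:
        rate = PI_COST / (months * HOURS_PER_MONTH)
        print(f"{name}: ~${rate:.4f}/hour")
    # small: ~$0.0069/hour, micro: ~$0.0030/hour

And that ignores the Pi's PSU, SD card, and electricity, which add a bit more to the Pi side of the ledger.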
The value of students being able to see and touch what they are working with at the back of their classroom should not be underestimated, I think. Furthermore, a Raspberry Pi cluster gives students the opportunity to work with pretty much every aspect of supercomputing.
Those two aspects combined more than make up for the fairly trivial premium Pis carry over micro EC2 instances, unless you care about pinching pennies above all other considerations.
For a few hundred dollars you can build something that approximates a (very slow!) supercomputing cluster. Great for education or for testing/debugging concurrent code. See the Limulus project for a slightly more upscale approach:
I'm guessing fun/education. It's both a fun project if your hobby involves Pis or cluster computing, and a way to illustrate clusters to students.
A GA144 outperforms a Raspberry Pi by a factor of exactly 144 (that's how many equivalent cores it has), and for the effort these guys put in it would have cost less money to put together.
Now, programming it might be a mindfuck, but that's another story.
It's hip. There's no real point to doing this. In terms of fixed costs and performance per watt, x86 is better. Even if this is for education, it'd be cheaper and easier to simply buy a single i7 and build a virtual cluster on that one machine (see the sketch below).
The Pi's real value is in the hardware video decoder, the HDMI out, the GPIO pins, and the size of the device.
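For what it's worth, here's a minimal sketch of that "virtual cluster" idea, faking each node with a local Python process rather than a full VM (the node count and the scatter/gather workload are arbitrary illustrations):

    # Simulate an n-node cluster on one machine: one process per "node".
    # Toy scatter/gather job: each node sums its slice, the master reduces.
    from multiprocessing import Process, Queue

    def node(rank, chunk, results):
        results.put((rank, sum(chunk)))  # each "node" handles its own slice

    if __name__ == "__main__":
        N_NODES = 8
        data = list(range(1000000))
        step = len(data) // N_NODES
        results = Queue()
        workers = [Process(target=node, args=(r, data[r * step:(r + 1) * step], results))
                   for r in range(N_NODES)]
        for w in workers:
            w.start()
        total = sum(results.get()[1] for _ in workers)  # gather and reduce
        for w in workers:
            w.join()
        print(total)  # 499999500000

Of course, processes sharing one kernel and one memory bus won't reproduce real network latency or node failures, which is the usual objection to doing it this way.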
A cluster made of Raspberry Pis can be used to illustrate all the pitfalls and issues associated with larger clusters, but at a fraction of the cost. That makes them a perfect playground. How come a workload that runs in time 'x' on one Pi doesn't run in time 'x/n' on n Pis? (A toy model below makes this concrete.) That question applies to any cluster, and figuring out the answer to it and related questions (reliability, latency, and so on) is a really good application of such micro clusters.
Nobody is going to run production jobs on a cluster like this. A virtual cluster on a single i7 behaves subtly differently in many ways and will fail to illustrate many real-world problems, so for education it's simply not the same thing.
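A quick way to see why x/n is the wrong expectation is Amdahl's law with a communication term bolted on. The serial fraction and per-node overhead below are made-up illustrative numbers, not measurements from any cluster:

    # Toy scaling model: T(n) = s + p/n + c*(n-1), where s is the serial
    # fraction, p = 1-s the parallel fraction, and c a per-node comms cost.
    def runtime(n, serial=0.05, comm=0.01):
        parallel = 1.0 - serial
        return serial + parallel / n + comm * (n - 1)

    for n in (1, 2, 4, 8, 16, 32):
        t = runtime(n)
        print(f"{n:2d} nodes: time {t:.3f}, speedup {runtime(1) / t:.2f} (ideal {n})")
    # With these numbers speedup tops out just under 10 nodes, after
    # which communication overhead dominates and more nodes run slower.

On a real cluster, virtual or physical, you'd measure those two parameters instead of assuming them; the exercise of doing so is exactly the kind of thing a Pi cluster teaches.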