Raspberry Pi 2 Cluster Case, Part 2 (pocketcluster.wordpress.com)
73 points by stkim1 on July 23, 2015 | hide | past | favorite | 47 comments



Sometimes I get the feeling that I'm the only person who thinks these single-board computers built around ARM SoCs designed for mobile devices have insufficient interconnect capability. The networking on these devices always seems like an afterthought. For example, a lot of these SoCs have the NIC hanging off an internal USB port, which is a fairly convoluted way to go about things.

I really wish there were single board RPi / ODROID like devices available that sported support for a really performant and efficient interconnect fabric... like RapidI/O or something similar.


Parallella board ( http://www.parallella.org/board/ ) with Gbps FPGA Mezzanine connectors ( https://en.wikipedia.org/wiki/FPGA_Mezzanine_Card ).

Not in the price range of the RPi, but much more powerful. You can also try the BeagleBone Black... Also, from the ODROID block diagram, the Ethernet PHY is connected directly to the CPU's MAC; AFAIK the issue is a bad (cost-driven?) choice on the Raspberry Pi...


I was going to say, the Pi isn't really the optimal choice here to begin with. For the same wattage, the Parallella is two orders of magnitude more powerful, but it's a more unusual processor to compute on. The Jetson TK1 is the other option: an NVIDIA dev board for the Tegra K1 SoC. It's also not entirely similar to CPU programming, but CUDA GPGPU is a more well-trodden path for HPC.

I don't know of any actual switched-fabric mezzanine cards available for the Parallella. If you have a link, I'd be very interested. A similar approach would be to exploit a mini PCI Express connector like the one on the Jetson. The Jetson also has a daughterboard connector, but my understanding is that it's proprietary, and I don't think NVIDIA offers a switched-fabric card.


Man, I was fully on board with the Parallella back when it was on Kickstarter, but I backed out towards the end once it became clear that they weren't going to be shipping the 64-core devices. To my knowledge there is still no way to connect them together except for the NIC that is integrated into the Zynq SoC.

I own a couple of ODROID boards, and while the network performance is better than what you'll see on the RPi, it's still nothing to write home about. And that's sorta my whole point: if you're working with a problem large enough that you've got to offload compute work across multiple of these low-power boards, the interconnect fabric has got to be much better than what is commonly available right now.


What problems would multiple RasPi-like systems be better at than a single more powerful system?


Personally I'm not sure there are any, at least as things stand today.

However, I'm fairly convinced that future systems are not going to be powerful monolithic systems but instead large collections of low-power cooperative compute units. It's not a huge stretch to claim that these exercises are worthwhile efforts to learn how to best exploit future hardware architectures on hardware that exists and is available for purchase today.

I certainly would be happier, assuming limitations like the interconnect fabric were well solved, if a common unit of compute quanta were more along the lines of a €50-99, 10-15 W board rather than a €1-2K, 1-1.5 kW server.


Storage is also such an issue. You usually only get SATA via USB. Native SATA and/or PCIe would be really nice to have.


In fact, storage is a big issue. The RPi tends to destroy SD cards, so a cluster like mine isn't exactly safe unless you use high-grade SD cards and a very reliable power supply. Those add operating cost, which defeats the purpose of these single-board computers.


I've been really unsuccessful at getting a Pi to not destroy SD cards. Good quality cards, good quality power supplies, no swapping - it still eats them up.

The best way to fix this is to put the actual OS on an external disk. You absolutely must bootstrap from an SD card; there's no way around that. But the card can be entirely read-only, and then you bounce to the external drive. Or maybe you could do something like a PXE boot instead: copy a system image down from a server and boot that. Personal opinion: this is the only way I would try using Pis on a long-term basis.
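To sketch what that SD-boots-then-external-root setup looks like (assuming Raspbian, with the USB drive appearing as /dev/sda and the root filesystem on its second partition; the device names and partition layout here are illustrative and will vary):

```shell
# /boot/cmdline.txt on the SD card (kernel args live on a single line;
# the important bits are root= pointing at the USB drive, and rootwait):
console=ttyAMA0,115200 root=/dev/sda2 rootfstype=ext4 rootwait

# /etc/fstab on the USB root filesystem: mount the SD card's boot
# partition read-only so nothing ever writes to the card.
/dev/mmcblk0p1  /boot  vfat  ro,defaults       0  2
/dev/sda2       /      ext4  defaults,noatime  0  1
```

After that, the card only gets read at boot; all the write traffic that kills cards lands on the USB drive instead.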

The Jetson TK1 board from NVIDIA includes a real SATA connector. Because it's GPGPU-capable, it's also vastly more powerful than even a Pi 2. It also has USB 3.0, so running swap on a memory stick would be much more performant.


PXE boot is something I thought about, but I don't know if it is possible on the RPi. Do you have a link where I can read up on that?



Thank you!


Jetson is nice, if a bit pricey. Looking forward to X1 boards from NVIDIA


I don't suppose you've seen any hints of release dates have you?


Nothing so far, only available hardware that I know of using it is their Shield console


Are you storing the results of computation on the SD cards? Personally, I've mounted every directory that needs to be written to on a tmpfs, USB disk, or network mount as appropriate, and I've configured fstab to mount the root fs in read-only mode.

You can still write by just doing "mount -o remount,rw /" beforehand.
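A rough sketch of that kind of setup; the mount options and the list of writable directories are illustrative and depend on the distro:

```shell
# /etc/fstab: root filesystem read-only, volatile directories on tmpfs
/dev/mmcblk0p2  /         ext4   ro,noatime         0  1
tmpfs           /tmp      tmpfs  defaults,size=64m  0  0
tmpfs           /var/log  tmpfs  defaults,size=32m  0  0

# Temporarily make the root writable (e.g. for package upgrades),
# then flip it back to read-only when done:
mount -o remount,rw /
# ... make changes ...
mount -o remount,ro /
```

With the root read-only, a power cut mid-write can't corrupt the card, since nothing is ever in the middle of writing to it.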


Yes, I do. Since Apache Spark chews up memory like nothing, I can't spare any portion of the 1GB of RAM for anything else. This could lead to a shorter lifespan for the SD cards, though.


I generally set mine up to use the SD card just for boot, and have the actual file system on a USB thumb drive. Works great, and I haven't had any SD card corruption issues since I started doing it.


I agree, and SATA via USB is just as suboptimal as GigE via USB (2.0).

I'm still sort of aggrieved that solid state via DDR got caught up in lawsuits and still commands exploitative pricing. SATA for solid state isn't exactly optimal either, though obviously not nearly as terrible as USB - SATA - SSD. SSDs with PCIe interfaces are so common now, it's a shame not to see wider support for them.

A little tidbit I saw in HP's "The Machine" promotional talks was the idea of "fabric attached memory". If ARM 64 & RapidI/O ever make it into widespread use, it would be great if there were various forms of memory with RapidI/O interfaces commonly available.


Have a look at the Banana Pi or even the Banana Pro


There was some server company, whose name I couldn't remember at first (Bare Metal or something?), that offered roughly RasPi-sized servers for rent. They weren't offering it yet, but they said you should be able to buy the boards from them within a year or so.

Also, don't forget that USB is just another serial protocol; if it wasn't USB, it would just be some other serial protocol. It may not be the best option, but with stuff like the RasPi I imagine price is a bigger concern.

Edit: found them https://www.scaleway.com/


Swedish FSdata does that.

https://fsdata.se/server/raspberry-pi-colocation/

Edit: They did that, apparently the service is "on pause".


This is the service mentioned on the Raspberry Pi Foundation's website:

http://raspberrycolocation.com/

For 36 euros (about 39.54 USD) per year, they will host your RPi with a 100 Mbit pipe and 500 GB of traffic.


The ESP8266 is generally pretty good this way; many people use MQTT (over WiFi) with it. However, you're still right: it would be nice to have something much lighter done well. The next Espressif chip will be an SoC with onboard Bluetooth LE.


I see @stkim1 is also the author of the blog. Performance-wise, how does this RPi cluster compare to a more standard platform? Did you make it only as a development platform, or do you intend to have production code running on it?


I've never imagined the RPi cluster being taken seriously for production purposes. It serves only two purposes: education and development staging. For those two, you have a real hardware cluster that superbly exhibits everything you'd experience with a big-ass datacenter cluster.


What do you think about using a cluster of VMs on a normal PC as an alternative? Did you look into that?


I'm looking into that. When you have a lot of memory, it's actually more convenient in the sense that you can carry the entire cluster around on your laptop. lol


As someone else who's also built a cluster of Pi 2s, I would agree that the primary purpose is education and fun/tinkering, not performance.

That being said, for light web hosting purposes, a cluster of six Pi 2s performs roughly 70% as well as a similar cluster of 6 VMs on Digital Ocean. See more on this project's wiki, in the performance/benchmarking section: https://github.com/geerlingguy/raspberry-pi-dramble


Those little aluminium pillars are brilliant. You can see my (real PC) cluster here that I built using bare motherboards and pillars:

https://rwmj.wordpress.com/2014/04/28/caseless-virtualizatio...


You are da man. This is really cool! I can only imagine how much you've been through!



David Guill is another deadly serious RPi cluster builder. I don't know what he's running, but I can only admire what he has done.

http://likemagicappears.com/projects/raspberry-pi-cluster/


Here's a Beowulf cluster with 2^5 (32) Raspberry Pis running Arch Linux ARM: http://coen.boisestate.edu/ece/files/2013/05/Creating.a.Rasp...


I've been tempted several times to build a small cluster of low-power boards for a compute cluster. Not that I have much direct use for that at the moment. I recently got a new laptop with 16GB of RAM, so just running 8 VMs with 1GB of RAM each still leaves plenty left for development. With the VMs, I can just run regular 64-bit Linux, so that makes software and setup a non-issue.

I've got so many other projects on the back-burner I shouldn't even be reading about this sort of thing. But it does look fun!


With 1GB of RAM there are no problems running 64-bit Linux? I thought you needed 4+.


Yes, you can run 64-bit Linux with even less if you like, and that is quite common. [0]

More RAM is always nice, of course.

[0] https://www.digitalocean.com/pricing/


Other way around. You need 64-bit to use more than 4GB of RAM.
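Right; 32-bit pointers top out at 2^32 bytes = 4 GiB of address space (PAE tricks aside). A quick way to check what a given box is actually running:

```shell
# Report the kernel's machine type and the userland word size.
uname -m          # e.g. "x86_64" for 64-bit, "armv7l" for 32-bit ARM
getconf LONG_BIT  # prints 64 or 32
```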


What is the use case for clustering Raspberry Pis? Why not build a more powerful computer within that same footprint?


As a productive tool, I can't think of a use at all.

For learning distributed computing, I think it has some advantages over just running all of the nodes on a single powerful computer. It means you can't hide from doing things scalably; i.e., network costs between your processes are real, if your load isn't well distributed across your nodes the OS scheduler can't save you, etc.


It's uhh.. cool. Haha. I can't think of a good reason either, but we're all hobbyists here, right? It looks like a fun weekend project.


There was a great article a while back about a guy running a warehouse-sized Bitcoin mining operation, using RPis to coordinate the many thousands of custom chips doing the mining. The Pi was the cheapest full Linux system available at the time, and performance wasn't really an issue.


Fun and learning. Is that so bad?


It serves only two fronts: education and development. Education-wise, you need something dirt-cheap to play with. As a developer, you need an intermediate stage where you can test your deployments before going after a big cluster.


A Raspberry Pi cluster? An average PC with virtualization would be a better and probably cheaper choice.


StarTech.com sells 15cm micro USB cables; they work great with my Pis.


Thank you very much! I'll look into that!



