
Back at the office, we were recently talking about the possibility of really cheap (in terms of power requirement) cloud servers which are the equivalent of Raspberry Pis with soldered-on flash storage in the 32-64 GB range. I'd bet you can pack a shitload of these in a 1U box and still have power and cooling to spare. The only expensive part might be budgeting for all those ethernet ports on the switch and the uplink capacity (for bandwidth-intensive servers).
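Rough back-of-the-envelope (the per-board wattage and the usable 1U budget below are pure guesses on my part, just to show the shape of it):

    # How many Pi-class boards fit in a 1U power/cooling budget?
    # Both numbers are assumptions for illustration, not measurements.
    watts_per_board = 5.0        # Pi + soldered-on flash, rough guess
    usable_1u_budget_w = 500.0   # assumed power/cooling headroom for one 1U box

    boards = int(usable_1u_budget_w // watts_per_board)
    print(boards, "boards per 1U on power alone")        # ~100
    print(boards, "switch ports needed, plus uplinks")   # the expensive part

So power probably isn't the limit; the port count is.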

One of the engineers tried running our server stack on a Raspberry Pi for a laugh... I was gobsmacked to hear that the whole thing just worked (it's a custom networking protocol stack running in userspace), if just a bit slower than usual. I can imagine making use of loads of ultra-cheap servers distributed all around the world... IF the networking issue can be solved.

Perhaps the time is right for a more compact and cheaper wired networking solution... maybe limit the bandwidth to 1 Gbps but make it ultra-compact with much cheaper electronics. Sigh... a man can dream.




Small systems are good for realtime applications. There, the resources for an application have to be available all the time.

In terms of space/power/reliability/scalability, large systems win. Sure, a single Raspberry Pi doesn't draw much power, but it doesn't provide much computing power either. Throw in a few hundred of those systems and you'll feel the heat, yet the total computing power is still only comparable to a single rack server. You want RAID6 on the Raspberry Pi? Sure, it can be done, but you'll need four times as many storage devices. Compare that to a rack server with 16 SSDs configured as RAID6, where the data is shared by hundreds of virtual machines. If you compare "energy per bit" or "energy per operation", the high-powered server CPUs win against most anything out there.
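Rough numbers, purely illustrative (every figure below is an assumed ballpark, not a benchmark):

    # Illustrative: a few hundred Pi-class nodes vs one dual-socket rack server.
    # All figures are assumed ballparks, just to show the shape of the comparison.
    pi_watts, pi_gflops = 5.0, 0.5              # assumed per-node draw and throughput
    server_watts, server_gflops = 400.0, 500.0  # assumed 1U dual-socket server

    n_pis = 300
    print(n_pis * pi_watts, "W for the Pi pile")                    # 1500 W -- you feel the heat
    print(n_pis * pi_gflops / (n_pis * pi_watts), "GFLOPS/W")       # ~0.1 for the Pi pile
    print(server_gflops / server_watts, "GFLOPS/W for the server")  # ~1.25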

So I'd say:

* If you can justify a real server, do use one instead of dozens of "simple machines".

* If you can use the cloud, do it instead of providing your own hardware.

* Exceptions may apply where security or reliability is concerned. (You wouldn't run your heart monitor in the cloud when a small dedicated system does the job.)


What's the current state of the art for realtime + internet? I would have thought that once you get to the first other network device, whatever realtime guarantees your device offered would be toast. And stuff like packet loss or a TCP retransmission would be disastrous. They seem like entirely incompatible domains.


Makes sense, like semis vs trains.


More like a semi vs a dozen Prii... The semi still wins.


>(in terms of power requirement)

You can buy a single off-the-shelf ProLiant MicroServer, which is the size of a shoebox, put VMware on there (or your virtualization product of choice) and have god knows how many VMs running with a very modest power load that will blow away any sort of jerry-rigged Pis shoved into a 1U solution. And it'll be reliable, and have RAID and ECC memory. Hardware is kinda a solved problem now.

Shame the Pi isn't more powerful. I was looking at deploying FreePBX in my home and the performance of the web interface on the Pi is terrible. I'm not sure what people actually use them for. I'm probably just going to get a BeagleBone that's 2-3x as powerful for a measly $15 more.


I've been cabling servers, kind of for a hobby, for a couple of years now. Even a two-server-per-U cabling situation (with redundant ports on each server) is a nightmare. Once you're talking hundreds of wires in a rack, it's no longer fun.

I can't imagine going to top-of-rack directly from (say) 512 or 1024 Pis in a rack. So you need intermediate switches, probably a small one every couple of U. From there you go to top-of-rack at 10G (you could get away with 1G if you know the network bandwidth across your couple of U of Pis won't get saturated). The top-of-rack switch will need to be optical, probably redundant, at 40G aggregate or better. Did I mention that those first-layer switches probably take up a U themselves?

Per Pi, we might be spending as much on each switch port as we are on the Pi itself, maybe more (a 48-port switch that we use is about $2k delivered, or about $40/port, and that's just the first switching layer). You can probably buy cheaper switches than the ones we use; I don't know if there's a corresponding drop in reliability. I haven't figured the cost of the optical links, either, but they can get spendy as well.
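Quick sketch of the fan-in and the per-Pi port cost. The $2k/48-port figure and the 1024-Pi / 10G / 40G numbers are the ones above; reserving two uplink ports per edge switch is my assumption:

    # Fan-in and cost sketch for a rack of Pis (1G per Pi at the edge).
    n_pis = 1024
    edge_ports = 48              # ports per first-layer switch
    edge_switch_cost = 2000.0    # USD delivered, from the comment above
    edge_uplink_gbps = 10
    tor_capacity_gbps = 40

    usable = edge_ports - 2                       # assume 2 ports kept for uplinks
    n_edge_switches = -(-n_pis // usable)         # ceiling division
    port_cost_per_pi = edge_switch_cost / edge_ports

    print(n_edge_switches, "edge switches, each likely eating rack space")    # 23
    print(round(port_cost_per_pi), "USD per Pi just for its edge port")       # ~42
    print(usable / edge_uplink_gbps, "to 1 oversubscription at the edge")     # 4.6:1
    print(n_pis / tor_capacity_gbps, "to 1 oversubscription at the ToR")      # 25.6:1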

I think that box of Pis needs its own switching fabric, so that 1G link never leaves the chassis the Pi is in. The switch doesn't need to be fancy, but it looks pretty custom and you'll have to amortize its cost over a big build.

I really hate wires :-)


Hmm... yeah, that's what I thought. And yes, the best case might be to have hundreds of these "systems on a chip" on a custom board with its own internal bus (PCI-whatever), with a virtual eth0 visible on each internal system. This whole rigmarole could be connected to the rest of the network via a couple of 10G links. Then some flavor of SDN could divide the bandwidth fairly among the boxes.
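Just to put numbers on the fair-share idea (the SoCs-per-board count is an assumption, and the weighted traffic classes are hypothetical):

    # Fair-share sketch: a couple of 10G uplinks split across the on-board SoCs.
    uplinks_gbps = [10, 10]       # "a couple of 10G networks"
    n_socs = 256                  # assumed SoCs on the custom board

    aggregate_mbps = sum(uplinks_gbps) * 1000
    print(aggregate_mbps / n_socs, "Mbps per SoC if shared equally")   # ~78 Mbps

    # Or let the SDN layer hand out weighted shares to hypothetical traffic classes:
    weights = {"latency-sensitive": 4, "bulk": 1}
    total = sum(weights.values())
    for name, w in weights.items():
        print(name, aggregate_mbps * w / total, "Mbps of the aggregate")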

But doing all that custom electronics does take the fun out of the idea of "just a bunch of cheap Raspberry Pis doing their thing". So maybe not.


It is possible that future servers might be connected to the network backplane, power, drives, and pretty much everything else by multiple USB-3.1 type connections. The bandwidth is there.

Just imagine racking in machines like that which take N USB connections, where that's 1, 2, 4, 8, 16 or whatever is necessary.


Hmm. USB fan-out is pretty cheap (the cabling is not fussy). Schedule maybe 70% of the raw bandwidth and you should have enough to drive 6-7 servers at 1 Gbit from each root hub. If you know your servers are less chatty you can get away with less guaranteed bandwidth. You can also play games with isochronous transfers.
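Putting numbers on that, assuming USB 3.1 Gen 2 signalling (the 10 Gbps raw rate and the per-server guarantees are my assumptions):

    # How many ~1 Gbit servers fit on one root hub if we commit 70% of raw bandwidth?
    raw_gbps = 10.0                # assumed USB 3.1 Gen 2; use 5.0 for Gen 1
    schedulable = 0.70 * raw_gbps  # ~7 Gbps we're willing to guarantee
    per_server_gbps = 1.0

    print(int(schedulable // per_server_gbps), "servers per root hub at a full 1 Gbit")  # 7
    print(int(schedulable // 0.5), "servers if you only guarantee 500 Mbit each")        # 14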

You can even add redundancy by connecting each Pi server to more than one root hub.

Writing the USB-based switch for this would be fun (probably someone has done this, though).


With the Pi 2, it's not as bad as you'd think; see: https://github.com/geerlingguy/raspberry-pi-dramble/wiki/Dra... (Drupal 8 on a 6-server Pi 2 LEMP + Redis stack is about 80% as fast as running on Digital Ocean, in some benchmarks).

For test purposes, a local Pi cloud (or something similarly-priced) is a decent deal.


The biggest problem would be the reliability. I have a few friends who use RPis and BBBs as part of their home infrastructure - they have to replace them randomly as they stop working.

One guy's working theory is that they overheat, so last I heard he was attaching heatsinks to the major chips, but I don't know whether that helped the reliability much.


There are companies out there that do Raspberry Pi colocation.

I guess it's just a novelty. Might as well go virtual.




