
I love all the engineering Backblaze puts into this and their willingness to share their experience.

I've noticed Supermicro offers a 45-drive 4U chassis* that costs more than the Storage Pod's raw parts, but less than a unit preassembled by 45Drives. Does anyone have experience with Supermicro's solution?

* Part: CSE-847E26-RJBOD1 (http://www.supermicro.com/products/chassis/4U/847/SC847E26-R...)




Supermicro makes good systems that are very widely used and we love that companies are continuing to work toward more dense and less costly storage systems. When we started Backblaze in 2007, all the options were astoundingly expensive.

One quick note on this particular Supermicro system: it's a slightly apples-to-oranges comparison, as the Backblaze Storage Pod is a complete server while this Supermicro system is a JBOD, meaning it still needs to be plugged into a server to work.

Gleb, Backblaze co-founder


As a note to this: while the chassis linked above is indeed a JBOD, and one of the parent posts also mentions the 90-disk JBOD chassis, there's an intermediate option which, thanks to some sort of server-geometry-Tetris magic, is a proper machine in its own right while still supporting 72 drives.

http://www.supermicro.com/products/chassis/4U/?chs=417

(Disclaimer/answer to the parent post: we use the 24-drive unit for the GPU compute nodes in one of our clusters, and the 45-drive JBOD units for the storage nodes in the same cluster. We've had a very positive experience with both (to the point that I bought the 24-drive one as my home fileserver), as well as with Supermicro's customer support for them.)


At KeepVault we've been using Supermicro since day one (pre-2007) and they've worked out very well. Supermicro is more expensive on multiple metrics, but you're also getting features in return, like a lower entry cost and power redundancy.

What happens when a Backblaze pod power-supply fails? I'd love to see a post about that. :)

David, KeepVault CEO


> What happens when a Backblaze pod power-supply fails?

From what I understand, the whole pod becomes unavailable, which is why you would use a front-end system to provide redundancy across pods.
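A toy sketch of that front-end idea: deterministically assign each object to two distinct pods, so losing one pod (e.g. to a dead PSU) loses no data. Pod names and replica count here are made-up illustrations, not anything Backblaze has described:

```python
import hashlib

# Hypothetical pod inventory; in practice this would come from a registry.
PODS = ["pod-01", "pod-02", "pod-03", "pod-04"]

def replica_pods(key: str, copies: int = 2) -> list[str]:
    """Pick `copies` distinct pods for an object, deterministically,
    by hashing the object key to a starting index."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    first = digest % len(PODS)
    return [PODS[(first + i) % len(PODS)] for i in range(copies)]

pods = replica_pods("photos/2013/img_0042.jpg")
assert len(set(pods)) == 2  # two distinct pods hold each object
print(pods)
```

The same function lets reads fall back to the second pod when the first is offline.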

The way I would do it myself is to set up network connections between two pods and use DRBD, along with clustering software for the iSCSI or NAS (NFS/Samba) daemons.
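A minimal DRBD resource definition along those lines might look like the following (hostnames, devices, and addresses are all made up for illustration):

```
# /etc/drbd.d/pods.res -- minimal sketch of mirroring one pod's volume
# to a peer pod over a dedicated link
resource pods {
  protocol C;               # synchronous replication
  on pod-a {
    device    /dev/drbd0;
    disk      /dev/sdb1;    # local RAID volume backing the mirror
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on pod-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

With protocol C, a write isn't acknowledged until it has reached both pods, so the cluster manager can fail the iSCSI/NFS service over to the peer without losing acknowledged data.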


From an older blog post, they hint at their architecture: basically storage over HTTPS, with a little effort to make sure pods don't drop dead too easily (RAID6):

http://www.backblaze.com/petabytes-on-a-budget-how-to-build-...

For replicating a similar architecture, I'd probably look at HekaFS/GlusterFS and/or Ceph.
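With GlusterFS, for instance, the cross-pod redundancy could be a replicated volume spanning two pods, along these lines (hostnames and brick paths are made up; this is a sketch, not a tested recipe):

```shell
# Each pod exports a local directory ("brick"); a replica-2 volume
# keeps a full copy of every file on both pods.
gluster peer probe pod-b
gluster volume create backup replica 2 \
    pod-a:/data/brick1 pod-b:/data/brick1
gluster volume start backup
mount -t glusterfs pod-a:/backup /mnt/backup
```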


Thanks for the reply! Yeah, this is a JBOD, of course. I was thinking of their 36-drive server cases; I thought one was 45 and didn't look too closely when I found the link I thought I was looking for. :)

Now that prices have come down so much on the 3rd party stuff, have you evaluated / considered using any of it?


Supermicro is a very popular "white box" server manufacturer. The chassis you linked is much more of a "real" server: hot-swap drives, hot-swap cooling, and a redundant PSU. Supermicro even has a 90 (!) drive 4U chassis, and for what it does, it's quite cheap.


I'm also inspired by how open Backblaze has been, one might even say they're "blazing" a trail. :)

I have a lot of experience with using the Supermicro chassis for storage. The question to ask yourself when deciding between "Supermicro or Backblaze pod" is: "Is storage density important to me?"

The final costs of a loaded Supermicro chassis vs. a loaded Backblaze chassis are pretty close. In fact, the Supermicro may be the better option if storage density is not important to you (e.g. a home/office or an inexpensive city - http://imgur.com/gallery/QfD6qIw). But if your server is in a location where square footage is expensive, storage density is the primary metric to optimize for, since rent (and electricity) is your primary ongoing cost.

To run a scalable storage solution with the Backblaze pods, you need great software on top. Maybe a really well-configured ZFS pool, custom software, and great sysops? As far as I can tell, that's Backblaze's secret sauce.
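If you went the ZFS route, one hypothetical layout for a 45-drive pod would mirror the three-RAID6-volume arrangement with three 15-wide raidz2 vdevs (device names below are made up; not something Backblaze has published):

```shell
# Three 15-drive raidz2 vdevs: each tolerates 2 drive failures,
# matching the parity budget of three 15-drive RAID6 volumes.
zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo \
    raidz2 sdp sdq sdr sds sdt sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad \
    raidz2 sdae sdaf sdag sdah sdai sdaj sdak sdal sdam sdan sdao sdap sdaq sdar sdas
zfs set compression=lz4 tank
```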

For a company that wants a server that's going to be "pretty good" for its needs, a Supermicro chassis with hardware RAID will definitely get you by for up to about 96 TB.
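That 96 TB figure is consistent with, say, a 24-bay chassis full of 4 TB drives (illustrative assumptions, not numbers from the comment):

```python
# Back-of-the-envelope capacity for a 24-bay chassis with 4 TB drives.
# Both numbers are assumptions chosen to land on the 96 TB quoted above.
BAYS = 24
DRIVE_TB = 4

raw_tb = BAYS * DRIVE_TB                 # 96 TB raw
raid6_usable_tb = (BAYS - 2) * DRIVE_TB  # single RAID6 group: 88 TB usable

print(raw_tb, raid6_usable_tb)
```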


This looks like a JBOD chassis. Am I missing something?



