
Building a NAS - walterbell
http://jro.io/nas/
======
Teknoman117
Wow, I actually just got done building a home NAS for myself. As far as
storage controllers go, I managed to find a working SAS 9201-16i for $50
on eBay (16-port SAS card, only supports IT/HBA mode). I also got some cheap
used 10GbE cards off eBay for the link between my workstation and the
NAS, and a 4-port card for the NAS to turn it into a cheapo 10GbE switch
(an OCe14104-U-TE with four SFP+ fiber optic transceivers for $190 USD).

I actually bumped into rclone and use Amazon Drive for backups myself. I was
worried they'd freak out if I stored multiple terabytes of data there, but if
you haven't gotten any emails about nearly 50 TB of data, I guess I shouldn't
be so concerned.

A silly toy project I've been working on is using Amazon Drive as a block
device via something like nbd (Linux) or geom (FreeBSD). It would let me
use existing drive encryption mechanisms (no personal data scraping for
advertising purposes, Amazon) and have the ACD storage be a ZFS pool in its
own right. Basically, a virtual 'SSD' backed by a collection of chunks (1 to 8
MiB, still benchmarking...) where TRIM deletes remotely stored chunks.
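For anyone curious, the chunk bookkeeping is simple arithmetic. A rough Python sketch (the 4 MiB chunk size is just one candidate from the range I'm benchmarking, and the function names are made up):

```python
# Sketch of the chunk bookkeeping for a cloud-backed block device.
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB, one candidate in the 1-8 MiB range

def chunks_for_range(offset, length, chunk_size=CHUNK_SIZE):
    """Return (chunk_index, start, stop) tuples covering the byte
    range [offset, offset + length) of the virtual device."""
    out = []
    end = offset + length
    while offset < end:
        idx = offset // chunk_size
        start = offset % chunk_size
        stop = min(chunk_size, start + (end - offset))
        out.append((idx, start, stop))
        offset += stop - start
    return out

def trimmed_chunks(offset, length, chunk_size=CHUNK_SIZE):
    """Chunk indices fully covered by a TRIM request: these are the
    remotely stored chunks that can simply be deleted."""
    first = -(-offset // chunk_size)        # round up to a chunk boundary
    last = (offset + length) // chunk_size  # round down (exclusive)
    return list(range(first, last))
```

A read or write just walks the tuples from `chunks_for_range`, fetching or uploading each chunk; a TRIM deletes whatever `trimmed_chunks` returns and leaves partial chunks alone.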

~~~
lewisl9029
Wow 50TB for $60 per month...

Amazon Drive sounds like an insanely great deal for me if it works the way I
think it does, but the reason I haven't tried it yet is I honestly don't see
them being able to keep storage space unlimited for very long, if it in fact
competes in the same real-time sync'd storage space as the likes of Google
Drive, OneDrive, Dropbox, etc.

Every such service that I've ever heard of that offered unlimited storage at
one point has had to backtrack on its unlimited storage claims after enough
users _really_ took them up on the offer. Note I'm talking about services
like Bitcasa and OneDrive that offer real-time sync/access, not backup &
archival services like Backblaze and CrashPlan that don't have the same egress
bandwidth requirements.

For those who use Amazon Drive and understand it better, does it do real-time
sync like Google Drive, OneDrive and Dropbox? Or is it just another
unidirectional backup service?

~~~
ThatPlayer
If you like Google Drive, their Google Apps plan will give you unlimited
storage at a slightly higher price.

One limitation I've hit with Amazon Cloud Drive is a 50GB file size limit.
Both of these support rclone, which is what I use for backups anyway.

~~~
theWatcher37
50GB size limit kills it for me :(

------
barrkel
I have a 36T NAS (26T usable raidz2), built using the Norco 4224 chassis. The
backplane / drive trays that it comes with aren't 100% reliable, so I would
recommend trying to get a SuperMicro chassis instead. I keep mine in the attic
to keep the noise down.
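For the curious, the 36T-raw vs 26T-usable gap is mostly just raidz2 parity math plus ZFS overhead. A rough estimator (assuming 12x3TB in a single raidz2 vdev, which isn't stated above, and an overhead fudge factor that's a guess):

```python
def raidz_usable_tb(drives, drive_tb, parity=2, overhead=0.08):
    """Rough usable space for one raidz vdev: raw capacity minus
    `parity` drives' worth of parity, minus a fudge factor for ZFS
    metadata, padding and reservation (the 8% default is a guess)."""
    return (drives - parity) * drive_tb * (1 - overhead)
```

12x3TB in raidz2 gives 30T after parity and roughly 27T after overhead, in the same ballpark as the 26T quoted above.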

Mine runs fine on an old Q6600 with 4G of RAM. I'm not trying to do anything
silly, like enabling dedupe, so it's not a problem. I'm running ZFS on Linux,
again, not a problem with that amount of RAM despite ZoL having a less than
ideal caching situation in the kernel.

I do my backups with Crashplan. Not only do I back up the NAS with Crashplan,
but I also back up my various PCs and laptops to my NAS, for faster restores
should I need them. Crashplan supports peer backup, which works well.

I tuned my recordsize by copying a representative sample of files to different
ZFS filesystems with different settings, and comparing du output. The
empirical calculation seemed easier.
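The measurement half of that comparison is easy to script. A sketch of a du-equivalent in Python (the `zfs create -o recordsize=...` step for each test filesystem is left out):

```python
import os

def disk_usage_bytes(root):
    """Sum on-disk usage (st_blocks * 512) under `root`: the quantity
    `du` reports, as opposed to apparent file size."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            total += os.lstat(os.path.join(dirpath, name)).st_blocks * 512
    return total
```

Copy the same representative sample into datasets created with different recordsize values and compare the totals; the setting with the smallest on-disk footprint wins.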

I've used my NAS for various things: video/photo archive, VM backing store
(serving up zvols as iSCSI endpoints), working space for a variety of side-
projects. More than anything else, it gets rid of the idea that storage space
is in any way scarce, and removes the requirement to delete things (very
often, anyway; I built up several years' worth of motorcycle commute headcam
videos at one point). My pool wanders between 40 and 70% utilization.

~~~
melp
36T to CrashPlan sits on 36GB of RAM at all times... screw that. rclone never
uses more than ~2GB to manage my whole 50TB dataset.

edit: Also, I'm using a SuperMicro chassis, not a Norco. I've got a section
where I go into why I went with SuperMicro.

~~~
dom0
Uhm, what exactly is crashplan doing with all that memory?

~~~
glenneroo
I wonder if it's related to the memory "issue" with their client, where once
you exceed 1TB or 1 million files, you have to manually edit the .ini file to
bump the JVM memory allocation. I remember reading somewhere that it has to do
with CRC checksum calculation for all the files? I've had to change the
setting multiple times (currently at 8GB for ~8TB/1 million files).
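If the rule of thumb is roughly 1GB of JVM heap per TB backed up (a ratio I'm inferring from the numbers above, not an official Code42 figure), a quick calculator looks like:

```python
def crashplan_heap_mb(dataset_tb, mb_per_tb=1024, floor_mb=1024):
    """Suggested JVM heap (-Xmx, in MB) for the CrashPlan client.
    The ~1GB-per-TB ratio is a community rule of thumb inferred from
    reports like the one above, not an official Code42 constant."""
    return max(floor_mb, int(dataset_tb * mb_per_tb))
```

An ~8TB dataset lands on the 8GB setting mentioned above; small datasets stay at the stock 1GB heap.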

[https://support.code42.com/CrashPlan/6/Troubleshooting/Adjus...](https://support.code42.com/CrashPlan/6/Troubleshooting/Adjusting_CrashPlan_Settings_For_Memory_Usage_With_Large_Backups)

~~~
dom0
Hm, maybe give Borg a shot. For initial backups it's not exactly the fastest
thing, though much better after that.

------
linsomniac
Oddly, I recently went the other direction. In the past I've built some
storage servers similar to this, but 6 months ago I put 6x 2TB laptop drives
into a 6-drive "mobile rack" that fits in one 5.25" bay in my desktop (IcyDock
MB996SP-6SB).

Loving it!

Now, my storage needs are fairly modest in comparison to the author. I'm
running with RAID-Z2 and at 44% capacity. I have this unit in my work
workstation, at an external office. It is quiet and cool.

I used to have a big house with a room I could put a bunch of computers in. I
moved to a smaller house, but more importantly I was just tired of managing a
business-class infrastructure at home (VLANs, multiple APs, UPSs, batteries,
patch panel, servers, etc).

So I copied a backup of my storage server from an off-site box to S3 with
Glacier, copied the primary to this ZFS array on my workstation, removed junk
I was just holding on to, and now my home infrastructure consists of a cable
modem and a Google WiFi mesh. Huge improvement in maintenance!

Here's a blog post I wrote about one of the previous incarnations:
[https://www.tummy.com/articles/ultimatestorage2008/](https://www.tummy.com/articles/ultimatestorage2008/)

~~~
ValentineC
> _I took 6x 2TB laptop drives in a 6 drive "mobile rack" that fits in one
> 5.25" bay in my desktop (IcyDock MB996SP-6SB)_

Thanks, I didn't realise these exist.

I've been wanting to create a DIY version of Synology's "slim" model line [1]
for a while, but haven't been able to find an off-the-shelf enclosure that
would fit my needs. Putting six drives in a 5.25" bay is a fantastic idea.

[1] [https://www.synology.com/en-
global/products/DS416slim](https://www.synology.com/en-
global/products/DS416slim)

~~~
boredprogrammer
I came across this product recently:
[https://www.crowdsupply.com/gnubee/personal-
cloud-1](https://www.crowdsupply.com/gnubee/personal-cloud-1) which is seeking
crowd funding at the moment. Personally I think they've made a mistake by
going for 2.5" storage first with 3.5" coming later. Anyway, it sounds like it
might be of interest to you?

~~~
hrez
I like the design and the 2.5" choice. But if it doesn't support ZFS it's a
no-go for me. If you (in the abstract) are not concerned with bit rot, you're
doing NAS wrong. Something like that running FreeNAS (w/o dedup) would be
ideal for me.

~~~
ValentineC
> _But if it doesn't support zfs it's a no-go for me. If you (abstract) are
> not concerned with bit rot you're doing NAS wrong._

What are your thoughts about Btrfs?

~~~
hrez
I trust zfs more but would use btrfs for personal storage provided backup
strategy is solid :) I'd go with raid1/raid10 though.

------
squarefoot
I rebuilt my home NAS a few months ago. The former one was a huge Debian tower
box with one system+temp storage disk plus 3 soft RAID1 pairs. It worked well
for about 6 years, save for a partial failure due to bad disk connectors (SATA
connectors are among the worst junk ever designed by a human). The CPU was a
low power Celeron, more than enough for the task.

When building the new one I wanted to reduce both power consumption and
hardware maintenance hassles to the minimum, so I got a used 2U rack server
box on eBay including a good quality PSU, then a Mini ITX industrial Atom main
board with 4 SATA ports. Not being that familiar with BSD and derivatives,
initially I wanted to stay with Linux, but this time I didn't want to fiddle
with mdadm and other stuff, so I gave OpenMediaVault a try; I probably did
something wrong, but the experience was frustrating, from menus taking forever
to appear to errors that shouldn't be there in a self-contained system, so in
the end I turned my attention to NAS4Free and never looked back.

The only problem is the much greater RAM requirements of ZFS, which make the
maximum RAM on that board (4GB) barely enough, but in the end it works
flawlessly as I keep running services to a minimum (NFS, SMB, Transmission).
Total expense was very low as everything was purchased used, save of course
the 4x3TB WD Red disks. The system boots off a USB key plugged directly into
an internal USB port, so there are no dongles attached to the case.

~~~
ChefDenominator
For OMV, you did something wrong. I've been using the project since its first
public release. Never had any real issues.

~~~
squarefoot
Yup, I'm completely sure about that as nobody else seemed to have experienced
the same problems, but after trying to reflash a couple more times I
eventually gave up. That doesn't mean I won't try on a different system in the
future though.

------
melp
I originally posted this on reddit, but I'm happy to see it reposted
elsewhere. Let me know if anyone has any questions.

~~~
vmarsy
That's a huge setup :)

One question I have regarding NAS is backups of the NAS itself.

Say I have a very simple NAS with 2 drives in RAID 1 (call them Drive A and
B), and I want to make a physical backup of my NAS in a different location.
How easy is it? What is the best practice? Ideally you could just have a big
rsync job that takes care of it, or rclone as described in the article, but
what if you want to do it without any network transfer (because your
connection is too slow / you can't afford it)?

Does the following protocol make sense? Remove Drive A and replace it with an
empty Drive C. Wait for the NAS to resynchronize onto Drive C. Then remove
Drive B, replace it with Drive D, and wait for the NAS to resynchronize onto
Drive D.

Then take Drives A and B to a different location, plug them in, and have the
backup working out of the box.

Is it that easy? What about more complicated RAID setups?

Is there an easy "Prepare backup -> Please insert first drive for your backup
-> First drive filled up -> Please insert second drive for your backup -> ...
-> Please insert last drive for your backup" flow, where you then take all
those newly filled drives, shove them in a different box, and they have all
the data as of the time of the backup, with either the right ZFS and RAID
configuration, or at least a simple data dump in a non-RAID configuration?

~~~
secabeen
If you're using ZFS, the right thing to do is to attach Drives C and D
temporarily, create a second zpool on them, then use zfs send/receive to
replicate snapshots from the primary drives to the backups. You can then
export the zpool and move it to a different location.

Refreshing the backups is either done by putting the backups online at the
remote location and syncing the deltas between the last snapshot and the
current over the net, or by bringing the drives back to the primary, and
sending the deltas.

The sanoid/syncoid toolset will help immensely with handling the necessary zfs
snapshot and send/receive commands:
[https://github.com/jimsalterjrs/sanoid](https://github.com/jimsalterjrs/sanoid)
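A sketch of what those send/receive invocations look like, composed here as argv lists (the pool and snapshot names are made up for illustration; syncoid automates exactly this kind of pipeline):

```python
def replicate_cmds(src_snap, dst_dataset, prev_snap=None):
    """Build the argv lists for a `zfs send | zfs receive` replication.
    With prev_snap set, only the incremental delta since that snapshot
    is sent (-i); otherwise the full snapshot stream is sent."""
    send = ["zfs", "send"]
    if prev_snap:
        send += ["-i", prev_snap]
    send.append(src_snap)
    # -F rolls the backup dataset back to the most recent common
    # snapshot before applying the stream
    recv = ["zfs", "receive", "-F", dst_dataset]
    return send, recv

# First run sends the full snapshot; refreshes send only the delta:
send, recv = replicate_cmds("tank/data@weekly-2", "backup/data",
                            prev_snap="tank/data@weekly-1")
```

Pipe one into the other (locally, or over ssh when the backup pool is at the remote site) and you have exactly the refresh workflow described above.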

------
vbezhenar
I own an HP MicroServer Gen8. It takes 4 disks, it's a real server (ECC
memory, hardware watchdog, KVM over network and much more), it's extremely
quiet and it costs $200 with a Celeron processor. It's quite outdated at this
point and requires using "hardware raid" (implemented in a proprietary
software driver) for optimal cooling, but it's an awesome machine and I
haven't found anything better yet. I only wish HP would continue this line.

~~~
keeperofdakeys
Why does it require using hardware raid for optimal cooling? I set up one of
these recently using AHCI (disabling the hardware raid controller) with mdadm.

~~~
vbezhenar
When you're using hardware raid, the server is able to read HDD temperatures
and adjust the fans accordingly. When you're using AHCI, the server can't read
the HDD sensors and runs the fans higher. I'm not sure why it's implemented
this way, but you can easily check it via the iLO web interface. For me it's a
20% fan vs. a 6% fan, and I can tell the difference: I can hear the 20% fan on
a silent night, but the 6% fan is really silent, so my wife can sleep well.

If you don't have those problems with noise, it doesn't matter (and I would
recommend avoiding those "hardware" raids; I have much more trust in widely
used open source solutions).

------
roel_v
I used to build my own NAS systems, and it's fine if it's a hobby, but it's a
lot less trouble and a lot less stressful (when you actually have data to care
about) to use something ready-made like a QNAP. I switched to a QNAP TS-863U a
few months ago, and boy is it great to have a UI that will let you do RAID
upgrades/resizing without having to lie awake at night wondering if you chose
the right incantation of mdadm out of the 3 or 4 you came up with that could
work.

~~~
zurn
Commercial consumer NAS boxes are notoriously bad wrt security though, even
firewalled. The Apple AirPort Extreme is probably the only one with a decent
track record (and I say this as someone who doesn't use Apple products).

E.g. QNAP requires you to install security patches manually, and as a result
there was a ShellShock worm exploiting QNAP boxes:
[https://threatpost.com/shellshock-worm-exploiting-unpatched-...](https://threatpost.com/shellshock-worm-exploiting-unpatched-qnap-nas-devices/109870/)

~~~
roel_v
That report isn't quite accurate; you can install updates automatically on the
device I have (I don't know if there was some point in time when that wasn't
possible; it seems like a basic feature). Some require reboots though, which
is why updates aren't automatic by default. Who sets their servers to fully
automatic updates (and reboots) anyway? I mean, even forgetting about the
reboots, you don't want file servers disconnecting all clients at some random
point in time because they have to install an update to the service software.

~~~
zurn
This kind of NAS should definitely automatically update and reboot rather than
become wormable.

A high-availability file server is a different beast from a home NAS.

~~~
roel_v
The one I have is not a home NAS. It's a rack-mount machine with a bunch of
'enterprise' features, and marketed/sold as such.

------
fnj
People always make such a big deal out of getting ashift "right". In actual
fact, there is NEVER any valid excuse to make ashift any smaller than 12. It
should default to 12. It is a scandal that it still defaults to 9. You aren't
going to find a single hard drive in the world that you would ever use with
ZFS which does _not_ have 4K sectors, papered over as _fake_ 512 byte sectors
in the interface.

~~~
mkup
In FreeBSD, there's a sysctl for setting the minimum ashift to 12 and ignoring
the 9 reported by the disk. I always have this in my /etc/sysctl.conf:

vfs.zfs.min_auto_ashift=12

(This must be done before creating ZFS pool.)
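For reference, ashift is just log2 of the sector size ZFS aligns writes to, which is why 12 is the right floor for 4K-sector drives. A tiny sanity check:

```python
import math

def ashift_for(sector_bytes):
    """ashift is log2 of the sector size: 512-byte sectors -> 9,
    4K sectors -> 12. Anything below 12 under-aligns writes on
    drives with 4K physical sectors."""
    a = int(math.log2(sector_bytes))
    if 2 ** a != sector_bytes:
        raise ValueError("sector size must be a power of two")
    return a
```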

------
mrbill
I did a smaller, simpler build with 8 x hotswap-bay SATA drives running off
two 4-port PCIe SATA controllers. Not counting drives, it came to about $550.

[http://www.mrbill.net/nasbuild/](http://www.mrbill.net/nasbuild/)

I originally started building it around a Dell YK838 SAS 6i PCIe controller
card, but got tired of fiddling with using other systems to reflash the
firmware and then having to mask off pins on the PCIe connector to get the
system to even POST.

I replaced all the "factory" fans, as well as the two on the back of the
hotswap drive modules, with Noctua equivalents.

------
PhantomGremlin
Whenever I see one of these periodic postings of people using FreeNAS, and
they come up on HN every few months, I'm always looking for the answer to IMO
the most obvious, most basic question:

    Why FreeNAS; why not vanilla FreeBSD?

I've _never_ seen a good, detailed answer to this. Mostly the response is
along the lines of: "well, of course use FreeNAS, after all you're building a
NAS!"

From what I can tell, FreeNAS offers a pretty GUI and some tuning on top of
FreeBSD. Anything else?

~~~
melp
Because I've never done this before and I wanted a more manageable learning
curve. I had very little *nix experience and zero BSD or ZFS experience before
I started this project. My next server will probably be vanilla BSD, but then
again it might be FreeNAS, because it just makes the whole setup process
easier.

------
Jaruzel
People are going to hate me for this, as it's not a Linux solution... but
after trying various *nix NAS distros and being disappointed, I settled on
letting Windows do it for me:

- Get a generic small tower box with 3-4 year old hardware in it via eBay (or
a business IT clearance site)

- If it's not got Windows on it, find a dirt cheap copy of Windows Server
2008 (again eBay etc.)

- If there's no SATA RAID on the motherboard (unlikely), get a 4 to 8 port
PCIe RAID card

- Throw in a bunch of identical disks of your preferred size

- For RAID 1: in Windows, mount them all and create mirrored sets in software
(via diskmgmt)

- For RAID 5: either do it via the BIOS (if supported) or via diskmgmt
(however, RAID 5 in software is quite slow)

- Create file shares (SMB/CIFS/FTP etc.)

- Job done.

I have this as my main File Server, and unless I'm hammering the box from
multiple clients simultaneously I get max Gigabit throughput on all file
transfers.

Also, and this is the big bonus for me, Software RAID 1 in Windows doesn't
create funny disk volumes, so you can break the mirror and still access all
your data from the remaining drive(s) - I've seen horror stories of bespoke
partitioning in commercial NASs, and people losing data when the motherboards
die - I don't want that ever happening to me.

Finally - Windows Server also supports iSCSI, so you can just keep adding new
boxes with disks in, all presented via the same File Server.

~~~
cannonpr
Out of curiosity, how do Windows filesystems in solutions like these compare
to ZFS in terms of reliability and features? ZFS has been my only choice for
NAS servers at home, primarily due to its data integrity features.

~~~
deadbunny
They don't; they're equivalent to ext4 (i.e. no error checking, dedupe,
etc...).

~~~
justin66
Deduplication went into NTFS on Windows Server in 2012.

------
johnthomas00
System is probably good for 3 or 4 years, right? If your cost of funds is 6%,
that works out to:

3 years: $177/month
4 years: $136/month

Add $20/month for electricity:

3 years: $197/month
4 years: $156/month

Could that rent respectable enough device(s) in the cloud?
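Those figures come from a standard annuity (loan payment) calculation. A quick version, where `principal` would be the build's total cost (not restated in this thread):

```python
def monthly_cost(principal, annual_rate, months):
    """Level monthly payment amortizing `principal` over `months`
    at `annual_rate` (e.g. 0.06 for the 6% cost of funds above)."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    # Standard annuity formula: P * r / (1 - (1 + r)^-n)
    return principal * r / (1 - (1 + r) ** -months)
```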

~~~
AtticusTheGreat
Getting 60TB of usable space in the cloud is pretty expensive. Honestly I have
no idea why he needs so much space, but if he does need it all, he probably
didn't make out too badly. One of the main benefits of the cloud, though, is
that you only pay for what you use, and he has to over-provision from the
beginning, so let's consider that.

So he's currently using about 10TB of 60TB of usable space. If he uses Amazon
S3's standard storage, he'd be paying about $230/mo; with infrequent access
storage it's $125/mo. That goes up as his usage grows, so when he's using 30TB
it will be $690/mo and $375/mo respectively. He also has the benefit of
high-speed Ethernet with the home NAS, unless he has 1Gig fiber internet, in
which case speed is probably a wash. I'm not sure if there are other
significantly cheaper cloud storage solutions at that scale.
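Those numbers line up with roughly $0.023/GB-month for standard and $0.0125/GB-month for infrequent access (the 2017-era S3 prices, using decimal TB). A quick check:

```python
# Per-GB-month prices implied by the figures above (2017-era S3 tiers).
S3_STANDARD = 0.023   # standard storage
S3_IA = 0.0125        # infrequent access, before retrieval fees

def s3_monthly(tb, rate_per_gb):
    """Storage-only monthly cost for `tb` decimal terabytes; ignores
    request, retrieval, and egress charges (which IA in particular
    adds on top)."""
    return tb * 1000 * rate_per_gb
```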

So I'd say he hasn't done too badly for himself, though he probably could have
saved some on the hardware by getting cheaper parts.

~~~
paulannesley
> So he's currently using about 10TB of 60TB of usable space.

Other way around; he's using ~50TB of ~60TB with ~10TB free.

------
StillBored
You didn't say how fast it is?

With that many drives, you should be able to saturate a good part of a 10Gb
link. The difference for streaming read/write loads (like copying movies...)
is night and day vs. 1Gbit. That's assuming you've got enough CPU to run ZFS
that fast, which is part of the reason I stick to Linux MD/XFS. Every time I
try to use ZFS I run out of CPU or RAM. Also, wanting low idle power means I'm
not willing to throw enough hardware at it to make ZFS run well.

~~~
melp
It'll saturate a 1Gb link, obviously. I've been eyeing 10Gb configurations for
a while, that will be my next upgrade. I'd really like to stick with copper so
I can wire the whole house for 10Gb and have some super fast X11 nodes, but
we'll see.

~~~
StillBored
BTW: I'm running 10GBASE-T at my house, and every single foot of it is cheap
cat5e. It's rock solid (yeah, I can monitor mangled packet drop rates on my
switch). I've had a lot of people talk down copper 10G and tell me things like
it's not possible to use anything but cat6A or better, which is a load of BS
if your runs are well under the 100M that is possible with cat6A. I think my
switch actually says it supports runs of 45M on cat5e, which is probably still
2x the max run in my house.

Frankly, the Asus XG-U2008 and a few Chinese X540 boards (~$100 for two ports)
with cat5 cost less than some of the fancy home APs, and it's well within the
prices I paid for my first 1G hardware. Two workstations and a server should
be less than $600.

Adding to this, there is the Ubiquiti US-16-XG, which also has a bunch of SFP+
ports for under $600.

------
TheAceOfHearts
Wow, this is an amazingly detailed blog post. I only skimmed it, but since I'm
in the process of setting up my own NAS I'll definitely have to revisit it and
read slowly.

Is anyone here using FreeNAS Corral? I've been toying around with it for a few
days, and it's been a frustrating experience. My first sign that I should've
avoided it was that trying to install 10.0.3 with UEFI just fails. With 9.10
they have a nice docs page with lots of details and info, but with Corral they
just threw it all out. The web interface provides no help, and the CLI just
gives a brief sentence which is usually of little help. I can see the long-
term potential in Corral, but right now it feels like a rushed-out beta.
Should I just install FreeNAS 9.10 instead, or is it worth sticking with
Corral? Are there any other OSs I should try out?

Since we're on the subject of home networks, what are people's thoughts on
running FreeIPA and FreeRADIUS? I'm hoping to use them to set up a home VPN
server, as well as provide a means of performing authN/authZ for multiple
"personal cloud" applications. My goal is to reduce my reliance on cloud
providers, since I've grown increasingly uncomfortable with their practices
and the loss of privacy.

~~~
xeroxmalf
They have recently updated the download page for FreeNAS Corral to say in
large letters "NOT FOR PRODUCTION", and after using it for the last month, I
can see why. A good example of some issues can be found here:
[https://forums.freenas.org/index.php?threads/10-0-3-problems...](https://forums.freenas.org/index.php?threads/10-0-3-problems.53205/)

------
cm2187
The problem with these NASes is noise. I see in the pictures that the NAS is
right next to his desk. Even if you get a quiet chassis (rack servers tend to
be noisy as hell, but Synology chassis are pretty quiet), having 12+ disks
running next to you is very noisy too. I am looking forward to the time when
SSDs will be cheap enough to be used for mass storage.

~~~
melp
I have this system sitting next to me in my home office which I share with my
wife. Both of us work from home full time and the server is quiet enough that
she doesn't complain about it.

------
newman314
Has anyone taken a deep dive look into rclone crypto? I looked around but
didn't find anything in a quick search.

Only thing I've seen so far is
[https://rclone.org/crypt/](https://rclone.org/crypt/)

~~~
niftich
Not any more than this HN comment [1] and its replies, one of which is mine.
It's using golang's own extensions for secretbox [2], a NaCl-secretbox-
compatible implementation, and it's adhering to the uniqueness of nonces as
far as I can tell [3].

In other words, to my non-crypto-expert eyes, there is no glaring misuse of
the "golang.org/x/crypto/nacl/secretbox" API that jumps out at me; I haven't
looked at that package to see if it's okay.

[1]
[https://news.ycombinator.com/item?id=12398303#12398727](https://news.ycombinator.com/item?id=12398303#12398727)
[2]
[https://godoc.org/golang.org/x/crypto/nacl/secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox)
[3]
[https://github.com/ncw/rclone/blob/master/crypt/cipher.go](https://github.com/ncw/rclone/blob/master/crypt/cipher.go)

------
myrandomcomment
Eh. Cool. Too much work. I just bought a FreeNAS Mini because I am lazy. But
this is very cool.

~~~
tracker1
Been thinking of DIYing a FreeNAS Mini XL... can save about $500 by building
with the same MB, but I don't like the case options. Not sure what case
iXsystems uses, or where they get it (probably custom)... but with the same MB
and an 8-drive ITX case, it came out around $900, and I'm concerned about the
clock/brick issue with that CPU.

~~~
varky
The 8-drive version is built around an Ablecom CS-T80 case.

The 4-drive version is a SuperMicro 721TQ-250B system.

------
SirFatty
I've had two of those chassis fail over the last couple years... would not
recommend.

~~~
tjoff
Fail how?

------
thomasjudge
What's your cpu utilization like? Xeon E5-1630 seems a little overkill

~~~
melp
Oh, it's totally overkill, but I don't have any regrets. I have a couple VMs
that can sit on a lot of CPU time when they're in active use.

------
opk
I like the creative use of what appears to be an Ikea table for a rack.

~~~
abusque
That's actually a fairly common setup, known as a LackRack [0]. This one seems
to be a different size/model, but maybe that's my eyes playing tricks on me.

Edit: actually, now that I look back at it, I'm almost positive this is the
deeper Lack "coffee table".

[0]
[https://wiki.eth0.nl/index.php/LackRack](https://wiki.eth0.nl/index.php/LackRack)

~~~
melp
Enterprise Edition, baby! It's an official variant.

By the way, if anyone is considering deploying their own LackRack, I would
highly recommend reading the installation section in the OP. It's got some
quirks that are worth considering before you dive in.

------
alphapapa
Incredibly comprehensive article. Thanks for sharing.

First thing that jumps out at me, though: in the photos at the top, the UPS is
on the floor.

Get that UPS off the floor! When your house suffers minor flooding (burst
pipe, overflowing toilet, leaky roof, etc), it will be sitting in it. You
think it won't happen--but then it does, and if your electronics on the floor
don't get damaged, you're lucky.

~~~
melp
Good call... I'll prop it up on something soon. This is on the second floor of
my house, so it's safe from flooding, but I'd hate to have spilled water kill
my stuff.

~~~
alphapapa
> This is on the second floor of my house, so it's safe from flooding

Not if there's a toilet on the second floor. I speak from experience. :(
Neighbor's upstairs toilet tank burst while she was at church. Couple hours
later, it's raining inside my apartment and her whole place is ruined. I was
very lucky that my computers didn't get ruined. 50-cent piece of plastic
connecting the tank to the supply line caused thousands of dollars of damage.

------
technofiend
It's stupid overkill, but I plan to do this using Ceph. The hard part is
finding the right low-power, high-performance server. Suggestions welcome.

