I actually bumped into rclone and use Amazon Drive for backups myself. I was worried they'd freak out if I stored multiple terabytes of data there, but if you haven't got any emails about nearly 50 TB of data, I guess I shouldn't be so concerned.
A silly toy project I've been working on is using Amazon Drive as a block device via something like nbd (Linux) or GEOM (FreeBSD). It would let me use existing drive encryption mechanisms (no personal data scraping for advertising purposes, Amazon) and have the ACD storage be a ZFS pool in its own right. Basically, a virtual 'SSD' backed by a collection of chunks (1 to 8 MiB, still benchmarking...) where TRIM deletes remotely stored chunks.
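The local half is all stock tooling; roughly this, as a hypothetical sketch (a plain sparse file stands in for the chunk-backed nbd server, and all paths/names are made up):

    # expose a backing store as a block device, encrypt it, and build a
    # zpool on top. In the real project the nbd server would translate
    # reads/writes into 1-8 MiB ACD chunks; here a file stands in for it.
    modprobe nbd
    qemu-nbd --connect=/dev/nbd0 /tmp/backing.img
    cryptsetup luksFormat /dev/nbd0      # Amazon only ever sees ciphertext
    cryptsetup open /dev/nbd0 acd0
    zpool create acdpool /dev/mapper/acd0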
Similarly, older storage controllers require more power even when idle, and older CPUs and mainboards also draw more at idle. Fan control in server boards has improved somewhat, and fans can draw a lot of power as well (just ask anyone who's run a Sun Fire server).
Amazon Drive sounds like an insanely great deal for me if it works the way I think it does, but the reason I haven't tried it yet is I honestly don't see them being able to keep storage space unlimited for very long, if it in fact competes in the same real-time sync'd storage space as the likes of Google Drive, OneDrive, Dropbox, etc.
Every such service that I've ever heard of that offered unlimited storage at one point has had to backtrack on its unlimited storage claims after enough users really took it up on the offer. Note I'm talking about services like Bitcasa and OneDrive that offer real-time sync/access, not backup & archival services like Backblaze and CrashPlan that don't have the same egress bandwidth requirements.
For those who use Amazon Drive and understand it better, does it do real-time sync like Google Drive, OneDrive and Dropbox? Or is it just another unidirectional backup service?
One limitation I've run into with Amazon Cloud Drive is a 50GB file size limit. Both of these support rclone, which is what I use for backups anyway.
It can be a bottleneck for a NAS. That said, depending on use case, 10GbE still might not be worth the extra expense.
Mine runs fine on an old Q6600 with 4G of RAM. I'm not trying to do anything silly, like enabling dedupe, so it's not a problem. I'm running ZFS on Linux, again, not a problem with that amount of RAM despite ZoL having a less than ideal caching situation in the kernel.
I do my backups with CrashPlan. Not only do I back up the NAS with CrashPlan, but I back up my various PCs and laptops to my NAS, for faster restores should I need them. CrashPlan supports peer backup, which works well.
I tuned my recordsize by copying a representative sample of files to different ZFS filesystems with different settings, and comparing du output. The empirical calculation seemed easier.
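If anyone wants to replicate the test, it's only a few commands; a sketch, with made-up pool and path names:

    # copy the same sample into datasets with different recordsizes,
    # then compare what each actually consumes on disk
    for rs in 16K 128K 1M; do
        zfs create -o recordsize=$rs tank/rstest-$rs
        cp -a /data/sample/. /tank/rstest-$rs/
    done
    zfs list -r -o name,recordsize,used tank   # or: du -sh /tank/rstest-*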
I've used my NAS for various things: video/photo archive, VM backing store (serving up zvols as iSCSI endpoints), working space for a variety of side projects. More than anything else, it gets rid of the idea that storage space is in any way scarce, and removes the requirement to delete things (very often, anyway; I built up several years' worth of motorcycle commute headcam videos at one point). My pool wanders between 40 and 70% utilization.
edit: Also, I'm using a SuperMicro chassis, not a Norco. I've got a section where I go into why I went with SuperMicro.
How do you handle file versions with rclone?
Cost is in the low thousands; I haven't tabulated it recently.
I've been using CrashPlan on my NAS for a few years now, but I feel bad and only really back up critical stuff (i.e., just a few TB) rather than the full 50TB+ on the NAS.
Now, my storage needs are fairly modest in comparison to the author. I'm running with RAID-Z2 and at 44% capacity. I have this unit in my work workstation, at an external office. It is quiet and cool.
I used to have a big house with a room I could put a bunch of computers in. I moved to a smaller house, but more importantly I was just tired of managing a business-class infrastructure at home (VLANs, multiple APs, UPSs, batteries, patch panel, servers, etc).
So I copied a backup of my storage server from an off-site box, to S3 with Glacier, copied the primary to this ZFS array on my workstation, removed junk I was just holding on to, and now my home infrastructure consists of a Cable Modem and Google WiFi mesh. Huge improvement in maintenance!
Here's a blog post I wrote about one of the previous incarnations: https://www.tummy.com/articles/ultimatestorage2008/
Thanks, I didn't realise these exist.
I've been wanting to create a DIY version of Synology's "slim" model line for a while, but haven't been able to find an off-the-shelf enclosure that would fit my needs. Putting six drives in a 5.25" bay is a fantastic idea.
I'm not sure if an ARM chipset would do justice for a multi-disk NAS setup though (or if the GnuBee would support RAID5/6 — it does seem to, with LVM/mdadm). My experience with consumer ARM-based NASes was that they suffered on transfer speed. I ended up going with a 4th generation Intel Pentium chip on a mini-ITX board, which offered the best compromise between price, performance, and power consumption for my use case.
What are your thoughts about Btrfs?
I think these are the Toshiba drives. Seems like 2TB is still as big as you can go on laptop form factor spinning drives.
Seagate has 4TB drives now (and I think they're still shuckable), but they're 15mm in height.
I read somewhere on Reddit's r/DataHoarder that they might be moving away from a SATA connector on their portable drives though…
One question I have regarding NAS is backups of the NAS itself.
If I have a very simple NAS with 2 drives in RAID 1 (call them Drive A and Drive B), and I want to make a physical backup of my NAS in a different location, how easy is it? What's the best practice? Ideally you could just have a big rsync job that takes care of it, or rclone as described in the article, but what if you want to do it without any network transfer (because your connection is too slow or you can't afford it)?
Does the following protocol make sense:
Remove Drive A and replace it with an empty Drive C. Wait for the NAS to synchronize Drive C. Then remove Drive B, replace it with Drive D, and wait for the NAS to synchronize Drive D.
Take Drives A and B to a different location, plug them in, and have the backup working out of the box.
Is it that easy? What about more complicated RAID setups?
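For a plain Linux mdadm RAID 1 (rather than ZFS), I imagine the mechanics would be something like this hypothetical sketch (device names made up):

    # pull Drive A out of the mirror
    mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
    # physically swap in blank Drive C, then rebuild onto it
    mdadm /dev/md0 --add /dev/sdc1
    cat /proc/mdstat                 # wait for the resync to finish
    # repeat for Drive B -> Drive D, then take A and B off-site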
Is there an easy "Prepare backup -> Please insert first drive for your backup -> First drive filled up -> Please insert second drive for your backup -> ... -> Please insert last drive for your backup" flow, after which you take all those newly filled drives, shove them in a different box, and they have all the data as of the time of the backup, with either the right ZFS and RAID configuration or at least a simple data dump in a non-RAID configuration?
Refreshing the backups is either done by putting the backups online at the remote location and syncing the deltas between the last snapshot and the current over the net, or by bringing the drives back to the primary, and sending the deltas.
The sanoid/syncoid toolset will help immensely with handling the necessary zfs snapshot and send/receive commands.
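As a sketch of what that looks like (pool and host names are made up):

    # sanoid keeps a rolling snapshot schedule per /etc/sanoid/sanoid.conf:
    #   [tank/data]
    #       use_template = production
    #   [template_production]
    #       hourly = 24
    #       daily = 30
    #       autosnap = yes
    #       autoprune = yes
    # syncoid then replicates the snapshots incrementally over SSH:
    syncoid tank/data root@backup-host:backup/data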
I've got a QNAP 4-drive NAS and the stuff I want to keep backed up fits in under 2TB. Based on that, I've got a 2TB external drive and have set up a backup job to sync the stuff I want to keep to that drive every morning.
There's no reason you can't get two drives and swap them every week. Just set your backup job to run weekly and unmount/eject the target drive when complete. Every Monday you just grab your drive and take it to work, and every Friday you bring the old drive home and plug it in.
Of course, any backup that requires manual work is likely to get neglected eventually. "Oh, I haven't done much work in the last week, I don't need to take the drive to work today" is something I've told myself all too often when it comes to my Mac backups.
Automated is better. If you have a second NAS at a remote site, you should be able to use ZFS send/receive to update a remote NAS over the internet.
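A minimal sketch of the send/receive loop (pool, host, and snapshot names are made up):

    # one-time full copy
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh backup-nas zfs receive -F backup/data
    # thereafter, ship only the deltas
    zfs snapshot tank/data@2017-04-10
    zfs send -i @base tank/data@2017-04-10 | ssh backup-nas zfs receive backup/data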
For me, I have 2x extra drives and a USB caddy; I rsync the array onto the caddy automatically and keep the unused drive offsite. This does mean that the wear on the live array is higher than on the offsite drives, and I have to have 4 drives total. But since RAID1 with a traditional filesystem doesn't provide integrity protection (e.g. bit errors on one drive can cause silent corruption), keeping independent rsync'd copies means I don't have to worry about subtle raid rebuild issues gradually propagating through the entire raid set.
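The sync itself is nothing fancy; something like this (a sketch, with made-up device and path names):

    # mirror the array onto whichever caddy drive is plugged in;
    # --delete keeps the copy an exact mirror of the array
    mount /dev/sd3i /mnt/caddy
    rsync -a --delete /srv/array/ /mnt/caddy/
    umount /mnt/caddy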
The 'cheaper' version would be to only have 1 offsite drive, but that means my data on the raid array is only protected from severe failures up to the last time I ran the sync.
Longer run, I'm looking at moving up to something with integrity protection, but since my server is OpenBSD (fewer storage configuration options), that means RAID5, which was only recently OK'd for rebuilds, and softraid RAID5 rebuilds take forever on spinning drives. I'll probably wait and upgrade to 4x SSDs first, or get a hardware RAID card (my data set is fairly small).
Another thought I had was to set up a Raspberry Pi at a friend/relative's place, have an rsync run nightly to it, and offer the same to them... but I haven't gotten around to it.
...but really, you should figure out a way to go over ethernet. Even if it takes a really long time, it'll be soooo much easier to have everything automated. You also don't have to risk pulling drives, etc...
Personally, I'm thinking of building a FreeNAS box for photo storage / media, but unfortunately I've only got a four-drive SuperMicro rack server (a spare I have from decommissioning a business).
Other things I'm interested in include hosting a few gaming servers for friends/family (Minecraft, etc.). It's hard to find uses for it other than offline, low-power media storage.
I look at it as a modern version of the boxes full of pictures and slides my parents and grandparents kept in their basement for decades and never looked at.
My experience with the FreeNAS people was "build a second nas", but that always struck me as stupid for a home setup.
I've got rclone set up to encrypt and upload everything to ACD. There's a section at the bottom of the article that goes into some depth on this and some other backup strategies I've tried in the past (including CrashPlan, Backblaze, and Zoolz, all of which are awful). Check http://jro.io/nas#rclone . I never considered building a second NAS; it does seem pretty stupid, even for an enterprise setup. The whole idea is to get the data off-site.
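The rclone side is just a crypt remote layered over a normal ACD remote. A sketch (remote names are made up; the article section has the full details):

    # "acd" is a plain Amazon Drive remote; "acd-crypt" is a crypt remote
    # wrapping it, so filenames and contents are encrypted before
    # anything leaves the box
    rclone config                # interactive: create acd, then acd-crypt
    rclone sync /mnt/tank acd-crypt: --transfers 8 --checkers 16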
As a side note, some of the so-called "FreeNAS people" can tend to blindly parrot a given general guideline without really understanding the reasoning behind it or why it might be perfectly valid in certain situations to disregard it. For instance, ask them about bhyve and I promise you'll get at least one response along the lines of "bhyve isn't officially supported in FreeNAS so you shouldn't use it under any circumstances, period."
If you're able to tag stuff you know you want to keep, and it's a smaller set, you could look into something like Backblaze B2 (previously a featured story on HN); the storage costs are relatively moderate, but restoration from it will cost you.
I haven't yet heard of any solution along the lines of "Rent a (large) NAS for a month" for those times that you're upgrading your array and need to switch filesystem formats. Having that option would make the juggling much easier and safer. Looking at the S3 storage and bandwidth costs I imagine there is actually a market to be served by such a product.
Maybe renting one of those higher end tape drives makes sense... but I can't get over the idea that even renting a stack of hard disks would be cheaper and more effective at this scale.
PS: Make sure you encrypt all of the data going into the temporary storage; those aren't your disks.
My home NAS is small enough that I can reliably back up to some USB external drives and store them in a drawer offsite. According to FreeNAS, that's a horrible solution because USB is too error prone, moving disks shortens their life, and blah blah blah; so USB backup is explicitly a WILLNOTFIX, and a sign that the requestor is stupid, as opposed to someone who knows full well what the risks are and is satisfied with them. The horrible FreeNAS community and the lack of this feature were why I adopted OpenMediaVault. (I highly recommend OMV.)
I guess I could always upload TARs to Glacier. That might be a legitimate solution.
Think pretty much the only viable solution for this for home users is to have a 'peering agreement' with a trusted friend where you each colo the other's machine at your home. However, this can be tricky because you're sticking all of your sensitive stuff in someone else's house and trading some level of full network access with each other, though I suppose trading access to some kind of encrypted rsync-like dumps or similar might work without some of those risks being too high.
My only suggestion would be to try out gaffer tape instead of the duct/masking tape combo. I started using it about a year ago, and haven't looked back. No residue and no dry flaking over time.
You'll be pleased to know that in the 9.10 nightly train, the base OS has been upgraded to FreeBSD 11 and the GUI has a UI for bhyve, which also enables VGA consoles.
So the 9.10 FreeNAS GUI will have bhyve management stuff in it? Or did I misunderstand that?
Anyway, I use a little script which monitors activity against my MD device (yah software RAID, another discussion) and sends full spindown commands to the drives. I've been experimenting with a full drive powerdown, and full port power down (different machine). If I weren't using the NAS as a DHCP/DDNS/etc server I would probably put it into S3 standby and then wake it on CIFS/NFS/RPC inbound.
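The script is trivial; roughly this (a sketch, with made-up device names):

    #!/bin/sh
    # if md0 has seen no I/O for ~10 minutes, put its members in standby;
    # columns 4 and 8 of /proc/diskstats are completed reads and writes
    prev=$(awk '$3 == "md0" {print $4 + $8}' /proc/diskstats)
    while sleep 600; do
        cur=$(awk '$3 == "md0" {print $4 + $8}' /proc/diskstats)
        if [ "$cur" = "$prev" ]; then
            for d in /dev/sd[abcd]; do
                hdparm -y "$d"       # immediate standby (spin down)
            done
        fi
        prev=$cur
    done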
I'm also using cheap x540 based 10G boards, which add about ~5W a port, but I turn them off on an idle timer and fail-over to a 1G port that I can't measure the power on.
Bottom line, a home NAS device isn't a server that needs to run 24/7. It's not hard to tweak stuff to pull the idle power way down. Given a long enough timer (say 1-2 hours), you will only notice the machine resume/spin up once a day, when you initially sit down at your desktop.
As a comparison, S3 for 60TB is $1600 a month.
One thing to consider is that I'm in the northeast US (philly) where the ambient temperature is generally lower than other places. I've also only had this server running through the colder months (built it in fall), so it may get noisy as the house gets warmer in summer. I would definitely not want this server in my bedroom or living room, but so far, it's been okay in my office.
edit: I should also point out that, regarding the server passing the "wife test", she tends to get hyper focused on whatever task she's working on, so background noise doesn't bother her as much as it does other people. YMMV!
If you don't have those problems with noise, it doesn't matter. (And I would recommend avoiding those "hardware" RAIDs; I have much more trust in widely used open-source solutions.)
E.g., QNAP requires you to install security patches manually, and as a result there was a ShellShock worm exploiting QNAP boxes: https://threatpost.com/shellshock-worm-exploiting-unpatched-...
A high-availability file server is a different beast from a home NAS.
(This must be done before creating the ZFS pool.)
I originally started building it around a Dell YK838 SAS 6i PCIe controller card, but got tired of fiddling with using other systems to reflash the firmware and then having to mask off pins on the PCIe connector to get the system to even POST.
I replaced all the "factory" fans, as well as the two on the back of the hotswap drive modules, with Noctua equivalents.
Why FreeNAS; why not vanilla FreeBSD?
From what I can tell, FreeNAS offers a pretty GUI and some tuning on top of FreeBSD. Anything else?
I however wanted to learn all the details when I set up my NAS, so went with Debian Jessie and BTRFS. I only use AFS and NFS, and I'm the only user, so don't need half the features FreeNAS provides. Graphs are pretty, but I can quickly get all the information I need via SSH.
- Get a generic small tower box with 3-4 year old hardware in it via eBay (or a business IT clearance site)
- If it's not got Windows on it, find a dirt cheap copy of Windows Server 2008 (again eBay etc).
- If there's no SATA RAID on the motherboard (unlikely), get a 4 to 8 port PCIe RAID card.
- Throw in a bunch of identical disks of your preferred size
- For Raid 1: In Windows mount them all, and create Mirrored sets in software (via Diskmgmt)
- For Raid 5: Either do it via the BIOS (if supported) or via Diskmgmt (however RAID 5 in software is quite slow).
- Create file shares (SMB/CIFS/FTP etc).
- Job done.
I have this as my main File Server, and unless I'm hammering the box from multiple clients simultaneously I get max Gigabit throughput on all file transfers.
Also, and this is the big bonus for me, Software RAID 1 in Windows doesn't create funny disk volumes, so you can break the mirror and still access all your data from the remaining drive(s) - I've seen horror stories of bespoke partitioning in commercial NASs, and people losing data when the motherboards die - I don't want that ever happening to me.
Finally - Windows Server also supports iSCSI, so you can just keep adding new boxes with disks in, all presented via the same File Server.
One advantage of this setup is you can have multiple block devices of differing parity on top of the same pool. The pool is also more flexible than ZFS, allowing you to add drives of different sizes and later remove drives.
The main thing I miss from ZFS is the ability to create snapshots and do zfs send/receive for backups. Also not being able to read the source code is a bummer.
Any link, please? Googling just returned a bunch of results about the meaning of clearance vs. sale and why businesses do clearance sales.
Add $20/month for electricity:
3 years: $197/month
4 years: $156/month
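(Back-solving those figures: ($197 - $20) × 36 ≈ $6,370 of hardware amortized over 3 years, or ($156 - $20) × 48 ≈ $6,530 over 4.)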
Could that rent respectable enough device(s) in the cloud?
Granted I didn't do this math before I dropped a few stacks on hardware. I really wanted to build it, configure it, play with it, etc. It's been a really fun project for me, and I've got a bunch of other stuff I want to do with it in the next few years (10GbE, X11 nodes, set up dedupe, etc).
So he's currently using about 10TB of 60TB of usable space. If he uses Amazon S3's standard storage, he would be paying about $230/mo. If he uses infrequent storage that is $125/mo. That goes up as his usage goes, so when he's using 30TB that will be $690/mo and $375/mo respectively. He also has the benefit of high speed ethernet with the home NAS, unless he has fiber 1Gig internet, in which case speed is probably a wash. I'm not sure if there are other significantly cheaper cloud storage solutions at that scale.
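(To show the arithmetic: 10TB is ~10,000GB, so standard at $0.023/GB-month comes to ~$230/mo, and infrequent access at $0.0125/GB-month to ~$125/mo, ignoring request and egress charges.)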
So I'd say he hasn't done too badly for himself, though he probably could have saved some on the hardware by getting cheaper parts.
Other way around; he's using ~50TB of ~60TB with ~10TB free.
Amazon Cloud Drive is unlimited and $60 a year.
That many drives, you should be able to saturate a good part of a 10Gb link. The difference for streaming read/write loads (like copying movies) is night and day vs 1Gbit. That is assuming you've got enough CPU to run ZFS that fast, which is part of the reason I stick to Linux MD/XFS: every time I try to use ZFS I run out of CPU or RAM. Also, given that I want low idle power, I'm not willing to throw enough hardware at it to make ZFS run well.
Frankly, the ASUS XG-D2008 and a few Chinese X540 boards (~$100 for two ports) with cat5 cost less than some of the fancy home APs, and well within the prices I paid for my first 1G hardware. Two workstations and a server should be less than $600.
Adding to this, there is the Ubiquiti US-16-XG, which also has a bunch of SFP+ ports, for under $600.
Is anyone here using FreeNAS Corral? I've been toying around with it for a few days, and it's been a frustrating experience. My first sign that I should've avoided it was that trying to install 10.0.3 with UEFI just fails. With 9.10 they have a nice docs page with lots of details and info, but with Corral they just threw it all out. The web interface provides no help, and the CLI just gives a brief sentence which is usually of little help. I can see the long-term potential in Corral, but right now it feels like a rushed-out beta. Should I just install FreeNAS 9.10 instead, or is it worth sticking with Corral? Are there any other OSs I should try out?
Since we're on the subject of home networks, what are people's thoughts on running FreeIPA and FreeRADIUS? I'm hoping to use them to set up a home VPN server, as well as provide a means of performing authN/authZ for multiple "personal cloud" applications. My goal is to reduce my reliance on cloud providers, since I've grown increasingly uncomfortable with their practices and the loss of privacy.
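Getting the IPA side up at least looks straightforward; something like this sketch (Fedora-flavored; domain/realm/host names are all made up):

    # stand up the FreeIPA server (Kerberos + LDAP + CA)
    dnf install -y freeipa-server
    ipa-server-install --domain=home.example.net --realm=HOME.EXAMPLE.NET
    # then point FreeRADIUS at IPA's LDAP for authentication, e.g.
    # server = 'ipa.home.example.net' in /etc/raddb/mods-available/ldap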
After yet another issue with basic functionality in the Corral release, and reading about even worse experiences on the forums, I decided I couldn't trust this product with the most crucial role in my environment and installed NAS4Free instead.
Only thing I've seen so far is https://rclone.org/crypt/
In other words, to my non-crypto-expert eyes, there is no glaring misuse of the "golang.org/x/crypto/nacl/secretbox" API that jumps out at me; I haven't looked at that package to see if it's okay.
The 4-drive version is a SuperMicro 721TQ-250B system.
This processor supports 128GB+ of RAM. ZFS dedup uses a lot of RAM.
Edit: Actually, now that I look back at it, I'm almost positive this is the deeper LACK "coffee table".
By the way, if anyone is considering deploying their own LackRack, I would highly recommend reading the installation section in the OP. It's got some quirks that are worth considering before you dive in.
I wish I'd known/thought about that years ago when I bought a metal rack, which I subsequently ditched due to space constraints... though at that point I was young/stupid/wanting-'cool' enough that I probably wouldn't have cared.
First thing that jumps out at me, though: in the photos at the top, the UPS is on the floor.
Get that UPS off the floor! When your house suffers minor flooding (burst pipe, overflowing toilet, leaky roof, etc), it will be sitting in it. You think it won't happen--but then it does, and if your electronics on the floor don't get damaged, you're lucky.
Not if there's a toilet on the second floor. I speak from experience. :( Neighbor's upstairs toilet tank burst while she was at church. Couple hours later, it's raining inside my apartment and her whole place is ruined. I was very lucky that my computers didn't get ruined. 50-cent piece of plastic connecting the tank to the supply line caused thousands of dollars of damage.