For those who don't know, borg is a backup utility that has been called the "holy grail of backups".
It takes your plaintext files and directories, chops them into gpg-encrypted chunks with encrypted, random filenames, and will upload (and maintain) them, with an efficient, changes-only update, to any SFTP/SSH capable server.
My understanding is that the reason people are using borg instead of duplicity is that duplicity forces you to re-upload your entire backup set every month or two or three, depending on how often you update ... and borg just lets you keep updating the remote copy forever.
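To make the chunk-store idea concrete, here is a toy sketch of content-addressed deduplication in Python (fixed-size chunks and no encryption, for brevity; Borg actually uses content-defined chunking and encrypts every chunk):

    import hashlib

    def store_file(data: bytes, repo: dict, chunk_size: int = 4 * 1024 * 1024):
        """Split data into chunks keyed by hash; identical chunks are stored once."""
        manifest = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            cid = hashlib.sha256(chunk).hexdigest()
            repo.setdefault(cid, chunk)  # dedup: an already-seen chunk costs nothing
            manifest.append(cid)
        return manifest  # enough to reassemble the file later

    # An incremental update only uploads chunks whose hashes the repo lacks,
    # which is why the remote copy can be kept updated forever.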
Last time I checked there was still no work being done on improving performance. Multithreading has been on the agenda for years and will probably never come, much like improved crypto.
Thus his post was valuable to me.
"Why not add a PR" is not a reasonable response to something like this. I'm looking to use a backup system, not create one. If something works so poorly that I'd have to start submitting PRs before using it, I'm far more inclined to go elsewhere.
> The two points you mention are both known and on the agenda for this year
This is a rather strange response to the above...
The MAC should prevent the remote server from changing the counter or any encrypted bits.
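A minimal sketch of that idea (not Borg's actual format): if the tag is computed over the counter together with the ciphertext, the server can't roll the counter back or flip encrypted bits without the client noticing.

    import hmac, hashlib

    def seal(mac_key: bytes, counter: int, ciphertext: bytes) -> bytes:
        # The tag covers counter + ciphertext, so neither can be altered undetected.
        msg = counter.to_bytes(8, "big") + ciphertext
        return hmac.new(mac_key, msg, hashlib.sha256).digest()

    def verify(mac_key: bytes, counter: int, ciphertext: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(seal(mac_key, counter, ciphertext), tag)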
(I'm the original author, but Thomas, the current Borg maintainer is also very active.)
Edit: It looks like Vorta doesn't obey macOS's Dark Mode, which makes it appear that the app doesn't launch.
You could probably run it today using the Windows 10 Linux subsystem, but it's not fully tested and will need some small fixes. Maybe later this year.
If anyone is interested in working on this platform, just post at . It's probably very doable.
(Note: I work on Arq)
As it is, it's really best for workstations rather than the NAS/servers a lot of people who care about their data use (not just basement-dwelling data hoarders but also professionals like photographers and videographers).
I'm quite sure that if Arq were available in a command-line version for Linux/FreeBSD, I would use that instead. It has been nothing but rock solid on my client machines.
Restore through the UI is a bit wonky, but once it's started it works well.
I don't think Arq brings anything to the table that Borg/Restic/Duplicity don't already provide, besides being a piece of software I'm familiar with, and trust. (And trust doesn't come easy when it comes to backup software!)
As I wrote, I already use Arq on my client machines instead of the mess that is "Time Machine over wifi", but when it came to backing up my NAS and server/VPS, I had to look for something else, as Arq is not available on those platforms.
My use case would be backing up with Arq from a FreeBSD/Linux NAS/server to a remote (networked at least) target, and in case of a full restore, I would use the command line as well.
I would then use Arq for Mac/Windows for "plucking" single-file restores.
Now I’m using both Duplicity and Restic, for different backups.
* Fully open-source, command-line implementation. Script as needed (see the sketch below).
* Supports ton of backends, including directories, SFTP, and many cloud services.
* Supports multiple backup operations in parallel. Create one repo for multiple machines.
* A repo is really an encrypted object store with lists of directory/file/metadata references (snapshots). If the same file exists in multiple directories/drives/machines, it is deduplicated.
* Supports tagging, which allows you to identify independent snapshots and manage them.
* Supports purging old data with conditions, e.g. keep the last 7 daily, 3 weekly, and 6 monthly snapshots.
* Can verify and repair repositories.
I regularly use it to back up my home server to a backup drive and to Backblaze. I've also used it to back up the home directory of my development machine to replicate it on another machine.
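A sketch of how that scripting typically looks (the repo URL and paths are made up; restic reads the repository password from the RESTIC_PASSWORD environment variable):

    import subprocess

    REPO = "sftp:backup@example.com:/srv/restic-repo"  # hypothetical backend

    def restic(*args):
        subprocess.run(["restic", "-r", REPO, *args], check=True)

    restic("backup", "/home")                  # deduplicated, incremental snapshot
    restic("forget", "--prune",                # the retention policy from the list above
           "--keep-daily", "7", "--keep-weekly", "3", "--keep-monthly", "6")
    restic("check")                            # verify repository integrity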
I'm setting up backups and looked into most available solutions, including restic, borg, and duplicity, but Bareos by far seems the best one. It's just a little harder to set up, but totally worth it.
According to a more up to date comparison (recommends Arq), Borg is also slow because it processes sequentially.
This is true of most backup programs.
Took me forever to find that myself so just posting it here in the hopes that more people will support the developers.
This can be supplemented further by having it auto sync to a computer in your office whenever you pull into work. Add in some monitoring so you can get a phone alert if any of the synced copies are more than a few days out of date, and you are almost golden.
It's not perfect, but it's way better than not having any offsite backups at all. I have a deep distrust of Cloud storage services so for me I think this setup is the best solution I'm going to get.
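The monitoring piece can be as simple as a daily cron job that checks the age of the newest synced copy (paths and addresses below are placeholders, and it assumes a local mail server for the alert):

    import os, time, smtplib
    from email.message import EmailMessage

    BACKUP_MARKER = "/mnt/backup/last-sync"  # hypothetical file touched after each sync
    MAX_AGE_DAYS = 3

    age_days = (time.time() - os.path.getmtime(BACKUP_MARKER)) / 86400
    if age_days > MAX_AGE_DAYS:
        msg = EmailMessage()
        msg["Subject"] = f"Backup is {age_days:.1f} days stale"
        msg["From"], msg["To"] = "nas@example.com", "alerts@example.com"
        msg.set_content("One of the synced copies is out of date; investigate.")
        with smtplib.SMTP("localhost") as s:  # assumes a local MTA is running
            s.send_message(msg)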
 I'm in the UK, so our typical 'sheds' are absolutely tiny - this one is not.
The data I really care about is encrypted right on my laptop, and I back up that encrypted data.
Use a solution that lets you easily switch from one provider to another, and switch whenever there is any warning that the one you use might be in trouble. Along the same lines, make sure people in your family (or technically savvy friends) understand your system, are able to switch, and know to pay the cloud provider bills if you're not around - unless it is OK for your entire storage to disappear when you die, of course.
It's susceptible to MAC spoofing, but I hope my Pi won't join a spoofed network.
What if the drive gets stolen?
All data is encrypted before sending to it. If it gets stolen, well it is only one of the copies of the data (however it lives hidden under the dash, and is small enough that it looks like a piece of car equipment).
This was so I could use it to back up my Synology box to a set of external USB drives and retire the older Mac Mini I had been using. I'm still using the Mini...
In the case of the author, +1 for doing backups, but -1 for not establishing a chain of off-site copies (a safety deposit box at your bank, a relative's house that's some distance away, etc.)
If a crappy NAS can run disk encryption, so can a Raspberry Pi. They're deceptively potent.
Yes they can, and GoldenEye. I have just played Mario Kart with the kid before bedtime. It stutters a little if you have too many karts on screen, but that’s not a problem when you’re winning.
A lot fewer choices are available.
NUCs are expensive. I wonder when an 8-core ARM-based NUC will arrive on the market.
No idea if a NUC can be powered by a Xiaomi Mi 2i power bank, but a Raspberry Pi can be.
1. Gigabit Ethernet
2. Small form factor NUC or smaller.
3. 8+ cores
4. Very low power consumption
It's an 8-core ARM SBC, with 2GB RAM, Gigabit Ethernet on one USB3 bus, and a SATA connector on another USB3 bus. It can fully saturate spinning rust, though it might have problems with SSD drives. In any case, it will fully saturate your GigE link with even a slow hard drive.
It runs a standard Linux kernel, with the "driver" for the hardware added, which is why it requires a special build. Latest kernel version is 4.18. There is also an OMV build for it.
Mine uses around 4-8W, depending on how much the harddrive is being used.
I've got a couple of them running as backup targets (Arq, Borg, etc) with a 4TB WD Red and Btrfs, and they've been rock solid.
If you need hardware AES, you should probably be looking elsewhere.
I have experimented with a very large number of devices between NUC and Pi.
While the best processors for such devices would be ARM processors (with Cortex-A76 or Cortex-A75), nobody offers adequate devices.
You can find only devices with obsolete slow ARM processors, e.g. RK3399, which are much faster than Pi but much slower than x86, or you can find development systems or modules with modern smartphone ARM processors, but those are much more expensive than a faster Intel NUC.
Therefore, for the moment the only sensible choices are devices with either Intel Gemini Lake processors or with Intel Y-Series processors.
The devices with Intel Y-Series processors are much faster, but they are also more expensive. While Zotac has presented at CES one such device, its price and time of availability are unknown.
For now, the only device available with a Y-Series processor is the "LattePanda Alpha". Considering that its price includes 8 GB RAM, 64 GB flash and the Windows licence (even if I wiped it and installed Linux), it is cheaper than an i3 NUC, which at the same price comes with no memory.
It has everything a NUC has, except that it has one less USB port (it has 1 C + 3 A, while some NUCs have 1 C + 4 A). I have installed in its 2 M.2 connectors a NVMe SSD and a second Ethernet interface. The speed was excellent, because the Kaby Lake CPU was configured for 7 W TDP with 15 W TDP for the first 28 seconds. The included fan does not start unless you run a heavy workload and even then it is silent. The size is smaller than a NUC but larger than a pico-ITX board.
If you want something cheaper, or if you want, like me, both full-size DisplayPort & HDMI connectors (not DisplayPort on USB C), then you must choose a Gemini Lake device, i.e. either Zotac ZBOX PI335 Gemini Lake (with N4100, not to be confused with the obsolete ZBOX PI335 with N3350) or an ODROID H2.
The ODROID H2 is faster and possibly cheaper (depending on the memory you buy or you already have; ZBOX has soldered memory), but it is larger. ODROID H2 is NUC-sized while PI335 Gemini Lake is pico-ITX sized. ODROID H2 has 2 Ethernet, but PI335 Gemini Lake includes WiFi & BT and it comes in a closed case which will protect it in dusty environments (both have passive cooling). PI335-GK uses a 5 V power supply, so it should be easy to be powered from a power bank. The CPU TDP is configured for steady state 6 W and 15 W for the first 28 seconds. The idle CPU power is less than 2 W. I have not measured the total idle power of the computer. With the CPU halted, the power consumption should be much less, but I have not measured it.
Gemini Lake has a speed just a little less than a Snapdragon 845, but it is much faster than older ARM and Atom processors.
If you'd put your disks into a replacement Synology unit, you would have been back online - config, data, and all - within a few minutes.
But this scenario? Lost almost all his data?
I once closed the terminal window I had spawned gparted from while it was resizing a partition... that was nerve-racking.
And joy of joys, it's not a standard off-the-shelf part, and pray you're not outside the warranty period, because the price of a spare is eye-watering.
Steve has posted a few other videos about failed Synology units, worth a watch before parting with your cash.
I recently did something similar with an old server and FreeNAS, but I am still not sure if I have a safe (enough) system overall. I chose RAIDZ2 with four 4TB WD Red drives and ECC RAM; it seems to be working well so far.
System has been running a year now without any bigger maintenance.
From this I learned about single points of failure.
Edit: also in the decade and a half since I learned that you should never trust a magic box, magic piece of software or magic container file system for backups. A plain file system you can just copy your shit back from is the closest thing to a guarantee. Also it’s cheaper to curate your data carefully than end up with 4TiB of crap you’re too scared to deal with on your hands.
These days, I'd suggest using a file system with some form of file checksum metadata. If one values the integrity of their data, bit rot is a thing.
On a mature file system, the catastrophic failure modes come only from hardware faults and misuse.
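If your file system doesn't checksum data itself (e.g. ext4), a userland checksum manifest is a rough substitute. A sketch, assuming you record the manifest once and re-verify periodically:

    import hashlib, json, pathlib

    def sha256_file(path: pathlib.Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    def verify(root: str, manifest_file: str = "checksums.json"):
        # Compare current hashes against a previously recorded manifest.
        manifest = json.loads(pathlib.Path(manifest_file).read_text())
        for rel, expected in manifest.items():
            if sha256_file(pathlib.Path(root) / rel) != expected:
                print(f"possible bit rot: {rel}")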
Probably close to 100% if your house catches on fire.
Wildfires for example often destroy homes, but generally allow most people to evacuate.
If you don't have 3 copies of your data (with at least 1 offsite) then your data doesn't really exist.
But, the other lesson is: Backups. Sounds like backups were shut off 6 months prior.
The other lesson is: Monitoring. Backups were going to the USB drive, but that stopped working at some point. Unless you have some tested monitoring of your backups, you are likely to lose data.
Glad this story had a happy ending.
This was one of the reasons I was okay with paying their prices. Even if the device completely craps the bed, I'll be able to hook the drives containing absolutely normal LVM/btrfs volumes up to another machine and get my data out.
We use a box with 2x2TB mirror zpools. It runs NAS4Free off a 128GB SSD. Initially we wanted to use FreeNAS, but it needs 16GB of RAM, while NAS4Free works fine with 8. It also does SMART monitoring, sending emails when something is up.
The Drobo is literally a black box to me. It either mounts and you can see disks, or it doesn’t. And if it doesn’t ... well. Yep. Right-o. I’ll try rebooting it, I guess.
Exactly. Onsite backup is not a backup, especially if it is directly connected to the primary data store.
And restores can use the open-source tool restic, so you don't have to be locked into Relica for accessing your data.
We're working on the ability to do byte-for-byte copies of a repository to other destinations to make data even harder to lose in these kinds of disaster scenarios, as well as a new UI to make it more pleasant and powerful.
Anyway, our goal is to help make robust backup strategies like what this guy needs really, really painless, because I'm as paranoid as he should have been about losing data.
Relica is archival software, but you can backup+archive your synced folders of course.
He didn't have a surge protector? Sweet Jesus. I don't plug my backed-up PC into anything not surge protected.
I'm not saying it's worthless, but it's not a silver bullet. You must plan for failure.
One of the few things that actually routinely comes with real surge arresters (i.e. gas-discharge tubes, not the MOVs plastered all over the place) is anything connected to a telephone line, e.g. A/VDSL modems. On some old PBXs for analog lines these were even contained in field-replaceable modules.
As others have said, electronics can get fried by power surges while plugged into even commercial grade surge suppressors. Home grade surge suppressors provide even less protection.
Much of Europe uses buried cables, so there's less risk from lightning strikes, and I think the higher line voltage (230V) reduces the effect of something like a dodgy elevator motor in the same building.
I have visited the US a few times, where I remember the lights would dim momentarily if the garbage compactor or the dryer were used.
I'd run some batteries in the basement and run my Synology (or similar) off of those. Additionally, the USB Backup of backups should be in the garage or the attic, if at all possible. Also, a nice cloud-backup solution that is capable of delta-uploads is a very good idea (cover against fire in the house, or any other "catastrophic" failure).
If you don't have DC experience or if you didn't do much hardware, it's common to over-focus on software. And to be fair, vice-versa :)
Except the ext filesystem was unreadable because it used a different page size. It required some shenanigans in userland, but thankfully I was able to recover the data. Seemed like a software fault on the box.
The chassis had to be destroyed to remove the drive and it was interesting to see the warranty explicitly mention the customer was allowed to do this to recover their data.
First, the USB external is probably OK except that its USB circuitry has taken a power hit. If it's a standard SATA drive inside that could probably be shucked and accessed. Counter: Some of these have drives that are no longer SATA but have a bunch of the USB connectivity built into the drive. At that point, you'd probably be looking at a few hundred $ of data recovery costs (yes, that little). Professional recovery of the RAID would be more expensive because pricing is often based on the number and capacity of the drives.
Second, RAID5? I know these were only 1TB drives, but be very wary of anything with only a single parity disk if you're looking at drives of 1TB or larger, particularly if they're sequentially-numbered drives from the same lot. With modern TB+ drives there's a not-insignificant chance of drive errors as you hammer the remaining drives to rebuild the array (rough numbers in the sketch below). If building one of these now, the price difference between a RAID5 of smaller disks and a RAID6 of larger ones is probably only a few dollars.
Third, if actually doing recovery the first thing you want to do is image the drives and work from the images. ddrescue is probably your simplest option there, but yes you're going to need a big chunk of drive space available.
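To put rough numbers on the rebuild risk mentioned above, a back-of-envelope sketch using the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits (real-world rates are disputed, as the next reply notes):

    # Rebuilding a 4x1TB RAID5 after one failure means reading ~3 TB
    # from the three surviving drives.
    bits_read = 3 * 1e12 * 8
    p_clean = (1 - 1e-14) ** bits_read
    print(f"P(URE during rebuild) ~ {1 - p_clean:.0%}")  # roughly 21%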
A bit of anecdata: I've been running RAID5 and RAID6 for years without issues on 4 and 8TB drives. Scrubs come back successful every month despite the claimed error rates, and the drive deaths I've had have been sudden whole-drive failures or write failures to a large portion of the drive.
Most notably, if a drive has failed there's a reason for it, and a lot of the possible reasons will be shared with the other drives in the array. Was there a manufacturing issue, and are all of the drives from the same lot (pretty common)? Is the RAID in a hostile environment (heat, vibration, bad power, etc.)? Heck, is the RAID normally very lightly used and going to have heat problems if it's under full load for hours (days?) during a rebuild?
There are also factors like how the RAID controller is going to handle another read error - will it drop a second disk if the drive reports a failure? For that matter, if an array drops to "degraded" due to an error are you going to immediately replace the drive or are you going to write it off as a one-time fluke and let the array rebuild? Do you keep a pool of spare drives around that you'll drop the failed one into after testing? I've regularly seen an array drop a drive due to something transient, then rebuild onto it and not have any more problems for years.
Even with a 2-drive failure in a RAID5 you're unlikely to lose much data - almost everything is likely still there on the disks unless there've been catastrophic failures (e.g. an array of the old "Deathstar" drives which were prone to head crashes). You just may need to do recovery which will generally mean imaging each of the drives and doing recovery based on working with those images instead of the original drives.
> First, the USB external is probably OK except that its USB circuitry has taken a power hit.
- My workstation has Linux software RAID-1 of 2x 6TiB drives (this provides robustness and uptime in case of single-drive failures, and ease of recovery).
- Another machine in my garage does incremental daily backup pulls over the network. It is set up as multiple discrete hard drives, thus partitioning single-drive failures (low cost; the garage is a separate building, and the host machine is an ARM board that actually turns off the HD PSU when not backing up, so the hard drives are fairly isolated from power spikes).
- I make a monthly incremental backup (three sets) onto an external 6TB USB hard drive encrypted with LUKS. This drive spends 99% of its life powered down in a cabinet at my office at work. It is protected against theft, fire, electrical spikes, etc... by my employer.
- There is not a *ucking cloud anywhere in this picture. I can get access to my backups within ~1hr in the worst case (a round trip drive to the office to pick up my drive).
You Kids need to learn how to take care of your shit - now get off my lawn!!
The NAS holds all our media/documents/music/whatever, and where possible, this gets auto uploaded from workstations to the NAS, mostly through Resilio Sync, but also ChronoSync.
A local Raspberry Pi (different building) acts as a node in the Resilio Sync setup, adding more redundancy.
The NAS backs up to a local USB drive nightly, as well as a remote (4km distance) Odroid HC2 with a WD Red drive. This device also runs Resilio Sync as a redundant node.
All machines run Btrfs where possible, with smart monitoring, daily short smart tests, weekly long smart tests, monthly scrubs, and log monitoring emailed to my inbox every morning.
Finally, I make yearly archive discs (100GB M-Disc) with the data from the past 12 months. I burn these in 2 copies; one is stored locally, the other is stored remotely.
Along with these discs, I also maintain a couple of 4TB USB3 drives, which I freshen (nondestructive badblocks) yearly, and update. Again, one is stored locally, the other remotely with the M-Discs.
Even with the above setup, there is a theoretical possibility of losing data, but as most data lives on both the NAS and the client machines, as well as a remote target, I would need to lose all 3 at the same time.
The only irreplaceable data would be our family photos, and those are also stored on optical and magnetic media (spending 360/365 days powered off), adding at least a couple more layers of redundancy to the equation.
This is what I call a poor man's offsite backup.
I've seen a few cold backup setups like this, but I would worry both about the significant gap in coverage between the drive you left at Christmas and the next time you update your backups, and about the poor shelf life of mechanical hard drives that are not in operation.
I had backups of the data itself. But I'd been doing lots of data massaging, and didn't have enough storage to keep copies of every step.
Anyway, so I bought a couple new servers. One to replace the dead one, to be setup with SQL Server. And a low-end one that would accept the controller from the old server. I just left the drives in their cage, and jury rigged power and data connections.
And it worked.
First of all, given the price of storage, I don’t think anything other than mirroring makes sense for backups. Just get 2 big hard drives for your NAS and set them up for mirroring. In the event of a failure, you can read directly.
Next, you don’t have a full backup without offsite storage. Even if it wasn’t a power surge, there could be flooding or fire at a single location.
Always remember the basic 3 2 1 rule!
Years ago I had a desktop with a 4-disk RAID 5 where the SSDs failed in quick succession (it's common). I lost some data -- or rather, I think I recovered files manually from the failed RAID and a cold backup. I switched to spinning disks, but after a while the new disks started beeping and generating RAID errors.
After much time and anxious guessing, I swapped the power supply and never had a problem since.
At the moment, I really like Perkeep (https://perkeep.org/). But I'm not sure whether this is a solution for everything.
On the hardware side, I also have not really decided. I want to build up my own NAS (custom hardware, no preconfigured thing), which should be quiet (if it is not doing anything, i.e. most of the time), as it will be in my home. Another NAS maybe at my parents home. And then maybe some cloud storage.
Another thing I was considering was M-Disc, which can supposedly hold information for 1000 years. But I wonder what other people's experience with it is.
Alternatively, I understand normal Blu-Ray disks should be able to have data retention of about 40 years, and that sounds decent to me.
I was looking into Amazon Glacier and/or Google Coldline, and the prices seem decent, but I do not like the fact that you have to pay monthly; even if it's a small sum, it's just one more thing to concern yourself with. I would like to prepay and know my stuff is up there for a couple of years.
Normal Dropbox/Google Drive plans are too expensive for storing big amounts of data, so not worth it. Plus, you have a copy of it locally as well (at least in normal use cases).
Thinking about it, I think this would be a great idea for a startup. Cheap long-term data storage with competitive prices, the ability to prepay, user-side encryption, and a simple UI that grandma can use to drop photos. It should be able to guarantee that the information you want preserved will outlive you.
As a consumer, I wouldn't trust a startup for such needs, because there's a likelihood that the startup would either raise prices, pivot to a different service offering, or shut down entirely.
Google Cloud Platform lets you make a manual prepayment, so that's an option worth considering. I know Google gets a lot of flak for shutting down consumer services, but I'm inclined to believe that they wouldn't shut down Nearline/Coldline without a significant amount of notice.
Fair enough. It's just that I think something in the field that can do these things would be filling a need. And all companies have to start _somewhere_
But I agree, getting a promise of long-term availability from a new company is a bit rich.
Thanks for the info about GCP - I looked mostly into AWS glacier and I know AWS did not have a prepay offer.
What finally clicked for me, is https://www.greyhole.net
It's like magic. Decide how much redundancy you want. Then just add drives to it. It balances files automatically across drives. You can have remote drives in the mix.
What it is not good at is many small files. But for my use, media files and backups (tar archives), it's a breeze. And the files are stored as normal files on the drives it distributes to, so there is nothing complicated to dig into should disaster strike. (Not that it has happened to me.)
No affiliation, just finally in a Zen state of mind when it comes to my home NAS.
Next step - make sure all of that is backed up off site too, but that is another thing altogether...
Throwing it on multiple clouds with version history isn't an issue.
(I'd recommend an O365 sub + duplicati...in theory you can push like 5TB to MS cloud).
Plus ironically all the content that would score me a TOS violation doesn't get backed up since it's easily replaceable...
MacBook Pro in for repair incl. wiped disk. No problem, I have two external drives with regular Time Machine backups, so go ahead.
Having received the laptop back, I plug in one of the drives and - oops, it's encrypted (with a good, safe, long password of random characters) that had previously been stored in the local login keychain, so the drives had always just silently auto-mounted over the years, and I had totally forgotten that they were encrypted!
Fortunately, I had the password saved somewhere and access to it. Otherwise my backups would have been for nought (though of course they themselves had the disk password inside... well protected by itself.)
Since my son was born the amount of photos and shitty quality videos that I backup is staggering.
Crashplan killed off their consumer plan a while ago, so I ended up moving to their business plan at double the price. In my case even at that price it was still worth it considering the amount of storage I'm using.
Anyone aware of other services offering unlimited storage for a single user? I know Dropbox, Google Drive, and OneDrive all offer unlimited storage in their business plans, but they all require a certain number of users before the unlimited storage kicks in.
When I shoot a model I will take 400-600 images in an hour, each about 25MB in size. The shoot I had last weekend resulted in approximately 18GB of RAW files, plus output JPG files.
In total I have just under 3TB of RAW, JPG, and other media files. (Sometimes I film shoots, or do some video work at the same time.)
That kind of volume is not huge, but still painful to upload remotely. It's also at the cusp of the kind of data you can back up to a cheap SAN box locally. I currently have two toy NAS devices, each with 2x4TB drives. If I want to bump my local capacity to 8TB, or similar, it'll get quite expensive.
For my non-pro and far more modest collection of around 300GB of images, I keep copies on three local machines, and one remote (family member) with sync changes being able to be done over home grade ADSL. With your volumes you could do off site sync via usb disk easily enough for new large ingests, and propagate smaller changes over the wire. Having a friend or family in the same city is very convenient compared to trying to hunt down the best all you can eat deal du jour, with no need to handle the regular t&c changes those services suffer. Good reciprocal opportunities too, of course.
Everything was on a decent UPS... but I’d completely forgotten the cable line.
You can completely air gap your network from the outside line by converting to fiber at some point (probably between your cable modem and your router, in a DOCSIS setup). Isn't foolproof, because lightning can induce current on your wires directly, but it'll help these kinds of scenarios.
- synced to home server, which, at this point, is a thinkpad x201 on an ultrabase with 2 disks - it has built-in ups, called a battery
- all of this synced to off-site rented server in Germany
- irreplaceable photos are on Blu-ray in yearly archives
This covers lost laptop, burglary, house fire/flood, etc. To avoid problems with lost ssh keys, I have a few users on that rented server which can log in with a password, in case of emergency.
Personal media archive: Windows 10 Pro, Storage Spaces with parity across 4 HDDs in a SATA JBOD. This is purely software RAID. Moving the drives (or part of them) to another Windows 10 system allows for seamless recovery.
This archive, as well as all personal computers, uses Backblaze for offsite backup (including versioning). Versioning is important in case of malware/accidents/buggy software. I don't consider any backup plan complete without this and being off-site (fire/theft).
For my business servers I use Tarsnap. (Off-site and versioning, 45 days)
Edit: Oh and everything is on UPS
Never worry about this again.
Online storage is cheap. Bandwidth is cheap. There's a multitude of solutions in the comments if a consumer solution like Dropbox or Google Drive isn't good enough for you.
Most ISPs in the US now have a 1TB datacap and charge ridiculous rates for overages.
AT&T gigabit customers are on unlimited plan for anyone else seeing this who is a customer, see https://www.att.com/esupport/article.html#!/u-verse-high-spe...
So downloading random Steam games to play or taking a few videos of the kids for family can easily result in an extra 150GB of incidental bandwidth.
Over Thanksgiving we hit 973GB with guests in the house. I reduced all the camera and Netflix quality settings the last week of November to avoid the penalty fee.
Generally the input is directly connected to the output, through the ferrite ring. Only in case of a power outage does a relay trigger and switch to the batteries. But that doesn't prevent a surge from going through the UPS.
You can hear the relay trigger when you disconnect it from wall power; the latency is high enough to allow plenty of damage.
There do exist UPSs without this problem, but they are more expensive, heavier, generate more heat, and consume more power. Look for double conversion UPSs that in the normal state go from AC -> DC -> AC.
Kinda weird that while computers are 100% DC, they are fed with 100% AC. I was pleasantly surprised that pretty much all ham radio stuff is DC fed. MUCH easier to do UPSs, solar, wind, or have multiple pieces of equipment share the same power supply. Imagine racks with a big power supply on top (and getting rid of all the AC -> DC conversion heat). I've actually seen these, but they were unfortunately cost prohibitive.
Also ... RAID is not a backup.
So I guess the options would be desktop backup on the various computers that feed the current NAS solution.
- originals: 2TB google drive + 50GB icloud
- backup #1 (autosync from drive/icloud): 4TB internal drive
- backup #2 (cold storage): 4TB external drive
Some degree of cold storage would have reduced the OP's stress level when restoring their NAS mind you.
From my family's perspective it’s just one drive, but it’s replicated everywhere.
You know, the data is still there, I just don't know which hashes correspond to which objects.
 - http://devan.blaze.com.au/blog/2019/1/20/the-folly-of-unimpo...
> My Synology has 4 x 1TB disks in a RAID 5
I'll bet this guy did not have a surge protector in front of his Synology PSU.
You will have mostly current backups of your project data and work data on all those machines you migrate from and toward.
The problem there then becomes destructive automation. You must avoid automating syncs with half-corrupted or fully corrupted instances of your work environment.
Also backups. Always backups.
The NAS should also have a warranty of some kind, or the controller could be repaired cheaply.
He was never in any data peril, so just fix that and then add an offsite backup to the mix
tl;dr I'm trusting Apple.
And use at least RAID-Z. RAID-5 and RAID-6 are now at failure-probability levels where your rebuild is likely to throw an error.
I think the best you can do is just get a top-quality power supply. SeaSonic will sell you a ridiculously overengineered box with a 12-year warranty for $160; it's guaranteed to have a longer useful lifetime than any other component in your computer, probably even including the case itself.
Cheap whole-house surge protectors are pretty much useless. They will work a few times, but the MOVs degrade and fail pretty quickly: every time a big inductive load switches on or off, it causes a voltage spike that trips the MOVs. Or the MOVs' trip voltage is set so high (to avoid quick degradation) that they don't really provide any protection. A good whole-house protector needs to use SADs (silicon avalanche diodes) or a passive LC filter; see Transtector or Thor Systems (SADs), or Pricewheeler (LC filter). These protectors can take many more surge hits than the ones you listed above. Why whole-house surge protectors don't work: https://zerosurge.com/wp-content/uploads/2016/10/USTech.pdf

Even with a whole-house protector, it would still be a good idea to use point-of-use surge protectors, since the surge has to reach the whole-house protector before it can be clamped, and surges can reach your electrical devices faster than that. It only takes a few nanoseconds to destroy a microchip. For instance, say you have a vacuum cleaner plugged into the same circuit your computer is connected to: the vacuum cleaner jams, causing a surge that hits your computer first, since it is closer than the whole-house protector down in your basement.
Source: comment section on https://www.youtube.com/watch?v=6PqO0aQaGDY
Should have read the manual though; it does tell you to back up certain data ranges when using encrypted ZFS, for exactly this case, so it's partly my fault.
That said, I've been using ZFS ever since, but on top of LUKS with Linux.
I've just rebuilt my home server/NAS, going from 2x3TB disks in a mirror with no encryption to 4x4TB disks in a striped mirror with native ZFS encryption on ZoL.
Most of my data is read-only media content: 90% read use, which is also extremely low IO-wise, and writes only happen when adding new media. I was thinking power loss would only risk the data currently being written, so it would not corrupt the entire zpool?
If native ZFS encryption can lead to losing a zpool on power loss, I might look into buying a used cheap Dell R210 II, sticking two 8TB drives in as a mirror, then putting it in a colocation data center and using native ZFS send/recv incremental snapshots for offsite. Looks to be cheaper than rsync.net's $30 per TB/month for ZFS once you've got 2-3TB+, and you can do the initial sync locally. Still looking at $700+ a year after initial hardware costs (rough arithmetic below).
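Rough arithmetic behind that comparison (prices as quoted above; everything else is approximate):

    data_tb = 3
    rsync_net_per_year = data_tb * 30 * 12  # $30 per TB-month -> $1080/yr at 3 TB
    colo_per_year = 700                     # quoted colo estimate, hardware extra
    # Colocation wins once the data set grows past a couple of TB.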
If you build this kind of thing because you enjoy it, more power to you. But I see no valid argument for its practicality.
3 Copies of your data
2 Different media (or vendors)
1 Offsite / Offline
There are all kinds of things I could think of that might cause problems with your Dropbox plans, but what would keep me up at night is all the ways I could not think of.
never trust a single vendor, a single NAS, a single anything... NEVER
This is still putting all of your eggs in one basket. What happens if I lose access to my account?
We had a similar debate at work, where I questioned the need for backup of my workstation, arguing that there's nothing on it that's not also in git, on a network share, in LastPass or the mail server. Apparently only two people out of a hundred, me and the CEO, do have important stuff stored solely on our workstations.
Calling people with 4TB of data hoarders may be a little unnecessary, but I do question how much of that data is ever going to be accessed in the future.
My photos alone are north of 4TB. That's DSLR, but not a crazy high-res one (to say nothing of people with video hobbies). I've always worked in small 2-8 person teams for companies that I've been heavily a part of, so that's a huge chunk too. But even discounting that data, I have quite a lot of projects that weigh in pretty heavily.
Yeah, I do have some datahoarding-type collections, because that's the sort of thing you end up doing when automation and total storage become commonplace, but just looking at "bytes I have created and will be lost forever if they vanish", I'm well north of 4TB. I think a lot of other people are too.
I don't mean this disparagingly, but if a person hasn't had any data-heavy hobbies, and has always been some kind of employee to a larger entity who manages data elsewhere, then yeah, your data footprint might be small. I imagine lots of HN regular types don't fit that mold, though.
(On the original link--I don't have much to add to the other comments here. But calling a 4-drive RAID5 setup robust in any sense is nuts. That's data loss waiting to happen, and probably made worse by thinking it is robust)
I wouldn't have that much to backup but some do, depending on their hobbies and work.
1. Backups should not be on a single drive.
2. Backups without checksums will result in corruption.
3. Offsite is a must.
4. Unencrypted off site backup means someone already copied your data.
5. Encrypted offsite backups should have forward secrecy: use different keys for each file, and the key file itself gets backed up encrypted.
My backup strategy:
File server runs zfs raidz with Daily/weekly/monthly snapshots on disk.
Snapshots get copied to 2 external drives, zfs mirrored.
Files get encrypted and uploaded to Backblaze using my custom software. Nothing fancy, just standard authenticated encryption (ChaCha20-Poly1305) but with per-file key management and Argon2.
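A sketch of the per-file-key scheme (not the actual software described above; assumes the `cryptography` and `argon2-cffi` Python packages):

    import os
    from argon2.low_level import hash_secret_raw, Type
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def master_key(password: bytes, salt: bytes) -> bytes:
        # Argon2id stretches the password into a 256-bit key-wrapping key.
        return hash_secret_raw(secret=password, salt=salt, time_cost=3,
                               memory_cost=64 * 1024, parallelism=4,
                               hash_len=32, type=Type.ID)

    def encrypt_file(master: bytes, plaintext: bytes):
        file_key = os.urandom(32)              # fresh random key per file
        n1, n2 = os.urandom(12), os.urandom(12)
        blob = ChaCha20Poly1305(file_key).encrypt(n1, plaintext, None)
        wrapped = ChaCha20Poly1305(master).encrypt(n2, file_key, None)
        # Upload n1 + blob; the n2 + wrapped entries form the encrypted
        # key file that itself gets backed up.
        return n1 + blob, n2 + wrapped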
Any references on PFS for backups? Was there no existing OSS backup solution that implements PFS?
Now, PFS would allow you to handle key compromise by making future backups unreadable. But there are other solutions for this (such as upgradeable encryption).