I don't trust Time Machine any more. Years ago I wrote some shell scripts that help me mostly automate my complete system setup (with brew and friends). From time to time, I wipe my entire system and restore it with these scripts. Sometimes I have to adjust them, but mostly, they work without changes.
For my data backups, I use restic. The big advantage is that I can read my backups even without a macOS system present (e.g. when my only macOS system had a hardware issue, my Time Machine backup was pretty much useless until I got a new one).
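For anyone curious, the whole restic workflow is only a handful of commands. A minimal sketch (the repository host, path, and backup directories here are made up for illustration):

```shell
# One-time: create an encrypted repository on any machine reachable over SSH
restic -r sftp:backup@example-host:/srv/restic-repo init

# Back up a directory; restic deduplicates and only uploads changed chunks
restic -r sftp:backup@example-host:/srv/restic-repo backup ~/Documents

# List snapshots, and restore the latest one somewhere safe
restic -r sftp:backup@example-host:/srv/restic-repo snapshots
restic -r sftp:backup@example-host:/srv/restic-repo restore latest --target /tmp/restored

# Periodically verify repository integrity
restic -r sftp:backup@example-host:/srv/restic-repo check
```

Because the repository format is open and restic runs on Linux too, the same commands work from any machine, which is the portability benefit described above.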
I know this solution is not for everybody, but Time Machine has corrupted my backups more than five times now, and it feels so slow compared to restic that I don't even think about retrying it after a new macOS release, even if my solution is a bit more work.
People who want something similar may want to look at Arq [1]. Similar to restic, it provides incremental encrypted backups to most cloud providers (or a machine with SSH access). But it is a Mac app, making it easy to configure and maintain. I have never had issues with data corruption so far.
Disclaimer: I am not affiliated with them, just a happy user for 9 years.
It's incredibly freeing to be able to dump your remote backups wherever you have space. Right now, the best for me is to send them to a virtual Linux box where I have enough space left.
It supports Amazon Drive, AWS, Backblaze B2, Dropbox, Filebase, Google Cloud, Google Drive, OneDrive, SFTP, SharePoint, Storj, Wasabi, network shares, or any other S3-compatible server.
It's a native, let me repeat that, NATIVE Mac app and behaves exactly like one. Incredibly satisfying. Unlike Time Machine, it'll tell you what went right and what went wrong. I have a lifetime license.
It looks great but I suppose the downside, like a lot of nice native Mac apps, is that it seems to be Mac-only. So if you also want to back up non-Mac systems (e.g. to make off-site backups of NAS or server data), you'll need to set up, maybe pay for, and grok 2+ backup systems.
There was a time when Time Machine was pretty good. But Apple became increasingly parochial about TM backups, such that you cannot cull them even with root access. Any culling must be done with the TM app from the original machine that owned the files. This makes it impossible to be the sysadmin for your family's backups.
Now I use Carbon Copy Cloner, Syncthing, and Arq. My family's backups are quicker, more seamless and much easier to administer as a result.
I was trying out Restic (through auto-restic) this weekend. And I really wanted to make it work.
However, I have two user accounts on my Mac (mine and my spouse’s) and I couldn’t get Restic to access the other account’s data. I am an admin, I ran Restic as root, gave it “full disk access” - still couldn’t make it work.
Would you be able to share those scripts? Or if not, an obfuscated/abstracted version of them?
I use brew but even with brew and brew cask, I find there are plenty of GUI apps that require manual installation, which will also have their own config spread out in various places, plus whatever CLI or background stuff I've installed myself (mostly hidden files/directories in my home folder, so probably easier), system settings, etc. I don't know how I'd reasonably automate a restore of all of that stuff without having a Time Machine-style backup (essentially a disk image).
Unfortunately, they contain too much sensitive information to provide them online, and I don't have the time to make them nice and clean (it all started with a 50-line bash script). But there are plenty of users providing macOS rice / dotfiles...
I use Restic and back up to rsync.net. The only reason I use them is that they use ZFS and have an admin/user account system that allows for immutable snapshots once everything is set up.
Honestly though it doesn’t matter. The backups are encrypted outside of the PC you are using Restic on. You just need a drive or a PC with storage space and SSH access that you can access.
Just start with an external USB drive. You can pick up 8 TB for under $200. If you want to go the extra step, you can buy two or three and physically swap them every N weeks. Store the older one at a different location.
Far less sophisticated than a real NAS, but also fewer moving pieces and things to go wrong.
Keep brew far away from your prod system. For everything you can't do in macOS, just use a VM.
Don't touch the system and Time Machine will never corrupt; it's that easy, in my experience.
But… brew doesn’t touch the system, and never did?
Even when it used /usr/local (before the root system became hard readonly) that was an unused folder path, and it explicitly refuses to install as root.
RPis are great hobby electronics projects, but IMO they don’t have any benefit for this kind of use over a thin client?
They’re more expensive, you need an extra RPi enclosure and power supply, you can’t use M.2 SSD sticks without an additional USB3 enclosure, they have a history of corrupting flash cards, and their idle power is only a little bit lower.
Can confirm about MicroSD flash cards, they're too unreliable for stuff that must be on 24/7, and get corrupt easily. Good for tinkering projects and media players, not for critical stuff where the downtime needed to pull the card and fsck/reinstall it might not be an option.
I've been running a weather website on a RPi2 since 2014. And it's on the same SD card. MicroSD cards don't get corrupted as much as the interwebz says they do.
It probably has to do with the number of unmanaged power cycles. A hobby electronics project will see many of those. A weather station may only see them when the power in the neighborhood goes down.
For something critical, especially one which already involves an external drive, there is zero reason to store your root fs on an SD card. Booting using other methods has been supported for literally years.
That seems pretty similar to this, just swapping out the thin client for the Pi (and the exact Linux distro, if that matters). Which makes it a trade-off in all the particulars :), depending on exact price, power use, etc.
My advice to retain your sanity: stop using Time Machine and use Carbon Copy Cloner [0] instead. It works. It keeps working. It has excellent documentation for any possible backup and restore cases. It is transparent about what it is doing.
Time Machine works fine until it doesn't. And it won't tell you that a backup is broken until you try to restore from it. The errors are going to be cryptic. There is going to be no support and the forums are not going to help. The broken backup is not going to be able to be repaired. Time Machine uses the "fuck you user" approach of not providing any information about what it does, or doesn't, or intends to do or whatever.
If your data is worth backing up, don't use Time Machine.
> Time Machine works fine until it doesn't. And it won't tell you that a backup is broken until you try to restore from it. The errors are going to be cryptic. There is going to be no support and the forums are not going to help.
and
> Time Machine uses the "fuck you user" approach
apply to every piece of software or service out there by Apple. Every single one!
iCloud? The only answer you’re ever going to get anywhere is “No, your data is getting synced”, “Why don’t you enable and disable it”, or “restart it”. My favourite is when Apple Support asks you to reset iOS or reinstall macOS entirely even when something as tiny as a sync problem occurs, with such confidence and such a matter-of-fact tone that you are forced to debate whether you are talking to some Kafkaesque and sadistic bot.
This is the Apple playbook. Naming and shaming them on social media doesn't work. Emails don’t get answered. Your customer request gets stonewalled. Sometimes you feel maybe it’s Apple paying you to use their products and not the other way round.
An ex manager was thought to be crazy by us freshers when he would say that the moment he gets a Mac, for personal or work usage, the first thing he does is install Linux on that. We used to think he was some GNU/FOSS nut. He would smile and say - give it time, you’ll understand.
And I understand now how helplessly hostile the Apple ecosystem is. You literally feel like a hostage: you put up with such stonewalling and infuriating limits that you start feeling this is the only way!
At this point I don’t think it’s anything other than self sabotage if anyone still says
- but I use Time Machine (or any XYZ Apple thing that involves data integrity and reliability) - it works, or is “good enough”
- or relies on something like iCloud for something like “data backup and integrity” in anyway.
Can’t wait for the day when file access is easier on iOS (after explicit permissions, of course, which is the case even today), or Android sucks less, in this glorious duopoly. I mean, when they’re forced to open up their platforms, because they’re not going to do it themselves!
I mean, Apple was never perfect. But I could at least understand that: in the old days they were nearly bankrupt and had, comparatively speaking, extremely limited resources. They had to focus on certain things and neglect others; the iPhone arguably took all their resources from 2005-2015. But somewhere along the line they gained more resources than they needed or knew how to spend. Some of the lacking attention to detail no longer makes any sense. And not only have they not gone and fixed it, their quality has gone downhill.
It is sad. You would expect a bigger Apple to be a better Apple. But like almost everything in life, it is counterintuitive.
Around 2007 I remember complimenting Apple's UX to a graybeard Apple employee who had been there through multiple eras of Jobs, and he sighed with frustration and said, "Apple really cares about user experience when they're not doing well, but as soon as the money comes in, it goes out the window."
Hah, I think I hit a nerve there. I hear you. Overall I'm actually super happy with my MacBook. Luckily, for the most part, no one is forced to use the Apple ecosystem.
CCC for system backups. Tresorit for backing up individual directories with a bit more privacy and flexibility than Dropbox.
I don't think it's self-sabotage to keep using apple software, but more a case of people not having been in the situation where things go wrong and there is nothing you can do :)
Personally I just use it for my documents and similar folders. It's not intended for full system backups. It's pretty much like Dropbox, but it's end-to-end encrypted so personally I feel better about trusting it with my documents.
I don't know; I have an 11-year-old HP EliteBook from 2011 and nothing aesthetic is failing. It has, however, seen a battery replacement and some memory and SSD upgrades. I think the battery and hard drive should be the only things we should expect to replace in a 10-year lifespan.
Agreed. And in this case, the rubber failure is almost entirely aesthetic: it does keep the screen away from the lower chassis/keyboard when closed, but you don't need _all_ of it to do that.
In my case, there are a few spots where the rubber's shape is no longer smooth: small bits have worn away and/or broken off. There's a few small bits on the top, and the lower right-hand side feels quite rough.
But it won't stop me continuing to use this laptop as my main personal machine for a little while longer (until OCLP can't get me security patches any longer).
You quite likely can? Especially if you go to a non-Apple, but Apple-authorized repair shop -- they'll have access to the parts, and will likely be willing to do it.
It might be an integrated component, though? I.e. you might need to pay for a new screen, not just the rubber?
I agree on iCloud, which behaves in a very erratic and opaque way. Don’t rely on it. I use Nextcloud instead and it works much better.
One software that syncs extremely well with iCloud is Obsidian, for reasons unknown (to me).
CCC costs $77.50 AUD per major version just to use the app - it might be OK but that's a lot of money!
Time Machine is free and "good enough" for local / local-network backups for most people; for remote backup, BorgBase (although the "Vorta" borg GUI app is dreadful) and Backblaze are affordable-ish options.
Also it doesn't look like CCC has write-only backups (immutable from the client side once created) like Borg has, there's no mention of encryption or deduplication in their feature list either - https://bombich.com/features
> TimeMachine is free and "good enough" for local / local network backups for most people
If you backup to an APFS-formatted network share, Time Machine is quite good and much more reliable than it was when backing up to HFS+ volumes. There was a lot of hackery going on to make HFS+ act like a modern file system emulating snapshots, which it didn't have. You could end up with millions of hard links; turns out that was kinda fragile…
APFS supports snapshots and a bunch of other features that make Time Machine faster and more reliable.
I use TM with both a local usb SSD and a samba network share on my home server which works great - I use XFS on the server but because it’s encrypted by TM it creates an APFS sparse container within the share.
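For reference, the Samba side of a setup like this needs the vfs_fruit module so macOS will advertise the share as a Time Machine destination. A minimal smb.conf fragment (the share name, path, user, and size cap are examples, not a complete config):

```ini
[global]
   min protocol = SMB2
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba

[TimeMachine]
   path = /srv/timemachine
   valid users = alice
   read only = no
   ; Advertise the share via the SMB2 AAPL extensions as a TM target
   fruit:time machine = yes
   ; Optional: cap how much space Time Machine thinks it has
   fruit:time machine max size = 1T
```

The underlying server filesystem (XFS here) doesn't matter much, since Time Machine writes everything into its own encrypted APFS sparse bundle inside the share.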
Yes, it is pricey. But you get what you pay for. The attention to detail in the documentation and patch release notes from Bombich is on point.
Zero regrets from my end about the money spent for it and it is one of the few software products where the price tag seems justified.
FWIW I first bought a license in 05/2019. The only major version update since then has been in 07/2021. And the license for the upgrade cost $30.47 AUD. They don't seem to be exploiting version updates as a form of revenue generation.
And yes, I'm sure there are backup solutions that work better in different scenarios. My main point is that Time Machine is rubbish for full system backups. CCC is polished, powerful, robust and power user friendly and one option that I have been super happy with.
I think they're maybe alluding the fact that if you have a backup system which fails when you need to use it (e.g. your Time Machine backup turns out to be corrupted, which quite a few people in the comments seem to have experienced), you'd wish that you'd paid money to have a backup system which worked instead.
You wouldn’t usually have just a single backup system. If the disk image that Time Machine is stored on had issues, that would usually be picked up at the time it was mounted, and it would either be repaired or you’d get an alert that the disk image is corrupt. It’s never going to be as good as something like Borg, where you’re doing regular data consistency checks, but you can still check the file system of the DMG as frequently as you wish.
Can I ask what you find inadequate about Vorta? I've recently started using it and it seems perfectly functional to me (although it's probably not an app I'd recommend to someone who doesn't already understand borg).
It’s x86 / intel only so runs in emulation mode, the GUI is clunky, confusing and slow, it spews errors every time my machine sleeps and doesn’t make managing exclusions easy.
Agreed, I use it because I use BorgBase for server backups, but I wouldn’t recommend it, especially on laptops. The backups stall every other time after waking from sleep.
There’s no real need to back up root-owned files on macOS, though; applications are all owned by the user and have their data sandboxed in the user container anyway.
No files that you’d want to back up, should be owned by root.
You’re probably spending upwards of $1000 USD to buy your Mac. You might as well factor the one-time (or maybe two-time, if you need to upgrade the major version again during the lifetime of the Mac) purchase of a backup solution into the cost of your Mac.
Because relying on Time Machine basically means your computer is a ticking time bomb just waiting to lose your data.
Over the years, I've used both CCC and SuperDuper in order to have bootable backups. I've always used these in addition to TimeMachine though.
I currently use TM to both my TrueNAS server and to a local disk (when I remember to plug it in). I use Arq to B2 for my home directory.
For my family, I have them use Backblaze. It's the only thing I've found that's really set and forget. They never remember to plug in a local TM drive. And setting up TM with a network drive, it flakes out every few months, then they just ignore the messages from TM that their drive isn't being backed up.
Backblaze pretty much Just Works. And it sends me an email report once a week.
A full restore via Backblaze would be a bit painful. But it's better than no backup at all.
They're pretty beta so if you use them please be careful. There are confirmations for major metadata changes, and the dirdedupe script runs in test mode by default (you have to use the --execute flag if you want it to actually do anything).
CCC is great, but it doesn’t back up in near-real time and doesn’t do versioning. It wouldn’t have saved my butt while I was on the road last Friday like “built-in” Time Machine in my SD slot did.
How reliable do you find it? I suppose if it's just for catastrophic emergencies, it will probably work through needing to use it. I'm guessing it will fail to write before it fails to read.
Carbon Copy Cloner has a mode to periodically check backup integrity, and will alert you on mismatch. Given that unlike ZFS, APFS doesn't actually checksum user data, it's worth it for that feature alone. If you use ZFS for your backup destination, then this effectively ends up giving you verification of your non-ZFS source as well. And even if you don't backup to ZFS, then you still get notified that corruption was detected, allowing you to triage and take action.
I think it's CCC5 or CCC6 that actually exposed the list of file mismatches to the user UI, but the diffs were greppable in the diagnostic logs all the way back to CCC3.
I've found that the ideal size of a Time Machine drive is about twice that of the target drive. It lasts for months if not years -- enough to go back. Beyond that, there must be an archival/backup system in place if you still want to get back files deleted years ago.
In my experience there's no need to worry about long term disk usage of Time Machine. The backup is going to get in a bad state and start failing to add new backups, requiring you to wipe it and start over before the drive fills up.
My anecdata is the opposite of yours. My backup history is over 15 years old!
I started backing up using Time Machine when I got my first Mac in 2006. Migrated my sparsebundle from my first Time Capsule to another when I needed a bigger one, then migrated that to a number of other drives over the years (which thankfully never failed, but I replaced them every 4 years). Have successfully restored from a backup onto at least ten different laptops (I was upgrading every year for a while). I have literally only ever had “one” Time Machine backup in the entire time that I have had a Mac. Honestly, I’m blown away it still works.
I maintain other backups too, I’m not crazy to think that this is rock solid, but at this point I’m keeping it going just to see how long it will last.
I’ve had this happen enough to just expect it “someday” and learned not to fight it.
When Time Machine goes stupid, just wipe it and start over. Any efforts to try and recover or repair are just folly. Just watch your life tick away for days and days only to inevitably fail.
It doesn’t happen often, but does happen every couple years. Many times it’s an indicator the drive is bad.
That said, I’ve never had a problem recovering from TM. Whether it’s a single file, directory, or a complete reinstall.
I have TM for my main drive, and use BackBlaze for everything (including my main and media drives).
I use Carbon Copy Cloner to have a replica of my drive. However, I still use Time Machine as a convenience to go back in time for my mistakes. My daughter thinks I'm one of those nerds that married a regular girl from college and can do magic with software because I can make her computer go back in time and bring back versions of her files.
> My daughter thinks I'm one of those nerds that married a regular girl from college and can do magic with software because I can make her computer go back in time and bring back versions of her files.
Wait until she discovers git!
As an aside, I wish there was an alternative like Time Machine on Windows.
Windows has one habit which I detest and I don't know why it acts this way: if you leave your drive disconnected for long enough, it will disable File History entirely and nag you about reconnecting and re-setting it up all over again. That recalcitrance is what discourages me from keeping it going at all. I just give up, it's not important enough to keep going through all that.
If it were really important for Windows not to have the drive disconnected for too long, it would issue pre-emptive warnings before the timeout was reached, but it doesn't, it just shames you after the horse has left the barn.
File History in Windows is based on shadow copies (snapshots) that run on a schedule. The Explorer client just accesses those snapshots.
I have been backing up to Time Machine for years without issues. Recently I started using pnpm (Nodejs package manager) and the backup process hangs on Mui material icons files (@mui/icons-material). Using "sudo fs_usage -f filesys backupd" I see that it is looping over the same files over and over.
In addition to Time Machine, I use iCloud for generic Mac apps, Backblaze for real disasters, and GitHub for all my personal projects. So my backup needs were already mostly covered without TM.
Though the one time I needed a full SSD restore, Time Machine was the fastest way to do it, and it worked like a charm, after literally 8 years of not looking at it.
But if I want to switch to CCC, you run their client app on your Mac, but how do you set up the server? Is it just a Linux NAS running Samba?
You just restore the OS without the Time Machine backup, and then you browse the Time Machine backup manually and copy the files you want. It's usually just a random file it can't read correctly in its index that breaks the whole thing; it's not like the whole backup is useless. You can still manually browse all the files and you are good, so I'm not sure how that is the 'fuck you user' approach.
CCC is great, although it’s not quite as useful on Apple Silicon Macs as on Intel ones, because the whole paradigm of “clone the drive onto a USB and boot it” doesn’t work.
Ah, when I installed Asahi a while back I read that it wasn’t possible to install to an external drive and must have gotten it confused with not being able to install any OS to an external drive.
Generally I think the use case is "duplicate the boot volume of my machine onto an attached usb drive" though I think these days not all Macs will boot from an external drive? I have SuperDuper doing scheduled backups like this, along with Backblaze for offsite (but not so bootable) backups.
Maybe if the location can be mounted as a network drive. But I'd say it would be making it do something it is not intended to do. I'd say it is really meant for backing up to external storage or drives on the local network.
I note that $25 got the author 16GB of storage - another $70 was spent on boosting that to 2TB.
For GNU/Linux users - but will obviously work just fine for Microsoft and Apple clients - a good combination is SyncThing on local + remote mini-server, with BorgBackup running on just the server side.
SyncThing gives you near-instant synchronisation, and BorgBackup gives you periodic archives (at your periodicity & retention preferences).
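The Borg half of that setup can be just a cron job on the server. A rough sketch (the repo path, source directories, and retention numbers here are illustrative, not the actual setup):

```shell
# One-time: create an encrypted, deduplicating repository
borg init --encryption=repokey-blake2 /srv/borg/family

# Periodic archive of the synced directories
borg create --stats /srv/borg/family::'{hostname}-{now:%Y-%m-%d}' \
    /home/alice/work /home/alice/private

# Apply the retention policy, then reclaim the freed space (borg >= 1.2)
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /srv/borg/family
borg compact /srv/borg/family
```

Running Borg only on the server side means the family machines never need Borg installed; SyncThing gets the data there, and the server handles versioning.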
For isolation purposes, I went for a small-footprint VM for each family member, and instructed that anything they really care about needs to go into ~/work/ or ~/private/ to be looked after.
You can often get those for as little as $10 these days, I paid $25 in 2020.
Interestingly the APU supports AES-NI and has a little crypto accelerator for SHA (unlike RPi which didn't splurge the extra dollar on Armv8 Cryptography Extensions).
I'd recommend going for t620 over the t520 for double the cores at the same idle power draw (6.5W).
If you are fine with just two cores Fujitsu Futro s520 is also great.
I’m not sure if AFP (via Netatalk) is really the right thing to use. I’m pretty sure that “native” Time Machine now prefers CIFS (Samba) over the network.
> I’m not sure if AFP (via Netatalk) is really the right thing to use.
For years, Netatalk was still the more dependable solution, despite the deprecation warnings. I've had very good luck with my current setup (ZFS and Samba 4.15.13 on Ubuntu 22.04), but I believe Netatalk still works very well.
Ok so like this seems to have been written today but I think it’s notable that the Mac they showed in the post is running a five year old version of macOS: I don’t actually think this is a good idea anymore, and I say this as someone who ran a Raspberry Pi in a similar configuration for a while.
Today, if you’re using Time Machine, you should back up to an APFS volume with snapshots. They are far faster and more reliable than HFS+ disks. I get the feeling that macOS might at some point stop supporting new HFS+ backups altogether. So, I wouldn’t recommend making them anymore.
Unfortunately, APFS drivers for Linux kinda sorta don’t exist. Not to the point where you’d want to rely on them for backups, anyways. So the move here is really to just get a Mac to handle this. My network has an old MacBook Pro (one of the last pre-Touch Bar ones) wired into the backbone for this. It’s pricey but believe me, this is the setup you want.
(By the way, if you’re using Time Machine to a network share, I recommend keeping another “worst case” backup that you refresh periodically. The network disk is convenient which is good for having periodic backups and quick access but from time to time this setup is liable to Time Machine corrupting itself in a way it doesn’t know how to fix.)
When Time Machine backups over the network, it creates a disk image and writes to that. So I don't understand why you would need APFS drivers on Linux?
Another cheap networked Time Machine solution is a used Intel Mac Mini (~$120 for 2.5 Ghz Intel i5, 8 GB Ram, 256 GB SSD), and set it up to export your time machine disks over the network. Not only is it "real" Time Machine, it allows you to add to an existing Time Machine disk that used to be plugged into your machine. No need to learn anything new.
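If you go this route, pointing a client Mac at the Mini's shared disk is a one-liner with tmutil (the user, hostname, and share name below are placeholders):

```shell
# Add (-a) the shared disk as an additional Time Machine destination
sudo tmutil setdestination -a "smb://alice@mini.local/TimeMachineBackups"

# Confirm it took
tmutil destinationinfo
```

The -a flag appends rather than replaces, so you can keep a locally attached drive as a second destination and macOS will alternate between them.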
My main Time Machine backup (for my and my wife's MBPs) is over SMB to ZFS (RAIDZ2) on my NAS. It's been running pretty well for 2 years or so now. The only issues I've had seem to be related to running out of space with the quotas I've set (I make sure to leave some space for my wife), since the ZFS snapshots prevent Time Machine's auto-pruning mechanism from freeing up space like it expects. Thankfully I was always able to roll back ZFS to a recent known-good snapshot, manually prune old ZFS snapshots to make space, run a Time Machine verification, and keep going (after making a manual "known good" snapshot).
Thankfully, I recently discovered the ZFS refquota setting, which seems to have resolved the wonky dance above and lets my quotas work as Time Machine expects (though extra space is used for my ZFS snapshots, which is fine because the Time Machine stuff is only a small part of the array's storage).
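For anyone hitting the same thing: the difference is that quota counts snapshot space against the limit, while refquota counts only live (referenced) data. A sketch with a made-up pool/dataset name:

```shell
# quota: the dataset PLUS all its snapshots may not exceed 1T, so Time
# Machine sees the disk as "full" even after it prunes, because the ZFS
# snapshots still pin the old blocks
zfs set quota=1T tank/tm/alice

# refquota: only the live data counts toward the limit, so Time Machine's
# own pruning frees space the way it expects
zfs set quota=none tank/tm/alice
zfs set refquota=1T tank/tm/alice

# Verify the properties
zfs get quota,refquota tank/tm/alice
```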
The only other issue I've had is with ZFS on Arch (kernel upgrades / DKMS issues), so I recently moved to NixOS and it's been solid so far. Time Machine backups over Tailscale are pretty sweet!
Separately, I have a second Time Machine backup to an AirPort Extreme-attached USB drive.
I also use restic for cross-platform backups to the same NAS above, which then sends ZFS snapshots (of the TM + restic data) to a remote server.
Finally, I have a USB drive in a safe that I take out and run a locally-attached Time Machine backup to every month or so.
Keeping everything backed up was a (tongue-in-cheek) part of my wedding vows -- stakes are high.
I opened the link mostly to see which thin client model the author chose, only to discover it is an HP T520, a machine I happen to also own.
I cannot comment on Time Machine, but regarding the hardware + Linux, it has been a more stable server than equivalent RPi or Odroid SBCs, while not consuming much more power and only having a bigger footprint.
TrueNAS running on an ex-lease Dell Micro 7060 with a bunch of 4 TB USB drives is currently the best Time Machine destination I've ever used; fast, reliable, always available.
Is anyone else using borgbackup on their Mac? It lets me dedupe across OSes and computers, better than Time Machine. I’ve been using it for years and have accumulated some paths to exclude in my home folder, but constantly unsure if I should be excluding some other system metadata.
Personally I've tried to set up Samba as a Time Machine backend twice and with both setups the initial backup would just completely lock up after a while. I might try again with Netatalk, but thought AFP was deprecated.
> Is netatalk the key thing here that makes it Time Machine compatible? The rest seems like standard DIY home server stuff.
The last update to AFP shipped with Mac OS X Mountain Lion in 2012 [1]. Nobody should be using AFP in 2023, unless you have no other choice.
While you can use Time Machine to backup to an SMB share, ideally you want to use an APFS volume to get all of the benefits, especially if you're concerned about data integrity.
You can read about using APFS network shares for Time Machine at The Eclectic Light Company [2], a Mac-focused blog/website by Howard Oakley.
If you're interested in the details of how Time Machine works, nobody I'm aware of has written more about its details than he has [3].
He's written a slew of useful macOS utilities, including T2M2 for Time Machine [4].
Time Machine created an APFS sparse bundle on the SMB share. How is an APFS sparse bundle on an APFS volume better than an APFS sparse bundle on another volume?
Yeah this is as pointless as every "life hack" on YouTube. Haven't you been able to Time Machine backup to any HFS-formatted sparse bundle since forever? Located on literally ANY samba server - Windows, Linux, or ye olde NAS. I have personally done this for years. I don't get the novelty here unless you're into the "I built a submarine with parts I got off eBay" angle.
I write these blog posts primarily for myself, and publish them because somebody might find them useful. I claim no novelty; there are no ads, no traffic tracking, and no intention whatsoever to make money from it.
But if my posts irritate some jerk on the Internet who has nothing better to do than spread negativity, then that’s definitely a nice unintended benefit.
> But if my posts irritate some jerk on the Internet who has nothing better to do than spread negativity, then that’s definitely a nice unintended benefit.
No malice intended, but I genuinely felt as if you had wasted your own time. Sorry if I don't subscribe to the usual HN tug-fest. If you learned something - good for you but I've been down this exact road many times before. Oftentimes I have to mentor younger engineers that there is often a much simpler solution if you can see the forest for the trees.
We see projects like this almost daily on HN. Sometimes people just like to tinker and share, so yeah, the “I built a submarine with parts I got off eBay” type work can be fun and even useful. You never know who you might help!
If that irritates you, this might not be the site for you.
A superpower of doing Time Machine backups to a Synology is that you can put it on a BTRFS drive with automatic snapshotting. About once a year something seems to happen that causes Time Machine to give a “whoopsie, gotta start over from the beginning again!” sort of error. I’ve found that going back a day or two with snapshots on BTRFS fixes the problem.
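On a box where you control the shell, the same trick is a couple of btrfs commands. Synology normally manages snapshots through its Snapshot Replication UI, so the paths and share names below are purely illustrative:

```shell
# Take a read-only snapshot of the Time Machine share each night
btrfs subvolume snapshot -r /volume1/TimeMachine \
    /volume1/TimeMachine.snapshots/$(date +%F)

# When Time Machine declares the backup broken, roll back by replacing
# the share with a writable copy of a snapshot from before the corruption
btrfs subvolume delete /volume1/TimeMachine
btrfs subvolume snapshot /volume1/TimeMachine.snapshots/2023-05-01 \
    /volume1/TimeMachine
```

Because btrfs snapshots are copy-on-write, keeping a week or two of them costs only the space of the changed blocks.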
For me it was more like every day. I gave up on Time Machine very quickly.
The underlying problem is that Time Machine stores a disk image over the network[0]. If the disk image is not unmounted cleanly, Time Machine treats it as horribly corrupted and refuses to touch it. And this happens very often if you're using a laptop that will be unexpectedly disconnected from its storage constantly.
[0] This is to support things like hardlinks, etc. Ironically this is to emulate what BTRFS snapshots do natively. No clue if modern Time Machine uses APFS, but so long as they shove the actual data inside of a disk image this problem will continue occurring.
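For reference, the disk-image arrangement described above can be reproduced by hand. A minimal sketch on macOS - the size, volume name, and mount paths are arbitrary, and newer macOS versions put APFS inside the bundle instead of HFS+, but the idea is the same:

```shell
# Create a journaled HFS+ sparse bundle that only grows on demand.
hdiutil create -size 1t -type SPARSEBUNDLE -fs HFS+J \
    -volname "TimeMachine" TimeMachine.sparsebundle

# Mount the bundle (e.g. after copying it onto the network share)...
hdiutil attach /Volumes/nas/TimeMachine.sparsebundle

# ...and point Time Machine at the mounted volume.
sudo tmutil setdestination /Volumes/TimeMachine
```

The fragility described above comes from the attach step: if the SMB connection drops while the bundle is mounted, the image is marked dirty and Time Machine may refuse it.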
I've given up on it and bought an SSD to do local backups. My new plan for the Synology is to get Minio running on it and then use Arq Backup to send the data.
Update: This is working quite well. I occasionally run into "The network connection was lost" errors, but I think this is mostly because of my Wi-Fi.
Just posted this in another comment as well. Ditch Time Machine altogether. Use Carbon Copy Cloner instead. It backs up straight to a network drive. Been using it with my Synology for years. Using Time Machine is playing Russian roulette with your backup data.
In the pre-APFS days, there was one difference in that using time machine would give you snapshots whereas CCC would only be a "dest like source" backup (safetynet would preserve deleted files, but not version modified ones). You could work around this with ZFS snapshots on the destination, but since it's not natively supported it's all a bit clunky. With APFS you get snapshots too, so the only real difference is the UI for restoring a snapshot.
I do wish Apple would allow 3rd party applications to populate "versions", so the inbuilt UI could be used with other apps. It could even fit with their existing Finder extensions: it could bring together different versions of a file on cloud providers, local backups, etc., and present them in a unified UI.
Something happened around Big Sur that made Synology and Time Machine not get along for me. I set the share up with a quota and everything is great and happy. Time Machine backs up. But once that quota is exceeded, Time Machine complains about not having enough free space and quits. It used to delete old backups but now, no matter what I do, it just complains about free space and quits. :(
Yes, you're right. There was a time around Big Sur when it didn't work. It did eventually start working again after some unmentioned fixes from Apple. I skipped Catalina and went straight to Monterey, and it's still been fine. I'll be skipping Ventura and going straight to Sonoma.
I just checked the box in Unraid for “make this share a Time Machine volume”, added the Unraid machine and my MacBook to Tailscale, and configured the disk over Tailscale. Now I have remote Time Machine anywhere I have internet. This setup seems like a good fit if you just want a Time Capsule replacement, though.
I used to do a similar thing - I set up a Debian VM and let it serve netatalk and behave like a Time Capsule for my home network.
Now I use a mix of iCloud to keep files in sync (Dropbox to transport between Macs and Linux) and dock-attached local storage to host the Time Machine volumes.
Apple Silicon MacBook Pros come with an SD slot. There are some MicroSD adapters (I use a BaseQi 303A) that are flush with the case - so flush, in fact, that they are a bit hard to remove. Those are excellent for Time Machine backups.
Mine is a SanDisk Extreme PRO 512 GB. But you could get e.g. one of the "High Endurance" ones if you are concerned about that.
To add more context, I only use Time Machine for the use case "I deleted something I haven't committed to Git and want to revert to the state of 1h ago". My real backup is offsite + encrypted (Arq + paid Google Drive).
> My MacBook has 460GB of data. Time Machine backups are atomic: a backup either completes or it doesn’t. And when it doesn’t, the backup needs to start all over again.
> Initially, backups were ridiculously slow: around 2 Mbits/s, meaning it would take around 21 days to back up the full drive. During that time, you can’t power cycle the server or take your laptop out of your wifi network or you need to start all over again.
This is why I'd rather play with fire than subject myself to such stupid implementation of a basic backup function.
It's not really $25, though, is it? It's over $100 plus replacement storage and my valuable time.
Instead, I just turned on the Time Machine option on my Synology. Job done. In addition to network Time Machine I also run a local Time Machine on an external SSD. Backup redundancy is key.
Also of note: I use a similar thin client running ESXi to run old Intel Windows VMs that I occasionally need. The cost of that one was also $25 plus replacement RAM and SSD.
Home storage is the only thing I don't use a DIY solution for. I used to run a Samba server with Time Machine settings, pointed at an external SSD. But now I just run a Synology NAS in a RAID configuration. Definitely more expensive than $25, but worth the peace of mind IMO.
After years of struggling with a NAS from a well-known manufacturer, my solution is a 2012 Mac Mini. I replaced the internal 2.5" HDD with an 8TB SATA SSD.
Advantages:
- extremely reliable
- completely silent
- low power usage
- all in one (no external drives, no external power brick)
Time Machine itself worked okay on Synology; it was just slow, despite two 480GB SSD cache drives.
My main issues were with normal network shares, where I had problems with Unicode characters in filenames and with symbolic links. Depending on whether you used SMB or AFP, one thing or another was broken.
My Mac Mini now works flawlessly. Time Machine is still slow, though, I think it's just a slow protocol.
I'm currently using Time Machine for my Macs, backing up to a Debian box that runs OpenMediaVault. It's a pretty simple switch to enable Time Machine on a network share, and it seems to work fine.
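For reference, the Samba side of such a share usually boils down to a stanza like the following - roughly what OpenMediaVault's Time Machine checkbox generates. The path and user are placeholders, and it needs Samba 4.8+ with the `vfs_fruit` module:

```ini
[TimeMachine]
   path = /srv/timemachine
   valid users = alice
   read only = no
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   fruit:time machine max size = 1T
```

The `fruit:time machine = yes` option is what makes the share advertise itself to macOS as a Time Machine destination, and `max size` caps how much the sparse bundle is allowed to grow.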