APFS already supports volume snapshots but not (I think) snapshot send/receive, à la zfs & btrfs. I thought that would be the logical next step. If Apple do add this or a similar feature, would Time Machine have to switch its backing store yet again?
I've been generally happy with Time Machine's stability, but this is getting me a little worried. I guess now's a good time to look into Arq (or similar) to replace my janky secondary backup scripts...
I agree backups are important to get right. I'm not as confident as you are in Time Machine today. I've gotten this message a few times:
> Time Machine completed a verification of your backups on "nas.<mydomain>". To improve reliability, Time Machine must create a new backup for you.
which I'd paraphrase as "oops, we corrupted your backup. try again maybe?"
It's usually not the NAS's fault.
I've switched to an external HD always plugged into my "docking station" as my Time Machine backup for this reason. Although it's annoying to have to remember to unmount the HD (and check whether a Time Machine backup is in progress) before disconnecting the laptop :/
Maybe the most sketchy thing is that I have an ancient SSD stuck in there as cache. I could try removing it, though its smart output also looks good.
Next sketchiest is that it's using btrfs, but other people tell me this is trustworthy. /shruggie
The error is rare enough that I'll never really know whether it has gone away permanently, or what fixed it if it has, because I'm never going to go a year without changing more than one variable. For example, I'm not going to stop doing macOS or Synology updates for that long.
SMART will tell you that your disk is failing, not that it’s not failing.
Unhappy smart = unhappy disk but happy smart /= happy disk.
Another poster suggested this happens with AFP, which would likely mean a macOS defect. I'm using AFP. I think I'll switch to SMB...
I wouldn't put too much trust in SMART: since yesterday I have a disk in a RAID 5 array making loud clicking noises, mdadm says everything is fine, and all disks report perfect SMART data.
I do trust SMART when it says things are not OK, though.
When the firmware detects this, it tries to reset the head, usually by parking it for a moment and then unparking it. Most of the time this solves the issue.
This inherently doesn't cause data loss or corruption, just high latency when using the disk. It can lead to an increased number of high-fly writes: as the head is unparked, it usually flies a bit too high for a moment until it finds its air cushion again.
The usual suspects in your smart data are "Recalibration Retries", "Seek Error Rate", "Head Stability" (if WD), "High-Fly Writes", "GMR Head Amplitude" and "Head Flying Hours".
All of these generally don't cause runtime issues with the drive, but a clicking sound or an increase in these numbers means the mechanical assembly of the HDD, while still in spec and good enough to operate the drive almost normally, is degraded.
Of course, the head may crash down on the platter on the next recalibration attempt, and the clicking itself causes a lot of head movement and parking, so it also causes a lot of wear. Drives that are clicking age a lot faster, and conversely, clicking is an easy indicator that the drive is nearing the end of its useful lifecycle.
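If you want to keep an eye on those attributes yourself, smartctl from smartmontools will dump them (the device node here is a placeholder, and exact attribute names vary by vendor):

    # list all SMART attributes and filter for the mechanical-health ones
    sudo smartctl -A /dev/sda | grep -Ei 'recalibrat|seek_error|high_fly|head_flying'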
The SMART data changed, I think it was the seek error rate that went through the roof.
I've never had this issue when using an external hard drive connected via USB, and I've never had it happen when I wasn't mucking with it. For a while I was able to "fix" these errors by running diskutil and resetting the state (my Time Machine server was a flaky Raspberry Pi at the time).
All in all, it is concerning when you "lose" your backup no matter the circumstances.
Interesting idea to reset the state. I think I can take snapshots with Synology. I'm unsure how to manage the details though:
* I let Time Machine start backups automatically whenever it pleases. Does it have a hook to run a command to take a snapshot?
* Does it do the verification every time, or (say) 1 in 10? If the latter, going back to the most recent snapshot isn't enough. And if you do say 3 backups then get rid of #3, can it take incremental #4 on top of #2, or will it complain that the #3 state it expects is absent?
I really should just set up a different backup system. I do at least have Google Drive sync on most everything important. It's not a "real" backup as another poster pointed out, but it's better than nothing.
I don't believe it does the verification every time. I can't remember what the criteria are (I assumed it was similar to fsck on boot for Linux). You can run a verification manually (Option-click the menu bar icon), and I'm sure there's a command line for it--so maybe you want to schedule your backups explicitly and verify every time? I've only ever gotten a corrupt backup when I've manually mounted the disk image and mucked around, so it's easy to restore within two days. I thought I might have issues with the client/server state mismatch, but haven't in practice (I've only needed to do this a few times). I haven't kept close track to see if I lose any in-between backups--but I restore so rarely I wouldn't miss them.
> I really should just set up a different backup system. I do at least have Google Drive sync on most everything important. It's not a "real" backup as another poster pointed out, but it's better than nothing.
I've kept to the 3-2-1 backup rule, with bootable backups local and non-bootable ones remote.
I don't know, but you can trigger backups on demand using tmutil(8), and that's what I do.
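For reference, a run looks something like this (tmutil ships with macOS; --block just makes it wait until the backup finishes):

    # start a backup immediately and wait for it to complete
    tmutil startbackup --block
    # print the path of the most recent completed backup
    tmutil latestbackup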
> And if you do say 3 backups then get rid of #3, can it take incremental #4 on top of #2, or will it complain that the #3 state it expects is absent?
You can delete arbitrary backup(s). "Classic" Time Machine backups are snapshots deduped using hard links. HFS+ even supports directory hard links.
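e.g., something like this against a "classic" backupdb (the path is illustrative); the remaining snapshots stay valid because each one hard-links the unchanged files itself:

    # remove a single dated backup; its neighbors are unaffected
    sudo tmutil delete "/Volumes/Backup/Backups.backupdb/my-mac/2020-11-01-120000"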
Same here. I'm also using BTRFS (I've not had a problem and my understanding is Synology doesn't use the flakier features of BTRFS). I can't remember if I'm using AFP or SMB (SMB I think is the only one Apple supports), but my backup is encrypted and I have a specific user. I'm not sure BTRFS matters all that much because the backup is a disk image. When I've had them get corrupted I've run disk utility on the disk image to fix things--there's a lot of layers to unwind.
I hear stories of people having issues with similar setups, but I haven't (I know that's a sucky answer to hear). Years and years ago I felt like I would get a corrupt backup when I closed my lid during a network backup (I believe this was pre-APFS), but I haven't noticed an issue in years. I have it auto-backup and it usually does it at night when the lid is closed and it's charging.
In Snapshot Replication I have a daily snapshot created at midnight with a 2 snapshot retention. I tend not to run out of space, so I'm fine with any deletes taking a few days to roll off. "Recover" under "Recovery" can restore old snapshots easily. I do this for most all of my Shared Folders.
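As far as I know, Snapshot Replication is just btrfs snapshots under the hood, so on a plain btrfs box the equivalent of that midnight job would be roughly this (paths are hypothetical):

    # read-only snapshot of the share, named by date
    btrfs subvolume snapshot -r /volume1/TimeMachine /volume1/@snaps/tm-$(date +%F)
    # retention: delete a snapshot once it ages out
    btrfs subvolume delete /volume1/@snaps/tm-2020-11-01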
In particular, your account's Local Items keychain, which is used by Safari and some other Apple applications to store passwords, is encrypted in such a way that it can only be restored onto the exact same machine — knowing the user's password is insufficient. Thus, if your machine is lost, stolen, or damaged, a restore from Time Machine backup will not properly restore your keychain. Nor will the keychain properly migrate to another machine using Migration Assistant (luckily, this is how I discovered this behavior).
There is no warning of this behavior in any documentation, and no workaround whatsoever except to use a different browser or password manager, or to manually store all passwords yourself separately in the Login (rather than Local Items) keychain.
There's also no way to export the items in the Local Items keychain except manually copying them to another keychain, during which you must enter your password one by one for each copied item (people have written AppleScripts to automate this, e.g. https://gist.github.com/rmondello/b933231b1fcc83a7db0b).
If you use iCloud Keychain, then the Local Items keychain is just a machine-local cache of the iCloud data, which is fine, I guess. But this behavior is just dangerously broken if you do not use iCloud for this purpose.
Some more information on this issue:
I now use the password generation and Keychain password manager so much that I don't even know the passwords to any of my online accounts.
How is that acceptable and not widely known?
I wouldn't go so far as to say Time Machine/Time Capsule is useless, but it does have this severe sharp edge. So you need to have a plan for storing/migrating your passwords separately from the Local Items keychain.
Me, I took this as an excuse to switch to Firefox as my primary browser, and then I used one of those AppleScripts to migrate all my old passwords (thankfully, I still had my old computer).
Along with the iPhone iTunes backup corruption issues, which are still not fixed. But it is hard to provide any evidence other than personal experience.
Since many of those links are from when Yosemite was current, it wouldn’t surprise me if that was when the bug was introduced. Yosemite made a mess of things in a lot of ways, and Apple has never recovered.
Turn off automatic updates and only update the OS just before the next one is released. Stay one OS version behind the current one, giving the bugs time to get worked out.
- Upgraded directly from Mojave 10.14.6 to 10.15.7 (went smoothly)
- Considered migrating from Visual Studio 2017 (15.9) to 2019 (16.7), but have not yet done so.
- Refused to upgrade the proprietary firmware on my wifi APs. The manufacturer has provided no changelog and does not provide a download link for the current firmware to enable a downgrade if the update goes south.
I'm a software developer.
That is a luxury. My cable modem updated itself automatically to include a bloated ASP server for its management interface, and the modem now crashes regularly because it cannot handle the load of the ASP server on top of the 4 devices connected to it.
Plus, they added a "Community Wifi" feature to share my already-crappy connection with strangers, overloading the modem even more.
Michael Tsai has a good summary of the situation that he wrote when Arq 6 was just out, and it echoes my recollection of the data-loss issues.
I run backups on three Macs every 48 hours, then once a month or so I will use Carbon Copy Cloner to clone the two more important ones to sparse image files on a backup hard drive which is physically secured afterwards.
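If anyone wants to replicate the sparse-image half of that from the command line, hdiutil can create one directly (size, filesystem, and paths here are just examples):

    # growable sparse bundle capped at 1 TB, APFS inside
    hdiutil create -size 1t -type SPARSEBUNDLE -fs APFS -volname CloneBackup /Volumes/Archive/clone.sparsebundle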
I'm sorry you feel support isn't responsive. We aim to answer emails within 1 business day, as we always have. We've added staff to do this.
I don't have any particular support complaints myself; my comment was based more on perception based on Twitter & other discussions I'd seen.
As an aside, I’m surrounded by people who consider cloud syncing to be a backup, and just yesterday I witnessed someone in the grief stage after they deleted a load of photos to save space because they thought the photos would remain in the cloud. They had even clicked through the “Delete? Are you sure?” dialog.
When I worked there, I was surprised to learn that they also often act as mitigation for ransomware attacks (they could roll back your account in time if you contacted CS and explained your situation).
It’s almost been enough to get me to have a test LAN and real one so as to maintain household tranquility.
Which is to say, a block-based snapshot is very efficient - your snapshots only take up the space of the blocks that have changed - regardless of which files those blocks make up.
A hard-link based snapshot scheme is also fairly efficient for most use-cases - the snapshot only uses space equal to the files that have changed - even if those files only changed a little bit.
For most use-cases, I find the difference is negligible. However, if you have large files that change by small amounts, a block based scheme (a la zfs) is much more efficient.
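The difference is easy to see with a hard-link snapshot sketched in rsync (directory names are hypothetical):

    # files that changed are copied in full; unchanged files are
    # hard-linked back to yesterday's snapshot, costing no extra space
    rsync -a --link-dest=/backups/2020-11-13 /data/ /backups/2020-11-14/

Touch one byte of a 10 GB file and the next run stores a fresh 10 GB copy; a block-based snapshot (zfs, btrfs, APFS) would store only the changed blocks.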
Also, at a certain point it just can’t keep up with the changes.
For the longest time, I backed up my MacBooks with Time Machine to a NAS.
Seemed to work fine - Time Machine was successful and I was able to browse previous versions on the machine being backed up without an issue (browse history of machine A on machine A).
Then one day I was planning to wipe a MacBook and do a clean install - figured I'd confirm I could browse my backups made on machine A on machine B before I wiped A. I spent over an hour attempting to open the sparse bundle (w/ Time Machine and manually) and just couldn't do it - kept loading forever or giving me errors about volume verification among other things.
> I guess now's a good time to looking into Arq (or similar)
Like you, I decided to take a look at alternatives. I'd previously played around with Arq (v5) and it looked awesome - stable, well-documented, etc. Well, by the time I actually needed an alternative to Time Machine, Arq had released v6 - earlier this year.
Unfortunately it appeared to be bug-prone (not great for backups!) and lacked ANY documentation (one of the great things about v5 was the in-depth documentation, particularly around backup format). Users on the subreddit weren't thrilled and you can't purchase v5 licenses (and TBH I wouldn't recommend purchasing software that isn't supported anymore).
Within the last week or two, Arq has released a second major version within a year - v7. Feedback appears to be better, and the author has acknowledged mistakes, but TBH I'm wary. Definitely not adopting two-week old software as my primary method for backing up.
I've been playing around with Carbon Copy Cloner more recently.
The ideal goal would be bootable backups to a disk image hosted remotely but that doesn't appear to be possible - so I'm resigning myself to file-based instead - no bootable disk image, but at least I'm a little more confident in my backups? And a single "file" (or image) becoming corrupt doesn't blow away the rest of my backup ¯\_(ツ)_/¯
If anyone has any suggestions or ideas, I'm all ears.
Edit: Probably worth noting that in this case machine A was running 10.14 (and HFS+) and machine B 10.15 (and APFS) - but I'd imagine 10.15 should be able to open a 10.14 HFS+ sparse bundle without an issue.
Because I have Code42 for versioning and going back further in time (though WFH due to COVID has truly brought to bear the shitty upload speed my home connection has), I don't utilize the SafetyNet feature, so I can't speak to its efficacy, but for straight daily dumb snapshots I love CCC. When I went 100% remote back in March I opted to get a specced-up Mac mini instead of a MacBook Pro, and CCC made moving everything over barely a speed bump. It'll alert you to any issues and walk you through the restore when it senses it's being run off a booted external volume group. It really only took a couple of clicks. Dead simple. It also provides a handy GUI for APFS snapshots.
I don't buy a lot of...serious software (either work buys it for me or it's a PC game), but I don't regret the $40 CCC set me back. Plus, the devs are pretty much always ready for the new OS in fall, which to me is an important feature separating an OK Mac app from a good one.
I've been using it for many years and every single time it's worked brilliantly.
What’s the old saw about “the bandwidth of a station wagon full of tapes”? Yeah, that. It would be faster (if obviously not economical) to just mail a hard drive back to the home office every week than trying to back up my entire machine over residential cable internet.
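Back-of-the-envelope: a 2 TB drive couriered overnight works out to roughly 2 TB / 24 h ≈ 185 Mbit/s of effective bandwidth, while pushing the same 2 TB through a 10 Mbit/s residential uplink takes about 18 days.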
Relevant to you, earlier this year (using Catalina) I swapped computers. My usual process is to create a disk image (usually with Disk Utility or SuperDuper) of my old computer and these overlay systems are tripping me up.
I remember my first attempt only looked to clone the OS / root system--not my user data files. I can't remember exactly what I did next, maybe clone to USB drive then create an image of that? But I pulled it up the other day and the disk image size covered my whole hard drive, but when I mounted it I just saw the root system (I thought I lost all my backup data). Using DaisyDisk I saw a lot of "hidden space." I noticed Disk Utility mounted two disks; Macintosh HD and Macintosh HD - Data, but Finder only showed the first on the desktop/sidebar and I had to hunt for the second under /Volumes/.
For simplicity I'm going to create a new disk image and pull out the relevant data. I never really had a need for a bootable backup on the network. I just figured I'd backup OS files in case I needed to pull up some oddball system hack I had on an old system.
I’m in the opposite camp, having used (the free version of) SuperDuper! in the past but switched to CCC. SuperDuper! seemed to have a simpler interface. I’ve been using CCC since it seemed (to me) to have a better development cycle with quicker support for new releases of the OS. Maybe I was mistaken in thinking that, and maybe Shirt Pocket’s really old style website played a part in that. Recently I looked up the release history of SuperDuper! and found that V3 was released about three years ago. I’ve been trying to figure out if it provides a longer update path (for a particular cost) than CCC.
Note to readers: the seemingly weird punctuation you see in this comment is because the application is called “SuperDuper!” (with the exclamation).
I set up a Pi4 with an external drive attached to it via USB3 about a year ago. I wanted to set up a solution for my wife's Macs and mine. There are plenty of "tutorials" out there, and I found they were all pretty much the same - HFS+ and AFP. If you follow them, they work great - for a few days.
Eventually all my Macs (Mojave and Catalina) would get a Time Machine error saying the backup was corrupted and a new one would have to be created. This happened a couple of times, and eventually I tried using SMB. That made the problem go away. All great, so I thought.
Then, any time I had a power outage, the whole file system on the drive would get corrupted. They could only be mounted as read only, and no amount of fsck fixed it. Switching from HFS+ to ext4 fixed that issue.
Things have been pretty reliable since then. I've been able to recover a few files here and there, but haven't had to fully recover from a disaster.
TL;DR - every blog that tells you to create a Time Machine/Pi backup using hfsprogs and Netatalk is wrong. Don't do it that way. Use SMB and ext4.
Cargo-cult HOWTOs are the only explanation I can think of.
Pretty much. When I google "raspberry Pi time machine," every result I see, except one, tells you to do the HFS+/AFP method. These results include content churning sites like techradar and howtogeek.
The only site that recommends ext4/smb that comes up, fourth on my list of results, is this one: https://mudge.name/2019/11/12/using-a-raspberry-pi-for-time-...
TL;DR though: add the following configuration to the global section of your smb.conf:
    vfs objects = fruit streams_xattr
    fruit:aapl = yes
    fruit:metadata = netatalk
    fruit:time machine = yes
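For completeness, a minimal share section to pair with that global config might look like this (path and user are whatever you've set up on the Pi):

    [timemachine]
        path = /srv/timemachine
        valid users = tmuser
        read only = no
        fruit:time machine = yes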
Otherwise, are you suggesting that you were able to authenticate the Time Machine volume and then were still unable to browse it within the UI?
Personally, no issues with TM, except that the initial backup was taking too long, so I decided to wipe it and restart from scratch. (Don’t think it helped speed it up.)
That said, I spend some time carefully picking which folders to include in the backup to avoid slowness and bloat.
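For anyone managing the same thing from the command line, tmutil can handle exclusions; a quick sketch (the path is just an example):

    # add a fixed-path exclusion (requires root)
    sudo tmutil addexclusion -p ~/Library/Caches
    # confirm a path is excluded
    tmutil isexcluded ~/Library/Caches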
What directories do you exclude? I feel like there must be lots of superfluous stuff in my backups but I don't understand macOS well enough to know for sure.
Disclaimer: I am the author of Vimalin, the tool that Howard references as a possible solution for VM backups.
This was easier pre-Catalina
You can already do this without any logical volume management. Time Machine backups on a disk go into the /Backups.backupdb directory. You can create other subdirectories of the root directory and put other files there (or put files directly into the volume root), and Time Machine will just ignore them completely.
Safest option is to just look at 'btrfs fi us' and see which type of block group is the least full. Do a filtered balance on it. e.g. Use something like '-dusage=5' to essentially get rid of any data block groups that are 5% or less full. Their data extents get moved to other block groups. Now that space is "unallocated" and can be allocated into new metadata block groups.
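Spelled out (the mount point is a placeholder):

    # show how full each block-group type is
    sudo btrfs filesystem usage /mnt/pool
    # repack data block groups that are <=5% full; their extents move
    # elsewhere and the freed groups return to unallocated space
    sudo btrfs balance start -dusage=5 /mnt/pool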
These sorts of gymnastics should be rare. There is https://github.com/kdave/btrfsmaintenance that will do them automatically for those who encounter the problem with their particular workload more often than they care to deal with manually. I never have to manually balance, and I also don't use the automated scripts - it's been years since I've hit enospc that wasn't a case of a 100% full file system.
Then you can delete your files. Not sure if it'll work on APFS.
This happened to me recently, couldn’t get my head round it but a reboot fixed things so I didn’t investigate more.
Reboot must delete the swap or something.
Of course I have no clue whether Apple is thinking of this or not.
Basically I keep corrupting my backups because I forget it's running and disconnect the hard drive when I move my macbook. Sigh.
>> Apple can currently just take the ZFS CDDL code and incorporate it (like they did with DTrace), but it may be that they wanted a "private license" from Sun (with appropriate technical support and indemnification), and the two entities couldn't come to mutually agreeable terms.
> I cannot disclose details, but that is the essence of it.
There was a NetApp-Sun legal spat going on due to NetApp's WAFL patents (Sun won IIRC), and indemnification could have been a source of tension.
APFS was pretty explicitly designed for mobile, so I wouldn't be surprised if, even had ZFS been adopted on the desktop, they would still have made APFS for mobile.
Maybe it’s time to swap to an external ssd...
First I try to copy the file out of the iCloud sync folder to a local folder that doesn’t sync. No change.
Then, I go onto iCloud.com and download the file. The downloaded file shows up and works as expected. I use ⌘C and then ⌥⌘V (paste-and-move) to get it into the local folder I had created.
Now, according to finder, I have two files with the same name. One is readable and the other is not.
I cannot delete either.
I’ll be playing with the terminal next to see if anything can be done there.
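If it helps anyone stuck the same way, these are the first things I plan to poke at (paths are placeholders):

    # is some process holding the file open?
    lsof /path/to/stuck-file
    # any iCloud or quarantine extended attributes?
    xattr -l /path/to/stuck-file
    # last resort
    sudo rm -f /path/to/stuck-file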