APFS changes in Big Sur: how Time Machine backs up to APFS (eclecticlight.co)
167 points by angristan on Sept 28, 2020 | 123 comments



If I read the article correctly, Time Machine's switching to something similar to overlay FS, but nested? I really hope they get it right: backup is where my tolerance for bugs is really low.

APFS already supports volume snapshots but not (I think) snapshot send / receive, à la zfs & btrfs. I thought that would be the logical next step. If Apple do add this or similar feature, would Time Machine have to switch its backing store yet again?

I've been generally happy with Time Machine's stability, but this is getting me a little worried. I guess now's a good time to look into Arq (or similar) to replace my janky secondary backup scripts...


> If I read the article correctly, Time Machine's switching to something similar to overlay FS, but nested? I really hope they get it right: backup is where my tolerance for bugs is really low. ... I've been generally happy with Time Machine's stability, but this is getting me a little worried.

I agree backups are important to get right. I'm not as confident as you are in Time Machine today. I've gotten this message a few times:

> Time Machine completed a verification of your backups on "nas.<mydomain>". To improve reliability, Time Machine must create a new backup for you.

which I'd paraphrase as "oops, we corrupted your backup. try again maybe?"


That error means TM detected bit rot via failing checksums or due to an IO error during a read operation, usually because of a failing disk, or other software on the NAS you're using corrupting your backups. It's not caused by TM itself corrupting its own backups.


There is a long-standing bug that, AFAIK, was never fixed: if you close/disconnect the laptop while it's doing a backup, it will leave the disk in a corrupted state.

It's usually not the NAS's fault.

I've switched to an external HD always plugged into my "docking station" as the Time Machine backup for this reason. Although it's annoying to remember to unmount the HD (and check whether a Time Machine backup is in progress) before disconnecting the laptop :/


The number one cause of that error seems to be using AFP for backups. Had the same issue on an Apple Time Capsule and a Synology NAS. It went away on the Synology once I disabled AFP in favor of SMB.


Wasn't AFP deprecated in Mavericks? I'm fairly certain that it hasn't had an update for some time. At this stage, it's certainly advisable not to use it for critical applications.


It probably is (and e.g. you can't do AFP shares of APFS drives), but Apple's SMB implementation is fairly bad, and sometimes the only workaround is to connect via AFP instead (sometimes I've also had much better performance over AFP)


Isn’t it just samba?


It used to be, but Samba relicensed to GPLv3, so Apple switched to a custom SMB implementation


Samba is GPL so it is certainly not used by Apple.


I'm using AFP. I'll switch. Thanks for the idea!


Hard to completely disprove but doesn't seem terribly likely. Name-brand NAS (Synology) with stock up-to-date firmware and nothing weird running on it, RAID-1, WD Red (pre-SMR) drives, happy SMART output.

Maybe the sketchiest thing is that I have an ancient SSD stuck in there as a cache. I could try removing it, though its SMART output also looks good.

Next sketchiest is that it's using btrfs, but other people tell me this is trustworthy. /shruggie

The error is rare enough that I'll never really know whether it has gone away permanently, or what fixed it if it has, because I'm never going to go for, say, a year without changing more than one variable. For example, I'm not going to stop doing macOS or Synology updates for that long.


> happy SMART output

SMART will tell you that your disk is failing, not that it’s not failing.

Unhappy SMART = unhappy disk, but happy SMART ≠ happy disk.


As I said, hard to disprove but seems unlikely. Backblaze says 76.7% chance these SMART attributes will be bad on a failed disk. [1] More importantly, I said I'm using RAID-1, so a single bad disk shouldn't be a problem.

Another poster suggested this happens with AFP, which would likely mean a macOS defect. I'm using AFP. I think I'll switch to SMB...

[1] https://www.backblaze.com/blog/what-smart-stats-indicate-har...


> happy SMART output

I wouldn't put too much trust in SMART[0]: since yesterday I have a disk in a RAID 5 array making loud clicking noises, mdadm says everything is fine, and all disks report perfect SMART data.

[0] I trust SMART when it says it's not OK though


SMART tells you what the disk knows. Clicking in HDDs is usually caused by the head not moving fluidly over the platter.

When the firmware detects this, it tries to reset the head, usually by parking it for a moment and then unparking it. Most of the time this solves the issue.

This doesn't inherently cause data loss or corruption, just high latency when using the disk. It can lead to an increased number of high-fly writes, as the freshly unparked head sits a bit too high until it finds its air cushion again.

The usual suspects in your SMART data are "Recalibration Retries", "Seek Error Rate", "Head Stability" (if WD), "High-Fly Writes", "GMR Head Amplitude" and "Head Flying Hours".
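
If you want to eyeball those attributes yourself, smartmontools will print them; a rough sketch, with /dev/sda as a placeholder device:

    smartctl -A /dev/sda            # print the vendor attribute table
    smartctl -t long /dev/sda       # kick off the drive's extended self-test
    smartctl -l selftest /dev/sda   # read the self-test log afterwards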

All of these generally don't cause runtime issues with the drive, but a clicking sound or an increase in these numbers means the mechanical assembly of the HDD, while still in spec and good enough to operate the HDD almost normally, is degraded.

Of course, the head may crash down on the platter on the next recalibration try, and the clicking causes a lot of head movement and parking, so it also causes a lot of wear. Drives which are clicking age a lot faster; conversely, clicking is an easy indicator that the drive is nearing the end of its useful lifecycle.


Thanks for the detailed answer. I managed to identify the faulty disk; the array was rebuilt last night.

The SMART data changed; I think it was the seek error rate that went through the roof.


Yeah, my "solution" to that problem is to backup to a zfs NAS, and through shitty^Wclever scripting, create a snapshot after each successful backup. If TM backup fails for any reason, I roll back on the NAS end and start again.


I have a nightly snapshot and do the same if there's an issue. To be fair, I've only ever hit these "verification" errors when I manually mounted and mucked around with the backup and a backup started at the same time--all over the network.

I've never had this issue when using an external hard drive connected via USB, and I've never had it happen when I wasn't mucking with it. For a while I was able to "fix" these errors by running diskutil and resetting the state (my Time Machine server was a flaky Raspberry Pi at the time).

All in all, it is concerning when you "lose" your backup no matter the circumstances.


In my case, the NAS is a Synology server. I don't know how to determine authoritatively where the fault lies, but I trust it more than I trust Time Machine. (Although I am using btrfs on that volume so maybe it's a wash.) I'm sure I've disconnected my Mac from my Thunderbolt dock (and thus the active NIC) and/or put my Mac to sleep mid-backup many times, though. This is something that I would like it to be robust to but may be "the problem".

Interesting idea to reset the state. I think I can take snapshots with Synology. I'm unsure how to manage the details though:

* I let Time Machine start backups automatically whenever it pleases. Does it have a hook to run a command to take a snapshot?

* Does it do the verification every time, or (say) 1 in 10? If the latter, going back to the most recent snapshot isn't enough. And if you do say 3 backups then get rid of #3, can it take incremental #4 on top of #2, or will it complain that the #3 state it expects is absent?

I really should just set up a different backup system. I do at least have Google Drive sync on most everything important. It's not a "real" backup as another poster pointed out, but it's better than nothing.


> * Does it do the verification every time, or 1 in 10? If the latter, going back to the most recent snapshot isn't enough. And if you do say 3 backups then get rid of #3, can it take incremental #4 on top of #2, or will it complain that the #3 state it expects is absent?

I don't believe it does the verification every time. I can't remember what the criteria are (I assumed it was similar to fsck on boot for Linux). You can run verify manually (Option-click the menu bar) and I'm sure there's a command line--so maybe you want to schedule your backups explicitly and verify every time? I've always had a corrupt backup when I've manually mounted the disk image and mucked around, so it's easy to restore within two days. I thought I might have issues with the client/server state mismatch, but haven't in practice (I've only needed to do this a few times). I haven't kept close track to see if I lose any in-between backups--but I restore so rarely I wouldn't miss them.
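
(If memory serves, the command-line version is roughly the following, assuming the backup is recent enough to carry checksums, i.e. made on 10.11 or later:)

    tmutil verifychecksums "$(tmutil latestbackup)"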

> I really should just set up a different backup system. I do at least have Google Drive sync on most everything important. It's not a "real" backup as another poster pointed out, but it's better than nothing.

I've kept to the 3-2-1 backup rule and kept bootable backups local and non-bootable ones remote.


> I let Time Machine start backups automatically whenever it pleases. Does it have a hook to run a command to take a snapshot?

I don't know, but you can trigger backups on demand using tmutil(8), and that's what I do.
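
Roughly:

    tmutil startbackup --block   # run a backup right now and wait for it to finish
    tmutil status                # see whether one is currently in progress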

> And if you do say 3 backups then get rid of #3, can it take incremental #4 on top of #2, or will it complain that the #3 state it expects is absent?

You can delete arbitrary backup(s). "Classic" Time Machine backups are snapshots deduped using hard links. HFS+ even supports directory hard links.
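
From the command line that's roughly the following (the path is just an example):

    tmutil listbackups
    sudo tmutil delete "/Volumes/TM/Backups.backupdb/MyMac/2020-09-28-120000"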


> In my case, the NAS is a Synology server.

Same here. I'm also using BTRFS (I've not had a problem, and my understanding is Synology doesn't use the flakier features of BTRFS). I can't remember if I'm using AFP or SMB (SMB, I think, is the only one Apple supports), but my backup is encrypted and I have a specific user. I'm not sure BTRFS matters all that much because the backup is a disk image. When I've had them get corrupted I've run Disk Utility on the disk image to fix things--there are a lot of layers to unwind.

I hear stories of people having issues with similar setups, but I haven't (I know that's a sucky answer to hear). Years and years ago I felt like I would get a corrupt backup when I closed my lid during a network backup (I believe this was pre-APFS), but I haven't noticed an issue in years. I have it auto-backup and it usually does it at night when the lid is closed and it's charging.

In Snapshot Replication I have a daily snapshot created at midnight with a 2 snapshot retention. I tend not to run out of space, so I'm fine with any deletes taking a few days to roll off. "Recover" under "Recovery" can restore old snapshots easily. I do this for most all of my Shared Folders.


As a public service announcement, I recently discovered that, since OS X 10.9 Mavericks (released 2013), Time Machine's backup of keychains is completely broken for anyone who does not use iCloud Keychain. This bug is actually so completely unacceptable that it was hard for me to believe when I encountered it, but anyone can easily verify this for themself.

In particular, your account's Local Items keychain, which is used by Safari and some other Apple applications to store passwords, is encrypted in such a way that it can only be restored onto the exact same machine — knowing the user's password is insufficient. Thus, if your machine is lost, stolen, or damaged, a restore from Time Machine backup will not properly restore your keychain. Nor will the keychain properly migrate to another machine using Migration Assistant (luckily, this is how I discovered this behavior).

There is no warning of this behavior in any documentation, and no workaround whatsoever except to use a different browser or password manager, or to manually store all passwords yourself separately in the Login (rather than Local Items) keychain.

There's also no way to export the items in the Local Items keychain except manually copying them to another keychain, during which you must enter your password one-by-one for each copied item (people have written some AppleScripts to automate this, e.g. https://gist.github.com/rmondello/b933231b1fcc83a7db0b).

If you use iCloud Keychain, then the Local Items keychain is just a machine-local cache of the iCloud data, which is fine, I guess. But this behavior is just dangerously broken if you do not use iCloud for this purpose.

Some more information on this issue:

https://forums.macrumors.com/threads/data-migration-local-ke...

https://apple.stackexchange.com/questions/142123/extract-pas...

https://apple.stackexchange.com/questions/137250/export-keyc...

https://forums.macrumors.com/threads/how-do-i-copy-the-local...


So basically Time Capsule backup is useless if you want to buy a new Mac?

I now use password generation and the Keychain password manager so much that I don't even know the passwords for my online accounts.

How is that acceptable and not widely known?


It's not acceptable. I don't understand how it's not widely known, nor how it's gone seven years without a fix.

I wouldn't go so far as to say Time Machine/Time Capsule is useless, but it does have this severe sharp edge. So you need to have a plan for storing/migrating your passwords separately from the Local Items keychain.

Me, I took this as an excuse to switch to Firefox as my primary browser, and then I used one of those AppleScripts to migrate all my old passwords (thankfully, I still had my old computer).


I think this needs to make the HN front page or go viral on Twitter.

Along with the iPhone iTunes backup corruption issues, which are still not fixed. But it is hard to provide any evidence other than personal experience.



Fwiw, this bug must have been introduced more recently than Mavericks. I just tested moving my login keychain from one Mavericks computer to another Mavericks computer, and it worked fine. iCloud Keychain disabled on both machines.

Since many of those links are from when Yosemite was current, it wouldn't surprise me if that was when the bug was introduced. Yosemite made a mess of things in a lot of ways, and Apple has never recovered.


The problem is not with the Login keychain, but the Local Items keychain.


>If I read the article correctly, Time Machine's switching to something similar to overlay FS, but nested? I really hope they get it right: backup is where my tolerance for bugs is really low.

Turn off automatic updates and only update the OS just before the next one is released. Stay one OS version behind current one, giving time for the bugs to get worked out.


This is a good strategy for all but a handful of proven and/or trivial programs. In the last week I've:

- Upgraded directly from Mojave 10.14.6 to 10.15.7 (went smoothly)

- Considered migrating from Visual Studio 2017 (15.9) to 2019 (16.7), but have not yet done so.

- Refused to upgrade the proprietary firmware on my wifi AP's. The manufacturer has provided no changelog and does not provide a download link for the current firmware to enable a downgrade if it goes south.

I'm a software developer.


> Refused to upgrade the proprietary firmware on my wifi AP's. The manufacturer has provided no changelog and does not provide a download link for the current firmware to enable a downgrade if it goes south.

That is a luxury. My cable modem updated itself automatically to include a bloated ASP server for the modem interface, and the modem crashes regularly because it cannot handle the load of the ASP server on top of the 4 devices connected to it.

Plus, they added a "Community Wifi" to share my already crappy connection with strangers, overloading the modem even more.


FYI: I've used Arq for nearly a decade and have loved it, but the latest version, 6—which was something like a complete refactor IIRC—seems to be riddled with bugs (including, I think, data loss) and the support experience has not been good (both being deviations from normal). I'm not up-to-date on the whole saga but it's been disheartening to watch. I'm sticking with 5 for now. I just Googled around and it looks like a v7 is in beta testing and reports are that it's better than 6...


Could you elaborate on the data loss bugs? I recently had to restore from an Arq 6 based backup and ran into some issues where it seemed like, if the screen saver started during the restore, it would just stop restoring but would not give any error. I was able to get all my data back by disabling the screen saver, but it seemed very strange.


Not trying to peddle hearsay, but with just some cursory Googling I can't pull up any citations. IIRC there were several complaints of data loss in the ArqBackup subreddit [0], I think mostly around botched Arq 5 imports (Arq 6 uses a new format and perhaps data was lost in conversions gone awry).

Michael Tsai has a good summary of the situation that he wrote when Arq 6 was just out, and it echoes my recollection of issues with data loss [1].

[0] https://old.reddit.com/r/Arqbackup/

[1] https://mjtsai.com/blog/2020/04/13/arq-6/


I'm holding steady on v5 to Backblaze as the v6 migration path was promised post-v6 release, in June, and still hasn't appeared. And it also appears that support responsiveness has gotten worse.

I run backups on three Macs every 48 hours, then once a month or so I will use Carbon Copy Cloner to clone the two more important ones to sparse image files on a backup hard drive which is physically secured afterwards.


In the course of building that data compatibility, our plans changed, we think for the better. We wrote about it here: https://www.arqbackup.com/blog/next-up-arq-7/

I'm sorry you feel support isn't responsive. We aim to answer emails within 1 business day, as we always have. We've added staff to do this.


Thanks for the update and for responding here. Hadn't heard about the v7 plans, so good luck — sounds like a good move.

I don't have any particular support complaints myself; my comment was based more on perception based on Twitter & other discussions I'd seen.


I really like TM as well, and find it quite useful. It’s like incremental backup on steroids. But I don’t feel it replaces a “snapshot” backup system.


Yes I don't think you can/should rely on TM only. It's convenient to keep running in the background but you'll still want to do manual backups to multiple places (external disks, Dropbox, Github etc)


Manual backups fall by the wayside in every environment I've seen them used, and those environments include my own. You are a better person than me. I'm a big fan of Backblaze for this reason.

As an aside, I'm surrounded by people who consider cloud syncing to be a backup, and just yesterday I witnessed someone in the grief stage after they deleted a load of photos to save space because they thought the photos would remain in the cloud. They had even clicked through the "delete delete? Are you sure?".


To be fair, Dropbox works pretty well as a backup (with the ability to restore deleted files via the web UI), as long as you catch it in the 30 day window.

When I worked there, I was surprised to learn that they also often act as mitigation for ransomware attacks (they could roll back your account in time if you contacted CS and explained your situation).


Usually, after I complete a major project milestone, or do a lot of housekeeping (computerkeeping), or just when I feel like it's been too long since the last manual full-disk snapshot, I tend to do one.


I do this with my firewall. I’ve been caught out by making a seemingly minor change before saving a config then breaking everything.

It’s almost been enough to get me to have a test LAN and real one so as to maintain household tranquility.


Doesn't Backblaze delete copies of volumes that you haven't accessed in a certain amount of time?


Yes. It’s 30 days. Way too little imho.


By snapshot I mean filesystem snapshots as implemented by zfs and btrfs. So functionally they're snapshots, but space usage wise they're incremental backups.


Actually it's sort of halfway between ...

Which is to say, a block-based snapshot is very efficient - your snapshots only take up the space of the blocks that have changed - regardless of which files those blocks make up.

A hard-link based snapshot scheme is also fairly efficient for most use-cases - the snapshot only uses space equal to the files that have changed - even if those files only changed a little bit.

For most use-cases, I find the difference is negligible. However, if you have large files that change by small amounts, a block based scheme (a la zfs) is much more efficient.
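
To make it concrete, here's the difference sketched with zfs on one side and the classic rsync hard-link scheme on the other (dataset and paths are made up):

    # block-based: the snapshot only ever grows by the changed blocks
    zfs snapshot tank/data@monday
    zfs list -t snapshot -o name,used tank/data
    # hard-link based: unchanged files share inodes, but a file that
    # changed by one byte is stored again in full
    rsync -a --link-dest=/backups/monday /data/ /backups/tuesday/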


I’ve seen too many TM corrupt itself (wifi). It’s nice for the TM “feature”. Quickly going back. But it’s not reliable as a backup solution.

Also, at a certain point it just can’t keep up with the changes.


> I've been generally happy with Time Machine's stability, but this is getting me a little worried.

For the longest time, I backed up my MacBooks with Time Machine to a NAS.

Seemed to work fine - Time Machine was successful and I was able to browse previous versions on the machine being backed up without an issue (browse history of machine A on machine A).

Then one day I was planning to wipe a MacBook and do a clean install - figured I'd confirm I could browse my backups made on machine A on machine B before I wiped A. I spent over an hour attempting to open the sparse bundle (w/ Time Machine and manually) and just couldn't do it - kept loading forever or giving me errors about volume verification among other things[0].
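
(The usual folk remedy I've since seen suggested for a network sparse bundle that won't verify is to attach it without the automatic fsck and repair the HFS+ volume inside by hand; a sketch, with the paths and device node as examples only:)

    hdiutil attach -nomount -noverify -noautofsck /Volumes/NAS/machineA.sparsebundle
    # note the Apple_HFS / Apple_HFSX slice it prints, e.g. /dev/disk4s2
    sudo fsck_hfs -drfy /dev/disk4s2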

> I guess now's a good time to look into Arq (or similar)

Like you, I decided to take a look at alternatives. I'd previously played around with Arq (v5) and it looked awesome - stable, well-documented, etc. Well, by the time I actually needed an alternative to Time Machine, Arq had released v6 - earlier this year[1].

Unfortunately it appeared to be bug-prone (not great for backups!) and lacked ANY documentation (one of the great things about v5 was the in-depth documentation, particularly around backup format). Users on the subreddit[2] weren't thrilled and you can't purchase v5 licenses (and TBH I wouldn't recommend purchasing software that isn't supported anymore).

Within the last week or two, Arq has released a second major version within a year - v7[3]. Feedback appears to be better, and the author has acknowledged mistakes, but TBH I'm wary. Definitely not adopting two-week old software as my primary method for backing up.

I've been playing around with Carbon Copy Cloner[4] more recently.

The ideal goal would be bootable backups to a disk image hosted remotely but that doesn't appear to be possible[5] - so I'm resigning myself to file-based instead - no bootable disk image, but at least I'm a little more confident in my backups? And a single "file" (or image) becoming corrupt doesn't blow away the rest of my backup ¯\_(ツ)_/¯

If anyone has any suggestions or ideas, I'm all ears.

Edit: Probably worth noting that in this case machine A was running 10.14 (and HFS) and machine B 10.15 (and APFS) - but I'd imagine 10.15 should be able to open a 10.14 HFS sparse bundle without an issue.

[0] https://pastebin.com/Le6Q407e

[1] https://www.arqbackup.com/blog/arq-6-more-power-more-securit...

[2] https://old.reddit.com/r/Arqbackup/

[3] https://www.arqbackup.com/blog/next-up-arq-7/

[4] https://bombich.com/

[5] https://bombich.com/kb/ccc5/i-want-back-up-my-whole-mac-time...


I use CCC in addition to work-provided Code42 and can't recommend it enough. I have a 1TB SSD hanging off the back of my Mac that holds a full bootable copy of my boot volume as it stands, every day at 4 PM.

Because I have Code42 for versioning and going back further in time (though WFH due to COVID has truly brought to bear the shitty upload speed my home connection has), I don't utilize the SafetyNet feature, so I can't speak to the efficacy of it, but for straight daily dumb snapshots I love CCC. When I went 100% remote back in March I opted to get a specced-up Mac mini instead of a MacBook Pro. CCC made moving everything over barely a speed bump. It'll alert you to any issues and walk you through the restore when it senses it's being run off a booted external volume group. It really only took a couple of clicks. Dead simple. It also provides a handy GUI for APFS snapshots.

I don't buy a lot of...serious software (either work buys it for me or it's a PC game), but I don't regret the $40 CCC set me back. Plus, the devs are pretty much always ready for the new OS in fall, which to me is an important feature separating an OK Mac app from a good one.


For $40, CCC is beautifully cheap software for the peace of mind.

I've been using it for many years and every single time it's worked brilliantly.


> (though WFH due to COVID has truly brought to bear the shitty upload speed my home connection has)

What’s the old saw about “the bandwidth of a station wagon full of tapes”? Yeah, that. It would be faster (if obviously not economical) to just mail a hard drive back to the home office every week than trying to back up my entire machine over residential cable internet.


Personally, I have a network Time Machine backup and use Backblaze. I also keep a USB drive handy and clone a bootable backup via SuperDuper periodically--especially before large changes like an OS update. I've had to dig into the nitty gritty of Time Machine every year or so (usually stuff I do, but in any case failed backups are concerning).

Relevant to you, earlier this year (using Catalina) I swapped computers. My usual process is to create a disk image (usually with Disk Utility or SuperDuper) of my old computer and these overlay systems are tripping me up.

I remember my first attempt only looked to clone the OS / root system--not my user data files. I can't remember exactly what I did next, maybe clone to USB drive then create an image of that? But I pulled it up the other day and the disk image size covered my whole hard drive, but when I mounted it I just saw the root system (I thought I lost all my backup data). Using DaisyDisk I saw a lot of "hidden space." I noticed Disk Utility mounted two disks; Macintosh HD and Macintosh HD - Data, but Finder only showed the first on the desktop/sidebar and I had to hunt for the second under /Volumes/.

For simplicity I'm going to create a new disk image and pull out the relevant data. I never really had a need for a bootable backup on the network. I just figured I'd backup OS files in case I needed to pull up some oddball system hack I had on an old system.


I have used CCC in the past and liked it. I am currently using SuperDuper[1], which has also been good for keeping a bootable copy of my machine's SSD on a spare 2.5in SSD I had kicking around, in a cheap enclosure.

1: https://shirt-pocket.com/SuperDuper/SuperDuperDescription.ht...


Can you expand on why you switched to SuperDuper!?

I’m in the opposite camp, having used (the free version of) SuperDuper! in the past but switched to CCC. SuperDuper! seemed to have a simpler interface. I’ve been using CCC since it seemed(to me) to have a better development cycle with quicker support for new releases of the OS. Maybe I was mistaken in thinking that, and maybe Shirt Pocket’s really old style website played a part in that. Recently I looked up the release history of SuperDuper! and found that V3 was released about three years ago. I’ve been trying to figure out if it provides a longer update path (for a particular cost) than CCC.

Note to readers: the seemingly weird punctuation you see in this comment is because the application is called “SuperDuper!” (with the exclamation).


For me it was sort of the other way around, I had paid for a SuperDuper license years ago, and I went and looked at both it and CCC and saw that SD would work for backing up my new machine (which has Catalina on it out of the box). I agree that ShirtPocket's aesthetic is old skool for sure, but I have no complaints about the actual scheduled backups (well, actually one, which may not be their fault, I have it scheduled to dupe my SSD once a week in the middle of the night, and I notice that my machine won't sleep after that is completed).


Thanks for the warning about TM to NAS. Out of curiosity, were you using HFS+ and AFP via hfsprogs and Netatalk?

I set up a Pi4 with an external drive attached to it via USB3 about a year ago. I wanted to set up a solution for me and my wife's Macs. There are plenty of "tutorials" out there, and I found they were all pretty much the same - HFS+ and AFP. If you follow them, they work great - for a few days.

Eventually all my Macs (Mojave and Catalina) would get a Time Machine error saying the backup was corrupted and a new one would have to be built. This happened a couple of times, and eventually I tried using SMB. That made the problem go away. All great, so I thought.

Then, any time I had a power outage, the whole file system on the drive would get corrupted. They could only be mounted as read only, and no amount of fsck fixed it. Switching from HFS+ to ext4 fixed that issue.

Things have been pretty reliable since then. I've been able to recover a few files here and there, but haven't had to fully recover from a disaster.

TL;DR - every blog that tells you to create a Time Machine/Pi backup using hfsprogs and Netatalk is wrong. Don't do it that way. Use SMB and ext4.


That’s bizarre, I have no idea why anyone would recommend HFS+ for that. It’s a Linux system running Netatalk, which uses normal POSIX file I/O. ext4 is a far better choice.

Cargo-cult HOWTOs is the only explanation I can think of.


> Cargo-cult HOWTOs is the only explanation I can think of.

Pretty much. When I google "raspberry Pi time machine," every result I see, except one, tells you to do the HFS+/AFP method. These results include content churning sites like techradar and howtogeek.

The only site that recommends ext4/smb that comes up, fourth on my list of results, is this one: https://mudge.name/2019/11/12/using-a-raspberry-pi-for-time-...


netatalk is also the wrong solution nowadays. AFP is unmaintained, both on the netatalk side and on macOS. Instead, Samba supports Time Machine backups via the "fruit" (!) extension, which also provides some nice performance improvements for macOS clients.

https://manpages.debian.org/buster/samba-vfs-modules/vfs_fru...

TL;DR though: add the following configuration to the global section of your smb.conf:

    vfs objects = fruit streams_xattr
    fruit:aapl = yes
    fruit:metadata = netatalk
And under the section for your Time Machine share:

    fruit:time machine = yes
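
For completeness, a full share stanza might look roughly like this (the path, user and size cap are placeholders; `fruit:time machine max size` needs Samba 4.8 or newer):

    [timemachine]
        path = /srv/timemachine
        valid users = backupuser
        read only = no
        vfs objects = fruit streams_xattr
        fruit:time machine = yes
        fruit:time machine max size = 1T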


SMB on ZFS also works excellently: 4 years and counting with many power losses, ungraceful ejections, etc., and never a corrupted backup.


Interesting. I have a backup using Netatalk, but on ext4. Are you saying a simple samba share should work just fine?



Out of curiosity, how long has it been since you switched to ext4?


6 months. 3 power failures in the meantime and remounting has been flawless.


Wouldn't it be intentional that you can't open a Time Machine backup from another user account? It feels like a really, really big security hole to be able to view the contents of files in a Time Machine backup on a machine that either wasn't the original or from an account that wasn't restored from that same user account.

Otherwise, are you suggesting that you were able to authenticate the Time Machine volume and then were still unable to browse it within the UI?


I recall permission issues when I needed to recover a couple of files from a Time Machine backup from an old machine without restoring the whole volume. Made sense, I somehow worked around it though. Not sure I entered the old system’s credentials, I might have used the terminal with sudo or something, but memory is hazy.

Personally, no issues with TM, except that when it was working for too long initially I decided to wipe it and restart from scratch. (Don't think it helped speed it up.)

That said I spend some time carefully picking which folders to include in the backup to avoid slowness and bloat.


Again, though, isn't that intentional? If it wasn't, what would stop someone from, for example, stealing a Time Machine HDD and then having access to all the files on it? It's one thing to steal the origin machine but I was under the impression that TM is specifically designed not to allow access from other user accounts unless you know the credentials for the original account from which the backups are created. Is that not accurate? That seems like a giant security hole if it's not...


> That said I spend some time carefully picking which folders to include in the backup to avoid slowness and bloat.

What directories do you exclude? I feel like there must be lots of superfluous stuff in my backups but I don't understand macOS well enough to know for sure.


Not OP, but you should exclude large files such as VMs [0] as those don't work well with Time Machine.

Disclaimer: I am the author of Vimalin [1], the tool that Howard references as a possible solution for VM backups.

[0] https://eclecticlight.co/2020/03/02/time-machine-15-large-fi...

[1] https://vimalin.com


I use CCC for full machine-level backups that can be booted from in an emergency. I then do restic backups daily, direct to B2 Cloud Storage (Backblaze).
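
In case it's useful to anyone, the restic-to-B2 side is only a few commands; roughly this, with the bucket name, paths and credentials as placeholders:

    export B2_ACCOUNT_ID=...     # Backblaze keyID
    export B2_ACCOUNT_KEY=...    # Backblaze applicationKey
    restic -r b2:my-bucket:/mac init
    restic -r b2:my-bucket:/mac backup ~/Documents ~/Projects
    restic -r b2:my-bucket:/mac forget --keep-daily 7 --keep-weekly 8 --prune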


You can do a bootable backup to a remote system. The initial backup needs to be done locally; then the backup disk can be moved remote.

This was easier pre-Catalina

https://bombich.com/kb/ccc5/using-carbon-copy-cloner-back-up...


Meanwhile I can't believe Apple went to the trouble of an entirely new FS without implementing BTRFS-style block-level checksums. The number of data corruption issues this could have prevented by now is too high to count. Not to mention there is a direct monetary incentive for Apple to do so, as it would slightly inflate storage requirements in exchange for safety (and Apple commonly stratifies their products by storage capacity). It's 2020; we need ECC RAM and block-level checksums for all new platforms going forward.


DDR5 has ECC, so at least that is happening soon.


I looked up DDR5 on Wikipedia, but it seems like ECC is just an option, which I understand was an option with previous generations too (at a much higher cost). Please elaborate on the ECC part, especially if manufacturers have committed to making only ECC versions.


The DDR5 standard has on-die ECC; it's not optional.


I'm not sure I got this from the article, but what exactly is the advantage of using APFS over HFS+ for Time Machine? The only advantage that was highlighted was per-file encryption, which seems to be exclusive to the Apple Silicon Macs. And perhaps the logical volume management that allows you to use an external drive as both a backup disk and a general-purpose disk?


I think that because APFS uses snapshots to monitor how files change over time, and APFS uses a block-level copy-on-write mechanism for files that change, time machine may be able to create smaller diffs when backing up over APFS because it only needs to store modified blocks, not entire modified files.


Yes, definitely. The internals of APFS offer lots of things like these that would be very beneficial for backups. So using APFS on the backup volume should increase backup speeds big time.


That sounds great! I have a workflow that uses a network mounted HFS+ image for Time Machine backups that are then backed up to my GDrive. Anything to reduce the incremental backup size would help this.


> And perhaps the logical volume management that allows you to use an external drive as both a backup disk and a general purpose disk?

You can already do this without any logical volume management. Time Machine backups on a disk go into the /Backups.backupdb directory. You can create other subdirectories of the root directory and put other files there (or put files directly into the volume root), and Time Machine will just ignore them completely.


Copying files to an external APFS is a lot faster than copying to HFS+.


So, I generally like APFS, especially because I can have a case-sensitive volume for git repositories without stealing space from the root volume. However, has Apple fixed the issue where filling up the hard drive makes it impossible to delete files to free up some space?


Nope. I'm living on the edge with my 256gb MBP, and if it fills up the only recourse is rebooting, which frees up the swap file so you have enough space to delete things...


I read that OnyX can delete snapshots without needing to reboot.
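
tmutil can apparently do it too; roughly (the date stamp is an example):

    tmutil listlocalsnapshots /
    tmutil deletelocalsnapshots 2020-09-28-123456
    # or ask TM to thin them: purge up to ~20 GB, urgency 4 (most aggressive)
    tmutil thinlocalsnapshots / 20000000000 4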


This is a fundamental problem of all copy-on-write filesystems. With btrfs, a quick fix is to plug in a spare usb stick whose space btrfs can borrow in order to delete files on the main drive.
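
Roughly (device and mount point are examples):

    btrfs device add /dev/sdX1 /mnt     # lend the filesystem some space
    rm /mnt/some-huge-file              # deletions can now proceed
    btrfs device remove /dev/sdX1 /mnt  # give the stick back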


I might be missing something, but couldn’t you reserve a couple gigs of scratch space for this?


Btrfs has separate metadata and data block groups, allocated dynamically since some workloads can be light or heavy on metadata. And the data:metadata usage ratio can change after allocation and freeing space by deleting files. The idea of converting between block groups is being kicked around. At this point it is kind of a "last mile" sort of problem, not that common. But annoying for those who run into it because it's non-obvious how to get out of it.

Safest option is to just look at 'btrfs fi us' and see which type of block group is the least full. Do a filtered balance on it. e.g. Use something like '-dusage=5' to essentially get rid of any data block groups that are 5% or less full. Their data extents get moved to other block groups. Now that space is "unallocated" and can be allocated into new metadata block groups.

These sorts of gymnastics should be rare. There is https://github.com/kdave/btrfsmaintenance that will do them automatically for those who encounter the problem with their particular workload more often than they care to deal with manually. I never have to manually balance, and I also don't use the automated scripts - it's been years since I've hit enospc that wasn't a case of a 100% full file system.
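
Concretely, the incantation being described is something like:

    # see how full the data vs. metadata block groups are
    btrfs filesystem usage /mnt
    # repack data block groups that are <=5% used, returning them to "unallocated"
    btrfs balance start -dusage=5 /mnt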


Sorry, I meant “couldn’t the developers of APFS reserve a couple gigs”. I assume there’s some implementation detail that makes this complicated.


Btrfs actually reserves some space so that these operations always succeed. There have been bugs, but that's the intention.


I've hit this on other filesystems and I think I ended up doing: echo "" > largefile

Then you can delete your files. Not sure if it'll work on APFS.


aaaah! That explains it!

This happened to me recently, couldn’t get my head round it but a reboot fixed things so I didn’t investigate more.

Reboot must delete the swap or something.


I wish they would fix the bug in Catalina where you can't restore to Mail from TM backups, it won't access any available backups. If you go into ~/Library/Mail and manually locate the account, it will access previous versions, but restoring these doesn't seem to recover individual inboxes, likely because there is an index somewhere else. Anyway, pretty bad...


It'd sure be nice if Time Machine learned to create reliable backups over the network one of these decades.


I really wish iCloud could be a Time Machine target while we’re talking about network backups.


I agree with you 100% - this is something I've wanted for a really long time, and I just feel like it is something really obvious that Apple is overlooking. I'm not sure if it is ever coming, but man I'd love to see it happen.


My impression is this would be more bandwidth efficient (and thus perhaps possible) with APFS than HFS+ because of block level updates.

Of course I have no clue whether Apple is thinking of this or not.


To do that, they'd have to lower those prices. They have been the same since 2017


I really wish Time Capsule could do iOS Backup.


Not sure why you have been downvoted, but I can never get Time Machine to back up to my Synology NAS. Google says there are a lot of people who have the same issue.


I'm doing precisely that and never have a problem. Weird.


Do you have a 1 bay NAS or more than 1 bay?


Two, in RAID 1.


I'm having enough trouble getting it onto an (admittedly 10-year-old) USB drive; it takes half a day just to 'prepare' the backup, then another day to very slowly transfer 100+ GB over to it.

Basically I keep corrupting my backups because I forget it's running and disconnect the hard drive when I move my macbook. Sigh.
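
These days I try to at least check whether a backup is in flight before yanking the drive; roughly (volume name is an example):

    tmutil status                      # look for "Running = 1"
    diskutil eject /Volumes/MyTMDrive  # only once it's idle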


It's too bad Apple had to scrap the plans to switch to ZFS back in the late oughts. We could have been here like 10 years ago!


"had to" could be too strong a phrase, but not entirely wrong in a way:

>> Apple can currently just take the ZFS CDDL code and incorporate it (like they did with DTrace), but it may be that they wanted a "private license" from Sun (with appropriate technical support and indemnification), and the two entities couldn't come to mutually agreeable terms.

> I cannot disclose details, but that is the essence of it.

* https://marc.info/?l=zfs-discuss&m=125642378308127

* https://arstechnica.com/gadgets/2009/10/apple-abandons-zfs-o...

There was a NetApp-Sun legal spat going on due to NetApp's WAFL patents (Sun won IIRC), and indemnification could have been a source of tension.


With OpenZFS, we still could (and I believe, can, just unofficially).


My understanding is that while ZFS might be a great fit on the desktop, it's way too RAM-hungry for phones/iPads. Maybe with Apple's contribution it could work better on mobile? Maybe I'm just ignorant of different tuning or features that could be disabled? I've only used it in rather simple scenarios.

APFS was pretty explicitly designed for mobile, so I wouldn't be surprised if, even had ZFS been adopted on the desktop, they would still have made APFS for mobile.


I use ZFS on my RaspberryPi. It works great even in lower RAM situations.


Apple has historically been very stingy about RAM in iPhones. It looks like modern Raspberry Pis today have 2, 4, or 8 GB. The current-gen iPhone SE has 3 GB; the 11 standard/pro/max have 4 GB. They're in the same ballpark. So it is possible, but Apple is so incredibly focused on mobile that I can see them still investing in a custom filesystem focused on SSDs, low memory overhead, per-file encryption, firmlinks, and whatever custom stuff they add for system updates, securing the boot volume, etc.


Maybe, but that didn't stop them from ZFS on the Mac.


These days it works great on low-RAM devices. My router runs ZFS on OpenBSD and uses maybe 100-200 megs for FS stuff, at most.


I've tried to look it up and it seems that ZFS doesn't work on OpenBSD. Is it some new development?


Oops, I meant FreeBSD, haha.


Oh, makes sense. Thanks for clarification!


Still no data checksums... I'll stick with ZFS.


Being a ZFS (on FBSD) user myself, it is a bit sad that everyone seems to forget NILFS2 (checksums for both metadata and data).


Man, I love this guy. He always gives the best geek stuff (Think "Mr. Universe," in Serenity).


Hm... I back up to an Ubuntu LTS box using TM over Samba... is this solution not going to be viable going forward?

Maybe it’s time to swap to an external ssd...


Not excited about this. Using APFS on my MacBook Pro, fighting a losing battle today where a file stored in an iCloud synced folder isn’t readable.

First I try to copy the file out of the iCloud sync folder to a local folder that doesn’t sync. No change.

Then, I go onto iCloud.com and download the file. The downloaded file shows up and works as expected. I ctrl + c then ctrl + command + v to move it into the local folder I had created.

Now, according to finder, I have two files with the same name. One is readable and the other is not.

I cannot delete either.

I’ll be playing with the terminal next to see if anything can be done there.


What makes you think it's an APFS bug?



