Best practice is to build a new vdev with the 4x4TB drives and move the entire dataset across. It isn't like some other systems, where you can just bolt in another drive and the array expands to absorb it.
The critical concept here is that where other systems deal in physical disks, ZFS deals in vdevs. A vdev is one or more disks presented to the pool as a single 'storage entity'. Thus, you don't add disks to a storage pool, you add vdevs.
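For instance (the pool and device names here are made up), you'd create a pool from a whole raidz vdev, and later grow it by adding another whole vdev rather than tacking on a single disk:

    # create a pool whose single vdev is a 4-disk raidz
    zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # later: grow the pool by adding a second vdev (another 4-disk raidz)
    zpool add tank raidz /dev/sde /dev/sdf /dev/sdg /dev/sdh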
Doing some more reading, this sounds like it "degrades" the array each time and could be risky. Other replies to my comment seem to suggest it can't be done.
I believe there's no problem with your scenario; there's a zpool replace command which does this:
zpool replace [-f] pool device [new_device]
Replaces old_device with new_device. This is equivalent to attaching
new_device, waiting for it to resilver, and then detaching
old_device.
The size of new_device must be greater than or equal to the minimum
size of all the devices in a mirror or raidz configuration.
new_device is required if the pool is not redundant. If new_device is
not specified, it defaults to old_device. This form of replacement
is useful after an existing disk has failed and has been physically
replaced. In this case, the new disk may have the same /dev path as
the old device, even though it is actually a different disk. ZFS
recognizes this.
-f Forces use of new_device, even if it appears to be in use.
Not all devices can be overridden in this manner.
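So in practice (hypothetical pool and device names), upgrading one disk is just:

    # new disk is connected as /dev/sde; swap it in for the old /dev/sdb
    zpool replace tank /dev/sdb /dev/sde

    # watch the resilver; the old disk is detached automatically when it finishes
    zpool status tank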
Thanks for pointing this out. I still don't see how all of this applies in a home NAS context, however. It would be really great if someone could explain.
Let's say I have 6 SATA ports. I have 4 drives that I collected from various computers and now want to unify in a home-built NAS:
A. 1TB
B. 2TB
C. 4TB
D. 1TB
E. -empty-
F. -empty-
Now all my drives are full and I want to either add a disk or replace a disk. How do I:
1. replace disk A with a 4TB disk
2. add a 4TB disk in slot E
As long as you have room to connect one extra drive there's no "degrading": add the new disk, zpool replace, remove the old disk, repeat. I've done this and it worked. I've also had the new disk fail partway through and didn't have to spend any time resilvering, so I don't think there's any degrading. (In any case you're certainly in no worse a situation than you would be after a single-drive failure. If you're worried about a second drive failing while you're replacing a disk after a failure, use raidz2.)
Obviously, if you only have 4 drive slots and have to remove a disk to replace it, then you'll have less redundancy while the replacement is in progress.
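To make that concrete for the 6-port scenario above (pool and device names are made up, and the details depend on how the pool was originally laid out), it would go roughly like this:

    # 1. replace disk A: put the new 4TB drive in the empty slot E,
    #    resilver onto it, then pull the old 1TB drive afterwards
    zpool replace tank /dev/disk-a /dev/disk-e
    zpool status tank        # wait for resilvering to complete

    # 2. add a 4TB disk in slot E: zpool add puts it in as its own
    #    single-disk vdev, with no redundancy of its own (ZFS warns
    #    about mismatched replication; -f overrides)
    zpool add tank /dev/disk-e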
You can replace the 4x3TB drives with 4x4TB drives for more space. But you can't replace the 4x3TB drives with 3x4TB drives or 2x6TB drives. If you started with two mirrored pairs and want to change to a raidz, you can't without destroying the pool. You can upgrade disks and add more disks, but you can't remove disks without replacing them.
I'd thought that being able to use different-sized drives was a selling point, for piecemeal upgrades, but the article says to never mix sizes across vdevs or pools.
You can upgrade the capacity of a pool, but you have to upgrade each drive, one at a time, with time to resilver in between, and the bigger capacity doesn't come online until every drive in the vdev has been replaced.
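A rough sketch of that upgrade, with a made-up pool name and device names, would be:

    # let the pool grow automatically once every disk in the vdev is bigger
    zpool set autoexpand=on tank

    # swap the 3TB disks for 4TB ones, one at a time, letting each resilver finish
    zpool replace tank /dev/old1 /dev/new1
    zpool status tank            # repeat for old2, old3, old4

    # if autoexpand was off during the swaps, trigger the expansion manually
    zpool online -e tank /dev/new1 /dev/new2 /dev/new3 /dev/new4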
This might kill my dreams of a ZFS NAS...