RAID-1 is doing exactly what you recommend without any effort: a perfect replica of the disk. And if one drive dies, who cares? The beauty of RAID-1 is that you don't need the dead drive to still have a full copy.
I think the idea here is that RAID1 forces both SSDs to write every block at the same time. With identical SSDs and very similar write endurance profiles you're likely to have them both give up at the same time.
Even just a nightly rsync would decorrelate what is right now a nearly perfect correlation.
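Something run nightly from cron along these lines would do it; a minimal sketch assuming the second SSD is mounted at /mnt/second-ssd and the data lives under /data (both paths made up):

    #!/usr/bin/env python3
    # Nightly one-way sync from the primary SSD to the second one.
    # Assumes the second drive is mounted at /mnt/second-ssd and is
    # dedicated to this copy; both paths are placeholders.
    import subprocess

    SRC = "/data/"            # trailing slash: copy the contents, not the directory
    DST = "/mnt/second-ssd/"

    # -a archive mode, -H hard links, -A ACLs, -X xattrs, --delete mirrors removals
    subprocess.run(["rsync", "-aHAX", "--delete", SRC, DST], check=True)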
> identical SSDs ... you're likely to have them both give up at the same time
I wouldn't say it's much more likely than with traditional drives, unless you are getting towards EOL in terms of how much rewriting has been done - but even after that much time I'd expect randomness to have separated things out at least a bit.
The main concern I have with either drive type is finding out that blocks which haven't been touched in ages have quietly gone bad, and not noticing until you try to read them to rebuild the array once a failed drive has been replaced - that applies equally to both unless you run a regular verify. Other failure modes like the controller dying are less likely to happen concurrently, unless there is a power problem or some such in the machine of course, but again these might affect all drive types, and this is one of the reasons you need proper backups as well as RAID (the age-old mantra: RAID is not a backup solution; RAID increases availability and reduces the chance you'll need to restore from backup).
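On the regular verify: with Linux md it is cheap to automate, e.g. from a monthly cron job. A rough sketch, assuming the array is md0 (placeholder name):

    #!/usr/bin/env python3
    # Kick off a full verify pass on a Linux md array so blocks that have
    # quietly gone bad are found before a rebuild needs them.
    # "md0" is a placeholder for the array name; run e.g. monthly from cron.
    from pathlib import Path

    # md reads and compares every block on all members in the background;
    # progress appears in /proc/mdstat, and disagreements are counted in
    # /sys/block/md0/md/mismatch_cnt once the pass completes.
    Path("/sys/block/md0/md/sync_action").write_text("check\n")

Some distros already ship an equivalent periodic check via mdadm's cron scripts, so it may be running without you having set anything up.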
Having said that, my home server deliberately has different drives in its RAID-1 mirror of SSDs (different controllers, and even if the memory on each is from the same manufacturer it is highly unlikely to be from the same batch), just in case. The spinning-metal drives it also has in another array were bought in a way that decreases the chance of getting multiple from one batch, in case it is a bad batch.
> nightly rsync
The problem with that and other filesystem-level options is that, depending on the filesystem and what you are running, some things might be missed due to file locking. As RAID works at the block-device level this is never going to be the case, though of course either way you can't catch what is in RAM and not yet physically written.
Of course this problem will be present for most off-device backup solutions too, so whatever mitigation you use there can be applied to backing up between the drives as well.
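If the mitigation you use is snapshot-based (just one common option, not the only one), it carries over directly. A rough sketch assuming LVM with a volume group vg0 and a logical volume data, all names made up:

    #!/usr/bin/env python3
    # Rsync from a short-lived LVM snapshot so files that are locked or
    # changing are still captured in a consistent state.
    # vg0, data, /mnt/snap and /mnt/second-ssd are all placeholders.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("lvcreate", "--snapshot", "--size", "5G", "--name", "data_snap", "vg0/data")
    run("mount", "-o", "ro", "/dev/vg0/data_snap", "/mnt/snap")
    try:
        run("rsync", "-aHAX", "--delete", "/mnt/snap/", "/mnt/second-ssd/")
    finally:
        run("umount", "/mnt/snap")
        run("lvremove", "-y", "/dev/vg0/data_snap")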
Again, both NVMe modules are likely to fail simultaneously when used in a RAID-1 mirror on the same chassis, controller and PSU, under the same workload, especially if they are the same model and age.
I'm not sure this issue is significantly worse for SSDs compared to other drive types, except once they get really old and are close to EOL as defined by the amount written, though I'm happy to be proven wrong if you have some references where the theory has been tested.
If you are really worried about that, perhaps artificially stress one of the drives for some days before building the array, so it is more likely to go first by enough of a margin to replace it and bring the array back from a degraded state before the other follows?
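For what it's worth, the pre-aging itself is trivial to script. A destructive sketch; /dev/sdb, the pass count and block size are placeholders, and it assumes the drive holds nothing yet:

    #!/usr/bin/env python3
    # Burn some write endurance on one SSD before it joins the mirror, so the
    # two drives no longer share an identical wear profile.
    # DESTRUCTIVE: overwrites the whole device. /dev/sdb is a placeholder.
    import os

    DEVICE = "/dev/sdb"
    PASSES = 3                     # each pass rewrites the full capacity once
    BLOCK = 4 * 1024 * 1024        # 4 MiB per write

    with open(DEVICE, "r+b", buffering=0) as dev:
        size = dev.seek(0, os.SEEK_END)        # device capacity in bytes
        for _ in range(PASSES):
            dev.seek(0)
            written = 0
            while written + BLOCK <= size:     # skip the final partial block
                dev.write(os.urandom(BLOCK))   # incompressible data, so it really hits the flash
                written += BLOCK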
Unfortunately that won't wash if you are renting a server or colocating. Replacing a drive without it having already failed would most likely result in a charge for unnecessary hands-on maintenance time.
Though if you are expanding the storage anyway, you could do it with an identically sized pair of drives: sync the existing array over to one of the new ones, drop the extra old drive, and then you have two unmatched drives for a new array. If using LVM you can join that to the existing VG, or (less safe, but better performing once you are done) you can try to reshape the two arrays into a stripe set for RAID 1+0. And hope that the new drives are not from the same batch as your existing ones, having just sat in a store cupboard for the last year.
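For the LVM route, the join itself is only a couple of commands once the new mirror exists; a sketch assuming the new array is /dev/md1 and the existing volume group is vg0 (both placeholders):

    #!/usr/bin/env python3
    # Fold a freshly created md mirror into an existing LVM volume group so
    # its space is available alongside the old array.
    # /dev/md1 and vg0 are placeholders for the new array and the existing VG.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("pvcreate", "/dev/md1")          # label the new array as an LVM physical volume
    run("vgextend", "vg0", "/dev/md1")   # add it to the existing volume group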