Typing 'rm' into any script I write scares the bejeebus out of me. I tend to write 'echo rm' so I get a chance to review the commands while testing, to catch exactly this kind of issue.
Instead of deleting anything, my scripts usually mv files to a timestamped folder under /tmp. In practical terms it's rarely a noticeable difference in performance or disk usage, and it makes scripts easier to debug when you can inspect transient artifacts.
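A minimal sketch of that pattern (the folder naming and helper name are my own assumptions, not anything standardized):

```shell
# Per-run "trash" folder under /tmp, stamped so runs don't collide.
trash_dir="/tmp/trash-$(date +%Y%m%d-%H%M%S)-$$"
mkdir -p "$trash_dir"

# soft_rm: move targets aside instead of deleting, so failed runs
# can be inspected afterwards.
soft_rm() {
  mv -- "$@" "$trash_dir"/
}

# usage: soft_rm ./build/artifact.bin
```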
I manage large video/audio assets, so disk usage is very noticeable. I've done the mv-to-a-designated-trash-folder approach, with another script that finds files in that folder older than a designated time-to-live and does `-exec rm -f {} \;` type stuff. Even typing that out still gives me pause. Of course, nobody ever needs a file as urgently as they do just 24 hours after it was deleted and has aged out of the designated window.
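The cleanup pass can be a one-liner around find; something like this sketch, where the trash path and the 24h TTL are placeholder assumptions:

```shell
TRASH=/tmp/asset-trash      # designated trash folder (path is an assumption)
TTL_MINUTES=$((24 * 60))    # 24h time-to-live
mkdir -p "$TRASH"

# -mmin +N matches files last modified more than N minutes ago;
# "-exec rm -f {} +" batches names into as few rm invocations as possible,
# which is cheaper than "\;" spawning one rm per file.
find "$TRASH" -type f -mmin +"$TTL_MINUTES" -exec rm -f {} +
```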
My workstation machines take hourly (plus at-boot and on-demand) snapshots of the filesystem. Doing it at the system level is a lot simpler than repeating the logic over and over, and /tmp is often a different mount than where the files first resided, so moving things there is really a copy+delete.
In case you're interested, I've adopted a pattern that works for me in bash (I don't use zsh, so caveat shellator):
N=${N:-} # needed if you use set -u
$N rm ./whatever
and then you can exercise the script via
N=echo ./something-dangerous
but with N undefined it runs as expected. More nuanced commands (e.g. `rsync --delete --dry-run`, which prints a lot more detail about what it thinks it's going to do) can be written as a `rsync --delete ${N:+--dry-run}` type deal.
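Putting the pieces above together into a runnable sketch (the file path is a made-up example):

```shell
#!/usr/bin/env bash
set -u
N=${N:-}    # default N to empty so set -u doesn't trip when it's unset

# With N unset this deletes for real; invoked as
#   N=echo ./something-dangerous
# every $N-prefixed command is printed instead of executed.
touch /tmp/scratch-to-delete          # hypothetical target file
$N rm -f /tmp/scratch-to-delete
```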
You can also use `rm -i` to confirm deletions, so you don't have to edit and re-run the command. The downside is being asked about every file individually rather than confirming one (big) list, so I'm not sure it fits your use case.
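If per-file prompts are the dealbreaker, a middle ground is to print the whole candidate list once and ask a single yes/no before deleting; a sketch (function name, glob, and prompt wording are all assumptions):

```shell
# confirm_rm: show every candidate, then one prompt for the whole batch.
confirm_rm() {
  [ "$#" -eq 0 ] && return 0
  printf '%s\n' "$@"
  read -r -p "Delete ALL of the above? [y/N] " ans
  [ "$ans" = y ] && rm -f -- "$@"
}

# usage: confirm_rm ./build/*.tmp
```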