Creating a text stream which you then pipe to a shell is a good way of debugging complex operations, especially potentially destructive ones that need inspection to verify. It's a technique I use a lot, to the point that some of my more destructive scripts don't do anything themselves; they just output shell commands, and it's up to me to pipe them to the shell.
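A minimal sketch of that pattern (the `*.bak` glob is just an illustrative target, not from the comment):

```shell
#!/bin/sh
# This script is side-effect-free: it only *prints* the destructive
# commands it would run, one per line.
for f in ./*.bak; do
    [ -e "$f" ] || continue          # skip if the glob matched nothing
    printf 'rm -- %s\n' "$f"
done

# Inspect first:   ./cleanup.sh | less
# Then execute:    ./cleanup.sh | sh
```

Because the script itself never deletes anything, the review step is built into the workflow.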
Thanks for the -ok option; I wasn't aware of it. But yeah, I've been a heavy user of the -exec option for years, and it always blows my mind when people get crazily complicated xargs invocations going.
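For reference, the difference between the two options (the `*.tmp` pattern is just an example):

```shell
# -exec runs the command for every match, no questions asked:
find . -name '*.tmp' -exec rm {} \;

# -ok is identical, but prompts y/n before each invocation,
# which makes it a safer choice for destructive commands:
find . -name '*.tmp' -ok rm {} \;
```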
I would have expected an article on how to use these tools "like a boss" to point out that if there is whitespace in the filenames you will run into problems unless you use "find ... -print0 | xargs -0 ...".
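A sketch of the safe form (searching `*.txt` files for `TODO` is just an illustrative task):

```shell
# Without -print0/-0, a file named "my file.txt" gets split by xargs
# into two arguments, "my" and "file.txt". NUL-terminating the names
# avoids that, since NUL cannot appear in a file name:
find . -name '*.txt' -print0 | xargs -0 grep -l 'TODO'
```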
I very rarely use find -exec, because I usually want to massage the list of files further. So I have a simple utility, print0, which reads lines and outputs null-terminated lines. It turns any list separated with newlines into input suited to xargs -0. File names with embedded newlines are vanishingly rare, compared to those with spaces, so it has always worked out well for me.
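The commenter's print0 utility isn't shown; a one-line sketch with the behavior described would be:

```shell
#!/bin/sh
# print0: read newline-separated names on stdin, write them
# NUL-separated on stdout, ready for xargs -0.
# (A guess at the commenter's utility, not the actual source.)
tr '\n' '\0'
```

That lets ordinary line-oriented filters sit in the middle of the pipeline, e.g. `grep -rl TODO . | grep -v vendor | print0 | xargs -0 rm`.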
The link in the post to which you replied covers that; it's the section of the man page which deals specifically with differences between xargs and parallel. Having used neither of these tools before, I think two relevant lines from that link would be:
>xargs can run a given number of jobs in parallel, but has no support for running number-of-cpu-cores jobs in parallel.
>xargs has no support for keeping the order of the output, therefore if running jobs in parallel using xargs the output of the second job cannot be postponed till the first job is done.
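To illustrate the two quoted points (assuming GNU xargs, GNU coreutils `nproc`, and GNU parallel):

```shell
# xargs has no built-in "one job per core", but it can be approximated
# by asking nproc for the core count:
seq 1 8 | xargs -P "$(nproc)" -I{} sh -c 'sleep 0.1; echo {}'
# ...and the output above arrives in whatever order jobs finish.

# GNU parallel covers both points natively:
#   -j 100%  -> run one job per CPU core
#   -k       -> emit output in input order, even when jobs run in parallel
# seq 1 8 | parallel -j 100% -k 'sleep 0.1; echo {}'
```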
The author would do better to replace his backticks with $(pwd). You can nest this form arbitrarily deep, and it interpolates in strings and heredocs.
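A quick illustration of both points:

```shell
# Backticks don't nest without escaping; $() does:
dir=$(basename "$(pwd)")
# The backtick equivalent needs escaped inner backticks:
#   dir=`basename \`pwd\``

# $() also interpolates inside double-quoted strings and heredocs:
echo "current dir: $dir"
```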
There's no need to use xargs and pipe to sh in this scenario. The same functionality can be achieved using only find:
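The original pipeline isn't shown; assuming it generated rm commands for matched files (the `*.orig` pattern is hypothetical), the find-only equivalent would look like:

```shell
# Instead of generating commands and piping them to sh, e.g.:
#   find . -name '*.orig' | xargs -I{} echo rm {} | sh
# find can run the command itself:
find . -name '*.orig' -exec rm {} +
# The "+" terminator batches many file names per rm invocation,
# much as xargs would.
```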