I'm into using transactions as explicit guards around chunks of known-non-reversible logic, as a robust way to warn that manual intervention may be necessary. But using them to replace lockfiles seems to combine the worst drawbacks of both systems: the guarded work is still inherently at-least-once, and establishing and maintaining a database connection/transaction is substantially more risk surface than a lockfile.
What if the database has gone away or crashed by the end of your possibly-long-running script? (Unless you're using sqlite, in which case, why not just use a lockfile? Though on second thought, sqlite is probably better at lockfiles than flock...)
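For what it's worth, the sqlite-as-lockfile idea can be sketched in a few lines of shell. Everything here (the database path, the sleep standing in for the critical section) is made up for illustration; the real mechanism is that `BEGIN IMMEDIATE` takes sqlite's write lock, and with the default busy timeout of zero a second taker fails immediately instead of queueing. Note that even this needs a background job to keep the lock held, since the lock lives only as long as the sqlite3 process:

```shell
#!/bin/sh
# Hypothetical sketch: abusing sqlite's write lock as a mutex.
DB=/tmp/guard.db    # hypothetical lock-database path

sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS guard(x);'   # ensure the file exists

# Hold the lock while the "critical section" (here, a sleep) runs:
# sqlite3 keeps the transaction open as long as its stdin stays open.
{ echo 'BEGIN IMMEDIATE;'; sleep 3; } | sqlite3 "$DB" &
holder=$!

sleep 1   # give the holder time to acquire the lock

# A second taker fails right away ("database is locked") instead of blocking.
if ! sqlite3 "$DB" 'BEGIN IMMEDIATE; COMMIT;' 2>/dev/null; then
    echo "lock is held, refusing to run twice"
fi

wait "$holder"
```

When the holder process exits, its transaction rolls back and the lock is released, so a crashed script can't leave the lock stuck, which is the property people want from flock too.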
What about the tech choices that using a transaction precludes? You can no longer write ops scripts in pure shell (not without some seriously weird file-descriptor munging and job backgrounding to keep the transaction open while you do the work). And installing your runtime of choice plus a database driver may be unnecessary and time/space-consuming on a lot of the systems you need to manage this way.
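For contrast, the pure-shell lockfile guard that a transaction would displace is tiny. The path here is made up, and `flock(1)` is util-linux rather than strictly POSIX, but it's present on essentially every Linux box:

```shell
#!/bin/sh
# Classic lockfile guard: open fd 9 on the lockfile, take an exclusive
# non-blocking lock, and bail if another instance already holds it.
# The kernel drops the lock automatically when the script exits,
# even if it crashes, so there is no stale-lock cleanup to write.
LOCKFILE=/tmp/myjob.lock    # hypothetical path

exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "another instance is already running" >&2
    exit 1
fi

# ... the actual ops work runs here, under the lock ...
echo "doing the work"
```

No runtime, no driver, no open connection to babysit; the only moving part is one file descriptor.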
You also now have a bootstrap problem: your database driver and associated tooling may not be available yet on the target system, so transaction-managed scripts are an awkward fit for provisioning clean/empty systems.
All of those can be mitigated or worked around, and this shouldn't be taken as advice to never use an RDBMS's transaction system to manage ops tasks, but I think the cases where it adds more than it costs are pretty rare.