- poorly executed cleanup (aka regressions)
- misunderstanding of requirements (new bugs)
- "standardization" of naming (now you have two standards https://xkcd.com/927/)
- restructuring of code or renaming of abstractions, which adds friction for the code's original authors (assuming they are still around)
Left unchecked, all of this can happen at the expense of feature work, leaving an unmerged PR, missed deadlines, and a frustrated junior. Alas, we usually learn best by making mistakes, and I find this particular lesson hard to teach to some juniors.
I would argue that a measured tolerance of ugly (and sometimes bad) code is more important until you learn to spot, and to write, code that doesn't look wrong.
Two articles I found helpful:
In particular, they might make you think about it in game-theoretic terms. Is the expected value of this change more bugs, or fewer? Does making this change have at least a 50% chance of preventing a future bug?
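That expected-value framing can be written down directly. This is a toy sketch with made-up illustrative probabilities (none of these numbers come from the text); the point is only that a refactor has both a chance of introducing a regression and a chance of preventing a future bug, and the two can be weighed against each other:

```python
def expected_bug_delta(p_introduce: float, bugs_if_introduced: float,
                       p_prevent: float, bugs_if_prevented: float) -> float:
    """Expected change in bug count from a change.

    Positive means the change is expected to add bugs on net;
    negative means it is expected to prevent more than it introduces.
    """
    return p_introduce * bugs_if_introduced - p_prevent * bugs_if_prevented


# A cosmetic rename touching many call sites: a modest chance of a
# regression, little chance of preventing anything (numbers invented).
rename = expected_bug_delta(p_introduce=0.2, bugs_if_introduced=1,
                            p_prevent=0.05, bugs_if_prevented=1)
print(rename)  # 0.15 -> expected to add bugs on net; probably skip it
```

If the result comes out positive, the honest answer to the 50% question above is "no", and the cleanup is probably better left out of the PR.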