There is a balance. I've seen plenty of under-engineered software with the same code copy-and-pasted dozens of times. I've also seen incredibly complex abstraction layers that only made sense to their original authors (long gone...) and were painfully hard to navigate and maintain (class hierarchies 5 layers deep, etc.).
It's worth noting that a complex abstraction layer is often a sign that its authors were building on an under-engineered base. A corollary to Hanlon's Razor could be:
"Never attribute to over-engineering that which is adequately explained by repeated under-engineering."
The problem, IMHO, is that often "engineering" is ad-hoc, divorced from the big picture:
A developer sees a codebase with too few or bad abstractions, which makes the change they want to make hard. So they invent some new abstraction (e.g. a class hierarchy) that solves the immediate problem and makes adding the new code easier.
The trouble is that these new abstractions may only make sense in the particular state the code is in right now; they don't necessarily correspond to intrinsic, natural concepts that can be verbalised when talking about the solution.
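To make that concrete, here is a hypothetical Python sketch (all names invented for illustration). The base class exists only so two handlers can share a few lines in today's call order; its name describes the current shape of the code rather than a concept anyone would use when talking about the problem, unlike the small policy class at the end.

    # Abstraction shaped by the current code, not by the domain: "preprocess"
    # only means something relative to today's call order, and the base class
    # name tells you nothing about its responsibility.
    class BaseOrderHandlerSupport:
        def preprocess(self, order):
            order["validated"] = True
            return order

    class DiscountedOrderHandler(BaseOrderHandlerSupport):
        def handle(self, order):
            order = self.preprocess(order)
            order["total"] = order["price"] * 0.9
            return order

    # Contrast: a concept you could actually verbalise when discussing the
    # solution ("we apply a discount policy to the price").
    class DiscountPolicy:
        def __init__(self, rate):
            self.rate = rate

        def apply(self, price):
            return price * (1 - self.rate)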
A good indicator of this is that you have a bunch of "...Service" classes whose names don't really tell you what their responsibility is. In the end, you might have 10 such classes all calling each other, with new code added to them seemingly at random and no sense of coherence to the individual components. Then, in the worst case, some people go overboard unit testing these classes despite their questionable semantics and interfaces, mock every collaborator away, and refactoring the mess becomes a huge pain.
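As a minimal sketch of how that ends up looking (assuming invented service names and Python's unittest.mock): the test below mocks every collaborator and asserts on the exact call graph, so it only passes as long as the current tangle of services stays exactly as it is, and refactoring breaks it even when the observable behaviour is unchanged.

    import unittest
    from unittest.mock import MagicMock

    # A typical "...Service" whose job is mostly to call other services.
    class OrderService:
        def __init__(self, pricing_service, notification_service):
            self.pricing_service = pricing_service
            self.notification_service = notification_service

        def place_order(self, order):
            total = self.pricing_service.calculate_total(order)
            self.notification_service.notify(order, total)
            return total

    class OrderServiceTest(unittest.TestCase):
        def test_place_order(self):
            # Every collaborator is mocked away...
            pricing = MagicMock()
            pricing.calculate_total.return_value = 42
            notifications = MagicMock()
            service = OrderService(pricing, notifications)

            self.assertEqual(service.place_order("order-1"), 42)

            # ...and the assertions pin down the exact call graph, so moving
            # notification out of OrderService breaks the test even though
            # the behaviour users see is identical.
            pricing.calculate_total.assert_called_once_with("order-1")
            notifications.notify.assert_called_once_with("order-1", 42)

    if __name__ == "__main__":
        unittest.main()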