One of my recurring complaints about particularly bad decomposition (the sort of practice that leads to parodies like Enterprise FizzBuzz) is the absurd stack traces these systems produce for errors.
We tell people to use delegation, but many have trouble differentiating delegation from indirection. You know things have gotten particularly bad when a trace shows the same sequence of three or more functions appearing three times. Debugging this is a nightmare; it’s a maze of logic. This kind of code has to be memorized to be understood, which turns a saner person’s attempt to refactor it into an existential threat: moving things around to make them discoverable and debuggable comes at a cost to the people who already memorized the old layout.
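To make the distinction concrete, here’s a toy sketch (all names hypothetical) of the difference: indirection layers that only forward a call versus delegation that hands a genuinely different concern to a collaborator.

```python
class PriceList:
    """Owns price lookup."""
    def price_of(self, item):
        return {"apple": 2, "pear": 3}.get(item, 0)

class Order:
    # Delegation: Order owns the summing logic and delegates the
    # genuinely different concern of pricing to PriceList.
    def __init__(self, items):
        self.items = items

    def total(self):
        return sum(PriceList().price_of(i) for i in self.items)

# Indirection: each layer below just forwards the call, adding nothing.
# An error in total() now produces a trace three frames deeper than
# needed, with near-identical names at every level.
class OrderHandler:
    def place(self, order):
        return order.total() > 0  # the only real logic

class OrderManager:
    def place(self, order):
        return OrderHandler().place(order)

class OrderService:
    def place(self, order):
        return OrderManager().place(order)
```

The delegation version still has two classes, but each frame in a trace through it tells you something; the indirection tower only tells you the same thing three times.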
There is also DAMP versus DRY and the “desertification” of code, which is related to the good-versus-bad-indirection problem.
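A quick sketch of that trade-off in tests, where it shows up most often (the function under test and all names here are hypothetical). The DRY version deduplicates into a table; the DAMP version repeats itself so that each case names the behavior it pins down.

```python
import unittest

def shipping_cost(weight_kg, express):
    # Hypothetical function under test: base fee plus per-kilo charge,
    # doubled for express delivery.
    base = 5 + 2 * weight_kg
    return base * 2 if express else base

class DryTests(unittest.TestCase):
    # DRY: one parameterized loop. Compact, but a failure points at the
    # loop body, and the reader must decode the table to see intent.
    def test_costs(self):
        for weight, express, expected in [(1, False, 7), (1, True, 14)]:
            self.assertEqual(shipping_cost(weight, express), expected)

class DampTests(unittest.TestCase):
    # DAMP ("Descriptive And Meaningful Phrases"): some repetition,
    # but each test reads as a statement about the behavior.
    def test_standard_shipping_charges_base_plus_per_kilo(self):
        self.assertEqual(shipping_cost(1, express=False), 7)

    def test_express_shipping_doubles_the_standard_cost(self):
        self.assertEqual(shipping_cost(1, express=True), 14)
```

Neither style is wrong; the problem is applying DRY so aggressively that the “desert” of shared helpers hides what any one call site actually does.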
When you get a prolific “clever” person who suffers from these problems, the whole team suffers with them (which is why I need a new job...).
Someone above mentioned flame graphs, which visualize aggregated call stacks, typically to show where the CPU spends its time. Thinking about this thread, I now want to look into using them as a measure of time spent by the reader.
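For anyone unfamiliar with how they’re built: a flame graph is generated from many sampled stacks, folded into counts. A toy sketch of that folding step (the sample data is made up), producing the one-stack-per-line format that flame graph tooling typically consumes:

```python
from collections import Counter

# Hypothetical stack samples, root-first, as a sampling profiler
# might collect them.
samples = [
    ("main", "handle_request", "validate"),
    ("main", "handle_request", "validate"),
    ("main", "handle_request", "render"),
]

# Fold identical stacks into counts: one "frame;frame;frame count"
# line per unique stack. Flame graph tools draw wider boxes for
# stacks with higher counts.
folded = Counter(";".join(stack) for stack in samples)
for stack, count in sorted(folded.items()):
    print(stack, count)
```

Using the same idea for “reader time” would just mean sampling where people’s attention sits in the code instead of where the CPU does.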
My overall philosophy on code is that we should use our best days to protect ourselves from our worst days. I expend most of my cleverness on making things look easy, which is a bit of a challenge come review time, because one of the hallmarks of really clever reasoning is that people react by saying things like, “well, of course it works that way”.