I don't think that follows at all. I've seen my share of development groups where someone had previously made a bad hire or brought in the wrong consultants and that had left them with a body of code that simply wasn't very good. That doesn't necessarily reflect the overall culture at the organisation and it doesn't say anything about whether the people currently available would make the same mistakes.
The assumption that a big rewrite would be too expensive and end up with the same problems is itself quite dangerous. Some of those groups I mentioned knew very well they had a pile of junk, but management had apparently read the usual advocacy about how Big Rewrites Are Bad and stubbornly insisted on adapting the existing code instead of recognising that it should be written off. They spent far more time and money on the updates than it would have taken to do a clean rewrite. And then they fell into a sunk cost fallacy: having spent months on what should have been weeks of work, they became even more attached to the flawed code and kept repeating the same mistake.
> They spent far more time and money on the updates than it would have taken to do a clean rewrite
Of course this claim assumes you can reliably estimate the time and cost of the rewrite and the claimed improved productivity after the rewrite.
It is still unclear to me what kinds of improvement cannot be applied to an existing code base through refactoring and gradual improvements, but instead require all the code to be written from scratch.
> Of course this claim assumes you can reliably estimate the time and cost of the rewrite and the claimed improved productivity after the rewrite.
I almost preempted that counter-argument in my previous comment. :)
Some software development is 90% research and 10% development, and it's true that you never really know how long it's going to take or how well it will work until it's almost done anyway. But the tar pits I'm talking about were not that kind of software development. Most of the cases I'm thinking of were the stereotypical over-engineered "enterprise" code that had become bloated and excessively interconnected. Others were "clever" code where someone had tried some fancy design patterns or data structures or algorithms, typically with a severe YAGNI complex as well. Either way, making simple changes required many times the effort it should have. And yet a drop-in replacement for the whole system would have been a low-risk project, with very predictable work that any mid-to-senior developer on the relevant team could have done in a fraction of the time.
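To make that concrete, here's a deliberately tiny, made-up sketch of the pattern I mean (the names and the trivial "operation" are invented for illustration; the real systems were obviously far bigger):

    // A contrived example of the kind of indirection I mean (all names invented).
    // The "enterprise" version needs an interface, an implementation and a
    // factory just to add two numbers.
    interface Operation {
        int apply(int a, int b);
    }

    class AdditionOperation implements Operation {
        public int apply(int a, int b) {
            return a + b;
        }
    }

    class OperationFactory {
        // An extra layer "in case we ever need other operations" -- the YAGNI trap.
        static Operation create(String kind) {
            if ("add".equals(kind)) {
                return new AdditionOperation();
            }
            throw new IllegalArgumentException("unknown operation: " + kind);
        }
    }

    public class Example {
        public static void main(String[] args) {
            // Every "simple" change now touches the interface, the implementation
            // and the factory...
            int viaFactory = OperationFactory.create("add").apply(2, 3);

            // ...while the drop-in replacement is just the obvious code.
            int direct = 2 + 3;

            System.out.println(viaFactory + " == " + direct);
        }
    }

Multiply that by a few hundred classes and every trivial feature request turns into an archaeology project, which is exactly the situation where a clean replacement starts to look cheap rather than risky.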