That said, there are certainly engineers who particularly excel at working on legacy code bases. Refactoring and modernising an existing codebase poses an interesting challenge and it provides actual business value. Anyone can start a new greenfield application. Working on a legacy codebase takes at least a bit of courage and a deep understanding of the business involved.
People often seem to forget that a supposedly shitty but working codebase for the most part probably didn't become that way because its original developers were just stupid but because the business requirements are inherently complex and contain a lot of edge cases. By decrying legacy code you might throw out years or even decades of expert knowledge about the business.
I don't care how 10X someone thinks they are. No one can walk into a business, have a one hour meeting glancing at some screens, click through some code ("Oh wow. Shitty. This is terrible.") and learn even a tiny bit of how things actually work. Yet I see proposals for tens or hundreds of thousands of dollars based on that kind of shallow analysis, and a customer high on the promise of seeing their dreams realized in what will be the first perfect software development project in history.
You're exactly right, "Anyone can start a new greenfield application. Working on a legacy codebase takes at least a bit of courage and a deep understanding of the business involved."
First off not all legacy code is "shitty." The main reason I am 10 or 100 or 1000 times more effective with legacy code is that I'm usually the only programmer the customer has talked to who will work with it. Everyone else has one solution: Throw it away and start over with [insert favorite cool language/framework/ideology]. So at least in my experience it's not that I'm 10X more effective, it's that everyone else is 0X effective because they won't even try.
It's easy and maybe fun to simply dismiss legacy systems as junk and shit and catastrophes, but when a business has a system that mostly works and they need some debugging, enhancement, improvement it's usually irresponsible to tell them to start over. When you do that you are telling the customer to throw away something that works, even though it has problems, and take a huge gamble on a new system that is 75% unlikely to even go live. You are telling your customer that your one hour reading of some code that you declare "shitty" is more informed than all of the cumulative experience of previous developers. You are telling your customer to change business processes, retrain staff, all on your word that you are more clever than the last team.
Imagine if you had a leak in the roof at your house and the first five people you called out to repair it told you to tear the house down and start over. And they told you that because the house wasn't built with the kind of lumber and roofing they prefer, or because they don't think it's aesthetically pleasing based on their strict adherence to Frank Lloyd Wright's style. Or that whoever built the house must have been a moron because they didn't properly label all of the wires and pipes. That's a story I hear about software systems every month.
The most important metric of software is whether it solves a business problem and adds value or not. What language it's written in, or whether the last programmer used spaces or tabs or camelCase or knew OOP as well as you think you do are not relevant metrics from the customer's perspective.
And since a lot of the code I work on is not actually that old -- in fact it's just as likely to be the unfinished remains of the last greenfield team -- I can say without hesitation that the worst code to work on comes from programmers who are so committed to an ideology or language or toolkit that they lose sight of requirements and the only goal that matters: adding value to the business.
Sometimes they are required to tell you that. If there are obvious structural faults with a house then it's an OHS issue to be on the roof in the first place and the whole building could have to be condemned. A lot of software is in that state, but the organisations that use it have a remarkable ability to route around errors.
I do agree for the most part, but as an industry we still rarely find that middle ground where maintainers can actually do preventative maintenance and slowly improve things. Very few maintenance programmers are actually doing maintenance work; they're bolting on new features or fixing prioritized bugs. If I were in charge of building maintenance, my success threshold would be considerably higher than "the building's still standing."
In the last 10 years of freelancing I've taken on close to 40 projects, and I've only told two customers that they should start over. In both of those cases we weren't looking at crufty 20-year-old code. The software was fairly new, just poorly designed or implemented (WordPress is not a great platform for everything, folks). One project was a pile of random piecework from low-price contractors hired online -- no conceptual integrity. Talking to other programmers who do the same kind of work I do, I get the same anecdotal evidence: most software systems are maintainable despite the best efforts of programmers to make things too complicated and obscure.
Rather than rewrite from scratch, it's safer to refactor bit by bit, fixing a defined set of problems little by little. That keeps your chance of success high and customer risk low. Try to remove dependencies rather than add more. Carefully get things like version control, testing, and tooling in place, but don't get carried away: some code resists retroactive unit tests, and you shouldn't be breaking stuff just to make it fit your toolchain. Resist the urge to "refactor" everything to suit your aesthetics or received wisdom about how code should look.
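One concrete way to refactor bit by bit without breaking stuff is a characterization (or "golden master") test: before touching the legacy code, record what it does today and pin that down. A minimal sketch in Python -- `legacy_price` here is a hypothetical stand-in for a real legacy function, and the expected values are captured from the current behavior, not from a spec:

```python
# Characterization ("golden master") test: record what the legacy code
# does today, so refactoring can't silently change behavior.
# `legacy_price` is a hypothetical stand-in for tangled legacy code.

def legacy_price(qty, member):
    # Imagine this body is the legacy logic you dare not rewrite yet.
    total = qty * 9.99
    if member and qty > 10:
        total *= 0.9  # undocumented bulk discount for members
    return round(total, 2)

def test_characterization():
    # Outputs captured by running the current system, warts and all.
    cases = {
        (1, False): 9.99,
        (12, False): 119.88,
        (12, True): 107.89,
    }
    for (qty, member), expected in cases.items():
        assert legacy_price(qty, member) == expected

test_characterization()
```

With that safety net in place, you can restructure the internals one small step at a time and rerun the test after each step; any change in observable behavior is caught immediately, even behavior no one ever wrote down.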
You're right that it's more common to see features added on and the worst bugs addressed. That's usually because the core of the system is solid and doesn't need a lot of maintenance. It can also happen when the core is a black box no one understands and the programmers are afraid they'll break it.
Most important is to listen to the customer but don't believe everything they say -- customers often can't describe in detail how their own business processes really work. Don't assume that just because the code is written in an old language or isn't based on OOP or doesn't have unit tests that it's shit -- you can blind yourself to good design principles that aren't in fashion anymore. And have some humility -- it's unlikely you actually know how to rewrite a non-trivial application from scratch and get a better result than what the customer already has, unless it's an unusable pile of goo already.
I work with legacy systems all the time, lately mostly CRUD web apps, but I have debugged myself through pages of Excel macros, database drivers in enterprise systems and some code that was almost as old as me, and 100% agree with what you have written.
Bookmarking this post for future reference.
For example, if the shitty code base doesn't have unit tests a 10x programmer might decide to put everything else on hold and write a unit test suite. A 1x programmer would perhaps not understand the long term benefits of testing, and instead spend his or her time patching bug after bug.
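A middle path between "patch bug after bug" and "stop everything to write tests" is to grow the suite one bug at a time: before patching, write a failing test that reproduces the bug, then make it pass. A hedged sketch in Python -- `parse_quantity` and the bug number are hypothetical, invented for illustration:

```python
# Bug-first testing: each reported bug becomes a permanent regression test.
# `parse_quantity` is a hypothetical legacy helper; the reported bug was
# that it crashed on inputs with surrounding whitespace.

def parse_quantity(text):
    # The one-line fix: strip whitespace before converting.
    return int(text.strip())

def test_bug_1234_whitespace_input():
    # Written first, so it failed against the old code; now it guards
    # against the bug ever coming back.
    assert parse_quantity(" 42 ") == 42

test_bug_1234_whitespace_input()
```

Over time this gives the long-term benefit of a test suite without putting everything else on hold, and the suite naturally concentrates on the parts of the code that actually break.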
Another trap a 10x programmer would not fall into is deciding to rewrite the whole system from scratch because the code is hard to understand. But sometimes the only way forward is to rewrite it all. It boils down to being a judgement call and the more experienced you are the better judgement calls you make.
Sometimes that means fixing existing codebase instead of the massive effort to do a rewrite (see gregjor's great comment).
Sometimes that means figuring out that the custom legacy software can be replaced by an off-the-shelf solution.
Sometimes that means figuring out that only some of the legacy software is actually important, so only that part needs to be maintained and the rest can be deprecated.
See also https://codewithoutrules.com/2016/08/25/the-01x-programmer/
Usually the people described (probably by themselves) as 10xers are the ones responsible for the code base being so bad in the first place. 10xers only work on greenfield projects; they never get a chance to see what a catastrophe their ideas turned out to be.
I agree that measuring individual programming productivity, or making any general statements about it, can be so hard that it's almost meaningless to say anything. Someone can be wildly good on project A with team X, and then flail on project B with team Y. I wrote about this: http://typicalprogrammer.com/why-dont-software-development-m...
The distribution of developer skill (if that's even something that can be plotted) is not bimodal with good developers and bad developers, it's normal. Sure someone 2 SDs above the mean might be 10x more productive than someone 2 SDs below the mean, but the same can be said for literally anything that requires skill.
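To make the arithmetic behind that claim concrete, here is a toy calculation with made-up numbers (they come from no study; they just show how a normal distribution can still produce a "10x" gap between its tails):

```python
# Toy numbers, assumed for illustration only: suppose productivity were
# normally distributed with mean 10 and standard deviation 4 "units" per week.
mean, sd = 10.0, 4.0

high = mean + 2 * sd   # someone 2 SDs above the mean: 18.0
low = mean - 2 * sd    # someone 2 SDs below the mean: 2.0

print(high / low)      # → 9.0, i.e. roughly a "10x" gap between tails
```

The point is that a large tail-to-tail ratio is compatible with a perfectly ordinary bell curve; it doesn't require a separate population of "10x" developers.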
While we're at it, why don't we call Gordon Ramsay a 10x chef? Or Yo-Yo Ma a 10x cellist? While this is technically true, it's not particularly insightful, and certainly not something you should use to label yourself.
To answer your question, yes someone skilled at software development will be slowed down if dealing with a legacy codebase.
Working with legacy code is a skill too. I don't think it's true that a skilled programmer will necessarily be slowed down with a legacy codebase. Another way to look at legacy vs. green field is that with a legacy system there's already a fairly complete spec, expressed in the code. Learning to read and understand code is a skill that programmers can learn if they care to.
For example, we had a 10x guy recommend that we re-write our health insurance web app back-end in Golang, but then we realized that meant we couldn't cover generic prescriptions.
In that case, he wouldn't have remained 10x over the older Python/Django code.