Owning the legacy software that runs the business tends to provide job security at your current job, but can hinder your professional growth, both there and at any future job.
Getting onto projects that are intended to replace legacy software tends to earn you a lot of positive political visibility and the chance to learn new technologies and skills. If the new project fails, there tends to be little fallout, since so many people at all levels are attached to it: the goalposts will move so it can be reframed as a success.
If legacy software fails it usually means long hours and a lot of people nervously asking you when it will be fixed.
Unfortunately there's just not a lot of value in being able to put on your resume that you're an expert at an older language that nobody has heard of, built on an in-house framework that will never be used outside of your current company.
I think this is, fortunately, only approximately true. Most of my career to date has involved exactly this kind of work, and it's certainly not true that there's no value in it, even on a resumé or when mentioned in a job interview. Some non-zero number of people derive considerable value from exactly that kind of experience, because some non-zero number of employers and customers need someone with it.
And, in a lot of ways, working on legacy systems can be both immensely fun and rewarding. Even incremental improvement in or around such systems can itself be immensely valuable: setting up automated builds or deployments, adding integration tests (to make future refactoring easier and safer), or rewriting or replacing a key portion of the system using newer tools, or with a better design made possible by the wisdom accumulated by a system performing real work over a significant period of time.
There is as much pleasure in making something broken work properly as in creating the thing (imo).
It's a steady career as well; most programming is maintenance outside the fail-fast world of startups.
The startup is seven years old; I joined almost two years ago.
The gap between old and new features is enormous, and it will take years to reach some cohesion.
It was refreshing to find out that one of the approximately seven languages that contribute to the final executable, abandoned around 2007, has been revived and is now (as of 2016 or so) being maintained by Eclipse.
These kinds of things make you realise that the Silicon Valley way of doing things is not a silver bullet.
People who have spent their careers in Silicon Valley have a very distorted view of tech.
Working in maintaining and supporting legacy software is both an absolute necessity for the business and a possible career dead-ending move for you. You're at the same time doing important work -- if not particularly interesting or groundbreaking -- and signalling to your employer "I'm not marketable enough that you must raise my salary or risk me leaving".
Working on legacy software can also be a dead-end when interviewing with some startups. Interviewers from MuleSoft once told me to change jobs and work on more interesting, high-profile open source tech stacks before interviewing again with them. (I did, but didn't try another interview with them because I later learned what they do isn't particularly interesting, either).
Some people do specialize in obsolete software and make it their niche, but I find in my country (not the US) the degree of success with this is overstated. For example, I know very few COBOL programmers who earn a lot of money. Most are in a lose-lose situation, earning average money and... working with COBOL.
Depends on where you live, I suppose. My city has more COBOL jobs than Rust/Elixir/Go/Haskell/Erlang/Ruby jobs combined. It's still mostly C#/Java/PHP, but those hyped languages we read about all day on HN never actually become a thing around here. Node.js is the only exception I can think of. Python has certainly picked up, but most Python jobs here require a degree in math or statistics.
I guess it’s probably very different in Silicon Valley and other tech hubs, but it’s my impression that most of the world still runs on something old.
This became my issue with contracting. I had known expertise in one thing, but never a chance to work on another thing professionally. I could see the writing on the wall that this thing wouldn't be that competitive in the future, but the incentives to get out of it weren't there.
I agree that someone handing you a project when you have no idea what it does or what its features are, and then going off to randomly refactor it, is stupid.
I haven't seen the results of putting that on a resume yet though.
Maybe that's a good thing though, because people sort themselves into the jobs where they are needed. You don't want overly ambitious risk-takers who seek visibility working on the mission-critical core; you want people who value job security and long-term thinking if you really need to rely on that system. On the other hand, you don't want people planning everything to last for decades if you're only interested in a general exploration of possibility.
This is what our industry values; make your choices accordingly. It’s also one reason for the perception that developers are more obsessed with new shiny things than learning their older counterparts. We’re incentivized to act that way.
You can’t only look at your work in terms of what is most valuable for the business, because that may turn out to conflict with what is best for you and your career.
It's not so bad for big projects with enough staff. It gets tough when the legacy code does something hard, and the maintenance team doesn't really understand why something was done in some way.
Second Life, the virtual world, has that problem. It's written in C++, and some very good people wrote it about 10-15 years ago. They're all gone. The people who maintain it today are struggling to fix serious bugs that have been outstanding for 5-8 years. It's not just "legacy", it's "not web-like". It's a distributed system with tens of thousands of servers in one data center. It's a tightly coordinated soft real time system. There's a huge amount of in-memory state, which is changeable in real time yet is constantly being backed up. This is totally alien to people who only know transaction-type web-based systems. So they can't hire anybody and have them be productive quickly.
Second Life is divided into "regions", 256m on a side, each managed by a separate process constantly communicating with its neighbors. This geographical distribution system is unique to Second Life. User avatars and objects can move from one region to another. You can look across region boundaries. Mono programs running inside objects are stopped, frozen, copied across the network, and restarted on a different machine.
Most big-world MMOs cheat somehow so that they don't have to really solve the distribution problem. They're often sharded, so that the number of players that can interact is limited. Or they're smaller. Second Life's world is 100x the size of GTA V. Or they're portal based; you can only get somewhere via a controlled portal. In Second Life, you can fly over the whole world. (Mostly. Fast vehicles hit bugs the devs have been unable to fix for a decade. Another "legacy" problem.)
This is what the machinery for a "Ready Player One" or "Snow Crash" world looks like.
It's not parallel enough, and the servers keep running out of CPU time on the main thread. Everything then gets sluggish in world. The system needs an overhaul to be more parallel internally on the core functions, and that's really hard, expensive, and needs a dev team the company lacks. Yet another legacy problem.
The technology is all unique to this one system. The only thing that works even vaguely like this is the new Spatial OS from Improbable, which took 150 people to develop, is proprietary, and hasn't been shown to really scale yet. We'll know late this year, as Nostos, a new game from China, rolls out, how well it really scales. That's the first AAA title to use Spatial OS. Spatial OS has a deal with Google where it has to run on Google Cloud servers, which is scaring off most of the big game development shops. That costs too much, and betting your business on a lesser Google product usually ends badly.
Hence the recruiting problem Linden Lab faces. You want to tie your career to this one-off strange system?
They pay an incredibly high price for some design decisions that I think users don't even get much value from. It turns out that most users would rather have a private island than live on a continuous continent where neighbors are always putting up eyesores. If you were to start with that fact, remove the requirement that private regions even exist on the global map, and let them spin down when no one's home, then you could give paying users a lot more space for their money while also reducing the company's spend on servers.
While I take your point about work on Second Life not being super transferable to any of the AAA game engines, it seems like it would be very transferable to creating such engines themselves. And what engineer wouldn't want to help build the world of Snow Crash or Ready Player One?
I think the deeper problem is that Linden Lab has stopped prioritizing investment in the SL platform. They've set their sights on creating VR-focused Sansar instead. But it's tough to convince people to move over to a new world when it means leaving behind the bigger community, the bigger economy, and all the clothes in their inventory.
Linden Lab tried that. That's what Sansar is. It averages 13 concurrent users on Steam. Maybe some more who signed up outside Steam, but under 100. Sansar is a "VR game level loader", not a world like SL. Somebody creates a level map, and others can visit, but not change much. Sansar has a Star Wars prop museum, a Ready Player One prop museum, etc. They look great. You visit once, and you're done.
Other VR game level loaders are SineSpace and High Fidelity. (High Fidelity just gave up, and "pivoted to enterprise".) They also have user counts in the 2-digit range, but worse content than Sansar. The hook for that market segment was supposed to be VR headsets, which turned out to be a niche product. Even VRchat, after a surge in 2017, dropped to about half its initial peak and is stuck at a few thousand concurrent users. Facebook Spaces? Whatever happened to that?
Meanwhile, Second Life continues to plug along, with 30,000 to 50,000 users connected. That's about where GTA V online is, and would be 11th place on Steam if SL were on Steam. SL was maybe twice as big at its peak, 7-10 years ago.
Hence the legacy code problem. It runs, it's profitable, it has a significant user base, and it needs improvement.
I've seen legacy code so buggy that it couldn't be fixed, where adding new features took quarters, and where the customer service team had to double in size every year to help customers through their bad experience.
When legacy code is not properly maintained, it can become this inescapable hell. Yes, it sort of works, but at what cost?
What I mean is that you may want to take "ownership" in the sense of learning it and improving it, but because it's not new development, there are large barriers: paying down technical debt and updating to newer development methodologies are effectively disallowed.
So it stays, untouched, bit-rotted, and inflexible.
My application is in .NET Core 2. There are some issues, as anything IBM can be a pain in the ass to work with. Also, the AS/400 dev and I have a tremendous knowledge gap. I know everything new, such as unit testing, proper source control, dependency injection, LINQ, and ORM tools. However, when it comes to query speed he is much better at optimization. Since he has worked with some ancient languages, he has a better understanding of low-level programming.
Honestly, the speed at which current tools allow us to develop applications is pretty impressive. For example, he would need to set up the DB, then security, then the applications, stored procedures, map parameters in code, and so on. I can accomplish most of that with an ORM like Entity Framework in less than an hour and have a working project, obviously depending on its scale. However, both approaches have pros and cons.
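To give a flavor of why ORM-style tooling speeds up that initial setup, here is a toy, stdlib-only Python sketch in which the table schema is derived from the model class instead of hand-written DDL. Real ORMs like Entity Framework or SQLAlchemy do vastly more than this; every name here is invented for illustration.

```python
# Toy "ORM" sketch: schema and inserts derived from a dataclass,
# instead of hand-writing DDL, security setup, and parameter mapping.
import sqlite3
from dataclasses import dataclass, fields

@dataclass
class Customer:
    id: int
    name: str

def create_table(conn, model):
    # Derive a CREATE TABLE statement from the dataclass fields.
    cols = ", ".join(f.name for f in fields(model))
    conn.execute(f"CREATE TABLE {model.__name__.lower()} ({cols})")

def insert(conn, obj):
    # Derive the INSERT statement and its parameters from the object.
    placeholders = ", ".join("?" for _ in fields(obj))
    conn.execute(
        f"INSERT INTO {type(obj).__name__.lower()} VALUES ({placeholders})",
        tuple(getattr(obj, f.name) for f in fields(obj)),
    )

conn = sqlite3.connect(":memory:")
create_table(conn, Customer)
insert(conn, Customer(1, "Acme"))
print(conn.execute("SELECT name FROM customer").fetchone()[0])  # -> Acme
```

The point is only the workflow: define the model once, and the plumbing follows from it.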
Having old legacy code is problematic. It is harder and harder to find developers, and rates are going up as the big fish still rely on these languages and can pay more.
The reason he is better at optimization is that he is not using an ORM. ORMs are a leaky abstraction that gives up the power of SQL for the limited benefit of slightly easier programming. They also tend to move things that in SQL would be a JOIN executed in the database back into application code.
Data is not an object, much as some "modern" languages would prefer to lose the distinction, with ORMs and DTOs obfuscating it.
RDBMSs are based on cohesive logic (set theory) and have been tuned and optimized to do the best possible job of automating data storage and retrieval, while abstracting away the very low level.
ORMs, on the other hand, take logic that should live at the data-management level, including invariants, and move it into code that does what SQL does, but slowly and badly.
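The "JOIN moved back into code" complaint above is usually the classic N+1 query pattern. A minimal sketch (Python stdlib, invented tables and data): the ORM-flavored loop issues one query per row, while plain SQL answers the same question with a single JOIN planned and executed by the database.

```python
# N+1 pattern vs. a single JOIN, with the same result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
""")

# ORM-flavored: fetch the orders, then one extra query per order.
names_slow = []
for (cust_id,) in conn.execute("SELECT customer_id FROM orders ORDER BY id"):
    row = conn.execute(
        "SELECT name FROM customers WHERE id = ?", (cust_id,)
    ).fetchone()
    names_slow.append(row[0])

# SQL-flavored: one statement, optimized inside the database.
names_fast = [r[0] for r in conn.execute(
    "SELECT c.name FROM orders o"
    " JOIN customers c ON c.id = o.customer_id ORDER BY o.id"
)]

print(names_slow == names_fast)  # -> True; same answer, 4 queries vs. 1
```

Mature ORMs do offer eager-loading to avoid this, but the default, row-at-a-time style is what the criticism is aimed at.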
RPG (Report Program Generator) is designed around an 80-column punch card. Put an "F" in column 6 to mean one thing, a "C" to mean something else; match one of the 99 indicator variables (named, intuitively, 01 through 99) to make output happen.
I had the distinct displeasure in 1984 of trying to maintain a warehouse stock control system that some evil people had decided to implement in RPG. From memory, RPG only has the equivalent of single-dimensioned arrays, so the location of items in the warehouse was an intersection of three arrays pointing to yet another array containing the SKUs.
It's designed to take one or more input files with defined columns in each row (card) and do some stuff (mostly subtotalling etc) and produce output.
Of course, it's been stretched beyond all recognition since it was introduced in 1959. It's literally 60 years old this year.
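The fixed-column, subtotalling cycle described above can be loosely imitated in a few lines of Python, just to show the shape of it. The column positions, field names, and data here are all invented; real RPG drives this cycle from file and output specifications rather than ordinary code.

```python
# Rough sketch of an RPG-style cycle: read fixed-column records ("cards"),
# pick fields out by column position, and accumulate subtotals.
from collections import OrderedDict

# Each record: cols 0-5 = department, cols 6-12 = zero-padded amount.
cards = [
    "SALES 0000100",
    "SALES 0000250",
    "ADMIN 0000075",
]

subtotals = OrderedDict()
for card in cards:
    dept = card[0:6].strip()   # control field, identified purely by position
    amount = int(card[6:13])   # numeric field, fixed width, no delimiter
    subtotals[dept] = subtotals.get(dept, 0) + amount

for dept, total in subtotals.items():
    print(f"{dept:<6}{total:>7}")
```

Everything about the record layout lives in those column slices, which is why changing a legacy fixed-format system's data layout is so fraught.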
When I learned RPG, it was through physical audio tapes. In 2005, with broadband. I haven't come across too many good places to learn it from (but I haven't really been looking, either).
It's just a bit easier, not a lot. I certainly use ORMs now, but they're a convenience rather than a revolution. The truly revolutionary thing for C# code of that era was lambda expressions.
In all honesty, he's an idiot for not updating his skills. If he simply learnt a bit of modern EF and learnt how to use git, he'd run rings around you. I'd note that "proper" source control practice has been around for well over a decade now (SVN was adequate, just not great for distributed teams), so he must be really averse to change.
The unit testing and DI, on the other hand, are pretty useless noise in a statically typed language like C#, for all MS push them. The only benefit of DI is keeping the same EF instance around, and it's pretty hard to actually get that working properly throughout the entire stack when you want to do anything even remotely creative.
As for unit testing, I've never seen the benefit. I took over a project with something like 50% unit test coverage that I never bothered to keep up; in 3 years the existing tests caught 1 bug.
All of which highlighted to me what an utter waste of time and money writing all those tests was.
People have been using source code control since then, but the history of computing is one of wheel reinvention and not-invented-here, so we are condemned to rediscover things.
Unit testing is useful for business logic, not for individual functions. If you're writing unit tests around whether or not you go outside an array bound, then you're testing the wrong thing. If, on the other hand, you're making sure that someone under 13 can't sign up for a service, that's a reasonable unit test.
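The signup example above, sketched as an actual unit test (Python's unittest here; the function, rule constant, and cutoff are hypothetical, invented only to illustrate testing a business rule rather than a mechanical detail):

```python
# Unit-testing a business rule: the age cutoff is the behavior the
# business cares about, so that's what the test pins down.
import unittest

MINIMUM_AGE = 13

def can_sign_up(age: int) -> bool:
    """Business rule: users must be at least 13 to sign up."""
    return age >= MINIMUM_AGE

class SignupRuleTest(unittest.TestCase):
    def test_under_13_is_rejected(self):
        self.assertFalse(can_sign_up(12))

    def test_exactly_13_is_accepted(self):
        # Boundary case: the rule is "at least 13", not "over 13".
        self.assertTrue(can_sign_up(13))

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note the boundary test: off-by-one on the cutoff is exactly the kind of business-logic regression such a test exists to catch.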
DI can be useful for running code, allowing you to instrument the code to identify problems. But that's runtime DI, not compile/deployment time.
As a design pattern, it's more an abstraction / reduction of general parameterized polymorphism, usually dragging in an opinionated framework that requires you to follow a set pattern of development and deployment.
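Stripped of any framework, the useful core of DI mentioned above is just constructor-passed dependencies, which is what lets you instrument code at runtime. A minimal Python sketch, with all class and method names invented for illustration:

```python
# Runtime DI without a framework: because the dependency arrives through
# the constructor, it can be wrapped with instrumentation without
# touching the business code.
import time

class RealClock:
    def now(self) -> float:
        return time.time()

class InstrumentedClock:
    """Wraps any clock and counts how often it is consulted."""
    def __init__(self, inner):
        self.inner = inner
        self.calls = 0

    def now(self) -> float:
        self.calls += 1
        return self.inner.now()

class InvoiceStamper:
    # The clock is injected rather than constructed internally, so tests
    # and diagnostics can substitute their own implementation.
    def __init__(self, clock):
        self.clock = clock

    def stamp(self, invoice_id: str) -> str:
        return f"{invoice_id}@{self.clock.now():.0f}"

clock = InstrumentedClock(RealClock())
stamper = InvoiceStamper(clock)
stamper.stamp("INV-1")
stamper.stamp("INV-2")
print(clock.calls)  # -> 2
```

No container or deployment-time configuration is involved; the pattern itself is just parameterization, which is the reduction the comment above describes.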
I personally would not enjoy it as all you end up doing is maintenance and working with nasty old code bases.
In the beginning it goes over code smells and how to find refactoring seams in your language of choice (C, C++, or Java), and then each chapter is a group of techniques you can use.
Working on a service that had both a very old monolith and some brand new microservices, I found it invaluable. I think the first lesson I applied from it was using pinning tests for safer refactoring.
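For readers unfamiliar with the term: a pinning (or characterization) test records what legacy code currently does, so a refactor can be checked against that recorded behavior. A small sketch, with an invented legacy function and pinned values:

```python
# Pinning test sketch: capture current outputs first, refactor second.
def legacy_discount(total: float, customer_type: str) -> float:
    # Stand-in for tangled legacy logic nobody fully understands.
    if customer_type == "gold":
        total *= 0.9
    if total > 100:
        total -= 5
    return round(total, 2)

# These expected values were captured by running the function once over a
# spread of inputs; they come from observed behavior, not from a spec.
PINNED = {
    (50.0, "gold"): 45.0,
    (200.0, "gold"): 175.0,
    (200.0, "basic"): 195.0,
}

for (total, kind), expected in PINNED.items():
    assert legacy_discount(total, kind) == expected
print("pinned behavior unchanged")
```

The pinned values may well encode bugs; the point is that a refactor shouldn't change behavior silently, whether that behavior is right or wrong.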
Thanks for the book recommendation. Funny to realize I have done this in the past without prior knowledge; it just made sense.
One of the problems the microservice architecture is trying to solve is making a large software entity a collection of smaller, easier-to-manage entities. It's like single-cell organisms evolving into multicell organisms: each cell becomes smaller, simpler, and more specialized; replacement of cells becomes easier; and the overall lifeform can "scale" and become more complex.
Mature software companies have gone through this transformation multiple times already in the form of n-tier, SOA, microservices, and now serverless architectures. This is all part of a natural progression of making individual components simpler, the overall structure more granular, and as a result more resilient. This resiliency opens up new capabilities to scale a complex system.
My long-winded point being that legacy software will always exist, yes, but each piece of legacy software is already getting smaller and more granular, to the point where rewriting it will eventually be a continuous operation of refactoring small things, kind of like skin cells flaking off your body or hair falling out.
I think the comments about "heroic" culture being fostered by the lack of ownership are spot on, but it isn't a bad thing, provided those heroic actors are actively trying to get attention put on said pieces of legacy software. Imo it's better to have heroic actors than nothing, and especially in the context of an open-source application/library that is used worldwide, I hope we can find ways to generate more interest in working on them.
Because Extreme Programming (and other agile practices?) make shared code ownership a "value".
In terms of code ownership, you can run your group in a number of different ways. I've seen projects with code ownership work and I've seen projects without code ownership work. I vastly prefer the latter, personally. As long as you are churning the code regularly, a lack of code ownership means that internal conflicts about how things should be designed are forced out in the open. They don't fester for years and years, where groups of developers end up saying, "I can't work with that person. They are crazy." You have the conflict early when it doesn't have such a large impact and you sort it out early (Note: some people are just inflexible -- if you find that kind of person, knowing about it early is also good. You can deal with it).
When you avoid code ownership, you are also forced to have code that is clear for your entire group. For example, if you have a single person and they work in isolation, their code may be impenetrable to the others. But if each person in the group has to work on the code, code that most people don't understand has very little chance of surviving. Overall, I find that staying away from code ownership results in considerably more maintainable code.
However, there are clearly advantages to code ownership as well. One shouldn't snub their nose at the benefit of being able to see a piece of code and say, "I wrote that". Having ownership makes it easier to have pride in your work. It can be a very motivating factor. If you have people on your team who are feeling disconnected and don't feel like they are personally making an impact on the group, giving them a little piece of code to own can be very good for them.
Similarly, sometimes you don't have a choice for your team. Sometimes you have a team of people who just don't work well with each other and there is nothing you can do about it. Partitioning the code and allowing these people to keep their distance may be the only thing you can do. Ideals are great and if you can achieve them, it's wonderful. But you have to be conscious of reality. People are people.
I could go on and on, but I guess my point is that very often I see remarks like the one you made that seem to take a very superficial view of things. There seems to be no effort made to understand why other people have a differing viewpoint. Of course, you also get fan-boi style postings of, "The-new-hotness is the best thing because reasons" which are also unfortunate. Usually the truth is somewhere in between.
MVP = (in this case, probably) Minimum Viable Product, see https://en.wikipedia.org/wiki/Minimum_viable_product
Mostly a good article, but what is with this quote? Aside from being factually wrong from a dollars-and-cents standpoint, doesn't it contradict their point that legacy code doesn't get enough love?
I don't think it's wrong in most (nearly all) cases. The cost of running software, keeping it updated, changing the HW it runs on, keeping it compliant, etc over time costs much more than the dev time to build it.
Unless your service is completely replaced every couple years, I'd venture operations cost more than development. (even if dev handles the ops).
The point of the statement is designing for operability helps with ownership and combats the "leave it in the corner and hope for the best" mentality.
When you have a legacy system that "sits in the corner" earning money, you're [the company] paying engineers to retain their services in emergencies. Features get added too, if you're interested in keeping or gaining customers. So construction cost as a proportion of total cost does trend to zero.
The company/stockholders don't care about hot new technologies. They care about revenue. Customers don't care whether their software is written in this or that framework or is powered by deep learning. They care about stable, usable, useful apps. Businesspeople know this. Many engineers do not. I think that disconnect causes much strife in tech businesses.
People didn't think up microservices yesterday, they aren't some hot new fad that'll be forgotten in a year, and they aren't being developed because people just have nothing better to do. They're replacing legacy systems which nobody wants to touch at a rapid clip. People don't like working on legacy systems because they're at an unfortunate intersection of being business critical yet flaky and hard to work on.
In the context of this story, well... your protagonist is actually on life support. But not to worry! They'll be taken off it soon...
Certainly applications can be supplanted or replaced, and often are, at the discretion of users and customers.
But 'living' (running) systems can often only be successfully replaced in the manner of the Ship of Theseus.
"Soon" is often...not soon. I have never, ever seen a legacy system of any substantial complexity taken fully offline. Nor have I seen even partial replacements for such a system deliver a reliable alternative anywhere close to on time. In fact, I've seen more half-baked successors be either scrapped or put on life support themselves than I have seen get even into the ballpark of what could generously be considered success.
No, and shame on you for pushing this idea, is my first reaction. "Worked" doesn't mean anything.
Positing this redefinition of Legacy (and implying what "worked" means) rubs me the wrong way, especially when tying it to the idea of code ownership. Maybe I just have a warped perspective, except that I've run into these scenarios.
"worked" might mean "it was easy to fix and it broke a lot", "it calculated the numbers correctly but nobody knows how to change it without it breaking", or "it costs $5k a run but it gives us the right count of a database column". Legacy means it might be working, depending on your CURRENT needs. Your current needs change, even when the code does not.
Code ownership isn't at a single level, which makes some of this analysis seem idealistic.
Code doesn't become legacy if it didn't fulfill some purpose. If it didn't work, it wouldn't be legacy, it'd be source code in somebody's home directory that was never deployed.
Not all working software is legacy software, but all legacy software (still running) works, for some definition of "works".
That's not to say your points aren't also correct. Software is built with current requirements and knowledge in mind; we make the best decisions with the info we have, etc.
No, working should mean "it fulfills the requirements". If the requirements change, then the code should change.
The other things you have mentioned are important, but they aren't necessarily requirements. Uptime, reliability, and performance might be requirements, but without stating what they should be, you can't say that a piece of code doesn't work because it is "slow".
This sort of stuff is real basic software engineering.
That's the nature of the term "legacy" that you are using: it met requirements that are not the same today (sometimes it's just coding standards). It didn't change, hence it's legacy code.
Firstly, I was commenting on your very strange definition of what working means. By "working", we (programmers) normally mean "it fulfills the requirements it was developed against". If the requirements change, the code must change to reflect them, and changed software is a different version of that software.
Legacy has nothing to do with that. Until relatively recently, "legacy" meant "not supported": Windows XP is legacy, whereas Windows 7 is not.
Consider the scenario:
I am asked to write a program to play sound. So let's make up some trite requirements:
* It should be able to play .wav files
* Sound should always come out of the default audio device as designated by the OS.
It is written and released. We will call it version 1.0.0, and it is supported until December 2021.
People ask for more features. These new features are:
* It should be able to play .mp3 files
* The user should be able to choose the audio device that sound plays through
It is written and released; we will call it version 1.1.0, and it is supported until December 2022.
Now the requirements have changed; however, because it is still 2019, both versions are still supported. The requirements for the software have changed, and thus there is a new version of the software to reflect that change. This should be reflected in the changelog (remember those things?).
In 2022, version 1.0.0 will no longer be supported. There will be no defect fixes for it, but it still works, i.e., it fulfills the original requirements listed above. However, under the more traditional definition of legacy software, it is considered legacy.
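As a sketch, the changelog for this hypothetical sound player might read (dates and wording illustrative):

```
SoundPlayer changelog (hypothetical)

1.1.0 -- supported until December 2022
  * Added .mp3 playback
  * Added the ability to choose the output audio device

1.0.0 -- supported until December 2021
  * Plays .wav files through the OS default audio device
```

Once the 1.0.0 support date passes, that entry describes software that still fulfills its stated requirements yet is legacy in the "not supported" sense.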
Things like code quality, maintainability etc. are separate issues.
That's your characterization, and not at all how development works in practice. The vast majority of software barely has anything like versioning, and scant few projects have maintenance/support windows. That's a fact by volume.
Defining the term "Legacy" to mean something specific as a subjective point of reference (since there is no formal definition), eg
> Legacy until relatively recently meant "Not supported" i.e. Windows XP is "Legacy" whereas Windows 7 is not
is helpful, because it lets me understand clearly what you are thinking. This does not change my position, as in my experience working code is still called legacy internally, all the time. The historic resource problems, like broken dependency chains (you can't even build it anymore) or the availability of platforms to test on, are common.
"worked" means it gets the job done and legacy means it's old. Code only gets old when it worked (at some point).
So I don't think this redefinition is that far off.
However I agree in general most people mean "Old code I don't want to work on because I don't like the technology it is built in".