The little legacy code that could: a fable of software ownership (circleci.com)
216 points by icoe 3 months ago | 74 comments



This is one of those interesting paradoxes that I haven't figured out yet.

Owning the legacy software that runs the business tends to provide job security at your current job, but can hinder your professional growth at both your current job as well as any future jobs.

Getting onto projects intended to replace legacy software tends to get you a lot of positive political visibility and the chance to learn new technology and skills. If the new project fails, there tends to be little fallout, since so many people at all levels are attached to it; the goal posts will move so it can be reframed as a success.

If legacy software fails, it usually means long hours and a lot of people nervously asking you when it will be fixed.

Unfortunately there's just not a lot of value to being able to put on your resume that you're an expert at an older language that nobody has heard of, built on an in-house framework that will never be used outside of your current company.


> Unfortunately there's just not a lot of value to being able to put on your resume that you're an expert at an older language that nobody has heard of, built on an in-house framework that will never be used outside of your current company.

I think this is, fortunately, only approximately true. Most of my career to date has involved exactly this kind of work, and it's certainly not true that there's no value in it, even on a resumé or mentioned in a job interview. A non-zero number of people derive considerable value from exactly that kind of experience, because a non-zero number of employers and customers need someone who has it.

And, in a lot of ways, working on legacy systems can be both immensely fun and rewarding. Even incremental improvement in or around such systems can be immensely valuable: setting up automated builds or deployments, adding integration tests (to make future refactoring easier and safer), or rewriting or replacing a key portion of the system with newer tools or a better design, made possible by the wisdom accumulated by a system performing real work over a significant period of time.


Most of my career has been working on legacy projects (or fixing broken ones). I like greenfield work well enough, but I have no preference for it over maintenance.

There is as much pleasure in making something broken work properly as creating the thing (imo).

It’s a steady career as well; most programming is maintenance, outside the fail-fast world of startups.


Even at my current job in a startup, 90% of the code I own can be considered legacy.

The startup is seven years old; I joined almost two years ago.

The gap between old and new features is enormous, and it will take years to reach some cohesion.


I am relatively new to a project that has been sold and used on the order of millions for roughly 25 to 30 years.

It was refreshing to find out that one of the roughly seven languages contributing to the final executable, abandoned in ~2007, has been revived and is now (as of 2016 or so) being maintained by Eclipse.

These kinds of things make you realise that the Silicon Valley way of doing things is not a silver bullet.


Yep, it is always painful to hear stuff like "if you don't use [insert whatever is hyped right now], then you are not a real engineer" coming from a colleague.

People who have spent their careers in Silicon Valley have a very distorted view of tech.


It is indeed a conundrum.

Working in maintaining and supporting legacy software is both an absolute necessity for the business and a possible career dead-ending move for you. You're at the same time doing important work -- if not particularly interesting or groundbreaking -- and signalling to your employer "I'm not marketable enough that you must raise my salary or risk me leaving".

Working on legacy software can also be a dead-end when interviewing with some startups. Interviewers from MuleSoft once told me to change jobs and work on more interesting, high-profile open source tech stacks before interviewing again with them. (I did, but didn't try another interview with them because I later learned what they do isn't particularly interesting, either).

Some people do specialize in obsolete software and make it their niche, but I find in my country (not the US) the degree of success with this is overstated. For example, I know very few COBOL programmers who earn a lot of money. Most are in a lose-lose situation, earning average money and... working with COBOL.


The upside of being a COBOL jockey is that you have a steady job and basically just work a runbook. Some people dig that!


> Unfortunately there's just not a lot of value to being able to put on your resume that you're an expert at an older language that nobody has heard of...

Depends on where you live, I suppose. My city has more COBOL jobs than Rust/Elixir/Go/Haskell/Erlang/Ruby jobs combined. It’s still mostly C#/Java/PHP, but those hyped languages we read about all day on HN never actually become a thing around here. Node.js is the only exception I can think of. Python has certainly picked up, but most Python jobs here require a degree in math or statistics.

I guess it’s probably very different in Silicon Valley and other tech hubs, but it’s my impression that most of the world still runs on something old.


> Owning the legacy software that runs the business tends to provide job security at your current job, but can hinder your professional growth at both your current job as well as any future jobs.

This became my issue with contracting. I had known expertise in one thing, but never a chance to work on another thing professionally. I could see the writing on the wall that this thing wouldn't be that competitive in the future, but the incentives to get out of it weren't there.


I take pleasure in refactoring code, fixing bugs that customers hit, and the satisfaction when a user gets a quick fix or solution for their problem.


This only works when you have a test suite with total coverage. Otherwise, you're bound to introduce bugs and annoy users and colleagues.


Tests are nice to have, but there are not that many tests in the projects I work on. Sometimes the code is not that critical: no lives or money at risk. Then I do not refactor the entire project, only one section, and I refactor it because it is required. Say I discover a function with five levels of nesting and 500 lines of code: if I do not rush, the refactoring should be simple, and I can break the big function into a few smaller functions, turning a big, unclear block of code into something readable, like a story (a rough sketch of what I mean is at the end of this comment).

I agree that being handed a project where you have no idea what it does or what the features are, and then going and randomly refactoring it, is stupid.
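For example, a minimal made-up C# sketch of that kind of refactor (the types and the discount rule are invented purely for illustration):

    using System.Linq;

    public record OrderLine(int Quantity, decimal Price);
    public record Customer(bool IsLoyal);
    public record Order(Customer Customer, OrderLine[] Lines);

    public static class Pricing
    {
        // Before: one big method; imagine five levels of nesting in the real thing.
        public static decimal TotalDiscountOld(Order order)
        {
            decimal discount = 0;
            if (order != null)
            {
                if (order.Customer != null)
                {
                    if (order.Customer.IsLoyal)
                    {
                        foreach (var line in order.Lines)
                        {
                            if (line.Quantity > 10)
                            {
                                discount += line.Price * 0.05m;
                            }
                        }
                    }
                }
            }
            return discount;
        }

        // After: guard clauses plus a small helper; same behaviour, reads like a story.
        public static decimal TotalDiscount(Order order)
        {
            if (order?.Customer == null || !order.Customer.IsLoyal) return 0;
            return order.Lines.Sum(LineDiscount);
        }

        private static decimal LineDiscount(OrderLine line) =>
            line.Quantity > 10 ? line.Price * 0.05m : 0;
    }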


One skill I've learned working on/deprecating and replacing legacy code is what I'd call Code Archaeology: the practice of identifying what a legacy system does given some edge case; why it does it; whether that's the right thing to do, given that the ecosystem has changed significantly; and whether that section can be safely turned off.

I haven't seen the results of putting that on a resume yet though.


Author Vernor Vinge describes a future "programmer-archaeologist" role in his books. The idea is that humans just keep building layer upon layer of abstractions and systems, and in the distant future there's value in exploring, understanding, and potentially modifying the older layers.


Isn't that true for most things in most industries? Building an airport gets you recognition, but few people ever talk about the team that keeps it running.

Maybe that's a good thing though, because people sort themselves into the jobs where they are needed. You don't want overly ambitious risk-takers who seek visibility working on the mission-critical core; you want people who value job security and long-term thinking if you really need to rely on that system. On the other hand, you don't want people planning everything to last for decades if you're only interested in a general exploration of possibility.


It's sort of a fundamental trade-off: the better you get at team-specific actions, the more time you spend not improving at general actions.


Yeah. Stated another way, building new stuff is more highly valued than maintenance. Even though the new stuff may fail, or the maintenance may end up requiring more skill than building.

This is what our industry values; make your choices accordingly. It’s also one reason for the perception that developers are more obsessed with new shiny things than learning their older counterparts. We’re incentivized to act that way.

You can’t only look at your work in terms of what is most valuable for the business, because that may turn out to conflict with what is best for you and your career.


The underlying technology doesn't have to be obsolete to have legacy code problems. Linux, Windows, GCC, Microsoft Office - all have serious internal problems from legacy code, and are tough to maintain.

It's not so bad for big projects with enough staff. It gets tough when the legacy code does something hard, and the maintenance team doesn't really understand why something was done in some way.

Second Life, the virtual world, has that problem. It's written in C++, and some very good people wrote it about 10-15 years ago. They're all gone. The people who maintain it today are struggling to fix serious bugs that have been outstanding for 5-8 years. It's not just "legacy", it's "not web-like". It's a distributed system with tens of thousands of servers in one data center. It's a tightly coordinated soft real time system. There's a huge amount of in-memory state, which is changeable in real time yet is constantly being backed up. This is totally alien to people who only know transaction-type web-based systems. So they can't hire anybody and have them be productive quickly.


Doesn't that describe a lot of MMOs too, though? The inventory/transaction systems seem like they'd have similar needs (but with a LOT more items in the Second Life database). And the in-world state is a lot more variable than in an MMO. But compared to MMOs, those seem more like differences in degree than differences in kind. Programmers with that background should be able to get productive a lot quicker than web devs I'd imagine.


Most major games are built on one of a few game engines (Unity, UE4, etc.) which have their own ecosystems. People become Unity or UE4 experts. They have forums, conferences, tech support. SL is its own pocket universe.

Second Life is divided into "regions", 256m on a side, each managed by a separate process constantly communicating with its neighbors. This geographical distribution system is unique to Second Life. User avatars and objects can move from one region to another. You can look across region boundaries. Running Mono programs inside objects are stopped, frozen, copied across the network, and restarted on a different machine.

Most big-world MMOs cheat somehow so that they don't have to really solve the distribution problem. They're often sharded, so that the number of players that can interact is limited. Or they're smaller. Second Life's world is 100x the size of GTA V. Or they're portal based; you can only get somewhere via a controlled portal. In Second Life, you can fly over the whole world. (Mostly. Fast vehicles hit bugs the devs have been unable to fix for a decade. Another "legacy" problem.)

This is what the machinery for a "Ready Player One" or "Snow Crash" world looks like.

It's not parallel enough, and the servers keep running out of CPU time on the main thread. Everything then gets sluggish in world. The system needs an overhaul to be more parallel internally on the core functions, and that's really hard, expensive, and needs a dev team the company lacks. Yet another legacy problem.

The technology is all unique to this one system. The only thing that works even vaguely like this is the new Spatial OS from Improbable, which took 150 people to develop, is proprietary, and hasn't been shown to really scale yet. We'll know late this year, as Nostos, a new game from China, rolls out, how well it really scales. That's the first AAA title to use Spatial OS. Spatial OS has a deal with Google where it has to run on Google Cloud servers, which is scaring off most of the big game development shops. That costs too much, and betting your business on a lesser Google product usually ends badly.

Hence the recruiting problem Linden Lab faces. You want to tie your career to this one-off strange system?


I'm familiar with Second Life's distinctive traits. I played there for years, and did contract programming for the Linden Department of Public Works for a couple of them (including coding around those janky region crossing issues you mention).

They pay an incredibly high price for some design decisions that I think users don't even get much value from. It turns out that most users would rather have a private island than live on a continuous continent where neighbors are always putting up eyesores. If you were to start with that fact, remove the requirement that private regions even exist on the global map, and let them spin down when no one's home, then you could give paying users a lot more space for their money while also reducing the company's spend on servers.

While I take your point about work on Second Life not being super transferable to any of the AAA game engines, it seems like it would be very transferable to creating such engines themselves. And what engineer wouldn't want to help build the world of Snow Crash or Ready Player One?

I think the deeper problem is that Linden Lab has stopped prioritizing investment in the SL platform. They've set their sights on creating VR-focused Sansar instead. But it's tough to convince people to move over to a new world when it means leaving behind the bigger community, the bigger economy, and all the clothes in their inventory.


> It turns out that most users would rather have a private island than live on a continuous continent where neighbors are always putting up eyesores.

Linden Lab tried that. That's what Sansar is. It averages 13 concurrent users on Steam. Maybe some more who signed up outside Steam, but under 100. Sansar is a "VR game level loader", not a world like SL. Somebody creates a level map, and others can visit, but not change much. Sansar has a Star Wars prop museum, a Ready Player One prop museum, etc. They look great. You visit once, and you're done.

Other VR game level loaders are SineSpace and High Fidelity. (High Fidelity just gave up, and "pivoted to enterprise".) They also have user counts in the 2-digit range, but worse content than Sansar. The hook for that market segment was supposed to be VR headsets, which turned out to be a niche product. Even VRchat, after a surge in 2017, dropped to about half its initial peak and is stuck at a few thousand concurrent users. Facebook Spaces? Whatever happened to that?

Meanwhile, Second Life continues to plug along, with 30,000 to 50,000 users connected. That's about where GTA V online is, and would be 11th place on Steam if SL was on Steam. SL was maybe twice as big at peak, 7-10 years ago.

Hence the legacy code problem. It runs, it's profitable, it has a significant user base, and it needs improvement.


"legacy code runs the business". While I do agree with this general comment, I've also seen legacy code killing the business.

I've seen legacy code so buggy that it couldn't be fixed, where new features took quarters to add, and where the customer service team had to double in size every year to help customers with their bad experience.

When legacy code is not properly maintained, it can become this inescapable hell. Yes, it sort of works, but at what cost?


I read this article and I have to disagree with one of the major premises that "no one wants to own legacy code" -- it's more that in a lot of organizations, you're not ALLOWED to own legacy code.

What I mean is that you may want to take "ownership" in the sense of learning it and improving it, but because it's not new development, the barriers are large: paying down technical debt and updating to newer development methodologies is disallowed.

So it stays, untouched, bit-rotted, and inflexible.


I am currently rewriting a ten-year-old VB application written in Web Forms. That application communicates with an AS/400 DB2 database, so I get to use the AS/400 here and there.

My application is in .NET Core 2. There are some issues, as anything IBM can be a pain in the ass to work with. Also, the AS/400 dev and I have a tremendous knowledge gap. I know everything new such as unit testing, proper source control, dependency injection, LINQ and ORM tools. However, when it comes to speed of queries he is much better at optimization. Since he has worked with some ancient languages, he has a better understanding of low-level programming.

Honestly, it is pretty interesting how quickly current tools let us develop applications. For example, he would need to set up the DB, then security, then applications, stored procedures, map parameters in code, and so on. I can accomplish most of these things with an ORM like Entity Framework in less than an hour and have a working project, obviously depending on the scale of it. Both approaches have pros and cons, though.
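For a rough idea of what I mean, a minimal EF Core code-first sketch (the entity and connection string are made up; assumes the Microsoft.EntityFrameworkCore.SqlServer package):

    using Microsoft.EntityFrameworkCore;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder options) =>
            options.UseSqlServer("Server=.;Database=Demo;Trusted_Connection=True;");
    }

    // Then "dotnet ef migrations add Init" and "dotnet ef database update"
    // create the table, key, and mapping without hand-written DDL.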

Having old legacy code is problematic. It is harder and harder to find developers and rates are going up as the BIG fish still rely on them and can pay more.


> I know everything new such as unit testing, proper source control, dependency injection, LINQ and ORM tools. However, when it comes to speed of queries he is much better at optimization.

The reason he is better at optimization is that he is not using an ORM. ORMs are a leaky abstraction that gives up the power of SQL for the limited benefit of slightly easier programming. They also tend to move things that in SQL would be a JOIN executed in the database back into application code.

Data is not an object, much as some "modern" languages would prefer to lose the distinction, with ORMs and DTOs obfuscating it.

RDBMSs are based on cohesive logic (set theory) and have been tuned and optimized by people to do the best possible automation of data storage and retrieval, while abstracting away the very low level.

ORMs, on the other hand, take the logic that should live at the data-management level, including invariants, and move it into code that does what SQL does, but slowly and badly (a rough illustration below).
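To illustrate the JOIN point, a hedged fragment: it assumes a DbContext ("db") with Customers and Orders sets, where Order has CustomerId and Total and Customer has Id and Name (all made up). To be fair, EF can also translate a single grouped query; the trap is how easy the loop below is to write.

    static void PrintTotalsTheSlowWay(AppDbContext db)
    {
        // The "join in application code" pattern: one query for customers,
        // then one query per customer (the classic N+1 problem).
        foreach (var customer in db.Customers.ToList())
        {
            var total = db.Orders
                .Where(o => o.CustomerId == customer.Id)
                .Sum(o => o.Total);
            Console.WriteLine($"{customer.Name}: {total}");
        }

        // The set-based version stays in the database as one grouped join:
        //   SELECT c.Name, SUM(o.Total)
        //   FROM Customers c JOIN Orders o ON o.CustomerId = c.Id
        //   GROUP BY c.Name;
    }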


I worked at an AS/400 shop out of college (this was in the 2000's, though). The other programmers there did NOT have a better understanding of low-level programming than me. In fact, it was hard to call them programmers at all.


Yeah, a lot of AS/400 programming is actually pretty high-level (comparatively speaking). COBOL in general was meant to be accessible to end users (or at most power users) such that they could readily define business logic without having to resort to something like assembly or PL/I or what have you.


Yeah, these guys didn't really know COBOL either. They used RPG (an AS/400 exclusive language).


RPG is more like throwing random characters on a screen and seeing what happens.

Report Program Generator (RPG) is designed for an 80-column punch card. Put an F in column 6 to mean this, put a "C" to mean something else, match one of the 99 variables (named, intuitively, 01 through 99) to make output happen.

I had the distinct displeasure in 1984 of trying to maintain a warehouse stock control system that some evil people had decided to implement in RPG. From memory, RPG only has the equivalent of single dimensioned arrays, so the location of items in the warehouse was an intersection of three arrays pointing to yet another array containing the SKUs.

It's designed to take one or more input files with defined columns in each row (card) and do some stuff (mostly subtotalling etc) and produce output.

Of course, it's been stretched beyond all recognition since it was introduced in 1959. It's literally 60 years old this year.


Yup, though we technically used RPGLE which was slightly better. It also had a "free text mode" which looked a bit more like a real language, but my coworkers were terrified of that.


Are there any resources you could recommend? I would be interested in maybe learning more.


What are you looking to learn more about?

When I learned RPG, it was through physical audio tapes. In 2005, with broadband. I haven't come across too many good places to learn from (but I haven't really been looking, either).


Having been someone who had to set up the DB, app, etc. before ORMs were mainstream, you're wildly overestimating how complicated it was. It doesn't actually take very long; you'd maybe save a couple of days on a month-long project. Writing a CREATE TABLE and then the corresponding class is actually trivial: you make the design decisions when you write the CREATE statement, and the class is simply a copy of it (something like the sketch below).
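Something like this (a made-up example):

    // The hand-written DDL:
    //   CREATE TABLE Invoice (
    //       Id     INT IDENTITY PRIMARY KEY,
    //       Number NVARCHAR(20) NOT NULL,
    //       Amount DECIMAL(18,2) NOT NULL
    //   );

    // ...and the corresponding class really is just a copy of it:
    public class Invoice
    {
        public int Id { get; set; }
        public string Number { get; set; }
        public decimal Amount { get; set; }
    }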

It's just a bit easier, not a lot. I certainly use ORMs now, but they're a convenience rather than a revolution. The truly revolutionary thing for code from that era of C# was lambda expressions.

In all honesty, he's an idiot for not updating his skills. If he simply learnt a bit of modern EF and learnt how to use git, he'd run rings around you. I'd note that "proper" source control practice has been around for well over a decade now (SVN was adequate, just not great for distributed teams), so he must be really averse to change.

The unit testing and DI, on the other hand, are pretty useless noise in a statically typed language like C#, for all MS push it. The only benefit of DI is keeping the same instance of the EF context around, and it's pretty hard to actually get that working properly throughout the entire stack when you want to do anything even remotely creative.

As for unit testing, I've never seen the benefit. I took over a project with something like 50% unit test coverage that I never bothered to keep up; in three years the existing tests caught one bug.

All of that highlighted to me what an utter waste of time and money it was to write all those tests.


Source code control has been around since SCCS in the early '80s (although Wikipedia says 1977); it begat RCS, which begat CVS, which begat SVN.

People have been using source code control since then, but the history of computing is of wheel reinvention and not-invented-here, so we are condemned to rediscover things.

Unit testing is useful for business logic, not for individual functions. If you're writing unit tests around whether or not you go outside an array bound, then you're testing the wrong thing. If, on the other hand, you're making sure that someone under 13 can't sign up for a service, that's a reasonable unit test (sketched below).
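For instance, a minimal sketch of that kind of test (xUnit; the policy class and names are made up):

    using Xunit;

    public class SignupPolicy
    {
        public const int MinimumAge = 13;
        public bool CanSignUp(int age) => age >= MinimumAge;
    }

    public class SignupPolicyTests
    {
        [Fact]
        public void Under_13_cannot_sign_up() =>
            Assert.False(new SignupPolicy().CanSignUp(12));

        [Fact]
        public void Exactly_13_can_sign_up() =>
            Assert.True(new SignupPolicy().CanSignUp(13));
    }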

DI can be useful for running code, allowing you to instrument the code to identify problems. But that's runtime DI, not compile/deployment time.

As a design pattern, it's more an abstraction / reduction of general parameterized polymorphism, usually dragging in an opinionated framework that requires you to follow a set pattern of development and deployment.


100% test coverage is not so valuable. But I would argue that test coverage for the bits of complex logic that people are “afraid to touch for fear of breaking it” is valuable. Create tests for this code before you change it, for assurance that you haven’t broken it. If there are weird border cases the original developer wants to ensure remain supported, you need a test case for each of them.


A decade of source control is an understatement. If you weren’t using it in the 2000s you were woefully behind. In the 90s it was exotic and terrible though. Even Microsoft had a horrendous system called SLM (or slime for short).


Interesting perspective. I agree that it's not good that he didn't keep updating his skills, but he has a few clients even bigger than us and is going to retire soon; the AS/400 has served him very well.


I've met programmers who've done the same thing (for COBOL).

I personally would not enjoy it as all you end up doing is maintenance and working with nasty old code bases.


For anyone who wants to learn about working with and on a "legacy" code-base, check out Michael Feathers' book "Working Effectively with Legacy Code".

In the beginning it goes over code smells and how to find refactoring seams in your language of choice (C, C++, or Java), and then each chapter is a group of techniques you can use.

Working on a service that had both a very old monolith and some brand new microservices, I found it invaluable. I think the first lesson I applied from it was using pinning tests for safer refactoring.
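For anyone who hasn't seen the term: a pinning (or characterization) test just records what the existing code does today, so a refactor can be checked against it. A minimal sketch (xUnit; the legacy function here is a stand-in, and the expected values are whatever the current implementation produces, not a spec):

    using Xunit;

    // Stand-in for the legacy code under test; in real life this already exists
    // and you wouldn't touch it until the pins are in place.
    public static class LegacyPricing
    {
        public static double PriceFor(int quantity) => quantity * 0.95;
    }

    public class LegacyPricingPinningTests
    {
        // Expected values captured by running the current implementation.
        [Theory]
        [InlineData(0, 0.0)]
        [InlineData(5, 4.75)]
        [InlineData(100, 95.0)]
        public void Price_matches_current_behaviour(int quantity, double expected) =>
            Assert.Equal(expected, LegacyPricing.PriceFor(quantity), 2);
    }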


> I think the first lesson I applied from it was using pinning tests for safer refactoring.

Thanks for the book recommendation. Funny to realize I have done this in the past without prior knowledge; it just made sense.


I think one of the reasons why it's such a conundrum is because software development as a practice is just now getting to the level of complexity where large groups of people have collaborated over a large codebase over a long time.

One of the problems that microservice architecture is trying to solve is managing a large software entity by turning it into a collection of smaller, easier-to-manage entities. It's like single-celled organisms evolving into multicellular organisms: each cell becomes smaller, simpler, and more specialized, replacement of cells becomes easier, and the overall lifeform can "scale" and become more complex.

Mature software companies have gone through this transformation multiple times already in the form of n-tier, SOA, microservices, and now serverless architectures. This is all part of a natural progression of making individual components simpler and the overall structure more granular, and as a result more resilient. This resiliency opens up new capabilities to scale a complex system.

My long-winded point being that legacy software will always exist, yes, but each piece of legacy software is already getting smaller and more granular, to where rewriting it will eventually be a more continuous operation of refactoring small things, kind of like skin cells falling off your body or hair falling out.


This made me think of an enterprise-software version of Wreck-it Ralph. I'd watch that, but I'd be part of a small audience (though arguably, Tron qualifies).


I'd watch a movie based on The Phoenix Project book.


I wonder if someone has optioned The Phoenix Project?


A small audience, but one with a good amount of disposable income.


I think it's very interesting to think about this article in the context of the open source projects relied upon by the vast majority of major applications and companies. OpenSSL specifically comes to mind here. That library has been used and leveraged in so many ways to build empires, yet it is currently struggling to find dedicated engineers who will work on its core logic. It's an unfortunate situation, and while yes, there are LibreSSL and BoringSSL, it's non-trivial to move to their implementations (I have tried; it's fraught with peril). OpenSSL isn't on its own, either; there are many open source libraries and applications struggling to find "owners", none of which can be "easily" replaced by any means.

I think the comments about "heroic" culture being fostered by the lack of ownership are spot on, but it isn't a bad thing as long as those heroic actors are actively trying to get attention put on said pieces of legacy software. IMO it's better to have heroic actors than nothing, and especially in the context of an open source application or library that is used worldwide, I hope we can find ways to generate more interest in working on them.



"Why is there unclear ownership?"

Because Extreme Programming (and other agile practices?) make Shared Code-Ownership a "value".


Well, I think the article is not so great on this one. There is unclear ownership because management thinks that software is finished at some point. They think in terms of large, vague blocks of functionality and have difficulty understanding things at a finer scale. When requests come in from users of the software, they are often ignored by management because none of the requests is large enough to fit into those large, vague ideas. However, all the requests put together add up to a very large impact. The result is that the legacy code gets neglected and new greenfield projects get the attention. Because nobody works with the legacy code, it becomes strange and foreign. It is built with older technology which is no longer sexy and is boring on your CV, so the movers and shakers in the IT group tend to avoid it. It's not code ownership that's the problem -- the whole company washes its hands of it.

In terms of code ownership, you can run your group in a number of different ways. I've seen projects with code ownership work and I've seen projects without code ownership work. I vastly prefer the latter, personally. As long as you are churning the code regularly, a lack of code ownership means that internal conflicts about how things should be designed are forced out in the open. They don't fester for years and years, where groups of developers end up saying, "I can't work with that person. They are crazy." You have the conflict early when it doesn't have such a large impact and you sort it out early (Note: some people are just inflexible -- if you find that kind of person, knowing about it early is also good. You can deal with it).

When you avoid code ownership, you are also forced to have code that is clear for your entire group. For example, if you have a single person and they work in isolation, their code may be impenetrable to the others. But if each person in the group has to work on the code, code that most people don't understand has very little chance of surviving. Overall, I find that staying away from code ownership results in considerably more maintainable code.

However, there are clearly advantages to code ownership as well. One shouldn't turn their nose up at the benefit of being able to see a piece of code and say, "I wrote that". Having ownership makes it easier to take pride in your work. It can be a very motivating factor. If you have people on your team who are feeling disconnected and don't feel like they are personally making an impact on the group, giving them a little piece of code to own can be very good for them.

Similarly, sometimes you don't have a choice for your team. Sometimes you have a team of people who just don't work well with each other and there is nothing you can do about it. Partitioning the code and allowing these people to keep their distance may be the only thing you can do. Ideals are great and if you can achieve them, it's wonderful. But you have to be conscious of reality. People are people.

I could go on and on, but I guess my point is that very often I see remarks like the one you made that seem to take a very superficial view of things. There seems to be no effort made to understand why other people have a differing viewpoint. Of course, you also get fan-boi style postings of, "The-new-hotness is the best thing because reasons" which are also unfortunate. Usually the truth is somewhere in between.


SRE = Site Reliability Engineering, see https://en.wikipedia.org/wiki/Site_Reliability_Engineering

MVP = (in this case, probably) Minimum Viable Product, see https://en.wikipedia.org/wiki/Minimum_viable_product


> The initial development cost of software rounds to zero when compared to operating costs in perpetuity.

Mostly a good article, but what is with this quote? Aside from being factually wrong from a dollars-and-cents standpoint, doesn't it contradict their point that legacy code doesn't get enough love?


Author here.

I don't think it's wrong in most (nearly all) cases. The cost of running software over time (keeping it updated, changing the hardware it runs on, keeping it compliant, etc.) is much more than the dev time to build it.

Unless your service is completely replaced every couple of years, I'd venture operations costs more than development (even if dev handles the ops).

The point of the statement is designing for operability helps with ownership and combats the "leave it in the corner and hope for the best" mentality.


This seems like moving the goalposts though. "Much more" is not "rounds to zero". Software is just expensive, period - to build, and to maintain.


I don't think it contradicts the point.

When you have a legacy system that "sits in the corner" earning money, you're [the company] paying engineers to retain their services in emergencies. Features get added too, if you're interested in keeping or gaining customers. So construction cost as a proportion of total cost does trend to zero.

The company/stockholders don't care about hot new technologies. They care about revenue. Customers don't care whether their software is written in this or that framework or is powered by deep learning. They care about stable, usable, useful apps. Businesspeople know this. Many engineers do not. I think that disconnect causes much strife in tech businesses.


I really like how they tell the story at the beginning; I felt sorry for a hypothetical piece of legacy code.


It's kinda dumb, but I do like this children's book retelling of software development.


It's not dumb. Some of the most impactful stories are fables. You may also like Java Koans.


I wanted to update my comment but for some reason I can't. I mistakenly said Java Koans when I actually meant The Codeless Code, located here http://thecodelesscode.com/contents. Sorry about that!


This would have been much improved if the cute legacy system had been speaking a dialect of Latin found in ninth-century Bavaria, with a slang vocabulary developed for its specific problem. And insisted that everyone else use that dialect when speaking to it.


Cute story, but I detect a certain bitterness and refusal to accept the "future" of software engineering is actually the "now."

People didn't think up microservices yesterday, they aren't some hot new fad that'll be forgotten in a year, and they aren't being developed because people just have nothing better to do. They're replacing legacy systems which nobody wants to touch at a rapid clip. People don't like working on legacy systems because they're at an unfortunate intersection of being business critical yet flaky and hard to work on.

In the context of this story, well... your protagonist is actually on life support. But not to worry! They'll be taken off it soon...


If anything, it's a pretty pragmatic view about the high frequency with which new systems intended to replace legacy ones fail to do so.

Certainly applications can be supplanted or replaced, and often are, at the discretion of users and customers.

But 'living' (running) systems can often only be successfully replaced in the manner of the Ship of Theseus.


I think you missed the point. Whatever you call your code, monolith or microservice, it's still code and tomorrow or next year it will be legacy code. I'm not sure if the size of the codebase actually matters, developers just want to develop and will move on quickly to the next thing. It takes time and patience and persistence to maintain legacy code and the systems it's embedded in and it's far from sexy or exciting. The young dominate IT and are way more idealistic and filled with enthusiasm and new ideas than those of us who are getting on. They want to make their mark and solve the world's problems and that means more and more code! But new code! This new code will be better, you'll see.


> In the context of this story, well... your protagonist is actually on life support. But not to worry! They'll be taken off it soon...

"Soon" is often...not soon. I have never, ever seen a legacy system of any substantial complexity taken fully offline. Nor have I seen even partial replacements for such a system deliver a reliable alternative anywhere close to on time. In fact, I've seen more half-baked successors be either scrapped or put on life support themselves than I have seen get even into the ballpark of what could generously be considered success.


> Legacy means it worked.

No, and shame on you for pushing this idea, is my first reaction. "Worked" doesn't mean anything.

Positing this redefinition of Legacy (and implying what "worked" means) rubs me the wrong way, especially when tying it to the idea of code ownership. Maybe I just have a warped perspective, except that I've run into these scenarios.

"worked" might mean "it was easy to fix and it broke a lot", "it calculated the numbers correctly but nobody knows how to change it without it breaking", or "it costs $5k a run but it gives us the right count of a database column". Legacy means it might be working, depending on your CURRENT needs. Your current needs change, even when the code does not.

Code ownership isn't at a single level, which makes some of this analysis seem idealistic.


Author here.

Code doesn't become legacy if it didn't fulfill some purpose. If it didn't work, it wouldn't be legacy, it'd be source code in somebody's home directory that was never deployed.

Not all working software is legacy software, but all legacy software (still running) works, for some definition of "works".

That's not to say your points aren't also really correct. Software is built with current requirements and knowledge in mind; we make the best decisions with the info we have, etc.


I know of at least three projects (with huge budgets) right now that are spawned from the corpses of old, failed projects that never worked. Those old projects make up the legacy of the current effort and can’t be easily thrown away.


> Legacy means it might be working, depending on your CURRENT needs. Your current needs change, even when the code does not.

No, working should mean "it fulfills the requirements". If the requirements change then the code should change.

The other things you have mentioned are important, but they aren't necessarily requirements. E.g., uptime, reliability, and performance might be requirements, but without stating what they should be you can't say that a piece of code doesn't work because it is "slow".

This sort of stuff is real basic software engineering.


> If the requirements change then the code should change.

That's the nature of the term "legacy" that you are using: it met requirements that are not the same today (sometimes it's just coding standards). It didn't change, hence it's legacy code.


There are all sorts of wrong here. You are conflating several concepts.

Firstly, I was commenting on your very strange definition of what working means. What we (programmers) normally mean by working is "it fulfills the requirement that it was developed against". If the requirements change, the code must change to reflect the changes in requirements. If the software is changed, it is a different version of that software.

Legacy has nothing to do with that. Legacy until relatively recently meant "Not supported" i.e. Windows XP is "Legacy" whereas Windows 7 is not.

Consider the scenario:

I am asked to write a program to play sound. So let's make up some trite requirements:

* It must be able to play .wav files

* Sound should always come out of the default audio device as designated by the OS.

It is written and released. We will call it version 1.0.0 and it is supported to December 2021.

People ask for more features. These new features are:

* It must be able to play .mp3 files

* The ability to choose the audio device that you can play sound through.

It is written and released and it is called version 1.1.0 and it is supported to December 2022.

Now the requirements have changed; however, because we are still in the year 2019, both versions are still supported. The requirements for the software have changed, and thus there is a new version of the software to reflect the change in requirements. That should be reflected in the change-log (remember those things?).

In 2022, version 1.0 will no longer be supported. There will be no defect fixes for it, but it still works, i.e. fulfills the original requirements as listed above. However, under the more traditional definition of legacy software, it is considered legacy.

Things like code quality, maintainability etc. are separate issues.


> What we (programmers) normally mean by working is "it fulfills the requirement that it was developed against".
> Things like code quality, maintainability etc. are separate issues.

That's your characterization, and not at all how development works in practice. The vast majority of software barely has anything like versioning, and scant few programs have maintenance/support windows. That's a fact by volume.

Defining the term "legacy" to mean something specific as a subjective point of reference (since there is no formal definition), e.g.

> Legacy until relatively recently meant "Not supported" i.e. Windows XP is "Legacy" whereas Windows 7 is not

is helpful, because it lets me understand what you are thinking clearly. This does not change my position, as my experience is that working code is still called legacy internally, all the time. Historic resource problems, like broken dependency chains (you can't even build it anymore) or the availability of platforms to test on, are common.


> "worked" doesn't mean anything.

"worked" means it gets the job done and legacy means it's old. Code only gets old when it worked (at some point). So I don't think this redefinition is that far off.


I think they are referring to the fact that "Legacy" has now been defined as code without unit tests in some circles.

However I agree in general most people mean "Old code I don't want to work on because I don't like the technology it is built in".




