Legacy systems are still legacy because nobody ever wanted to pay to replace them. Later on, the system becomes so old that nobody even knows how to replace it, which adds to the cost: in investigation, in the replacement falling short of feature parity, and in the cost of failure. Basically, you should fully expect any replacement system to be too expensive and to take too long.
So this book is correct that managing the organization is probably the most important part of transitioning a legacy system. Somebody has to trick finance into giving you twice the budget, and product & sales into giving you twice the time, and build an engineering department strong and large enough to do two things: 1) run a legacy system, and 2) build and run a new system that will be way more complex than the old system team can handle - including the moving-saucers-of-tea-between-two-moving-18-wheelers-on-a-plank migrations. It's only once you've got the organization well trained (like a prize Poodle) that you can think about the code.
And I'm not sure if "Future Proof" is tongue-in-cheek, but it should be. Every system should come with a sunset date and a plan to transition to a replacement system. You can't predict the future system, but you can at least estimate how to migrate to a nearly-identical system - the cost, the timing, the reliability, the portability, etc. My favorite sign of an unprepared manager is when I ask them about their plans for when we will eventually transition away from the system, and they reply "Well I hope we don't move away from it, it's costing $X to build!" When what they really mean is, "I'll be long gone by then."
I don't think "future proof" has to be tongue-in-cheek.
There are about a billion misappropriated buzzwords that will make everything slower, but behind each buzzword there's typically some insight that helps with certain problems - when it isn't just being used as a catch-all.
Personally, I don't actually think the following is a terrible tactic for most of our CRUD apps:
- Start with a fullstack monolith framework because it's probably going to be the fastest and simplest way to market.
- Break up the monolith. Microservices everything!
- Use some buzzword system that'll help facilitate this (your testing and operations just got harder; it's a trade-off, so k8s might now make sense).
Stepping away from just following the fad: at some point, if you're successful, your company is going to be complicated enough that you have, for example, 5 ways of sending emails, none of them quite complete enough to handle blacklisting, delivery failure, debugging, or audit logging. Cue the email gateway service.
If done right, you've now hopefully encapsulated the email problem into a single codebase, run from a single location, owned by a single team that understands the issues and actually completes the implementation. If you go into this with eyes wide open, it doesn't need to be a massive problem when it happens.
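As a rough illustration (all names here are hypothetical, not from any particular codebase), a minimal gateway of this kind centralizes the concerns listed above - blacklisting, delivery failure, and audit logging - behind one interface, so the rest of the company calls `send()` instead of rolling its own SMTP code:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)


@dataclass
class EmailGateway:
    """Hypothetical single entry point for all outbound email.

    Centralizes blacklisting, delivery-failure handling, and
    audit logging, so each team doesn't reimplement them.
    """
    blacklist: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def send(self, to: str, subject: str, body: str) -> bool:
        # Policy check happens in one place, for every caller.
        if to in self.blacklist:
            self.audit_log.append(("blocked", to, subject))
            return False
        try:
            self._deliver(to, subject, body)
        except RuntimeError as exc:
            # Delivery failures are recorded instead of silently dropped.
            self.audit_log.append(("failed", to, subject))
            logging.warning("delivery to %s failed: %s", to, exc)
            return False
        self.audit_log.append(("sent", to, subject))
        return True

    def _deliver(self, to: str, subject: str, body: str) -> None:
        # Placeholder transport: a real gateway would call smtplib
        # or a provider SDK here and raise on failure.
        pass


gw = EmailGateway(blacklist={"spam@example.com"})
assert gw.send("user@example.com", "Welcome", "Hello!")
assert not gw.send("spam@example.com", "Welcome", "Hello!")
```

The point isn't the code itself but the shape: one owned surface where the implementation can actually be completed, instead of five partial ones.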
Of course, this doesn't always go well. The most common failure I see in big orgs is that it gets political: the problem gets stolen by architecture, they bootstrap an internal services team that refuses to release anything until it's done and then disappears into the void, while banning any other development org from working on the problem.
This has less to do with the path being wrong, and more to do with the culture being wrong. Give the problem to someone who needs to ship something (e.g. the onboarding team, if they send most of the emails, can build the common email gateway), and they'll accept PRs since it's not their only problem.
Semantics are contextual, and your legacy bread / legacy software parallel doesn't convince me. Is it a precise term? No, but most people here will have a good idea of what it means.
In their defense, email is pretty inherently insecure: metadata and message subjects always leak, even when message bodies are encrypted.
The fire-and-forget model is also a big contributor to spam. If email were invented today, I imagine it would look more like an E2E system that put the storage burden on senders.
The reason for the negative connotations is that when the term is used, it usually also means "a system which nobody really understands", typically because of a lack of tests and documentation. Maybe the high cost to modify it is working as intended in some cases (don't modify this project, start a new one). But when the business expects the legacy product to be modified, then you run into trouble.
To me it means existing code. If it has been accepted by the PO it is now legacy. The new code that was just accepted now influences any decision involving it going forward. Legacy.
I list on my website that I work with legacy computers, often moving them to emulators to preserve them from crashing due to a hardware failure. I use QEMU because it emulates many systems not just PCs.