
There's more to refactoring than just code; when systems are this legacy and cruft-ridden the whole thing needs to be looked at from first principles and an actual workflow designed based on what's possible today or seemingly within grasp.

Only with a clear idea of what things should look like can the new structure be built and tested, training written and validated alongside that testing, and a rollout planned (which is its own whole other project).

Do you have practical experience rewriting those kinds of legacy systems?

Because my personal intuition is that trying to revamp the process and perform the technical migration at the same time is the actual recipe for disaster.

I would do the migration while staying as close as possible to the original system (removing only the obviously unused functions), and only then start transforming the business process.

Studying the past lets us learn from what it took to take advantage of new technologies.

The desire for very little change (which is difficult, and does need someone to push the politics of that change through) would leave us in a belt-and-pulley-driven workshop layout, making horse-and-carriage gear.

( Ref: https://en.wikipedia.org/wiki/Electrification#Benefits_of_el... )

On forward thinking: this is baseless speculation, but my gut feeling is that science fiction written today is more likely to be 'close enough' to how everyday computers might work in the future that such systems would seem like a plausible alternate reality. Contrast that with what science fiction written even 50 years ago imagined about anything involving computers or automation.

That's why it seems likely that the overall workflow should be examined again including a look at what is actually needed and what tools we currently or might have to accomplish those tasks. The existing systems, interfaces, and forms are __some__ of the tools to consider, but if there are actually good reasons for evolving or replacing them those changes should be documented and made.

I'm currently working on this process, replacing 40-year-old COBOL systems with modern services. It's a private business, so we probably have more flexibility than government, but a lot of the same principles apply.

What we don't do is look at the old code, document how it works, and then reproduce that. Prior to this legacy system being built, the entire company worked with paper processes and documentation, so the system was a paradigm shift in how the business worked. That system is slow to update, so how people work today is heavily influenced by the business' thinking of 40 years ago. Our replatforming project is seen as essential to the business' continued survival, so we're allowed to question processes, simplify where we can, and work as equals with the business in defining new ones. There are definitely hold-outs and resistance from some quarters, but once you launch some successes, people start converting and accepting the process.

Interesting. I guess extremely old systems are a different beast from merely old ones.

It's also probably very different whether the system has been continuously adjusted and fixed for 40 years, or whether it has just sat there.

If a system has been continuously adjusted over time (like ours) we still work from first principles. Even though we have institutional knowledge in the form of people who have worked here for 40 years, it’s often impossible to know the reasoning behind why a change was made. Most ageing code bases contain redundant logic due to some situation in the past which no longer occurs, or a constraint imposed by a dependency which will now be removed.
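To make that concrete, here is a minimal, entirely hypothetical sketch (the names and the limit are invented for illustration) of the kind of redundant logic that accumulates: a chunking loop written for a downstream constraint that no longer exists.

```python
# Hypothetical example of legacy logic outliving its constraint.
MAX_BATCH = 9999  # a limit once imposed by a fixed-width file format

def submit(records):
    # This chunking only existed because a downstream dependency
    # rejected files with more than MAX_BATCH records. Once that
    # dependency is retired, the loop can collapse to a single call,
    # but nothing in the code says why it was written this way.
    batches = []
    for i in range(0, len(records), MAX_BATCH):
        batches.append(records[i:i + MAX_BATCH])
    return batches

print(len(submit(list(range(20000)))))  # 3 batches for 20,000 records
```

Without the original reasoning recorded anywhere, a faithful line-by-line port would carry the dead constraint forward, which is exactly why working from first principles matters.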

For example, a file-based batch process once meant that other processes had to wait for it and split their own work accordingly. Replace it with an event-based system and everything can run continuously, and a lot of those restrictions disappear.
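The shift described above can be sketched in a few lines. This is a toy illustration, not the commenter's actual architecture: an in-memory queue stands in for an event bus, and the "business logic" is a placeholder.

```python
import queue
import threading

# Sketch: instead of accumulating records into a file for an overnight
# batch run, the upstream system publishes one event per record, and a
# downstream consumer processes each event as it arrives.

events = queue.Queue()

def producer(records):
    # Upstream publishes events as they happen; no batch window.
    for record in records:
        events.put(record)
    events.put(None)  # sentinel: no more events

def consumer(results):
    # Downstream runs continuously, handling each event on arrival.
    while True:
        record = events.get()
        if record is None:
            break
        results.append(record.upper())  # stand-in for real business logic

results = []
worker = threading.Thread(target=consumer, args=(results,))
worker.start()
producer(["order-1", "order-2", "order-3"])
worker.join()
print(results)
```

Because the consumer no longer waits for a completed file, the scheduling constraints that shaped the surrounding processes (cut-off times, dependent jobs split around the batch) simply stop applying.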
