In other words, unless you are also keeping your entire universe in revision control (all code changes, lists of program and library versions, old machines, etc.), then there isn’t much point in just preserving code. That code probably won’t copy/paste into the current environment with the same behavior, and it’s almost worse if it does “fit” because the behavior may now be misleading.
(I kinda like the trick of referencing a commit, but those references become even more mysterious over many years, unfortunately, since the commit hashes can get lost.)
Now if the idea behind the code itself becomes obsolete, you can always delete it. Although working on a 30-year-old code base (without any unit tests) taught me that I should never delete things unless I fully understand why they are there.
Perhaps we could acknowledge that there should be a different set of practices for working with 1-year-old code and 30-year-old code.
Then put a fucking comment saying so: "the commented-out code below is here to preserve idea FOO". Leaving commented-out code without a real comment just shows how unprofessional you are (or how you want to keep your job by keeping a knowledge silo in your head).
Really? I LOVE being able to delete stuff.
I used to use AccuRev, and it had a number of real problems with bringing back old code. Retrieving it was possible, but required creating 'history streams' on the project, and you basically had to do a whole new checkout into a new workspace. The commands that worked with history were also sometimes rather sketchy. There was always the question of 'will a revert actually succeed?' It was much safer to undo changes manually.
I never realized how painful it was to retrieve old code in that system, because I simply never bothered to do so. We designed our workflows around not having to do that.
When I first learned git and discovered you could just grab an old file with git checkout <revision> -- <filename> I was shocked. I'd never really thought about my version control system before. I just used what I was given. But, at that moment, I realized that I'd been working with a terrible system for years.
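For anyone who hasn't tried it, here's a minimal sketch of that workflow in a throwaway repo (file name and commit messages are made up for the demo): you restore a single file to how it looked at an earlier revision, without touching anything else in the working tree.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"   # identity needed to commit
git config user.name "demo"

echo "v1" > config.txt
git add config.txt && git commit -qm "first version"
old=$(git rev-parse HEAD)                  # remember the old revision

echo "v2" > config.txt
git commit -qam "second version"

# Grab just that one file back from the old revision.
git checkout "$old" -- config.txt
cat config.txt                             # prints "v1"
```

No new workspace, no special history setup: one command, and the old file is back in your working tree (and staged, ready to commit if you want to keep it).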
// If service reverts to this behavior, this is the spot where $ESOTERIC_CALL should be.
Actually, I prefer to just throw it away and worry about figuring it out again later if it ever becomes an issue again. I don't have the problem of being unable to figure out how to do things again and needing to look back at old code. But if someone I'm working with absolutely insists, then I use the `if(false)` block.
I suppose in C-family languages the "better" variant would be an `#if 0` preprocessor directive.
Frankly, I don't even want the cognitive load of dead code, so as I said, I only do this for people who insist they need this sort of crutch.
I have worked with multiple large legacy codebases, and I usually need to do global searches across them to figure out basic things, like how and from where functions are called, for example. When that generates a huge list of search results, you then need to consider all possible paths in order to complete a bug fix, new feature, or refactor.
One thing I see way too often are files sitting next to each other that look like this:
There is nothing more enjoyable than deleting a big chunk of cruftiness and then watching all the regression tests come back green.
* "very high" meaning covering all rational execution paths