Code doesn’t usually exist in a vacuum; it has dependencies on an environment that keeps moving forward. If your old commented-out code is calling any functions, those functions may have changed. It probably used a bunch of libraries. It probably needed a certain compiler or interpreter version. And if it ran on just one OS or hardware, you might have a hard time even finding the right system to run the old code.
In other words, unless you are also keeping your entire universe in revision control (all code changes, lists of program and library versions, old machines, etc.), then there isn’t much point in just preserving code. That code probably won’t copy/paste into the current environment with the same behavior, and it’s almost worse if it does “fit” because the behavior may now be misleading.
I disagree. It's sometimes useful to preserve commented-out code just because it represents some idea. (This is actually the case in the example from the article.) You can always look into the historical source for context, but it's impossible to search the source for something you don't know exists.
(I kind of like the trick of referencing a commit, but these references become even more mysterious over many years, unfortunately, since the commit numbers can get lost.)
Now if the idea behind the code itself becomes obsolete, you can always delete it. Although working on a 30-year-old code base (without any unit tests) taught me that I should never delete things unless I fully understand why they are there.
Perhaps we could acknowledge that there should be a different set of practices for working with 1-year-old code and 30-year-old code.
> It's sometimes useful to preserve commented out code just because it represents some idea.
Then put a fucking comment saying this: "this commented-out code below is to preserve idea FOO". Leaving commented-out code without a real comment just shows how unprofessional you are (or how you want to keep your job by keeping a knowledge silo in your head).
I often want to delete what seems to be obsolete code, but I am never sure it isn't going to break something somewhere down the line. My tactic these days is to comment it out and put a date and explanation next to it. If I come across the same commented-out code a while later, then I can safely delete it.
I think git has made it easier to clean up code secure in the knowledge that we really can get the old code back in all its crufty glory if we really must.
The people downvoting this probably haven't worked with version control systems in which it's actually really difficult to get the old versions of files. You'd think that it would always be easy, but you would be wrong.
I used to use AccuRev, and it had a number of real problems in bringing back old code. Retrieving old code was possible, but required creating 'history streams' on the project and you basically had to do a whole new checkout into a new workspace. The commands that worked with history were also sometimes rather sketchy. There was always a question 'will a revert actually succeed?' It was much safer to manually undo changes.
I never realized how painful it was to retrieve old code in that system, because I simply never bothered to do so. We designed our workflows around not having to do that.
When I first learned git and discovered you could just grab an old file with `git checkout <revision> -- <filename>`, I was shocked. I'd never really thought about my version control system before; I just used what I was given. But at that moment, I realized that I'd been working with a terrible system for years.
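For anyone who hasn't tried it, the whole round trip fits in a few commands. Here's a self-contained sketch in a throwaway repo (file names and messages made up):

```shell
# Set up a throwaway repo with two versions of a file.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo "old version" > notes.txt
git add notes.txt && git commit -qm "first"
old=$(git rev-parse HEAD)
echo "new version" > notes.txt
git add notes.txt && git commit -qm "second"

# Pull the old file back into the working tree with one command:
git checkout "$old" -- notes.txt
cat notes.txt   # prints "old version"
```

Newer git spells the same operation as `git restore --source=<revision> -- <filename>`, and `git show <revision>:<filename>` lets you just view the old version without touching the working tree.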
Which version control systems are these? All the usual suspects make this easy - in fact I find git to be somewhat more difficult than other simpler tools like cvs or svn. Even (shudder) Visual SourceSafe offers this functionality.
There are still version control systems around that don't have the concept of a commit spanning multiple files, Rational ClearCase for example. So if your change involves multiple files, undoing it requires remembering which files were involved and undoing each one separately. That's quite a pain, especially if you delete whole files that you want to have back.
Yep, especially since ClearCase records adding/removing files not only as a change to these files themselves, but also as a completely separate change to the parent directory. Really terrible system.
I prefer to leave that sort of stuff behind an `if(false)` block, so that the compiler or linter can reach it and check it (and ultimately optimize it away, so I get the best of both worlds).
Actually, I prefer to just throw it away and worry about figuring it out again later if it becomes an issue again. I don't have the problem of not being able to figure out how to do things again and needing to look back at old code. But if someone I'm working with absolutely insists, then I use the `if(false)` block.
I don't like that solution, because you either don't have the compiler warning for unused/dead code turned on, or you're ignoring it. My preference is definitely for deletion.
I suppose in C languages the "better" variant would be an `#if 0` preprocessor directive.
No, that is not better, because it removes the code before it ever reaches the static checker. Most compilers can optimize dead code away automatically, at least the ones for sane languages that understand they are there to serve the programmer and not the other way around. Dead-code elimination is one of the last passes an optimizing compiler runs, in the hope that earlier optimizations have made sections of code dead (things like inlining constant expressions need to happen first). So I still want this code to be checked, in case any of its dependencies change. But ultimately it still goes away.
Frankly, I don't even want the cognitive load of dead code, so as I said, I only do this for people who insist they need this sort of crutch.
I have worked with multiple large legacy codebases, and I usually need to do global searches across them to figure out basic things, like how functions are called and from where, for one example. When this generates a huge list of search results, you then need to consider all possible paths in order to complete a bug fix, new feature, or refactor.
One thing I see way too often are files sitting next to each other that look like this:
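Something like the following, with every abandoned "version" kept as a sibling copy instead of in version control (file names hypothetical):

```
report.c
report.c.bak
report.c.old
report_new.c
report_final.c
report_final2.c
```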
This is why I prefer git merge to git rebase: I delete code left and right, and occasionally I need to restore a method or two that I thought I wouldn't need anymore, but I was wrong. That's much easier if you keep your git history sacred and don't rewrite it.
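When you do know the name of a deleted method, git's pickaxe search will dig the relevant commits out of an unrewritten history. A self-contained demo in a throwaway repo (names made up):

```shell
# Throwaway repo: add a function in one commit, delete it in the next.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
printf 'int legacy_hack(void) { return 7; }\n' > util.c
git add util.c && git commit -qm "add legacy_hack"
printf '/* cleaned up */\n' > util.c
git add util.c && git commit -qm "remove legacy_hack"

# -S lists commits where the string's occurrence count changed,
# i.e. both the commit that added it and the one that removed it:
git log -S legacy_hack --oneline
```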