
Things You Should Never Do, Part I (2000) - joseflavio
http://www.joelonsoftware.com/articles/fog0000000069.html
======
bguthrie
If you absolutely must rewrite, consider isolating and replacing components
piecemeal rather than scrapping the whole thing and starting over. You don't
eat the whole cost at once, you can keep moving forward with new features if
needed, and you get more modular architecture to boot.

It's called the Strangler Pattern.
[http://www.martinfowler.com/bliki/StranglerApplication.html](http://www.martinfowler.com/bliki/StranglerApplication.html)
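
The pattern can be sketched in a few lines (all names here are hypothetical, not from the article): a thin facade routes each request to either the legacy implementation or its rewritten replacement, and the migrated set grows until the old system can be retired.

```python
# Strangler-pattern sketch: a facade dispatches each path to the legacy
# system or to its rewritten replacement. Paths migrate one at a time.

def legacy_handler(path):
    return f"legacy handled {path}"

def new_handler(path):
    return f"new handled {path}"

# Paths already migrated to the new implementation.
MIGRATED = {"/invoices", "/reports"}

def route(path):
    handler = new_handler if path in MIGRATED else legacy_handler
    return handler(path)
```

Callers only ever see `route()`, so each component can be replaced without a big-bang cutover.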

~~~
nawitus
The problem with that seems to be that you're tying yourself to the old
application in one way or another (e.g. old interfaces, old architecture, old
programming paradigms).

~~~
tomjen3
Not at all. It is perfectly possible to go from a big freaking desktop app
(BFDA) and rewrite it, piecemeal, to be a single page web app.

The way you do that is to first, piece by piece, split the BFDA into
multiple layers, then make it client-server: split the data storage into
its own server, split the GUI off, split the business logic off. Then you
rewrite the parts that request stuff for the client to use JSON and REST,
and then all you have to do is port the (hopefully) tiny amount of GUI
interface the user needs to JavaScript, and the entire app still works.

Is it easy? No. Does it waste some work? Yes.
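
The key intermediate step can be sketched roughly like this (a hypothetical Python example, with invented names): once the business logic is split out of the GUI layer, the same function can serve the old desktop front end directly and a new web front end through a thin JSON adapter.

```python
import json

def total_hours(entries):
    # Pure business logic: no GUI code, no storage details.
    return sum(e["hours"] for e in entries)

def hours_endpoint(entries):
    # Thin REST-style adapter a JavaScript client would call;
    # the old desktop GUI can keep calling total_hours() directly.
    return json.dumps({"total_hours": total_hours(entries)})
```

Because both front ends share the core, the GUI can be ported last, after everything else already works over the new interface.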

------
moron4hire
This article, when I originally read it 5 years ago, was hugely influential to
me. It completely convinced me of the superiority of refactoring existing
systems to an ideal over rewriting. It doesn't take anywhere near as much time
to fix cruft as it does to create a system from scratch and try to avoid your
own cruft. It's scary and it's an unknown, but if you approach it with
courage, it falls quite easily.

EDIT: refactoring large systems has also convinced me of the superiority of
statically typed programming languages for large projects. When I break code,
I want it to break as hard as possible. I want to see absolutely everywhere
function XYZ is called, and one way to do that is to rename XYZ to XYZ___ and
see where all of the compiler errors show up.

~~~
famousactress
The article and Fowler's Refactoring book had a similar effect on me as well.
Your point about statically typed programming languages is one that I've been
feeling the brunt of, though I'm not convinced yet.

I've been 100% python for the last few years, after a career mostly in Java
and I definitely miss the sweeping refactoring abilities and hard-breaks
during those kinds of refactorings. It does lend courage for sure. I'm just SO
MUCH MORE productive in python for most of what I do that I'm not willing to
throw out the baby with the bathwater yet. Instead I've been trying to focus
on leveling up test coverage, and experimenting with Rope for refactorings (
[http://rope.sourceforge.net/](http://rope.sourceforge.net/) ). I'm also
slowly learning coding patterns that make refactorings easier moving
forward: choosing code and data structures that are more easily changed
later, or even things as simple as naming things for grep-ability.

~~~
GyrosOfWar
Modern statically typed languages (cough cough not Java cough) generally have
static type inference, which removes a lot of redundancy from the code. Being
able to do

val obj = MyObject(SomeOtherObject(12, "hello")) // Scala

instead of:

MyObject obj = new MyObject(new SomeOtherObject(12, "hello")); // Java

saves a lot of time, and it just looks nicer, all while retaining the
advantages that static, strong typing has (great potential for refactoring; a
lot of errors can't happen at all). Of course, there are some things that will
still take up more space than they do in Python or Ruby but it's a great
improvement. Languages like Rust, Scala or C# also have a good variety of
functional programming tools that can make your life a lot easier.

~~~
dllthomas
Interestingly, this (narrow) case _is_ supported by C++ (auto) now.

Even C, kind of:

#define AUTO_TYPE(X,Y) __typeof(Y) X = Y

... though I'm not sure I'd recommend it.

Type inference in full generality can give you a bunch more, as I just went
into over here:
[https://news.ycombinator.com/item?id=6327327](https://news.ycombinator.com/item?id=6327327)

------
romaniv
This is the most overrated and overquoted piece of (bad) software engineering
advice ever. Sure, it's harder to read unfamiliar code than to write new code,
because you understand the reasoning behind the new code. That's exactly why
rewrites often make much more sense than labor-intensive incremental changes.

I've had many real-life situations where I successfully chain-deleted
thousands of lines of legacy code and replaced it with some sub-100-line
method that simply worked. All that without understanding how legacy code
works. You're wondering what's the secret here? Instead of trying to deduce
the undocumented logic behind legacy code I gathered current requirements and
implemented them in the simplest way possible.

In most cases the results were dramatically more readable, because I used
built-in language capabilities and standard libraries whereas the old code
spectacularly failed to do so. Also, I didn't have to worry about requirements
that were no longer relevant.
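
A hypothetical example of the kind of replacement being described (both functions invented for illustration): legacy code that hand-rolls what the standard library already provides.

```python
def legacy_unique_sorted(items):
    # Typical hand-rolled legacy version: manual dedup plus a manual sort.
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    for i in range(len(result)):
        for j in range(i + 1, len(result)):
            if result[j] < result[i]:
                result[i], result[j] = result[j], result[i]
    return result

def unique_sorted(items):
    # The rewrite: built-ins express the actual requirement directly.
    return sorted(set(items))
```

The rewrite is shorter and more readable precisely because it states the requirement ("unique, sorted") instead of re-deriving the machinery.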

~~~
enraged_camel
>>I've had many real-life situations where I successfully chain-deleted
thousands of lines of legacy code and replaced it with some sub-100-line
method that simply worked. All that without understanding how legacy code
works. You're wondering what's the secret here? Instead of trying to deduce
the undocumented logic behind legacy code I gathered current requirements and
implemented them in the simplest way possible.

Except one of the main reasons legacy code tends to be so long is that it
deals with tens/hundreds of small edge-cases or bugs that your "sub-100-line
method" simply cannot account for. So what you are doing when you re-write the
legacy code without even understanding what it does is regressing the product
by several years in terms of maturity and stability. Your new code will face
the same problems the legacy code did, except it will break. And then you will
have to start adding to it...

Now, it is absolutely possible that those edge-cases and bugs that the legacy
code dealt with are no longer an issue: maybe they were added back in Windows
XP days, and your users no longer use Windows XP, or something. In that case,
yes, go ahead and rewrite it. But you need to think about and understand what
it does first, and why.

~~~
barking
Exactly. It comes across as really arrogant to replace thousands of lines like
that without even looking at them. Maybe those old-timers' first iteration only
had 100 lines too.

~~~
nairteashop
I've done massive refactorings in the past as the parent is suggesting. But
before I do it, I try to figure out whether the code is hard to read because
the developer was bad, or because a good developer was forced to add support
for hundreds of edge cases over the years.

If I notice at a cursory glance that I can replace more than a few
ridiculously convoluted chunks of code with something simpler and be sure that
I broke no edge case for those chunks, I just assume that the previous
developer was simply incompetent and rewrite the whole thing to meet the
original requirements. (And perhaps down the line a developer better than I
will do the same with my code!)

------
RogerL
We have plenty of empirical evidence that you can rewrite from scratch - every
competing product in the marketplace is essentially a 'rewrite from scratch'
from the perspective of all the other products. Almost every startup poster on
here is 'rewriting' an app in that sense.

If you can't 'rewrite from scratch' (i.e. write a new app) you probably don't
have very good programmers (they are only comfortable bandaiding and throwing
a few routines in existing code), or, perhaps, management is tying their hands
in some way. So, I don't buy into the horror stories. Well, I buy that they exist,
just not that they should drive my decisions.

Of course, you should expect the cost of your new app to be about the same as
entering the business cold. It's particularly difficult to meet every
requirement/feature/bug of the old system. Orgs get fossilized around
features. If you were to download and install a new IDE you'd expect it to
have different features from the old one. You will miss some of the old
features, but presumably the new features more than make up for it (otherwise
why would you switch). Yet, when we upgrade apps, it's "remove nothing at any
cost". It doesn't necessarily make a lot of sense. It can take enormous effort
to re-implement some broken feature as opposed to just offering a new way to
do things.

None of that is to say I blithely throw code away. I did, once, amidst a bunch
of hue and cry of 'oh no', but it was the correct decision because the old
system was hacked together by a 'weekend programmer' (my boss at the time, who
was very smart but had learned programming from a book). I turned it into a
modular, decoupled system, and most of those parts got reused over and over in
different parts of our vertical. If you have a plan, and the old code doesn't,
it can pay off.

There's more to be said, but this is a wall of text already. Every day I work
in refactor mode, even while mostly adding new features. It's an extremely
powerful idiom. But I also have my eye on code that really needs to go to the
big bit bucket in the sky. No one understands it, every fix introduces new
bugs, and what it is trying to do is very well served by existing open source
code, or by code written from scratch.

~~~
obiterdictum
Note the article is from 2000. Joel mostly talks about big and mature desktop
software (big as in Microsoft Excel). The average startup's code is small,
narrowly focused, and has a (relatively) short lifespan, so it can be rewritten
from scratch by one or two people.

If you've ever worked with legacy systems older than 10 years, you'll have
noticed that they become victims of their own "success". You can't throw away
features added over the years, because a lot of users depend on them, and if
you tried to rewrite it, you'd have to rewrite bug-for-bug.

I worked on such systems, and I did try rewriting, and gave up because of the
sheer volume of work and the institutional lore that was required to do it. On
the other hand, I'm currently embarking on our startup's "rewrite", and things
are much, much easier, because the feature set is small and I can freely throw
away stuff that didn't work.

See also,
[http://en.wikipedia.org/wiki/Gall's_law](http://en.wikipedia.org/wiki/Gall's_law)

~~~
mathattack
It isn't just that people depend on the features; you frequently lose track of
which features were created for whom, and for what reason. Sometimes it's just
easier to keep things in motion rather than start over and wait for someone
to scream. (A case can be made for both.)

~~~
RogerL
Right. I do not argue that a rewrite is always the right thing to do; indeed, I
argue that most often it is the wrong thing.

But, answer me this. How many companies have imploded because their software
was not maintainable? That's the other half of this article (Joel only wrote
half an article, I contend). You can no longer make competitive bids for work
because your code is impossible to understand. It takes months for the simplest
change. Your customers leave in droves because your code is endlessly buggy,
and you are pouring money into the drain of bug fixes that just introduce new
bugs. Or, it is just a long drawn battle, as your profit margins slowly erode
away as each new feature becomes incrementally more expensive to implement,
until you are at negative return.

I say again: we have massive empirical evidence that total rewrites of very
large infrastructure work. If you don't do it, your competitors will do it
for you. And, of course, if you do it when there is no competitive need for
it, you will be flushing money and/or your company down the drain.

(edited to fix some grammar and clarify a few poorly worded points)

~~~
mathattack
I wish good data existed on this, but most would be confidential, and this is
very hard to measure to begin with. Ultimately it's a judgment call. There
isn't a black and white, but Joel is giving a Year 2000 plea that people at
the time were tilting the wrong way.

------
shubb
This is an interesting article because though the rewrite killed Netscape, the
rewrite became Firefox, possibly the most popular browser in the world.

It became that way because developers were able to run laps round Microsoft
(who clung to an old, bad codebase).

And because it was a redesign not just a rewrite, and the redesign redefined
what browsers did.

Firefox, as we use it today, is a rewrite of that rewrite, with, God forbid, far
fewer features than the original!

I wonder, with a little more money behind Netscape, and without anti-
competitive MS browser bundling, would Joel's piece look so correct in
hindsight?

~~~
hkarthik
I remember using IE when they started bundling it with Windows, and it was a
FAR better experience, with instant loading instead of waiting for Netscape to
load. Everyone likes to blame the bundling of IE for causing Netscape to fail,
but in reality, the initial versions of IE just worked better. Netscape's
engineers knew this, and probably realized they had to rewrite to compete with
the speed of IE.

Interestingly enough, these days hardly anyone I know uses IE or Safari even
though both are bundled with their respective operating systems. Most of these
folks install Chrome because they are familiar with it and it doesn't impose
any significant performance penalty.

Ideally, Netscape should have been optimizing their browser for speed all
along. But since they were the only kid on the block for so long, they became
numb to how slow things had gotten. It took IE coming in with a much better
experience to shake them into action. Unfortunately, by then it was too late
for Netscape as a business to recover.

~~~
emiliobumachar
For what it's worth, the bulk of IE's loading happened during Windows boot. The
user just wasn't informed of it.

I don't know whether Netscape could have done the same and didn't, or whether
it required some inside access to Windows.

~~~
shubb
In old-school Firefox, and maybe in Navigator too, pre-loading was implemented
as a tray icon. The tray icon would load at startup and could be used to launch
the browser. It still felt overly heavy.

------
freework
If I had superpowers that allowed me to delete one website from the internet
forever, it would be this article.

I find the "complete code rewrite" to be the most powerful programming
technique I have in my repertoire.

In 2007 I wrote an application called FlightLogg.in. It was basically a clone
of logshare.com that I wrote in my spare time with PHP. It was my first really
'big' project. You can see the code here: [https://github.com/priestc/old-
flightloggin](https://github.com/priestc/old-flightloggin)

If you look around that codebase, you'll see nothing but spaghetti. I started
writing the app in late 2007, back when you could say I was a 'noob' (so
noobish in fact, I didn't even use source control). At the time I thought I
was writing the most awesome code ever. By the middle of 2008, that project
had reached a state where it was pretty much 'done'. I then stopped working on
that codebase, and moved on to other projects.

Then by late 2008, FlightLogg.in's traffic had kept growing, and there was a
list of bugs and really cool features that I had thought up since I last
touched the codebase a few months earlier. I set out to continue development
on the PHP codebase. The problem was that in the few months I'd been away, I
had become a better programmer, and the sight of that spaghetti code
made me vomit.

It was less about the code being 'bad', and more about me not remembering how
things worked. The small set of features and fixes I wanted to make would have
taken me a few hours back when I was first working on this codebase. Since a
few months' time had passed, it was taking me much longer.

Eventually I decided to do a complete code rewrite. This was before I ever saw
that Joel Spolsky article. The new codebase is here:
[https://github.com/priestc/flightloggin2](https://github.com/priestc/flightloggin2)
It took me roughly the same amount of time to build the new codebase as it
did the old codebase.

Ever since, with each and every project I take on, I accept that the reality of
a complete code rewrite might come up. For my own personal projects, I do
complete code rewrites all the time. On the other hand, at the various jobs
I've had over the years, it's a much different story. If I came into work today
and brought up the idea of doing a complete code rewrite, I'd either get
laughed out of the room, or even worse, threatened with being fired, thanks to
this Joel Spolsky article. Thanks Joel.

~~~
trustfundbaby
Joel isn't talking about the kind of software that can be rewritten in a few
days or weeks. He's talking about "large scale commercial applications" ...

If you have an app with a complex codebase that is making money in a pretty
competitive space and you decide to stop everything and do a complete refactor
(the blocking kind in which you can't do anything else while it's going on),
then you find yourself in a situation where you can easily be surpassed by a
competitor. It's probably a smarter move to do a more 'gradual' refactor.

~~~
freework
It's not like I had to shut off the old site while I worked on the new
codebase... The old codebase was running perfectly fine, up until I shut it
off and replaced it with the new code.

~~~
aidos
There are plenty of projects where, by the time you've finished your rewrite,
the product will have moved on so far that you'll have to rewrite some of it
again. I seem to recall Facebook making several attempts to rewrite in a
language other than PHP, but the codebase just evolved too quickly.

~~~
etler
You don't rewrite for features. You rewrite for better maintainability and
productivity. After your rewrite you shouldn't have to rewrite it again. The
balancing act is that a rewrite should improve your implementation velocity,
so while it may be a setback, your velocity should be much higher than your
competitors, so over time you would be able to surpass them and then gain a
lead they cannot beat thanks to your higher productivity. As the technical
debt grows, your productivity will get slower and slower, so you won't be able
to keep up with your competitors anyways. That's the gamble.

Just because Facebook was written in PHP doesn't mean that the code was
unmaintainable. I don't know whether it was or not. The problem they had was
that it was unscalable, and they did something about it: they wrote a PHP-to-C
compiler, and that's a monumental task. So they paid their technical debt in a
different way, one they decided was the most efficient for them. If the
problem isn't scalability but maintainability, you may not have that luxury.
Unmaintainable code's problem is inherent in its
structure, and the only solution for that is refactorization or a rewrite. The
code may be so unmaintainable that even refactorization isn't really possible.
The best offense is a good defense. Deal with technical debt from the get go.
Sure, write your MVP in a weekend just to test it, but rewrite it early for
maintainability while it's still small and possible.

~~~
aidos
Fair points. I guess I was reacting more to the fact that the original example
probably wasn't of the scale (single person) to use as an example. The
Facebook case is definitely different in that they were building new features
too fast to keep up with.

My project (single developer) is undergoing a big underlying change (switching
from MongoDB to Postgresql) at the moment. I still need to keep development
going on the live branch while I do this work. I'm doing it because I'm having
maintainability issues with the data store that I've been putting off for a
couple of months. It's as you say: make sure you refactor before you discover
that refactoring is impossible. It's something I do regularly, and my business
partner has noticed that after a refactor of a component he gets other
features faster, so that's the only justification he needs.

------
zimpenfish
If you read jwz's output, it's not that "Mozilla decided to rewrite it", it's
more like "bunch of idiots who didn't know what they were doing took over and
tried to rewrite things whilst ignoring all the sane advice".

------
hvs
Great article that is still relevant today (sadly, I remember reading it when
it first came out). If you really, really feel the need to rewrite from
scratch, I recommend instead picking up a copy of "Working Effectively with
Legacy Code" by Michael Feathers [1]. It will give you ways to improve those
terrible code bases while not throwing out the existing code. Plus you'll
still get that "new car smell" of working on your code.

[1] [http://www.amazon.com/Working-Effectively-Legacy-Michael-
Fea...](http://www.amazon.com/Working-Effectively-Legacy-Michael-
Feathers/dp/0131177052)

------
TheRealDunkirk
Seems to me that a lot of what Joel was complaining about could be mitigated
by some well-placed comments in the code. "This function is 2 pages long! Oh,
this part does that, and THIS part does THAT. I see what's going on. This is
all good."

This seems to be something people don't even bother to argue about any more.
Everyone's given up! I don't write many comments myself, but no one I have
worked with over the past 20 years has written ANY. Scary. I've only recently
moved out of engineering to a "real" coding firm, but I'm still not seeing it.

I guess coders, in general, either think that everyone should immediately
understand WHY code should be doing what it's doing, or be fired, or they
think it's job security to not share these details.

~~~
pjungwir
That's sad to hear. Part of it is an ideology that code that needs commenting
should simply be rewritten. But I've written many paragraph-long (or page-
long) comments, especially at the class and module level. Where comments are
really necessary is to give a big picture overview how all the parts come
together. Or to explain the "why". My personal theory is that comments are
like chess annotations. Sure you could just read the notation, but the
comments show you what is unseen, like what would have happened in six moves
if white had taken that pawn. . . .
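
A small invented example of the kind of "why" comment being described: the code says what happens, the comment preserves the reasoning a reader could never recover from the code alone.

```python
def retry_delay(attempt):
    # Why, not what: the (hypothetical) payment gateway rate-limits bursts,
    # so we back off exponentially; the 30-second cap exists because an
    # earlier outage showed unbounded delays left orders stuck for hours.
    return min(2 ** attempt, 30)
```

Without the comment, a future maintainer might "simplify" away the cap and reintroduce the very bug it guards against.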

~~~
omegaham
This is a really insightful comment. I've struggled in the past with what the
purpose of commenting is. I've written some programs where comments were the
majority of the code, and it was actually more difficult to read than if I'd
just left it unannotated. I've also written quick-and-dirty programs that
gradually turned into behemoths, left them alone for six months, and then come
back to them and felt like kicking myself for not putting comments in.

I just had a crazy idea for commenting, though - I'm imagining an IDE that is
built for two monitors. The left monitor is your source code. The right
monitor is your comments. You wouldn't put your comments in and around the
code; you'd put them into this application that would be running on the right
monitor. When you select a line of source code, the relevant comments pop up
on the right side. They can be as complex as you want them to be. If your
class needs five pages of comments, you can put five pages of comments and not
have to worry that it's going to disrupt the source code. Then, when you
select something else, that comment disappears and is replaced by the comments
for that line. You could even have extra monitors (or sections of that comment
monitor) showing multiple things - for example, you could have the function's
comments along with the line-by-line comments.

Is this crazy, or more importantly, has this been done before? I think the
first thing that someone will say is that it's too much effort, but it doesn't
have to be used for everything. You don't need to make a five-page comment for
your for-loop, but you might for your massive class that encapsulates ten
objects and has forty-five functions that modify different things.

~~~
pjungwir
I've never heard of showing two different view styles of the same source code
document in two separate windows/monitors, but it makes sense. Perhaps this
could be done with literate programming, or LaTeX packages (where you use the
same file either to build the documentation or install the package), or even
Javadoc and its imitators, with an IDE that lets you collapse either the
comments or the function bodies. Maybe Light Table could have something like
this? But if your tooling had good support for it, like being able to click on
a function in the code window and make the doc window bring up that function's
info, that'd be pretty nice.

------
ronilan
The rewrite meant Netscape 6 used Gecko, which brought us Firefox, which
revived the browser wars, which brought us the WebKit browsers.

History is a twisted mess...

~~~
dubcanada
The real question is: would Firefox on the old Netscape 4/5 engine have run the
same and allowed them to do what they did with Firefox? Or would they have
eventually rewritten it into what Gecko is today?

~~~
yuhong
I think Mariner (the codename for the cancelled project) was supposed to be
developed in parallel with Gecko.

------
GVIrish
One of the huge pitfalls of the writing-from-scratch situation is that often
the original application didn't have adequate requirements. So the team
sets about building the new version without a comprehensive and detailed
understanding of what the old version did. That alone can lead to huge
timeline and cost overruns.

And if your organization didn't learn from its mistakes the first time around,
it'll make many of the same mistakes the second time around, except this time
they'll be concentrated in one monster release rather than a number of smaller
releases. Especially if the original developers have left.

------
brudgers
_It is easier to write an incorrect program than understand a correct one._

\-- Alan J. Perlis, Epigram 7

------
danso
I picked up the Pragmatic Programmer some time ago and among the great lessons
it had, the one that sticks with me every day at the code editor is: You spend
far more time reading your code than you will writing it.

With that in mind, it makes it easy to write tests and more comprehensible
variable names (still thinking and re-thinking how to best do
documentation...was checking out github's Ruby guide and Tomdoc yesterday
[http://tomdoc.org/](http://tomdoc.org/))

~~~
nahname
That looks like noise to me. There is almost four times as much code dedicated
to comments as there is useful code. If you write unit tests, they would
cover/explain all these cases. I don't see the point.

~~~
moron4hire
I agree. It would work a lot better if the function were called something like
"stringRepeat" or "duplicateText" or something other than "multiplex", because
that's really not what multiplexing means. Then, rename that one parameter
from "count" to "repetitions" and the function is almost completely self-
explanatory.

Not to mention, it probably doesn't deserve to exist at all. "text * count" is
simple enough that it should probably be inlined.
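
In a language with a repetition operator the point is easy to see (illustrative Python, standing in for the Ruby example under discussion):

```python
def multiplex(text, count):
    # The heavily documented helper under discussion reduces to the
    # built-in repetition operator:
    return text * count

# Inlining "text * count" at the call site is arguably clearer
# than calling the wrapper:
assert multiplex("ab", 3) == "ab" * 3 == "ababab"
```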

~~~
nahname
Comments are generally superfluous, sometimes incorrect and often used to
explain away bad code.

~~~
collyw
Depends on the comments. I had to go and fix someone else's scripts, with
comments like:

#open files and read contents

#loop through list

No idea what was in the file, why the contents were being read, or what the
purpose of the loop was.

Commenting on those things would have saved me a lot of time. (It took me a
week to fix something that should have taken a day, if it had been coded
better in the first place.)

(To put it into context, it was a previous worker's code, which interacted with
the database I had written. When I changed some things in my code, her code
stopped working. I didn't know exactly how her code worked, or the nature of
the files it was reading from the system, but I did know what working
correctly would look like. I am sure this is not an uncommon situation.)

------
chatman
This is especially true for dynamically typed languages.

An IDE helps a lot more deterministically with a statically typed language like
Java.

~~~
kyllo
Say what you want about Java, its awful "kingdom of nouns" paradigm, its lack
of nice functional features, etc., but it is a highly readable programming
language.

~~~
jeremyjh
I think all the boilerplate really does detract from the readability. Your
eyes sort of glaze over and you can miss small departures from the boilerplate
you expect to see.

~~~
crucini
Really? You have trouble with code like this?

    
    
        RegistrarInstallationsChoicesSubclass registrarInstallationsChoicesSubclass = millisDOMX509Grow.getRegistrarInstallationsChoicesSubclass();
        KeyedGSSPolygons keyedGSSPolygons = new KeyedGSSPolygons(incompatibleTransliterationNoPerfCombiningContention, iSO885915ConnectionlessAsyncDiagonal, dotDataetJDIMarshallingResourceszhObservablesDiagonal, native2asciizhDelegateEnvarLearned, insertStrategyRTFLearned);
        keyedGSSPolygons.addISO885915ConnectionlessAsyncDiagonal(iSO885915ConnectionlessAsyncDiagonal);
        AppendingGuiceJustificationMarshalling appendingGuiceJustificationMarshalling = instantiatorParenthesisForeachOutlineCounted.getAppendingGuiceJustificationMarshalling();
        AdaptiveReflogMerges adaptiveReflogMerges = new AdaptiveReflogMerges();
        adaptiveReflogMerges.addIishCoolbarTuple(iishCoolbarTuple);
        AppendingGuiceJustificationMarshalling appendingGuiceJustificationMarshalling = keyedGSSPolygons.getAppendingGuiceJustificationMarshalling();
        RanksPessimisticSeparableDialFirst ranksPessimisticSeparableDialFirst = new RanksPessimisticSeparableDialFirst(native2asciizhDelegateEnvarLearned, colorizeHmacComplexJanitorEnvar);
        ranksPessimisticSeparableDialFirst.removeIncompatibleTransliterationNoPerfCombiningContention(incompatibleTransliterationNoPerfCombiningContention);
        AdaptiveReflogMerges adaptiveReflogMerges = fixerAppletPerfEs.getAdaptiveReflogMerges();
        ReferencerBreakingViewersvAes256RenderingsGJ referencerBreakingViewersvAes256RenderingsGJ = new ReferencerBreakingViewersvAes256RenderingsGJ(membershipDriverConsistency, iishCoolbarTuple, sectorPartsGJForking, ranksPessimisticSeparableDialFirst);
        referencerBreakingViewersvAes256RenderingsGJ.addISO885915ConnectionlessAsyncDiagonal(iSO885915ConnectionlessAsyncDiagonal);
        DitherTWSDLGraphemeInserting ditherTWSDLGraphemeInserting = cnxnsCNTWSDL.getDitherTWSDLGraphemeInserting();
        Message12DL2ExposeReconcilablePerfEditionUnaligned message12DL2ExposeReconcilablePerfEditionUnaligned = new Message12DL2ExposeReconcilablePerfEditionUnaligned(dotDataetJDIMarshallingResourceszhObservablesDiagonal, sAXXMINamessvInconsistencyAnswerLoopingCommunicatorEdition, insertStrategyRTFLearned, dotDataetJDIMarshallingResourceszhObservablesDiagonal, nIDiffuseGraphemeINAes256Kind);

~~~
kyllo
Lulz. This is why you don't do graphics programming in Java.

------
eldavido
This article shows how much progress we've made in languages and tools over 13
years. It's vastly more difficult to read large C++ codebases with tons of
memory allocation, pointer arithmetic, macros, and complex locking schemes
than much of the Java/Ruby/other mainstream-language code written today.

Also, it strikes me that this was written in an era where most large-scale
systems were giant monolithic codebases sharing an address space, rather than
distributed groups of services communicating over a network. Distributed
systems are inherently easier to rewrite because a lot of the hard problems
around abstraction, state, representation, etc. already have to have been
solved by the architecture -- you can't exactly pass a naked void* over the
wire.
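
To illustrate that boundary effect with a hypothetical sketch: inside one
address space, components can hand each other raw objects (or a naked void* in
C++) with all of their hidden state, but once data crosses a network boundary
it must be given an explicit, serializable shape. Assuming a simple JSON wire
format and a made-up `User` record:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class User:
    id: int
    name: str

def to_wire(user: User) -> str:
    # Everything that crosses the wire must be spelled out explicitly;
    # there is no way to smuggle a pointer through JSON.
    return json.dumps(asdict(user))

def from_wire(payload: str) -> User:
    # The receiving service reconstructs the value from the explicit schema.
    return User(**json.loads(payload))

round_tripped = from_wire(to_wire(User(id=1, name="ada")))
```

The serialization step is exactly the forced abstraction the comment above
describes: the wire format becomes a contract, so either side can be rewritten
independently as long as the contract holds.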

------
semjada
So he was wrong then? NS6/Firefox seems to be doing alright today... Rewriting
your OWN codebase from scratch certainly makes it better with each
iteration -- as long as "from scratch" implies constantly referring to the old
code.

~~~
rythie
In 2000 Netscape had 19.25% market share, it took 8 years to get back to that
position with Firefox:
[http://en.wikipedia.org/wiki/Usage_share_of_web_browsers#The...](http://en.wikipedia.org/wiki/Usage_share_of_web_browsers#TheCounter.com_.282000_to_2009.29)

------
kbd
This is from 2000. People need to stop posting old articles without putting a
date in the title.

~~~
mattlutze
Relevance and recency are not always directly proportional.

------
etler
The new historical perspective on this is very interesting. The code rewrite
killed the company, but it gave birth to mozilla, which came back to topple IE
from its throne and help bring through a new renaissance of browser
development. If they hadn't rewritten their code, how would history have
changed? Would we be where we are today? How many years behind would that
setback have put us? The release of Firefox was instrumental in disrupting and
evolving the web. With a poor code base, could they have done that as
effectively as they did?

A company may be transient, killed by choices that hurt it in the short run
even though they pay off in the long run. But good open source code is
eternal; the quality speaks for itself. Their decision may have killed their
company, but it was monumental for the web. From a higher-level perspective,
they did the right thing, and it shows that good code is so powerful it can
even outlive the company that made it.

~~~
masswerk
On the other hand, it was precisely this standstill at Netscape that put IE on
the throne. That effectively made HTML/JS a proprietary standard, which
eventually brought web development to a standstill as well.

When browsers enforcing open standards finally became available (and -- saying
this as a once-ardent NS follower* -- what a buggy mess NS 6.0 was in the
beginning!), whether they would succeed was essentially a political question,
as they were incompatible with most of the existing web code base.

*) I used to write NS4.x-compatibility code and open-standards calls back then, even if I wasn't able to bill for them, as clients were only interested in "it works on IE, so it's fine".

------
lxe
I know I don't have the expertise to argue with Joel Spolsky about software
engineering, and while I agree that rewriting from scratch is a huge strategic
mistake, it is hardly the worst.

Have any of you ever worked for a large engineering firm with a sizable
operating budget? There are many "strategic" errors that happen on a daily
basis in those kinds of shops, ranging from software stack choices to
engineering processes drowned in bureaucracy. Rewriting code is hardly the
worst.

Also, even if rewriting code is bad for business, don't assume it's bad
overall.

You iterate, and learn from mistakes caused by the lack of experience you had
when the previous code was written.

You de-couple components during a rewrite and eliminate unnecessary features,
drastically improving maintainability.

You make it easier for other programmers (and yourself) to read the code.

You can alter core platform pieces, make tech stack alterations, and eliminate
reliance on legacy components during a rewrite.

The list goes on...

------
ht_th
At first sight, this statement seems odd. For sure, writing a text is more
difficult than reading it, and programming is writing a text. But a different
kind of text: a program is written to communicate ideas with a computer first
and with humans second, whereas an article or novel is written to communicate
ideas with humans only. What if we had a programming language that let us
write to communicate ideas to humans first, and had the compiler/interpreter
automate the communication with the computer?

Or could it be that a non-trivial program is essentially more complex than an
article or novel or any other regular text we encounter daily?

Or is reading a program similar to reading a text in an unknown language? Like
trying to read some Roman text when you don't really know Latin, but you've
access to a Latin-English dictionary and a good grasp of history and the
context and topic of the Latin text?

~~~
derefr
Reading and writing themselves are the least difficult parts of reading and
writing programs. The majority of writing a program is conceiving of a well-
defined model; after that, all you are doing is typing. The majority of
reading a program is attempting to reconstruct someone else's conception of a
model in your own brain from the typed description. Reading just the text
written for the computer answers all of the "what" (since if it didn't, the
program would not run), but none of the "why" -- and so makes reconstructing
that model very difficult.

~~~
jeremyjh
Yes, and even after you have enough working knowledge to support or enhance
the code, you will still find yourself continuously surprised, because your
model is not a perfect understanding of the author's intended model, AND the
code itself is not a perfect representation of that model to begin with.

------
cehlen
Code may not rust over time, but it does become harder to maintain and, more
importantly, harder to find quality, innovative people willing to work on it.
I would guess that very few of the people arguing in favor of Joel Spolsky's
article would be willing to support 1970s COBOL banking software for a
living.

------
YZF
It's not black and white. There are situations where investing in your old
code base is throwing good money after bad and there are situations where the
old code base can be better. You need to make educated decisions not blindly
follow rules.

Refactoring can be a problem because you're always working towards some local
minimum. It's a little bit like the decision to renovate an old house vs. raze
it and build a new one, or to buy an old car/boat and repair it.

There's a good discussion of various factors here:
[http://programmers.stackexchange.com/questions/6268/when-is-a-big-rewrite-the-answer](http://programmers.stackexchange.com/questions/6268/when-is-a-big-rewrite-the-answer)

------
drderidder
There's a time and place for everything. Firefox and Mac OS X are examples of
successful rewrites. I've had some positive experiences with re-writing
certain things from scratch. Not to downplay the issues Joel raised back in
2000, but if a product is tied to a technology that's limiting its
effectiveness, and better alternatives have become available, a next-
generation implementation bears consideration. I think companies that fail to
iterate and evolve this way will ultimately stagnate.

------
mtdewcmu
I can't imagine how a codebase could be so horribly bad that it doesn't do
even one thing right, and you have to throw it out wholesale. That is,
assuming it was good enough to ship at some point.

I don't like to feel that I'm wasting my time doing rework, and vulnerable to
getting in way too deep with no backup plan. I'd definitely want to rewrite in
pieces without ever discarding the whole, even if eventually there might come
a day when every single piece has been rewritten.

------
ryanackley
You usually only hear the catastrophic failure anecdotes. Large, complex, yet
successful rewrites I know about:

* The Windows kernel. From my understanding, Windows NT was a completely different kernel from Windows 95 and eventually replaced it in Windows XP.

* ColdFusion. I used to work with an ex-Adobe guy who told us the story of how this was rewritten from scratch at one point, from C++ to Java.

* I used to work on the SQL Server team at Microsoft. We completely rewrote the Reporting Services product between the 2005 and 2008 releases.

~~~
masswerk
Windows NT was an incremental update to what would have been the next VAX OS
(to my humble knowledge). See "Dave Cutler and Windows".

But another good example of a very successful rewrite would be Macromedia's
ActionScript for Flash 5 (turning it into an ECMAScript-like language).

------
kailuowang
I always deem readability the number one priority in writing working code.
Readability is not just about understanding what the code in front of you is
doing, but also how easy it is to find where a particular piece of business
logic is implemented. Code written with readability as the top priority has
the best chance of avoiding the need for a full rewrite. And even if it does
need to be rewritten from scratch, that's so much easier with a readable
codebase.

------
bluedino
How does this apply to a platform change? Say, going from an application
written on '80s UNIX hardware that's still going strong today on a 9-year-old
Itanium system?

Start writing replacements for individual modules? The original programmers
are set to retire in another 5 years and then it's going to get really, really
ugly.

------
igl
April 06, 2000.

Any progress report on this?

~~~
trebor
This article has been posted around 3-4 times this year alone. Most people
should already be well aware of the cost of a full rewrite due to all the
horror stories. Yet, sometimes the best software has come from a full rewrite.

~~~
moron4hire
Unless the original system just did nothing of what it was supposed to do, a
full rewrite being successful would be in spite of itself.

------
lowmagnet
(2000)

------
yuhong
An example is legacy color parsing. I personally figured out where legacy
color parsing is in the Netscape classic source:
[http://stackoverflow.com/questions/8318911/why-does-html-think-chucknorris-is-a-color/12630675#12630675](http://stackoverflow.com/questions/8318911/why-does-html-think-chucknorris-is-a-color/12630675#12630675)

It is so subtle even Netscape's own Gecko rewrite did not get it completely
right the first time:
[https://bugzilla.mozilla.org/show_bug.cgi?id=121738](https://bugzilla.mozilla.org/show_bug.cgi?id=121738)

------
powertower
Does anyone have a good way to distinguish between -

A) Rewriting the codebase.

B) Fixing & modularizing the codebase - breaking it up into self-contained
units/parts while improving, cleaning, and modernizing them?

Because if you set aside the edge case of completely starting from scratch,
B) can easily be seen as A), and vice versa.

Is there a thin line here?

(*I have a 200K-line C#/.NET app that will be evolving and pivoting with its
next major version.)

