Over a few years Netscape's code became so unmaintainable that they had to start from scratch, which cost them years. Joel wrote in another famous article that this was a major mistake. However, if the code is a giant "pragmatic" mess with no architecture and no unit tests, it becomes extremely hard and dangerous to refactor.
IE also got a lot of mindshare among developers because it actually tried to implement some standards like CSS, which Netscape completely disregarded. Netscape's "pragmatic" alternatives to CSS, <spacer>, <layer> and so on, luckily died together with Netscape.
Many developers started making IE-only pages because it was almost impossible to get anything to work in Netscape 4. IE6 is pretty unpopular among developers today, but this is nothing compared to how the Netscape 4 generation was reviled back in the day by anyone having to develop for it.
> Remember, before you freak out, that Zawinski was at Netscape when they were changing the world. They thought that they only had a few months before someone else came along and ate their lunch
Also remember that they lost it all, and someone did eat their lunch. So maybe their strategy should be reexamined?
What all software architects forget is that, most of the time, the code we write exists to solve problems in life. Those problems have their own life cycles; some are long, some are short. We like to imagine that what we write will be a masterpiece, a cathedral or pyramid that lasts a thousand years. Unfortunately, that is not the case. Most of the time, our programs are just solutions among solutions to a series of bootstrapping problems. And unless someone ships even a lousy but popular solution to a problem, potential competitors might simply ignore the market and no progress happens in the field. That is a loss to human progress.
It is the same as maintaining old buildings: if conditions are right, you may just tear one down and rebuild whatever you deem fit by today's standards. But don't forget that the original building served its purpose.
I personally have affection for Netscape 1.0. I still remember how people in my lab in Taipei ftped to Netscape's download server, waited for the moment they uploaded the tgz file, then downloaded it and installed it on Sun workstations. Using it made me feel that building stuff on the internet was better than studying physics, and that decision changed my life.
However, the rapid success of Netscape owed a great deal to the fact that the basic architecture and protocols of the web had already been designed by others. I give Netscape credit for the <img> tag, but apart from that, almost everything Netscape designed on their own was an ill-conceived disaster, from <font> and <frameset> to <layer> and JSSS.
So I think the correction to the duct-tape approach is that it works best if somebody else already designed the basic architecture, e.g. if you are copying an already established product. It does not seem to work very well if you have to design something original.
The biggest reason (by far) that Netscape faded into obscurity, though, is that Microsoft bundled IE with Windows.
More importantly, these strategic decisions about the product were not made by guys like jwz. jwz is a hacker. Like many hackers, his opinion was often at odds with the strategies of the Business People in charge.
I don't think it's fair to use these strategic missteps to discredit jwz or Joel's point about the balance between purity and pragmatism in writing code.
The <img> tag was a nice pragmatic solution, but from there it went downhill quickly. Already around version 2, Netscape was adding badly thought-out features like <font>, <frameset> etc., which took a decade to get rid of. Rendering bugs and inconsistencies were never fixed.
They could probably have recovered and consolidated by stepping back and focusing a bit more on quality and sound design for a while. But they didn't.
It seems every single project Netscape developed was either abandoned or turned out so buggy it was unusable. Clearly no single programmer can be blamed for that, and I believe Netscape had lots of brilliant developers on board. (And I don't doubt that JWZ was a brilliant programmer.) It must have been the overall "duct-tape" mentality that was to blame.
Some of the other replies blame various parties like Collabra, the "business people", strategic mistakes, Microsoft and so on. Sure, Netscape made some strategic blunders, but it is still one of the few examples where genuinely bad code quality was a major reason for the downfall of a company. (Quark would be one of the few others; like Netscape, they squandered a near-monopoly by releasing increasingly buggy software and pissing off their users.)
Did those systems start out that way? Maybe not, but after a few years and a couple rewrites I'm sure they came to the same conclusion that most programmers do when they work on things for a long time: "I wish I could go back and write some tests / automate some stuff / add better debugging, etc." I know I always feel that way. I do now, after about a year and a half of hacking together our site. I'd kill for a decent test suite and fully-automated deployment. Kill!
Both styles of programming have a purpose. Maybe we'd like to avoid multi-threaded architectures, but it isn't always possible. When you have 6 weeks to launch, maybe unit tests aren't necessary, but eventually not having them will start doing more harm than good.
The more I read the writings of celebrity programmers / entrepreneurs, the more I come to realize that most of what they write reads like an attempt to justify their way of thinking as being The Right Way. Why can't we all just agree there is more than one way to skin a cat and each probably has an applicable use case or two?
Then why don't you write one? I suspect you don't have the time. Well, back when you created the system, you did not have the time either. So the bottom line again seems to be: it is not actually THAT important. Otherwise you would make the time.
The payoff for things like unit testing, automated deployment, and continuous integration comes over the very, very long haul. If your cost/benefit analysis only ever looks 3 or 6 months out, they'll never seem like a win.
So if you only ever do the urgent stuff and never anything strategic, 3 years later you'll realize that if you'd just sucked it up back at the start and done that stuff, even if meant putting off otherwise urgent features, you'd be further ahead than you are now, because it would have more than made up for the initial investment.
So it's not that you don't do those things because they're not important, but rather because they're never urgent, and because most people's time horizons, especially in a startup, are fairly short.
I kind of see your point, but I find it difficult to deduce a binding rule from all of this. Sometimes it is important to have tests, sometimes other things are more important. You still have to decide on an individual basis.
Thing is, the TDD and technical debt evangelists are typically consultants. Consultants usually earn more money the longer a project takes, and their income is not tied to the yields of the project. Just something to take into consideration imo.
Well... that seems like an oversimplification. When I consult, I set the price roughly based on my estimate of how much time I'll take, but ultimately it's a matter of how much value the client sees in the outcome. Note that the expected outcome and the price are agreed beforehand---if I bump into an unforeseen obstacle and take twice as long as my estimate, I can't ask for twice the price; basically, the more I work, the less I make per hour.
So (1) I have a very strong incentive to make sure my part works in time, and (2) there's less incentive to finish the work earlier than the initial estimate. Thus I tend to put in the effort to write tests and debugging aids within the time frame (or, in other words, I try to negotiate an initial time frame that includes those tests).
Certainly there's a different pressure on employees; they may adjust priorities and time frames more frequently.
So, it is true that consultants make more money the longer a project takes, but the incentive is in making sure it won't take longer than expected, rather than in dragging it out deliberately. (After all, if the consultant is doing OK, there are projects on the waiting list, so there's not much point in taking longer on one project than necessary.)
I will, however, say that after working on the same code base for over 7 years now, and watching the company grow from 15 employees to 400, I can't imagine ever working at a place that didn't have a large investment in unit tests, tech debt elimination, automation, etc. Without that stuff, our products almost certainly would have collapsed under their own weight by now, and our ability to ship predictably and on-time would be gone. Even within the company, we have some groups that have done better than others as far as automating tests (both because of team personality and because of technical issues that make certain types of features harder to test), and it's quite obvious that the groups with the best testing are the groups that are able to make much more predictable progress and that are able to ship on time. The groups with less-good testing tend to be prone to fairly massive schedule slippages due to a ton of late-stage regressions that only get caught when they ship their code out to their internal users.
Once things get to the point where no one person can reasonably understand the full implications of their changes, because the system is just too big and complicated, if you don't have unit tests you're in big, big trouble, and you need to reduce tech debt so you can keep things as comprehensible as possible. Even then, of course, you have to constantly decide how much to invest in testing and infrastructure and cleanup versus how much you invest in forward progress, and there's never an obvious equation that will give you a right answer.
If your code base and team are small and likely to stay that way, such that you can still mercilessly refactor and change the code without introducing a bunch of hidden bugs, then testing doesn't matter as much. If you ever expect the code to get to the point where that becomes less true, and where the possibility of introducing errors increases, then it starts to matter a whole lot more.
Hypothetically, let's assume we built the same product with two teams, one that skipped unit testing and one that did a bunch of it (call them Team A and Team B). From my experience, what essentially happens is that Team A ships version 1 first, ships version 2 first but takes about as long to build it as Team B, ships version 3 at about the same time (since it takes them longer to build it), experiences a massive schedule slip in version 4 (since the complexity catches up with them, things become buggy, and they start playing whack-a-mole with bugs), and never really ships a version 5, because their code has so much tech debt that no one can change anything safely without unintentionally breaking something else, and they start contemplating a complete rewrite of the code base. Again, totally contrived situation (it doesn't have to go that way, Team B could still totally screw things up anyway, etc.), but that's roughly what I've seen happen, both at my company and at others.
I don't think it's fair to say that consultants push TDD and tech debt reduction because that means the project will take longer: that's a bit overly cynical. Many, many organizations use unit testing and such in house because it has a huge long-term benefit (as well as generally more predictability in the short term, which is often more valuable than absolute speed), not because some consultant told them to do it.
A lot of unit tests make sense, but I suspect they also offer plenty of opportunities for idling time away.
If the test is testing something (like a getter or setter) with basically no chance of breaking, then it's a waste of time. If the test is likely to be fragile or non-deterministic, it's a waste of time. If the test is just too hard to write, and it's not too hard to just test by hand, then automating it is probably a waste of time and you should just QA it by hand every so often.
Finding the right balance tends to come back to the old experience and skill thing: you need to have some intuition about which tests will give you the most value (because that part needs to be rock-solid, or because it's hard to get right, or because it's high-change) and which tests need to be thrown away or never written because they aren't worth it.
Taking any development process too far tends to work out poorly, and taking any metric (like test coverage) too seriously is always a bad idea. That said, I've rarely seen unit testing taken way too far; not testing enough and ending up with buggy, regression-riddled software is a far more common failure mode.
In the case of unit testing, though, long term is short term enough for me to recognise it as urgent.
Another possibility is that it is important, possibly even more important than the other things they are working on, but isn't getting prioritized for other reasons. They've succeeded thus far with the way they have prioritized things, but that doesn't mean that they made all the right choices, it just means they made enough right choices to survive to this point. The future may prove tdavis painfully right in his concern about not having test cases. Another company that may not have made enough other choices correctly may be able to cling to having tests and deployment automated as the thing that saves them. I don't think you can draw any real correlation except that it might not be that important, and only time will tell.
Might have been better spent. Might have. Saying that it was better spent is just begging the question.
That he didn't get the pump can be blamed as much on poor risk mitigation as it can be on avoiding unnecessary expenses. Having a cavalier and dismissive attitude toward mitigating future risks can result in a demonstration of exactly how faulty one's prioritizations really were.
Don't fall into the trap of assuming just because things were done out of assumed necessity, they were the right things to do.
There are all sorts of risks, and it isn't obvious when it is worthwhile to get insurance and when it isn't. One risk is never launching.
Right. I'm not saying become paralyzed due to the risks. And I'm not saying do dumb things that mitigate a risk with an even bigger risk. What I'm saying is don't assume your conclusions. Things that other people might argue are important might not seem that important to you, given whatever perspective you have at the time. It is possible part of the reason they argue for using a certain practice is because they, too, didn't see it as important until they got burned hard for their presumption.
This is not to say they are important. It might be that people are just squawking to sell books or consulting hours or seats at a conference. But, to use a really dumb metaphor, if that many hens are squawking about the same thing, it might be time to look in the barn and see what all the noise is about.
Automated deployment has been less of a priority because I don't really have to re-deploy that often (meaning spin up a new server). However, if/when I do need to, it will be automated. Even if I haven't written the scripts yet, I will do it before the new machine comes up.
Back when the system was created I had never been a "TDDer" before and the tools that exist now did not exist or were not viable for production, especially in terms of automated deployment and dependency management. If I could go back and do it again (or just murder someone) I'd love to have good test coverage and automation tools. It isn't that it's not important, it's just that the ship has sailed on half of it and the other half is situation-dependent at this stage.
In summary, it is THAT important. I will go as far to say that I think these two items in particular are more or less vital to the long-term viability of a software project. Anyone maintaining a large, aged project that contains no test coverage and no automated build / deploy tools has my deepest sympathies; it's like stacking an infinite house of cards.
Amen. I think a lot of the heated arguments about the value of testing arise from the fact that not everybody is using the same toolset. I'm sure unit testing in C is a major pain in the neck. (Can anybody link me to a document suggesting how to do it?) I know that unit testing in PHP is no picnic. But testing in Ruby is a joy, because the Ruby community has lavished attention on the subject over the last five years and the language, the tools, the idioms, and the culture are highly developed.
... that's just a little presumptuous, with respect to the concrete facts and motivations involved...
You can never say this is the ONE true way to develop. Do what works for you, but just be aware of other points of view.
Duct tape programmers have to have a lot of talent to pull off this shtick.
Oh, I see. What matters is talent, not duct tape at all. Untalented duct tape programmers do every bit as much damage as the untalented design-pattern programmers he scourges. So what was the point again?
I think "duct tape programmer" should be derogatory, while "practical programmer" or "pragmatic programmer" would be more apt for Joel's idols.
Most of the piece seems to be about the benefits of keeping it simple and doing the obvious thing that will work rather than outsmarting yourself. All well and good advice, easily applicable by anyone with the confidence to face down blowhards who would rather feed their leet-programmer ego than actually meet a customer's need.
But then it goes into not writing tests and doing complicated bit-munging to save a bit of time or space (both of which you would think are the opposite of the previous advice to keep it simple and accept your own human failings), which is stuff you can only get away with if you are both talented and lucky.
If he dropped the last paragraph and the bit about not writing unit tests (which, the theory goes, will save you time assuming, again, that you're not a lucky genius) then this would hold together somewhat coherently.
So what's his point? Be talented.
Still, it was nice to see him get excited about something not directly in that vein.
Edit: It was also by reading a Spolsky post way back when that I originally discovered PG's essays. So I can't ever get too mad at him.
It's definitely one of the things I'm most impressed about with 37s and Fog Creek, that they've managed to extract so much business value out of their blogs.
Reading Spolsky back in '03-'04 was what made me want to become a product manager, which I was pretty quickly able to do, which experience taught me what little I know now about the software business (you know, besides actually writing and shipping code, and, uh, getting funding, neither of which appear to have any correlation with success in the business ;] ).
I actually think Spolsky's weakest on the hard tech content, and strongest on the product marketing side.
I agree about Spolsky & Co. not being that technically strong, which is a little ironic given how much he's written about attracting the best programmers, etc. Still, although I don't read him regularly, I almost always enjoy when I do. And a lot of what he says about the software business is really helpful. Chris Dixon linked to a post of his the other day about lowering the price of complementary products which was a great example.
Uh... wasn't that his Amazon affiliate link? If he gets a cut each time somebody buys the book he keeps telling us to buy, isn't it still his marketing?
I've worked with a great many 'theorist' coders and they never get anything done. They spend too much time abstracting into nothingness. You know... the kind of guys who remind you of your 3rd grade grammar teacher, making sure you know when to use 'whom' vs. 'who'...
While I think eventually one would refine their product so that it uses best practices, I would say that having customers and a product should definitely be a prerequisite.
"all of the things you punted, ignored, assumed, patched over, and otherwise haphazardly threw together in version 1.0"
The Second System Effect is specifically about new features.
Like writing a custom compiler for your web app?
After he jumped that shark I don't read anything he writes anymore.
...how'd you get that quote then? Or did you only read enough to get something to complain about?
Rhetorical question: what percentage of an article needs to be read for the article as a whole to be classified as having been read?
Wasabi is duct tape.
In fact he made it quite clear in the last paragraph that these duct tape programmers are a rare breed. Maybe you never read that far though.
So it's not completely crazy.
So, not that crazy.
You're right, though, it's not that crazy of an idea.
"They xor the 'next' and 'prev' pointers of their linked list into a single DWORD to save 32 bits, because they’re... smart enough, to pull it off."
How is that not even slightly complicated?
Presumably, the duct tape programmer is doing that because it is the difference between making the product go and not making the product go, not because they love bit packing. It's not a technique I'd adopt today, but Zawinski (just to choose one example from his repertoire I've read about) was trying to make machines with, say, 8MB of RAM able to read thousands of email messages. You get a bit nutty under those constraints, or you ship slow crap. There isn't much of a third choice. (Fast and featureless, maybe.)
(I think I can bid lower than 8MB of RAM, too, but I'm a bit fuzzy on netscape timeframes vs. ram timeframes. I think 4.0 was in the 32-64-128MB era, putting 3.0 a ways back, but I'm not sure.)
I think the thing is, articles like this tend to create some idealized programmer that is just a conglomeration of attributes the author likes even if they are mutually exclusive. To me, avoiding complexity and doing bit manipulation are mutually exclusive.
It's like saying you should use left shift (or is it right...?) instead of dividing by 2. OK, it may be faster. Or the compiler may just do the same thing regardless of how you type your code. The point is that "/ 2" means divide by 2 to anyone at all familiar with code. Unless you have some really compelling reason to do otherwise, you should use "/ 2".
Using shifts for division (or various other bit manipulation) may be how your idealized programmer shows their classical training, but don't kid yourself into thinking that bit manipulation fits into all your other ideals for programmers.
Joel's idealized programmer also avoids unit tests. Are you serious? How can this possibly be a good idea? No, your customers don't care if you wrote unit tests... in the same way you don't care if your architect does whatever it is architects do to ensure the accuracy of their work. But that's just the point. You don't care (nor should you) about how they ensure accuracy. You care only that they do. So no, your customer doesn't care if you wrote unit tests, but I assure you they care if your software crashes or gives inaccurate information.
Of course, no one ever creates an idealized programmer without creating their opposite. Joel's "ideally" bad programmer multiply inherits from 17 sources. Does any sane programmer really do this? No. Of course not. Why bother mentioning it? It's like saying an idealized pilot is not like those other pilots that intentionally crash their planes. Well... no one intentionally crashes a plane. Don't bring up absurd examples to prove your point. If real life doesn't prove it, then it's not a valid point.
The simple fact is that when I look at my own real-life, deployed-in-production code, I find this: The code I wrote just to get a problem solved in whatever way possible (duct tape) becomes more and more of a liability as the requirements change. With the code that I spent the most time designing (assuming I eventually came up with a good design), the more the requirements change, the more I see the beauty of the design. When a change in requirements can be fixed with a find/replace, it's a job well done. Duct tape code leads to duct tape maintenance. Duct tape maintenance leads to thedailywtf.com.
I have no problem with emphasizing the importance of shipping software. I have a problem with people saying "real programmers use butterflies" when they aren't writing a web comic.
I don't think there's a single "real programmers" article in the universe that is internally consistent (i.e., doesn't advocate any mutually exclusive practices). Like I said, it's an ideal, constructed out of everything the author could find in their mind, whether it fits together or not. This wouldn't be a problem if the author admitted even a slight possibility of exaggeration or inconsistency, but they never do.
Now... I think by now I've probably exaggerated and broken internal consistency enough for one day, so I'll stop here.
I can see the benefits of getting the 1.0 to market first (if buggy), getting some market share and using that lead time to either iron out the bugs or to rewrite so you don't have to put up with duct tape maintenance.
I've been in a situation where the users started using the prototype because, despite being buggy as hell, it did stuff light years ahead of what they had before. So imo duct tape 1.0 is ok.
Your use of "assuming" is interesting. Sometimes it's hard to know if it really was a "good design" until after the requirements change...
"The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague." - Dijkstra
If you don't understand how your object model interacts with your threading model, you're DOOMED.
(1999+) But now I've taken my leave of that whole sick,
navel-gazing mess we called the software
industry. Now I'm in a more honest line of
work: now I sell beer.
Selling beer => flirtation => sex => sperm + egg = Building a future human being!
But seriously, Joel is on crack.
"One of the best programmers I ever hired had only a High School degree; he's produced a lot of great software, has his own news group, and made enough in stock options to buy his own nightclub."
-- from http://norvig.com/21-days.html
The problem is not with a specific language or technology -- it's using the bleeding edge technology, when the boring one will do.
There has got to be a better way than being a "duct tape programmer." It seems to me that one can practice good design, architectural grace, and hold true to a variety of other tendencies that seem theoretically and aesthetically appealing (the latter is very important; every good programmer I have ever met sees an artistic aspect to programming, even if it is not necessarily the central or principal one - it is a craft) without being the guy that never actually puts out any concrete deliverables.
I think this is just an angry, bitter overreaction - and a very understandable one that I fully endorse - to the dogmatism of many test-driven development acolytes and pig-headed "patterns" people.
Abstraction, interfaces and unit tests are not a leisurely activity for academic developers. We use them to make the code less complex and easier to maintain. The cost of development isn't the initial code base; it's the fixes and additional features people want AFTER the initial release(s). With these in place, going back into the code and safely making changes or adding features takes less time.
I had an application without automated testing; it cost the company almost 2000 man-hours to test the system every time they made a release.
Design patterns, Joel, are repeatable patterns within code. They exist so that when another developer goes into the code, they can see what the heck the original developer was trying to accomplish.
To summarize, I would suggest you outsource some code to the Far East. They will get it done really fast for you. And yes, it will only work 50% of the time. I love buying products that only work 50% of the time, and unfortunately I don't get to pick which 50% works.
Joel isn't directly bad-mouthing unit testing or multithreading, but as developers we tend to think about all of these cool toys we can use and "ooooh" and "ahhhh" instead of actually shipping code. Try not to get so hung up on the specifics.
"Just get it done programmers" are a scourge. Most of them have no business deciding which corners to cut, let alone how much to cut them by. There is definitely a balance to achieve. Are you going to ship a "3.0?" Are you building a company to last 5 or 10 years? Or one to last as long as it takes to make the money back and sell? Is this a product you're prepared to get dirty fighting competition with or do you just not care? You never want to over-engineer but cutting too many corners is far worse. If you're a low investment startup with a decent idea, do you honestly think you can put out something that works 50% of the time and even have a shot at a 2.0? Maybe if you've got a proven wizard that spins money out of thin air you duct tape the whole thing like he thinks you should.
If you have investors and you slam out a mediocre 1.0 in like 3 months, will they really sit back for 12 months while you "engineer" 2.0? Or do they want 2.0 in another 3 months?
Not to say that the latter has no merits -- there are situations and cases where they are 'net win' good things. But when overdone or done poorly they make a codebase much harder to understand, troubleshoot, extend or fix.
I've seen this first-hand many times, though I notice it most often occurs in large corporate Java shops rather than with smaller companies or codebases or more nimble/concise languages.
By all means, ship. Do what you gotta do. But a code base that doesn't have tests cannot safely be refactored. This technical debt must eventually be paid by the product owner in cash and the code's maintainers in sanity.
I've been refactoring code for 20+ years, the overwhelming majority of time without any automated tests, and I'd say offhand 99% of the time it causes no bugs, and in the occasional case where it does cause a bug (because I am imperfect and sometimes make mistakes), I almost always soon find it during the same coding session and fix it.
The key is to understand the code well enough to know what affects what and how. Hold that model in your mind and you're golden. Lots of time saved not writing tests, updating them, fixing them when they break, etc.
Note that this is not an argument against tests in general, just an argument for there being cases where you don't miss them and they would be a net loss if you had them due to all the extra make-work required. I think there's a lot of kool-aid drinking going on among people who themselves probably lacked the ability to do "naked" refactors well. To those folks I say, "Great, have fun storming the castle!" but don't assume that other folks who haven't drunk your kool-aid are constantly banging their head on the wall breaking the code or living in fear of mysterious hypothetical bugs due to a lack of tests. A really excellent 'old fashioned' sort of test is to just run the fricking code -- did it work? did it do what it was supposed to do? data look good? k, move on to the next one of the thousands of other problems you have to solve and tasks you have to do in life. And use version control, so if you retroactively do discover a problem, you can review the diffs, or rollback, or do a tactical patch against the branch, etc.
I do agree with your statement, "By all means, ship. Do what you gotta do." And I agree that that attitude may cause you to at least temporarily incur technical debt, and you generally want to pay that down as soon as feasible. (backing out ugly hacks to replace with more elegant or easier to read implementations, etc.)
It's nice to hear someone with experience from before the "Unit Test is compulsory" explosion. Programmers should always test their work, but testing comes in much more of a diverse range than mere Unit Tests.
There are plenty of cases where Unit Testing is 'embarrassingly' appropriate. These pin-up applications blind Testing advocates to the fact that Unit Tests are often inferior to other methods, or simply not possible.
An example where completely automated tests are impossible is PDF generation. One cannot 'Unit Test' this. One has to build a framework to take test data, generate fewer than 100 PDFs, and then a human has to eyeball them. Humans cannot eyeball more than 100 images and still perceive subtle errors, and fewer than 100 output images cannot exercise every code path of even simple applications.
Often I was working on a part of the render pipeline that was not currently exercised by the existing tests. Do I create a whole new test suite to generate test images for each branch condition? If it was important, yes, I created a new end-to-end test. But if it was not, then I adapted some existing test input and used my best judgement and my knowledge of the internal state. This test did not last beyond my short-term memory and my own set of eyeballs. If a problem occurred later, I would recreate the test from memory.
This is still TDD, but it is far less of a straitjacket than mandatory automated testing. The tests are effectively 'thrown away', but the tested code remains. I would also say that the coder knowing which portions to test is superior in many cases.
Similar to http://en.wikipedia.org/wiki/Embarrassingly_parallel. Network stacks, account balances, and frameworks are all embarrassingly unit-testable.
And agreed, there are situations where, like you said, it's embarrassingly appropriate to have tests. To me the classic case is where you are publishing a code library with thousands of real users across the internet, with real apps already built against it, themselves already in production, etc. It would be downright stupid of the maintainer not to have a suite of automated tests that they can execute, and that must pass, before every release -- so the maintainers can catch regressions and resolve them before they break apps downstream.
But the whole 'you must write tests always, before any application code' thing strikes me as insane and masochistic. :)
Our OS team works differently from our apps team, for example, because if something in their stuff breaks it's a big deal. They also really worry about backwards compatibility, and whenever I touch their code I have to create an IPrinter34 to not break things for old clients (who still want their IPrinter12).
In our apps team, though, we keep things cleaner: we kick out the IPrinter23 and keep things as IPrinter so people can read the code more clearly. Backwards compatibility isn't so much an issue for us (apart from obviously considering updates), as we release one single unit that replaces all our files. If somebody has an old OS then it's not supposed to work anyway, so the app going titsup is the correct outcome.
Therefore I don't have an opinion on this subject either way, in some scenarios you do IPrinter34 and in others IPrinter.
When people talk with strong opinions on code they should probably start by announcing their own applications of it.
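The IPrinterNN pattern described above can be sketched roughly as follows. The names and methods here are hypothetical, modeled on the comment's IPrinter12/IPrinter34 (the real code is presumably COM-style interfaces in another language, but the idea translates):

```python
from abc import ABC, abstractmethod

class IPrinter12(ABC):
    # The frozen v12 contract: old clients code against this
    # and must never be broken.
    @abstractmethod
    def print_page(self, text: str) -> str: ...

class IPrinter34(IPrinter12):
    # New capability is added by extending v12, never by changing it,
    # so existing IPrinter12 callers are untouched.
    @abstractmethod
    def print_duplex(self, front: str, back: str) -> str: ...

class LaserPrinter(IPrinter34):
    # One concrete implementation satisfies both old and new clients.
    def print_page(self, text: str) -> str:
        return "[page] " + text

    def print_duplex(self, front: str, back: str) -> str:
        return self.print_page(front) + " | " + self.print_page(back)
```

The OS-team approach pays an ever-growing interface tax (IPrinter12, IPrinter23, IPrinter34, ...) in exchange for never breaking a shipped client; the apps-team approach collapses everything back to one IPrinter because they ship and replace all the callers at once.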
Mozilla Firefox was forked from the rewritten Mozilla codebase.
> Netscape 6.0 is finally going into its first public beta...Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.
Re-writing a codebase to spawn a totally separate application is not the same as doing it to re-release an enhanced version of the same application (with a bumped-up version number).
BTW there are some great JWZ links here http://www.reddit.com/domain/jwz.org
So that means the majority of us are duct tape programmers, right? But that doesn't fit, so what's wrong? Well, I think there are different types of duct tape programmer based on how smart they are, and only the really smart ones can successfully write systems without a single test. The remaining programmers use tests to ensure what they have done works, and also hasn't broken something else.
So given that most people work in teams of programmers of varied skill levels, it makes sense to write tests. And while this might slow down the one or two super-smart guys on the team, it will aid the rest of the team.
Use some template magic when you need it, create sprawling class hierarchies if required and write tests if you think they are necessary. In the real world, purity is a liability, not an asset.
I like to think that I design better APIs and libraries than him, because I concentrate much more on what I want to have and try to weed out any inconvenience; but when I can't get what I want, he comes up with the idea that I actually can get, and it's good enough.
This is my personal pet peeve. The iron triangle is at work again: when one of its points is fixed, you still have two others to adjust, which seems to be forgotten here. Zawinski is trying to keep the scope (i.e. what defines "done") fixed and is sacrificing quality; he should instead make his life easier by reducing both a little rather than cutting just one to the bone.
I have personally thrown out entire chunks of code, kept only the unit tests, and started from scratch to get things working again. I don't think the value of this can be overstated.
I don't think Joel meant to say that accumulating technical debt (http://en.wikipedia.org/wiki/Technical_debt) is the way to go, rather he suggested/re-iterated Donald Knuth's statement on optimization: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." in his very own way.
That a complete rewrite is a big mistake is another story, of course...
This phrase comes from watching too much Eastwood (Gran Torino ~ http://www.imdb.com/title/tt1205489/). The idea behind it is that you can jury-rig/fix almost anything with WD-40 and duct tape alone, without the need for fancy expensive tools.
Standard duct tape uses an awful adhesive that, depending on the humidity, either turns into a gummy mess or desiccates into flakes -- either way leaving a difficult residue and not actually holding. The loose right-angle weave of the coarse fibers means it has zero shear strength on the most common axes and is prone to splitting under tension. The outer vinyl layer will separate on its own in heat, leaving a mess of fibers and adhesive behind.
WD-40 combines a solvent, a mild lubricant, and an adhesive (!) -- it's extremely prone to collecting grit and caking it onto surfaces. It will displace any better lubricant it is applied onto.
I hear what you're saying, but I'm talking hacks (http://www.flickr.com/photos/bootload/3961148668/), not engineering ~ http://www.flickr.com/photos/bootload/3960385835/
“All of life’s problems can be solved with two things—duct tape and WD40. If it moves and it shouldn’t, you need duct tape. And if it doesn’t move and it should, you need WD40.”
I think this does matter.
And it applies doubly again if you're an early stage startup, because you're still deciding what to build at that point. Astronaut architecture is a complete waste of precious time that you don't have.