The Duct Tape Programmer (joelonsoftware.com)
333 points by mqt on Sept 24, 2009 | 118 comments

Joel does not mention that Netscape's code was so bad that it cost them serious credibility and customers. As a Netscape user back in the day, I did not care whether Netscape used unit tests or duct tape, but I switched from Netscape to Internet Explorer because Netscape was so buggy it was painful.

Over a few years, Netscape's code became so unmaintainable that they had to start from scratch, which cost them years. Joel wrote in another famous article that this was a major mistake. However, if the code is a giant "pragmatic" mess with no architecture and no unit tests, it becomes extremely hard and dangerous to refactor.

IE also got a lot of mindshare among developers because it actually tried to implement standards like CSS, which Netscape completely disregarded. Netscape's "pragmatic" alternatives to CSS, <spacer>, <layer>, and so on, luckily died with Netscape.

Many developers started making IE-only pages because it was almost impossible to get anything to work in Netscape 4. IE6 is pretty unpopular among developers today, but this is nothing compared to how the Netscape 4 generation was reviled back in the day by anyone having to develop for it.

> Remember, before you freak out, that Zawinski was at Netscape when they were changing the world. They thought that they only had a few months before someone else came along and ate their lunch

Also remember that they lost it all, and someone did eat their lunch. So maybe their strategy should be reexamined?

The argument ignores the fact that if Netscape had not taken off and become popular, Microsoft might never have considered licensing the Spyglass browser, expanding it into IE (Eric Sink led the Spyglass team; his memoir is at http://www.ericsink.com/Browser_Wars.html), and waging the browser war of the late '90s.

What all software architects forget is that, most of the time, the code we write exists to solve problems in life. Those problems have their own life cycles; some are long, some are short. We like to imagine that what we write will be a masterpiece, a cathedral or pyramid that lasts 1000 years. Unfortunately, that is not the case. Most of the time, our programs are just one solution among many to a series of bootstrapping problems. Without even a lousy-but-popular solution to a problem, potential competitors might simply ignore the market, no progress would happen in the field, and that would be a loss to human progress.

It is the same as maintaining old buildings: if conditions are right, you may just tear one down and rebuild whatever you deem fit by today's standards. But don't forget that the original building served its purpose.


I personally have affection for Netscape 1.0. I still remember how people in my lab in Taipei FTPed to Netscape's download server and waited for the moment the tgz file was uploaded, then downloaded it and installed it on Sun workstations. Using it made me feel that building things on the Internet was better than studying physics, and that decision changed my life.

Obviously Netscape created a revolution. The "duct-tape" approach allowed them to iterate quickly and deliver Netscape 1.0 to the masses and change the world.

However, the rapid success of Netscape owed a lot to the fact that the basic architecture and protocols of the web had already been designed by others. I give Netscape credit for the <img> tag, but apart from that, almost everything Netscape designed on its own was an ill-conceived disaster, from <font> and <frameset> to <layer> and JSSS.

So I think the caveat to the duct-tape approach is that it works best if somebody else has already designed the basic architecture, e.g. if you are copying an already established product. It does not seem to work very well if you have to design something original.

Jamie Zawinski explained that most of the bad code was introduced by the people from Collabra, which was acquired by Netscape and ended up leading the new development effort for version 4 of the browser. These people had no experience writing multi-platform code, and they added a lot of the bad features that we came to hate during the browser wars. And the email component he wrote was never really used.

You mention a few strategic mistakes that Netscape made. Those were poor decisions. Netscape certainly went downhill when it moved from the quick and lightweight Navigator to the slow, bloated, and buggy Communicator. Shying away from standards like CSS in favor of doing their own thing was a bad idea. However, don't forget that the early versions of Netscape, which is where jwz played a much larger role, were great.

The biggest reason (by far) that Netscape faded into obscurity, though, is that Microsoft bundled IE with Windows.

More importantly, these strategic decisions about the product were not made by guys like jwz. jwz is a hacker. Like many hackers, his opinion was often at odds with the strategies of the Business People in charge.

I don't think it's fair to use these strategic missteps to discredit jwz or Joel's point about the balance between purity and pragmatism in writing code.

Parent has the worst vote count/relevance score I've ever seen. JWZ didn't write Netscape 4.

Netscape had quality issues long before version 4. Navigator really felt like it never left beta. Version 4 was just the point where the accumulated technical debt had become so huge that it proved impossible to recover from - and at the same time IE matured as a credible alternative.

The <img> tag was a nice pragmatic solution, but from there it went quickly downhill. Already around version 2, Netscape was adding badly thought-out features like <font>, <frameset>, etc., which took a decade to get rid of. Rendering bugs and inconsistencies were never fixed.

They could probably have recovered and consolidated by stepping back and focusing a bit more on quality and sound design for a while. But they didn't.

It seems every single project Netscape developed was either abandoned or turned out so buggy it was unusable. Clearly no single programmer can be blamed for that, and I believe Netscape had lots of brilliant developers on board. (And I don't doubt that JWZ was a brilliant programmer.) It must have been the overall "duct-tape" mentality that was to blame.

Some of the other replies blame various parties: Collabra, the "business people", strategic mistakes, Microsoft, and so on. Sure, Netscape made some strategic blunders, but it is still one of the few examples where genuinely bad code quality was a major reason for the downfall of a company. (Quark would be one of the few others; like Netscape, they squandered a near-monopoly by releasing increasingly buggy software and pissing off their users.)

You know, it is a great book and I love Jamie's interview, and the "duct tape" style was used well at Netscape, but just because the guy doesn't write unit tests or use higher-level abstractions doesn't automatically make him better than other types. Some of the smartest programmers I've met have been religious about TDD and strict formatting and commenting, and as a result they maintain and work on some incredibly large and complex systems.

Did those systems start out that way? Maybe not, but after a few years and a couple rewrites I'm sure they came to the same conclusion that most programmers do when they work on things for a long time: "I wish I could go back and write some tests / automate some stuff / add better debugging, etc." I know I always feel that way. I do now, after about a year and a half of hacking together our site. I'd kill for a decent test suite and fully-automated deployment. Kill!

Both styles of programming have a purpose. Maybe we'd like to avoid multi-threaded architectures, but it isn't always possible. When you have 6 weeks to launch, maybe unit tests aren't necessary, but eventually not having them will start doing more harm than good.

The more I read the writings of celebrity programmers / entrepreneurs, the more I come to realize that most of what they write reads like an attempt to justify their way of thinking as being The Right Way. Why can't we all just agree there is more than one way to skin a cat and each probably has an applicable use case or two?

"I'd kill for a decent test suite and fully-automated deployment. Kill!"

Then why don't you write one? I suspect you don't have the time. Well, back when you created the system, you did not have the time either. So the bottom line again seems to be: it is not actually THAT important. Otherwise you would make the time.

That's a bad conclusion to make: it's the classic tradeoff between importance and urgency. If you only ever do the urgent stuff, the hair-on-fire-has-to-be-done-yesterday stuff, you'll never make time to get to long-term strategic projects.

The payoff for things like unit testing, automated deployment, and continuous integration comes over the very, very long haul. If your cost/benefit analysis is always looking 3 or 6 months out, it'll never seem like a win.

So if you only ever do the urgent stuff and never anything strategic, 3 years later you'll realize that if you'd just sucked it up back at the start and done that stuff, even if it meant putting off otherwise urgent features, you'd be further ahead than you are now, because it would have more than made up for the initial investment.

So it's not that you don't do those things because they're not important, but rather because they're never urgent, and because most people's time horizons, especially in a startup, are fairly short.

Still, in 3 years the company might already be bankrupt, and nobody would care about tests anymore. Bankrupt is maybe too extreme, but the particular code module you spent 3 months writing tests for might be replaced by some open source solution or simply no longer be needed.

I kind of see your point, but I find it difficult to deduce a binding rule from all of this. Sometimes it is important to have tests, sometimes other things are more important. You still have to decide on an individual basis.

Thing is, the TDD and technical debt evangelists are typically consultants. Consultants usually earn more money the longer a project takes, and their income is not tied to the yields of the project. Just something to take into consideration imo.

> Consultants usually earn more money the longer a project takes, and their income is not tied to the yields of the project.

Well... that seems like an oversimplification. When I consult, I set the price roughly based on an estimate of how much time the work will take, but ultimately it is a matter of how much value the client sees in the outcome. Note that the expected outcome and the price are agreed beforehand: if I bump into an unforeseen obstacle and take twice as long as my estimate, I can't ask twice the price; basically, the more I work, the less I make per hour.

So (1) I have a very strong incentive to make sure my part works on time, and (2) there's less incentive to finish the work earlier than the initial estimate. Thus I tend to put effort into writing tests and debugging aids within the time frame (or, in other words, I try to negotiate an initial time frame that includes those tests).

Certainly there's a different pressure on employees; they may adjust priorities and time frames more frequently.

So, it is true that consultants make more money the longer a project takes, but the incentive is in making sure it won't take longer than expected, rather than making it take longer deliberately. (After all, if the consultant is doing OK, there are projects on the waiting list, so there's not much point in dragging one project out longer than necessary.)

I agree that there's no overarching rule that makes the tradeoffs easy to analyze.

I will, however, say that after working on the same code base for over 7 years now, and watching the company grow from 15 employees to 400, I can't imagine ever working at a place that didn't have a large investment in unit tests, tech debt elimination, automation, etc. Without that stuff, our products almost certainly would have collapsed under their own weight by now, and our ability to ship predictably and on-time would be gone.

Even within the company, we have some groups that have done better than others as far as automating tests (both because of team personality and because of technical issues that make certain types of features harder to test), and it's quite obvious that the groups with the best testing are the groups that are able to make much more predictable progress and that are able to ship on time. The groups with less-good testing tend to be prone to fairly massive schedule slippages due to a ton of late-stage regressions that only get caught when they ship their code out to their internal users.

Once things get to the point where no one person can reasonably understand the full implications of their changes, because the system is just too big and complicated, if you don't have unit tests you're in big, big trouble, and you need to reduce tech debt so you can keep things as comprehensible as possible. Even then, of course, you have to constantly decide how much to invest in testing and infrastructure and cleanup versus how much you invest in forward progress, and there's never an obvious equation that will give you a right answer.

If your code base and team are small and likely to stay that way, such that you can still mercilessly refactor and change the code without introducing a bunch of hidden bugs, then testing doesn't matter as much. If you ever expect the code to get to the point where that becomes less true, and where the possibility of introducing errors increases, then it starts to matter a whole lot more.

Hypothetically, let's assume we built the same product with two teams, one that skipped unit testing and one that did a bunch of it (call them Team A and Team B). From my experience, what essentially happens is that Team A ships version 1 first, ships version 2 first but takes about as long to build it as Team B, ships version 3 at about the same time (since it takes them longer to build it), experiences a massive schedule slip in version 4 (since the complexity catches up with them, things become buggy, and they start playing whack-a-mole with bugs), and doesn't really ever ship a version 5, because their code has so much tech debt that no one can change anything safely without breaking something else unintentionally, and they start contemplating a complete rewrite of the code base. Again, it's a totally contrived situation (it doesn't have to go that way, Team B could still totally screw things up anyway, etc.), but that's roughly what I've seen happen, both at my company and at others.

I don't think it's fair to say that consultants push TDD and tech debt reduction because that means the project will take longer: that's a bit overly cynical. Many, many organizations use unit testing and such in house because it has a huge long-term benefit (as well as generally more predictability in the short term, which is often more valuable than absolute speed), not because some consultant told them to do it.

I am not actually against unit tests, but I have seen them taken to unhealthy extremes. For example, at some companies there are automated tools that check that every method has a unit test. In the end, people write unit tests for Java getters and setters and so on. Mind-numbing as that task is, people also end up writing bad unit tests just to silence the tool.

A lot of unit tests make sense, but I suspect they also offer plenty of opportunities for idling time away.
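To make the coverage-tool complaint concrete, here is a hypothetical sketch (class and names invented for illustration) of the kind of accessor "test" such a tool pressures people into writing:

```java
// Hypothetical example: a trivial Java accessor pair and the kind of
// test a coverage-enforcement tool demands for it. Names are invented.
public class User {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        // This "test" exercises the getter and setter lines, which
        // satisfies the tool, but it can only fail if the language
        // itself is broken -- it adds coverage without adding safety.
        User u = new User();
        u.setName("alice");
        if (!"alice".equals(u.getName())) {
            throw new AssertionError("getter/setter round-trip failed");
        }
        System.out.println("trivial accessor test passed");
    }
}
```

A test like this costs maintenance time forever while guarding against essentially nothing, which is the "idling time away" failure mode described above.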

No question, it's a fine line . . . you need to be pragmatic and ask "is writing and maintaining this test going to save me more time than it costs?" Over several years, the maintenance of the tests themselves becomes a huge cost, which is something the TDD guys don't seem to talk about much. (My turn for an overly-cynical guess: since many of them are consultants as you've pointed out, they don't hang around with the same code and the same tests for 7 years, so they don't necessarily see how it really plays out). "Bad tests" are actually a huge net negative for development.

If the test is testing something (like a getter or setter) with basically no chance of breaking, then it's a waste of time. If the test is likely to be fragile or non-deterministic, it's a waste of time. If the test is just too hard to write, and it's not too hard to just test by hand, then automating it is probably a waste of time and you should just QA it by hand every so often.

Finding the right balance tends to come back to the old experience and skill thing: you need to have some intuition about which tests will give you the most value (because that part needs to be rock-solid, or because it's hard to get right, or because it's high-change) and which tests need to be thrown away or never written because they aren't worth it.

Taking any development process too far tends to work out poorly, and taking any metric (like test coverage) too seriously is always a bad idea. That said, I've rarely seen unit testing taken way too far; not testing enough and ending up with buggy, regression-riddled software is a far more common failure mode.

This is why it might be a good idea to have unit tests AND QA. Be pragmatic with the unit tests and center them on core functionality and things that are hard to test (think very hard about race conditions, for example). QA, if they're any good, should catch the boneheaded exceptions (such as a misbehaving getter that calls itself).
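The self-calling getter is worth spelling out, since it is exactly the kind of bug that looks "covered" to a tool but falls over the moment anyone runs it. A hypothetical sketch (names invented for illustration):

```java
// Hypothetical example: a getter with a typo that recurses instead of
// returning the field. It compiles cleanly, but any call blows the
// stack at runtime -- trivially caught by a QA smoke test.
public class Account {
    private long balance = 100;

    public long getBalance() {
        return getBalance(); // BUG: should be `return balance;`
    }

    public static void main(String[] args) {
        try {
            System.out.println(new Account().getBalance());
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError: the getter calls itself");
        }
    }
}
```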

The difficulty is that you always know what's urgent but you cannot be so sure about what has longer term importance.

In the case of unit testing, though, long term is short term enough for me to recognise it as urgent.

> So the bottom line again seems to be: it is not actually THAT important. Otherwise you would make the time.

Another possibility is that it is important, possibly even more important than the other things they are working on, but isn't getting prioritized for other reasons. They've succeeded thus far with the way they have prioritized things, but that doesn't mean that they made all the right choices, it just means they made enough right choices to survive to this point. The future may prove tdavis painfully right in his concern about not having test cases. Another company that may not have made enough other choices correctly may be able to cling to having tests and deployment automated as the thing that saves them. I don't think you can draw any real correlation except that it might not be that important, and only time will tell.

That's not true. Just because he never gets the chance to doesn't mean that if he had it, it wouldn't make him more efficient. If you're bailing out a ship, you might not have time to run and get a motorized pump, but you sure as hell would like one!

So why didn't he get a motorized pump before the alarming need for it arose? Because, at the time, that money was better spent on other things that were more necessary then.

> So why didn't he get a motorized pump before the alarming need for it arose? Because, at the time, that money was better spent on other things that were more necessary then.

Might have been better spent. Might have. Saying that it was better spent is just begging the question.

That he didn't get the pump can be blamed as much on poor risk mitigation as it can be on avoiding unnecessary expenses. Having a cavalier and dismissive attitude toward mitigating future risks can result in a demonstration of exactly how faulty one's prioritizations really were.

Don't fall into the trap of assuming just because things were done out of assumed necessity, they were the right things to do.

Fair point about the motor pump, but it seems to me that we end up back at square one: it might have been better to get the pump in advance, or it might not have. It all depends. With the motor pump example it sounds like a no-brainer, but even there it depends: if you have a very small boat, heaving a motorized pump on board might actually sink it.

There are all sorts of risks, and it isn't obvious when it is worthwhile to get insurance and when it isn't. One risk is never launching.

> There are all sorts of risks, and it isn't obvious when it is worthwhile to get insurance and when it isn't. One risk is never launching.

Right. I'm not saying become paralyzed due to the risks. And I'm not saying do dumb things that mitigate a risk with an even bigger risk. What I'm saying is don't assume your conclusions. Things that other people might argue are important might not seem that important to you, given whatever perspective you have at the time. It is possible part of the reason they argue for using a certain practice is because they, too, didn't see it as important until they got burned hard for their presumption.

This is not to say they are important. It might be that people are just squawking to sell books or consulting hours or seats at a conference. But, to use a really dumb metaphor, if that many hens are squawking about the same thing, it might be time to look in the barn and see what all the noise is about.

This is where experience and wisdom are supposed to come in.

On the topic of testing, at this point it's only viable to write tests for bugs that occur in the future and are properly caught (I recently caught one completely by accident because it wasn't caught by our alert system). Going back and trying to write test cases for everything I've written up to this point simply makes no sense; they should have been written before the code, from the start (or from the point at which I knew the code wouldn't be rewritten again).

Automated deployment has been less of a priority because I don't really have to re-deploy that often (meaning spin up a new server). However, if/when I do need to, it will be automated. Even if I haven't written the scripts yet, I will do it before the new machine comes up.

Back when the system was created I had never been a "TDDer" before and the tools that exist now did not exist or were not viable for production, especially in terms of automated deployment and dependency management. If I could go back and do it again (or just murder someone) I'd love to have good test coverage and automation tools. It isn't that it's not important, it's just that the ship has sailed on half of it and the other half is situation-dependent at this stage.

In summary, it is THAT important. I will go as far as to say that I think these two items in particular are more or less vital to the long-term viability of a software project. Anyone maintaining a large, aged project with no test coverage and no automated build/deploy tools has my deepest sympathies; it's like stacking an infinite house of cards.

> Back when the system was created I had never been a "TDDer" before and the tools that exist now did not exist...

Amen. I think a lot of the heated arguments about the value of testing arise from the fact that not everybody is using the same toolset. I'm sure unit testing in C is a major pain in the neck. (Can anybody link me to a document suggesting how to do it?) I know that unit testing in PHP is no picnic. But testing in Ruby is a joy, because the Ruby community has lavished attention on the subject over the last five years and the language, the tools, the idioms, and the culture are highly developed.

> it is not actually THAT important. Otherwise you would make the time.

... that's just a little presumptuous, with respect to the concrete facts and motivations involved...

It comes down to the fact that no one really has priorities, much less ranked ones; there is only the priority, and it's whatever you're presently doing.

Didn't mean it that way, and why do you think so? How else to judge importance than by "people pay for it/make time for it"?

My sentiments exactly. Joel's article is just unnecessarily long, with very little real content. There are many different ways to develop: some people like C++, some don't; some do TDD, some don't.

You can never say this is the ONE true way to develop. Do what works for you, but just be aware of other points of view.

It's nice to see Spolsky get this enthusiastic about something other than his marketing, and I'm sure Peter Seibel agrees. But has no one else noticed how he negates his entire point at the end? After going on and on about how great duct tape programmers are, he says, don't think that means you can be one, because they're magic. (He says "pretty", but in this case pretty means magic.) To wit:

> Duct tape programmers have to have a lot of talent to pull off this shtick.

Oh, I see. What matters is talent, not duct tape at all. Untalented duct tape programmers do every bit as much damage as the untalented design-pattern programmers he scourges. So what was the point again?

The point was that he exercised for 60 minutes today, instead of 30.

Absolutely agree. I work with a "duct tape" programmer and you couldn't pay me enough to touch his code. I'm so sick of someone asking me what's going on with RelayHandler's mda function and what do the variables "a", "sb", and "c" stand for? I kid you not... I don't always agree with Joel and this is one of those times where I absolutely do not agree. Duct tape programmers can stay the hell away from me.

He talks in his analogy about the people who haven't taken off in their go-cart and are discussing design issues, and the people who took off and are fixing things with duct tape. There's a big difference between what I would call a "duct tape programmer" and someone who happens to keep a roll of duct tape handy. The former will run that duct-taped system in the next race, and will keep adding duct tape as problems arise. The latter will run the race, but then tear off the duct tape, look at why the cart needed duct tape in the first place, and start debating design changes to get it to work better next time.

I think "duct tape programmer" should be derogatory, while "practical programmer" or "pragmatic programmer" would be more apt for Joel's idols.

I got pulled up short by that ending too.

Most of the piece seems to be about the benefits of keeping it simple and doing the obvious thing that will work rather than outsmarting yourself. All well and good advice, easily applicable by anyone with the confidence to face down blowhards who would rather feed their leet-programmer ego than actually meet a customer's need.

But then it goes into not writing tests and doing complicated bit-munging to save a bit of time or space (both of which you would think are the opposite of the previous advice to keep it simple and accept your own human failings), which is stuff you can only get away with if you are both talented and lucky.

If he dropped the last paragraph and the bit about not writing unit tests (which, the theory goes, will save you time, assuming, again, that you're not a lucky genius), then this would hold together somewhat coherently.

Agreed. I wasn't really sure what I had just read. One thing I feel fairly certain about though is that these duct-tape people will screw up a project in the long run.

I think "duct-tape programmer" is not a very well-defined concept. People who are careless and sloppy are the ones who will wreck a project. Duct tape sounds too sloppy to me as well, but it could also be a metaphor for being pragmatic and keeping it simple.

Gruseom, you are absolutely right that it is talent that matters, and Spolsky understates this point, appending it to the end of his post seemingly as an afterthought. A talented hacker can take some duct tape and turn it into a thing of beauty, like those prom dresses and tuxedos people make every year out of actual duct tape. A bad coder doing this would make only a piece of trash that falls apart, just as a bad coder making a standard garment would try to add so many frills and accessories that its complexity would soon outstrip his talent.

So what's his point? Be talented.

Tangent: you think Spolsky overdoes it with the marketing stuff, gruseom?

No, actually, I think he's brilliant at it. I tend to at best half agree with him on software issues, but I enjoy how lively his writing is and the way he uses it to promote his company is usually fine with me. In some ways it's exemplary, because the promotion is typically secondary to what he's writing about, and tends to be related to the subject at hand, so it doesn't come across as arbitrary or sneaky. I suppose a lesser writer might have trouble pulling it off, but it's a good model for software people.

Still, it was nice to see him get excited about something not directly in that vein.

Edit: It was also by reading a Spolsky post way back when that I originally discovered PG's essays. So I can't ever get too mad at him.

It's tricky, right? We blog a lot for a company our size --- multiple books' worth of technical writing on the blog --- and we still feel like we have to be ultra-careful about promoting ourselves, to keep people from feeling like we're just trying to sell something. For instance, I'm not sure we've ever worked in a call to action for our consulting services in a post.

It's definitely one of the things I'm most impressed about with 37s and Fog Creek, that they've managed to extract so much business value out of their blogs.

Reading Spolsky back in '03-'04 was what made me want to become a product manager, which I was pretty quickly able to do, which experience taught me what little I know now about the software business (you know, besides actually writing and shipping code, and, uh, getting funding, neither of which appear to have any correlation with success in the business ;] ).

I actually think Spolsky's weakest on the hard tech content, and strongest on the product marketing side.

I'm not in your target audience, so I've seen very little of you guys' blog, but it certainly didn't feel marketing-driven to me.

I agree about Spolsky & Co. not being that technically strong, which is a little ironic given how much he's written about attracting the best programmers, etc. Still, although I don't read him regularly, I almost always enjoy when I do. And a lot of what he says about the software business is really helpful. Chris Dixon linked to a post of his the other day about lowering the price of complementary products which was a great example.

>It's nice to see Spolsky get this enthusiastic about something other than his marketing

Uh... wasn't that his affiliate amazon link? If he gets a cut each time somebody buys the book he keeps telling us to buy, isn't it still his marketing?

Awesome read. I couldn't agree more.

I've worked with a great many "theorist" coders and they never get anything done. They spend too much time abstracting into nothingness. You know, the kind of guys who remind you of your 3rd grade grammar teacher making sure you know when to use "whom" vs "who"...

While I think eventually one would refine their product so that it uses best practices, I would say that having customers and a product should definitely be a prerequisite.

Yes, thank you. The moral of the story is that you ship a product first, then you tweak, improve, and refactor it once you've got a reason to!

If you do that, you'll run a very considerable risk of wondering why version 2.0 of your product is taking so damn long to ship. The answer: all the things you punted on, ignored, assumed, patched over, and otherwise haphazardly threw together in version 1.0. Now all of these have set your code in concrete, and you have to remove half the foundation to get them back out.

It is always, always, always better to have a delay in version 2.0 than a delay in version 1.0.

It is often that way anyway, regardless of planning. You get into it, then your needs change over time. By version 2.0 a new foundation will last longer and incorporate things people have learned from the first version.

This is known as the "second system effect":


No it isn't. The Second System effect is about wanting to put in all those features into version 2 that you left out of version 1.

Yes, which is what the parent said:

"all of the things you punted, ignored, assumed, patched over, and otherwise haphazardly threw together in version 1.0"

No, sorry, it isn't. Fred Brooks goes into detail when he coins the term "Second System Effect", and it definitely doesn't refer to half-arsed, half-debugged, haphazardly thrown together anything from version 1.0.

The Second System Effect is specifically about new features.

But all that refactoring will be easy because of the comprehensive unit test suite you wrote. Oh wait...

But don't you have to add/remove/edit your comprehensive unit tests after you refactor? Refactoring would have to include the work needed to keep your unit tests comprehensive.

"One principle duct tape programmers understand well is that any kind of coding technique that’s even slightly complicated is going to doom your project."

Like writing a custom compiler for your web app?


After he jumped that shark I don't read anything he writes anymore.

"After he jumped that shark I don't read anything he writes anymore."

...how'd you get that quote then? Or did you only read enough to get something to complain about?

He wrote a multi-threaded C++ app to parse the html and return a random sentence?

Rhetorical question: what percentage of an article needs to be read for the article as a whole to be classified as having been read?

I got it from another posting right before mine. I'm sorry I offended you and your beloved coding sensei.

But you've just confirmed the article here. Wasabi is exactly the duct tape he is talking about. FogBugz started as a VBScript project, and when they needed to ship it for both Windows and Unix, instead of rewriting the system in two languages, they created a "compiler" to generate PHP code from the existing code base. Later they added generation of .NET bytecode, and -- boom! -- instead of rewriting the whole FogBugz project they made it run on the .NET runtime.

Wasabi is duct tape.

As the lead maintainer of the Wasabi compiler project, it actually feels pretty good to have someone call it duct tape :)

Joel did not write, nor imply, that he himself is a duct tape programmer.

In fact he made it quite clear in the last paragraph that these duct tape programmers are a rare breed. Maybe you never read that far though.

A custom "compiler" is a bit of a stretch for me. It's essentially a code generation tool where you code in a language similar to VBScript and it can output PHP or classic ASP. It's not a custom C/C++ compiler or anything complicated like that.

So it's not completely crazy.

That article was from 2006. I'm sure that his (and many other people's) viewpoints change over the course of 3 years.

Nope. They still use Wasabi. He mentions it often on his podcast. And I don't think it is such a bad idea. It is simply a translator from one language to another. Like GWT (Google Web Toolkit): they program in Java, but the result is 'translated' to JavaScript. That is how the Gmail UI was made.

So, not that crazy.

Google only uses GWT for their internal projects. Gmail's UI doesn't use it.

You're right, though, it's not that crazy of an idea.

Isn't GWT used to build their wave client?

"Any kind of coding technique that’s even slightly complicated is going to doom your project."

"They xor the 'next' and 'prev' pointers of their linked list into a single DWORD to save 32 bits, because they’re... smart enough, to pull it off."

How is that not even slightly complicated?

The Kolmogorov complexity of COM is, at the very least, hundreds of kilobytes of itchy, fidgety, sensitive, and complicated code. The Kolmogorov complexity of xor'ing two pointers to save 32 bits is on the order of tens or hundreds of bytes. (I'm using the term a bit loosely, obviously, but I think it gets the point across.) I suppose it depends on the limit of "slightly", but in context I think it's clear we're talking about "techniques" that are more than a three line hack in your linked list library. YMMV. (That is, I do see the point you are trying to make.)

Presumably, the duct tape programmer is doing that because it is the difference between making the product go and not making the product go, not because they love bit packing. It's not a technique I'd adopt today, but Zawinski (just to choose one example from his repertoire I've read about) was trying to make machines with, say, 8MB of RAM able to read thousands of email messages. You get a bit nutty under those constraints, or you ship slow crap. There isn't much of a third choice. (Fast and featureless, maybe.)

(I think I can bid lower than 8MB of RAM, too, but I'm a bit fuzzy on netscape timeframes vs. ram timeframes. I think 4.0 was in the 32-64-128MB era, putting 3.0 a ways back, but I'm not sure.)

I see what you're saying, and I agree. It's not comparable to COM.

I think the thing is, articles like this tend to create some idealized programmer that is just a conglomeration of attributes the author likes even if they are mutually exclusive. To me, avoiding complexity and doing bit manipulation are mutually exclusive.

It's like saying you should use left shift (or is it right...?) instead of dividing by 2. Ok, it may be faster. Or the compiler may just do the same thing regardless of how you type your code. The point is that "/ 2" means divide by 2 to anyone at all familiar with code. Unless you have some really compelling reason to do otherwise, you should use "/ 2".

Using shifts for division (or various other bit manipulation) may be how your idealized programmer shows their classical training, but don't kid yourself into thinking that bit manipulation fits into all your other ideals for programmers.
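A quick sanity check of that point, sketched in Python (the C caveat in the comment is general background, not something from the article):

```python
# For nonnegative integers, right shift by 1 and floor division by 2
# agree exactly -- so writing the readable "x // 2" costs nothing.
for x in range(1000):
    assert x >> 1 == x // 2

# Caveat for C programmers: Python floors both operations, so they also
# agree for negatives here. In C, integer division truncates toward
# zero (-3 / 2 == -1) while an arithmetic shift floors (-3 >> 1 is
# typically -2) -- one more reason to let the compiler do this for you.
print("shift and divide agree for nonnegative integers")
```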

Joel's idealized programmer also avoids unit tests. Are you serious? How can this possibly be a good idea? No, your customers don't care if you wrote unit tests... in the same way you don't care if your architect does whatever it is architects do to ensure the accuracy of their work. But that's just the point. You don't care (nor should you) about how they ensure accuracy. You care only that they do. So no, your customer doesn't care if you wrote unit tests, but I assure you they care if your software crashes or gives inaccurate information.

Of course, no one ever creates an idealized programmer without creating their opposite. Joel's "ideally" bad programmer multiply inherits from 17 sources. Does any sane programmer really do this? No. Of course not. Why bother mentioning it? It's like saying an idealized pilot is not like those other pilots that intentionally crash their planes. Well... no one intentionally crashes a plane. Don't bring up absurd examples to prove your point. If real life doesn't prove it, then it's not a valid point.

The simple fact is that when I look at my own real-life, deployed-in-production code, I find this: The code I wrote just to get a problem solved in whatever way possible (duct tape) becomes more and more of a liability as the requirements change. With the code that I spent the most time designing (assuming I eventually came up with a good design), the more the requirements change, the more I see the beauty of the design. When a change in requirements can be fixed with a find/replace, it's a job well done. Duct tape code leads to duct tape maintenance. Duct tape maintenance leads to thedailywtf.com.

I have no problem with emphasizing the importance of shipping software. I have a problem with people saying "real programmers use butterflies" when they aren't writing a web comic.

I don't think there's a single "real programmers" article in the universe that is internally consistent (doesn't advocate any mutually exclusive practices). Like I said, it's an ideal, an ideal constructed out of everything the author could find in their mind, whether it fits together or not. This wouldn't be a problem if the author admitted even a slight possibility of exaggeration or lack of internal consistency, but they never do.

Now... I think by now I've probably exaggerated and broken internal consistency enough for one day, so I'll stop here.

Unit tests have sometimes been a great help, especially for regression tests, but they can get in the way, especially if you actually want to ship.

I can see the benefits of getting the 1.0 to market first (if buggy), getting some market share and using that lead time to either iron out the bugs or to rewrite so you don't have to put up with duct tape maintenance.

I've been in a situation where the users started using the prototype because, despite being buggy as hell, it did stuff light years ahead of what they had before. So imo duct tape 1.0 is ok.

I don't see from your argument how unit tests keep you from shipping. You can still choose to ship a product with failing tests. The difference is now you know what (some of) the bugs are.

> With the code that I spent the most time designing (assuming I eventually came up with a good design), the more the requirements change

Your use of "assuming" is interesting. Sometimes it's hard to know if it really was a "good design" until after the requirements change...

"The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague." - Dijkstra

You're quite right about unit tests; I often wonder why people only seem to cite their value for regression. Two and a half years into my project, I think our tests have only caught one or two relatively tame regressions. I mostly use them to test new components (that would be hellish to QA manually, as the system is so big it takes a while to build->deploy->run), large refactorings of components, and integration tests (like spamming our transaction service randomly and ensuring it always outputs valid files).

If you're reusing that nice general list you wrote ages ago with static inlines so that there isn't any function call overhead anyway, it's trivial, because you change the "get next element" and "get prev element" functions and never worry about it again. Complicated algorithm != complicated coding technique. It's a simple expression of a smart algorithm.

If you don't understand xor'ing next and prev, it's just one line (one function) to rewrite. It's a local complication.

If you don't understand how your object model interacts with your threading model, you're DOOMED.
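For anyone unfamiliar with the trick being discussed, here is a minimal sketch. Since Python has no raw pointers, it fakes addresses with list indices; the names (`NIL`, `build`, `walk`) are invented for the illustration, not taken from any real codebase:

```python
# XOR linked list: each node stores a single link = prev ^ next instead
# of two pointers. Given the address you arrived from, xoring it with
# the link recovers the address of the other neighbor.
NIL = 0                 # index 0 stands in for the null address
nodes = [(None, 0)]     # slot 0 is the NIL sentinel; entries are (value, link)

def build(values):
    """Append nodes for `values`; return the head 'address' (index)."""
    addrs = list(range(len(nodes), len(nodes) + len(values)))
    for i, v in enumerate(values):
        prev = addrs[i - 1] if i > 0 else NIL
        nxt = addrs[i + 1] if i + 1 < len(values) else NIL
        nodes.append((v, prev ^ nxt))
    return addrs[0] if addrs else NIL

def walk(head):
    """Traverse forward, carrying only (came_from, current)."""
    out, prev, cur = [], NIL, head
    while cur != NIL:
        value, link = nodes[cur]
        out.append(value)
        prev, cur = cur, link ^ prev
    return out

head = build([10, 20, 30])
assert walk(head) == [10, 20, 30]
```

The same `walk`, started from the tail address, traverses backward, which is the point of the trick: one word per node buys bidirectional traversal.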

Jamie Zawinski is not "hard at work building the future". According to his own website, he is managing the DNA lounge, and the last thing of any substance he worked on was a program to delete silence from mp3 streams (see http://www.dnalounge.com/backstage/src/archiver/). He claims a copyright date of 2001-2006 for this program, which, after a quick skim, appears to be high quality. In my opinion, he is a talented programmer who has this to say about the software industry:

    (1999+) But now I've taken my leave of that whole sick,
            navel-gazing mess we called the software
            industry. Now I'm in a more honest line of
            work: now I sell beer. 
So, I suppose, Joel is right, in a roundabout way:

Selling beer => flirtation => sex => sperm + egg = Building a future human being!

But seriously, Joel is on crack.

Jamie also worked under Peter Norvig, and this is what Peter said about Jamie:

"One of the best programmers I ever hired had only a High School degree; he's produced a lot of great software, has his own news group, and made enough in stock options to buy his own nightclub."

-- from http://norvig.com/21-days.html

Once you build something as important as Netscape Navigator and cash out for a good chunk of money, I think it's ok to do whatever you want. Building something for the future doesn't have to be a lifelong thing.

I agree with the basic idea, but I think Joel is going over-the-top with the C++ hate. I've actually shipped real code that used the insanely complicated feature of C++ templates. Works great.

The problem is not with a specific language or technology -- it's using the bleeding edge technology, when the boring one will do.

I am not a fan of this kind of extreme maximalism; surely there has got to be a decent compromise? That's assuming, of course, that purely utilitarian pragmatism vs. lofty, academic architecture idealism is a valid dichotomy, and that there don't exist a variety of third ways and composite profiles. Of course, any useful generalisation that posits a continuum can be torn down, but I really think that in this case it needs doing.

There has got to be a better way than being a "duct tape programmer." It seems to me that one can practice good design, architectural grace, and hold true to a variety of other tendencies that seem theoretically and aesthetically appealing (the latter is very important; every good programmer I have ever met sees an artistic aspect to programming, even if it is not necessarily the central or principal one - it is a craft) without being the guy that never actually puts out any concrete deliverables.

I think this is just an angry, bitter overreaction - and a very understandable one that I fully endorse - to the dogmatism of many test-driven development acolytes and pig-headed "patterns" people.

I call bullshit. I've worked with some "just get it done" programmers. Have you tried to go into code that someone threw in just to make it work?

Abstraction, interfaces and unit tests are not a leisurely activity for academic developers. We use them to make the code less complex and more maintainable. The cost of development isn't the initial code base, it's the fixes and additional features people want AFTER the initial release(s). Going back into the code and safely making changes or adding code with these in place reduces time.

I had an application without automated testing, it cost the company almost 2000 man hours to test the system every time they made a release.

Design patterns, Joel, are repeatable patterns within code. Design patterns, again, help another developer who goes into the code see what the heck the original developer was trying to accomplish.

To summarize, I would suggest you outsource some code to the far east. They will get it done really fast for you. And yes, it will only work 50% of the time. I love buying products that only work 50% of the time, and unfortunately I don't get to pick which 50% works.

I don't think that's the point Joel is making. At the end of the article, it really comes together. Perhaps you are too heavily focused on the specifics (unit testing, etc) instead of the overall theme. He's basically saying that while the other guys are wasting time overengineering a project, sometimes just getting to work and getting started is a much more productive approach.

Joel isn't directly bad-mouthing unit testing or multithreading, but as developers we tend to think about all of these cool toys we can use and "ooooh" and "ahhhh" instead of actually shipping code. Try not to get so hung up on the specifics.

And the guys who will have to maintain that promptly shipped code will say a lot of "wtf?" and waste 10x more time on it.

I don't think he sees a "duct tape engineer" as a "just get it done programmer." Like he said, they're pretty boys that just look pretty, i.e. they're rare and have a special balance of pragmatism. JWZ-type guys are the kinds of wizards you might never work with. We're also talking about a fairly specialized position Netscape was in; that was a company that made a huge amount of money. Have the Twitters and Facebooks of today made any money yet? Some of those guys at Netscape got rich; what can you say to that? They've got scoreboard.

"Just get it done programmers" are a scourge. Most of them have no business deciding which corners to cut, let alone how much to cut them by. There is definitely a balance to achieve. Are you going to ship a "3.0?" Are you building a company to last 5 or 10 years? Or one to last as long as it takes to make the money back and sell? Is this a product you're prepared to get dirty fighting competition with or do you just not care? You never want to over-engineer but cutting too many corners is far worse. If you're a low investment startup with a decent idea, do you honestly think you can put out something that works 50% of the time and even have a shot at a 2.0? Maybe if you've got a proven wizard that spins money out of thin air you duct tape the whole thing like he thinks you should.

If you have investors and you slam out a mediocre 1.0 in like 3 months, will they really sit back for 12 months while you "engineer" 2.0? Or do they want 2.0 in another 3 months?

I worked at Orbitz which has a huge Java codebase. It was complex and hard to maintain. It was loaded with abstractions, interfaces and unit tests. I believe the former was mostly due to the latter.

Not to say that the latter has no merits -- there are situations and cases where they are 'net win' good things. But when overdone or done poorly they make a codebase much harder to understand, troubleshoot, extend or fix.

I've seen this first-hand many times, though I notice it most often occurs in large corporate Java shops rather than with smaller companies or codebases or more nimble/concise languages.

“And unit tests are not critical. If there’s no unit test the customer isn’t going to complain about that.”

By all means, ship. Do what you gotta do. But a code base that doesn't have tests cannot safely be refactored. This technical debt must eventually be paid by the product owner in cash and the code's maintainers in sanity.

I disagree with your statement that a codebase without tests cannot safely be refactored.

I've been refactoring code for 20+ years, the overwhelming majority of time without any automated tests, and I'd say offhand 99% of the time it causes no bugs, and in the occasional case where it does cause a bug (because I am imperfect and sometimes make mistakes), I almost always soon find it during the same coding session and fix it.

The key is to understand the code well enough to know what affects what and how. Hold that model in your mind and you're golden. Lots of time saved not writing tests, updating them, fixing them when they break, etc.

Note that this is not an argument against tests in general, just an argument for there being cases where you don't miss them and they would be a net loss if you had them due to all the extra make-work required. I think there's a lot of kool-aid drinking going on among people who themselves probably lacked the ability to do "naked" refactors well. To those folks I say, "Great, have fun storming the castle!" but don't assume that other folks who haven't drunk your kool-aid are constantly banging their heads on the wall breaking the code or living in fear of mysterious hypothetical bugs due to a lack of tests.

A really excellent 'old fashioned' sort of test is to just run the fricking code -- did it work? did it do what it was supposed to do? data look good? OK, move on to the next one of the thousands of other problems you have to solve and tasks you have to do in life. And use version control, so if you retroactively do discover a problem, you can review the diffs, or roll back, or do a tactical patch against the branch, etc.

I do agree with your statement, "By all means, ship. Do what you gotta do." And I agree that that attitude may cause you to at least temporarily incur technical debt, and you generally want to pay that down as soon as feasible. (backing out ugly hacks to replace with more elegant or easier to read implementations, etc.)


It's nice to hear someone with experience from before the "Unit Test is compulsory" explosion. Programmers should always test their work, but testing comes in much more of a diverse range than mere Unit Tests.

There are plenty of cases where Unit Testing is 'embarrassingly'[1] appropriate. These pin-up applications blind Testing advocates to the fact that Unit Tests are often inferior to other methods or simply not possible.

An example where completely automated tests are impossible is PDF generation. One cannot 'Unit Test' this. One has to build a framework to take test data, create fewer than 100 PDFs, and then a human has to eyeball them. Humans cannot eyeball more than 100 images and perceive subtle errors. Fewer than 100 output images means this cannot exercise every codepath of even simple applications.

Often I was working on a part of the render pipeline which was not currently exercised by the existing tests. Do I create a whole new test suite to generate test images for each branch condition? If it is important, yes, I created a new end-to-end test. But if it was not, then I adapted some existing test input and used my best judgement and my knowledge of the internal state. This test did not last beyond my short-term memory and my own set of eyeballs. If a problem occurred later, I would recreate the test from memory.

This is still TDD, but it is much less of a straitjacket than requiring automated testing. The tests are effectively 'thrown away', but the tested code remains. I would also say that the coder knowing which portions to test is superior in many cases.


[1] Similar to :: http://en.wikipedia.org/wiki/Embarrassingly_parallel . Network stacks, account balances, and frameworks are all embarrassingly unit-testable.

Thanks for backing me up. Yeah, I often feel un-PC when I say anything bad about unit tests. (Like saying, gee, maybe there are differences between races, or between cultures, or between genders -- Cats, what you say?!?!)

And agreed, there are situations where, like you said, it's embarrassingly appropriate to have tests. To me the classic case is where you are publishing a code library with thousands of real users across the internet, with real apps built against it already, themselves already in production, etc. It's probably downright stupid of the maintainer to not have a suite of automated tests they can execute, and that must pass, before every release, to ensure no regressions. So the maintainers can catch them, and resolve them, before they make apps break downstream.

But the whole 'you must write tests always, before any application code' thing strikes me as insane and masochistic. :)

Might just be a matter of context and without the context we get the zealotry. Biggest mistake everyone makes when saying "XYZ is teh lames" or "teh wins" is not describing their context.

Our OS team works differently from our apps team, for example, because if something in their stuff breaks it's a big deal. They also really worry about backwards compatibility, and whenever I touch their code I have to create an IPrinter34 to not break things for old clients (who still want their IPrinter12).

In our apps team, though, we keep things cleaner and kick out the IPrinter23 and keep things as IPrinter so people can read the code more clearly. Backwards compatibility isn't so much an issue for us (apart from obviously considering updates) as we release one single unit that replaces all our files. If somebody has an old OS then it's not supposed to work anyway, so the app going titsup is the correct outcome.

Therefore I don't have an opinion on this subject either way, in some scenarios you do IPrinter34 and in others IPrinter.

When people talk with strong opinions on code they should probably start by announcing their own applications of it.

Maybe Netscape's duct-tape programming had some side-effects though... http://www.joelonsoftware.com/articles/fog0000000069.html

If you read the chapter that Joel recommends, it talks about the design patterns guys that came in and how the anti-duct tape guys had a role to play in that delay.

I haven't read the book yet, plan to get it soon. Sounds awesome.

A consequence of those rewrite side-effects:

Mozilla Firefox was forked from the rewritten Mozilla codebase

> Netscape 6.0 is finally going into its first public beta...Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.



You're confusing two different things here.

Re-writing a codebase to spawn a totally separate application is not the same as doing it to re-release an enhanced version of the same application (with a bumped-up version number).

Duct tape is a great metaphor. Duct tape was one of my favorite toys as a child. My dad was always mad at me for wasting it. But I can't help it if I want to build a tower from straws and duct tape; a tool that's as flexible as duct tape is empowering for a 10-year-old. I propose duct tape become the new hacker symbol! :D

;-) I think many other engineering disciplines would be upset if we took it all for ourselves - especially mechanical engineers.

I'm sure that many of us will agree that hacking is not limited to software.

I sometimes feel that programming is the mathematical/logical equivalent of the kludges on this site http://thereifixedit.com/

BTW there are some great JWZ links here http://www.reddit.com/domain/jwz.org

Basically Joel is saying that duct tape programmers are pragmatic programmers whose priority is to get the job done. Now, from my experience in the office there are very few non-pragmatic programmers, and even fewer coders who don't want to get the job done as quickly as possible.

So that means that the majority of us are duct tape programmers, right? But this doesn't fit, so what's wrong? Well, I think there are different types of duct tape programmers based on how smart they are, and only the really smart ones can successfully write systems without a single test. The remaining programmers use tests to ensure what they have done works, and also hasn't broken something else.

So given the fact that most people work in teams of programmers of varied skill levels, it makes sense to write tests. And while this might slow the 1 or 2 super smart guys on the team, it will aid the rest of the team.

Basically, a hacker.

Moral of the story: there is no silver bullet.

Use some template magic when you need it, create sprawling class hierarchies if required and write tests if you think they are necessary. In the real world, purity is a liability, not an asset.

I have a friend like that, and he has saved my ass many times. When I was stuck trying to find a satisfying solution, he would almost always come up on the spot with something simpler than I was striving for, but upon close inspection good enough. If I pointed out a significant problem in his solution, he either came up with a fix or abandoned his idea without regret.

I like to think that I design better APIs and libraries than he does, because I concentrate much more on what I want to have and try to weed out any inconvenience; but when I can't get what I want, he comes up with the idea I actually can get, and it's good enough.

"We’ve got to go from zero to done in six weeks"

This is my personal pet peeve. There is the iron triangle at work again, and when one of its points is fixed you still have two others to adjust, which seems to be forgotten here. Zawinski is trying to keep the scope (i.e. what defines the "done") and is sacrificing quality; he should instead try to make his life easier by reducing both a little bit rather than cutting just one to the bone.

I would just say this for folks who don't like unit tests because they take longer. Push yourself away from the keyboard and think about it. When one writes code, one writes unit tests in one's head, or the code doesn't work. I submit that we always write unit tests. The difference is that in one case we keep them so we can run them over and over; in the other case we do it anyway in our heads and then throw it away. I am not convinced that writing unit tests down takes any longer. We all know it saves our bacon later.

I have personally thrown out entire chunks of code except the unit tests and started from scratch to get things working again. I don't think the value of this can be overstated.
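A toy illustration of that workflow (the function and cases here are invented for the example): the assertions are the part you keep, and any rewrite of the body that still passes them is fair game.

```python
def parse_price(text):
    """Parse a price string like '$1,234.50' into integer cents.
    (Hypothetical function, just to make the point concrete.)"""
    digits = text.replace("$", "").replace(",", "")
    dollars, _, cents = digits.partition(".")
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

# The tests you'd otherwise run "in your head" -- written down once,
# they survive throwing the implementation away and starting over.
assert parse_price("$1,234.50") == 123450
assert parse_price("$7") == 700
assert parse_price("$0.05") == 5
```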

Though the title "The Duct Tape Programmer" may be a bit misleading, I think the essence and emphasis was that "shipping is a feature" and over-engineering is not.

I don't think Joel meant to say that accumulating technical debt (http://en.wikipedia.org/wiki/Technical_debt) is the way to go, rather he suggested/re-iterated Donald Knuth's statement on optimization: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." in his very own way.

Is Netscape really that good of an example of duct tape programmers at work? Granted, they got a killer application out that was successful for quite some time, but considering the following events (that is, Netscape figuring the codebase got so bad that a complete rewrite was in order), isn't it rather an example of duct-tape programming doing more harm than good? Or am I missing something?

EDIT: The fact that a complete rewrite is a big mistake is another story, of course...

"He is the guy you want on your team building go-carts, because he has two favorite tools: duct tape and WD-40."

This phrase comes from watching too much Eastwood (Gran Torino ~ http://www.imdb.com/title/tt1205489/). The idea behind it is that you can jury-rig/fix almost anything with WD-40 and duct tape alone, without the need for fancy expensive tools.

Except that both of those tools are the absolute worst at their respective jobs!

Standard duct tape uses an awful adhesive that, depending on the humidity, turns into a gummy mess or desiccates into flakes -- either way leaving a difficult residue and not actually holding. The loose right-angle weave of the coarse fibers means that it has zero shear strength on the most common axes, and it is prone to splitting when under tension. The outer vinyl layer will separate on its own in heat, leaving a mess of fibers + adhesive behind.

WD-40 combines a solvent, a mild lubricant, and an adhesive (!) -- it's extremely prone to collecting grit and caking it onto surfaces. It will displace any better lubricant it is applied onto.

"... Standard duct tape uses an awful adhesive that depending on the humidity turn into a gummy mess or desiccates into flakes ... WD-40 combines a solvent, a mild lubricant, and an adhesive (!) -- it's extremely prone to collecting grit and caking it onto surfaces ..."

I hear what you're saying, but I'm talking hacks (http://www.flickr.com/photos/bootload/3961148668/), not engineering ~ http://www.flickr.com/photos/bootload/3960385835/

I think you make a very good point on taking Spolsky seriously.

The use of duct tape and WD40 as basic tools certainly far predates that movie. Per the ancient quote:

“All of life’s problems can be solved with two things—duct tape and WD40. If it moves and it shouldn’t, you need duct tape. And if it doesn’t move and it should, you need WD40.”

> A 50%-good solution that people actually have solves more problems and survives longer than a 99% solution that nobody has because it’s in your lab where you’re endlessly polishing the damn thing. Shipping is a feature. A really important feature. Your product must have it.

I think this does matter.

Just remember that this principle applies doubly if you're a startup.

And it applies doubly again if you're an early stage startup, because you're still deciding what to build at that point. Astronaut architecture is a complete waste of precious time that you don't have.

It really depends. If you put together such a kludge that you're going to have to completely rebuild it to scale past a nontrivial quantity of initial customers, you would do well to put at least a little thought into the theoretical foundation of what you're doing.

When Sarah Palin was running for office I heard a British politician remark that Sarah Palin represented the negation of politics. She appealed to people who were fed up with politics and politicians. For some reason Joel's argument reminds me of this. He seems to have examples of over the top designs gone bad but in the end I can't find much of real value to take away from this article. Is the visitor pattern too much? What about hibernate or other ORM tools?

Ah yes, now I know why xemacs and firefox crash so much.
