
The Duct Tape Programmer - mqt
http://www.joelonsoftware.com/items/2009/09/23.html
======
olavk
Joel does not mention that Netscape code was so bad that it cost them serious
credibility and customers. As a Netscape user back in the day, I did not care
whether Netscape used unit tests or duct tape, but I switched from Netscape to
Internet Explorer because Netscape was so buggy it was painful.

Over a few years Netscape code became so unmaintainable they had to start from
scratch, which cost them years. Joel wrote in another famous article that this
was a major mistake. However, if the code is a giant "pragmatic" mess with no
architecture and _no unit tests_, it becomes extremely hard and dangerous to
refactor.

IE also got a lot of mindshare among developers because it actually tried to
implement some standards like CSS, which Netscape completely disregarded.
Netscape's "pragmatic" alternatives to CSS, <spacer>, <layer> and so on, luckily
died together with Netscape.

Many developers started making IE-only pages because it was almost impossible
to get anything to work in Netscape 4. IE6 is pretty unpopular among
developers today, but this is nothing compared to how the Netscape 4
generation was reviled back in the day by anyone having to develop for it.

> Remember, before you freak out, that Zawinski was at Netscape when they were
> changing the world. They thought that they only had a few months before
> someone else came along and ate their lunch

Also remember that they lost it all, and someone _did_ eat their lunch. So
maybe their strategy should be reexamined?

~~~
eugenejen
The argument ignores the fact that if Netscape had not taken off and become
popular, Microsoft might never have considered buying the Spyglass browser and
expanding it into IE (Eric Sink led the Spyglass team; see his memoir at
<http://www.ericsink.com/Browser_Wars.html>) and waging the browser war in the
late 90s.

What all software architects forget is that, most of the time, the code we
write exists to solve problems in life. Those problems have their own life
cycles; some are long, some are short. We like to imagine that the piece we
wrote will be a masterpiece, a Cathedral or Pyramid that lasts 1000 years.
Unfortunately that is not the case. Most of the time, our programs are just
solutions among solutions to a series of bootstrapping problems. Unless
someone ships a lousy but popular solution to a problem, potential competitors
might just ignore the market, no progress happens in the field, and that is a
loss to human progress.

It is the same as maintaining old buildings: if conditions are right, you may
just tear one down and rebuild whatever you deem fit by today's standards. But
don't forget that the original building has served its purpose.

Edit:

I personally have affection for Netscape 1.0. I still remember how people in
my lab in Taipei FTPed to Netscape's download server, waited for the moment
they uploaded the tgz file, then downloaded it and installed it on Sun
workstations. Using it made me feel that making stuff on the internet was
better than studying physics, and that decision changed my life.

~~~
olavk
Obviously Netscape created a revolution. The "duct-tape" approach allowed them
to iterate quickly and deliver Netscape 1.0 to the masses and change the
world.

However, the rapid success of Netscape was very much due to the fact that the
basic architecture and protocols of the web were already designed by others. I
give Netscape credit for the <img> tag, but apart from that, almost everything
Netscape designed on their own was an ill-conceived disaster, from <font> and
<frameset> to <layer> and JSSS.

So I think the correction to the duct-tape approach is that it works best if
somebody else already designed the basic architecture, e.g. if you are copying
an already established product. It does not seem to work very well if you have
to design something original.

------
tdavis
You know, it _is_ a great book and I love Jamie's interview and the "duct
tape" style was used well at Netscape, but just because the guy doesn't write
unit tests or use higher-level abstractions doesn't automatically make
him better than other types. Some of the smartest programmers I've met have
been religious about TDD and strict formatting and commenting and as a result
maintain and work on some incredibly large and complex systems.

Did those systems start out that way? Maybe not, but after a few years and a
couple rewrites I'm sure they came to the same conclusion that most
programmers do when they work on things for a _long_ time: "I wish I could go
back and write some tests / automate some stuff / add better debugging, etc."
I know I always feel that way. I do now, after about a year and a half of
hacking together our site. I'd _kill_ for a decent test suite and fully-
automated deployment. Kill!

Both styles of programming have a purpose. Maybe we'd like to _avoid_ multi-
threaded architectures, but it isn't always possible. When you have 6 weeks to
launch, maybe unit tests aren't necessary, but eventually not having them will
start doing more harm than good.

The more I read the writings of celebrity programmers / entrepreneurs, the
more I come to realize that most of what they write reads like an attempt to
justify their way of thinking as being The Right Way. Why can't we all just
agree there is more than one way to skin a cat and each probably has an
applicable use case or two?

~~~
Tichy
"I'd kill for a decent test suite and fully-automated deployment. Kill!"

Then why don't you write one? I suspect you don't have the time - well, back
then when you created the system, you did not have the time either. So the
bottom line again seems to be: it is not actually THAT important. Otherwise
you would make the time.

~~~
akeefer
That's a bad conclusion to make: it's the classic tradeoff between importance
and urgency. If you only ever do the urgent stuff, the hair-on-fire-has-to-be-
done-yesterday stuff, you'll never make time to get to long-term strategic
projects.

The payoff for something like unit testing, automated deployment, and
continuous integration comes over the very, very long haul. If your
cost/benefit analysis only ever looks 3 or 6 months out, it'll never seem like
a win.

So if you only ever do the urgent stuff and never anything strategic, 3 years
later you'll realize that if you'd just sucked it up back at the start and
done that stuff, even if it meant putting off otherwise urgent features, you'd
be
further ahead than you are now, because it would have more than made up for
the initial investment.

So it's not that you don't do those things because they're not important, but
rather because they're never urgent, and because most people's time horizons,
especially in a startup, are fairly short.

~~~
Tichy
Still, in 3 years the company might already be bankrupt, and nobody would care
about tests anymore. Bankrupt is maybe too extreme, but the particular code
module you spent 3 months writing tests for might be replaced by some open
source solution or just not be needed anymore.

I kind of see your point, but I find it difficult to deduce a binding rule
from all of this. Sometimes it is important to have tests, sometimes other
things are more important. You still have to decide on an individual basis.

Thing is, the TDD and technical debt evangelists are typically consultants.
Consultants usually earn more money the longer a project takes, and their
income is not tied to the yields of the project. Just something to take into
consideration imo.

~~~
akeefer
I agree that there's no overarching rule that makes the tradeoffs easy to
analyze.

I will, however, say that after working on the same code base for over 7 years
now, and watching the company grow from 15 employees to 400, I can't imagine
ever working at a place that didn't have a large investment in unit tests,
tech debt elimination, automation, etc. Without that stuff, our products
almost certainly would have collapsed under their own weight by now, and our
ability to ship predictably and on-time would be gone. Even within the
company, we have some groups that have done better than others as far as
automating tests (both because of team personality and because of technical
issues that make certain types of features harder to test), and it's quite
obvious that the groups with the best testing are the groups that are able to
make much more predictable progress and that are able to ship on time. The
groups with less-good testing tend to be prone to fairly massive schedule
slippages due to a ton of late-stage regressions that only get caught when
they ship their code out to their internal users.

Once things get to the point where no one person can reasonably understand the
full implications of their changes, because the system is just too big and
complicated, if you don't have unit tests you're in big, big trouble, and you
need to reduce tech debt so you can keep things as comprehensible as possible.
Even then, of course, you have to constantly decide how much to invest in
testing and infrastructure and cleanup versus how much you invest in forward
progress, and there's never an obvious equation that will give you a right
answer.

If your code base and team are small and likely to stay that way, such that
you can still mercilessly refactor and change the code without introducing a
bunch of hidden bugs, then testing doesn't matter as much. If you ever expect
the code to get to the point where that becomes less true, and where the
possibility of introducing errors increases, then it starts to matter a whole
lot more.

Hypothetically, let's assume we built the same product with two teams, one
that did a bunch of unit testing and one that didn't (call them Team A and
Team B). From my experience, what essentially happens is that Team B ships
version 1 first, ships version 2 first but takes about as long to build
version 2 as Team A, ships version 3 at about the same time (since it takes
them longer to build it), experiences a massive schedule slip in version 4
(since the complexity catches up to them and things become buggy and they
start playing whack-a-mole with bugs), and never really ships a version 5,
because their code has so much tech debt that no one can change anything
safely without breaking something else unintentionally, and they start
contemplating a complete rewrite of the code base. Again, a totally contrived
situation (it doesn't have to go that way, Team A could still totally screw
things up anyway, etc.), but that's roughly what I've seen happen, both at my
company and at others.

I don't think it's fair to say that consultants push TDD and tech debt
reduction because that means the project will take longer: that's a bit overly
cynical. Many, many organizations use unit testing and such in house because
it has a huge long-term benefit (as well as generally more predictability in
the short term, which is often more valuable than absolute speed), not because
some consultant told them to do it.

~~~
Tichy
I am not actually against unit tests, but I have seen them taken to unhealthy
extremes. For example, at some companies there are automated tools that check
that every method has a unit test. In the end people write unit tests for Java
getters and setters and so on. Mind-numbing as that task is, people also end
up writing bad unit tests just to silence the tool.
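
The kind of test being complained about looks something like this (a
hypothetical bean-style `Customer` class, sketched in Python): the assertion
can essentially never fail unless the language itself is broken, so it exists
only to satisfy the tool.

```python
class Customer:
    """Hypothetical bean-style class: one field behind a getter and a setter."""
    def __init__(self):
        self._name = ""

    def get_name(self):
        return self._name

    def set_name(self, name):
        self._name = name

# The "unit test" the coverage tool demands: it exercises the accessors,
# bumps the method-coverage number, and can catch no plausible bug.
def test_get_set_name():
    c = Customer()
    c.set_name("Alice")
    assert c.get_name() == "Alice"

test_get_set_name()
print("getter/setter test passed (of course)")
```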

A lot of unit tests make sense, but I suspect they also offer plenty of
opportunities for idling time away.

~~~
akeefer
No question, it's a fine line . . . you need to be pragmatic and ask "is
writing and maintaining this test going to save me more time than it costs?"
Over several years, the maintenance of the tests themselves becomes a huge
cost, which is something the TDD guys don't seem to talk about much. (My turn
for an overly-cynical guess: since many of them are consultants as you've
pointed out, they don't hang around with the same code and the same tests for
7 years, so they don't necessarily see how it really plays out). "Bad tests"
are actually a huge net negative for development.

If the test is testing something (like a getter or setter) with basically no
chance of breaking, then it's a waste of time. If the test is likely to be
fragile or non-deterministic, it's a waste of time. If the test is just too
hard to write, and it's not too hard to just test by hand, then automating it
is probably a waste of time and you should just QA it by hand every so often.

Finding the right balance tends to come back to the old experience and skill
thing: you need to have some intuition about which tests will give you the
most value (because that part needs to be rock-solid, or because it's hard to
get right, or because it's high-change) and which tests need to be thrown away
or never written because they aren't worth it.

Taking any development process too far tends to work out poorly, and taking
any metric (like test coverage) too seriously is always a bad idea. That said,
I've rarely seen unit testing taken way too far; not testing enough and ending
up with buggy, regression-riddled software is a far more common failure mode.

~~~
Quarrelsome
This is why it might be an idea to have unit tests AND QA. Be pragmatic with
the unit tests and center them around core functionality and things that are
hard to test (think very hard about race conditions, for example). QA, if
they're any good, should catch the boneheaded exceptions (such as a
misbehaving getter that calls itself).

------
gruseom
It's nice to see Spolsky get this enthusiastic about something other than his
marketing, and I'm sure Peter Seibel agrees. But he negates his entire point
at the end. After going on about how great duct tape programmers are, he says,
don't think that means _you_ can be one, because they're magic. (He says
"pretty", but in this case pretty means magic.) To wit:

 _Duct tape programmers have to have a lot of talent to pull off this shtick._

In other words what matters is talent, not duct tape. Untalented duct tape
programmers do as much damage as the untalented design-pattern programmers he
scourges. So what was the point again?

~~~
wheaties
Absolutely agree. I work with a "duct tape" programmer and you couldn't pay me
enough to touch his code. I'm so sick of someone asking me what's going on
with RelayHandler's mda function and what do the variables "a", "sb", and "c"
stand for? I kid you not... I don't always agree with Joel and this is one of
those times where I absolutely do not agree. Duct tape programmers can stay
the hell away from me.

~~~
Periodic
He talks in his analogy of people who haven't taken off in their go-cart and
are discussing design issues, and the people who took off and are fixing
things with duct tape. There's a big difference between what I would call a
"duct tape programmer" and someone who happens to keep a roll of duct tape
handy. The former will run that duct-taped system in the next race, and will
keep adding duct tape as problems arise. The latter will run the race, but
then tear off the duct tape, look at why the cart needed duct tape anyway, and
will then start debating design changes to get it to work better next time.

I think "duct tape programmer" should be derogatory, while "practical
programmer" or "pragmatic programmer" would be more apt for Joel's idols.

------
fjabre
Awesome read. I couldn't agree more.

I've worked with a great many 'theorist' coders and they never get anything
done. They spend too much time abstracting into nothingness. You know.. the
kind of guys who remind you of your 3rd grade grammar teacher making sure you
know when to use 'whom' vs 'who'...

While I think eventually one would refine their product so that it uses best
practices, I would say that having customers and a product should definitely
be a prerequisite.

~~~
rbranson
Yes, thank you. The moral of the story is that you ship a product first, then
you tweak, improve, and refactor it once you've got a reason to!

~~~
azanar
If you do that, you'll run a very considerable risk of wondering why version 2.0
of your product is taking so damn long to ship. The answer: all of the things
you punted, ignored, assumed, patched over, and otherwise haphazardly threw
together in version 1.0. Now all these have set your code in concrete, and you
have to remove half the foundation to get them back out.

~~~
logicalmind
This is known as the "second system effect":

<http://en.wikipedia.org/wiki/Second-system_effect>

~~~
tome
No it isn't. The Second System effect is about wanting to put in all those
features into version 2 that you left out of version 1.

~~~
logicalmind
Yes, which is what the parent said:

"all of the things you punted, ignored, assumed, patched over, and otherwise
haphazardly threw together in version 1.0"

~~~
tome
No, sorry, it isn't. Fred Brooks goes into detail when he coins the term
"Second System Effect", and it definitely doesn't refer to half-arsed, half-
debugged, haphazardly thrown together _anything_ from version 1.0.

The Second System Effect is specifically about new features.

------
KevinMS
"One principle duct tape programmers understand well is that any kind of
coding technique that’s even slightly complicated is going to doom your
project."

Like writing a custom compiler for your web app?

<http://www.joelonsoftware.com/items/2006/09/01b.html>

After he jumped that shark I don't read anything he writes anymore.

~~~
swilliams
"After he jumped that shark I don't read anything he writes anymore."

...how'd you get that quote then? Or did you only read enough to get something
to complain about?

~~~
cubicle67
He wrote a multi-threaded C++ app to parse the html and return a random
sentence?

Rhetorical question: what percentage of an article needs to be read for the
article as a whole to be classified as having been read?

------
DanielStraight
"Any kind of coding technique that’s even slightly complicated is going to
doom your project."

"They xor the 'next' and 'prev' pointers of their linked list into a single
DWORD to save 32 bits, because they’re... smart enough, to pull it off."

How is that not even slightly complicated?

~~~
jerf
The Kolmogorov complexity of COM is, at the very least, hundreds of kilobytes
of itchy, fidgety, sensitive, and complicated code. The Kolmogorov complexity
of xor'ing two pointers to save 32 bits is on the order of tens or hundreds of
bytes. (I'm using the term a bit loosely, obviously, but I think it gets the
point across.) I suppose it depends on the limit of "slightly", but in context
I think it's clear we're talking about "techniques" that are more than a three
line hack in your linked list library. YMMV. (That is, I do see the point you
are trying to make.)

Presumably, the duct tape programmer is doing that because it is the
difference between making the product go and not making the product go, not
because they love bit packing. It's not a technique I'd adopt today, but
Zawinski (just to choose one example from his repertoire I've read about) was
trying to make machines with, say, 8MB of RAM able to read thousands of email
messages. You get a bit nutty under those constraints, or you ship slow crap.
There isn't much of a third choice. (Fast and featureless, maybe.)
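
The pointer-packing trick under discussion can be sketched in a few lines;
Python has no raw pointers, so array indices stand in for addresses here (an
illustration of the idea, not Netscape's actual code):

```python
# XOR linked list: store prev ^ next in ONE field per node instead of two.
# "Addresses" are simulated with array indices; index 0 is the null sentinel.

values = [None, "a", "b", "c", "d"]   # node payloads, slot 0 unused
links = [0] * len(values)             # links[i] = prev_index ^ next_index

# Build the doubly linked list 1 <-> 2 <-> 3 <-> 4 using half the link storage.
nodes = [1, 2, 3, 4]
for i, n in enumerate(nodes):
    prev_i = nodes[i - 1] if i > 0 else 0
    next_i = nodes[i + 1] if i < len(nodes) - 1 else 0
    links[n] = prev_i ^ next_i

def traverse(head):
    """Walk forward; remembering where we came from recovers where to go next."""
    out, prev, cur = [], 0, head
    while cur != 0:
        out.append(values[cur])
        prev, cur = cur, links[cur] ^ prev   # next = stored_link ^ prev
    return out

print(traverse(1))   # ['a', 'b', 'c', 'd']
```

The saving is real but so is the cost: you can no longer jump into the middle
of the list from a bare node pointer, which is exactly the kind of trade you
only make under severe memory pressure.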

(I _think_ I can bid lower than 8MB of RAM, too, but I'm a bit fuzzy on
netscape timeframes vs. ram timeframes. I think 4.0 was in the 32-64-128MB
era, putting 3.0 a ways back, but I'm not sure.)

~~~
DanielStraight
I see what you're saying, and I agree. It's not comparable to COM.

I think the thing is, articles like this tend to create some idealized
programmer that is just a conglomeration of attributes the author likes _even
if they are mutually exclusive_. To me, avoiding complexity and doing bit
manipulation are mutually exclusive.

It's like saying you should use a left shift (or is it right...?) instead of
dividing by 2. OK, it may be faster. Or the compiler may just do the same
thing regardless of how you type your code. The point is that "/ 2" means
divide by 2 to anyone at all familiar with code. Unless you have some really
compelling reason to do otherwise, you should use "/ 2".

Using shifts for division (or various other bit manipulation) may be how your
idealized programmer shows their classical training, but don't kid yourself
into thinking that bit manipulation fits into all your other ideals for
programmers.

Joel's idealized programmer also avoids unit tests. Are you serious? How can
this _possibly_ be a good idea? No, your customers don't care if you wrote
unit tests... in the same way you don't care if your architect does whatever
it is architects do to ensure the accuracy of their work. But that's just the
point. You don't care (nor should you) about _how_ they ensure accuracy. You
care only that they do. So no, your customer doesn't care if you wrote unit
tests, but I assure you they care if your software crashes or gives inaccurate
information.

Of course, no one ever creates an idealized programmer without creating their
opposite. Joel's "ideally" bad programmer multiply inherits from 17 sources.
Does any sane programmer really do this? No. Of course not. Why bother
mentioning it? It's like saying an idealized pilot is not like those other
pilots that intentionally crash their planes. Well... _no one_ intentionally
crashes a plane. Don't bring up absurd examples to prove your point. If real
life doesn't prove it, then it's not a valid point.

The simple fact is that when I look at my own real-life, deployed-in-
production code, I find this: The code I wrote just to get a problem solved in
whatever way possible (duct tape) becomes more and more of a liability as the
requirements change. With the code that I spent the most time designing
(assuming I eventually came up with a good design), the more the requirements
change, the more I see the beauty of the design. When a change in requirements
can be fixed with a find/replace, it's a job well done. Duct tape code leads
to duct tape maintenance. Duct tape maintenance leads to thedailywtf.com.

I have no problem with emphasizing the importance of shipping software. I have
a problem with people saying "real programmers use butterflies" when they
aren't writing a web comic.

I don't think there's a single "real programmers" article in the universe that
is internally consistent (doesn't advocate any mutually exclusive practices).
Like I said, it's an ideal, an ideal constructed out of everything the author
could find in their mind, whether it fits together or not. This wouldn't be a
problem if the author admitted even a slight possibility of exaggeration or
lack of internal consistency, but they never do.

Now... I think by now I've probably exaggerated and broken internal
consistency enough for one day, so I'll stop here.

~~~
redcap
Unit tests have sometimes been a great help, especially for regression tests,
but they can get in the way, especially if you actually want to ship.

I can see the benefits of getting the 1.0 to market first (if buggy), getting
some market share and using that lead time to either iron out the bugs or to
rewrite so you don't have to put up with duct tape maintenance.

I've been in a situation where the users started using the prototype because,
despite being buggy as hell, it did stuff light years ahead of what they had
before. So imo duct tape 1.0 is ok.

~~~
kscaldef
I don't see from your argument how unit tests keep you from shipping. You can
still choose to ship a product with failing tests. The difference is now you
know what (some of) the bugs are.
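
One way to ship with failing tests without losing track of them is to mark
the known failures so the suite records the bugs instead of hiding them;
Python's `unittest.expectedFailure` is one such mechanism (the `parse_version`
bug below is invented for illustration):

```python
import unittest

def parse_version(s):
    # Known bug, shipped anyway: a leading "v" prefix is not handled yet.
    return tuple(int(part) for part in s.split("."))

class TestParseVersion(unittest.TestCase):
    def test_plain(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    @unittest.expectedFailure
    def test_v_prefix(self):
        # The suite documents the bug; the run still counts as green.
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParseVersion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())          # True: the failure is expected, not hidden
print(len(result.expectedFailures))    # 1: the bug list, machine-readable
```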

------
Virax
Jamie Zawinski is not "hard at work building the future". According to his own
website, he is managing the DNA lounge, and the last thing of any substance he
worked on was a program to delete silence from mp3 streams (see
<http://www.dnalounge.com/backstage/src/archiver/>). He claims a copyright
date of 2001-2006 for this program, which, after a quick skim, appears to be
high quality. In my opinion, he is a talented programmer who has this to say
about the software industry:

    
    
    (1999+) But now I've taken my leave of that whole sick,
            navel-gazing mess we called the software
            industry. Now I'm in a more honest line of
            work: now I sell beer.

So, I suppose, Joel is right, in a roundabout way:

Selling beer => flirtation => sex => sperm + egg = Building a future human
being!

But seriously, Joel is on crack.

~~~
eugenejen
Jamie also worked under Peter Norvig, and this is what Peter said about Jamie:

"One of the best programmers I ever hired had only a High School degree;
he's produced a lot of great software, has his own news group, and made enough
in stock options to buy his own nightclub. "

\-- from <http://norvig.com/21-days.html>

------
timr
I agree with the basic idea, but I think Joel is going over-the-top with the
C++ hate. I've actually shipped real code that used the _insanely complicated_
feature of C++ templates. Works great.

The problem is not with a specific language or technology -- it's using the
_bleeding edge_ technology, when the boring one will do.

------
abalashov
I am not a fan of this kind of extreme maximalism; surely there has got to be
a decent compromise? That's assuming, of course, that purely utilitarian
pragmatism vs. lofty, academic architecture idealism is a valid dichotomy, and
that there don't exist a variety of third ways and composite profiles. Of
course, any useful generalisation that posits a continuum can be torn down,
but I really think that in this case it needs doing.

There has _got_ to be a better way than being a "duct tape programmer." It
seems to me that one can practice good design, architectural grace, and hold
true to a variety of other tendencies that seem theoretically and
aesthetically appealing (the latter is very important; every good programmer I
have ever met sees an artistic aspect to programming, even if it is not
necessarily the central or principal one - it is a craft) without being the
guy that never actually puts out any concrete deliverables.

I think this is just an angry, bitter overreaction - and a very understandable
one that I fully endorse - to the dogmatism of many test-driven development
acolytes and pig-headed "patterns" people.

------
Triston
I call bullshit. I've worked with some "just get it done" programmers. Have
you tried to go into code that someone threw in just to make it work?

Abstraction, interfaces and unit tests are not a leisurely activity for
academic developers. We use them to make the code less complex and easier to
maintain. The cost of development isn't the initial code base, it's the fixes
and additional features people want AFTER the initial release(s). Having these
in place reduces the time it takes to go back into the code and safely make
changes or add code.

I had an application without automated testing, it cost the company almost
2000 man hours to test the system every time they made a release.

Design patterns, Joel, are repeatable patterns within code. Design patterns
are, again, there to help: when another developer goes into the code, they can
see what the heck the original developer was trying to accomplish.

To summarize, I would suggest you outsource some code to the Far East. They
will get it done really fast for you. And yes, it will only work 50% of the
time. I love buying products that only work 50% of the time, and
unfortunately I don't get to pick which 50% works.

~~~
rbranson
I don't think that's the point Joel is making. At the end of the article, it
really comes together. Perhaps you are too heavily focused on the specifics
(unit testing, etc) instead of the overall theme. He's basically saying that
while the other guys are wasting time overengineering a project, sometimes
just getting to work and getting started is a much more productive approach.

Joel isn't directly bad-mouthing unit testing or multithreading, but as
developers we tend to think about all of these cool toys we can use and
"ooooh" and "ahhhh" instead of actually shipping code. Try not to get so hung
up on the specifics.

~~~
rimantas
And the guys who will have to maintain that promptly shipped code will say a
lot of "wtf?" and waste 10x more time on it.

------
clutchski
"And unit tests are not critical. If there's no unit test the customer isn't
going to complain about that."

By all means, ship. Do what you gotta do. But a code base that doesn't have
tests cannot safely be refactored. This technical debt must eventually be paid
by the product owner in cash and the code's maintainers in sanity.

~~~
mkramlich
I disagree with your statement that a codebase without tests cannot safely be
refactored.

I've been refactoring code for 20+ years, the overwhelming majority of the
time without any automated tests, and I'd say offhand 99% of the time it
causes no
bugs, and in the occasional case where it does cause a bug (because I am
imperfect and sometimes make mistakes), I almost always soon find it during
the same coding session and fix it.

The key is to understand the code well enough to know what affects what and
how. Hold that model in your mind and you're golden. Lots of time saved not
writing tests, updating them, fixing them when they break, etc.

Note that this is not an argument against tests in general, just an argument
for there being cases where you don't miss them and they would be a net loss
if you had them due to all the extra make-work required.

I think there's a lot of kool-aid drinking going on among people who
themselves probably lacked the ability to do "naked" refactors well. To those
folks I say, "Great, have fun storming the castle!" but don't assume that
other folks who haven't drunk your kool-aid are constantly banging their heads
on the wall breaking the code or living in fear of mysterious hypothetical
bugs due to a lack of tests.

A really excellent 'old fashioned' sort of test is to just run the fricking
code -- did it work? did it do what it was supposed to do? does the data look
good? OK, move on to the next one of the thousands of other problems you have
to solve and tasks you have to do in life. And use version control, so if you
retroactively discover a problem, you can review the diffs, or roll back, or
do a tactical patch against the branch, etc.

I do agree with your statement, "By all means, ship. Do what you gotta do."
And I agree that that attitude may cause you to at least temporarily incur
technical debt, and you generally want to pay that down as soon as feasible.
(backing out ugly hacks to replace with more elegant or easier to read
implementations, etc.)

~~~
tezza
Amen,

It's nice to hear from someone with experience from before the "Unit Test is
compulsory" explosion. Programmers should always test their work, but testing
comes in a much more diverse range than mere Unit Tests.

There are plenty of cases where Unit Testing is 'embarrassingly'[1]
appropriate. These pin-up applications blind Testing advocates to the fact
that Unit Tests are often inferior to other methods, or simply not possible.

An example where completely automated tests are impossible is PDF generation.
One cannot 'Unit Test' this. One has to build a framework to take test data,
create fewer than 100 PDFs, and then have a human eyeball them. Humans cannot
eyeball more than 100 images and perceive subtle errors, and fewer than 100
output images means this cannot exercise every codepath of even simple
applications.
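
The render-then-eyeball workflow described above can be sketched as a small
harness; `render_pdf` here is a stub standing in for whatever the real render
pipeline is:

```python
import os
import tempfile

def render_pdf(test_input):
    """Stub renderer: returns fake PDF bytes for test_input.
    A real pipeline would invoke the actual PDF generator here."""
    return ("%PDF-1.4 fake render of " + test_input).encode()

def build_review_set(test_inputs, out_dir, limit=100):
    """Render at most `limit` cases into out_dir for a human to eyeball."""
    if len(test_inputs) > limit:
        raise ValueError("humans can't reliably eyeball more than ~%d" % limit)
    paths = []
    for i, case in enumerate(test_inputs):
        path = os.path.join(out_dir, "case_%03d.pdf" % i)
        with open(path, "wb") as f:
            f.write(render_pdf(case))
        paths.append(path)
    return paths   # hand this list to the reviewer

out = tempfile.mkdtemp()
print(build_review_set(["invoice", "two-column", "cjk-fonts"], out))
```

The point of the `limit` guard is exactly the "fewer than 100" constraint: the
harness is automated, but the oracle is a human, so the test set has to stay
small enough for a human to judge.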

Often I was working on a part of the render pipeline that was not currently
exercised by the existing tests. Did I create a whole new test suite to
generate test images for each branch condition? If it was important, yes, I
created a new end-to-end test. But if it was not, then I adapted some existing
test input and used my best judgement and my knowledge of the internal state.
This test did not last beyond my short-term memory and my own set of eyeballs.
If a problem occurred later, I would recreate the test from memory.

This is still TDD, but it is far less of a straitjacket than requiring
automated testing. The tests are effectively 'thrown away', but the tested
code remains. I would also say that the coder knowing which portions to test
is superior in many cases.

\--------------

[1] Similar to :: <http://en.wikipedia.org/wiki/Embarrassingly_parallel> .
Network stacks, account balances and frameworks are all embarrassingly unit-
testable.

~~~
mkramlich
thanks for backing me up. Yeah, I often feel un-PC when I say anything bad
about unit tests. (Like saying, gee, maybe there are differences between
races, or between cultures, or between genders -- Cats, what you say?!?!)

and agreed, there are situations where like you said it's embarrassingly
appropriate to have tests. To me the classic case is where you are publishing
a code library with thousands of real users across the internet, with real
apps built against it already, themselves already in production, etc. It's
probably downright stupid of the maintainer to not have a suite of automated
tests they can execute, and must pass, before every release, to ensure no
regressions. So the maintainers can catch them, and resolve them, before they
make apps break downstream.

But the whole 'you must write tests always, before any application code' thing
strikes me as insane and masochistic. :)

~~~
Quarrelsome
It might just be a matter of context, and without the context we get the
zealotry. The biggest mistake everyone makes when saying "XYZ is teh lames" or
"teh wins" is not describing their context.

Our OS team works differently from our apps team for example, because if
something in their stuff breaks it's a big deal, they also really worry about
backwards compatibility, and whenever I touch their code I have to create an
IPrinter34 to not break things for old clients (who still want their
IPrinter12).

In our apps team though we keep things cleaner and kick out the IPrinter23 and
keep things as IPrinter so people can read the code more clearly. Backwards
compatibility isn't so much of an issue for us (apart from obviously
considering updates) as we release one single unit that replaces all our
files. If somebody has an old OS then it's not supposed to work anyway, so the
app going titsup is
the correct outcome.
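A rough sketch of the two styles described above, in Python for illustration
(the interface and class names are taken from the comment; everything else is
hypothetical):

```python
from abc import ABC, abstractmethod

# OS-team style: never change a shipped interface. Each revision is a new
# interface extending the old one, so clients written against IPrinter12
# keep working untouched.
class IPrinter12(ABC):
    @abstractmethod
    def print_document(self, doc): ...

class IPrinter34(IPrinter12):
    @abstractmethod
    def print_duplex(self, doc): ...

# Apps-team style: one interface, edited in place. Everything ships as a
# single unit, so there are no old clients to protect.
class IPrinter(ABC):
    @abstractmethod
    def print_document(self, doc, duplex=False): ...

class LaserPrinter(IPrinter34):
    """Implements the newest OS-team interface; an IPrinter12-era client
    can still call print_document unchanged."""
    def __init__(self):
        self.log = []

    def print_document(self, doc):
        self.log.append(("simplex", doc))

    def print_duplex(self, doc):
        self.log.append(("duplex", doc))
```

The first style trades readability for stability; the second trades stability
for readability, which is exactly the context-dependence the comment is
pointing at.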

Therefore I don't have an opinion on this subject either way, in some
scenarios you do IPrinter34 and in others IPrinter.

When people talk with strong opinions on code they should probably start by
announcing their own applications of it.

------
dangrover
Maybe Netscape's duct-tape programming had some side-effects though...
<http://www.joelonsoftware.com/articles/fog0000000069.html>

~~~
wglb
If you read the chapter that Joel recommends, it talks about the design
patterns guys that came in and how the anti-duct tape guys had a role to play
in that delay.

~~~
dangrover
I haven't read the book yet, plan to get it soon. Sounds awesome.

------
pavelludiq
Duct tape is a great metaphor. Duct tape was one of my favorite toys as a
child. My dad was always mad at me for wasting it. But I can't help it if I
want to build a tower from straws and duct tape; a tool that's as flexible as
duct tape is empowering for a 10-year-old. I propose duct tape become the new
hacker symbol! :D

~~~
SapphireSun
;-) I think many other engineering disciplines would be upset if we took it
all for ourselves - especially mechanical engineers.

~~~
pavelludiq
I'm sure that many of us will agree that hacking is not limited to software.

------
lupin_sansei
I sometimes feel that programming is the mathematical/logical equivalent of
the kludges on this site <http://thereifixedit.com/>

BTW there are some great JWZ links here <http://www.reddit.com/domain/jwz.org>

------
alanl
Basically Joel is saying that duct tape programmers are pragmatic programmers
whose priority is to get the job done. Now, from my experience in the office,
there are very few non-pragmatic programmers, and even fewer coders who don't
want to get the job done as quickly as possible.

So that means that the majority of us are duct tape programmers, right? But
that doesn't fit, so what's wrong? Well, I think there are different types of
duct tape programmer based on how smart they are, and that only the really
smart ones can successfully write systems without a single test. The
remaining programmers use tests to ensure that what they have done works, and
also hasn't broken something else.

So given the fact that most people work in teams of programmers of varied
skill levels, it makes sense to write tests. And while this might slow down
the one or two super-smart guys on the team, it will aid the rest of the team.

------
ptn
Basically, a hacker.

------
GeneralMaximus
Moral of the story: there is no silver bullet.

Use some template magic _when you need it_ , create sprawling class
hierarchies _if required_ and write tests _if you think they are necessary_.
In the real world, purity is a liability, not an asset.

------
scotty79
I have a friend like that and he has saved my ass many times. When I was stuck
trying to find a satisfying solution, he would almost always come up on the
spot with something simpler than I was striving for, but upon close inspection
good enough. If I pointed out a significant problem in his solution, he either
came up with a fix or abandoned his idea without regret.

I like to think that I design better APIs and libraries than him, because I
concentrate much more on what I want to have and try to weed out any
inconvenience; but when I can't get what I want, he comes up with an idea
that I actually can get, and it is good enough.

------
BerislavLopac
"We’ve got to go from zero to done in six weeks"

This is my personal pet peeve. There is the iron triangle at work again, and
when one of its points is fixed you still have two others to adjust, which
seems to be forgotten here. Zawinski is trying to keep the scope (i.e. what
defines the "done") and is sacrificing quality; he should instead try to make
his life easier by reducing both a little bit rather than cutting just one to
the bone.

------
tmikew
I would just say this for folks who don't like unit tests because they take
longer. Push yourself away from the keyboard and think about it. When one
writes code, one writes unit tests in one's head, or the code doesn't work. I
submit that we _always_ write unit tests. The difference is that in one case
we keep them so we can run them over and over; in the other case we do it
anyway in our heads and then throw it away. I am _not_ convinced that writing
unit tests down takes any longer. We all know it saves our bacon later.
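To make that concrete, here is a minimal sketch (the function and its test
values are hypothetical) of the mental check turned into kept code:

```python
def word_count(text):
    """Count whitespace-separated words."""
    return len(text.split())

# The same checks a programmer runs mentally after writing the function
# ("empty gives 0, one word gives 1, extra spaces don't matter"), kept
# as code so they can be re-run after every future change instead of
# being thrown away:
def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("  spaced   out  ") == 2
    assert word_count("a b c") == 3
```

Writing the assertions down costs seconds; the head-version costs the same
seconds and then evaporates.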

I have personally thrown out _entire_ chunks of code except the unit tests and
started from scratch to get things working again. I don't think the value of
this can be overstated.

------
jnaut
The title "the duct tape programmer" may be a bit misleading, but I think the
essence and emphasis was that "shipping is a feature" and over-engineering is
not.

I don't think Joel meant to say that accumulating technical debt
(<http://en.wikipedia.org/wiki/Technical_debt>) is the way to go, rather he
suggested/re-iterated Donald Knuth's statement on optimization: "We should
forget about small efficiencies, say about 97% of the time: premature
optimization is the root of all evil." in his very own way.

------
JustAGeek
Is Netscape really that good an example of Duct Tape Programmers at work?
Granted, they got a killer application out that was successful for quite some
time, but considering the following events (that is, Netscape deciding the
codebase had got so bad that a complete rewrite was in order), isn't it rather
an example of duct-tape programming doing more harm than good? Or am I missing
something?

EDIT: The fact that a complete rewrite is a big mistake, is another story, of
course...

------
bootload
_"He is the guy you want on your team building go-carts, because he has two
favorite tools: duct tape and WD-40."_

This phrase comes from watching too much Eastwood (Gran Torino ~
<http://www.imdb.com/title/tt1205489/>). The idea behind it is that you can
jury-rig or fix almost anything with WD-40 & duct tape alone, without the need
for fancy expensive tools.

~~~
blasdel
Except that both of those tools are the absolute worst at their respective
jobs!

Standard duct tape uses an awful adhesive that, depending on the humidity,
turns into a gummy mess or desiccates into flakes -- either way leaving a
difficult
residue and not actually holding. The loose right-angle weave of the coarse
fibers means that it has zero shear strength on the most common axes, and is
prone to splitting when under tension. The outer vinyl layer will separate on
its own in heat, leaving a mess of fibers + adhesive behind.

WD-40 combines a solvent, a mild lubricant, and an adhesive (!) -- it's
extremely prone to collecting grit and caking it onto surfaces. It will
displace any better lubricant it is applied onto.

~~~
bootload
_"... Standard duct tape uses an awful adhesive that depending on the humidity
turn into a gummy mess or desiccates into flakes ... WD-40 combines a solvent,
a mild lubricant, and an adhesive (!) -- it's extremely prone to collecting
grit and caking it onto surfaces ..."_

I hear what you're saying, but I'm talking hacks
(<http://www.flickr.com/photos/bootload/3961148668/>) not engineering ~
<http://www.flickr.com/photos/bootload/3960385835/>

------
juvenn
> A 50%-good solution that people actually have solves more problems and
> survives longer than a 99% solution that nobody has because it’s in your lab
> where you’re endlessly polishing the damn thing. Shipping is a feature. A
> really important feature. Your product must have it.

I think this does matter.

------
mrshoe
Just remember that this principle applies doubly if you're a startup.

And it applies doubly again if you're an _early stage_ startup, because you're
still deciding what to build at that point. Astronaut architecture is a
complete waste of precious time that you don't have.

~~~
abalashov
It really depends. If you put together such a kludge that you're going to have
to completely rebuild it to scale past a nontrivial quantity of initial
customers, you would do well to put at least a little thought into the
theoretical foundation of what you're doing.

------
johnwatson11218
When Sarah Palin was running for office I heard a British politician remark
that Sarah Palin represented the negation of politics. She appealed to people
who were fed up with politics and politicians. For some reason Joel's argument
reminds me of this. He seems to have examples of over the top designs gone bad
but in the end I can't find much of real value to take away from this article.
Is the visitor pattern too much? What about hibernate or other ORM tools?

------
kschiess
<http://blog.absurd.li/2009/09/24/on_smart_boys_in_programming.html>

------
jrockway
Ah yes, now I know why xemacs and firefox crash so much.

