
The Duct Tape Programmer - pcr910303
https://www.joelonsoftware.com/2009/09/23/the-duct-tape-programmer/
======
jdauriemma
I enjoyed reading this piece and really benefit from hearing this sort of
practical wisdom.

> [Unit tests] sound great in principle. Given a leisurely development pace,
> that’s certainly the way to go. But when you’re looking at, ‘We’ve got to go
> from zero to done in six weeks,’ well, I can’t do that unless I cut
> something out. And what I’m going to cut out is the stuff that’s not
> absolutely critical. And unit tests are not critical. If there’s no unit
> test the customer isn’t going to complain about that.

I hear this sentiment being shared a lot by self-described pragmatists. I
don't doubt that this individual made the right choice at the right time given
the circumstances, and I certainly can't claim to understand the unit test
tools available to C programmers in the Netscape era. But I do know that there
is data to strongly suggest that unit testing increases velocity. And I also
know that if a module is cumbersome to test, it's almost always a code smell,
not an indictment of unit testing.

Anecdotally, I feel way less anxiety shipping production code that is well-
tested. That's important, because the root of merge-induced anxiety is
uncertainty about the impact on the end user - I'm sure the customers care
about _that_.

~~~
MaulingMonkey
> And I also know that if a module is cumbersome to test, it's almost always a
> code smell, not an indictment of unit testing.

I work with a disturbingly large number of exceptions.

What are the correctness criteria for, say, graphics? Is the scene rendered
"right"? Accounting for differences in graphics cards, drivers, etc. leading
to minor variations in the pixels? Large swaths of the scenes are going to
render differently if I change shadow mapping techniques, or adjust the cap or
prioritization criteria of shadow casters. Just capturing and returning device
screenshots to the test harness is hideously platform specific.
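The usual workaround for those minor pixel variations is a golden-image comparison with a tolerance, though as noted above it still breaks whenever the rendering technique itself changes. A minimal sketch of the comparison step (pure Python, images as flat lists of RGB tuples; the capture step, the hard part, is omitted):

```python
def images_close(a, b, per_channel_tol=8, max_bad_fraction=0.001):
    # Compare two same-sized images, tolerating small per-channel
    # differences of the kind different GPUs/drivers produce.
    # Fail only if too many pixels exceed the tolerance.
    assert len(a) == len(b), "images must be the same size"
    bad = sum(
        1 for pa, pb in zip(a, b)
        if any(abs(ca - cb) > per_channel_tol for ca, cb in zip(pa, pb))
    )
    return bad / len(a) <= max_bad_fraction
```

The tolerance values are arbitrary here; in practice they get tuned per platform, which is part of the maintenance burden being described.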

How do I unit test hardware abstractions? I've encountered buggy gamepad
abstractions across enough codebases I'm half tempted to set up a physical
test rig somehow...

What about the deploy process for a Win8 .appx vs a Android .apk vs an iOS
.ipa?

I wrote tests for APK deployment once. ADB kept lying to us and saying it'd
deployed when it didn't, so I wrote a wrapper to catch it in the act and cry
foul (by checking file metadata). Tests of the wrapper required spinning
up/down Android emulators, skipping it for machines where Hyper-V was
disabled, and parsing logcat output to figure out when the emulator was alive
enough to start interacting with. Did you know `adb logcat` has multiple text
formats, and which one is the factory default can be device specific? There's
a reason I started writing tests! We eventually figured out the root cause of
ADB lying: loose USB connections. But those are sadly common, so the need for
my wrapper remained.
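The metadata check might be sketched roughly like this (hypothetical names; `stat -c '%s %Y'` is assumed to exist on the device, which holds for toybox-based Android builds but not necessarily older ones):

```python
import subprocess

def remote_file_meta(serial, device_path):
    # Ask the device for (size, mtime) of a file via `adb shell stat`.
    out = subprocess.check_output(
        ["adb", "-s", serial, "shell", "stat", "-c", "%s %Y", device_path],
        text=True,
    )
    size, mtime = out.split()
    return int(size), int(mtime)

def install_verified(before, after, local_size):
    # ADB reported success, but did the APK actually land? Treat the
    # deploy as real only if the on-device file changed (catches the
    # "adb lied, nothing happened" case) and its size matches the
    # local build artifact (catches partial/stale copies).
    return after != before and after[0] == local_size
```

The verification logic is kept separate from the `adb` plumbing so it can itself be unit tested without a device attached.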

It took me probably 2 weeks to get the wrapper into a solid state for Android
project #2. For Android project #1, management wanted a port to iOS _and_
Android in the impossibly small timeline of two months. We had things kinda
working by then, and I think we were mostly done with post-launch patching
within three months or so - or at least I moved back off the project by then.
We lost hours to debugging stale builds and wondering why our new fixes didn't
fix anything, but that's still less time than I sunk into getting the wrapper
working for project #2, so I wager that saved us time / let us fix more
product-visible bugs on project #1.

Project #2 would later be canceled resulting in major layoffs. I doubt my
wrapper ever paid for itself as a result - it simply didn't get used for long
enough. Part of me wonders if I should've been more "pragmatic" and helped
with more customer-facing stuff instead, if that would've prevented
cancelation. (But we already had good people doing good work on that front,
and I would've risked getting in their way, so I suspect making them be able
to trust their deploy process was a good use of my time.)

> Anecdotally, I feel way less anxiety shipping production code that is well-
> tested. That's important, because the root of merge-induced anxiety is
> uncertainty about the impact on the end user - I'm sure the customers care
> about _that_.

While I share your anxiety over poorly-tested code, manual testing and good QA
can cover a multitude of sins, and the end user for a lot of stuff can be your
fellow developers - not your customer.

~~~
jdauriemma
Your points are well-taken, but some people extrapolate from hard-to-test
domains like the ones you're describing and apply the same ethos to projects
that can and should be unit tested, such as web applications. Manual testing
and QA are great, but they cannot provide the sort of architectural guidance
that unit testing provides. If a module is difficult to unit test, it will
also be difficult to maintain. Manual testing will not readily provide those
insights.
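As a concrete (and entirely hypothetical) illustration of the kind of refactor that pressure produces:

```python
# Hard to unit test: the function reaches for a live database and the
# system clock directly, so every test needs real infrastructure.
#
#     def expiring_soon():
#         rows = db.connect("prod").query("SELECT * FROM licenses")
#         return [r for r in rows if r.expires < time.time() + 86400]

# Easy to unit test, and easier to maintain: the dependencies are
# passed in, so the policy logic is isolated from infrastructure.
def expiring_soon(licenses, now, horizon=86400):
    """Return licenses expiring within `horizon` seconds of `now`."""
    return [lic for lic in licenses if lic["expires"] < now + horizon]
```

The test-friendly version is also the one where the business rule is visible at a glance, which is the architectural feedback manual QA can't give you.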

------
GrumpyNl
It's from 2009 but still valid. Keep it simple and choose your tools wisely.
Most of the time, the company you work for is nothing like Google, so don't
try to use all their fancy and shiny tools. For a lot of websites, PHP and
jQuery are more than sufficient.

