You should be using source control. It might sound shocking to the GitHub generation, but there was a time not too long ago when important work with millions on the line routinely got done with one copy of the code sitting on a single workstation, and collaboration happening over network shares or email. I worked at one of those places. It sucked. The Joel Test was part of an evangelization movement that changed standard practices in our industry radically, for the better.
Building in a single step? Still relevant for many projects. Less relevant for, e.g., Rails web apps. I might substitute "can you deploy to staging and production in one step".
Speaking of which, a Joel Test in 2010 should include "do you have a fully functional staging environment" and "can you recreate a dev environment from a factory new machine in under an hour".
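For what it's worth, here's roughly what I mean by "one step" -- just a hedged sketch, not a recommendation of any particular tool (in Rails land Capistrano already gives you this). The hosts, paths, and the Passenger-style restart file below are all made up:

    #!/usr/bin/env python
    # Hypothetical one-step deploy: `python deploy.py staging` or
    # `python deploy.py production`. Hosts, paths, and the restart
    # mechanism are placeholders for whatever your stack uses.
    import subprocess
    import sys

    TARGETS = {
        "staging": ("deploy@staging.example.com", "/var/www/app"),
        "production": ("deploy@www.example.com", "/var/www/app"),
    }

    def deploy(env):
        host, path = TARGETS[env]
        # Push the working tree (minus VCS metadata) to the target box...
        subprocess.check_call([
            "rsync", "-az", "--delete", "--exclude=.git",
            "./", "%s:%s/" % (host, path),
        ])
        # ...then poke the app server to reload (Passenger-style restart file).
        subprocess.check_call(["ssh", host, "touch %s/tmp/restart.txt" % path])

    if __name__ == "__main__":
        deploy(sys.argv[1])

The point isn't the script; it's that "deploy" is one command anyone on the team can run, not a ritual.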
Fixing bugs versus writing new code, well, one might quibble with that in some scenarios. There exist businesses where the marginal business or customer value of squashing an edge-case bug is measurably less than that of new features. I have a known bug in my shopping cart that permits abuse of discounts. It has never been exploited. The minimum cost of fixing it is several hundred dollars. Of course I write new code rather than fixing it. You can come up with quite a lot of similar scenarios if you're doing a lean startup -- there is little point to rigorously debugging code which might have an expected lifetime of a week.
I'm all for the Joel Test, and I've definitely seen places without source control, or with terrible or no source control.
But do you really think the Joel Test was one of the real pushers for change? I know Joel is popular, but I'd bet the Joel Test has been read by all of 1-2% of programmers. Moreover, I'd guess that it's the 1-2% who were already using source control.
Don't mean to nitpick, I'm honestly wondering if I'm underestimating the impact of this article. Completely agree with the rest of your comment.
Back then, there were no blogs. There were no programmers who wrote stuff. Joel was pretty much the first. And his stuff was really good and actionable.
So if you cared to read about your field, you were reading Joel. If you sent a programming link to your team, chances are it was one of his articles.
Coming into the field today, it's understandable that you might not have heard of the guy. But if you were around in the late 90s and you'd heard of anybody, you'd heard of Joel. If you were on a team in the late 90s and you wanted to improve things, you sent around a copy of the Joel test.
So yes, I do think a lot of the credit for the higher standards we have today goes to him.
I was inspired by Scripting News, which had been discussing programming for quite a while at that point.
But you're right about the 90s. They were a disgusting decade in a lot of ways. The reasonably _good_ development shops of the 90s looked like the horrible shops of today. I worked at a couple of top-flight Common Lisp companies in those years, and they had (1) incomplete, token test suites with dozens of "expected" failures, (2) release cycles of a year or more, (3) independently written modules with "big bang" integration a couple of times per year, and (4) version control systems ranging from CVS (on a very good day indeed) to vile, in-house horrors written in Perl 4. But when I talked to other programmers, nearly all of them worked in much worse environments.
The early days of the XP movement were a revelation for me: You could test _everything_. You didn't have to get the design right up front. Instead, you could rely on your unit tests to help you refactor your code. You could work in 2-week release cycles and pull your features from a constantly-changing priority queue. This, of course, all seems completely obvious today, and not even particularly "agile".
Starting around 2000, Joel wrote his short, funny essays. The biggest advantage of Joel on Software wasn't that his ideas were new, but that he communicated them well enough for managers to understand. And I stopped a few bad decisions in my day by sending those essays around. So while I agree his writing was influential, I don't think he deserves _quite_ the central importance you give him.
I think this is where it fails: most people don't read about their field at all. At least in my experience.
I only started programming professionally in 2004, though, for what it's worth.
My gut feeling is that this also depends heavily on the (technical) type of software you're writing. Dereferencing a null pointer in a C++ desktop app is a whole lot worse than a null pointer exception in a web app built on most modern frameworks. The latter normally won't take down the whole app, just show a one-off 502. A browser that crashes every 3 minutes is basically never going to cut it.
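To make that concrete, here's a toy sketch using nothing but Python's standard-library WSGI server (any real framework does the same thing, just with nicer error pages). The route name and the None dereference are made up:

    # A buggy request handler doesn't kill the web app: the server answers
    # that one request with an error and keeps serving everything else.
    # The equivalent null dereference in a C++ desktop app takes the whole
    # process down.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        if environ["PATH_INFO"] == "/buggy":
            user = None
            return [user.name.encode()]  # the "null pointer" moment
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"still alive\n"]

    if __name__ == "__main__":
        # wsgiref catches the exception, logs a traceback to stderr, returns
        # a generic 500 error page, and stays up for the next request.
        make_server("", 8000, app).serve_forever()

Whether the user then sees a 500 or a 502 depends on what's sitting in front of the app, but either way only that one request dies.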
If you are a serious web business and your app drops a ten thousand dollar order, or it dies in the middle of posting an important live news story, or it fails to send a personal message to an SO overseas, the fact that they can retry (perhaps after re-doing a significant amount of work) is no consolation. You, dear web app creator, are still proper fucked.
To preempt ad-hominem claims of lack of experience or skill with C/C++: they were my primary languages for around 8 years, so at least I've seen my fair share of code.
So if you happily go ahead with new features, you might find out later that you're building on sand, and fixing the old code plus all the new code might be massively more expensive.
Regarding version control: I think this depends on your community. In 2000, it was standard knowledge in University/UNIX/Java land that developing software without version control is not only a bad practice, but just unprofessional. This might have been different in all those small shops doing VB or Flash stuff.
Spolsky actually makes that point himself in another article:
There are still plenty of shops that score close to zero on the Joel Test. Talk to your non-startup friends working for a big company where software is considered a cost center. They'll tell you about the big battle they had to get the company onto source control, and how deploying a new version of the company website still involves copying files by hand and running some wacky freeware database comparison tool to get the new schema changes across.
Hell, just last year I did a consulting gig for a shop that didn't even make a distinction between dev and production before I got there. Changing the site involved creating "index_new4.php", making your changes, smoke testing, then renaming (or repointing links). No source control, and they didn't even have a backup of the production database.
That's the sort of place that needs to get a copy of the Joel Test forwarded around the dev team, up to management, etc. The things on that list are as important today as they were 10 years ago. If anything, it carries more weight when you show it to management by virtue of being 10 years old.
I've written successful applications without any spec, just talking with users, writing things down myself, showing them prototypes, and repeating. In some cases users are not able to produce a spec, and even if they do, they don't always really "know what they want"; that's why prototyping is so important.
I've worked on projects that didn't have a usable spec, and monthly meetings had the client saying, "Oh, that's not what I meant, can you redo this like that?"
Oh, and users never know what they want, even when they think they do. What they know is the problems they have and the job they're trying to do or want to do. It's the job of the software designer (both spec writer and programmer if they're different) to figure out what the software has to do to give the users what they need.
I say this even though I disagree with the spec rule.