The Joel Test: 12 Steps to Better Code (2000; Still relevant?) (joelonsoftware.com)
54 points by OoTheNigerian 1585 days ago | 27 comments



Some parts of that have aged better than others.

You should be using source control. It might sound shocking to the Github generation, but there was a time not too long ago where important work with millions on the line routinely got done with one copy of the code sitting on a single workstation, and collaboration happening over network shares or email. I worked at one of those places. It sucked. The Joel Test was part of an evangelization movement which changed standard practices in our industry radically, for the better.

Building in a single step? Still relevant for many projects. Less relevant for, e.g., Rails web apps. I might substitute "can you deploy to staging and production in one step".
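
A rough sketch of what "one step" could look like in practice; the host names, paths, and test command below are made up for illustration, not anything from the article:

    #!/usr/bin/env python3
    # deploy.py: run the tests, then push the current tree to one environment.
    # Host names, paths, and the test command are placeholders.
    import subprocess, sys

    TARGETS = {
        "staging":    "deploy@staging.example.com:/srv/app/",
        "production": "deploy@www.example.com:/srv/app/",
    }

    def main():
        env = sys.argv[1] if len(sys.argv) > 1 else "staging"
        # refuse to deploy if the test suite fails
        subprocess.run(["python3", "-m", "pytest", "-q"], check=True)
        # one rsync stands in for whatever your real build/migrate steps are
        subprocess.run(["rsync", "-az", "--delete", "./", TARGETS[env]], check=True)
        print("deployed to " + env)

    if __name__ == "__main__":
        main()

The point is that "deploy.py staging" is the whole procedure, rather than a dozen manual steps somebody has to remember.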

Speaking of which, a Joel Test in 2010 should include "do you have a fully functional staging environment" and "can you recreate a dev environment from a factory new machine in under an hour".

Fixing bugs versus writing new code, well, one might quibble with that in some scenarios. There exist businesses where the marginal business value or customer value of squashing an edge case bug is measurably less than new features. I have a known bug in my shopping cart that permits abuse of discounts. It has never been exploited. The minimum cost of fixing is several hundred dollars. Of course I write new code rather than fixing it. You can come up with quite a lot of similar scenarios if you're doing a lean startup -- there is little point to rigorously debugging code which might have an expected lifetime of a week.
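
The arithmetic is easy to sanity-check with invented numbers (these are not Patrick's actual figures):

    # Back-of-the-envelope: fix the discount bug now, or keep shipping features?
    # Every number below is made up for illustration.
    fix_cost             = 400.0   # dollars of developer time to fix it properly
    abuse_probability    = 0.02    # chance someone exploits it in a given year
    loss_if_abused       = 500.0   # revenue lost if they do
    expected_annual_loss = abuse_probability * loss_if_abused   # = 10.0
    print(expected_annual_loss < fix_cost)   # True, so the bug can wait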

-----


"there was a time not too long ago where important work with millions on the line routinely got done with one copy of the code sitting on a single workstation, and collaboration happening over network shares or email. I worked at one of those places. It sucked. The Joel Test was part of an evangelization movement which changed standard practices in our industry radically, for the better."

I'm all for the Joel Test, and I've definitely seen places without source control, or with terrible or no source control.

But do you really think the Joel Test was one of the real pushers for change? I know Joel is popular, but I'd bet the Joel Test has been read by all of 1-2% of programmers. Moreover, I'd guess that it's the 1-2% who were already using source control.

Don't mean to nitpick, I'm honestly wondering if I'm underestimating the impact of this article. Completely agree with the rest of your comment.

-----


Sounds like you might not have been around back at the turn of the century.

Back then, there were no blogs. There were no programmers who wrote stuff. Joel was pretty much the first. And his stuff was really good and actionable.

So if you cared to read about your field, you were reading Joel. If you sent a programming link to your team, chances are it was one of his articles.

Coming into the field today, it's understandable that you might not have heard of the guy. But if you were around in the late 90s and you'd heard of anybody, you'd heard of Joel. If you were on a team in the late 90s and you wanted to improve things, you sent around a copy of the Joel test.

So yes, I do think a lot of the credit for the higher standards we have today goes to him.

-----


That's not really true. Greenspun was the hot pundit before Joel. A bunch of people hung out on Slashdot. And the C2 Wiki, which was arguably filled with more experienced practitioners than Joel, has been around since 1995.

-----


I've been programming professionally since 1997 or so, and as a hobbyist well back into the 80s. There were certainly programming blogs back then--I was working on an RSS aggregator in 2000, and my own blog dates back to 1998:

http://web.archive.org/web/19981206053731/http://www.randomh...

I was inspired by Scripting News, which had been discussing programming for quite a while at that point.

But you're right about the 90s. They were a disgusting decade in a lot of ways. The reasonably _good_ development shops of the 90s looked like the horrible shops of today. I worked at a couple of top-flight Common Lisp companies in those years, and they had (1) incomplete, token test suites with dozens of "expected" failures, (2) release cycles of a year or more, (3) independently written modules with "big bang" integration a couple of times per year, and (4) version control systems ranging from CVS (on a very good day indeed) to vile, in-house horrors written in Perl 4. But when I talked to other programmers, nearly all of them worked in much worse environments.

The early days of the XP movement were a revelation for me: You could test _everything_. You didn't have to get the design right up front. Instead, you could rely on your unit tests to help you refactor your code. You could work in 2-week release cycles and pull your features from a constantly-changing priority queue. This, of course, all seems completely obvious today, and not even particularly "agile".

Starting around 2000, Joel wrote his short, funny essays. The biggest advantage of Joel on Software wasn't that his ideas were new, but that he communicated them well enough for managers to understand. And I stopped a few bad decisions in my day by sending those essays around. So while I agree his writing was influential, I don't think he deserves _quite_ the central importance you give him.

-----


"So if you cared to read about your field, you were reading Joel. If you sent a programming link to your team, chances are it was one of his articles."

I think this is where it fails - most people don't read about their field, at all. At least in my experience.

I've only been programming professionally since 2004, though, for what it's worth.

-----


I began programming professionally around 1997. Joel wrote this article in 2000. Before 2000 I worked for Nokia, GE, and Lufthansa Systems. Source code control was already an established, almost trivially standard practice at all of these firms, even in their Hungarian 'outsourced' parts where I worked.

-----


"Fixing bugs versus writing new code"

My gut feeling is that this also depends heavily on the (technical) type of software you're writing. Dereferencing a null pointer in a C++ desktop app is a whole lot worse than a null pointer exception in a web app built on most modern frameworks. The latter normally won't take down the whole app, just show a one-off 502. A browser that crashes every 3 minutes is basically never going to cut it.
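
The isolation comes from the framework wrapping every request handler in its own catch; a toy sketch of the shape of it (not any particular framework's real API):

    # Toy dispatcher showing why one bad request doesn't take the whole app down.
    def dispatch(handler, request):
        try:
            return 200, handler(request)
        except Exception:
            # log it, return an error response for THIS request only;
            # the process and every other request carry on untouched
            return 500, "Something went wrong"

    def buggy_handler(request):
        user = None
        return user.name        # AttributeError, Python's version of a null pointer

    print(dispatch(buggy_handler, {}))   # (500, 'Something went wrong')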

-----


I disagree that they are different levels of problem. A fatal error in a web app is the same as a crash; the fact that it doesn't affect other people, and the fact that you can effectively "restart" the app by refreshing or going back and performing an action again, are irrelevant.

If you are a serious web business and your app drops a ten thousand dollar order, or it dies in the middle of posting an important live news story, or it fails to send a personal message to an SO overseas, the fact that they can retry (perhaps after re-doing a significant amount of work) is no consolation. You, dear web app creator, are still proper fucked.

-----


I think we can all agree that bugs are bad, but as Patrick mentioned there are different types of bugs, and some have a lower priority than new features. When I'm working with C or C++ I'm practically paranoid about bugs. [1] Memory corruption frequently has catastrophic consequences, where crashes are actually almost the best-case scenario. Bugs cause all sorts of knock-on effects in unrelated parts of the code. Most web frameworks run in a VM that avoids this kind of situation by design; moreover, state rarely lives in the app itself but is maintained in a separate database. Yep, it's still possible to accidentally DROP TABLE or DELETE/UPDATE the wrong stuff, but it's nowhere near as easy to do as a buffer overflow. I guess the modern-day equivalent of memory corruption is security issues. Yet I (anecdotally) seem to encounter far fewer severe errors doing web dev.

[1] to preempt ad-hominem claims of lack of experience or skill with C/C++: they were my primary languages for around 8 years, so at least I've seen my fair share of code
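
A tiny illustration of the "avoids it by design" point, in Python, though any managed runtime behaves the same way:

    # An out-of-range write in a managed language is a caught error,
    # not silent corruption of whatever happens to live next to the buffer.
    buf = [0] * 8
    try:
        buf[12] = 42        # the C equivalent would scribble past the end of the array
    except IndexError:
        pass                # refused by the runtime; no other state is touched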

-----


The idea of fixing bugs before implementing new features is that the bugs in your program might interfere with new features. Or finally fixing that bug might require some large-scale refactoring, or a completely different approach; who knows.

So if you happily go ahead with new features, you might find out later that you're building on sand, and fixing the old code plus all the new code might be massively more expensive.

Regarding version control: I think this depends on your community. In 2000, it was standard knowledge in University/UNIX/Java land that developing software without version control was not only bad practice but simply unprofessional. This might have been different in all those small shops doing VB or Flash stuff.

-----


"Fixing bugs versus writing new code, well, one might quibble with that in some scenarios."

Spolsky actually makes that point himself in another article:

http://www.joelonsoftware.com/articles/fog0000000014.html

-----


For the audience it was written for, I'd say yes.

There are still plenty of shops that score close to zero on the Joel Test. Talk to your non-startup friends working for a big company where software is considered a cost center. They'll tell you about the big battle they had to get the company onto source control, and how deploying a new version of the company website still involves copying files by hand and running some wacky freeware database comparison tool to get the new schema changes across.

Hell, just last year I did a consulting gig for a shop that didn't even make a distinction between dev and production before I got there. Changing the site involved creating "index_new4.php", making your changes, smoke testing, then renaming (or repointing links). No source control, and they didn't even have a backup of the production database.

That's the sort of place that needs to get a copy of the Joel Test forwarded around the dev team, up to management, etc. The things on that list are as important today as they were 10 years ago. If anything, it carries more weight when you show it to management by virtue of being 10 years old.

-----


Since so much software development is now web development, and since so much of that lives or dies by keeping customer data safe, something like "Have you practiced restoring from backup lately?" should go on the list.
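
Even a tiny scripted drill beats finding out during an outage that the backups don't restore. A sketch assuming a SQLite-backed app; the path and the sanity query are placeholders:

    # restore_drill.py: prove the latest backup actually opens and looks sane.
    import sqlite3

    BACKUP = "backups/latest.db"

    def drill():
        con = sqlite3.connect(BACKUP)
        ok, = con.execute("PRAGMA integrity_check").fetchone()
        orders, = con.execute("SELECT count(*) FROM orders").fetchone()
        con.close()
        assert ok == "ok", "backup is corrupt"
        assert orders > 0, "backup restored but looks empty"
        print("backup OK, %d orders" % orders)

    if __name__ == "__main__":
        drill()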

-----


"7. Do you have a spec?"

I've written successful applications without any spec, just by talking with users, writing things down myself, showing them prototypes, and repeating. In some cases users are not able to produce a spec, and even if they do, they don't always really "know what they want"; that's why prototyping is so important.

-----


That only works if you can have short iterations and a client that cooperates.

I've worked on projects that didn't have a usable spec, and the monthly meetings had the client saying "oh, that's not what I meant, can you redo this like that?".

-----


Also, there are many apps/teams that don't have "clients", and a spec is not really worth writing, much less maintaining. Specs are great when you are a larger team with stakeholders who aren't part of that team, but if you're a small tech-heavy startup moving quickly, I'd consider even the presence of a spec a symptom of mismanagement.

-----


So you did have a spec, you just did the work of producing it yourself. That's sometimes necessary for a developer to do, and for small projects it can be the best approach as well.

Oh, and users never know what they want, even when they think they do. What they know is the problems they have and the job they're trying to do or want to do. It's the job of the software designer (both spec writer and programmer if they're different) to figure out what the software has to do to give the users what they need.

-----


In that case, you are essentially growing a spec. Probably not what Joel meant, but it does work in some situations.

-----


I think "Do you write tests" should be included. And "Do you have continuous integration running" would be a nice replacement of having a daily build.

-----


The "daily builds" item should at least be a "daily build & smoke test", but for the rest I think it's at an appropriate level. Keep in mind, these are simple questions which have the greatest differential in quality between yes/no answers.

-----


I have a printout of the 12 rules on the wall of my cubicle... if only for the non-techy people in my office who then ask what it means!

-----


Maybe for small teams and microISVs, you could merge the advice from Patrick's article too: http://www.kalzumeus.com/2010/04/20/building-highly-reliable...

-----


I think even Joel might update number one to "Do you use distributed version control?", not so much for the "distributed" part but for the "branching and merging that just works" part.

-----


I doubt it. The leap between proper version control and excellent version control (distributed being the peak) is significant but not the same as the leap between no version control and any version control. That transition is as significant as coming to use structured programming languages (over raw machine code and assembly), or the invention of fire.

-----


They all are relevant to someone.

-----


Always relevant, which is the brilliance of it.

I say this even though I disagree with the spec rule.

-----



