"Joel Spolksy is wrong about my work" - Kent Beck (threeriversinstitute.org)
85 points by bjclark on Feb 4, 2009 | 98 comments



A note to all the people arguing about whether TDD is a good idea or not: You are arguing about the colour of the bikeshed.

The problem with Joel and Jeff's podcast isn't that they disagree with TDD. If you poll 100 random smart people on the subject, you will get a continuum of answers.

The problem is that they then made a sweeping generalization about Kent Beck and Robert Martin's experience and competence based on their disagreement. The old, "If you don't develop software exactly as I do, you are an idiot" line.

I think Kent is right not to argue the fine points of TDD, because that isn't the issue. The issue is that his competence and experience were attacked on the basis of advocating TDD, and his response is to suggest that J&J are unaware of his experience.

If someone wants to suggest that Kent is a very experienced guy but they don't think TDD is a good idea for their project, I'm sure Kent is fine with their stance on the matter.


It sucks to have your life's work strawmanned like this by pop programmer podcast banter. I hope Joel addresses it professionally.

XP/Agile/TDD (like any movement) is filled with rabid fanboys who misapply the principles and try to ram them down everyone's throat, but it's rarely the case that the inventors of popular methodologies are filled with the same blind zeal. After all, their ideas were originally informed by first-hand experience.


Good point, but it's not just fanboys: I've seen Uncle Bob speak, and he's a lot less reasonable in person than he was in his response to Joel.

The thing is, it's appropriate for his audience, large cubicle farms with a completely broken system.

So oversimplification and extremes work for Uncle Bob when he's consulting.

Joel was obviously talking about the kind of super star programmers he hires in his magical fairy land office. (Just kidding Joel, I wish I worked in an office like that.)

The problem as I see it, is the difference between what's good for good programmers, and what's good for most people who happen to diddle with visual basic. (No offense to VB rock stars, keep on rockin'!)

Kent Beck and Bob Martin are very good programmers, but they get paid by consulting for large organizations with not-so-great programmers.

Joel gets paid by hiring and working with great programmers, in his magical high rise office, with free gourmet food and rainbows.


Most great programmers take very well to unit tests. The way they became great was to assume their code is flawed and take steps to remedy that.

Perhaps I'm being a little harsh, but I don't think Joel or his employees are really all that great. I mean, they're working on bugtracking software. Great programmers tend not to trade interesting problems for private offices and gigantic monitors (plus, they often can get jobs at places that offer them interesting problems, private offices, and gigantic monitors).


No, really? FogBugz is written in a custom language created by the team for the application. Doesn't that sort of problem sound interesting enough to you? http://www.fogcreek.com/FogBugz/blog/category/Wasabi.aspx Honestly, the problem domain is not an indicator of the programming interest of a project.

It's a general rule - if you are working in a boring problem domain, then you need to abstract out the boring part. Doing the abstraction is generally a very interesting task.

Example: you have a GUI test rig for a set-top box. Your task is to generate tests for the setup, which works by doing screen captures and then comparing each capture with a pre-recorded image of the screen, to verify that error messages are correctly displayed. The problem is that the tests are brittle - if the designer changes the shade of a pixel anywhere on the screen, your test breaks and has to be regenerated - a very boring, repetitive and time-consuming task. So, change the system - write a screen scraper with some OCR that can actually read the screen. Now the screen design can change, and as long as the message still appears somewhere on the screen, your test passes. It's a very non-trivial, interesting problem in a relatively dull problem domain.
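Something like this, say - a minimal sketch assuming the pytesseract and Pillow libraries (the function name and expected message are made up for illustration):

  from PIL import Image
  import pytesseract

  def error_message_displayed(screenshot_path, expected="Network error"):
      # OCR the whole screen and look for the message text anywhere,
      # instead of pixel-diffing against a pre-recorded reference image.
      text = pytesseract.image_to_string(Image.open(screenshot_path))
      return expected in text

Now a redesign only breaks the test if the message itself disappears.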

Conclusion, if your work as a programmer is boring, you're doing it wrong (unless you're doing documentation - good grief, I'd kill for a way of automating that well!)


I absolutely agree everybody, including the great ones, should write unit tests.

I think the storm in a programmer's tea cup here is exactly how much unit testing do you need.

Joel is accusing Kent Beck and Robert Martin of wanting to write a whole testing framework for each and every little get and set method.


That might apply to Robert Martin, but Kent Beck's on record as saying don't unit test getters and setters. "Test everything that could possibly break."

So, strawman.


I don't think anyone ever accused geeks of really meaning sexy when they say sexy. More like, to each his own.


I think that the whole movement idea of software development is pretty weak. If Joel can write down a life's work in a couple of sentences, then there is definitely not much to it in the first place.

This is a mud-slinging contest over a subject with dubious value to software development in general. It might just be me, but whenever I hear TDD/Agile/XP my heart is made sad by all the crap out there in software development.

I want to design and write good programs. Now of course, that might include testing the software for errors in an automated way. I would think that most people would agree on testing as a needed tool for most software developed today and in the future.

But there is a long way from testing a piece of software to the extreme views. And there, we see all kinds of disagreement. Hence the mud slinging. This is a bad bragging contest. And it is just going to take up your time. You won't gain anything from it unless you take stance. Here is mine:

XP: Crap. Kill it. It completely disagrees with the way I can work soundly on a project. But take the code review part with you.

TDD: Crap. Kill it. It completely disagrees with the way I develop software. Sorry. I am not going to change. And I am not becoming extinct because of my view either.

Agile: Take the old ideas of iterative development and short development cycles. Take the good idea of keeping the development methodology simple. Take the good idea of providing transparency of the development status. Kill everything else.

Now your stance might be different from mine. But we will gain little by discussing the finer points.


> If Joel can write down a life's work in a couple of sentences, then there is definitely not much to it in the first place.

You seem to presume that Joel has written down Beck's life's work in a couple of sentences. The whole point of Kent's post is to refute this presumption.

Joel himself wrote a post giving a twelve-point "Joel Test" for software development practices: http://www.joelonsoftware.com/articles/fog0000000043.html

I'm sure you agree that Joel did not write down his own life's work in articulating twelve simple yes or no questions about software development practices. There's more to Joel's own experience than those twelve questions, and there's more to Kent's life's work than Joel's off-hand remarks.

I'll close my rebuttal to your comment by pointing out that you have just summarized your own stance on software development with some amazingly superficial and non-actionable prescriptions.

But I would never presume that these prescriptions, right or wrong, would summarize your life's work either. Nor would I jump to hasty conclusions about your experience based on whether I agree or disagree with your stance on TDD.


originally informed by first-hand experience

You can read about that experience here:

http://c2.com/cgi/wiki?CthreeProjectTerminated


It's interesting how this "little" detail always seems to get overlooked by the XP zealots. A great book which I think everyone should read before jumping onto the XP/TDD/Scrum/Agile/etc bandwagon is called "Extreme Programming Refactored: The Case Against XP". It highlights the failed C3 project and offers some compelling arguments.


Don't conflate having a project canceled with having it fail. Successful projects get canceled for a variety of reasons, and some failed projects never get canceled.


Never heard of this guy before, but the fact that he just wrote 3 paragraphs eloquently saying "Joel is an idiot" without saying anything about how they disagree makes me think quite little of him. Ad hominem attacks are rhetoric - meant to influence, not inform.

See How to Disagree, by PG: http://www.paulgraham.com/disagree.html


Kent Beck is one of the original Smalltalkers....way back. If I recall correctly, Kent won the "competition" for the best answer to "How many lines of code does your app have?". His answer (best recollection) was "lots, but with some effort I was able to remove most of them".

Kent and his circle of friends were refactoring before there was a word for it; when XP and all its many ancillary methods were just called "best practices".

Not sure I've ever seen him publish something like this before. This is part of the problem with dismissing people over the Internet without really knowing their backgrounds. It is very easy to make a fool of yourself.

I think he treated Joel fairly. He was complimentary to Joel's efforts in writing but simply told him that he doesn't really know him and should be more careful with his opinions.


Joel dragged him into this by name. Clearly he thinks Joel is attributing ideas to him that he doesn't hold. Therefore it's not a matter of disagreement--he may well fully agree with Joel's thesis--it's the demonization of his work by misrepresentation. In that light his response seems measured and appropriate.


It may be measured, but it's not helpful.

Let's assume that Joel is neither stupid nor malicious. Therefore he is having problems that presumably others are having. In this case it is useful to actually set out the nature of Joel's error, rather than simply saying "Joel is a dolt". Non-malicious criticism can often be helpful in letting you explore why people misunderstand you.

Of course my assumptions could be wrong - he could believe that Joel is being malicious.


He could also believe exactly what he says, that Spolsky has a "lack of knowledge of what I do and what I say".

FWIW, I think this is the post being responded to: http://www.joelonsoftware.com/items/2009/01/31.html


2. "Joel is a dolt" - Neither on the lines nor between them have I seen this in Kent's post.

1. "Joel is neither stupid nor malicious" - Stupid + Malicious != AllThereCouldEverBe... Pompous and AttentionCraving are missing for example. Others are too...

0. I _partly_ dislike both stands on software dev. methodology (Kent's and Joel's) so don't take this a fanboysm. I respect Kent's life work more though.


Kent Beck is the creator of Extreme Programming, one of the fathers of Test-Driven Development, and a bunch of other agile practices. Quite well known amongst most programmer circles.

More info here: http://en.wikipedia.org/wiki/Kent_Beck


He also co-wrote JUnit and wrote SUnit which spawned the many other counterparts in other languages and wrote one of the better Smalltalk books.


And that book is Smalltalk Best Practice Patterns (http://www.amazon.com/Smalltalk-Best-Practice-Patterns-Kent/...). Not anything like the design patterns book in case anyone wonders, and it's a good read even if you don't do Smalltalk.


I totally disagree with this. First of all, Joel is an idiot. I think this is fact. I think we all know this, and those of us who don't know yet will find out sooner or later.

More importantly, this whole "how to disagree" thing is about extending courtesy. Joel Spolsky dissing Kent Beck is like a four-year-old piddling on a real pioneer's foot. Beck did Spolsky more courtesy than Spolsky deserved just by responding at all.

A lot of people on the Internet don't seem to realize that when one person disrupts another's day with attacks or what have you, that other person might have other things going on that are more important to them than responding to the Internet crap, or which require so much time that no time remains for responding to Internet crap.

The subtext in "How To Disagree" is "you have to show me respect if you want to debate something with me." That's perfectly reasonable. But starting a fight with somebody out of the blue isn't about rational, adult disagreement.

Besides, dude, he's responding to something somebody said. It's not his fault you know about it. He didn't publicize it to you, and for you to judge him based on his failure to provide you with context, when it's you going to his site, that makes no sense at all. Be mad at Hacker News for linking to something you don't know about, or be mad at yourself for clicking links automatically and wasting your own time. I mean there's no logic in holding that against Beck at all.


But what happens when you stick inventor of popular methodology A against inventor of popular methodology B? Is the disagreement blind zeal or professional opinions originally informed by first-hand experience?

Or is the whole thing just like duct-taping buttered toast to the back of a cat and then pushing it off a high surface?

(Toast always ends buttered side down. Cats always land feet first. Thus, if either lands, the universe implodes. I don't even want to consider what happens if the push-cat-off-ledge-or-not decision is made based on whether a radioactive isotope decays or not.)


Joel pushed no "methodology" in that sense at all. What I see is Kent Beck, who pushes a methodology which has taken on a religion-like following in "IT" or corporate developer circles. Joel just ships software and judges things in terms of commercial success in the marketplace.


You forgot to mention: writes persuasively based on a handful of good ideas filled out with trite brain candy.


I am not really a fan of Joel's advice, but if I look at the relative successes, I would have to favour Joel (also, listening to him speak now, on that podcast, indicates that in his "old age" he really has less advice; he has mellowed out and realised that there are a lot more variables at play).

I think any methodology really boils down to: have good people and they will make something work. That is the only common thread in successful projects/teams/products that I have seen (and others have seen too). It's 80% people, perhaps more. Therefore any other tweaking of things is really like premature optimisation.

And I like brain candy. It's sweet in a bitter world ;)

I think Paul Buchheit said: "Limited life experience + overgeneralisation == advice".


Here's a transcript of the podcast I believe Kent is referring to. Judge for yourself whether Joel "makes comments that make clear his lack of knowledge".

http://www.joelonsoftware.com/items/2009/01/31.html

I'd like to see Kent respond to Joel's specific points. E.g.:

  The real problem with unit tests as I've discovered is that the type of changes 
  that you tend to make as code evolves tend to break a constant percentage
  of your unit tests.... 
  So the end result is that, as your project gets bigger and bigger, if you 
  really have a lot of unit tests, the amount of investment you'll have to make
  in maintaining those unit tests, keeping them up-to-date and keeping them 
  passing, starts to become disproportional to the amount of benefit that you 
  get out of them.


In this case, I think Kent's right. Joel's doing it wrong.

You should never have 10% of your unit tests depend upon the location of a menu. You should have one unit test that depends upon the location of the menu, and everything else should be isolated by stubs, shunts, mocks, whatever. Otherwise, they aren't really unit tests, because they're testing more than a unit.

I didn't realize how wrong I was doing it until I got to Google. (Then again, a lot of Google's best programmers are doing it wrong too.) Perhaps this is because doing the right thing - mocking out your dependencies and testing only one feature per test - is often harder in the short term than just running all your code when you test the topmost layer. But I think the point needs to be hammered home: most programmers who're doing "unit" testing are not doing unit testing, and their code would be much less brittle if they took the time to stub out lower layers.
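To illustrate the difference, here's a minimal Python sketch (the report/database names are invented, not from any real codebase): the data layer is replaced with a mock, so the test exercises exactly one unit and survives changes everywhere else.

  import unittest
  from unittest.mock import Mock

  def build_report(db):
      # Unit under test: formatting logic only; it takes its
      # dependency as a parameter so a test can substitute it.
      rows = db.fetch_sales()
      total = sum(r["amount"] for r in rows)
      return f"{len(rows)} sales, total {total}"

  class BuildReportTest(unittest.TestCase):
      def test_formats_totals(self):
          # Stub the data layer: no database, no GUI, no layers
          # below - only build_report can make this test fail.
          db = Mock()
          db.fetch_sales.return_value = [{"amount": 10}, {"amount": 5}]
          self.assertEqual(build_report(db), "2 sales, total 15")

  if __name__ == "__main__":
      unittest.main()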


I agree, in theory. However, when you're quickly iterating, redesigning, changing how things work, having to completely rework the unit tests as well is pretty much wasted effort.

I think it depends on the project - is it a 'design up front, go away code once' project, or an iterative constantly changing thing.


'Fraid not, axod. It's when you're quickly iterating and changing lots of stuff that quality unit testing helps you the most. Let's look at 2 possible scenarios:

1) You don't bother testing much. You make lots of changes, iterate quickly. Before the changes, you have code that, let's say, you know works. After the changes, you know there's probably bugs, but you don't know where.

2) You test fairly thoroughly. You make lots of changes, iterate quickly. Before the changes you have code that you know works. After the changes, you know where most of the bugs are because your tests tell you where they are.

It's the difference between walking with your eyes open or closed... it may be more work to process all that information about obstacles, but it sure is helpful if you want to avoid those obstacles.


There are two schools of thought in programming I have seen.

1. Design it right the first time, tests and all.

2. Build a throwaway, no tests, little refactoring as you quickly iterate

Programmers who fall into category 1 in my experience mistakenly believe that they understand the requirements of the software they are writing. Category 2 programmers tend to be ones who actually deal with users/real world, and know that they don't understand the requirements.

My position is that if you are building something for the first time, you are going to get it wrong, so get it wrong as quickly and cheaply as possible. This means getting it in front of users as quickly as possible, bugs and all.


You're assuming that you'll get something in front of users more quickly without writing tests. I find that's not the case in my experience.


I find there's a balance to be struck. You want to get stuff in front of users quickly, but if that stuff sucks so much that it's unusable, you aren't going to release it (and if you did, nobody would use it). So you want enough tests that you can be reasonably confident of the code and not spend a lot of time debugging, since debugging is the real timesink that'll kill you.

I often find that I unit test all but the latest layer. As soon as I start depending on code from something else, it's time to write some tests. But if I just wrote something, it's probably in flux and tests would just prematurely lock down the interface. Write the code, write the client (which probably requires that you rewrite the code), and then write the tests in lieu of debugging the code.


Also, to get something in front of users, you rarely need to be doing anything remotely complex. It's not like writing a disk based inverted index or a filesystem. When writing tools like those, obviously testing is important.


I'd love to see some research on this. I would hypothesize that there is a point of problem complexity where the lack of tests becomes a negative for productivity.


I second this suggestion (and agree with the hypothesis, although it's probably not just a function of raw problem complexity; other variables must come into play).

Hasn't it been done already? Couldn't find anything I would class as 'scientific' with a quick Google search. Would be a good thesis for someone.


> Before the changes you have code that you know works. After the changes, you know where most of the bugs are because your tests tell you where they are.

The tests tell you where the tests fail. Some of those failures may be due to bugs, and others may simply be due to the way your code now works.


I'll chime in from recent experience. I had to pull a dependency in my project because it was causing heisenbugs that I couldn't root out. Without tests, I wouldn't have known. With tests, it took 2 hours to replace (diff -200 +250), and I'm confident the library still works. Without tests, it would have taken several more, at least.


How long did it take to write the tests?


I write tests as I develop, and they take the place of going to the browser or a repl and poking around at my work. In some sense, testing takes zero or negative net time. It would be hard to measure gross time, because it occurs in such small increments.


I disagree.

If I'm constantly changing my mind about how best to do something, I don't want the massive overhead of having to change unit tests all the time as well.

I'll test as I go, in the code.


If you're testing in the code then you're doing unit tests, but probably in a poor fashion.

Unit tests should hook into the application in such a way that you can reorder the GUI and change nothing.

If you are using unit tests correctly, then any meaningful change to the code base at worst requires 10% of that effort to update the test cases. If you can think of a change that's worse than that, something is wrong with how you're testing.
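Concretely, something like this - a sketch with an invented TodoList class. The test pins down behaviour, not widget placement, so rearranging every menu in the GUI changes nothing:

  import unittest

  class TodoList:
      # The model the GUI renders; it knows nothing about menus.
      def __init__(self):
          self.items = []

      def add(self, text):
          if not text.strip():
              raise ValueError("empty item")
          self.items.append(text)

  class TodoListTest(unittest.TestCase):
      def test_add_and_reject_blank(self):
          todo = TodoList()
          todo.add("write tests")
          self.assertEqual(todo.items, ["write tests"])
          with self.assertRaises(ValueError):
              todo.add("   ")

  if __name__ == "__main__":
      unittest.main()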


"If I'm constantly changing my mind about how best to do something" then you're finding /more/ not /best/ ways.


> After the changes, you know there's probably bugs, but you don't know where.

Which highlights one of the "problems" with unit tests: if something fails, you have to take time to go and fix it. If you have no tests, you can assume there are no bugs because you are such a great programmer, then go home after a job well done.

That there actually are bugs there doesn't bother you, because you don't know about them. It takes less time to write buggy code.


The alternative is to build your code in a way which tells you quickly about bugs... decent logging, etc.


I do think he was being sarcastic.


>> "Before the changes you have code that you know works. After the changes, you know where most of the bugs are because your tests tell you where they are."

No. More often than not, if your change was a large architectural one, which is often the case in young software, your tests are now irrelevant or broken.

>> "It's the difference between walking with your eyes open or closed... it may be more work to process all that information about obstacles, but it sure is helpful if you want to avoid those obstacles."

Do you seriously think people are just blindly coding and never testing anything here?

  while(true) {
    writeABitOfCode();
    while (!compile()) fixCompilationErrors();
    testThingsYouMightHaveChanged();
    checkLogsForIssues();
  }


> testThingsYouMightHaveChanged();

If you change that into writeAnAutomatedTestForItAndRunYourTestSuite() you get the following benefits:

1) Every test you write accrues value onto your test suite and helps you avoid problems later;

2) You don't need to spend time to figure out what you might have broken with your change - your test suite tells you in seconds;

3) You can say with some measure of certainty that it's extremely unlikely that your change broke anything.
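For instance (a minimal sketch; the parse_version function is invented for illustration), the manual poke-at-it step becomes a permanent check that reruns in seconds on every change:

  import unittest

  def parse_version(s):
      major, minor = s.split(".", 1)
      return int(major), int(minor)

  class ParseVersionTest(unittest.TestCase):
      def test_parses_major_minor(self):
          # Written once, run by the suite forever - this is
          # testThingsYouMightHaveChanged() made automatic.
          self.assertEqual(parse_version("2.11"), (2, 11))

  if __name__ == "__main__":
      unittest.main()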


BUT you then have to modify:

  while (!compile()) {
    fixCompilationErrors();
    fixUnitTests();
    rewriteUnitTestsThatNoLongerMakeSense();
    WriteNewUnitTests();
  }
Your iteration loop just became maybe two or three times as long. Fewer iterations = worse code.


rewriteUnitTestsThatNoLongerMakeSense

Delete em. You should be iterating your unit tests.

Over time you will find some really useful unit tests for common mistakes, and others that are basically useless. Try and approach things from the perspective of what you would like to know works and not just tossing everything and the kitchen sink at the problem. When you find a bug add a unit test just in case it comes back, if you change major parts of the system, port the most useful unit tests and delete the rest.

I think of unit tests as saving you from the (I thought I just fixed this) problem.

PS: The major advantage of this approach is you keep adding more tests to the older parts of the system, which are also the most costly to change because other parts of the system assume it works. And assumptions you and your code are making can be really hard bugs to find and fix.
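A minimal sketch of that "add a test when you find a bug" habit (the slugify function and the bug are invented for illustration):

  import unittest

  def slugify(title):
      # The fix: split() collapses whitespace runs and trims the ends,
      # which the old replace(" ", "-") version did not.
      return "-".join(title.lower().split())

  class SlugifyRegressionTest(unittest.TestCase):
      def test_whitespace_runs_bug_stays_fixed(self):
          # Pins the fix so the "I thought I just fixed this"
          # bug can't quietly come back.
          self.assertEqual(slugify("  Hello  World "), "hello-world")

  if __name__ == "__main__":
      unittest.main()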


I think the best benefit of unit tests is really saving you from the "Will this break anything?" problem. Uncertainty is the real big killer for programming productivity; if you know what to code, you just type it out and you can get like 10 lines/minute. So have some way that you can run things and say "I know this broke something" or "I know this didn't break anything."


testThingsYouMightHaveChanged requires running your tests manually, which is slow and hard to fully reproduce later. It also depends on being able to make strong predictions about what your change affects; unit testing assumes that anything could have broken.


> I think it depends on the project - is it a 'design up front, go away code once' project, or an iterative constantly changing thing.

I think your line is too coarse. I do heavily rely on unit tests even in "iterative, constantly changing projects". But in the very early "sketching" stage of development I don't bother to write tests. I just play with code on the REPL (I work with Common Lisp or Scheme most of the time), in a flat namespace. Then, once I start putting code snippets together into packages (CL) or modules (Scheme), that's about the time I start writing tests as well. After that, even if I constantly change stuff, sometimes drastically, unit tests do help me a lot.

It is true that you need to put effort into reworking the tests. Actually I think writing tests takes more work than writing the actual code, and I think that's the right thing. Writing something that works is the easy part. Making sure it doesn't break in every possible situation is much harder. In a language like CL, you can fairly quickly write something that takes typical input and produces reasonable output without serious thinking. To make sure it works in every corner case you have to think much harder and more precisely, and test code reflects that thinking.

(NB: I know some programmers who seem to think out those corner cases in their brain and just spit out a beautiful code at the first attempt. They may not need support from test code, I guess.)


As someone who is relatively new to the world of testing, I think I finally understand the point of stubs/mocks/etc. Thanks!


Do you know of any good texts or articles that describe the "right" way to test?

I put myself in the category of mediocre test writer, because I do bump into the problem fairly regularly of having a lot of tests fail when I do a rewrite. I'd love to learn how to write better tests...and your last paragraph seems to scratch the surface of a superior approach.


The Google Testing Blog can be helpful:

http://googletesting.blogspot.com/


Seconding this. I think the internal Google training materials were some of the best I've run across, and the Google Testing Blog is basically the public face of those.


Dive Into Python has a couple of good chapters on unit testing. Maybe you're looking for something more advanced though. If so, I think he gives some references for further reading.

http://diveintopython.org/unit_testing/index.html


I think I agree with everything Joel said in that post.

These crazy rules people come up with for "proper object oriented" programming remind me of extreme religious rituals. You have to observe all these stupid little rules or else your code will become impure and you will spend an eternity in code maintenance hell.

I'll make a somewhat heretical claim even: unit tests are 20% useful engineering and 80% fad. They are great for tricky code with well defined behavior (like a parser), but wasteful for most code.


I’ve never seen Kent Beck as a crazy rules person. I think his comment from stackoverflow demonstrates this:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

http://stackoverflow.com/questions/153234/how-deep-are-your-...


I have the deepest respect for you and your writings, but exactly how much code coverage have you achieved and sustained for at least a month using automated unit tests?

Just wondering how long you've visited the region you're writing a travel review about.

Unit tests are about confidence in the software. Automation is the mechanism for sustained confidence. Engineering (and / or fad as you put it) is a vehicle to get there.


You can have 100% code coverage and still not be testing anything. Unless you're thoroughly testing the right parts, it's a false sense of security.
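A minimal sketch of how that happens (the divide function is invented for illustration):

  import unittest

  def divide(a, b):
      return a / b

  class CoverageTheaterTest(unittest.TestCase):
      def test_executes_but_verifies_nothing(self):
          # This gives divide() 100% line coverage, yet it would
          # still pass if divide returned a * b. Coverage measures
          # execution, not verification.
          divide(10, 2)

  if __name__ == "__main__":
      unittest.main()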


I agree with that. It's just an (imperfect) proxy measure of the depth/extent of unit testing. Asking how much code coverage someone has is likely to give a better reflection of the effort behind their unit testing than just asking "have you done unit testing?"


> These crazy rules people come up with for "proper object oriented" programming remind me of extreme religious rituals. You have to observe all these stupid little rules or else your code will become impure you and will spend an eternity in code maintenance hell.

What's even more amusing is that the people prescribing these rules aren't even using languages where OO is implemented properly. I will start listening when they start using Common Lisp or Smalltalk for their examples. When they use C++ or Java, it just makes me giggle.


And of course, if you find that these rituals are not working for you, then you "obviously" are not doing them properly.


Are you objecting to cultish behaviour like TATFT (Test All The F'ing Time), or to comprehensive testing in general?

Also, are you objecting only to low-level unit tests, or to "integration tests" as well? (i.e. unit tests with a much larger "unit")

I don't TATFT. My metric for deciding what tests to write is "is it likely to break in a non-obvious manner?" If a breakage in this code will cause the application not to run at all, there's not all that much point in testing that. However, if the code is likely to result in some weird errors that will be hard to track down, I feel writing tests is a very good investment.


I wouldn't write off unit testing and test-driven development so quickly. My personal experience is that sometimes it felt like a waste of time, and when it was, I dropped it. For times when I had a clue of what I wanted to do, but only just a clue, writing the test first was a nice tool for fleshing out behavior.


I think it depends on whether your users can write programs for your program.

I work on a web browser, and every test is worth something. To some degree, we don't care what the test is testing. We want to be sure we don't change it unintentionally, thus breaking the web.


I think what Martin and Kent are really objecting to is this (from the same podcast):

"They've just gone off the deep end, and I don't think these people write very much code if they're coming up with these principles, to be honest, it doesn't even make sense."

It probably wasn't meant that way, but it was taken as an unnecessary personal attack (despite Joel's frequent 'don't listen to me' implorations)


".. and I don't think these people write very much code"

and why is this objectionable? Joel Spolsky thinks some agile "gurus" don't write much code.

Fwiw, I know at least a couple of these "guru" (note the quotes) folks who don't write production code and haven't done so in years and make their living telling other people how to write their code !

The correct way to counter this is to point at (open source) code they wrote (Bob Martin does this by pointing at the Fitnesse code base, so he got this part right imo), or if it is not possible, just calmly assert that Spolsky's thinking is wrong and leave it at that.


I've generally had some antipathy to Joel Spolsky, seeing him as more management oriented than programmer oriented. But I have to say that he seems both right and honest here.

TDD is great if you use it appropriately along with good design - but "tests for everything" is just ideology. As he says, it's like the old OO mania, where you created these ridiculous inheritance trees and unnecessarily complex diagrams and such.


If Spolsky is going after "tests for everything", his target is Robert Martin, not Kent Beck. Beck freely admits that he doesn't test everything. Pulling Beck into the argument feels like a cheap debate trick.


If you are:

* selling software people pay money for

* writing software people install -- that is, /not/ web stuff

Then under what circumstances would it make sense to ship untested code? When is there a difference between code that has unit tests, versus that with only end-to-end tests?


I don't think anybody is arguing that shipping completely untested code is a good idea.


If your iterations break too many tests, then you are writing too specific tests for things that can't really (or shouldn't) be tested.

And, BTW, I never found Joel Spolsky a particularly insightful person.


In most cases people who think they want unit tests really wanted to stay awake in CS101 where they'd have learnt about referential transparency and composability.
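To unpack that: a referentially transparent function (same input, same output, no hidden state) needs no mocks or fixtures - its tests are just input/output pairs. A minimal sketch, with an invented example function:

  import unittest

  def compound(principal, rate, years):
      # Pure function: the test needs no setup beyond arguments.
      return principal * (1 + rate) ** years

  class CompoundTest(unittest.TestCase):
      def test_known_values(self):
          self.assertAlmostEqual(compound(100, 0.0, 10), 100.0)
          self.assertAlmostEqual(compound(100, 0.05, 1), 105.0)

  if __name__ == "__main__":
      unittest.main()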


In this sort of situation test coverage is actively harmful. You want to write lots of tests at a fine grain, and only test 'happy paths' above that level of abstraction.

When I layer my tests I rarely end up with thrashing tests. Multiple tests often fail, but it's easy to figure out which test to focus on. And fixing the right test case fixes them all.

Even if it makes sense to write a test for menu placement, it rarely makes sense to write more than one.


I've always thought that that was a benefit of unit testing and high code coverage - knowing when a change has broken some other piece of code.


I'd like to see Kent respond to Joel's specific points

Psht, why do that when you can just ad hominem for 3 paragraphs?


I found Joel Spolsky's opinions to be all over the spectrum. Sometimes he posts well-researched pieces with intriguing conclusions. Sometimes he writes uninformed rants, which show his lack of understanding of the subject matter.

This is not a problem if you carefully apply your own measure to whatever he writes.


I'd say that his ideas on management are interesting and informed. But when he starts talking development or technology he quickly becomes uninformed and poorly researched.

The discussion they had a number of episodes ago regarding MVC "patterns" made it clear that they had no fucking clue what they were talking about. And when Wikipedia didn't give a fast answer they began guesstimating instead.


What has Kent Beck shipped?

Follow the advice of people that have shipped code. That is what I took from the Stack Overflow podcast.

Seriously, what has Kent Beck shipped? The Chrysler Comprehensive Compensation (C3) project is always brought up. But, I understand that the project was cancelled.

I'm not trying to troll here. I would have greater confidence in Kent Beck's argument if I knew what he was actually capable of shipping.


JUnit, for one thing.


Eclipse as well.



Kent Beck's system is awesome if you want to subjugate a thousand programmers working at a bank. In some sense, it is a very successful way of forcing stupid people to build mediocre products under threat of economic violence.


To quote from http://www.yafla.com/dforbes/Coding_Horror_Strikes_Again:

Coding Horror is an entertaining, sometimes even educational blog. Be careful diving in headfirst, though, as the technical depth is generally so shallow you'll be hitting the bottom before you've even broken through the surface tension.

Based on what I have (managed to) read at CH I find this description to be fairly accurate.


That's Jeff Atwood, not Joel Spolsky. Unless I'm mistaken about something.


It's quite interesting to see two kinds of technical guys arguing against each other. On one side, we have the product guy, on the other, we've got the consulting guy.

You've got Joel, Jeff, and Paul Buchheit on the product side saying, "Yes, automated testing and some of the OOP principles are great, but let's not go as far as Uncle Bob."

Check out the SOLID principles. Some of the principles are alright, but when Uncle Bob explained that Rectangle class thing, I don't know what to say other than it changed my mind: I'm now leaning toward not buying his "Clean Code" book.
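(For context, that Rectangle thing is presumably the classic Liskov-substitution example; a minimal sketch of why it trips people up:)

  class Rectangle:
      def __init__(self, w, h):
          self.w, self.h = w, h

      def set_width(self, w):
          self.w = w

      def set_height(self, h):
          self.h = h

      def area(self):
          return self.w * self.h

  class Square(Rectangle):
      # "A square is-a rectangle" breaks substitutability:
      # setting one side must change the other.
      def set_width(self, w):
          self.w = self.h = w

      def set_height(self, h):
          self.w = self.h = h

  def stretch(r):
      # Code written against Rectangle's contract...
      r.set_width(5)
      r.set_height(4)
      return r.area()

  assert stretch(Rectangle(2, 2)) == 20
  assert stretch(Square(2, 2)) == 16  # ...silently misbehaves for Square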

On the other side, you've got Kent Beck, the GoF, Uncle Bob, Martin Fowler, etc. (and possibly Giles Bowkett). They're what you would consider "war-proven": they've done numerous "IT projects", led teams of corporate developers, probably made websites for clients, and so on.

I don't mean to be rude (and perhaps this is an overgeneralisation), but I haven't heard of any software product made by the consulting group. Yes, they wrote code, but the code is based on requirements given by clients who:

1) Change the requirements frequently

2) Keep cutting costs

3) Ask for more

4) Cut more time

There's a huge difference between writing code for commercial software that you steer versus writing code driven by someone else who knows almost nothing about software development but knows a lot about how to save money.

Would you TDD your UI code?

If you wrote an API, yes, I can understand doing heavy unit tests.

A side note on having your own unit-test framework:

1) Developers shouldn't write them

2) Developers should provide minimal-to-enough unit tests

3) Go hire your own test team to do the rest

Developers aren't meant to be full-time testers. Testers aren't meant to be full-time developers.

What I'd like to see is a team like this:

1) Joel becomes the product manager

2) Paul Buchheit becomes the software architect

3) Jeff, Kent Beck and Martin Fowler do the coding

4) Robert Martin and those infested TDD people can build a full-blown suite of unit-test frameworks, acceptance tests, etc.

Microsoft has a category for the people in (4): it's called "SDET". For those of you who think developers must write comprehensive TDD (not just a minimum, or "just to pass"), here's a challenge: why don't you try to be an SDET? You'll see some serious full-blown test code, black-box and UI test code, UI automation, and a serious CI process. I'm sure the experience will make your TDD and unit-test code look pale in comparison.

Google seems to be moving slowly in that (MS) direction.


Classy response from Kent Beck.


Does "classy" mean a rant with no or little content and the only remarkable thing about it is a famous author?


I'm thinking there may have been just a pinch of sarcasm behind the choice of the word 'classy'...


I'm surprised nobody has mentioned Spolsky's significant bias in this argument:

His company makes bug tracking software!

Of COURSE he wants to cast unit testing in a bad light - done properly, it could put him out of business.


I sincerely doubt anyone who runs a company that makes bug-tracking software loses any sleep about software bugs being eliminated forever.

Regardless of what happens in the software industry, to-do lists ain't going away any time soon.


Epic rebuttal fail.


Guru fight! Which one is right? Neither!


What a well articulated ad hominem flame! So much confidence, and yet, no content. Now that is art!

sigh

http://www.paulgraham.com/disagree.html



