I wish he had a higher profile, similar to, say, Spolsky's or Atwood's.
That being said, the effect of all those references (on me anyway) was more to highlight the weakness of the research literature than to make a convincing case. But that's not Glass' fault. He does a great job of reporting what the literature says.
It's quite concise (224 pages), but chock full of excellent advice. Each of these points (and many others) is fleshed out in a separate chapter that gives a good deal of background, clarification, and supporting evidence.
RES1. Many software researchers advocate rather than investigate. As a result, (a) some advocated concepts are worth less than their advocates believe and (b) there is a shortage of evaluative research to help determine the actual value of new tools and techniques.
I quote this to a researcher friend of mine every time he advocates the use of Lisp in commercial projects, claiming that such a great language will inevitably bring more productivity to the team than such a poor language as Java (or any other Blub).
And look at who Y Combinator chooses: it's more about the people than anything else.
"in a room full of expert software designers, if any two agree, that's a majority!"
P3. Good programmers are defined as those who produce good products. If you focus only on coding speed or analytical skill, you may miss out on the best programmers.
P4. If you are paying your good programmer less than you know s/he is worth, count on them leaving soon. You will deserve the loss.
Why is this a given? What forces prevent pay from becoming commensurate with ability? Is this unique to the programming profession, or is it widely observed in other fields too?
Now, how many $500,000 programmers do you know? Outside of finserv quants and algo shops, not many. But THEY can make millions, even tens of millions.
Unfortunately our industry tends to grant incremental compensation increases against geometric increases in performance and value. That's true in a lot of engineering fields, and the same can probably be said of design fields as well.
RE4. Even if 100-percent test coverage (see RE3) were possible, that criteria would be insufficient for testing. Roughly 35 percent of software defects emerge from missing logic paths, and another 40 percent are from the execution of a unique combination of logic paths. They will not be caught by 100-percent coverage (100-percent coverage can, therefore, potentially detect only about 25 percent of the errors!).
Given the fanaticism for TDD today, this is a fact well worth remembering.
Also, passing all tests does not mean there are no bugs. Assuming you have tests for all your requirements, passing the tests means only that the software can perform those requirements. It does not mean that it will be bug-free, especially if the users try to do things the requirements didn't anticipate.
TDD has the great benefits you mention but it is not the magic bullet so many of its advocates make it out to be.
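To illustrate why 100-percent coverage can't catch defects from missing logic paths, here's a minimal sketch (the function and its spec are hypothetical, not from the book): every line that exists is executed by the tests, but the bug is a branch that was never written.

```python
def shipping_cost(weight_kg):
    # Bug by omission: suppose the spec requires a freight surcharge for
    # orders over 50 kg, but that logic path was simply never written.
    if weight_kg <= 10:
        return 5.0
    return 12.0

# These two tests execute every line of the function -- 100% line
# coverage -- yet they cannot reveal the missing >50 kg branch,
# because there is no code there to cover.
assert shipping_cost(5) == 5.0
assert shipping_cost(30) == 12.0
```

No coverage tool can flag `shipping_cost(60)` as wrong; the absent surcharge path is invisible to any metric that only measures the code that exists.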
One criterion, two criteria. That one thing, those two things.
/pet grammar peeve
Dijkstra said it, tests can prove the presence of bugs but not their absence.
I'm a TDD fanatic but I haven't looked at code coverage in years.
TDD claims to lead to higher coverage, and I suppose its advocates would claim it helps you avoid missing logic paths too: since you start with tests and then write code to pass them, you've always got high coverage, and if you haven't written a test for something you can assume it doesn't work. The alternative is to write a bunch of code, then add just enough tests to keep a coverage metric happy.
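The "unique combination of logic paths" failure mode from RE4 is easy to sketch too. In this toy example (hypothetical names, not from the book), each branch is individually exercised by coverage-happy tests, but the bug only appears when both branches fire together:

```python
def apply_discount(price, is_member, has_coupon):
    # Each branch is covered by the tests below, but the two
    # discounts were never meant to stack.
    if is_member:
        price *= 0.9
    if has_coupon:
        price -= 10
    return price

# 100% branch coverage with just these two tests...
assert apply_discount(100, True, False) == 90.0
assert apply_discount(100, False, True) == 90
# ...yet the untested *combination* of paths stacks both discounts
# and drives cheap orders below zero:
assert apply_discount(5, True, True) < 0
```

The coverage metric is satisfied after the first two asserts; only a test of the path combination exposes the defect, which is exactly why exhaustive path testing explodes combinatorially while branch coverage stays cheap.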
There was an article about this on HN a while back, but I can't find it right now.
However, a minor quibble with REU2 ("Reuse-in-the-large (components) remains largely unsolved, even though everyone agrees it is important and desirable"): not everyone agrees that reuse-in-the-large is desirable. In Coders at Work, Knuth goes so far as to suggest taking apart other "reusable" libraries and rewriting them. Rumbaugh (of the Three Amigos) has said that reuse-in-the-large is overrated as a goal.
It seems like the harder we try to improve estimate accuracy the worse it gets. Are there any great examples of successful software estimation?
> User satisfaction is related to a combination of quality product, meet[ing] requirements, delivered when needed, and appropriate cost.
A lot of the code I have seen has weaker mathematical roots, so I have no problem with him stating it the way he did.
Commonly forgotten fundamentals
This was obviously a case where the author was thinking, "Ooh, I will look clever and poetic by starting off my blog post with a pointless tongue twister."
Well, not unless you provide links to references.
In fact, here's one fact that was left out: merely asserting something in a technical article does not make it a fact. It's opinion until you back it up with data.