The reality is that different types of coders are better suited to different types of tasks, and when you mismatch a coder to a task, the results are not great. Some are very fast but pay little attention to how the code is built (great for trying out new ideas that need validation); others are more methodical and better for tricky things that need to be done right.
I think your point about errors is more valid. When I wrote about errors, I was thinking of more serious ones, but I don't think I made that clear, and it changes my argument somewhat.
A bricklayer who makes a chimney collapse could very easily make the bathroom collapse too. Arguably, the bricklayer has even less margin for error, because if he messes up, people could be physically injured or killed.
I'm a neurobiologist. While doing dissections, I separate the hippocampus from the cortex. If I don't perform that dissection well, my cultures will be contaminated with excess cortical neurons, and that will ripple through all my results, typically not noticed until weeks later, when the neurons are mature. If the errors were subtle enough, they might get propagated into a journal article and published as Scientific Knowledge.
When I worked fast food waaaay back in high school, if I dropped a burger on the floor, I would back up the entire kitchen because our orders were no longer flowing correctly.
I find that most good blogs tend to mark such after-the-fact revisions in a manner similar to that suggested above. Otherwise, there is no coherent article for the readership to discuss. There's the version you read, which is different from the version I read, which is different from the version Alice will read tomorrow...
I was actually not aware that there was an accepted finality to online writing though, so that is something I should keep in mind.
And yet: one afternoon (many years ago) I came in to work, and my boss called me into his office. He was holding a copy of Time magazine, and looked angry.
Whoops! The border around the picture at the bottom was glitched. It turned out to occur only for pictures whose vertical line count was of the form 4n + 3; even line counts or 4n + 1 were fine. Luckily, the bug had been noticed before millions of copies had been printed.
"The only reason you still have a job, is that the compression routine you added let Time push back their photo deadline by a full day." [I believe that at the time, the communication between their headquarters and their regional printing plants was over 4800 bps leased lines. Note that the compression routines achieve ~50% lossless compression, looking at only one scan line at a time (core was tight in those days: some of it even was literally core).]
This is why safety-critical systems are sometimes verified using model checking and the like. In some cases (airplanes, etc.) even the best programmers cannot guarantee the necessary level of quality.
In your everyday app, where a user could crash the program by using an obscure value, this is still a significant problem. Especially if the error happens rarely, it may take a long time to be noticed for the first time, and it might have a critical impact (loss of money?) by then.
The important point is that the great developer will introduce fewer of those subtle errors.
I'd love to see more discussions of flow on HN. I'm particularly interested in the statement that Python leads to more flow for the author.
Aside (not directed at you): I watched the article drop from #6 all the way down to #20 in the space of a minute. Does anyone know why that might happen?