Maybe it's just because I've been dealing with them on a daily basis for decades, but when I hear "bug" I don't infer any different connotation than "defect". They both sound like an error in a program created by the human(s) who made that program.
Also, I don't understand the reasoning for why we shouldn't anthropomorphize programs. I read the digression about two and a half times, but I can't connect the chessboard/domino tiling problem to a reason we shouldn't consider software's behavior.
I can't make much sense of Dijkstra's linguistic points either. But from what little I can understand, my objection boils down to the following statement:
> A programming language, with its formal syntax and with the proof rules that define its semantics, is a formal system for which program execution provides only a model. It is well-known that formal systems should be dealt with in their own right, and not in terms of a specific model. And, again, the corollary is that we should reason about programs without even mentioning their possible "behaviours".
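(For what it's worth, the one part I think I do follow is "program execution provides only a model". A toy example of my own, not Dijkstra's: the Hoare triple

$$\{\, x \ge 0 \,\} \quad y := x + 1 \quad \{\, y \ge 1 \,\}$$

follows from the assignment axiom by pure syntactic substitution: replace $y$ with $x + 1$ in $y \ge 1$ and you get $x \ge 0$, with no execution involved. That is reasoning "without even mentioning behaviours", and I can see the appeal; I just don't think it scales.)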
Perhaps this is another aspect of the "SWE vs. CS" debate, but I've come to the conclusion that in SWE it makes more sense to reason about entities and how they behave (deliberate use of the word) than in terms of every single line of code. Said another way, anthropomorphizing programs is another means of abstraction, and SWEs and CSs alike should be comfortable going up and down the abstraction ladder as needed. Even in academia, outside your first two programming classes or so and outside algorithms, it is rare to spend time in the realm of "formal syntax and proof rules". (Or maybe this is simply a difference between the needs of CS education today and in Dijkstra's time.)
But why should we encourage reasoning in this manner? I have many reasons, but the bluntest (though it holds no less water) is that Dijkstra's statement assumes you have access to readable source code. That's simply not true. So it's more useful to reason about computing units as entities with "behaviors".
In the real world (outside of compiler development), we rarely discover arbitrary code without context. Usually, we are writing that code to fulfill some business purpose. Without being able to describe the behavior of the system, we'd never be able to design any of the practical software that exists outside of pure math.
The origin of the word "bug" is that, a long time ago, an actual bug (as in the animal) caused an error. Errors have been called bugs ever since, because everybody loved the story.
I have no idea whether the story is true. But in Dijkstra's time, he probably associated "bug" with the above story: the error was not a human mistake, humans were innocent, it was just bad luck that rarely happens. We are now 40 years later and the associations are completely different.
In my CS university software engineering course we defined several terms with different meanings: error, fault, defect, failure, etc. Some are about the programmer doing something wrong, some are about what the code itself contains as a result, others about the actual event when the problem gets triggered, etc. I forget which was which. Something like: the error is what the programmer makes, the failure is what happens when it gets triggered in use, and the defect is the thing in the code (colloquially: the bug), if I remember correctly. No idea what "fault" was.
I’m not sure if there are standard meanings for these words, but ISO 26262 defines fault, error, and failure in specific ways that are useful to think about, though I always struggle to keep them straight. Mainly, I swap fault and failure if I'm not careful.
But in this case, none of these are design or implementation mistakes. We just call those bugs, though maybe there is an official term like defect for them.
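For my own memory, here is the loose way I keep the chain apart in ordinary software terms. This sketch is mine, not the standard's wording (and, per the above, the standard's "fault" may not mean the design mistake at all):

```python
def average(values):
    # FAULT: the defect sitting in the code. The divisor should be
    # len(values), not len(values) + 1. (Colloquially: the bug.)
    return sum(values) / (len(values) + 1)


# ERROR: executing the fault puts the program into an incorrect internal
# state. We compute 15.0 where the correct value is 20.0.
reading = average([10, 20, 30])

# FAILURE: the erroneous state crosses an observable boundary and the
# system visibly misbehaves (a false alarm, in this toy example).
if reading < 18:
    print("temperature alarm triggered!")
```

The mnemonic I take from it: the fault is in the code, the error is in the state, the failure is at the interface.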
He dislikes anthropomorphism because there is no real-world equivalent of logic, and he believes that we should attack the logic problem of software head-on. Creating a metaphor is great for abstracting what logic is actually doing, but it discourages focusing on the actual logic.
Basically, the metaphor is an opiate. It feels good to think about, because it turns an intractable problem into one that is intuitive. But it doesn't actually solve the problem - only considering the logic does.