Sorry to spoil it, but the punchline is that the study everyone claims proves "[software bugs] detected during system testing cost 10 times more to fix than in the construction phase and 10-25 times more to fix when detected post-release" actually didn't prove that. Software engineering academics have been parroting it because they're citing a PowerPoint presentation that doesn't mention the study's caveats.
Would it have been so hard to lead with that fact?
You are certainly correct that in another forum, and particularly in this one, the much shorter version would also be valuable.
Good observation. I find the most engaging writing takes the standard intro/problem statement -> supporting data -> conclusion structure and completely inverts it: assert the final conclusion in the very first sentence, then work backwards through the supporting data to the problem statement.
Quick readers will usually already be able to extrapolate the problem from the conclusion, and then can either move on with just that executive summary in mind, or can read through the supporting data and actively evaluate it. Both quicker and more engaging overall.
Instead, it seems we're taught to ramble on and on in an attempt to build up to some grand conclusion. But in this age of information overload, few people have the time, patience, or attention span to ingest all that, at least in the form of a blog post (books and academic articles are a different case).
It's like taking a speech by a politician, removing the obvious identifiers, and showing it to his enemies, who agree with it; then you reveal who wrote it. You've demonstrated something interesting about human biases, and his enemies may even learn something.
If one assumes that bugs are introduced at construction (the writing of the software), then you can use that as a fixed point to determine the cost of bugs based on when they are found. Which, using the original study, would mean that bugs found in testing are more costly and bugs found in production are more costly still.
What is interesting is that it is a function of time, not phase. So, for example, if you were continuously deploying code and QA found a bug the same day, it would effectively cost the same (all else being equal) as a bug caught in a developer's code review. It would also cost significantly less than a bug found several months down the pipeline, once your code was finally integrated into a QA environment in a non-CD setup.
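To make the "time, not phase" point concrete, here is a toy model (my own sketch, not anything from the study) where fix cost compounds with the days elapsed between introducing a bug and finding it; the base cost and daily growth rate are made-up illustrative numbers:

```python
def fix_cost(days_since_introduced, base_cost=1.0, daily_growth=0.02):
    """Hypothetical fix cost: a small daily penalty compounds as the
    code ages out of everyone's working memory. Parameters are
    illustrative assumptions, not measured values."""
    return base_cost * (1 + daily_growth) ** days_since_introduced

# Same-day catch (continuous deployment plus same-day QA) costs the
# same as a same-day code review, despite being a different "phase":
same_day_qa = fix_cost(0)
code_review = fix_cost(0)

# A bug that waits six months for a QA environment costs far more:
six_months = fix_cost(180)  # roughly 35x the same-day cost in this model
```

Under this model the phase label carries no information at all; only the elapsed time does, which matches the continuous-deployment example above.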
I'm slower to make the same changes after a few days or a week, and even more so after six months.
My experience is that I'm not alone. Well-structured code, unit tests, and documentation help, but they will never eliminate the work of loading the system into your brain again.
There is an anecdote of one physicist showing an X-Y chart to another. The latter elegantly explains the phenomenon, but then the first one says, "Oh, you are looking at it upside-down," to which the other replies, "Ah, now it makes even more sense."
I've never heard it phrased that way "in industry" (as opposed to academia, where I have no experience). Instead, what I've heard is:
""defects detected in the operations phase (once software is in the field) cost more to fix than those detected and fixed earlier"""
While perhaps not backed up with published results (again, I don't know), you can see it's true just by 1) taking the actual development cost of fixing a bug as constant (which is extremely generous when comparing a bug in dev versus production) and 2) counting the number of other people involved in identifying and resolving the bug.
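That headcount argument can be sketched as back-of-the-envelope arithmetic; every number here is an assumption for illustration, and the roles (support, QA, release manager) are just plausible examples of "other people involved":

```python
# Hold the developer's fix cost constant in both scenarios, per point 1.
DEV_FIX_HOURS = 4  # assumed, same whether caught in dev or production

# Bug caught in development: only the developer is involved.
dev_phase_hours = DEV_FIX_HOURS

# Bug caught in production: count the other people, per point 2.
prod_phase_hours = (
    DEV_FIX_HOURS  # the developer still does the same fix
    + 2            # support: triage and customer follow-up (assumed)
    + 3            # QA: reproduce the bug, verify the fix (assumed)
    + 1            # release manager: ship the hotfix (assumed)
)

multiplier = prod_phase_hours / dev_phase_hours  # 2.5x here
```

Even with the generous constant-fix-cost assumption, the extra hands alone produce a multiplier greater than one, which is all the quoted claim requires.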
I guess the straw man argument is that design bugs are worse than implementation bugs in a project, but that's also true. In both cases, the issue is that time cures all wounds, where "cure" is used in the sense of "made permanent," not "fixed"; or where "fixed" is used in the sense of "locked in place," not "restored to working order."