Somehow I think this post would have been more valuable if it hadn't played stupid games about imagination.

Sorry to spoil it, but the punchline is this: the study everyone claims proves that "[software bugs] detected during system testing cost 10 times more to fix than in the construction phase and 10-25 times more to fix when detected post-release" didn't actually prove that. Software engineering academics have been parroting the claim because they're citing a PowerPoint presentation that doesn't mention the study's caveats.

Would it have been so hard to lead with that fact?




In the context of lesswrong.com, that's a fine post.

You are certainly correct that in another forum, and particularly in this one, the much shorter version would also be valuable.


That is the worst aspect of LessWrong: unnecessarily long posts built around huge analogies.


>Would it have been so hard to lead with that fact?

Good observation. I find the most engaging writing comes when someone takes the standard structure (intro/problem statement -> supporting data -> conclusion) and completely inverts it: assert the final conclusion in the very first sentence, then work backwards through the supporting data to the problem statement.

Quick readers can usually extrapolate the problem from the conclusion alone, and can then either move on with just that executive summary in mind or read through the supporting data and actively evaluate it. Either way, it's quicker and more engaging overall.

Instead, it seems we're taught to ramble on and on in an attempt to build up to some grand conclusion. But in this age of information overload, few people have the time, patience, or attention span to ingest all that, at least in the form of a blog post (books and academic articles are a different case).


Yes, but the point is not simply to state 'this common citation in software engineering is bullshit, which indicates the field is full of bullshit'. The point is to get the reader to actually think about, and agree to, the logical argument before revealing that the specifics are about software engineering - before readers can retreat into self-justification, criticism, and confirmation bias ('oh, we're special, it's not as simple as you think').

It's like taking a speech by a politician, removing the obvious identifiers, and showing it to his enemies, who find they agree with it - and then revealing who it was written by. You've demonstrated something interesting about human biases, and his enemies may even learn something.


It isn't entirely false:

If one assumes that bugs are introduced at construction (the writing of the software), then you can use that as a fixed point to determine the cost of bugs based on when they are found. Using the original study, that would mean bugs found in testing are more costly, and bugs found in production are more costly still.

What is interesting is that cost is a function of time, not phase. So, for example, if you were continuously deploying code and QA found a bug the same day it was written, fixing it would effectively cost the same (all else being equal) as catching it in a developer's code review. It would also cost significantly less than a bug found months down the pipeline, once the code was finally integrated into a QA environment in a non-CD shop.
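
A toy model of that "function of time" reading, in Python. The exponential shape and the 2%/day growth rate are invented here for illustration; they are not numbers from the study:

    # Toy model: fix cost grows with how long a bug survives,
    # regardless of which "phase" it is finally found in.
    # The shape and the 2%/day rate are assumptions, not study data.
    def fix_cost(days_since_introduction, base_cost=1.0, daily_growth=0.02):
        return base_cost * (1 + daily_growth) ** days_since_introduction

    # CD shop: QA catches the bug the same day it was written,
    # which costs about the same as catching it in code review.
    print(fix_cost(0))    # 1.0
    print(fix_cost(1))    # ~1.02

    # Non-CD shop: the code sits for six months before reaching QA.
    print(fix_cost(180))  # ~35x the base cost

Under this kind of model the phase labels fall out entirely; the only input that matters is elapsed time.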


Agreed: when I work on a block of code, I am quickest at making changes to it within the hour (while I still have its design and structure loaded into my brain).

I'm slower to make the same changes after a few days or a week, and even more so after six months.

My experience is that I'm not alone. Well-structured code, unit tests, and documentation help - but they will never eliminate the work of loading the system into your brain again.


Counterexample: the Debian OpenSSL bug of 2008 lingered in the code base for years before being detected, but the fix was still a one-liner (as was the original bug).



