Does this even need a study? Isn’t it simply obvious? Perhaps less so with online applications and continuous updates, but in the classical model of software development that most of the industry still depends on, the cost and hassle of fixing a bug obviously increase the further a piece of code travels along the development pipeline: more people, more communication and more effort are needed to put it right. A stitch in time saves nine, as the saying goes. This reality is at odds with the cult of “move fast and break things” that seems to be somewhat popular these days. In any sector where quality is a factor, that attitude will always lead to increased expense and stress.

Of course, if you’re just running a website and you’ve got a nice network effect going, or you’ve got your customers locked into contracts based on matters other than users’ satisfaction, then all of a sudden quality becomes a “cost” and you have to come up with this hokum to discourage your developers from doing a good job, contra their intuition.

I have a close acquaintance who works in a medical lab, and frustratingly we’ve seen this mentality creeping into even the equipment they use: the system can be down for days at a time due to some software update, and then they won’t even properly resource field engineers, because that would mean bearing the costs that the 100x model predicts. Certainly they’re probably saving pennies on the dollar, but who ends up bearing the cost? Users and healthcare budgets.

There are those who will quibble that this is but an anecdote, but you can see this kind of sloppy cack-handedness creeping in everywhere. Oh, ship it today, we can fix it tomorrow, when we ship a whole slew of new bugs too.

Sorry. A bit of a sideways rant.




> Does this even need a study? Isn’t it simply obvious?

There are "obvious" and "trivially true" things in science that turned out to be false.

While it might be easy for certain engineers to accept, there is real value in validating our assumptions about how we operate.


Yeah, and there's no single golden rule. Everything would depend on:

a) the bug

b) the product

There can also be cases where it’s more expensive to find and fix the bug yourself than to let some customer find it and report it.

There’s a reasonable amount of testing you should do. You can never be sure that there aren’t any bugs, and at a certain point further testing is no longer cost-beneficial.

A bug could be some button’s focus border being the wrong color. No one notices it despite 20 hours of testing and running through the site in dev, and then one day a designer happens to spot it and files a report. The developer has to change one line of CSS; they don’t even have to deploy it immediately, but can ship it together with everything else, depending on the pipeline of course.

In retrospect, are you going to conclude that we should have done another 20 hours of testing to definitely spot that “bug”, because it’s 100x more expensive to fix in production?
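To put rough numbers on that trade-off, here is a back-of-envelope sketch in Python. Every figure in it (hourly rate, hours, probabilities) is a made-up assumption purely for illustration, not data from anywhere:

    # Toy comparison: spend 20 more hours testing to maybe catch a trivial
    # cosmetic bug, or let it escape and fix it when someone reports it.
    # All numbers are hypothetical, for illustration only.
    hourly_rate = 100          # cost per person-hour
    extra_test_hours = 20      # proposed additional testing
    p_catch = 0.3              # chance that extra testing actually finds the bug
    fix_now_hours = 0.5        # one-line CSS change if caught in dev
    fix_later_hours = 2.0      # same change plus report/triage overhead in prod

    cost_more_testing = hourly_rate * (
        extra_test_hours
        + p_catch * fix_now_hours
        + (1 - p_catch) * fix_later_hours
    )
    cost_ship_now = hourly_rate * fix_later_hours

    print(cost_more_testing, cost_ship_now)  # 2155.0 vs 200.0 with these numbers

Whether the extra testing pays off clearly depends on the bug’s severity, the odds that the testing would actually catch it, and how much overhead the pipeline adds to a later fix, which is the point being made above.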


As a practicing engineer, this should be readily observable in our day-to-day lives. Defects escape, and the further they go, the more hassle they are. Only people who haven’t experienced this would need it “validated”, and I think this type of thought is the same as denying climate change: oh, you have lived experience, well I have this study.

That is not to say that the economics necessarily follow the costs … and that is very likely the root of the problem.


> Defects escape, and the further they go the more hassle it is.

As a practicing engineer, what is readily observable is that this is highly dependent on context. You see how hard it is to reach any consensus without rigorous measurement? How many more hours are needed to correct a defect down the line? How much does the context affect this? Is it possible to change the context such that the cost is the same?

I can’t understand people who are against having scientific knowledge, when time and time again we see that real advances are mostly made by acquiring it, many times overthrowing what was “obvious”.

> I think this type of thought is the same as denying climate change.

Climate change is one of the best examples of how having scientific knowledge is absolutely crucial, instead of guiding ourselves by what we experience.


Denying climate change is ignoring science and studies as opposed to requiring studies to be sure.


> Does this even need a study? Isn’t it simply obvious?

Actually checking what seems obvious is good science.


What goes up must come down.


And we're still testing that rule. Most recently for antimatter.


Then you learn about escape velocity (to choose an example where no "tricks" are involved).
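For the curious, the threshold where that particular “obvious” rule stops holding is easy to compute from standard constants; a quick sketch (nothing assumed beyond the textbook formula):

    import math

    # Escape velocity from Earth's surface: v = sqrt(2 * G * M / R)
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24    # mass of Earth, kg
    R = 6.371e6     # mean radius of Earth, m

    v_escape = math.sqrt(2 * G * M / R)
    print(f"{v_escape / 1000:.1f} km/s")  # ~11.2 km/s; above this, up need not come down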


And you end up right back where you started. Day to day, for most people in most cases what goes up must come down.

That there are “special cases” such as “flight”, “parachutes” and “space travel” doesn’t invalidate the fact that - notwithstanding additional feats of engineering and expense - if you jump off this building you will break your leg.


To continue the bridge metaphor... there's a pot of gold under the bridge. Is jumping off the bridge survivable (low bridge, water underneath, you have a bungee cord)?

Knowing the special cases has value.


Not arguing with that. Special cases don’t invalidate a rule though. Kirchhoff’s circuit laws continue to be a thing even if they break down at higher frequencies and give way to radio …


> Kirchhoff’s circuit laws continue to be a thing even if they break down at higher frequencies and give way to radio …

Do you think maybe we know this because someone decided to check the assumption?

That was the original point of this thread. You just gave an argument in support of it.


If that’s what you think then you haven’t been paying attention


> Day to day, for most people in most cases

Right but we're engineers - we're considering things outside the day to day and beyond most cases.


I think this is the correct take. Engineers are in the business of solving problems.

So if the question is "How do I make something go up and NOT come down?" or "How do I make something come down differently?" then the obvious take isn't as useful.


> Does this even need a study? Isn’t it simply obvious?

Well, at the very least, commercial software used to come in shrinkwrap boxes and now ships as web applications that can be updated at any moment.

Surely the cost of fixing a bug in the first situation must be higher than in the second.

The statement sounds equally obvious for both, but it must be wrong for at least one of them.


> Of course if you’re just running a website and you’ve got a nice network effect going on, or you’ve got your customers locked into contracts

It’s perfectly fine as a rule of thumb, and regardless of how the software comes to be in your possession it is relevant, unless there is absolutely no delivery pipeline or other people involved in said delivery. Even for personal projects the recency effect comes into play, and it will be harder for you to address something six months down the line as you struggle to recall why you did it that effin’ way.


Even assuming it's true (and I believe it is, if 100x is really "orders of magnitude"), there is value in having valid research on the real costs of fixing bugs across various industries and software types. It can drive innovation in the testing/QA space. It can drive investment in test pipelines. Currently, large companies likely have enough software, with enough defect escape, to generate their own metrics. Smaller companies are left to rely on the "100x" rule of thumb and guess at what level of defect escape maximizes their resource allocation.
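As a sketch of what a company-specific metric could replace the rule of thumb with, here is a toy model; every parameter is hypothetical and only shows the shape of the trade-off, a real version would be fed from the company’s own defect-tracking data:

    # Toy model: total cost of a release as a function of QA investment.
    # All inputs are hypothetical placeholders.
    def expected_cost(defects, escape_rate, fix_early_cost, escape_multiplier, qa_cost):
        """QA spend + cheap fixes for caught defects + multiplied cost for escapes."""
        caught = defects * (1 - escape_rate)
        escaped = defects * escape_rate
        return qa_cost + caught * fix_early_cost + escaped * fix_early_cost * escape_multiplier

    # More QA spend buys a lower escape rate; compare two investment levels.
    light_qa = expected_cost(defects=50, escape_rate=0.4, fix_early_cost=200,
                             escape_multiplier=100, qa_cost=5_000)
    heavy_qa = expected_cost(defects=50, escape_rate=0.1, fix_early_cost=200,
                             escape_multiplier=100, qa_cost=25_000)
    print(light_qa, heavy_qa)  # 411000.0 vs 134000.0 with these made-up numbers

Swap the assumed multiplier of 100 for a measured one and the optimum QA spend falls out of the model, which is exactly the decision smaller companies are currently guessing at.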



