Ask HN: Why is software quality an afterthought for many people/companies?
80 points by jcfausto on Oct 23, 2017 | 97 comments
I've seen many discussions around this topic lately, but I'm particularly curious to understand why most people think that software/code quality is something secondary that can be addressed late in the process, for instance with peer review.

Why isn't the idea that software quality starts way before you write any line of code the predominant mindset amongst engineers / the industry?

I have the feeling that most companies don't hold discussions about what software quality means and how it should be measured.

To what extent do you agree or disagree with this feeling?




Software development is, apart from a few rare outposts like GOV.UK, conducted in the private sector for a profit.

That means that the number one consideration for the software is profitability. For internal-only software, this means that cost is the prime consideration.

In support of that, often software startups are trying to capture a winner-takes-all market, so time-to-market is critical.

Thirdly, consumer protection law is weak in the US, and product liability is almost nonexistent for software everywhere. The cost of failure is very low even if you leak all your customers' data or your product ceases to work after 18 months because you've "pivoted".

Fourthly, a lot of software is ""free"" or ad-funded. This further weakens the cost of failure.

There are techniques for delivering extremely high quality software. Few sectors of the industry care about them because it's not required and is unprofitable, but the aerospace people can usually get it right and the security people can usually get it right (when dealing with security products, not general purpose junk like Flash).

The automotive industry is kind of on a boundary. The Toyota "unintended acceleration" bug revealed some tremendously poor quality software. This is one of the main worries about self-driving cars: how minimal is the quality assurance going to be?


The only thing I can add (indirectly covered in your comment) is that software companies are also competing with each other. So once the quality-vs.-price bar gets set, it is hard for another company to offer better quality while asking a higher price. This is one of the reasons so much software has ended up ad-funded.

Dan Ariely (author of the book Predictably Irrational) called allowing free apps on app stores a mistake made by the industry. Customers have now gotten used to free apps, making it harder for the industry/developers to offer better quality.


Theoretically software products could compete on "quality", but this is quite rare because it's hard for the customers to measure. See https://en.wikipedia.org/wiki/The_Market_for_Lemons

I don't agree with the idea that free apps are inherently bad - that would rule out Open Source / Free Software, and it would also put the boundary with "free" web pages in a strange place.

"Free"+ad-supported and "free"+IAPs have certainly produced some strange and terrible incentives, though. As has the incredibly bad discovery process on app stores.


We are mostly in agreement. See my other comment here: https://news.ycombinator.com/item?id=15532543

I do find open-source software to be generally lower in quality than paid products, though there are many exceptions at this point where open source is clearly superior.

However, even in cases where someone could produce better paid software than an open-source alternative, in practice they cannot, as it is hard to compete with free.


In my opinion open source software typically has worse design but better implementation.

For instance, in the latest iOS there is a stupid bug where the calculator blocks the buttons if you press them too fast. So if you enter 1+2+3 the display will show 23.
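
(A toy model of how that class of bug can arise; all names here are invented, and it simply assumes input is dropped while a press animation is still running:)

    # Toy sketch: a calculator that ignores key presses while a
    # button animation is still "running". Invented names throughout.
    ANIMATION_MS = 300

    class Calc:
        def __init__(self):
            self.display = ""
            self.busy_until = 0  # time until which input is ignored

        def press(self, key, now_ms):
            if now_ms < self.busy_until:
                return  # fast press silently dropped -- the bug
            self.busy_until = now_ms + ANIMATION_MS
            self.display += key

    c = Calc()
    for t, key in [(0, "1"), (100, "+"), (200, "2"), (350, "+"), (700, "3")]:
        c.press(key, t)
    print(c.display)  # "1+3": the quick "+" and "2" were swallowed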

In open source this would be trivial to fix. In closed source you have to wait for Apple to do its implementation, testing, distribution.

In open source you would have a customizable calculator with a million generally useless buttons though because it’s so easy to add them.


Ariely was talking about free crapware, not community ware. But even open source only works in the community. Once you go to the app store you get tons of derivatives of open-source stuff, laden with layers of crap.


I think it is even worse than that. I would like to pay for all of my apps. I could not find any way on Android to filter out all of the free apps and just show ones that have a price.


You obviously don't want to pay for the same app+crap currently offered for free, but the purveyor would happily offer it for a fee. What you say you want isn't actually what you want. What you want is a quality filter. And that is what Apple claims to provide, and Google intentionally does not.


I would suggest that the problem is that the app stores don't properly support the free trial model.

I won't spend $10 on an app sight unseen because if it's crap or even just plain doesn't do what I need, there's no way to know that aside from trying to parse it out of the reviews that the developer probably bought from a spammer.

I will spend $10 in a heartbeat on an app that I've tried for two weeks and don't want to go back to living or working without.

But AFAICT, Apple explicitly forbids that business model in its store. Dunno whether or why Google Play apps don't use it more, tho.


Google Play also doesn't really have a good way of implementing that as far as I know. You can do free app and unlock via IAP, but no matter how clearly you spell it out on the page people will not read it and kill your ratings with "1-star, SCAM!" reviews.

Something like this really should be a store-level feature.


Indeed. I am often happy to pay the developer for better software, but no such thing may even be available.


> the security people can usually get it right

Having worked for an AV vendor, I assure you that is not the case. Just check Project Zero [1]; most AV products do parsing of complex binary formats in kernel mode, 'nuff said.

This is just one example, but all major vendors have had issues:

[1] https://googleprojectzero.blogspot.ro/2016/06/how-to-comprom...

-----------------

OTOH, there are a few individuals who show a great deal of care about software correctness. Daniel Bernstein comes to mind, but many other people are offering big bounties for their personal projects, and have a track record of delivering correct software. But even in cases such as these, there are probably some hidden bugs in there, because of the inherent complexity. Nobody has the time to verify the fine interactions between the compiler, OS, libraries etc.

At the end of the day, if you want higher quality software, you have to incentivize it, as others have mentioned.


Parsing binary formats in kernel mode isn't more dangerous than text. Taking user input as executable instructions in your host computing environment is dangerous.


I just made a general comment above, but your message is very well put about software in particular.

Reading along on HN for a few years now has made me more terminology-oriented in areas of coding and capitalism, two of the major themes here.

So to me it's only software if it's intended for sale (or profitable distribution), otherwise it's just computer programs.

Same with hardware, if it's not built for profit then it's not wares, just equipment.

Nothing wrong with building in quality for profit, but you may not be able to compete with low-quality-focused operators, especially ones which are strongly established.


OK but no one else uses your idiolect, so it's just making it hard to communicate.


Two points:

First, to extend what you said about startups, you generally aren't totally sure that the market is really there. If it turns out that nobody cares about what you're building, it doesn't matter what quality you built it with. Therefore, as long as it's cheaper to build it with lower quality, startups are rational to build it with little concern for quality.

Second, Toyota: I'm not going to be any kind of apologist for Toyota's horrible software. From what I read about the situation, the way it was written was appalling. And they rightly got a lot of heat over it. (Arguably, they should have gotten more.)

But I wonder if the quality bar isn't being set too high in this situation. If Toyota didn't implement things in software, they would have had to implement it in hardware (either mechanical or electrical). That hardware would have some failure modes and failure rate. If the software has a lower failure rate than the hardware, that's progress, even if the software has a higher failure rate than it should have.

Our discussion of the Toyota flaws is colored by the fatalities. Still, hardware flaws can kill people, too...


But some of the stuff they're implementing in software wouldn't have to be implemented in hardware in a less high-tech car. It would just have manual controls for a human to operate the underlying hardware that the software is now meant to control.


> Fourthly, a lot of software is ""free"" or ad-funded. This further weakens the cost of failure.

I'm not sure being ad-funded weakens the cost of failure. Losing users or having down time impacts revenue immediately if you're ad-funded.


Yes, but the users aren't going to ask for refunds. Also they feel that because it's free they aren't really entitled to customer service (and you certainly wouldn't offer them any).


Your original comment seems to suggest that loss of revenue from software not working will be more severe for directly-paid software than for ad-supported software. It's not clear to me that's the case, especially given that some of the highest-revenue software currently in use (consumer services from FB etc.) is ad-supported.

Anyway, this is tangential to this thread, so I won't go on about it.


> The Toyota "unintended acceleration" bug revealed some tremendously poor quality software.

Did Toyota ever recall the affected ETCs?


"aerospace people can usually get it right "

Are you referring to adherence to standards like MISRA and DO-178B, or something else?


Yes, that kind of thing. The observed failure rate is pretty low, although there have been a few high-profile incidents (the A400M, the whole Chinook fiasco: http://www.computerweekly.com/blog/Public-Sector-IT/One-of-t... )

The Chinook fiasco is, like most quality issues, really a project management fiasco: the decision to build special software rather than have Boeing do it, then a series of oversight failures on known problems.


Adherence to standards is a means to an end, but it is neither an end in itself, nor a guarantor of that end.


Sure, stupid tools are pointless. But useful tools...

If the point is to reduce the number of errors, then it helps to at least have a checklist of the errors, and someone reminding the team of the checklist. A checklist process is one of the easiest quality and safety tools to implement.

Having a premade checklist that makes sense, in the form of a process plan, makes things easier.
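
(A checklist can even live in code as a pre-merge gate; a minimal hypothetical sketch, with placeholder checks:)

    # Hypothetical checklist runner; the checks are placeholders for
    # whatever error list a team actually agrees on.
    CHECKLIST = [
        ("new code has tests", lambda: True),
        ("error paths log enough context", lambda: True),
        ("no stray TODOs left in the diff", lambda: True),
    ]

    failed = [name for name, check in CHECKLIST if not check()]
    if failed:
        raise SystemExit("checklist failed: " + ", ".join(failed))
    print("checklist passed")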


Exactly - a means to an end.

I'm not sure why you are mentioning "stupid tools", whatever they are - they would not even be means to an end.


I was going to say that we still seem to measure everything except code quality - your answer is way more insightful though.


> That means that the number one consideration for the software is profitability. For internal-only software, this means that cost is the prime consideration.

> In support of that, often software startups are trying to capture a winner-takes-all market, so time-to-market is critical.

I think we software engineers should embrace this reality, and learn to live with it. Your employer is willing to spend years to develop top-notch quality software? Great, you can employ all the software engineering best practices. That's not the case? Well, we should have a standard approach for gracefully handling strong time constraints without completely giving up on quality.


Therein lies the disconnect between what we want to achieve as a profession and the commercial needs of companies. Too often these are conflated in our view of what we do, and unhappy developers pushed to deliver stuff quickly are the result.

One way this unhappiness with our lot is expressed is "technical debt". For me this is just cognitive dissonance on the developers' part, trying to reconcile/justify why the codebase is a mess and why all those shortcuts were taken to get the thing shipped. If you want to pursue your craft and deliver a result you would be proud of, then commercial software companies are probably not for you.

We all might be great writers at heart, but if all the employers want is a pool of people to write pulp fiction and romance novels, the sooner we get over it the better.

Of course one solution to this identity crisis is sort of mapped out with Erik Dietrich's Developer Hegemony, https://leanpub.com/developerhegemony but it might take a while to get there.


To most businesses, the core objective is creating a profitable business. Most other objectives, including engineering, marketing, support, and yes, in many companies even things like customer and employee satisfaction, are secondary to that, and only really prioritized to the extent that improving those areas also improves the bottom line. (I would in fact argue that this is true of almost all companies, and that whether they prioritize customer and employee satisfaction is mostly a matter of whether they look at the impact on the bottom line in a short-sighted manner or in a more long-term way.)

So in regards to your question, it would seem that the market reality is that a lot of the time, it is better for a company to have a quickly-cobbled-together piece of software that mostly does what the customers want (and maybe get to the market first) even if it is low-quality, than it is to have a piece of high-quality software that does less, or is finished later, but is maintainable, and potentially scalable in the future (which you'll never get to enjoy because the worse-is-better people already conquered the market).


>> piece of software that mostly does what the customers want even if it is low-quality

The state of software is currently much worse than that, in my opinion.

Quality is not easily quantified, while price is. Metrics at the customer end are hard to collect (it requires software development too, raising costs), and in the current state of the art it also requires customer support staff, which is costlier still. As a result, quality does not even get quantified properly. A natural result is quality dropping below what customers would desire.

This isn't much different from where the quality of MP3 players, laptops and smartphones was headed. Perhaps quality then was being measured just by the percentage of customer returns, not by customer satisfaction. Steve Jobs then changed the game. Apple's products would just "feel right" to the customers. The iPod took over the market despite being much costlier. It then took a couple of years for the rest of the laptop/smartphone manufacturers to catch up.


> This isn't much different from where the quality of MP3 players, laptops and smartphones was headed. Perhaps quality then was being measured just by the percentage of customer returns, not by customer satisfaction. Steve Jobs then changed the game. Apple's products would just "feel right" to the customers. The iPod took over the market despite being much costlier. It then took a couple of years for the rest of the laptop/smartphone manufacturers to catch up.

"Feel right" is definitely a kind of quality that software can compete on. I'd probably put Chrome in this category (relative to other browsers that were around at the time it launched). Sublime Text, maybe. Blizzard games (especially those of a certain era).

Note, though, that this quality is principally about doing what the users want and being pleasant to use while doing so.

It's something that you can definitely focus on deliberately in your work and projects, but I'd argue that a lot of the current mantras that get recited when software quality comes up (test coverage, continuous delivery, maybe even code reviews) are not especially helpful for achieving this kind of user-perceived quality. Maybe even a distraction, in some cases. Getting your code in front of users and listening to feedback can help, certainly. But having a strong, clear, vision of what you're trying to build in the first place might be even more important. And I don't think that's something that's achieved with tools and processes.


Companies do care about code quality, but it's not the same kind of quality. This is the purpose of QA. Every company that has a QA team cares about quality at least enough to hire a full-time person responsible for it.

When developers talk about "quality," they mean tech debt. Addressing tech debt is problematic for businesses because it's something that never ends. You allocate one month for tech debt and the devs will ask for two. Allocate two and they will ask for three. There is no agreed-upon standard at which devs will stop and say, "Now our code is clean."

Add to this the fact that there are many developers who always seem to have an agenda about the code they're working on. They never work on a project without itching to add some pattern or change some aspect of the code, even if it's something they themselves favored a year ago.


I've managed developers for the same software product (accounting system) over the last 15+ years. That amount of time gives you some perspective.

A common thing new-hire developers do is call for "a complete rewrite". They do this because when they first approach a large old code base, it's daunting and seems impenetrable. Of course they are right, but naive in thinking a "rewrite" will help: any rewrite will eventually grow to be just as impenetrable once all features and edge cases are accounted for.

Fundamentally, any software product is trying to model some aspect of the real world... and the real world is messy, very messy. Governments pass laws that contradict each other, some laws change drastically state by state, employees try new and novel ways to embezzle, different languages and units of measure exist, changing commodity prices can suddenly cause complete upheavals in a manufacturing process, etc. All this must be accounted for, and it's a nearly impossible task.

The bugs that persist are almost never "I click Button A and it does the wrong thing", but almost always "in the case that Situations A + B + C all simultaneously exist, the result as interpreted by Agency X is not optimal". Obvious and real bugs get squashed pretty quickly, but those complex situational bugs can linger for a long time. As a manager, you sometimes just need to shrug, because the effort required to fix each and every one of these would produce little to no tangible business value. Moreover, an environmental change could come along and render your "fix" invalid anyway.

Sometimes even during design discussions we are completely aware we are creating "a bug", but the decision is made that the overlap between people who want Feature A and people in Situation B will be relatively small. Most often we just design a manual workaround instead of trying to completely eliminate the bug.

I'm always refreshed and excited by dealing with young devs, particularly their zeal to fix problems, simplify things, and generally improve the product. Yet I do feel a bit of sadness in knowing that reality is going to temper their enthusiasm after a decade or so. Reality is a very hard thing to model with any semblance of being "bug-free".


> once all features and edge-cases are accounted for.

But one of the benefits of a rewrite is that you can dump all the features and edge-cases that are no longer required. Or fold old edge-cases into new generalities because the business has changed since then.

> the real world is messy, very messy.

Cannot disagree -but- it's nowhere near as messy as the people (often those who are to blame) defending the byzantine software stacks using that argument.

> reality is going to temper their enthusiasm after a decade or so.

I've been doing this professionally for two decades and my enthusiasm for "chuck it away and do it right" hasn't waned one bit.


>> the real world is messy, very messy.

> Cannot disagree -but- it's nowhere near as messy as the people (often those who are to blame) defending the byzantine software stacks using that argument.

This. As a still relatively young developer, I can almost guarantee you that the initial reaction of "nuke it from orbit!!!" doesn't come from a couple of minor abstraction problems. You get this reaction when every second bug you try to fix ends in a trip to Klendathu.


>> reality is going to temper their enthusiasm after a decade or so.

>I've been doing this professionally for two decades and my enthusiasm for "chuck it away and do it right" hasn't waned one bit.

I think it's probably somewhere in between the two extremes. I think you should have good unit tests and then refactor parts of your code where you see better generalities, or where basic code cleanliness was disregarded before.

But throwing all of it away is rarely possible without endangering the profitability of the company for a while.


> throwing all of it away is rarely possible without endangering the profitability of the company

Well, obviously I don't mean "turn it off and wait for the new system to be finished". You build the new one whilst the old one is in maintenance mode and swap in new bits as and when you can.

For example, at current $WORK, the backoffice system is a horror show of overcomplex PHP that is riddled with bugs and no-one really understands how it all works. Replacing that would be a huge boon both humanly and monetarily to the company because CS use it heavily every day.


> You build the new one whilst the old one is in maintenance mode and swap in new bits as and when you can.

Continuous incremental improvement of a production system may, over time, have the same net effect as an idealized big-bang replacement, but it's a very different process. It's usually what people who say you should never do a ground-up replacement prefer instead, because actual big-bang replacements, unlike idealized ones, are usually a shitstorm. The reason is that they are usually done to the kind of system you describe, an overcomplicated key system with inadequate documentation or institutional memory, and they are done instead of trying to get a firm grasp on each component of the existing system before replacing it. So they end up, at best, being exceedingly well designed but overlooking key elements of business function that were discovered and implemented, but never durably documented, in the original system.


>I've managed developers for the same software product (accounting system) over the last 15+ years.

You will probably appreciate this little piece of anecdata. Last July there was a change in some fiscal laws in Italy, so that a number of firms had to make a certain tax payment by the 31st of July, BUT the change was communicated/published almost "last minute" (as often happens), and a software house had to update their accounting program on a very tight schedule. The payment code (on the government side) was the same as that of another payment already known to be due on 31/07/2017, so in order to distinguish the two payments the programmers (virtually) moved one date back a day in the database, so that two payments resulted, one on the 30th and one on the 31st.

This (intentionally) "queer" behaviour was not explained (or not explained well enough) to the users.

Most users "trusted" the program and everything went well for them; those that noticed the anomaly managed to "force" both payments onto the same date, and this resulted in a "single" payment (instead of the two separate ones required), thus messing up the whole thing.
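
(A toy reconstruction of the workaround as described; the table layout and code values are invented for illustration:)

    # Two payments share the same government payment code, so the
    # vendor (virtually) shifted one due date back a day to keep the
    # rows distinct. All names and values here are invented.
    payments = [
        {"gov_code": "1234", "due": "2017-07-30", "note": "new levy, really due 31/07"},
        {"gov_code": "1234", "due": "2017-07-31", "note": "pre-existing payment"},
    ]
    # Users who "corrected" the first date back to 2017-07-31 collapsed
    # the two rows into what the system then treated as one payment.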


This is the future of learned helplessness. People that see problems and inconsistencies in software systems and try to work around them will have worse outcomes than those that just blindly follow the workflow assigned to them.

Seems like one needs to be controlling the spec and writing the code not to get stuck in this trap.


>People that see problems and inconsistencies in software systems and try to work around them will have worse outcomes than those that just blindly follow the workflow assigned to them.

Well, not always.

That applies ONLY to those that find such problems or inconsistencies and work around them in an incorrect manner.

And this brings us back right to Chesterton's fence:

https://en.wikipedia.org/wiki/Wikipedia:Chesterton's_fence

which can be invoked both when users do silly things and when programmers do them.


Never heard of Chesterton's fence before, and it's quite interesting, but I don't see this as applicable here. This was a design error on the part of the programmers, because the software didn't make it clear why it was "misbehaving".


> This was a design error on the part of the programmers, because the software didn't make it clear why it was "misbehaving"

Partly yes, but only partly, as they did publish the "peculiar" workaround that they (the programmers) used in the update (though without giving it the prominence it deserved), but of course NO user actually reads the boring text that comes with updates.

In this peculiar case the non-reading users were divided into three groups:

1) non-reading users that didn't even notice the anomaly

2) non-reading users that noticed the anomaly and either read the accompanying text or called to ask why the anomaly presented itself, and were given a reasonable explanation.

3) non-reading users that noticed the anomaly, but, assuming that the programmers were a bunch of good-for-nothing morons [1], forced or "overruled" the settings without asking anything (and of course without even asking themselves if they were possibly causing an issue later on)

Of course both #1 and #2 were fine, with the difference that #1 were simply lucky, whilst #2 "deserved" their success, as they had the curiosity to delve deeper into the issue.

The #3 are the main reason why I posted the Chesterton's Fence reference, but it is applicable more generally.

Now that they were all (users and programmers, in different ways) bitten by the issue, the programmers will most probably add a field to the database in the next release, so that you can have more than one payment with the same government code on the same day.

Still, I can bet that in a few years the new kid on the block (among the programmers) will notice that there is a field in the database that is always set to 1; the memory of why that field was added will be lost, and he will probably remove it, saying "Ha! I optimized the database by removing an unneeded field" and falling into the same trap.

[1] BTW, not that that opinion was completely wrong: though I am not a programmer (nor an accountant), I had to deal with some of these guys to import some inventory data coming from another accounting program, and it was a nightmare.


Clients don't care about quality, they care about features; so management doesn't care about delivering quality, they care about delivering features; so engineers aren't allocated enough time to care about quality, only features. And then it's still the engineers' fault when everything explodes, or when it takes 10x longer than it should to rework or expand a feature later.

Basically it comes down to management that is willing and able to tell a client no, or to convince the client to budget to do things right - not management like my current company's, which has in the past threatened to disallow even unit-test writing and code review as slowing the process down too much.

"Code quality is time and money you're saving your future self" is an argument that only makes sense to people who write code, apparently, until you actually lose a client to avoidable problems.


> Clients don't care about quality, they care about features, so management doesn't care about delivering quality

Up to a point: as soon as you start losing market share to competitors because your customers complain that your application crashes every other day, the focus suddenly switches back to quality. (Until the next cycle.)

What is frustrating as an engineer is to release something you know you'll have to fix in 6 months after a customer's complaint. But maybe from a sales point of view that was the right decision.


In Peopleware, Tom DeMarco thinks it's because a business's customers will tolerate lower quality software, so there are diminishing marginal returns to revenue as investment in software quality continues. He predicts that while this management style works wonderfully for the bottom line in the short run, it causes long-term ailments such as team dissatisfaction, overly complex architectures, and other issues that may be more expensive overall.

Quality and security become increasingly important as we depend even more on software systems for essential functions such as cars, power grid management, agriculture, etc. Unfortunately, this situation is all too similar to how many opt for the emergency room over preventative care.

We should also consider that many businesses wouldn't exist if not for lax quality requirements for software products. How many product V1s are chock full of bugs and exploits, and to what extent is that okay? What about open source? As usual, it's pretty complicated.


To take this a level higher: managers are often rewarded on quarterly or annual targets, not long-term targets. When I managed a large software team, it was very difficult to budget time/money for quality; if I did, peers would swoop in and try to take my position under the guise of "he's overspending for the task." It takes good, strategic prioritization all the way up the management chain to build quality.


Because software is seen as a cost center rather than as a means of production. If you were to factor quality in right at the start, most projects would never be approved, and so we try to 'fix' the problem at two minutes to twelve.

This is also a large factor in why software projects tend to overrun both in terms of time and budget (the other large factor is bad project management).


Code Quality (CQ) is an ideal.

Your feeling about the magnitude of the issue (most companies) is wrong. Programmers discuss CQ principles among themselves. However, the discussion becomes more challenging with management.

Management is responsible for accomplishing business objectives. Development and testing timeframes are at odds with business objectives. Adopting CQ delays product. If you're going to delay product, but the product will be beautifully efficient, idiomatic, elegant and possibly a little faster than the first pre-CQ version, you're not going to win an argument in an organizational context where delivery timelines matter.

Time and effort are not a programmer's friend in a task-driven organizational setting. Fortunately, real-time linters tell programmers not only about material errors but also present stylistic warnings (such as Python pep8 linters). Further, static analysis tools such as QuantifiedCode [1] conduct an in-depth analysis of code and suggest stylistic improvements. I suspect that this is an area where machine learning will advance Code Quality further. Maybe, just as there are language servers, there will be code quality servers.
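
(For instance, pycodestyle, the renamed pep8 tool, can be driven from a few lines of Python; the file name below is hypothetical:)

    # Run a pep8-style check programmatically with the pycodestyle
    # library; "app.py" stands in for whatever file you want checked.
    import pycodestyle

    style = pycodestyle.StyleGuide(max_line_length=100)
    report = style.check_files(["app.py"])
    print("{} style problems found".format(report.total_errors))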

It is worth noting that the QuantifiedCode entity shuttered in the summer of 2017. It's not clear why the company closed: did they fail to monetize automated code review? Were they acquired?

In conclusion, the more you can automate code quality-related improvements, the more likely you can promote your Code Quality ideals.

[1] https://github.com/quantifiedcode/quantifiedcode


Minimum Viable Product

In fact these days, people kinda know what they want, but really only understand what they want when they have something in front of them. I'd go so far as to suggest you can over-engineer a solution too easily and then spend a lot of time refactoring it. YMMV


A thought: The slightest design flaw or manufacturing flaw in a microprocessor or memory chip can reduce its value to zero. Generally, flaws in software are easier to correct and don't have the same catastrophic effect on value.


With respect to quality, some software is so fundamental and widely used that flaws are noticed immediately and fixed. Examples include microcode, firmware, operating system kernels, compilers, and embedded databases such as SQLite. Flaws in low-level software are generally much more destructive to value than flaws in high-level application software.


I've fixed bugs in postgres that were more than 10 years old...


I work in consulting, and in a lot of cases, while we're well-intentioned about delivering quality software, we're forced to fight with clients over time, budget, scope creep, etc.

Sadly, things sometimes get rushed out the door and it's not until some time has passed that they realise the enormous tech debt they've incurred.


In my experience management sets the budget and timeline before anybody has adequately evaluated the work that needs to be done. Typically there isn't enough time or money to support quality engineering practices.


How do you define software quality? Just off the top of my head, I can think of:

* Defect rate (does it do what it's meant to do?)

* Does it do what the user wants? (not always the same as the above...)

* Is it pleasant and efficient to use (definitely not the same as either of the above).

* Is it developed in a way the management are comfortable with? (which often seems to lean towards sufficiently "under control", replaceable developers).

How do you balance these? The answer will be quite different depending on whether you're landing on Mars or writing a free-to-play game.


I think there is room for some balance. If not doing what the user wants is not part of the defect rate, then maybe you are not defining defects properly. According to the principle of separation of concerns, management's other issues are best considered as non-quality constraints on the overall process. That leaves the user experience; but what user does not want a product that is pleasant and efficient to use, all else being equal? (Beginning, casual and experienced users have different ideas of what this means, though, so it is probably better to treat it as a separate concern.)


Feature creep and timelines are in my opinion the root cause of lower quality software. Most engineers sit down planning to develop clean efficient code but that generally takes more time than they are allocated. As the deadline draws closer new features and edge cases are often added in by either management or the end user.

Often these new features were not accounted for in the original design and in order to fit them into the system in a nice and clean manner, a large rewrite of certain modules and/or database tables is required. Due to time constraints and developer fatigue this is not possible and the mindset of "Just get it done" sinks in. This is no one's fault just a harsh reality of writing software where timelines and profits are a factor.

No one wants to be the guy that hard coded several edge cases into an otherwise clean module but it happens and it happens often as "Just get it done" takes hold. I think a good developer just accepts this, and makes sure to do a good job commenting their code. This especially happens during customer acceptance testing. Customer brings up a feature that they never mentioned before, they want it now. Management and/or your bank account says just give it to them. You hard code it in.

The circle of life


I don't understand why people have a problem with hard-coding features. The argument is that it creates more work later. But this is a fallacy as the work "later" is not guaranteed to be necessary or requested. The idea that all software has to be abstracted, configurable, and future-proof to be "good" is just wrong. We hard code features all the time on my project. The earth keeps on spinning, the company makes money, and the code is usually removable with a single 'git revert' when the time comes to clean up. That time may never come, which is fine. That is the natural way. Nature has yet to do 'git revert male nipples' or 'git revert human appendix'. Both are dirty hacks left in by mother nature.
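
(In practice that can be as blunt as the following hypothetical example; if the customer churns, the branch disappears with one revert:)

    # A deliberately hard-coded edge case; customer name and terms
    # are invented for illustration.
    def invoice_footer(customer_id):
        if customer_id == "acme-corp":  # hard-coded, one revert removes it
            return "Payment due within 60 days."
        return "Payment due within 30 days."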


Your comment reminds me of the time I spent reading Chuck Moore's writings and learning Forth. One of the main things he endorses is keeping it simple. Don't do things because you may need them: you don't need it now, so don't do it now. Factor your code later, or rewrite it if you have to. But there's little point implementing something that may not be useful in the future (or whose complexity costs aren't amortized over enough uses later).

I can abstract out some feature into a handful of classes, and then use that in one place in my code. Was it useful? Probably not. Now I use it in 20 places. Was it useful? Almost certainly. But if I don't have multiple places to use the abstraction, it probably isn't worth developing today.


For me it has more to do with the idealized way to develop a project. You have a pretty plan and design, and want to see them executed. Unfortunately that rarely happens in real life. For example, on my current project I wanted a very database-driven design: essentially, just be able to update records in the database and everything on the site changes accordingly.

Last minute changes and exceptions to the rule made that impossible with the given deadline. You are absolutely correct in that the end result is completely fine and the world keeps spinning.

There are no real-world consequences aside from needing to remember that there are hard-coded exceptions in the code. It's just the hunt for the elusive "perfect execution and design."


If your code is a mess of "this user has this feature on", "all users in this state have this other feature off", etc., it's hard to revert, especially when that mess is everywhere from years of changes. Reverting otherwise-working features upsets customers. So now you have to write a rules engine to replace all those hard-coded use cases, except that's not in the budget. Continuing to layer on workarounds also becomes more tricky. If you have to test as 1000 different users to hit all the edge cases, you're going to end up not testing, which results in more bugs, which begets more hacks. It's a slippery slope.
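
(The kind of minimal, data-driven rules engine this tends to grow toward; a hypothetical sketch where the first matching rule wins, with invented feature names, IDs and states:)

    # Minimal rules engine replacing scattered per-user / per-state
    # conditionals. All names and values here are invented.
    RULES = [
        {"feature": "new_checkout", "user_ids": {42, 99}, "states": set(), "on": True},
        {"feature": "new_checkout", "user_ids": set(), "states": {"CA"}, "on": False},
    ]

    def feature_enabled(feature, user_id, state, default=False):
        for rule in RULES:
            if rule["feature"] != feature:
                continue
            if user_id in rule["user_ids"] or state in rule["states"]:
                return rule["on"]
        return default

    print(feature_enabled("new_checkout", 42, "NY"))  # True (user rule)
    print(feature_enabled("new_checkout", 7, "CA"))   # False (state rule)
    print(feature_enabled("new_checkout", 7, "NY"))   # False (default)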


You don't make money building good or bad software. You make money with maintenance contracts.

That's what my CS 101 professor told us on our first day at the university. Heresy! you might think. But it is true.

Making software is a one-off cost for the customer. Maintenance is recurring income for you.

Custom software is Capex (capital expenditure: investment). Maintenance is Opex (operational expenditure, aka expenses). All the Capex you spend will be on your books for several years (usually 3-5). Opex is for the year only.
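
(Illustrative numbers, not from the comment: a 300k custom build capitalized over three years shows up as 100k/year of depreciation on the customer's books, while a 100k/year maintenance contract is simply expensed in the year it is paid.)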

A lot of customers will happily pay high maintenance fees if you can convert those fees into development of new features (in cases where there is little or no debugging/improvement to do).


"If you don't ship it you can't sell it."

Now tell that to any sales team and they'll tell you to hold their beer. The problem is, now they've sold your vaporware and you _really_ need to ship it, now. Like, yesterday.


Anything that you're not explicitly optimising for, you're optimising against.

And companies are built to optimise other stuff.


They must believe that the consequences for making it an afterthought are not severe. Depending on what they are building they might be right or wrong about that.

I find that in bigger teams with more complex projects that are sold to the customers it’s not usually an afterthought, they take quality seriously usually, at least relatively. I can understand why it’s an afterthought for a small startup trying to see if they can even get customers but they have to be careful too, if they do get traction then quality should be taken very seriously. I can also understand why it’s an afterthought for some internal tooling, the consequences aren’t severe.

I run a cloud based automated test reporting app called Tesults (https://www.tesults.com) and I’ve worked as a software development engineer in test at large tech and game companies. Based on my experience reporting is definitely an afterthought and this makes testing in general an afterthought sometimes. You need a way to keep on top of failing tests and have some measure of the problems that are being discovered. Especially when it comes to the modern agile style way of working where constant check-ins are made.

Another issue is that a lot of testing now (particularly for performance and automation in general) requires testers to be engineers and in some organizations this still isn’t understood. The manual testers are still required of course for UI/UX testing but there is definitely a shift in this area and in games at least it’s taken a long time to understand that.


Unfortunately, most business people are in a hurry to release a new feature or version.

There is a misconception that code quality comes at the price of development time, but in my experience I have released the final product much faster when I use unit testing, incremental releases, code reviews and other techniques that help keep code quality high.

By not using these techniques you get a faster initial release, but a much slower final, bug-free release.


Human nature.

Quality in general has always been an afterthought by many if not most people and companies. Always will be.

Some people are just not quality people. But when they find their way into important corporate positions their "leadership" effectively puts a major obstacle in front of any inherently higher-quality operators or teams underneath, restricting the flow of true available quality towards the clients, customers, and shareholders that could otherwise benefit.

Probably why I named my first company Quality, just like so many other companies in so many fields of operation, because it's not an afterthought to me.

Combined, all of us "Quality"-oriented companies who try to choose this as a differentiator still make up a small minority and are under continuous pressure to compromise on this most elusive feature, sometimes out of necessity, to compete or even survive in situations where higher quality is not fully valued. More often there is downward pressure when lower quality becomes overvalued, a trend we see growing in the 21st century.

It's tough for so many people to tell the difference between low and high quality anyway, especially for those where it's not even an afterthought.


Businesses* don't care about quality until it's too late.

They pay lip service to it, sure, but when it comes down to it most don't care enough until it actually starts to affect the bottom line. And longer, more expensive development processes are already affecting the bottom line, so come on, get it out the door!

Plus a lot of engineers see quality considerations as a drag. If they can find a home in a company that doesn't want all this "extra" stuff done then, well, this is what you get.

There are notable counter-examples in companies - Big Blue has a huge focus on quality, and their teams put a lot of effort into it (note I am saying nothing about usability here...) which is possible because a lot of stuff there moves slowly anyway. It's also because IBM are very, very good at measuring their cashflows and costs and have figured out just how much lack of quality can impact their bottom line.

There are also many individual engineers in smaller companies who put quality up front, and try their damnedest to push it through even where the business may not really care.

( * mostly SMEs are terrible for this, IMHO, though one or two large corporates I've worked with haven't been that great either)


Many software companies fail outright. Other enduring bad software delivery outfits enjoy specific types of subsidies or captive pricing powers until they fail. Those are different cases.

Outright unsubsidized failures that never deliver any product to customers are mostly functions of communicative challenges. The opacity of costing for simple speed/space or protocol adherence engineering decisions is easy to underestimate. Business actors will nod and agree to anything that sounds good and give wrong signals. Others wave hands frantically around every buzzword and easy windfall demanding features. Many purchase agents subject to hype have no idea what they are buying and fetishize wasted software LOE. Training is routinely shortchanged for industries with high turnover.

Accept failure and then low quality as the norm. Then seek team members, suppliers, channel partners and customers around new or rekindled software with needs to focus intensively for 2 months and then 24 months. Most enduring software requires stakeholders more than customers. Leave other endeavors up to researchers and understand what capital resources amplify or do not amplify.


If software is created within a profit oriented organisation, then there is a rush to get stuff out the door. Quality is seen as an expensive intangible. Managers are focused on time to market - the shorter the better and costs - the lower the better.

When software is created without a profit motive, it is the coolness of the idea that motivates the creators. Focusing on quality would only slow down the "creative" process.


Management incentives and profitability aren't aligned with software quality... until they are (see: Equifax), and perceived software quality is given priority over actual software quality. Also, if your product is a monopoly or relatively monopolistic, then software quality literally doesn't matter because customers don't have a choice... until they do (the very definition of disruption).


Design by committee vs. design by vision. It takes guts to have strong opinions and stick to them.

At almost every company, the primary background motivation is not getting fired. Virtually no one even aspires to great work let alone takes the risk to have a vision.

This is pretty rational. There's just no incentive to risk your neck pushing for quality when you'll just end up working a lot harder for little reward.


Software quality is one of my main professional areas of interest.

I agree with the general sentiment. Things have improved in recent years though, in part on account of movements like software craftsmanship, habits like clean code, and also because software is becoming more easily and rapidly testable thanks to better testing frameworks.

Testing frameworks are only one part of the equation. As you correctly stated, software quality starts at the beginning, i.e. with the requirements, or rather even earlier, when defining your values and expectations for your project and the software you create for it. There's no shortage of tools for this aspect of software quality either. Those are less rigorous though, and emphasise communication rather than true/false (test succeeded/failed) outcomes. Communication is a vital component of good software quality, but it has a way of becoming an end in itself rather than a means, e.g. in the form of pointless, cargo cult meetings.

Furthermore, in order to accurately measure software development outcomes it is essential to have clearly verifiable acceptance criteria. Defining those can be time-consuming.
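
(One way to make a criterion verifiable is to write it directly as an automated test. A hypothetical example, with an invented criterion, "a voucher can be redeemed once, and only once", and invented classes:)

    # An acceptance criterion made executable (pytest style); every
    # name and figure here is invented for illustration.
    import pytest

    class VoucherAlreadyUsed(Exception):
        pass

    class Account:
        def __init__(self):
            self.balance = 0
            self.used = set()

        def redeem(self, voucher, amount=10):
            if voucher in self.used:
                raise VoucherAlreadyUsed(voucher)
            self.used.add(voucher)
            self.balance += amount

    def test_voucher_redeemed_only_once():
        account = Account()
        account.redeem("WELCOME10")
        assert account.balance == 10
        with pytest.raises(VoucherAlreadyUsed):
            account.redeem("WELCOME10")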

Simply put, software quality requires investment, both in terms of time and money. Not investing in software quality means taking on massive technical debt. It's unfortunately still a common practice because often a lack in quality will only come back to bite you after some time. It's relatively easy to temporarily cover up and paper over quality deficiencies by implementing workarounds or simply by putting in extra hours. Those temporary measures aren't sustainable in the long run though. They just lead to more technical and organisational debt. Ultimately that debt can become unmanageable.

Much like in the boiling frog parable that increase in debt happens very gradually so it's often not perceived as a problem until it's (almost) too late.


If you think about it, software quality is not something that is ever visible or measurable from the outside.

Theoretically, you could write software by just having an incredibly long list of test cases and a random string generator.

The quality of that code would probably be terrible, but it would still work as long as your test cases are restrictive enough.
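
(As a toy demonstration of exactly that: random search over arithmetic expressions against a single "test case". Purely a thought experiment; eval on random strings is nothing you'd ever ship.)

    # Toy "random string generator + test cases" programming: keep
    # generating candidates until the test suite passes.
    import random

    ALPHABET = "0123456789+-*"

    def passes_tests(expr):
        try:
            return eval(expr) == 42  # the entire "test suite"
        except Exception:
            return False  # not even a valid program

    random.seed(0)
    for attempt in range(10**6):
        candidate = "".join(random.choice(ALPHABET) for _ in range(5))
        if passes_tests(candidate):
            print("attempt", attempt + 1, ":", candidate)
            break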


> If you think about it, software quality is not something that is ever visible or measurable from the outside.

If you have mistaken issues of style for issues of quality, then that might seem to be the case. True quality in software is measurable primarily in the defect rate, and secondarily in the amount of effort needed to enhance it.

> Theoretically, you could write software by just having an incredibly long list of test cases and a random string generator.

Putting aside the time and concurrency issues, the quality would be determined by the correctness and thoroughness of your test cases.


You measure current code quality by your future development velocity. That is, low quality in your current code base makes it harder to produce the next feature, and the feature after that.

Unfortunately, you learn that some time after you write the poor-quality code...


The problem is, money lost through poor software quality is not (and maybe cannot be) calculated the way that, for example, development time can be measured. The arguments for increasing software quality, for example by refactoring parts of a codebase, are therefore often not fact-based but mostly vague. This is often not enough to convince the decision-makers to invest in good software quality.

Apart from that some people might deliberately choose to not care about good software quality. But in my opinion this is often a sign of missing/poor education/experience.


Adding to other answers, I have noticed this behavior is very common in the startup scene, where many startups are not actually building a brand for a lifetime, but quickly adding a plethora of features so that they can be sold at a good price.


The difference in money, and perhaps more importantly, time, between great software and good-enough software is large enough that most companies will require good-enough.

There are two big trade-offs in time alone: missing the chance to be first to market (mongo vs rethink comes to mind, albeit not quite accurate), and the need to get feedback early and often enough to pivot if the idea isn't quite right.

Then the lava layers come: not enough time to rewrite everything now that the domain is better understood, the prototype becomes the foundation, and cruft builds up.


If you embrace the idea that software development is a process of figuring out what to do, then there is just no place for quality at the start, only for throwaway prototypes. And you have to avoid any rigidity at that stage, as it only slows development down. Quality should be introduced later, once you know for sure you will need a production-quality implementation of something. Maybe check out what Fred Brooks wrote about this.


I wrote an article detailing why companies acquire technical debt/bad software. It's typically sacrificing quality for short term gains. https://medium.freecodecamp.org/what-is-technical-debt-and-w...


Any organization that has a "normal" modern development process (code reviews, a reasonable test suite, continuous builds, static analysis, etc.) has a decent focus on quality. That doesn't mean quality has the focus it needs within the organization, but at least it's then not secondary or left for late in the process.


If it's an MVP, let it loose... poor quality can also serve as a nice auto-obsolescence, accelerating the upgrade cycle.


>I have the feeling that most companies don't hold discussions about what software quality means and how it should be measured.

I have instead the feeling that the matter is endlessly talked about in meetings, but no one actually puts into practice the "good intentions" discussed (for one reason or another).


My feeling is that this type of secondary thought will be seen more often in places where management has less experience with programming. Also, it will happen more in places with less process in place.

If you cannot manage the complexity of the software developed, you increase the risk of creating lower quality software.


When it comes to controlling costs, reducing quality is one of the few levers we have. When it comes to developing software the trade-off we're making is usually not whether we make low or high-quality software, but whether we make low quality or no software.


In my experience, it seems like people who don't have much hands-on experience are leading the show, and don't realize the value of well-designed, clean code in minimizing bugs and decreasing development and maintenance costs in the future.


A lot of views here are based on a capitalist rationale. Though true, I think there are many companies out there that value certain types of culture, and some of those cultures include good-quality software design.

But this is certainly not the majority.


Time is money, and fast, cheap, and good often beats slow, expensive, and great in business.


Quality has an extremely broad definition. Everyone cares about quality, just not perfection. At some point, your defect density is acceptable.


Because _your_ code is fscked, and _your_ code is fscked, in fact all yours codez are facked!!

:points madly around the room:


Did you ever hear "I love to code" or "Writing code is so much fun"?

Software is not made by grownups. And for the most part the development is not managed by grownups. The problem is that so many can get away with childish behaviour.


This doesn't quite say, but seems to imply, that if all software were written by people punching in and out every day and having no fun whatsoever, software quality would improve.

I'm unconvinced by this argument. Maybe compliance with "best practices" would improve a bit, but I see very little evidence that the results would be good for the average user.

I'm pretty certain, for example, Rich Hickey was having fun at least some of the time when building Clojure -- his belief in it is palpable whenever I've seen one of his talks -- yet it's an incredibly well-thought-out, solid piece of software.

Lots of classic games were passion projects of an individual or a very small and close-knit team. Whether that's "quality" I guess depends on perspective. For me, a lot of classic games very definitely were, though (and a bug or two doesn't necessarily detract from the overall experience).


"Grownups" can't enjoy their work? Damn.


Here are just some causes (IMHO) of poor quality software:

1. Creative and intelligent people are forced into an industrialized process of distributed micromanagement (Scrum) which stifles their ability to create a truly wonderful product and instead leaves them feeling lost and producing their worst work.

2. Idiots are running the show.

3. Quality is seen by the above-mentioned idiots as a threat to the deadline, whereas the reality is that low quality spreads like cancer and kills projects before they can deliver, unless a series of miracles occurs, in which case they deliver late, over budget, descoped, and full of bugs.

4. Implementing non-functional requirements (i.e. the environment in which features exist) doesn’t visibly demonstrate progress to stakeholders, so this activity is deprioritised in favour of building features so there will be something to demo at the next showcase and the project manager can keep his job. At some point, as developers try to implement non-functional requirements, they’re faced with features that haven’t implemented the non-functional parts and features that have to be rebuilt after the non-functional requirements are implemented. After some time, the developers “come clean” to the project manager about incomplete features when, in fact, those features couldn’t be completed at the time because the environment in which they are supposed to exist barely existed itself.



