
> evidently the market won't pay for something qualitatively better

Your assumption here is that quality is more expensive. In my experience, it's substantially cheaper. I've seen "enterprise" shops take reasonably simple apps and blow them up into things requiring large teams and enormous amounts of hardware. And then spend 70% of their time debugging, because they're going too fast to do anything right. This is endemic; some friends of mine do ops consulting, and even at the heart of the tech boom they see clusterfuck after clusterfuck. The apps all mostly work, or the companies would be out of business. But we can do better than just failing to fail.

As an analogy, look at the US car industry in the 70s and 80s. They were producing terrible stuff. Toyota came along and demonstrated you could make better cars for less money. The same opportunity is available here in software. Consider, e.g., WhatsApp, which was serving nearly a billion people on 8 platforms with a team of 50 engineers.

It's an especially appropriate analogy in that a lot of the most effective process improvements come from applying principles derived from the Toyota Production System to software. See, e.g., Mary Poppendieck's work.

> But I think the argument that most software is bad is hyperbole.

Only if you define bad to mean "worse than average". But I mean it quite literally.

Bug rates, development cost, development cadence, WIP, and project failure rates are all absurdly high compared to well-run projects. This has been true for decades. By "bad" I mean "well below what teams could achieve if they applied best practices".




> Your assumption here is that quality is more expensive.

To some extent, I think it is. More specifically, I think there is a trade-off between spending more up-front to prevent defects and spending less later to deal with them. That trade-off dominates the economics up to a certain point; beyond it, external costs become the dominant factor.
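
A crude sketch of the shape I have in mind (the numbers are entirely made up, and external failure costs are deliberately left out, since they're exactly what starts to dominate past the minimum):

    # Hypothetical cost model: each extra unit of up-front prevention effort
    # removes fewer defects than the last, so total cost falls, bottoms out,
    # and then rises again. Past the minimum, more prevention only pays off
    # if a failure carries a large external cost that isn't counted here.
    def total_cost(prevention_units, fix_cost_per_defect=10, effort_cost=20):
        baseline_defects = 100
        remaining = baseline_defects * (0.5 ** prevention_units)  # diminishing returns
        return prevention_units * effort_cost + remaining * fix_cost_per_defect

    for units in range(8):
        print(units, round(total_cost(units)))  # falls to a minimum, then climbs again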

If you have a project that is made of poorly designed spaghetti, doesn't have any sort of serious test or review processes, and is kept afloat by little more than a few hero developers, then of course you're likely to have a relatively high level of defects. Even modest improvements in the development process will likely have a very good ROI in this case. In this sort of scenario I would agree with you that improving quality may be substantially cheaper than neglecting it, because relatively easy changes in development process would probably pay for themselves in reduced maintenance costs even before considering external factors.

However, the kinds of changes that bring really dramatic improvements in quality -- the kind of thing we might hope you would use for a medical device or a safety-critical transport control system -- really can significantly increase development costs. Assuming you could fix the easy problems in other ways, by the time you reach for these methods you're probably already chasing a relatively small number of remaining defects. To get a big jump in quality at this point, you might need to employ very different development and/or engineering techniques, such as formal verification stages, redundant systems, or much more structured and demanding review processes, and you might need to do this all the way down your tool chain in both software and hardware terms. These measures tend to require more skills, time and/or resources, and all of those are expensive.

Now, we have to be clear on what we mean by "more expensive". So far, I've mainly been talking about the development costs here, what it takes to write and maintain the software. The point of the extreme quality approaches is usually that failures of the system may have some other cost -- in human life, perhaps, or in delaying something important by a very long time -- that is not acceptable, and so extra investment in avoiding that external cost may be justified even though it makes the development itself much more expensive.

In my experience, the development costs associated with those more extreme approaches ("extreme" is a somewhat loaded term, but I can't immediately think of a better word and I hope you understand what I mean) will be prohibitive today for non-critical software, the kind of system that doesn't have a catastrophic failure case with disproportionate external costs to consider. This is what I mean when I say the market won't pay for something qualitatively better: most people won't pay $20,000 for a word processor that essentially never crashes, never corrupts data, and never has minor incompatibilities when loading files created by its previous version, when they can pay $200 for one that basically does its job but might crash every couple of months and lose the five minutes of work done since the last auto-save.

By "bad" I mean "well below what teams could achieve if they applied best practices".

OK, so if we also restrict "best practices" to "things that improve quality at any cost" then I would agree with you that most software is bad by your definition.

However, if best practices also include things like being commercially viable, then I no longer agree that most software is bad by your definition. There is certainly plenty of bad software around, but there is also plenty of software developed in ways that already avoid silly defects reasonably successfully. For the latter, I come back to my argument above: the returns diminish as the remaining defects become rarer, and reducing them significantly requires more fundamental, and relatively expensive, changes in development strategy. With the tools and techniques available today, most projects can't do that and still remain commercially viable. I don't think it's fair to say those projects aren't well-run just because they chose a strategy the market would accept.


> In my experience [...] will be prohibitive today for non-critical software

And how much time have you spent practicing TDD? Have you worked on a project with 95%+ unit test coverage? Have you worked with a comprehensive test suite that runs in under 30 seconds? Have you worked in a team that practices pair programming and collective code ownership? Have you worked on a team that does continuous deployment with at least one deployment per developer per day? Have you worked on any team that has bug rates below one per developer per month?
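
To be concrete about the sort of thing I mean by fast tests written first, here's a minimal sketch in Python (the Cart example is made up; the point is that a test like this gets written before the code and runs in milliseconds, so thousands of them still finish in seconds):

    # test_cart.py -- a TDD-style micro-test, written before the code it exercises.
    import unittest

    class Cart:
        """Just enough implementation to make the test below pass."""
        def __init__(self):
            self._items = []

        def add(self, name, price_cents):
            self._items.append((name, price_cents))

        def total_cents(self):
            return sum(price for _, price in self._items)

    class CartTest(unittest.TestCase):
        def test_total_is_sum_of_item_prices(self):
            cart = Cart()
            cart.add("apple", 50)
            cart.add("bread", 250)
            self.assertEqual(cart.total_cents(), 300)

    if __name__ == "__main__":
        unittest.main()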

Other people have had different experiences than you. I am one of them. I'm telling you that it's perfectly possible to do an order of magnitude better on bug rates than most teams and get a cost decrease. Plenty of other people will tell you the same. People have been writing about experiences like this for 15 years.

At this point, I have given up expecting J Random Commenter to believe me; normalization of deviance means that most people cannot (or will not, I can't tell which) even conceive that things could be better. It's the same way that American car companies literally could not understand how Japanese manufacturers were producing radically better products at substantially lower costs. They still generally can't, because to do so would mean admitting that they've been screwing up for decades.

So if you'd like to keep your current limitations, carry on arguing for them. But if you would like to see whether something can be different, try out something like Extreme Programming.


It's regrettable how often people assume opinions different to their own must have been formed out of ignorance.

Some of the code I've written has to run in places where the cost of failure can be very high (not normally human-life high, but certainly economically prohibitive) and where the process to deploy an update if a bug does need fixing can be measured in months and carries significant costs of its own.

As an example, I wrote a program a while back that implemented somewhat complicated data processing algorithms, took a few months to develop, and has been in service for several years now. To my knowledge it has never had a bug reported against it in production, apart from a small number of cases where the code met the spec but the spec turned out to be wrong.

That project was developed and tested using a variety of techniques. A sensible automated test suite was one of them. It was also built on rigorously proven mathematical foundations, among other things.
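
To give a flavour of what that combination can look like, here's a generic sketch (not the actual project; the interval-merging function is just an illustrative stand-in) of a property-based test using the 'hypothesis' library, which checks an invariant the algorithm is supposed to guarantee for all inputs rather than a handful of hand-picked examples:

    # A generic sketch: assert an algebraic property of the output
    # instead of checking individual examples.
    from hypothesis import given, strategies as st

    def merge_intervals(intervals):
        """Merge overlapping (start, end) intervals -- illustrative stand-in."""
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    @given(st.lists(st.tuples(st.integers(), st.integers())
                      .map(lambda p: (min(p), max(p)))))
    def test_merged_intervals_are_sorted_and_disjoint(intervals):
        merged = merge_intervals(intervals)
        for (_, end1), (start2, _) in zip(merged, merged[1:]):
            assert end1 < start2  # consecutive intervals never touch or overlap

    if __name__ == "__main__":
        test_merged_intervals_are_sorted_and_disjoint()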

So yes, I do have experience with building very high quality software. I've made a significant part of my living doing it over the years, and in some cases I have single-handedly outperformed entire teams working for my clients' competitors at the same time. I do know the value of a good test suite, and a lot of other effective development techniques.

It would still be commercially unreasonable to spend the kind of time and money it took to develop a project at that level of robustness if the potential costs of failure were not so high.



