
Ask HN: Can a software ever be 100% bug free? - BlackLamb
If we have so many great devs, and people constantly working on a piece of software, why can't they produce 100% bug-free software/apps?
======
patio11
Bug free isn't a requirement for the vast majority of applications. Attaining
it is _very_ expensive, and most people/companies are not willing to pay the
price.

Probably the closest the world has ever come to bug-free code of substance is
that produced by NASA for their space shuttles, and it cost ~$1,000 _per
line_.
([http://history.nasa.gov/sts1/pages/computer.html](http://history.nasa.gov/sts1/pages/computer.html))
This would make the typical YC app cost tens or hundreds of millions just to
launch, and would put something like e.g. the operating system for your cell
phone outside of the budget of _most nations_.
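A quick back-of-envelope check of that claim (the line counts below are rough assumptions for illustration, not measured figures for any real product):

```python
# Rough cost projection at the shuttle program's ~$1,000 per line.
# The line-of-code figures are illustrative guesses only.
COST_PER_LINE = 1_000  # dollars per line, per the NASA estimate above

projects = {
    "typical startup web app": 100_000,         # assumed LOC
    "smartphone operating system": 15_000_000,  # assumed LOC
}

for name, loc in projects.items():
    print(f"{name}: ${loc * COST_PER_LINE:,}")
# A 100k-line app lands at $100M; a 15M-line OS at $15B.
```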

There are a host of other engineering tradeoffs that have to be made, too:
prioritizing cost over bugs, shipping speed over bugs, feature set over bugs,
being able to actually hire programmers over bugs [+], and so on.

[+] Many programmers would consider the type of work environment you need to
get bugs down to zero to be an oppressive place to work in. Among other
things, it will quite literally try to crush any creativity out of you,
turning you into an automaton which _exactly_ implements the specs given to
you. Those specs were dictated by Really Important People. You're not one.
Write your function. Your last function was non-compliant with rule 436 2a
subsection b. Improve your performance or we'll find someone who is not a
threat to our software quality.

------
loumf
Programs that produce an output from an input and exit can get very close. See
TeX as an example.

There is research being done on programs that can be proven to be correct and
some progress has been made on practical languages. See Idris and Coq.
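The core idea in those languages is that the compiler checks a proof alongside the code. Here is a tiny flavor of it in Lean 4 (not Idris or Coq syntax, but the same spirit; the theorem is a textbook toy, not something from this thread):

```lean
-- The type checker rejects this file unless the proof is complete,
-- so a successful compile *is* the verification.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```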

~~~
gus_massa
Yes, but remember that (Plain)TeX has very few features, so most people use
LaTeX, which is not bug free.

Moreover, it's very common to use many LaTeX packages that are slightly
incompatible, or have to be loaded in a fixed order, or don't handle accented
letters, or ...

------
rnovak
I think it can be argued that non-trivial software cannot be written to be
100% bug free in all cases and in all contexts, and I think that's a
by-product of how software runs.

We have so many layers that software has to run on-top of: You have raw
hardware, but a layer above that is the BIOS, and a layer above that is the
OS, and a layer above that are the SDKs like .NET/GTK/Ncurses/Shell, and above
that you have the layers that are built into software itself as abstractions,
and on top of that ...

If _anything_ on _any_ one of those layers changes, you have the potential of
introducing bugs. Not all interface contracts are honored by all
developers/engineers. There have absolutely been cases where changes are made
at one level and that has an effect on _everything_ that runs on top of it.

So even if you can reliably say that your program is 100% bug free for a
specific OS running on a specific set of hardware at Time T, that has the
potential to break during the next release of any of those products.

The software patio11 mentions (NASA systems) is running in a very specific
context. It's running on hardware that has a high level of formal
verification, and it's running on an RTOS or directly on the hardware, so
that nothing can interfere with the software; otherwise even the CPU
scheduler could _introduce_ bugs into your system.

Anyway, that's my view of things, I'd love for someone to tell me if I'm
wrong, since that's all based on observations I've made during my industry
experience.

------
maramono
There are methods that can be applied to software to formally prove
properties of it, so that those pieces are free of certain classes of bugs.
These are called formal methods and require heavy use of math; examples
include ASM (Abstract State Machines) and the Z notation.

There are tools that help as well, such as Alloy.

For the typical project out there, the major problem I see is that people
still rely on code coverage as the one and only metric for the quality of
their tests (if they implement tests at all). Code coverage is a lazy, flimsy,
and unreliable metric that has been subsumed by a more powerful one: mutation
score.

If devs used mutation analysis more, they would probably have much better
quality software.
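To sketch the idea (a toy, not a real tool like mutmut or PIT; the `price` function and the two hand-picked mutations are invented for illustration): mutate one operator in the code under test and see whether the suite notices ("kills") the mutant.

```python
# Minimal mutation-testing sketch: a suite with 100% line coverage can
# still miss a boundary mutant, which is exactly what mutation score
# catches and plain coverage does not.
import ast

SOURCE = """
def price(total):
    if total >= 100:
        return total * 0.9   # 10% bulk discount
    return total
"""

class Mutate(ast.NodeTransformer):
    """Apply one of two hand-picked mutations to the parsed source."""
    def __init__(self, kind):
        self.kind = kind
    def visit_Compare(self, node):
        if self.kind == "boundary":          # >= becomes >
            node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op
                        for op in node.ops]
        return node
    def visit_BinOp(self, node):
        if self.kind == "arith" and isinstance(node.op, ast.Mult):
            node.op = ast.Div()              # * becomes /
        return node

def make_mutant(kind):
    tree = ast.fix_missing_locations(Mutate(kind).visit(ast.parse(SOURCE)))
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return ns["price"]

def suite_passes(price, cases):
    return all(abs(price(x) - want) < 1e-9 for x, want in cases)

weak = [(50, 50), (200, 180.0)]               # 100% line coverage, no boundary case
strong = [(50, 50), (200, 180.0), (100, 90.0)]  # also hits the >= boundary

for name, cases in [("weak", weak), ("strong", strong)]:
    killed = sum(not suite_passes(make_mutant(k), cases)
                 for k in ("boundary", "arith"))
    print(f"{name} suite kills {killed}/2 mutants")
```

The weak suite executes every line yet kills only the arithmetic mutant; the boundary mutant survives until a test pins down `total == 100`.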

See this video: [http://confreaks.tv/videos/mwrc2014-re-thinking-regression-testing](http://confreaks.tv/videos/mwrc2014-re-thinking-regression-testing)

And here:
[https://en.m.wikipedia.org/wiki/Mutation_testing](https://en.m.wikipedia.org/wiki/Mutation_testing)

------
tr352
You can mathematically prove that a piece of code conforms to its
specification using formal verification techniques
([https://en.wikipedia.org/wiki/Formal_verification](https://en.wikipedia.org/wiki/Formal_verification)).

However, these techniques are often computationally intractable in practice.
Moreover, formal verification requires one to specify very precisely how a
piece of code should behave, as well as how the environment in which it
operates may behave. (Think of the OS, network I/O, user interaction, etc.)
This is another source of often prohibitive complexity.
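Short of a full proof, you can get the flavor of "specify precisely, then check every case" with bounded exhaustive checking (the function, spec, and bound below are made up for illustration; this is not true formal verification, which covers _all_ inputs):

```python
# Exhaustive bounded checking: a poor man's cousin of formal
# verification. Instead of a proof over all inputs, we check the
# specification over every input in a finite domain.

def midpoint(a: int, b: int) -> int:
    """Overflow-safe midpoint (the naive (a + b) // 2 overflows in C)."""
    return a + (b - a) // 2

def spec_holds(a, b):
    m = midpoint(a, b)
    # Specification: for a <= b, the result lies between a and b and is
    # within one half of the true mathematical midpoint.
    return a <= m <= b and abs(m - (a + b) / 2) <= 0.5

BOUND = 100
assert all(spec_holds(a, b)
           for a in range(-BOUND, BOUND + 1)
           for b in range(a, BOUND + 1))
print(f"spec holds for all pairs with -{BOUND} <= a <= b <= {BOUND}")
```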

~~~
AnimalMuppet
And then you have the problem of proving that the specification is bug free...

------
insoluble
If we can agree that low-level hardware such as CPUs are essentially physical
software, then it would seem that there are many bug-free software pieces out
there. Many operating systems these days seem to have perfect or near perfect
memory and thread management. Kernel-mode drivers in general require a high
level of quality, since a bug there usually means a dead system. User mode is
where things get sloppy, and front-end web development is where things often
get extra sloppy.

~~~
wglb
_it would seem that there are many bug-free software pieces out there._

I don't think that you can consider CPUs to be bug-free. Just to pick one,
Intel got SYSRET wrong:
[https://blog.xenproject.org/2012/06/13/the-intel-sysret-privilege-escalation/](https://blog.xenproject.org/2012/06/13/the-intel-sysret-privilege-escalation/)

~~~
greenyoda
And don't forget the famous Pentium division bug in the 1990s:

[https://en.wikipedia.org/wiki/Pentium_FDIV_bug](https://en.wikipedia.org/wiki/Pentium_FDIV_bug)

~~~
insoluble
I never said they were all bug-free. I am aware that there have been some
problems over the years, but the track record seems much better with CPUs than
with most things having what could be called advanced programming.

~~~
wglb
Having a game-over user-to-kernel exploit available for who knows how long is
not my idea of anywhere near bug-free.

And who knows what happens, or what bugs are fixed, when Intel/AMD push new
firmware to the chip.

------
SQL2219
It probably has a lot to do with how many users you have. With 10 users,
bug-free is possible; with 10,000+, it's a different story. The more users
you have, the more weird scenarios they come up with for breaking stuff.

~~~
mod
"Unfound bug" is still a "bug."

It doesn't have anything to do with users or weird scenarios.

------
S4M
It depends on the complexity of the software. Small Unix utilities like grep
or ls are, AFAIK, bug free, because what they do is very simple; something
more complicated is more likely to have small bugs somewhere.

~~~
LukeShu
There was a bug in ls, long ago; it stuck around long enough that it became a
feature. It was supposed to not list "." or "..", but instead it skipped all
files starting with "."; which thus became the standard way to hide a file on
*nix.
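The accident is easy to imitate (this sketch just mimics the behavior in Python; it is not the historical C source):

```python
# The accidental "feature": skipping every name that starts with ".",
# when the intent was to skip only the "." and ".." directory entries.
entries = [".", "..", ".profile", "readme.txt", "src"]

intended = [e for e in entries if e not in (".", "..")]   # what was meant
actual = [e for e in entries if not e.startswith(".")]    # what ls did

print(intended)  # ['.profile', 'readme.txt', 'src']
print(actual)    # ['readme.txt', 'src'] -- dotfiles became "hidden"
```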

------
as1ndu
Absolutely!!! Look at the Python code below :D

print("Hello World!")

