Makes sense if your software is responsible for keeping people alive. Most of us don't need to work to such a standard (thankfully).



You don't always know whether your software is going to be responsible for keeping people alive. Operating systems, system components, device firmware, and so on are all potentially life-critical.

Let me give you a simple and easy-to-understand example: an MP3 decoder performs the boring task of transforming one bunch of numbers into another bunch of numbers. That second bunch of numbers is fed into a DAC, which in turn feeds an amplifier. If your software malfunctions, it could produce an ear-splitting sound with zero warning while the vehicle your MP3 decoder has been integrated into is navigating a complex situation. The driver's reaction can range from complete calm all the way to panic, including involuntary movements. That in turn can cause loss of or damage to property, injury, and ultimately death.

Farfetched? Maybe. But it almost happened to me, all on account of a stupid bug in an MP3 player. Fortunately nothing serious happened but it easily could have.
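To make that concrete: one cheap mitigation is to never trust the decoder's output stage. Here's a minimal sketch in C, assuming 16-bit PCM; the function name and the step limit are invented for illustration, not taken from any real player:

    #include <stdint.h>

    /* Hypothetical guard stage after the decoder: clamp to the valid
       16-bit PCM range and limit how fast the output may change per
       sample, so an upstream decoder bug produces a short ramp
       instead of a full-scale blast. */
    #define MAX_STEP 256 /* max per-sample change, an assumed tuning value */

    static int16_t guard_sample(int32_t decoded, int16_t previous) {
        if (decoded > INT16_MAX) decoded = INT16_MAX; /* clamp range */
        if (decoded < INT16_MIN) decoded = INT16_MIN;
        int32_t step = decoded - previous;            /* limit slew */
        if (step >  MAX_STEP) decoded = previous + MAX_STEP;
        if (step < -MAX_STEP) decoded = previous - MAX_STEP;
        return (int16_t)decoded;
    }

A real player would use a proper limiter (this crude version audibly distorts legitimate transients), but the point stands: a small guard stage keeps one bad frame from slamming the amplifier to full scale.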

So most of us should try harder to make good software, because (1) there should be some pride in creating good stuff, and (2) you never really know how your software will be used once it leaves your hands, so better safe than sorry.


There’s a certain level of arrogance that comes from people who don’t work on safety-critical stuff, and we could all do without it.


Eh.

I make video games. _Everything_ in games is a trade-off. There are areas of my code that are bulletproof: well tested, fuzzed, and rock solid. There are parts of it (running in games people play, a lot) that will whiff if you squint too hard at them. Deciding when to take the second approach is a very powerful skill, and knowing which corners to cut can result in software or experiences that handle the golden-path case so much better that it's worth the trade-off of cutting said corner.

I'll let you know when I find the right balance.


I agree, and I appreciate it when the discussion at least evolves past "ready, fire, aim" and moves to rational talk about balance and trade-offs. I wish more software engineers learned about risk management and Bayesian probability.


I always thought “well, nobody’s gonna die” is a crappy attitude for any professional developer. We should care about quality and getting it right, regardless of the stakes.

QA: “Look, if that integer overflows here, your software is going to fail.” Dev: “Well, it’s a cooking recipe app. Nobody’s gonna die!” How low an opinion you must have of your own profession if you’re going to excuse yourself this way!
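That kind of overflow is also cheap to guard against. A tiny sketch in C of a recipe-scaling case (the scenario and names are invented for illustration; __builtin_mul_overflow is a real GCC/Clang intrinsic):

    #include <stdint.h>
    #include <stdio.h>

    /* Scale a quantity for a given number of servings, refusing to
       wrap around instead of silently producing nonsense. */
    static int scale_quantity(int32_t grams, int32_t servings, int32_t *out) {
        if (__builtin_mul_overflow(grams, servings, out))
            return -1; /* overflow: report it instead of serving garbage */
        return 0;
    }

    int main(void) {
        int32_t total;
        if (scale_quantity(2000000000, 4, &total) != 0)
            fprintf(stderr, "quantity out of range, refusing to scale\n");
        else
            printf("total: %d g\n", total);
        return 0;
    }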


Have you ever written avionics-grade software? If all software had to be written to that standard, we'd have a lot less of it. That might be a good thing I guess, but recipe apps probably wouldn't make the cut.


> Have you ever written avionics-grade software?

I've written software that estimated fuel loads for freight carrying 747s.

> If all software had to be written to that standard, we'd have a lot less of it.

I'm not so sure about that, but if that were the consequence then maybe it would be OK. It would mean we've finally become an engineering discipline, with more predictable and reliable output as a result.

> That might be a good thing I guess, but recipe apps probably wouldn't make the cut.

A recipe app can easily result in injury, disease, or death, depending on how badly a recipe gets corrupted. Just one lost sanitation step for some choice ingredient and you're off to the ER with food poisoning.
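And that corruption is detectable before it reaches the user. A minimal sketch, assuming recipes are stored as blobs with a checksum alongside (the CRC-32 below is the standard IEEE polynomial; the function names are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    /* Bitwise CRC-32 (IEEE polynomial, reflected). Slow but tiny. */
    static uint32_t crc32(const uint8_t *data, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    /* Refuse to render a recipe whose stored checksum no longer matches. */
    static int recipe_is_intact(const uint8_t *blob, size_t len, uint32_t stored) {
        return crc32(blob, len) == stored;
    }

This won't stop a bug from writing a wrong-but-valid recipe, but it does catch bit rot and truncated files, which is the kind of silent corruption being described.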


Yeah, there’s a middle ground there. I work on non-flight-critical software that goes on large drones. We don’t do DO-178C, for cost and schedule reasons, but because the system and crew are expensive and fly on tight timelines (covering as many acres as possible during daylight hours), we take system reliability very seriously. It definitely takes more time, but we’ve had the system limp through some pretty wild hardware failures just fine (while immediately notifying the ground crew that the system is degraded). Having an aviation mindset has absolutely helped us make really robust software.
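The "limp through while telling the crew" behaviour described here is, at its core, a health monitor that degrades instead of aborting. A hand-wavy sketch of that pattern (every name here is invented, not their actual system):

    #include <stdio.h>

    typedef enum { HEALTH_OK, HEALTH_DEGRADED, HEALTH_FAILED } health_t;

    typedef struct {
        const char *name;
        health_t    state;
    } subsystem_t;

    /* On a fault, record the new state and notify the ground crew
       immediately, but keep operating on the remaining capability
       rather than aborting the whole mission. */
    static void report_fault(subsystem_t *s, health_t new_state) {
        s->state = new_state;
        fprintf(stderr, "ALERT: %s is %s, continuing with reduced capability\n",
                s->name, new_state == HEALTH_FAILED ? "failed" : "degraded");
        /* stand-in for the real telemetry downlink */
    }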


Not everything needs to be written at that level, but I've found that most programmers think they can do risk assessment on the back of a napkin and 'assume' their crappy little music player has zero safety risk just because they thought about it really hard for five minutes. See the earlier comment in this tree about how audio players DO have a safety element to them.


That does not mean there are no valuable lessons in there.



