
The solution to that is good regulation, not complete banning. While software has the disadvantage of being unable to adapt on the fly, it has the advantage of never making mistakes - if it's written correctly, it will not randomly fail. That's decidedly not the case for humans, as we see every day with automobile accidents.

Software is only a part of it. I deal every day with software that doesn't randomly fail -- we write damn good code -- but the hardware it controls has failure modes that can't always be predicted, or that are so unlikely that no one thought of them.

Realize here that we're not talking about software running on a server in a nice temperature-controlled room. This is software controlling hardware that is under constant vibration and will get sticky, or bend, or break, or ice up under variable conditions - hot, dry, rainy, with wind gusts as it moves from behind a building into an open intersection. There is a mind-boggling number of things that can go wrong when you're controlling a device in the real world.

Even if the software is perfect, there is still a large number of variables to account for, and most of them can't be controlled. There's a case for UAVs, and I would love to get involved with them, but building a reliable UAV and properly maintaining it would almost certainly cost too much to have it deliver tacos. Unless you're willing to spend $250 to avoid walking a few blocks.

If it's written correctly [...]

No one argues that this is untrue; it's the premise that's unlikely.

Well, virtually all modern cars have software running critical systems. How is that regulated? I don't think it would be too difficult to adapt those regulations to flying "tacocopters".

One of the most basic safety measures cars take is to reboot critical systems several times an hour and to have mechanical backups, so the brakes both work on their own and can overpower the engine. You can't exactly do that with a drone.

I am not aware of a single automotive subsystem controller (or any other embedded system, for that matter) that reboots as a preventive mechanism. For handling an unrecoverable error, yes, that's standard practice. But rebooting in an attempt to prevent errors? That screams bad design.

Can you offer more details?
