

Has Robocar Safety Been Hyped? - pauloteixeira
http://spectrum.ieee.org/cars-that-think/transportation/safety/has-robocar-safety-been-hyped

======
ThrustVectoring
Looks like a giant pile of fear, uncertainty, and doubt. Even if we only wind
up with cars that drive better than bad drivers and worse than good ones,
that's a win - now the driving algorithm is debuggable and improvable
software, and the gains can be generalized across the entire fleet.

"How to drive safely" is a skill that can be learned once - by an explicit
algorithm - and then shared across every vehicle that needs it.

------
Someone
Not a strong article, but the comments are worth reading. I also think the
main claim is correct: safety hasn't been tested enough to say much about it.
One issue is selection bias in where and when self-driving cars are tested. In
real life, people will not be happy to get the message "sorry, I will not
drive in this weather" if it comes frequently.

~~~
_up
BMW, for example, also uses radar and infrared. These sensors can "see" even
in foggy German weather, when a human driver would be almost blind. We also
have the technology (RANGE-R) to detect humans even behind massive objects
(walls, cars). So we could make cars a lot safer for children and pets that
"dart out of nowhere".
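A toy sketch of why extra sensors help most in exactly the conditions that blind a camera (the detection probabilities here are made up for illustration, not BMW's actual figures): assuming roughly independent sensors, the chance that *all* of them miss something is the product of their individual miss probabilities.

```python
# Hypothetical sensor-fusion illustration. Assumes independent sensors:
# the fused miss probability is the product of individual miss probabilities,
# so radar + infrared matter most when the camera is degraded by fog.

def fused_detection_prob(sensor_probs):
    """Probability that at least one sensor detects, assuming independence."""
    miss = 1.0
    for p in sensor_probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# Clear weather: a camera alone is already good.
clear = fused_detection_prob([0.99])

# Fog: the camera is nearly blind, but radar and infrared still work.
fog_camera_only = fused_detection_prob([0.30])
fog_fused = fused_detection_prob([0.30, 0.95, 0.90])  # camera, radar, IR
```

With these illustrative numbers the fused system in fog detects more reliably than the camera does in clear weather, which is the commenter's point in miniature.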

------
bane
Maybe, no, yes.

[https://news.ycombinator.com/item?id=8699791](https://news.ycombinator.com/item?id=8699791)

For most things they'll probably be safer than the average human driver. But
then they'll fail in weird ways that humans never would, and somebody will die
when their automatic car drives them through the back of their house because
of some weird one-off weather condition, and people will get scared of them.

------
Qantourisc
Robocars CAN be safer: it's "just" a matter of getting all the
sensors/software/hardware reliable, redundant, and error-free. Whether we are
"there yet", I don't know.

------
mbq
Not so long ago elevators had drivers...

------
johnlbevan2
The comments on the article show a lack of thought and understanding; they
just list fears someone might have about a new driver and apply them to
automated cars instead.

Some genuine concerns would be those below. In my opinion the benefit still
outweighs these arguments (as most of these could also be applicable to a
regular car driven by a person), but if someone wants to have a go at self-
drive cars, here's some ammo instead of blanks:

\- Malicious Hacking - People with malicious intent can hack the cars by using
devices to confuse/misinform their sensors, or by finding ways to amend the
car's code (perhaps amending at source, perhaps by patching the car with the
malicious code). This isn't something a layman could do - so the risk of an
idiot loner is small; however if all of our infrastructure moves to AI cars
the payoff could attract some organised and skilled attacks.

\- Poor Maintenance - cars would self-diagnose and ensure issues are dealt
with promptly; however, if people are still doing the repair work, they may
make mistakes the car cannot see (at some point the car has to assume it can
trust its sensors), and those mistakes may cause unpredictable issues.
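One standard mitigation for the "the car has to trust its sensors" problem (a sketch of the general technique, not any manufacturer's actual design) is redundancy with voting: overlapping sensors are compared, and one that disagrees with the rest is flagged for service rather than silently trusted.

```python
import statistics

def vote(readings, tolerance):
    """Median-vote over redundant sensor readings.

    Returns (fused_value, suspect_indices). The median is robust to a
    single faulty sensor, and any sensor deviating from it by more than
    `tolerance` is flagged for maintenance instead of being trusted.
    """
    fused = statistics.median(readings)
    suspects = [i for i, r in enumerate(readings)
                if abs(r - fused) > tolerance]
    return fused, suspects

# Three redundant distance sensors (metres); one was mis-calibrated
# during a botched repair.
value, suspects = vote([12.1, 12.3, 37.0], tolerance=1.0)
# value is the sane median 12.3; sensor index 2 is flagged.
```

This doesn't catch a repair that breaks all redundant copies the same way, which is exactly the residual risk the comment is pointing at.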

\- Geek Play - If you had an AI car and knew a bit about coding, how tempted
would you be to put some of your own code in there - maybe just to give it
KITT's accent? Most of the car's controls would be locked down, but people
would look to jailbreak their cars so they could do such customisations, which
may have unforeseen side effects. This would be illegal, but some would still
try.

\- EMP - there's a concern that EMPs can be caused naturally by solar flares,
which could have a significant impact on our infrastructure. If not properly
shielded, cars could be affected by such issues:
[http://www.forbes.com/sites/tombarlow/2011/06/23/huge-solar-...](http://www.forbes.com/sites/tombarlow/2011/06/23/huge-solar-flares-could-spell-catastrophe-for-earth/)

\- Refusal to drive unsafely - in an emergency it's OK to break the rules
(this person will die if we don't get them to hospital in 10 minutes). An AI
car will be designed to put safety first, so likely won't allow you to break
the rules even in such a situation. If the rules can be broken in such a
scenario, people could just declare emergencies when running late. Perhaps a
compromise is to allow overrides such as "get to nearest hospital asap" rather
than just a generic override, and to penalise anyone misusing this facility -
however there will always be unforeseen edge cases (though these should
rapidly diminish, and the cars will still be far preferable to humans, given
they'll at least prioritise public safety).

\- Unforeseen special cases - Y2K, Y2K38, the Patriot missile clock drift,
the North Pole submarine divide-by-zero coordinates bug, etc. are examples of
issues caused by oversight: more here
[http://listverse.com/2012/12/24/10-seriously-epic-computer-s...](http://listverse.com/2012/12/24/10-seriously-epic-computer-software-bugs/).
None are likely to be global catastrophes, but we can never rule out issues
lurking unnoticed for years.
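The Y2K38 case is easy to demonstrate (a self-contained illustration, not anything from the article): Unix timestamps stored as signed 32-bit integers overflow in January 2038, so one second past the limit wraps all the way back to 1901.

```python
import datetime

INT32_MAX = 2**31 - 1  # last representable second: 2038-01-19 03:14:07 UTC

def as_int32(seconds):
    """Wrap a Unix timestamp the way a signed 32-bit integer would."""
    return (seconds + 2**31) % 2**32 - 2**31

last_ok = datetime.datetime.fromtimestamp(as_int32(INT32_MAX),
                                          tz=datetime.timezone.utc)
one_later = datetime.datetime.fromtimestamp(as_int32(INT32_MAX + 1),
                                            tz=datetime.timezone.utc)
# last_ok   -> 2038-01-19 03:14:07 UTC
# one_later -> 1901-12-13 20:45:52 UTC  (wrapped to INT32_MIN)
```

The bug sits dormant for decades and then fires everywhere at once, which is what makes this class of oversight hard to rule out.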

~~~
johnlbevan2
(I wonder if the Patriot missile story inspired the Space Alert mouse-wiggling
rule - [http://boardgamegeek.com/boardgame/38453/space-alert](http://boardgamegeek.com/boardgame/38453/space-alert))

------
nazgulnarsil
"psychologists"

into the trash it goes.

