
Why self-driving cars are not going to happen - z3t1
https://yinwang0.wordpress.com/2016/03/22/self-driving-cars/
======
stevenh
Human drivers can feel safe around other cars because of mutually assured
destruction. You are both afraid of dying if you make a significant mistake.

Self-driving cars will never have this fear holding them back from making
mistakes. You could argue there is actually an incentive to let them crash
once in a while, because the valuable real-world data will be phoned home
and used to improve the product for the small price of a human life.

Once corruption sets in down the road, cars with autopilot capabilities will
enable easy kidnapping and assassination. Simply activate backdoor remote
piloting and crash the target's car (or the self-driving taxi they're in) into
a tree at 70 mph, or lock the doors and drive to a pickup location to complete
the kidnapping.

Driving in a traditional car wouldn't protect the target, because a group with
ubiquitous optional remote control over self-driving cars could also force
surrounding cars to enter kamikaze mode and ram the target off the road.

Governments will demand some kind of backdoor like this at some point, and
then we have to trust they'll never abuse it (lol) and that no third parties
will ever be able to hack it remotely.

Even if you think the government is your friend, the first worm using a remote
code execution exploit that hops from car to car via Bluetooth and activates
its payload on a specific date will be able to kill tens of millions of
people.

~~~
nommm-nommm
I am a human driver. I do not feel safe around other cars.

~~~
goodplay
I am also a human driver, and I feel safer around "dumb" cars. I don't ever
intend to ride in a self-driving car, and will do my best to avoid being
near one.

Aside from being vulnerable to malicious actors, self-driving cars make
"cold-hearted" mistakes that don't make sense; mistakes that humans would
never make. Sure, the total number of mistakes they make will be lower than
the equivalent number of accidents caused by humans. However, losing human
lives to the type of mistakes self-driving cars make is, in my opinion, an
unacceptable price to pay.

If I must lose my life to a car-related incident, I want it to be caused by
a human, and not by some quirk or edge case in an algorithm or neural net.

~~~
toomuchtodo
As someone who was almost killed this evening while driving home from the
Thanksgiving holiday weekend (with my wife and newborn daughter in the car)
by a driver behind me texting and not paying attention at highway speeds, I
welcome the cold, calculating embrace of self-driving cars with open arms.

Edit: The driver ended up hitting a semi trailer that had been in front of
us, after I left the lane for the safety of the grassy interstate median.

~~~
goodplay
Firstly, I'm glad you and your family were not harmed.

I do not in any way mean for this to be offensive or disrespectful, but
would you still welcome self-driving cars had this incident been caused by a
malfunctioning self-driving car as opposed to a reckless driver?

My point is that self-driving cars lack the common sense that humans have.
A human would immediately halt if they heard a child scream near their car;
a car will continue moving along as if nothing is happening if it isn't
designed or trained to detect screams.

Until machines gain sensory and cognitive capabilities that exceed ours, and a
working comprehension of human values, I maintain my stance regarding this
issue.

~~~
euyyn
> A human would immediately halt if they heard a child scream near their
> car; a car will continue moving along as if nothing is happening if it
> isn't designed or trained to detect screams.

I don't know to what extent they use sound as an input (it would make sense,
given horns). But from what I've seen, self-driving cars are much better
than humans at detecting pedestrians in their vicinity, including cases in
which a child can't actually be seen by a human driver and happens not to be
screaming.

~~~
goodplay
Machines are much better than humans at detecting pedestrians, but only in
situations they are specifically designed or trained for. If a machine finds
itself in an unfamiliar or unknown setting (or if its sensors fail in a way
not accounted for in the design), it will fail catastrophically. A human
will not.

Hell, machines tend to fail even if the setting varies slightly from what
they are designed for; just see the numerous ways self-driving machines have
failed recently for examples.

Machines must gain sensory and cognitive capabilities that exceed ours and a
working comprehension of human values for me to trust them.

~~~
euyyn
> but only in situations they are specifically designed or trained for

Where are you getting this from? An obstacle in the LIDAR is an obstacle;
there's no common sense or training needed for the car to know that.
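
As a minimal sketch of the point (the thresholds, point format, and function
names here are invented for illustration, not any vendor's actual stack):
braking on any return inside the planned corridor needs no notion of what
the object is.

    # Minimal sketch: treat any LIDAR return inside the planned corridor
    # as an obstacle. No classifier, no training -- just geometry.
    # All thresholds and the point format are illustrative assumptions.

    CORRIDOR_HALF_WIDTH_M = 1.5   # assumed: half the vehicle's swept width
    SAFE_RANGE_M = 30.0           # assumed: begin braking inside this range

    def corridor_is_blocked(points):
        """points: iterable of (x, y) in the vehicle frame, x forward, y left."""
        return any(
            0.0 < x < SAFE_RANGE_M and abs(y) < CORRIDOR_HALF_WIDTH_M
            for x, y in points
        )

    def control_step(points, cruise_cmd, brake_cmd):
        return brake_cmd if corridor_is_blocked(points) else cruise_cmd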

> the numerous ways self-driven machines failed recently

I can only think of the Tesla crash, Tesla cars being notorious for trying
to work around the need for LIDAR or similar (deemed too expensive). What
are the other, numerous examples?

> If a machine finds itself in an unfamiliar or unknown settings (or if its
> sensors fail in a way not accounted for in the design), it will fail
> catastrophically. A human will not.

Fail catastrophically how? The worst-case scenario for a well-designed car
in a situation it can't understand is to park itself and refuse to go on
unassisted. There's no need for general AI to implement such a behavior.
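
A failsafe like that can be a plain state machine. Here's a hedged sketch
(the states, threshold, and names are invented for illustration, not drawn
from any real system):

    # Hedged sketch of a "minimal-risk" fallback: if perception confidence
    # drops below a floor, pull over and stop rather than guess. States,
    # threshold, and inputs are illustrative assumptions.

    from enum import Enum, auto

    class Mode(Enum):
        DRIVING = auto()
        PULLING_OVER = auto()
        PARKED = auto()

    CONFIDENCE_FLOOR = 0.9  # assumed; a real system would tune this carefully

    def next_mode(mode, perception_confidence, stopped_at_roadside):
        if mode is Mode.DRIVING and perception_confidence < CONFIDENCE_FLOOR:
            return Mode.PULLING_OVER
        if mode is Mode.PULLING_OVER and stopped_at_roadside:
            return Mode.PARKED  # wait for human or remote assistance
        return mode

No general AI involved: the car never has to understand the scene, only to
notice that it doesn't.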

~~~
goodplay
> An obstacle in the LIDAR is an obstacle; there's no common sense or
> training needed for the car to know that.

> The worst-case scenario for a well-designed car in a situation it can't
> understand is to park itself and refuse to go on unassisted.

These are both about false positives. What I object to is the cost of false
negatives: a situation where the car misrecognizes an abnormal situation as
normal, and then proceeds to act on its failed understanding.

The Tesla crash is a good example of this; the car failed to recognize the
obstacle in front of it and acted accordingly (namely, crashing into the
object at full speed). Regardless of the underlying reason, a machine did
not recognize that it was in an unknown setting, and thus failed to react
correctly. A human, on the other hand, notices, and will generally use
whatever capabilities they possess to at least attempt to handle the
situation safely.

I do acknowledge that self-driving cars will likely be statistically safer,
but until these cars completely exceed all the capabilities of human
drivers, I refuse to trust them, regardless of statistics.

~~~
mixedCase
> The Tesla crash is a good example of this

A car without LIDAR, relying only on a camera.

> A human, on the other hand, notices

The camera never even suspected there was an obstacle in its path, because
of how it camouflaged against the background. I doubt a human would've seen
it either.

And you're talking about a prototype hacked onto a car barely prepared for
self-driving (again: no LIDAR, an early iteration) that was being used
improperly.

You're not making a good argument here.

------
srinikoganti
Disappointed that this article even made it to Hacker News.

Problem #2, "Liability Issues", doesn't make sense. Geico, Allstate, and
other insurance companies can easily provide coverage. In fact, insurance
gets easier with self-driving cars. Premiums might go up or down based on
the technology used, but coverage is very much doable.

Driving is so monotonous that it is a perfect "killer app" for computer
vision and cognitive computing.

------
avmich
The two reasons offered are the complexity of algorithms and the liability.

I don't think these reasons are valid. On complexity, I would be really
surprised if self-driving car programmers didn't consider dangers from
objects falling off other cars. In fact, this is relatively easy to account
for.

The other one is liability. One framing of ML software is that if "regular"
software is about data and an algorithm, ML software is about data, a model,
and an algorithm. The idea is that the algorithm in the ML case is somewhat
generic, and a lot of the customization comes from the learned model. In the
case of liability, I think it's quite possible that a judge would make a
decision based on an analysis of the model: if the model is "reasonable",
then a decision based on that model could be too.
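
To make the split concrete, here's a purely schematic sketch (the feature
names and weights are invented, not any production system): the inference
code stays generic, and the behavior a judge would scrutinize lives entirely
in the learned parameters.

    # Schematic only: the "algorithm" is a generic score-and-threshold;
    # everything case-specific lives in the learned model (the weights).
    # Feature names and weights are invented for illustration.

    MODEL = {  # the learned part -- what a liability analysis would examine
        "obstacle_size": 0.7,
        "closing_speed": 0.9,
        "bias": -0.5,
    }

    def should_brake(features, model=MODEL):
        """Generic linear scorer: the same code runs any learned model."""
        score = model["bias"] + sum(
            model[name] * value for name, value in features.items()
        )
        return score > 0.0

    print(should_brake({"obstacle_size": 0.4, "closing_speed": 0.8}))  # True

Swap in a different MODEL and the "algorithm" is untouched; the question of
reasonableness attaches to the weights, not the code.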

------
aroberge
And this new article, about self-driving trucks _next week_ in Ohio, has
been posted to HN: [http://www.cbsnews.com/news/self-driving-trucks-will-
hit-the...](http://www.cbsnews.com/news/self-driving-trucks-will-hit-the-
road-in-ohio/)

------
toss1
If you're saying it can't be done, just don't get in the way of the people
actually getting it done.

The mattress is a nice example of a novel problem, but a remote edge case.
Being safe from all badly secured loads is not a prerequisite for
self-driving cars -- they merely need to be safer _on balance_ than the mass
of human drivers before they start taking over. I suspect that even if
self-driving cars all did the worst possible thing and tailgated every badly
secured load, the overall effect on safety would still be a net positive.

This prediction will suffer the same fate as other naysaying predictions by
experts (we'll never fly, never get to the moon, never know what a star is
made of, etc.).

------
greenstonekid
I would agree that the ability of computers to identify novel problems and
hazards (such as a falling mattress) will continue to be inferior to that of
humans for a long time.

However, identifying a novel problem and anticipating that it will be a
hazard in the near future is a trait that is only needed if your reaction
time is too slow.

A self-driving car's reaction time will be fast enough that it doesn't need
to anticipate anything. It will identify the velocity of objects in front of
it (it does not need to know what the objects are) within milliseconds of
them becoming a hazard, and decide on the correct corrective maneuver to
avoid them within milliseconds again.
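
A back-of-the-envelope sketch of that loop (the sensor rate and margin are
assumptions, not real figures): estimate the closing speed from successive
range measurements and brake when the time-to-collision drops below a set
margin.

    # Sketch: estimate time-to-collision (TTC) from two range samples and
    # react when it falls below a margin. Sensor rate and margin are
    # illustrative assumptions.

    SENSOR_PERIOD_S = 0.05  # assumed 20 Hz sensor
    TTC_MARGIN_S = 2.0      # assumed: brake if collision is this close

    def time_to_collision(range_prev_m, range_now_m, dt_s=SENSOR_PERIOD_S):
        closing_speed = (range_prev_m - range_now_m) / dt_s  # m/s toward us
        if closing_speed <= 0:
            return float("inf")  # object holding distance or receding
        return range_now_m / closing_speed

    def react(range_prev_m, range_now_m):
        ttc = time_to_collision(range_prev_m, range_now_m)
        return "BRAKE" if ttc < TTC_MARGIN_S else "CRUISE"

    print(react(30.0, 29.0))  # closing at 20 m/s, TTC = 1.45 s -> "BRAKE"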

~~~
addicted
In addition, even assuming that computers will not be as good at novel
situations as humans (I don't agree with this, but I'll assume it for the
purpose of this argument), we also need to assume that (damage prevented by
humans' reactions to novel situations) > (damage caused by humans failing in
non-novel situations) for the author's argument to work. I don't think this
is evident at all, and if asked to pick, I would overwhelmingly favor the
opposite.

Further, for self-driving cars, the moment something novel happens to one of
them, it is no longer novel for pretty much every self-driving car running
the same software (and assuming some basic data sharing between
manufacturers, probably for all self-driving cars). A falling mattress is a
novel situation for every human driver the first time it happens to each
individual driver. It's a novel situation for self-driving cars only the
first time it happens to any one of the cars sharing the same model.
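
A toy sketch of that fleet effect (everything here, names and structure
alike, is invented for illustration):

    # Toy sketch of fleet-wide learning: one car's novel event updates a
    # shared model that every car consults. Names are illustrative.

    shared_known_hazards = set()  # stands in for the shared, retrained model

    def encounter(car_id, hazard):
        if hazard in shared_known_hazards:
            return f"{car_id}: known hazard '{hazard}', handled"
        shared_known_hazards.add(hazard)  # "phone home" the novel case
        return f"{car_id}: novel hazard '{hazard}', logged for the fleet"

    print(encounter("car-001", "falling mattress"))  # novel exactly once
    print(encounter("car-002", "falling mattress"))  # known fleet-wide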

The liability issue is not a strong objection. We would have to change laws
slightly, but insurance for self-driving cars will be far more predictable,
and the likes of Geico will probably give owners of self-driving cars better
rates than owners of human-driven cars, because their safety record will be
better.

In fact, I bet that if self-driving cars come with the ability to also be
driven by a human, then once they're a little established, the ones that do
have this feature will be far more expensive to insure than the ones that
are purely self-driven.

------
eutropia
I feel like issue number one is being given the naturalistic fallacy
treatment. If animals can react to falling objects, so can computer models.
There's nothing magical about instincts.

------
kayman
Very good points. Liability is an issue that has not been played out yet in
any meaningful way.

I think self-driving cars will happen soon simply because the ROI is too
great to ignore.

It doesn't have to happen on a grand scale. It could start with small things
like pizza delivery or package delivery.

------
FullMtlAlcoholc
This writer sure does have a high opinion of human cognition and how it
applies to driving. In Los Angeles, I've seen people drive worse than
trained monkeys.

The problem with human drivers is that a lot of them make idiotic mistakes a
self-driving car wouldn't. Two weeks ago I saw a guy driving the wrong way
down a one-way street, and he continued past me without realizing it.

------
xupybd
It'll happen. It'll happen soon.

------
pretendscholar
It's a trillion-dollar industry; it'll happen sooner or later.

~~~
rustydev
By the same token... it will put a lot of people out of work... expect civil
unrest...

~~~
euyyn
Hey, no worries, the president-elect will negotiate the best deals with the
robots.

------
cyborgx7
Cute [http://i.imgur.com/duI91L6.png](http://i.imgur.com/duI91L6.png)

------
ericnolte
Strong opinions and weak arguments.

------
rmrk
Surely you are joking

