
Hidden obstacles for self driving cars - scottlocklin
http://www.technologyreview.com/news/530276/hidden-obstacles-for-googles-self-driving-cars/
======
balloot
I've been saying this for years, as I took Thrun's class when he was at
Stanford. Google's dirty little secret is that self driving cars are still
mostly smoke and mirrors. Given relatively controlled conditions and a trained
driver who can play backup when needed, they work. But if you put them in
complicated situations - snow, a busy city environment, abnormal signage -
watch out.

The problem is that the driving model is probabilistic. When you solve a
problem probabilistically, getting from 90% of cases covered to 99%, to 99.9%,
to 99.99% involves exponential leaps in difficulty. So even if the car covers
99.9% of driving conditions (and it currently doesn't), there's still a
tremendous amount of work to be done to get it to 99.9999% correct, or
whatever the threshold is for it to be deemed "safe" for fully autonomous use.
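To put rough numbers on that gap, here's a back-of-the-envelope sketch. The fleet mileage and per-mile decision rate are made-up assumptions purely for illustration:

```python
def expected_unhandled(fleet_miles, situations_per_mile, coverage):
    """Expected count of driving situations the system fails to handle."""
    return fleet_miles * situations_per_mile * (1 - coverage)

# Illustrative assumptions: 1B fleet miles/year, one "decision" per 10 miles.
for coverage in (0.90, 0.99, 0.999, 0.9999, 0.999999):
    n = expected_unhandled(1_000_000_000, 0.1, coverage)
    print(f"{coverage} coverage -> {n:,.0f} unhandled situations/year")
```

Each additional "9" of coverage cuts the failure count by 10x, but per the argument above, each of those 10x cuts costs an exponential leap in engineering effort.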

I personally am bearish on the technology, as getting the inconvenient final
situational cases correct will be extremely challenging. I would love to be
proven wrong, but at Stanford I came to the opinion that the probabilistic
approach would get us to really cool demos, but never a fully autonomous
vehicle. That being said, the people working on this are a whole lot smarter
than I, and I would love to be proven wrong.

~~~
nicholas73
I decided to test a Google self-driving car as it crossed an intersection by
accelerating toward its broadside - no reaction at all. Well, I got a reaction
from the humans inside.

I recently narrowly avoided getting killed in a broadside collision by braking
just in time. If I had been farther along, I would have sped up out of the way
instead. Would a probabilistic approach handle this? Maybe they need to
compile a list of special edge cases.

~~~
balloot
You can't compile a list of edge cases for this kind of thing, because it is
impossible to know the comprehensive list of all the situations the car won't
handle correctly.

In the end, you need a learning technology that can properly adapt to any
possible situation and give a decent response. Maybe it can be done, but we
certainly aren't there yet and I'm skeptical as to the tractability of the
last bit of the problem.

------
prbuckley
It seems to me we have a better chance of seeing low-flying autonomous
vehicles for personal transportation before we see self-driving cars (think
quadcopter-style drone vehicle with 500 lb payload capability). Autopilot
systems are already widely used in commercial air vehicles.

Fewer obstacles need to be accounted for in the air, and there is less
entrenched regulation. It feels like we are going to leapfrog the car
altogether. I would love to see a detailed analysis that compares these two
approaches.

~~~
smeyer
Why do you think this? Self-driving cars seem reasonably close whereas there
doesn't seem to be anything resembling what you're proposing. Also, aren't
autopilot systems successful in commercial air vehicles because they operate
high up away from most everything other than a few birds and other commercial
air vehicles and they maintain massive separation between vehicles (compared
to roads)? This wouldn't seem to scale well to lots of small personal vehicles
relatively close to the ground. Or are airborne autopilot systems also good in
these situations and I'm just unaware?

~~~
pseudonym
It depends on how "reasonably close" they end up coming. If it ends up being a
Zeno's paradox of issues where it can handle 90% of situations, then 99%, then
99.9%, then 99.99%, etc., but each of those incremental improvements proves
harder and harder to achieve, how many lawmakers are going to say "Sure, only
.001% of robot cars on the road get into catastrophic accidents in unhandled
situations, that's fine, let's make these things legal"?

The problem with emergent technology is that it's what people tend to be most
afraid of, and if there end up being known situations a robotic car absolutely
cannot handle, how long will it take for people to start abusing that?

And for unhandled exceptions...awhile back I was driving up 280 and a police
officer pulled out in front of traffic, flipped his lights on, and started
weaving back and forth across all lanes of traffic. All the drivers slowed
down and kept behind the officer, obviously not sure what was going on. The
officer stopped weaving at a couple points along about a one-mile stretch to
get out of his car and pick up an item off the freeway, then got back in and
resumed weaving, until he got up to a previously-pulled-over car and parked
behind another officer.

That's definitely not something they covered in driver's ed, apart from "if
something unusual is happening, slow down". But how long would it take to make
the news if a smart car in that situation passed the officer on a weave and,
at 85 mph, struck a drunk guy who was stumbling along the freeway? And do you
really think every possible situation that occurs during driving will
eventually be able to be handled by a smart car?

Another example: any time you come upon a car or some other vehicle that's
double-parked in the city. If the car can only perceive pedestrians as blobs
of pixels, does it have sufficient resolution to figure out how far away an
oncoming car is, or will it just patiently sit behind that moving truck until
they're done and start moving again?

There are a lot of edge cases for this tech, and most if not all of them are
potentially fatal if handled incorrectly.

~~~
kelnos
_how many lawmakers are going to say "Sure, only .001% of robot cars on the
road get into catastrophic accidents in unhandled situations, that's fine,
let's make these things legal"?_

If they're actually thinking properly (doubtful), they'll look at the rate of
catastrophic accidents with human drivers, and make a call based on whether or
not self-driving cars are an improvement.

~~~
Rapzid
Yeah, hopefully. There are close to 200M licensed drivers in the US. At .001%,
that would still be 2,000 catastrophic accidents!

------
gfodor
Personally I think the path forward for autonomous cars goes hand-in-hand with
services like Uber. First you need there to be a population of people who can
get by without owning cars and who rely exclusively on a driving service.
Then you slowly augment the driving service with autonomous vehicles when
these folks are going to be traveling along routes that are supported by the
cars. These routes will likely be an order of magnitude cheaper to travel,
since there is no human driver. The market dynamics combined with the steady
march forward of the capabilities of the autonomous cars will do the rest.

In other words, I think it's pretty unlikely you will just wake up one day and
swap out your car for a self driving one to drive you around. Eventually,
maybe, but fundamentally it will be less about which car you own and more
about if you own a car at all. In the long run if autonomous vehicles are
successful it makes little sense to own a car anyway, so it's only logical
that the early adopters naturally be non-owners.

~~~
neolefty
Yes, self driving cars make more sense as a service than as a possession. And
knowing their limitations helps us see where they might be useful _today_ , so
that they can be gradually incorporated into traffic.

------
teuobk
So what if the car can't drive itself in anything other than familiar roads
during clear weather? I would guess that the majority of my current driving is
done under those conditions. If the car could take care of itself most of the
time but fell back to my control during rain, snow, obscure roads, or
construction zones, I think it would still be a net benefit to me.

~~~
gfodor
The problem there is that driving is a skill, and one you can lose if you do
not exercise it. A self-driving car which only works 99% of the time seems
like a non-starter, since by that point the humans inside it are no longer
qualified to take over - particularly if that 1% is in conditions that are
especially difficult.

~~~
luos
I don't have a driver's license, so for me that would be perfect. I don't care
about driving, I don't want to drive, but I want my (or a) car to come pick me
up at the bar / after work. If the weather is not good then I will use public
transport.

I think people overestimate their abilities. Sure, this is a new technology
but in X years these cars will be better drivers than 99% of the people.

~~~
renox
> I don't have a driver license so for me that would be perfect.

No, it WOULDN'T be perfect! What if it starts to rain/snow while you're
already in the car? And you'd still need to drive the car for parking...

------
xauronx
Oh really? A futuristic, mind-blowing technology still in development, which
hasn't been released to the public (and isn't planned to be), isn't ready to
be released to the public? What's the point of an article like this,
especially being posted here? We're all technical people who understand the
complexities (or understand it's so complex that we can't understand them).
If the effort is to inform the ignorant public of their misconceptions, post
it on Fox News or Yahoo Answers.

/rant

~~~
agildehaus
It's an odd reaction that self-driving cars have to be perfect and handle
every edge case and have tens of millions of miles of error-free testing
before they're considered "ready".

Humans usually spend about 30 minutes driving 5-10 miles to get a license, are
far more unpredictable and inattentive, and have a stupid number of edge cases
of their own.

Not to say we shouldn't hold self-driving cars to a higher standard, we
absolutely should, but it's silly to think they need to be perfect.

------
astrocat
>"Some experts are bothered by Google’s refusal to provide that sort of
safety-related information... the public 'has a right to be concerned' about
Google’s reticence: 'This is a very early-stage technology, which makes asking
these kinds of questions all the more justified.'"

This seems to be a pretty common opinion, but I fail to see why anyone feels
they have a "right" to be told Google's plans, strategies and ideas, much less
their development mistakes and failures for a thing that is not a product
currently available (or even close to it).

I realize there is a very real fear many people have about autonomous vehicles
because it's something they don't/can't understand. And while that's something
Google's marketing/PR department _may_ want to deal with, this fearmongering
attitude of "Google won't tell me exactly how it's going to work... it must
not be safe... _oh no we're all going to die!!!_ " is just getting
ridiculous.

~~~
SomeYoungGuy
The public has the right because Google is testing its cars on public streets.

------
naiyt
I remember a few of these difficulties being mentioned in Sebastian Thrun's
Self-Driving Cars course on Udacity. It's been awhile, but I seem to remember
the snow discussion in particular. The problem wasn't just slippery
conditions, but the question of how the vehicle can localize itself when the
various visual cues that it uses are covered in snow (e.g. road lines).

Pretty interesting stuff.

~~~
NoPiece
I think there are better potential solutions for a self-driving car to orient
itself in those kinds of conditions than there are for a person. I've driven
in miserable weather, and it is very disconcerting - you just guess and go. A
self-driving car could take measurements of the road + GPS coords + look at
its map database to know how many lanes there should be, then estimate where
the lanes and limit lines are. Once my eyes fail me, I don't have a fallback.
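A toy sketch of that fallback - the map data, segment name, and matching logic here are all hypothetical, just to show the shape of the idea:

```python
# Hypothetical stored map: a road segment with lane-center offsets (meters
# from the road centerline) recorded in clear weather.
ROAD_DB = {
    "segment_42": {"lane_center_offsets": [-3.5, 0.0, 3.5]},
}

def estimate_lane(segment_id, lateral_offset_m):
    """Given a GPS-derived lateral offset from the centerline, return the
    index of the nearest mapped lane center - even with the paint under snow."""
    offsets = ROAD_DB[segment_id]["lane_center_offsets"]
    return min(range(len(offsets)), key=lambda i: abs(offsets[i] - lateral_offset_m))

# GPS says the car is 3.1 m right of the centerline -> rightmost lane (index 2).
print(estimate_lane("segment_42", 3.1))
```

The human equivalent - remembering roughly where the lanes were last time - is exactly the "guess and go" above, just without the stored measurements.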

~~~
zanny
That, and you have to consider one of the chief issues this article brings up
- that the self-driving cars are relying on unreliable maps. Once a self-
driving car traverses a road, it could propagate a much more detailed, or at
least much more auto-appropriate, map to the swarm of other cars.

The maps would get orders of magnitude better the year the cars were
unleashed.

------
gamerDude
Most of the problems yet to be tackled seem to be troublesome for many humans
as well. But as for adding to the maps, couldn't Google work with local
governments to have stoplights, stop signs, construction, etc. added, while
Google contributes observations of potholes and the like?

------
cinskiy
Living in Russia, I have often tried to imagine how Google cars would behave
here, especially in winter.

As a driver here you sometimes have to make daring decisions - something that
robots probably should never do due to the 'do no harm' rule. That's a real
engineering challenge.

------
jbarham
There was recently a car crash here in Victoria, Australia, that resulted in
the deaths of two teenagers and injuries to five others. It was apparently
caused by the relatively inexperienced driver swerving to avoid a rabbit. In
hindsight it would have been better if the driver hadn't swerved and the
rabbit had been run over, but in some sense this is a moral choice of the type
that robots can't make.

In theory self driving cars sound like they could be potentially safer than
human drivers, but there are many edge cases like the one above where a
robotic driver following reasonable heuristics (e.g., swerve to avoid animals
on the road) could cause fatal accidents that most human drivers would be able
to avoid.

~~~
robotresearcher
> this is a moral choice of the type that robots can't make.

The driver in Victoria did not make a moral choice. Did they really weigh the
life of the rabbit against death and injury of seven people? Extremely
unlikely.

A robot, like an experienced driver, could easily use the heuristic that
hitting the small thing is safer than hitting the big thing.
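That heuristic is simple enough to write down directly. The size cutoff and risk threshold here are invented for illustration, not anything from a real system:

```python
def swerve_decision(obstacle_size_m, swerve_risk):
    """Toy heuristic: swerve only for large obstacles, and only when the
    swerve itself is low-risk. Thresholds are illustrative assumptions."""
    LARGE_OBSTACLE_M = 1.0   # assumed cutoff: person/deer-sized and up
    if obstacle_size_m < LARGE_OBSTACLE_M:
        return "brake_in_lane"   # hit the small thing if it's unavoidable
    return "swerve" if swerve_risk < 0.2 else "brake_in_lane"

print(swerve_decision(0.3, 0.5))   # rabbit-sized obstacle -> stay in lane
```

Unlike a startled human, the machine applies the rule the same way every time, which is the point being made above.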

~~~
kamaal
>>Did they really weigh the life of the rabbit against death and injury of
seven people?

I would say yes. First, I would look at what the driver actually did. Did the
driver have the seven people in plain sight when he/she made that decision?
But the driver certainly did make a moral decision to save the rabbit's life.
It may even have been the case that the person thought he could save the
rabbit's life without harming the humans. So the decisions you have to make
here are not purely mathematical.

>>could easily use the heuristic that hitting the small thing is safer than
hitting the big thing.

How would this hold up against hitting someone below average height versus
hitting someone tall? Or running over a baby vs. an aged person?

Right there you have all kinds of moral dilemmas.

------
sasoon
Mercedes-Benz S 500 Intelligent Drive Autonomous Car Self Driving Car:

[https://www.youtube.com/watch?v=LHqB47F12vI](https://www.youtube.com/watch?v=LHqB47F12vI)

[http://www5.mercedes-benz.com/en/innovation/mercedes-benz-intelligent-drive-driver-assistance-systems-safety-comfort/](http://www5.mercedes-benz.com/en/innovation/mercedes-benz-intelligent-drive-driver-assistance-systems-safety-comfort/)

[http://www5.mercedes-benz.com/en/innovation/autonomous-long-distance-drive-research-vehicle-s-500-intelligent-drive/](http://www5.mercedes-benz.com/en/innovation/autonomous-long-distance-drive-research-vehicle-s-500-intelligent-drive/)

------
Qworg
I'm always glad to see reasonable overviews of emerging technologies. While
very focused on what's wrong, it is a good counterpoint to the typical
breathless coverage.

------
dalore
The article talks about unmapped traffic lights possibly being a problem. But
if all cars were self-driving, would there even be a need for traffic lights?

> If it encountered an unmapped traffic light, and there were no cars or
> pedestrians around, the car could run a red light simply because it wouldn’t
> know the light was there.

So the car determined it was safe to cross and crossed? I don't see the
problem. The red light is for humans, but a self driving car will know where
the other cars are and know when to cross an intersection.

You could even argue the reverse. A self-driving car that comes to a green
light but detects a fast-moving car running the red would stop. But a human
might not, and would continue on thinking green means OK.

------
thewarrior
I am making a bold prediction that it will be decades before a self driving
car can drive itself safely on Indian streets.

Road Signs ? None.

Rules ? They're more like suggestions.

Roads ? What Road ?

~~~
letstryagain
They just have to be significantly safer than humans, and that's not hard.

~~~
webignition
I think you would have to experience first-hand what it is like to drive in
almost any country that is outside the US, Europe or most of the commonwealth
to really appreciate the difficulty of developing a self-driving car that
could handle such conditions more safely than humans.

It very much is hard.

I've driven in and around India. Aside from the fact that the majority of
vehicles have four wheels and generally move forwards more often than
backwards, the similarities to driving in countries with more regulated
driving environments are few to none.

Try circumnavigating Connaught Place on any day for a taster, or try taking
the main highway north from New Delhi any distance.

The smaller more fragile vehicles give way, in the interests of self-
preservation, to the larger, heavier vehicles.

Approaching an intersection in a truck? Blast your horn to warn others of
your approach so that they can get out of the way. Slowing down to carefully
approach the junction is not what happens.

Need to cut across some lanes of traffic in a school bus to make a turn? Have
your co-pilot hang out of the side door waving, shouting and berating other
vehicles to make them give way. That's your turning indicator in many cases.

Need to take the same school bus on a circuit to collect kids in the morning
and are running a bit late? Why not head straight through some empty fields.
Roads can be optional.

All of the above I have experienced first-hand.

~~~
prawn
Can you imagine the sensors and speeds that might be available within one
decade let alone two or three? Compare that to a human looking in just one
direction at any given time and with imperfect reaction time.

You know those larger, heavier vehicles you mentioned? What if they're self-
driving long-haul transport? They'll likely be able to make it through.

Cost will be the thing that prevents self-driving cars from dominating roads
in India before the technology itself is a problem.

~~~
thewarrior
I'm not trying to be snarky or anything but take a look at this :

[https://www.youtube.com/watch?v=twMj9MTs3lw](https://www.youtube.com/watch?v=twMj9MTs3lw)

A self-driving car would just sit there frozen, as there would be no 100% safe
action to take.

~~~
prawn
Yes, I've seen crazy intersections in person - I considered exactly that sort
of thing with my comment.

But I can also see that a self-driving car could edge forward gradually until
it had the opportunity to take a larger advantage. A lot of those vehicles and
bikes are not travelling at full speed and are also prepared to brake if need
be. A self-driving car would be able to judge all of these things almost
immediately unlike the reaction lag we get with humans.

------
thewarrior
I'd like to see one navigate its way around this :

[https://www.youtube.com/watch?v=twMj9MTs3lw](https://www.youtube.com/watch?v=twMj9MTs3lw)

------
eungyu
I think it may not be long before we realize that achieving fully cognizant
AI would be faster than covering all these corner cases in somewhat hard-coded
logic.

------
teej
I made this comment about google self driving cars two years ago and I think
it's still relevant now:

200k miles is nothing. Over 8 billion vehicle miles are driven per DAY in the
US. One person is killed for every 75 million miles driven. 200k miles isn't
enough to test every terrain, under every weather condition, in every
lighting, etc. There are too many variables in the equation. The Google car
won't be truly safe until it has logged 1000x as many miles.
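One way to quantify "how many miles is enough": if a fleet logs N fatality-free miles, the 95% confidence upper bound on its fatality rate is roughly 3/N (the statistical "rule of three"). A quick sketch against the 1-death-per-75M-miles human figure cited above:

```python
def miles_needed(target_rate, confidence_factor=3.0):
    """Rule of three: zero events observed in N miles bounds the event rate
    below confidence_factor / N at ~95% confidence. Solve for N."""
    return confidence_factor / target_rate

human_fatality_rate = 1 / 75_000_000   # one death per 75M miles, per above
print(f"{miles_needed(human_fatality_rate):,.0f} fatality-free miles needed")
```

That comes out to roughly 225 million miles - over 1000x the 200k logged - just to match the human baseline at 95% confidence, let alone beat it.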

~~~
agildehaus
Considering the average human has only a few miles of officially observed
driving before receiving a license, I think this is a bit unfair.

How would you suggest testing these vehicles to tens of millions of miles
without having an actual public deployment?

Also, Google can test its algorithms in simulation to many millions of miles
within hours. I believe they're lobbying for simulation to be a major part of
testing:

[http://www.theguardian.com/technology/2014/aug/21/google-test-self-driving-cars-virtual-world-matrix](http://www.theguardian.com/technology/2014/aug/21/google-test-self-driving-cars-virtual-world-matrix)

------
georgemcbay
I'd love to see an article on this topic that was actually written for
techies/engineers.

IMO, this could be right out of Wired the way it is written -- it conflates
regulatory issues, problems that are clearly (relatively) easy but just not
tackled, and problems that are legitimately still pending solutions before
this goes mainstream. An article focusing on what the actual real difficult
unsolved technical issues are would be fascinating, but this ain't it.

"The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece
of paper, so the car will try to drive around either."

As a human I sometimes can't tell the nature of something on the road at
speed. They gave an easy example (rock vs paper) but IRL it is usually things
that aren't so easy to identify like trash bags that might be empty... or
maybe not. So avoiding any unexpected item in the road when safe to do so is
probably always the right solution anyway.

"Urmson also says the car can’t detect potholes or spot an uncovered manhole
if it isn’t coned off."

People aren't that great at this either, especially in traffic where the hole
is obscured until the last second. At the very least, an automated car that
reports home could, via various sensors, detect hitting "something that felt
like a pothole" and send that information out to other cars in the region.

~~~
robotresearcher
> People aren't that great at this either

The robot car has to be considerably better than people to become acceptable.
This is a serious burden for the technology.

~~~
agildehaus
Accidents are usually caused by human unpredictability, lack of attention,
terrible reaction time, stupidity, narrow perception, inability to follow
traffic rules, and the effects of various drugs. Driverless cars mitigate all
of these issues.

I would feel _much_ better riding a bike in the vicinity of a self-driving
vehicle than any random human.

There are a number of issues to iron out yet and a lot of testing to do. But
make no mistake, they're already far better.

~~~
robotresearcher
They are currently stumped by situations that humans are OK in. They are not
"already far better" in all situations. Even if they are better on average, my
point is that this is not enough. If the world were run on pure rationality,
they would only have to be epsilon better on average to be adopted. In the
actual world, they are going to need to be very much better, and have very,
very few failure cases, to be adopted.

