
New reports suggest limits to autonomous vehicle performance - charlysl
https://spectrum.ieee.org/cars-that-think/transportation/self-driving/have-selfdriving-cars-stopped-getting-better
======
kyrra
Another interesting article popped up with the release of the CA disengagement
reports: [https://jalopnik.com/californias-autonomous-car-reports-
are-...](https://jalopnik.com/californias-autonomous-car-reports-are-the-best-
in-the-1822606953)

TLDR: Cruise had what some would consider a disengagement incident but didn't
report it because it doesn't fall into the categories defined by California.

The core issue here is that the disengagement reports should not be the be-
all and end-all of where companies are. As the IEEE article calls out, we
have no clue what kind of testing the companies are doing (how hard they are
pushing their systems). There is also some room for companies to avoid
reporting certain types of incidents.

Waymo did 2 million miles in 2017, but only 330k were in CA. We are only
seeing a slice of their data.

If we (the public) want a better picture of where companies are on this, the
rules around what is reported need to be changed. They may also need to be
set at the federal level so that no company can hide its public testing in
states that don't require reporting.

~~~
deegles
My biggest issue with the "miles driven" metric is that it doesn't account for
the software version. Maybe updating from v1.0 to v1.1 doesn't warrant
resetting the counter, but I believe that over time each historical mile
driven becomes materially less relevant to the currently deployed version.
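To make the idea concrete, here's a toy sketch (my own illustration, not
anyone's actual methodology, and the 0.5 decay factor is arbitrary) of
discounting miles driven on older software releases:

```python
# Toy sketch: weight historical miles by how many software releases ago
# they were driven, so older miles count less toward the current
# version's track record. The decay factor is arbitrary.

def effective_miles(miles_by_release, decay=0.5):
    """miles_by_release: list of mile counts, oldest release first.
    The newest release gets full weight; each step back is discounted."""
    n = len(miles_by_release)
    return sum(m * decay ** (n - 1 - i) for i, m in enumerate(miles_by_release))

# Example: 1M miles on v1.0, 500k on v1.1, 250k on v1.2
print(effective_miles([1_000_000, 500_000, 250_000]))  # 750000.0
```

Under this weighting, a million old miles on v1.0 count the same as 250k
fresh miles on v1.2.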

~~~
ocdtrekkie
I would argue the biggest issue with the "miles driven" metric is that it
isn't contiguous. Your test drivers can handle all the rough stuff while your
mile counter ticks up on the straight-shot highways. (Arguably, you could be
super dishonest and handle every single intersection manually, and still have
a massive miles-driven statistic, since most distance is accumulated on
straightaways.)

Disengagements are a useful number, and the disengagement numbers say so much
more about the real state of self-driving cars: they fail about as often as a
non-synthetic oil change comes due. And if humans had driving-system failures
that often, we would not be considered good at driving.

The number a company should be marketing is how many miles they've driven
since the last disengagement, like the "days since an accident" sign at a
workplace: fail, and you go back to zero. The numbers will be smaller, but
they'll be a lot more meaningful.
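For illustration, a toy sketch of that accounting (the log format here is
hypothetical, not anything the DMV actually publishes):

```python
# Hypothetical illustration of the "days since an accident" framing:
# given per-segment logs marking where disengagements occurred, report
# the current streak and the mean miles between disengagements.

def disengagement_stats(log):
    """log: iterable of (miles_driven, disengaged: bool) segments."""
    streak = 0.0
    gaps = []
    for miles, disengaged in log:
        streak += miles
        if disengaged:
            gaps.append(streak)
            streak = 0.0  # fail, and you go back to zero
    mean_gap = sum(gaps) / len(gaps) if gaps else None
    return streak, mean_gap

streak, mean_gap = disengagement_stats([(5000, True), (7000, True), (3000, False)])
print(streak, mean_gap)  # 3000.0 miles since last failure; 6000.0 mean gap
```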

~~~
saltyworker
what about hours, like a pilot?

~~~
praptak
All simple metrics can be gamed.

~~~
dredmorbius
s/simple//

------
maxxxxx
I think it's pretty common that progress slows down once the simpler problems
have been solved. The unsolved problems to get to fully autonomous cars are
getting harder and harder once the low hanging fruit is gone.

~~~
glup
I'd love to see an analysis of the sorts of situations that the advanced
systems can't handle.

Driving in Mexico City last month I was appreciating how intensively
interpersonal driving was: it seemed all drivers were constantly computing the
risk appetite of all other drivers and acting accordingly; navigation,
avoiding pedestrians, and following the law (to the extent possible) was the
easy part. Maybe there's a way to avoid issues by providing coordination
mechanisms, but I don't see how this would work as long as there are human-
driven legacy cars. So will self-driving cars need theory of mind?

~~~
maxxxxx
In addition, humans tend to push the limits so even if we have autonomous cars
that can handle current human driver behavior, the humans will adapt and put
more pressure on the autonomous cars.

~~~
Peeda
I'd argue for it to go the other way. Driving has a large cultural component
-- people tend to drive how others around them do. Once autonomous cars hit
some critical mass lots of human drivers are going to start mimicking how they
drive which is probably a fair bit more conservative than most of these people
in big cities.

~~~
maxxxxx
I doubt it. Once you know that you can cut off a self-driving car without
consequences, it's very tempting to do so.

~~~
sombremesa
What are the consequences of cutting off any other car today? There will still
be people in those self driving cars, you know. Perhaps even people with a
horn.

~~~
maxxxxx
A self-driving car will surely be programmed to avoid collisions at any cost.
Another human driver, on the other hand, may just be an idiot and run into
you.

~~~
21
On the other hand a self driving car would have a crystal clear record of you
driving unsafely, which could be immediately uploaded to police.

------
gwbas1c
Letting a computer drive my car is high-risk: If there is a malfunction at
70mph, I could die!

Where's my automatic lawn mower? Where's my robot to unload the dishwasher and
put everything away?

We need to find other consumer technology to use AI in before we risk our
lives with self driving cars.

~~~
visarga
> Letting a computer drive my car is high-risk: If there is a malfunction at
> 70mph, I could die!

Google can't even make GPS mapping work perfectly - it's apparent if you use
it for a day. Not to mention that AI vision systems are susceptible to
adversarial attacks, which means they can be hacked by external images. This
paper just came out yesterday: "Breaking 7/8 of the ICLR 2018 adversarial
example defenses"
([https://arxiv.org/abs/1802.00420](https://arxiv.org/abs/1802.00420)). The
situation is dire.

I think they are afraid that drawing a couple of lines with a marker on a
street sign might make it invisible, or turn it into another sign. They can't
defend against such attacks because the neural net doesn't actually understand
the world, it just observes patterns. Understanding requires causal reasoning.
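For anyone curious about the mechanics, here's a deliberately tiny sketch of
the fast gradient sign method on a toy logistic classifier. All the numbers
are made up; real attacks target deep networks, but the gradient-sign step is
the same idea:

```python
# Minimal FGSM-style sketch on a toy logistic classifier -- purely
# illustrative. The attack nudges the input along the sign of the loss
# gradient until the model's prediction flips.

w = [2.0, -3.0, 1.0]          # toy model weights
b = 0.5

def logit(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [0.5, 0.1, 0.2]           # "clean" input: positive logit, positive class

# For true label y=1, d(loss)/dx = (sigmoid(z) - 1) * w, whose sign is
# -sign(w), so the FGSM step is x - eps * sign(w).
eps = 0.5
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(round(logit(x), 2))      # 1.4  -> classified positive
print(round(logit(x_adv), 2))  # -1.6 -> same model now says negative
```

The perturbation is small per coordinate, yet the prediction flips, which is
the worry with marker lines on a street sign.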

~~~
kajecounterhack
Fwiw maps used for autonomous vehicles are higher fidelity than consumer maps.
Furthermore the cars are generally equipped with IMUs so that they don't get
thrown off by urban canyons, clouds, etc. And more error correction (particle
filters, visual cues + map data) is used to pinpoint your location on a map.

Adversarial attacks are concerning for any AI model, but autonomous vehicles
aren't all AI. Lots of rules, heuristics, and prior knowledge are used. In the
end, if something is uncertain, it generally amounts to "slam on the brakes,"
which is more a UX problem than a safety problem.
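For the curious, the particle-filter idea mentioned above is easy to sketch
in 1D. This is generic textbook robotics, not any vendor's code, and every
noise parameter here is made up:

```python
import math
import random

# 1D particle filter: keep many position hypotheses, jitter them with
# the motion model, weight them by how well a noisy measurement agrees,
# then resample. The estimate converges on the true position.

random.seed(0)
true_pos = 5.0                                   # where the vehicle really is
particles = [random.uniform(0.0, 10.0) for _ in range(500)]

for _ in range(10):
    # Motion update: odometry says "stationary", plus process noise.
    particles = [p + random.gauss(0.0, 0.1) for p in particles]
    # Measurement update: noisy observation of position (sigma = 0.5).
    z = true_pos + random.gauss(0.0, 0.5)
    weights = [math.exp(-((p - z) ** 2) / (2 * 0.5 ** 2)) for p in particles]
    # Resample in proportion to weight.
    particles = random.choices(particles, weights=weights, k=len(particles))

estimate = sum(particles) / len(particles)
print(round(estimate, 2))   # should land near 5.0
```

Real localization stacks fuse IMU, lidar, and map data in the weighting step,
but the loop is the same shape.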

~~~
Twisell
There is no « better quality » map out there (except maybe military,
restricted ones).

Mapping the world is hard because it is complex and evolves way faster than an
army of Google cars can patrol it at a reasonable cost. More often than not
OpenStreetMap, which is a community project, is more up to date than
commercial maps, but it's not precise enough. Besides, nobody would bet their
life on something that is Wikipedia-style editable by ill-intentioned trolls.

This is precisely why all autonomous cars rely heavily on computer vision and
why adversarial attacks should be carefully considered.

Uncertainty will indeed « slam on the brakes », but adversarial attacks don't
aim to generate uncertainty; they aim to generate false certainty.

~~~
kajecounterhack
Nope. There are higher fidelity maps. You should read about robotics.

It's true these maps don't exist for the whole world yet but it's also why all
sdc companies are launching in metros. They're building them.

Vision is good at recognition but bad at localization. This is the real
problem. In robotics a fundamental problem is SLAM. You should read about it.

~~~
Twisell
Could you spell out those acronyms? They seem very industry-specific, but I
would gladly learn about these projects, as I often use cartographic products
in my work.

------
mtgx
I never believed either Tesla or Nvidia when they said last year that they
would achieve Level 5 within a few short years. I have a rule of thumb for
such announcements: subtract at least one level to get the _real_ level and
what their systems are actually capable of.

How could these cars reach Level 5 when they've only been tested for like a
year or two on _some_ roads? And I don't believe there is good enough
simulation for them right now either. Even Google's CAPTCHA was asking me
recently to pick the "bridge" from the pictures. There was no bridge.

~~~
Spooky23
Most of the public testing seems to be on the West coast and Southwest as
well. What happens in environments where you have things like rain and snow?

My non-autonomous Honda Odyssey has cameras on the passenger mirror and rear
hatch -- both of which were useless within 90 minutes of traveling from Lake
Placid back home thanks to road spray and salt.

IMO Level 5 is viable for some relatively limited use cases, or with roads
that have embedded telemetry. I'll believe it when intercity truck fleets
convert for a few years.

~~~
exelius
I think this is why cheap solid-state LIDAR is critical to this goal. It’s
currently too expensive and bulky for this kind of use, but if you can embed a
few dozen LIDAR units in a car for under $1000, you get some really accurate
telemetry that’s not affected by adverse weather. Cheap solid state LIDAR is
gonna be the invention that powers the next industrial boom.

~~~
nerfhammer
LIDAR doesn't really work in rain and snow. That's part of why this is a hard
problem.

~~~
exelius
Agreed, but multiplexed solid-state LIDAR will cover maybe 99% of the vision-
capture use case. What's out there now solves maybe 80%.

It's a hard problem being solved by incremental improvements: first you gotta
make LIDAR work in concept (which is basically done), then you have to shrink
it, then commercialize it, then commoditize it. So it'll take a decade or more
to realize, but that's the ultimate thing I think we need to get these things
into the mainstream.

------
jakarta
Waymo's progress seems to be slowing on critical disengagements (in older CA
DMV reports these were called "safe operation disengagements"; they stopped
reporting this type in 2017). These disengagements cover perception issues,
software that initiates unwanted maneuvers, inability to react to reckless
road users, and incorrect predictions.

You can see the reduced rate of improvement when you dig into the numbers:

2015 0.16 disengagements per 1000 miles

2016 0.13 disengagements per 1000 miles

2017 0.12 disengagements per 1000 miles

~~~
Retric
Looking at miles before disengagement shows things a little differently.

    
    
      2015: ~6250
      2016: ~7692 (+1442)
      2017: ~8333 (+641)
    

Personally, I think a self-driving car with one notified disengagement per
year would be perfectly usable. Not so useful for the blind, but still very
helpful.

That said, even reporting the same number every year could still represent
progress if they keep pushing the car harder. AKA, if 2015 = highway driving
in the day with nice weather, 2017 = inner city driving at night in a
snowstorm the same number of disengagements would still represent a lot of
progress.
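For anyone checking the arithmetic: the DMV figures are disengagements per
1,000 miles, so miles per disengagement is just 1000 / rate.

```python
# Convert the reported rates (disengagements per 1,000 miles) into
# miles per disengagement, rounded to the nearest mile.

rates = {2015: 0.16, 2016: 0.13, 2017: 0.12}
miles_per = {year: round(1000 / r) for year, r in rates.items()}
print(miles_per)   # {2015: 6250, 2016: 7692, 2017: 8333}
```

The year-over-year gains (about +1442, then +641) are where the "slowing
progress" reading comes from.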

~~~
jschwartzi
This is a serious problem if you have to take over during disengagement. If
you spend most of the year not driving you risk falling out of practice, and
then the one time you must rely on your own skill is in an unusual situation
requiring you to exercise good judgement at speed. I would expect most
disengagements to result in accidents because of this.

~~~
Retric
I don't think people are going to enter their destination for every trip. So,
people will still do some driving even with self driving systems.

On top of that, most situations with disengagement have not resulted in
crashes. Really, it's failing to pay attention during normal driving that's
most often at fault; a "buzz, you need to pay attention now" system solves
most of the problems even if the car is not driving itself.

~~~
stevenwoo
You make fair points, but didn't Waymo first show cars with no steering wheel
whatsoever as their target? I have only seen the Waymo-branded vans in recent
months, not the tiny custom cars (forgot the name of them).

~~~
Retric
WePod and some other companies already have self-driving buses doing limited
routes in foot traffic, which is basically the Waymo tiny-car demo.

It turns out to be a significantly easier problem than full-speed road
traffic, which IMO is why Google gave up on that concept demo.

------
adrianmonk
Not sure I believe the title is justified given what the rest of the article
says.

I'm no statistician, but I'm not sure if there are enough miles being driven
that good conclusions can be drawn about whether self-driving cars are getting
better or not.

Lots of testing is taking place outside of California due to other states'
efforts (like changing regulations) to cater to it. So data from just
California very likely doesn't tell the whole story.

Also, as has been mentioned here before, once they've solved and tested the
easy cases, they might move on to solving and testing the harder ones. So
there isn't an apples-to-apples comparison on test failure rates from one year
to another because they could be testing different things.

------
pfarnsworth
We should compare this to having a random human driver drive, and see how
often a third party would “take control”. Lord knows that if my dad were
driving I would take control a lot more than a self driving car.

~~~
mlinksva
Yet your dad (likely, I'm guessing, but many like him anyway) has a license
and can legally drive. I hope one thing that criticism of computer-driver
outcomes leads to is the conclusion that lots of human drivers should not be
entitled to drive, and kill >1M people/year worldwide.

------
frgtpsswrdlame
Does anyone know how much of the actual driving decisions in these cars are
made by machine learning algorithms? I'm aware that ai/ml is heavily used on
the vision side but how much of the actual driving component is traditional
algorithms vs neural nets/svm/whatever?

~~~
perfmode
What does it matter?

~~~
frgtpsswrdlame
I suppose it doesn't really? It's a bit of a side question that I thought
maybe someone would have insight into.

------
simion314
This idea is expensive to implement, but you could build dedicated tracks and
put only self-driving cars on them; then you could have very high speeds, and
since only these cars would be on the tracks, with no other obstacles, it
would be very safe.

~~~
lolc
You mean trains?

~~~
simion314
Similar to trains, but high speed, and you have your own car; you do not need
to wait for a scheduled train. It does not need to use tracks, just an
exclusive road with specific sensors and electronics so the cars would be able
to avoid collisions at intersections.

------
BuffaloBagel
No surprise here: the last 20% of a project taking 80% of the development
time.

------
kazinator
The postulation of a limitation is basically a negative claim. "This can go
only so far and no farther." Very hard to prove, and inherently myopic. Even
if the observations are true, it's likely just a lull in the self-driving
scene.

It's reminiscent of how some people once believed that a machine will never
beat the top human players in chess.

------
d13
Yes, it's Zeno's paradox ([http://platonicrealms.com/encyclopedia/zenos-
paradox-of-the-...](http://platonicrealms.com/encyclopedia/zenos-paradox-of-
the-tortoise-and-achilles)). CGI has exactly the same problem: the first 99%
is easy, the final 1% is impossible.

~~~
r00fus
I don't see how Zeno's paradox applies here. The root assumption is that the
work is asymptotic with some "goal" \- in this case "as good as a human".

Perhaps the real goal is actually "level 6: better than human" or "level 7:
futuristic transport". Looking at it that way, maybe we would indeed follow
Zeno's paradox (we'll never get futuristic transport), but cars could soon
surpass how a human would drive, permitting integration with human drivers
(and, being more reliable and cheaper, causing disruption).

Ultimately, I think the main roadblock to automation in this realm are the
vast entrenched interests (big oil, big auto) who'd lose out due to efficiency
gains (just like with solar power and EVs).

~~~
lukas099
Regarding your last point, doesn't 'big auto' comprise some of the largest
investors in this technology?

------
hodder
"It is possible that Waymo put its technology into more challenging scenarios
in 2017" Seems extremely likely.

~~~
beambot
Really? Waymo went from driving exclusively in Mountain View, CA to driving
across a number of cities in the US: Atlanta, SF, Detroit, Phoenix, & Kirkland
[1]. The diversity in cities would easily add to the challenge!

[1] [https://www.theverge.com/2018/1/30/16948356/waymo-google-
fia...](https://www.theverge.com/2018/1/30/16948356/waymo-google-fiat-
chrysler-pacfica-minivan-self-driving)

~~~
lukas099
That is what GP was saying: it is extremely likely that the difficulty level
of the scenarios increased.

I wonder if the flattened curve of progress in the metric described by the
article (disengagement rate) is actually intentional. Maybe a decision was
made to increase difficulty of driving scenarios in such a way to keep that
approximately constant.

------
joejerryronnie
There is an upper bound to autonomous vehicle performance with our current
roadway infrastructure and that is well below the threshold of full self
driving cars. Once we retrofit the roads with sensors, remove the reliance on
existing physical signage, and add guardrails to prevent unanticipated
occurrences as much as possible, we'll see real progress. Once everything
(cars, roadway, services, etc) becomes networked and talks to each other, we
can get rid of the steering wheels. My guess is we're 10 years away from
limited availability and 50 years away from full build out country wide. One
question - will a new technology come along before then and render driverless
cars as a means of human transport irrelevant? Personally, I'm waiting for
human drone delivery ;-)

------
zone411
One thing I haven't seen mentioned is that when you're driving yourself, you
anticipate and prepare your body for the changes in direction and
acceleration/deceleration, so the car feels like it drives more smoothly. If
the disengagement rate is not zero (so the human never becomes just a
passenger), you still have to pay attention to the road, and people may prefer
to drive themselves because of this effect, maybe with the exception of the
highway. If the road rules stay unchanged, the self-driving car will also
usually be slower in getting to the destination, as it will be set to strictly
follow the speed limits, stop at yellow lights, and perhaps be too deferential
to other cars.

~~~
tqkxzugoaupvwqr
What about today's taxis, buses, trains, and planes? I don't see passengers
paying attention to the road to prepare for driver-initiated braking, lane
changes, or speed adjustments. I would assume passengers not paying attention
today in human-driven vehicles will continue to not pay attention in
autonomous vehicles, because nothing changes (as long as these vehicles drive
similarly to human drivers).

~~~
sjruckle
Except it's not like today's taxis, busses, trains or planes. It's more like
riding with a friend who's never crashed their car, but sometimes will tell
you "I can't handle this right now, take the wheel." and you don't have a
choice.

------
petra
If we consider remotely controlled vehicles, they seem like the perfect step
toward self-driving cars: use self-driving in simple conditions (like the
highway) while letting people handle the complex driving. With operators who
have a lower cost of living and salaries, such a service may be cheaper than
a taxi.

And this could create a large business, doing many miles every day, which
would be the perfect means to gradually expand the role of self-driving.

Yet none of the big companies chose this (and I'm sure they're aware of it).
Why?

~~~
300bps
Wouldn’t remote control cars require a near zero latency network connection at
all times in all areas?

~~~
saas_co_de
It would be computer assisted remote control, similar to how a drone works.
The remote operator provides general directions but the computer has direct
control over the vehicle systems to implement those directions in response to
sensor feedback.

Human reaction time is very slow so to do better than a human definitely does
not require zero latency.
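Some back-of-envelope numbers (the 1.5 s human reaction time and the 100 ms
network round trip below are my assumptions):

```python
# Distance a car covers during a control delay at highway speed,
# comparing a human-ish reaction time to an assumed network latency.

def distance_ft(speed_mph, delay_s):
    return speed_mph * 5280 / 3600 * delay_s   # mph -> ft/s, times delay

print(round(distance_ft(70, 1.5), 1))   # 154.0 ft at a 1.5 s human reaction
print(round(distance_ft(70, 0.1), 1))   # 10.3 ft at a 100 ms round trip
```

So even a fairly sluggish link beats a typical human reaction by an order of
magnitude in distance traveled before the correction lands.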

~~~
notatoad
This sounds a bit like what Cruise is planning to do: not remote control, but
remote instruction.

>“The specially trained operator [then] provides a domain extension to the
vehicle to help define safe operating boundaries (e.g., permission to use the
opposing traffic lane where cones are demarcating a new path of travel)

[https://spectrum.ieee.org/cars-that-
think/transportation/sel...](https://spectrum.ieee.org/cars-that-
think/transportation/self-driving/gm-says-look-ma-no-steering-wheel)

------
dasimon
Betteridge's law of headlines: "Any headline that ends in a question mark can
be answered by the word 'no'."

tl;dr of the article:

- Autonomous vehicles drove fewer miles in the state of California in 2017,
but maybe made up for that in miles driven elsewhere.

- The disengagement rate will probably need to improve considerably before
AVs are ready for widespread deployment.

- Waymo's disengagement rate barely improved year over year, but that may
have been because they are placing the cars in more difficult scenarios (their
blog post suggests that is indeed the case).
blog post suggests that is indeed the case)

~~~
tome
tome's law: In any discussion about an article whose title is a question,
Betteridge's law is mentioned with probability 1.

[https://news.ycombinator.com/item?id=9077549](https://news.ycombinator.com/item?id=9077549)

~~~
ddlatham
Perhaps you should revise the law to just yes/no questions?

[https://news.ycombinator.com/item?id=16288489](https://news.ycombinator.com/item?id=16288489)

[https://news.ycombinator.com/item?id=16276911](https://news.ycombinator.com/item?id=16276911)

On the other hand, even those aren't holding up.

[https://news.ycombinator.com/item?id=16277231](https://news.ycombinator.com/item?id=16277231)

~~~
tome
Good point and good observation! The statement of Betteridge's Law itself
would have to be changed though

[https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines](https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines)

