
Automakers Are Rethinking the Timetable for Fully Autonomous Cars - AndrewBissell
https://www.designnews.com/electronics-test/automakers-are-rethinking-timetable-fully-autonomous-cars/93993798360804
======
cletus
Colour me not surprised in the least.

Speaking as someone in tech talking to others in tech I'm constantly surprised
how optimistic people are about the timetable for fully self-driving cars.
Honestly, I think we're still 20+ years away.

Here's another thing to consider: people act differently when it's a machine vs
a person they're affecting. There are countless examples of this, e.g.:

\- I live in a building with a doorman. The main purpose of a doorman is to be
a person who prevents criminal action from starting. After all, a person can be
fooled, distracted or bribed, but people seem less inclined to mess with an
unpredictable person than with, say, an unmanned security system on a similar
building.

\- Forget humans, this applies to animals. It's well-documented that just
having a dog (or even saying you do) reduces the likelihood you'll get
burgled. Thieves will generally pick easier, more predictable prey.

\- ... and this brings us back to cars. Reading some of the coverage of Waymo,
Uber, Tesla, etc. you see that other drivers act differently around a car they
realize has no human driver than they would around a human-driven car. Cutting
it off, messing with it, etc.

How exactly does an autonomous car deal with that aspect of human nature?

~~~
adarsh_thampy
Add country to the equation, and you can easily add another 20-50 years into
the already 20+ you mention.

Take India, for example.

\-- The infrastructure is poor.

\-- Drivers ignore rules to suit their convenience.

\-- A lot of people depend on driving for their livelihood. Union minister
Nitin Gadkari has already made it clear that they will not let tech that
affects the livelihood of so many people come in and take jobs. Most
governments that rely on the vote bank of the poorer sections of society will
not dare to move against this mandate.

\-- The preferred and most popular mode of transportation is still two-
wheelers. I don't see anyone building self-driving two-wheelers. That rules
out almost half of the Indian population's transportation.

I don't think India will be ready for fully self-driving vehicles for another
50 years.

~~~
tonyedgecombe
The poor infrastructure could work to their advantage, they could build it to
suit new applications. In the West we have a huge legacy infrastructure that
is hard to deal with. It’s somewhat similar to the way Africa skipped
traditional telecommunications and went straight to mobile.

~~~
adarsh_thampy
Might be possible in less dense cities.

Congested cities like Bangalore are reeling under bad city planning. Adding
self-driving vehicles is not going to be of much help.

The govt does not have money to spend on road repairs. Corruption at the
contract level is rampant. Putting one and one together, I doubt India will
use it to their advantage.

------
YeGoblynQueenne
>> “It’s really, really hard,” Krafcik said during a live-streamed tech
conference. “You don’t know what you don’t know until you’re actually in there
and trying to do things.”

Now, where was that Rodney Brooks interview? Ah, here it is:

[https://cdn.arstechnica.net/wp-
content/uploads/2018/06/2nd-P...](https://cdn.arstechnica.net/wp-
content/uploads/2018/06/2nd-Part-RB-for-Ars-Transcript-2.pdf)

 _> > Rob Reid: (...) Why do you think these people and others are
overestimating the rate of development in this field?

>> Rodney Brooks: I think they're making a bunch of mistakes. I asked them
when did the first car drive down a freeway for 10 miles at 50 miles an hour.
They know that the Google cars did that in 2004 or 2005. It was actually done
in Germany in 1987.

>> Rob Reid: Wow.

>> Rodney Brooks: When are we going to get the first car, hands off the
steering wheel, feet off the pedals, drive coast to coast in the US? Yeah,
well, it actually happened in 1995 with the Navlab Project from Carnegie
Mellon University. My point is, everyone thinks, oh this is just a [inaudible
00:44:50], this is going to happen quickly. It's been around a long time to
get to where we are. I have now demonstrated to them that their scale is
wrong, their start point is wrong. It has taken a lot longer to get to where
we are now._

So, to anyone who had any understanding of the technology it must have been
obvious for a long, long time that "it's really, really hard" and that it
wouldn't just take a few years after the time that Google announced the Google
car. I struggle to believe that Krafcik is not one of those people, or that
major automotive company CEOs are not.

Basically, those people have been bullshitting everyone and they're still
bullshitting everyone, and they were probably not even interested in
developing self-driving cars in the first place, it was all just some stupid
marketing game supported by a very excitable tech press and a gullible public.

~~~
chroma
Rodney Brooks is misleading. He says things like, "It was actually done in
Germany in 1987," but neglects to mention that the road had no other cars on
it.[1] He also claims that the first autonomous coast-to-coast drive happened
in 1995, but in fact that project was autonomously controlling only the
steering, not the gas or brakes, and 150 miles of the trip were driven by
humans.[2]

Modern autonomous vehicles are much more impressive. They were successfully
navigating urban environments 12 years ago.[3] Waymo's cars have driven over
10 million miles, and their disengagement rate is once every 11,000 miles.[4] We're
at the point where no major breakthroughs are needed, just incremental
improvements. That's what's different from earlier eras of autonomous
vehicles.

1\. [https://en.wikipedia.org/wiki/History_of_self-
driving_cars#1...](https://en.wikipedia.org/wiki/History_of_self-
driving_cars#1980s) has this quote:

> In the 1980s, a vision-guided Mercedes-Benz robotic van, designed by Ernst
> Dickmanns and his team at the Bundeswehr University Munich in Munich,
> Germany, achieved a speed of 39 miles per hour (63 km/h) on streets without
> traffic.

2\.
[http://www.cs.cmu.edu/afs/cs/usr/tjochem/www/nhaa/nhaa_home_...](http://www.cs.cmu.edu/afs/cs/usr/tjochem/www/nhaa/nhaa_home_page.html)

3\.
[https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2007)](https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_\(2007\))

4\. [https://medium.com/waymo/an-update-on-waymo-
disengagements-i...](https://medium.com/waymo/an-update-on-waymo-
disengagements-in-california-d671fd31c3e2)

~~~
rayiner
Think about what that means. Waymo has been doing this for many years, drives
in basically ideal conditions (Phoenix rather than Philadelphia), and still
disengages at a rate that amounts to once a year for a typical personal
vehicle. That’s not enough for “real self driving” because at that
disengagement rate the human must be actively engaged the whole time. Human
drivers go about 500,000 miles between crashes. (And that’s not 500,000 miles
driving through Phoenix. That includes miles driven through places like DC
where freeways have no acceleration lanes. That includes drunk drivers and
teen drivers. You can’t control the other people on the road, but if you don’t
yourself drive distracted, drunk, speed, etc., I’d bet you can expect to go at
least a million miles between crashes.) Disengagement rates would have to
improve by a factor of 50 to allow a human to not be paying attention at all
times while achieving an acceptable level of safety.
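The back-of-envelope arithmetic in the comment above can be sketched out. The annual-mileage figure is my own rough assumption (around the commonly cited US average), not something the comment states:

```python
# Back-of-envelope check of the disengagement-rate argument above.
# All figures are rough estimates from the discussion, not measured data.

MILES_PER_DISENGAGEMENT = 11_000   # Waymo's reported rate
MILES_PER_HUMAN_CRASH = 500_000    # average human driver, per the comment
TYPICAL_ANNUAL_MILEAGE = 12_000    # assumed ~US average for a personal vehicle

# A typical owner would see roughly one disengagement per year:
disengagements_per_year = TYPICAL_ANNUAL_MILEAGE / MILES_PER_DISENGAGEMENT
print(f"{disengagements_per_year:.1f} disengagements/year")  # ~1.1

# Improvement needed for disengagements to be as rare as human crashes:
improvement_factor = MILES_PER_HUMAN_CRASH / MILES_PER_DISENGAGEMENT
print(f"{improvement_factor:.0f}x improvement needed")  # ~45x, i.e. the "factor of 50"
```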

~~~
chroma
A disengagement doesn't necessarily mean that the car would have crashed.
There are amusing disengagement stories such as a cyclist doing a track stand,
causing the car to get stuck.[1] Many disengagements are false positives.
Think of how often you've done the equivalent for human drivers by telling
them to slow down or watch out. For me it's certainly more often than once
every 11,000 miles.

Given that humans aren't great drivers, you'd think that after 10 million
miles Waymo would be at fault for some crashes. But only one of Waymo's
crashes was even partially their fault. One of their cars was moving at 2mph
to get around some sandbags in the road and hit a muni bus that was trying to
squeeze by at 15mph.[2] Waymo has since tweaked their software to account for
aggressive bus drivers. That's over 10 million miles and only one collision
that was even partially their fault. That sounds pretty safe to me.

And regarding weather: Though their pilot program is in Phoenix, Waymo doesn't
just drive in places with nice weather. They've been testing in Michigan for
the past 2 winters.

1\. [https://forums.roadbikereview.com/general-cycling-
discussion...](https://forums.roadbikereview.com/general-cycling-
discussion/encounter-google-car-today-349240.html)

2\.
[https://en.wikipedia.org/wiki/Waymo#Crashes](https://en.wikipedia.org/wiki/Waymo#Crashes)

~~~
rayiner
A disengagement generally measures when a human test driver had to take
control. It’s not just telling a human driver to watch out—it’s taking the
wheel from them. They wouldn’t necessarily lead to a crash, but there is a
pretty good chance they would. If even 10% of disengagements would’ve led to
a collision, you’re still not close to a good human driver.

One crash in 10 million miles isn’t as great as it sounds. First, it’s a
meaningless number because a human is intervening every 11,000 miles. It’s not
a true measurement of what a purely autonomous collision rate would be.
Second, humans crash once in every 500,000 miles, and that’s under the full
gamut of circumstances (drunk drivers, unfamiliar roads, teenagers, etc).
Waymo is running with trained drivers on thoroughly mapped test areas in a
place with easy traffic and weather. You’d expect humans doing nothing but
running the same routes over and over in that carefully geofenced area to do
better than one collision per 500,000 miles. (Especially with someone looking
over their shoulder, like the self driving car is doing!)

If you look at the improvement in disengagement rates it’s not particularly
compelling: [https://cdn-images-1.medium.com/max/1600/1*oX-
ykZtxiMiuguSow...](https://cdn-images-1.medium.com/max/1600/1*oX-
ykZtxiMiuguSowQgGwg.png).

They’ve improved by a factor of 6-7 since 2015, but most of that was
2015-2016. That suggests the pace of improvement is getting slower.

------
q845712
One of the things the article names is the challenges caused by snow, ice,
etc. I realize I'm in the minority, but I actually think there are lots of
conditions that we shouldn't be driving in -- conditions where obscured lines
of sight, slipperiness of the road, etc., actually mean that we should either
be off the road entirely, or traveling at speeds far, far below normal. I know
this happens, but I honestly believe that it should be far more common -- so
common that it would likely take a chunk out of the economy.

For instance, you're driving on the freeway and it starts to rain -- can you
really reduce your speed, and/or leave an extra "car length" between you and
the car in front of you, so that you'll still have time to stop if needed? In
my experience, no one else wants to slow down, so it's safer for me to keep up
with the slowest traffic than to go what I would now deem a safe speed.

~~~
dbish
Absolutely agree. People don't realize how long it takes to stop your car when
the roads are wet, and they keep going at high speed. I think the adjustable
digital speed limit signs should be used more often to greatly reduce speed
limits and discourage the standard 55mph+, since the coefficient of friction
can be half or less of what it is in dry conditions (not to mention the loss
of visibility). [http://hyperphysics.phy-
astr.gsu.edu/hbase/Mechanics/frictir...](http://hyperphysics.phy-
astr.gsu.edu/hbase/Mechanics/frictire.html)
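A quick illustration of why halving the friction coefficient matters so much, using the basic stopping-distance kinematics d = v²/(2μg). The friction coefficients are rough textbook values, not measurements:

```python
# Illustrative stopping-distance comparison, dry vs. wet pavement.
# Approximate rubber-on-asphalt friction coefficients; real values vary
# with tires, surface, and temperature.

G = 9.81        # gravitational acceleration, m/s^2
MU_DRY = 0.7    # rough dry-asphalt value
MU_WET = 0.35   # roughly half, as the comment notes

def braking_distance_m(speed_mph: float, mu: float) -> float:
    """Distance to brake from speed_mph to a stop on a level road."""
    v = speed_mph * 0.44704          # mph -> m/s
    return v ** 2 / (2 * mu * G)     # d = v^2 / (2 * mu * g)

dry = braking_distance_m(55, MU_DRY)
wet = braking_distance_m(55, MU_WET)
print(f"55 mph: {dry:.0f} m dry vs {wet:.0f} m wet")  # wet distance doubles
```

Halving μ doubles the braking distance, which is exactly why a fixed 55mph+ limit makes much less sense in the rain.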

~~~
marvin
As a Norwegian who drives in snow at least a few times a year, it’s funny and
tragic to see videos of pileups in the US when there’s 5cm of snow on the
road. It’s easy to drive safely in snow, assuming you have snow tires, but you
do need to slow down a lot. Even here, the first snow of the year results in
multiple accidents before people learn to adjust.

~~~
mbeex
> As a Norwegian

I think Scandinavia is a different story. A closed snow cover at, say, -5
degrees Celsius is quite safe compared to muddled zero-degree conditions with
continuous freezing and thawing producing ice surfaces or slush. The US has
this in some states the same way we have it in Central Europe (I am from
Germany).

~~~
marvin
We’ve got this on the west coast where I’m from though :) The coastal climates
are normally just a few degrees below zero. Granted, our snow-clearing teams
are very effective, so there is rarely deep snow.

------
MegaButts
Based on what I hear from those in the know, the perception is that it's
decades away instead of years. And an even more complicated problem isn't
whether they can make it work under considerable constraints (geofencing, no
highways, avoiding several spots that are hard to handle, weather, assistance
via remote operation when stuck), but how to do it profitably. The cars are
expensive. The maintenance is expensive. The ongoing cleaning, charging, and
prepping of vehicles is expensive. Constantly updating HD maps is expensive.
Of course the engineers to keep everything running smoothly are expensive. And
the small army of people you need to actually talk to and help customers -
something Google has basically never done before - is expensive.

I love the idea of self-driving cars and I really can't wait until they're
ready. But I'd say that's going to happen about as quickly as changing the
infrastructure in this country. Personally I'd rather have better public
transit anyway.

~~~
stale2002
> I love the idea of self-driving cars and I really can't wait until they're
> ready.

What if I were to tell you that you can buy a self-driving car right now, and
that there are hundreds of thousands of them in consumer hands, right now?

Tesla sells them. And they aren't _that_ expensive.

If you can press a button, and take your hands off the wheel, that is a self
driving car. And you can buy them today.

~~~
AndrewBissell
Sorry, nope. If your car crashes after you press that button and take your
hands off the wheel, Tesla will point to the line in their terms of service
which says, "keep hands on the wheel, driver must pay attention at all times."

Tesla (and Elon Musk in particular) are of course happy to encourage the
_misconception_ that they already sell a self-driving car.

~~~
stale2002
You can call it whatever you want. But at the end of the day, this definitely
fits the vast majority of people's definition of "self driving".

You press a button, and it stays in lane, at the right speed. Most people are
happy with that feature set.

That is the MVP, that provides most of the benefits to most people, even if it
isn't perfect.

~~~
oblio
It kills people... it's not ready.

------
killjoywashere
Einstein's annus mirabilis was 1905 and it took decades of determined,
international promotion of the Einstein brand, World War 2, a letter from
Einstein to Roosevelt, plus another 6 years and 33 billion 1945 dollars (473B
USD today), and a staggering collective of the smartest minds to ever walk the
Earth in an existential race against a mortal enemy to develop nuclear
weapons.

NASA's shuttle project started in 1969, cost $200B, was shut down in 2011 and
never got close to its planned 50 launches per year.

This stuff is hard.

~~~
dx034
But both of these were mostly science problems. Self driving cars are about
making machines interact with humans. That has never been solved before.

Self driving in itself wouldn't be an issue, it's the other humans on the road
that make it so hard to solve.

One of the main problems for manufacturers is that we have never before solved
a similar problem, so there's nothing to compare it to.

~~~
asdff
>it's the other humans on the road that make it so hard to solve.

Humans, cats and dogs, deer, tumbling rocks, low-flying seagulls, tumbling
trash bags or a toppled trash bin, palm fronds, drunk people jaywalking, a
drunk guy in a Jeter jersey directing traffic after a Yankees game, edgy
teenagers pulling the invisible-rope trick (or just yanking a sheet on a
string), puddles reflecting humans on the sidewalk, blind turns and hidden
driveways, competitors' self-driving tech, the meta-wars that will result from
trying to raise a competitor's accident rates (and not doing that would be
leaving money on the table).

------
dalbasal
Geo-fencing counts, IMO. "Driver assist" does not.

Full autonomy represents an economic paradigm, geo-fenced or not. When
private cars went mainstream, the economic cascade was so broad that no one
could have predicted it. Supermarkets, for example, and the subsequent impact
on the food industry were a side effect of private cars. Suburbs. Disneyland...

Self driving is such a change. Starting from geo-fencing is just a constraint
on how fast the change happens. Private cars were limited by affordability
instead. There's always something.

Once these cars are on (geo-fenced) roads, infrastructure (e.g. signs) and
vehicles can start adapting to each other. There will be real economic
pressures to "open up" roads and highways to autonomous driving.

The tech problems in the space are so interesting, we forget about other
aspects. Ultimately though, geo-fencing or any other hack could get us to the
point where product problems get solved without a 1:1 relationship to solving
tech problems.

------
bayareanative
Hmm. I was driving a rental Hyundai that has auto-lane steering. I constantly
wondered if it would follow the wrong path at decision points (exits and
forks) or side-swipe another vehicle while trying to follow a lane. It's quite
aggressive, basically overriding the driver's input unless significant force
is applied, and it doesn't notify audibly when it disables itself (except for
a visual indicator: a small icon in the cluster turns from green to white, but
noticing it requires "polling" behavior of constantly looking down).

~~~
de_watcher
You can feel if it's on or off from the resistance. It requires you to apply
force to the wheel anyway, otherwise there is a no-hands-on-the-wheel alert.

------
Creationer
We might have to contemplate changing our roads if we want autonomous cars.

We have lanes for carpooling - why not lanes for autonomous vehicles? The road
surface could have special markers to centre the vehicle, and to position it
in the correct lane for a certain destination. This could be a single lane in
a highway, with shorter distances between cars (relying on the quicker
reaction speeds of computers and the modern, well-maintained brakes of the
cars themselves), allowing humans to overtake and recognise them easily.

Then we need a standard method of communication between vehicles - braking,
movement etc. - rather than relying on cameras and radar.

Remove as many variables as possible and the task becomes much easier.

Heck, we'll need a new method of charging road users for road use, without gas
taxes, so we might as well introduce a standardised road pricing scheme as
well (that takes into account vehicle size, weight, time, and location to
calculate the price/km - and can communicate that to the end-user to allow
them to optimise their travel).

~~~
chipperyman573
You can't rely on what other cars are telling you because a bad actor could
create a device that sends out false information that leads to crashes (send
out "I'm now accelerating from 50mph to 60mph" then brake immediately, or even
just not accelerate and continue to go 50mph). You could of course solve this
by verifying all the information sent by other cars, but then you have negated
any benefit, because you have to be able to independently detect anything they
could claim in a message.

~~~
tsomctl
We have a justice system that would prevent this. Same reason why I don't
worry that a gunman is going to kill me every time I go out into public.

~~~
addcn
This argument doesn’t hold though because a single bad actor who figures out
how to spoof the interface has tremendous leverage to hurt a lot of people at
once. And it’s likely they could leave a device at night and trigger it days
or weeks later. One instance of this could literally cause enough public fear
to get AVs banned for years.

I agree with OP that each car managing its own cameras and radar makes the
most sense. Cooperation by way of individual optimization is the cornerstone
of modern society and something we should absolutely imbue our autonomous
vehicles with.

~~~
ux-app
>And it’s likely they could leave a device at night and trigger it days or
weeks later.

same could be said about leaving a bomb on a bus, and yet there aren't bombs
going off on buses every day, and even when it does happen, it's not enough
to scare people away from buses for good.

Society kinda works because the vast majority of people are not homicidal.

~~~
glandium
When you think about it, it's amazing there isn't more terrorism.

~~~
raxxorrax
True, and that is certainly not due to impotent efforts to increase security
in basically every facet of life.

------
jaimex2
It's awesome watching the race play out. At this point it very much looks like
Tesla is on a sprint to the finish line while everyone else took a wrong turn
near the start following Google.

~~~
rayiner
What on _earth_ makes you say that? Tesla’s whole model seems like a
bust—unlikely to evolve beyond cruise control+.

~~~
olau
If you watch the presentation at the Tesla Autonomy Day, the dude in charge of
their deep neural network explained that they think basing the whole system
upon cameras is necessary since HD maps and LIDAR are too limiting. Basically,
the network needs to be able to see what people are doing to drive safely.

They've devised a method of gathering training data for their network and are
basically iterating through the problems, finding training data and counter-
training data, as far as I understood the presentation.

Whether this is going to work or not, we shall see.

~~~
pas
Have they solved the world model problem, or mentioned anything about that? If
not, they are lost in noise.

Video streams are so full of maybe signals, and ML loves that. As long as they
don't have a fundamental (sub)system that knows how much space it needs to
safely proceed at the current speed, how 99.99% of the world's roads work
geometrically, and how to infer this from visual data, they are in trouble.

(That said, I guess Karpathy is perfectly aware of this.)

~~~
jaimex2
I highly recommend watching the presentation. It's quite a deep dive into
their entire plan including solving the world model problem.

------
beefield
I would be 99% happy regarding autonomous driving if I got a platooning
hardware kit installed in my car so that on the highway I could connect behind
another car/truck going in the same direction as me and take a nap or watch a
movie. This should be an easier problem than a full self-driving car by orders
of magnitude, and I do not quite understand why platooning gets such a small
share of the self-driving hype.

~~~
badpun
The first driver ("platoon commander") would still need to be human. I'm not
sure people would sign up for this, as your error could result in the deaths
of the people in the cars following you.

~~~
beefield
Then we should have problems finding bus drivers?

Anyway, I am more thinking the platoon commanders would be truck drivers, who
already have legal requirements for rest and no capability for speeding on the
highway.

------
kieranmaine
After reading this I'm thinking Tesla's claims of being close to achieving
level 5 autonomy aren't pure hot air. Toyota's requirement of "8.8 billion
test miles for safe deployment of self-driving vehicles" puts Tesla well on
the path to this. I found articles online stating Autopilot has driven 1-1.2
billion miles. Assuming they've gathered many more miles of test data of
human driving, they at least have the raw data on which to iterate their
platform. Whilst I'm still skeptical, I've gained a new appreciation of how
pushing the Autopilot package allowed them to gain a big advantage in terms of
raw data.

~~~
shawabawa3
I personally think Tesla simply doesn't have the sensors for self-driving.

I imagine the huge amount of data they've collected is largely useless because
they aren't collecting the right data (e.g. LIDAR, radar, higher definition
video)

~~~
pas
They probably have the right data for safe disengagement. If they can solve
that, the rest truly can be incrementally added.

------
8bitsrule
"Within a generation ... the problem of creating 'artificial intelligence'
will substantially be solved."

\- Marvin Minsky, 1967, _Computation_

------
sxp
>Krafcik went on to say that the auto industry might never produce a car
capable of driving at any time of year, in any weather, under any conditions.
“Autonomy will always have some constraints,” he added.

This is an out-of-context quote. Right before that, the linked CNET article
said "John Krafcik, head of the self-driving car unit of Google parent company
Alphabet, said that though driverless cars are 'truly here,' they aren't
ubiquitous yet."

There is a big jump between L4 & L5 autonomy which is what Krafcik was
discussing. But even achieving L4 (which is what Waymo has in its Phoenix
tests) at mass scale will revolutionize the industry since it means cheap
robocabs for large portions of the US for much of the year.

The other companies are still struggling with L2 & L3, but Waymo's lead
appears to be getting larger based on the public results they are showing.
Tesla is the only one that's close since it has a public L2 system.

------
ux-app
Yeah full autonomy that matches or exceeds human driving is a loooong way away
for sure. Closing the gap is not just about sensing the environment, but also
_understanding_ it. That's (probably) a Hard AI problem.

Why is no one looking at retrofitting existing roads with markers that the car
can use to follow? I've read suggestions of using magnetic markers which would
have the added benefit of working through snow.

Roads get new lines marked all the time, seems like embedding a magnetic
marker at the same time would be quite feasible. This would basically be a
tram line on every street.

Who cares if the car needs to hand off to the human in the event of road works
or missing markers? Those are fairly rare events on my commute at least.

~~~
superkuh
Winter driving in cold regions is hard. Knowing where the middle of the road
is and the car's position relative to it is only the very start of the
problem.

In regions where roads are not cleared of snow for multiple days, the lane
positions change. The evolution of the lane positions _is not_ strongly
defined by where the previous lines were. The obvious spot for a human to
drive would contradict the magnetic markers, ground-penetrating radar
database, or whatever marker is used.

Markers of any type would be useful for awareness of the car's own position
but not sufficient for knowing what the lane has become. Even as a human you
have to guess sometimes because it's not possible to sense it. You just have
to know what humans would do. Machines aren't great at heuristics and guessing
what humans might do.

~~~
frosted-flakes
> In regions that have roads not cleared of snow for multiple days the lane
> positions change.

Or even in the same day. I've seen major freeways shrink from three to two
lanes after heavy snow squalls blanketed the road completely in less than an
hour. Yet traffic still flowed smoothly and quickly.

------
areoform
I have made this prediction multiple times throughout the last 8 years based
on my knowledge of robotics. I have argued with family members who are VCs as
well as enthusiastic technical observers based on simple first principles
reasoning.

It is highly unlikely that completely autonomous vehicles will be present on
the streets - interacting with humans, cars, and random events in real-time -
within the next decade. What is more feasible and likely to occur is that
large chunks of trucking may become "automated" in the form of a hyper-
advanced cruise control, but these vehicles will lack Level 5 autonomy for
their whole route. Understanding why requires a deeper examination of where
this technology came from: Stanley, Thrun’s robotic car that was acquired by
Google and is the basis for Waymo’s tech. Thrun notes:

> “In the last Grand Challenge, it didn't really matter whether an obstacle
> was a rock or a bush, because either way you'd just drive around it," says
> Sebastian Thrun, an associate professor of computer science and electrical
> engineering. "The current challenge is to move from just sensing the
> environment to understanding the environment."

[http://news.stanford.edu/pr/2007/pr-
junior-021407.html](http://news.stanford.edu/pr/2007/pr-junior-021407.html)

More precisely, to interact and function in the wild, the robot needs to
understand, distinguish, and model the properties of wildly different
entities. And these entities aren't just other people - they are _everything_
in its environment. And this includes random events like people walking into
traffic, objects falling onto the road, sudden rain and hail, or just
potholes.

This is not a problem that can be trivially solved with better sensors or
cheaper LIDAR. It is a conceptual problem that cannot be trivially brute-
forced or simulated in advance, as each unique permutation is just different
enough to confound existing models. Even if such different and confounding
events are six sigma in nature and have a 0.00034% probability of occurring
for every mile driven, then at 3.22 trillion miles/year
([https://www.npr.org/sections/thetwo-
way/2017/02/21/516512439...](https://www.npr.org/sections/thetwo-
way/2017/02/21/516512439/record-number-of-miles-driven-in-u-s-last-year))
Americans will experience 10,948,000 anomalies per year. Or, about 20.8 events
per _minute_.
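The arithmetic above can be reproduced directly. The six-sigma rate (3.4 defects per million opportunities, i.e. 0.00034%) and the annual US mileage are the figures the comment itself uses:

```python
# Reproducing the comment's back-of-envelope anomaly estimate.

ANOMALY_RATE = 0.0000034       # 0.00034% per mile driven (six sigma)
US_MILES_PER_YEAR = 3.22e12    # ~3.22 trillion miles/year (NPR figure)
MINUTES_PER_YEAR = 365 * 24 * 60

anomalies_per_year = ANOMALY_RATE * US_MILES_PER_YEAR
anomalies_per_minute = anomalies_per_year / MINUTES_PER_YEAR

print(f"{anomalies_per_year:,.0f} anomalies/year")   # 10,948,000
print(f"{anomalies_per_minute:.1f} per minute")      # ~20.8
```

Note this is the aggregate rate across all US driving, not the rate any single car would see.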

20.8 anomalies per minute is a pretty big number. That’s a potential accident
every 3 seconds or so. And that’s a six sigma estimate. It is hard to
overstate how hard six sigma performance is for the current generation of
autonomous vehicle hardware.

You might dispute the numbers, but they are meant to be an illustration and an
intuitive explanation for why this problem is Hard with a capital H. Unlike
some professionals, I don’t believe Strong AI is necessary to solve this
problem, but Weak AI is still a form of AI. Technology that's just out of
reach today.

At this point, people counter with Google’s cars. After all, seeing is
believing, and haven’t they been seen to run successfully for millions of
miles at this point?

Yes, but... Waymo/Google's success is hard to replicate. No one else can bend
the rules far enough to achieve their headway.

Most of us forget that before starting the project, Google used LIDAR to
create highly detailed maps of every environment their cars would ever be in.
Engineers then captured standard human driving on that route over and over
across different hours. These elements were then combined within their model
to pre-compute routes _before_ any real world driving happened.

Google was able to achieve its miraculous results because of its ownership of
GMaps, Street View and the resources to deploy a fleet to map everything in
advance. No one else has these resources. No one else can acquire them
cheaply. And even Alphabet, a trillion-ish dollar corporation can't replicate
this package for the entire world. Maybe someone will figure out a way to
create precise sub-eighth-of-an-inch level 3D models of environments using
drones and satellite images, but until that time this technology is too
expensive to deploy.

It is hard to overstate how important a distinction this is. The "only" thing
a Google/Waymo robot does in real-time is check the difference between the
recorded dataset and whatever the real-time LIDAR + camera feed is
saying. Or in other words, for every point of the route, Google has generated
an expected set of values, and any deviations from these are used by the car
to make real-time decisions.
[https://spectrum.ieee.org/automaton/robotics/artificial-
inte...](https://spectrum.ieee.org/automaton/robotics/artificial-
intelligence/how-google-self-driving-car-works)
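To make the differencing idea concrete, here is a toy sketch. This is not
Waymo's actual pipeline; the function, data layout, and threshold are all
invented for illustration:

```python
# Toy illustration of localization-by-differencing: compare a live
# LIDAR scan against the pre-recorded expected scan for this spot on
# the route, and flag bins that deviate beyond a tolerance. This is
# NOT Waymo's real pipeline; names and numbers are invented.

from typing import List, Tuple

def find_deviations(
    expected: List[float],   # pre-computed range readings for this pose
    live: List[float],       # live sensor readings, same angular bins
    threshold: float = 0.5,  # metres of disagreement we tolerate
) -> List[Tuple[int, float]]:
    """Return (bin index, deviation) for every bin where the live
    scan disagrees with the prior map by more than the threshold."""
    return [
        (i, live_r - exp_r)
        for i, (exp_r, live_r) in enumerate(zip(expected, live))
        if abs(live_r - exp_r) > threshold
    ]

# The live scan matches the map everywhere except bin 2, where
# something (a pedestrian? a trash can?) is 3 m closer than expected.
expected_scan = [10.0, 10.0, 10.0, 10.0]
live_scan     = [10.1,  9.8,  7.0, 10.2]

print(find_deviations(expected_scan, live_scan))  # [(2, -3.0)]
```

Everything that matches the prior map is effectively "free"; only the
deviations need real-time reasoning, which is what makes the approach
tractable, and also what ties it to pre-mapped routes.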

However, even this partial solution is tempting, because it works. But,
sadly, that's not the whole story. What gets lost in the re-telling of this
story is that this technique means the cars can't work in unstructured
environments. They are unsafe for most roads, especially uncontrollable,
unpredictable and narrow residential roads. It's why these cars exist inside
strict geo-fences even when they're ferrying passengers around, as in Phoenix,
where the car seems to mostly serve the downtown area with very little
residential coverage (caveats apply, I couldn't find a map).

But, even getting this far is a huge technical achievement, even if it's not
the fundamental breakthrough we've been looking for.

We should be prudent, but optimistic. There are a lot of smart people out
there who are working on this problem right now. They are figuring out really
clever ways to deal with these edge cases and they will solve this problem.
Even if it's not on the hype cycle's schedule.

With that in mind, I'd like to suggest a better model for thinking about
Driverless Vehicles is another Google service, Translate. It's quite good, but
when was the last time someone translated an entire book with it (and it came
out to be readable/sensible)? The system's performance - due to its
statistical roots - seems to be fundamentally asymptotic w.r.t. the data
presented to it. It can't be improved beyond a certain point even if you throw
more data at it. I suspect that the current generation of driverless vehicles
will be the same.
[http://people.csail.mit.edu/brooks/papers/elephants.pdf](http://people.csail.mit.edu/brooks/papers/elephants.pdf)
[http://arxiv.org/pdf/1604.00289.pdf](http://arxiv.org/pdf/1604.00289.pdf)
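The asymptotic behaviour is easy to picture with a textbook learning-curve
model, error(n) ≈ floor + c/√n: each additional order of magnitude of data
buys less improvement, and no amount of data gets you below the irreducible
floor. A purely illustrative sketch (the numbers are invented, not measured):

```python
# Illustration of an asymptotic learning curve: under the model
# error(n) = floor + c / sqrt(n), more data always helps a little,
# but performance never drops below the irreducible floor.
# The constants here are purely illustrative.

import math

def error(n_examples: float, floor: float = 0.05, c: float = 10.0) -> float:
    """Toy error model with an irreducible floor."""
    return floor + c / math.sqrt(n_examples)

for n in [1e4, 1e6, 1e8, 1e10]:
    print(f"n = {n:>14,.0f}  error = {error(n):.4f}")
# Each 100x more data shrinks the data-driven term 10x,
# but the error never goes below the 0.05 floor.
```

If driving (or translation) has a floor like this for purely statistical
systems, throwing more miles or more parallel text at the problem eventually
stops moving the needle.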

Before the needed breakthrough happens, it is likely that the cutting edge
won't advance beyond Level 3 in the next decade:

> I would guess Tesla’s position on this would be that most of the time, yes,
> you can rely on it, but because Tesla has no idea when you won’t be able to
> rely on it, you can’t really rely on it.

[https://spectrum.ieee.org/cars-that-
think/transportation/sel...](https://spectrum.ieee.org/cars-that-
think/transportation/self-driving/fatal-tesla-autopilot-crash-reminds-us-that-
robots-arent-perfect)

And that's okay. We might not have driverless cars by 2020, but we probably
will by the mid-2030s, and that sounds pretty darn fantastic to me.

~~~
Majromax
> Americans will experience 10,948,000 anomalies per year. Or, about 20.8
> events per minute.

One possibility is that _driving_ is simply too challenging in the general
case.

Note one of the problems discussed in the article: self-driving algorithms
can't cope with bad weather such as snow, when road markings are invisible.
Humans also can't cope with this situation. We don't drive with strict
adherence to the rules of the road; in bad weather, for example, the tracks
left by the car in front of you are far more important than any buried lane
marking.

If we demand (for example) that autonomous vehicles cause less than one
accident per 1 million km driven, then some situations that humans brave on a
daily basis (thunderstorms, blizzards, icy roads, residential streets with
children around) may never meet that standard even with a "perfect" autonomous
system. Thus far, we've papered over the contradiction because we have human
drivers to blame.

~~~
rayiner
> Humans also can't cope with this situation.

What exactly does that mean? People will happily drive on I85/95 in Atlanta in
torrential downpours where the lane markings are almost impossible to see. For
the most part they don’t crash, because they know where the lanes should be.
Humans are unmatched at filling in missing information like that. People drive
in rain, sleet, snow. And they still manage to go 500,000 miles between
crashes on average.

~~~
asdff
In Atlanta people are accustomed to that situation, but in places like Los
Angeles people come to a crawl in a drizzle with otherwise zero traffic. In
places with snowstorms, people still regularly lose traction and spin off the
roads or start sliding backwards down a hill.

I remember a picture a couple years ago from a snowstorm that hit the south,
could have been Georgia even, that looked like a scene from an apocalyptic
movie, complete with a flipped-over car burning in the distance.

Some people crash; some people do fine with skill. Will a self-driving car be
able to counter-steer out of an ice-induced drift and safely regain traction
the way an experienced human driver could? That's all up in the air right now.

------
siruncledrew
From a marketing/business perspective for the automotive industry, my
hypothesis is that it’s in their interest to publicize ‘best-case scenario’
earliest dates over realistic target dates for these kinds of milestones, to
hype the public up and make a company look like it is ahead of the pack in
innovation.

This is the sentiment I also gathered from looking back at previous
announcements in various industries (see r/retrofuturism on reddit), where a
CEO will say that something is 5 years away when in practice it’s at least 5
years away from being 5 years away.

------
netwanderer3
Uber is screwed. They clearly stated in their SEC filing that the possibility
of deploying fully autonomous cars will significantly impact their future
earnings.

~~~
scythe
That would be true even if they believed the chance that autonomous cars would
succeed was 1%. If it happens, it will still significantly impact their
earnings.

------
Derek_MK
Related article:

> Tesla Will Have Full Autonomy in 2020, Musk Says

~~~
lm28469
Reminds me of Uber's ex-CEO saying he'd buy every single autonomous car Tesla
would build by 2020.

The dream is slowly fading away. Tesla, or any car really, won't be fully
autonomous for a long time, and Uber's economic model doesn't make sense
anymore, because 90% of it was based on removing drivers entirely. It's like a
dream built on top of a dream...

[https://www.businessinsider.com/top-vc-claims-uber-ceo-
said-...](https://www.businessinsider.com/top-vc-claims-uber-ceo-said-he-
wants-to-buy-500000-self-driving-cars-from-tesla-2015-7?IR=T)

------
zaroth
If safe, reliable, autonomous driving is a decade away that’s half a million
lives lost. If it’s two decades away then a million.

The incentives are there to make this work. The funding is there to make this
work. The technology is actually there to make this work, _if_ you can change
some of the rules to get there.

It’s worth over a trillion dollars annually to make this work. It’s the great
infrastructure project of the 2020s and we should get fucking started already.

~~~
asdff
Are self driving cars going to eliminate traffic deaths? Tesla already has a
body count.

~~~
zaroth
Eliminate totally? Of course not.

But we have a long way to go before 1 fatality per 1 billion miles would be a
bad thing. That would be an order of magnitude improvement. I think we can get
two orders of magnitude in 10 years.
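The implied arithmetic checks out, assuming the roughly 37,000 annual US
traffic deaths cited elsewhere in this thread and roughly 3.2 trillion annual
US vehicle-miles (both approximate figures):

```python
# Rough check of the "order of magnitude" claim. Both inputs are
# approximate, commonly cited figures, not precise statistics.

us_deaths_per_year = 37_000    # approximate annual US traffic deaths
us_miles_per_year = 3.2e12     # approximate annual US vehicle-miles

current_rate = us_deaths_per_year / us_miles_per_year  # fatalities per mile
target_rate = 1 / 1e9                                  # 1 per billion miles

print(f"Current: ~{current_rate * 1e8:.2f} fatalities per 100M miles")
print(f"Improvement at 1 per 1B miles: ~{current_rate / target_rate:.1f}x")
```

So today's rate is on the order of 1 fatality per ~100 million miles, and 1
per billion miles would be roughly an 11-12x improvement: about one order of
magnitude.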

Autonomy absolutely _will_ have a body count. It cannot save every single
driver. And so if we want to be able to enjoy the benefits of autonomy that
saves _almost_ every single driver, we better get comfortable with the notion
that some people will die in self driving cars.

If we can’t accept that the technology will never be _perfect_ we will just
keep dooming 30,000 people (in the US alone) to die each year, not to mention
countless maimings (2 million actually) and property damage.

------
SubiculumCode
It was always clear to me that to accelerate the arrival of autonomous
driving, providing more consistent and controlled roadways would be key. That
is why freeway driving will come well before any autonomous city-street
driving. Even the freeway is not consistent enough, but it could be adapted to
some standard much more easily than, say, the roads in NYC.

------
ErikAugust
If we are rethinking the timetable of the self-driving car, is it safe to
rethink the whole future path of transportation? Do cars have to be it?

------
_cs2017_
> Krafcik went on to say that the auto industry might never produce a car
> capable of driving at any time of year, in any weather, under any
> conditions.

What's the likely business model for Waymo if level 5 autonomy is
unattainable?

Also, what did he mean by "never"? Did he mean "within our lifetimes" or
something? A strangely confusing statement for a CEO of a self driving
company.

~~~
xenospn
There are areas of the country with relatively stable weather and mild
conditions (no snow, no storms, etc). Could work reasonably well there.

~~~
_cs2017_
I thought the conditions he is concerned about are not merely bad weather.
Doesn't his statement hint at giving up on self driving in SF/NYC/London urban
traffic?

------
chrischen
I don’t understand the obsession or even the belief that fully autonomous
driving is around the corner.

When has anyone, much less Google, released a product that worked at such
advanced levels from day 1 and starting from no product in the market?

They should deploy autonomous cars at limited production scales in specific
environments, and build it out from there—but we haven’t even seen that yet.

~~~
anoncareer0212
That's exactly what you're seeing Waymo do, they've been operating a paid taxi
service at scale outside Phoenix for 6+ months

~~~
chrischen
Yes so the logical conclusion would be to scale that up, in whatever limited
capacity it is currently serving.

~~~
asdff
How many places in the U.S. have the weather and traffic level of a region
outside of Phoenix? Maybe a region outside of Scottsdale? They would scale if
they could, but that adds more variables they haven't felt confident enough to
tackle yet; otherwise you would see them scaling this up and around the U.S.

~~~
anoncareer0212
Great example of the no-true-Scotsman fallacy: we went from them not existing
to "well, they're not everywhere, so do they really exist?" in 3 comments

------
jillesvangurp
I think there's another thing going on here and we've seen a similar pattern
before with electric cars about 10 years ago.

Ten years ago, most of the car industry believed it had decades to adapt to
full EVs. It turned out they had much less than a decade. By 2017 most
manufacturers were scrambling to accelerate timelines for internal R&D when
they realized competitors had other timelines and were about to grab some
serious market share. This is now playing out in the market. Several ICE
production lines have been shut down (GM, Ford, etc). Most manufacturers have
launched or are about to launch full EVs. At this point several manufacturers
are putting billions into making mass production happen in the next few years.

I predict the same will happen with autonomous driving. Most manufacturers are
completely unprepared for it and behind with their R&D. They are actively
looking to buy as much time as they can get because they know they won't be
there. However, some manufacturers are ahead of the curve here and it's not
just Tesla. When this stuff gets good enough, it means some manufacturers will
have a huge competitive advantage over basically everything else. Everything
you read about this needs to be put in this context.

In other words, some manufacturers are dragging their heels here and it's the
same manufacturers that are also struggling to adapt to full EV. And they are
using the same strategies to manipulate the timelines. Think futuristic-
looking concept cars that will never drive, lots of lobbying about how cool
their R&D is, and a distinct lack of commitment to any timelines, combined
with regular reports on serious safety issues that are obviously going to
delay everything by decades. That's what this article says. What it translates
to is: several manufacturers are getting to level 5 autonomy in the next few
years and it is going to take decades for everybody else to catch up and mass
produce this stuff.

Tesla is probably overconfident about the timelines but well on track to
make lots of progress in the next few years nevertheless. Waymo looks like
they are making progress as well. However, they are held back by the notion
that they are more of a pure tech play and less of a manufacturing/logistics
play. In other words, they are dependent on the aforementioned manufacturers
dragging their heels.

IMHO several manufacturers are close enough to solving most of the issues here
that there are going to be a few leaps forward in the next few years. These
leaps are going to force the agenda for the rest of the industry. Mostly the
timelines for autonomous and a switch to full EV are actually aligned.

By the time most cars will be EVs, most of them will also be fully autonomous.
That's roughly 15-20 years from now. Plenty of time to tackle any remaining
issues with sensors, snow, and other edge cases. Yes, that's hard but people
are throwing billions at these problems and making rapid progress. The first
level 5 cars might be on the market within as little as 2-5 years. The biggest
hurdle here is non technical: legislation. Tesla is being ambitious here
(obviously) but even in 5 years they'd have the market largely to themselves.

------
MR4D
AI Winter is coming.

(Sorry to riff on GoT, but I couldn’t resist)

------
amgin3
Having 5+ companies each pouring billions into trying to make this technology
a reality separately is ridiculous. All these companies should come together
and work on a standardized single solution that can be shared between them. It
will come much faster and be less expensive if they are pooling their
resources.

~~~
whenchamenia
But what if they standardized on some compromise, as they tend to? That alone
would set progress back, not forward. I think if there are multiple orgs with
billions to throw at it, the more the merrier. If there were just a few with a
few million each, your plan would make much better sense. At some point the
bureaucracy of collectivism overshadows its usefulness.

~~~
asdff
I dare you to name a good technology that came about through a race of
proprietary competition rather than close collaboration with the best talent
in the field. It really is a waste to wall off all this engineering talent
just because their free backpacks have a different logo embroidered. You could
have two engineers struggling with the exact same problem sitting in the same
Uber Pool ride to work, and they wouldn't be able to speak a word of it to
each other due to shortsightedness by shareholders. There will be a point
where a clear winner emerges, 5 competing companies turns to 3 and then to 1,
and all those hours and hours of engineering effort by the failures becomes
moot. It's not like the 1 company that emerges will hire another 5 companies'
worth of engineers either; this is redundant work. You wouldn't even be able
to learn from these failures until decades after the fact, in a blog post from
a retired engineer no longer bound by an NDA.

------
panpanna
Multiple companies are currently testing level 5 autonomous fleets in some
European cities.

I think the article misses that you can't reach 5 without efficient and secure
v2x.

~~~
frosted-flakes
V2X?

~~~
eb3c90
Apparently vehicle to everything [1].

The car industry seems to love V2* terminology. V2H is connecting an electric
vehicle to a home to provide power in an emergency [2].

[1] [https://www.investopedia.com/terms/v/v2x-vehicletovehicle-
or...](https://www.investopedia.com/terms/v/v2x-vehicletovehicle-or-
vehicletoinfrastructure.asp) [2] [https://thedriven.io/2018/10/19/v2g-whats-
the-state-of-play-...](https://thedriven.io/2018/10/19/v2g-whats-the-state-of-
play-with-vehicle-to-grid-vehicle-to-home-technology/)

------
pfarnsworth
After seeing some autonomous cars in action, I believe they are 20 years away.
By that I mean a fully automated driverless car that you can command to do
what you want.

It would be great if you could buy a car, have it bring you to work while you
read or slept, then after dropping you off, go earn money while it was ubering
people around until it was time to pick you up. But I highly doubt this will
happen in less than 20 years at this point. There's just too much lacking. It
needs real artificial intelligence, not just machine learning and pattern
matching. The car needs to know "that's a paper bag flying in the wind I'm
about to hit", or "that truck in front of me is stopped."

~~~
nradov
Considering that we have made ~0 progress toward true AGI, I'm skeptical that
level 5 autonomous driving will arrive even within 20 years.

~~~
tim333
We certainly don't have true AGI but there has been progress in that general
direction eg Deepmind's AlphaZero, StarCraft and this thing
[https://news.ycombinator.com/item?id=17313937](https://news.ycombinator.com/item?id=17313937)

~~~
nradov
No those aren't progress toward AGI at all. They are tremendously impressive
technical achievements, but from an AGI perspective they're basically just
parlor tricks.

The reality is that for AGI we don't even know which direction to go yet so
it's literally impossible to determine whether we're making forward progress
toward the goal. Show me a computer as smart as a mouse and then I'll believe
we're making progress toward AGI.

------
quotemstr
How much of this delay is just perfectionism and excess risk aversion? In
pretty much any pursuit, demanding perfection ensures that you'll never be
finished. Based on what I've seen, autonomous vehicles already work. Waymo's
cars have driven millions of miles. Other companies have working systems too.
The technology is already at the point where it's useful.

I understand wanting to keep people safe, but the benefits of this technology
are so enormous that I'd rather companies launch now and iterate than spend
decades wringing every last disengagement out of the system while the carnage
on our highways continues.

I feel like we're in one of those situations where everyone is just afraid to
be first. Meanwhile, people die every day.

~~~
wpietri
That you are willing to let other people die for the sake of increasing your
convenience isn't very compelling.

And I think that's also unnecessary. From what I can tell about Waymo's
approach, they are launching with something well below level 5, but where they
cut over to a remote driver whenever the car is unsure. It's a smart approach,
in that they can offer a level-5 service with level-3 technology. They've
turned the technology cliff into a cost-optimization problem, which they can
chip away at over time.

~~~
BurningFrog
1.25 million people die in human-driven traffic per year, 37,000 of those in
the US.

SDCs should only need to beat that safety level, not be perfect.

~~~
wpietri
I understand the theory, but good luck selling a car on the basis of "probably
as safe as the average driver!" And good luck convincing the parents of some
kid run over by Google that it's just one of those things.

Note also that for many of those 37,000 deaths, a human is found culpable,
often doing time. Who do you propose should do the time for autonomous cars?
An engineer? Their manager? The CEO?

In practice, our corporate accountability is such that nobody will pay any
penalty. Maybe that's fine when, as with the 2008 financial crisis, the losses
are abstract and distributed. But when it's the life of somebody's kid,
somebody's mom, somebody's sister? That at best will be an extremely volatile
situation. Especially if Google or Uber captures 20% of the self-driving car
market and is now responsible for 37,000 * 0.20 = 7400 deaths per year, or
about 20 per day.

~~~
BurningFrog
I wonder how many of those deaths anyone is punished for? My guess is under
10%. Mostly it's just considered an "accidental death" and life goes on, as I
understand it.

But I (fortunately) don't have much experience with this.

The only possible solution is that the SDC and/or software manufacturer is
legally responsible for any accidents the SDC causes. I think that coverage
_has_ to be part of the legal package for these things to ever be sold in the
US. But that would be money, not jail time, of course.

Fortunately the car(s) should have video and data logs from the event, so the
evidence situation should typically be very clear.

~~~
wpietri
> The only possible solution is that the SDC and/or software manufacturer is
> legally responsible for any accidents the SDC causes. [...] But that would
> be money, not jail time, of course.

I don't think there's any "of course" there. As we see with the 2008 crash and
the financial industry generally, corporate financial liability for the high-
risk decisions of individual managers and execs doesn't do much to reduce
systemic risk. For the managers, it's a "heads I win, tails you lose" play.

The difference between banks and robot cars is that a banking failure just
means slightly higher prices for the customers and/or slightly lower dividends
for investors, whereas with robot cars it comes in funerals. Especially given
the endemic low quality in the software industry, there's little doubt in my
mind that unless "disruptors" face actual jail time, we'll see what is in
effect scaled-up negligent homicide.

~~~
BurningFrog
I mean, we have a century of settled tradition/law for how to handle deaths
and injuries from badly designed cars. It's far from a new issue.

I think that system can keep working.

I don't really see the analog with the banks.

Software has endemic low quality where it doesn't much matter. In mission
critical systems, it is usually reliably as solid as it needs to be.

~~~
wpietri
> I mean, we have a century of settled tradition/law for how to handle deaths
> and injuries from badly designed cars.

It's settled for cars with human drivers, because the drivers bear most of the
responsibility. For cars without drivers, who's to blame for bad driving?
"Somebody's bank account" is not a compelling answer.

> I don't really see the analog with the banks.

What part is unclear for you? I'm saying that with banks for the last 10+
years, we see that financial penalties don't solve problems of risk or
accountability.

> Software has endemic low quality where it doesn't much matter. In mission
> critical systems, it is usually reliably as solid as it needs to be.

I don't think either of those statements is correct, unless you mean the first
one tautologically. But the latter is definitely untrue. Look at the way
Toyota was building their software. Or the recent Boeing fiasco. Or
Volkswagen's emissions scandal, which is responsible for dozens of deaths.

