
Toyota's Gill Pratt on Self-Driving Cars and the Reality of Full Autonomy - mcspecter
http://spectrum.ieee.org/cars-that-think/transportation/self-driving/toyota-gill-pratt-on-the-reality-of-full-autonomy
======
jayjay71
It's refreshing to see an article like this among the many other articles
predicting self-driving cars will be here in ~2 years. Volvo has committed to
selling level 4 vehicles by the end of this year (in limited locations) - I'd
love for them to follow through and see how good their technology really is. I
suspect the transition will be far more gradual than most people are willing
to admit, and take decades instead of years. Keep in mind, serious research
into self-driving cars started in the 80's, and then there was a huge increase
in interest in the 2000's because of the DARPA challenges. After that Google
took interest, and recently the major automakers have become serious about it
as well. But it's worth noting that originally Chris Urmson had predicted
Google would be selling fully autonomous cars today, and instead Google lost a
lot of their talent in the last year including Urmson and there is no public
deadline for selling a product.

I think progress is great and self-driving cars are awesome, but I don't
believe they're just around the corner.

~~~
felippee
I took the liberty of drawing a few situations in which an AI would likely
have no clue what to do, even if it detected the danger. I'll be adding more,
as every day on the freeway brings me new ideas (in fact I'm editing one right
now). Something like a "Winograd Schema for self-driving cars":
http://blog.piekniewski.info/2017/01/19/what-would-an-autonomous-car-do/

~~~
nmjohn
Your artwork is great! I'm not sure #1 would really be a problem as that
situation never should occur and is totally preventable (there always should
be traffic cones/barricades surrounding the cover, something the car easily
could detect). In the rare case it does happen, the utility company
would/should be at fault for leaving a manhole open like that.

I can also think of a few ways to prevent number 2 (basically, a combination
of GPS + knowing where all intersections occur + road data from thousands of
other connected cars = knowledge of where every stop sign is. Certainly not
foolproof, but I think ultimately it is a problem that has possible solutions)
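The lookup described above could be sketched as a simple map match. This is a
toy illustration only: the stop-sign coordinates and the 50 m radius are
made-up values, not real map data.

```python
import math

# Hypothetical database of surveyed stop-sign locations (lat, lon),
# e.g. aggregated from map data and other connected cars.
KNOWN_STOP_SIGNS = [
    (37.7793, -122.4193),
    (37.7810, -122.4210),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def expect_stop_sign(lat, lon, radius_m=50.0):
    """True if a mapped stop sign lies within radius_m of the current fix."""
    return any(haversine_m(lat, lon, s_lat, s_lon) <= radius_m
               for s_lat, s_lon in KNOWN_STOP_SIGNS)
```

The idea being that even if the physical sign is vandalized or hidden, the car
still "knows" an intersection with a stop sign is coming up.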

The rest of your examples are fantastic though, #3 being quite terrifying
actually. They are all very intriguing thought-experiments and I look forward
to seeing your future additions!

~~~
felippee
Hey, I just finished another one: a school bus with a swinging open stop sign
in the middle of the freeway.

Anyway, just food for thought. Autonomy really requires a lot of
"intelligence", and our technology is not quite there yet when it comes to all
these bizarre corner cases. I'm glad you like it.

~~~
ygra
I think such things can be discounted by knowing that motorways never have
stop signs (same with the prank stop signs). Of course, there are a lot of
things that only cause a change in driving behaviour in very specific
contexts. The bus' stop sign probably only applies when the bus is stopped.
Similarly, in Germany at least, a bus may turn on its hazard lights at a bus
stop, requiring everyone to drive past only at walking speed. A bus stopped on
the shoulder of a motorway with hazard lights on is a different context again
and thus should not trigger the same behaviour.

There are a lot of rules and laws and they change from country to country, or
in the US' case, even from state to state. Self-driving cars must know these
things and react accordingly. So I think the scenarios presented here are just
a few (admittedly, more far-fetched than others) more contexts amidst the
probably hundreds of others that already have to work correctly for switching
safely between city and motorway driving, driving in a living street,
observing right of way correctly in all circumstances (roundabouts, weird
stuff like four-way stops, signs changing ROW for one intersection, or a
stretch of road, lowered kerbs, people exiting a living street even though
it's to the right, cars on an on-ramp and perhaps letting them in based on how
far the on-ramp still continues, ...).

Stop signs are interesting in any case, since they have a characteristic
shape. If we go full autonomous, then snow-covered signs must be correctly
observed as well, at which point any octagon shape may be a stop sign
(perhaps, again, depending on context). Same with signs that don't reflect
well anymore at night.

~~~
TeMPOraL
The real sign SDVs are here will be when infrastructure starts accommodating
their needs. Humans aren't really good at driving either, so we've invented _a
lot_ of ways to help them and direct their attention. Open manholes are
supposed to be marked clearly, because people do miss that stuff. When there's
snow on the road hiding lane markings, someone will come and clean it out.
Signs are made to be retroreflective. Etc.

So at some point, I suppose the infrastructure (broadly understood - including
laws) may be modified to reduce the dependence on cultural context and other
things machines are weak at. So, for instance, it won't be _every_ sorta-
octagonal shape that works as a stop sign; stop signs will be required by law
to be clearly visible and to have some machine-friendly accommodations, and
SDVs will be free to ignore signs without those accommodations.

(Doesn't solve the prank problem, but humans are equally vulnerable to a
targeted prank anyway.)

~~~
felippee
> (Doesn't solve the prank problem, but humans are equally vulnerable to a
> targeted prank anyway.)

I would hesitate to use the word "equally". People are actually quite robust.
In particular, the second human in a row will certainly not be tricked by the
same prank that tricked the first one.

~~~
TeMPOraL
I want to emphasize the word "targeted" I used. Pranks involve an intelligent
agent with malicious intent and an attacker's advantage - i.e. the prankster is
free to exploit any vulnerability of its victim. People have different
vulnerabilities than machines, but they still have them.

~~~
felippee
Sure they do, I agree with that. But the word "equally" suggests the
susceptibility is the same. I would actually emphasise the difference. It is
much easier to fool a machine than a human, particularly if we have a copy of
the machine at hand and can tinker with it (see adversarial examples for deep
nets). Humans are all different, so we can never expect our "adversarial
example" to be 100% certain to work.

------
Animats
This is Toyota, which is coming from behind here. They started work on self-
driving last summer, which was late to be getting into it. They hired Gill
Pratt, who took over the MIT Leg Lab after Raibert left, but never did much
with it. His job was to start to build up an organization to work on self-
driving. So he has to make excuses for Toyota being behind.

Toyota's recent technology direction isn't looking good. Instead of selling
battery electric cars, they're selling hydrogen fuel cell cars in
California.[1] Nobody is buying. Their electric cars are mini-cars. Something
went wrong over there.

Pratt has some legitimate criticisms. But many of the really hard but rare
problems can be solved by stopping, or going really slow. When you're going
really slow, your sensor data from LIDAR is very good and you should have a
full ground profile. If you have to inch your way through a field of rocks,
that can be done. Remember, the DARPA Grand Challenge cars of 10 years ago
could drive off road.

As for "why did it do that" issues, that's mostly a problem for those self-
driving systems where machine learning is connected directly between camera
and steering wheel. Those are easy to build and give the illusion of working,
but are not going to work in hard cases. You want a world model and object
recognition, like Google. You can tell how well your object recognition is
working by checking its results at long range against its results at short
range.
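That long-range-vs-short-range check could be scored along these lines. A toy
sketch with entirely hypothetical labels; a real system would compare
per-track detections as objects approach.

```python
def range_consistency(long_range_labels, short_range_labels):
    """Fraction of tracked objects whose long-range classification matches
    the (presumably more reliable) short-range one for the same track."""
    assert len(long_range_labels) == len(short_range_labels)
    if not long_range_labels:
        return 1.0  # vacuously consistent
    matches = sum(l == s for l, s in zip(long_range_labels, short_range_labels))
    return matches / len(long_range_labels)

# e.g. 3 of 4 objects first seen far away kept the same label up close
score = range_consistency(["car", "truck", "sign", "car"],
                          ["car", "truck", "pedestrian", "car"])
```

A persistently low score would flag that long-range recognition can't be
trusted, without needing any ground-truth annotation.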

[1]
[https://ssl.toyota.com/mirai/fcv.html](https://ssl.toyota.com/mirai/fcv.html)

~~~
grogenaut
If you go really slow on a freeway with low visibility you get rear ended at
highway speed. Which doesn't work out well for you. Or the other person.

Heck if you go real slow on a highway and traffic isn't... you get rear ended
at highway speed.

~~~
hueving
If you go slow in low visibility on the freeway, you are doing the correct
thing. Driving too fast for the visibility is reckless and will get you a
ticket for driving incorrectly for the conditions (assuming you don't get
yourself or someone else killed).

[https://www.youtube.com/watch?v=i975P28WMNM](https://www.youtube.com/watch?v=i975P28WMNM)

~~~
argonaut
When it's raining on the freeway, people in the faster lanes _will_ slow
down... to around 50-60 mph (instead of 70-80 mph). I don't think 50-60 mph is
what Animats had in mind, though.

------
millstone
> if autonomous cars are even just 1 percent safer than humans, we should
> still be using them

We could improve safety by at least 1% in lots of other ways: lowering speed
limits, requiring rear view cameras, raising the licensing age and requiring
more frequent tests, cracking down on drunk driving, etc. Building a fully
autonomous vehicle seems unnecessarily elaborate for a 1% safety improvement.

~~~
ENGNR
We have many of those things in Australia (low speed limits and huge fines for
doing 3 km/h over the limit, sent automatically through the mail), but we're
seeing this:

http://www.abc.net.au/news/2016-10-26/speed-enforcement-detrimental-to-road-safety-study-finds/7965082

Most of our safety improvements have come from better car designs like crumple
zones, ABS, air bags.

The more you go against natural human behaviours and start treating humans
like machines with very low tolerances, the more it makes sense to just have
machines do the job.

~~~
megablast
Australia doesn't have low speed limits or huge fines. Many places, like the
City of Sydney, don't even have red light or speed cameras in the city; it is
a joke.

Australia does not take traffic crashes seriously at all.

~~~
threeseed
Sydney absolutely has red light cameras in the city.

But to be fair, given the level of congestion and the quality of the roads, I
doubt there is much point to speed cameras within the CBD. It's purely revenue
raising.

And strongly disagree that Australia doesn't take crashes seriously. We are
one of the most regulated societies in the world.

------
general_ai
Ha-le-freaking-lujah! I've been saying the same for years, ever since I
completed the Udacity self-driving car AI course (taught by Sebastian Thrun,
then head guru of autonomous vehicle research at Google). 70% of the problem
was solved in the mid-'90s by Ernst Dickmanns et al; Daimler-Benz had
autonomous cars on the autobahn. After billions of dollars and two decades, we
solved another 20% of the problem. The remaining 10% is not going to be
conclusively solved in the next 20 years at least, not under the current
constraints.

Now if roads are outfitted with instrumentation for autonomous vehicles, _and_
human drivers are prohibited on such roads, _then_ we _might_ see full
autonomy. But not before.

If it were up to me, I'd focus on this instrumentation instead: RF guide
wires/tags for car localization on the pavement, machine readable signage
(even in fog, snow, and heavy rain -- conditions with which cameras and LIDARs
cannot deal in principle), inter-car coordination mesh networks and security
thereof, autonomous vehicle readable road work signage, police gear to direct
traffic of autonomous vehicles, and so on and so forth. There. Hundred billion
dollars worth of startups in one paragraph.

As things stand, you can only be fully autonomous at 25mph in California where
it never rains, as long as there's no fog, no road work, and no one has messed
up the markings on the pavement.

~~~
exDM69
Yes! A bit of common sense would be useful here.

The current generation of self-driving cars is fairly impressive already, but
what I'd like to see is a city full of them. I have a feeling that self-
driving only works as long as the majority of drivers are human. It's
trivially easy to come up with traffic situations that could lead to a
deadlock if everyone blindly follows the rules.

Humans will use hand signals, eye contact or someone will violate the rules a
little or make way when they don't strictly have to in order to ensure that
traffic flows.

I think that car-to-car and/or car-to-road communications will be required
before large-scale deployment is possible. And I have not heard of any cross-
manufacturer effort to create a protocol for such communication.

Although I do understand why the automotive industry is hell bent on getting
their level 2 autonomy out there. Money from the customers is needed to keep
the R&D effort going.

I personally did not understand why anyone would want a Level 2 car where you
have to be constantly on the lookout until I visited Silicon Valley and drove
a stint on US Hwy 101 in rush hour. And I guess this is the initial target
market for the self-driving car industry: wealthy individuals who have a
stressful morning commute in stop'n'go traffic. Money from these early
adopters will go to funding the R&D for the next generation in the hopes that
Level 4/5 will some day become reality.

But in my conservative estimate, that's still years away from being adopted en
masse. There may be a significant minority of them on the road in 3-5 years
but I can't imagine it working very well if they were in the majority.

~~~
general_ai
At the moment I'd pay a lot of money for a car that I drive mostly myself, but
that also had an advanced safety system which makes it much harder to get into
an accident. I.e. fix my driving mistakes and lapses in judgment for me while
I'm in control. I'd be willing to pay a hefty premium for this.

------
imh
I hate this rhetoric about "saving even one life." It's closer to "saving five
lives but killing four people who otherwise would have lived." Some people say
that's the same thing. I disagree.

~~~
danenania
Yep. If autonomous vehicles become widespread before they are really ready and
the news starts filling up with accidents a human would have avoided, it will
cause a massive political backlash. The actual statistics will be irrelevant
to most people.

~~~
happosai
Very frustrating how people fail to see the full context. However, it is also
useful in this case. The engineers, marketers and CEOs of companies building
autonomous vehicles are aware of the huge negative media potential of a single
accident. Thus management and marketing have an incentive to give engineering
the time and resources to do things properly.

------
cperciva
We need a level between Level 4 (the vehicle can operate autonomously in
almost all conditions) and Level 5 (no manual control): Level 4.5, wherein the
vehicle can operate autonomously in almost all conditions _and can identify
and safely stop in other conditions_.

If there's a severe blizzard and my car can't see the road in front of it and
we're in a 4G dead zone and it can't get an accurate enough GPS fix to figure
out where it's going, I'd be perfectly happy if it safely pulled over to the
side of the road and stopped. Heck, I'd _prefer_ to have it safely pull over
rather than trying to continue based on some premise that "this vehicle must
be able to navigate under all possible conditions".

~~~
arebop
I think you've misunderstood level 4; it seems to be defined as you suggest
4.5 ought to be (http://www.sae.org/misc/pdfs/automated_driving.pdf): the car
has to operate safely after autonomous mode is engaged, even if the human has
no further interactions (even if the car requests human intervention).

~~~
cperciva
It seems like the definition of level 4 is a bit ambiguous; your link defines
it as operating without human intervention, but only in "some driving modes".
I've seen this interpreted as meaning that the vehicle can operate without
human intervention _as long as the driving mode does not change_ , but may
still require human intervention if the conditions change (e.g., it starts
snowing, or you exit the highway).

~~~
exDM69
The key distinction here is that a Level 4 car is safe even when the driver
_does not intervene_. If it starts snowing, the car will park by the side of
the road if the driver isn't there to take the wheel.

Level 3 is when you have ample time before the switchover, and Level 2 is when
the driver must be there in a matter of seconds.

------
chrischen
It's easy to accept this when you realize that nobody has even managed to make
a generally useful voice assistant yet; that shows how ludicrous the "close to
self-driving" claim is.

~~~
threeseed
I have to agree. Especially a car that can work worldwide.

For example, here in Australia we have an animal called the kangaroo, which
moves completely unpredictably when a car approaches. This includes jumping
straight in front of the car at the last second. Most drivers from these areas
know to slow down and drive in the middle of the road if at all possible.

Surely weird situations like this which require local knowledge exist all
across the world. But I've yet to see any acknowledgement that this is useful.

~~~
qball
>But I've yet to see any acknowledgement that this is useful.

I think that the reason this isn't acknowledged (by people who should know) is
likely a political one. Examining the solution I've typically seen given for
this problem should explain why.

First, the assumption is that driving algorithms are updated to account for
any edge case (for instance, an unpredictable animal jumped out this time)
after it occurs, so that case is then guaranteed to not re-occur.

This presents a problem: anyone who's ever worked with software before knows
that that's an impossibility until a sufficient number of incidents happen in
nearly identical ways, or until a fix is manually applied (which itself could
take a very long time).

But we fall back on the second part of our argument: statistics. The fact that
_you're_ going to crash into something you wouldn't have before is outweighed
by _more than one other person_ not crashing in a place where the computer can
predict and avoid a crash more effectively than a human.

This, too, presents a problem, if you also consider the implication that the
availability of manually driven cars will likely be degraded in some way after
the introduction of self-driving ones (through increased
cost/regulation/insurance premiums or simply being banned entirely).

If we discount these "weird" situations and fail to make allowances for them,
people will get hurt where they wouldn't otherwise (even if the local or
global sum of deaths is reduced). The benefits skew in favor of cities where
traffic behavior is/will be much more consistent.

So it's best if local knowledge can't be useful - why complicate the matter or
feed skeptics talking points if you don't need to?

------
tarcyanm
I love the idea of self driving cars but it scares me that no one seems to
remember the history of the 5th Generation project (I was a kid in the 80s)
and other AI hype-dreams over the years. It has always been the case that
approximate solutions can be found for 85% of cases, which has given the
illusion of a nearly solved problem.

There is a lot of hype regarding deep learning, but I have struggled to find a
concise definition apart from the fact that it is now relatively easy to work
with monstrously big networks. Backprop and related algorithms have been
around for decades. From what I remember of neural nets, one huge drawback was
that they would be close to un-debuggable. The learning contained in the net
would be inscrutable to a human, to all intents and purposes. Failure data
could be recorded and replayed, but any actual _reason_ for failure would
frequently not be found. So, tweak the network, resize some layers and try
again... I can think of several reasons why that's fundamentally unsuitable to
the problem of driving.

I was in Egypt recently, and the sheer amount of lane crossing, merging,
pedestrians ducking through multiple lanes of traffic, roadside obstacles,
donkey-drawn vehicles etc would be 100% impervious to even a level 3 solution
today. I believe the same would be true in India and many other parts of
Africa.

So, I really hope that we are not falling blindly into another 5th Generation
sinkhole here. History should have taught us better.

------
ohstopitu
Everyone is discussing how autonomous cars will react to various situations.
However, the real question is: once we have walled gardens, who'll control
them - the manufacturers or the government?

By that I mean, all automated cars (especially level 5) would need some kind
of network connection (both cellular and mesh) - which means "your" car (if
you own one - which from the looks of how things are shaping is going to be
highly unlikely), could easily be stopped for variety of reasons with or
without your permissions.

It could be hacked by three-letter agencies, or by hackers who could then
cause mass disruption.

While I'm not against self driving cars, I'm very cautious against the "self-
driving are the best - they'll reduce death on the road and if you don't
support it you are a death loving luddite who can't get on with the times"
school of thought.

We saw what happened when we gave up control of our phones, and that's slowly
happening to our computers (sure, most readers here can run/use Linux, but
Linux on desktops still has a much lower market share than Windows 10).

I just feel like it's not the right future that I imagined as a kid watching
star trek and it's disappointing.

EDIT: another unrelated point I'd like to add: if we want to have autonomous
cars on the road, we'd need co-operation between various companies (which
should be doable - browser vendors manage it, why not car companies?), but
more importantly, we'd need a huge overhaul of our infrastructure for
autonomous cars, and we'd need to outlaw humans driving cars on such roads.

------
sytelus
Approximately 35,000 people died in car accidents in the US in 2015. That's
almost 100 people each day. Imagine you had a self-driving car that was
exactly as good as humans. This car would kill 100 people each day despite
being human-level competent. It would have a slim chance of surviving the
public outrage and lawsuits. This suggests that self-driving cars need to be a
few thousand times better than humans. As in all machine learning problems, as
you get along the curve, each next 1% of improvement becomes much more
expensive than the last.
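The back-of-the-envelope arithmetic behind "almost 100 people each day"
checks out:

```python
deaths_per_year = 35_000          # US traffic deaths, 2015
deaths_per_day = deaths_per_year / 365
# roughly 96 deaths per day, i.e. "almost 100 people each day"
```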

One way to circumvent this is to establish special lanes for self-driving cars
where things are much more controlled and well defined, and where cars in that
lane can communicate with each other to avoid crashes. Long segments of
highway might be great candidates. This could start a virtuous cycle: people
buy self-driving cars to use that lane, which pressures authorities to make
more lanes available for them, until eventually most lanes are for self-
driving cars.

~~~
randcraw
35k die per year in the US alone. Worldwide it's more like 1.25 million.
http://www.who.int/gho/road_safety/mortality/traffic_deaths_number/en/

You don't need autocars to be 1000 times better than humans. You just need
them to be as good as the average driver ALL THE TIME, such that they're never
distracted, sleepy, driving unlawfully, etc. The occasional lapse in driver
competence causes the vast majority of crashes. I suspect the limitations of
driver (or autocar) perception play a small role in most accidents compared to
lapses.

As residents of a developed country with a very low death rate per mile
driven, we also see traffic risks very differently from the world's norm. The
annual death rate per 100k motor vehicles in the US is 13; in India it's 130;
in Africa it's 574. In the more unsafe countries, I suspect driver error is
the cause of 99% of accidents. Replacing human driving decisions with autocars
would reduce worldwide traffic fatality risk enormously, eliminating perhaps
90% of global traffic fatalities -- about one million people a year.

------
moflome
I personally tried to give the Google car "hard miles" by cutting it off in
traffic last year. I believe I was being helpful, but I feel bad about scaring
the "driver."

~~~
dsfyu404ed
Why wouldn't you cut it off if you saw it coming? You know it's gonna drive to
the letter of the law. Do you really want to get stuck behind that?

~~~
cdetrio
The article links to Rodney Brooks, who mentioned this in a recent blog post
(http://rodneybrooks.com/unexpected-consequences-of-self-driving-cars/):

> At least one manufacturer is afraid that human drivers will bully self
> driving cars operating with level two autonomy, so they are taking care that
> in their level 3 real world trials the cars look identical to conventional
> models, so that other drivers will not cut them off and take advantage of
> the heightened safety levels that lead to autonomous vehicle driving more
> cautiously.

------
lefstathiou
There is a great proverb that reads "man who says it cannot be done, should
not interrupt man doing it."

If you removed Tesla from the equation, Level 4 autonomy would be half a
decade away... it's pretty sad how far behind the industry is from this
perennially cash strapped upstart.

~~~
unityByFreedom
> If you removed Tesla from the equation, Level 4 autonomy would be half a
> decade away... it's pretty sad how far behind the industry is from this
> perennially cash strapped upstart.

This is simply untrue. Volvo will have a release this year.

The notion that Tesla are the only ones innovating because they're the only
ones using the public as guinea pigs is hogwash. Mobileye, the source of
Tesla's original self-driving technology, isn't even part of Tesla; they have
other clients.

~~~
johansch
Some Volvo press on that:

http://blog.caranddriver.com/meet-the-first-real-family-slated-to-get-keys-to-volvos-self-driving-xc90/

------
komodo
A genie offers you a deal: "I will prevent all car accidents that cause death
for an entire year. In exchange, at the end of that year, I will randomly kill
people, one by one, until as many as XX% of the prevented deaths are
'repaid'."

The 1% safety people would take the deal at 99% repaid deaths. But I think
most people would only be comfortable with a much lower number.

~~~
loafa
Well, it's not quite an equivalent situation. I'd feel personally responsible
for the particular set of individuals killed by the genie, rather than some
other set of individuals. That's too much responsibility for me; I didn't ask
for this crap, and I should know better than to trust random genie bargains.

~~~
taneq
In the least convenient universe (and therefore the most useful for examining
this moral quandary), you don't have the luxury of dodging the question. If
you choose to tell the genie to do nothing, then you're choosing for X people
to die to save Y.

(The scenario needs work, though - it should be Y people chosen from the road
users in question, not just Y random humans.)

------
ramzyo
I found this quote from Dr. Pratt particularly salient, "But fundamentally a
thing your readers should know is that this really comes down to testing. Deep
learning is wonderful, but deep learning doesn’t guarantee that over the
entire space of possible inputs, the behavior will be correct. Ensuring that
that’s true is very, very hard to do."

------
Negitivefrags
I made a bet with someone recently that within 10 years you will not be able
to buy, off the lot, a self-driving car that can drive around someone who
doesn't have a licence.

It's a testament to Tesla's propaganda that he took the bet. I look forward to
claiming my $500 in 10 years time.

~~~
erikpukinskis
So does he.

~~~
Neliquat
Wanna make a bet?

~~~
erikpukinskis
It's a bad bet because Tesla doesn't sell out of lots, and Google won't
either.

If you reformulate the bet as "it will be possible to legally ride in a
driverless cab without a license somewhere in the world by the end of 2027"
I'll take the bet for $20.

I think you might win your original bet on the technicality that car sales
lots will be averse to selling these kinds of things, while ride-sharing
services will be early movers and more amenable to insurers. Being able to
keep cars in a specific service area will help.

------
saosebastiao
It's been a while since I've looked at the actual numbers, but from what I
recall (the point could be made with any reasonable approximation of X), >80%
of the words in an average English-speaking adult's spoken-language corpus
exist in the vocabulary of the average English-speaking kindergartner.

In other words, it takes us two-ish years to learn how to use our vocal cords,
and another two to three years to get to 80% of an adult's vocabulary. And yet
it takes another 12 years just to increase our vocabularies a handful of
percent and to become fairly proficient at piecing together those words into
coherent and mature enough communication for full time employment.

And that is just one example. Learn 90% of Haskell in one afternoon... learn
the rest over the next two decades of your life. Learn 90% of derivatives
trading from one book, but spend the rest of your life learning the rest.
These aren't so much examples as a fact of life: there is an extremely long
tail to learning, and extrapolating where you will end up from how fast you've
learned up to some arbitrary point is impossible.

Self driving cars aren't just learning the rules of the road. They are
learning human spatial sensory perception and fast heuristics for ad hoc path
planning that have evolved over several millennia. And yes, they are
becoming extremely capable extremely quickly...but you won't be able to
extrapolate linearly to get to a point where they can take over.

~~~
CobrastanJorji
I'm not sure that's right. Googling around suggests that the average 5 year
old knows somewhere between 3000 and 10000 words, and the average adult knows
somewhere between 15000-20000 words. So I think saying a five year old has
even 50% of an adult's total vocabulary is probably generous. Of course, if
you meant that a five year old would understand about 80% of the words spoken
by an adult over the course of a day, 80% sounds like a very conservative
guess.

Not that it affects your point at all, I'm just thinking about words now. And
while I'm on the subject, XKCD's "Thing Explainer," a book on how things work
using only the 1000 most common words, is great fun.

~~~
saosebastiao
Definitely the latter. Played a little too fast and loose :)

------
nmeofthestate
It's great to see this posted on HN, which in the past has suffered from an
excess of gung-ho robocar advocacy IMO.

------
DonHopkins
Is this the same Gil (with one l) Pratt of the MIT Leg Lab? (Not to be
confused with the MIT Lego Lab! ;)

Are they different people, or did he change the spelling of his name when he
moved from legs to wheels? Or is he developing walking cars for Toyota?

Actually his first name is spelled two different ways on this one page, and he
looks like the same person, so maybe he changes the spelling of his name
frequently:
[http://images.sciencesource.com/preview/BA4147.html](http://images.sciencesource.com/preview/BA4147.html)

------
throw2016
It's refreshing to get a perspective that scopes the true scale of the problem
and addresses technical, societal and moral issues.

This seems to be a more responsible and comprehensive engineering perspective
than the frenzy and hand waving that accompanies most self driving topics
here.

Overestimating AI with little to no data and vastly underestimating not only
human driving but the varied traffic conditions they operate in doesn't seem
like a realistic or responsible way to solve the problem.

------
JohnJamesRambo
It's funny, Uber is betting the farm on autonomous cars becoming reality and
reading this reminds us of just how far away that is.

~~~
stale2002
What? This article is talking about level 5 autonomy. AKA, "works literally
everywhere and all the time".

Uber only needs to get to level 4 to change the world.

~~~
falcolas
If Uber gets to only level 4, it still needs drivers to handle those cases
that the car can't handle. Doesn't really change the world.

------
blazespin
So much wrong with this logic: "If autonomous cars are even just 1 percent
safer than humans, we should still be using them." Does that "humans" include
drunks, idiots, and teenagers? If so, I'm sorry, I would never use that car.
I'd like to think I drive 2x safer than the average idiot out there.

~~~
jwilliams
I took that statement to refer to the cumulative effect of autonomous cars --
Those people you describe are already on the road with you, whatever car
you're in.

------
Shivetya
Way back, I remember when William Shatner's TekWar was brought to TV. One
scene that stood out was the lead driving into town: on limited-access roads
(US interstate type), the car drove itself.

A similar idea would do quite a bit to move the technology forward and
increase acceptance. We already have projects that create toll and HOV lanes.
Why can't these also be equipped to assist self-driving cars with their tasks?

While I think the Tesla videos were cool (even GM did something similar
recently in San Francisco), they are still pretty much scripted events. Tesla
is guilty of exploiting the over-trusting side of the issue simply through its
product naming, and some of its demos could lead people to assume far too much
ability; will that become a legal liability?

------
Ericson2314
Gotta appreciate that shout-out to formal methods!

------
emodendroket
This is really excellent.

------
tapmap
just found this www.mobiltyx.co

------
tapmap
www.mobilityx.co

