
NHTSA’s full investigation into Tesla’s Autopilot shows 40% crash rate reduction - fmihaila
https://techcrunch.com/2017/01/19/nhtsas-full-final-investigation-into-teslas-autopilot-shows-40-crash-rate-reduction/?ncid=rss
======
Animats
It's interesting how vague this is. There's an NTSB investigation still
pending into a specific Tesla crash.[1] The goals are different. NHTSA asks
"do we need to do a recall?" NTSB asks "exactly what, in detail, happened
here?" NTSB mostly does air crashes, but occasionally they do an auto crash
with unusual properties. Here's the NTSB report for the Midland, TX crash
between a train and a parade float.[2] That has detailed measurements of
everything. They even brought in a train and a truck to reconstruct the
accident positions.

It took a combination of problems to cause that crash. The police lieutenant
who had informed the railroad of the parade in previous years had retired, and
his replacement didn't do it. The police marshalling the parade let it go
through red lights. They were unaware that the traffic light near the railroad
crossing was tied in to the crossing gates and signals. That's done to clear
traffic from the tracks when a train is approaching before the gates go down.
So ignoring the traffic signal took away 10 seconds of warning time. The
driver thought the police had taken care of safety issues and was looking
backwards at the trailer he was pulling, not sideways along the track. People
at the parade were using air horns which sounded like a train horn, so the
driver didn't notice the real train horn. That's what an NTSB investigation
digs up. Those are worth reading to see how to analyze a failure.

[1]
[https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...](https://www.ntsb.gov/investigations/AccidentReports/Pages/HWY16FH018-preliminary.aspx)

[2]
[https://www.ntsb.gov/investigations/AccidentReports/Pages/HA...](https://www.ntsb.gov/investigations/AccidentReports/Pages/HAR1302.aspx)

~~~
blazespin
The only thing that matters is "NHTSA notes that crash rates involving Tesla
cars have dropped by almost 40 percent since the wide introduction of
Autopilot"

Even if autopilot was faulty in some way, if more people were living than
dying - does it matter?

~~~
gambiting
Well, yeah. Imagine there's heart surgery which has only 20% success rate. So
every time 100 people go in for that surgery, only 20 survive.

Now, someone makes a robot that has 100% success rate, but every 100
operations, it has a glitch in the software, where it stabs and kills the
patient instead. It has been calculated that on average, using the robot only
kills 1-2 out of 100 people going in for surgery, compared to the old method
of 80 people out of 100. The robot is clearly 80 times better than manual
procedure, so why not keep using it?
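Run as arithmetic, the hypothetical looks like this (all figures are the made-up ones from the comment above, not real data):

```python
# Hypothetical figures from the surgery example (not real data).
patients = 100
manual_deaths = 80   # 20% success rate -> 80 deaths per 100 surgeries
robot_deaths = 1.5   # the software glitch kills roughly 1-2 per 100

lives_saved = manual_deaths - robot_deaths
relative_risk = robot_deaths / manual_deaths

print(f"Deaths avoided per {patients} surgeries: {lives_saved}")
print(f"Robot's relative risk vs. manual surgery: {relative_risk:.3f}")
```

Even with the glitch, the robot is dramatically safer in aggregate, which is exactly why the question becomes whether the fixable bug is tolerable rather than whether the robot is.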

The answer seems to be - because the robot killing people is not an unavoidable
outcome. It can be fixed. It's a software problem, a mistake that costs
someone their life. Similarly, an auto-driving Tesla may be killing fewer
people on average than manual drivers, but that doesn't mean that any software
problems that lead to it don't matter.

~~~
loup-vaillant
Step 1: _don't_ recall the robot surgeon. While it does have a fatal bug,
recalling it would kill 79% more patients. Oops.

Step 2: Correct the bug, so the robot stops killing patients.

The Tesla autopilot may be in a similar situation. Even if there are a number
of fatality-inducing bugs, disabling it would be even worse. That said, 40%
crash reduction is not enough. I want to see 75%, 90%, and more.

~~~
Piskvorrr
You are omitting the PR angle: "I DON'T CARE THAT FAR MORE PEOPLE WOULD DIE
OTHERWISE - BAN KILLER ROBOTS NOW!!!!"

~~~
loup-vaillant
That one is easily dealt with: "Our robot surgeon reduces the fatality rate by
98.5%. Our study of the remaining fatalities suggests we can reduce them
further. We hope to reach a near-zero fatality rate within the year."

Don't even speak of the bug. Just state what matters: this stuff is better
than what we had before, and it can (and will) be even better. This may
prevent talk of killer robots.

------
uncoder0
After looking at the report it looks like Tesla ran into the same issue we did
in the 2007 DARPA Urban Challenge. The trailer was higher than the front
facing sensors. We and most other teams had all assumed 'Ground Based
Obstacles' meant that any obstacles on the test track would make contact with
the ground in the lane of travel. DARPA decided to put a railroad bar across
the street and expected cars to back up and do a U-Turn when they encountered
it. The bar was too high off the ground for our forward LIDAR to see it so we
collided with the bar at nearly full speed.[1] The sad part about this is that
when we were drinking after dropping out of the challenge our team leader said
something along the lines of 'At least we know no one will ever die now from
the mistake we just made.'

[1] [https://www.wired.com/2007/10/safety-last-for/](https://www.wired.com/2007/10/safety-last-for/)

~~~
mjevans
Predictions like that only work out if the lesson is sufficiently broadcast.
Clearly, since this is still newsworthy here, that is not the case. (However
this back channel is helping.)

~~~
maxander
If there isn't some compilation or review article along the lines of "all
serious failure modes encountered in autonomous vehicles since 2005-ish,"
there should be.

~~~
taneq
That's an excellent idea and should be managed centrally (by the NHTSA or
similar). Basically a communal regression test suite that all self-driving
vehicles have to pass.
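As a sketch of what one entry in such a shared suite might look like (the schema, names, and case ID here are invented for illustration, not any actual NHTSA format):

```python
from dataclasses import dataclass, field

@dataclass
class FailureCase:
    """One entry in a hypothetical shared registry of AV failure modes."""
    case_id: str
    year: int
    description: str   # what went wrong
    sensor_gap: str    # why the sensors/logic missed it
    tags: list = field(default_factory=list)

# Example entry based on the DARPA Urban Challenge story upthread.
registry = [
    FailureCase(
        case_id="DARPA-2007-RAILBAR",
        year=2007,
        description="Horizontal bar mounted above the forward sensor plane",
        sensor_gap="elevated obstacle not intersecting ground-level scan",
        tags=["lidar", "elevated-obstacle"],
    ),
]

# Every certified vehicle would have to demonstrate handling of each case.
for case in registry:
    print(case.case_id, "-", case.sensor_gap)
```

The point of a central registry is exactly the one uncoder0 makes: a lesson only prevents future deaths if every team is forced to test against it.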

------
snewman
Tesla comes off extremely well in this report. For one thing, the 40%
statistic cited in the headline appears to be well supported by the NHTSA
report (section 5.4) and actually manages to frame the incident in a very
positive light:

 _ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY
2014 through 2016 Model S and 2016 Model X vehicles equipped with the
Autopilot Technology Package, either installed in the vehicle when sold or
through an OTA update, to calculate crash rates by miles travelled prior to
and after Autopilot installation. Figure 11 shows the rates calculated by ODI
for airbag deployment crashes in the subject Tesla vehicles before and after
Autosteer installation. The data show that the Tesla vehicles crash rate
dropped by almost 40 percent after Autosteer installation._

I had hoped to see more information about this specific incident. For
instance, any data on whether the driver had his hands on the wheel, what
steps the car had taken to prompt his attention, etc. But that doesn't seem to
be included.

~~~
Roritharr
I really would like to know the absolute numbers for this reduction. It's
really questionable whether this would hold up with millions of Teslas on the
road in the hands of less wealthy (probably better educated) drivers.

~~~
lamontcg
> wealthy (probably better educated) drivers.

You mean drivers like the ones behind the wheel of all the BMWs and Audis that
cut me off with no turn signals, speed, make aggressive lane changes,
tailgate, and generally act like giant douches because a simple traffic ticket
is nothing to their pocketbook while their time == money?

Those people are going to be a hell of a lot safer behind the wheel of an
autonomous (level 4) vehicle where they can be on the phone and their laptop
as the vehicle obeys the speed limits and safe following distance.

~~~
brokenmachine
Anecdotally I have a lot more near-incidents with 4WDs and little hatchback
getabouts. In my experience the 4WDs have been distracted or drifting outside
their lane, and the little getabouts have been tailgating and driving
aggressively. Having "P" plates (1) also raises the chances _massively_.

Could just be the numbers of those kinds of cars in my area though.

Generally I feel that a douche can drive any type of car, and I try to drive
in fear like everyone else on the road is a drunk prison escapee.

(1) provisional licence holders - new drivers (and also people who have lost
their licence) have to display "P" plates for a year here (Australia).

~~~
mjevans
I find that drivers who fail to keep right except when passing are the leading
cause of 'aggressive' and 'unusual' passing conditions.

~~~
lamontcg
Because the aggressive drivers behind them aren't responsible for their own
actions in rage passing left lane hogs?

Somehow I manage to consistently and safely undertake people who are going
slow in the left lane. It's not that hard.

------
xenadu02
For those who don't want to sign up to Scribd just to download a publicly
available PDF: [https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF](https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF)

~~~
sigmar
Thank you! I'm completely unwilling to register for that site just to download
a pdf

------
stale2002
Oh, hey, will you look at that.

The imperfect, incomplete, beta, level 2 self driving cars that were supposed
to be the "dangerous" area of self driving are ALREADY better than human
drivers.

Can we stop the politics and deploy all the real self driving cars to the road
immediately, since the government has proven that even the shitty variety is
safer than humans?

~~~
mikeash
Level 3 is the real danger area. That's the level where the driver is able to
do other things, but must be prepared to retake control at any time. Those two
ideas are basically contradictory: if you're not already involved with
driving, it takes too long to get back into it to do anything useful.

Tesla's system is level 2, meaning that the driver must (as in, is supposed
to) remain engaged and aware of the driving task at all times, although the
car will handle common cases on its own.

I believe the plan for everybody is to go straight from level 2 to level 4,
since it looks like level 3 is just not going to work.

In any case, we can't stop the politics and deploy all the real self driving
cars to the road immediately, because there aren't any real self driving cars
to deploy yet. The technology is advancing extremely quickly and it's getting
close, but politics is _not_ the only obstacle.

If and when we have a real, production-worthy self-driving system that's being
blocked by politics for no good reason, we can revisit this. I get the sense
that, at least in most places, the politics will be pretty easy once the
technology gets there. Getting a lot fewer registered voters killed has a way
of swaying legislators. Especially in the US, where it's a state-by-state
decision, a few states will want to get a jump on it and then the rest will
face immense pressure to follow.

~~~
Pxtl
There is zero chance I'd stay awake behind the wheel of a level 2 or 3
vehicle.

~~~
Dan_Nguyen
I'm in the same boat as you here. Level 2 and 3 are unsafe for me as an
individual. When I drive, actively focusing on the road and driving keeps me
awake. When I'm a passenger, the lack of mental concentration plus the gentle
rocking motion of a moving car knocks me out within ten minutes.

The current Tesla autopilot is obviously working well for many people, but for
myself I have to wait until level 4. Anything more hands on than that is
inherently unsafe because of my habits.

~~~
DonHopkins
What kind of "CARTCHA" challenges and games might be fun to play, to prove to
your car that you were paying attention to the road and traffic, yet not
distract you from being ready to take over driving?

Computer vision and speech recognition could be useful for proving the
driver's awake, as well as driving the car itself.

[https://en.wikipedia.org/wiki/Car_numberplate_game](https://en.wikipedia.org/wiki/Car_numberplate_game)

~~~
1209120931
The challenge of driving the car manually, without any hyped autopilot.

~~~
NamTaf
This. There's nothing else because any other task requires diverting mental
focus away from the act of constantly assessing the changing environment
around your car. That by definition leads to slower reaction times.

------
sxp
The 40% number isn't very informative. The report has multiple notes about it:

 _ODI analyzed data from crashes of Tesla Model S and Model X vehicles
involving airbag deployments that occurred while operating in, or within 15
seconds of transitioning from, Autopilot mode. Some crashes involved impacts
from other vehicles striking the Tesla from various directions with little to
no warning to the Tesla driver._

 _ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY
2014 through 2016 Model S and 2016 Model X vehicles equipped with the
Autopilot Technology Package, either installed in the vehicle when sold or
through an OTA update, to calculate crash rates by miles travelled prior
to[21] and after Autopilot installation.[22] Figure 11 shows the rates
calculated by ODI for airbag deployment crashes in the subject Tesla vehicles
before and after Autosteer installation. The data show that the Tesla vehicles
crash rate dropped by almost 40 percent after Autosteer installation._

 _21 Approximately one-third of the subject vehicles accumulated mileage prior
to Autopilot installation._

 _22 The crash rates are for all miles travelled before and after Autopilot
installation and are not limited to actual Autopilot use._

So the actual rates of crashes for Teslas using Autopilot vs Teslas not using
Autopilot aren't reported.

~~~
gwern
Reading it, I think it is essentially a 'before October 2015' vs 'after
October 2015' comparison of overall accident rates, given the caveats there.
So the -40% here should be a lower bound, since it includes a time period with
0% Autopilot usage (before October 2015) and a time period with an unknown but
<100% Autopilot usage (after October 2015). If people after October 2015 used
Autopilot for 50% of their mileage, then you'd expect the true effect of
Autopilot to be more like -80%.
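The lower-bound reasoning can be made explicit. Assume a fraction f of post-install miles were driven on Autopilot at a crash-rate ratio r, while the other (1 - f) miles crashed at the baseline rate, so overall = (1 - f) + f*r. The numbers below are illustrative, not from the report:

```python
# Observed: overall crash rate after Autosteer install is ~60% of before.
overall_ratio = 0.60

def autopilot_ratio(f, overall=overall_ratio):
    """Implied crash-rate ratio on Autopilot miles, if a fraction f of
    post-install miles used Autopilot and the rest crashed at baseline:
    overall = (1 - f) + f * r  =>  r = (overall - (1 - f)) / f."""
    return (overall - (1 - f)) / f

print(autopilot_ratio(1.0))  # all miles on AP -> 40% reduction (the floor)
print(autopilot_ratio(0.5))  # half the miles -> implied ~80% reduction on AP
```

So the -40% is the floor only under this mixing assumption; as the comment notes, other fleet-wide trends could also be contributing.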

This strikes me as suspiciously high: what fraction of airbag-worthy accidents
are supposed to happen on the highways where one can use Autopilot at all? So
maybe there's some other trend going on which is driving this decrease.

~~~
OrwellianChild
This accident reduction is driven not by the auto-driving part of the package,
but by the emergency response part. The autonomous braking and evasive
steering that occurs when an accident is imminent is what is preventing
accidents.

~~~
usrusr
Also, airbag activation isn't a perfect proxy for accident rate. Passenger
safety is only a subset of vehicle safety.

------
randomstring
Waiting for the headline: "Human Fails to Prevent Accident, Outraged Public
Calls for Banning of all Human Drivers"

The obsession with perfection in self-driving cars is misplaced; they just
need to be demonstrably better than humans.

This is obviously the future.

~~~
NamTaf
At the very least from a legal standpoint that's not true, because with a
human driver it's far easier to assign fault (which in turn allows for the
issue to be resolved) than if two autopilots happen to collide. Who pays for
what then?

From a safety point of view, I'm more inclined to agree however I'm not
entirely convinced that we've passed the novelty stage of it yet where drivers
will remain vigilant because it's a new and exciting feature and that mentally
primes them to keep focused. It wouldn't surprise me if, as it becomes more
mainstream, drivers begin to be less vigilant about staying focused as society
as a whole begins to trust it more. That's when everything could go bad.

Nevertheless, you're absolutely right that this is the future no matter what
so we need to design it as best we can to resolve all of the issues it may
give rise to.

~~~
furyg3
There are all sorts of cases in real life now where defining fault is not at
all easy. Children and animals are two examples of 'autonomous' individuals
whose liability _may_ lie with another party (parent/owner), but also may not,
depending on the circumstances.

Additionally, who's at 'fault' really doesn't matter when in most cases the
damages are fully outsourced to insurance providers. In the case of autonomous
driving, this insurance burden is shifted up a level to the automaker, but
this cost will be passed on to the car's owner in some way, either in the cost
of the car or a subscription to autonomous driving.

However if autonomous driving is safer than non-autonomous driving, premiums
will go down, probably dramatically. Insurance companies can much more easily
monitor the quality of autonomous driving for various automakers / cars either
by the testing specification that they meet or the real-world statistics. Are
autonomous Teslas in fewer accidents than autonomous Volvos? Cheaper premiums
for Tesla.

This is exactly the situation you want to be in. Premiums for individuals are
a blunt instrument... it's hard to change many individuals' behavior, the
rewards may be low for the individual (save $3 on your premium!), and the
penalties can be pretty random (you sped that one time so you're a high-risk
driver... or are there just more police in your city?). It's way better to
incentivize at a higher level, like the automaker.

The only case where automakers would truly be at risk is something they
_should_ control, and that is if they pulled a VW and committed fraud on their
autonomous driving tests, or were negligent in overlooking scenarios or
providing updates.

------
huangc10
Can anyone who is in the industry comment on how Autopilot performs in poor
weather (i.e. flash floods, thunderstorms, snowstorms, etc.)?

All I can find from the article about weather was in section 3.1:

> The manual includes several additional warnings related to system
> limitations, use near pedestrians and cyclists, and use on winding roads
> with sharp curves or with slippery surfaces or poor weather conditions. The
> system does not prevent operation on any road types.

~~~
problems
Not sure about Tesla's, but many others are completely disabled or say
specifically not to use them in poor weather conditions.

You have to look at what they're using for this, it's cameras with computer
vision and radar. Radar has reflection problems like all hell at ground level
and computer vision using cameras is generally piss poor (hence why you see
videos of the Tesla hitting reflective objects).

It's a miracle that they can do this well using it IMO, even without adverse
weather.

------
cbr
This is really good news. A major worry with driverless cars has been that
companies would be harshly punished for accidents, even if there was a
dramatic reduction in crashes overall.

~~~
scotty79
They managed to weasel themselves out of a similar problem when they rebranded
vehicular manslaughter to jaywalking, shifting the blame to the victim. I'm
sure there's no problem here that a bit of marketing and societal engineering
can't solve.

------
lz400
I think there's sometimes a lot of marketing and hand waving in this type of
argument "crashes go down with autopilot". Most car accidents are caused by
drunk or old people, and they drag the average up. If you tell me a Tesla
autopilot beats a drunk guy, it won't surprise anyone. Now as a non-drunk,
young(ish) driver but experienced and careful, my statistics don't look
anything like the average, they look at lot better. You have to convince this
demographic, not beat the averages otherwise it's not rational for me to buy
the feature. I'm guessing this goal post is a lot harder.

~~~
phinnaeus
I'm confused. You think young(ish) people need to be convinced to buy semi-
autonomous vehicles? If they can afford it, why wouldn't they buy it? People
love things that make their life more convenient.

In my experience, it's the older people that are somehow scared of the concept
of a self-driving car and would need to be convinced.

~~~
lz400
Because of safety. I wouldn't buy an autonomous car that has an accident rate
worse than my demographic's. A car that has an accident rate better than
average is not good enough. I hope that explains my point.

------
brilliantcode
For your reference, a level 4 automated car will look something like this:

[https://www.youtube.com/watch?v=jhUX8qWFGc4](https://www.youtube.com/watch?v=jhUX8qWFGc4)

Imagine you are too fucked up to drive. Your car will be able to pick you up.
Do you need Pepto Bismol too? Your car will pick it up from a drive through
billed through your license plate. I'd give roughly 15~20 years for this to
take place.

------
bcaulfield
So I'm far less likely to crash if I use this, and I have something to blame
if I do. Everybody wins! (Except the engineers).

------
brighton36
The truck was visible for at least 7 seconds prior to the crash in the full
report - another article here:

[http://arstechnica.com/cars/2017/01/after-fatal-tesla-crash-...](http://arstechnica.com/cars/2017/01/after-fatal-tesla-crash-probe-us-regulators-conclude-theres-no-need-for-recall/)

Strangely enough, giving more people autopilot would probably be better than
letting people drive. I think Tesla's picked the right time to enable it,
since the cross-over point where autopilots become better than humans in
general use cases has been reached.

Call it a beta if you want, but it's a pretty damn promising beta.

------
themgt
I don't always like Gladwell, but his piece on the Ford Pinto and the NHTSA
philosophy towards auto safety more generally is quite worth the read [1]. I
hadn't considered the intersection of this and self-driving car tech, but I
wonder if NHTSA will basically take the position that as long as self-driving
tech saves lives overall, a few "bugs" where the car kills the driver are an
acceptable trade-off.

[1] [http://www.newyorker.com/magazine/2015/05/04/the-engineers-l...](http://www.newyorker.com/magazine/2015/05/04/the-engineers-lament)

------
tn13
The 40% figure is meaningless unless the absolute numbers are reported. How do
we know if this difference is statistically significant?

~~~
elchief
It was 1.3 crashes / million miles. It's now 0.8 crashes / million miles.

Teslas have driven >= 3 billion miles.

~~~
danielharan
So 3000X the difference of 0.5, or 1500 crashes avoided.

:O
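Spelled out with the figures quoted upthread (approximate, as given in the comments):

```python
# Approximate figures quoted in the comments above.
rate_before = 1.3      # crashes per million miles, before Autosteer
rate_after = 0.8       # crashes per million miles, after
miles = 3_000_000_000  # >= 3 billion fleet miles

reduction = (rate_before - rate_after) / rate_before
crashes_avoided = (rate_before - rate_after) * miles / 1_000_000

print(f"Relative reduction: {reduction:.0%}")     # the 'almost 40 percent'
print(f"Crashes avoided: {crashes_avoided:.0f}")  # on the order of 1500
```

Note this assumes the 0.5-per-million-mile difference applies uniformly across all 3 billion miles, which it doesn't (not all of those miles were post-Autosteer), so 1500 is a rough upper-end estimate.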

------
mrtron
What other car company can even recover the airbag deployment rate per mile?

~~~
greglindahl
I believe that GM's OnStar also collects that data.

------
em3rgent0rdr
I only want to use open-source code for something that my life depends on.
That way it is open to anyone to inspect, so one can independently determine
if the code is behaving as desired.

------
ChuckMcM
That is a pretty remarkable report. It essentially holds Tesla up as an
exemplar of the standard other car makers will be expected to achieve.

------
ridiculous_fish
This means that autopilot must be engaged at least 40% of the time (Amdahl's
law!). Tesla owners, is that realistic?
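The bound works like an Amdahl's-law argument: the fleet-wide reduction can't exceed the fraction of miles Autopilot is engaged times whatever reduction it achieves on those miles. A toy version (the usage fractions below are illustrative; the report doesn't give them):

```python
# Amdahl-style bound on fleet-wide crash-rate reduction.
def max_fleet_reduction(engaged_fraction, on_ap_reduction=1.0):
    """Fleet-wide reduction if Autopilot cuts crashes by on_ap_reduction
    on the fraction of miles where it is engaged, and changes nothing
    on the remaining miles."""
    return engaged_fraction * on_ap_reduction

print(max_fleet_reduction(0.40))       # perfect AP on 40% of miles -> 40%
print(max_fleet_reduction(0.80, 0.5))  # imperfect AP needs more coverage
```

One caveat from elsewhere in the thread: features like automatic emergency braking stay active even when Autosteer isn't engaged, so the "engaged fraction" framing only bounds the Autosteer contribution, not the whole package.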

------
schraitle
Does anybody know what the "Population" field indicates at the top of the
report?

------
zekevermillion
Impressive. 2/5 reduction is a lot of lives saved.

------
NamTaf
I've railed on about the safety issues of autopilot before and how I'm not
entirely comfortable with the pace they've developed compared to the
considerations of human-machine interfaces and driver attentiveness,
particularly given my (moderate) exposure to these sorts of problems in other
industries. Thus I'm particularly interested in that section of the report.

What I found interesting is that figure 10 shows that as you jack up the
independence of the machine, the level of driver distraction accordingly
increases. Adaptive Cruise Control (ACC) shows a significantly higher
percentage of shorter-duration off-road glances than Limited-Ability
Autonomous Driving Systems (LAADS). Additionally, countermeasures help to
alleviate _some_ but not _all_ of that increase in distraction. Importantly,
this is coupled with the point that the time drivers have to react to most
impending crashes is under 3 seconds. This may seem obvious but it's a
critical set of data to help objectively demonstrate the risks involved with
losing or even reducing alertness.

It goes on to say that Tesla has addressed the risks of mode confusion,
distraction, etc. and has implemented solutions to address this unreasonable
risk, which they in turn define as abuse that is reasonably foreseeable. In
this, they're talking about the reasonably foreseeable risk of, e.g., the
driver not understanding whether they're in autopilot or not. It goes on to
mention that
Tesla has also changed its driver monitoring strategy to promote driver
attention, which I take to mean detecting hands on the steering wheel.

Either way, Tesla's main approach to dealing with driver alertness is testing
for hands on the steering wheel. My concern is that this doesn't consider the
alertness of the driver to their surroundings, particularly other vehicles
that may be approaching them or the process of anticipating hazards
(approaching an intersection where there's a blind corner and adjusting focus
to pay more attention to what may come from it, for example). I don't see how
Tesla's countermeasures address this.

The physical act of manually driving causes drivers to maintain alertness not
only to where they're going, but also the situational alertness of what's
around their vehicle. Specifically, it's the process of _random_ actions,
requiring the driver to take input, make a decision, and execute the
appropriate action, that maintains this alertness. If the driver isn't having
to make those
random decisions and take action then their alertness drops. Autopilot, even
with hands on the wheel, eliminates much of that random decision-making and
reacting.

When you drive, you mentally note the vehicle over your shoulder that is in
the lane next to you, and subconsciously consider that they may do something
insane. You consider those blind corners as you approach them and that
vehicles may spontaneously appear out from them. You see a truck on the road
which is approaching a bend and give it a wider berth because its centre throw
may cause it to cut the corner into your lane slightly. These are all tasks
that you do, that you may not do as well or at all when autopilot is steering,
because you are not as engaged with the driving process.

Critically, I don't see how ensuring hands are on the steering wheel causes
these alertness tasks to continue as frequently as manual steering. The driver
may be in the physical location to quickly take over, but they may not be in
the _mental_ location to do so. This is the major issue I have with the rapid
autopilot development based on my experience in related areas where
maintaining situational alertness proved to be very difficult when the person
was engaged with only a limited scope of requirements to prove their conscious
presence.

I feel like the report doesn't really drill in to this as much as it needs to.
It begins to touch on it around Figure 10 but sort of hand-waves it away
saying 'Tesla considered it discharged their responsibility to make sure
drivers stay focused by implementing countermeasures', but I believe it's more
nuanced than that. It investigates the extent to which Tesla's system is good
at ensuring drivers are physically present (that is, their hands aren't on the
passenger seat making breakfast) but it doesn't really look at the _mental_
presence that delivers situational alertness.

That mental alertness is the major sticking point for me. I don't really have
a solution beyond "drive manually" which isn't reasonable, because this
technology is here to stay and will continue to grow, but it's why I've always
been bearish about the rapid pace of rollout of these driverless technologies,
particularly when advertised as 'beta'. As I've said before, no amount of
disclaimer and 'hey, you _should_ do this' really changes how drivers behave
once the equipment is placed in their hands.

------
sandworm101
Great. There is no doubt that driver assists cut down on crashes. But what
Tesla has on the road is far from a total eyes-closed autopilot. That is an
inflection point with this tech that nobody has dared to test on the public
road. I remain unconvinced pending those trials.

Also, I still haven't seen any autodrive handle off-road driving such as
boarding a car ferry, or navigating a construction zone manned by an
inattentive flag person.

~~~
mikeash
Haven't there been a lot of tests of self-driving technology on public roads?
Tesla has released a couple of videos of theirs, and Google has put a couple
million miles on theirs. What hasn't happened yet is a production-ready,
customer-owned vehicle on public roads.

~~~
sandworm101
Not unsupervised. Unleashing a robot into school zones without a human in the
loop is still considered too dangerous legally if not physically.

~~~
scotty79
That's pretty much the rule about all robots operating in proximity to humans
(except perhaps elevators, escalators, automatic doors and toy robots).

------
battlebot
I don't completely trust the NHTSA and I'm skeptical about auto-piloting cars
but accept that more and more of them will be on the roads. I will _never_
ride in a vehicle that lacks an override mechanism.

In general, I think we are moving way too fast towards these self-driving
vehicles because certain factions want to try and replace long and short haul
truckers with robotic systems that are cheaper and damn the consequences.

------
dkonofalski
I don't really know why this is surprising. Computers are already better than
humans at most tasks that involve a limited set of behaviors and they have
infinitely better response time than humans (and continue to get better). How
could anyone think that a report like this was going to end up any
differently?

~~~
krschultz
The history of self driving cars shows that it is not an easy task. We've been
working on this for over 15 years. I was on a self driving car team during
2008-2010. We won multiple intercollegiate competitions. At that time I was
certain it would be 20+ years before we had something that could drive on the
roads. The progress has been spectacular.

~~~
dkonofalski
No, programming the self-driving car is not an easy task. A car that's already
been programmed, though, is definitely going to be safer than a human driver
and driving for the computer _is_ an easy task.

~~~
inimino
> definitely going to be safer than a human driver

That's just ridiculous. It is going to be safer in some cases and less safe in
others, and obviously it depends on the system and the driving conditions. To
say that a computer is necessarily a better driver is just science fiction
until we have solid data showing that to be the case.

