
California plans to allow human-less self-driving car tests - rmason
https://www.engadget.com/2017/03/11/california-human-less-self-driving-car-tests/
======
rmason
Here in Michigan I've long been a loud critic of the way we let the
technology side of the car business migrate to California without much of a
fight.

But all of a sudden the Big 3 woke up to the fact that Silicon Valley would
control the high-margin part of the car business and they would be left with
the high-expense, low-margin job of actually manufacturing the vehicles.

So Michigan's government responded, just like it did when it banned the sale
of Teslas in the state without debate at 3 am. Now I'm very interested in
seeing how Michigan responds to this move by California.

------
revelation
What a great idea. Disengagements in 2016:

Google: 124

GM: 149

Tesla: 72 (on all of 550 miles)

The others I checked didn't log any meaningful mileage. Why expand the
testing when most companies can't even demonstrate their capabilities under
the current regime?

~~~
lern_too_spel
That's one disengagement every ~5,000 miles for Google last year, on average.
It sounds like California is right on time.
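
Back-of-the-envelope, using the disengagement counts quoted upthread and the
mileage each company reported to the DMV (Waymo's ~635,868 autonomous miles
is from its 2016 filing; Tesla's 550 miles is the figure quoted above):

    # Normalize raw 2016 disengagement counts by miles driven.
    # Counts are from the comment upthread; Waymo's mileage is the
    # ~635,868 autonomous miles in its 2016 DMV filing, and Tesla's
    # 550 miles is the figure quoted in the same comment.
    reports = {
        "Waymo/Google": {"disengagements": 124, "miles": 635_868},
        "Tesla":        {"disengagements": 72,  "miles": 550},
    }

    for name, r in reports.items():
        per = r["miles"] / r["disengagements"]
        print(f"{name}: one disengagement every {per:,.0f} miles")

    # Waymo/Google: one disengagement every 5,128 miles
    # Tesla: one disengagement every 8 miles

The raw counts alone say very little; per mile, Waymo and Tesla are orders of
magnitude apart.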

~~~
revelation
Except there is no graceful failure mode here.

~~~
stale2002
Of course there is. 99.9% of the time the safe behavior is to slam on the
brakes and pull off to the side while avoiding obstacles. Even humans often
fail at this simple behavior.

"Oh, stale2002, but what about highways!", and I'd respond by pointing out
that highway driving is the easiest thing for a self-driving car to do. Also,
it is STILL probably safest to slam the brakes if you are about to hit a car
pileup.

~~~
mannykannot
Pulling off to the side and avoiding obstacles is continued engagement. What
we need here is more information, specifically about how many of the
disengagements would have presented a risk (and the severity of the risk)
without a driver to handle them.

The second paragraph is a straw man argument.

~~~
lern_too_spel
Disengagements requiring immediate manual control are also in the
disengagement report:
https://www.dmv.ca.gov/portal/wcm/connect/946b3502-c959-4e3b-b119-91319c27788f/GoogleAutoWaymo_disengage_report_2016.pdf?MOD=AJPERES

One every 14000 miles in 2016, an improvement from one every 1500 miles in
2015. Waymo has already indicated that it wants to run a mobility service by
the end of the year. This regulation is right on time.
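
For scale, a quick sketch of what those two rates imply (the rates are the
ones quoted above; the 100,000 service miles is an arbitrary illustrative
number, not anything Waymo has announced):

    # Year-over-year improvement in the immediate-takeover rate, and
    # what the 2016 rate would mean for a hypothetical service fleet.
    miles_per_event_2015 = 1_500
    miles_per_event_2016 = 14_000

    improvement = miles_per_event_2016 / miles_per_event_2015
    print(f"Improvement: {improvement:.1f}x")   # ~9.3x

    service_miles = 100_000                     # illustrative only
    events = service_miles / miles_per_event_2016
    print(f"Takeovers per {service_miles:,} miles: {events:.1f}")  # ~7.1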

~~~
mannykannot
That is a valid point, but the report also says this:

"Safety is our highest priority and Waymo test drivers are trained to take
manual control in a multitude of situations, not only when safe operation
"requires " that they do so. Our drivers err on the side of caution and take
manual control if they have any doubt about the safety of continuing in
autonomous mode (for example, due to the behavior of the SOC or any other
vehicle, pedestrian, or cyclist nearby), or in situations where other concerns
may warrant manual control, such as improving ride comfort or smoothing
traffic flow."

This appears to throw doubt on the relevance of the statistics to driverless
vehicles because drivers have been instructed to preemptively disengage where
problems may be anticipated.

~~~
lern_too_spel
All of those disengagements, which have erred on the side of caution, have
been reported, so why does it throw doubt on the statistics? The rate of
disengagements has gone down so much that Waymo has announced a mobility
service by the end of the year, which seems aggressive to me, but within two
years seems perfectly reasonable. After all, they can limit service to the
environments that they know they'll have no problems with.

~~~
mannykannot
The statistics are not in doubt, but their applicability to the safety of
driverless vehicles is, given that the drivers preemptively disengage in
precisely the cases we are most interested in. The 1-in-14,000-miles figure
you quote effectively treats those preemptive disengagements as if they never
happened.

Waymo simulates those situations to estimate how they would have played out,
for the purpose of improving its algorithms. As a first analysis, there are
three possible outcomes: the car would have handled the situation safely, it
would have disengaged, or it would have done something that led to at least
the risk of an accident. The report does not appear to allow us to distinguish
between them, probably because they are somewhat speculative.
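
To make that concrete: if some fraction p of the preemptive disengagements
would in fact have gone badly without a driver, the quoted figure is only a
lower bound on the true failure rate. A rough sketch (the mileage and total
count are the figures already in this thread; the ~45 immediate takeovers is
implied by the 1-in-14,000 rate, and p is a pure unknown):

    # Effective failure rate if a fraction p of preemptive
    # disengagements would also have ended badly without a driver.
    miles = 635_868                    # Waymo's reported 2016 miles
    total = 124                        # all disengagements (thread above)
    immediate = round(miles / 14_000)  # ~45, implied by the quoted rate
    preemptive = total - immediate     # ~79

    for p in (0.0, 0.25, 0.5, 1.0):
        failures = immediate + p * preemptive
        per = miles / failures
        print(f"p={p:.2f}: one potential failure every {per:,.0f} miles")

At p = 1 this recovers the overall 1-in-~5,000-miles figure from earlier in
the thread, so the two quoted rates effectively bracket the real risk.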

This brings us back to where I joined the thread: the idea that 'slam on the
brakes' is a fallback position for not having a driver present does not work
when the system fails to recognize a developing problem.

------
Animats
Manufacturers have to take full responsibility for the behavior of their
vehicles. They're the driver. Reckless driving means jail time for the CEO.

~~~
notyourwork
Let's define reckless. Also, aside from the CEO, what about the engineer who
writes the code? I would think the CTO would be more responsible for reckless
automated operation than the CEO. Regardless, companies will probably have to
carry insurance similar to what people who drive cars have.

~~~
akira2501
> Let's define reckless.

Legal statutes and prior case law define it for us.

> Also, aside from the CEO what about the engineer who writes the code? I
> would think the CTO would be more responsible for reckless automated
> operation than the CEO.

Perhaps responsibility shouldn't be doled out by title but by actual action.

> Regardless, companies will probably have to have insurance similar to what
> people who drive cars have.

Absolutely, but there's no effective way to insure the value of your brand.

