
Trump administration re-evaluating self-driving car guidance - devy
http://www.reuters.com/article/us-usa-trump-selfdriving-idUSKBN1650WA
======
mcafeeryan92
This is a really difficult situation. Lowering regulatory overhead would mean
getting automated solutions to the public faster, and even if they weren't
optimally safe they'd likely be vastly better than human drivers; the sooner
we reap that benefit the better. However, there is a substantial long-term
risk to the viability of self-driving cars if any high-profile accidents do
happen, because outside of Silicon Valley people generally trust themselves
more than technology. So it might be better to continue the dangerous status
quo for longer, until a regulatory framework can assure safety, so that the
rollout of autonomous cars actually results in their adoption.

~~~
freehunter
I've recently been disappointed by the "Uber's car ran a red light!" stories
I've been seeing. I know a lot of people hate Uber for various reasons and
there's every reason to stay cautious about making sure self-driving cars are
predictable and have the right sensors on-board, but imagine if we saw a
headline on CNN every time a human-driven car ran a red light.

There's no excuse for a self-driving car to run a red light, but human drivers
do it millions of times every day and in most cases when we see it we just
shake our heads and say "what is that fool thinking?!" But a self-driving car
does it once and it's headline news for months.

~~~
marcell
Usually when a human runs a red light, they are aware that it's red. Typically
the light is just turning red, and they think they can still make it.

Uber's car, on the other hand, was probably not aware that the light was red,
which is more dangerous.

~~~
pyre
People also often run red lights because they aren't paying attention. That
still doesn't make headlines on CNN.

~~~
saynay
A person running a red light is driving just one car at a time. A self-driving
car's software is (potentially) driving many cars.

I do not think "a self-driving car runs a red light" is analogous to "a person
runs a red light". A closer analogy is to "a distinct class of people who run
red lights".

I do agree that the media response is overblown; however, I do not think it
is entirely without merit.

~~~
benologist
"A person" discounts the millions of other persons individually doing it
around the world - orders of magnitude more than driverless cars could
possibly achieve this decade. Their combined effort, while not just running
red lights, kills an estimated 1.25 million people a year.

[https://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate](https://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate)

------
alexc05
> urged companies to explain the benefits of automated vehicles to a skeptical
> public.

As a "tech guy" myself I have trouble fathoming a position of skepticism
around the benefits of self-driving vehicles.

I know it is generally a good idea to be skeptical, even as a "devil's
advocate", but to my mind there are so many upsides that the skepticism is
hard for me to connect with.

~~~
majormajor
Here's a bitterly cynical contrarian view: hope you _really really really_
like getting bombarded with ads even more now that you don't have to pay
attention to the road! Marketing opportunities abound! Or, hope you don't mind
shelling out for the "ad free experience" every trip.

Here's a less bitter contrarian view:

Self-driving vehicles will be bad for American city development and reinforce
existing patterns of inefficiency, waste, and self-segregation.

The reasons I expect this to happen:

* People will likely put up with longer commutes if they don't have to pay
attention the whole time.

* The number of cars on the road for a given level of congestion will
probably increase due to more efficient cross-vehicle coordination and faster
reaction times.

* I expect most consumers will still prefer to own rather than perpetually
rent their vehicle (the average age of cars on the road right now is 11.5
years - that's quite a few years of not making payments compared to being a
perpetual Uber-rider), especially given the convenience in family situations
(keeping stuff in the car all the time to entertain the kids, keeping the
back seats covered to protect from the dog fur and claws, etc.).

So those plausible resulting conditions seem likely to lead to less
centralized cities, reinforcing a future where we inefficiently spend a lot
of energy moving small numbers of people in individual cars rather than on a
denser, older-style urban rail system.

~~~
Turing_Machine
"Self-driving vehicles will be bad for American city development"

Cities don't have rights. Why are you equating "good for cities" with absolute
good?

"and reinforce existing patterns of inefficiency, waste, and self-
segregation."

What you call "inefficiency, waste, and self-segregation" other people call
"living the lifestyle they want, having a nice yard, and being able to raise
their kids away from the dirt, crime, and horrible schools found in the
cities".

Efficiency is not the end goal of human life.

If we were really after maximum efficiency, we could have everyone sleep in
bunks (maybe even hot-bunk it with three shifts) and eat all their meals in
communal chow halls.

But we don't. Why is that, do you suppose?

------
Animats
(I had a long reply on this, but Firefox 51 crashed and lost it. Rewriting).

California DMV's regulations for testing are straightforward: 1) get $5M in
insurance, 2) report all crashes immediately and disengagements annually, and
3) only trained drivers associated with the manufacturer can drive. That's
what Google wanted, and there are about 20 manufacturers with CA autonomous
vehicle testing licenses. Their reports are one of the few objective data
sources we have on self-driving cars. (Uber whined about having to report
their failures, refused to register, and moved testing out of CA.)

California's deployment regulations are still in flux, although there's a
draft version. So far, nobody has applied to deploy production self-driving
vehicles in California.

NHTSA started with definitions of levels of autonomy. Informally:

\- Level 0 - manual driving

\- Level 1 - auto brake or lane keeping, but not both.

\- Level 2 - auto speed/brake control and lane keeping (Tesla, BMW, Mercedes,
etc.) Driver may have to take over control at any time.

\- Level 3 - fully automatic driving on some roads. Driver may have to take
control, but not instantly. (Google/Waymo, Volvo)

\- Level 4 - fully automatic driving on all roads.

\- Level 5 - no driver needed.

NHTSA proposes that at level 3 and above, the manufacturer is responsible for
all accidents. Volvo is on board with this, and the US Big Three automakers
don't object. NHTSA sees a bright line between levels 2 and 3, and says that
at level 2, manufacturers should enforce hands-on-wheel. That's what got Tesla
into trouble - weak hands-on-wheel enforcement combined with PR that convinced
some drivers that "autopilot" was better than it is.

Urmson, when at Google, said that their testing taught them that humans cannot
take over fast enough to deal with automation failures. This echoes what the
aviation community has discovered over the years - when someone who isn't
actively flying has to take over control, it takes seconds or tens of seconds
before they have full situational awareness.

This means the incremental approach of adding more features to assist the
driver won't work. Somebody has to be in charge, either the human or the
computer. Ambiguity over who is driving leads to accidents. (For an aviation
view of this, see "Children of the Magenta", where a chief pilot talks to his
pilots about cockpit automation dependency.[1])

There was a congressional hearing on self-driving cars last week.[2]

[1]
[https://www.youtube.com/watch?v=pN41LvuSz10](https://www.youtube.com/watch?v=pN41LvuSz10)
[2]
[https://www.youtube.com/watch?v=vJsRF7NcjpM](https://www.youtube.com/watch?v=vJsRF7NcjpM)

~~~
unityByFreedom
> Urmson, when at Google, said that their testing taught them that humans
> cannot take over fast enough to deal with automation failures. This echoes
> what the aviation community has discovered over the years - when someone who
> isn't actively flying has to take over control, it takes seconds or tens of
> seconds before they have full situational awareness.

I'm disappointed that this didn't become the prevailing wisdom for self-
driving cars. With the current state of affairs, it seems there's a real risk
of a few accidents causing this administration to pull the plug on self-
driving altogether until we get a progressive administration again.

Perhaps Urmson ought to have published this research so that others could use
their methods and reproduce the results. It seems to me that the user study
was the most important part of their research.

~~~
Animats
It's in his SXSW talk.[1] It's the Google/Waymo and Volvo position, those
being the two companies closest to a product.

[1] [https://www.youtube.com/watch?v=Uj-rK8V-rik](https://www.youtube.com/watch?v=Uj-rK8V-rik)

~~~
unityByFreedom
2016 seems a bit late. Also, hindsight is 20/20, and I do nothing for self-
driving cars.

------
Cursuviam
Having read through the previous guidelines, I am not at all surprised by
this move. However, the changes will likely not alter the situation much: the
policy was already very deferential towards state regulation and
self-regulation, and really amounted to little more than guidelines.

------
dangrossman
This is relevant to a lot of the comments, so I'll just make it a top-level
comment. First, two videos of the latest version of the Tesla Autopilot
firmware, rolled out to Tesla vehicle owners this week. Each shows the car
repeatedly trying to kill its driver:

1:
[https://www.youtube.com/watch?v=uYav3_7miIc](https://www.youtube.com/watch?v=uYav3_7miIc)

2:
[https://www.youtube.com/attribution_link?a=zYHrJvXcz6M&u=%2F...](https://www.youtube.com/attribution_link?a=zYHrJvXcz6M&u=%2Fwatch%3Fv%3DUZ1XLqc5IUg%26feature%3Dshare)

Tesla broke ties with the previous supplier of its driver assist hardware and
software last year after serious disagreements over the safety of Tesla's
present and future Autopilot systems and how it was being marketed to
consumers.

It's now been just a few months, and Tesla's rolled out their own Autopilot
software to customers of cars built since the breakup with Mobileye.

That software doesn't have the years of testing the old software did. As you
can see in the videos I linked of the latest firmware, it does things like
inexplicably drive your car off clearly marked roads without warning. It is
not acting safer than a human driver would in these situations.

I'm not happy about sharing the roads with cars that may drive themselves into
me with no warning that there's anything wrong with what should be a simple
lane keeping system. Especially when that system doesn't force the driver to
keep their hands on the wheel, when obviously they need to be there with the
software in this state.

I want to eventually benefit from the kind of self-driving cars Google is
building (Waymo is down to a human temporarily taking the wheel once per 5000
miles of driving on public roads), without the harm of what startups like
Tesla can do by putting essentially untested systems on production cars I
share the road with.

There must be a regulatory middle ground?

~~~
unityByFreedom
> There must be a regulatory middle ground?

I think there is, I just wouldn't hold my breath for Congress to act on it.
California is leading the way on this. Elect the right people there and
hopefully that trickles up and across to other states.

------
btilly
My fear here is that this is mostly an excuse for established car companies to
get Trump to target Tesla. Trump has no particular reason to like electric
cars, and established car companies have every reason to fear Tesla. I further
suspect that Trump sees being bribed as simply a good business proposition...

------
unityByFreedom
What's the difference? The NHTSA guidelines were only suggestions, not laws.
They were a heads-up to manufacturers about what the public may demand from
companies in the future.

Plus, the states are the ones regulating this. California is leading the way
in independent testing, and it and a handful of other states are the only
ones that have begun requiring reports on accidents involving driver-assist
and autonomous vehicles.

Am I dumb for saying this administration is unlikely to make federal laws
surrounding this technology? Their whole argument is to shrink national
government and empower the states to do as they please. Every time they make a
national law they shoot themselves in the foot.

------
rdtsc
> She said self-driving cars could dramatically improve safety.

Agreed. I believe that's a fair assessment, and I am glad they are
recognizing it.

------
thrw99128391
Whenever some interest group senses that a thread is not going their way,
they use flagging (of the submission) to censor the unwanted opinions.

Flagging on HN is totally sick.

~~~
grzm
A lot of people on HN are exhausted by a lot of the political submissions. So
many of them descend into inflammatory, nonconstructive point scoring, which
is the opposite of HN's intent for civil, substantive discussion. People are
often flagging regardless of their own political leanings: they don't want HN
to have so many political flamewars.

------
DonHopkins
My immediate reaction to this web page was EEEEEK!!! SPIDERS!!!!

