
Self-Driving Cars Must Meet 15 Benchmarks in U.S. Guidance - etendue
http://www.bloomberg.com/news/articles/2016-09-20/self-driving-cars-must-meet-15-benchmarks-in-new-u-s-guidance
======
Animats
From the regulations: _"Fall back strategies should take into account
that—despite laws and regulations to the contrary—human drivers may be
inattentive, under the influence of alcohol or other substances, drowsy, or
physically impaired in some other manner."_

NHTSA, which, after all, studies crashes, is being very realistic.

Here's the "we're looking at you, Tesla" moment:

 _"Guidance for Lower Levels of Automated Vehicle Systems"_

 _"Furthermore, manufacturers and other entities should place significant
emphasis on assessing the risk of driver complacency and misuse of Level 2
systems, and develop effective countermeasures to assist drivers in properly
using the system as the manufacturer expects. Complacency has been defined as,
“... [when an operator] over-relies on and excessively trusts the automation,
and subsequently fails to exercise his or her vigilance and/or supervisory
duties” (Parasuraman, 1997). SAE Level 2 systems differ from HAV systems in
that the driver is expected to remain continuously involved in the driving
task, primarily to monitor appropriate operation of the system and to take
over immediate control when necessary, with or without warning from the
system. However, like HAV systems, SAE Level 2 systems perform sustained
longitudinal and lateral control simultaneously within their intended design
domain. Manufacturers and other entities should assume that the technical
distinction between the levels of automation (e.g., between Level 2 and Level
3) may not be clear to all users or to the general public. And, systems’
expectations of drivers and those drivers’ actual understanding of the
critical importance of their “supervisory” role may be materially different."_

There's more clarity here on levels of automation. For NHTSA Level 1
(typically auto-brake only) and 2 (auto-brake and lane keeping) vehicles, the
driver is responsible, and the vehicle manufacturer is responsible for keeping
the driver actively involved. For NHTSA Level 3 (Google's current state), 4
(auto driving under almost all conditions) and 5 (no manual controls at all),
the vehicle manufacturer is responsible and the driver is not required to pay
constant attention. NHTSA is making a big distinction between 1-2 and 3-5.

This is a major policy decision. Automatic driving will not be reached
incrementally. Either the vehicle enforces hands-on-wheel and paying
attention, or the automation has to be good enough that the driver doesn't
have to pay attention at all. There's a bright line now between manual and
automatic. NHTSA gets it.

~~~
chc
I don't understand this anti-autonomy cheerleading. It's like people on HN
live in a parallel universe where there have been a bunch of deaths from cars
running Autopilot, whereas in the world I live in, it seems to be somewhat
safer than a human alone. Like, people can mess up either way, but they seem
to be less likely to do so when the car is also looking out for them. What am
I missing?

~~~
blazespin
You have to compare the one death using autopilot to one death of people
driving Teslas without autopilot. Musk tried to compare it against the
universe of drivers (Teslas, kids driving crappy cars, etc), which was a
complete false comparison.

So the reason it was a big deal is that it was a fatality. Tesla drivers are
generally a pretty safe bunch. Statistically, if autopilot hadn't been
engaged, that death would not have occurred. Autopilot makes Tesla drivers
_less_ safe, not more safe.
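The comparison-group point can be made concrete with a toy calculation. Every number below is invented for illustration, not real Tesla or NHTSA data:

```python
# Hypothetical illustration of why the comparison group matters.
# All numbers below are made up for the example, not real data.

def fatality_rate(deaths, miles):
    """Fatalities per 100 million miles driven."""
    return deaths / miles * 100_000_000

# Cohort A: all US drivers (old cars, young drivers, etc.)
all_drivers = fatality_rate(deaths=35_000, miles=3_000_000_000_000)

# Cohort B: Tesla drivers with Autopilot off (the right baseline)
tesla_manual = fatality_rate(deaths=2, miles=1_000_000_000)

# Cohort C: Tesla drivers with Autopilot engaged
tesla_autopilot = fatality_rate(deaths=1, miles=130_000_000)

print(f"all drivers:      {all_drivers:.2f} per 100M miles")
print(f"Tesla, manual:    {tesla_manual:.2f} per 100M miles")
print(f"Tesla, Autopilot: {tesla_autopilot:.2f} per 100M miles")
```

Under these made-up rates, Autopilot beats the general driving population but loses to the matched Tesla-without-Autopilot baseline, which is exactly the distinction being argued.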

Also, the government is doing the self-driving industry a huge favor. These
fatalities could screw over the whole industry if they get out of hand. Musk
is giving self-driving a bad name.

~~~
Animats
Two deaths using Tesla's autopilot. [1]

[1]
[https://www.youtube.com/watch?v=fc0yYJ8-Dyo](https://www.youtube.com/watch?v=fc0yYJ8-Dyo)

~~~
ekianjo
Why was this not widely reported?

~~~
jaxbot
Last I heard, nobody has been able to verify if the car was actually in
autopilot mode. However, the emergency braking also clearly failed, if the
police report that no attempt to stop was made is true.

~~~
ekianjo
In regard to safety, it's equally important to report doubtful cases, as they
may be a sign of something occurring.

------
owyn
It wasn't mentioned in the bloomberg article, but the 15 areas covered are:

    
    
      • Data Recording and Sharing
      • Privacy
      • System Safety
      • Vehicle Cybersecurity
      • Human Machine Interface
      • Crashworthiness
      • Consumer Education and Training
      • Registration and Certification
      • Post-Crash Behavior
      • Federal, State and Local Laws
      • Ethical Considerations
      • Operational Design Domain (operating in rain, etc)
      • Object and Event Detection and Response
      • Fall Back (Minimal Risk Condition)
      • Validation Methods
    

Not sure if they're specifically ordered, but it seems positive that Data
Recording and Privacy are up at the top.

~~~
2bitencryption
This seems suspiciously like government getting something right... with
regards to a quickly-evolving new technology market...

has this ever happened before?

~~~
rayiner
Can you think of a concrete example where the government got it wrong?[1] For
the sake of argument, the federal government in the last 50 years? Maybe the
encryption export ban. Or the CDA, but that was quickly reversed and the part
that's left (Section 230) was really instrumental in the rise of the modern
web.

[1] And I don't mean _wrong_ as in "NSA spying" because you disagree with the
policy. I mean like, "regulations mandated everyone use Beta tapes and laser
disk even though they quickly became obsolete."

~~~
harryh
Some HIPAA regulations that pre-date the rise of shared virtual servers in
"the cloud" are quite outdated and cause quite a bit of trouble for no real
benefit.

~~~
dragonwriter
> Some HIPAA regulations that pre-date the rise of shared virtual servers in
> "the cloud" are quite outdated and cause quite a bit of trouble for no real
> benefit.

What HIPAA regulations are you talking about? Other than HITECH guidance
(which can sort-of be seen as a "HIPAA regulation"), HIPAA regulations don't
generally specify technologies at all, and I can't think of any that I would
describe as outdated or troublesome due to the rise of shared virtual servers
and "the cloud", whether they predate it or not.

~~~
harryh
The biggest thing is that we can't run software with unencrypted PHI on
physical hardware that is simultaneously running other people's code. In
practical terms this means that we have to pay AWS some $ to get dedicated
instances and also we can't use ELBs in the standard (easy) way. There are
some other things as well.

~~~
throwaway729
That seems like a fairly reasonable thing given you're talking about encrypted
PHI... it's some extra $ for a considerable reduction to overall attack
surface when processing the most sensitive type of personal data.

I don't think this meets OP's definition of "wrong".

~~~
harryh
In practical terms I don't agree that the threat of someone doing all of the
following things is worth worrying about (in comparison to many other more
likely failures):

1) determine what physical hardware in aws the target is running code on

2) somehow get the aws virtual machine manager to let the attacker run their
malicious code on the same hardware

3) somehow pierce the protections of the virtual machine to read memory being
used by the target application

4) figure out how the data is stored in memory in order to make sense of
anything that was read

~~~
dragonwriter
> In practical terms I don't agree that the threat of someone doing all of the
> following things is worth worrying about

In AWS case, this is an AWS rule about when they will sign a HIPAA BAA, even
though there is no HIPAA regulation that specifically prohibits the
arrangement at issue. AWS clearly thinks it _is_ worth worrying about.

When you run your own public cloud, you can determine what risks are worth
accepting potential liability for.

~~~
harryh
Yes, I agree that Amazon is behaving perfectly rationally given the legal
environment. My point is that the legal environment has been designed in an
un-optimal way from a technical perspective. Identifying such a situation was
rayiner's request.

~~~
dragonwriter
> Yes, I agree that Amazon is behaving perfectly rationally given the legal
> environment.

I'm not commenting on Amazon's rationality (I haven't actually evaluated the
security concerns that would determine that.)

> My point is that the legal environment has been designed in an un-optimal
> way from a technical perspective.

And you haven't pointed to anything in the legal environment that is
suboptimal from a technical perspective. You haven't even pointed to anything
in the legal environment _at all_.

Amazon (as a BAA) has certain administrative responsibilities for putting
administrative and technical safeguards in place to prevent breaches, and
certain obligations and liabilities in the case of breaches. HIPAA and related
laws and regulations _do not_ specify the specific administrative or technical
safeguards, though they do specify areas that must be addressed.

Amazon has decided that the particular technical arrangement you prefer is too
high of a risk, but you haven't pointed out anything that indicates that this
is the result of an outdated regulation that results in poor technical choices
rather than technology-neutral regulation and a reasonable evaluation of the
security concerns of the particular technical arrangement you would prefer.

------
dhagz
I'm astounded that it seems like these regulations are going to be sensible
and promote the technology. It's a good thing that these are going into place,
since autonomous vehicles should definitely not be legislated on a state-by-
state basis.

~~~
gumby
> I'm astounded that it seems like these regulations are going to be
> sensible...

Was that hyperbole? I would say the majority of regulations (at least in OECD
countries) are sensible, and many that are not are intended to be, are
outdated, or are politicized.

~~~
e40
Many people automatically assume a government can't make up sensible
regulations. There are a lot of them in the US. It's a meme you hear all the
time, especially in a POTUS election year.

~~~
luma
I think it might be deeper than that. I don't feel that the US government, on
its own, is incapable of drafting up reasonable legislation. The problem is
that the US government is 100% for sale to the highest bidder, and corruption
runs deep (we just call it "campaign contributions" as if that makes it
better). If sensible regulation is proposed, it'll last 30 seconds before the
good senator from [some self-driving car company's home state] has turned it
into a document crafted to drive business to his "contributor".

This isn't a political statement as it cuts across both parties, which renders
it all the more insidious.

~~~
krschultz
Surely this is based on 0 personal direct experience with the people that
write these kinds of regulations.

I have worked with engineers that write technical regulations. They are
generally focused on doing a good job at the task at hand. To think some mid
level person that is hired into a normal job and never meets a politician in
their career cares about campaign contributions is asinine.

What do you think the people at NASA and NAVSEA and NIST do all day?

------
throwaway729
Pages 14 - 30+ of the embedded report (pages 16 - of the PDF) are particularly
interesting and promising, especially the portions about transparency around
privacy and ethics issues.

The report recommends that "Manufacturers and other entities should develop
tests _and verification methods_...". Does anyone know whether verification
here means software verification, or does it mean something else in this
context?

Edit: Just noticed that I got to the PDF via elicash's comment and not via the
linked article. Here's a link to the PDF:
[https://www.transportation.gov/sites/dot.gov/files/docs/AV%2...](https://www.transportation.gov/sites/dot.gov/files/docs/AV%20policy%20guidance%20PDF.pdf)

~~~
etendue
The report makes reference to "Assessment of Safety Standards for Automotive
Electronic Control Systems" by NHTSA, which itself reviews ISO 26262,
MIL-STD-882E, DO-178C, the FMVSS, AUTOSAR, and MISRA C.

In this context, they mean verification and validation in the systems
engineering sense. Software would be included in that it is a part of the
whole system.

~~~
seren
I have a hard time understanding the current AV SW stack.

On one hand, at the low level (sensors, motor control, etc.) you likely have
traditional hard real-time/MISRA C code, but in the higher layers you probably
have things like DNNs and image recognition, which are much less
deterministic.

So I am not sure how you reconcile these two worlds, and prove the system is
safe and always works in a timely manner.

It seems the only sound approach would be to validate the whole system on a
real road.
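One common way to reconcile the two worlds is a runtime-monitor (sometimes called "simplex") architecture: the nondeterministic planner proposes, and a small deterministic checker enforces a hard safety envelope. A minimal sketch, with invented bounds:

```python
# Illustrative runtime-monitor ("simplex") sketch: a nondeterministic
# planner (stand-in for a DNN) proposes commands, and a tiny
# deterministic filter clamps them to a hard envelope. Only the filter
# would need MISRA-style rigor; the bounds here are invented.

MAX_STEER_RATE = 0.1   # rad per control tick (hypothetical limit)
MAX_DECEL = 8.0        # m/s^2 (hypothetical limit)

def safety_filter(proposed_steer, prev_steer, proposed_decel):
    """Deterministic envelope: clamp whatever the planner asks for."""
    lo, hi = prev_steer - MAX_STEER_RATE, prev_steer + MAX_STEER_RATE
    steer = min(max(proposed_steer, lo), hi)
    decel = min(max(proposed_decel, 0.0), MAX_DECEL)
    return steer, decel

# The planner can emit anything; the filter guarantees the actuators
# never see a command outside the verified envelope.
steer, decel = safety_filter(proposed_steer=2.5, prev_steer=0.0,
                             proposed_decel=50.0)
print(steer, decel)
```

Under this split, the low-level filter is verified exhaustively, while the planner above it is validated statistically (simulation plus real-road miles), which matches the verify-components / validate-the-system answer below.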

~~~
rubidium
Now you understand the job of systems engineering :)

Verify components, validate the entire system is the typical approach.

~~~
seren
The point I was trying to make is that if you have actuators running MISRA C
that are going to be driven by something written in TensorFlow, does it still
make sense to have a requirement to use MISRA C in the first place for the
low-level part?

~~~
etendue
I'd be very wary of using complex SOUP like TensorFlow, even if brought under
my quality system. I think a good answer here is that once one goes under
design control, the subset of functionality needed should be implemented
in-house under the organization's SDLC.

~~~
ansgri
Of course these things are meant to be used (1) to train the system, (2) as a
player in the prototype. Exactly like in the old school ML-based systems: you
train in Matlab or CudaConvNet, and then you load the trained classifier into
the custom-made player highly tuned to your hardware and problem domain.
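That export-then-replay workflow can be sketched in a few lines. The weights and the JSON format here are invented stand-ins for what a real training framework would emit:

```python
# Sketch of the "train big, deploy small" workflow described above:
# a model is trained in a heavyweight framework, then only the learned
# parameters are exported and executed by a tiny hand-written runtime.
# The weights below are invented; a real export would come from the
# training framework's checkpoint.
import json, math

# --- offline: what the training framework would hand over ---
exported = json.dumps({"weights": [0.8, -0.4], "bias": 0.1})

# --- on-vehicle: minimal dependency-free inference "player" ---
def load_model(blob):
    m = json.loads(blob)
    return m["weights"], m["bias"]

def predict(features, weights, bias):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # logistic score

w, b = load_model(exported)
score = predict([1.0, 2.0], w, b)
print(round(score, 3))
```

The point of the split is that the player has no dependency on the training stack at all, so it can be kept small enough to bring under design control.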

------
elicash
Info here:
[https://www.transportation.gov/AV](https://www.transportation.gov/AV)
(including noon Eastern livestream)

------
hughperkins
This is excellent news! Having guidelines to follow implies that if
manufacturers can meet them, they could plausibly have a legal basis for
putting self-driving cars on the roads.

------
etendue
N.B., this policy is mainly concerned with Highly Automated Vehicles (HAVs),
which are defined as SAE Level 3 ("capable of monitoring the driving
environment") and above.

edit: as to SAE Level 2, it has this (and more) to say:

> Furthermore, manufacturers and other entities should place significant
> emphasis on assessing the risk of driver complacency and misuse of Level 2
> systems, and develop effective countermeasures to assist drivers in properly
> using the system as the manufacturer expects. Complacency has been defined
> as, “... [when an operator] over-relies on and excessively trusts the
> automation, and subsequently fails to exercise his or her vigilance and/or
> supervisory duties” (Parasuraman, 1997).

also,

> Manufacturers and other entities should assume that the technical
> distinction between the levels of automation (e.g., between Level 2 and
> Level 3) may not be clear to all users or to the general public.

------
euroclydon
I'm surprised that self-driving technology is focusing on replacing the driver
as an autonomous actor, processing visual and radar/lidar signals in order to
know about its surroundings. I've always thought we'd get further faster by
having automobiles also talk to other vehicles nearby, and design roads to
support the computer driven vehicles.

Two examples are:

1) If the vehicle is talking to the cars in front of it, it can know they are
braking before it senses that visually. Also, the vehicles can speed up in a
gridlock scenario more in unison, like a train.

2) On the interstate, markers in the pavement can be specifically designed for
computer sensors rather than human eyeballs. Also, cars can draft together to
save fuel.
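Example 1 can be made concrete with a toy reaction-distance calculation; the latencies and deceleration below are invented for illustration:

```python
# Toy comparison of reaction onset: V2V broadcast vs. visual detection.
# All latencies are invented for illustration.

RADIO_LATENCY = 0.02   # s, lead car -> follower over V2V (hypothetical)
VISUAL_DETECT = 0.5    # s, time to notice brake lights / closing gap

def stopping_gap(speed_mps, reaction_s, decel=8.0):
    """Distance covered during the reaction delay plus braking."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel)

v = 30.0  # ~108 km/h
print(f"V2V:    {stopping_gap(v, RADIO_LATENCY):.1f} m")
print(f"visual: {stopping_gap(v, VISUAL_DETECT):.1f} m")
```

The braking distance term is identical either way; the V2V advantage is entirely in the reaction-delay term, which is also what lets a platoon accelerate and brake in unison.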

~~~
the_duke
While networked cars are interesting, there is also a massive security issue
here.

Hackers will easily figure out a way to spoof the communication, and could
play with traffic.

There are mitigations for most issues, but it's a complex topic.

Just imagine some scenarios:

-) Spoof an emergency brake advisory that causes trailing cars to also do an emergency brake. (could be mitigated by first observing that cars in front are actually slowing down before braking)

-) Spoof a command from a smart traffic light at an intersection to stop immediately for police / other emergency traffic. (need to check if traffic light is actually red)

-) Spoof speed restrictions issued by a smart highway traffic jam prevention system.

-) A system for police to force a car to stop immediately and pull over, eliminating car chases. Just spoof this signal and stop anyone you want. (mitigate by checking if there is a police car trailing you, and ignore otherwise).

And so on...

A way around would be to maintain a national database with public keys for
each registered vehicle, and make cars only accept messages signed with those
keys. But that would be hard to maintain, and hackers could still get hold of
some vehicle's private key.
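The registry idea can be sketched as follows. Python's standard library has no public-key signing, so HMAC with per-vehicle keys stands in here for what would really be asymmetric signatures (e.g. ECDSA); the vehicle IDs and keys are made up:

```python
# Sketch of the registry idea: accept a V2V message only if its
# signature checks out against a key registered to the claimed sender.
# HMAC is a stand-in for real asymmetric signatures; IDs/keys invented.
import hmac, hashlib

# "National database": vehicle ID -> registered key
REGISTRY = {"VIN123": b"vehicle-123-secret-key"}

def sign(vehicle_id, message, key):
    return hmac.new(key, vehicle_id.encode() + message,
                    hashlib.sha256).hexdigest()

def verify(vehicle_id, message, signature):
    key = REGISTRY.get(vehicle_id)
    if key is None:
        return False  # unregistered sender: drop the message
    expected = hmac.new(key, vehicle_id.encode() + message,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

msg = b"EMERGENCY_BRAKE"
good = sign("VIN123", msg, b"vehicle-123-secret-key")
print(verify("VIN123", msg, good))      # genuine message accepted
print(verify("VIN123", msg, "f" * 64))  # spoofed signature rejected
```

A stolen key defeats this entirely, which is why the message still has to be cross-checked against the car's own sensors, as noted below.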

In the end, the driving system will always have to correlate such car-to-car
communication with observations it makes itself.

And an autonomous system can react almost immediately anyway. So coordination
doesn't give you all that much.

\-- There are some useful ideas though, like:

-) Traffic lights can announce an ideal speed for a route, taking into account traffic and traffic light timings, so you can optimize throughput and minimize fuel consumption

~~~
shanusmagnus
All good points. Seems like you could get around it by using these other-car
communications as noisy signals and weighting evidence against the world as
the car sees it, e.g., if you get a spoofed emergency brake advisory, and the
car's own percepts suggest there's no reason to brake, the resultant action
may be to not brake. The signal from the other car[s] becomes just another
feature.

~~~
learningman
Considering the millions of miles driven each day, even if the networked
signal wasn't heavily weighted, a spoofed emergency brake advisory signal
could still do significant damage.

------
yoav_hollander
Live streaming just started here:
[https://www.transportation.gov/AV](https://www.transportation.gov/AV)

~~~
yoav_hollander
Just ended. And here is Obama's call for safe automated vehicles:
[https://www.post-gazette.com/opinion/Op-Ed/2016/09/19/Barack...](https://www.post-gazette.com/opinion/Op-Ed/2016/09/19/Barack-Obama-Self-driving-yes-but-also-safe/stories/201609200027)

------
nojvek
I'm really excited by US govt outlining what it would take to make a legal
self driving car.

I'm also hoping that one of the options is to upgrade an old car to a self-
driving car with an open-source kit that you can buy and have installed by a
certified mechanic.

I think that would be an interesting future I'd like to be part of.

------
kragen
Brad Templeton, who's been working on self-driving cars for a few years now,
analyzed this in [http://ideas.4brad.com/critique-nhtsas-newly-released-recomm...](http://ideas.4brad.com/critique-nhtsas-newly-released-recommendations-states-and-regulations).
He says, "Broadly, this is very much the wrong direction... the progress of
robocar development in the USA may be seriously negatively affected."

This is a big deal.

~~~
tstrimple
> I have written that having 50 sets of rules may not be that bad an idea
> because jurisdictional competition can allow legal innovation and having
> software load new parameters as you drive over a border is not that hard.

I'm not sure I would put much weight behind what he has to say.

------
yoav_hollander
Finally posted my initial comments on the verification implications of all
this here: [https://blog.foretellix.com/2016/09/21/verification-implicat...](https://blog.foretellix.com/2016/09/21/verification-implications-of-the-new-us-av-policy/)

------
KKKKkkkk1
It's nice that these regulations sound sensible and not heavy-handed. I'm
wondering whether they are needed at all, though. They've been formulated in
heavy collaboration with the market leaders (Uber, Google, etc.). Is there a
risk they will help shut out upstarts, similar to how the FDA makes drug
development astronomically expensive?

------
plandis
I'm cynically imagining that since it's a collection of big automakers, it is
pretty easy for them to affect policy.

It looks like consumers and automakers both want driverless cars, so putting
any inevitable regulations in place quickly benefits both parties.

------
nitin_flanker
I am commenting here as a token of appreciation for Etendue and also so that I
can bookmark this awesome share by him. Thanks.

------
mrfusion
So what are the 15 bench marks?

------
pmyjavec
What is it about self-driving cars that has HN readers so incredibly excited?
Sorry if it's a little off-topic.

The reason I ask is there are plenty of other countries in the world where
cars just aren't that important; take the Netherlands, for example. If you
have autonomous cars, society here just isn't going to be that excited, AFAIK.
Public transport here is great and most people cycle everywhere, because it's
fun, easy and good exercise. Not to mention a lot of people are employed as
drivers.

Same for many Asian countries where population density is high, people just
don't have the money/room for cars. Scooters are the way to go because of
traffic congestion.

Besides, don't people enjoy driving? I don't own a car but when I get behind
the wheel, it's a lot of fun. Will people really be able to handle the car
doing the speed limit?

I understand technologically it's pretty interesting, but we've had commercial
airliners that fly themselves (mostly) for a long time, same for ships and
drones and we don't marvel over those things all the time, though I agree they
are great innovations.

So apart from the tech what is the actual excitement about?

\- Concern for those who will lose their jobs.

\- Concern for others safety.

\- Privacy concerns.

\- Excitement about the safety benefits.

\- Economic opportunity.

\- Fundraising hype.

\- All of the above?

As a Silicon Valley outsider sometimes I read HN and it feels like some
context is missing. Sure it's going to change industries, but is this really
good progress, necessary progress, or just the next _thing we're told we
need_? I mean, can a self-driving car really replace a delivery person yet, a
person who can do things like leave packages with a neighbor and build
relationships, trust etc?

Sorry if this is a little off-topic, but I'm genuinely curious because it's
hard to understand; to me as an outsider, it really looks like some kind of
ride-sharing turf war hype battle more than anything else.

I dare to say it, but it's the same for machine learning: a lot of it is
fascinating, interesting, exciting tech, but how many product recommendations
does one need? How good do my _friend recommendations_ have to be? How smart
does Siri need to be? Will a patient really feel better without being treated
by a human? Are we really going to trust these things handling nuclear
warheads?

Maybe I live a strange life and have unusual views, but I just don't really
see the need for most of these things when so many problems could be solved
using other means. Using this stuff to help people is great, but how much of
this effort is actually being put towards that end?

If I'm a little naive, apologies. I'm not having a go but these are just
honest questions I often find myself asking when reading HN lately. Agreed
this might not be the place to ask but I'm prepared to wear the down votes :)

~~~
abawany
In my opinion, you mentioned the core reason why you find the excitement
baffling, i.e. that you don't drive often but you find it exciting when you
do.

Now imagine the scenario for most of the US, a public-transport-hostile
country for the most part, where millions upon millions of people burn their
precious lives waiting in traffic and sucking in traffic fumes. In my mind,
this is one of the most appalling wastes of human potential that has ever
existed. Sure, some try to make lemonade out of lemons by educating/informing
themselves as they see fit but by and large, it is a huge waste. Not to
mention the many thousands of people who die every year in car accidents
during the daily commute.

So from my point of view, the self driving car is a thrilling concept: the
ability to disengage from a useless, pointless, and hopeless daily grind and
engage in something that I want to do, whether it be work, reading, watching a
movie, etc. is cool. The closest I have come to this dream in my transit-
unfriendly Texas city is the one job where I had an opportunity to take the
train/bus into downtown: while this made my daily commute very long, I loved
it because it freed me up from the drudge of driving.

Some might ask that perhaps I just hate driving. That is not true. I love
taking road trips or autocrossing when I can. But to equate the daily commute
with enjoyment is a bridge too far, in my opinion. Banish it, I say, banish
it.

~~~
pmyjavec
Thanks for the response, and I totally agree with the sitting-in-traffic,
wasted-time thing, but with all due respect it still seems inefficient;
wouldn't public transport and telecommuting be better options in a connected
world?

Won't there still be traffic jams? Or will the car be like an office? In that
case, why not just work from home and come to work for meetings here and
there? Might flexible / fewer work hours help?

I mean, people will still be driving around in vehicles, which often make
people motion sick if they're not paying attention to their surroundings;
cars require a lot of energy, take up space, etc.

I used to travel to work via train; it was 1.5 hours one way, and it was
highly productive time for me. For some reason trains don't seem to make
people as motion sick?

I guess one other thing to note is that in Australia, where I'm from
originally, some people think of others using public transport or biking as
kind of _peasants_ or feel it's inferior; that might be part of it too?
They're also the kind of people who often like to drive fast and own expensive
cars as a status thing, so I'm still not sure it's going to take?

~~~
Tiktaalik
It's incredibly frustrating that people are getting excited for autonomous
cars' ability to free them from the drudgery of driving when this has been a
solved problem for a long time with public transit.

Unfortunately anti-public transit special interest groups have discredited
public transit initiatives all over, and fighting this has been incredibly
difficult.

On your last point, people definitely do see expensive cars as a status item,
and for this reason I think it's valid to question to what degree, and how
quickly, autonomous car-sharing networks will replace individually owned cars.

~~~
imh
I fully expect self driving cars to arrive before BART goes all the way down
the peninsula. It's total crap, but I'd be surprised if it wasn't true. Same
for more frequent/faster/more reliable caltrain with room to sit or bring your
bike.

On one side, you have a solution that requires a whole bunch of groups to
align. On the other, you have an individual decision (buying a car). That's
why I am excited. If it looked like it was on the horizon, I'd be just as
excited for great public transit.

------
megablast
Woohoo, lets get the lawyers involved!

------
mtgx
The only regulation that really matters is making car manufacturers liable for
accidents: they would have to pay a fine from $100,000 (per car) for the
smallest accident up to $10 million per accident.

When the manufacturers "can't explain" how the accident happened (after an
audit was performed), they should be fined the maximum $10 million amount.

Why? Because for one assuming it's just a glitch and "they don't know" about
it, then they should pay for incompetence. And two, if the car was hacked by a
nation state, then their security sucks, and they should again pay the maximum
amount so they have the maximum incentive to ensure digital security of self-
driving cars.

Where third-party self-driving systems are involved (MobilEye, etc), the
liability/fine should be split 50-50 between the car maker and the system
vendor.

Give car makers these "incentives" and the other regulations are more or less
pointless (other than establishing common V2V and V2I standards and whatnot).
Then you'll see just how hard they scramble to make their systems safe.

EDIT: And here we go. Remote hack of Tesla Model S.

[https://blog.kaspersky.com/tesla-remote-
hack/13027/](https://blog.kaspersky.com/tesla-remote-hack/13027/)

We're only at the very beginning of self-driving cars. What happens when there
are 100 million self-driving cars on the road? Will their security be as
terrible as it is on our PCs?

People should get scared a lot faster about this stuff, before all car makers
start writing their software, then refuse to rewrite it from scratch and just
tack new security features onto the poorly written systems in response to such
hacking.

~~~
jacquesm
> Then you'll see just how hard they scramble to make their systems safe.

Chances are they'd opt-out of the opportunity altogether and wait for someone
else to take the heat.

~~~
DSingularity
Those who opt out are those we don't want setting the bar in this domain.

If it becomes acceptable for security and safety to be secondary to "getting
the cars on the road ASAP and capturing as much of the market as possible" we
-- as in the consumers -- will pay the price.

I understand that some of the innovations and progress will only come when we
get the cars on the road at scale, but we should still build a giant --
exactly as the commenter suggested -- to loom over the shoulders of these
car companies.

