
Uber Self-Driving Car That Struck Pedestrian Wasn’t Set to Stop in an Emergency - jeffreyrogers
https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf
======
JorgeGT
Direct link to NTSB preliminary report:
[https://www.ntsb.gov/investigations/AccidentReports/Reports/...](https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-prelim.pdf)

 _According to data obtained from the self-driving system, the system first
registered radar and LIDAR observations of the pedestrian about 6 seconds
before impact, when the vehicle was traveling at 43 mph. As the vehicle and
pedestrian paths converged, the self-driving system software classified the
pedestrian as an unknown object, as a vehicle, and then as a bicycle with
varying expectations of future travel path. At 1.3 seconds before impact, the
self-driving system determined that an emergency braking maneuver was needed
to mitigate a collision (see figure 2).[2] According to Uber, emergency braking
maneuvers are not enabled while the vehicle is under computer control, to
reduce the potential for erratic vehicle behavior. The vehicle operator is
relied on to intervene and take action. The system is not designed to alert
the operator._

~~~
nkoren
I worked on the autonomous pod system at Heathrow airport[1]. We used a very
conservative control methodology; essentially the vehicle would remain stopped
unless it received a positive "GO" signal from multiple independent sensor and
control systems. The loss of _any_ "GO" signal would result in an emergency
stop. It was very challenging to get all of those "GO" indicators reliable
enough to prevent false positives and constant emergency braking.

The reason we were ultimately able to do this is because we were operating in
a fully-segregated environment of our own design. We could be certain that
every other vehicle in the system was something that should be fully under our
control, so anything even slightly anomalous should be treated as a hazard
situation.

There are a lot of limitations to this approach, but I'm confident that it
could carry literally billions of passengers without a fatality. It is
_overwhelmingly_ safe.

Operating in a mixed environment is profoundly different. The control system
logic is fully reversed: you must presume that it is safe to proceed unless a
"STOP" signal is received. And because the interpretation of image & LIDAR data
is a rather... fuzzy... process, that "STOP" signal needs to have fairly
liberal thresholds, otherwise your vehicle will not move.
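
The two philosophies can be sketched in a few lines of illustrative, made-up code (the signal names and thresholds are mine, not anything from the actual systems): a segregated guideway ANDs independent "GO" signals, while a mixed-traffic vehicle must weigh fuzzy "STOP" detections against a deliberately liberal threshold:

```python
# Illustrative sketch (mine, not actual Heathrow or Uber code) of the two
# control philosophies described above, with made-up signal names.

def segregated_track_may_move(go_signals):
    """Fail-safe: move only if EVERY independent system says GO.
    Losing any one GO signal triggers an emergency stop."""
    return all(go_signals.values())

def mixed_traffic_may_move(stop_confidences, threshold=0.9):
    """Fail-permissive: keep moving unless some detector crosses a
    (necessarily liberal) confidence threshold for STOP."""
    return all(conf < threshold for conf in stop_confidences.values())

# Segregated guideway: one dropped signal halts the vehicle.
print(segregated_track_may_move({"lidar": True, "track": True, "comms": False}))  # False -> stop

# Mixed traffic: a fuzzy 0.6-confidence obstacle does not stop the car.
print(mixed_traffic_may_move({"lidar": 0.6, "camera": 0.3}))  # True -> keep going
```

The asymmetry is the whole problem: tighten the mixed-traffic threshold and the car brakes constantly; loosen it and real hazards slip under it.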

Uber made a critical mistake in counting on a human-in-the-loop to suddenly
take control of the vehicle (note: this is why Type 3 automation is something
I'm very dubious about), but it's important to understand that if you want
autonomous vehicles to move through mixed-mode environments at the speeds
which humans drive, then it is absolutely necessary for them to take a fuzzy,
probabilistic approach to safety. This will inevitably result in fatalities --
almost certainly fewer than when humans drive, but plenty of fatalities
nonetheless. The design of the overall system is inherently unsafe.

Do you find this unacceptable? If so, then ultimately the only way to
address this is through changing the design of the streets and/or our rules
about how they are used. These are fundamentally infrastructural issues.
Merely swapping out vehicle control systems -- robot vs. human -- will be less
revolutionary than many expect.

1: [http://www.ultraglobalprt.com/](http://www.ultraglobalprt.com/)

~~~
boxcardavin
This is absolutely the right analysis of how these systems work and why you
can't expect autonomous cars to halt traffic deaths. What the Uber crash has
shown us is that the tolerance for AVs killing people is probably exactly
zero, not some (very meaningful) reduction like 10x or 100x less.

My company didn't start with this zero-tolerance idea in mind, but it turns
out our self-delivering electric bicycles have a huge advantage for real-world
safety because they weigh ~60lbs when in autonomous mode and are limited
to 12mph. This equals the kinetic energy of myself walking at a brisk pace, or
basically something that won't kill purely from blunt force impact. I think
the future for autonomy will be unlocked by low mass and low speed vehicles,
not cars converted to drive themselves.
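
As a rough sanity check of that comparison (the numbers are my assumptions, not the poster's exact figures: a rider-less 60 lb bike at the 12 mph cap, and an 80 kg person as the walking/jogging reference):

```python
# Back-of-envelope check of the comparison above. Assumed numbers: rider-less
# 60 lb bike at 12 mph, and an 80 kg person for the reference points.

LB_TO_KG = 0.4536
MPH_TO_MS = 0.44704

def kinetic_energy_j(mass_kg, speed_ms):
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_ms ** 2

bike_ke = kinetic_energy_j(60 * LB_TO_KG, 12 * MPH_TO_MS)  # ~390 J
walker_ke = kinetic_energy_j(80, 1.8)   # brisk walk, ~1.8 m/s -> ~130 J
jogger_ke = kinetic_energy_j(80, 3.0)   # slow jog, ~3.0 m/s -> ~360 J

print(f"bike: {bike_ke:.0f} J, walker: {walker_ke:.0f} J, jogger: {jogger_ke:.0f} J")
```

At these numbers the capped bike lands closer to a slow jog than a brisk walk, but either way it is orders of magnitude below a ~1,500 kg car at 43 mph (roughly 280 kJ).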

~~~
econochoice
> This equals the kinetic energy of myself walking at a brisk pace, or
> basically something that won't kill purely from blunt force impact.

This analogy doesn't completely map to cycling.

A fall from a bike at any speed is potentially a crippling or even deadly
scenario.

~~~
grkvlt
Potentially deadly? Maybe, sure, but at low speeds, say up to 10 mph, it is
_incredibly_ unlikely that falling off a bicycle (even with no helmet) will do
more than cause bruises and a damaged ego.

~~~
xkcd-sucks
Falling while stationary is the second most common cause of accidental death;
can't imagine a bicycle makes it any safer lmao

~~~
sobani
Is this including the elderly who often will break a hip that way and then die
of the complications? Because if so, that would not be comparable to a healthy
young (< 60 yo) person falling.

------
jerf
"According to Uber, emergency braking maneuvers are not enabled while the
vehicle is under computer control, to reduce the potential for erratic vehicle
behavior."

If you can't give your "self-driving software" full access to the brakes
because it becomes an "erratic driver" when you do that, you do not have self-
driving software. You just have some software that is controlling a car that
you _know_ is an inadequate driver. If the self-driving software is not fully
capable of replacing the driver in the car you have placed it in, as shipped
except for the modifications necessary to be driven by software, you do not
have a safe driving system.

~~~
jiveturkey
> _If you can't give your "self-driving software" full access to the brakes
> because it becomes an "erratic driver"_

You misunderstood. The Uber software had sole control of the brakes (plus the
human of course). The Volvo _factory system_ was disabled so that it didn’t
have negative interaction with the Uber system.

Your mistake is understandable. The article was poorly written, perhaps due to
a rush to publish, as is the norm these days. Even if the NTSB report was
unclear, that doesn’t excuse clumsy reporting.

If you’ve ever done significant mileage in a car with an emergency braking
system you probably have experienced seemingly random braking events. The
systems favor false positives over false negatives.

~~~
jerf
I wrote it carefully and I stand by it. If it can't deal with it because your
system is getting too confused for any reason, you don't have a self-driving
system. Being able to function enough like a human that the safety features on
the car don't produce an unacceptable result is a bare minimum requirement to
have a self-driving car.

This isn't horseshoes, as the old saying goes. I unapologetically have a high
bar here.

~~~
YeGoblynQueenne
Far as I can tell, the done thing in the industry is to disable all other
safety systems (or to not have any in the first place) and to delegate safety
entirely to the self-driving AI.

The charitable interpretation of this is that the industry believes that
self-driving AI is a safety feature of greater quality than, say, lane assist
or auto-braking.

The less charitable one is that they find it too much work to integrate their
AI with other safety systems. Which, to be fair, really is going to be a lot
of extra work, on top of developing self-driving.

------
rossdavidh
So, I can't read the article, but I did read the NTSB report directly.
Basically, it sounds like Uber was not ready, but reassured themselves with
"but there's a human driver who can intervene". The fact is, humans are very
bad at remaining vigilant for long periods of doing nothing and then needing
to intervene at a moment's notice. Computers are good at that (and Volvo's
built-in safety systems might have worked if Uber had not disabled them), but
humans are bad at it.

Volvo has it right: human driver, computer backup. Uber's idea of a human
acting as a last-second backup to a computer gets the relative strengths and
weaknesses of each exactly wrong.

~~~
fixermark
Worse than just assuming a human driver could intervene: they put a human in
the car and then overloaded that human with additional tasks.

~~~
sgnelson
Worse, they knew they needed two people and previously had two per vehicle,
but then decided that was unnecessary and went down to one person per vehicle.

------
fixermark
I'm irritated at myself for swallowing Uber's initial line on this (and the
interpretation of the officers who reviewed the cam footage provided by Uber)
without sufficient critique.

This accident should have been avoided. No excuses.

~~~
bo1024
The sensors observed the pedestrian 6 seconds before impact! That's more than
enough time to come to a complete stop.

That's enough time to sound an alert for the driver, and for the driver to
manually react, press the brakes, and come to a complete stop.

And this was a pretty easily preventable scenario (which just makes it more
tragic of course). Software is nowhere near ready to drive cars on real roads.

~~~
fixermark
I'd caution against over-extrapolation.

UBER software is nowhere near ready to drive cars on real roads. There are
multiple competitors in this space (Waymo and Argo immediately come to mind);
they don't generally have Uber's reputation of "move fast and break things" or
of "cut human costs as soon as feasible."

In my initial assessment, I was reasoning in the other direction---from
practices I was familiar with from stories of those companies to Uber---and
falsely assumed Uber was behaving more responsibly than it was. This Uber
tragedy doesn't significantly update my prior assumptions about its
competitors.

~~~
bo1024
I agree this incident doesn't give evidence about Uber's competitors, but I
just don't believe software is anywhere near ready to safely navigate
neighborhood driving. Many of the challenges involve assessing the knowledge,
goals, and capabilities of other people and objects in the environment, which
is far ahead of anything AI can do except in specialized scenarios with lots
of accurate training data. Many of the scenarios will be unique and not
encountered in prior training data. So I'm very skeptical.

------
FartyMcFarter
Does anyone else find these two quotes a bit hard to stomach?

> Although toxicological specimens were not collected from the vehicle
> operator, responding officers from the Tempe Police Department stated that
> the vehicle operator showed no signs of impairment at the time of the crash.

> Toxicology test results for the pedestrian were positive for methamphetamine
> and marijuana.

So they tested the victim for drugs but not the Uber employee in the car??

Other than that, Uber's so-called "self-driving" system sounds like crap and
should never have been allowed to be used in that state.

~~~
beisner
The test is probably a routine part of an autopsy, whereas it likely wasn't a
routine part of the police response (which uses visual and verbal cues to
preliminarily assess inebriation).

------
pavlov
What's the point of this emergency braking system if it can't take action and
can't alert the driver?

Maybe it reports so many false positives that Uber turned those off and just
collects the data to improve the algorithm?

~~~
SlowRobotAhead
I could be totally wrong, but I thought I read the emergency braking function
was Volvo's, and built into the base vehicle. Uber had disabled it because
they were testing their own software.

~~~
vatueil
The NTSB report mentions the standard automatic emergency braking features
from Volvo:

> _The vehicle was factory equipped with several advanced driver assistance
> functions by Volvo Cars, the original manufacturer. The systems included a
> collision avoidance function with automatic emergency braking, known as City
> Safety, as well as functions for detecting driver alertness and road sign
> information. All these Volvo functions are disabled when the test vehicle is
> operated in computer control but are operational when the vehicle is
> operated in manual control._

However, that appears to be separate from emergency braking under Uber's self-
driving system:

> _At 1.3 seconds before impact, the self-driving system determined that an
> emergency braking maneuver was needed to mitigate a collision (see figure
> 2).[2] According to Uber, emergency braking maneuvers are not enabled while
> the vehicle is under computer control, to reduce the potential for erratic
> vehicle behavior. The vehicle operator is relied on to intervene and take
> action. The system is not designed to alert the operator._

> _[2]: In Uber's self-driving system, an emergency brake maneuver refers to a
> deceleration greater than 6.5 meters per second squared (m/s^2)._

It sounds like Uber didn't trust their own self-driving system enough to allow
it to initiate sudden crash stops. Too many false positives, I guess? Of
course, simply disabling the function leads to other obvious problems, as
shown.
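
For scale, a quick back-of-envelope calculation with the report's figures (43 mph travel speed, the braking decision 1.3 s before impact, and the 6.5 m/s^2 threshold from footnote 2) shows what the disabled maneuver could have bought, idealizing constant deceleration and zero actuation delay:

```python
import math

# Rough kinematics from the report's figures: 43 mph, a braking decision
# 1.3 s before impact, and the 6.5 m/s^2 deceleration that footnote 2 defines
# as an "emergency brake maneuver". Idealized: constant deceleration, no delay.

MPH_TO_MS = 0.44704

v0 = 43 * MPH_TO_MS           # ~19.2 m/s
gap = v0 * 1.3                # ~25 m between car and pedestrian at the decision point
a = 6.5                       # m/s^2

full_stop = v0 ** 2 / (2 * a)                        # ~28.4 m for a complete stop
v_impact = math.sqrt(max(0.0, v0 ** 2 - 2 * a * gap))

print(f"gap at decision: {gap:.1f} m; full stop needs {full_stop:.1f} m")
print(f"impact speed had braking fired: {v_impact / MPH_TO_MS:.0f} mph")
```

So even the car's own late decision, if acted on, could not have fully stopped the vehicle, but it would have roughly cut the impact speed from 43 to about 15 mph.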

~~~
aeorgnoieang
That's a really weird interpretation. Uber's system could obviously brake but
not 'emergency brake'? The system disabled a part of itself? How is that any
different than it just never deciding to 'emergency brake'?

I think the better interpretation is that the Uber system disabled another
separate (non-Uber) system.

~~~
throwaway_se099
_Uber's system could obviously brake but not 'emergency brake'?_

There are two systems in the vehicle. One is the manufacturer's, let's call it
System V after Volvo, and the other is System U, for Uber.

System V provides collision detection and emergency braking. It played no part
in this incident, since it's inactive if the car is under control of System U,
which it was at the time.

System U can decide that the car should slow down in some situations. Let's
call gradual slowdown Action U1, and emergency slowdown Action U2. The
incident called for Action U2, by Uber's criteria. What Uber said is that a)
they disabled automatic execution of Action U2, punting it to the driver
(really, a bored passenger in the driver's seat), _and_ b) that the driver would
get no indication of emergency situations from the system.

The idea is, presumably, that the driver should watch the road and react in
emergencies. But we also know that the driver had the duty of working with the
onboard console, which must have been quite a distraction. Effectively, Uber
set themselves up for failure, and it happened.

~~~
aeorgnoieang
You were right. It seemed so stupid I refused to believe it!

------
antihero
Why on earth would they mention that someone's been on weed and meth when they
are the victim of a car not stopping?

~~~
IkmoIkmo
Because more context makes for a more comprehensive story.

Fact of the matter is, the person was not allowed to cross there. Not only was
it not allowed [0], it was also extremely dangerous. Why would she do this?

That's one part of the conversation.

The other part of the story is that the self-driving car was mismanaged and
misprogrammed, and that this was likely also a reason why the woman died.

There is therefore reason to believe both parties could have prevented this
death. We can't be sure. Perhaps drugs weren't a factor. And perhaps a human
driver could not have braked quickly enough to save her either. We don't
really know. But when you have information which lets you speculate plausibly
to explain unknowns, I think it's relevant to include that information as a
journalist. Meth use falls within that range of relevant information in this
case, if you ask me.

It feels a bit analogous to saying a woman got shot and killed at a gun range.
She ran into the shooting range and got shot. Why would she do this? The
person shooting wasn't paying attention and was just firing casually down the
range. Knowing the woman was on meth helps explain a lot. As a journalist I'd
think that was relevant, and as a reader it offers a possible explanation for
this kind of behaviour. It doesn't negate the fact the shooter didn't follow
procedure and could have prevented this death, too.

I thought it was fitting at the very end of the article, as it was in the
original analysis they refer to. Not so much in the subtitle; I feel that was
in poor taste and driven by clickbait. It shouldn't be the focus of the
article.

[0] [http://darychuklaw.com/legal-services/personal-injury-
claims...](http://darychuklaw.com/legal-services/personal-injury-claims/what-
is-the-law-personal-injury-claims/who-is-at-fault-when-a-pedestrian-is-hit-
outside-a-cross-walk/)

~~~
Obi_Juan_Kenobi
> the person was not allowed to cross there.

Have you seen the intersection in question?

[https://pbs.twimg.com/media/DYrspoFVwAAtJCs.jpg](https://pbs.twimg.com/media/DYrspoFVwAAtJCs.jpg)

I think the pedestrian is very low on the list of who to blame, with the
intersection designers and Uber engineers _high_ above her.

~~~
IkmoIkmo
I have seen it. Check streetview too, it's helpful.

I really hate to defend this position because I feel terrible for her, and
it's quite obvious Uber made some very bad calls here. (In fact I even used
the very loaded word "murder" in another comment; see my post history.) I'm
definitely not arguing to put all or even most of the blame on the woman.

The intersection is indeed absolutely terrible. It makes no sense. You're not
allowed to cross there; there are even signs saying so on both sides,
pointing to a crosswalk a minute's walk ahead. I.e., there is just no way
you're allowed to cross there, despite the island in the middle having the
cosmetic design of a walkway, you're not supposed to get on the island or get
off it or cross the roads at that location. It's a terrible design choice,
both inviting people to cross (in a dangerous spot), while also saying it's
illegal. With better infrastructure planning you could have railing there
preventing crossing, and no island at all.

That having been said, I can't for the life of me imagine crossing there and
not seeing an oncoming car with its headlights on, assuming I was paying
attention to the road I was illegally crossing. The fact that meth was
involved offers a possible explanation for why that kind of attention was not
given.

You can check out the road on Google Street View; it definitely helps. I
think you'd agree on two things: one, that you're not allowed to cross there,
and two, that if you were to cross there, it'd be pretty easy to see cars. The
reverse isn't necessarily true if you're wearing dark clothing at night.

~~~
gkya
AFAIU, the pedestrian who was killed does indeed bear part of the blame, and
we probably wouldn't feel as guilty saying that if it had been an ordinary car
that killed her. But for this general discussion that is beside the point; the
emphasis is (and should be) on the incompetence and negligence of Uber, which
led to a person's death, even if her own negligence was at play too. The
incident demonstrated that (i) Uber, as an unscrupulous and unethical company,
is not capable of this sort of business, and that (ii) we should still be
deeply sceptical of the capabilities of automated vehicles in responding to
arbitrary reconfigurations and movements in their paths.

------
sschueller

      - 70 mph to 0 in around 184 ft. [1]
      - 43 mph = 63.07 feet per second
      - At 6 seconds, the car was 378 ft. away.
    

[1] [http://media.caranddriver.com/files/2016-volvo-
xc90-t6-awd-i...](http://media.caranddriver.com/files/2016-volvo-xc90-t6-awd-
inscriptioncomplete-specs-and-performance-data-2016-volvo-xc90-t6-awd-
inscription-september-2015-road-test.pdf)
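
Running those numbers (a rough sketch; it assumes the car could sustain the same deceleration as in the linked 70 mph road test):

```python
# Checking the figures above: the 184 ft stop from 70 mph implies ~0.89 g of
# deceleration; at that rate, how far does a stop from 43 mph take?

MPH_TO_FPS = 5280 / 3600

v70 = 70 * MPH_TO_FPS             # ~102.7 ft/s
decel = v70 ** 2 / (2 * 184)      # ~28.6 ft/s^2, about 0.89 g

v43 = 43 * MPH_TO_FPS             # ~63.1 ft/s
stop_43 = v43 ** 2 / (2 * decel)  # ~69 ft to stop from 43 mph
detect_gap = v43 * 6              # ~378 ft covered in the 6 s before impact

print(f"distance at detection: {detect_gap:.0f} ft; stopping distance: {stop_43:.0f} ft")
```

I.e., the car had roughly five times the distance it needed to stop at the moment of first detection -- though first detection is not the same as first determination that braking was warranted.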

~~~
IkmoIkmo
The detection at 6 seconds was just of an object though, not an object moving
into the car's path. You couldn't drive a car if you had to constantly break
because objects (such as people standing by the road) were being detected.

It's not clear at what point the car ascertained a collision would occur
between detection 6s before and the determination that emergency breaking was
necessary 1.3 seconds before.

Was there any other determination in between, and when? What I'd like to see
is Uber's modelling of the woman's trajectory and the likeliness of collision
across the 6 second window. That's completely left unsaid.

The average braking distance of a car is about 24m at 40mph, which is
approximately the distance between the woman and the car at 1.3 seconds out.
So perhaps the 1.3s figure wasn't the first moment the car determined a brake
was necessary, but rather, the last moment the car could have braked to
prevent a substantial collision. I want to know the first moment the car
determined a brake was necessary at all. It's likely not 6s, but it's also
likely not 1.3 seconds. It seems this was entirely preventable, or at least
the collision impact could have been mitigated severely, had there been a
braking and/or warning system in place.

Shutting off brakes on literally the only driving agent tasked with full
attention is inexcusable. But that's what they did. To me that's murder. They
used to have two passengers, one for tagging circumstantial data, the other to
override the car when necessary and keep eyes on the road at all times. Either
keep that and shut off emergency brakes from the car and put a warning system
in place for the 'driver'. Or do not shut off emergency brakes. Instead they
put a single person in the car, tasked to do things that kept her eyes off the
road half of the time, and shut off brakes for the AI. That's insane.

~~~
bo1024
> You couldn't drive a car if you had to constantly break [sic] because
> objects (such as people standing by the road) were being detected.

Yes, you could -- that's how I drive. Do you not? If I detect a mobile object
that might be moving into my path, I slow down to give myself time to react
until I am reasonably certain of safety. When doing so I take into account my
situation-specific knowledge -- have I made eye contact with the pedestrian
and do they know I'm coming? Is the dog on a leash and is the owner being
attentive? Does the cyclist seem aware of my presence?

I would expect no less from anyone licensed to drive a car, be they human or
software.

~~~
IkmoIkmo
You're misinterpreting my point.

I didn't say you can't drive a car without being cautious.

I said you can't drive it without constantly braking the moment you detect an
object, irrespective of what the object is doing. (e.g. moving into or away
from the driving path).

i.e., just because an object was detected 6 seconds before impact did not mean
the car ought to have started braking at that moment. It could be that the
object was 200 feet away and moving away from the car's driving path, 6
seconds before impact. It'd be absolutely ridiculous to brake in that
situation.

We have no information about this context, e.g. the car's data or
determinations within the 6 second window. We only know it detected an object
6 seconds before impact.

It appears like the person I was replying to implied 'the braking distance was
180 feet, but the person was 380 feet away, thus uber could have prevented
killing this woman had it not shut off the brakes'. In reality, the 6 second
figure isn't relevant. What is relevant is the context that allowed a
reasonable driver/AI to determine at a particular point in time, that the car
should have slowed/braked. And we don't have that information yet. That's what
I'm interested in.

~~~
bo1024
I don't think I'm misinterpreting, just disagreeing about the level of
caution. One point is that humans are quite good at immediately recognizing
objects and evaluating threat level (at least when attentive). So a human is
rarely in a scenario of "there's something up ahead I have no idea what it is
or where it's going." But if they were, I don't think it's at all ridiculous
to slow down until determining those things. If software is in that scenario,
I absolutely expect it to slow down until it can determine with high
confidence that no object ahead is a likely threat. (edit) For instance, an
attitude like "in my training data, unidentified objects rarely wander into
the road" is not good enough for me, I want to hold software (and humans) to a
much safer standard.

~~~
niftich
Humans are frequently in this scenario, especially at night. For example, a
reflection from a rural roadside mailbox's prism looks similar to the eyes of
a deer, and shredded truck tire treads look similar to chunks of automotive
bodywork debris. This doesn't invalidate your point about slowing down.

We're asking a lot from this software (for good reasons), but humans commit
similar leaps of faith of varying severity on the roads daily --
failure to yield, failure to maintain following distance, assuming other
drivers immediately adjacent to you will keep driving safely and carefully --
and only a small subset of these situations results in accidents. We're
expecting an algorithm coded by humans to perform better than a complicated
bioelectric system we barely understand.

Waymo has committed to thoroughly understanding its environment, which is
why their cars drive in a manner that bears no resemblance to how humans
actually drive. We as a society have to eventually reconcile the implications
of the disconnect.

~~~
slededit
> reflection from a rural roadside mailbox's prism looks similar to the eyes
> of a deer

Deer hits are a major cause of fatalities out in the country. If you're
driving at night in deer country and your eyes aren't wide open, you are going
to have an unhappy experience at some point. Deer instincts are essentially
the exact opposite of what they should be when encountering a car: they will
stay in the middle of the road, and they will jump in front of you if
startled.

------
bb101
Nice little FUD by WSJ in their subheadline: "Pedestrian tested positive for
methamphetamine and marijuana" -- which is not referred to again in the
article; moreover, I have trouble seeing its relevance to the accident.

~~~
Zak
It is mentioned again at the end of the article. It's relevant to the accident
because it provides an explanation for the pedestrian crossing the road
outside a crosswalk without adequately checking for traffic.

~~~
admax88q
Yeah, only druggies jaywalk.

~~~
filoleg
Not at all, sober people jaywalk too. They just tend to check their
surroundings a bit better than their high counterparts for any cars coming to
potentially hit them.

~~~
IncRnd
Nice try, but that doesn't apply to this specific case, where the car could
and should have stopped but didn't.

~~~
Zak
The car should have stopped automatically, but that system was disabled. The
pedestrian shouldn't have been there, but made a bad decision. The pedestrian
should have been more aware of her surroundings, but was impaired or not
paying attention. The safety driver should have been paying more attention to
the road, but was also monitoring system displays.

This crash didn't have a single cause. Any one of those factors being handled
correctly would have prevented it.

~~~
IncRnd
> _This crash didn't have a single cause._

It is not the fault of the pedestrian for having been slaughtered, especially
given the immense amount of time between when the car first saw the pedestrian
and when the impact occurred.

------
sly010
Does this mean the technology is so early that they are struggling to program
it to do the right thing in normal conditions, let alone to prevent an
accident?

~~~
rossdavidh
It sure suggests the possibility of, "that thing keeps braking when we don't
want it to, turn it off". When, you know, human drivers manage to do quite
fine with it on. If you have to disable the built-in safety features of the
Volvo to get your driving software to work, then you're not ready to do a road
test.

~~~
uxp100
I don't think that is right. My understanding is that they didn't HAVE to turn
the built-in safety features of the Volvo off to make it work, but instead
they had to in order to test their equivalent safety feature. If they have a
feature that exactly mimics Volvo's, it can't be tested while Volvo's is
active (or at least that is the idea; I think it probably could be tested in
some way).

But then they turned their own safety feature off, because it failed to be as
good as Volvo's. And then did not turn Volvo's feature back on.

~~~
marcosdumay
Either you work with higher error margins than the Volvo software (which
should be the case, since yours is being tested), or you log your software's
decisions and compare them with Volvo's after the fact.

~~~
uxp100
Right, that makes sense. I think they were really underestimating the risks of
distracted driving. Which I think a lot of people (myself included) do.

------
blhack
Isn't the person actually driving the car supposed to be driving the car?

I've ridden in these self-driving Ubers. When I rode in one, the driver drove
almost the entire time, except on a few straight stretches of road. They
always had their hands ready to grab the wheel, were always attending to what
was happening etc.

It seems like the marketing and the engineering got crossed here. Marketing
said these were self-driving, but anybody who rode in them knew they weren't.
They were supposed to be getting driven by real drivers. From the report, it
sounds like the drivers were listening to the marketing instead of the
engineering team (who presumably would have told them that the system doesn't
brake on its own).

It sounds like the real driver wasn't driving as they were supposed to be.
From the video, it looked like they were reading something on their phone[1]
instead of driving the car.

Compare this to a pilot flying on autopilot. They don't shut their radios
off and stop paying attention to the flight; they still fly the airplane and
remain attentive to what is happening with it. That's what this driver should
have been doing, not looking at their phone.

It frustrates me that this level of negligence could set self-driving tech,
something that will save countless lives, back. This was the Chernobyl moment
for self-driving tech. It's safer than the alternatives, but now this is all
people are associating it with.

[1]: The driver states that they were interacting with the Uber self-driving
system, not their phone.

~~~
Jasper_
> That's what this driver should have been doing, not looking at their phone.

According to the NTSB report, they were looking at a separate diagnostic
panel and flagging messages, which Uber asked them to do as part of their
self-driving training duties.

> It's safer than alternatives

The system also decided it should have applied the emergency brakes, but then
didn't. I don't think this system is safer than alternatives.

------
reacharavindh
The number of occurrences of "break" in place of "brake" in this discussion is
very disconcerting.

~~~
meritt
Someone at Uber probably recently committed:

    
    
        if (personDetected())
        -   break();
        +   brake();

~~~
ht85
Maybe it's written like a game engine

    
    
        for (;;) {
            if (detect_obstacle()) break;
            move_forward();
        }

------
TheForumTroll
Let me get this straight:

The driver has to look down at a console to see warnings like this AND drive
the car? This, plus the fact that the emergency braking was disabled, tells me
the accident was 100% the fault of Uber, even if the "driver" had been dancing
in the backseat.

------
tareqak
Yet another reason in favour of requiring professional engineers to design
and implement safety-critical software: "move fast and break things" becomes
unacceptable when those "things" are the lives of living, breathing people.

------
gok
Yikes, it had not occurred to me that even Uber would ignore their automation
system's pleas to emergency brake because it was making their cars look too
jumpy.

------
ggg9990
The one thing I spent years teaching my wife is that when there’s anything
untoward on the highway, JUST STOP. I’ve avoided at least two major accidents
by stopping instead of swerving — in a few feet you can get your speed down to
levels where a crash won’t be fatal or severely injurious. And even if you get
rear ended by stopping short, it’s usually at a much lower speed. All
autonomous cars should just STOP when something is not right.

~~~
cmurf
Or just slow down a lot. The fatality risk is way lower at 18 mph than it is
at 43 mph.

[https://ec.europa.eu/transport/road_safety/specialist/knowle...](https://ec.europa.eu/transport/road_safety/specialist/knowledge/speed/speed_is_a_central_issue_in_road_safety/speed_and_the_injury_risk_for_different_speed_levels_en)
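
Back-of-the-envelope: impact kinetic energy scales with the square of speed,
so the vehicle's mass drops out and 43 mph carries roughly 5.7 times the
energy of 18 mph. A quick check in Python:

```python
# Kinetic energy goes as v^2, so the ratio of impact energies depends
# only on the speed ratio, not on the vehicle's mass.
v_fast, v_slow = 43.0, 18.0  # mph
energy_ratio = (v_fast / v_slow) ** 2  # roughly 5.7x
print(round(energy_ratio, 1))
```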

------
kbos87
Someone should be facing manslaughter charges here.

~~~
zerostar07
It's gonna be a case of distributed responsibility

------
mLuby
"Struck" is a strange euphemism for "killed". :(

~~~
nkrisc
I agree, but in this context "struck" is what happened. "Killed" was the
consequence in this case but not necessarily a guaranteed one (although at the
speeds involved in this particular case, it was almost certainly guaranteed).

~~~
mLuby
Pedantically true, but in this case (and most) the result is more newsworthy
than the action. See "person pulls trigger many times in school".

~~~
danso
The event of interest is the vehicle striking/killing someone. Pulling the
trigger is not that in the context of a shooting, and "pulling the trigger" is
not really a euphemism for shooting and killing someone with a firearm.

~~~
solarkraft
"person shoots at people in school" still doesn't convey them _killing_ anyone
when the effect is a newsworthy subject (though maybe not that much in the
US).

------
IncRnd
Article from 1996 about Space Shuttle software:
[https://www.fastcompany.com/28121/they-write-right-
stuff](https://www.fastcompany.com/28121/they-write-right-stuff)

 _But how much work the software does is not what makes it remarkable. What
makes it remarkable is how well the software works. This software never
crashes. It never needs to be re-booted. This software is bug-free. It is
perfect, as perfect as human beings have achieved. Consider these stats: the
last three versions of the program, each 420,000 lines long, had just one
error each. The last 11 versions of this software had a total of 17 errors.
Commercial programs of equivalent complexity would have 5,000 errors._

Also from the article: “If the software isn’t perfect, some of the people we
go to meetings with might die."

------
booleandilemma
Future engineers growing up today will read about this incident alongside the
Therac-25 and the Tacoma Narrows Bridge.

------
lbriner
Isn't the problem here much like the presumed cause for Air France 447
([https://en.wikipedia.org/wiki/Air_France_Flight_447](https://en.wikipedia.org/wiki/Air_France_Flight_447))?
The pilots are so used to automatic operation, which happens 99.9% of the
time, that they are not prepared for the rare time when it fails. A bit like
someone who guards a bank for 20 years, then gets robbed and can't react in
time because their mind has learned that nothing ever happens.

~~~
emodendroket
Well, I'd say that's _an_ issue, but not the only one.

~~~
lbriner
I was thinking that the assumption in the design is that the operator can
always handle the hard/fringe cases but the reality is that due to the AF447
principle (whatever it's called), this is precisely what the operator cannot
do reliably.

~~~
emodendroket
Yeah, I agree, but it seems the system also had no alert, and Uber was asking
the operator to simultaneously record data, which led to them not even looking
at the road when the crash happened.

------
codedokode
That's so irresponsible. So the system cannot apply emergency braking, but why
didn't it at least try to reduce speed?

Those cars are not ready for public roads. If your sensors give too many false
alarms then you should improve them.

Also I think initially there should be a speed limit for self-driving cars.
For example, 15-20 mph should be enough for driving in the city and won't
cause too much harm in the case of an accident.

------
YeGoblynQueenne
>> Roadway lighting was present.

It's worth remembering that the video of the accident that Uber made
available showed a pitch-black road with very little lighting, something that
is not corroborated by the NTSB report. Given that the only comment about
visibility in the preliminary report is that "(r)oadway lighting was present",
it sounds very likely that Uber deliberately tried to create a misleading
impression.

>> The forward-facing videos show the pedestrian coming into view and
proceeding into the path of the vehicle

This is a little less clear-cut, but it also seems to cast doubt on the
initial statement by the Tempe police chief that Uber was not at fault
because the pedestrian dashed onto the road suddenly and the crash was
basically "unavoidable". [1]

______________

[1] [https://www.sfchronicle.com/business/article/Exclusive-
Tempe...](https://www.sfchronicle.com/business/article/Exclusive-Tempe-police-
chief-says-early-probe-12765481.php#photo-15257361)

------
lisper
This one is a complete no-brainer: At the first hint of an anomaly, the AI
should have 1) alerted the driver and 2) started to gradually slow down.

If that strategy produces too many false positives, it's time to go back to
the drawing board. The right answer in that case is absolutely NOT to say,
"Ah, just fuck it" and deploy the system in the field.
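
One way to picture the "gradually slow down" strategy: map anomaly confidence
to a deceleration target plus a driver alert, instead of a binary
brake/no-brake call. A rough Python sketch (illustrative names and numbers
only, not any real AV stack):

```python
def planned_deceleration(anomaly_confidence: float, max_decel: float = 6.9) -> float:
    """Map anomaly confidence in [0, 1] to a braking level in m/s^2.

    Low-confidence anomalies get gentle slowing; only high confidence
    ramps up to full emergency braking (max_decel is roughly 0.7 g).
    """
    confidence = max(0.0, min(1.0, anomaly_confidence))
    return confidence * max_decel

def respond(anomaly_confidence: float):
    """Alert the driver and slow down at the first hint of an anomaly."""
    decel = planned_deceleration(anomaly_confidence)
    alert = decel > 0.0
    return alert, decel
```

The point of the scheme is that there is no configuration in which a detected
hazard produces neither an alert nor any slowing.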

------
perl4ever
Two semi-automated types of cars are out there:

1\. Ones where a human pilots them most of the time, but a computer steps in
in emergencies.

2\. Ones where a computer pilots them most of the time, but a human steps in
in emergencies (hopefully).

For some reason I don't understand, people treat (2) as an evolution of (1).
But it is the inverse.

~~~
forgot-my-pw
What would be the point of #2? Humans have bad reaction times in emergencies
most of the time.

~~~
JackCh
Presumably the point of #2 is that it's easier than the implied #3 (_"The
computer always drives"_), which is their as-yet-unrealized goal. Of course,
deploying #2 into the wild may prove to be ethically unjustifiable. Being
easier than the implied #3 is a poor excuse.

------
raven105x
Funny, all this contention around technology that simply doesn't exist yet.
When automakers market SAE Autonomy L2
([https://en.wikipedia.org/wiki/Autonomous_car#Classification](https://en.wikipedia.org/wiki/Autonomous_car#Classification))
certified vehicles as "self-driving" in official marketing media, and even
more so when their sales reps blatantly lie to customers (both of which should
be illegal), they're responsible for lost lives and undermining their
companies' image, ethics and financials.

There's only one car certified for SAE L3 / "eyes off", the 2018 Audi A8, and
only up to 60 km/h - this is still not self-driving.

The primary concern here is false advertising as pervasive as it is blatant.

------
knodi
Why not alert the driver?

BMW has a collision-detection system that makes a loud beep when it determines
a collision is imminent. The beep alerts the driver regardless of false
positives or false negatives. It's a simple solution which many cars have had
for over 10 years.

Why Uber engineers chose not to alert the driver is beyond me.

------
podiki
Coverage on Ars: [https://arstechnica.com/cars/2018/05/emergency-brakes-
were-d...](https://arstechnica.com/cars/2018/05/emergency-brakes-were-
disabled-by-ubers-self-driving-software-ntsb-says/)

------
mnm1
> According to Uber, emergency braking maneuvers are not enabled while the
> vehicle is under computer control, to reduce the potential for erratic
> vehicle behavior. The vehicle operator is relied on to intervene and take
> action. The system is not designed to alert the operator.

Wow. So the vehicle and these tests were run knowing full well that an
accident like this would not be preventable. This isn't manslaughter, this is
murder. Uber was letting its car drive without any safety systems, without
even an alert driver behind the wheel because her job was to monitor the
panel. Fuck me. Wow. They should never be allowed to operate another
autonomous vehicle again. The CEO should go to fucking jail. Fucking
murderers.

------
jellicle
If any non-rich, non-corporate individual had done the following: sent out a
vehicle on city streets, driving fast under computer control, which did not
have any capability to brake and which did indeed strike and kill someone, I
think that non-rich person would certainly be sent to jail for their reckless
behavior and also receive very large fines and civil judgments. It wouldn't
even be close. The judge would shout at them during sentencing and the papers
would cheer.

So, let's see what will happen to the Uber personnel involved.

By the way, it's insane to be an Uber test driver under these circumstances.
They're going to hang you out to dry. Quit.

------
diebir
Interestingly, the picture shows that the car seems to have tried to avoid the
collision by moving to the right.

Of course, every human driver with experience knows that you pass behind
pedestrians, not ahead of them. The artificial intelligence obviously was not
smart enough.

In other words, as suspected, self-driving cars are a pipe dream. One can't
just mix a bunch of statistical voodoo into a neural net and hope it will
work. It may work most of the time, but the mistakes will be catastrophic and
incredibly stupid.

------
yalogin
Now I understand why Uber started their advertising campaign. They know this
will come out and show the world how irresponsible they are, and they wanted
to get in front of that.

------
jacksmith21006
Uber should bail on doing their own and work a deal with Waymo.

~~~
CobrastanJorji
I have this weird hunch that those two companies' self-driving car divisions
may not be huge fans of each other these days.

~~~
forgot-my-pw
It's possible. The lawsuit is settled and Waymo actually owns about 1% of
Uber. In the past, Google also invested in Uber.

~~~
jacksmith21006
Google owns more like 7% or maybe 8% of Uber. They already owned a piece
before the lawsuit.

Think they also own a piece of Lyft.

------
jonbarker
This entire debate reminds me of this classic Milton Friedman video (I think
the key difference with Uber, and where they are likely to get in trouble, is
that they never made any estimate about how many lives their car experiment
would cost that I am aware of):
[https://www.youtube.com/watch?v=EYW5I96h-9w](https://www.youtube.com/watch?v=EYW5I96h-9w)

------
cmurf
a. _the self-driving system software classified the pedestrian as an unknown
object, as a vehicle, and then as a bicycle with varying expectations of
future travel path_

All of those things are bad things to hit, why not slow down?

b. _1.3 seconds before impact, the self-driving system determined that an
emergency braking maneuver was needed_

Too late for either a human or computer.
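
Rough numbers back this up. At 43 mph (~19.2 m/s) the car was only about 25 m
from the point of impact when the decision fired, while a full stop from that
speed takes about 27 m even at a hard 0.7 g. A sketch of the arithmetic (the
deceleration figure is an assumption, not from the report):

```python
import math

v0 = 43 * 0.44704   # 43 mph in m/s, ~19.2
a = 0.7 * 9.81      # assumed hard braking, ~0.7 g in m/s^2
d = v0 * 1.3        # ~25 m to impact at the 1.3 s decision point

stop_distance = v0 ** 2 / (2 * a)              # ~27 m: can't quite stop in time
impact_speed = math.sqrt(v0 ** 2 - 2 * a * d)  # v^2 = v0^2 - 2ad, ~5 m/s left

print(round(stop_distance, 1), round(impact_speed, 1))
```

So even at 1.3 seconds out the collision was unavoidable, but full braking
would have cut the impact speed from 43 mph to roughly 11 mph: exactly the
"mitigation" the system concluded was needed and was not allowed to perform.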

c. _operator is responsible for monitoring diagnostic messages_

There is superseding responsibility to drive the car safely. Uber's policy
sets up the test driver for failure, and puts people's lives at risk.

d. _emergency braking maneuvers are not enabled while the vehicle is under
computer control_

The pedestrian had no chance.

The very feature that should make the car safer was disabled; the system's
judgement was also poor and too late; and Uber policy sabotaged the driver,
the exclusive (not merely primary) safety mechanism, by distracting them with
data that in almost every way would have been more diverting than talking on
a cell phone or fiddling with the radio.

I hope the family got a lot of money in the settlement.

------
tomtimtall
So, as many pointed out quite early, this is literally a case of Uber killing
people because they cared more about racking up miles than safety and, in
effect, human lives. They disabled safety measures because they were getting
in the way.

Hopefully they will get the book thrown at them, and then a couple of chairs.

------
aphextron
Someone from Uber needs to go to jail over this. That woman's death was the
result of pure criminal negligence.

------
gepoch
A good podcast on the risks of semi-automation:
[https://99percentinvisible.org/episode/children-of-the-
magen...](https://99percentinvisible.org/episode/children-of-the-magenta-
automation-paradox-pt-1/)

------
pasbesoin
Perhaps Uber is going to once again "ask for forgiveness". (As in, it's better
to ask for forgiveness than permission.)

This time, I don't think they should get it.

(Not that I, for one, should have received it, previously.)

------
JustSomeNobody
Move fast and break, nay, kill people.

I hope this accident weighs on the people who made these decisions so that
they will be more careful in the future. I would hope they would not see it
simply as a bug in the system.

------
Zelphyr
Uber is a prime example that the “Move fast; break shit” mentality that
Facebook pioneered, and that Valley companies so love, is now costing lives.

------
selfsteeringcar
Even if the emergency braking was disabled, why was there a lack of automatic
steering, e.g. to change lanes, to avoid the collision?

------
Waterluvian
It won't load for me. Are they referring this to the Justice department?
Someone needs to go to prison.

------
uberkills
All Uber executives should be forced to recreate this scenario as the
pedestrians crossing the street, while their test vehicles drive the loop,
over and over again until there are no pedestrians killed by their self-
driving cars!

------
billsmithaustin
For a government-authored report, that was very readable.

------
mindfulplay
This is what happens when software companies that have traditionally had a
"fail fast, fail often" mentality ship something in the physical world.

Especially something as dangerous as a ton of steel moving like a missile on
our roads.

Absolutely stunningly stupid that there are teams that built this and felt
incentivized to put it on our roads without any concern or safety
mechanisms. Shameful.

Legally, it will be amazing in the future to hold these people - the
engineers, product managers, the PR people and the CEO (ex and current) all
accountable. We did for something far less serious with VW...

~~~
linuxftw
I have to blame management here. There must certainly have been engineers who
spoke up about this problem and who may have been silenced by NDAs (or other
means), even if they resigned in protest.

Stopping in time to not run over an unexpected pedestrian crossing the road
would be item number one on any sensible person's agenda. Uber needs to be
liquidated.

~~~
pradn
We shouldn't assume that engineers have spoken up about this problem. All the
same incentives (stock, bonuses) and fears (pushback for whistleblowers, an
anti-truth company culture, getting fired) that apply to management apply to
engineers, too. Engineers are fallible people, too.

------
newnewpdro
Criminally negligent.

------
joncrane
Uber has the ability to learn from its mistakes, I hope.

~~~
stefan_
Mistake? Turning off emergency braking because it happened too often _then
continuing to drive on public streets_ isn't a mistake. That calls for a
custodial sentence.

~~~
smm2000
The vast majority of cars on the road, and the majority of new cars sold in
the US, have no emergency braking assist. Who should we sentence every time
somebody dies because of a lack of EBA?

~~~
makomk
The vast majority of cars are being driven by humans, who we expect to carry
out an emergency stop if necessary. Replacing the human with a computer driver
that is intentionally incapable of any kind of emergency stop, and justifying
this by relying on a human supervisor who is required to take their eyes off
the road for much of the trip, is basically murder.

(Besides, relying solely on an unassisted human driver - even an attentive one
- is dangerous enough that the industry wants to make automatic emergency
braking systems mandatory on all new cars as soon as it's practical.)

------
sarreph
Does anyone have a non-paywalled link? Thanks :)

~~~
aeorgnoieang
Yeah, I'm of two minds about the usefulness of sharing links to things that a
significant proportion of people might not be able to access.

This is actually a good use-case for that pay-per-article site/app (but I
can't remember its name off the top of my head). Unfortunately, I don't know
how easily I could find the article in it. I emailed them about that and they
told me they were working on integrating with publisher sites; maybe they
should look into integrating with _aggregator_ sites instead.

~~~
aeorgnoieang
[Blendle]([https://blendle.com](https://blendle.com)) was the site of which I
was thinking.

~~~
aeorgnoieang
Here's my Blendle link for the article (which costs $0.49):

[https://blendle.com/i/wsj-com/uber-self-driving-car-that-
str...](https://blendle.com/i/wsj-com/uber-self-driving-car-that-struck-
killed-pedestrian-wasnt-set-to-stop-in-an-emergency/bnl-
wsj-20180524-SB11988325855978843674704584244083984615954?sharer=eyJ2ZXJzaW9uIjoiMSIsInVpZCI6Imtlbm55ZXZpdHQiLCJpdGVtX2lkIjoiYm5sLXdzai0yMDE4MDUyNC1TQjExOTg4MzI1ODU1OTc4ODQzNjc0NzA0NTg0MjQ0MDgzOTg0NjE1OTU0In0%3D)

------
moonbug
Notwithstanding a click-baiting take from the headline writer, why on earth
would that be a configurable option?

~~~
josecastillo
I'm not sure the title is click-baiting; the reality sounds exactly as dire as
the headline.

> The agency, which investigates deadly transit accidents, said Uber’s self-
> driving system determined the need to emergency-brake the car 1.3 seconds
> before the deadly impact. The NTSB report said that, according to Uber,
> automatic emergency braking isn’t enabled in order to “reduce the potential
> for erratic vehicle behavior” and that the system also isn’t designed to
> alert the operator in case of an emergency.

------
matt-attack
> the operator said she had been monitoring the self-driving system interface

I guarantee that driver was on her phone. Just watch.

~~~
jacksmith21006
Yes they were. You cannot be only kind-of driving. Google has said this
several times: either the car can drive itself or it can't. This halfway
approach is dangerous.

