
Uber vehicle reportedly saw but ignored woman it struck - runesoerensen
https://techcrunch.com/2018/05/07/uber-vehicle-reportedly-saw-but-ignored-woman-it-struck/
======
freditup
I'm curious what degree of "cultural context" is built into automated cars - I
feel that there have to be a lot of locale-specific techniques for driving.

For example, in NYC, there's nothing at all uncommon about a pedestrian
beginning to cross the street in front of you and approaching the area of your
lane where you will shortly be. Pedestrians typically then stop about a foot
away from your lane, let you drive past them, and then continue walking. If a
car slammed on the brakes in this situation, it would likely cause more
accidents than not braking would cause.

By contrast, in Tempe, a pedestrian starting to do the exact same thing as
above much more likely hasn't realized your car is coming at all, in which
case slamming on the brakes is appropriate.

This isn't to defend Uber; their car clearly did the wrong thing in the
Tempe situation (though I make no judgment on who was at fault). And clearly
cars need to be able to handle the variety of unwritten, localized driving
quirks different regions have. But it seems like a very non-trivial problem to
do this correctly.

~~~
uptown
NPR had a story about how self-driving cars in Pittsburgh wouldn't be adhering
to the "Pittsburgh-left":

"Even harder for autonomous cars to master are local quirks; things like "the
Pittsburgh left." That's a custom unique to the city that allows the first
driver trying to make a left turn to do so before oncoming cars pass through
an intersection. It's one thing that Emily Duff Bartel says Uber's fleet of
self-driving cars won't be programmed to practice. "We're following the rules
on this one," she says."

[https://www.npr.org/sections/alltechconsidered/2017/04/03/52...](https://www.npr.org/sections/alltechconsidered/2017/04/03/522099560/pittsburgh-offers-driving-lessons-for-ubers-autonomous-cars)

~~~
matte_black
Is this the same as pulling straight into the middle of the intersection with
live traffic swirling around you on both sides, then making the left after the
lights have reddened and traffic has cleared?

~~~
vkou
No, what you're describing sounds a hell of a lot safer. (It's also how people
make left turns in Vancouver. The fact that it's illegal in Seattle infuriates
me to no end.)

The Pittsburgh left sounds like a recipe to get T-boned by an inattentive
driver. Pulling into the intersection, and making a left turn when it is safe
is a much better alternative.

~~~
WillPostForFood
It is legal to pull into the intersection on green or yellow, and wait until
it is clear (often when the light turns red) to turn left in Seattle. It is
the correct/legal way to make a left turn. Otherwise you are just going to sit
there blocking traffic for multiple cycles during busy times.

~~~
NeutronStar
That's why you go during the yellow. Cars are stopping at the new red. You
then go left before the cars getting the new green start moving.

~~~
Ajedi32
The oncoming traffic doesn't stop until the light turns red. When the light is
yellow, they're actually more likely to _speed up_ in order to "make the
light" rather than stop. So when the light inevitably turns red and you're in
the middle of the intersection, do you either:

a) Just sit there, in the middle of the intersection, blocking traffic.

b) Attempt to back up, hoping there are no other cars behind you in the left
turn lane blocking your path.

c) Continue through the intersection.

Unless the law says "never stop in the intersection in the first place", I
can't imagine it enforcing any option other than c.

~~~
klibertp
> The oncoming traffic doesn't stop until the light turns red.

Wait, why? No new car should enter the intersection when the light turns
yellow. The yellow light should last long enough for everyone to be able to
safely clear the intersection, in an orderly manner.

~~~
kd5bjo
In the States, there's often a 1-second all-red cycle to let the intersection
clear, because drivers treat a yellow light as a green that's about to expire.
Even with this, I often see 2 or 3 cars enter the intersection after I already
have a green signal.

------
Animats
I'm looking forward to seeing the full NTSB report.

This hints that Uber's mindset is focused on "obstacles", not "flat road". The
first job of automated driving is to drive only on flat surfaces. Doesn't
matter why it's not flat. Then you worry about where to go. That's how the
off-road DARPA Grand Challenge vehicles had to work. "Flat road" is a pure
geometry thing. Not much AI needed.

On road, non-flat road is rare, and you don't have to worry much about going
off cliffs, rocks in the road, and such. So it's tempting to focus on
"obstacles" to be tallied and classified. Tesla definitely has an "obstacle"
focus, and a limited class of obstacles considered. Waymo profiles the ground.
Looks like Uber had the "obstacle" focus.

If you have a "flat road" detection system, things the obstacle detector
doesn't understand get stopped for or bypassed. So the vehicle isn't
vulnerable to this flaw. There's a higher false-alarm rate, though. And a
non-false-alarm problem: a piece of trash on the road is likely to result in a
slowdown and careful avoidance, not a bump as the car rolls over it. Still,
better to take the safe approach unless you're on a freeway and the cars ahead
of you just got past the obstacle successfully.

The Udacity self driving car "nanodegree" trains people to build obstacle
detectors and classifiers. That may be a problem.
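
To make the distinction concrete, here's a minimal sketch of the "flat road"
idea, assuming a lidar point cloud in the vehicle frame (the plane-fit
method, cell size, and tolerance are all illustrative, not anyone's actual
pipeline):

    # "Flat road" as pure geometry: fit a ground plane to the lidar
    # returns, then flag any grid cell whose points deviate from it.
    # Anything non-flat is "don't drive there" -- no classifier needed.
    import numpy as np

    def flat_road_mask(points, cell=0.5, tol=0.15):
        """points: (N, 3) array of x, y, z returns. Returns a dict
        mapping (i, j) grid cells to True if drivably flat."""
        # Fit z = a*x + b*y + c by least squares (assumes the road
        # dominates the returns).
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        residual = points[:, 2] - A @ coef   # height above the plane

        mask = {}
        for (x, y), r in zip(points[:, :2], residual):
            key = (int(x // cell), int(y // cell))
            # A cell is flat only if every return in it hugs the plane.
            mask[key] = mask.get(key, True) and abs(r) < tol
        return mask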

~~~
krschultz
Is another way of framing this:

"flat road" => assume you can't continue until you prove it's safe

"obstacle detection" => assume you can continue unless there is something
blocking the current path

The bit of autonomous vehicle work that I did started from a set of potential
trajectories that were then pruned by obstacle detection. So we always assumed
you could continue moving forward unless every potential trajectory was
pruned. We ignored the cliff problem as beyond our scope.
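
As a sketch of that framing (with made-up minimal types: a trajectory is a
list of (x, y) waypoints, an obstacle a circle):

    import math

    def blocked(traj, obstacles, clearance=0.5):
        # A trajectory is pruned if any waypoint comes within
        # `clearance` of any detected obstacle (ox, oy, radius).
        return any(math.hypot(px - ox, py - oy) < r + clearance
                   for px, py in traj
                   for ox, oy, r in obstacles)

    def plan(candidates, obstacles):
        # "Assume you can continue": keep moving unless every
        # candidate trajectory has been pruned.
        survivors = [t for t in candidates if not blocked(t, obstacles)]
        return survivors[0] if survivors else None  # None -> brake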

------
tlb
The disturbing thing about this crash is that it's such a basic thing to test
on a test track. A pedestrian walking in front at night, with nothing else
unusual happening, should be on the list of scenarios to get 5 9s reliability
on before taking it out in public.

The things you have to test in public are the complex real-world interactions,
like these: [https://medium.com/kylevogt/why-testing-self-driving-cars-
in...](https://medium.com/kylevogt/why-testing-self-driving-cars-in-sf-is-
challenging-but-necessary-77dbe8345927)

~~~
ars
> get 5 9s reliability

That's it? Humans have greater than 7 9's reliability when driving. (As
measured by fatalities per time of driving, assuming the failure leading to
a fatality lasts 5 minutes and average speed is 30 mph.)

If the cause of a fatality is 10 seconds of failure, not 5 minutes, humans
have greater than 9 9s of reliability.

Math: 10 seconds / ((100 million miles / 1.13) / 30 miles per hour)

Fatality rate is 1.13 per 100 million miles.
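
Spelling that out (a quick back-of-the-envelope check of the numbers above):

    import math

    fatal_per_mile = 1.13 / 100e6             # fatality rate cited above
    sec_per_fatality = (1 / fatal_per_mile) / 30 * 3600   # at 30 mph

    for window in (300, 10):                  # 5-minute vs 10-second failure
        p = window / sec_per_fatality         # chance a given window is fatal
        print(window, "s:", round(-math.log10(p), 1), "nines")
    # 300 s: 7.5 nines; 10 s: 9.0 nines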

Good luck programming a computer to even _stay on_ and not crash with that
level of reliability.

There's this belief that humans are horrible at driving, etc, etc. Until you
run that math and realize, no, they're not. There's just a lot of driving
going on.

~~~
sdenton4
Fatalities seem like a pretty low bar to measure failures by... I would
prefer to measure reliability by reported accidents per mile driven. This stat
seems to be 183 per 100 million miles in 2009, compared to 1.13 fatalities per
100 million miles. So quite enough to shave 7 9's down to 5ish...
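
Running the same arithmetic as the parent, with that crash rate and the same
30 mph / 5-minute-failure assumptions:

    import math
    sec_per_crash = (100e6 / 183) / 30 * 3600    # ~6.6e7 s between crashes
    print(round(-math.log10(300 / sec_per_crash), 1))  # -> 5.3 "nines"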

~~~
oldgradstudent
The problem is that a lot of accidents are not reported and even injury
severity can be very subjective (especially if there's an interest to do
so[1]). I prefer counting fatalities because it's a hard endpoint - bodies are
hard to hide.

[1]
[https://www.youtube.com/watch?v=_ogxZxu6cjM](https://www.youtube.com/watch?v=_ogxZxu6cjM)

------
assblaster
The question I want answered is:

Did Uber doctor the camera video to make the road seem darker than it actually
was?

If you recall, there was initially doubt that Uber was at fault because the
streets appeared extremely dark, but I think they were buying time to deflect
the outrage at incompetent Uber.

The police initially didn't fault Uber based on this video.

~~~
jakelazaroff
That was a bad call on the part of the police.

You're only allowed to drive as fast as conditions permit. If it's dark enough
that you can't avoid an obstacle by the time your headlights illuminate it,
you need to _slow down_.

~~~
hazardmat
That's not really true. Thousands of deer get killed on the road every year,
because deer aren't smart enough _not_ to be an obstacle when it's dark. Do we
therefore change all the speed limits at night? No. It's a driving risk taken,
just like any other.

~~~
davidgould
Those deer are killed because thousands of humans aren't smart enough to not
overdrive their headlights so they hit obstacles in the dark. Hitting a deer
is not a small thing. You will seriously damage your car, and may be severely
injured or killed yourself if the deer is tall enough to come through the
windshield. In open range country you can also hit a cow. In some areas you
can hit a moose.

 _Do we therefore change all the speed limits at night?_

No, but we shouldn't drive like idiots either. It's not just _"a driving risk
taken"_, it's reckless and negligent behaviour.

~~~
docker_up
I've driven in Palo Alto near 280, and from a distance, in broad daylight, I
saw a buck run across an entire field at full speed and straight across the
road about 3-4 cars in front of me, and get hit by the car. There was no way
the driver could see what I saw, and he had no chance to avoid it. The deer
was simply acting irrationally. To say that people who hit deer are "driving
like idiots" is wrong.

~~~
davidgould
I should probably let this go as it's off topic, but I don't understand the
description of this incident:

\- Was this on 280, or near 280?

\- If the car that hit the deer was 3-4 cars ahead and you could see the deer
"run across an entire field", why was "there no way the driver could see what
you saw"? I'm trying to imagine what sort of obstruction could block the
driver's view but not yours.

\- How fast was the car going?

\- How long did it take the deer to "run across an entire field"?

\- Normally deer are active around sunrise and sunset and bed down during the
day (source: deer hunting). It seems odd to see a deer running in "broad
daylight".

------
toast0
> We’re actively cooperating with the NTSB in their investigation. Out of
> respect for that process and the trust we’ve built with NTSB, we can’t
> comment on the specifics of the incident.

Hey look, somebody knows how to follow the script! Uber may be grossly
negligent, but at least they respect the process unlike Tesla.

~~~
TillE
It's actually the rest of that statement which shocks me a bit:

> In the meantime, we have initiated a top-to-bottom safety review of our
> self-driving vehicles program,

So they actually think they can continue to work on self-driving vehicles
after this, and not scrap the program entirely. That's...not what I expected.
No municipality in the country should allow Uber to test on their streets
after they killed someone, almost certainly because of their recklessly fast
development. That's not something you come back from by promising to Do
Better, especially not when there are more than a few competitors already way
ahead of you.

~~~
lhorie
> No municipality in the country should allow Uber to test on their streets
> after they killed someone

This is pitchfork mentality.

Many industries have to deal with unintentional deaths. Car manufacturers
sometimes have to deal with deaths resulting from hardware failure. Food
companies sometimes have to deal with recalls due to food poisoning deaths. If
the response to a death is to ban all further production work, eventually
nothing is ever going to get done because it's inherently impossible to be
perfect all the time.

It sucks that it had to come to someone's death, but the rest of the statement
said:

> we have brought on former NTSB Chair Christopher Hart to advise us on our
> overall safety culture

so it sounds like Uber is at least trying in earnest to put the house in
order.

~~~
cimmanom
Or a cynic could interpret that as "we've hired a crony of the people who will
be judging us so he can lobby them to let us off easy".

~~~
lhorie
Said cynic sounds pitchforky. Even HIPAA tries to work with non-compliant
companies to get them compliant before resorting to dishing out their
notoriously expensive fines.

At the end of the day, the NTSB isn't in the game of shutting down companies.
NTSB would much rather see that self-driving vehicle development is happening
safely, since the fact of the matter is that it _is_ happening, despite all of
its imperfections.

~~~
oldgradstudent
Hiring a former NTSB chair while there's an active NTSB investigation sounds
a lot like a revolving door to me.

~~~
lhorie
Um, to go back to being NTSB chair, he would have to be re-appointed by the
POTUS...

One does not just hire this guy as if he was some sort of money-chasing
sleazeball. I mean, just google the guy's bio

~~~
oldgradstudent
I may have abused the 'revolving door' term a bit, but I think you are missing
the point of hiring former regulators.

You are not hiring them for their bio, achievements or abilities. You are
hiring them to signal existing and future regulators that being nice to you
can be highly profitable.

~~~
lhorie
You keep saying Uber hired him, but where did you read that? The statement in
the article said "we have brought on former NTSB Chair Christopher Hart to
advise us". Given how high profile Hart is, and given the focus of his career,
I understood that to mean that he agreed to review safety practices and make
safety policy recommendations, a topic he's an expert on. Unlike
wording like "joining to lead a safety program", this arrangement doesn't
really imply profit-driven motivation, IMHO.

~~~
oldgradstudent
I don't see any other way to interpret their statement:

> Uber, which suspended testing of autonomous vehicles after the accident, on
> Monday said it was looking at its self-driving program and said it retained
> Christopher Hart, a former chairman of the NTSB, to advise it on safety.

> “We have initiated a top-to-bottom safety review of our self-driving
> vehicles program, and we have brought on former NTSB Chair Christopher Hart
> to advise us on our overall safety culture,” Uber said. “Our review is
> looking at everything from the safety of our system to our training
> processes for vehicle operators, and we hope to have more to say soon.”

[https://www.reuters.com/article/us-uber-selfdriving/uber-hir...](https://www.reuters.com/article/us-uber-selfdriving/uber-hires-safety-adviser-after-fatal-crash-wont-confirm-report-on-software-flaw-idUSKBN1I81Z4)

~~~
lhorie
> said it retained Christopher Hart

Ah I see, you're talking about "hiring" as in paying a retainer fee. I thought
you were saying he was hired as a full time employee, my apologies.

Assuming Reuters' wording is accurate, that actually seems fairly reasonable
and in line with how I understood it.

------
thegambit
It seems like it was actually Uber who disabled the safety settings that were
there to prevent this according to Volvo. "Uber Technologies Inc. disabled the
standard collision-avoidance technology in the Volvo SUV that struck and
killed a woman in Arizona last week, according to the auto-parts maker that
supplied the vehicle’s radar and camera."

[https://www.bloomberg.com/news/articles/2018-03-26/uber-disa...](https://www.bloomberg.com/news/articles/2018-03-26/uber-disabled-volvo-suv-s-standard-safety-system-before-fatality)

~~~
nmca
This whole line of reasoning frustrates me somewhat. The story of "Uber
Disabled the Safety System" paints Uber as negligent for doing so.
Outside of consideration of all the other facts (which unlike this one, do
indeed suggest that Uber are to blame), this element of the story is not
indicative of negligence. Of course they disabled the existing safety system -
they were aiming to build a new, safer one that could operate without reliance
on or interaction with the existing one. That (again, on its own) seems like a
totally reasonable engineering call.

~~~
jschwartzi
They are absolutely negligent for doing so. I wouldn't have made that
decision, because in a safety-critical system it's ALWAYS preferable to have a
backup system, especially when the system you're working with is unproven
software that you don't fully understand. It doesn't matter that the old
system would have interfered with the new system, and in fact if the two
systems did interfere it would behoove you to understand why. The decision
speaks volumes about their engineering culture leading up to this incident.

You can't just call something a safety system. You have to prove that it is a
safety system by testing it, which is something that Uber hadn't done before
they disabled the existing system.

~~~
nmca
So I agree that the system was insufficiently tested to be operating on public
roads and endangering lives, and that lack of testing was negligent; I also
agree with your comment about their engineering culture.

But those two points seem distinct from the idea of disabling a system in
order to have a better understanding of what's going on. Suppose they had
built a car that was safe, conditioned on the presence of a black box system that
they likely didn't have access to the internals of - would this be
satisfactory?

~~~
mannykannot
Even with the red herring about the safety overrides being a black box, yes,
it would be satisfactory - see my other post. Not only would the triggering of
the safety override generally indicate a failure of the autonomous system, the
use of failsafe overrides to catch corner cases should be a feature of the
final system.

If Uber could demonstrate, through the analysis of a statistically significant
number of events, that its system was actually safer without the car
manufacturer's override (e.g. if all the events were false positives), then it
would be appropriate to disable it at that time. That's how you do it.
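
A toy version of that demonstration, assuming you log every trigger of the
stock override and label each one (the statistical "rule of three" gives the
bound for the case where all of them turned out to be false positives):

    def tp_rate_upper_bound(n_triggers):
        # 95% upper confidence bound on the override's true-positive
        # rate when all n_triggers logged events were false positives.
        return 3.0 / n_triggers

    print(tp_rate_upper_bound(500))   # 500 clean triggers -> < 0.6%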

~~~
nmca
Replying to your other comment here as well - the inclusion of conceptually
simple safety mechanisms is important (eg I agree), as is the broader scheme
of including both hardware and algorithmic redundancy to improve safety. I
also agree that "live" initial testing of such safeguards is inappropriate,
and as above Uber clearly failed to do appropriate testing.

However you describe the (potential) black box nature of the existing system
as a red herring -> to be honest, this is what I'm most interested in. My
opinion is that including a black box component in a safety-critical system
would be inappropriate. Do you disagree with that? If your answer is "probe it
until it's no longer a black box and then include it", would you not consider
that to be overall semantically equivalent to "don't include a black box"?

~~~
mannykannot
It is a red herring because:

Firstly, it assumes Volvo is not sharing the parameters of the system. It
seems unlikely that Uber is installing an automated driving system into these
cars without the cooperation of Volvo, especially with the agreement to
ultimately get 24,000 autonomous-system-equipped cars from them.

Secondly, if Uber could instead determine what it wants to know about the
parameters by testing, then the question is irrelevant, as are the semantics.

Thirdly, it is presumably safe for humans to be driving cars without knowing
the exact parameters, and so should not present any particular problem for the
autonomous system - if the emergency brakes are triggered, it is likely to be
a situation in which it is the right thing to happen, and possibly a result of
the autonomous system failing. Just as for human drivers, an autonomous system
is expected to usually stay within the parameters of the emergency system,
without reference to those parameters, just by driving correctly. For example,
if the emergency brakes come on to stop the car from hitting a pedestrian
because the autonomous system failed to correctly identify the danger, what
difference would it have made if the system knew the exact parameters of the
emergency braking system?

Lastly, the road is an environment with a lot of uncertainty and
unpredictability. If the system is so fragile that the tiny amount of
uncertainty of not knowing the exact parameters of the automatic braking
system raises safety concerns, then it is nowhere near being a system with the
flexibility to drive safely.

It is possible that a competent autonomous driving system might supplant the
current emergency braking system, in which case the way to proceed is to
demonstrate it in the way I outlined in the last paragraph of my previous
post.

~~~
nmca
Thanks for answering in so much detail - I think the last two points make a
compelling case for not disabling the system, even in the true black box case,
and the first two are very compelling in the real world, even if they don't
apply to the thought experiment of an actual black box. You've broadly changed
my mind on this issue :)

~~~
mannykannot
I should have said that your concern is valid where two systems might issue
conflicting commands that could create a dangerous situation; it's just that
I don't see that as likely in this particular combination of systems.

------
stillsut
The exact circumstances of the struck pedestrian likely violated many
[possibly crude] Bayesian priors for what to consider a positive collision
threat: a rogue pedestrian at an unflagged portion of road, at an odd time of
night, on a road that doesn't usually have pedestrians, with a bike moving
perpendicular across the road (instead of parallel along it). Each one of
these cues had presumably been learned to have a high ratio of false
positives to true positives.

The product of these rarely encountered events (even treated as independent)
allowed a higher-level algorithm to "score" the approaching unverified
object, through Bayesian inference, below the threshold of a reasonable
expectation of a human collision.

In a way, this system should never be expected to become overly reckless or
feckless as compared with even top human drivers in the long run: close calls
of this nature should be input to the system (perhaps deliberately through
staged QA?) and added to the model of collision threat identification.
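
A rough sketch of the kind of scoring described above (all numbers invented
for illustration; each "prior" stands in for learned odds that a cue
accompanies a real threat):

    priors = {                        # P(real threat | cue), made up
        "pedestrian_mid_block": 0.30,
        "late_night": 0.40,
        "road_rarely_has_pedestrians": 0.25,
        "bike_moving_perpendicular": 0.35,
    }

    def threat_score(cues):
        score = 1.0
        for cue in cues:
            score *= priors[cue]      # each rare cue discounts the threat
        return score

    print(threat_score(priors))       # all four at once: ~0.0105
    # Below, say, a 0.05 braking threshold: compounded "usually a false
    # alarm" cues talk the system out of braking for a real pedestrian.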

~~~
justicezyx
I am rather confused:

Should the car always respond by slowing down as the mandatory response when
there is an object in front?

Are you implying that the algorithm would conclude that, because it's unlikely
for there to be an object in that circumstance, it is OK to dismiss its own
observation that there is something in front of the car? That sounds utterly
nonsensical.

~~~
danenania
Exactly. I'd expect collision-prevention to have absolute priority over any
other system. It should not be possible for any logic bug or combination of
environmental factors to make the car run into a large object in its field of
vision. There's no need for statistics here--an overriding 'don't run into
stuff' rule will suffice.

If there's truly no possible route to avoid some collision and therefore the
car needs to decide which collision is preferred, then the statistics can kick
in, but in this Uber scenario, it shouldn't have gotten that far. From the
outside, it seems like a fundamental design issue.
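
A sketch of that priority ordering (all interfaces here are made up; the
point is that the dumb check can override the planner, never the reverse):

    EMERGENCY_BRAKE = "max_brake"

    def control_step(planner, perception, state):
        threat = perception.imminent_obstacle(state)  # in our path now
        if threat is not None:
            if perception.collision_unavoidable(state, threat):
                # Only here may statistics pick the least-bad option.
                return planner.least_harm_command(state, threat)
            return EMERGENCY_BRAKE    # unconditional; beats the planner
        return planner.next_command(state)  # normal planning otherwise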

~~~
stillsut
> I'd expect collision-prevention to have absolute priority ..to [not] make
> the car run into a large object in its field of vision.

This is a good point.

But I think we've all placed our "toes over the curb" with a car hurtling
towards us in preparation to cross the street as soon as the car has passed.
No system can stop at each instance of this, nor can it recover past the
point-of-no-return created each time this event occurs.

What constitutes "the curb" is subjective when it's not a raised sidewalk.
Surely you don't have to be "on the grass" of a multilane high speed road but
merely somewhere on the shoulder to claim expected pedestrian lane status. But
that (may!) also imply you're an 'awaiting street crosser' not an 'engaged
street crosser'. That's where the high-level stats come in.

~~~
danenania
The toes over the curb thing is a _potential_ collision threat. It's fine to
statistically categorize and prioritize these, as they're everywhere. But as
soon as it becomes an imminent threat (i.e. is directly in the vehicle's
path), avoiding the collision should immediately become the overriding
objective. If the collision is truly unavoidable (this should _very_ rarely be
the case with an AV), then it should still be slamming on the brakes and doing
everything possible to lessen the force of the impact.

------
w_t_payne
I think that I am qualified to say that cost-effective testing is and will
continue to be the barrier to self-driving car adoption.

A big part of the solution to this problem is design-for-test; and the
adoption and tight integration of a wide range of (quite conventional but
sadly often overlooked) testing and systems engineering standards and
processes.

------
jacquesm
Anybody still want to defend the police's claim, made within a day of the
accident, that put the blame on the woman who was killed?

~~~
hazardmat
Yes. They said "preliminarily" the Uber driver was not at fault. Judging from
the footage, that was reasonable. You can't take the blame _off_ the woman;
you can't just cross a road at night like that and expect to be noticed.

Accidents like that happen all the time; it's only because of the
self-driving car that this is newsworthy.

Imagine if there wasn't any dashcam footage and this was a normal car with a
distracted driver, without any other witnesses they would likely be off the
hook.

~~~
jacquesm
A distracted driver has the option to tell the police they were distracted. Or
are you of the opinion that since the woman was dead it didn't matter anymore?

~~~
hazardmat
First of all, in this scenario, the woman would have at least shared _some_ of
the blame.

Second of all, what are the odds that a human who just killed a woman would
admit even to themselves that _they_ were the one to blame in such an
accident? Practically zero. Cognitive dissonance won't allow it. In their
minds, they weren't distracted, at least not enough to take any of the blame.

~~~
FireBeyond
> Second of all, what are the odds that a human who just killed a woman would
> admit even to themselves that they were the one to blame in such an
> accident? Practically zero.

As a paramedic who sees fatality accidents? Hardly. "Oh my god, I've killed
someone!" is a fairly common statement, or paraphrasing.

~~~
hazardmat
"Oh my god, I _accidentally_ killed someone" is something quite different from
"Oh my god, _my willful negligence alone_ has caused a fatality and I'm ready
to take full responsibility".

While not entirely impossible that the latter happens, it's highly unlikely.
What's much more likely is that these people are saying this because they want
to hear "Oh, but _it's not your fault_" from somebody else.

------
mcr1983
Uber's ultimately at fault because it was their car. But the woman driving
should be arrested for vehicular manslaughter. Uber knew its cars weren't
autonomous; that's why they had a driver to take over when the car screws up.
But she was texting when she should've been ready to take control of the
vehicle, and she hit a pedestrian that any driver who was paying attention
would've easily avoided.

~~~
bkor
> Uber's ultimately at fault because it was their car.

Uber, while hiring, should've specifically looked for people who can pay
attention for hours on end, basically like a train driver. Further, they
should have had a system in place to review whether people were paying
attention.

She was employed by Uber. If you make a huge error while working for someone,
often it's the employer who is responsible. There's way too much that
should've been done and wasn't to just blame this driver.

------
sharemywin
Is this a binary decision? I normally slow down when I see something at the
side of the road.

------
wilburTheDog
If I understand it right, this is the biggest blind spot of all the automated
driving systems. The software just ignores stationary objects because the
sensors don't have the resolution to determine whether they are in front of
the car or on the side of the road. And if it didn't ignore stationary objects
the car would come to a halt at every garbage can, mailbox, and bush along the
road. Until they can work lidar into the system this will not be preventable.
And, correct me if I'm wrong, but I think the vast majority of accidents that
were the fault of the self-driving car happened because it hit something
stationary in the road (or off to the side of the road, in one case I recall).

Since I've learned about this blind spot in these systems I no longer trust
them. I think everyone with a self-driving car should understand this. But
maybe it's not in the interest of the manufacturer to explain it, or people
would demand they fix the problem, which they cannot do just yet.

~~~
Symmetry
That wasn't the case at all for this collision. As the article says, the
sensors had detected the woman but the information had been disregarded. And
the moving-object thing is exclusive to radar; vision systems don't suffer
from it, though they have their own problems.

~~~
wilburTheDog
The article linked mentioned "B was the problem", meaning it ignored the
sensor input. Another article(1) about it mentions that Uber had reduced the
number of LIDAR sensors from five to just one rooftop sensor. The president of
the company that builds the sensors even said you would need side sensors to
see pedestrians, especially at night. But LIDAR is expensive. So it seems
that, like Tesla, they were trying to rely on radar for that kind of thing.
But that does leave the blind spot(2) I mentioned.

Given that, I don't see how this case is so different.

Though I still don't understand how, specifically, that blind spot problem
works. They say when the car in front of you changes lanes and a stopped truck
is now in your lane you'll hit it. What if no car changed lanes in front of
you? Would you still hit a stopped truck in your lane? The guy whose Tesla hit
the bridge column seems to tell me yes, but I'm not sure.

1\.
[https://www.theregister.co.uk/2018/03/28/uber_selfdriving_de...](https://www.theregister.co.uk/2018/03/28/uber_selfdriving_death_may_have_been_due_to_lidar_blind_spot/)

2\. [https://www.wired.com/story/tesla-autopilot-why-crash-radar/](https://www.wired.com/story/tesla-autopilot-why-crash-radar/)

~~~
Symmetry
I haven't worked with automotive radars but the way it works in aeronautical
radar is that you apply frequency binning in the electronics and ignore
returns that have the wrong Doppler shift at the waveform level, long before
step A in the article. If you didn't do it this way your computer would be
overwhelmed considering every tree and house you can see, when all you care
about is moving objects like airplanes.
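
A toy version of that binning (the carrier frequency and threshold are
illustrative, and real units do this in hardware at the waveform level, not
in Python):

    C = 3e8             # speed of light, m/s
    F0 = 77e9           # typical automotive radar carrier, Hz

    def moving_returns(returns, ego_speed, min_rel=0.5):
        # returns: (range_m, doppler_hz) pairs from one forward sweep.
        keep = []
        for rng, doppler in returns:
            closing = doppler * C / (2 * F0)   # m/s toward the car
            # Stationary clutter dead ahead closes at exactly our own
            # speed; bin it out before tracking ever sees it.
            if abs(closing - ego_speed) > min_rel:
                keep.append((rng, closing))
        return keep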

It's the same principle as the way your eyes only report things that move over
your retina as your eye moves, letting you filter out imperfections in the
lens of your eye and your blind spot.

Similarly, I assume that the computers in a car would be overwhelmed by the
task of taking the returns from every single post on a guardrail and sorting
them into distinct objects before deciding to ignore them due to relative
velocity so I'm fairly sure that a similar mechanism is at work within the
hardware of the radar unit in the Uber car.

So given that the woman showed up on Uber's sensors, she would have been
detected by the camera system or the lidar. And apparently she was detected.

~~~
wilburTheDog
Thanks for the explanation. I hadn't heard of frequency binning(1) before. It
makes sense that stationary objects would be filtered out before they even
reach the signal processing software.

But Uber did seem to be relying more heavily on radar than the other sensors.
From an article I linked in another comment "the number of LIDAR sensors were
reduced from five to just one – mounted on the roof – and in their place, the
number of radar sensors was increased from seven to 10. Uber also reduced the
number of cameras on the car from 20 to seven." And for LIDAR this "results in
a blind spot low to the ground all around the car."

They had to know radar wouldn't detect stationary objects. So the signal
processor should be prioritizing camera and LIDAR reports of anything
stationary in the road. If it really was programmed to ignore such a signal it
seems like gross malfeasance to me. If not, then the speculation in the linked
article is incorrect and it's the same problem Tesla's system has.

1\.
[https://www.eetimes.com/document.asp?doc_id=1278779](https://www.eetimes.com/document.asp?doc_id=1278779)

------
bsaul
I haven't read much about the observation that driving = communicating, aka:
whenever you drive, you send and receive information from the environment and
other humans, sometimes establishing what could be similar to a dialogue.

Doesn't this imply that autonomous driving won't be solved until we're able
to have "true" personal assistants?

The messages in driving are simpler than in a regular voice conversation,
but the objects and concepts visually present seem just as diverse.

------
NegativeLatency
Didn't that car also have a built-in automatic braking system that was
disabled?

~~~
jsight
I think so. But I'm not sure why that would be a bad thing. The Uber-built
system was designed to fully replace such systems.

~~~
leggomylibro
As long as Uber's system logged which automatic braking system was triggered
- or simply logged when its own braking system failed to step in - it could
have saved a life without impacting the data gathered.

They didn't put a tarp over the windshield; why get rid of other proven safety
features?

~~~
carlosdp
You don't generally want multiple autonomous control systems in a single
vehicle competing with each other for control. There are scenarios where the
less-sophisticated collision avoidance radar could do the unsafe thing where
the self-driving system has a more holistic plan.

Basically, there's no reason the self-driving tech shouldn't have been way
more effective than the collision avoidance radar other than Uber royally
screwed up. The LIDAR and other sensors should have (and by this article,
likely did) detected the woman long before the vehicle was in striking
distance, much longer range than the forward radar.

~~~
jameshart
That argument applies equally to human drivers competing for control of a
vehicle with automated braking systems. Are there not also scenarios where the
less-sophisticated collision avoidance radar could do the unsafe thing where
the human has a more holistic plan? If not, why not?

------
SilasX
Sorry for the snark, but there's a joke in here about:

"Uber business development saw but ignored laws it broke."

~~~
hughes
Snark aside, false-positive rejection will be an aspect of any lawful
autonomous vehicle.

Incorrect false-positive rejection will be a reality, and will have to exist
within lawful operation.

------
mtgx
> This puts the fault squarely on Uber’s doorstep, though there was never much
> reason to think it belonged anywhere else.

Are you kidding me? At least 80% of the comments I saw online the following
week were blaming the woman.

It seems to me like Uber's self-driving system is absolute trash. But they
were in a hurry to "go live" with it, because they _need it_ so that the
company can become profitable ASAP. And thanks to the deregulation bill from
last year (which was almost universally praised online), they could actually
do that, too.

~~~
ajross
It's still the biker's "fault" in some sense. That same video would have 100%
exonerated a human driver. There's no way that was a safe crossing. The fact
that there was a clear automation fault _also_ doesn't change the fact that
traffic safety is everyone's job, including but not exclusively the computers.

~~~
telchar
You mean the video that showed the safety driver staring at a cell phone for 6
full seconds right up until nearly the moment of impact? Maybe that would have
exonerated the driver, but only to prove how perverse our driving laws are.
But in my state the driver would have been convicted.

~~~
Rapzid
Do we know what she was looking at? I haven't read anything conclusive of
that.

What would the driver have been convicted of? Would you be willing to get
second opinions from prosecutors in your state as to whether or not they would
have pursued a conviction for that crime?

~~~
telchar
It was pretty clear in the video.

I myself was charged for a much lesser distraction a while back involving a
crash in which I was the only one at risk and for which the other driver
shared equal blame. If I had run down a pedestrian I have no doubt the charges
would have been severe.

------
erdo
I'm wondering how much more attentive the emergency driver would have been if
they were one of the developers. Sometimes I worry about how much trust non-
developers assign to software (in all sorts of areas).

------
chiefalchemist
"Saw" doesn't seem like the right word. Granted, apparently, the sensors
picked something up. What it failed to do is recognize and categorize; and
from there properly take action.

Given the failure of the recognition + categorization system, it would be
interesting to know what it thought it saw. Was it filed under IDK, a plastic
bag blowing in the wind, what?

------
jessaustin
ISTM that HN policy would replace TFA with the link cited in TFA:

[https://www.theinformation.com/articles/uber-finds-deadly-ac...](https://www.theinformation.com/articles/uber-finds-deadly-accident-likely-caused-by-software-set-to-ignore-objects-on-road)

...although, there is a paywall, so maybe not.

~~~
jessriedel
Not just any paywall either. It's $40/month.

------
dboreham
Um...if I as a human see a plastic bag on the road ahead, I slow down to be
able to carefully ascertain if it perhaps is a plastic bag full of nails. I
don't drive full tilt over it.

------
jiveturkey
I wonder how this info got out. I thought (learned this from Tesla) that
companies are not allowed to release info about accidents under NTSB
investigation, which this one is.

------
kalal
Sounds like the departments are fighting over responsibility for the
accident. Correct me if I am wrong.

------
m3kw9
What’s worse: that it saw her but misinterpreted what it saw, or that the
sensors had a blind spot and missed something?

------
jlebrech
Ethical dilemma: should a vehicle swerve to avoid an unavoidable pedestrian,
potentially killing more people, or should it ignore that person (not saying
that's what happened)?

Should an autonomous vehicle also prioritise its own passengers over the
safety of others, e.g. in a head-on collision with a people carrier?

------
agumonkey
It's dramatic, but it's also a good reminder that "modern", "recent" wonder
tech doesn't come free. It needs proper examination. This will surely push
toward more serious work.

------
dingo_bat
> This puts the fault squarely on Uber’s doorstep, though there was never much
> reason to think it belonged anywhere else.

> This is not good.

The state of journalism saddens me.

