
Velodyne Lidar Inc. baffled by Uber crash - neom
https://www.bloomberg.com/news/articles/2018-03-23/sensor-supplier-to-self-driving-uber-defends-tech-after-fatality
======
danvoell
“Certainly, our Lidar is capable of clearly imaging Elaine and her bicycle in
this situation. However, our Lidar doesn’t make the decision to put on the
brakes or get out of her way.” - Would be very interesting, in the name of
transparency, to see the log. Whatever malfunctioned has to be there.
Hopefully some sort of mechanism to prevent future crashes arises from this.
Instead of just a few lawsuits, payoffs and coverups

~~~
candiodari
There are two problems with this way of reasoning, and you can see both in the
aviation industry.

One, you're comparing this incident with an idealized human. Everybody thinks
pilots and drivers are magical beings that don't make mistakes. And then 90
people die because one engine catches fire, the pilot turns off the other
engine and then banks the aircraft to make a quick emergency landing;
obviously the plane loses lift immediately, drills itself into a bridge, and
takes a taxi with it into the river below. No survivors, almost a hundred dead
[1]. And I get it, the pilot needed to rapidly make a series of decisions
under ridiculous levels of stress, and that's the real cause. But when things
go wrong close to the ground at 150km/h in an object that weighs 50 tons, they
go wrong quickly, so you need to respond quickly. Needless to say, nothing
prevents recurrence. Quite simply, a plane will crash if you do this. There's
nothing that can technically be done to prevent it. As for cars, over 10,000
people die every year because humans can't be bothered to wait until they
sober up to drive [2]. Those are the human pilots that we should measure
against.

Frankly, I don't understand how humans are allowed to drive cars or fly planes
at all. We pass signals in our brain that cross from neuron to neuron in
about 10ms. That means that in a second, a signal can affect, at most, a ball
with a radius of 2cm in our brain. Spreading out over the whole brain requires
7-8 seconds minimum in theory, and in practice minutes is the more common
scenario. That means it's your spinal column that's driving the car/flying the
plane, and it gets updates "from upstairs" that are 2-5s old by the time they
reach the control loops. Our brain is very good at predicting events so it
doesn't look like that's the case, but it is.
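If those latency figures are anywhere near right, they translate directly into
distance covered before a deliberate correction can land. A back-of-envelope
sketch (the speed is taken from the aircraft example above; the latency value
is purely illustrative):

```python
# Distance covered while a perception signal is still "in flight"
# before a deliberate control response lands. The latency values
# here are illustrative, not measured figures.
def blind_distance_m(speed_kmh, latency_s):
    return speed_kmh / 3.6 * latency_s

# 150 km/h (the aircraft example) with a 2 s perception-to-action lag:
print(round(blind_distance_m(150, 2.0), 1))  # 83.3 metres before any correction
```

At those speeds even a fraction of that latency is dozens of metres, which is
the whole argument for reflexes living closer to the control loops.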

Compare self-driving or other autopilots against realistic humans, who make
these sorts of mistakes. An average autopilot needs to do better than a decent
human driver. It should not need to outperform a magical how-we-imagine-
ourselves perfect human driver.

That said, I do agree that we need some basic rules. Tesla's car did NOT stop
after crashing into a truck after a lidar mistake (and a serious mistake by
the truck driver that it was attempting to compensate for, I might add), but
not stopping, that's utterly unforgivable. Same here: the car was obviously
either driving with the lidar turned off or ignoring its output. That's like
a human driving with their eyes closed.

But certification must be functional. It can't be on the quality of individual
components. It can't be on code review. It must be functional. We should have
test tracks where autopilots get confronted with dozens of situations,
preferably combining 5-6 individual problems at the same time. And then it
needs to navigate it safely, under constant decision pressure. And then it
needs to end with 5 cars being sacrificed to test their reactions when
something heavy drops on them, when they drop off a cliff, when they get
mechanically blocked, and when catastrophic mechanical failure occurs ... (so
they don't put the pedal to the metal causing 10x the damage to others when
they do have an accident).

But it needs to be a functional, practical test. Not the madness we currently
have for aviation.

[1]
[https://www.youtube.com/watch?v=jKNREZ_u8E8&t=31s](https://www.youtube.com/watch?v=jKNREZ_u8E8&t=31s)

[2]
[https://www.cdc.gov/motorvehiclesafety/impaired_driving/impa...](https://www.cdc.gov/motorvehiclesafety/impaired_driving/impaired-
drv_factsheet.html)

~~~
_dps
> We pass signals in our brain, they can cross from neuron to neuron in about
> 10ms

Could you clarify this point for a non-biologist? I understand the neuron-to-
neuron transmission is not going to happen at the full electrical conductivity
rate (something like 100 m/s) but this seems so much slower as to be hard to
understand as a lay person.

~~~
candiodari
Simple: signals travel through the brain via a kalium-natrium
(potassium-sodium) cascade reaction (not even a real chemical reaction, just a
gradient change), and every time they hit a synapse it becomes a lot more
complex, involving a dozen plus neurotransmitters. (This is why a potassium
injection will kill you: it reverses the gradient for a long, long time,
meaning the nerves cannot fire during that time, which means your heart and
breathing (and everything else) stop. Incidentally, this also fires every pain
nerve in your body so it should be unbelievably painful, and survivors do
report that. Kalium is the Latin word for potassium.)

Electrical conductivity is barely used at all. It is used in the processing of
the resulting signal, but not in transmitting it. Even that part is very
different from a current on a wire or through a transistor.

This video describes the 99.99% part well (99.99% in terms of distance, which
is the axons): [https://www.youtube.com/watch?v=C_H-
ONQFjpQ](https://www.youtube.com/watch?v=C_H-ONQFjpQ)

~~~
_dps
Thanks, very interesting!

------
aecs99
I lean towards what Velodyne is saying in this situation. I have been working
with LiDAR systems for over 4 years of which the last 1.5 years have been
towards building autonomous driving vehicles. When I saw the videos, I was
truly baffled by how a LiDAR can miss that. I worked with different types of
LiDARs (from different manufacturers) and there is a very high chance that the
LiDAR point cloud contains all the information corresponding to the person and
the bicycle to make a decision.

What we need to keep in mind is that sensing an object is different from
deciding whether or not to take an action (e.g., hitting brakes, raising
alarms, swerving, etc.).

Most LiDAR/RADAR/Camera manufacturers only provide input data. It's like
saying "hey, I see this". It's up to the perception software to decide whether
or not to make a decision.

In most cars, relatively simpler decisions are made by the car's perception
software (e.g., adaptive cruise control, lane change warning, automatic
braking, etc.).

Self-driving companies override such systems, and rewire the car such that it
is their perception software that makes the decision. So the onus is
completely on the self-driving company's software. In this case, it is the
perception software developed by Uber to be critiqued - not Velodyne, not
Volvo, not the camera manufacturer.

It looks like the engineers at Velodyne feel confident that they should (and
would have) sensed the person, and hence their statement. I wouldn't doubt
them much as they have been in the LiDAR game since DARPA days when self
driving was considered experimental.

From a different angle, Velodyne may not have much to lose by throwing Uber
under the bus - especially when compared to how much of their reputation is at
stake. This is because Velodyne has several big customers (e.g., Waymo, and
almost every other self-driving or mapping company that is serious about
getting big).

NTSB should and will get access to the point clouds. Uber has a choice of
releasing the point clouds to the public - but I highly doubt they will.

~~~
codedokode
If you worked with LIDARs, maybe you know how much noise they give in the
output? Couldn't it be that Uber's software filtered the pedestrian out as
noise, for example because there was no matching object on the camera or
because reflections from the bike looked like random noise?

~~~
mturmon
Both effects you mention (sensor fusion problem between camera/lidar; spotty
lidar reflections from bike) are possible.

These problems probably _should not_ have prevented detecting this obstacle,
though. A lot depends on factors like the range of the pedestrian/bike, the
particular Velodyne unit used, and the mode it was used in.

One key thing is that lidar reflections off the bike would have been spotty,
but lidar off the pedestrian's body should have been pretty good. That's a
perhaps 50-cm wide solid object, which is pretty large by these standards. But
the number of lidar "footprints" on the target depends on range.

You'd have to estimate the range of the target (15m?) and compute the angle
subtended by the target (0.5m/15m ~= 0.03 radian ~= 2 degrees), and then
compare this to the angular resolution of the Velodyne unit to get a number of
footprints-on-target.

Perhaps a half dozen, across a couple of left-to-right scan lines. Again,
depending on the scan pattern of the particular Velodyne unit in use. The unit
should make more than one pass in the time it took to intersect the
pedestrian.
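The back-of-envelope estimate above can be written out as a few lines of
arithmetic. The 0.2° azimuth resolution used here is a placeholder for
illustration, not the spec of any particular Velodyne unit:

```python
import math

def footprints_on_target(target_width_m, range_m, az_res_deg):
    """Rough count of lidar returns per scan line across a target
    of the given width at the given range, for an assumed azimuth
    resolution (small-angle approximation)."""
    subtended_deg = math.degrees(target_width_m / range_m)
    return subtended_deg / az_res_deg

# ~0.5 m wide pedestrian at ~15 m, with an assumed 0.2 deg resolution:
print(round(footprints_on_target(0.5, 15.0, 0.2), 1))  # 9.5 returns per line
```

Multiply by however many scan lines intersect the body and you get the
handful-of-footprints figure discussed above.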

This should be enough to detect something, if the world-modeling and decision-
making software was operating correctly, hence the puzzlement.

------
adamson
This seems exceptional for Velodyne to come out with a statement like this
directly from a spokesperson. I would expect a supplier to be more restrained
before throwing a large (and growing) customer under the bus like this.

~~~
Roritharr
Well, the question of "why didn't the lidar see the pedestrian?" was on
everyone's mind in the industry, so openly going out and declaring "it surely
must have seen it, the lidars are fine" is an attempt at reassuring everyone
who is now questioning whether lidars are reliable enough.

This is not exactly throwing Uber under the bus, as Uber themselves have an
interest in being able to tell that story later on: "Our analysis concluded
that our algorithms didn't put enough weight on the data coming from the
lidar, which worked as intended and should have been weighted higher in these
specific circumstances; we will adjust our efforts accordingly and will donate
$largeSum to CarsAgainstHumanity to bribe everyone into forgetting how badly
we fucked up."

~~~
gowld
Why is it OK for the LIDAR company to make a blanket statement of innocence
without proof, but not OK for Uber to do the same?

~~~
lisper
But Velodyne has proof -- gobs of it -- that their product is "capable" of
seeing the pedestrian in the dark. It's possible that this particular unit
malfunctioned, but for this to be Velodyne's fault and not Uber's it would
have had to malfunction in such a way that it gave the appearance of operating
normally. That is extremely unlikely.

~~~
dmix
> She said that lidar has no problems seeing in the dark. “However, it is up
> to the rest of the system to interpret and use the data to make decisions.
> We do not know how the Uber system of decision-making works,” she added.

> “In addition to Lidar, autonomous systems typically have several sensors,
> including camera and radar to make decisions,” she wrote. “We don’t know
> what sensors were on the Uber car that evening, if they were working, or how
> they were being used.”

There's still a ton of variables here besides whether or not the Lidar
detected the pedestrian, including the other sensors, how the software works,
etc. All things outside the scope of Velodyne's knowledge.

Velodyne is not saying that they are certain the car should or could have
stopped in time or avoided the crash. Their statement is merely about the
ability of the Lidar to detect the person.

Not to mention we don't even know whether the Lidar malfunctioned yet
either...

~~~
lisper
Go back and re-read the GP, which was trying to draw some sort of moral
equivalence between Uber and Velodyne:

"Why is it OK for the LIDAR company to make a blanket statement of innocence
without proof, but not OK for Uber to do the same?"

This is a disingenuous question. It assumes facts not in evidence, to use the
legal aphorism. Velodyne did not "make a blanket statement of innocence
without proof". It made a very narrow and defensible claim, namely, that its
product, when working properly under the conditions at the time, should have
been able to detect the pedestrian. It is obviously true that there are "a ton
of other variables" but that is a red herring with respect to the original
question.

~~~
dmix
> Velodyne did not "make a blanket statement of innocence without proof".

I agree, if anything I supported this statement with my comment.

The difference is that, regardless of the narrowness of their claim, it will
have a broader impact on how people judge Uber. Nor do we even know if the
Lidar was functioning properly, which is an assumption Velodyne made in
issuing their claim.

We simply need more evidence before we can fully judge Uber. And before we can
give Velodyne a complete pass in terms of the functionality of their Lidar.

------
notananthem
Can I ask a dumb non-engineer question? Backup cameras in areas with basically
any road debris/weather get covered with dirt. I'm in the Seattle area now,
and I often lick my thumb and wipe off the camera, because it's a vision system
I rely on so I don't kill people.

Could lidar/cameras/etc on the vehicle be obscured by road debris or worse,
things being bumped/moved, smudged, or even foul play?

~~~
hnaccy
You would have Lidar wipers[1], and the software should hopefully be able to
detect an obscured or failing sensor and alert the driver or stop.

[1] [https://i.imgur.com/2QTHjij.gifv](https://i.imgur.com/2QTHjij.gifv)

~~~
aprao
That's an informative but unsatisfying gif. What if the residue on the top
leaks down?

~~~
sand500
the wipers spin again?

------
whoisjuan
Of course the Lidar wasn't at fault. That's like blaming your laptop for the
bugs of your own shitty code.

~~~
cmpolis
Hardware is not faultless.

The analogy doesn't stand up - there's a non-trivial chance that a memory
module or the hard drive in your laptop will fail over the lifetime of the
device.

Furthermore, a LIDAR unit is complex and has firmware and lower-level software
embedded that may be at fault.

~~~
kuber_harbinger
Yes, I've even seen this first hand, where the laptop was in fact the source
of the issue. Not that I'm taking Uber's side at all; just saying it is a
possibility and we should wait for official evidence of the fault before
jumping to conclusions.

------
peterwwillis
I haven't been following this closely, but how was the car able to go over the
speed limit? If a speed governor was turned off, or was able to be overridden,
isn't it possible that multiple systems in the car were turned off, such as
systems that regulate gas and brakes?

~~~
teraflop
From what I remember reading, the car was going 40mph in a 45mph zone.

EDIT: [https://www.nytimes.com/interactive/2018/03/20/us/self-
drivi...](https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-
pedestrian-killed.html)

~~~
tobltobs
Depending on the source, the speed limit in the area was 35 mph.

[https://www.azcentral.com/story/news/local/tempe-
breaking/20...](https://www.azcentral.com/story/news/local/tempe-
breaking/2018/03/21/video-shows-moments-before-fatal-uber-crash-
tempe/447648002/)

~~~
tzs
The speed limit on that road, in the area of the crash, is 45 mph in the
northbound lanes, and 35 mph in the southbound lanes. The crash occurred in
the northbound lanes.

------
candiodari
If you look at how lidars work, this does indeed make no sense. Lidars will
see vertical poles that stick up high, like pedestrians. That's exactly what
they're good at. For a lidar with even bad resolution to have missed this
pedestrian ... it's technically not impossible, but it'd be a one-in-a-trillion
streak of extremely bad luck. It seems much, much more likely that the car was
somehow ignoring the output of the lidar.

Lidars see everything in a specific plane that originates at their sensor.
This situation is exactly what lidars are made for ...

Remember the Tesla accident? A lidar was scanning planes in front of the
Tesla and there was a large truck in front of it. A truck hangs low at the
front and at the back, and the Tesla autopilot saw both of them. Presumably
because it was using lidar, it decided that the front of the truck was a car
and that the rear of the truck was a car, and then when the truck changed
direction it compensated by doing a high-speed maneuver directing the car
between the front and the rear wheels of the truck. Needless to say, results
were less than optimal (and then came the unforgivable part: after the crash
the autopilot was still in control, but it did NOT stop until it was
mechanically blocked from going on). That's the sort of mistake you'd expect a
lidar to make: it misses objects that are very close to the ground, or "far"
off the ground. It sees things starting at about 50cm high up to 1.3m or so
(this also depends on the distance to the sensor: the closer to the sensor,
the narrower the range), no more. That's the weakness of lidars.

That unfortunately means what they do miss is ... well, let's put it this way:
you can't mount it low in the car, because at that height it'll think large
pebbles are telephone poles (plus mud will splatter up and block the sensor).
So you don't do that. You can mount it high, but that means those detection
planes don't get very close to the ground. And that means what it'll miss is
anything that's close to the ground. Dogs. Children. Parking poles. Stairs (or
any kind of abyss). That's where you'd expect mistakes.
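The height band described above falls out of simple geometry: a beam leaving
the sensor at elevation angle θ passes height h + d·tan(θ) at horizontal
distance d. A sketch with a made-up mounting height and beam angles (these are
not any real unit's spec):

```python
import math

def beam_height_at(d_m, sensor_h_m, elev_deg):
    """Height above ground where a beam at the given elevation angle
    passes, at horizontal distance d. Mounting height and angles are
    illustrative, not taken from a datasheet."""
    return sensor_h_m + d_m * math.tan(math.radians(elev_deg))

# Roof-mounted sensor at 1.8 m, beams spanning -15 to +2 degrees:
for d in (15.0, 5.0):
    lo = max(0.0, beam_height_at(d, 1.8, -15.0))  # lowest beam, clipped at ground
    hi = beam_height_at(d, 1.8, 2.0)              # highest beam
    print(d, round(lo, 2), round(hi, 2))          # 15.0 0.0 2.32 / 5.0 0.46 1.97
```

Far away the lowest beam has already reached the ground, but close to the car
it passes well above small obstacles, which is exactly the near-ground blind
spot described above.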

An adult crossing the street ... there's no way.

------
FabHK
After the Tesla fatal accident, there was some discussion about the difficulty
of stationary obstacles. Apparently, there are so many false positives that
they're quite readily discarded (otherwise the car would stop all the time).

See discussion here, "Why Tesla's autopilot can't see a stopped firetruck":
[https://news.ycombinator.com/item?id=16239010](https://news.ycombinator.com/item?id=16239010)

As the victim was traveling perpendicular to the movement of the vehicle, I
wonder whether that had anything to do with it. If so, quite a severe
limitation.

------
oh_sigh
Someone in a previous thread mentioned that Uber had disabled LiDAR input
because they were testing visual-light-only navigation. Was that just
speculation/rumor?

~~~
tjoff
You don't have to disable it to test that, just run two systems in parallel
and detect large deviations.

I refuse to believe that they would just disable it on public roads to see if
it could manage.
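Running the experimental stack in shadow mode, as the parent suggests, is
straightforward in outline. A minimal sketch (the object representation and
the 1 m match radius are hypothetical) that flags frames where a camera-only
pipeline misses what the production pipeline reports:

```python
def detections_disagree(primary_objs, shadow_objs, max_missed=0):
    """Flag a frame when the shadow (e.g. camera-only) pipeline misses
    objects that the primary (e.g. lidar-fused) pipeline reports.
    Objects are (x, y) positions in the vehicle frame; a shadow
    detection within 1 m of a primary one counts as a match."""
    def matched(obj):
        return any((obj[0] - s[0]) ** 2 + (obj[1] - s[1]) ** 2 <= 1.0
                   for s in shadow_objs)
    missed = [o for o in primary_objs if not matched(o)]
    return len(missed) > max_missed

# Primary sees a pedestrian at (12, -1); shadow sees nothing -> flag it.
print(detections_disagree([(12.0, -1.0)], []))  # True
```

The point is that the candidate system never drives the car; its disagreements
are just logged for offline analysis.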

~~~
gmueckl
I hate to say it, but never underestimate the human potential for stupidity.
To make matters worse, this is amplified in organizations. But in this
specific case, it's all speculation until the data gets analyzed.

------
rtfs
This is quite interesting. Two days ago the mainstream media here in Germany
said that the accident would have happened anyway, regardless of the self-
driving technology built into the vehicle.

Looking at the video, the quality of the current self-driving technology is
really questionable, especially if I also recall the other non-fatal road
traffic offences publicized so far.

~~~
cjbprime
That claim by the media sounds obviously untrue.

And alongside being untrue, it ignores the differing severity of collisions.
Even if the car had to use its normal video camera instead of Lidar for some
reason, the second or two of braking that the camera would provide can easily
-- and likely would -- turn a fatal collision into a non-fatal one.

------
saurabp
So if the LIDAR did not fail, it was probably the neural network that takes in
LIDAR data and makes the decision to brake. Would love to see the NTSB release
the data for us to analyze.

~~~
jefft255
Just to be clear, neural networks typically aren't used to work with lidar in
self-driving cars. NNs for point clouds are still at the research stage.

~~~
adrianmonk
Does this mean this is just a research project that Uber doesn't actually use
in their cars yet?

From [https://eng.uber.com/sbnet/](https://eng.uber.com/sbnet/)

 _" By applying convolutional neural networks (CNNs) and other deep learning
techniques, researchers at Uber ATG Toronto are committed to developing
technologies that power safer and more reliable transportation solutions."_

 _" CNNs are widely used for analyzing visual imagery and data from LiDAR
sensors. In autonomous driving, CNNs allow self-driving vehicles to see other
cars and pedestrians"_

~~~
jefft255
There are many projects using deep learning with lidar. Google PointNet,
PointNet++, also
[https://www.youtube.com/watch?v=UXHX9kFGXfg](https://www.youtube.com/watch?v=UXHX9kFGXfg)
. These are all much newer than 2D CNNs, and I don't know if they work well
enough to actually be used in SDCs. Also, using CNNs on point clouds comes
with all sorts of problems.

------
colordrops
To be fair it probably wasn't caused by a lidar failure.

~~~
lima
Even if the Lidar did fail, the software would need to handle it gracefully.
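Handling it gracefully usually means a watchdog: if lidar frames stop
arriving, or arrive stale, the planner should degrade (slow down, alert the
safety driver, pull over) rather than silently continue. A toy sketch, with an
entirely made-up staleness threshold:

```python
import time

class SensorWatchdog:
    """Track when a sensor last produced a frame and report it
    unhealthy once frames are older than max_age_s. The 0.3 s
    threshold is illustrative, not a real requirement."""
    def __init__(self, max_age_s=0.3):
        self.max_age_s = max_age_s
        self.last_frame_t = None

    def on_frame(self, t=None):
        self.last_frame_t = time.monotonic() if t is None else t

    def healthy(self, now=None):
        if self.last_frame_t is None:
            return False
        now = time.monotonic() if now is None else now
        return (now - self.last_frame_t) <= self.max_age_s

wd = SensorWatchdog()
wd.on_frame(t=10.0)
print(wd.healthy(now=10.1), wd.healthy(now=11.0))  # True False
```

A real system would tie this into the planner so that a stale lidar triggers a
controlled stop rather than a drive-on-blind.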

------
riffic
[https://usa.streetsblog.org/2016/04/04/associated-press-
caut...](https://usa.streetsblog.org/2016/04/04/associated-press-cautions-
journalists-that-crashes-arent-always-accidents/)

------
Piskvorrr
Aaand here comes the CYA storm, with everyone declaring they had absolutely
nothing to do with the crash.

~~~
Declanomous
I mean, it is pretty baffling that people are defending the self-driving car
when it has LIDAR.

~~~
mateuszf
Really? Isn't LIDAR just some input device, whose data is interpreted in
software?

~~~
AstralStorm
A pretty complex device too. I'm not sure about this specific one, but LIDARs
tend to operate in a scan fashion with a refresh rate. This means the data can
come out chopped up and noisy in the time dimension, which requires
post-processing.
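One concrete consequence of that scan-style operation: points in a single
"frame" are not captured at the same instant. At a 10 Hz refresh, returns on
opposite sides of a sweep are up to 100 ms apart, during which the car itself
has moved, so the cloud has to be motion-compensated. A rough sketch of the
forward-axis correction, assuming straight-line constant-speed motion (real
pipelines use the full ego pose):

```python
def deskew_x(point_x, point_t, frame_end_t, ego_speed_mps):
    """Shift a point's forward coordinate to where it would appear
    had the whole sweep been captured at frame_end_t. Assumes the
    vehicle drives straight at constant speed."""
    dt = frame_end_t - point_t           # how stale this return is
    return point_x - ego_speed_mps * dt  # the car closed this much distance

# At 17 m/s (~60 km/h), a return captured 0.1 s early is ~1.7 m off:
print(round(deskew_x(20.0, 0.0, 0.1, 17.0), 2))  # 18.3
```

Without this correction a moving platform smears every obstacle along the
direction of travel.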

~~~
jessaustin
Can we agree that robocars shouldn't drive faster than that processing can
occur?

~~~
Piskvorrr
Nobody should; this is one of the basic tenets of highway codes everywhere.

------
908087
Velodyne clearly have no clue what they're talking about. They should consult
with the "futurologists" on reddit who only needed 5 seconds and a video to
conclude that "it was too dark out".

~~~
nugi
Forgot your (/s) tag buddy.

~~~
908087
Was kind of hoping it was obvious enough that it didn't need one.

------
haZard_OS
From my perspective, the first question should be, "Given the same
circumstances, would a human driver have performed better?".

After viewing the video of the incident, I strongly believe that a human
driver would NOT have done better.

Therefore, I believe that any attempt to blame or spread mistrust in these
technologies because of this incident is (at best) misguided and (at worst)
alarmist.

~~~
InclinedPlane
Nighttime dashcam videos typically do a very poor job of representing what a
scene looks like to the human eye. Here is a phone camera video, taken at
night, of the same stretch of road where the accident took place:
[https://youtu.be/1XOVxSCG8u0?t=26](https://youtu.be/1XOVxSCG8u0?t=26)

From this you can see how good the visibility is. We know that the visibility
on this stretch of road was pretty good for nighttime driving. We know that
the pedestrian that got hit had crossed an entire (empty) lane of traffic
before entering the Uber vehicle's lane. I would say that any competent driver
who was paying attention and who was driving a car with working headlights (or
perhaps even without) would have spotted the pedestrian well in advance and
been able to avoid the collision fairly easily.

The fact that the Uber vehicle did not do so, despite having an abundance of
opportunity and despite having not only visible-light data but also lidar and
radar, is almost certainly a massive failure on its part. Most likely it comes
down to a failure to integrate sensor data properly or to categorize the
pedestrian correctly.

I have no faith that the local PD will do a good job here, but I do have faith
that the NTSB will be exquisitely thorough, and I would bet hard-earned
dollars that they are going to tear Uber's self-driving technology up one side
and down the other.

