
Why collision avoidance is harder for an AI-based system - IndrekR
https://medium.com/@rebane/could-ai-have-saved-the-cyclist-had-i-programmed-the-uber-car-6e899067fefe
======
hangonhn
Hang on. That argument makes no sense. We've had subsumption architecture for
a long time now
([https://en.wikipedia.org/wiki/Subsumption_architecture](https://en.wikipedia.org/wiki/Subsumption_architecture)).
Subsumption architecture puts some of the intelligence in the lower-level
systems. The higher-level controls can tell the lower-level systems what they
want, but they can't do things that the lower system determines are dangerous.

So if a normal Mercedes has a collision avoidance system that automatically
brakes, there is no reason why an AI-based system can't be built on top of it,
with the collision avoidance system braking automatically, without
intervention from the higher AI systems. A subsumption system prevents the
higher-level controls from doing something catastrophic, like hitting a
pedestrian or, in biology, a person holding their breath until they die.
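
A minimal sketch of the idea, in case it isn't familiar (all names and
thresholds here are made up, nothing like a real vehicle stack): the
deterministic low-level safety layer always gets the last word on the
actuator command.

    # Toy subsumption sketch: the low-level layer can override
    # (subsume) whatever the high-level planner asks for.
    def safety_layer(planner_cmd, range_to_obstacle_m, speed_mps):
        stopping_m = speed_mps ** 2 / (2 * 7.0)  # assume ~7 m/s^2 braking
        if range_to_obstacle_m < stopping_m:
            return {"throttle": 0.0, "brake": 1.0}  # subsume: emergency stop
        return planner_cmd                          # otherwise defer upward

    cmd = safety_layer({"throttle": 0.4, "brake": 0.0},
                       range_to_obstacle_m=12.0, speed_mps=17.0)  # full brake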

~~~
pj_mukh
Very true. Which is why I'm still waiting to hear what happened to all the
lower-level safety systems, presumably driven by lighting-invariant LIDAR or
RADAR, in the Uber situation.

The article kind of mentions: "cyclist detection from 3D point cloud is much
harder task than cyclist detection from an image"

The system didn't need to know that it was a "cyclist". There was something
there, in its path. Stop.

~~~
gambiting
In fact, the XC90 already comes from the factory with a radar-based emergency
stop system; that alone would have been sufficient to prevent the collision
or vastly reduce the impact. But of course, it looks like that system was
disabled/overridden by Uber's autonomous solution.

------
Animats
This is looking at it all wrong. Watch Chris Urmson's video from SXSW on how
Waymo does it. I've mentioned that before. You build a map of what's around
the vehicle. Then you try to tag some obstacles to help predict their
behavior. But the obstacle detection is geometric, and does not depend on the
tagging, which uses machine learning. If there's an obstacle, the system
doesn't hit it, even if it has no clue how to identify it.
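
Crudely, the ordering matters. A sketch of that priority (not Waymo's actual
code; the braking figure is an assumption): geometry decides whether
something is there, and the ML label never gates the avoidance decision.

    # Sketch: brake for any geometric return inside stopping distance,
    # whether or not the classifier managed to label it.
    def must_brake(obstacles, speed_mps):
        # obstacles: list of (range_m, label_or_None)
        stopping_m = speed_mps ** 2 / (2 * 7.0)  # assume ~7 m/s^2 braking
        return any(range_m < stopping_m for range_m, _label in obstacles)

    print(must_brake([(35.0, "car"), (14.0, None)], speed_mps=17.0))  # True

The unlabeled return at 14 m still triggers braking; the tag only ever adds
information, it never subtracts an obstacle.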

Tesla/Mobileye managed to get that backwards. Their original system was
"recognize vehicle visually, compute distance and closing rate to vehicle". If
it didn't recognize an obstacle as a vehicle, it ignored it. We know this for
sure, because you can buy a Mobileye unit as a dashcam-like warning device and
many people have seen how they work. That led to three collisions with
vehicles partly blocking the left side of a lane: one death from ramming a
street sweeper, one collision with a stopped fire truck, and one sideswipe.
The NTSB is investigating the fire truck collision.

The NTSB is now investigating the Uber collision.[1] As they usually do, the
first thing they did was to get control of the wreckage.[2] Uber does not have
control of the investigation. The NTSB investigators are working this like an
air crash. They are "beginning collection of any and all electronic data
stored on the test vehicle or transmitted to Uber". As usual, they haven't
announced much, but they have mentioned that the video seen publicly is from a
third-party dashcam, not the vehicle sensors.

[1] https://www.ntsb.gov/news/press-releases/Pages/NR20180320.aspx

[2] http://wsau.com/news/articles/2018/mar/21/arizona-police-release-video-of-fatal-collision-with-uber-self-driving-suv/

~~~
philwelch
Devil's advocate here, but what if a supposed “obstacle” is a plastic bag or
a jet of steam coming out of a sewer grate?

~~~
dreamcompiler
Then you stop. For cars, false positives (braking for a harmless object) are
much less dangerous than false negatives (failing to brake for a real
obstacle).

~~~
philwelch
Unless it’s icy out, you have to swerve, or you’re being tailgated...but yes,
stopping (or at least attempting to stop) is the best decision from a
liability perspective.

~~~
taneq
Well that's when you're grateful that along with your forward collision
prevention, your car's ABS and electronic stability control systems are also
still enabled.

As for tailgating, this seems to be a problem with U.S. attitudes, not vehicle
mechanics. Stop making it acceptable to tailgate! If you're close enough to
the vehicle in front of you that any significant braking on its part will
cause you to hit it _you are too close_ and it's _your fault if you hit it_.

~~~
gonvaled
Isn't that the case in the US? In the EU, you can be fined for driving too
close to the car in front of you.

I think that following too closely is the number one cause of accidents, and
much more dangerous than driving fast.

~~~
pwg
Legally, in most (if not all) of the US you /can/ be cited for tailgating
(the laws require a minimum following distance).

The reality is that traffic cops seldom cite for tailgating in general. If
one were to see such a citation, it is likely after an accident where the
officer could deduce that the cause was "following too closely" and so
issued the citation.

~~~
taneq
In Australia, if you are the rear vehicle in a rear-end accident you are
automatically at fault in almost all circumstances. The only exceptions I know
of are if the front car pulled out immediately in front of the rear car, or if
the front car was reversing.

------
icc97
> Also, lets back up our AI’s with old school collision avoidance!
> Intelligence is not the same as perfection, at least for now.

The car was travelling at 38 mph and never braked. Even if the collision
avoidance system had only seen her at the last moment, it would still have
braked and potentially slowed the car enough that the woman was injured
instead of killed.
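
Rough numbers, assuming ~7 m/s^2 of hard braking on dry asphalt (an
assumption on my part, not a figure from the investigation):

    # Back-of-envelope: impact speed if braking starts d metres out.
    v0 = 38 * 0.44704          # 38 mph ~= 17.0 m/s
    a = 7.0                    # assumed hard-braking deceleration, m/s^2
    d = 15.0                   # assumed distance left when braking starts
    v_impact = max(0.0, v0**2 - 2 * a * d) ** 0.5
    print(v_impact / 0.44704)  # ~19.8 mph instead of 38 mph

Pedestrian fatality risk drops steeply with impact speed, so even last-moment
braking matters.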

I'm all for self-driving and fully believe it can improve on humans, but I
don't see how self-driving cars can be allowed on the road if they can't
reliably detect the most vulnerable road users in all conditions.

~~~
jvanderbot
This is absolutely a case where an autonomous system should have outperformed
a human driver. Lasers see through darkness very well. A camera-only system
is insufficient.

~~~
daveguy
Not to mention it wasn't even very dark on that road. If you look at other
video and images, that stretch is very well lit. The dashcam video was poorly
calibrated/selected (but probably not used in the AI's decision process).

------
blensor
If the technology were as nondeterministic as this post makes it seem, no
self-driving car should be allowed on the road, even under the best
conditions.

I fully agree that an AI can be fooled, but that is high-level logic (path
planning); the system should be designed with a fallback that does emergency
braking if all else fails.

There is simply a point where the high-level AI does not matter any more:
when I (the car) am moving at 45 mph towards an obstacle in the middle of the
road, less than 2 meters from my projected path. That does not mean a full
brake is immediately required, but the speed definitely needs to be reduced
to account for the uncertainty, and once it is determined that missing the
obstacle is physically impossible, the system must brake fully to reduce the
impact velocity as much as possible.
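
Something like this toy fallback is what I mean (the thresholds are made up):

    # Toy fallback, independent of the planner: shed speed under
    # uncertainty, commit to a full stop once a miss is impossible.
    def fallback_brake(speed_mps, obstacle_dist_m, lateral_offset_m):
        a_max = 7.0                           # assumed max braking, m/s^2
        stop_dist_m = speed_mps ** 2 / (2 * a_max)
        if lateral_offset_m > 2.0:            # clearly off our path
            return 0.0
        if obstacle_dist_m <= stop_dist_m:    # can no longer miss it
            return 1.0                        # full brake
        return 0.3                            # uncertain: slow down early

    print(fallback_brake(20.1, 25.0, 0.5))    # 45 mph ~= 20.1 m/s -> 1.0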

It's fine if the LIDAR data is fed into a machine learning algorithm, and you
will probably get less than the 10-20 Hz the scanner can produce, but the
same data should probably also feed a much simpler obstacle-tracking
algorithm that can run at near real-time speeds.

~~~
dclowd9901
Right; if we want to compare it to ourselves, think of the various minds that
control our impulses. We can hold a knife over our finger with our high-level
cognitive mind, but (hopefully) our low-level brain will prevent us from
bringing it down and chopping the finger off, even if we _really really_
wanted to.

I know that's sort of a weird example, but I think it's really illustrative
of a dual/multiple-mind scenario playing out in our own understanding.

~~~
blensor
Or a reflex.

We act on an input at a lower cognitive level first, before a higher-level
function even has the chance to intervene.

If someone throws a ball at my face, my body hopefully reacts before my
higher-level functions have had the chance to evaluate whether the ball will
really hit my face, and whether I might look silly waving my hands in the air
when no actual danger exists. Because it actually IS on a trajectory to my
face, the benefit far outweighs the risk of looking silly.

------
jdmichal
> AI is not preprogrammed to monitor a known input from a sensor to take a
> predefined action.

I guess one of my outstanding questions, which reading this only reinforced,
is: _why_ is this the case? I mean, humans are pretty good examples of
intelligence, and yet we still have and use these anti-collision systems,
because, in the end, when wrong decisions are made, these systems save lives.

Why would AI-driven vehicles _not_ have dedicated, single-purpose subsystems
such as anti-collision? I mean, are we going to also remove ABS because the
AI could learn to modulate the brakes itself? How much are we going to push
into AI when the purpose-built systems are both functional and effective?

~~~
SlowRobotAhead
>I mean, are we going to also remove ABS, because the AI could learn to
modulate the brakes itself?

This is the topic that I feel SDC enthusiasts forget: not everything HAS TO
BE AI. We could make incremental steps towards SDCs instead of fantastic
leaps that get people killed and just get government involved where it
doesn't need to be yet.

We could absolutely replace the ABS/ESP/traction control systems in current
vehicles with a machine learning / deep learning system; that's not sexy
though!

No one wants increments, they want whole self-driving cars right now, and
while I have opinions about that, there is no doubt it's driving (pun
intended) the industry.

Ideally, at first, we'd see components in consumer vehicles, and completely
automated long-haul trucks on fixed A-to-B routes. But like I said, not sexy.

~~~
couchand
> Ideally, at first, we'd see components in consumer vehicles

We do, though. Look at the adaptive cruise control or auto-park on a vehicle
produced in the last few years. The totally autonomous car may make
headlines, but these sorts of features are what will really make the
technology ubiquitous.

~~~
SlowRobotAhead
I work on ECUs like this. High-end stuff is doing computer vision, but 99% of
what is on the road is pre-calibrated motor/solenoid control, and that's all.

Even adaptive cruise and stereo-camera object “detection” are pretty simple
systems. Almost nothing out there is doing even pieces of what the
whole-package SDCs are.

------
dreamcompiler
I am freaking sick of this notion that AI==ML. AI is a much bigger field than
neural nets. AI can be _programmed_ with rules, with logic, with symbols, with
subsumption, and with a thousand other things that are both deterministic and
don't require huge training sets.
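
For anyone who has only seen neural nets, a deterministic rule-based
controller is still "AI" in the classic sense. Toy example, obviously:

    # Classic rule-based (GOFAI-style) control: deterministic, no
    # training set required. First matching rule wins.
    RULES = [
        (lambda s: s["obstacle_m"] < 10, "BRAKE"),
        (lambda s: s["obstacle_m"] < 30, "SLOW"),
        (lambda s: True,                 "CRUISE"),
    ]

    def decide(state):
        return next(action for cond, action in RULES if cond(state))

    print(decide({"obstacle_m": 22}))   # SLOW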

If you're making a living doing AI, you damn well should know this.

~~~
akvadrako
If you define AI like that, then basically every decision-making program,
i.e. anything with branches, is AI.

~~~
8note
How else are you going to count Pac-Man's ghosts as using AI?

~~~
akvadrako
You shouldn't because those ghosts are stupid.

If intelligence just means "can make decisions based on inputs" then
everything is intelligent. That's a useless definition.

------
natch
I'd like to see a law that says: to obtain certification to operate a self-
driving car on a roadway, an organization must agree to:

1) Conform to a standard set of protocols for how sensors provide data to a
self-driving software system.

2) Log data in a form that could be submitted to any conforming self-driving
software system, to obtain results from that system reporting what the system
would do given these inputs.

With this in place, it would be easy to do after-the-fact comparisons of data
leading up to incidents, and learn from the differences in results between
systems.

It could be taken a step further if the car makers would also share data on
near misses, which could uncover cases where other car makers' systems did not
handle the situation as well.

Even if the sensors are different, I suspect some good mileage could be
gotten out of this. The fact that learning opportunities are imperfect is not
always a good reason to pass them up.
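
To make (2) concrete, here's a hypothetical shape for a conforming log record
and a replay harness; every field name here is invented, not from any real
standard:

    # Hypothetical conforming log record (all names made up).
    record = {
        "t_unix_ns": 1521500000000000000,
        "pose": {"lat": 33.4367, "lon": -111.9431, "heading_deg": 184.0},
        "speed_mps": 17.0,
        "sensors": {
            "lidar": {"frame_id": 88123, "points_uri": "frames/88123.pcd"},
            "radar": [{"range_m": 41.2, "azimuth_deg": -2.1, "v_mps": -1.3}],
        },
        "commands": {"throttle": 0.22, "brake": 0.0, "steer_deg": 0.4},
    }

    def replay(decide, records):
        # Feed logged inputs to any conforming system's decide()
        # function and collect what it *would* have done.
        return [decide(r) for r in records]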

~~~
lighthazard
Perhaps some sort of logging standard that is enforced across all systems? A
black box of some sort?

~~~
natch
Yes, exactly, but with the additional factors I mentioned.

------
cameldrv
My educated guess: the somewhat unusual outline of the woman plus the
bicycle, together with the uneven lighting, caused the vehicle to misclassify
her as lightweight road debris.

To a real degree, this is a downfall of machine learning. Every distribution
has tails. If we learn purely from data, rather than from principle, we will
necessarily make mistakes on the tails. For problems that can be effectively
solved with 99% accuracy, this is fine, and we just deal with a few mistakes.
With more data, our accuracy will improve anyhow.

If a datapoint costs a human life, though, we can't afford to collect enough
data. We need a more sophisticated model of the world in order to operate on
the tails without killing people.

I think this might actually be a watershed moment for ML. Supervised learning
is not adequate for this type of task. Either the computer does low-level
perception and a human writes a high-level algorithm to manage the risk, or
datapoints have to contain a lot more information than just safe/unsafe. When
you made a mistake as a child, your parents didn't just punish you; they
explained what you did wrong and why, and gave you a rule to follow to do
better next time.

~~~
invalidusernam3
> misclassify her as lightweight road debris.

Surely the classification has a confidence level, and a low confidence score
should cause the vehicle to slow down if it's not sure what it's looking at?
Also, the sheer size of the "lightweight road debris" should have made the
vehicle slow down at least slightly, because hitting a 6 ft pile of paper
wouldn't be great even at a low speed.
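
A sketch of that policy (the thresholds are made up):

    # Toy policy: low classifier confidence, or a large object with a
    # harmless-sounding label, both cap the speed until we know more.
    def speed_cap_mps(label, confidence, height_m, current_mps):
        if confidence < 0.8 or (label == "debris" and height_m > 1.0):
            return min(current_mps, 5.0)   # crawl until it's resolved
        return current_mps

    print(speed_cap_mps("debris", 0.95, 1.8, 17.0))  # 5.0: too big to trust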

------
mtgx
I don't know what would have happened if "this guy programmed it", but the
answer should be _yes_: the car should have seen the cyclist and it should
have applied the brakes.

This was an interesting post, too, by Brad Templeton who worked on Google's
self-driving car project for a while:

http://ideas.4brad.com/almost-every-thing-went-wrong-uber-fatality-both-terrible-and-expected

~~~
decacorn
In response to that piece by Brad: I sincerely hope that the "safety" driver
in the Uber accident was fired.

~~~
mulmen
Why? If it is not proven that the driver was negligent, why fire this
individual? Presumably this person has a lot of experience and domain
knowledge in testing self-driving cars, so replacing them with someone else
may not be an improvement.

If they did make a mistake that got someone killed, that could change things,
but I would hope we wait to find out whether the driver was actually at
fault.

~~~
amaranth
I don't think Uber's safety drivers have any domain knowledge. The law says
you need bodies in seats, so Uber is paying as little as possible to put
bodies in seats. They aren't engineers or professional (as in
stunt/trick/etc.) drivers.

~~~
mulmen
I’m not sure stunt or trick drivers are the best choice anyway but you make a
good point. Do we know this individual’s qualifications or are we just
guessing at Uber’s standards?

------
SlowRobotAhead
Has anyone been able to even remotely explain why the LIDAR system wasn't
going nuts? I saw the "it was dark" nonsense, but I assume this vehicle had
laser and IR, right?

The camera footage was released; I'd like to see the LIDAR representation.

~~~
martinrebane
Author here :) LIDAR itself is just a sensor; it does not process the data,
nor does it output a directly usable image the way a camera more or less
does.

LIDAR in this case is a rotating laser, and while it scans, the vehicle moves
(imagine moving a sheet of paper while a copier scans it). All processing is
done afterwards, first to construct an image and then to understand and use
it. Part of why I wrote the piece was to explain how things can go wrong even
if your LIDAR works fine.
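
The "moving paper in the copier" effect is usually handled by de-skewing each
return using the vehicle's own motion. A crude one-dimensional sketch
(assuming constant ego velocity over one sweep; real pipelines interpolate
the full pose per return):

    # Crude motion compensation: shift each return back by how far the
    # vehicle travelled between that return and the end of the sweep.
    def deskew(returns, ego_v_mps, sweep_s=0.1):
        # returns: list of (t_frac, x_m), t_frac in [0, 1] over the sweep
        return [x_m - ego_v_mps * sweep_s * (1.0 - t_frac)
                for t_frac, x_m in returns]

    print(deskew([(0.0, 30.0), (1.0, 30.0)], ego_v_mps=17.0))
    # ~[28.3, 30.0]: the early return shifts ~1.7 m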

~~~
wnoise
You don't have to "construct an image" to use LIDAR returns. Minimal
processing of the point cloud will tell you that there's an obstacle, and you
don't need more than that to start trying to avoid hitting it. A simple
occupancy grid map, for instance, would suffice.

[https://en.wikipedia.org/wiki/Occupancy_grid_mapping](https://en.wikipedia.org/wiki/Occupancy_grid_mapping)
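
In the spirit of that link, a bare-bones sketch (cell size and hit count are
arbitrary):

    # Bare-bones occupancy grid: bin LIDAR returns into 0.5 m cells and
    # call any cell with enough hits "occupied".
    from collections import Counter

    def occupied_cells(points_xy, cell_m=0.5, min_hits=3):
        hits = Counter((int(x // cell_m), int(y // cell_m))
                       for x, y in points_xy)
        return {cell for cell, n in hits.items() if n >= min_hits}

    grid = occupied_cells([(12.1, 0.2), (12.2, 0.1), (12.3, 0.2),
                           (40.0, 5.0)])
    print(grid)   # {(24, 0)}: three returns ~12 m dead ahead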

~~~
nabla9
The problem with naive occupancy grid mapping from sparse LIDAR data is that
things like birds, falling leaves, and pieces of paper or plastic bags flying
in the wind can mark a grid cell as occupied.

Emergency braking for all of these cases would be very dangerous. The same
object must be scanned multiple times to establish whether it is something
that actually needs to be avoided.
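
One cheap way to encode that, extending the grid sketch above: require a cell
to stay occupied across several consecutive sweeps before the planner treats
it as a hard obstacle.

    # Persistence filter: only cells occupied in the last k consecutive
    # sweeps count as confirmed obstacles.
    def persistent(sweeps, k=3):
        # sweeps: list of occupied-cell sets, one per LIDAR sweep
        if len(sweeps) < k:
            return set()
        return set.intersection(*sweeps[-k:])

    print(persistent([{(24, 0)}, {(24, 0), (80, 10)}, {(24, 0)}]))
    # {(24, 0)}: the flickering cell at (80, 10) is dropped

A walking pedestrian moves between cells too, so a real system would track
clusters rather than raw cells; this is just the flavor of the idea.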

~~~
wnoise
Quite true -- but LIDAR has a very high refresh rate, and it's not hard to do
better than naive grid mapping. This was a full-height pedestrian plus
bicycle, which would have produced multiple clean returns from a Velodyne
HDL-64 at those distances.

------
jvanderbot
Either the author works on toy systems, or he is being disingenuous.

These are not problems with AI; they are problems with using _only_ AI, and
no viable-for-development system that I'm aware of in industry does that.

------
wffurr
They should release all of their sensor data and logs from the incident, and
then some of us could actually _answer the question_ in the headline.

This should be standard procedure for any future incidents.

~~~
bkanber
The NTSB has control of the investigation and the data for now. _That_ should
be standard procedure IMO.

------
arithma
Why isn't car-driving AI put through the same rigour of testing and approval
as that of airplanes? There are far more people on the road than in the sky,
and yet there's formal verification for airplane autopilot code, while an
accident like this is excused with "you couldn't have prevented it either"?

If no one could prevent such a thing, this AI should never drive a car!

~~~
ltrcola
More rigor is definitely required, but it may not be possible to formally
verify AI-driven code to the same extent. Aviation software is driven by
formal requirements, has a strict set of coding rules, and the generated
assembly code is compared against the original source code. Within those
coding rules, the software isn't even allowed to dynamically allocate memory.

I have no idea how you'd build an AI system under those constraints, given
that the computer essentially programmed the model itself by learning.

------
justicezyx
The author seems to be talking specifically about one particular style of AI
approach.

------
wirrbel
I admit I stopped reading halfway through, but managed to scroll to the last
paragraph.

This article is a dance through many important topics in AVs. Yet it fails to
actually answer the "Why collision avoidance is harder for an AI-based
system" question. It argues that systems with a smaller scope are easier and
systems with a larger scope are more difficult; it brings up determinism in
decision making; it brings up sensor sets. None of this is really about AI
versus hand-crafted rules; it is about problems inherent to robotics as a
whole. Again, it is a fine example of why the term AI is useless and harmful
in discussions, as it considerably blurs what is being talked about.

------
magoon
Unpopular question but here goes:

What if some crashes are unavoidable? E.g., somebody darts out in front of a
moving vehicle. We accept that trains are not at fault for striking
“trespassers” on their railways.

Also, when we all drive cars with collision avoidance systems, who gets sued
by whom? If my car e-brakes for no reason and I get rear-ended, is the guy
who hits me still at fault like usual?

I believe computer control will be super helpful and is here to stay, but
it's interesting that in cars it's being introduced as emergency help,
whereas in modern commercial airliners (where autopilot and landing-control
systems are ubiquitous) it is relied on only for the most routine,
straight-shot flying.

~~~
jsjohnst
> If my car e-brakes for no reason and I get rear-ended, is the guy who hits
> me still at fault like usual?

What’s different about your SDC slamming on its brakes vs. you doing the same
thing? That’s why the law specifies how much spacing there must be between
cars!

> What if some crashes are unavoidable?

From all the evidence publicly available at this time, it appears this
accident was anything but unavoidable, so the question is a red herring.

------
amanzi
All I'm going to say is that I could tell this was a Medium post by the
title...

~~~
phyzome
I couldn't, but I could immediately tell once I clicked and the page was half-
covered by persistent sharing dickbars. -.-

------
kristoffer
The author writes a long text about how this is "not as easy for an
autonomous system to solve", just to make the obvious point at the end of the
article.

It should be obvious to anyone that you need to compose systems of different
criticality levels to build a safe autonomous vehicle.

Of course the "AI" system needs to be complemented with a safety-critical
auto-brake and other fail-safes.

------
lukemunn
I've been really interested in exactly this question: the technical drive to
revisit moments when contingency (and tragedy) emerges. I've been working on
an artwork around this, "Iterated Accident", which I just put online this
morning: [http://darkmttr.xyz/16/](http://darkmttr.xyz/16/)

------
andreyk
This is shoddily written bunk. I would downvote if I could. Object detection
from LIDAR/radar data plus path planning is 'AI-based', and I guarantee you
Waymo's systems (which are 'AI-based', though more old-school and less
ML-based) would have done collision avoidance better than Uber's.

------
et2o
This is an extremely unconvincing argument.

------
jvanderbot
I smell a straw man argument.

