
When’s a pedestrian not a pedestrian? When it’s a decal - edward
https://www.technologyreview.com/s/608321/this-image-is-why-self-driving-cars-come-loaded-with-many-types-of-sensor/
======
ZeroGravitas
That decal is a bad idea, regardless of whether the driver is an AI or a
human.

Apparently, painting vehicles grey can make them harder for people to see.
Having photo-realistic images of road scenes on them seems much worse.

Have they never watched Road Runner cartoons?

~~~
tyingq
Unfortunately, cute ideas like that are popular.
[https://m.imgur.com/K1w16U3](https://m.imgur.com/K1w16U3)

~~~
rcMgD2BwE72F
Also: [https://i.imgur.com/rQm66By.jpg](https://i.imgur.com/rQm66By.jpg)

~~~
samueloph
This happened in Brazil; for some time there was a hoax claiming that a car
had crashed into that painting.

The painting lasted less than 24 hours, though.

[http://www.e-farsas.com/parede-desenhada-com-o-tunel-papa-
le...](http://www.e-farsas.com/parede-desenhada-com-o-tunel-papa-leguas-
causou-um-acidente.html)

~~~
gus_massa
Snopes's version of the link: [http://www.snopes.com/road-runner-tunnel-crash-
rumor/](http://www.snopes.com/road-runner-tunnel-crash-rumor/)

------
raverbashing
This happens to humans as well. Case in point: radio shows/ads that have car
horn noises

The decal is bad, but the car doesn't risk doing anything that it shouldn't
(like driving through it). If it were a decal of an empty road ahead, it would
be more problematic.

~~~
Grue3
The decal might be deliberately constructed (using perspective tricks) to
alter the behavior of an autonomous car following your van. The cyclists might
appear either much closer or farther than the plane they're drawn on. If you
install a video panel on the back of your van, there are even more
possibilities. You might be able to play a video of an object rapidly moving
towards the autonomous car, which will cause it to swerve to try to avoid it.

And yes, some of this stuff will probably work on humans as well, but
autonomous driving is supposed to be better than humans.

~~~
kbart
Deliberately malicious behavior would land such a person in jail as soon as a
serious accident happened. You can do many things to trick human drivers into
causing an accident as well; the only reason we don't see this happening very
often is that most people in society are decent human beings.

~~~
unityByFreedom
> Deliberately malicious behavior would land such person in jail as soon as
> serious accident happens

Edit: In the real world, this is more likely to happen by accident than
intent.

No regulations exist yet.

The offender could claim whatever mod he made wasn't intended to cause harm to
others. Then you have to prove intent, which can be difficult.

~~~
dragonwriter
> The offender could claim whatever mod he made wasn't intended to cause harm
> to others. Then you have to prove intent, which can be difficult.

Or you have to have laws which cover acts done with mental states short of
intent to cause harm, like (in ascending order of seriousness in existing law,
and all of these already apply to acts that get people killed) criminal
negligence, recklessness, and depraved indifference to human life.

Of course, it's a lot less difficult to convince even a criminal jury of
intent than people on HN often seem to think; legal proof, even to meet the
“beyond a reasonable doubt” standard, is _far_ less than logical proof.

------
filereaper
I really _hate_ to be that guy, but I think my biggest worry is how each of
these car manufacturers will handle such edge cases.

Each car manufacturer's autonomous driving system is a black box; we don't
know whether a given edge case is handled, or what action the car will take.
Maybe now's not the time, but it would be nice to see a deterministic rule
book that says: for this edge case, the system will react this way. We can't
account for all edge cases, but some default fallback would be nice.
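Such a rule book could be as simple as a published lookup from detected situation to mandated default action. A minimal sketch, where every situation name and action here is hypothetical (nothing below comes from a real system):

```python
# Hypothetical deterministic fallback table: detected edge case -> default action.
# The point is not the specific entries but that the mapping is published
# and auditable.
FALLBACK_RULES = {
    "sensor_disagreement": "slow_and_alert_driver",
    "unclassifiable_object_ahead": "reduce_speed_keep_lane",
    "camera_blinded": "hand_over_control",
}

DEFAULT_ACTION = "controlled_stop"  # catch-all when no rule matches

def fallback_action(edge_case: str) -> str:
    """Look up the mandated response, falling back to the catch-all."""
    return FALLBACK_RULES.get(edge_case, DEFAULT_ACTION)

print(fallback_action("camera_blinded"))     # hand_over_control
print(fallback_action("never_seen_before"))  # controlled_stop
```

The catch-all entry is the "default fall back" the comment asks for: even an unanticipated edge case maps to a known, reviewable behavior.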

~~~
westoque
Maybe we need an open source protocol that has all the standards and edge
cases handled.

I think competitiveness is a good thing and will drive innovation. But in this
specific field, with regards to safety and security, it needs to be out in the
open.

~~~
yellow_postit
The protocol is likely uninteresting; rather, it's the models, training data,
and training configurations that are most interesting and the least likely to
be open sourced. I don't work in the autonomous vehicle space, but I would be
surprised if there were a set of deterministic rules that could be easily
reviewed, rather than collections of deep NN models.

~~~
ajanuary
Wouldn't you test the output rather than the input? Before an autonomous
vehicle / update is allowed to go on public roads, it must pass a "driving
test" that includes a bunch of known edge cases.
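Concretely, such a driving test could be a regression suite of scenario-to-expected-behavior checks run against each release. A sketch under invented assumptions (the scenario names and the `simulate` stub are hypothetical; a real harness would replay sensor logs through the actual driving stack in a simulator):

```python
# Hypothetical "driving test" regression suite: every software update must
# reproduce the expected behavior on a bank of known edge-case scenarios.
EXPECTED_BEHAVIOR = {
    "cyclist_decal_on_van": "follow_vehicle",   # don't brake for a picture
    "real_cyclist_crossing": "yield",
    "plastic_bag_on_highway": "maintain_speed",
}

def simulate(scenario: str) -> str:
    # Stub standing in for the real system under test.
    return EXPECTED_BEHAVIOR[scenario]

def failed_scenarios() -> list:
    """Return the scenarios where the system's behavior deviates."""
    return [s for s in EXPECTED_BEHAVIOR if simulate(s) != EXPECTED_BEHAVIOR[s]]

print(failed_scenarios())  # [] means the update may go on public roads
```

This tests the output (behavior) without needing access to the models or training data, which is exactly the point of the comment.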

~~~
ghaff
Sure. And maybe that's what will happen. But now you have pre-defined tricky
situations. Everyone builds those very specific situations into their systems
but may not really generalize them.

------
jacquesm
A while ago I was deaf in one ear for a bit (a few weeks, as an aftereffect of
a flu), and I hadn't realized before then how much I rely on my hearing while
driving. Do self-driving cars use audio sensors?

~~~
asadjb
I think we rely on our hearing because we can only see in one direction. With
an autonomous car, you can have sensors pointing in every direction, so
visual/radar/lidar cues might be all you need.

~~~
buro9
Sound travels around corners.

It not only reflects very well but the way some low frequencies spread out is
enough to allow certain vehicles to be heard before they are visible (solid
concrete could obscure the view).

~~~
cesarb
Also, sound can give early warning of an approaching train, or of an emergency
vehicle nearby, or of a truck in reverse.

------
vbuwivbiu
Apart from illusions, what about discriminating between genuine obstacles and
things that can safely be driven over? For example, a barrier vs. some old
snow. The car's software has never experienced the properties of materials by
walking on them, jumping on them, touching them, etc. It can't predict how
materials will behave when driven over.

~~~
albertgoeswoof
Your car might not have run over snow before, but millions of other cars
equipped with self-driving sensors will have.

~~~
swiley
And this is why you don't want to be one of the first people riding self
driving cars. Especially since there isn't even a way for you to manually
inform the car about things like this.

~~~
albertgoeswoof
I wouldn't worry about that; millions of miles have been driven by them by
now. By the time you get into the first mass-produced, truly self-driving car,
you will not be able to compete with it.

------
tyingq
Well, it's one of many reasons why. I assume there are lots of edge cases.
Floating plastic bags, for example.

~~~
hectorr1
My Volvo slammed on the autonomous brake yesterday for no discernible reason.
Best guess was a leaf blew across the sensor.

~~~
losteverything
"Avoid highway braking" was drilled into us as 15-year-olds in driver's ed.
That is still one sacred rule I swear by.

Chain-reaction braking has unnecessarily caused many accidents (witnessed by
me). Driving on I-95 in Florida, you have upgrades where bridges were built
(limiting sight lines). Heavy congestion causes unnecessary braking. I
witnessed several rear-end collisions on a trip north from the Keys.

Never highway brake.

I'm glad you posted this, because it's good to know that cars may not respond
the way a human was trained to.

Try braking on the Merritt Parkway and see what happens.

~~~
bryanlarsen
But when an accident due to highway braking occurs, the person in the
following vehicle will always be considered at fault -- the person in front of
you may brake hard for any reason, good or bad, and if you collide with the
back of that vehicle, you were following too closely, driving too fast, and/or
not paying attention.

The person in front may also be partially considered at fault if they didn't
have a good reason, but the person following will always be considered at
fault.

~~~
matthewowen
Who is at fault matters... but it still sucks to be involved in a collision.
Not being responsible is cold comfort, especially if it results in injury to
passengers in your car.

------
jackcarter
At first, I interpreted the image as a mockup of a self-driving car
broadcasting warnings about upcoming hazards, so that cars to its rear could
know about them before directly observing them. I wonder whether that will
ever be possible across differing car brands.

~~~
icebraining
Both the Federal government and the EU are trying to force the industry to
come up and accept standards: [https://en.wikipedia.org/wiki/Vehicle-to-
vehicle](https://en.wikipedia.org/wiki/Vehicle-to-vehicle)

------
clairity
this almost throwaway line at the end caught my eye: "The safest bet, then, is
for automakers to use an array of sensors, in order to build redundancy into
their systems."

experienced engineers know that redundancy is a double-edged sword. what do
you do when the data from redundant systems disagree? having more data sources
means dealing with more edge cases. a single reliable data source (one whose
limitations and biases are well known) is better than two less predictable
ones.

i'm not saying that that's necessarily the case here, but it's not as obvious
as this line suggests that the answer is just to add more kinds of sensors
(and thereby add more diverse data sources).

~~~
catbird
I disagree. In an autonomous system, more sensors is almost always better,
provided you have enough computing power to process the data. Having two
sensors that disagree gives you much more information than if you had a single
sensor with a malfunction. In the first case, you know there's some kind of
fault and can take steps to fix it. In the second, you're running off bad data
and who knows what could happen.

Three sensors are better than two—it's a common setup in flight control
systems to have three identical computers running all calculations, and the
majority opinion is taken as truth.
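A minimal sketch of that 2-out-of-3 voting pattern (the readings and the agreement tolerance here are made up for illustration, not taken from any real flight-control system):

```python
def vote(readings, tolerance=0.5):
    """2-out-of-3 voter: return the average of two readings that agree
    within `tolerance`, or None if no majority exists (a detectable fault)."""
    a, b, c = readings
    if abs(a - b) <= tolerance:
        return (a + b) / 2
    if abs(a - c) <= tolerance:
        return (a + c) / 2
    if abs(b - c) <= tolerance:
        return (b + c) / 2
    return None  # no majority: flag a fault instead of trusting bad data

# Two sensors agree, one is faulty: the 47.0 outlier is outvoted.
print(vote([10.1, 10.3, 47.0]))  # ≈ 10.2
# All three disagree: the system knows it has a fault.
print(vote([10.0, 20.0, 30.0]))  # None
```

With only two sensors a disagreement tells you *that* something is wrong but not *which* sensor to trust; the third vote is what turns "fault detected" into "fault masked", which is the point being made above.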

~~~
clairity
yes, in a resource-unconstrained world, we could put every sensor on the car
and add as much computing power as we'd like.

in your 2-sensor scenario, how do you know which one is providing incorrect
data?

------
throw2016
This is what happens when image processing and pattern matching, driven by
social media companies' need to identify pornographic images and masquerading
as AI, meets the real world.

This just underlines the gap between what AI actually means and what is
currently trying to pass for it.

------
unityByFreedom
Not sure why all the defense of "humans make this mistake too" in this thread.

Isn't the point of SDC that they are _better_ than humans? Shouldn't we
support a diverse array of sensors to cover all the eventualities that we can?
If some form of radar + camera solves this problem, what's the point of
arguing humans would get confused by it too?

To be clear, the article alludes to the existence of three types of cars in
the wild,

(1) Cars with many types of sensors

(2) Cars with some types of sensors

(3) Cars with no sensors

All other things being equal, why are people voting for (2) rather than (1)?

------
smileysteve
I don't entirely understand why they chose this image, though.

While it's just a camera, and the picture of the cyclists may cause some
perception issues due to skewing, the camera still sees the distance between
the front bumper and the bicycle, and also sees the breadth of the car.

If, in this image, we see 3 bicyclists and a car in front of us, what's the
worst decision we can make? To only pass the bicyclists at that car's width?
To wait until we can safely pass? To treat it like a bicycle race with a
support vehicle and wait to pass until we can pass the entire group?

~~~
Djvacto
I think the image is just meant to showcase that one sensor can be confused by
edge cases. It's a good image that shows unintended interpretation of data.
There are likely many other edge cases where it's less clear why a car would
misinterpret them, which is why they went with this image.

~~~
studentrob
Yeah. Once seen, engineers could correct for this kind of situation.

The point is that there are many such unseen situations that current systems
don't handle, and that diversifying the sensor array can help handle those.

------
Pulcinella
Does anyone have any good research or review articles into the current state
of sensor fusion?

To me as a layman, car AI really seems like one of the most
"intelligence"-heavy types of AI. Yes, AlphaGo and Deep Blue are really good
at something that was once thought "easy" for humans and hard for computers,
but AI cars seem like trying to construct part of an animal's nervous system
and sticking it into a vehicular body.

~~~
unityByFreedom
Personally I think medical diagnostics are the most interesting and useful AI.

Radiology, and healthcare overall, is set to improve rapidly if we can get
some more labeled data. The hardware and software are already in place to
start learning how to diagnose cancer and other health issues from human scan
data.

Not sure whether an "intelligence" rating is possible, given that AI systems
are still designed to be domain-specific, i.e., self driving cars aren't
detecting cancer.

------
DashRattlesnake
I understand the argument: since car sensors are currently less robust than
human eyes, you need to use many different kinds of them to get near-human
results.

However, humans get by driving reasonably safely using just vision. I'm
totally a computer vision layman, but couldn't you correctly interpret this
situation with binocular vision?

~~~
Eridrus
Humans are kind of shitty at driving though.

~~~
ghaff
You have to define "shitty" in this context. I agree that there are a lot of
bad drivers out there. But even with those bad drivers, people driving while
texting/intoxicated, and with older vehicles that don't have all the latest
safety and assistive driving features, the fatality rate in the US is still
something like 1 per 100 million miles. There are a lot of auto-related deaths
in the US but there are a lot of miles driven.

~~~
Eridrus
Given modern safety standards in vehicles, I don't think fatalities are a good
measure of how good humans are at driving.

It's news when a self-driving car runs a red light, but pretty normal for
humans.

------
brreakdown
How about when photons flood the camera/sensor (the sun shining directly at
you)? Or when photons cover the subject that the camera/sensor is supposed to
be reading? The last time that happened, the driver died
([https://www.theguardian.com/technology/2016/jul/01/tesla-
dri...](https://www.theguardian.com/technology/2016/jul/01/tesla-driver-
killed-autopilot-self-driving-car-harry-potter)). There are too many naturally
occurring factors at play to predetermine what a moving vehicle should or
shouldn't do every second. For this reason, the vehicles we travel in should
first be put in an environment that blocks out distractions, like Musk with
his Boring machine; that controlled environment makes more sense. Otherwise I
need an alarm to wake me up when the sensors know they can't tell what's going
on, so I can regain control of my vessel and guide it correctly through the
photons. Yes?

~~~
hanbura
>How about when photons flood the camera/sensor

First you try countermeasures, if that fails you remove yourself from traffic.
At least that's what humans do in that situation.

>or - when photons cover the subject that the camera/sensor is supposed to be
reading -- last this happened the rider died

A frequent problem for human drivers too.

We just need good heuristics for coming to a safe stop, and to be generally
better than the average driver.

~~~
pbhjpbhj
Similar to fog or heavy rain, which can also readily obscure the road and
hazards.

I'm not sure how well these things are currently handled by self-driving cars.

------
GrinningFool
"When’s a pedestrian not a pedestrian? When it’s a decal"

This sub-title lacks context. It would be nice if the title here matched the
article title, "This Image Is Why Self-Driving Cars Come Loaded with Many
Types of Sensors".

