
A $300 projector can fool Tesla’s Autopilot - buran77
https://arstechnica.com/cars/2020/01/how-a-300-projector-can-fool-teslas-autopilot/
======
aaron-santos
When I was a kid I drew STOP on the street in chalk. Yes, real human drivers
stopped as I watched through the window and my parents made me spray it down
with the garden hose. So I guess a $1 stick of chalk can fool human drivers.

------
jiofih
Next: “A $300 projector can fool a human driver”

Watching the video with the projected pedestrian, I'm 100% sure any person
would have stopped the car, if not for "person in the road" then just out of
confusion. Is there a difference?

~~~
buran77
"Next: “A $300 projector can fool a human driver”"

I know you're using it as hyperbole, but there's no way you'd trick a human
like this. You might do it by throwing a balloon dummy in front of them,
causing them to swerve, for example. But the chances that a human driver will
fall for lines drawn on the road are very slim.

I'm a regular human driver, nothing special about my skills or capacities. But
I navigate daily through a maze of roads with or without markings, or with a
bunch of conflicting and overlapping markings on the road and on the side of
the road (infrastructure is not that great where I live). Even when driving a
road for the first time I was never tricked by any of the fake lines or signs,
and never randomly swerved because a sign/line told me to. As a human I can
just ignore them and plot my own course, having better awareness of what's
happening than any car has today.

It's new tech and many drivers already (over)rely on it. It's a serious
issue because the attack is accessible down to prankster level.

Just because I can extract a password from you with the legendary xkcd $5
wrench [0] doesn't mean finding a way to hack any password from a computer for
$5 is something to scoff at.

[0] [https://www.xkcd.com/538/](https://www.xkcd.com/538/)

~~~
close04
> Just because I can extract a password from you with the legendary xkcd $5
> wrench [0] doesn't mean finding a way to hack any password from a computer
> for $5 is something to scoff at.

I'm trying to imagine what the reaction would have been if, when the
Spectre/Meltdown exploits were published, someone had replied "I can get admin
privileges with a $2 knife".

~~~
jiofih
Well, that analogy doesn’t hold. In this case the exploit is introducing
something to your visual field that _looks like an obstacle_. Both humans and
robots mimicking human vision will be susceptible to it.

If you look at the video in the article, they don't just show line markings;
they project a life-sized person on the road, in the correct perspective for
the driver's eye height. Of course the brightness is off and it has no volume,
but I'd bet that 90% of drivers would stop if that projection showed up in
front of them. In fact, it would be so easy to test this that one wonders why
they didn't do just that in the study. A lot of people would even fall for the
fake markings.

The really interesting bit in the study is the _reaction time_, which is only
shown at the end of the video. Humans will not react to a stop sign that
blinks in and out of existence, while for the computer it was there long
enough. Sounds totally fixable in software by introducing some kind of
persistence parameters (sketched below). And in the end that only makes the
attack harder to detect; it's fundamentally still the same attack as drawing
STOP in chalk.

[https://www.nassiben.com/phantoms](https://www.nassiben.com/phantoms)
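
Something like that persistence idea is easy to sketch. Below is a minimal,
hypothetical Python debounce filter; the frame rate, labels, and the 0.5 s
threshold are my own assumptions for illustration, not anything from the study:

```python
# Minimal sketch (not from the study): a temporal "persistence" filter that
# only acts on a detection once it has survived enough consecutive frames.
# Frame rate, class labels, and thresholds are illustrative assumptions.
from collections import defaultdict

FRAME_RATE_HZ = 30
MIN_PERSISTENCE_S = 0.5          # assumed: ignore anything briefer than this
MIN_FRAMES = int(MIN_PERSISTENCE_S * FRAME_RATE_HZ)

class PersistenceFilter:
    def __init__(self, min_frames=MIN_FRAMES):
        self.min_frames = min_frames
        self.streaks = defaultdict(int)   # label -> consecutive frames seen

    def update(self, detections):
        """detections: set of labels seen in the current frame,
        e.g. {"stop_sign", "pedestrian"}. Returns labels stable enough to act on."""
        for label in list(self.streaks):
            if label not in detections:
                self.streaks[label] = 0   # streak broken: a split-second phantom resets
        for label in detections:
            self.streaks[label] += 1
        return {l for l, n in self.streaks.items() if n >= self.min_frames}

# A stop sign flashed for only 3 frames (0.1 s) never reaches the threshold:
f = PersistenceFilter()
for _ in range(3):
    stable = f.update({"stop_sign"})
print(stable)  # set() -- the phantom is ignored
```

The obvious trade-off is added reaction latency for real obstacles, which is
presumably why the threshold can't just be cranked up for free.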

~~~
close04
> Both humans and robots mimicking human vision will be susceptible to it.

Obviously, but the bar for tricking cars is much lower than for humans, and I
think that's the problem being highlighted here. Not that you can trick the
computer, but that you can do it with something that would never work on a
human. Self-driving systems and driver assists are always marketed as systems
that _improve_ on things humans can already do, and the expectation is that
recognizing lane markings or signs on the side of the road is one of those
things.

We adjusted all the infrastructure around driving to fit humans; the height
and appearance of a stop sign account for this. It's possible that we'll have
to tweak this to better suit cars, which would come with a different set of
design rules: shapes, colors, materials, etc., to better fit computer vision
plus lidar/radar. But as it stands today it's far easier to trick a car than a
human, and I think that's an important thing to be aware of, not something to
excuse away out of brand affinity or anything like that.

------
tinus_hn
In other news, a $1 blindfold can fool a human!

~~~
close04
True (and funny), but the difference is one of expectations. When you climb
into the car and start driving there's no expectation that a blindfold will
fall over your eyes. And the bar for tricking a human is higher, even if it's
obviously possible.

People letting their car drive itself do not expect something like this to
occur and may not even have a hint that it's happening (as with the speed
limit signs). The bar for tricking the car is far lower. None of the
autonomous driving systems built so far account for maliciousness, and they
are still far behind humans in "processing power", so it will be easier to
trick them. For a while at least.

~~~
tinus_hn
So people are actually going to project cars onto the road to cause Teslas to
get into accidents? They could just as well project strobing patterns and
cause a lot more trouble.

------
close04
Seems like this is not necessarily Tesla-exclusive; a projector could fool any
system based on "classic" 2D vision until such systems are smart enough to put
human-like processing power behind those images.

Even people can fall for a "Wile E. Coyote" type gag where they drive into a
painting of a tunnel entrance on a wall [0]. Cars (computers in general) are
nowhere near smart enough to deal with such tricks. But it does mean that the
bar is significantly lowered for tricking cars in self-driving mode and
causing crashes, or at the very least some serious disturbance.

[0] [https://www.insideedition.com/headlines/15350-street-artist-...](https://www.insideedition.com/headlines/15350-street-artist-painted-a-road-runner-tunnel-on-a-wall-someone-tried-to-drive-through)

~~~
rtkwe
There is already a solution, and it's LIDAR/RADAR. None of these should fool a
LIDAR or combined LIDAR and camera system.

~~~
buran77
"None of these should fool a LIDAR or combined LIDAR and camera system"

How would LIDAR help with the lines on the road? At best it could see the
guard rail or another car in its path and brake to avoid a crash, but this
could very well take the car off the side of the road if there's no clear
obstacle.

~~~
rtkwe
I had skimmed and saw the fake traffic sign and fake car examples both of
which shouldn't fool LIDAR systems. It would have a much harder time with the
fake-lines situation; it'd rely on looking for curbs or something, maybe some
inference based on how other cars are moving.

~~~
close04
> I had skimmed and saw the fake traffic sign and fake car examples both of
> which shouldn't fool LIDAR systems

There's no RADAR/LIDAR system that could reliably tell real signs or lines
from fake ones (the lines being the really dangerous issue). They come in many
shapes and sizes. A fake sign would also fool a human, but overall a human
driver has many other cues to base a decision on. At the very least, someone
trying to trick you would have to invest far more in the props.

~~~
rtkwe
The fake signs in this example were projected/displayed on trees and
billboards and would be ignored fairly easily, because the image would be
embedded in a shape that doesn't match the shape of a sign. These types of
signs shouldn't fool people. You could fool both a human [0] and CV with a
fake sign on a pole, but neither should be fooled by one simply projected onto
a tree. With LIDAR you can tell whether the image of the speed limit is on an
actual speed-limit sign or not, as sketched below.
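
Purely as an illustration of that cross-check (the thresholds, and the
assumption that you already have the LIDAR returns falling inside the detected
sign's bounding box, are mine, not the study's), a rough Python sketch:

```python
# Hypothetical sketch: accept a camera-detected sign only if the LIDAR points
# behind its bounding box look like a small, flat plate rather than foliage
# or a large billboard. All names and thresholds are invented for illustration.
import numpy as np

MAX_PLANE_RMS_M = 0.05      # assumed flatness tolerance
MAX_EXTENT_M = 1.2          # assumed: real speed-limit plates are small

def looks_like_sign_plate(points: np.ndarray) -> bool:
    """points: (N, 3) LIDAR returns inside the detected sign's bounding box,
    in meters. Fit a plane via SVD and check flatness and size."""
    if len(points) < 20:                      # too sparse to judge
        return False
    centered = points - points.mean(axis=0)
    # Smallest singular value measures total deviation from the best-fit plane
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    rms_deviation = s[-1] / np.sqrt(len(points))
    extent = centered.max(axis=0) - centered.min(axis=0)
    return rms_deviation < MAX_PLANE_RMS_M and np.sort(extent)[-1] < MAX_EXTENT_M

# Demo: a flat 0.6 m plate passes; random scatter (like foliage) fails.
rng = np.random.default_rng(0)
plate = np.column_stack([rng.uniform(0, 0.6, 200),
                         rng.uniform(0, 0.6, 200),
                         np.zeros(200)])
foliage = rng.uniform(0, 2.0, (200, 3))
print(looks_like_sign_plate(plate), looks_like_sign_plate(foliage))  # True False
```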

[0] Though people have an additional layer of reasoning about whether a speed
limit is reasonable for a particular stretch of road. Just putting up a 90
MPH speed limit sign on a random residential road, even a convincing one,
shouldn't get people to do 90 on it. (Granted, computers already have
databases of posted speed limits they could draw from too; see the sketch
below.)
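
A minimal sketch of that database cross-check, with the map structure and all
values invented for illustration:

```python
# Hypothetical sketch: before obeying a detected speed limit, compare it
# against a map of posted limits and a sanity bound for the road class.
POSTED_LIMITS_MPH = {"elm_street": 25, "route_9": 55}      # assumed map data
MAX_PLAUSIBLE_MPH = {"residential": 35, "highway": 85}     # assumed bounds

def accept_detected_limit(road_id: str, road_class: str, seen_mph: int) -> int:
    """Return the limit to obey: the detected one only if it agrees with the
    map and is plausible for the road class; otherwise fall back to the map."""
    mapped = POSTED_LIMITS_MPH.get(road_id)
    if mapped is not None and seen_mph != mapped:
        return mapped                # disagreement: trust the database
    if seen_mph > MAX_PLAUSIBLE_MPH.get(road_class, 85):
        return mapped if mapped is not None else 25   # assumed safe fallback
    return seen_mph

# A projected "90" on a residential street is rejected:
print(accept_detected_limit("elm_street", "residential", 90))  # -> 25
```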

------
hurricanetc
Projecting an image of a person or a stop sign was a funny troll. But then it
got deadly serious when they also demonstrated projecting LANES and tricking
the car into veering into oncoming traffic.

------
aidenn0
[https://xkcd.com/1958/](https://xkcd.com/1958/)

------
aigen001
This information will be useful for the future self-driving truck highway
bandits.

------
solarkraft
a $0 rock thrown from a highway bridge can probably fool Tesla's Autopilot as
well (and humans).

~~~
buran77
And you'd still be a lot more worried if an autonomous system threw it than if
a human did. Autonomous systems making mistakes (especially the potentially
deadly kind) trigger more anxiety. Almost any example you can think of sounds
worse when an autonomous machine is involved, because it's one more thing to
worry about.

~~~
craftinator
I think you missed the context slightly. OP is saying that there are cheap and
unsophisticated ways to fool the Tesla Autopilot system, in addition to
expensive ones. Whether an autonomous system or a human threw the rock has
nothing to do with OP's comment.

