A $300 projector can fool Tesla’s Autopilot (arstechnica.com)
16 points by buran77 on Jan 28, 2020 | 25 comments



When I was a kid I drew STOP on the street in chalk. Yes, real human drivers stopped as I watched through the window and my parents made me spray it down with the garden hose. So I guess a $1 stick of chalk can fool human drivers.


Next: “A $300 projector can fool a human driver”

Watching the video with the projected pedestrian, I'm 100% sure any person would have stopped the car, if not for “person in the road” then just out of confusion. Is there a difference?


"Next: “A $300 projector can fool a human driver”"

I know you're using it as hyperbole, but there's no way you'd trick a human like this. You might do it by throwing a balloon dummy in front of them and causing them to swerve, for example. But the chances a human driver will fall for lines drawn on the road are very slim.

I'm a regular human driver, nothing special about my skills or capacities. But I navigate daily through a maze of roads with or without markings, or with a bunch of conflicting and overlapping markings on the road and on the side of the road (infrastructure is not that great where I live). Even when driving a road for the first time, I was never tricked by any fake lines or signs, and never randomly swerved because a sign/line told me to. As a human I can just ignore them and plot my own course, having better awareness of what's happening than any car has today.

It's new tech and many drivers actually (over)rely on it. It's a serious issue because the attack is accessible down to prankster level.

Just because I can extract a password from you with the legendary xkcd $5 wrench [0] doesn't mean finding a way to hack any password from a computer for $5 is something to scoff at.

[0] https://www.xkcd.com/538/


> Just because I can extract a password from you with the legendary xkcd $5 wrench [0] doesn't mean finding a way to hack any password from a computer for $5 is something to scoff at.

I'm trying to imagine what the reaction would have been if, when the Spectre/Meltdown exploits were published, someone had replied "I can get admin privileges with a $2 knife".


Well, that analogy doesn’t hold. In this case the exploit is introducing something to your visual field that looks like an obstacle. Both humans and robots mimicking human vision will be susceptible to it.

If you look at the video in the article, they don't just show line markings, but project a life-sized person on the road in the correct perspective for the driver's eye height. Of course the brightness is off and it has no volume, but I bet a horse 90% of drivers would stop if that projection showed up in front of them. In fact, it would be so easy to test this that one wonders why they didn't do just that in the study. A lot of people would even fall for the fake markings.

The really interesting bit in the study is the reaction time, which is only shown at the end of the video. Humans will not react to a stop sign that blinks in and out of existence, while for the computer it was there long enough. That sounds fixable in software by introducing some kind of persistence parameter (see the sketch below for the general idea). And in the end it only makes the attack harder to detect; it's fundamentally still the same attack as drawing STOP in chalk.

https://www.nassiben.com/phantoms
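
A minimal sketch of what such a persistence check might look like, in Python. Everything here is an illustrative assumption (the frame rate, the thresholds, the detector interface), not anything from the study or from Tesla's actual stack:

    from collections import deque

    class PersistenceFilter:
        """Accept a detection (e.g. a stop sign) only after it has been
        seen in most of the recent camera frames, so a 'phantom' that
        blinks in and out for a fraction of a second gets rejected.
        All parameters are illustrative guesses."""

        def __init__(self, window_frames=30, min_hits=24):
            # At a hypothetical 30 fps, a 30-frame window means the
            # object must persist for roughly a second to be trusted.
            self.history = deque(maxlen=window_frames)
            self.min_hits = min_hits

        def update(self, detected_this_frame):
            """Feed one frame's detection result; returns True only
            once the detection has persisted long enough to act on."""
            self.history.append(bool(detected_this_frame))
            return sum(self.history) >= self.min_hits

A phantom sign flashed for a few frames never reaches min_hits, while a real sign that stays in view for a second does. The obvious trade-off is added reaction latency, which is why the window can't be made arbitrarily long.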


> Both humans and robots mimicking human vision will be susceptible to it.

Obviously, but the bar is much lower for tricking cars than it is for humans, and I think that's the problem being highlighted here. Not that you can trick the computer, but that you can do it with something that would never work on a human. Self-driving systems and driver assists are always marketed as systems that improve on things humans can already do, and the expectation is that recognizing lane markers or signs on the side of the road is one of those things.

We adjusted all the infrastructure around driving to fit humans. The height and appearance of a stop sign account for this. It's possible we'll have to tweak this to be more suited to cars now, which would come with a different set of design rules: shapes, colors, materials used, etc. to better fit computer vision + lidar/radar. But as it stands today it's far easier to trick a car than a human, and I think that's an important thing to be aware of, not to explain away out of brand affinity or anything like that.


>Is there a difference?

Yes.

Basic human ability only changes over evolutionary timescales, so a human will always be fooled by a projected image, until such time as they start ignoring suspected projected images and plowing into real cars and people.

The software and hardware in the Tesla, and all self-driving platforms, can be constantly and iteratively updated to better detect spoofing attempts.


In other news, a $1 blindfold can fool a human!


True (and funny), but the difference is one of expectations. When you climb into the car and start driving there's no expectation that a blindfold will fall over your eyes. And the bar for tricking a human is higher, even if it's obviously possible.

People letting their car drive itself do not expect something like this to occur and may even have no hint that it's happening (as with the speed limit signs). The bar for tricking the car is far lower. None of the autonomous driving systems built so far account for maliciousness, and they are still far behind humans in "processing power", so it will be easier to trick them. For a while, at least.


So people are actually going to project cars onto the road to cause Teslas to get into accidents? They could just as well project strobing patterns and cause a lot more trouble.


Seems like this is not necessarily Tesla-exclusive; a projector could fool any system based on "classic" 2D vision until such systems are smart enough to put human-like processing power behind those images.

Even people can fall for a "Wile E. Coyote" type gag where they drive into a painting of a tunnel entrance on a wall [0]. Cars (computers in general) are nowhere near smart enough to deal with such tricks. But it does mean the bar is significantly lowered for tricking cars in self-driving mode and causing crashes, or at the very least some serious disturbance.

[0] https://www.insideedition.com/headlines/15350-street-artist-...


There is a solution already, and it's LIDAR/RADAR. None of these tricks should fool a LIDAR or a combined LIDAR and camera system.


"None of these should fool a LIDAR or combined LIDAR and camera system"

How would LIDAR help with the lines on the road? At best it could see the guard rail or another car in its path and brake to avoid a crash, but this could very well take a car off the side of the road if there's no clear obstacle.


I had skimmed and saw the fake traffic sign and fake car examples, both of which shouldn't fool LIDAR systems. It would have a much harder time with the fake-lines situation; it'd rely on looking for curbs or something, maybe some inference based on how other cars are moving.


> I had skimmed and saw the fake traffic sign and fake car examples both of which shouldn't fool LIDAR systems

There's no RADAR/LIDAR system that could reasonably tell real signs or lines from fake ones (the lines being the really dangerous issue). They come in many shapes and sizes. A fake sign would also fool a human, but overall a human driver has many other cues to base a decision on. At the very least, someone trying to trick you would have to invest far more in the props used.


The fake signs in this example were projected/displayed on trees/billboards and would be fairly easy to ignore, because the image would be embedded in a shape that doesn't match the shape of a sign. These shouldn't fool people; you could fool both a human [0] and CV with a fake sign on a pole, but neither should be fooled by one projected onto a tree. With LIDAR you can tell whether the image of the speed limit is on an actual speed limit sign or not (a rough sketch of that check follows below).

[0] Though people would have an additional layer of reasoning about whether the speed limit was reasonable for a particular stretch of road. Just putting up a 90 MPH speed limit on a random residential road, even with a convincing sign, shouldn't get people to do 90 on it. (Granted, computers already have databases of posted speed limits they could draw from too.)
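
To make the LIDAR point concrete, here's a rough sketch of the kind of geometric sanity check one could run, assuming you've already collected the LIDAR returns that project into a camera-detected sign's bounding box. The helper name, interface, and thresholds are all invented for illustration, not any real sensor API:

    import numpy as np

    def looks_like_a_real_sign(lidar_points, max_thickness_m=0.05,
                               min_points=20):
        """lidar_points: N x 3 array (meters) of returns falling inside
        the camera's 'speed limit sign' bounding box. Returns True only
        if they form a thin, flat, plate-like surface where the camera
        says a sign is. Thresholds are illustrative guesses."""
        points = np.asarray(lidar_points, dtype=float)
        if len(points) < min_points:
            # Image projected onto empty air or a distant background:
            # no solid object at the detection's apparent location.
            return False

        # PCA via SVD on the centered cloud: the smallest singular
        # value measures spread perpendicular to the best-fit plane.
        centered = points - points.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)
        thickness = s[-1] / np.sqrt(len(points))

        # A real sign face is nearly flat; foliage or a rough wall
        # behind a projection shows far more depth spread. A fuller
        # check would also compare the patch's extent against
        # regulation sign sizes.
        return thickness < max_thickness_m

A projection onto a tree fails the flatness test, and one onto open air fails the point-count test; a flat billboard would pass flatness but could be caught by the size/shape comparison mentioned in the comments.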


The car is actually a lot smarter in that case, since it has radar to help detect obstacles.


> radar to help detect obstacles

>> could fool any system based on "classic" 2D vision

I think I was pretty clear about which systems can be fooled. And radar will not help with projected images like speed limit signs, human figures, or lane markers. Nothing in the article (or common sense) suggests any of these issues could be helped by radar. Worse yet, some of these images can be projected for such a short time that the human driver would have no hint that something is about to go wrong, nor any explanation for it afterwards.


I was not responding to the article, but specifically to your "Cars (computers in general) are nowhere near smart enough to deal with such tricks [the Coyote tunnel gag]" - the radar will certainly detect a wall, while a human might still be fooled by the drawing.


Projecting an image of a person or a stop sign was a funny troll. But then it got deadly serious when they also demonstrated projecting LANES and tricking the car into veering into oncoming traffic.



This information will be useful for the future self-driving truck highway bandits.


A $0 rock thrown from a highway bridge can probably fool Tesla's Autopilot as well (and humans).


And you'd still be a lot more worried if an autonomous system threw it than if a human did. Autonomous systems making mistakes (especially the potentially deadly kind) trigger more anxiety. Almost any example you can think of sounds worse when an autonomous machine is involved, because it's one more thing to worry about.


I think you missed the context slightly. OP is saying that there are cheap and unsophisticated ways to fool the Tesla Autopilot system, in addition to expensive ones. Whether an autonomous system or a human threw the rock has nothing to do with OP's comment.



