GhostStripe attack haunts self-driving cars by making them ignore road signs (theregister.com)
32 points by curmudgeon22 13 days ago | 40 comments





These guys had to work really, really hard to find a way to fool the machine vision of recent self-driving software!

My take: Maybe the machine vision of recent self-driving software has become harder to fool than human vision? Human vision is remarkably easy to fool. See https://www.ritsumei.ac.jp/~akitaoka/index-e.html and https://en.wikipedia.org/wiki/Optical_illusion for example.

Luckily for all of us, there are no smart people laboring all day long in a lab trying to find new ways to fool human drivers into doing dumb, dangerous things on the road. Human drivers already do that on their own -- no tricks or illusions are necessary.


Researching adversarial patterns is important, especially for systems you can't formally verify and can only validate through exhaustive testing in use.

It’s not just about whether they are harder to fool than humans; it’s about finding the conditions under which they are fooled, which can be very different.

This is needed both to identify edge cases where these patterns can appear in the wild and to make the systems resistant to outside interference.


I agree. It seems the edge cases that trip these systems are becoming so rare and convoluted that the return on this kind of research is approaching zero. At some point we just need to collect data from real-world tests.

Could be, but in this case it may actually have real world implications.

LED wall displays are becoming more and more common; I’ve already seen ones that cause annoying reflections on road signs, turning the retro-reflectors into a disco ball.

I’ve skimmed through the paper and I can’t tell how likely this is to occur unintentionally in the wild, but their strobing pattern doesn’t seem to be that aggressive, so a combination of the animation on the display itself and LED strobing due to PWM control could quite possibly cause similar interference.


Optical illusions are cool and all, but can you name one that would make it harder to read a sign?

We don't make things where you need to pick up on small differences in alignment or color, and the illusions that cause motion have to be very big to be a distraction.


Yesterday I saw someone with their left turn signal on. They were turning left (the wrong way) onto a one-way street. There was a one-way arrow, a Do Not Enter sign, and an arrow for their lane pointing straight ahead only. I pulled up next to them on my bike and tapped on their window. They were peeling a banana. I shook my finger and pointed at the sign. The light turned green and they turned into three lanes of quickly oncoming traffic.

I'm honestly not worried about cars being occasionally unable to read a sign. In fact, I know my car already knows where the signs are and what they say before it sees them. I'll always be more scared of human drivers: computers can't get distracted peeling a banana.


If someone on a bike shook their finger at me, I'd turn towards them.

That was sort of the point. We were stopped at a light, I wanted to get her attention, and wagging my finger and pointing at the sign was the best I could come up with to explain that she couldn't make that turn.

https://www.google.com/search?q=confusing+road+signs&udm=2

Illusions aren't even necessary with human drivers, many of whom are shockingly bad -- look around next time you're on the road.


If you put up a big sign that says "potato" it's going to confuse both humans and computers, but that's not fooling vision in any way.

The type of issue in the article needs signs that seem to be working to humans, but are secretly failing. Not just "oh they put up too much in a single spot" or "there are two signs that contradict each other".


Given how bad human drivers already are, I wouldn't be surprised if there are lots of ways to fool them that don't fool the machines. Speaking from experience, having used Tesla's "Supervised FSD" v12.3.6, which is far from perfect, I've been surprised by how often it picks up things (signs, cars, pedestrians, etc.) that I have missed on the road. AFAIK, no one is laboring to find ways to fool human drivers into misreading road signs.

Not an illusion but people still run into the 11'8" (now +8") bridge despite plenty of signage.

https://www.youtube.com/watch?v=8qiGP72GFUc


Interesting, but wouldn't it be less work to cover/remove/deface the sign if you're trying to do something like this? Planting a sneaky black box with an LED would seem super suspicious.

One version of their attack requires access to the vehicle. If you already have access to the vehicle, you could just load a different program on it to do your bidding and have it replace itself with the original code once the deed is done. That reminds me of several recent CVEs that begin with "If an attacker has root access on a target system, then ... [series of steps] ... they can gain root access."

The thing is, it's very easy to get physical access to a parked car. Even with your car locked, there's a lot that can be done on the outside of a car; not to mention, it's trivially easy to get inside if you are a determined bad actor.

It's basically just LEDs lighting up the sign in multiple colors at a frequency adapted to the vehicle speed. You could package it up to look like a regular light (that looks white to humans) and a regular traffic camera or surveillance camera.

The camera is easy enough to make inconspicuous. The LEDs and processor can be in a regular light enclosure. You just need to find a good excuse for the sign to be illuminated.
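To illustrate the mechanism being described, here is a minimal sketch of how a rolling-shutter camera samples a flickering light row by row and ends up with colored stripes across the frame. The row readout time, flicker period, and function name are illustrative assumptions, not values from the paper.

    # Minimal sketch (illustrative numbers, not from the paper): a rolling-shutter
    # camera reads the sensor row by row, so each row samples whichever color a
    # fast-flickering LED happens to show at that row's readout instant.
    import numpy as np

    ROW_READOUT_US = 30.0       # assumed per-row readout time
    FLICKER_PERIOD_US = 900.0   # assumed LED cycle (R -> G -> B)
    COLORS = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)

    def striped_frame(rows=480, cols=640):
        """Each row is lit by the LED phase active at that row's readout time."""
        frame = np.zeros((rows, cols, 3), dtype=np.uint8)
        for r in range(rows):
            t = r * ROW_READOUT_US
            phase = int((t % FLICKER_PERIOD_US) // (FLICKER_PERIOD_US / len(COLORS)))
            frame[r, :] = COLORS[phase]
        return frame

    # A sign lit this way shows horizontal color bands in the captured frame,
    # even though a human eye integrates the flicker and just sees white light.
    print(striped_frame().shape)  # (480, 640, 3)

The stripe spacing depends on the ratio of the flicker period to the row readout time, which is presumably why the attack has to adapt its timing as the vehicle's distance and speed change.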


As long as cars gracefully handle weird sightings (like a stop sign in the middle of an 80 mph highway), it should be fine.

When I was one year old my dad was driving on the highway, got distracted, and ended up on a portion of highway that was under construction. He was driving at full speed when he saw a barrier, late enough that he swung the steering wheel to avoid it and ended up driving on two wheels before the car fell back and stopped. No baby seat at the time; my grandma was holding me in the back (the good old years when cars were lighter and less safe).

Random things can happen on the road, including wrong signage. As long as the car can assess the current state and gracefully transition to a safe spot, it should be fine to throw random signage at them.


Somewhat confusing because it seems like they didn't test it against any actual self-driving car stack, just some kind of toy version?

Who knows what context besides the actual sign itself is used to determine where to stop, for example. Maybe the line on the road, maybe map data, maybe the pole or the appearance of the intersection itself. Maybe the vision system is resilient to this attack in some other way. Maybe the system detects this state and has a fail-safe behavior.

Anyway, interesting to research, but unclear how it affects production systems.


It’s good to know the limits of AI systems. However, this doesn’t mean that we shouldn’t develop self-driving. Infrastructure is vulnerable, AI or no. You can attack the infrastructure; there is no counter other than a populace that mostly obeys the rules. People can go out and remove road signs, paint fake lines on the highway, or cover up stop signs with plastic bags. Those would be crimes. Intentional interference with a self-driving car would also be a crime. A certain amount of trust is required to make society work.

Just read that xkcd, https://xkcd.com/1958/. Same point, most people aren’t murderers.

This is a little bit silly because it glosses over the fact that, since some people are evil, we want to limit the blast radius of the damage they can cause. One person switching signs for mischief’s sake has a few victims. One person exploiting a vulnerability that’s shared by X% of vehicles is a very different matter.

So they’ve finally reached parity with human drivers then.

Six boffins mostly hailing from Singapore-based universities have proven it's possible to interfere with autonomous vehicles by exploiting their reliance on camera-based computer vision and cause them to not recognize road signs.

How do non-camera based systems (lidar etc) get road sign information? I would expect with cameras...?


With cameras alone it seems pretty easy to trick them into dismissing road signs as nonexistent. I think lidar alone should at least tell you there is a sign, so either your camera is faulty or someone is putting up blank signs.

Lidar, as I was told to use it, would be used in conjunction with a database of waypoints like signs, so the trouble would be knowing whether the sign had been updated.


My car doesn't seem to have any trouble reading temporary construction speed limit signs and the like. And it has lidar. This seems to be a solved problem.

Are you saying it exclusively has lidar? If not then I don't understand your comment.

It also has cameras. I might have assumed too much.

I don't know how it determines the existence of a sign. It might be visual, lidar, or a combination.

I think I drew a faulty and unstated inference. Feel free to disregard.


The only difference between this and https://xkcd.com/1958/ is that this attack confuses cars from certain manufacturers but not human drivers, and I'm not sure that that distinction is important.

Is this attack actually in anybody's threat model?


It should be. We've seen real, in-the-wild attacks on self-driving systems: People putting cones on hoods.

There are people out there who don't want autonomous vehicles on the streets. Whatever their reasoning is isn't particularly relevant, because if someone wants to accomplish a given end, this has potential as an attack vector.


Trapping a car is very different from blocking important information while it's at speed.

A cone is not an example of a dangerous attack.


Failing defensively seems... good?

It's not like cones cause the cars to jam the accelerator.


We live in a time where kids will call a SWAT team to someone's house because they don't like their Twitch stream. I wouldn't underestimate what people will do for the lols, especially if there is a disconnect between their actions and the outcome.

Perhaps sign recognition should be lossy-tolerant and multifactor for redundancy: size, shape, orientation, GNSS position checked against GIS map data, color, text font, etc.
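For illustration, here is a rough sketch of that kind of multifactor check, where a camera detection is only trusted if independent cues (shape, color, mapped sign position) corroborate it. The dataclass fields, thresholds, and the two-vote rule are made-up assumptions, not anyone's production logic.

    # Rough sketch (hypothetical fields and thresholds) of cross-checking a
    # camera detection against independent cues before acting on it.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str           # e.g. "stop"
        confidence: float    # classifier score in [0, 1]
        shape: str           # "octagon", "circle", ...
        dominant_color: str  # "red", "yellow", ...

    @dataclass
    class MapSign:
        label: str
        distance_m: float    # distance from the current GNSS fix to the mapped sign

    def accept_detection(det: Detection, nearby: list) -> bool:
        """Require at least two independent cues to agree before trusting the sign."""
        votes = 0
        if det.confidence > 0.8:
            votes += 1       # the classifier itself is confident
        if det.label == "stop" and det.shape == "octagon" and det.dominant_color == "red":
            votes += 1       # geometry and color match what a stop sign should look like
        if any(s.label == det.label and s.distance_m < 30 for s in nearby):
            votes += 1       # map/GIS data says a matching sign is mapped right here
        return votes >= 2

    print(accept_detection(Detection("stop", 0.92, "octagon", "red"),
                           [MapSign("stop", 12.0)]))  # True

A striped or occluded sign would tend to fail the shape/color vote, so in this scheme the classifier output alone could no longer flip the decision.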

So change the cameras to global shutter?

Yeah, came to ask this. Aren't global shutters becoming a thing? One would assume they'll be commonplace in the not-all-that-distant future, given their advantages.

IRL traffic signs are retroreflective, which might give the headlights an edge over the malicious LEDs.

The future is hilarious.

- my car is haunted by visions

- my computer needs a pep talk to generate some work

- my other computer gets upset when I'm wearing sunglasses because it doesn't recognize me


- my car doesn't like vanilla ice cream (but others are fine) https://news.ycombinator.com/item?id=37584399

- My car won't move because it is too cold.

- My car is stuck because it cannot see the road lines through the snow.

- The power is out and my car cannot handle a broken traffic light.

- My car doesn't like tunnels because there is no cell coverage inside.

- The lollipop guy dropped his sign and my car treated it like the start of a drag race.

- All traffic ordered stopped after Fast and Furious 19 was accidentally used as training footage by the AI.


The future is now. I had CoPilot gaslighting me and changing the subject the other day because I wasn't happy with its work.


