Misleading headline. They spoofed GPS and caused the car to safely exit the highway at a different exit than planned. This is hardly an emergency. A human might easily make the same mistake if their GPS was spoofed.
I don't trust Autopilot myself, I've had too many phantom braking incidents. But GPS spoofing is not a good reason to criticize Autopilot.
It's not just the headline. The whole article is written with a FUD spin. The actual information about the results is presented in a very cryptic fashion, making it easy for news outlets to write "OMG Tesla hacked" articles and circulate their name and their product in the process.
You have to get pretty deep into the article to get to these two lines:
>Any product or service that uses the public GPS broadcast system can be affected by GPS spoofing... this research doesn’t demonstrate any Tesla-specific vulnerabilities
>The effect of GPS spoofing on Tesla cars is minimal and does not pose a safety risk
It would of course be better if Tesla vehicles weren't vulnerable to this type of attack, but the headline and much of the article have the potential to be very misleading to someone who doesn't completely understand what is going on here.
EDIT: I stand by the general point of this comment. However I did miss the context of the quotes I included here. See nirvdrum's comment and my reply.
I'm not sure those two lines really support your conclusion. They may, but they're both taken from Tesla's response to a previous test performed with the Model S. If you read further on, the researchers take issue with Tesla's response, particularly the part about it not posing a safety risk:
> The fact that spoofing causes unforeseen results like unintentional acceleration and deceleration, as we’ve shown, clearly demonstrates that GNSS spoofing raises a safety issue that must be addressed
You are right that I was mistakenly quoting a quote. I was skimming the article and was thrown off by the inconsistent formatting. Some of the multi-paragraph quotes are indented to identify them as block quotes. The sections I was quoting from just had a quotation mark at the start then three paragraphs later another quotation mark to end the quote.
Tesla vehicles can likely be made less vulnerable to this attack than even human drivers navigating via cell-phone GPS, since Tesla vehicles have inertial measurement units.
They are likely 'trusting' the GPS signal over the IMU currently, but that could be changed via software. Or at least alert the driver if the signals disagree.
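A minimal sketch (Python, with made-up thresholds and a hand-rolled haversine helper; not anything Tesla actually runs) of the kind of cross-check I mean:

    import math

    def haversine_m(a, b):
        """Great-circle distance in meters between two (lat, lon) fixes."""
        R = 6_371_000  # Earth radius in meters
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        h = (math.sin(dlat / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(h))

    def gps_jump_suspicious(prev_fix, new_fix, odo_distance_m, margin_m=50.0):
        """True if the GPS position jumped farther than the car could
        plausibly have travelled according to its own odometry/IMU."""
        return abs(haversine_m(prev_fix, new_fix) - odo_distance_m) > margin_m

If that check trips repeatedly, you fall back to dead reckoning and warn the driver instead of silently re-routing.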
How well does the camera and radar cope with a radar-transparent sheet with off-ramp lines painted on it? There's a Wile E. Coyote scenario of hanging that off the side of a mountain road.
Say FSD comes online or NOA is enabled for city streets. A driver without their attention on the road might not notice the mistake, and a dedicated attacker could navigate the car wherever they wanted and do malicious things (attack the car, harm the passengers, etc).
Sure, it's not a big problem now, but it could be a problem when the Tesla self-driving fleet comes online and it drives you to a back alleyway to get mugged and/or killed.
All that sounds pretty implausible. Sure, it's all technically possible, but it's a more complicated way of attacking a target.
For one, the attacker would have to know in advance (a) that you'll be actively navigating at the time, and (b) what your programmed route is (they have to know what to spoof).
Then, they'll need to get their radio gear together, wait for you to come near, enable the spoofing contraption, hope you don't notice that the car just made an error and pulled into a dark alley instead of 42nd Street, and ambush you.
Not gonna happen for a simple mugging. If you're a specific target, then sure, but your days are probably already numbered.
We have a whole ecosystem of ride sharing apps that IMO are even more ripe for nefarious activities. If your Uber or Lyft driver wanted to hurt you, they very well could. This is such a low likelihood risk that it's barely worth talking about.
> Yonatan Zur, Regulus Cyber CEO and Co-Founder, [...]: “We designed a product to protect vehicles from GNSS spoofing because we believe it is a real threat. We have ongoing research regarding this threat as we believe it’s an issue that needs solving.
So they're selling the cure to an ailment they insist is a very big problem.
I'm not saying GPS spoofing isn't a problem, just that this is hardly unbiased research.
No, basically it'll just slow down by 20-30 MPH abruptly and unpredictably. It tends to make the person behind you mad, and could certainly cause a rear-end collision if someone wasn't paying attention.
I also wouldn't trust it in any situation where lane lines are ambiguous, incompletely erased, or just unusual. I have about 50% success navigating freeway ramps and interchanges around here, and even when it is successful it drives slowly and erratically, again making other drivers mad. I've basically given up on that feature, and I also don't use it at speeds above 45 MPH.
I'm hopeful that the more powerful "HW3" Tesla neural net computer will allow them to basically redo the whole thing for "fully self driving". If they're just planning on running what they've got now but faster, then they are not going to be successful.
Finally, although I am bullish on the no lidar approach, I am skeptical that their current camera setup is good enough. They really need more coverage and more resolution, and probably more compute than even HW3 provides. But improving the cameras in newer cars would mean giving up on "fully self driving" for existing cars, and that would be disastrous for them given the promises Elon has made and the millions of dollars of "fully self driving" features already sold.
>giving up on "fully self driving" for existing cars, and that would be disastrous for them given the promises Elon has made and the millions of dollars of "fully self driving" features already sold.
That surprises me.
Has Tesla really promised "fully self driving" for existing cars?
Oh, most definitely. You can go to tesla.com right now and configure your very own Model 3 with the "Full Self-Driving Capability" option for $6,000, and they've been selling that option for years.
Personally I think it's a ridiculous idea that it could work well enough to be on the road before current cars are EOL, but even if you did think it was possible, then why not wait until it is an actual reality and buy it later?
I mean, it's not like Tesla will refuse to take your money to sell you the option later anyway.
For me, the most interesting part of this article was the casual mention of "the Black Sea spoofing attack of 2017". Apparently an unknown attacker pretty thoroughly messed up traffic in the area by making everyone's navigation systems think they were over a nearby airport. Given that we've had several major maritime collisions reach the news in the last year with no attacks involved at all, I could imagine dozens of deaths and potentially billions of dollars of damages if they'd done this in a major harbor or with hostile intent. Just spoof an oil tanker into the docks at Corpus Christi (edit: or mess with stationkeeping on an oil platform) and you'd have an impossible disaster on your hands.
GPS is only a supplement for maritime collision / allision avoidance. All you actually need are: radar, eyes, horn, flags, lights, VHF radio, and charts. Of course GPS is certainly helpful as a backup and provides some protection against human error.
The COLREGs (as well as local supplemental rules) are rather complex. All of the additional items I listed are implicitly required at least for certain vessels in certain conditions.
All you "need" are the stars and some way to track time (a basic implementation conveniently provided by the sun). Everything else just increases accuracy. ;)
Others are commenting that this could be considered terrorism, but if the government is doing it in the interest of defense, it won't be considered illegal.
The spoofing is probably so drones think it's a no-fly zone (since the GPS signals say, "you are now at one of the Moscow airports"), so there's less surveillance by amateurs. Are weaponized drone attacks a credible threat?
There was an unsuccessful attack using weaponized drones against Maduro (the Venezuelan president) in 2018 [1]. There are other, more recent stories from Saudi Arabia [2].
>Just spoof an oil tanker into the docks at Corpus Christi and you'd have an impossible disaster on your hands.
The crew isn't staring at their GPS screens when they're in a harbor. This also assumes nobody notices the ship that's on a stupid course, that it somehow doesn't get contacted, and that all the tug boats that are generally responsible for maneuvering those kinds of ships in those kinds of harbors do nothing to deal with what looks like an out-of-control ship.
I know it looks like a serious vulnerability on paper but you have to get a lot of things just right for a single system failure (GPS) to cause a serious accident, especially during a time when the crew is on their toes.
When you have multiple streams of information and one goes haywire, you are only OK if you can identify which information is bad and which is fine. I remember reading about an airplane crash in which the pilot had a long time to try to figure things out, but once it was clear an instrument was faulty, other instruments that were reporting correct but unexpected data were assumed to be faulty too, and eventually it was too late to recover. You can't compensate for bad information unless you have a correct model of the fault.
My memory of the Black Sea spoofing attack was a lot of speculation that the Russian government was behind it. Consequences that deter an individual (like life in jail) don't necessarily deter state actors.
This is, I think, the essential definition of a "state actor". The actor is operating on behalf of a state that will shield that actor from most if not all consequences of their actions.
If you caused enough damage and deaths you will get life in jail, maybe even the death penalty. But only after a pissed off jury declares you guilty.
The reason for Gitmo, terrorist claims, etc... is because there are complex international matters that make prosecution difficult. But if you killed people during a malevolent act, you are a criminal and will face justice. There is nothing complex about that, especially if you are a US citizen on US soil.
Didn't Obama shut down Gitmo years ago... just kidding, just one of many failed promises... repealing the "PATRIOT" Act being another important one; instead of repealing it, he extended and expanded it. Trump is not any different, but at least he didn't lie about it... sorry for telling the truth, HN Democrats...
I wouldn't imagine ships would use auto-pilot in a crowded area like this. The pilot would be at the helm and it would be pretty clear they were not where their GPS was indicating.
In open sea, however, I could see this being a problem for warships and territorial waters. But I think James Bond already had this one covered.
A lot of words here, but the video shows nothing besides the spoofed GPS location in the onboard screen. In particular, this is missing any kind of detail:
> Although the car was three miles away from the planned exit when the spoofing attack began, the car reacted as if the exit was just 500 feet away—abruptly slowing down, activating the right turn signal, and making a sharp turn off the main road. The driver was not prepared for this turn and by the time he regained manual control, it was too late to attempt to maneuver back to the highway.
So they forced the car to take the wrong exit? They stated just before that physical navigation/driving has no dependency on GPS. That’s a lot less dangerous than implied for “veering off road”, but the description is too fuzzy to know what really happened, and gives the article a suspicious tone.
Almost like the authors of the article have a conflict of interest as they are actively trying to sell a product to "mitigate" these attacks.
In practice, I'm fairly certain that such an attack being highly illegal (which it already is) would be more than enough deterrent in 99.99% of cases, just as it is with people throwing rocks onto cars from bridges or shining lasers into driver's eyes.
I wouldn't qualify the frequency of either of those things as statistically "alarming". In relation to how many people drive, those occurrences are exceedingly rare.
NL is super small, quite dense and the media here tend to report these things widely so you get a lot of copycat stupidity on top of the first idiot. People have died here because of this.
> Yonatan Zur, Regulus Cyber CEO and Co-Founder, emphasized this goes way beyond Regulus Cyber and Tesla: “We designed a product to protect vehicles from GNSS spoofing because we believe it is a real threat. .. By reporting and sharing incidents such as this we can ensure the autonomous technology will be safe and trustworthy.”
Regulus is reporting a host of vulnerabilities, and conveniently has already developed a product to "protect" against the problem (not necessarily solve it, no details are divulged in TFA).
Regulus could be a good actor, but there is no denying they have incentive to slant the research and findings in a way favorable to their bottom line - SALES.
I noticed a lot of fear statements in the article. Appealing to the emotion of fear is a classic, common sales tactic. Especially when it's difficult to precisely quantify the risks, likelihood, and full implications of the negative outcome.
Until the claims are replicated and verified by an independent third party, best to take this report with a grain of salt. Especially regarding the Regulus Product purporting to protect against GPS / GNSS spoofing.
---
An interesting and relevant moral question is:
How many people are killed every day by human drivers? Would it be better if all motor vehicles switched to autonomous and the death by automobile accident rate drastically lowered, but was still greater than zero?
Even with exploits like this GPS attack, I suspect the death rate will probably be substantially lower than it is with humans at the helm. As another thread points out, human drivers can be blinded with maliciously operated lasers, yet such events remain rare.
I don't know why you are downvoted, you make very valid points.
This "report" was a very weird read for me.
There is a lot of vague information; it uses language like "mission critical", "high impact", etc. It then mentions that the immediate driving decisions are not affected; the attack can affect high-level routing decisions, like making the car turn off the highway.
Then it mentions all other car manufacturers as vulnerable, and indeed conveniently mentions its product which will protect the car.
There are plenty of concerns one should have about self-driving cars, but this is blatant self-promotion and obviously a PR piece.
> How many people are killed every day by human drivers? Would it be better if all motor vehicles switched to autonomous and the death by automobile accident rate drastically lowered, but was still greater than zero?
The problem with this is the assumption that autonomous vehicles are good enough that switching everyone over would even be a net win. Accidents are already rare per mile, so evaluating safety is hard. And the makers of the autonomous vehicles are incentivized to fudge their data, as it's potentially a massive market.
That's never mind the fact that there's no clear quality control on what counts as autonomous. As an extreme example, a shitty Arduino app hooked up to servos and a single webcam could count and it almost certainly would not be safer than humans.
Most things that can be described as "shenanigans over RF" are illegal, but they're difficult to trace. Sure, if you broadcast from your house 24/7, the FCC will eventually send their direction-finding vans to look for you. But actual attacks can be localized, low power, and short duration.
GPS jammers are frequently used by drivers of GPS-tracked assets to help fudge their numbers. Such devices are very readily available online. However, their goose is cooked the moment they drive past an airport or military base. The airport quickly goes into ILS-only approaches, and local RF direction-finding agents are put into action. I'm surprised it doesn't occur more often than this - https://www.nextgov.com/defense/whats-brewin/2013/08/every-t...
It's come to my mind that someone could easily cause a small panic transmitting a fake EAS broadcast about some disaster or attack over FM stations during heavy traffic in a city. Never heard of anyone doing it though. In high school I did play a fake alert from youtube about incoming nukes through the aux-in of a portable radio and had someone believing it for a minute.
A good example of this is the 4 watt limit for CB radio transmission. I remember being a kid interested in ham and CB radio and being blown away seeing someone's "illegal" 300W amplifier. Really doubtful that the FCC opens many investigations over that.
CB Channel 6 is still an ongoing cultural phenomenon. Now that it's summer, it's back alive with 10kW transmitters all over the place. Today not a single channel was empty, and there was a lot of lower sideband and freeband operation going on above CH40.
I know a guy in my neighborhood who has an old beat-up GMC Suburban with a CB radio, a 5kW "12 pill" amplifier, 3 alternators, and 2 heavy-duty CB antennas installed, and he goes by #1 on the air. Guys like him still participate in CB radio shootouts like this: https://www.youtube.com/watch?v=EyAqzFXDMys
All the while the FCC doesn't bat an eye. This is because they typically don't cause that much harmful interference. Even though the transmitters are often overdriven and non-linear, typical spurious emissions are outside of critical communications bands and aren't the source of many RFI complaints which drive a majority of FCC investigations.
Another example is PMR, aka. walkie talkies. In many countries, you can use the PMR frequencies unlicensed only through radios with non-removable antennas limited to 0.5W of TX power. Doesn't stop people without licenses from buying and using handheld transceivers intended for HAMs and professionals (like the popular UV-5R with 5W of TX power and a removable antenna) on these frequencies.
As a fair comparison, has anyone tried spoofing a car GPS with a human driver? If my satnav tried to tell me to exit off the highway (falsely) in 0.2 miles, you’d probably see me “signalling unnecessarily” and changing lanes at incorrect locations. After a while, I might distrust the GPS and attempt to navigate by using road signs instead (and a driverless car could also choose to disable GPS input in case of significant disagreement) but in both cases, a motivated attacker could probably convince the driver to navigate somewhere erroneous.
As a matter of fact, I use Google Maps on my phone for navigation, and sometime within the last year or two, I was driving in a medium sized city and it inexplicably started showing my position as offset by several blocks, and gave correspondingly wrong directions.
I have no idea what that was; it might have been a random bug of Google's, but it was interesting and novel. After about 20 minutes it returned to normal.
> has anyone tried spoofing a car GPS with a human driver
In some of the more remote parts of America, GPS doesn't work very well, especially if you're near a military facility. (It's a pretty good indicator that you're near an "undeclared" military facility in the middle of nowhere.)
I can't count the number of times my GPS map has shown me driving through a lake, or flying over a mountain.
>I can't count the number of times my GPS map has shown me driving through a lake, or flying over a mountain.
That's odd because I can count the number of times it's happened to me. It's 0. This despite having actually been in the military. Civilian GPS receivers work fine these days, even on military installations.
>It's a pretty good indicator that you're near an "undeclared" military facility in the middle of nowhere.
What does that even mean? If it's really "undeclared" (I'm taking the scare quotes to imply "secret"), how would you actually be able to verify it?
Really? There are a lot of secret military facilities near you that leave tanks or fighter planes lying around or what? For every military base I've ever encountered in my career, the fences with "US Property NO TRESPASSING" signs and giant signs out front warning that you're approaching a military installation were the identifying features. In other words, they were all very much declared. If you took all of the signage away, I think it would be extremely hard to definitively identify most military installations as such from outside the fence.
Could you give a link to one of these facilities on Google Maps or something? I'm honestly really curious to see what you're talking about.
The so called "Area 51" in Nevada has, according to what I've read, a large buffer zone such that guards will intercept you considerably before you get to the fence and signs.
I imagine there are other sites like that, where the boundaries are a little ambiguous.
Area 51 is a bit of a unique case in how incredibly mythologized it is. I've never been, so I can't speak from firsthand experience, but my understanding is that it's not so much that the boundary is ambiguous as that the people guarding the facility are sometimes overzealous. In any case, those guards, like the signs, are declaring the presence of a military facility. While the Air Force might prefer that no one really knows what they do there, there's no doubt that it's a military facility. It's a perfect example of why the phrase "undeclared military facility" doesn't make any sense at all.
I would assume that GPS actually works much better in rural areas. Maybe a bit longer to take a fix if there's no good cell signal to use for AGPS, but once you get a fix, it's all satellite so being in a rural area means no large buildings around you to reflect the signal.
Of course, the quality of your maps is probably a lot worse.
Same thing though. Once you get a fix it's all satellite. Maybe if you're deep in a valley between mountains you'll have an issue as the mountains will block most of the sky, but as long as you can get signal from a few satellites you should be good. Satellites don't care how remote you are, only what around you is blocking the signal.
Can someone help me here? I'm scanning through this and... not really finding much meat. They used a deliberate radio attack to spoof GPS signals and change the car's idea about where it was in a fundamentally unrecoverable way, and it seems the only thing they got the car to do incorrectly was... take a wrong turn?
I mean, yeah. If you take away someone's maps and compass they might get lost. What am I missing?
Not just lost; you can make them drive exactly where you want them to go. As long as a human is behind the wheel this won't be terribly effective except as a disruptive tactic, but once there's nobody behind the wheel you could use this to hijack any autonomous car and make it drive where you want it to drive. This might be especially effective when using autonomous vehicles for long-range shipping, where you could force the truck to drive to a warehouse under your control where you can then steal its cargo.
To be fair, that's not at all what was demonstrated.
But... OK. If you can put a transmitter on top of a vehicle in motion you can take control and make it drive to your destination? How is that significantly more damaging or dangerous or "bad" than just grabbing it with a tow truck? Or hijacking it? Or just stealing the vehicle itself?
I mean... this just doesn't really seem like an indictment of autonomous driving to me. People were successfully stealing stuff out of horse drawn carriages (or hell, just stealing the horses) and we all seemed to survive just fine.
Seriously, this just doesn't seem like a doomsday kind of thing. Needs more spin.
Because this doesn't require putting a transmitter on top of the vehicle. As the article itself mentioned, this sort of GPS spoofing has already been demonstrated to be effective at range. For the purposes of Regulus's tests they didn't need to do it at range; mounting a transmitter on the top of the car was sufficient to demonstrate that an external spoof is possible.
> The spoofer can easily use an off the shelf high-gain directional antenna to get a range of up to a mile. If they add an amplifier, a range of a few miles is very much possible. It has already been proven that spoofing can even occur across dozens of miles, for example in the Black Sea spoofing attack in June 2017.
And literally nobody is saying this is a doomsday. What they are saying is that autonomous cars need to recognize this attack vector and take steps to combat it. The fact that stealing or hijacking vehicles has always been possible doesn't mean we need to deliberately turn a blind eye toward a new and potentially very effective attack against autonomous vehicles.
You can't drive ONE vehicle to steal it with a blanket attack, that's ridiculous. You'd have to know exactly where that vehicle is, with orientation and velocity, down to cm-scale precision. And while you could then steal that vehicle you'd disrupt all the activity around it, so I don't see it.
Like I said, needs more spin. This scenario doesn't really fly.
> Although the car was three miles away from the planned exit when the spoofing attack began, the car reacted as if the exit was just 500 feet away—abruptly slowing down, activating the right turn signal, and making a sharp turn off the main road.
Designer fail: why is the navigation system explicitly trusting the GPS signal while the car has multiple sources suitable for dead reckoning(1)? ABS sensors alone (distance) would tell you something was wrong (being teleported 2.5 miles forward); then you have accelerometers, a compass, and cameras supposedly able to recognize side roads. Tesla needs to work on their Kalman filter implementation.
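For illustration, here's a toy 1-D version of the innovation gating a Kalman filter gives you nearly for free (all the noise figures are invented):

    # Dead-reckon with wheel odometry; reject GPS updates that are
    # statistically impossible given the prediction.
    class GatedPositionFilter:
        def __init__(self, pos=0.0, var=1.0,
                     process_var=0.5, gps_var=25.0, gate_sigma=4.0):
            self.pos, self.var = pos, var
            self.process_var = process_var  # odometry noise per step
            self.gps_var = gps_var          # expected GPS noise
            self.gate_sigma = gate_sigma    # reject beyond N sigma

        def predict(self, odo_delta_m):
            self.pos += odo_delta_m         # dead reckoning
            self.var += self.process_var

        def update_gps(self, gps_pos_m):
            innovation = gps_pos_m - self.pos
            innovation_var = self.var + self.gps_var
            if innovation ** 2 > self.gate_sigma ** 2 * innovation_var:
                return False                # implausible jump: ignore/alert
            k = self.var / innovation_var   # Kalman gain
            self.pos += k * innovation
            self.var *= (1 - k)
            return True

A 2.5-mile teleport would fail that gate by orders of magnitude.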
While I agree with you, you then need to worry about the opposite problem:
Car drives onto ferry, GPS signal is lost, car drives off the ferry, dead reckoning figures the car is lost in the water, GPS is regained placing the car very much somewhere else.
Do you trust dead reckoning or the GPS? In this case, you trust the GPS, one assumes, since dead reckoning places you in the middle of a body of water.
And this isn't a made up scenario. My car fights this every weekday. It's always entertaining to watch it figure this out as the GPS recovers badly and it starts to place me inside buildings in an area with poor GPS coverage.
I think 'rasz is suggesting that a well-designed higher layer should cross-check its various inputs before doing stupid things. If GPS says we moved but tire rotation says we didn't, well, maybe don't blindly trust the GPS?
That's not even all that high-level, as per the Etak patent:
"Based on the previous position of the object, the GPS derived position, the velocity, the DOP(dilution of precision) and the continuity of satellites for which data is received, the system determines whether the GPS data is reliable."
The car made a sudden decision based on spoofed data from a single sensor despite abundant alternative sources. The article also mentions being able to manipulate the victim's suspension. Again, you have tons of sensors reading that the road is glass-smooth and straight, but you listen to one unauthenticated source telling you it's a bumpy dirt back alley - who are you going to believe?
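Roughly what that patent language describes, as a toy Python check (1-D positions in meters for simplicity; the field names and limits are invented, and a real receiver would work in 3-D with proper covariances):

    def gps_fix_reliable(prev, new, dt_s, max_dop=5.0,
                         max_speed_mps=60.0, min_common_sats=4):
        if new.dop > max_dop:
            return False                       # poor satellite geometry
        implied_speed = abs(new.pos_m - prev.pos_m) / dt_s
        if implied_speed > max_speed_mps:
            return False                       # fix implies teleportation
        common = set(prev.sat_ids) & set(new.sat_ids)
        return len(common) >= min_common_sats  # continuity of satellites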
I guess I'm an interesting blend of geek and luddite. I enjoy tinkering with computers immensely, but prefer to leave them in their place. I don't like carrying a phone, don't use social networks, don't play video-games, etc. As part of this perspective, I have an intense distrust of self-driving cars. There is a great deal of difference between digitizing analog devices (using new devices to fill old roles), and creating new roles for tech to fill. I'm just a little too wary when it comes to driving. I can't stand the idea of a car grabbing control from me and braking, though some people I know drive such cars and regard it as a feature.
I don't know if this comes because of a technical background or not. As I have learned more about software development, assembly, cybersecurity, etc. over the years, I have become more rather than less accepting of things like self-driving cars.
This applies to IoT as well. I will never, ever, ever have one of those glorified wiretaps they call "voice assistants". What do they do, anyway? I don't really care about being able to tell a speaker to order more laundry detergent. Nor do I want any of these goofy "smart" appliances - why do I care about my toaster sending me push notifications when my bread is done? Or mining bitcoin?
I guess I just felt like ranting. I hope one of you can convince me such a cynical outlook is wrong, as I get a bit sick of viewing new stuff with negativity.
Don't see anywhere in the article that this was _responsibly_ disclosed to automakers. They're literally publishing how to kill people on the road for the sake of making a buck (the company seems to be peddling spoof-resistant GNSS).
There's nothing to disclose. Tesla and everyone else knows their system is vulnerable, and knows that the solution is that the driver of the car is responsible for driving the car, which prevents this attack from succeeding. Also it's not any more dangerous than Google Maps telling you to make a wrong turn or a map being out of date. Navigation is not a safety feature.
In the article it says that it makes the car drive recklessly, brake-checking in the middle of the highway and zigzagging across lanes. I'm not sure how you can equate that to Google Maps showing you an out-of-date map.
Also, even if this isn't new information, that doesn't excuse them from writing an article with dangerous information, I don't recall getting bomb making instructions in the New York Times...
This isn't "killing" anyone, the car simply pulled off onto a small piece of paved ground adjacent the main road instead of at the intended exit. Despite the fact that the title does its best to make you THINK the car just drove off the road into a ditch.
More importantly, responsible disclosure would be necessary, if it weren't for the fact that this attack would work with pretty much any GPS tech.
No killing, ideally, but getting lost maybe. The car should use its optics to stay on SOME road no matter what; in this case maybe it didn't stay on the RIGHT one.
I don't get these fears about spoofing autonomous cars to crash. It's already incredibly easy to spoof a human driver off the highway -- just toss a cadaver or mannequin into the road from a bridge. It's dangerous, illegal, and criminal.
Spoofing autonomous cars with willful intent to injure the humans inside should be treated no differently.
A simpler spoof is to paint new lane markings that just turn off the road and into a field or a bridge pillar (And I genuinely think handling that should be part of safety testing for all cars with lane assist tech. Not because it’s likely but because it’s the worst case outcome of bad markings).
GPS can be wrong whether spoofed or not, and it should only make the car safely take the wrong exit in the worst case. It must be able to tell if the exit looks right and could reasonably represent the desired exit.
AP definitely doesn't handle construction zones well. That's one of the main differences from human drivers: we look at more than just the road; we can see a construction zone a half mile ahead and read the signs that warn us as we get closer. We know the lanes are about to get messed up and we are ready for it. My wife's Tesla is surprised every time the lanes do something funny. We drove through a construction zone on the highway last weekend where they had shifted the road over and left the old lane lines on the road in addition to the new lines they had just painted, basically turning a 3-lane highway into 6 mini-lanes. The Tesla lost its mind trying to figure out what to do and I had to take over. To a human it was pretty clear, as the old lane lines were faded and the new ones were bright white. As humans, the lanes are not our only point of reference; the lanes just keep the cars organized on the road. Until we have an autonomous system that can look beyond just the road and lanes and recognize the environment around the road, I'm not sure we'll ever see fully autonomous driving.
A few years ago, I was driving home late at night on a wide interstate highway with a nearly-pristine blanket of snow that fully obscured all the lane lines. The plows had not reached that section of road yet and no one was ahead of me. So I made my own path. It was weird, but obviously safe to any human. I wonder what fully autonomous cars would do in those conditions.
> Although the car was three miles away from the planned exit when the spoofing attack began, the car reacted as if the exit was just 500 feet away—abruptly slowing down, activating the right turn signal, and making a sharp turn off the main road.
This could be used in an assassination. To prevent this, motorcades should use human drivers until self driving cars become smart enough to realize when they're being spoofed.
This is sarcasm, right? "Self-driving" cars can't handle normal road conditions in most of the world, the idea that anyone would replace drivers skilled in offensive/defensive driving under urban combat conditions with an autopilot system is patently laughable.
The only alternatives I can think of are using the military's encrypted GPS, or Tesla launching their own encrypted GPS satellite network... neither of which seems like a realistic option.
If you have a way to navigate cars without GPS, you should probably contact Tesla so you can make millions of dollars.
I would hazard a guess that, to provide some protection, the system would not permit real-time updates to the existing road structure, and using inertial guidance would rule out any change to its location that would be impossible to achieve within a set time.
So say you tell the car an exit it has been confirming for some time is three miles away; any change in that distance beyond what the car knows it is doing should be enough to flag it and require intervention.
Granted, a lot of the article is one-sided, but the cars need to be smart enough to know where they should be relative to where the satellites place them, especially when that information changes too quickly to be valid.
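As a sketch, the check could be as simple as this (the slack value is made up): the remaining distance to a confirmed exit can only shrink about as fast as the car actually travels, so a sudden 2.5-mile jump trips the flag.

    def exit_distance_plausible(prev_remaining_m, new_remaining_m,
                                travelled_m, slack_m=100.0):
        # The exit can't get closer faster than we drive toward it.
        expected = prev_remaining_m - travelled_m
        return abs(new_remaining_m - expected) <= slack_m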
The problem of self-driving is fascinating, and it just goes to show there is always someone or something that will throw a wrench into the process.
> Even though this research doesn’t demonstrate any Tesla-specific vulnerabilities, that hasn’t stopped us from taking steps to introduce safeguards in the future which we believe will make our products more secure against these kinds of attacks.
What would the solution to this be? Signing GPS signals?
Existing laws? Surely intentionally hacking a car's navigation system is a crime in most (all?) states. The analog would be that I can't go around laying nails and thumbtacks on the highway.
That's like saying that "Gun Free" zones are the solution to preventing shootings. If the law is practically unenforceable, you need a different solution.
Onboard computers can have compasses and a pretty good idea of how fast they're going. You could just monitor for any weird jumps or incongruences (e.g. there is a road here, but it's at a different angle than the GPS signal indicates; the path recently followed doesn't match up with the map based on the current GPS reading, ...), assuming you have good enough accuracy and can track it over time.
Seems like the spoofers could work around that and still do damage if they wanted to, by gradually shifting away from reality. The open-loop position estimate will never be perfect, so some margin would have to be designed into such a check, and that could be exploited. It would still be better than nothing though.
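To make that concrete, here's a sketch of a cumulative check (numbers invented) that also shows the hole: the alarm budget has to grow with odometry error, so a spoofer drifting more slowly than that budget still slips underneath.

    class DriftMonitor:
        def __init__(self, odo_error_rate=0.02, base_margin_m=30.0):
            self.odo_error_rate = odo_error_rate  # ~2% odometry error
            self.base_margin_m = base_margin_m
            self.dr_pos_m = 0.0      # dead-reckoned distance along route
            self.travelled_m = 0.0

        def gps_consistent(self, odo_delta_m, gps_pos_m):
            self.dr_pos_m += odo_delta_m
            self.travelled_m += abs(odo_delta_m)
            # Allowed mismatch grows as dead reckoning itself drifts.
            budget = (self.odo_error_rate * self.travelled_m
                      + self.base_margin_m)
            return abs(gps_pos_m - self.dr_pos_m) <= budget  # False = alarm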
The display in pretty much all GPS units uses a compass to orient the little car/human thingie. I've never seen my phone map display be disoriented by accelerating in an EV.
YMMV, but I've done IMU measurements in an EV and the compass course changed like 90 degrees from zero to full throttle. It was a low-voltage 50V EV prototype though (higher currents).
To be fair I'm not too sure how sensitive compasses are in high voltage EVs.
A couple of years ago there was a headline about Israeli research into keeping track of movements without GPS, like in a cave. I wonder if these systems could keep track of the last mile or minute and compare various sources.
You can shoot a laser into the eyes of a human driver and cause them to veer off the road too. But that's illegal and immoral, so most people don't do it.
All of these “attacks on self driving cars” are just different illegal things that a single person can do against a single car.
> All of these “attacks on self driving cars” are just different illegal things that a single person can do against a single car.
A potential difference is that regular attacks on cars don't scale, whereas with some proposed methods against self-driving cars, a single person can attack hundreds of cars just as easily as a single one.
A single laser can be pointed at many eyes. If anything a laser has a much bigger scale than this setup, since most cars are human driven and you would have a lot more targets in the same span of time.
There's nothing you can do with a gun that you can't do with a knife, so what's the big deal?
There's nothing you can do with an assault weapon that you can't do with a pistol, so what's the big deal?
You can make improvised explosives with diesel fuel and fertilizer, so what's the big deal if WalMart wants to sell grenades and mortar rounds?
---
Differences in degree of impact and degree of access all matter. In the case of a laser versus an exploit of autonomous navigation, one of the biggest differences is that with the laser, the moment it happens, you are immediately aware there is a crisis.
Whereas people use their navigation every day and build more and more trust in it. The notion that it might be malfunctioning and that there is a life-threatening crisis may not occur to drivers until it is too late.
That alone makes for a vast difference between the two scenarios.
I don't see the difference in impact of a laser vs. this setup. If anything, the laser is far more effective, because most cars still have human drivers, so you'll have many more targets in the same span of time.
I think this is more of a difference in recognizing an emergency and dealing with it. If I’m driving and suddenly blinded, I am going to try to do something, immediately.
If my car is on “autopilot” and it seems to be getting ready to leave the highway in the vicinity of an exit, it might take me a while to figure out that the car is not doing what I expect it to do.
Nobody expects a laser, and nobody expects their car to drive off the highway. But if you trust your car, your reaction may be delayed by the cognitive dissonance between what you observe and your belief that the car will do what it has always done under what you believe to be the exact same circumstances.
"But if you trust your car, your reaction may be delayed by the cognitive dissonance between what you observe and your belief that the car will do what it has always done under what you believe to be the exact same circumstances."
The other thing that causes confusion is that once you recognize something is wrong, you now know that you can trust some parts of the system and not others, but you don't know which. Even in aircraft, which often have minutes between recognizing a problem and a crash, people often cannot figure out what the problem is, because they need to have a theory of what is wrong before they can decide what to trust, to decide what is wrong. It's a catch-22.
But we are probably all in violent agreement, in that there are already ways to crash an automobile if it is on the highway and an attacker can get close to it.
you can google "high powered handheld laser for sale" as fast as you could type that and find "50000mW burning laser" for $240 and 500mw lasers for $39 among the top listings...
Yes, you can find cheap high-power lasers that will blind people and ignite things easily available and cheap.
Related recent research that handles decoy waypoints [1][2]. They use imitation learning for path planning directly from sensor data, instead of just using machine learning for perception, tracking, prediction and doing MPC/LQR for the planner. This allows inferring weird paths, perhaps, like "run into this narrow pit stop" :)
There should be a lot of concern over stuff like this in the future when technology is expected to lead the way. Who pays the cost when an accident caused by spoofing etc. happens? Who is in charge of checking the firmware/software to confirm what happened? What does the owner of the car do while there is an investigation (lease a car with insurance money)? How would the police find the perpetrator? What if the perpetrator is out of their jurisdiction?
The FCC hands out a lot of violations in Israel, does it? Regardless, I've never heard of Regulus before but they seem to be a legitimate company, not just a bunch of people winging it. Presumably they know better than we do whether they're violating any laws or regulations.
Could you spoof the car to exit and continue thinking you were still on a freeway? A car being spoofed into thinking it was still on a freeway, while driving on city streets, might not prioritize certain rules when it approaches an intersection.
The first component is the autodrive which uses only visual sensors to stay in the lane, stay away from the car in front of you, and maintain speed.
The second component seems to be the routing, which enables the first component to use turn signals and route with maps. This is the piece that uses GPS.
Since their attack only affects the car's GPS, it does not affect the car's driving style.
I wonder whether, as self-driving technology becomes more prevalent and more of these "hacks" become dangerous, federal laws will make spoofing cars and using objects to influence an autopilot system major felonies.
In this case the hack isn't dangerous but I imagine spoofing a self-driving car into driving off a cliff would carry the same penalty as painting over lane lines to get a human to do it in the dark. Which is to say it's clearly murder if the vehicle occupant dies.
Is there maybe a future possibility of an eventual replacement for GPS being cryptographically signed in some way, so that we can verify it came from the United States government or what-have-you?
It can be done, but is not as trivial as you might think. To start with, the actual bitrate of the data is quite low, and adding more bytes to send is not easy.
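As a toy illustration of the idea only (this is not how any GNSS signs data today; real proposals like Galileo's OSNMA use delayed-key broadcast schemes precisely because of that bitrate problem):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Ground segment signs each navigation frame (contents made up)...
    signer = Ed25519PrivateKey.generate()
    frame = b"week=2062 tow=345600 sat=G12 eph=..."
    signature = signer.sign(frame)

    # ...and a receiver holding the public key rejects spoofed frames.
    verifier = signer.public_key()
    try:
        verifier.verify(signature, frame)
        print("frame authentic")
    except InvalidSignature:
        print("spoofed or corrupted frame: ignore")

Even this toy shows the cost: a single Ed25519 signature is 64 bytes, which at GPS's ~50 bit/s nav data rate is roughly ten seconds of the entire channel. And signing alone doesn't stop replay attacks that simply re-broadcast authentic signals with a delay.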
Is it worthwhile for big firms like Tesla that produce these critical software and hardware systems to consider hiring independent groups to try to exploit those systems for previously unseen bugs before they go into production? Outside observers never seem to hurt with this sort of thing.
> Tesla emphasizes that “in both of these scenarios until truly driverless cars are validated and approved by regulators, drivers are responsible for and must remain ready to take manual control of their car at all times.”
So, Tesla/Elon are officially thankful for regulation preventing them from screwing up worse than they already are?