I don't trust Autopilot myself; I've had too many phantom braking incidents. But GPS spoofing is not a good reason to criticize Autopilot.
This is exactly the reason why I despise PR.
>Any product or service that uses the public GPS broadcast system can be affected by GPS spoofing... this research doesn’t demonstrate any Tesla-specific vulnerabilities
>The effect of GPS spoofing on Tesla cars is minimal and does not pose a safety risk
It would of course be better if Tesla vehicles weren't vulnerable to this type of attack, but the headline and much of the article have the potential to be very misleading to someone who doesn't completely understand what is going on here.
EDIT: I stand by the general point of this comment. However I did miss the context of the quotes I included here. See nirvdrum's comment and my reply.
> The fact that spoofing causes unforeseen results like unintentional acceleration and deceleration, as we’ve shown, clearly demonstrates that GNSS spoofing raises a safety issue that must be addressed
They are likely 'trusting' GPS signal over IMU currently, but that could be changed via software. Or at least alert if the signals disagree.
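A minimal sketch of what that cross-check could look like, assuming a simple 2-D position model (all names and thresholds here are my own invention, not anything from Tesla's stack): propagate the last trusted position by odometry/IMU dead reckoning, and reject any GPS fix that disagrees by more than the dead-reckoning error budget.

    import math

    def dead_reckon(pos, heading_rad, distance_m):
        """Propagate an (x, y) position in meters along a heading by odometry distance."""
        return (pos[0] + distance_m * math.cos(heading_rad),
                pos[1] + distance_m * math.sin(heading_rad))

    def gps_plausible(gps_pos, dr_pos, distance_m, drift_rate=0.02, floor_m=15.0):
        """Accept a GPS fix only if it lies within the dead-reckoning error budget.

        drift_rate: assumed odometry drift, here 2% of distance travelled.
        floor_m:    minimum gate, covering ordinary GPS noise.
        """
        budget = max(floor_m, drift_rate * distance_m)
        return math.dist(gps_pos, dr_pos) <= budget

    # Example: 100 m travelled since the last trusted fix, then a fix arrives
    # that "teleports" the car roughly 2.5 miles (~4 km) down the road.
    dr_pos = dead_reckon((0.0, 0.0), heading_rad=0.0, distance_m=100.0)
    if not gps_plausible((4100.0, 0.0), dr_pos, distance_m=100.0):
        print("GPS/IMU disagreement -- alert the driver, coast on dead reckoning")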
Sure, it's not a big problem now, but it could be a problem when the Tesla self-driving fleet comes online and it drives you to a back alleyway to get mugged and/or killed.
For one, the attacker would have to know in advance (a) that you'll be actively navigating at the time, and (b) what your programmed route is (they have to know what to spoof).
Then, they'll need to get their radio gear together, wait for you to come near, enable the spoofing contraption, hope you don't notice that the car just made an error and pulled into a dark alley instead of 42nd Street, and ambush you.
Not gonna happen for a simple mugging. If you're a specific target, then sure, but your days are probably already numbered.
> Yonatan Zur, Regulus Cyber CEO and Co-Founder, [...]: “We designed a product to protect vehicles from GNSS spoofing because we believe it is a real threat. We have ongoing research regarding this threat as we believe it’s an issue that needs solving.”
So they're selling the cure to an ailment they insist is a very big problem.
I'm not saying GPS spoofing isn't a problem, just that this is hardly unbiased research.
Brave new world coming…
What happens when that happens? Did you come to a complete stop?
I also wouldn't trust it in any situation where lane lines are ambiguous, incompletely erased, or just unusual. I have about 50% success navigating freeway ramps and interchanges around here, and even when it is successful it drives slowly and erratically, again making other drivers mad. I've basically given up on that feature, and I also don't use it at speeds above 45 MPH.
I'm hopeful that the more powerful "HW3" Tesla neural net computer will allow them to basically redo the whole thing for "fully self driving". If they're just planning on running what they've got now but faster, then they are not going to be successful.
Finally, although I am bullish on the no lidar approach, I am skeptical that their current camera setup is good enough. They really need more coverage and more resolution, and probably more compute than even HW3 provides. But improving the cameras in newer cars would mean giving up on "fully self driving" for existing cars, and that would be disastrous for them given the promises Elon has made and the millions of dollars of "fully self driving" features already sold.
That surprises me.
Has Tesla really promised "fully self driving" for existing cars?
Seems crazy that anyone would pay for that now.
Personally I think it's a ridiculous idea that it could work well enough to be on the road before current cars are EOL, but even if you did think it was possible, then why not wait until it is an actual reality and buy it later?
I mean, it's not like Tesla will refuse to take your money to sell you the option later anyway.
Edit: Would of course not be a spoofing attack.
I think the military can go even further and verify that the GPS data is correct.
Charts, sextant, and regular visual observations are all strictly necessary.
Some kind of semaphore would probably be handy too.
Of course the SR-71 probably never had cloud cover problems. :)
edit: I think this one: https://www.youtube.com/watch?v=tj9UwKQKE3A
No. You need an actual clock and charts. The sun and stars are completely useless for determining longitude without a clock.
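To put numbers on it: the Earth rotates through 15° of longitude per hour, so every minute of clock error shifts your computed longitude by 0.25°, which is about 15 nautical miles at the equator. That is exactly why Harrison's marine chronometer was such a big deal.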
The Moon on the other hand is marginally useful because its phases can be looked up. But its resolution is poor.
This is not completely true. The stars (and planets) provide for a very nice clock when viewed with sufficient resolution.
Others are commenting that this could be considered terrorism, but if the government is doing it in the interest of defense then it won't be considered illegal.
The crew isn't staring at their GPS screens when they're in a harbor. This also assumes nobody notices the ship that's on a stupid course, that they somehow don't get contacted, and that the tug boats that are generally responsible for maneuvering those kinds of ships in those kinds of harbors don't do anything to deal with what looks like an out-of-control ship.
I know it looks like a serious vulnerability on paper but you have to get a lot of things just right for a single system failure (GPS) to cause a serious accident, especially during a time when the crew is on their toes.
I'd imagine you'd be labeled a terrorist and no, you won't even get a trial. If you like the idea of life in jail, that's one way for sure.
If you caused enough damage and deaths you will get life in jail, maybe even the death penalty. But only after a pissed-off jury declares you guilty.
The reason for Gitmo, terrorist claims, etc. is that there are complex international matters that make prosecution difficult. But if you killed people during a malevolent act, you are a criminal and will face justice. There is nothing complex about that, especially if you are a US citizen on US soil.
In open sea, however, I could see this being a problem for warships and territorial waters. But I think James Bond already had this one covered.
It is actually incredibly difficult to become a pilot; they must be able to recall relevant charts and regulations from memory.
Source: I come from a long line of seamen
> Although the car was three miles away from the planned exit when the spoofing attack began, the car reacted as if the exit was just 500 feet away—abruptly slowing down, activating the right turn signal, and making a sharp turn off the main road. The driver was not prepared for this turn and by the time he regained manual control, it was too late to attempt to maneuver back to the highway.
So they forced the car to take the wrong exit? They stated just before that physical navigation/driving has no dependency on GPS. That's a lot less dangerous than the implied “veering off road”, but the description is too fuzzy to know what really happened, and it gives the article a suspicious tone.
In practice, I'm fairly certain that such an attack being highly illegal (which it already is) would be more than enough deterrent in 99.99% of cases, just as it is with people throwing rocks onto cars from bridges or shining lasers into driver's eyes.
Both of which happen with alarming regularity.
I've driven hundreds of thousands of miles all over the country and never experienced either.
Anecdata is a funny thing :)
NL is super small, quite dense and the media here tend to report these things widely so you get a lot of copycat stupidity on top of the first idiot. People have died here because of this.
> “We designed a product to protect vehicles from GNSS spoofing because we believe it is a real threat. By reporting and sharing incidents such as this we can ensure the autonomous technology will be safe.”
Regulus could be a good actor, but there is no denying they have an incentive to slant the research and findings in a way favorable to their bottom line: sales.
I noticed a lot of fear statements in the article. Appealing to the emotion of fear is a classic, common sales tactic. Especially when it's difficult to precisely quantify the risks, likelihood, and full implications of the negative outcome.
Until the claims are replicated and verified by an independent third party, best to take this report with a grain of salt. Especially regarding the Regulus Product purporting to protect against GPS / GNSS spoofing.
An interesting and relevant moral question is:
How many people are killed every day by human drivers? Would it be better if all motor vehicles switched to autonomous and the death by automobile accident rate drastically lowered, but was still greater than zero?
Even with exploits like this GPS attack, I suspect the death rate will probably be substantially lower than it is with humans at the helm. As another thread points out, human drivers can be blinded with maliciously operated lasers, yet such events remain rare.
This "report" was a very weird read for me.
There is a lot of vague information; it uses language like "mission critical", "high impact", and so on. It then mentions that the immediate driving decisions are not affected; the attack can only affect high-level routing decisions, like making the car turn off the highway.
Then it mentions that all other car manufacturers are vulnerable as well, and conveniently mentions its product, which will protect the car.
There are plenty of concerns one should have about self-driving cars, but this is blatant self-promotion and obviously a PR piece.
The problem with this is the assumption that autonomous vehicles are good enough that switching everyone over would even be a net win. Accidents are already rare per mile, so evaluating safety is hard. And the makers of the autonomous vehicles are incentivized to fudge their data, as it's potentially a massive market.
Never mind the fact that there's no clear quality control on what counts as autonomous. As an extreme example, a shitty Arduino app hooked up to servos and a single webcam could count, and it almost certainly would not be safer than humans.
Yes, GNSS spoofing is an industry-wide vulnerability, and not just to cars. If I'm not mistaken, it's highly illegal, at least in the U.S.?
> The program has become famous for supposedly tricking some of its listeners into believing that a Martian invasion was actually taking place.
I know a guy in my neighborhood who has an old beat-up GMC Suburban, with a CB radio, a 5 kW "12 pill" amplifier, 3 alternators, and 2 heavy-duty CB antennas installed, and he goes by #1 on the air. Guys like him still participate in CB radio shootouts like this: https://www.youtube.com/watch?v=EyAqzFXDMys
All the while the FCC doesn't bat an eye. This is because they typically don't cause that much harmful interference. Even though the transmitters are often overdriven and non-linear, typical spurious emissions are outside of critical communications bands and aren't the source of many RFI complaints which drive a majority of FCC investigations.
Is there any indication that they intend to use it for vehicle location / navigation? I see nothing in your link about that.
> Yes, GNSS spoofing is an industry-wide vulnerability, and not just to cars. If I'm not mistaken, it's highly illegal, at least in the U.S.?
Whew, for a second I was worried. At least we know we're safe since terrorists never break the law!
This is not an autonomous vehicle problem; it is a human nature problem.
And no, I can't speak to all of Starlink's use cases.
I have no idea what that was; it might have been a random bug of Google's, but it was interesting and novel. After about 20 minutes it returned to normal.
It's a well known issue in big cities with narrow streets (NYC for example)
In some of the more remote parts of America, GPS doesn't work very well, especially if you're near a military facility. (It's a pretty good indicator that you're near an "undeclared" military facility in the middle of nowhere.)
I can't count the number of times my GPS map has shown me driving through a lake, or flying over a mountain.
That's odd because I can count the number of times it's happened to me. It's 0. This despite having actually been in the military. Civilian GPS receivers work fine these days, even on military installations.
>It's a pretty good indicator that you're near an "undeclared" military facility in the middle of nowhere.
What does that even mean? If it's really "undeclared" (I'm taking the scare quotes to imply "secret"), how would you actually be able to verify it?
Could you give a link to one of these facilities on Google Maps or something? I'm honestly really curious to see what you're talking about.
I imagine there are other sites like that, where the boundaries are a little ambiguous.
Of course, the quality of your maps is probably a lot worse.
The places I'm talking about are "remote," though. Not rural. Places the size of European nations, but with only a few hundred people.
I mean, yeah. If you take away someone's maps and compass they might get lost. What am I missing?
But... OK. If you can put a transmitter on top of a vehicle in motion you can take control and make it drive to your destination? How is that significantly more damaging or dangerous or "bad" than just grabbing it with a tow truck? Or hijacking it? Or just stealing the vehicle itself?
I mean... this just doesn't really seem like an indictment of autonomous driving to me. People were successfully stealing stuff out of horse drawn carriages (or hell, just stealing the horses) and we all seemed to survive just fine.
Seriously, this just doesn't seem like a doomsday kind of thing. Needs more spin.
> The spoofer can easily use an off the shelf high-gain directional antenna to get a range of up to a mile. If they add an amplifier, a range of a few miles is very much possible. It has already been proven that spoofing can even occur across dozens of miles, for example in the Black Sea spoofing attack in June 2017.
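Those ranges pass a back-of-the-envelope free-space link budget (a sketch; the 1 W transmit power and antenna gain below are my assumptions, not figures from the article). Genuine GPS L1 arrives at the surface at roughly -130 dBm, so a spoofer only has to out-shout an extremely weak signal:

    import math

    def fspl_db(distance_m, freq_hz=1575.42e6):
        """Free-space path loss in dB at the GPS L1 frequency."""
        c = 3e8  # speed of light, m/s
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    tx_dbm = 30.0        # assumed 1 W spoofer
    ant_gain_dbi = 15.0  # assumed off-the-shelf directional antenna
    gps_l1_dbm = -130.0  # approximate genuine GPS L1 power at the receiver

    for d in (100, 1600, 5000):  # meters: 100 m, ~1 mile, ~3 miles
        rx = tx_dbm + ant_gain_dbi - fspl_db(d)
        print(f"{d:>5} m: {rx:6.1f} dBm received "
              f"({rx - gps_l1_dbm:+.0f} dB over genuine GPS)")

Even at three miles the spoofed signal arrives tens of dB hotter than the real constellation, which is why receivers lock onto it so readily.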
And literally nobody is saying this is a doomsday. What they are saying is that autonomous cars need to recognize this attack vector and take steps to combat it. The fact that stealing or hijacking vehicles has always been possible doesn't mean we need to deliberately turn a blind eye toward a new and potentially very effective attack against autonomous vehicles.
Like I said, needs more spin. This scenario doesn't really fly.
Design fail: why is the navigation system explicitly trusting the GPS signal when the car has multiple sources suitable for dead reckoning (1)? ABS sensors alone (distance) would tell you something is wrong (being teleported 2.5 miles forward); then you have accelerometers, a compass, and cameras supposedly able to recognize side roads. Tesla needs to work on their Kalman filter implementation.
1) Etak, https://patents.justia.com/patent/5948043 (its dead-reckoning patents have expired by now).
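In Kalman-filter terms, a 2.5-mile teleport should fail the innovation (measurement-residual) gate. A toy 1-D illustration of the idea, with made-up noise values and nothing to do with Tesla's actual filter:

    def kalman_update(x, p, z, r=25.0, q=4.0, gate_sigma=5.0):
        """One update of a toy 1-D position filter with an innovation gate.

        x, p: state estimate and variance; z: GPS position measurement.
        r: GPS measurement variance (m^2); q: process noise (m^2). Both assumed.
        Returns (x, p, accepted)."""
        p = p + q                        # predict: position held by dead reckoning
        innovation = z - x
        s = p + r                        # innovation variance
        if innovation ** 2 > (gate_sigma ** 2) * s:
            return x, p, False           # implausible fix: reject, keep coasting
        k = p / s                        # Kalman gain
        return x + k * innovation, (1.0 - k) * p, True

    x, p = 0.0, 10.0
    for z in (1.2, -0.8, 4023.4):        # the last "fix" is a ~2.5 mile teleport
        x, p, ok = kalman_update(x, p, z)
        print(f"z = {z:8.1f}  ->  x = {x:7.2f}  accepted = {ok}")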
Car drives onto ferry, GPS signal is lost, car drives off the ferry, dead reckoning figures the car is lost in the water, GPS is regained placing the car very much somewhere else.
Do you trust dead reckoning or the GPS? In this case, you trust the GPS, one assumes, since dead reckoning places you in the middle of a body of water.
And this isn't a made up scenario. My car fights this every weekday. It's always entertaining to watch it figure this out as the GPS recovers badly and it starts to place me inside buildings in an area with poor GPS coverage.
The higher layer that manages strategy was tricked into making the lower layer safely take the wrong exit.
At least that's my reading of it.
"Based on the previous position of the object, the GPS derived position, the velocity, the DOP(dilution of precision) and the continuity of satellites for which data is received, the system determines whether the GPS data is reliable."
I don't know if this comes because of a technical background or not. As I have learned more about software development, assembly, cybersecurity, etc. over the years, I have become more rather than less accepting of things like self-driving cars.
This applies to IoT as well. I will never, ever, ever have one of those glorified wiretaps they call "voice assistants". What do they do, anyway? I don't really care about being able to tell a speaker to order more laundry detergent. Nor do I want any of these goofy "smart" appliances - why do I care about my toaster sending me push notifications when my bread is done? Or mining bitcoin?
I guess I just felt like ranting. I hope one of you can convince me such a cynical outlook is wrong, as I get a bit sick of viewing new stuff with negativity.
Also, even if this isn't new information, that doesn't excuse them from writing an article with dangerous information. I don't recall getting bomb-making instructions in the New York Times...
More importantly, responsible disclosure would be necessary, if it weren't for the fact that this attack would work with pretty much any GPS tech.
What makes this irresponsible:
> Prior to the Model 3 road test, Regulus Cyber provided its Model S research results to the Tesla Vulnerability Reporting Team
Spoofing autonomous cars with willful intent to injure the humans inside should be treated no differently.
GPS can be wrong whether spoofed or not, and it should only make the car safely take the wrong exit in the worst case. It must be able to tell if the exit looks right and could reasonably represent the desired exit.
Lanes leading into barriers and whatnot is pretty typical.
This could be used in an assassination. To prevent this, motorcades should use human drivers until self driving cars become smart enough to realize when they're being spoofed.
The only alternatives I can think of are using the military's encrypted GPS, or Tesla launching their own encrypted GPS satellite network... neither of which seems like a realistic option.
If you have a way to navigate cars without GPS, you should probably contact Tesla so you can make millions of dollars.
So say you tell the car that an exit it has been confirming for some time is three miles away; any change in that distance inconsistent with what the car knows it is doing should be enough to flag it and require intervention.
Granted, a lot of the article is one-sided, but the cars need to be smart enough to know where they should be relative to where the satellites place them, especially when that information changes too quickly to be valid.
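That invariant is cheap to check: the distance-to-exit should shrink at roughly the rate the odometer advances. A hypothetical sketch (the slack value is my assumption, meant to cover reroutes and GPS noise):

    def exit_distance_sane(prev_dist_m, new_dist_m, odometer_delta_m, slack_m=200.0):
        """Flag a route update if the distance-to-exit changed by much more
        than the distance we actually drove."""
        expected = prev_dist_m - odometer_delta_m
        return abs(new_dist_m - expected) <= slack_m

    # Three miles out, we drive 100 m, and the nav suddenly says 500 feet:
    print(exit_distance_sane(4828.0, 152.0, 100.0))   # False -> require intervention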
The problem of self-driving is fascinating, and it just goes to show there is always someone or something that will throw a wrench into the process.
What would the solution to this be? Signing GPS signals?
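Essentially, yes: Galileo has been developing exactly that in OSNMA (Open Service Navigation Message Authentication). At the data layer it's ordinary signature verification; here's a toy sketch with Ed25519 using the Python `cryptography` package (illustrative only: OSNMA actually uses a TESLA-style delayed-key broadcast scheme, and note that authenticating the data does not stop an attacker from recording and replaying the ranging signal itself):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    system_key = Ed25519PrivateKey.generate()   # held by the constellation operator
    receiver_pubkey = system_key.public_key()   # baked into every receiver

    nav_message = b"PRN=17|eph=...|tow=351234"  # hypothetical subframe contents
    signature = system_key.sign(nav_message)

    spoofed = b"PRN=17|eph=...|tow=351834"      # attacker shifts the time of week
    for msg in (nav_message, spoofed):
        try:
            receiver_pubkey.verify(signature, msg)
            print("accepted:", msg)
        except InvalidSignature:
            print("REJECTED:", msg)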
Claims without explanation don't move the conversation forward much.
YMMV, but I've done IMU measurements in an EV and the compass course changed by about 90 degrees from zero to full throttle. It was a low-voltage 50 V EV prototype, though (higher currents).
To be fair I'm not too sure how sensitive compasses are in high voltage EVs.
All of these “attacks on self driving cars” are just different illegal things that a single person can do against a single car.
But it doesn’t prove they’re any less safe.
A potential difference is that regular attacks on cars don't scale, whereas with some proposed methods against self-driving cars, a single person can attack hundreds of cars just as easily as a single one.
There's nothing you can do with a pistol that you can't do with an assault weapon, so what's the big deal?
You can make improvised explosives with diesel fuel and fertilizer, so what's the big deal if WalMart wants to sell grenades and mortar rounds?
Differences in degree of impact and degree of access both matter. In the case of a laser versus an exploit of autonomous navigation, one of the biggest differences is that with the laser, the moment it happens, you are immediately aware there is a crisis.
Whereas people use their navigation every day and build more and more trust in it. The notion that it might be malfunctioning and that there is a life-threatening crisis may not occur to drivers until it is too late.
That alone makes for a vast difference between the two scenarios.
If my car is on “autopilot” and it seems to be getting ready to leave the highway in the vicinity of an exit, it might take me a while to figure out that the car is not doing what I expect it to do.
Nobody expects a laser, and nobody expects their car to drive off the highway. But if you trust your car, your reaction may be delayed by the cognitive dissonance between what you observe and your belief that the car will do what it has always done under what you believe to be the exact same circumstances.
The other thing that causes confusion is that once you recognize something is wrong, you now know that you can trust some parts of the system and not others, but you don't know which. Even in aircraft, which often have minutes between recognizing a problem and a crash, people often cannot figure out what the problem is, because they need to have a theory of what is wrong before they can decide what to trust, to decide what is wrong. It's a catch-22.
But we are probably all in violent agreement, in that there are already ways to crash an automobile if it is on the highway and an attacker can get close to it.
Also, the attack in this article required $550 worth of hardware in addition to code + laptop.
Yes, high-power lasers that will blind people and ignite things are cheap and easily available.
Spoofer – Blade RF SDR $400
FCC violation – Priceless
If you're going to test RF spoofing please hardwire the transmitter to the receiver so nobody else is affected.
The first component is the autodrive which uses only visual sensors to stay in the lane, stay away from the car in front of you, and maintain speed.
The second component seems to be the routing, which enables the first component to use turn signals and route with maps. This is the piece that uses GPS.
Since their attack only affects the car's GPS, it does not affect the car's driving style.
So, Tesla/Elon are officially thankful for regulation preventing them from screwing up worse than they already are?