
Tesla Autopilot tricked into accelerating from 35 to 85 mph with modified sign - harambae
https://electrek.co/2020/02/19/tesla-autopilot-tricked-accelerate-speed-limit-sign/
======
dmitrygr
Further proving that "self driving" is approximately as hard as AGI, and is
_nowhere_ near as "close" as everyone seems to think it is. A human would
have _context_ and _common sense_ and thus know that a residential street will
simply not _ever_ have an 85 mph speed limit.

And _yes_ , you could special case _this_ case in the code, but there are
hundreds of cases where "common sense" is used in driving, and you will never
teach your NNs all of it.

Prediction: no actual "self driving" (to the point where the driver can
legally be asleep) on public streets for at least another 10 years.

~~~
joshvm
I don't see why you need special cases in the code. Sat nav units have had
speed limits built in for years. Why not just look it up in a database? (The
car knows where it is, after all, and requiring a GPS fix could be made
mandatory - most drones won't arm without one, for example.)
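
Roughly, something like this against OpenStreetMap's public Overpass API and
its maxspeed tag (the endpoint and tag are real; the coordinates, search
radius, and lack of error handling here are purely illustrative, not anything
production-ready):

    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"

    def posted_limits_near(lat, lon, radius_m=30):
        """Return the OSM maxspeed tags of roads within radius_m of a GPS fix."""
        query = f"""
        [out:json];
        way(around:{radius_m},{lat},{lon})["highway"]["maxspeed"];
        out tags;
        """
        resp = requests.post(OVERPASS_URL, data={"data": query}, timeout=10)
        resp.raise_for_status()
        ways = resp.json().get("elements", [])
        return [w["tags"]["maxspeed"] for w in ways if "maxspeed" in w.get("tags", {})]

    # A residential street should come back as something like ["30 mph"], which
    # the planner could treat as a ceiling regardless of what a (possibly
    # defaced) sign appears to say.
    print(posted_limits_near(47.6062, -122.3321))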

~~~
rypskar
How much can you trust that data? My car tells me what the speed limit is by
using a database and reading signs. On my commute to work there is a stretch
where it tells me the speed limit is 5km/h when it is actually 80km/h, and
another place where, for a short bit, it shows the limit as 80km/h when the
correct limit is 30km/h.

~~~
joshvm
Neither is ideal. But you should use what information you have. There is still
no excuse for making a decision like 85 in a 35. Worst case, this should be a
disengagement, or the car could defer to you - e.g. a visual warning on the
dash: "speed limit unclear, override?"

In the UK, smart motorways can vary from 30mph to 70mph according to traffic.
This can change within minutes and obviously cannot live in a static database.
So this is a good case for sign recognition. In theory, though, there is some
network that's updating those speed limits, and there's no technical reason
why it couldn't be a public information service (maybe it already is).

Then you have the sibling comments - speed limits which are time based. I
would imagine this is country specific and is, mostly, a matter of public
record. But would you trust CV to recognise text on a sign with specific
hours? I don't know how complex the signage is, but trying to use ML for this
seems absurdly wasteful of resources.

Finally you have other information - how fast are the cars around you
travelling? Have you detected an obstruction on the road (e.g. cones)? Is the
weather inclement, meaning you should drive more slowly? And so on.

I would imagine that if self-driving cars become ubiquitous, there will
eventually be a system for real-time speed limit determination; whether that's
some kind of wireless beacon at regular intervals or an online database that's
kept up to date, I don't know.

~~~
rypskar
How is the accuracy of sign recognition on the LED signs stating the current
speed limit? My experience is that it is not so good, maybe other cars are
better at reading LED signs.

For the car to react according to how other cars drive, you need to have other
cars on the road at the same time. I have a narrow gravel road where the limit
is 30km/h but safe driving is between 10 and 20, and my car tells me the limit
is 50km/h. I can go weeks without seeing other cars on that road at the same
time as me. Some mornings during the winter I can drive because I know where
the road is, not because all of it is visible. Good luck having an autopilot
do that.

~~~
joshvm
Well, quite. But again this is all supplementary information. The car should
have a basic idea of the speed limit from a database and it should adjust as
appropriate.

Similarly, the car should be able to identify road surface conditions and also
how much power it needs to put in to hold a set speed. If it detects that
traction is being lost over gravel, it should reduce power.

Equally, the autopilot should consider visibility. There are plenty of roads
at the national speed limit where you'd be insane to drive at 60 due to blind
or extremely tight corners.

But with LED signs this seems like massively over-engineering the problem. Why
use computer vision when you could potentially broker a deal with the highways
agency to get live speed limits for a particular stretch of highway?

------
rootusrootus
They had to use older Teslas to make this work, because newer models don't use
sign-reading to determine speed limit.

Abstractly, it's interesting, if only because it probably wouldn't trick a
human -- everyone would see the 85mph as BS regardless of the defaced sign.

Has anyone hacked into OpenStreetMap yet and fiddled with speed limits?

~~~
gizmo385
What do the newer models use? I tried to Google for it, but just kept
stumbling on news articles about the OP.

~~~
bdcravens
"Front-facing cameras detect speed limit signs on AP1 vehicles and display the
current limit on the dashboard or center display. Limits are compared against
GPS data if no signs are present or if vehicle is HW2 or HW2.5"

[https://en.wikipedia.org/wiki/Tesla_Autopilot#Speed_assist](https://en.wikipedia.org/wiki/Tesla_Autopilot#Speed_assist)

From what I've gathered elsewhere, the GPS data is OSM.

------
Robotbeat
This already happens occasionally with regular drivers:
[https://www.telegraph.co.uk/news/uknews/road-and-rail-
transp...](https://www.telegraph.co.uk/news/uknews/road-and-rail-
transport/11138390/Drivers-tricked-by-fake-40mph-speed-limit-signs.html)
"Pranksters blamed for speeding offences after drivers caught out by 40mph
signs in 30mph zone"

Ironically, current Teslas are largely immune to this as they use a database
of speed limits.

------
wpietri
Ah, that's especially good. They just changed the 3 in "35 MPH" speed limit
sign to look more like an 8, and the Teslas dutifully sped up. A mistake no
human driver would make.

Rodney Brooks points out that if we ever get AGI [1], we'll have solved the
autonomous vehicle problem. But it's far from clear to me that we'll truly
solve the problem much before then, as cars and roadways are built with
general-intelligence expectations in mind.

[1]
[https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

~~~
koboll
>A mistake no human driver would make.

And probably prosecutable as felony manslaughter, the same way removing a stop
sign would be, because you're acting in a way you know will get people killed.

------
elcomet
So basically they modified a 35 MPH street sign to make it look like an
85 MPH one. I looked at it from a distance, and it was hard to tell 35 from 85.

So if it is hard for a human, then it seems obvious that a Tesla would also
fail to recognize a 35.

Maybe the difference is that a human might not be certain and slow down just
in case, whereas a deep neural network might be certain in its errors.

~~~
tmpz22
A human has the basic intuition to assume the sign itself might be wrong or
misleading; the programmers of the car's automation failed to account for
that, in a way that might've gotten someone killed.

~~~
russfink
Good thing, too, that nobody stepped out in front of their car, like a kid
chasing a ball, as they did their ad hoc experiment.

~~~
frosted-flakes
They didn't allow the car to reach 85 mph, for obvious reasons.

------
loser777
This is as good a demonstration of the big "common sense" gap in end-to-end
computer vision approaches as it is of adversarial examples. Even with an
occluded speed limit sign, humans have strong priors on what a sensible speed
limit would be, based on a more general understanding of the environment
(width of lanes, curvature of roads, visibility, road quality, frequency of
law enforcement vehicles ;), etc.).

~~~
castratikron
Wonder if the neural net could be trained to know the speed limit? "This scene
doesn't look like the usual 85 mph zone..."

~~~
fyfy18
Many cars now come with a database of speed limits and display the current
limit on the dashboard. Although this shouldn't be relied on for self-driving,
it could be used to double-check. OSM data for types of roads and areas is
pretty accurate too, so you could use that ('an 85mph speed limit for a single
lane within a residential area doesn't sound correct'). This doesn't need AI,
it just needs someone with common sense to come up with a few basic rules
about speed limits. If there is some doubt you could prompt the driver to
confirm the speed limit change, and use that to build up a database of
real-world data.
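
A back-of-the-envelope version of such a rule might look like this (the
road-type caps and the tolerance are made-up numbers, just to show the shape
of the check, not any jurisdiction's actual law):

    # Illustrative plausibility check with made-up caps, not real law.
    ROAD_TYPE_CAP_MPH = {
        "residential": 40,
        "rural_single_lane": 60,
        "divided_highway": 85,
    }

    def accept_sign_reading(sign_mph, db_mph, road_type, tolerance=15):
        """Return (limit_to_use, ask_driver) for a camera-read speed limit."""
        cap = ROAD_TYPE_CAP_MPH.get(road_type, 85)
        implausible = sign_mph > cap or (
            db_mph is not None and sign_mph > db_mph + tolerance
        )
        if implausible:
            # Keep the database value (or the cap) and ask the driver to confirm.
            fallback = db_mph if db_mph is not None else min(sign_mph, cap)
            return fallback, True
        return sign_mph, False

    # The defaced-sign scenario: camera says 85, map says 35, residential street.
    limit, ask_driver = accept_sign_reading(85, 35, "residential")
    assert (limit, ask_driver) == (35, True)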

It sounds like the Tesla is just using what the camera sees - which is bad if
true. Admittedly this was tested on a race track, so maybe there is no data
saying otherwise (or even the opposite).

------
aero142
If the point is that autonomous cars are susceptible to deliberate malicious
actors, I've got bad news about the rest of the world. Human drivers are
incredibly susceptible to me throwing a $2 brick through the front windshield
as well. It sounds like the issue is already resolved in this case.

~~~
capableweb
I guess the difference here is the scope of how many people you can impact.

In order to get all Teslas that go past a sign on the highway, you just have
to modify the sign slightly and everyone is affected (in theory).

In order to affect the same number of people with your "brick through the
windshield" strategy, you'll need a lot more manpower than just a sticker on a
sign.

~~~
throwanem
That depends on the size of wreck you manage to cause, don't you think? In any
case, it takes no more people to drop a brick than it does to sticker a sign.

To be clear, I find this entire concept of unsupervised robot vehicles both
dangerous and absurd - even more so than the already dangerous and absurd
baseline of a society so intimately bound up with automotive travel as ours.
Exceeding such a high baseline as that is in its way impressive, and certainly
demonstrates the astonishing overconfidence rampant in some segments of our
very young and rather careless industry, but let's not get distracted from
that essential point and waste our efforts on inconsequential arguments over
whether a sticker is more dangerous than a brick.

~~~
capableweb
> That depends on the size of wreck you manage to cause, don't you think?

No, I think it's the scope of affecting one person after another without doing
anything more. Throwing bricks requires continuous action, while a sticker is
a thing you do once and then it "does it for you".

See it as working every day and getting paid for it, versus a savings account
where you get returns without really doing anything.

And sorry, I didn't really join the conversation to argue against the "concept
of unsupervised robot vehicles [is] both dangerous and absurd" part, so I
agree with the rest of your message.

------
valine
Putting aside the fact that this was found on an old Tesla with Autopilot 1.0,
this is not even a technically hard problem to solve. The current version of
Autopilot uses a database and GPS to determine the speed limit. In the future,
if Tesla wants to return to a vision-based speed limit system, they can
sanity-check the vision readings against their database and throw out results
like 85mph.
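
A rough sketch of that cross-check (the 10 mph tolerance and the 85 mph
ceiling - the highest limit posted anywhere in the US - are my own
assumptions, not anything Tesla has described):

    # Sketch only: fuse a camera reading with the mapped limit conservatively.
    US_MAX_POSTED_MPH = 85  # highest speed limit posted anywhere in the US

    def fused_limit(vision_mph, map_mph, tolerance_mph=10):
        if vision_mph is None or vision_mph > US_MAX_POSTED_MPH:
            return map_mph    # no camera reading, or an impossible one
        if map_mph is not None and vision_mph > map_mph + tolerance_mph:
            return map_mph    # sign disagrees sharply upward: distrust it
        return vision_mph     # otherwise let the sign win (e.g. temporary limits)

    assert fused_limit(vision_mph=85, map_mph=35) == 35  # the defaced-sign case
    assert fused_limit(vision_mph=30, map_mph=35) == 30  # sign lowers the limit: obey it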

------
oyebenny
I live in Atlanta and there's a digital speed limit sign on 75. Sometimes the
light is out, and when it means to say the speed limit is "55" it just says
"5". Come to think of it, I've always noticed traffic slowing down near that
sign. I wonder if any Teslas have contributed to that.

~~~
emiliobumachar
I would slow down for a speed limit of 5, even if I was sure it was a
defective display. It's effectively an unknown speed limit. Maybe not if I'm
_very_ familiar with the road, but not everyone is.

Yes, I'm sure the courts should throw out the ticket, but sometimes they mess
up, and often you lose time and money pursuing your case.

------
consp
Nice of them to use a Tesla, but I'm pretty sure you can do this with any
sign-reading auto-throttle system. There are plenty of cruise control systems
which employ similar technology.

~~~
shiftpgdn
Or a person who misreads the sign. Though the whole point is that a Tesla
doesn't have the context to know that a residential zone with an 85mph speed
limit sign is obviously some sort of error.

~~~
toast0
You don't need residential context. You just need US context. I have never
seen an 85 MPH speed limit sign in the US, ever. Up to 65 is common, with 70
or 75 sometimes posted on well-maintained roads between urban areas. I can't
recall for sure seeing an 80, but I feel like maybe I have once or twice.

~~~
drunken-serval
There’s an 85mph limit in one place in Texas.
[https://en.m.wikipedia.org/wiki/Speed_limits_in_the_United_S...](https://en.m.wikipedia.org/wiki/Speed_limits_in_the_United_States)

------
sebringj
What I like about Tesla is that they have over-the-air updates and a fleet's
worth of network data to quickly address issues or exploits that may arise...
but this article is quite funny, exposing the gap between AI and basic human
intuition. I do believe human intuition may just be another model to train
from, in some sense, in the end.

------
asdff
Some sick people get their kicks tossing rocks off of overpasses. Imagine the
games people are going to play with cardboard cutouts of whatever or false
signage. I predict an arms race.

~~~
Robotbeat
People could already cause crashes by making fake signs or repainting lines. I
think this XKCD summarizes the issue nicely:
[https://m.xkcd.com/1958/](https://m.xkcd.com/1958/)

------
JMTQp8lwXL
If a human drove at 85 mph because a bad actor maliciously modified the speed
limit sign, would they be liable?

~~~
binarymax
Co-reference resolution error: who is 'they' in your sentence? The human or
the bad actor?

No matter what, the answer is probably both. You can't modify street signs now
anyway. You also can't drive 85 in residential/city areas.

~~~
JMTQp8lwXL
The human operating the vehicle who reads a maliciously edited sign. It seems
unquestionable to me that anybody modifying street signs would be held liable
for the modifications.

------
RcouF1uZ4gsC
>The safety of Tesla's autopilot features has come under close scrutiny, but
CEO Elon Musk has predicted the company will have "feature-complete full self-
driving" this year.

I think that is a real possibility if we are referring to years on Jupiter
(which has an orbital period of 12 Earth years).

------
whyaduck
So cars are using OCR on speed limit signs?

Next step: sql injection.

[https://hackaday.com/2014/04/04/sql-injection-fools-speed-
tr...](https://hackaday.com/2014/04/04/sql-injection-fools-speed-traps-and-
clears-your-record/)

~~~
masklinn
That's pretty common IME, though the ones I've driven didn't make decisions
based on it.

The last few cars I've rented showed a speed limit sign on the dash
_generally_ matching the actual speed limit, and on the one I rented this
Christmas, passing a speed limit sign (or another sign triggering a speed
limit change) while on limiter or cruise control would flash a message
suggesting you press a button twice to change the configured speed to that
(rather than manually adjusting using the +/- buttons).

They were mid-size (C-segment) and compact MPVs, not large or luxury cars,
which I expect is why the sign reading was mostly or purely indicative.

------
Sohcahtoa82
While scary, it's not exactly relevant to any Tesla made after they broke off
from MobilEye (Which I think was 2016?).

Current Teslas don't read speed limit signs, they access an online database.
Vandalize the speed limit signs all you want, as modern Teslas aren't even
reading them.

------
luxuryballs
Shouldn’t it know that 85 is beyond the speed limit? You’d think it would know
the laws based on where the GPS is, like in some states above 80 is reckless
driving. Seems trivial to make this data available.

------
posix_compliant
I'm not what you'd call a self-driving car "skeptic" since I do think that we
will eventually be able to iron out these types of rules, but I think it will
be another 30-50 years before I feel comfortable truly trusting one.

~~~
emiliobumachar
How much do you "truly trust" human drivers?

------
m0zg
Sounds like an easy fix in most cases. Residential road or road work? Can't be
85, period. There are relatively few roads in the US where 85 is the actual
speed limit.

------
threatofrain
Don't cities have easily accessible and authoritative records on locations of
important traffic controls? If so, that would make this problem a little bit
easier.

------
vardump
Another discussion on same topic:
[https://news.ycombinator.com/item?id=22370346](https://news.ycombinator.com/item?id=22370346)

------
tjoff
Even if it was a legitimate 85 mph sign a human would realize that the road
isn't safe at 85 mph speeds.

You must react to the environment as well...

------
NicoJuicy
I'm a self-driving skeptic, but this test is not fair.

It's also illegal to do this, so it's not a valid edge-case.

~~~
dtho
Why would the legality make this test invalid or unfair? Illegal things still
happen. I have seen countless street signs with graffiti on them. I could see
rebellious teenagers doing this on purpose without fully understanding or
caring about the consequences. Search google images for 'street sign graffiti'
and you will see thousands of examples.

~~~
lostlogin
There has been an outbreak of modified stop signs near me. Some are quite
professionally done with stickers championing pet projects. Stop animal
farming, stop eating meat etc. If they weren’t at the end of a street, they
would definitely look like a marketing campaign (which I suppose is exactly
what they are).

------
ebg13
I keep seeing throughout this thread statements similar to "a human would
never do this", and the only thing running through my head is awe at how
little otherwise smart people understand their fellow humans.

Like...y'all have lost your goddamn minds. OF COURSE many humans would do it!
Humans fucking drive straight into lakes because the map told them to. Humans
do not have some kind of magical ward against fucking up or against being
tricked. Quite the opposite.

But you know what the difference is? As soon as the computer system has
programmed in a record of local contextual defaults, this problem won't happen
again. Say the same about humans, I dare you.

~~~
sharkmerry
>> Humans fucking drive straight into lakes because the map told them to.

I had to check if this was real or just a bit from The Office that imprinted
on me. Googling "humans drive into lake" only finds one incident, which
occurred at midnight. [1]

And if you look at the research, they didn't convert it into an 8, they simply
extended the middle part of the 3 a little bit. [2]

[1][https://fox8.com/news/a-little-embarrassed-woman-follows-
car...](https://fox8.com/news/a-little-embarrassed-woman-follows-cars-gps-
straight-into-lake-huron/)

[2][https://cdn0.tnwcdn.com/wp-
content/blogs.dir/1/files/2020/02...](https://cdn0.tnwcdn.com/wp-
content/blogs.dir/1/files/2020/02/Screenshot-2020-02-19-at-11.59.56.png)

~~~
ebg13
> _Googling "humans drive into lake" only finds one incident._

You'd find more examples if you tried _slightly_ harder ("lake" isn't the
important part!):

[https://kfgo.com/2020/02/10/man-drives-into-mississippi-
rive...](https://kfgo.com/2020/02/10/man-drives-into-mississippi-river-while-
following-google-maps/983425/)

[https://www.cnet.com/news/man-drives-into-river-gps-
china/](https://www.cnet.com/news/man-drives-into-river-gps-china/)

[https://www.mirror.co.uk/news/world-news/man-watches-wife-
bu...](https://www.mirror.co.uk/news/world-news/man-watches-wife-burn-
alive-5435575)

[https://abc13.com/587100/](https://abc13.com/587100/)

[https://abcnews.go.com/blogs/headlines/2012/03/gps-
tracking-...](https://abcnews.go.com/blogs/headlines/2012/03/gps-tracking-
disaster-japanese-tourists-drive-straight-into-the-pacific/)

[http://news.bbc.co.uk/2/hi/uk_news/england/bradford/7962212....](http://news.bbc.co.uk/2/hi/uk_news/england/bradford/7962212.stm)

[https://www.boston25news.com/news/man-drives-into-pond-
while...](https://www.boston25news.com/news/man-drives-into-pond-while-
following-gps-directions/538872511/)

[https://www.nbcnewyork.com/news/local/gps-leads-nj-
motorist-...](https://www.nbcnewyork.com/news/local/gps-leads-nj-motorist-
into-house/2123304/)

[https://www.dailymail.co.uk/news/article-1164705/BMW-left-
te...](https://www.dailymail.co.uk/news/article-1164705/BMW-left-
teetering-100ft-cliff-edge-sat-nav-directs-driver-steep-footpath.html)

[https://www.westsiderag.com/2013/05/01/gps-brain-fail-
driver...](https://www.westsiderag.com/2013/05/01/gps-brain-fail-driver-car-
ends-up-stuck-on-riverside-park-stairs)

[https://www.news.com.au/lifestyle/real-life/driver-
follows-g...](https://www.news.com.au/lifestyle/real-life/driver-follows-gps-
into-sand/news-story/081ea557f486757a0cdd2722892727bb)

[https://nymag.com/intelligencer/2018/01/waze-app-directs-
dri...](https://nymag.com/intelligencer/2018/01/waze-app-directs-driver-to-
drive-car-into-lake-champlain.html)

[https://www.fox5atlanta.com/news/2-drivers-stuck-on-train-
tr...](https://www.fox5atlanta.com/news/2-drivers-stuck-on-train-tracks-in-
duluth-after-gps-error-1-car-hit)

[https://www.bostonmagazine.com/news/2013/06/19/mbta-train-
ac...](https://www.bostonmagazine.com/news/2013/06/19/mbta-train-accident-car-
tracks/)

[https://www.abc.net.au/news/2015-10-01/car-being-hit-by-
trai...](https://www.abc.net.au/news/2015-10-01/car-being-hit-by-train-after-
gps-sends-onto-tracks/6818564)

[https://abc7chicago.com/395218/](https://abc7chicago.com/395218/)

People act like humans are immune from making idiotic decisions or from
ignoring their surroundings or something. That's...super naive.

~~~
sharkmerry
But in these examples, would AI driving not fail there too? Here's an example
of AI driving into a river: [https://electrek.co/2019/03/10/tesla-crash-river-
claim-unint...](https://electrek.co/2019/03/10/tesla-crash-river-claim-
unintended-accelerated/) - so it happens as well.

Would you be fooled by the attached image in my previous comment? If you were,
would your default action be to accelerate by 50mph when no other drivers are?

~~~
ebg13
> _would AI driving not fail there too?_

It might if it were relying solely on a GPS and nothing else, but, of course,
none of them do that. The biggest difference is that navigation software can
improve and sensors don't stop paying attention. You can't say either of those
things about people who screw up in exactly the same circumstances.

> _Heres an example of AI driving into a
> river[https://electrek.co/2019/03/10/tesla-crash-river-claim-
> unint...](https://electrek.co/2019/03/10/tesla-crash-river-claim-unint..).
> so it happens as well._

Let's note that the article is skeptical that the problem was actually the
car:

"I want to give the driver the benefit of the doubt, but every time we have
seen similar circumstances, the logs always pointed to a user mistake."

But even giving the driver the benefit of the doubt and asserting as a premise
that the AI caused the accident, we're still left with the fact that people
unintentionally accelerate their vehicles all the damn time. If a machine does
it once in a while, that's not a regression, that's the baseline.

~~~
sharkmerry
>> we're still left with the fact that people unintentionally accelerate their
vehicles all the goddamn time.

Again, do they do this by 50+mph ALL THE GODDAMN TIME?

>> If a machine does it once in a while, that's not a regression, that's the
baseline.

If all the machines do it at once, though? The impact of a self-driving
failure is typically a lot more than 1 person.

Also, we don't know the rate of error. There are 1.2 billion drivers in the
world and 3.5 billion smartphone users. It's safe to say 50% of drivers use
GPS, so 600 million drivers using GPS? There aren't even 1 million Teslas on
the road yet and they are already having incidents.

~~~
ebg13
> _Again, do they do this by 50+mph ALL THE GODDAMN TIME?_

No. Usually they crash into something first. I can tell you one personal
anecdote, though, where in 1995 the gas pedal in my truck actually got
physically stuck in the down position and I had to hold the brake pedal down
with one foot while wedging my other foot underneath the gas pedal to loosen
it. Of course I could only do this after the several (5 or 6?) seconds it took
for me to understand what was happening and then react. 6 seconds is a long
time to react to a catastrophic event. Humans aren't great at it. We almost
always do the wrong thing. We turn the wrong way. We push the wrong pedal. We
don't understand our surroundings. We ignore our surroundings.

Anyway. Please note I'm not saying that this isn't a failure of the nav
system. It is. I'm saying that anyone claiming that humans are magically
immune is wrong because humans are very dumb and make dumb mistakes all of the
time.

~~~
reidjs
Why not just pop it into neutral?

~~~
ebg13
Panic and adrenaline? I think you must have missed the part where I said
"humans are very dumb and make dumb mistakes all of the time".

------
kwhitefoot
Very misleading. The car will only accelerate to the speed already chosen by
the driver. It doesn't seem odd in itself for it to accelerate up to the speed
limit.

------
skymt
XKCD on tricking self-driving cars:
[https://xkcd.com/1958/](https://xkcd.com/1958/)

I wanted to link that not because of the "there's an XKCD for everything" meme
but because it makes an interesting point: sabotaging roads isn't difficult
now, with human drivers. There's no reason to assume it would suddenly become
a substantial threat when autonomous cars are commonly adopted. An issue worth
considering and accounting for, but not worth public worry.

