
Researchers trick Tesla autopilot into driving into opposing traffic - ve55
https://twitter.com/ashk4n/status/1112025340644196352
======
codelord
Previously I worked at Waymo for a year on the perception module of the self
driving car. Based on what I know about the state of the art of computer
vision, I can pretty much guarantee that current Tesla cars will never be
autonomous. This is probably a huge risk for the investors of Tesla, because
currently Tesla is selling a fully-autonomous option on its car which will
never happen with current hardware. We need several breakthroughs in computer
vision for no-fail image-based object detection, and we will need
higher-resolution cameras and much, much more compute power to process all
the images. It's hard to estimate when we will reach that level of
advancement in computer vision; my most optimistic wild guess is 10-20 years.
And then Tesla will need to upgrade all those cameras and install a dedicated
TPU at least 10x (and probably much more) faster than the Nvidia chip
currently installed in their cars, and they will have to do it for zero
dollars because they have already sold the option. It is kinda amazing that
Elon Musk is selling cars based on speculative future breakthroughs in
technology. That must be a first.

~~~
nradov
I was driving the other day and stopped behind a stop sign at a 4-way
intersection. A police car was already stopped at the same intersection, to my
left. Since he had the right of way I waited for him to proceed. But after a
few seconds without moving he flashed his high beams, which I understood to
mean that he was waiting for something and was yielding to me. Now that's not
a standard signal for yielding in the California Vehicle Code but most humans
can figure it out.

These are the kinds of odd little situations that come up all the time in real
world driving. I can't understand how anyone would expect level 4+ autonomous
driving to work on a widespread basis without some tremendous breakthroughs in
AGI.

~~~
KKKKkkkk1
Do you really need AGI to understand that some drivers don't respect the
right-of-way rules at 4-way intersections? And even if you don't detect the
high beams flashing, do you need AGI to know that you shouldn't lock yourself
out of the intersection purely based on your place in the queue?

~~~
nradov
Regardless of the possible solutions for that one particular situation, my
point was that odd unpredictable situations come up all the time in real world
driving. We can't hope to code rules in advance for every possible situation.
In the general case is it possible to handle unexpected situations without
true AGI? That remains an unanswered question.

~~~
KKKKkkkk1
I like the analogy of the internet. In order to have the internet, we have
packet switching, retransmissions, flow control with exponential backoff,
distributed spanning tree algorithms, etc. If we accept what the AI proponents
say, the internet is a jumbled mess of conflicting hand-crafted rules that has
no chance of ever working. And yet here we are on the internet, and we don't
even need a Go or chess solver running in each router.

~~~
nradov
You're making a strawman argument and I have no idea which AI proponents
you're referring to. Internet routers mostly rely on deterministic non-
conflicting rules with little or no AI involved. It generally works fine,
although there are multiple major failures every year.

How much large scale network engineering have you done?

~~~
carlmr
>Internet routers mostly rely on deterministic non-conflicting rules

Exponential backoff (which the previous poster mentioned) is a randomized,
by-definition non-deterministic algorithm for resolving conflicting usage. It
works very well, and is very simple. But deterministic and non-conflicting are
really not qualities the IP protocol stack is known for.
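
To make the non-determinism concrete, here's a minimal sketch of randomized binary exponential backoff (function names are mine, modeled loosely on the Ethernet-style scheme rather than any particular implementation):

```python
import random

def backoff_delay(attempt, base=1.0, cap=64.0):
    """Randomized binary exponential backoff: after the nth collision,
    wait a random multiple of the base slot time, chosen uniformly
    from [0, 2^n - 1]. The randomness is what breaks ties between
    competing senders -- two fully deterministic peers would keep
    colliding forever."""
    slots = random.randint(0, 2 ** attempt - 1)
    return min(slots * base, cap)

# Each retry widens the window of possible delays instead of picking
# one fixed delay, so repeated conflicts become increasingly unlikely.
for attempt in range(1, 5):
    d = backoff_delay(attempt)
    assert 0.0 <= d <= 64.0
```

The point is that success is probabilistic: no single run is guaranteed conflict-free, but the expected number of retries stays small.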

------
sokoloff
I worked on self-driving cars in 1991 at Daimler-Benz. We used prominent
horizontal lines with significant bilateral symmetry and an overall
trapezoidal shape as a proxy for a car ahead. (Such was the state of computer
vision [and prevailing automotive design] at the time.)

We were running a test day, in the wet, at a disused airfield in Bavaria, and
during a test run, the car (bus really) _slammed_ on the brakes, throwing some
of the occupants forward, and then, like a recalcitrant horse, the car
_refused_ to move forward. It turns out that the drying pavement on one of the
taxiways had created a visual pattern on the ground that our vision system
ID'd as a nearby car.

~~~
newman8r
Back in 1991, when were you (and your team) predicting that self-driving cars
would be viable?

~~~
sokoloff
Based on that overall experience, I held the same opinion then as I do today:
"never". (I was an intern at the time, though I quickly took over the
implementation and operation of our speed control and vision system.)

I have to admit: _much_ more progress has been made than I expected in the
intervening ~30 years. I still think that full self-driving with zero outside
communications/support is "never", though "mostly self-driving with selective
help sometimes" might be possible within the next 25 years.

It is interesting to see logical descendants of work that group did appearing
in road cars. (CAN bus, Adaptive Cruise, park assist, probably others that
were lab projects at the time...)

~~~
erikpukinskis
I don't mean this to be insulting, but if the intern is taking the point
position on "speed" and "vision" you can hardly call that a serious
engineering project.

There were in fact experts in topics like computer vision, even 20 years ago.
They were expensive, though.

~~~
sokoloff
Implement/Operate point of certain subsystems, not theoretical or leadership
point. (I take no offense to your comment as it's spot on if that were the
case. My apologies if I misled.)

I was _implementing_ an improvement and _operating_ our vision system
(doubling frame rate and resolution using an existing algorithm across a
scaled up processing system). Because of where our code sat in the overall
architecture, that I had control systems coursework under my belt, and that I
was "done" with my vision processing project much faster than expected, I took
on coordinating signals from a variety of other subsystems [radar, lane
keeping, sign reading, navigation, etc], computing an overall speed and
lateral acceleration target, steer angle, etc, and emitting those and a series
of additional signals to downstream systems that controlled the servos and
vehicle. Ernst Dickmanns was my boss's project director. I was in no way a
"point" on the overall project, which was by all accounts a serious
engineering effort.

This was part of the Prometheus project (~30-35 years ago, not 20):

[0] -
[https://en.wikipedia.org/wiki/Eureka_Prometheus_Project](https://en.wikipedia.org/wiki/Eureka_Prometheus_Project)

[1] - The vehicle: [https://driving.ca/mercedes-benz/auto-news/news/three-
decade...](https://driving.ca/mercedes-benz/auto-news/news/three-decades-ago-
mercedes-benz-started-work-on-the-first-autonomous-car)

~~~
erikpukinskis
I'm just saying, there are teams meant to give it a shot, and there are teams
meant to provide certainty a thing will get done.

------
cromulent
I was driving towards an oncoming Tesla recently on a 2 lane country road and
it lane changed neatly into my side and drove straight at me. Closing speed
160km/h or so.

Fortunately it chose to do so with enough time for me to stop, and for the
driver to re-take control and take my shoulder as a stopping place. If it had
made the decision a few seconds later, it would have been a different story.

I think old snow across the centre lane markings was the cause, but if you
can't handle the edge cases, then perhaps you shouldn't release.

~~~
PunchTornado
why are ppl blaming tesla? Isn't it in their manual to always keep hands on
wheel?

~~~
cromulent
"Your Tesla will match speed to traffic conditions, keep within a lane,
automatically change lanes without requiring driver input".

"The person in the drivers seat is only there for legal reasons. He is not
doing anything. The car is driving itself".

[https://www.tesla.com/autopilot?redirect=no](https://www.tesla.com/autopilot?redirect=no)

~~~
pps
Quote from this site: "Current Autopilot features require active driver
supervision and do not make the vehicle autonomous."

~~~
cromulent
Sure, and it had supervision. But Tesla's software made the choice, and the
supervisor overrode it, but _only because he fortuitously had time to do so_.

If a passenger aircraft has an _autopilot_ problem that causes a crash, do you
blame the manufacturer or the pilots? The pilots are always supervising the
plane, and it's not autonomous, right?

~~~
bencoder
Judging by a lot of the discourse immediately following the most recent 737max
incident, people were definitely blaming the pilots. Thankfully that's cleared
up a bit now.

~~~
notahacker
I think that's a very relevant analogy. By the standards of the average
driver, the 737 Max pilots were _extraordinarily_ well trained and highly
skilled, and they had a lot of time to correct the software error relative to
most conceivable problems with a semi-autonomous vehicle. But regulators have
still (reasonably) decided that their inability to cope with software quirks
should be treated as a software problem and not a problem with the pilots'
failure to rectify the erratic behaviour.

------
Emendo
People can be tricked by specially designed stickers as well.

This organization created a sticker designed to scare drivers into
stopping their car. [http://news.blogs.cnn.com/2010/09/09/3d-illusion-in-
street-t...](http://news.blogs.cnn.com/2010/09/09/3d-illusion-in-street-tries-
to-change-drivers-attitudes/)
[https://www.youtube.com/watch?v=8r26AwT7PTM](https://www.youtube.com/watch?v=8r26AwT7PTM)

While people would get angry at being tricked and would put pressure on the
city to remove the adversarial sticker, self-driving cars will not have the
sympathy of the public.

~~~
LeoPanthera
Some drivers are going to panic and stand on their brakes, possibly causing a
collision.

Other drivers will become desensitised to the image, learn to ignore it, and
possibly then ignore a real child in the future.

It's hard to express how shockingly bad this idea is.

~~~
kruczek
I'd recommend reading the article.

Before the image itself, there is "a School Zone sign, crosswalk, an extended
curb and a sign by Preventable that reads: You’re probably not expecting kids
to run out on the road". It's hard to imagine that anyone would be going fast
enough to cause any damage.

Furthermore, the image was there for only a week. I somehow doubt drivers
would suddenly start to ignore children after seeing a fake image of a child
five times.

------
tomcooks
Why link to some dude's twitter, I WONDER, when he links the [actual
paper]([https://keenlab.tencent.com/en/whitepapers/Experimental_Secu...](https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf)).

~~~
ccnafr
Probably because it's 40 pages long.

There's an even better summary of the entire research in this Twitter thread:
[https://twitter.com/campuscodi/status/1112064046369591296](https://twitter.com/campuscodi/status/1112064046369591296)

The research is pretty smart, but highly impractical for real-world attacks.

It's also been patched since 2017, with additional patches in 2018. So, it's
also impractical because it doesn't work anymore.

~~~
vilhelm_s
I guess it's a bit ambiguous, but I think the thing that he says was "patched
way back in 2017, and again in 2018" is not the hack of using adversarial dots
on the road to fool the system, but the other hack, where they inject
malicious code into the autodrive system over Wi-Fi and then the CAN bus, and
then take control of the car from a cellphone.

So maybe the lane marking thing is still unfixed?

I'm also a bit amused by the suggestion that we should feel even safer because
it was fixed twice! :)

~~~
nothal
Rationally, shouldn’t we feel safer if it’s patched twice? Seems like it’s
irrational to prefer a state where it was patched once over one where it was
patched twice. It would seem to indicate that they dedicated time and
resources to resolve it once then did proper follow up to catch missed cases.
It certainly points to it being a pernicious issue, but I can’t see why you’d
presume it’s worse that it’s been patched more.

~~~
IshKebab
Because the fact that it was patched twice means they didn't fix it the first
time, which means they didn't test it properly and might not have fixed it the
second time.

It's entirely logical.

~~~
zaroth
I’m not sure the number of patches actually provides any definitive
information about the current state, from a purely logical standpoint.

Zero patches, one, two, or more. That there are patches merely tells you that
they tried to respond to an identified threat. It doesn’t tell you the threat
was properly mitigated in any case.

I think we can say it is strictly better to have more than zero patches in the
case where a vulnerability is public. Exactly one patch to me doesn’t
guarantee anything about the quality. Multiple patches at least implies
persistent effort and dedication to securing the fix. Maybe at some point the
number of patches implies incompetence but I’m not sure I’d draw that line at
“2”?

------
Smaug123
In fairness, if you put enough fake markings on the road, any human will get
confused too. The important thing here is how small and few the fake markings
were, not the fact that this was possible at all.

~~~
WillPostForFood
Of course if you put bad input into a system you are going to get bad results,
whether human or computer. You periodically read about dumb kids messing with
stop signs and causing accidents.

[https://www.abc12.com/content/news/Stolen-stop-sign-leads-
to...](https://www.abc12.com/content/news/Stolen-stop-sign-leads-to-crash-
that-left-teenager-in-intensive-care-494153591.html)

[https://www.kansas.com/news/nation-
world/national/article218...](https://www.kansas.com/news/nation-
world/national/article218982795.html)

[https://www.foxnews.com/us/charges-dropped-against-1-in-
stop...](https://www.foxnews.com/us/charges-dropped-against-1-in-stop-sign-
prank-death)

[https://www.washingtonpost.com/archive/politics/1997/06/21/3...](https://www.washingtonpost.com/archive/politics/1997/06/21/3-who-
stole-traffic-signs-sentenced-
to-15-years/14c0a68c-e8fd-42ba-881c-454f442a27f8/)

~~~
cimmanom
Humans have enough intelligence, though, to be able to figure out when the
other lane is oncoming traffic and those spurious markings are in fact
spurious (snow or a plastic bag).

~~~
WillPostForFood
Sure, but a self-driving car might know it should stop at an intersection
even if the stop sign is missing, because it has a database of stops.
Computers and humans are going to have different weaknesses, but if you are
intentionally trying to fool them, you are going to be able to find a way to
do it.

------
notatoad
Unless I missed something, they don't demonstrate tricking the Tesla into
driving into opposing traffic, but rather into a lane that could contain
opposing traffic but doesn't. If there was opposing traffic in that lane,
would the car still change into it?

If you put up a sign saying "change into the opposite lane" and it looked safe
to do so, I'd probably obey the sign while driving my car too.

~~~
logifail
> If there was opposing traffic in that lane, would the car still change into
> it?

Thought experiment for human drivers: if there were a stationary fire truck in
your lane, under what circumstances would you drive into the back of it,
without braking?

~~~
gridspy
Perhaps if I were following a tall vehicle too closely and it swerved to avoid
the firetruck rather than stopping...?

"The Model S was traveling behind a pickup truck with Autopilot engaged. Due
to the truck’s size, the Tesla’s driver was unable to see beyond the vehicle
in front.

“... The pickup truck suddenly swerved into the right lane because of the
firetruck parked ahead. Because the pickup truck was too high to see over, he
didn’t have enough time to react.”"

[https://www.teslarati.com/tesla-model-s-firetruck-crash-
deta...](https://www.teslarati.com/tesla-model-s-firetruck-crash-details/)

~~~
logifail
> Perhaps if I was following a tall vehicle too close [..] "The Model S was
> traveling behind a pickup truck..."

I'm struggling to see how the geometry works here. How close do you have to
get to a pickup for it to completely obscure a fire truck in the lane ahead? I
can't imagine I'd be happy being in a vehicle that close to another travelling
at those speeds.

Q: Does Tesla's Autopilot simply let you drive (or perhaps more accurately:
"be driven") that close to the vehicle in front?

------
kaycebasques
Eddie Valiant uses the same technique in Who Framed Roger Rabbit:
[https://youtu.be/1mGFCGgH1-A](https://youtu.be/1mGFCGgH1-A)

------
tw1010
I used to be afraid of stuff like this, but then I realized that prison time
is a heck of a deterrent, which will probably take care of most of the actual
risk (at least at the "this is the end of society as we know it" level).

~~~
hellllllllooo
The problem isn't someone doing exactly this. It's that the Tesla Autopilot
is based on a bunch of heuristics and assumptions, and if they do not hold on
certain stretches of road, people may die. It's just not safe enough to call
it a robust working system, which is what Tesla and Musk have pitched and
sold it as.

~~~
Robotbeat
Human driving behaviors are also based on a bunch of heuristics and
assumptions that do not hold in all road conditions.

It's good to expect autonomous/semi-autonomous systems to improve on humans,
but it's not good to make zero edge cases the bar that must be cleared before
allowing this.

It's likely that even any significantly improved (vs humans) system would also
have new fatal edge cases while also sharing some existing fatal edge cases.
What's important is that the systems are not overall worse than humans alone
and that they continuously improve.

~~~
fwip
The thing about humans is that we're relatively good at understanding our own
heuristics and rating the confidence of those beliefs. If there's a storm, we
reduce distractions, pay more attention to the road, and drive more
conservatively. Other passengers will understand not to distract the driver.

But if the car is confused by some occasional paint splotches on the road that
clearly aren't lane markings to our human eyes, we don't have any
understanding that the car is being misled. Like that video the other day of
the Tesla clipping the barrier - up until about a half second before impact I
would have assumed it knew what it was doing and was on track to avoid it.

It's unrealistic to expect a human driver to take over immediately in failure
scenarios that the person can't recognize.

~~~
Robotbeat
I think there's lots of work to be done in quantifying and comparing
discrepancies and lack of confidence in neural net classifications and
sensor/model outputs. Part of this is human interface design.

An example of this is the "sensor disagree" light on the Boeing 737 max (which
unfortunately was _optional_ equipment not installed for the Ethiopian
Airlines 737 Max crash), although I do believe we can do much better than a
mere light. A full screen and a sound system with over-the-air updates means
we ought to be able to field a really good (and continuously improving) system
for communicating when the autopilot's sensors disagree or if the model has
low confidence.

This is likely going to take years or even decades of operational use over
millions of vehicles to refine with good confidence.
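
As a toy illustration of what "low confidence" could even mean (nothing here reflects any shipping autopilot; the function names and threshold are invented), one crude signal is the entropy of a classifier's softmax output:

```python
import math

def softmax(scores):
    """Convert raw classifier scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def confidence_alert(scores, entropy_threshold=0.5):
    """Return True if the class distribution is too uncertain, i.e. its
    entropy, normalized by the maximum possible entropy (a uniform
    distribution), exceeds the threshold."""
    probs = softmax(scores)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    max_entropy = math.log(len(probs))
    return entropy / max_entropy > entropy_threshold

# A peaked distribution (clear detection) stays quiet; a near-uniform
# one (ambiguous lane markings) would trigger the warning.
assert not confidence_alert([9.0, 0.5, 0.2])
assert confidence_alert([1.0, 0.9, 1.1])
```

Turning a signal like that into something a driver can act on in time is, of course, the hard human-interface part.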

------
dawnerd
It's not just Tesla that gets confused. Other cars with lane assist also get
easily confused if there are stray markings on the road. I'm pretty sure my
Passat would do the same thing.

~~~
cheeze
Agree, but Tesla is the poster-child for "self-driving" which certainly makes
something like this _that_ much more alarming.

~~~
ccnafr
I thought Waymo (Google cars) was the poster child for self-driving cars.

~~~
darkpuma
One has better tech, the other has bolder marketing.

------
peterwwillis
If you added up all the R&D money spent on these self-driving features, we
probably could have already covered 80% of American roads with simple guidance
systems that would assist in piloting cars safely. Then nav systems wouldn't
need to be as complex, and wouldn't depend so much on trying to guess at road
conditions.

If self-driving really is the future, the government's going to need to be
involved, the way it is in building and maintaining roads. Many roads already
have cameras and sensors built into them just for measuring traffic, so it's
not a big leap to say they should also have tech to improve automated driving.

------
emcq
For articles like this there always seem to be a few comments suggesting
that, statistically, Teslas are the safest cars.
Are the cars safe or is it the drivers? The population of Tesla drivers is far
from average; the average driver certainly can't afford a 40-80k+ car and may
not be interested in electric.

~~~
rohit2412
And what exactly are they comparing?

Just because lane-keeping assist reduces accidents doesn't mean we can remove
human drivers. As long as the driver has to keep their hands on the wheel, as
with Autopilot, it remains an assistive technology and does not support any
arguments for driverless technology.

------
ricardobeat
While the idea is worrying, note that there is no “opposing traffic” in the
test performed, and very little info on how the car behaved other than a
couple seconds of a TV show. The entire conclusion is:

> Tesla autopilot module’s lane recognition function has a good robustness in
> an ordinary external environment (no strong light, rain, snow, sand and dust
> interference), but it still doesn’t handle the situation correctly in our
> test scenario.

Not one word more; see the last two pages of the paper.

------
jaimex2
Good work by the researchers. I'm sure Andrej Karpathy and team are already
working hard on improving this part of Autopilot's neural network. Most of the
updates seem to show that the system memorises things it sees at a distance,
building a "mental image" of the environment and adding more detail as it
gets closer, instead of assessing every frame individually.

~~~
Robotbeat
Interesting. I wonder if, as the number of Teslas on the road increases,
their "mental models" could be shared, so that obstacles detected (with high
confidence) by earlier cars can be passed to later cars traveling on the same
stretch of road. Like how drivers sometimes signal road conditions (speed
traps being probably the most common one) to other drivers.

Eventually, that should probably be standardized in an open, broadcast format
to other non-Tesla vehicles as well.
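
As a sketch of what such an open broadcast format might look like (every field name here is invented for illustration; real V2X efforts define their own message schemas), a minimal obstacle report could be a small serialized record:

```python
import json
import time

def make_obstacle_report(lat, lon, kind, confidence, sender_id):
    """Build a minimal, vendor-neutral obstacle report. All field names
    are invented for illustration; a real format would come out of a
    standards process, and real messages would be authenticated."""
    return {
        "schema": "obstacle-report/0.1",  # hypothetical schema tag
        "sender": sender_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "kind": kind,                      # e.g. "debris", "stalled-vehicle"
        "confidence": confidence,          # detector's own estimate, 0..1
    }

def encode(report):
    """Serialize for broadcast; JSON keeps it inspectable by any vendor."""
    return json.dumps(report, sort_keys=True).encode("utf-8")

msg = make_obstacle_report(37.386, -122.084, "debris", 0.93, "car-42")
wire = encode(msg)
assert json.loads(wire)["kind"] == "debris"
```

The interesting design questions (trust, replay protection, expiry of stale reports) are exactly where a standards body would earn its keep.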

~~~
jaimex2
That would certainly make sense. A huge benefit is that it would make it much
easier for the cars to co-ordinate which lanes they should be in to maximise
road throughput.

------
mindfulplay
This is an example of releasing things that are potentially unsafe with a
'meh, we are Silicon Valley enthusiasts, let's fail fast and fail often'
attitude.

The slight problem with this approach is that the failures end up killing
human beings. Sure, you can learn from the failures, but is this the right
way?

Very callous and self-righteous behavior from these Elon Musk-style people.

------
rachelbythebay
101 at 85 in Mountain View has a bunch of lane dividers with their far ends
lifted and turned hard to one side. Might this already be happening, albeit
accidentally?

The fact it’s the same junction as the fatal crash is also interesting.

------
ryanmarsh
I live along US 290 near Houston which has been a total shit show of
construction and hazard for the past few years. Adversarial lane markings
don’t necessarily require malice.

On 290 confusing lane markings, old lane markings, changes in surface and
grade, are hard enough for human drivers. I wonder how state of the art self
driving would handle these situations.

It’s been a treacherous stretch of highway. I can’t imagine the construction
is being done in a safety-compliant manner, but I’m not an expert.

------
Invictus0
Figs. 33 and 35 are really lacking. Can we see the stickers at another angle?
What is on them, just a white mark?

I'm interested to see something more along the lines of Figure 19. That is,
could you take a sticker of a stop sign, add some secret noise to it, stick
the sticker on top of a regular stop sign so no one knows, and then have the
car roll through the stop sign?
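
For context, the "secret noise" idea is the classic adversarial-example attack. Below is a toy illustration of the fast-gradient-sign idea on a made-up linear classifier (nothing here reflects the paper's actual models or method; the weights and features are invented):

```python
import math

# Toy logistic "stop sign detector": score = sigmoid(w . x + b).
# A real attack targets a deep network, not a hand-picked linear model.
w = [0.9, -0.4, 0.7, 0.2]
b = -0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def detect(x):
    """Detector's confidence that x is a stop sign."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, eps=0.3):
    """Fast-gradient-sign step: nudge each feature by +/- eps in the
    direction that lowers the detector's score. For a linear model the
    gradient sign of the score w.r.t. x is just sign(w)."""
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, 0.2, 0.8, 0.5]           # "clean" stop-sign features
x_adv = fgsm(x)                     # small perturbation of each feature
assert detect(x_adv) < detect(x)    # detector is now less confident
```

On a deep network the same small-per-pixel perturbation can be invisible to humans while flipping the classification, which is what makes the sticker scenario plausible.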

~~~
frosted-flakes
Perhaps. There was a road sign on my street that got hit with a paintball or
an egg by some kid, and for years after that it was difficult to see at night.
Somehow, whatever was on the sign interfered with the retro-reflector coating,
even though it looked perfectly normal during the day. (Not a stop sign, just
a narrow road sign for a single lane bridge.)

------
maxander
The “trick” was to add fake lane markings; did they test to see how many
_human_ drivers are susceptible to the same attack?

------
lelf
The actual post [https://keenlab.tencent.com/en/2019/03/29/Tencent-Keen-
Secur...](https://keenlab.tencent.com/en/2019/03/29/Tencent-Keen-Security-Lab-
Experimental-Security-Research-of-Tesla-Autopilot/)

------
dgudkov
I don't understand the haste with self-driving cars. The technology and the
related legal issues (who is responsible for accidents?) are not ready for
mass use. Why put cars with half-baked self-driving technology on regular
roads and highways, among other drivers who didn't consent to this? Why not
start from predictable and tightly controlled environments (e.g. the inner
territory of a factory or a cargo port), limit their speed to 10-20 mph,
spend a few years refining the technology, and only then _gradually_ expand
the operating envelope?

When Boeing puts half-baked autopilot technology into the MAX 8, that's
horrible negligence. When Tesla does a similar thing, it's for some reason OK.

------
mises
Maybe we just shouldn't have self-driving cars. The bias of the HN crowd
seems to be that "oh, technology will advance and solve the problems; just
give it time". No code is perfect; no system is 100% secure. And this isn't
just the malicious case: what about situations where old lane lines are still
clearly visible? You can't test perfectly. I'd make a similar argument about
things like nuclear centrifuges, namely that there are places where computers
should be relegated to their absolute minimum involvement.

~~~
tgsovlerkhgsel
Or maybe we should accept that no system is perfect and that cars that kill
people at a lower rate than human-driven cars are a good idea, even if they
keep killing people.

------
elamje
This is hardly tricking the Tesla. In the video they have no cars nearby for
the Tesla to also infer from, so saying it's "driving into opposing traffic"
is misleading.

I would imagine the Tesla would do fine if there were cars in front of it,
and cars in the oncoming lane that it could infer from.

With that being said, I'm sure there are plenty of corner cases where the car
can be legitimately tricked.

------
web007
It seems like a Wile E. Coyote attack on Autopilot - draw a line that turns
into a wall, and the car will follow it blindly. I don't see how this is
significantly different from human driver actions. You could put cones up and
a human would deviate as well, or put up something they don't recognize to
effect the same reaction.

------
caycep
This is more fantasy thinking (but hey...all the 20th anniversary Matrix
articles are popping up...)

At what point do machine-learning "adversarial" attacks converge with
real-life adversarial attacks?

Or do we already have this, but it's called "camouflage"?

------
paulorlando
Imagine being able to do something like this at scale:
[https://unintendedconsequenc.es/autonomous-vehicles-
scaling-...](https://unintendedconsequenc.es/autonomous-vehicles-scaling-
risk/)

------
Elzear
[https://xkcd.com/1958/](https://xkcd.com/1958/)

~~~
Robotbeat
Don’t know why you were down-voted. It’s quite true that humans have the same
kind of vulnerability. Drawing misleading things on the road could kill human
drivers, too. It’s not a new threat, it’s just that most people aren’t
murderers so it’s not a serious problem.

~~~
rohit2412
Because it is not just about adversarial samples. What about old lane
markings, paint spillover, contrasting colors in the asphalt, or tire marks
in snow? They'll all come across as new lanes.

These lane-keeping assist systems are nowhere near ready to be called
driverless.

------
EGreg
This is a general problem with trusting computer AI.

I am talking about image recognition stuff etc.

At the very least we need some invariants at a higher level to kick in and
heavily penalize any such results! Not just bottom-up; we need top-down.

Where is this feedback mechanism?
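
A sketch of what such a top-down invariant check might look like (all data structures, field names, and thresholds here are invented purely for illustration):

```python
def plan_is_sane(plan, map_info):
    """Top-down invariant checks that can veto a perception-driven plan.
    'plan' and 'map_info' are made-up structures for illustration: the
    idea is that high-level knowledge (map data) overrides low-level
    vision output when they conflict."""
    checks = [
        # Never steer into a lane the map says carries opposing traffic.
        plan["target_lane"] not in map_info["opposing_lanes"],
        # Reject lane geometry that disagrees wildly with the mapped road.
        abs(plan["lane_heading"] - map_info["road_heading"]) < 30.0,
    ]
    return all(checks)

plan = {"target_lane": 2, "lane_heading": 95.0}
map_info = {"opposing_lanes": {2, 3}, "road_heading": 90.0}
assert not plan_is_sane(plan, map_info)  # vetoed: lane 2 is oncoming
```

The hard part, of course, is that the "map" layer has its own failure modes, so the veto logic has to know when to trust which source.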

------
dkfndodbxob
These sorts of systems tend to be far easier to trick than humans with certain
sorts of "camouflage". Is the reverse true? Are there some kinds of markings
that confuse people but not modern image recognition systems?

------
frogpelt
I feel like we are going to need smart roads before we can have autonomous
vehicles. If the road could talk to the car, the car would be harder to fool
or confuse. Maybe.

------
iabacu
Tesla self-driving is like Theranos.

A big fake-it-until-you-make-it promise that gambles with people's lives in
exchange for some hype and funding.

------
gzak
Next, someone will stand on a street corner wearing a stop sign t-shirt... All
hell will break loose.

------
paul7986
Yup, not going to pay through the nose to be any billionaire's guinea pig!

Unfortunately others will, posing a new threat on our roads that previously
didn’t exist.

Progress is a killer, and it kills the innocent at the hands of the
mega-wealthy.

------
PorterDuff
[https://www.youtube.com/watch?v=VkTrQTt2Zdw](https://www.youtube.com/watch?v=VkTrQTt2Zdw)

------
blueface123
I tricked the autopilot into driving in the right direction. Is that normal?

------
mistrial9
hey, World! I don't want a self-driving car with tracking! thank you

------
alex_lfw
so that's why elon released a rap song. the ol' trump tactic.

------
samstave
What ports does a Tesla have open?

Anyone NMAPed their tesla?

------
blueface123
cool beans

------
shambolicfroli
I wonder how this Tesla issue relates to the admonition "stay in your lane".

------
dboreham
Interesting. This means that one of my predictions (that self-driving would
happen, but only on highly controlled road sections where markings allow very
reliable lane following) is likely not to be realized. Unless we invent
digitally signed lane markings...

~~~
cimmanom
Are pedestrians and cyclists going to have to be "digitally signed" too in
order to avoid getting killed?

~~~
Mirioron
Perhaps it'll be in areas without pedestrians or cyclists? Eg some version of
the interstate. But then the question becomes: what happens if there's an
accident and humans are on the road? What if there are wild animals in the
road?

------
joemaller1
California has incredibly good roads: generally well maintained, thoughtfully
designed, and well marked. It's the last place we should be testing
self-driving cars.

~~~
notimetorelax
Checking in from Europe: California roads can be very confusing or badly
marked.

