
Another Tesla on autopilot steers towards a barrier - walrus01
https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopilot_barrier_lust_201812/
======
lisper
Here is my armchair diagnosis: right before the car veers towards the barrier
it drives through a stretch of road where the only visible lane marker is on
the left. Then the right lane marker comes into view at about the point where
the lane starts to widen out for the lane split. The lines that will become
the right and left lane markers of the split left and right lanes respectively
are right next to the van in front of the Tesla, and at this point resemble
the diamond lane markers in the middle of the split lanes. My guess is that
the autopilot mistook these lines for the diamond lane marker and steered
towards them thinking it was centering itself in the lane.

If this theory turns out to be correct then Tesla is in deep trouble because
this would be a very elementary mistake. The system should have known that the
lanes split at this point, noticed that the distance between what it thought
was the diamond lane marker and the right lane line (which was clearly
visible) was wrong, and at least sounded an alarm, if not actively braked
until it had a better solution.

This is actually the most serious aspect of all of these crashes: the system
does not seem to be aware when it is getting things wrong. I dubbed this
property "cognizant failure" in my 1991 Ph.D. thesis on autonomous driving,
but no one seems to have adopted it. It's not possible to engineer an
autonomous system that never fails, but it _is_ possible to engineer one in
such a way that it never fails to detect that it has failed. Tesla seems to
have done neither.
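
A minimal sketch of what such a cognizant-failure check could look like in
this lane-split case (Python; all names and thresholds here are
hypothetical illustrations of the idea, not anyone's actual code):

    # Hypothetical cognizant-failure check: validate the lane model
    # before acting on it, and escalate when it looks implausible.
    MIN_LANE_WIDTH_M = 2.7   # narrower than any real freeway lane
    MAX_LANE_WIDTH_M = 4.6   # wider than a normal lane; likely a split

    def steer_or_escalate(left_marker_m, right_marker_m, controller):
        """Marker offsets are lateral distances from the car's centerline."""
        width = right_marker_m - left_marker_m
        if MIN_LANE_WIDTH_M <= width <= MAX_LANE_WIDTH_M:
            controller.center_between(left_marker_m, right_marker_m)
        else:
            # Cognizant failure: the perceived "lane" contradicts what a
            # lane can plausibly be. Don't guess; alert and slow down.
            controller.sound_alarm()
            controller.decelerate_until_confident()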

~~~
jacquesm
I totally agree with your analysis. What is more worrying is that even just
following the rest of the traffic would have solved the problem. If you are
the only car doing something, there is a good chance you are doing
something wrong.

Also: the lane markings are confusing but GPS and inertial readings should
have clearly indicated that that is not a lane. If two systems give
conflicting data some kind of alarm should be given to the driver.
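
A sketch of the kind of cross-check being proposed (hypothetical names;
assumes the car keeps both a camera-based and a GPS+inertial estimate of
its lateral position, each with an uncertainty):

    # Hypothetical cross-check between two independent lateral estimates:
    # the camera's lane model vs. GPS+inertial dead reckoning on a map.
    def estimates_conflict(camera_offset_m, map_offset_m,
                           camera_sigma_m, gps_sigma_m):
        """Return True when the two lateral-position estimates disagree
        by more than their combined uncertainty allows."""
        gap = abs(camera_offset_m - map_offset_m)
        combined_sigma = (camera_sigma_m ** 2 + gps_sigma_m ** 2) ** 0.5
        return gap > 2.0 * combined_sigma  # ~95% bound; alarm the driver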

~~~
lisper
> lane markings are confusing but GPS and inertial readings should have
> clearly indicated that that is not a lane

GPS is not accurate enough to reliably pinpoint your location to a particular
lane. Even WAAS
([https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System](https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System))
can have up to 25 feet of error. Basic GPS is less accurate than that.

In fact, it's possible that GPS error was a contributing factor here but
there's no way to know that from the video.
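
To put numbers on it (a back-of-the-envelope check; a typical US freeway
lane is about 12 feet wide):

    # Back-of-the-envelope: can GPS alone pick the lane?
    LANE_WIDTH_FT = 12.0   # typical US freeway lane
    WAAS_ERROR_FT = 25.0   # worst-case WAAS error cited above
    print(WAAS_ERROR_FT / LANE_WIDTH_FT)  # ~2.1 lane widths of uncertainty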

~~~
jacquesm
I drive a lot with my GPS (TomTom) on, and it _rarely_ gets the lane I'm in
wrong; usually that only happens in construction zones.

So even if the error can be large, in practice it often works very well.

It would be very interesting to see the input data streams these
self-driving systems rely on in cases of error, and in the case of
accidents there might even be a legal reason to obtain them (for liability
purposes).

~~~
lisper
> it often works very well

Well, yeah, it's not like autopilot is driving cars into barriers every day.
But I don't think "often works very well" (and then it kills you) is good
enough for most people. It's certainly not good enough for me.

~~~
danso
According to the Reddit poster:

> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the
> time. I didn't post it until I got .12 and it did it a few times in case it
> was a bug in 10.4.

> Also, start paying attention as you drive. Centering yourself in the lane is
> not always a good idea. You'll run into parked cars all over, and lots of
> times before there is an obstacle the lane gets wider. In general, my feel
> is that you're better off to hug one of the markings.

It kind of does sound like autopilot is steering vehicles into barriers
every day, or would be if drivers weren't being extra vigilant:
[https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...](https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopilot_barrier_lust_201812/dwuuut8/)

Seems like we need a study to assess whether the protections AP offers
outweigh the extra challenges it adds to a daily drive.

------
tjwds
This comment from OP (beastpilot) is pretty frightening:

> Yep, works for 6 months with zero issues. Then one Friday night you get an
> update. Everything works that weekend, and on your way to work on Monday.
> Then, 18 minutes into your commute home, it drives straight at a barrier at
> 60 MPH.

> It's important to remember that it's not like you got the update 5 minutes
> before this happened. Even worse, you may not know you got an update if you
> are in a multi-driver household and the other driver installed the update.

Very glad the driver had 100% of their attention on the road at that moment.

~~~
makomk
Elsewhere in the comments: "Yes the lane width detection and centering was
introduced in 10.4. Our X does this now as well and while it's a welcome
introduction in most cases, every once in a while, in instances like this
or when HOV lanes split, it is unwelcome." So basically, if I'm
understanding this right, they introduced a new feature that positions your
car a little more consistently in the lane, at the small cost of sometimes
steering you at full speed head-on into a barrier.

Remember Tesla's first press release on the crash and how it mentioned "Tesla
owners have driven this same stretch of highway with Autopilot engaged roughly
85,000 times"? I imagine the number that have driven it successfully in that
lane since that update was rolled out sometime in mid-March is rather
smaller...

~~~
jacquesm
And that's exactly what makes Tesla PR so disingenuous: they know better
than anybody what updates they shipped and how often their updated software
passed that point, and yet they will happily cite numbers they know to be
wrong.

So, now, regarding that previous crash: did that driver (or should I say
occupant) get lulled into a false sense of security because he'd been
through there many times in the past and it worked, until that update
happened and it suddenly didn't?

~~~
slivym
I've got to take issue with you there. Tesla Engineering knows better than
anybody what updates they did and how often they update etc.

Tesla PR knows nothing about what updates the engineering team did. At least
some people in Tesla PR probably don't even know the cars update their
software regularly.

It's bad practice for them to speak out of turn, but I can absolutely see
the PR team not having a good grasp of what really indicates safety; their
job is just to put out the best numbers possible.

~~~
jacquesm
I'm sorry but Tesla PR == Tesla. If they don't have a good grasp on this they
should STFU until they do. That would make Tesla even worse in my book.

Their job is not to put out the best numbers possible; their job is to
inform. Most likely they were more worried about the effect of their
statement on their stock price than about public safety.

If they _do_ put out numbers (such as 85K trips past that point) then it is
clear that they have access to engineering data at a very high resolution;
it would be very bad if they only used that access to search for numbers
that bolster their image.

~~~
icc97
Well, I had kind of come to hope that Tesla PR was less BS than other
companies' PR. But that memo basically shows they're no different from all
the rest: just twisting the truth to avoid negative stories until the fuss
dies down.

------
iainmerrick
This kind of video makes me wonder: what is the "autopilot" feature actually
for? Do people generally like it? (I don't own a Tesla and I've never driven
one.)

If you're supposed to keep your hands on the wheel, and given videos like this
that show you really _need_ to keep your hands on the wheel and pay attention,
is automatic steering really that big of a deal?

Cruise control, now, that really is useful because it automates a trivial
chore (maintaining a steady speed) and will do it well enough to improve your
gas mileage. The main failure condition is "car keeps driving at full speed
towards an obstacle" but an automatic emergency brake feature (again,
reasonably straightforward, and standard in many new cars) can mitigate that
pretty well.

It seems to me that autopilot falls into an uncanny valley, where it's not
simple enough to be a reliable automation or a useful safety improvement, but
it also isn't smart enough to reduce the need for an alert human driver. So
what's the point?

If you're excited about self-driving cars because they'll reduce accidents, as
many people here claim, what you should really be excited about is the more
mundane incremental improvements like pedestrian airbags. Those will actually
save lives right now.

~~~
vorpalhex
So human drivers are terrible at driving for any long period of time,
especially on relatively consistent "boring" highways.

Guess what computers excel at? Driving consistently on consistent highways.

The Tesla Autopilot is supposed to be the always-aware, always-attentive
component in these cases where a human driver would be very likely to start
texting or dozing off. Now, it's not fully autonomous and may well decide
it can't handle a situation (or apparently try to drive you into a barrier
to see if you're still awake...). In that case the human driver, who is
somewhat zoned out, needs to take control instantly and correct the
situation, until they can safely re-engage autopilot (or pull over and make
sure they're still alive, etc.).

~~~
davewritescode
The problem is autopilot actively engages you even less than ordinary
highway driving. I'd argue this exacerbates some of the problems that cause
humans to check out.

I don't know what the answer is, but it feels like GM's Super Cruise does a
better job of acknowledging the realistic limits of the technology by
explicitly whitelisting the roads where it is available for use.

I personally think that without some sort of sensors or beacons in the
road, autonomous driving via camera and LIDAR sensors is never going to be
good enough to achieve Level 5 autonomous operation.

~~~
bradstewart
If a human can navigate current roads, why shouldn't a computer be able to?
We may need to develop new types of sensors for vehicles, but that seems
like a better/easier plan than installing beacons on every road,
everywhere.

And what about beacon maintenance? Seems like most cities have a hard enough
time keeping up with pot holes, lines, etc. as is.

~~~
monochromatic
If a human can write a novel, why shouldn’t a computer be able to?

~~~
bradstewart
I have no reason to believe a computer cannot write a novel.

------
macawfish
People aren't going to tolerate this kind of stuff. These aren't cosmetic or
aesthetic flaws. This is an area that deserves transparency.

I'm out on the road too, and I don't get to make "consumer choices" for the
other people I share the road with. People are putting their market power
into companies that lack the basic integrity to build these technologies
with safety and transparency as the first priority, and that evidently see
people's lives as necessary sacrifices.

Please remember that we are talking about vehicles speeding down the road at
70+ MPH.

~~~
Cthulhu_
Nobody's forcing autopilot on anyone. I think instead, Tesla will (or
should) opt to disable autopilot earlier in non-ideal conditions like this;
that way they won't be held liable either. At the moment it really feels
like they're being put on trial here.

~~~
bjl
All the other people on the road are having autopilot forced on them by
reckless early adopters.

~~~
dwild
> by reckless early adopters.

That's a pretty important point. We aren't going to ban alcohol, are we?
Aren't we even trying to allow marijuana consumption? Both of these put you
in dangerous situations on the road, yet we aren't talking about blocking
their consumption.

It's still the responsibility of the adopters, just like it's the
responsibility of the drinkers.

At least doing this safely actually improves safety in the long term. These
systems NEED to be driven to improve their performance.

~~~
henrikeh
Drinking and driving is, however, banned.

At what cost do we get the supposed improved performance of self-driving
cars?

~~~
fernandotakai
If you crash while under the influence of a substance, you not only get
arrested, but you can lose your license and face huge fines.

If you crash while on autopilot, does the same happen?

~~~
henrikeh
Personally, I think the company selling the car with self-driving abilities
should be responsible. In turn this requires a clear understanding of what
that responsibility is: what is expected from a self-driving car, what
situations it should handle, and what kind of documentation is needed of
the development process.

Like anything else which involves safety and environmental damage.

------
fairpx
Autopilot tech in general should be open and shared among companies and
universities. The more brains we can put on this the better, imho. If these
things happen too often, I'm afraid regulators will start making laws that
prevent the use of this tech, which I believe has the potential to really
make life better for all... one day.

~~~
DisruptiveDave
Serious question: how will it make life better for all? I haven't given that
question a lot of thought, and I haven't heard legit answers from others yet.

~~~
maxerickson
If autonomous systems reach the point where they are measurably safer than
human drivers, there's an obvious benefit to using them.

Again it's an if, but lots of people drive long distances and don't enjoy it.
An autonomous vehicle would relieve them of the driving task. It would also
add a transport option for people that are unable to drive long distances.

It's likely to reduce the cost of taxi like services, as drivers are currently
a significant portion of those costs (autonomous driving turns pricing into
almost a pure calculation about return on investment).

I guess you might dismiss those as not being "legit" enough since they are all
contingent on the systems working well.

~~~
DisruptiveDave
No, I'm with you here. Just thinking through the other side of some of
these: lost jobs, for one. I'm not sure "safety" is a big enough issue
here. On a larger scale, I'm significantly concerned that society (through
tech) is becoming increasingly optimized for comfort and safety. That
bothers me. It removes so much of what is good about life. I don't want a
guaranteed life until 100. I don't want comfort all the time; that stunts
growth. There is a lot of grey area here, of course. But this has been on
my mind for a while now.

~~~
solean
Losing truck-driving jobs isn't removing what is good about life. Neither
is too much safety. The people whose lives are being saved by technology
aren't worrying about having too much safety; realistically, only those who
are privileged and don't have to worry about anything are.

------
e1ven
> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the
> time. I didn't post it until I got .12 and it did it a few times in case it
> was a bug in 10.4.

This is why the idea of self-updating cars terrifies me. I'd never allow
autopatching on a production server - Why would I allow it on hardware I'm
betting my life on?

~~~
dlisboa
"Move fast and break things". Taken literally.

~~~
banku_brougham
“Move fast and break humans.”

~~~
amq
Move fast and break barriers.

------
cmpolis
I can't understand the blindness of the Tesla/Elon fanboys in that thread;
the comments are so defensive of Tesla and still show little concern for
safety, despite fairly damning video evidence, a fatality, and many reports
of accidents and near-accidents...

~~~
Analemma_
People have a lot invested in Tesla-- sometimes emotionally, sometimes
literally. It is difficult to be objective in that case.

------
taneq
What does this say about the quality of the automatic emergency braking, if it
can't detect a substantial metal object directly in front of the car?

~~~
robotnixon
Autopilot has trouble with stationary objects.

[https://www.wired.com/story/tesla-autopilot-why-crash-radar/](https://www.wired.com/story/tesla-autopilot-why-crash-radar/)

~~~
jacquesm
What would it take to get Tesla to admit the product is unsafe and should be
disabled until they get it right? Or will they simply plow on until a
regulator steps in?

~~~
buvanshak
> What would it take to get Tesla to admit the product is unsafe...

People should start to vote with their wallets. In other words, stop buying
these cars, or start selling the stock.

~~~
banku_brougham
This philosophy, though admirable in support of personal liberty, ignores that
a commons exists.

------
gormz
You can pry the steering wheel out of my cold dead hands. It might be true
that on paper these things cause fewer accidents and fewer deaths, but I
want to be in control of my life. If I die in a car I don't want it to be
because of a software bug.

~~~
bufferoverflow
Most modern cars are drive-by-wire, relying entirely on computers and
software. And that software is a huge blob of proprietary compiled code.
It's guaranteed to have bugs.

~~~
20after4
Probably not that huge, really. Embedded systems on most vehicles are fairly
minimal and reliable, at least compared to modern consumer operating systems
and (apparently) Tesla's systems.

~~~
bufferoverflow
Car software is huge.

[https://cdn-images-1.medium.com/max/2000/1*cjVl6u_s34Mqjcn68...](https://cdn-images-1.medium.com/max/2000/1*cjVl6u_s34Mqjcn68sWlkA.png)

------
ProblemFactory
If Tesla autopilot relies on good quality road markings, then it's not yet
usable in 80% of the world.

~~~
GoToRO
It's not usable anywhere. What if I draw a fake line on the road? Will it
follow it? What if you have wet paint and another vehicle smudges it
diagonally across the lane?

~~~
aerovistae
Exactly. This is what I’ve been saying. It is so dangerous. I love Tesla but I
think autopilot is currently a death trap.

~~~
donkeyd
> I think autopilot is currently a death trap.

I think this is a bit of an exaggeration. As long as the driver keeps paying
attention and uses it as a driving aid, not a driver replacement, everything
is fine. It's the moment that people start relying on it doing something that
it wasn't built for, that the problems arise.

~~~
MadcapJake
> I think this is a bit of an exaggeration. As long as the driver keeps paying
> attention and uses it as a driving aid, not a driver replacement, everything
> is fine. It's the moment that people start relying on it doing something
> that it wasn't built for, that the problems arise.

I think this is a bit too optimistic. People _will_ start relying on it to be
an autopilot. I think most people see this as the desired result: a car that
can drive itself. Do you really think anyone wants "a car that will drive
itself while you are paying full attention to every detail"? Otherwise, what
is a driver really gaining from this? They're still expected to pay just as
much attention (if not more) and I'd bet it's even more boring than regular
driving (no interaction from the driver means it's like the world's most
boring VR movie)

Humans are not machines; we love to find the lazy/easy way, and we'd rather
do things than stare at the road. Eventually people will grow complacent
(hopefully not before the tech is up to snuff).

~~~
aerovistae
> Do you really think anyone wants "a car that will drive itself while you are
> paying full attention to every detail"? Otherwise, what is a driver really
> gaining from this? They're still expected to pay just as much attention (if
> not more) and I'd bet it's even more boring than regular driving (no
> interaction from the driver means it's like the world's most boring VR
> movie)

EXACTLY. They're at even _more_ risk of an accident due to inattention,
because it's so hard to focus on doing nothing.

------
lvoudour
I wonder what "AI updates" are going to mean from an insurance point of
view. I don't know about the US, but some countries in Europe insure the
driver and not the car (e.g. Ireland). How far-fetched is it for insurers
to argue that an autonomous-car software update constitutes a change of
driver, and to demand that updates be validated by them before
installation?

~~~
edf13
It's a good point.

Here in the UK I am insured to drive vehicle A as specified on my insurance
documents. If I modify that vehicle, my policy is void; I need to inform
the insurance company and probably pay a higher premium.

In your scenario the same could be true of a software update.

~~~
lvoudour
And it wouldn't be unreasonable from the insurer's point of view, since any
update carries a risk. What I think is going to happen is that insurance
companies will need to vet updates before allowing you to install them,
much like Android phone vendors do. Running unauthorized updates will void
your policy.

------
m4x
I don't understand why people use Autopilot, or why Tesla has released it.

To use it safely and according to Tesla's instructions, you have to remain
100% vigilant at all times - so you might as well just drive yourself.

And if you fail to remain vigilant, which is likely since you are sitting
passively in the driver's seat, you might kill somebody.

Where's the upside? Why on Earth would I want to use such a product?

~~~
kwhitefoot
> Where's the upside? Why on Earth would I want to use such a product?

I use it because it makes my driving less stressful. The car and I work
together which means I don't have to work so hard to keep to the speed limit,
to keep a sensible distance from the car in front and stay in lane.

~~~
dingo_bat
So no different than a regular adaptive cruise control.

------
jijojv
[https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-...](https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-believe-anything-elon-musk-or-tesla-say.html)
"Man you have got to be ready — it makes mistakes, it loses track of the
lane lines. You have to be on your toes all the time," says Wozniak.

"All Tesla did is say, 'It is beta so we are not responsible. It doesn't
necessarily work, so you have to be in control.' Well, that is kinda a
cheap way out of it."

------
paul7986
2018: the wild wild west of robot cars.

Pay an extremely high cost, including your life, to ensure technological
advancement. Is that not part of their marketing? If not, SNL needs to do a
skit!

I can see the skit now: "You looked S3XY in that Tesla." "Did you get the
update last night?" "Oh, you're right, and it made me feel so S3XY, until
it drove me straight into a barrier at 60 MPH."

------
rectang
The ability to discern when the car needs to stop for a stationary object
_right in front of it_ needs to be an ironclad requirement for self-driving.

It seems that sophisticated LIDAR is currently the best way to achieve this,
but LIDAR is expensive. So companies like Uber and Tesla skimp on the LIDAR,
and build self-driving cars that can more or less follow lanes but _plow right
into obstacles_? Whoa.

The feasibility of widespread "self-driving" tech is going to advance only as
the cost of LIDAR falls.

~~~
freerobby
Lidar isn't reliable in conditions that are less than ideal (rain, fog, snow,
etc). If you build a car that is dependent on real-time lidar observations, it
is worthless much of the time in some climates and some of the time in all
climates.

Most companies are pursuing a Lidar approach because dev_speed(Lidar approach)
> dev_speed(camera approach). Tesla is pursuing a camera approach because
max(camera approach) > max(lidar approach).

~~~
Piskvorrr
In such conditions, cameras are not unreliable, they're downright useless.
Heck, even people have trouble driving during snowfall. I have a feeling that
this is a massive set of scenarios that aren't _even_ edge cases, yet SDVs are
completely unprepared for them. ("No snow in coastal California - it doesn't
exist at all!")

~~~
freerobby
I think you may be imagining what a human sees in a foggy camera image, rather
than what a neural network can see in it. In fact, atmospheric obstructions
degrade Lidar's signal much faster than a camera's.

If you check out NVidia's DriveNet demo from a couple years back, you'll see
that they already have a vision-based NN that outperforms humans in fog and
snow[1]. We can debate whether people should drive in those conditions at all,
but today's expectations will be the baseline that SDVs are up against, and
cameras are much better suited than Lidar to achieve that baseline.

[1] Start at 7:21
[https://www.youtube.com/watch?v=HJ58dbd5g8g](https://www.youtube.com/watch?v=HJ58dbd5g8g)

~~~
Piskvorrr
What a human sees in a foggy _camera image_ is something quite different
from what we see directly - that's the
Uber-scapegoat-camera-human-wouldn't-have-seen-anything-either argument
again.

To the effect of "nobody should be driving like that" - I have driven at
walking pace at times, considering faster speed unsafe. I do agree that
would be a better task for an SDV, _iff_ it can fulfill the promise of that
marketing video.

I also agree that various conditions are suited to different sensor types -
thus an autonomous vehicle needs multiple sensor types.

------
darkerside
I wonder if Tesla has a way to visualize how disruptive a change to
Autopilot is. They've supposedly got all this driving data lying around.
Would they be able to apply the updated algorithms to that data and
quantify how many decisions would be affected, and by how much? E.g., 30%
of past decisions would have been changed with the new algorithm, 5%
changed by an angle of over 2 degrees.
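
A sketch of what that offline comparison could look like (hypothetical
names; assumes logged camera frames can be re-fed to both software
versions):

    # Hypothetical offline "decision diff": re-run both software versions
    # over logged camera frames and quantify how often they disagree.
    def decision_diff(frames, old_policy, new_policy, big_change_deg=2.0):
        changed = big = 0
        for frame in frames:
            old = old_policy.steering_angle(frame)
            new = new_policy.steering_angle(frame)
            if old != new:
                changed += 1
            if abs(old - new) > big_change_deg:
                big += 1
        n = max(len(frames), 1)
        return changed / n, big / n   # e.g. (0.30, 0.05)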

On a similar note, shouldn't customers be getting release notes for these
updates? I let my customers know when we're updating a web page. Tesla should
surely be informing them when cars are going to start lane centering.

~~~
kalleboo
The problem is you can't simulate the video that happens after your new
decision.

So this steering update would take you 0.3 degrees to the right. So what? Well
those 0.3 degrees might change the angle of the line which influences the car
to steer another 0.3 degrees, etc. But without that followup video (which you
can't simulate from a 2D recording) you don't know how it would react to the
change in environment.

The only way to regression test these things is in a simulated 3D environment
(or miles and miles of real test track)
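
Roughly, the difference looks like this (a sketch with a hypothetical
policy and simulator interface):

    # Open-loop replay: logged frames never react to the policy's output.
    def open_loop_replay(policy, recorded_frames):
        return [policy.steering_angle(f) for f in recorded_frames]

    # Closed-loop run: each decision changes what the car sees next,
    # which only a 3D simulator (or a real test track) can provide.
    def closed_loop_run(policy, sim, scenario="lane_split"):
        state = sim.reset(scenario)
        while not sim.done(state):
            angle = policy.steering_angle(sim.render_camera(state))
            state = sim.step(state, angle)   # new pose -> new lane view
        return state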

~~~
darkerside
Following the branch down is obviously impossible, but simply detecting
that there are new branches--perhaps a greater than expected number--could
be useful information for both the engineers and the operators.

------
walrus01
Scary as hell for me, since I go through this exact location on a regular
basis. The location is the Seattle I-5 northbound express lanes, where the
express lanes end and merge into the main freeway at Northgate.

------
sul4bh

> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the
> time. I didn't post it until I got .12 and it did it a few times in case it
> was a bug in 10.4.

Soon, software updates won't just break your workflow. They might break
your spine or your head, or take away your life entirely.

------
mv4
Musk wanted to regulate AI to combat an existential threat; in reality,
it's not superintelligent, self-aware AI but buggy/shitty AI that's trying
to kill people.

~~~
walrus01
We will be killed by AI developers and the Dunning-Kruger effect, not
armies of Skynet T-800s.

~~~
lallysingh
That's a Greek-tragedy-styled sci-fi story waiting to be written.

------
rsynnott
The scariest bit for me: in the comments it's claimed that it has only done
this for the last two software updates. Currently the worst I have to worry
about when getting an OTA update is that it may make my phone less stable.
Apparently in the future it may also influence precisely how my car tries
to kill me.

------
VikingCoder
Tesla makes amazing cars, but their attitude to self-driving is terrible.
Waymo's approach is way mo [sic] safer.

------
chillingeffect
Perfect example of a system doing what it's supposed to do, when that's not
the right thing. I don't believe it was headed toward the center barrier -
I believe it was neatly avoiding it. Here's what I think was happening:

The car did an admirable job trying to stay in that left lane. Each time it
saw a red and white diagonally lined barrier, it gracefully edged a
comfortable distance away.

The problem is, most humans catch on: "Gee, three scary-looking barriers in
a row! Maybe they're trying to tell me something. Maybe this road is
closed."

But I assume the car's system perceived each one as an individual, isolated
obstacle to be handled without making inferences. It did that much well,
and when it realized the third or fourth barrier was impassable, it had to
cross to the other side of the yellow barrier, and it did that well, too.

------
animex
Could someone fill me in on collision detection? My 2016 BMW has Front
Collision Warning, which can be set to min/med/max range. I've got it set
to max range. It can detect a solid object on a collision course with my
car and warn me via sounds and the HUD. Do Teslas not have some method of
solid-object collision detection that should supersede any driving
directive? As per my previous comment, could a prankster paint a set of
white lane markers that veer into a concrete wall, or even off a cliff?
It's freaking scary to me that these systems are so reliant on lane
markings. At highway speed, a distracted driver would have little time to
be alerted, assess the situation, and take the correct corrective action
before a collision occurred.

------
w_t_payne
I would argue that these incidents indicate a need for aerospace levels of
quality assurance.

With the right tooling this does not need to be expensive.

I hope that incidents like this encourage people to develop open source
tooling to support safer software and higher levels of quality.

We can see how lives might depend on it.

~~~
Piskvorrr
Aerospace QA is expensive by definition, because reliability nines generally
are, whatever the industry: an additional nine costs 10x. You can go cheaper,
but with a corresponding decrease in reliability.

~~~
w_t_payne
A lot (but not all) of the costs are due to additional process steps that
can be automated.

I think we can dramatically increase safety and quality by selectively
adopting some of the concepts from aerospace and aggressively automating
them: traceability, NLP for requirements, etc.

~~~
Piskvorrr
That is probably true, and might help - but it only gets you so far; you
can't program your way out of _everything_. For high reliability, you'll
need redundancy (and thus fallback and voting and whatnot), which will a)
drive up HW costs, while also b) increasing complexity.

I do agree there are some low-hanging-fruit opportunities to learn from
aviation, but doing that is insufficient.

~~~
w_t_payne
I agree with you on the topic of redundancy.

I am probably not communicating it very well, but my main motivation is a
reaction to just how much better our tools need to be.

------
jijojv
They should have settled
[https://arstechnica.com/cars/2017/04/tesla-owners-sue-enhanc...](https://arstechnica.com/cars/2017/04/tesla-owners-sue-enhanced-autopilot-featuressimply-too-dangerous-to-be-used/)
last year instead of dragging it on. This round of videos is more damning
than ever.

[https://www.pacermonitor.com/public/case/21195146/Dean_Sheik...](https://www.pacermonitor.com/public/case/21195146/Dean_Sheikh_et_al_v_Tesla,_Inc)

------
S_A_P
This seems to be one of those problems that you can solve 90% of the way
very quickly with a seemingly simple set of rules:

1) stay in your lane

2) don't hit other objects

3) obey traffic laws

Then that devil that is the details shows up...

------
banku_brougham
The Reddit thread (of largely Tesla drivers) seems to have concluded that
the resolution is "stay alert, especially when updates get pushed."

To me that doesn't capture the gravity of the situation.

------
senatorobama
Whoever approved this change in the code needs to be out. If I were
Karpathy, I'd be very worried right now about being liable in a civil or
criminal suit.

------
damon_c
I assume they have a test suite that allows them to run 1000s of situations on
the latest version of software to see if it crashes/almost crashes on any of
them.

If so, it seems like it would be fairly easy to add this situation to the
corpus... especially if they are recording data from their cars live.

They should have 100 people going out there and recording this lane split
situation into their test data ASAP.

------
prewett
Tesla says that the driver needs to be constantly watching the road in
autopilot mode. But what kind of autopilot is that? If I have to constantly
pay attention, I might as well just drive. I think that's safer, too, since
I am bad at paying attention to something that rarely happens.

------
antongribok
Here's the exact location where the video ends, in case anyone wants to go
"troubleshooting" this weekend:

[https://goo.gl/maps/Y2kag6yYc3N2](https://goo.gl/maps/Y2kag6yYc3N2)

------
shawkinaw
Why isn't the traffic ahead of the car taken into account? When I'm
driving, following the car in front is a pretty good way to decide where
it's safe to go. Conversely, going somewhere no car ahead is going seems
like a bad idea.

------
dhimes
They need to move those damn barriers. This is happening a lot. /s

------
tonyquart
I just read at
[https://www.lemberglaw.com/self-driving-autonomous-car-accid...](https://www.lemberglaw.com/self-driving-autonomous-car-accident-injury-lawyers-attorneys/)
about this matter, and I think these autonomous cars aren't ready to be
released to the public yet. Automakers and lawmakers should think about
regulation and law.

------
aabajian
Does Tesla record when people override autopilot? Seems like this would be a
good source of information about sections of road with obvious problems.

------
l8again
An honest question. Are any of these autonomous driving systems open source or
peer reviewed in any way? If not, isn't it really weird that we are talking
about regulations even though the underlying technology is not even peer
reviewed? How do we know (mathematical proofs, etc.) that a self-driving car
manufacturer has done a good enough job if all of that is proprietary?

~~~
duiker101
In one of the earlier discussions about this(like, last year) I remember an
engeneer's comment about how one of the major manufacturer's (Toyota maybe?)
code base for the monitoring system of the car was something to behold. On the
lines of hundreds of global variables and basically unexisting standards. Now,
I don't say that their or Tesla's autopilot are the same, but it wouldn't
really surprise me either.

------
mattbeckman
One way to help might be to disable AP for X days or weeks following an
update. Let AP run in a disabled simulation mode while you drive your
normal routes. If your course of action in any situation diverges beyond a
threshold from what AP was planning to do next, auto-report the video feed
to Tesla, and prevent AP from being used in an X-mile radius around that
area.
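
A rough sketch of that shadow-mode idea (hypothetical names; assumes the
car logs both the driver's steering and what AP would have commanded):

    # Hypothetical shadow mode: AP computes a decision but never actuates.
    DIVERGENCE_DEG = 5.0

    def shadow_step(frame, driver_angle_deg, ap_policy, reporter):
        ap_angle_deg = ap_policy.steering_angle(frame)  # proposed only
        if abs(ap_angle_deg - driver_angle_deg) > DIVERGENCE_DEG:
            # Human and AP disagree sharply: report it, and geofence AP
            # away from this spot until the disagreement is understood.
            reporter.upload(frame, driver_angle_deg, ap_angle_deg)
            reporter.disable_ap_near(frame.gps_position, radius_miles=5)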

~~~
Piskvorrr
It seems that with current developments, that's a complicated way of saying
"turn it off."

------
syntaxing
Does anyone know if all these recent accidents involve the old Mobileye
technology or the new Nvidia system? Or was it a rumour that Tesla was
moving towards Nvidia rather than Mobileye? (I think Mobileye was the one
that didn't want to renew the contract.)

~~~
ocdtrekkie
I would be careful calling it just the Mobileye system or the Nvidia system
and assigning blame accordingly. They both create autonomous driving
components, but at the end of the day, Tesla, Uber, etc. write their own
software and are responsible for the whole of the systems that are actually
in their cars.

------
planetjones
Does anyone know how software updates are tested for such autonomous
driving software? Is there a possibility for this driver's video to be
added to a regression suite and tested, or can they take the data from his
car and build a regression test based on it?

~~~
igammarays
I sure hope they do have a regression suite with thousands of real world
sensor-data samples in some normalized format made amenable to unit testing.
Assuming such a normalized format is possible to generate, I expect these
samples to be open-sourced and standardized. That would give the NTSB a
systematized certification procedure: just run the new version of the software
on all the known test cases before allowing it to be released.
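
As a sketch of what such a certification run could look like (hypothetical
format and names):

    # Hypothetical certification gate over recorded scenarios. Each frame
    # carries the logged sensor data plus a (lo, hi) band of steering
    # angles judged acceptable for that moment.
    def certify(candidate_policy, scenarios):
        failures = []
        for name, frames in scenarios:
            for frame, (lo, hi) in frames:
                angle = candidate_policy.steering_angle(frame)
                if not (lo <= angle <= hi):   # e.g. steering at a barrier
                    failures.append(name)
                    break
        return failures   # release only if this comes back empty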

Still, I suspect that may not be enough - a new weird road condition or
construction site can be created at any time. This is why I believe self-
driving companies would be better off spending their budget upgrading and
certifying specific routes as safe for automation. Certified routes could be
subject to special construction protocols and regular road quality audits that
ensure automated cars won't run into a non-automatable condition on those
routes. It's also easy to verify and confirm that a car works on all certified
routes, rather than trying to test the entire North American road network.

------
dingo_bat
I think Tesla needs to build a high-performance cell network, upload
continuous 1080p stereoscopic video at 60 fps, and factor that into their
training. No other sensor data can be a substitute for high-quality video.

------
pooya13
I don't mean to call the OP a liar by any means, but does anyone know if
there is any evidence that this video was taken in a Tesla with Autopilot
engaged? Just want to be sure about it before I form a judgment against
Tesla.

------
lafar6502
Ha ha, I don't see how this is supposed to improve safety. Maybe it's safer
than having no driver at all, but if we're talking about being better than
a typical human, then the safest thing to do is to switch this thing off.

~~~
bufferoverflow
It still could be safer than most human drivers. We only have a few examples
of their AI misbehaving. If it consistently crashed into the barriers, we
would have thousands of deaths per day.

~~~
nemetroid
From the linked thread:

> This is my commute home- I know it does it about 90% of the time. It does it
> almost every evening and has for the last few weeks since 10.4 rolled out.

------
thrillgore
Calling this well-intentioned series of mistakes "Autopilot" has got to be
the dumbest fucking thing Tesla has done.

Actually, I stand corrected. Their stock gamble is about as dumb.

------
josefresco
My hands started to sweat watching that video - this is not good.

------
decebalus1
One thing baffles me. Where is the insurance industry in this? I would expect
them to drastically increase premiums in light of these developments.

~~~
kwhitefoot
Here in Norway insuring a Tesla is no more expensive, in fact probably
cheaper, than insuring a petrol car in a similar class. I'm pretty sure that
the insurance companies wouldn't do it if it meant losing money.

~~~
chrstphrknwtn
Wasn't there (or perhaps still is) a government subsidy for electric vehicle
purchases in Norway? There may also be a government subsidy for insurance, to
further encourage people to shift away from ICE vehicles.

Cheaper insurance may have nothing to do with the quality or safety of
autopilot in Tesla vehicles.

------
m3kw9
Man, I cannot imagine how they will debug this issue in the algorithm and
how they will retrain it to get that barrier mistake out.

------
Tomminn
I commented on the first one that humans could easily make the mistake the
Tesla made. This, however, is a scarily bad fuck up.

------
readhn
Why is autopilot feature still allowed to exist? It should be disabled until
further improvements. This blows my mind.

~~~
Too
This is the biggest wtf. We've now seen at least three videos of similar
trivial mistakes by autopilot this week alone, with one fatal crash, and it
is still not disabled.

When aircraft crash, all planes of that model are grounded until the root
cause is found. Self-driving cars need similar processes.

------
drderidder
I’ve always thought self driving cars are a monumentally dumb idea. There, I
said it. I feel better now.

------
kozak
The soundtrack fits the video perfectly.

------
jacksmith21006
Problem is the other SDC companies are doing dangerous things trying to keep
up with Google.

------
e12e
Changelog (latest changes on top):

* fix issue where "Marvin" autopilot keeps complaining (rip out voicebox)

* Boost intelligence of "Marvin" autopilot. This car now has the brain the size of a planet.

* test new smarter "Marvin" autopilot

------
PaulHoule
When are they going to release the software patch?

------
return0
AV driving seems to be ripe for disruption

~~~
bennettfeely
Any 16 year old can already "disrupt" the industry just by getting a license
and driving. Humans are and will continue to be far more capable, rational,
and economical drivers for the foreseeable future.

Sounds similar to tech bros who want to radically "disrupt" carbon
sequestration. Here's an actually proven idea: plant a tree.

~~~
return0
Maybe it was not obvious: AV = autonomous vehicles.

------
huffmsa
Go to Washington DC, head west on Congressional, cross the Roosevelt bridge
and take the exit for the GW.

If there isn't a car smashed into the middle of the north/south Y junction,
you'll get the chance to count the skid marks there from humans making this
exact same error.

tl;dr: humans make the same mistake, at a higher frequency, which is why
the accordion barrier exists.

~~~
lafar6502
Humans at least are aware that they made a mistake and can avoid a
collision. Tesla makes no mistake; Tesla has no idea what's going on, so
you cannot call it a mistake. It's not a bug, it's a feature.

~~~
huffmsa
Except it gets reported back to HQ and, like with airline accidents, the
systems are updated across the board so that the risk of future occurrences
is greatly reduced.

So instead of a single human learning "gee, shouldn't do that again", 100k+
vehicles learn about it.

~~~
kalleboo
Does Tesla actually do this? They should detect hotspots of people
deactivating Autopilot (like in this video) and make Autopilot deactivate
itself a few miles before.
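
A sketch of how such hotspot detection could work (hypothetical; assumes
Tesla collects geotagged disengagement events):

    # Hypothetical disengagement-hotspot detection: if many drivers kick
    # Autopilot off at the same spot, stop offering it there.
    from collections import Counter

    def hotspots(disengagement_points, min_count=10):
        """disengagement_points: (lat, lon) pairs; rounding to three
        decimals buckets them into roughly 100 m grid cells."""
        cells = Counter((round(lat, 3), round(lon, 3))
                        for lat, lon in disengagement_points)
        return {cell for cell, n in cells.items() if n >= min_count}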

First we had the barrier crash. Then there was the video of someone
reproducing the barrier crash and manually emergency braking. Now we have this
video. One thing I've learned in my software career is that if one of our
customers reports a bug, 10,000 others have encountered it but not bothered
reporting it. So why hasn't Tesla rolled back the software yet?!

------
paulcole
Honestly who is to say that a human driver wouldn't have done the same thing?

------
Y_Y
To be fair it steers _toward_ the barrier. No barriers were harmed.

Does Tesla run some sort of regression suite on this stuff? Can they get a
copy of the sensor data from when this happened so they can reproduce those
conditions as part of their test suite?

~~~
Applejinx
You don't need the sensor data, it's obvious. The lane markers widened, and
'straight' is into the (not seen yet) barrier. If the barrier end had a mockup
of the ass-end of a car stuck on it (about three times as wide) the Tesla
would freak out, panic brake, and still think it was in the center of a single
widening lane. It's not mysterious what the car was thinking.

If the overall shape of the road wasn't veering slightly to the right, the car
would probably not have chosen to swerve to the left into a barrier, but the
car sees the lane widening equally in both directions. Simple as that.

~~~
Y_Y
Obvious? Not to the car! My point is that gathering failing cases like this
could be a simple way to gain confidence in a self-driving system, either
from the perspective of internal development at Tesla, or even for
regulation. Imagine if, before you were allowed to push an update to
road-going cars, you had to show that it doesn't crash in any of the
simulations of difficult conditions that the Road Safety Authority has
prepared. I'd be in favour of such a thing at least. It's not proof, but it
doesn't give any false positives.

------
pipio21
One of the great things about self driving cars is that we are going to start
"debugging Roads".

From my point of view, all accidents up to now could be attributed to the
civil engineer who built the road as much as to the driver of the car.

You build a pedestrian path between roads for aesthetic purposes only, so
you put up a sign forbidding passage and call it done. But of course people
use it and get killed.

You put concrete barriers between roads but leave them exposed so that
people can crash head-on into them, something you won't find on any
European high-speed road, where deflectors keep vehicles from crashing into
bare concrete.

Self-driving will give us scientific evidence of what causes accidents,
like black boxes did for airplanes. Thanks to those, we know that seemingly
insignificant details, like the color and placement of buttons, turn out to
be essential.

~~~
jacquesm
> One of the great things about self driving cars is that we are going to
> start "debugging Roads".

We've been debugging roads since Roman times. This is mostly about
debugging software, and even more about crappy software development
processes. After all, if your updates are not monotonically improving
things, you have a real problem when your product is mission-critical and
lives are at stake.

~~~
kazen44
Also, roads are far harder to debug and fix than software.

~~~
jacquesm
I would not be so sure of that. Self-driving in _all_ conditions is hard.
The thing that bugs me is that these '90%' solutions are released on the
unsuspecting public without some serious plain language about the
software's capabilities and what can be expected. Marketing should not
trump safety, especially not the safety of people _not_ buying the product.

