
Why Tesla's autopilot can't see a stopped firetruck - fanf2
https://www.wired.com/story/tesla-autopilot-why-crash-radar
======
tomxor
> And it's not much of a problem if every human in a semi-autonomous vehicle
> followed the automakers' explicit, insistent instructions to pay attention
> at all times, and take back control if they see a stationary vehicle up
> ahead.

While their technical reasoning is fine... this statement shows a complete
failure of reasoning about human beings and UX. In essence they are asking
people to stay alert without being active, which is actually really fucking
hard for a human being to do. Noticing an object that needs to be avoided while
you are already actively engaged in driving is easy: you are naturally alert,
and there is no "intervention time" because there is nothing to intervene
with. Trying to achieve the same level of response while letting a machine
drive kind of defeats the purpose of letting the machine drive.

Yes I'm basically making the argument that semi-autonomous driving is not
safe. Make it fully autonomous or not at all. Two modes, keep it simple,
because human concentration is complex.

~~~
jacquesm
There are mountains of evidence to back this up, quite a bit of it stems from
the problem of keeping airline pilots engaged and alert while the autopilot
functions as it should.

~~~
ghaff
And people deskill when they don't get enough practice as well. I know I got
better over time as a driver and it wasn't just because I wasn't a teenager
any longer. It's easy to imagine we'll end up with a lot of people on the
roads with not-quite autonomous systems who actually haven't actively driven
all that much.

~~~
patcheudor
It's not even about deskilling. I raced road bikes for years and in large part
drive like I rode: highly defensive (bordering on aggressive as I've been
lectured by LEOs). I commuted for years in my roadster, with a manual
transmission and power steering set for the track, where I'm highly engaged: I
can feel every crack in the road and the car can accelerate to unholy speeds
in seconds. When I drive our sedan, it's boring and I find my mind drifting.
If I turn on assisted cruise control, it's mind numbingly boring and on any
long drive I'm guaranteed to fall asleep so I don't use it anymore because
it's too dangerous.

Without being engaged in the act of driving, without the adrenaline, I switch
off. This is the key problem I suspect everyone at some level suffers from
whether they want to admit it or not. Even awake, on full autopilot I suspect
most people are in full-on daydream mode.

~~~
tomxor
I don't race but I have noticed this with my "normal" car; I've always driven
a hatchback, though, which is quite common in this country. If I force myself
to drive very steady I can almost feel the alertness drifting away: I become
less aware of vehicles around me and their bearing. If I speed up to a
threshold just between the bored comfortable zone and racing, it's enough to
give heightened alertness. I can easily preempt drivers making careless moves
that would otherwise endanger me, and I notice pedestrians and other things
around the road more. It's a strange balance, but I've no doubt that sticking
dead to the limit on a large road will decrease alertness in any study.

------
jVinc
> Tesla didn't confirm the car was running Autopilot at the time of the crash

That essentially means that it was. If it wasn't they would have denied it and
released data.

> but its manual does warn that the system is ill-equipped to handle this
> exact sort of situation: “Traffic-Aware Cruise Control cannot detect all
> objects and may not brake/decelerate for stationary vehicles, especially in
> situations when you are driving over 50 mph (80 km/h) and a vehicle you are
> following moves out of your driving path and a stationary vehicle or object
> is in front of you instead.

Just for those unaware of Teslas previous handling of other crashes, this is
them waving a huge flag saying "We fucked up and we know it".

I wonder how that call went "Hi Tesla, my self-driving car just drove head-on
into a stationary firetruck" "Ohh... did you read the manual? It's supposed to
do that, so it's basically your fault!"

~~~
joekrill
> That essentially means that it was. If it wasn't they would have denied it
> and released data.

What kind of ridiculous, half-assed logic is that? There could be any number
of reasons they can't confirm this yet.

I haven't officially confirmed I'm not an alien being from a far away planet
-- so that must mean I am one!

~~~
toast0
Based on past experience, Tesla PR tends to respond within 24 hours to push
the blame on their customer. In the court of public opinion, a change in
behavior like this is a pretty good indication.

------
xya3453
Apparently the driver of the Tesla didn't have time to react to the firetruck
as their field of view was obscured by a large pick-up truck that changed
lanes seconds beforehand.

If Tesla's autopilot can't detect upcoming stationary objects, it should at
the very least keep enough distance between itself and the vehicle ahead in
order to allow the human at the wheel to have an unobscured view of the road
ahead.

~~~
sjg007
Why can't it detect a static object on the same path? I thought Subaru has a
system that can do that... At least according to the commercials it detects
when a car in front of the car in front of you has stopped or slowed.

~~~
deelowe
> the car in front of you has stopped or slowed.

This is different from what happened in the Tesla case. What happened here is
that the vehicle in front of the car was moving and in view of the Tesla's
radar sensor, obstructing the view of the firetruck. That car then moved, and
now a stationary object was suddenly in front of the car. From the Tesla's
viewpoint, an object just magically appeared somewhere in front of the car.

I'm not familiar with radar systems, but I imagine in this scenario the system
isn't precise enough to know that this stationary object that suddenly
appeared isn't an overhead sign or some other similar object that pops into
the field of view of the radar system. It probably has something to do with
radar not being extremely directional, constantly having stationary things pop
up due to reflections and various objects in the periphery, having a limited
sampling rate, and needing to do filtering or other sorts of magic to make
sure there aren't constant false alarms. I can see how, cruising along at
50mph, the system doesn't have enough time to figure all this out and stop
(e.g. not enough samples, or there being a noise floor on stationary-object
data). It appears the correct way to solve this is with 3D lidar systems
combined with some sort of mapping technology, but those are still too
expensive/difficult to put into everyday vehicles.
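To make the ambiguity concrete, here's a toy Python sketch (not any real system's logic) of a naive Doppler clutter filter. The assumption is just that each radar return carries a radial velocity measured relative to the car:

```python
# Toy sketch of a naive Doppler clutter filter. A return whose relative
# velocity is about -ego_speed is "moving with the ground", i.e. stationary,
# and gets filtered out -- which discards overhead signs AND stopped
# firetrucks alike, because velocity alone can't tell them apart.

EGO_SPEED = 22.0  # m/s, roughly 50 mph (illustrative)

def is_stationary_clutter(relative_velocity, ego_speed, tol=0.5):
    """True if the return is moving with the ground (i.e. stationary)."""
    return abs(relative_velocity + ego_speed) < tol

returns = [
    {"name": "car ahead at 20 m/s", "rel_v": 20.0 - EGO_SPEED},
    {"name": "overhead sign",       "rel_v": -EGO_SPEED},
    {"name": "stopped firetruck",   "rel_v": -EGO_SPEED},
]

tracked = [r["name"] for r in returns
           if not is_stationary_clutter(r["rel_v"], EGO_SPEED)]
# Only the moving car survives; the sign and the truck are indistinguishable
# on velocity alone, so both get dropped with the clutter.
```

The point is that any threshold that keeps the firetruck also keeps every sign and overpass, which is exactly the false-alarm problem described above.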

~~~
sjg007
I said "the car in front of the car in front of you". That car is not visible
either since the car in front of you is blocking it. Subaru has commercials
indicating this scenario is addressed as far as I can tell.

------
mattlondon
For a 65mph crash into a stationary object, that looks like fairly light
damage considering!

Isn't this _precisely_ the scenario that automatic emergency braking systems
are designed to prevent?

The article suggests that no system can handle this, but the EuroNCAP system
specifically tests cars driving at up to 80kph (~50mph) into a stationary
vehicle
([https://youtu.be/AJ4WXgWaNgs?t=2m40s](https://youtu.be/AJ4WXgWaNgs?t=2m40s)
protocol: [https://cdn.euroncap.com/media/32278/euro-ncap-
aeb-c2c-test-...](https://cdn.euroncap.com/media/32278/euro-ncap-aeb-c2c-test-
protocol-v201.pdf)) Is this sort of safety equipment not a "thing" in the US?

It looks like AEB is not fitted to the model S in the EU as standard.
[https://www.euroncap.com/en/results/tesla/model-s/7897](https://www.euroncap.com/en/results/tesla/model-s/7897)
(expand safety assist section - no AEB listed, c.f. XC90
[https://www.euroncap.com/en/results/volvo/xc90/20976](https://www.euroncap.com/en/results/volvo/xc90/20976))

I wonder if AEB was installed and activated in this crash, despite what the
cruise control was doing? Surely it must have been, otherwise I'd have expected
_significantly_ more damage, particularly to the fire truck getting a 2t lump
slamming into it @ 65mph?!

~~~
Symmetry
Presumably when the car got within a certain distance of the firetruck the
cone of the radar was narrow enough that the radar could tell that the truck
was definitely in the road and triggered braking - too late to avoid a
collision but early enough to slow down some.
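A rough back-of-envelope of that narrowing, with an assumed beamwidth (the 10-degree figure is illustrative, not Tesla's spec):

```python
import math

BEAM_DEG = 10.0  # assumed radar beamwidth, for illustration only
LANE_M = 3.7     # typical US highway lane width in meters

def beam_width_at(distance_m, beam_deg=BEAM_DEG):
    """Cross-range footprint of the radar cone at a given distance."""
    return 2 * distance_m * math.tan(math.radians(beam_deg / 2))

far = beam_width_at(120)   # ~21 m wide: the return could be in any of several lanes
near = beam_width_at(15)   # ~2.6 m wide: now the truck is unambiguously in-lane
```

At highway stopping distances the cone spans multiple lanes, so a confident "it's in my lane" call only becomes possible close in, which is consistent with braking that starts too late to avoid impact.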

------
BugsJustFindMe
> _Radar knows the speed of any object it sees, and is also simple, cheap,
> robust, and easy to build into a front bumper. But it also detects lots of
> things a car rolling down the highway needn't worry about, like overhead
> highway signs, loose hubcaps, or speed limit signs. So engineers make a
> choice, telling the car to ignore these things and keep its eyes on the
> other cars on the road: They program the system to focus on the stuff that's
> moving._

So these car-mounted radar systems can't detect how large an object is or
where it is in relation to the road? I don't get how they can be used at all
then. That can't be true, right? Because if you can tell how large something
is, then the hubcap is out. And if you can tell where something is in relation
to the vehicle and/or road, then the signs are out. I mean, hell, the most
maniacal situation of climbing a steep hill with clearance under a sign that
you just can't see because the sign happens to be mounted just after the rise,
which probably doesn't exist anywhere in the world, would be out by other
factors, like knowing that you're going up a steep hill.

So it really sounds like the only thing left is just not being able to tell
how large something is, which sounds wild.

> _This unsettling compromise may be better than nothing_

People frequently find disappointment to be worse than nothing.

~~~
Filligree
There are physical limits to how good a radar's resolution can be, set by the
properties of the radio waves used. Those make it really difficult, though not
quite impossible, to detect size differences that are small relative to the
radar's wavelength.

It's why everyone else uses lidar instead, or in addition. Really, it has the
same limits, but the wavelength of visible light is small enough that they're
only an issue for microscopes.
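For a rough sense of scale, here's a diffraction-limit (Rayleigh) estimate; both aperture sizes are illustrative guesses, not any vendor's spec:

```python
import math

C = 3.0e8  # speed of light, m/s

def angular_resolution(wavelength_m, aperture_m):
    """Diffraction (Rayleigh) limit: theta ~ 1.22 * lambda / D, in radians."""
    return 1.22 * wavelength_m / aperture_m

# 77 GHz automotive radar, assumed ~10 cm antenna aperture
radar_theta = angular_resolution(C / 77e9, 0.10)
# 905 nm lidar, assumed ~2.5 cm optical aperture
lidar_theta = angular_resolution(905e-9, 0.025)

# Cross-range spot size at 100 m range
radar_spot = 100 * radar_theta   # ~4.8 m: a sign and a truck blur together
lidar_spot = 100 * lidar_theta   # millimeters: easily separates them
```

A ~5 m blur at 100 m is why radar alone struggles to say "truck in lane" versus "sign above lane", while lidar's short wavelength makes the same limit irrelevant.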

~~~
Someone1234
That's why many (most?) other vehicle manufacturers use cameras instead of
radar. You physically see what the driver would physically see and react how
the driver would react with the same information (just faster).

See Subaru, Toyota, Honda, etc.

I'm not sure why this article and this discussion seemingly act like the
predominant technology used in automatic braking systems doesn't exist?

~~~
sliken
Tesla's use cameras as well.

------
altotrees
So I read this and cannot help but wonder if we really are 5 years away from a
massive sea change. People think truck drivers could be obsolete in 5 years
time, the bullish ones anyway.

The tech is still developing. We may be in a bit of a hype bubble, but it is
as real and clear as it has ever been. I think even as the tech advances
rapidly, public acceptance will be way further out than 5, 10 or 15 years. I
would venture to say self-driving cars and truck fleets being normalized is 30
years away and in no way because of the tech, but due to all the red tape and
public debate that will encase the issue.

If you think all truck drivers are going to be displaced in 5 or even 10
years, you may be living in an HN bubble. Sure jobs will be automated away,
but this is going to take more time than lots of people think. For instance,
one article like this could delay acceptance for months or years, in my
opinion.

~~~
rsynnott
> People think truck drivers could be obsolete in 5 years time, the bullish
> ones anyway.

I'm not sure how many people _really_ think that; it's pure fantasy. To be
honest, I wouldn't be surprised if they're not obsolete in 20 years. AI stuff
has a tendency to get nearly good enough quite quickly, and then improve very
little more for decades afterwards.

~~~
ghaff
>AI stuff has a tendency to get nearly good enough quite quickly, and then
improve very little more for decades afterwards.

What tends to happen is that some approach, deep learning in the current case,
turns out to be very effective for a class of problems. But it turns out that
it only gets you, say, 90 percent of the way to being truly useful in
the real world. And there isn't an obvious way to get you the other 10
percent.

------
alistproducer2
The fact that this flaw is known and in production gives credence to some of
the AI experts who are saying level 5 is waaaaay further away than we are
being led to believe.

~~~
vinceguidry
No, it just requires the sea change of having every vehicle on the road talk
with each other. That would fix this particular problem quite nicely.

It's not a panacea of course.

~~~
croon
What about cars breaking down that stop communicating?

~~~
jclardy
The cars around that car are still communicating. So the car next to it sees a
drastic speed drop and tells everyone else.

Also, say the cars regularly ping each other for updates; then if a car that
was in sight, and should still be in sight, stops responding, it gets flagged
as a possible hazard.
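A minimal sketch of that bookkeeping, with a hypothetical timeout value:

```python
# Hypothetical V2V heartbeat bookkeeping: each car keeps the last time it
# heard from each neighbor; a neighbor that goes silent longer than the
# timeout gets flagged as a possible hazard.

HEARTBEAT_TIMEOUT = 2.0  # seconds (assumed value)

def flag_hazards(last_heard, now, timeout=HEARTBEAT_TIMEOUT):
    """Return IDs of neighbors that have gone silent past the timeout."""
    return sorted(car_id for car_id, t in last_heard.items()
                  if now - t > timeout)

last_heard = {"car_a": 9.5, "car_b": 7.0, "car_c": 9.9}
hazards = flag_hazards(last_heard, now=10.0)
# car_b hasn't pinged in 3 s -> flagged as a possible breakdown
```

Of course this only covers the silent-failure case; it says nothing about cars (or firetrucks) that were never on the network at all.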

~~~
mantas
What if 2 cars fail to communicate and can't flag each other?

~~~
rebuilder
You'll have an accident. But we have accidents already. The question is, what
failure rate will we tolerate from autonomous vehicles?

~~~
ksk
It should be zero. The belief that computers don't make mistakes is the entire
reason anyone would even consider letting an automated car pick up their kids.

~~~
sliken
I disagree and I have a kid. I'd be happy to let an automated car pick up my
kid if it was demonstrated to be safer than I'd likely be.

Car related deaths are fairly common, I can't see any rational reason to not
switch to autonomous driving as soon as they are statistically likely to save
lives.

Do you really want a large number of people (35,000 a year or so in the USA)
to die every year until autonomous cars are perfect? Not to mention nothing is
ever perfect.

~~~
mantas
It's a trust issue.

Doing your best and working towards safety is one thing. Putting 100% trust in
an autonomous thing is a different thing. If you know it's 100% safe, then it's
rather easy to do. But if it's 98% safe... it's pretty much gambling. You know
it may fail and you have no chance to influence it. Sit back and hope you're
not the (un)lucky one.

------
projektfu
The problem with Autopilot is the name and the marketing. Tesla markets it as
more advanced than other systems, which is not true, and the name suggests to
people that it is a more or less self-driving vehicle, which they are
encouraged to believe with the Summon feature as well. The responsible thing
for Tesla (and other adaptive CC makers) to do is to study how many people
feel safe browsing the web when they enable their cruise mode. If it is
significant at all, it is a sign that they should perhaps disable these modes
of operation or make them require more activity from the user.

------
ocdtrekkie
This is... as the article says... unsettling.

I think it's safe to say Autopilot was enabled. Tesla has never hesitated to
rush out and state that the car's driver was lying and that their system didn't
screw up. Presumably they will still blame the driver for not paying enough
attention, but as the article says, not being able to detect a stopped
emergency vehicle is a pretty giant gaping flaw.

~~~
ghaff
They're all just assistive driving systems and anyone who thinks that robo-
Uber is going to pick them up at a bar and deliver them to their driveway in
less than a number of decades is delusional. The issue is that these systems
will get good enough to work enough of the time that people think reading a
book or watching a movie with the occasional glance at the road is OK when it
isn't. (Of course, plenty of people don't really pay full attention today.)

~~~
majewsky
> anyone who thinks that robo-Uber is going to pick them up at a bar and
> deliver them to their driveway in less than a number of decades is
> delusional

You are aware that this is literally how Waymo's _existing_ taxi service in
Phoenix works?

I am aware of the limitations, but the parts with "decades" and "delusional"
are obviously overstated.

~~~
ghaff
We'll see. I honestly (and regrettably) do not believe we'll see a taxi
service without a human driver for 20 years, if not significantly longer.

~~~
majewsky
Honestly, what makes you think that, given that _multiple_ corps are saying
that they will have autonomous cars on the road _this year_?

I can see how "this year" could slip into next year under unforeseen
circumstances, but when your own prediction is so far away from the collective
wisdom of an entire industry sector, it sure looks like a massive case of
Dunning-Kruger, so you should have a very solid argument to support your
prediction.

~~~
ghaff
No, they're not; or if they are, it's with lots of caveats and limitations. In
fact, Volvo, which has been one of the more aggressive companies in this
space, has apparently moved back their plans. [1] I'm actually willing to
believe that this space will develop faster than it looked a few years back. I
note that one of the more skeptical AI researchers working on this, John
Leonard, is taking a sabbatical from MIT to work on Toyota's self-driving
initiative. But quicker to market is still probably at least 10-20 years.
Perhaps just not "not in my lifetime" as Leonard said a few years back.

[1] [https://www.theverge.com/2017/12/14/16776466/volvo-drive-
me-...](https://www.theverge.com/2017/12/14/16776466/volvo-drive-me-self-
driving-car-sweden-delay)

~~~
razorunreal
I think it will go much faster than people realise, because the big blocker to
infrastructure changes that make self driving easier is the fact that there
are no self driving cars. Once they have a foothold somewhere, with caveats
and limitations, it's much easier to convince the neighboring city that with
"only a few updates to these intersections" or "just a couple of beacons" or
whatever, they can have self driving taxis too.

------
dzdt
I guess the output of the radar is basically a list of relative velocities for
nearby large objects, paired with distance and rough direction to those
objects. There is _always_ a large object essentially all around you moving
with a relative velocity equal to your own speed: the ground. And there are
often other large fixed objects (overhead signs, overpasses, etc) which appear
roughly dead ahead.

Radar is just a very limited sensor!

------
11thEarlOfMar
Yes, my 2017 Subaru with EyeSight behaves this way.

A couple of other scenarios when adaptive cruise is on: cars changing lanes
into the lane ahead of me at short distances do not cause it to react quickly
enough. It seems there's too much lag in the time to capture the interloper;
and, if I'm approaching a car quickly that is moving much slower than I and I
_signal a lane change but don't execute it_, I'll collide with the slower
car. (or so it seems to me, the system did not slow down and I had to apply
the brake to avoid colliding).

Another related problem with vision-based systems is these random types of
road artifacts that fool the system. This image shows a false left-side lane marker
(formed by a partial re-paving and the shadow cast by the center barrier). It
sounded a lane departure alert. If the car were autonomous, it likely would
have swerved right trying to center:
[https://imgur.com/a/KnsXU](https://imgur.com/a/KnsXU)

~~~
stevenwoo
That last situation gives _me_ problems - my least favorite is that combined
with jersey barriers so the old marking goes straight into the jersey barriers
versus the new marking.

------
herodotus
I have Adaptive Cruise Control on my VW Golf, and I must say that I preferred
my old Honda Accord vanilla cruise control. I found with the old system that I
could relax if the road ahead was fairly clear. If a car changed lanes in front
of me, I immediately touched the brake, which disabled cruise control. I could
relax and yet know I could regain control immediately.

Even before reading this article, I have not been able to relax with my new
automatic cruise control. The problem is that I don't have a solid intuition
for it - so instead of relaxing, I remain nervous and alert while it is on. In
reality, I use it way less than I used the vanilla system.

------
melling
This has been debated for years.

[https://insideevs.com/elon-musk-tesla-doesnt-need-lidar-
will...](https://insideevs.com/elon-musk-tesla-doesnt-need-lidar-will-tweak-
radar-to-be-lidar-like/)

[https://9to5google.com/2015/10/16/elon-musk-says-that-the-
li...](https://9to5google.com/2015/10/16/elon-musk-says-that-the-lidar-google-
uses-in-its-self-driving-car-doesnt-make-sense-in-a-car-context/)

------
woliveirajr
> "a stationary vehicle or object is in front of you instead"

So it isn't just that Tesla and Volvo don't see a stationary vehicle: they
might not see other objects as well.

Evil plan: you, in the front car, drive towards a wall, and swerve at the last
second to avoid the collision. The autopilot behind you will accelerate into
the wall.

Evil plan 2: you, in the front car, see a car/truck stopped in the lane. You
wait until the last second to swerve and avoid the collision; the autopilot
behind you will accelerate into it and crash.

~~~
dx034
Isn't "blinding" self-driving cars with radio waves much more dangerous? Not
sure how today's systems work in that case. But for a fully self-driving car,
losing its radar would probably cause emergency braking.

------
potta_coffee
I'm driving a '97 Miata now. No cruise, no ABS, basically no safety features
other than questionable 90's airbags. It does have an LSD which is great. I
find that being in touch with the road contributes to better control and safer
driving on my part than when driving bigger vehicles with automatic trans,
cruise, etc. Also driving with the top down feels pretty scary at first, and
being that exposed really makes me think about how I'm driving.

------
nkkollaw
I understand that these systems must ignore stationary objects or they'd brake
all of a sudden at every road sign you encounter, but is there no way to tell
a road sign (albeit a big one) from a fire truck that is 20 times bigger than
a road sign, and 2-3 times bigger than your car?

No one is going to pay that much attention if your car is on auto-pilot.
That's just how people are, no matter what's written in the manual (which no
one reads).

~~~
Retric
Road signs are huge.

Also the problem is a road sign or overpass over a crest of a hill. It's
stationary directly in front of you, the road simply curves away.

EX: Here is a car in-front of a road sign.
[http://iowahighwayends.net/ends/June06/16/34exit263_99_halfm...](http://iowahighwayends.net/ends/June06/16/34exit263_99_halfmi.jpg)
We assume the sign is not on the actual road, and the software makes the same
assumption. It's just not always true.

EX2:
[https://i.pinimg.com/736x/f7/11/eb/f711ebb462449a422151daab6...](https://i.pinimg.com/736x/f7/11/eb/f711ebb462449a422151daab63f3fa1c--chesapeake-bay-bridge-west-virginia.jpg)
again perfectly safe but it looks like a static object blocking the road up
ahead.

~~~
nkkollaw
> Road signs are huge.

As big as a fire truck, and in the middle of the street, at the same level as
your car!?

> Also the problem is a road sign or overpass over a crest of a hill. It's
> stationary directly in front of you, the road simply curves away.

Isn't the autopilot aware of where the street is, so it knows that you are not
going to hit it?

I'm not convinced.

Of course, I know there are a lot of much-smarter-than-me people working on
this for years, but I still don't get it.

~~~
Retric
Size varies, but it's not unusual for them to be 10 feet tall and 20 feet
wide. They can also be stacked on top of each other or placed side by side.

To give an idea individual letters can be 16" tall.
[http://onlinemanuals.txdot.gov/txdotmanuals/fsh/images/Figur...](http://onlinemanuals.txdot.gov/txdotmanuals/fsh/images/Figure%204-10.gif)
And sizes are rounded to nearest 1/2 foot.

~~~
nkkollaw
You ignored my other point: they are not in the middle of the street, at
the same level as your car...

~~~
Retric
I linked to two pictures with large static objects at the same height as parts
of the road.

~~~
nkkollaw
I don't see them. I've yet to see a road sign in the middle of the street,
where cars drive.

~~~
Retric
The first image has a road sign being blocked by a car heading directly at it.

You are making assumptions about the placement of objects in the image, but
the actual image in no way demonstrates the sign is not at street level.

------
yAnonymous
The amazing thing here is the "no injuries" part in a frontal crash at 65 mph.

------
michrassena
What is the safety record for semi-autonomous cars per mile driven? I worry
that even if it's better than human piloted vehicles, progress could be short-
circuited by the fact that since this is a novel technology, it makes the news
every time an accident occurs.

On the other hand, while it seems that we're on the cusp of an autonomous
vehicle revolution, and we'll see commercial users adopting early, consumers are
going to be wary. The failure modes are very different than what drivers are
used to. The systems appear "dumb" and make mistakes that humans rarely do. I
see more promise in the short term for collision avoidance systems, or other
systems which augment the driver's ability. For example anti-lock brakes have
been an almost complete win, almost to the point of invisible ubiquity. I'd
love to have something on-board my car that can help me safely avoid hitting a
deer at 75mph.

~~~
ghaff
>I'd love to have something on-board my car that can help me safely avoid
hitting a deer at 75mph.

I wouldn't hold my breath for that. Faster reflexes notwithstanding, automated
systems don't change the laws of physics. If anything, I'd expect a human--at
least during daylight hours--would be more likely to notice a deer at the side
of the road and react accordingly before they ran in front of the car.

------
dgreensp
This article is fluff/FUD when it comes to the implications about autonomous
driving. “Autopilot” is a marketing term for a kind of cruise control. The
article repeatedly tries to make the point that if “the best system available”
doesn’t stop for stationary vehicles, how will self-driving cars ever work?
However, Tesla’s Autopilot is not a self-driving system, it’s a cruise control
system. Emergency braking systems are also not self-driving systems. The
comparison is meaningless.

I say the same thing to the comments about the irresponsible “UX” of Autopilot
for not handling this case. It’s more about irresponsible marketing,
capitalizing on the enthusiasm for self-driving cars, potentially lulling
overzealous customers into trusting a cruise control system or emergency
braking system with their lives.

------
mixmastamyk
I don’t know the tech well, but the article doesn’t explain why it can’t
differentiate between say, a sign post on the side of the road and a large
object in the direct path of the vehicle, and slow down in the latter case.

~~~
cr0sh
One reason might be a lack of datasets.

There are a ton of datasets out there for things like traffic sign
identification, and vehicle/pedestrian/etc identification - but most of the
vehicles in those datasets are:

1. From the rear (or oblique rear angles)

2. Of mostly automobiles and small trucks

There likely aren't many real-world datasets of vehicles facing the vehicle
and stopped in the road, and/or of larger vehicles (buses, semi-
trucks/lorries, emergency vehicles, bulldozers/tractors, etc).

Without having such datasets, the AI you have trained won't have any way of
knowing about those objects, and they may be odd enough that they can't be
easily extrapolated from the datasets you do have to train on.

And by "dataset" I mean you need massive amounts of data; 50,000 images
would be a "small" dataset. Ideally you want many times that, plus you want to
"generate" synthetic views for more training data (by skewing/warping the
existing images, which helps the system generalize on the data).
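A toy illustration of that synthetic-view idea (flips and brightness jitter only; real pipelines also skew, warp, rotate, and crop, and operate on real image tensors rather than nested lists):

```python
import random

def augment(image, seed=0):
    """Generate simple synthetic variants of one image (a nested list of
    pixel intensities): identity and horizontal flip, each with a small
    random brightness shift, clamped to the valid 0..255 range."""
    rng = random.Random(seed)
    variants = []
    for flip in (False, True):
        shift = rng.randint(-10, 10)
        base = [row[::-1] for row in image] if flip else image
        variants.append([[max(0, min(255, px + shift)) for px in row]
                         for row in base])
    return variants

img = [[10, 200], [30, 40]]  # toy 2x2 "image"
variants = augment(img)      # 2 training samples from 1 original
```

Even this trivial version doubles the dataset; the real win is that warped views teach the model that a firetruck is still a firetruck from an unusual angle.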

That's a possible explanation, but I don't know if that's the reason, part of
the reason, or no reason at all. Plus, it's not to say they are "at fault",
or to absolve the driver of any responsibility.

I'm just throwing it out there, based on my knowledge and experience - which
isn't much, I admit (I completed the Udacity Self-Driving Car Engineer
Nanodegree last year, plus Udacity's CS373 course, and what is now the
Coursera Machine Learning course - but I took it as ML Class in 2011 before
Coursera existed).

~~~
mixmastamyk
Hmm, I would have expected something "more simple," like a sensor every 15?
degrees. When a medium+ object is headed at high speed towards 0 degrees, the
car should slow; if it's on a different vector, it can be ignored. The type of
object is somewhat irrelevant; e.g., you don't want to run into a road sign
either.

------
devit
Why not just have a system that produces a full 3D model of the surroundings
and their velocities (using something like Doppler radar) and then does a
geometric computation in 4D spacetime to find a path that doesn't intersect
the trajectory of anything (assuming their speeds are unchanged)?

That should prevent all such trivial accidents and would allow one to actually
mathematically prove that the software works, as should be required for such a
critical system.

Machine learning can be added for object detection to allow more sophisticated
trajectory prediction.
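Under the stated constant-velocity assumption, the core geometric check is simple enough to sketch; this closest-approach computation is illustrative only, and the hard part (which the replies get into) is everything the assumption leaves out:

```python
# Sketch of the "geometric computation": check how close two straight-line
# (constant-velocity) 2D trajectories get over a planning horizon.

def min_separation(p1, v1, p2, v2, horizon=10.0):
    """Minimum distance between two constant-velocity trajectories over
    [0, horizon] seconds. p* = (x, y) positions in m, v* = (vx, vy) in m/s."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]  # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]  # relative velocity
    speed2 = wx * wx + wy * wy
    # Time of closest approach, clamped to [0, horizon].
    t = 0.0 if speed2 == 0 else max(0.0, min(horizon,
                                             -(rx * wx + ry * wy) / speed2))
    dx, dy = rx + wx * t, ry + wy * t
    return (dx * dx + dy * dy) ** 0.5

# Ego car at the origin doing 25 m/s; stopped firetruck 100 m dead ahead:
# the separation reaches 0 within the horizon, so a collision is predicted.
d = min_separation((0, 0), (25, 0), (100, 0), (0, 0))
```

The math is the easy part; building the accurate 3D-plus-velocity world model that feeds it is where the expense and uncertainty live.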

~~~
Crespyl
Indeed, "why not just"; I can't tell if you're being sarcastic or not.

I'll assume the latter: Even if you were able to perfectly maintain an
accurate 3D model of the surrounding geometry (and associated trajectory of
all moving objects), it doesn't mean you have enough information to make 100%
accurate predictions. For example, a frozen-over road often has very nearly
exactly the same physical geometry and color, but extremely different behavior
for cars trying to drive on it. Other weather effects like fog or snow bring
their own challenges.

You're right though, that precise and accurate 3D models _do_ do a lot to make
driving easier and more correct, the problem is that the hardware necessary to
produce those models is still very expensive (LIDAR). Most (all?) of the
advanced self-driving prototype cars use LIDAR, while Tesla has decided that
they can do without for their consumer models, sticking with just the simpler
and less expensive RADAR and cameras.

In theory, Tesla's right, since humans get by just fine with only two
"cameras", but we're clearly a long way from automating that level of
performance.

~~~
devit
Using multiple separate emitters and detectors, it should be possible to
determine how every point reflects light (i.e. sample its BRDF for a limited
angular range and set of wavelengths), which should allow the system to
distinguish at least between a limited set of materials such as snow, solid
ice, liquid water and asphalt.

Cameras alone seem unsuitable, since it doesn't seem possible to make a
camera-based system that works correctly all the time, ideally provably so, as
required for a life-critical system.

------
ajmurmann
A year or so ago there was an article about what sounds like exactly the same
problem with radar: overhead traffic signs and very reflective small objects
confusing it. The argument was that Tesla is in a great position to deal with
it because they have so much human driver data: if humans keep driving through
the giant obstacle, it's probably not a problem. Apparently Tesla instead
decided to ignore the problem...

------
jasode
_> These systems are designed to ignore static obstacles because otherwise,
they couldn't work at all._

 _> But it also detects lots of things a car rolling down the highway needn't
worry about, like overhead highway signs, loose hubcaps, or speed limit signs.
So engineers make a choice, telling the car to ignore these things and keep
its eyes on the other cars on the road: They program the system to focus on
the stuff that's moving._

I don't know about the sensors on today's semi-autonomous cars but it seems
like there's already enough data there to prioritize _collision avoidance_
over forward motion.

It seems like the Wired story is very incomplete about the details. It needs a
more "Ars Technica" in depth treatment or more ideally, an actual Tesla
engineer to explain the self-driving computer's decision tree.

How do today's sensors that advertise their ability to "maintain distance"
between cars work?

What's the difference between the following two scenarios to the front-mounted
sensor?

    
      - 60 mph Tesla slamming into a slow-moving road-cleaning vehicle at 20 mph
      - 40 mph Tesla slamming into a stopped vehicle at 0 mph
    
Isn't the ongoing computation of the closing speed (dx/dt) to determine a
"safe" gap between cars the same in both cases?
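
(From the radar's point of view the two scenarios really are equivalent: both
have a 40 mph closing speed, so the time-to-collision is identical. A minimal
sketch, with the 50 m range chosen arbitrarily for illustration:)

```python
# Sketch: the radar observes only range and closing speed. Both scenarios
# above produce the same closing speed, hence the same time-to-collision.
def time_to_collision(range_m, ego_mph, target_mph):
    """Seconds until impact if neither vehicle changes speed."""
    closing_mps = (ego_mph - target_mph) * 0.44704  # mph -> m/s
    if closing_mps <= 0:
        return float("inf")  # not closing; no collision on this course
    return range_m / closing_mps

# 60 mph Tesla vs 20 mph road cleaner, 50 m ahead
print(time_to_collision(50, 60, 20))  # ~2.8 s
# 40 mph Tesla vs stopped vehicle, 50 m ahead
print(time_to_collision(50, 40, 0))   # identical ~2.8 s
```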

EDIT to summarize replies with a possible technical explanation...

The resolution of the radar sensor treats overhead signs (like highway
signs[1]) as being on the "same 2-dimensional plane" as cars directly in
front of you. This would generate false positives. It's not a 3D radar, which
would yield spherical coordinates for impending collisions. Without elevation
data, you can't write a rule that ignores objects that aren't directly ahead
of the car.

The resolution of radar treats surface level signs (such as the double-
arrow[2] at T intersections) as being the same harmless mass as a stationary
car. The low resolution cannot distinguish between the shape/footprint of
_small_ signs that are supposed to be there vs (large) stationary cars that
are not. This object categorization requires LIDAR instead of radar.

Therefore, programming an unambiguous algorithm to prioritize "collision
avoidance" is not possible with the current radar sensors. Is that an accurate
summary of the technical limitations?

[1]
[https://www.google.com/search?q=overhead+green+highway+signs...](https://www.google.com/search?q=overhead+green+highway+signs&source=lnms&tbm=isch)

[2]
[https://www.google.com/search?q=2+direction+traffic+sign+dou...](https://www.google.com/search?q=2+direction+traffic+sign+double+arrow&source=lnms&tbm=isch)

~~~
Symmetry
The difference is that if all you have is a radar telling you that something
is moving at 20mph then you can guess that it's probably a vehicle in the road
and you should brake. But if there's something that's stationary at 0 mph it
might very well be an overhead sign or otherwise something that you don't have
to avoid. A radar can be very precise in detecting distance and relative
velocity but very bad at detecting which direction something is in. For the
sort of non-dish radars used in cars I'm not even sure they can detect
direction at all.

Ignoring things that aren't moving is a standard technique in radar to prevent
your returns from being swamped by, e.g., the ground instead of fast moving
planes you're looking for.

~~~
chowells
To add even more detail, as there still appears to be a lot of confusion:

The radar systems in these vehicles send out a radio pulse in a broad
approximate-cone forward. They get bounces back from everything that reflects
radio in front of them. Distance from the object is calculated by time between
pulse and response. Speed towards/away from the object is calculated from
Doppler shift of the radio frequency.

There are two main things that these systems can't detect.

1. Speed of the object perpendicular to the direction of radio wave travel.

2. Location of the object within the approximate cone the radio pulse travels
in.

Note that thanks to the second, you can't calculate the first with higher-
level object tracking, either.

So the data you get back is a list of (same-direction velocity component,
distance) pairs. There's no way to distinguish between stationary objects in
the road and stationary objects above the road, to the side of the road, or
even the surface of the road itself.

Radar just doesn't provide the directional information necessary to handle
obstacle detection safely.
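
A minimal sketch of the ambiguity chowells describes, with invented ranges and
speeds: a stopped firetruck and an overhead gantry sign produce identical
(range, closing-speed) returns, so the standard "drop stationary returns"
filter discards both.

```python
# Sketch: a radar return is just (range, closing speed). With the ego car at
# 40 mph, anything stationary relative to the ground -- firetruck or overhead
# sign -- closes at exactly ego speed, so the returns are indistinguishable.
EGO_MPH = 40

def radar_return(range_m, object_ground_mph):
    closing_mph = EGO_MPH - object_ground_mph
    return (range_m, closing_mph)

firetruck = radar_return(80, 0)   # stopped vehicle in the lane
sign      = radar_return(80, 0)   # overhead sign (also stationary)
lead_car  = radar_return(80, 35)  # slow car ahead

print(firetruck == sign)          # True: no way to tell them apart

# The standard mitigation: drop returns closing at ~ego speed (i.e.
# stationary relative to the ground). That filters out signs -- and firetrucks.
returns = [firetruck, sign, lead_car]
moving = [r for r in returns if abs(r[1] - EGO_MPH) > 2]  # keep only movers
print(moving)                     # only the lead car survives
```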

------
rexreed
Isn't the solution some sort of collision-avoidance system like the ones used
in airplanes? Or perhaps, as a stop-gap, some sort of add-on (passive?)
device that operators of frequently stopped vehicles can use to signal to
autopilot vehicles that they are an obstruction to be avoided? It's a hack,
but perhaps a necessary one?

------
adav
Why doesn't the camera system take over from the radar sensor to spot the
stopped vehicle?

------
tbirrell
I think this shows that autopilot won't truly be a thing until cars start
talking to each other. If the firetruck were broadcasting a stationary
signal, the autopilot would not have to rely on cameras and radar to figure
out what is where.
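
A hypothetical sketch of the kind of beacon described above. The message
fields and the crude proximity check are invented for illustration; real V2V
standards (e.g. SAE J2735's Basic Safety Message) define far richer formats:

```python
# Hypothetical V2V beacon: a stopped vehicle broadcasts its position, and the
# autopilot checks whether any stationary beacon lies near its path.
from dataclasses import dataclass

@dataclass
class StoppedVehicleBeacon:
    vehicle_id: str
    lat: float
    lon: float
    stationary: bool

def obstruction_ahead(beacons, ego_lat, ego_lon, radius_deg=0.001):
    """Very crude check: any stationary beacon within ~100 m of our position?"""
    return any(
        b.stationary
        and abs(b.lat - ego_lat) < radius_deg
        and abs(b.lon - ego_lon) < radius_deg
        for b in beacons
    )

beacons = [StoppedVehicleBeacon("firetruck-7", 37.7750, -122.4194, True)]
print(obstruction_ahead(beacons, 37.7751, -122.4195))  # True
```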

------
DiThi
IIRC the Model S only has cameras while the Model 3 also has radar (maybe in
response to the first death on Autopilot, where the system didn't see a white
truck against a white sky).

------
Symmetry
I'm sort of surprised that the camera wasn't able to identify an object based
on visual flow, unless this happened at night?

------
mikelbring
I think we need some sorta public domain tech that allows car sensors to talk
to each other in real time.

