
Self-Driving Cars Have a Problem: Safer Human-Driven Ones - Bostonian
https://www.wsj.com/articles/self-driving-cars-have-a-problem-safer-human-driven-ones-11560571203?mod=rsswn
======
ChicagoBoy11
As a private pilot it astounds me how the self-driving car industry has
seemingly taken every lesson we've learned from automation in aviation and
chucked it out the window in these systems.

There's a "trust but verify" element of airplane automation that is ingrained
in you as a pilot. The ML aspect of self-driving cars generally makes them a
lot harder to reason about and predict. That's problem 1. Problem 2 is speed:
even in the airline world, there are published minimums at which pilots need
to be absolutely certain that the AP is doing the right thing before they
commit to land, even in cases where the autopilot is responsible for the
entire landing (Autoland is a capability that a lot of airliners have had for
many decades).

What we're talking about here is several seconds of heads-up, with visual AND
instrument confirmation that the computers are doing what they're supposed
to, before we allow the plane to land. Not only that, but we bend over
backwards to ensure that there is all kinds of redundancy and procedure in
place to guarantee the accuracy of the system: multiple instruments using
different kinds of technologies confirming one another, greater separation
between aircraft to guarantee ILS radio beacon accuracy, etc. Conversely, in
the self-driving world, we have ML algorithms making split-second decisions,
with no real way of informing the driver what those decisions are based on,
in an environment that is far less predictable and constantly changing, with
someone who is not a professional behind the wheel.

~~~
JamesBarney
Autonomous cars don't have to be nearly as safe as airplanes, because
existing cars are orders of magnitude less safe than airplanes.

~~~
jandrese
Isn't that one of the problems autonomous cars are trying to solve?

~~~
kemitche
The point is that cars don't need to make an immediate jump from "current
levels of safety" to "plane levels of safety."

Anything that moves the needle noticeably closer to "plane levels" is a solid
improvement in terms of reducing unnecessary deaths.

~~~
perl4ever
Why do you think that autonomous cars will start by replacing the worst
drivers, rather than replacing professional drivers, who are much better than
the average person? Isn't the biggest potential initial demand for autonomous
cars people who don't own cars and currently must choose between bus and taxi?

------
IgorPartola
Here is my proposal for self-driving cars: forget trying to make them drive
on local roads at first. Add transponders to all interstate tarmac that can
be used to detect location and lane position. Then use a much cheaper LIDAR
just to map where the other cars are. Boom: you have made long-haul driving
autonomous. You can then slowly expand this to smaller roads. But why try to
recreate human drivers, who gather and process visual information faster and
with better fidelity than machines can, when machines can instead use much
better sensors than our eyes?

~~~
agildehaus
Because yours is a ludicrous proposal. It'd be far more expensive to modify
infrastructure than to build a computerized driver. Not only is the
modification expensive, you have to convince the local authority to foot the
bill, do it right, maintain it, etc. Never going to happen.

Your proposal also lacks reliability. What if one of these "transponders"
fails? We can build redundancy into a self-driving car quite easily (a second
computer, never relying on just one sensor). Providing redundancy to road
infrastructure planet-wide is ... a much larger problem.

~~~
briatx
You could easily convert HOV lanes into Automated Driving lanes. And if we
have the ability to fund HOV lanes, we could also fund Automated Driving
lanes.

Self-driving cars are already dependent on standard infrastructure markings
such as lane lines, and when those markings are confusing or faded it has led
to fatal crashes, such as the Tesla crash on 401.

I predict self driving will not fully succeed until we build infrastructure to
support it into the roads.

~~~
agildehaus
> Self-driving cars are already dependent on standard infrastructure markings
> such as lane lines, and when those markings are confusing or faded it has
> led to fatal crashes, such as the Tesla crash on 401.

A Tesla is not a self-driving car and they won't become one with that
approach.

Waymo vehicles have virtual maps that include the lines (along with a LOT of
other data) so they are not affected by fading lines as they always know where
the lines are supposed to be.

Basically Waymo is building the infrastructure you all want, but in a virtual
sense instead of the extremely expensive and impossible physical one.

~~~
briatx
> Waymo vehicles have virtual maps ...

You only need one reality <-> virtual desync to cause a crash.

~~~
agildehaus
Well no, it's not like the map is the only thing used.

------
pjc50
A human paying attention assisted by a computer paying attention is always
going to be safer than either separately. It's just that a _mostly_ self-
driving system erodes human attentiveness.

And we're not going to get to a system that's so much safer as to render human
attention irrelevant without a much more solid and theoretically sound
approach to safety engineering.

~~~
Recurecur
"A human paying attention assisted by a computer paying attention is always
going to be safer than either separately."

No. Stating something as a fact without analyzing it in the slightest is bad.

Example:

(1) Only a computer is "paying attention", performing the "assisted emergency
braking" role. The computer has an effective reaction time of 1/100 of a
second. Within 2/100 of a second of a qualifying event, the vehicle will be
in full antilock brake mode.

(2) A human and a computer are "paying attention", performing the "assisted
emergency braking" role (this is a little silly in that the human is already
the primary brake operator...). The computer still has a 1/100 s reaction
time, but the human's reaction time is 25/100 s. What exactly is the human
bringing to the table? It is true that the human may detect an actual threat
the computer doesn't...however, that slow reaction time means it likely won't
make a difference in the outcome. On the other hand, the human may "detect" a
false alarm, and emergency brake by mistake. That activity often leads to
wrecks!
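Those two reaction times can be compared with a quick back-of-the-envelope
sketch (the 1/100 s and 25/100 s figures are the ones assumed above; the 70
mph highway speed is an added assumption for illustration):

```python
# Back-of-the-envelope: distance covered during the reaction delay alone,
# before the brakes are even applied. Braking distance after that point is
# the same for computer and human, so the reaction delay is the difference.

MPH_TO_MS = 0.44704  # metres per second in one mile per hour (exact)

def reaction_distance_m(speed_mph, reaction_s):
    """Distance travelled (in metres) during the reaction delay."""
    return speed_mph * MPH_TO_MS * reaction_s

computer = reaction_distance_m(70, 0.01)  # ~0.31 m at 70 mph
human = reaction_distance_m(70, 0.25)     # ~7.8 m at 70 mph
```

At 70 mph the human's reaction delay alone eats up roughly an extra car
length and a half before any braking happens, which is the core of the
argument above.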

I submit that the computer-only emergency brake assist is safer than the
combination of human and computer. Further, all of these "human in the loop"
systems suffer from a fatal flaw - the inability of people to pay attention
unless it's absolutely crucial.

The way people use Tesla Autopilot shows that full autonomy is necessary, and
will in fact be safer than any "assisted" system.

~~~
radcon
> I submit that the computer-only emergency brake assist is safer than the
> combination of human and computer

You're choosing to ignore the fact that computers still make mistakes.

My friend's car (an Acura) recently slammed on the brakes @ 70mph because it
mistakenly thought a car in an adjacent lane was in his lane. Had there been
someone following close behind him, it probably would've caused an accident.

This is not an uncommon occurrence, especially with Honda/Acura's system. You
can find tons of complaints online about AEB systems reacting to false
positives.

~~~
Recurecur
"You're choosing to ignore the fact that computers still make mistakes."

No, but I'll unequivocally assert that the human false alarm rate is much
higher than that of computers.

"My friend's car (an Acura) recently slammed on the brakes @ 70mph because it
mistakenly thought a car in an adjacent lane was in his lane. Had there been
someone following close behind him, it probably would've caused an accident."

Does it have a readout indicating what it "thought"?

It seems possible that the car in the other lane drifted, and the computer
thought it was changing lanes into the Acura. Braking was appropriate in that
circumstance.

"This is not an uncommon occurrence, especially with Honda/Acura's system. You
can find tons of complaints online about AEB systems reacting to false
positives."

That may be, and I suspect a fix will be forthcoming which will address every
Honda/Acura on the road (or new models, worst case).

Meanwhile, humans will continue to look down at a text, hamburger, or
whatever, and then panic when a few seconds later they look up and think
they're in trouble. Not to mention driving tired, drunk, high, angry, or
stupid.

------
mcguire
" _ultra-detailed, centimeter-accurate maps of much of the U.S. highway
system_ "

Is this a joke? _The roads_ aren't centimeter-accurate from day to day.

------
seibelj
The industry’s PR team is starting to place stories about why self-driving
cars will take longer than they promised their investors. Tens of billions of
dollars invested and it’s still decades away. I guess we blame other drivers
now(?)

~~~
Recurecur
"Tens of billions of dollars invested and it’s still decades away."

"Decades away"? LOL

Fully autonomous vehicles are already on the road in large numbers. See Waymo
in AZ and Voyage in Florida...

The economic and safety incentives are huge. I predict by 2030 over half of
new vehicles sold will have L5 autonomy.

~~~
glogla
Also see Uber, where they killed a woman, faked evidence (purposefully making
the video look darker), and got away with it.

~~~
mdorazio
I must have missed the part of vehicular accident law where people hit while
jaywalking at night aren't at fault for the accidents. By your logic, the
1000+ jaywalking fatalities every year must all result in human drivers
"getting away with it".

Uber messed up big time in multiple ways, but legally it's pretty clear that
the pedestrian was at fault in this case, not Uber.

------
seqastian
Human vs. human road interaction is mostly based on trust, and it usually
works out because most humans are sane and healthy while driving.

The same thing cannot work with machines, because machines are bad at
detecting when humans are not all right, and humans are bad at trusting
machines.

As soon as we accept that the safe way to operate in close proximity to
others is to do it at much slower speeds, we will have very safe roads with
humans and robots behind the wheel.

~~~
ocdtrekkie
Speed is not the issue, difference in speed is the issue. Road safety actually
improves when you raise the speed limit to the speed most drivers are actually
driving.

~~~
AdamHede
This is only true in a very theoretical sense. Going 180 km/h will, nine
times out of ten, be more dangerous than going 120 km/h. Reaction time,
braking distance, equipment requirements: everything is tougher at higher
speeds.

~~~
dsfyu404ed
If by "theoretical sense" you mean "doesn't fit your personal world view" then
sure. The safest speed limit being one that the overwhelming majority of
people would naturally follow were it not posted is basically considered fact
in the civil engineering world. There is study upon study backing it up.

The desire to minimize the speed at which crashes happen literally costs lives
when applied in the real world because reducing the frequency of crashes is
the superior option.

~~~
SketchySeaBeast
> The desire to minimize the speed at which crashes happen literally costs
> lives when applied in the real world because reducing the frequency of
> crashes is the superior option.

While I understand that minimizing crashes is important, I'd rather be faced
with scenarios where we crash in a manner people survive more often; any
accident that does happen at the higher speed is going to skyrocket the
chances of a fatality.

------
mannykannot
This is an odd way of looking at it. Far from being a problem for self-driving
cars, the development of ever-more capable assistance and warning technologies
is the rational way to go about refining the technologies that will be needed
for fully self-driving cars.

This situation is only a problem for those manufacturers who want to pass off
partial autonomy as the real thing.

~~~
adrianmonk
I basically agree with you, but after some head scratching, I think I follow
their argument, which seems to be this:

1\. Lack of higher reasoning is a disadvantage that self-driving cars won't
escape any time soon.

2\. But SDCs can more than make up for that by making fewer dumb mistakes,
giving them a better _overall_ safety.

3\. However, if you use machines to take away the dumb mistakes from humans,
you change that equation, and the overall stats could go the other direction.

In reality, I don't think it's that simple. When ABS was newer, they studied
it
([https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...](https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/811182))
and found that ABS improved overall safety, but safety paradoxically got worse
in slick conditions (rain, snow, ice). The effects of safety improvements
aren't always what you'd expect, so theorizing doesn't tell you much.

------
YeGoblynQueenne
>> Initially, all fully self-driving vehicles will be Level 4—that is, they
have to be in geographically constrained areas, and will only operate in good
weather, as does Waymo’s fleet of self-driving vans that it is testing in
Phoenix. Truly autonomous, aka Level 5, cars are still science fiction.

Nobody has actually created a level 4 system yet, not even a prototype, let
alone one ready for production. So level 4, too, is still science fiction. The
same goes for level 3, actually. It's science fiction. And so are claims like
the following:

>> Researchers at Cleveland State University estimate that only 10 to 30
percent of all vehicles will be fully self driving by 2030.

2030 is ten years from now. In ten years, we'll reach "full self-driving"?
Waymo was founded ten years ago and its cars are still at level 2 (allegedly
trying to "jump over" level 3 and go straight to level 4). How are we going
to be suddenly, magically transported from level 2 to level 5 in the next ten
years, when we haven't budged from level 2 in the last ten?

------
vikramkr
If fully self-driving cars are ever going to be safer than humans, which I
think is almost a given, then this is not a problem so much as a great new
way to monetize that tech en route to full self-driving cars. If full
autonomy isn't going to be better than self-driving tech + a human at the
wheel, then oops.

~~~
simion314
The self-driving tech would still be used for driver-assist functions. If it
were possible today to replace all the cars in the world with new models that
include all the safety features and a system to detect a drunk or sleepy
driver, I think we would get better statistics than Tesla's or Google's AI
(for the same driving conditions).

------
rexgallorum2
Just a few points:

1\. Addressing (and reducing) major causes of road accidents and fatalities
such as impaired and/or distracted driving (alcohol, drugs, sleep deprivation,
inattentiveness, fiddling with phones, radios, touch screens, etc.) would make
a tremendous difference, and automated driving systems and sensors could
potentially intervene in such situations without necessarily being online all
the time.

2\. Inattentiveness and distraction caused by (over)dependence on automated
systems (as in jets with autopilot) could become a major hazard. This is
already a problem with road designs that minimise driver engagement by
removing obstacles to traffic and designing roadways entirely for cars
(compare European style traffic calming to US style widening of lanes and
rounding of corners).

3\. As mentioned above, infrastructure and urban planning practices are major
issues, both in terms of maintenance and design. Automating passenger cars may
improve safety in some respects, but perpetuating the dominance of individual
motor vehicles (and the vast infrastructure outlay they require) as the
dominant mode of transport is probably the wrong approach. Gradually
transforming urban planning and design to promote mass transit, reduce
commuting, and accommodate bicycle and pedestrian traffic would generally
reduce the need to use cars in the first place. One difference between
traffic death statistics in countries like Germany and the US is that in much
of the US virtually everybody has to drive long distances on a daily basis,
with few if any alternatives, and urban sprawl encourages lengthy detours to
travel trivial distances between fully separated
residential/commercial/industrial zones, whereas in urban Germany, for
example, there are many alternatives and driving is not a necessity (compare
death rates per capita vs. per km driven). In Germany and much of Europe,
driving is and has always been a privilege, one that requires a degree of
skill to earn, whereas in the US driving is viewed more or less as a right,
something everyone does and has to do on a daily basis: it's easy to get a
license, you only lose it for serious infractions, and even then the
penalties for driving without a license (as many people do!) are
comparatively minor.

What I am getting at is that the best way to tackle road safety and
environmental problems is to gradually abandon the passenger car as the
dominant mode of transportation. Hybrid automated/human-operated vehicles
could be a great improvement in the short term, but using bikes would be
better in the long term. Freight transport might be another matter though.

------
dejaime
Automatic emergency braking and the other features of "safer human-driven"
cars are not making the human-driven aspect any safer, but rather putting the
working and stable parts of autonomous vehicles to use as tools for unsafe
human-driven vehicles. That said, this approach will obviously fare better
than _fully_ autonomous vehicles for now, but it is still just a part of
autonomous vehicles in general. In this sense, these cars are not "a problem
self-driving cars have" as the title implies; they are actually a stepping
stone for self-driving cars.

------
torpfactory
I suggest everyone take a look at the most recent available NHTSA data to help
create informed opinions on the topic of car safety with respect to fatal
accident causes:

[https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...](https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812580)

My personal takeaways:

1) The sum of speeding + alcohol is 55% of the total number of fatalities.
Self driving cars won't get drunk (we hope) and speeding is something which
could certainly be limited in software. Lots of reasons to believe self
driving will make a significant impact here, whenever it finally arrives.

On the other hand, you don't really need self-driving to prevent these. You
could force all vehicles to have a breathalyzer interlock before driving
(assuming these devices could be made very accurate and not spoofable; I
believe both are technically feasible goals). Or just enforce the living
daylights out of it: drive drunk, lose your license for the rest of your
life. Not nice, I suppose, but neither are ten thousand deaths per year.
Speeding is a similar story. Why not put a GPS device in every car (I think
this may already be the case) and geo-fence speed limits? Technically very
feasible. Some people will claim you need to speed sometimes, but I believe
these arguments are total bullshit. Ambulances may need to speed sometimes.
Or again, just enforce the living daylights out of it: go more than 5 MPH
over the speed limit, and your license is gone for the rest of your life. Not
nice, I suppose, but neither are ten thousand deaths per year.
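The geo-fencing idea above is mechanically simple. A minimal sketch, assuming
a lookup table of rectangular zones (the zone boundaries, limits, and
fallback value here are invented for illustration; a real system would query
signed map data and degrade safely without a GPS fix):

```python
# Minimal sketch of a geo-fenced speed governor: look up the speed limit
# for the current GPS position and cap the requested speed at that limit.

ZONES = [
    # (min_lat, max_lat, min_lon, max_lon, limit_mph) -- hypothetical zones
    (41.80, 41.90, -87.70, -87.60, 30),  # an urban zone
    (41.90, 42.10, -87.70, -87.60, 65),  # a highway zone
]
DEFAULT_LIMIT_MPH = 55  # fallback when the position matches no known zone

def posted_limit(lat, lon):
    """Return the speed limit for a GPS position (first matching zone wins)."""
    for min_lat, max_lat, min_lon, max_lon, limit in ZONES:
        if min_lat <= lat < max_lat and min_lon <= lon < max_lon:
            return limit
    return DEFAULT_LIMIT_MPH

def governed_speed(requested_mph, lat, lon):
    """Cap the driver's requested speed at the geo-fenced limit."""
    return min(requested_mph, posted_limit(lat, lon))
```

The hard parts are not the lookup but keeping the zone map current, handling
GPS dropouts, and deciding what the fallback limit should be; the point is
only that nothing here requires full autonomy.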

We probably won't do any of the non-self driving things I mentioned above
because they are politically untenable in America. Death on the roads is such
a normalized facet of modern life that most people don't really consider the
alternative: limited loss of freedom with many fewer deaths and injuries.

2) The wearing of seat belts seems to still be a big problem for some people:
44% of motor vehicle occupants who died were not wearing one. No self-driving
needed here. In fact, this problem is somewhat orthogonal to self-driving;
you would still want to be restrained in a self-driving car. It would be easy
enough to create an occupant sensor and seat-belt interlock to operate the
car. The car already beeps at you if you're not wearing it. Why not take the
beeping a step further?

3) Running into cyclists (both motorized and human-powered) is also a
reasonably large problem. Self-driving will almost certainly help here
(Uber's incident notwithstanding), but there are non-technological solutions
to consider. What about increasing the liability for drivers who injure or
kill cyclists? What about steep criminal penalties for hitting a cyclist?

4) There isn't even a category for "mechanical failure". Cars don't really
crash due to problems with the car per se, or at least not at rates that
matter compared to the others.

5) Rural driving seems pretty dangerous - 50% of fatalities but only 19% of
the population lives there. I don't think they dig into the details here but
I'd be interested to see the cause breakdown for rural users. Growing up in
Wisconsin, driving home drunk from rural bars was basically ubiquitous. I
always advise to stay off the roads around bar time if possible.

It is important to consider that our roads are unsafe mostly because of the
system of policies we have set up governing them. We still aren't serious
enough about drunk driving. The debate around safety and speeding is hardly
even happening. Hitting and killing non-automobile road users is often just a
traffic citation. The licensing system is a joke. In what other safety-
critical certification system can you take one exam at age 16 and then renew
your certification for the rest of your life without any kind of additional
training or examination? There is a ton we _could_ be doing outside of self-
driving technology to make the roads safer.

Personal preferences on the above discussion: FOR self-driving cars, even
ones that are only marginally safer than humans. FOR geo-fenced speed limits.
FOR breathalyzers in all vehicles. FOR steep penalties for drunk driving,
speeding, and killing non-automobile road users. FOR much more stringent
licensing requirements. Death on the roads ought not be a feature of modern
life.

~~~
rexgallorum2
A major problem with your idea about self-limiting vehicle speeds is that
speed limits are usually safe driving speeds under ideal conditions only.
Speed itself isn't really the problem, but rather poor judgement and unsafe
driving habits (often including speeding).

Speed is also an interesting topic in that speed limits are often kept
artificially low in order to generate revenue. Any automated system to force
cars to respect limits would likely meet resistance from municipalities that
are dependent on traffic fines for revenue.

High speeds are allowed on highways in Germany, but only in certain areas,
and German drivers are extremely disciplined with regard to lane changing and
the 'hierarchy' of the road. However, traffic deaths are lower than in the
US, even when adjusted to reflect deaths per km traveled. Driver discipline
and skill undoubtedly play a role here, but infrastructure quality and
maintenance (and urban planning!) are also a big issue. US roads are
comparatively poorly designed and maintained, and outdated engineering and
planning practices are still widespread (widening streets and removing
barriers instead of using traffic calming; perhaps counter-intuitively, the
former causes more serious accidents by encouraging speeding and disengaged
driving).

I tend to think that at least in urban areas of the US, ditching zoning and
onerous anti-alcohol laws (i.e. putting pubs within easy walking distance of
where people live) would drastically cut down on drunk driving. Rural areas
are of course different.

Just my two cents

------
happppy
I DO NOT FEAR MACHINES!!!

