
Google Says It’s Not the Driverless Car’s Fault. It’s Other Drivers’ - gwintrob
http://www.nytimes.com/2015/09/02/technology/personaltech/google-says-its-not-the-driverless-cars-fault-its-other-drivers.html?hp&action=click&pgtype=Homepage&module=second-column-region&region=top-news&WT.nav=top-news&_r=0
======
jerf
'“It’s always going to follow the rules, I mean, almost to a point where human
drivers who get in the car and are like ‘Why is the car doing that?’” said Tom
Supple, a Google safety driver during a recent test drive on the streets near
Google’s Silicon Valley headquarters.'

This is a drum that, like privacy and encryption, we techies need to start
banging hard before it's too late. The fact that laws are _fundamentally_
limited by our abilities to enforce them, and that laws are _fundamentally_
written to be enforced by humans with some discretion, is such a deep
assumption for us that we don't see it. We irrationally tend to conceive of a
law that says "It is a felony to wear your hair in a ponytail on a Tuesday" as
an absolute, even though what it really means is "If a police officer spots
you with your hair in a ponytail on a Tuesday, and by their discretion arrests
you, and at the discretion of a prosecutor charges you, who at the discretion
of a judge and jury convicts you, you will be guilty of a felony." _All_ laws
actually say that, but that's not how we humans speak or conceptualize them.
Also note the importance of a human even spotting the violation in the first
place.

So, naturally, when we get technology that can rigidly enforce law, we don't
program them with the "real" law, we program them with the _text_ of the law.
But the truth is, _the text was never passed as a law_. When the law was
deliberated, everybody was _really_ deliberating the version that involves all
the human discretion, and prior to the introduction of computers, the law that
was really enforced was the one with all the human discretion. A rigidly-
enforced-by-computer law is a _fundamentally different law_, in the most
concrete way possible; actions for which you would never have been penalized
under the human law result in immediate penalty in the computer world. There's
no more
concrete way to prove the point that it is a fundamentally different law.

We CAN NOT simply translate the text of the law into computer programs. If we
are going to do computer enforcement of laws (and I'm going to drop the
question of whether that's a good idea, feel free to debate in replies), we
_must_ write new law. By that, I do not merely mean that society has an
obligation to write new laws, but that the very act of programming a law _is_
writing a new law into existence, one never deliberated by the processes of
democracy. We _literally_ can not simply translate laws into programs. It is
impossible. The result is not the same law.

~~~
olalonde
I'm playing the devil's advocate here but another alternative would be to
write laws that are meant to be literal and enforced 100% of the time.

~~~
krapp
That's basically a form of authoritarianism.

Also, this is essentially the underlying premise behind zero-tolerance
policies and three-strikes drug laws with mandatory sentences, which view
interpreting
the law based on context as an impediment to enforcement, the solution for
which is removing the ability of judges to do so. You can't have a just legal
framework in which the only role of humans is to be punished or to exact
punishment.

~~~
Retra
Your conclusions don't necessarily follow. Your interpretation of that post is
exactly the kind of problem being addressed here; your idea of how laws
currently work is preventing you from seeing how to change them.

Basically, what you have to do is codify human discretion. If a judge wants to
say "no, this really isn't a problem," he should be able to directly add that
exception to the written law, and then enforce it as written. It's basically a
kind of open source law. The question becomes how to distribute authority.
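
A toy sketch of what "directly adding the exception to the written law" might
look like, with the statute and every judge-added exception living in one
machine-readable record (the names here are invented, not any real system):

```python
from dataclasses import dataclass, field

@dataclass
class Law:
    text: str                                       # the statute as written
    exceptions: list = field(default_factory=list)  # predicates added by judges

    def applies(self, case: dict) -> bool:
        """The law applies unless some recorded exception covers the case."""
        return not any(exc(case) for exc in self.exceptions)

# The ponytail statute from upthread, plus one judge-added carve-out.
law = Law(text="It is a felony to wear a ponytail on a Tuesday")
law.exceptions.append(lambda case: case.get("medical_reason", False))

print(law.applies({"day": "Tuesday"}))                          # True
print(law.applies({"day": "Tuesday", "medical_reason": True}))  # False
```

The point is that the discretion itself becomes recorded and versionable,
which is what would make automated enforcement auditable.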

~~~
krapp
> he should be able to directly add that exception to the written law, and
> then enforce it as written. It's basically a kind of open source law. The
> question becomes how to distribute authority.

So basically, more or less what happens now, only with more bureaucracy and
complexity? I'm sorry, but I don't see the benefit in that case.

~~~
Retra
The benefit is that you can better automate it.

But if that line of reasoning were good enough, then the US made a mistake by
rejecting the rule of a monarch. Democracy is rife with managerial issues.

~~~
krapp
The typical premise behind automation is to remove the (ethical and financial)
burden of a human workforce while keeping the benefits provided by human
labor. For most business processes this is acceptable (inevitable loss of jobs
notwithstanding.)

But when your 'product' is violence on behalf of the state (in the anarcho-
libertarian sense of 'violence' being any non-voluntary interaction with a
person or power structure, sometimes involving actual violence) then the
ability of humans to mediate, alter and affect that system becomes a feature
as well as a bug. This necessarily limits the speed of the law and the degree
to which it can be optimized, and remain humane, because removing humans from
that system tends to merely concentrate power in the hands of the few humans
necessary for it to function.

And then the question becomes: if humans can write the laws, and humans can
update the laws, and humans are enforcing the laws, with discretion in doing
so, then what really is being automated, or optimized, other than
recordkeeping? And for that matter, what does law as a programming language
actually _improve_?

~~~
Retra
>then what really is being automated, or optimized, other than recordkeeping?

 _Other_ than record keeping? Why does it need to go beyond that? You don't
want transparency in your laws? You don't want the ability to format and
translate them as needed? You don't want the ability to do queries and
comparisons? You don't want machines to be able to update their own behavior
when the law changes?

I'm not talking about automating anything other than record keeping.

------
seibelj
I'm a big self-driving cars skeptic. Expecting humans to "drive less idiotic"
is basically saying that google's cars won't work until human drivers are
banned. Driving is something of an art, and it's full of edge cases. So many
edge cases.

I'll just keep reading articles that say they are 5 years away, just as I did
5 years ago, and the armchair futurists will keep dreaming...

~~~
baddox
I can appreciate the claim that driving is an art, full of edge cases. But
that claim on its own conceals the reality that _humans_ are extremely bad at
it. A ludicrous number of people die in automobile accidents. While it may be
a long while before self-driving cars can match the prowess of an excellent
driver, much less a professional one, I doubt we're far off from being able to
exceed the ability of the _average_ driver.

~~~
ghaff
Define ludicrously bad. Yes, there are a lot of accidents and fatalities
every year. However, the rate of fatalities in the U.S. is 1 per 100 million
miles driven, which isn't intuitively a ridiculously high rate.

~~~
baddox
I don't think that fatalities per miles driven is that useful of a statistic,
except for the other conversation of how to enable people to drive fewer miles
(which I certainly approve of). It's still just abysmal that 30,000 people
each year die doing something extremely routine that is virtually required to
live in most areas. That's something like a quarter of US deaths from
accidents/injuries. And, of course, there are presumably far more injuries
than there are deaths, plus property damage.
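
As a rough sanity check (approximate figures of my own, not from the
article), the two statistics in this subthread are consistent with each
other:

```python
# ~1 fatality per 100 million vehicle-miles, times roughly 3 trillion
# vehicle-miles driven per year in the US, gives the ~30,000 deaths figure.
fatalities_per_mile = 1 / 100_000_000
annual_miles = 3_000_000_000_000  # rough US vehicle-miles traveled per year

print(int(fatalities_per_mile * annual_miles))  # 30000
```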

------
gregwtmtno
I think the accident statistics are a little misleading because as far as I'm
aware (and please correct me if I'm wrong) Google only operates its driverless
cars in good weather conditions.

Not that Google doesn't have good reasons for doing so, but the safety of
these vehicles might look different if they were operated in the rain and
snow.

~~~
Flimm
They also only work in areas where Google already has "very detailed maps of
the roads and terrain", kind of like invisible train tracks. (
[http://spectrum.ieee.org/automaton/robotics/artificial-
intel...](http://spectrum.ieee.org/automaton/robotics/artificial-
intelligence/how-google-self-driving-car-works) )

~~~
teraflop
Bear in mind that the phrase you quoted is from a third party, not someone who
actually works on the project. And it's from four years ago.

------
civilian
Heh, google is training cars to drive like grandmas.

> _But the technology, like Google’s car, drives by the book. It leaves what
> is considered the safe distance between itself and the car ahead. This also
> happens to be enough space for a car in an adjoining lane to squeeze into,
> and, Mr. Windsor said, they often tried._

Yeah. We just don't have the infrastructure in major cities for all of us to
drive safely. We've been slowly crowding our roads over the last century and
we're at the point where most people are used to driving with a 1 or 2 second
rule. Which isn't good, but it's how we do things.

Now I wonder if google-driven cars would actually be less efficient for LA.
Sure there wouldn't be any traffic-slowing accidents, but there'd be 50% less
volume traveling 20mph slower.

~~~
rlpb
I'm not convinced that leaving shorter gaps between cars increases volume.
Instead it creates waves of drivers slamming on the brakes because they are
unable to react gradually, creating congestion instead.

If we all left bigger gaps, we'd all be able to drive faster, increasing
throughput (on highways, at least).

> Now I wonder if google-driven cars would actually be less efficient for LA.
> Sure there wouldn't be any traffic-slowing accidents, but there'd be 50%
> less volume traveling 20mph slower.

Self-driving cars in a line could detect and link up with each other, sharing
sensor data on obstacles up ahead. In the end, this has the potential to
safely reduce the gaps between cars at higher speeds and thus actually
increase throughput, since the congestion issues with human drivers reacting
separately would not apply.
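
A back-of-the-envelope model (my own numbers, not from the article) of why a
constant _time_ gap lets throughput rise with speed, even though the physical
gap grows:

```python
def throughput(speed_mps: float, headway_s: float, car_len_m: float = 4.5) -> float:
    """Vehicles per hour in one lane: speed divided by the road each car occupies."""
    spacing_m = speed_mps * headway_s + car_len_m  # meters of road per vehicle
    return 3600 * speed_mps / spacing_m

mph = 0.447  # meters per second per mph
# Same 2-second rule, congested crawl vs free-flowing highway speed:
print(round(throughput(20 * mph, headway_s=2.0)))  # ~1438 vehicles/hour
print(round(throughput(65 * mph, headway_s=2.0)))  # ~1671 vehicles/hour
```

With a fixed time headway, throughput climbs toward 1/headway (1800
vehicles/hour here) as speed rises; the congestion waves come from humans
failing to hold that headway, not from the gaps themselves.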

~~~
duaneb
> Self-driving cars in a line could detect and link up with each other,
> sharing sensor data on obstacles up ahead. In the end, this has the
> potential to safely reduce the gaps between cars at higher speeds and thus
> actually increase throughput, since the congestion issues with human drivers
> reacting separately would not apply.

Isn't this already true to a certain extent with map software? It
automatically directs you to the fastest route—congestion relief is, in a way,
built-in to open infrastructure.

It seems to me the topology of the highway system would have a much larger
impact on congestion than the drivers themselves. Among other things, it's
going to be a long time (hopefully never, IMHO) before human drivers are off
the road.

~~~
tcdent
In my experience, including such prestigiously congested roadways as the 405,
the 101 and the 10, congestion has more to do with behavior (i.e. the
'caterpillar' effect) than with route.

Rather, it's not physical space, but delays in processing speed adjustments.
Distracted drivers accelerate and brake late, causing complete stops and quick
acceleration to catch up.

~~~
duaneb
Well, I can't speak for LA, but you have to acknowledge the role of having
all those drivers on the road in the first place. Distributed better, jitter
in driver behavior should tend toward having no effect as the distance
between cars increases.

~~~
tcdent
Yeah, you're right, physical density does have a role.

There's a sort of critical mass that happens; traffic flows well even though
everyone is being dumb vs. traffic stopping when everyone is being dumb...

------
makeitsuckless
Being a cyclist in Amsterdam, the lack of driver eye contact and understanding
of body language is what worries me about the application of self driving
cars.

Amsterdam traffic is chaos in which pedestrians and cyclists mix with cars,
and the former two frequently (some would say always) ignore the rules. Not
out of some anarchistic impulse, but for mutual convenience. Especially with
cyclists, for whom coming to a full stop and accelerating again is both
"expensive" and risky (accelerating cyclists tend to swerve), relying on eye
contact and mutual understanding of intent rather than following the rules of
the road by the book results in safer and more fluid traffic.

I imagine a current generation Google self-driving car in Amsterdam coming to
a grinding halt and not being able to move backwards or forwards.

But most of all, as a cyclist I wouldn't feel safe sharing the road with
"drivers" with whom communicating via eye contact and body language are
impossible.

To me, self-driving cars and liveable, human-scale cities are mutually
exclusive, and I very much prefer the latter.

~~~
Shorel
Until you actually live surrounded by self-driving cars, your speculation is
nothing more than that.

As someone who also commutes by bicycle, I would vastly prefer predictable,
100% consistent car behavior.

~~~
ortusdux
I wonder if a few e-ink displays on the outside of the cars could be used to
smooth out these interactions by conveying actions and intents in plain
English. It would be comforting to see 'Stopped, waiting for bicyclist to
pass' flash up on the front display when I bike up to a 4-way stop.

------
jasonjei
I think it would be great if at some point some lanes of the freeways are
dedicated for autonomous vehicles connected to a city or state-controlled
system with some fallback. I suspect there is a fallacy somewhere in my idea,
but I think the first step is to make some lanes only for autonomous vehicles
on the freeway, and eventually all lanes as autonomous vehicles become more
commonplace.

I'm not a civil engineer, but I suspect this could eliminate a lot of traffic
if all the cars are placed in the most optimal lane, assuming all vehicles on
highways are autonomous.

------
pdq
What happens with these driverless cars at four way stop signs? With humans,
generally someone waves to the other driver signaling either to take right of
way or that they will go first.

I presume the car doesn't understand hand gestures, so does it just start
nudging into the intersection and stop if another car goes ahead?
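
For what it's worth, the reported behavior is roughly "creep to signal
intent, yield if contested." A toy state machine illustrating that loop (my
guess at the logic, not Google's actual code):

```python
def four_way_step(state: str, other_car_moving: bool) -> str:
    """One decision tick at a 4-way stop: STOPPED -> CREEPING -> GO."""
    if state == "STOPPED":
        return "STOPPED" if other_car_moving else "CREEPING"
    if state == "CREEPING":
        # Back off if another car commits to the intersection first.
        return "STOPPED" if other_car_moving else "GO"
    return state  # GO is terminal in this toy model

# Creep, get cut off, wait, creep again, then proceed.
state = "STOPPED"
for other_moving in [False, True, False, False]:
    state = four_way_step(state, other_moving)
print(state)  # "GO"
```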

~~~
kevinnk
Actually, it can already understand some hand gestures [1]. Compared to all
the other things an autonomous car has to deal with, gesture recognition
doesn't seem like such a stretch.

[1] [http://road.cc/content/news/150448-google-patent-reveals-
how...](http://road.cc/content/news/150448-google-patent-reveals-how-
driverless-cars-recognise-hand-signals)

------
brownbat
Driving can easily end up as a competitive game where safety is traded against
efficiency. The big versions of this are terrible, assholes weaving in and out
of lanes at +20 mph. But it happens in little ways too, the rolling stops at
4-ways from the article are probably a tiny example.

Someone should host a programming competition where every entrant writes the
AI for a fleet of simulated driverless cars, each with random starting and
ending destinations. Collisions mean you instantly lose. Your score is based
on how quickly you can get through traffic generated by all the other AIs.

I think the results would tell us a lot about where things are heading, or
maybe where they should or shouldn't be allowed to go.

(Of course, if you could control the whole fleet, all the cars could just
ignore all the rules and dodge each other while tearing through
intersections.)
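
One hypothetical way the scoring could work (all numbers invented):
collisions disqualify outright, and otherwise the lowest total fleet transit
time wins.

```python
def score_entrant(trip_times_s: list, collisions: int) -> float:
    """Collisions mean you instantly lose; otherwise lower total time is better."""
    if collisions > 0:
        return float("inf")
    return sum(trip_times_s)

entrants = {
    "cautious_ai":   score_entrant([310, 295, 330], collisions=0),
    "aggressive_ai": score_entrant([240, 250, 260], collisions=1),
}
print(min(entrants, key=entrants.get))  # "cautious_ai"
```

The interesting experiments would be mixed fields, where an entrant's score
depends on how everyone else's fleet drives.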

------
sea2summit
Step 1. Release autonomous cars to market with lower incidence rate than
humans.

Step 2. Sit back and watch humans cause accidents; insurance for human
drivers goes through the roof.

Step 3. ???

Step 4. Skynet.

~~~
lumberjack
Serious question.

Is computer vision so advanced that it is already so obvious that self-driving
cars will definitely be safer?

Yes, it's a computer, which is more reliable at always outputting the same
answer than a human, but this is not a simple application.

As far as I know most robots get confused at times just as humans do.

I think eventually they will be, because they can learn and share their
knowledge and keep learning about all the real world exceptions without
limitation unlike humans.

But right now, or when they are deployed en masse, I'm not sure they will
DEFINITELY, without exception, be safer.

And I know statistics exist about accident rates per kilometre driven, but
aren't those from only a few cars driving previously selected and
well-documented routes?

------
charles2013
> “The real problem is that the car is too safe...”

no. this statement is fallacious, and i believe it is dangerously misleading.
it sounds like something out of the mouth of a corporate attorney, and not
someone who cares about being _actually_ safe. out here in the real world
driving laws are more like guidelines.

my take on the situation is the car knows and follows all of the written rules
of the road, but not the unwritten ones. kind of like a 16-year-old kid who's
just aced his driving exam, but doesn't yet have a feel for driving. every so
often he encounters a situation not covered by the law or the driver's
handbook, and he's forced to choose the safest course of action on his own.

i live near google in mountain view, and (usually) encounter the self-driving
SUVs many times a day, either biking around town or as they drive past my
residence. in hundreds of encounters i've only witnessed one potentially
dangerous incident (many months ago) involving a self-driving SUV. it wasn't
clear to me who/what was at fault.

whilst making a protected left turn onto west el camino real from el monte ave
[0], a self-driving SUV came to a complete stop in the middle of the
intersection, causing the vehicles behind it to halt and lay on their horns.
since the incident occurred a few yards behind me (over my left shoulder) i
don't know exactly what precipitated the incident, but reflecting on the
experience has caused me to recognize some potential flaws in this generation
of self-driving vehicles.

that intersection's turning lanes are delineated by dotted lines. this isn't
unique, but it's interesting to me because dotted lines can sometimes be more
difficult to see than solid lines (especially in certain lighting or weather
conditions, or when there is debris on the road). for example, i remember
lanes on stretches of 101-N marked with faded dotted lines and no reflector
dots. these lanes were impossible to see under wet conditions and the sun's
reflection. drivers were forced to either guess where the lanes might be, or
to follow other vehicles, but we could still proceed safely. how would a
driverless vehicle respond?

further, some human drivers completely ignore lane markers even when they are
perfectly visible. in india and some central american countries, for example,
driving conditions are like the polar opposite of suburban SV: lawless. yet in
my experience, driving there is not without rules; it's just that the rules
are unwritten. and though this is an extreme analogy, it led me to an
interesting question: if self-driving vehicles can't safely navigate the
crowded streets of mumbai, how comfortable are we giving them free rein in
the US? or perhaps should we exclude them from certain roads until they can?

the main point i'm trying to make is this: the current generation of self-
driving vehicles has not developed (whether by feature or flaw) the level of
intuition of an experienced and safe driver. they are quite literally like
student drivers whose instructors have a foot above the brake and a hand on
the steering wheel. and while it's conceptually possible to create safer roads
by eliminating human error, i'm not confident replacing humans with a bunch of
robotic student drivers is the best solution. i'd feel much safer as a
passenger in a vehicle driven by someone or something that's A) primarily
concerned with my safety (and not corporate liability), and B) knows what to
do in unexpected or unpredictable circumstances.

[0] satellite image of the intersection; pin at approximate incident site:
[https://www.google.com/maps/place/1786+El+Camino+Real,+Mount...](https://www.google.com/maps/place/1786+El+Camino+Real,+Mountain+View,+CA+94040/@37.3920222,-122.095255,191m/data=!3m2!1e3!4b1!4m2!3m1!1s0x808fb0c7fbe6c66b:0x260bc9681a0e197)

------
joeclark77
And here come the technocrats ready to say "let's ban driving and make
driverless cars mandatory"...

------
sschueller
Two driverless cars are about to crash into each other on a tight mountain
pass (maybe a bug that couldn't foresee the accident).

There are 3 possible outcomes:

    
    
      1. Car 1 and Car 2 crash into each other, killing all passengers.
      2. Car 1 with 3 passengers drives off the cliff; car 2 with 1 passenger survives.
      3. Car 2 with 1 passenger drives off the cliff; car 1 with 3 passengers survives.
    

The cars can communicate with each other. The logical resolution would be to
drive the car with the fewest passengers off the cliff, to save as many lives
as mathematically possible.
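
That "fewest casualties" rule reduces to a one-line minimization (a
deliberately naive sketch, ignoring probabilities and everything else that
makes this an ethics problem):

```python
def choose_outcome(passengers_car1: int, passengers_car2: int) -> str:
    """Pick whichever of the three outcomes kills the fewest people."""
    outcomes = {
        "head-on collision": passengers_car1 + passengers_car2,
        "car 1 off cliff": passengers_car1,
        "car 2 off cliff": passengers_car2,
    }
    return min(outcomes, key=outcomes.get)

print(choose_outcome(3, 1))  # "car 2 off cliff"
```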

I don't know how I would feel about my car killing me although logically I
wouldn't have a chance of survival anyways.

~~~
dman
What do you think happens in this situation currently with human drivers?

Are we holding machines to different standards than we hold each other?

~~~
scrabble
Yes.

Not too long ago I had the opportunity to have a surgery performed by a robot
controlled by a human surgeon. My wife was adamantly against it, since there
had previously been accidents with the robot in other places in the world.
Overall, the rate of incidents was lower than the rate for human surgeons,
but to her that was not the important point.

There are people who will definitely hold machines to higher standards, and
if the machines don't meet them, to those people they are clearly inferior.

~~~
bholzer
Surgical robots do not perform as well as traditional instruments.

I've talked to a few hospital executives and all have said that in most cases,
these robots only cost them money and decrease the quality of the surgical
procedures.

[http://www.dailymail.co.uk/sciencetech/article-2526934/Surge...](http://www.dailymail.co.uk/sciencetech/article-2526934/Surgery-
performed-robots-no-successful-using-humans-cost-LOT-more.html)

~~~
linkregister
I know that disregarding evidence because of association is a fallacy, but can
you come up with a better source? I have to strain my memory to think of an
accurate article written by the newspaper you linked.

~~~
bholzer
I agree, I was hesitant to even use that link, I believe the Wall Street
Journal is generally considered more reputable:

[http://www.wsj.com/articles/robotic-surgery-brings-higher-
co...](http://www.wsj.com/articles/robotic-surgery-brings-higher-costs-more-
complications-study-shows-1412715786)

And another from a relatively reputable healthcare company/journal:

[http://www.healthline.com/health-news/is-da-vinci-robotic-
su...](http://www.healthline.com/health-news/is-da-vinci-robotic-surgery-
revolution-or-ripoff-021215#11)

~~~
linkregister
Thanks! These are good articles.

