
Google Self-Driving Car Project Monthly Report [pdf] - Nogwater
http://static.googleusercontent.com/media/www.google.com/en/us/selfdrivingcar/files/reports/report-0615.pdf
======
brownbat
I'm fascinated by the accidents. The AV is stopped at a light. Someone rear-
ends it. Minimal damage.

Similar accidents are probably occurring every minute between human drivers,
and as a rule they go unreported.

AVs might one day even avoid this "victimization," if these events keep
following a predictable pattern. AVs could exaggerate the gap, leaving a
precisely calibrated amount of extra space. When anticipating a rear-end
collision, the AV would honk and flash its brake lights while scooting forward.
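
The "calibrated gap" idea could be sketched roughly like this. This is purely illustrative: the function name, thresholds, and sensor inputs are all invented, not anything from the report.

```python
# Hypothetical sketch of the "calibrated gap" mitigation: a stopped AV keeps
# extra space ahead of itself, and scoots forward when a rear-end hit from
# the vehicle behind looks imminent. All numbers are made-up illustration.

def should_scoot(rear_distance_m: float, rear_speed_mps: float,
                 front_gap_m: float, min_front_gap_m: float = 1.0) -> bool:
    """Decide whether to creep forward to soften an impending rear impact."""
    if rear_speed_mps <= 0:
        return False  # the vehicle behind is not closing on us
    time_to_impact = rear_distance_m / rear_speed_mps
    usable_gap = front_gap_m - min_front_gap_m
    # Scoot only if impact is imminent and there is room to give up ahead.
    return time_to_impact < 1.5 and usable_gap > 0.5
```

The interesting design question is how much front gap to reserve: too little and there is nowhere to scoot, too much and human drivers will just merge into it.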

Google's absolutely correct that its AVs are never at fault in any of these
accidents, legally speaking. Does blame change, though, if there are ways the
AI could prevent this series of similar accidents, but its developers choose
not to implement them?

The AV yields to those running a red light, even though getting t-boned
wouldn't legally be the AV's fault. That seems wise to me. Is it inconsistent
to expect the AV to avoid getting t-boned, but not expect it to avoid getting
rear-ended? I'm not sure...

Or, more broadly: How do you divide blame between two parties when one has
superhuman faculties? Is the AI responsible for everything it could have
conceivably been programmed to prevent? Or do you just hold it to a human
standard?

Like all hard problems, neither extreme is very satisfying.

~~~
michaelt

      Does blame change though if there are ways the AI can 
      prevent this series of similar accidents, but they choose 
      not to? [...] Is it inconsistent to expect the AV to avoid 
      getting t-boned, but not expect it to avoid getting rear-
      ended?
    

While I was in college I worked on some wheeled robots that played a
competitive ball game. We wanted to avoid collisions between robots, and to
win the game.

One of the things we found was: if one team has great collision avoidance and
the other team has no collision avoidance, the team without collision
avoidance always wins. When there's a contest for the ball, the team without
collision avoidance just blasts in there, and when the team with collision
avoidance backs off to avoid a collision, they lose the ball.

If autonomous cars were so good at avoiding accidents that you could merge
aggressively and they'd always brake, and run red lights in front of them and
they'd always stop, manual drivers might learn to do that.
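
The asymmetry above reduces to a toy payoff function. This is just an illustration of the game-theoretic point, not anything from the actual robots:

```python
# Toy illustration: in a contested-ball (or contested-lane) situation, an
# agent that always yields to avoid a predicted collision never wins the
# contest against one that never yields.

def contest(a_yields: bool, b_yields: bool) -> str:
    """Resolve one contested ball between agents A and B."""
    if a_yields and not b_yields:
        return "B"          # A backs off, B takes the ball
    if b_yields and not a_yields:
        return "A"          # symmetric case
    if not a_yields and not b_yields:
        return "collision"  # both barrel in
    return "standoff"       # both back off; nobody commits

# Cautious team (always yields) vs. aggressive team (never yields):
results = [contest(a_yields=True, b_yields=False) for _ in range(10)]
```

Every contest goes to the aggressive side, which is exactly why manual drivers could learn to exploit a perfectly cautious AV.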

Riding in a Google autonomous vehicle would be a pretty shitty experience if
you knew you'd get four or five emergency stops in every journey when assholes
decide to cut you up :)

~~~
jjoonathan
Since the AV would be keeping detailed records of everything, legal remedies
are possible. I imagine someone would quickly lose interest in cutting off AVs
if, after a modest threshold, they started getting a bill every month.

------
downandout
While most human drivers have a hard time avoiding rear-end collisions in
which they are the victim, it would save a fair number of people from gruesome
deaths if AVs had some sort of rear collision avoidance logic for situations
where it may be possible for the system to get the car out of harm's way.
There seem to be hundreds of accidents like this [1] annually in which large
trucks smash into cars that are stopped at red lights or have slowed down
during traffic jams.

Perhaps the cars should constantly plan escape routes while slowing down and
stopping, keeping enough distance from the vehicle in front to allow for
escape should they detect an inevitable collision from behind. Even where the
only possible escape route involves hitting another car, the system should be
able to decide that a light collision with a vehicle in another lane is
preferable to a large truck hitting it at 70 mph.

[1] [http://www.wtsp.com/story/news/traffic/2015/04/12/fatal-
cras...](http://www.wtsp.com/story/news/traffic/2015/04/12/fatal-crash-pasco-
county/25670699/)
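
A rough sketch of that escape-route selection might look like the following. The option names and cost numbers are entirely made up; a real system would estimate expected harm from sensor data.

```python
# Hypothetical sketch of the escape-route idea: while stopped, the AV keeps
# a ranked list of escape options and, on detecting an unavoidable rear
# impact, picks the least-bad one. Costs are invented illustration values.

ESCAPE_OPTIONS = [
    {"name": "stay_put",                  "cost": 100},  # absorb the 70 mph hit
    {"name": "pull_onto_shoulder",        "cost": 5},
    {"name": "swerve_into_adjacent_lane", "cost": 20},   # light collision risk
]

def pick_escape(options):
    """Return the available option with the lowest expected harm."""
    available = [o for o in options if o.get("blocked") is not True]
    return min(available, key=lambda o: o["cost"])["name"]
```

If the shoulder is blocked, the same rule correctly prefers a light sideswipe over absorbing the truck impact, which is exactly the trade-off described above.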

~~~
gambiting
"There seem to be hundreds of accidents like this [1] annually where large
trucks smash into cars stopped at red lights or that have slowed down during
traffic jams."

Isn't that solved by the automatic collision detection with emergency braking,
as implemented by Volvo?[1]

[1][https://www.youtube.com/watch?v=ridS396W2BY](https://www.youtube.com/watch?v=ridS396W2BY)

~~~
Hytosys
If I'm not mistaken, the OP is postulating an avoidance mechanism for the
front ("victim") car, not the rear car. Indeed, the rear car is at a higher
risk of injury than the front car in rear-end accidents.

------
Pfhreak
The thing that really stands out here isn't that the accidents occurred (which
is sort of amusing), but rather the excellent analytics the car produced. The
car knew where it was, how long it was stopped, the conditions at the time of
the accident, and the relative velocity of the vehicle that caused the
accident. This is going to change the nature of automobile accidents entirely.
The amount of data we'll be able to collect from even the simplest fender
bender will be fantastic.

~~~
GauntletWizard
I'm sorta unconvinced we'll actually see any benefit. Humans are notoriously
picky about their privacy; the ability to get black-box data for insurance
purposes has been around for years, but most people would rather not, for good
reason, have their insurers poking around in their driving habits. I expect
some backlash against cars collecting this data on their driving neighbors.
What if the self-driving car isn't involved in an accident, but witnesses one?
Can the courts subpoena Google for the two drivers' locations and velocities?
That will be a fantastic NYT headline: "Are Google cars spying on you?"

Look at public health data. There's a vast goldmine of data that could be
collected to track, trace, and provide early warnings for diseases, but for
legal reasons it is hidden behind a confidentiality barrier. I'd like it to be
a simple check whether any of your partners were STD-positive; this is
currently information that is hard to get reliably (sure, your partner can
hand you test results, but not verifiable ones; the clinic won't attest to
them if you call to verify your partner's results, so you can never be certain
it's not a clever photoshop). This is data that has a direct, tangible impact
on those around you, and in many states it is a crime not to reveal certain
STDs. Still, these diseases spread, because we're afraid of making the
STD-infected into social pariahs, and I can't see a world where we don't have
the same problem with bad drivers.

~~~
roel_v
All of this information can be derived from a dashcam, and they're ubiquitous
today.

And yes, in 10 years' time there will be no such notion as 'private travel'.
Between your phone's GSM/CDMA, bluetooth and wifi signals and dashcams,
security and traffic cameras, there will be dozens of parties who monitor
every move any person makes outside of their home in urban or suburban areas
(be it by car, foot or bike). With various forms of computer vision, that data
will be sorted and linked to other recordings of objects, and there will be
dozens of databases with exact information on where everybody is 95+% of the
time.

One of my pet research projects (although I'm not making much progress in
terms of actual work or publications) is a system for tracking 'people' in a
generalized form. It's basically a concept of 'strands' of information along
different axes ('location', 'finance', 'internet', 'health' and a few others)
which can be joined by an overarching matching infrastructure. Furthermore,
each 'strand' has a 'source', and one can join datasets by deciding 'this
source I know is reliable, take it as truth', or combine several less trusted
ones using voting or Bayesian inference. It's basically a formalization of
'doxing': an overarching framework for working with personally identifying
data from sources of varying degrees of trustworthiness.
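
The merge step of such a framework could be sketched like this. The source names, fields, and records below are invented for illustration; nothing here is from an actual system.

```python
# Toy sketch of the "strands" merge: records about one person from several
# sources, combined either by trusting one source outright or by majority
# vote among less trusted ones.

from collections import Counter

def merge_field(records, field, trusted_source=None):
    """Merge one field across records, by declared trust or by majority vote."""
    if trusted_source is not None:
        for rec in records:
            if rec["source"] == trusted_source:
                return rec[field]  # trusted source wins outright
    # No trusted source (or it produced nothing): fall back to a vote.
    votes = Counter(rec[field] for rec in records)
    return votes.most_common(1)[0][0]

records = [
    {"source": "dashcam_db", "city": "Antwerp"},
    {"source": "telco_db",   "city": "Brussels"},
    {"source": "wifi_logs",  "city": "Brussels"},
]
```

A Bayesian variant would weight each source's vote by its historical reliability instead of counting them equally.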

I'm sure many people are working on something like this already, but in
private and with the goal of using it against 'us' (for a broad definition of
'us'). The only way (OK, maybe not the 'only' way, but 'one of' the ways) to
defend against it is to acknowledge that privacy is dead and to develop
offensive capacities, much like the last resort against tyranny is a
well-armed populace.

~~~
icebraining
_All of this information can be derived from a dashcam, and they're
ubiquitous today._

They are? Where? According to the SF Chronicle, _"hardly anyone in the U.S.
uses dash cams except police and commercial truck and taxi drivers."_

Here in Europe, many countries don't even allow them (we have a different
concept of "expectation of privacy" around here).

~~~
roel_v
'Here' = in Europe. It's only illegal in Austria.

~~~
icebraining
_Where_ in Europe? They certainly aren't here in Portugal nor in Spain, and I
don't remember seeing them in Belgium, which I've visited recently.

As for the legality, I wouldn't be so sure. Owning them is certainly legal,
but filming the public street indiscriminately is usually considered to be
against the Data Protection Directive, and I don't see why dashcams would be
excluded.

~~~
roel_v
_Across_ Europe. OK, I guess 'ubiquitous' might be an overstatement, but I
regularly see cars with dash cams in Belgium, the Netherlands, France and
Germany, which is where I have driven the last few years. I can't find any
actual Europe-wide sales data, but I don't see why there would be huge
differences in other countries. Also, they're readily on sale in e.g. Spain,
Portugal and Hungary (just three countries I checked). Which of course doesn't
necessarily mean they're being bought en masse, but it does give an
indication.

~~~
Ded7xSEoPKYNsDd
They might not be illegal here in Germany, but dash cam videos are regularly
thrown out as evidence by courts. (And that's quite surprising, the bar is
_very_ high for that to happen in Germany. No fruit of the poisonous tree
here.)

~~~
roel_v
Sure, so what? They're still mass surveillance even when their footage is not
distributed or used in court. That's my whole point: surveillance that is kept
secret is the real problem, and anyone with a few $k or less will be able to
set up a _vast_ surveillance database in a few years, very little of which is
covered by laws today. (Yes, some countries require you to register with the
privacy watchdog if you hold records of people, but how would anyone check?
Those watchdogs are mostly toothless already, in many places.)

------
r0naa
I have the weird feeling that this report hasn't been written by a human.

~~~
AaronFriel
Copy Errors Reported to CA Writers Guild

Given the time we’re spending on busy scripts, we’ll inevitably be involved in
writing bad copy; sometimes it’s impossible to overcome the realities of speed
and deadlines. Thousands of minor creative failures happen every day in
typical American copy, 94% of them involving human error, and as many as 55%
of them go unreported. (And we think this number is low; for more, see here.)
In the six years of our project, we’ve been involved in 14 minor grammatical
errors of more than 1.8 million words of autonomous and manual writing
combined. Not once was the autonomous writer the cause of the error.

(CA regulations require us to submit CA WG form OL316 Report of Copy Error
Involving an Autonomous Writer for all errors involving our writers. The
following summaries are what we submitted in the “Copy Error Details” section
of that form.)

------
lovelearning
I found the linked TED talk [1] fascinating.

[1]:[https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...](https://www.ted.com/talks/chris_urmson_how_a_driverless_car_sees_the_road)

------
DannoHung
Have any of the AV teams said anything about testing in hazardous road
conditions yet?

Driving rain, thick fog, heavy snow, sleet, and the like. Maybe the answer
they'll give is, "Don't drive, you dummy," but that's not really an acceptable
solution for most people.

~~~
jakeogh
Isn't it interesting that people who relinquish their mobility to a computer
will lose their driving skills? Maps nicely to other things.

~~~
yazaddaruvala
Just because _all_ of my needed math is done on a calculator doesn't mean I
can no longer add.

~~~
gambiting
Well, that already sort of happens to pilots. They spend so much time on
autopilot that they become less and less skilled at manual flying. I'm not
saying that they are not busy in the cockpit, but there was a case where a
plane crashed because the pilot didn't know how to regain control of the plane
after the autopilot disengaged.

------
kriro
What happens after an incident with an AV (let's say I bumped into it)? Do I
just wait and call the police, or does the AV report it automatically? It's a
little odd not to have a person to interact with in that situation.

------
samcheng
The humans do seem to drive a little crazy on California St. in Mountain View!
I've had a near miss with a negligent driver on that street myself. Nice to
see that they're moving the cars away from the quieter (more suburban)
southern parts of the city near the hospital.

I have yet to see the new prototypes, which are allegedly making tours as
well. Does anyone know what streets they frequent?

~~~
atto
The new ones drive on San Antonio quite frequently, making the loop near
Central Expressway.

~~~
joshu
I live off San Antonio and I see them daily.

Up San Antonio to Leghorn and then to Rengstorff and over 101 is a pretty
standard commute to Google.

------
amelius
What I wonder most about: whenever the car gets a minor software update, will
they (be required to) redo the 10,000 hours of test driving?

~~~
albertzeyer
I'm quite sure that those 10k hours are not all with a single version; the
cars are already being updated all the time.

A human driver, similarly, updates his driving behavior throughout his life.

~~~
amelius
Software and humans are completely different. Humans are, in a way, very fault
tolerant. In software, one erroneous line of code can literally crash the car.
In a human, one stray neuron will probably not do much harm.

------
saturdaysaint
I found the linked Vox article really interesting
-[http://www.vox.com/2015/6/1/8700459/google-self-driving-
cars](http://www.vox.com/2015/6/1/8700459/google-self-driving-cars)

I've read reports that make me dread being behind an AV at a 4-way stop or a
blinking red light: one passenger described waiting at a light for several
minutes while drivers honked angrily behind them. Maybe AVs will encourage
traffic planners to implement more roundabouts, which might be less prone to
other drivers taking advantage. Or maybe AVs just need to get better at
estimating the speed of oncoming traffic.

------
luckydata
The description of June 4 sounds exactly like an accident where I was driving
the car that touched the one in front of me. My foot slipped off the brake.

The driver in front reported injuries and asked for damages; I'm still paying
the consequences on my insurance rate. I still remember her face vividly; I
noticed the moment she started doing mental calculations about how much she
could get out of it.

I hope she gets what she's got coming to her. Self-driving cars can't come
soon enough, especially for the sensor arrays mounted on each car, which will
provide ample proof of what accidents are really like for insurance purposes.

------
SuperChihuahua
If anyone is interested in learning more, I found a free online course called
"Programming a Robotic Car"
([https://www.udacity.com/course/artificial-intelligence-
for-r...](https://www.udacity.com/course/artificial-intelligence-for-robotics
--cs373))

The lecturer is the same guy behind the Stanley car that won the DARPA
challenge a few years ago
([https://en.wikipedia.org/wiki/DARPA_Grand_Challenge](https://en.wikipedia.org/wiki/DARPA_Grand_Challenge))

------
ucho
So... does anybody know what truck manufacturers are doing? As far as I can
tell, it will take at least 20 years before AVs can drop the steering wheel
and deal with scenarios like a police officer routing traffic around an
accident site. On the other hand, even current Google tech should allow for
soft* road trains made of 2-4 trucks with only one (active) driver. Also, the
new hardware's price would have a smaller impact on already-expensive
vehicles.

*without a physical connection, unlike the Australian ones

~~~
tim333
Daimler have a truck:
[https://news.ycombinator.com/item?id=9555295](https://news.ycombinator.com/item?id=9555295)
I'm not sure about the police routing around an accident thing. Google's cars
can already recognise cyclists' hand signals.

------
Altay-
Here's a non-pdf version without some of the padding:

[https://docs.google.com/document/d/1vgjC5VeySpjqzcHj1_4_-5MN...](https://docs.google.com/document/d/1vgjC5VeySpjqzcHj1_4_-5MNck2tfIqjpNQBr8ihwz4/)

Side note: Google should whip up a Google-Docs-only URL shortener... so people
know it's a document and not a risky link, but it's still short enough to
share easily...

~~~
ariwilson
So ... [http://goo.gl/drive/??](http://goo.gl/drive/??)?

------
jasonjei
I think the only way to have safe driving with autonomous vehicles is to have
roadways that are autonomous-vehicle-only. When autonomous vehicles become
commercially viable, having dedicated autonomous freeways could remove a lot
of the randomness and danger of driving. We might be 100 years away from this
prospect, but it would be great if we lost no more lives to car accidents on
freeways.

------
jongraehl
I think Google can do better than just "not at fault". Many of us have avoided
(chain-reaction or other) accidents by evading a would-be rear-ender (although
this can backfire: if you use up your cushion, you may then be pushed into the
obstacle ahead in addition to being hit from behind). Also, there are times
when overly cautious braking causes chain reactions and collisions (at least,
an extremely competent and cautious friend of mine told me he caused one once;
maybe it's rare).

Similarly, although we might do better to choose a world with different
incentives (like one where a third or more of cars are rigid, rule-entitled
automata), many of us have (selfishly but rationally) made room for
incompetent lane changers even if it means violating lane rules ourselves.

