
Arizona Government Suspends Uber's Self Driving Cars from Roads - ghshephard
http://www.chicagotribune.com/news/nationworld/ct-arizona-uber-testing-20180326-story.html
======
owenversteeg
The article says nothing about Google/Waymo, so it appears they're allowed to
continue? Their self-driving technology seems far more advanced; I think it
would be unfair to punish all the players for the bad actions of a few.

In particular, Uber seems to have been operating self-driving cars recklessly
and in an attempt to "move fast and break things". When those things are
people's lives, that's unacceptable. I'm all for interesting technology, but
sometimes existing rules are there for a reason.

I particularly hope this doesn't dissuade individual states from making leaps
to embrace a particular technology. I think that's one benefit of the US:
individual states have a lot of power to allow a new technology or permit
testing that's illegal or in a gray area in the rest of the country. Arizona
made a bet on self-driving technology. I applaud Arizona, even if the bet in
this case might have turned out badly.

~~~
ducktoller
There was a person at the wheel.

That person wasn't paying full attention to the road. From the video of the
crash, it looks like the woman at the wheel was looking at their phone.

Why is no one acknowledging that this woman was responsible for taking over
the wheel if necessary for any reason? When I have cruise control on, I take
my feet off of the pedals and rest them on the floor. But if, say, a homeless
person with a bicycle walks out into the middle of the road at night with a
black sweatshirt on in front of my car, then I step on the brakes.

And why is no one acknowledging that a homeless person walked out into the
middle of the street at night in a black sweatshirt? I definitely don't know
whether I, even if I wasn't on my phone like the car operator was, would've
seen the person in the road with enough time to slam on the brakes. I might've
killed that person walking across the road by mistake, too.

Like millions of other people on the road every single day, the car operator
was on their smartphone while they were at the wheel. And like 3-5 million
other people a year, the homeless person walking across the street was killed
in a car accident.

The woman walking across the street and the woman at the wheel of the car
broke the law. Uber didn't break any laws as far as I can tell. They were as
responsible as any other company building self-driving cars as far as I can
tell, too. I don't think it's fair to put the fault on Uber here.

~~~
michaelt

      Why is no one acknowledging that this woman was
      responsible for taking over the wheel if necessary
      for any reason?
    

Nobody in the industry really believes safety drivers work, beyond the early
point in testing where they're intervening regularly.

It is widely known [1, 2, 3] that it's extremely difficult for humans to
remain vigilant when monitoring systems for rare events. Airport baggage
screening systems perform "threat image projection" where they show the
operator knives and guns from time to time, just to break up the monotony and
reward attentiveness.

Beyond early testing, safety drivers are there to make the public/politicians
feel safer, to handle situations where a slow response is better than none at
all, and, for more cynical companies, to serve as scapegoats in situations
like this one.

[1]
[https://pdfs.semanticscholar.org/ece2/465ed2258585ebb8055fd7...](https://pdfs.semanticscholar.org/ece2/465ed2258585ebb8055fd76997cda6b606af.pdf)
[2]
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5633607/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5633607/)
[3]
[http://journals.sagepub.com/doi/full/10.1177/001872081350155...](http://journals.sagepub.com/doi/full/10.1177/0018720813501550)

~~~
dmix
Even setting aside her lack of full attention, from the video it seems quite
obvious a human driver would have failed to see the woman in time even with
full alertness. The only question here, and why Uber is under scrutiny, is why
a Lidar system which can "see in the dark" wasn't able to detect her and react
in time.

~~~
ceejayoz
> from the video it seems quite obvious a human driver would have failed to
> see the woman in time even with full alertness

The video is _massively_ - and I'll wager _intentionally_ - misleading.
There's no chance the car was making decisions from a $20 dashcam.

Here's what a better camera (looks like a smartphone camera) sees in the same
spot (at about 33 seconds - you can see the signs, lights, and terrain match
the Uber video):
[https://www.youtube.com/watch?v=1XOVxSCG8u0](https://www.youtube.com/watch?v=1XOVxSCG8u0)

~~~
stormthebeach
I've driven this area plenty of times, and something that is misleading about
the YT video is that there are around two dozen cars driving around lighting
that area up. I agree that the dashcam video makes it look very dark, but this
video is on the other end of the spectrum and does not represent how dark that
stretch gets when there isn't another car.

The moment another car passes the driver to the left is where the accident
was. That part is typically very dark and has plants as a backdrop - I can see
where it would be difficult to distinguish a human form under specific
conditions. Regardless, the LIDAR and backup driver failed miserably here.

~~~
ceejayoz
The YouTube video has plenty of sections in less well-lit areas. For example:
[https://imgur.com/a/9L53u](https://imgur.com/a/9L53u)

No cars, a light on only one side, and you can still see much further than
the Uber video makes it seem. If the Uber car's sensors can't get a better
idea of its surroundings than a smartphone video, Uber's management should be
liable for criminally negligent homicide for letting them out on the streets.

Plus, the Uber car itself has headlights.

------
johngalt
Several comments here are making comparisons to human drivers, as if
autonomous cars are OK so long as they are at parity with humans or better.
That may be statistically true, but it is a complete misunderstanding of human
nature.

Human risk tolerance varies drastically depending on control/participation.
What is the acceptable casualty rate of elevators?

~~~
eyeing_see
On top of that, I think it is absolutely rational to expect autonomous cars to
perform at the current state of the art. We tolerate certain kinds of bad
human drivers, like beginners, because there is hardly an alternative. A self-
driving car with the driving skills of a beginner would be completely
unacceptable if the state of the art has skills comparable to, say, somebody
with a few years of experience.

~~~
carlmr
We don't really know yet what state of the art is for autonomous vehicles.
Until we have gathered a bit more data from different companies we can't say
anything.

This accident might have been a fluke, it might have been caused by bad
engineering, it might have been caused by many things. We can only compare
once we have a sizeable sample of incidents, or a long enough time of non-
incidents that we're confident the system works well.

And every software update could change something about it.

------
kosei
> “Improving public safety has always been the emphasis of Arizona’s approach
> to autonomous vehicle testing, and my expectation is that public safety is
> also the top priority for all who operate this technology in the state of
> Arizona,” Mr. Ducey said in his letter. “The incident that took place on
> March 18 is an unquestionable failure to comply with this expectation.”

This after the police claimed no fault.[1] Not quite sure how those two
statements square...

[1] [http://fortune.com/2018/03/19/uber-self-driving-car-
crash/](http://fortune.com/2018/03/19/uber-self-driving-car-crash/)

~~~
shakna
The investigation isn't complete, but Uber did disable the car's own safety
features [0].

Both Volvo and Intel tested their software against the low-grade video that
was released, and it was able to detect the impending accident and reduce the
impact to where the pedestrian would likely have survived.

[0] [https://www.bloomberg.com/news/articles/2018-03-26/uber-
disa...](https://www.bloomberg.com/news/articles/2018-03-26/uber-disabled-
volvo-suv-s-standard-safety-system-before-fatality)

~~~
icebraining
_Uber did disable the car's own safety features_

Well, of course they did. Can you imagine testing a self-driving system when
another system is also controlling the car? What happens when they send
contradictory commands? Race conditions are bad enough outside of actual cars!

~~~
shakna
They disabled a system we know works, for one that failed. No redundancy. If
that is the case, then Uber can absolutely be held accountable for not taking
enough safety precautions.

These aren't just test cars. Uber and Volvo's deal was around Uber selling
'enhanced' XC90s. Why not design around the product you purchased?

> Can you imagine testing a self-driving system when another system is also
> controlling the car?

You mean external forces such as a driver? Which is expected and required at
this point?

~~~
icebraining
I don't pretend to be an expert, but generally adding multiple layers of
control on the same axis adds its own failure points. Having a human that can
override the automatic system is not an argument for leaving that enabled -
it's an argument against it!

Personally, I think Uber simply shouldn't have tested it on public roads at
this point.

~~~
shakna
> generally adding multiple layers of control on the same axis

They wouldn't be. Subsumption architecture has been the usual method for
coupling these things together since the '80s.

Individual reactive modules that may be 'dumb', like the auto-brake, arranged
in a hierarchy of response.

Redundancy is essential for safety.
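
For illustration, here's a minimal sketch of that layering in Python (the
module names and thresholds are hypothetical; this is the general subsumption
idea, not anything Uber or Volvo actually ships):

    def emergency_brake(sensors):
        # 'Dumb' reactive layer: brake hard if anything is too close.
        if sensors["nearest_obstacle_m"] < 5.0:
            return {"brake": 1.0}
        return None  # no opinion; defer to lower-priority layers

    def planner(sensors):
        # Higher-level driving logic (route following, lane keeping, ...).
        return {"throttle": 0.3, "steer": -0.1 * sensors["lane_error"]}

    # The highest-priority layer that produces a command wins (subsumes the rest).
    LAYERS = [emergency_brake, planner]

    def control_step(sensors):
        for layer in LAYERS:
            command = layer(sensors)
            if command is not None:
                return command

The point is that the 'dumb' safety layer can always override the smarter but
less predictable system above it, which is the redundancy I'm arguing Uber
gave up by disabling the Volvo system.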

~~~
icebraining
OK, let's say the Volvo system gets precedence. How will the Uber system ever
learn how to behave when the Volvo system is no longer in place?

If we were talking about a production system, I would agree, but I don't think
this is appropriate for the training phase.

Redundancy is only essential if the potential for failure is unacceptable. If
you remove that (by not driving in public streets), it isn't.

~~~
leereeves
Saving lives is more important than training the self-driving system.
Disabling safety systems for the sake of training isn't acceptable.

That said, how does Uber improve the self-driving system in response to any
failure? Perhaps by training on recorded data? They must have ways to train
the system in addition to waiting and hoping the car encounters the same
situation again.

------
buvanshak
Good riddance. But I don't think this is enough to send a strong message to
all those who want to put half-baked tech out there in hopes of "disrupting"
something.

Also, I really hope all self-driving testing is suspended until there is
sufficient legal and testing groundwork before these things are allowed on
roads.

~~~
sokoloff
It might surprise you that we were testing (selectively) autonomous cars on
the autobahn in Germany in 1991 as part of the Prometheus project. Our tech
was nowhere near as good as what's available now and so we compensated by
having attentive drivers hovering over the E-stop button.

I don't think that stopping testing over a single crash is warranted. Maybe
suspend it for a short time while you look into it, but if we stopped medicine
every time a patient died in a trial, society would be worse off.

~~~
dahart
> if we stopped medicine every time a patient died in a trial, society would
> be worse off.

What on earth do you mean? Clinical trials are stopped all the time when
people have adverse reactions & die. And people who die in clinical medical
trials are _volunteers_ , consciously aware of their participation, and having
signed a waiver. In the U.S., medications may not be sold or administered to
the public before being approved via trials. If Uber had to meet the same
standards as the medical industry, Elaine Herzberg would be alive, and we
wouldn't see autonomous vehicles on the roads for several more years.

~~~
YeGoblynQueenne
Also, a clinical trial (usually) can't get out on the streets and run someone
over if it doesn't work as expected.

------
peterept
I don't know much about the logic used in self-driving cars - but I do wonder
how they will handle roundabouts. We have a lot of them here (Australia) and
you have to give way to a car that is approaching or entering the roundabout
on your right - which often is opposite you. How will a self-driving car cope
with detecting a vehicle behind the concrete barrier of a roundabout?

~~~
caf
No, the roundabout rule is that a vehicle entering a roundabout must give way
to any vehicle already on the roundabout, or a tram that is entering or
approaching the roundabout.

See the Australian Road Rules, Part 9:
[https://www.pcc.gov.au/uniform/Australian-Road-
Rules-19March...](https://www.pcc.gov.au/uniform/Australian-Road-
Rules-19March2018.pdf)

If you are approaching the roundabout and someone enters it from your left
before you get there, _you_ have to give way to _them_.

~~~
mschuetz
Things like that may vary from country to country, though. In Austria,
roundabouts do not have a special rule, so a car approaching the roundabout
theoretically has the right of way. But pretty much every roundabout has a
yield sign at the entrance; I've never encountered one without. That doesn't
mean there can't be a roundabout that lacks one.

~~~
provost
As you described, the default is to yield, and so the vehicle can always yield
and signal its intention to do so by braking slowly. And I agree, there may
not always be a sign (a storm or a driver could knock it over), so the default
should always be to yield.

------
Game_Ender
I wonder how long the investigations are going to take; they usually take
months to a year in these cases, or have for Tesla at least.

The political side is interesting to look at as well. The governor directly
issued the order, he is up for election this year [0], and he had played up
the positive PR of Uber and others coming to the state.

0 -
[https://en.m.wikipedia.org/wiki/Arizona_gubernatorial_electi...](https://en.m.wikipedia.org/wiki/Arizona_gubernatorial_election,_2018)

------
coinjobber
Amen. Uber's fatal accident rate is now 50x that of sober human drivers. At
their current rate of driving, they can't have a _lower_ rate until 2028! It's
an inexcusable failure of their technology to prevent a human death in the
most basic collision-avoidance scenario.

~~~
agildehaus
Before the accident they were infinitely better than a sober human driver.

Please understand statistics before employing them. We don't have enough data,
and a single data point doesn't change that.

~~~
simion314
It is not 1 data point though. Say you shoot at a target 100 times and hit
only 1 time; is this only 1 data point, from which I can't draw any
conclusion?

Similarly, if you drive 1 million km and kill 1 person, and I drive 10 km and
kill 1 person, is that still 1 data point from which I can draw no conclusion?
I think it would have been 1 data point if this had been the first km an Uber
self-driving car had driven.

~~~
YeGoblynQueenne
>> Similarly, if you drive 1 million km and kill 1 person, and I drive 10 km
and kill 1 person, is that still 1 data point from which I can draw no
conclusion?

Yep. Because I still have another 9 m km to go before I've driven as long as
you have and there is no way to know whether I'm going to kill another 9
people, or 0 more, until I've actually driven them all.

~~~
simion314
You are wrong; there is a conclusion we can draw. The conclusion is not
absolute but fuzzy, so maybe fuzzy logic is not your thing.

Also, you have a mistake in your comment: I would still have to do 999,990 km
of driving. If I killed a person in my first 10 km, what is the probability
that I won't kill anyone in my next 999,990?

Your point is that I can't be 100% sure, and that is true, but we can compute
the probability, and the probability that I simply had bad luck is very small.
If the probability of killing 1 person in 1 million km is 1 (100%), what is
the probability of killing that person in my first 10 km? (You are correct, it
is not 0.)

~~~
YeGoblynQueenne
I misread the numbers in the original post. But what you say in your comment-
well, that's not how it works.

To assess the risk posed by a driver, you wouldn't follow them around,
constantly logging the miles they drive, counting the deaths they cause and
continuously updating the probability they will kill someone, at least not in
the real world (in a simulation, maybe). Instead, what you'd do is wait for
enough people to have driven some reasonably significant (and completely
arbitrarily chosen) distance, then count the fatal accidents per person per
that distance and thereby calculate the probability of causing an accident per
person per that distance. That's a far more convenient way to gather
statistics, not least because if you take 1000 people who have caused an
accident while driving, they'll each have driven a different distance before
the accident.

So you might come up with a figure that says "Americans kill 1.18 people every
million miles driven" (it's something like that, actually, if memory serves).

Given that sort of metric, you can't then use it for comparison with the
performance of someone who has only driven, say, 1000 miles. Because if you
did, you would be comparing apples and oranges: 1 accident per 1000 miles is
not on the same scale as ~1 accident per million miles. There's still another
999k miles to go before you're in the same ballpark.

And on that scale, no, you can't know whether an accident in the first 1000
miles will be followed by another in the next 1000 miles. Your expectation is
set for 1 million miles.

It's a question of granularity of the metric.

~~~
simion314
Do you have any math do back up that what I said is wrong? I can try to
explain my point better but I see you are ignoring the math so maybe I should
not waste my time.(we can reduce the problem to balls in a jar and make things
easy)

But think about this, if I killed a person in my first 10 km of driving, what
is the chance that will kill 0 after the next 999990, would you bet that I
will kill 0 or 1 , more then 10?

~~~
YeGoblynQueenne
I think what you mean by "maths" is "formulae" or "equations". Before you get
to the formulae, you have to figure out what you're trying to do. If the
formulae are irrelevant to the problem at hand you will not find a solution.

As to your question, I wouldn't bet at all. There is no way to know.

Here's a problem from me: I give you the number 345.

What are the chances that the next number I give you is going to be within
1000 numbers of 345?

~~~
simion314
Your problem is not equivalent with what we were discussing about, you need to
change it a bit like

I draw random numbers from 0 to Max and I get 345, what is P that next number
N is in 100 range near 345?

P = 200/Max; in the assumption that Max >445;

For self driving cars, the probability that a car kills a person for 1 km or
road driven is unknown, so you can call it X

Then my self driving car killed a person in first 10 km, What is the
probability that a random event will happen in the first 10km from 10^9 km, is
10^(-8)

Say the self driving car would have the probability of killing N people for
10^9 km, this are random,independent events So the probability that a kill
will happen in first 10km is N*10^-8,

I hope you notice my point that we can measure something, we do not need to
wait for 10 or 100 people to be killed

We are not sure but we can say that is is a very small chance that I will not
kill other person in my next 999 990km.

let me know if my logic is not correct, in statistics is easy to do mistakes.
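
Here's a minimal sketch of that back-of-the-envelope argument in Python,
under the (assumed, not established) model that fatalities are independent
random events, i.e. a Poisson process with some unknown per-km rate:

    from math import exp

    def p_at_least_one_kill(rate_per_km, km_driven):
        # P(at least 1 fatality in km_driven) under a Poisson model
        return 1.0 - exp(-rate_per_km * km_driven)

    # If a car were expected to kill ~1 person per 10^9 km, the chance of a
    # fatality showing up in its very first 10 km would be tiny:
    print(p_at_least_one_kill(1e-9, 10))   # ~1e-8

So observing a fatality after only a short distance is strong (though not
conclusive) evidence that the true rate is much higher than 1 per 10^9 km.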

------
suresk
For all of the theoretical moral questions that have been brought up around
self driving cars (the proverbial lever that lets you kill one person instead
of a handful, for example), it's interesting/inevitable that this one came up
first.

I'm sure lots of people will argue that self-driving cars will, in the long
run, save a lot of lives, and therefore we can justify some loss of life in
the process. They are probably right about self-driving cars saving more lives
in the long run, but I don't know that the conclusion follows.

From what has been released so far, this feels like a case where a human would
have had a high likelihood of doing the same thing and would not have been at
fault in the collision. But self-driving cars have more sensors and, in
theory, more capabilities to prevent something like this, and we absolutely
should expect them to do better.

As someone with a little bit of experience with the machine learning side of
things, I understand how this stuff is really hard, and I can imagine a
handful of ways the system could have been confused, which makes it all the
more important to understand what went wrong and how we could prevent it from
happening in the future.

As self-driving cars become more prevalent, things like this will happen and
it will be tough to walk the line between over-reacting to every incident and
killing progress in this area and designing safe cars that don't make the
problem worse. I think it is prudent to have Uber stop testing on public roads
until they can explain how this happened and how they can keep it from
happening again.

~~~
MBCook
Based on what has come out over the last few days, I don’t think that’s clear
at all; I think a human would not have hit the woman.

The footage we saw was from a terrible dashcam and did not represent how dark
the street actually was. Other people who have driven down the street with
cameras show it to be perfectly reasonably lit, with streetlights and
everything.

Second, the woman was crossing two lanes, so any driver watching would have
already seen her cross the left-hand lane. It’s not like she popped out from
behind a tree suddenly. She wasn’t even going that fast.

This seems to be a massive failure of Uber’s system as well as a complete
failure of the ‘safety’ driver.

I don’t think anyone’s overreacting. I think they’re under-reacting. I think
it’s pretty clear Uber was heavily negligent. There’s no way they don’t get
sued. What comes out at trial is going to be amazing, I think.

I think if this had been a “real” accident that would’ve been almost
impossible to avoid, or one that WAS impossible to avoid, or one where the
human stepped in and it wasn’t good enough… we would be having a discussion
more like what you’re saying. Maybe if this had been Waymo or one of the other
companies.

But it looks like the dubious honor of the first death by an autonomous car
was totally unnecessary and preventable.

~~~
sokoloff
The pedestrian also has a significant contribution of negligence to this
collision, IMO.

~~~
mjmahone17
Why? They crossed a 35 mph road (so in Tempe this is a “slow” road) at a
corner (unmarked corners are also crosswalks, legally) on a well-lit road,
where the car was many seconds away when she started crossing.

Would a human driver be deemed at fault in a court of law? Usually not (though
in this case, maybe) unless they kept going after the collision, or were under
the influence. But that’s because we give human drivers a lot of leeway to
kill people. Not because it isn’t their fault.

~~~
sokoloff
I don't live there, but this article suggests that there is signage for "no
pedestrians" at the location where she was struck:
[https://www.curbed.com/transportation/2018/3/20/17142090/ube...](https://www.curbed.com/transportation/2018/3/20/17142090/uber-
fatal-crash-driverless-pedestrian-safety)

This twitter stream suggests the same with more details:
[https://twitter.com/EricPaulDennis/status/975889922413551616...](https://twitter.com/EricPaulDennis/status/975889922413551616/photo/1)

It appears to me that they did _not_ cross at a corner, and the corners
"nearby" in the references above all have marked crosswalks.

If that's the case, I don't see how one could assign 0% negligence to the
pedestrian.

~~~
mjmahone17
Ah I missed the part where there was a sign saying “no pedestrians”. In that
case I’d agree it’s not 0 responsibility on the walker, but I don’t think I’d
assign the woman full responsibility, either. If the driver attempted to slow
down and still hit her, the driver (in this case the car) probably wouldn’t
really be at fault. But a collision at 40 in a 35 zone is really bad for a
driver.

~~~
Piskvorrr
The road where the accident took place is posted at 45 MPH; the "35 MPH"
figure came from confusing it with the road in the opposite direction (the two
directions are separate there).

------
PeterStuer
Uber and responsibility don't seem like a great cultural match.

------
philip1209
This seems like a lesson in why government regulation can be good.

~~~
coryfklein
The government regulation allowed Uber on the road, so it may be a lesson in
why government regulation doesn't work or provides a false sense of safety.

~~~
philip1209
I specifically mean that Arizona chose to allow self-driving cars on their
roads with little regulation and that lack of regulation contributed to the
death of a pedestrian.

~~~
coryfklein
Yea, I'm really splitting hairs at this point, but you can't use the Uber
incident this month as evidence to support the claim that "regulation is
good". You could use the incident as evidence that "Arizona's regulation is
not good".

We _can_ support the claim "regulation is good" by pointing to other states
that did regulation right and seeing how their fatality rates are better than
Arizona's.

But yes, I agree with the spirit of the statement, Arizona needs better
regulation, and it seems pretty clear that "regulation is good" in this case.

------
takeda
Based on what I've learned about Uber, their self-driving cars probably run on
NodeJS.

I'm only half joking.

~~~
bryanrasmussen
I don't actually know what a self-driving car should run on; I would want to
write it in Erlang or Rust. But what do you think a self-driving car should be
running on?

~~~
piinbinary
Some kind of realtime system. A GC pause of 200ms in a car could easily cause
a crash.

Sadly, I fear it may actually be written in C++.

~~~
Johnny555
It takes around 100 - 400 msec to blink, so an occasional 200msec GC pause
doesn't sound too bad. Though I'd sure hate to be in a car that gets stuck in
a 15 second full GC freeze.

~~~
takeda
> occasional 200msec GC pause doesn't sound too bad

I guess you never used Windows.

But seriously, an occasional 200ms pause is the ideal scenario; if you have
some kind of memory leak, those pauses might become longer, etc.

A hard real-time system has a defined hard deadline for every operation; if it
says it needs to react within 200ms, it will, or that is considered a failure
of the system.

This might not be obvious to many people, but as cars become more computerized
you no longer directly control them. For example, when you press the brake
pedal, it is not connected to the car's brakes by a steel line; instead it
just sends a signal to the braking mechanism. That system is hard real-time:
it is designed so that if you press the brake, there is a maximum time within
which the car must react. There is no excuse for reacting later because it was
doing something else at the time.

The same should hold for autonomous driving. It absolutely HAS to react before
a deadline.

Actually, let's assume the GC will always take at most 200ms and the component
needs to react within 300ms; if that can always be satisfied, then you can
call the system a real-time system.

The problem with GC is that typically you can't guarantee it will finish in
200ms or even 400ms. You can see this quite often with our desktop computers
and servers, which are not RT: many times you see an application that is slow
or unresponsive.

The difference here is that when GC is taking its time, your website might
respond more slowly; it might get you annoyed, and maybe it will even make you
swear at the developer of the application/system. When this happens to a car,
someone gets killed.
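
A minimal sketch of that deadline contract, in Python purely as illustration
(Python is not a real-time language, and none of these names come from any
actual automotive system):

    import time

    DEADLINE_S = 0.300  # the component must react within 300 ms

    def react_within_deadline(sense, actuate):
        start = time.monotonic()
        actuate(sense())  # e.g. turn "obstacle ahead" into a brake command
        elapsed = time.monotonic() - start
        if elapsed > DEADLINE_S:
            # In a hard real-time system a missed deadline is a system
            # failure, not merely a slow response.
            raise RuntimeError("deadline missed: %.0f ms" % (elapsed * 1000))

A real hard-RT stack would enforce this with worst-case execution-time
analysis and a watchdog rather than checking after the fact, but the contract
is the same: miss the deadline and the system has failed.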

------
swang720
This is interesting, but can we avoid posting links to articles behind
paywalls?

Here's one on the subject that isn't:
[http://www.chicagotribune.com/news/nationworld/ct-arizona-
ub...](http://www.chicagotribune.com/news/nationworld/ct-arizona-uber-
testing-20180326-story.html)

~~~
stordoff
FWIW, complaints about paywalls are generally off-topic, though I agree, a
non-paywalled source can be useful.

[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html)

~~~
ineedasername
I think courtesy should dictate that paywalled articles be accompanied by at
least a brief description. Perhaps a change to the FAQ to recommend such?

------
asdfologist
Let’s suspend human driving too, since it kills over 100 people per day.

~~~
vkou
Humans kill 1 person every ~100m miles.

Uber has killed 1 person after ~3m miles.

Yes, I would absolutely suspend a system that kills 3,000 people/day in favor
of a system that kills 100 people/day.

~~~
jazoom
n=1 (To reduce the number of email notifications I'm getting I'd best put this
message in parentheses right next to this statement. NOTE: I'm not a
statistician and don't believe statistics are all that important here! All I
meant to say here is that this is a single incident. That's it. Nothing more.)

Edit: I'm getting the impression some people think I am suggesting there
aren't grounds to suspend Uber's driving based on my 3 character comment
above, so I'll paste my comment to a child comment here. Also, I'm not a
statistician and I don't really care about the statistics here all that much
when there's video evidence showing this was poor driving.

"Same reason clinical trials stop early if someone dies. It might just be bad
luck. Unfortunately we'll never know and that drug might never be tested
again. It might have been an amazing drug. In this case, based on how it
played out, I personally wouldn't want Uber's self-driving cars near me."

~~~
drcode
Agreed, but it's not like it's OK to say "Well sure, so far we have a kill
rate of 50x a human driver, but why don't we wait until Uber kills at least 20
people first so we can determine proper statistical significance"

~~~
jstanley
Why not?

Why in this case is it acceptable to make decisions on non-statistically
significant data? You would exercise more rigour than that in a throwaway A/B
test.

~~~
tedsanders
It is statistically significant, at a 0.03 level:
[https://news.ycombinator.com/item?id=16655081](https://news.ycombinator.com/item?id=16655081)
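
For reference, here's a rough version of that calculation using the rates
cited upthread (~1 fatality per 100 million miles for human drivers, ~3
million autonomous miles for Uber); this is a sketch of the reasoning, not
necessarily the linked post's exact numbers:

    from math import exp

    # If Uber's cars were as safe as a typical sober human driver, the
    # expected number of fatalities over ~3 million miles would be:
    expected = 3e6 / 100e6                  # = 0.03

    # Under a Poisson model, the chance of seeing at least one fatality:
    p_at_least_one = 1 - exp(-expected)
    print(round(p_at_least_one, 3))         # ~0.03, i.e. p is about 0.03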

~~~
jstanley
Excellent

------
paul7986
How is this company still in business...karma is chasing it all the way down
to oblivion!

~~~
smnrchrds
Outside of Hacker News, I have never encountered anyone who followed, was
fully aware of, or cared about Uber scandals. From my conversations, it seems
that people in my city hate taxi companies so much that it takes more than a
couple of manslaughters to make them stop using Uber.

------
notatoad
Seems like kind of a pointless action to make it look like they're responding.
Uber already suspended their testing in Arizona.

