
Video suggests huge problems with Uber’s driverless car program - yohui
https://arstechnica.com/cars/2018/03/video-suggests-huge-problems-with-ubers-driverless-car-program/
======
phyller
"Uber recently placed an order for 24,000 Volvo cars that will be modified for
driverless operation. Delivery is scheduled to start next year."

Wow. The other driverless car players should be all over this lobbying to shut
Uber down. If Uber massively deploys a commercial service with subpar quality
in order to "win", and then those cars start getting into accidents, the
entire field is going to be delayed by 10 years. The general public is not
going to just think "Uber is bad", they are going to think "self-driving cars
are bad". Politicians will jump all over it and we'll see very restrictive
laws that no one will have the guts to replace for a long time.

And honestly if that happens, that's probably what we would need anyway. If
the industry doesn't want to be handcuffed they need to figure out some really
good standardized regulations on sharing data with law enforcement, how to
determine fault for self-driving vehicles, and what penalties there should
be, ones that are fair and strict.

~~~
selestify
Could Uber be doing this on purpose? They're way behind on self-driving tech,
so maybe if they can't have it, no one can.

~~~
phyller
The more I think about this the more it makes sense in a horrible way. The
important thing for Uber is that they are first to market. If they are first
to market there are two outcomes: a) they are successful, they make more money
with lower fares because they don't need to pay drivers anymore. They
basically take over the market that they are already dominating, beating
competitors into bankruptcy. The market explodes as it is cheaper to get on
demand cars than to own your own. $$$$$ b) they are not successful. They kill
a hundred people in a month or two. The self-driving car industry gets shut
down, and for the cost of a few hundred million dollars in settlements they
keep their current market dominance in the current industry, but have to keep
paying drivers. $$

The alternative is that they are not first to market; someone else is and
immediately replaces Uber with a network of cheaper self-driving cars. Then
either: a) Uber goes out of business, or b) they somehow convince someone to
license the tech or sell them the cars at a reasonable price, leaving them
vulnerable and less profitable, with no market advantage.

------
pdkl95
> "We don't need redundant brakes & steering or a fancy new car; we need
> better software," wrote engineer Anthony Levandowski

Any engineer with this attitude needs to learn the lesson of the Therac-25.
The issues in the Ars article are very similar to section 4 "Causal Factors"
of the report[1].

> To get to that better software faster we should deploy the first 1000 cars
> asap.

Is that an admission that they _do not have_ the "better software" and intend
to deploy 1000 cars using "lesser software"? That treads dangerously close to
manslaughter charges if this _willful contempt for safety_ can be proven in
court.

[1]
[http://sunnyday.mit.edu/papers/therac.pdf](http://sunnyday.mit.edu/papers/therac.pdf)

~~~
iClaudiusX
I get the feeling based on comments here that there is a severe lack of
ethical and critical thinking among engineers and developers. I recognize that
this is only a vocal minority but the constant mantra of "move fast and break
things", where getting rich at any cost is seen as a virtue, has made me
extremely disillusioned with this brand of startup culture. Doubly so when
people are trading stock tips on how to profit from tragedy by supporting the
worst actors in the field.

~~~
KKKKkkkk1
"Move fast and break things" is the motto of Facebook. It means that Facebook
engineers are encouraged to make user-facing changes without red tape. If you
are claiming that some self-driving car company has "move fast and break
things" as their motto, then you are being willfully deceptive.

~~~
PhasmaFelis
Have you not noticed that "move fast and break things" is essentially how the
entire startup space functions? When a social media company does it, it's
merely annoying. Do it with dangerous machines, and people die.

------
JohnJamesRambo
"Uber announced that it had driven 2 million miles by December 2017 and is
probably up to around 3 million miles today. If you do the math, that means
that Uber's cars have killed people at roughly 25 times the rate of a typical
human-driven car in the United States."

Wow there goes that "safer than human drivers" argument.

~~~
mabbo
With one data point, you can't extrapolate much. This is a misuse of statistics.

Consider if there was a new lottery and you weren't sure what the odds of
winning were. You play it three weeks in a row and the third time you win a
million dollars. Conveniently, no one else tries the new lottery yet.

Does it follow then that the odds of winning a million dollars are 1 in 3? Or
should you play it a few more times before you declare to all that one in
three plays will make one a millionaire?

~~~
soyiuz
They did not sample three times and win once. They sampled three million times
and won once. Hardly "one data point."

~~~
mabbo
The point remains: it isn't fair to decide what the probability is with only
one data point on one side, especially for rare events.

Would it have been fair if Uber last week were to declare that they have a 0%
probability of pedestrian deaths, since they'd never had one yet?

The goal of these statistics is to predict future outcomes. But with such a
small data sample, you cannot fairly predict the future, just as in my lottery
example.

~~~
ggg9990
What if Uber’s first self-driving car killed a cyclist in its very first mile
of operation? Would you find it equally hard to draw conclusions?

------
buildbot
I don't understand how there isn't a non-ML piece of code that looks at
moving radar and lidar returns and performs an emergency brake, light flash,
horn, or dodge if any such vector would intersect the car's path with any
confidence. Even slowing down to 20 mph can turn a fatal accident into an injury.

~~~
KKKKkkkk1
What if it has nothing to do with ML? You see a point cloud that's moving
toward your lane at an estimated speed of, say, 2 mph. If that's below the
sensor noise threshold, you might classify the cloud as a stationary object in
the other lane (say, a stranded car). In that case, by the time you realize
that this stationary object has somehow moved itself into your own lane, it is
already too late.
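The failure mode being described fits in a few lines of pseudocode-like Python (the threshold value and names are made up for illustration; this is not Uber's actual logic):

```python
# Hypothetical sketch of a tracker that treats any velocity estimate
# below the sensor noise floor as stationary. Threshold is an assumed value.
NOISE_FLOOR_MPH = 3.0

def classify(estimated_speed_mph: float) -> str:
    """Label a tracked point cloud from its (noisy) speed estimate."""
    if abs(estimated_speed_mph) <= NOISE_FLOOR_MPH:
        return "stationary"  # e.g. assumed to be a stranded car in the other lane
    return "moving"

# A pedestrian walking a bicycle at ~2 mph gets labeled "stationary",
# and by the time the label flips, it may be too late to brake.
```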

~~~
AlotOfReading
You slow down anyway, because it will almost always improve the situation and
reduce your liability. And if the vision system is so bad that it can't
identify objects in its lane or predict their vectors, it's not ready for the
real world.

~~~
sokoloff
This will have you unexpectedly slowing for road signs, guardrails, and other
stationary objects on the side of the road when they are on the inside of a
curve (as they will have apparent motion towards your lane).

Set "too tightly", it will also have you slowing for every car approaching a
stop sign from a side street.

Cars that randomly slow out of an excess of caution are also a hazard to other
road users. Don't believe that? Go drive for a month and set a series of
alarms on your phone every 5-10 minutes. Every time the phone goes off,
abruptly slow to half of your prior speed. Do you think you'd make 1,000 miles
without causing a road hazard or collision?

~~~
davidgould
I'm not sure which side you are on here. Are you saying that autonomous cars
cannot be careful because the technology to do so isn't good enough, or are
you saying that they should not be required to be careful?

~~~
sokoloff
I'm saying that a rules-based system with "slow down arbitrarily whenever the
hell you want" is unlikely to meld well with existing traffic and is likely to
cause as well as avoid unsafe situations.

I believe that might mean that autonomous vehicles are not yet ready for road
testing if that is commonly required by the current state of the art. (I last
worked on autonomous vehicles in 1991; ours was entirely rules-based and we
tested on public roads in addition to private tracks. Ours was bad enough that
the human driver hovered over the red E-shut button and was always paying
attention. It was harder work than just driving the damn thing yourself, but
we had to test in order to make progress. I'm sure a lot has improved since
then.)

I also don't think that zero fatalities is a realistic goal nor is it the
standard that should unduly inhibit progress. People have been dying in
transit on foot, on horseback, on bikes, and in cars. This is a version of the
trolley problem. I don't mind and in fact actively prefer a system improvement
that allows 100 deaths while saving 500, even if the 100 is entirely disjoint
from the counterfactual 500.

In this specific case, I believe the autonomous car allowed 1 death that would
have also been allowed by a human driver in the same circumstances, so it's a
push.

------
sitkack
> "We don't need redundant brakes & steering or a fancy new car; we need
> better software," wrote engineer Anthony Levandowski to Alphabet CEO Larry
> Page in January 2016.

Looks like Uber has attracted Levandowski due to his cultural fit.

~~~
comex
Hmm, but wouldn’t his priorities be correct in the context of this crash?
There hasn’t been any suggestion (so far) that the crash occurred because some
hardware component stopped working; rather, it seems like the software failed
to identify the pedestrian in time. So better software seems precisely what
was needed. Though I can imagine that better sensors might also have helped…

~~~
ddeck
The issue is not that he wanted better software, it's that he appeared willing
to compromise safety to get it faster in order to beat his competitors to
market, as is clear from the remainder of that quote:

 _"To get to that better software faster we should deploy the first 1000 cars
asap. I don't understand why we are not doing that. Part of our team seems to
be afraid to ship."_

And from another email:

 _"the team is not moving fast enough due to a combination of risk aversion
and lack of urgency"_

~~~
toast0
The rest of the quote is much more powerful. It's pretty irresponsible to
ship 1,000 self-driving cars onto public roads at this point (regardless of
who is shipping them).

On the other hand, redundant steering and braking seem like probable
overengineering. Brakes are already somewhat redundant (dual section master
cylinders were common in the 70s and are almost certainly in any modern
vehicle), and better software could periodically verify they're working and,
if not, coast to a stop. Steering failure could be handled by engaging the
brakes. Simultaneous failure is likely rare and catastrophic anyway; losing a
wheel and having the brake pressure go with it can happen, and when it does,
you put on your blinkers and hope you come to rest in a safe manner.
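The "periodically verify, then fall back to a controlled stop" strategy sketched above could look something like this (all names hypothetical; real vehicle software is vastly more involved):

```python
from typing import Callable, Dict, List, Tuple

def self_check(controls: Dict[str, Callable[[], bool]]) -> Tuple[str, List[str]]:
    """Exercise each control probe; demand a controlled stop if any fails.

    `controls` maps a control name to a probe (e.g. "gently tap the brakes",
    "jiggle the steering") that returns True when the actuator responds.
    """
    failed = [name for name, probe in controls.items() if not probe()]
    if failed:
        return "controlled_stop", failed
    return "continue", []

# Example: brakes stop responding while steering is fine.
action, failures = self_check({
    "steering": lambda: True,
    "brakes": lambda: False,
})
```

Here `action` comes back as `"controlled_stop"` with `["brakes"]` listed as the failed control, so the vehicle would begin its stop procedure instead of continuing.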

~~~
05
So, dual action master cylinders are OK by you, but actuators are apparently
so much more reliable you only need one of them? And the same goes for the
control hardware and power supplies because you are ready to handle power loss
in software? I hope you have the common sense to stay away from engineering
safety-critical systems for the rest of your career.

~~~
toast0
Dual (or triple? I don't know how many you want) actuators don't help very
much if the software doesn't know how to activate them properly (as seems to
be the case here).

You absolutely need a system to ensure a controlled stop in any type of
critical failure in ability to control the system. Assuming you have that, it
seems reasonable to regularly verify the controls are functional (jiggle the
steering, modulate the throttle, gently tap the brakes) every so often, and
rely on your controlled stop procedure in the event of failure.

I do have the common sense to avoid safety critical systems, thanks; however
armchair engineering is a national sport.

------
etimberg
Stuff like this is why P.Engs should be required in certain software fields

~~~
diggernet
I'd upvote you to the top of the page if I could.

------
matte_black
Why don’t we require software engineers who work on self-driving car software
to go through licensing and certification?

And then, if their code results in a death, they are liable and can have their
license completely revoked, and they would be unable to work on self driving
cars again.

~~~
superfrank
\- Expecting engineers to always write perfect code is insane. Mistakes
happen.

\- If bad code makes it into production, that is a systemic failure, not an
individual one (why didn't the bug get caught in code review, QA, etc.?).

\- No one is going to want to work on a project where a single failure can
taint their career.

\- What if I use a 3rd party lib and the bug is in there? Who is at fault
then? What if the code isn't buggy, but I'm using it in an unexpected way
because of a miscommunication? If I am only allowed to use code that I (or
someone certified) has written, development is going to move at a snail's pace.

\- What if I consult with an engineer who doesn't have a certification on a
design decision and the failure is there, who is at fault?

\- What if the best engineer on the project makes a mistake and ends up
banned? Does he/she leave the project and take all their tribal knowledge with
them, or are they still allowed to consult? If they can consult, what stops
them from developing by proxy by telling other engineers what to write?

Not to be a dick, but this is an awful idea that would basically kill the
self-driving car industry.

~~~
PinguTS
There is ISO26262 for automotive about safety.

Yes, there are reviews, QA, and all of that. So, yes, there is no single
person responsible (exceptions apply).

But there is no excuse for using 3rd party libs. Just don't use them. If you
don't know what's in it, don't use it.

That is what certifications are for. The same rules apply in medical and
other areas.

~~~
colatkinson
> But there is no excuse for using 3rd party libs. Just don't use it. If you
> not know: do not use it.

Wait, what? That goes against one of the core benefits of open source
software: that having many eyes on a problem _decreases_ the risk of bugs. I'm
willing to bet that if Uber had to implement their own machine learning/vision
libraries from the ground up, there would be significantly more issues.

~~~
henrikeh
Is there any evidence that this is the case? That simply putting more eyes on
a system will reveal its problems?

Certification etc. is about process. Open source code can be used in a
safety-critical product, but it must be audited and confirmed against the
system requirements.

------
aylons
"Move fast and break things" is exactly the opposite of what a responsible
driver must do.

~~~
telchar
I've been joking for at least a year that Uber's motto is "move fast and break
people". I'm saddened that this has come to pass but not surprised.

~~~
fstuff
Actually their motto is "Safety third"

[https://jalopnik.com/safety-third-is-the-running-joke-at-ube...](https://jalopnik.com/safety-third-is-the-running-joke-at-ubers-self-driving-1793368132)

------
sureaboutthis
I have two problems with this article.

1) They make it appear that Uber is a car manufacturer.

2) Even though Uber has not been determined to be at fault, the author seems
to want to make it that way anyway.

~~~
TillE
Every engineer on this project at Uber knows very well that their car
completely failed in one of its most basic expected functions. It's incredibly
obvious, and a number of independent experts have said as much.

I'd be fairly surprised if there's any real appetite at Uber to continue with
this now. It was never anywhere near their core competency.

~~~
tantalor
> core competency

3 years ago Uber hired ~50 specialists from CMU to work on autonomous
vehicles. I'd call that a core competency.

[https://www.theverge.com/transportation/2015/5/19/8622831/ub...](https://www.theverge.com/transportation/2015/5/19/8622831/uber-self-driving-cars-carnegie-mellon-poached)

~~~
notyourwork
Core focus, maybe; competency implies being competent. With this I’m less convinced.

~~~
neom
My measure of core competency is capability + capacity. (Tesla and Ford, for
example, do not have the same core competencies, and to your point, have core
focus in each of the others.)

------
joejerryronnie
Why is everyone considering it a foregone conclusion that self-driving cars
will quickly become much, much safer than human-driven cars? Yes, lots of
people die every year in human driven car accidents. But it is equally true
that our most sophisticated AI/ML can only really operate within very narrowly
defined parameters (at least when compared to the huge sets of uncertain
parameters humans deal with every day in the real world). Driving is perhaps
one of the most unpredictable activities we can engage in, anecdotally
supported by my daily commute. What if our self driving software never becomes
good enough? How many more deaths are we willing to go through to find out?

------
speedplane
I was at SXSW a few weeks ago and went to an Uber driverless car talk. They
spent the first half of the talk discussing driver safety, and it felt
incredibly hollow.

If they really cared about safety, there are far more immediate and impactful
solutions than spending billions on self-driving cars. If they came out and
said that they were doing it to make money or make driving easier, it would
have carried more weight. But you just can't trust a word this company says.

------
RcouF1uZ4gsC
Are you surprised? Uber is a company that:

* Flouted taxi regulations

* Lived in legal gray zones with regard to contractors vs. employees

* Designed a system to evade law enforcement

* Used shady tactics against its competitors

* Illegally obtained the private medical records of a rape victim

* Created a workplace where sexual harassment was routine

* Illegally tested self-driving cars on public roads in California without obtaining the required state licenses

* Possibly stole a LIDAR design from a competitor

Now their vehicle has killed a pedestrian in a situation where self-driving
vehicles should be much better than humans (LIDAR can see in the dark, and the
reaction time of a computer is much better than a human's).

Uber has exhausted their "benefit of the doubt" reserve. Maybe, they need to
be made an example of with massive losses to investors and venture capitalists
as an object lesson that ethics really do matter, and that bad ethics will
eventually hurt your bank account.

~~~
Stanleyc23
if they are at fault they should be punished, but you do realize that the
expectation for self driving vehicles is not to eliminate all car related
deaths, right?

edit: wow this triggered some people. somehow 'if they are at fault they
should be punished' got interpreted as 'they are not at fault and should not
be punished'

~~~
bobthepanda
From the article:

> But zooming out from the specifics of Herzberg's crash, the more fundamental
> point is this: conventional car crashes killed 37,461 in the United States
> in 2016, which works out to 1.18 deaths per 100 million miles driven. Uber
> announced that it had driven 2 million miles by December 2017 and is
> probably up to around 3 million miles today. If you do the math, that means
> that Uber's cars have killed people at roughly 25 times the rate of a
> typical human-driven car in the United States.

We have a sample size of 1, granted, but it's not looking very good. At the
very least they were expected not to be less safe than humans.

~~~
roenxi
You have a sample size of 1. Acknowledging that doesn't suddenly make the
sample evidence of anything, good or bad :P.

Almost all the miles driven are going to be in near-ideal circumstances
(daylight, no rain, good road surface, driver familiar with normal road
traffic conditions and drives the route regularly). I have nearly no insight
into the uber death, but I gather it happened at night. It could easily be
that humans are also an order of magnitude more dangerous at night.

~~~
XMPPwocky
Of course it's evidence of something!

Suppose, as a massive oversimplification, Uber's self-driving cars crash with
some constant probability P for every mile driven (i.e. a Bernoulli process).

We now have learned at least one thing with absolute confidence: P > 0.

The first mile driven before the accident, of course, also showed P < 1.

But beyond _absolute certainty_, we also have a _better idea_ of the actual
value of P. (Intuitively, the longer we go without a crash, the lower we
suspect P to be, and with every crash, we increase our estimate of P).

If there was only one crash in 3 million miles driven, this is evidence for
values of P near 1/(3 million), and evidence against values of P far from it.

Is it strong evidence? Nope! But it's evidence!
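This intuition is exactly the standard Bayesian update for a Bernoulli process; a minimal sketch, with the numbers purely illustrative:

```python
# Beta-Bernoulli update: start from a uniform Beta(1, 1) prior over the
# per-mile crash probability P, then observe 1 crash in 3,000,000 miles.
crashes = 1
miles = 3_000_000

alpha = 1 + crashes            # posterior Beta parameters
beta = 1 + (miles - crashes)

posterior_mean = alpha / (alpha + beta)          # ~6.7e-7
map_estimate = (alpha - 1) / (alpha + beta - 2)  # exactly 1 / 3,000,000
```

The posterior peaks at the naive estimate 1/(3 million), but it is wide: values several times larger or smaller remain plausible, which is precisely "evidence, but not strong evidence."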

~~~
roenxi
You obviously want to be technical. Technically you are correct. However, the
evidence that this car is a worse driver than a human is currently so weak
we're both wasting time talking about it. We need more of the stuff for it to
be worth considering.

Your pedantry has managed to upset me and I would encourage you to be a little
more understanding of people using language in the way it is used outside of
the world of mathematics. Walking into a practical discussion of safety with
an existence proof of all things is disrespectful of the fact that lives and
enormous quantities of human attention are at stake.

Obviously, technically everything is evidence of something. I know that. Using
the language in that sense is not going to help.

------
throwaway010718
Machine learning and AI are data-hungry algorithms, and the concern is there
isn't enough "emergency situation" data. Also, a detector cannot have both a
100% probability of detection and a 0% probability of false alarm. You have to
trade one against the other, and that tradeoff is usually influenced by
weighted probabilities and priorities (e.g., a smooth ride).
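The tradeoff shows up in even a toy detector (the score distributions here are made up): sweeping the decision threshold moves the detection rate and the false-alarm rate together, never one without the other:

```python
import random

random.seed(0)

# Toy detector: classifier scores for frames with a real obstacle vs. clutter.
obstacle_scores = [random.gauss(2.0, 1.0) for _ in range(10_000)]
clutter_scores = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def rates(threshold: float):
    """Detection and false-alarm probability at a given score threshold."""
    p_detect = sum(s > threshold for s in obstacle_scores) / len(obstacle_scores)
    p_false = sum(s > threshold for s in clutter_scores) / len(clutter_scores)
    return p_detect, p_false

# Lowering the threshold raises detection AND false alarms together.
low = rates(0.0)
high = rates(1.5)
```

Tuning toward "a smooth ride" means raising the threshold, buying fewer phantom brakes at the cost of missed detections.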

------
d--b
Uber's culture is bad for anything really...

------
kristianov
I wish on-road testing were more like human drug testing. After all, both
affect human lives.

~~~
oldgradstudent
Drugs are tested under informed consent, not on unwilling third parties.

Maybe testing of autonomous vehicles should be done off public roads (at least
at this stage of development).

------
ghfbjdhhv
This event has me thinking about the job of the behind-the-wheel backup
driver. They get an easier job than a real driver, at the cost of potentially
taking the fall if an accident occurs. I wonder if the pay is better.

~~~
icc97
It's an equivalent job to that of a train driver.

~~~
mrguyorama
Train drivers have immense amounts of regulation and systems in place to
verify that they are paying attention and actually driving the train:
[https://en.wikipedia.org/wiki/Sifa](https://en.wikipedia.org/wiki/Sifa)

~~~
icc97
Yes exactly, we should be applying similar rules to the self-driving drivers.

------
icc97
I thought the speed limit was 35mph, but the article claims 40mph.

------
bambax
"Testing" of driverless cars seems to be the wrong way around. Software should
try to learn from human drivers: watch them instead of being watched by them.

The way it would work: the human is driving and the software is, at the same
time, watching the driver and figuring out an action to take. Every time the
driver's and the software's behavior differ, the event is logged and analyzed
to figure out why there was a difference and who guessed better.

But the way testing is currently going on, it seems millions of miles are
wasted where nothing happens and nothing is learned.
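A sketch of that shadow-mode idea (function and field names are made up for illustration, not any real system's API):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ShadowLog:
    """Records moments where human and software chose different actions."""
    disagreements: List[Tuple[float, float]] = field(default_factory=list)

def shadow_tick(human_steer: float, software_steer: float,
                log: ShadowLog, tolerance: float = 0.1) -> None:
    """Compare the human's action to what the software would have done."""
    if abs(human_steer - software_steer) > tolerance:
        log.disagreements.append((human_steer, software_steer))

log = ShadowLog()
shadow_tick(0.00, 0.45, log)  # software wanted to swerve; human did not
shadow_tick(0.20, 0.25, log)  # within tolerance: no disagreement logged
```

Only the first tick gets logged, so analysts review exactly the moments where the two drivers disagreed rather than millions of uneventful miles.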

~~~
Animats
No, what that gets you is smooth normal driving and poor handling of emergency
situations. People have tried using supervised learning for that - vision and
human actions for training, steering and speed out. Works fine, until it works
really badly, because it has no model of what to do in trouble.

~~~
mickronome
While I understand the issues, it's still much better in the sense that it
doesn't kill anyone while failing to learn how to drive. The fact that we
cannot develop something without exposing people to risk doesn't create a
right to expose people to risk. It should be seen either as an impassable
obstacle or as motivation to solve the problem of learning safely.

Some argue that it is okay because it will decrease risks in traffic in the
long run. This is not a valid argument for allowing on-road bug testing: there
is a lot of medical research that we as a society don't allow because of
ethical concerns, even some research where the risk of death is essentially
zero. Applying research ethics to the Uber situation, Uber's vehicles would
under no circumstances be allowed on the road until it could be _proven_ that
they were at least as safe as the vehicles already operating on the road.

So while the technique suggested might not work well to solve the problem of
safe autonomous cars, the more dangerous alternatives should absolutely not be
allowed.

------
aaroninsf
Surprising no one.

"win-at-any-cost" and "second place is first looser" (sic) do not cohere with
safety.

