Uber Self-Driving Car That Struck Pedestrian Wasn’t Set to Stop in an Emergency (ntsb.gov)
431 points by jeffreyrogers 4 months ago | 468 comments



Direct link to NTSB preliminary report: https://www.ntsb.gov/investigations/AccidentReports/Reports/...

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2). According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.


I worked on the autonomous pod system at Heathrow airport[1]. We used a very conservative control methodology; essentially the vehicle would remain stopped unless it received a positive "GO" signal from multiple independent sensor and control systems. The loss of any "GO" signal would result in an emergency stop. It was very challenging to get all of those "GO" indicators reliable enough to prevent false positives and constant emergency braking.

The reason we were ultimately able to do this is because we were operating in a fully-segregated environment of our own design. We could be certain that every other vehicle in the system was something that should be fully under our control, so anything even slightly anomalous should be treated as a hazard situation.

There are a lot of limitations to this approach, but I'm confident that it could carry literally billions of passengers without a fatality. It is overwhelmingly safe.

Operating in a mixed environment is profoundly different. The control system logic is fully reversed: you must presume that it is safe to proceed unless a "STOP" signal is received. And because the interpretation of image & LIDAR data is a rather... fuzzy... process, that "STOP" signal needs to have fairly liberal thresholds, otherwise your vehicle will not move.
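To make the contrast concrete, here's a minimal sketch of the two philosophies (hypothetical signal names and thresholds, not our actual system or Uber's):

    # Fail-safe, segregated guideway: every independent subsystem must
    # actively assert GO; losing any one of them triggers an E-stop.
    def ok_to_move(go_signals):
        # go_signals e.g. {"obstacle_radar": True, "guideway_comms": True, ...}
        return all(go_signals.values())

    # Mixed traffic: keep moving unless some hazard estimate crosses a
    # threshold. Set the threshold too low and the car brakes for every
    # plastic bag; set it too high and it misses real hazards.
    def ok_to_proceed(hazard_probabilities, stop_threshold=0.9):
        return all(p < stop_threshold for p in hazard_probabilities)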

Uber made a critical mistake in counting on a human-in-the-loop to suddenly take control of the vehicle (note: this is why Level 3 automation is something I'm very dubious about), but it's important to understand that if you want autonomous vehicles to move through mixed-mode environments at the speeds which humans drive, then it is absolutely necessary for them to take a fuzzy, probabilistic approach to safety. This will inevitably result in fatalities -- almost certainly fewer than when humans drive, but plenty of fatalities nonetheless. The design of the overall system is inherently unsafe.

Do you find this unacceptable? If so, then ultimately the only way to address this is through changing the design of the streets and/or our rules about how they are used. These are fundamentally infrastructural issues. Merely swapping out vehicle control systems -- robot vs. human -- will be less revolutionary than many expect.

1: http://www.ultraglobalprt.com/


> The loss of any "GO" signal would result in an emergency stop.

That's an E-stop chain and that's exactly how it should work.

But the software as described in the NTSB report was apparently bad enough that they essentially hardwired an override on their emergency stop. The software equivalent of putting a steel bar into a fuse receptacle. The words that come to mind are 'criminal negligence'. The vehicle would not have been able to do an E-stop even if it was 100% sure it had to do just that, nor did it warn the human luggage.

The problem here is not that the world is so unsafe that you will have to make compromises to get anywhere at all, the problem here is that the software is still so buggy that there is no way to safely navigate common scenarios. Pedestrian on the road at night is one that I've encountered twice on my trips and they did not lead to any fatalities because when I can't see I slow down. If 6 seconds isn't enough to make a decision you have no business being on the road in the first place.


> Pedestrian on the road at night is one that I've encountered twice on my trips and they did not lead to any fatalities because when I can't see I slow down. If 6 seconds isn't enough to make a decision you have no business being on the road in the first place.

I've seen a few people comment on the footage that they too would have run the pedestrian over, to which my only response is: I sure hope you don't have a driver's license [anymore]!


The vast majority of those people are being (purposely?) misled by a video showing an apparently pitch-black section of road. In reality, it was a lighted road and the dashcam footage had a very compressed dynamic range.


> it was a lighted road

The report is a bit ambiguous about that:

The videos show that the pedestrian crossed in a section of roadway not directly illuminated by the roadway lighting.


To me that means they weren't walking directly under the street lamp. If you look at other people's videos of that street at night on YouTube, it's well lit. Street lamps cast a wide spotlight, so you don't have to be directly under one to still have illumination.


Not to mention that cars these days come with headlights, a radical new feature that provides road illumination in the absence of streetlights.


> If you look at other people's videos of that street at night on YouTube, it's well lit.

I don't think you can look at videos and judge the level of illumination well; their videos could be more or less accurate than Uber's, and what I see depends on codecs, video drivers, my monitor, etc. Also, any video can easily be edited these days.

Is there a way to precisely measure the illumination besides a light meter? Maybe we can use astronomers' tricks and measure it in relation to objects with known levels of illumination. Much more importantly, I'm not even sure what properties of light we're talking about - brightness? saturation? frequencies? - nor which properties matter how much for vision, for computer vision, and for the sensors used by Uber's car in particular.

I'm not taking a side; I'm saying I have yet to see reliable information on the matter, or even a precise definition of the question.


It is generally unusual for any camera (not using infrared) to outperform the human eye in low-light situations. If a camera (any camera) shows a clear image at all, a person would almost certainly have seen it.


Dashcam videos typically do not capture nighttime scenes very well. Any human would have been able to see the pedestrian well in advance of a collision. There are cell phone videos of that same stretch of road at night and they show the illumination level much better than the Uber video.


It is the case that even very good driverless cars of the future will cause fatalities now and then. Even if they're safer than human drivers.

Don't conflate that with Uber's screw-up here. This wasn't a situation where a fatality was unavoidable or where a very safe system had a once-in-a-blue-moon problem. It's one where they just drove around not-very-safe cars.


Agreed. Uber disabled a safety feature that would have prevented this fatality -- but that doesn't mean that the automation was therefore safe apart from Uber's mismanagement of it. It's entirely believable that had that safety feature been fully enabled, it would have also e-braked in 1,000 other situations which didn't result in a collision. And false-positive e-brake events are definitely worth avoiding: they can get you rear-ended and injure unbelted passengers.

This doesn't mean that Uber therefore did the right thing in disabling the system; it probably means that the system shouldn't have been given control of the car in the first place. But my point is that there is no readiness level where driverless cars will ever be safe -- not in the same way that trains and planes are safe. The driving domain itself is intrinsically dangerous, and changing the vehicle control system doesn't change the nature of that domain. So if we actually care about safety, then we need to be changing the way that streets are designed and the rules by which they are used.


> It is the case that even very good driverless cars of the future will cause fatalities now and then. Even if they're safer than human drivers.

And that is why I am so mad at Uber. They are compromising the public trust in autonomous cars with their reckless release policy. And thereby potentially endangering even more lives, as we have to convince the public of the advantages of this technology.


I agree with all of this except the tense: haven't they shut the whole thing down at this point with no immediate plans to start it up again? Or am I mis-remembering that.


They were testing in three places: San Francisco, Arizona, and Pittsburgh. They didn't want to get a license from California (probably because they couldn't follow the safety regulations), so they threw a tantrum and moved to AZ. Then after this fatality, they shut the AZ program down and are just testing in Pittsburgh.


That's not true. They shut down everywhere after this fatality. They just said that they'll shut down AZ permanently (not that AZ would probably let them do it anyway), and resume testing in Pittsburgh sometime soon, in a more limited way (which apparently the Pittsburgh mayor isn't wild about).


This is absolutely the right analysis of how these systems work and why you can't expect autonomous cars to halt traffic deaths. What the Uber crash has shown us is that the tolerance for AVs killing people is probably exactly zero, not some (very meaningful) reduction like 10x or 100x less.

My company didn't start with this zero tolerance thing in our minds, but it turns out our self-delivering electric bicycles have a huge advantage for real world safety because they weigh ~60lbs when in autonomous mode and are limited to 12mph. This equals the kinetic energy of myself walking at a brisk pace, or basically something that won't kill purely from blunt force impact. I think the future for autonomy will be unlocked by low mass and low speed vehicles, not cars converted to drive themselves.
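For a rough sense of scale, here's the back-of-the-envelope arithmetic (the ~2,000 kg SUV mass is an assumed round number for comparison, not a measured figure):

    # Back-of-the-envelope kinetic energy comparison
    def ke_joules(mass_kg, speed_mph):
        v = speed_mph * 0.44704          # mph -> m/s
        return 0.5 * mass_kg * v ** 2

    print(ke_joules(27, 12))     # ~60 lb bike at 12 mph   -> roughly 390 J
    print(ke_joules(2000, 43))   # ~2,000 kg SUV at 43 mph -> roughly 370,000 J

Roughly three orders of magnitude apart, which is the whole point.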


> What the Uber crash has shown us is that the tolerance for AVs killing people is probably exactly zero, not some (very meaningful) reduction like 10x or 100x less.

It hasn't shown that at all. It has documented beyond reasonable doubt that Uber should not be allowed to participate in real world tests of autonomous vehicles.

There are plenty of situations where people would fully accept a self driving vehicle killing someone but this isn't one of those.


The Uber crash has shown us that the public tolerance for AVs killing people is somewhere below roughly 30x more dangerous than the mean human driver.

Uber had a fatality after 3 million miles of driving.

The mean fatality rate is approximately 1 per 100 million miles of driving.

It's a sample size of one, so the error bars are big, but it drives me insane that people are acting like the Uber cars are the ideal driverless cars of the imagined future, and are super safe. The available data (which is limited, but not that limited) is that Uber driverless cars are much, much, much more dangerous than mean human drivers.
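The arithmetic behind that ~30x, for whatever a sample of one is worth:

    uber_rate  = 1 / 3e6      # ~1 fatality per 3 million autonomous miles
    human_rate = 1 / 100e6    # ~1 fatality per 100 million miles (US average)
    print(uber_rate / human_rate)   # ~33x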


My company didn't start with this zero tolerance thing in our minds, but it turns out our self-delivering electric bicycles

That actually sounds like a really interesting concept, one of those ideas that seems obvious only after someone suggests it. What company is this?

Right now, in the Seattle area, we are basically seeing a new littering epidemic in the form of sharable bicycles being left to rust away, unused, at random places. If the bike could cruise to its next user autonomously, that would really be a game-changer. "Bikes on demand" would turn bikesharing from (IMHO) a stupid idea into something that just might work.

Plus, the engineering challenges involved in automating a riderless bicycle sound fun.


Weel, we're in Bellevue. It's a super fun problem to work on, and one of the first things we figured out was that trikes won't work because of their width and how difficult they are to ride, so we got a two-wheeled bike to balance. The autonomy problems are easier than cars in a lot of ways, and this Uber case is something we don't deal with because our bikes can always stop when presented with a no-go situation, since we're only autonomous when no one is riding.


That's good to hear, sounds like a very cool project. I could see this living up to at least some of the hype that the original Segways received.

The biggest challenge will probably be to keep people from screwing with the bikes, of course. :( An unoccupied bicycle cruising down the street or sidewalk will fire all sorts of mischievous neurons that onlookers didn't even know they had.


Definitely, will be interesting to test. We have several cameras onboard so that we can see what happened but an equal concern with vandalism is how people feel about being watched. We want to avoid feeling like your neighborhood is suddenly a panopticon. Still unsolved.


How do you (plan to) deal with people trying to stop what looks like a runaway bicycle (esp. when it rolls ^W rides downhill)?


Hah, yeah it reminds me of a runaway shopping cart when you see our bike rolling. We expect people will get used to it eventually but we have some ideas to test in the future on how to make it more obvious, such as giving the bike a ‘face’ and having it lit up with LEDs that are visible from all angles. Def not a solved problem, but as far as design problems go it’s a pretty fun one.


Your analysis leaves much to be desired, though, as it comes perilously close to equating "we can't prevent 100% of fatalities" with "we shouldn't care about, learn from, or make changes in response to a fatality".

What the Uber crash has shown us is mostly the willingness of people on HN to excuse Silicon Valley darlings even when they actually demonstrably kill people.


I don't think it has anything to do with "Silicon Valley darlings" (which Uber certainly isn't anymore). It has more to do with "super cool future tech" that they really want to see implemented in their lifetimes - so much so that they may make dubious arguments to support their position.


> This equals the kinetic energy of myself walking at a brisk pace, or basically something that won't kill purely from blunt force impact.

This is an analogy that cannot completely map to cycling.

A fall at any speed from a bike is literally a potentially crippling or deadly scenario.


Potentially deadly? Maybe, sure, but at low speeds, up to 10 mph say, it is incredibly unlikely that falling off a bicycle (even with no helmet) will do more than cause bruises and damaged ego.


Falling while stationary is the second most common type of accidental death, can't imagine a bicycle makes it any safer lmao


Is this including the elderly who often will break a hip that way and then die of the complications? Because if so, that would not be comparable to a healthy young (< 60 yo) person falling.


Are there numbers on the average height of those fatal falls? If they're from balconies, roofs, etc., I'd say being on a bike (a few feet from the ground) would make it much safer.


When you're balancing on two thin tires, you are lucky if you fall and don't slam your head on the pavement.


Curious if you have ever fallen off a bike? I have fallen over several times on a bike while stationary (when learning to ride with clipless pedals), I have crashed bikes at much higher speeds as well, and I have watched my kids fall off of bikes lots of times while learning. In all of that, I have never seen a (or had my own) head hit the ground. Typically you hit the ground with your arms (slow speed or stationary fall) or your hips, back, or shoulders (if at higher speed).


> A fall at any speed from a bike is literally a potentially crippling or deadly scenario.

For elderly people, I would guess that's accurate.


Don't underestimate how dangerous even a small fall can be; you can end up fine, but you could also end up smashing your face into the curb.

A friend of mine, in his 50s and very fit, cycling to work and back every day, broke both his arms while doing literally a 10-meter test ride in front of a bike store.

The bike's brakes were set up reversed compared to what he was used to, so he ended up braking with the front brake, flipping the bike over and breaking both his arms on landing. His fault? Sure, but it's still a rather scary story about how quickly even mundane things can go really wrong.


Did he end up purchasing the bike?


I don't think he did, not much use for a bike when both your arms are in a plaster cast from hands to shoulders. Poor guy couldn't even go to the toilet without help.


Yeah, I mean those brakes were real GOOD!


Yet falling is the second most common cause of accidental deaths in the world.


When a rider is onboard there’s no autonomy, the bike is only self-delivering.


I get what you're saying but that feels like a reach


Sure, but "[t]he system is not designed to alert the operator." At least they could have alerted the operator. This seems like reckless endangerment or negligent homicide. Luckily for Uber they hit a poor person and no one will hold them responsible. 1.3 seconds is a long time for the operator to act.


This highlights an interesting general point - in many situations, there is no simple safe fallback policy. On a highway, an emergency stop is not safe. This is a general problem in AI safety and is covered nicely in this youtube video, as well as the paper referenced there - https://www.youtube.com/watch?v=lqJUIqZNzP8


> On a highway, an emergency stop is not safe.

That depends; there could simply be no traffic behind you, which an experienced driver -- and hopefully an automated one -- would be monitoring.

Besides, there are many situations on the highway where an E-stop is far safer than any of the alternatives even if there is traffic behind you. Driving as though nothing has changed in the presence of an E-stop worthy situation is definitely not the right decision.


How intelligent is the ML driving the car? If the car slowed down and hit the 49-year-old at a reduced speed, the insurance payout to a now severely disabled individual would be far more expensive than the alternative insurance payout with a pedestrian fatality. A choice between paying out for 40 years' worth of around-the-clock medical care vs. a one-time lump-sum payout to the victim's family would be pretty obvious from a corporate point of view.


Are you seriously suggesting that the better software strategy is to aim for the kill because it is cheaper than possibly causing 'only' injury?

That should be criminal.

I'm all for chalking this one up to criminal negligence and incompetence; outright malice is - for now - off the table, unless someone leaks meeting notes from Uber where they discussed that exact scenario.


My point is that it's a black box and nobody outside of Uber knows what its priorities are. It could have just as easily mistaken the pedestrian leaned over pushing the bike for a large dog and then proceeded to run her over because it's programmed to always run dogs over at full speed on the highway. Outside of Asimov's "Three Laws of Robotics" there is nothing that dictates how self-driving cars should behave, so my unpopular idea above isn't technically breaking any rules.


You should check out what happened to Volkswagen for a similar trick.


Fines.


>> This will inevitably result in fatalities -- almost certainly fewer than when humans drive, but plenty of fatalities nonetheless.

Where is the almost-certainty coming from that the fatalities would be fewer compared to humans driving? And what does "almost" mean in this case?


Computers have vastly lower reaction time than humans. Computers have sensory input that humans lack (LIDAR). Computers don't get drowsy or agitated.

And "almost" is always a good idea when talking about a future that looks certain. Takes into account the unknown unknowns. And the known unknowns (cough hacking cough).


Fast reaction times, good sensors and unyielding focus are not enough to drive safely. An agent also needs situational awareness and an understanding of the entities in its environment and their relations.

Without the ability to understand its environment and react appropriately to it, all the good that fast reaction times will do an AI agent is let it make the wrong decisions faster than a human being.

Just saying "computers" and waving our hands about won't magically solve the hard problems involved in full autonomy. Allegedly, the industry has some sort of plan to go from where we are now (sorta kinda level-2 autonomy) to full, level-5 autonomy where "computers" will drive more safely than humans. It would be very kind of the industry if they could share that plan with the rest of us, because for the time being it sounds just like what I describe above, saying "computers" and hand-waving everything else.


That's a sociopolitical question more than a technical one. I posit that:

1.) Road safety -- as far as the current operating concept of cars is concerned (eg., high speeds in mixed environments) -- is not a problem that can be "solved". At best it can only ever be approximated. The quality of approximation will correspond to the number of fatalities. Algorithm improvements will yield diminishing returns: the operating domain is fundamentally unsafe, and will always result in numerous fatalities even when driven "perfectly".

2.) With regards to factors that contribute to driving safety, there are some things that computers are indisputably better at than humans (raw reaction time). There are other things that humans are still better at than computers (synthesising sensory data into a cohesive model of the world, and then reasoning about that world). Computers are continually improving their performance, however. While we don't have all the theories worked out for how machines will eventually surpass human performance in these domains, we don't have a strong reason to believe that machines won't surpass human performance in these domains. The only question is when. (I don't have an answer to this question).

3.) So the question is not "when will autonomous driving be safe" (it won't be), but rather: "what is the minimum level of safety we will accept from autonomous driving?" I'm quite certain that the bar will be set much higher for autonomous driving than for human driving. This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is. Look at the disparities in sociopolitical responses to, say, plane crashes and Zika virus, versus car crashes and influenza. Autonomous vehicles will be treated more as the former than the latter, and therefore the scrutiny they receive will be vastly higher.

4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.

5.) Personally, I think that the algorithms won't be able to pass this public-acceptability threshold on their own, because even the best-imaginable algorithm, if adopted on a global basis, would still kill hundreds of thousands of people every year. That's still probably too many. I expect that full automation eventually will become the norm, but only as enabled by new types of infrastructure / urban design which enable it to be safer than automation alone.


> This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is.

This is a wonderfully concise way of describing a phenomenon that I have not been able to articulate well. Thank you.


OK, this is a very good answer- thanks for taking the time.

I'm too exhausted (health issues) to reply in as much detail as your comment deserves, but here's the best I can do.

>> 4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.

Or at least it won't be morally justifiable for them to be a thing at all, unless they're sufficiently safer than humans - whatever "sufficiently" is going to mean (which we can't really know; as you say, that has to do with public perception and the whims of a fickle press).

I initially took your assertion to mean that self-driving AI will inevitably get to a point where it can be "sufficiently" safer than humans. Your point (2.) above confirms this. I don't think you're wrong, there's no reason to doubt that computers will, one day, be as good as humans at the things that humans are good at.

On the other hand I really don't see this happening any time soon- not in my lifetime and most likely not in the next two or three human generations. It's certainly hard to see how we can go from the AI we have now to AI with human-level intelligence. Despite the successes of statistical machine learning and deep neural nets, their models are extremely specific and the tasks they can perform too restricted to resemble anything like general intelligence. Perhaps we could somehow combine multiple models into some kind of coherent agent with a broader range of aptitudes, but there is very little research in that direction. The hype is great, but the technology is still primitive.

But of course, that's still speculative- maybe something big will happen tomorrow and we'll all watch in awe as we enter a new era of AI research. Probably not, but who knows.

So the question is- where does this leave the efforts of the industry to, well, sell self-driving tech, in the right here and the right now? When you said self-driving cars will almost certainly be safer than humans- you didn't put a date on that. Others in the industry are trying to sell their self-driving tech as safer than humans right now, or in "a few years", "by 2021" and so on. See Elon Musk's claims that Autopilot is safer than human drivers already.

So my concern is that assertions about the safety of self-driving cars by industry players are basically trying to create a climate of acceptance of the technology in the present or near future, before it is even as safe as humans, let alone safer (or "sufficiently" so). If the press and public opinion are irrational, their irrationality can just as well mean that self-driving technology is accepted when it's still far too dangerous. Rather than setting the bar too high and demanding an extreme standard of safety, things can go the other way and we can end up with a diminished standard instead.

Note I'm not saying that is what you were trying to do with your statement about almost certainty etc. Kind of just explaining where I come from, here.


Likewise, thanks for the good reply! Hope your health issues improve!

I share your skepticism that AIs capable of piloting fully driverless cars are coming in the next few years. In the longer term, I'm more optimistic. There are definitely some fundamental breakthroughs which are needed (with regards to causal reasoning etc.) before "full autonomy" can happen -- but a lot of money and creativity is being thrown at these problems, and although none of us will know how hard the Hard problem is until after it's been solved, my hunch is that it will yield within this generation.

But I think that framing this as an AI problem is not really correct in the first place.

Currently car accidents kill about 1.3 million people per year. Given current driving standards, a lot of these fatalities are "inevitable". For example: many real-world car-based trolley problems involve driving around a blind curve too fast to react to what's on the other side. You suddenly encounter an array of obstacles: which one do you choose to hit? Or do you (in some cases) minimise global harm by driving yourself off the road? Faced with these kind of choices, people say "oh, that's easy -- you can instruct autonomous cars to not drive around blind curves faster than they can react". But in that case, the autonomous car just goes from being the thing that does the hitting to the thing that gets hit (by a human). Either way, people gonna die -- not due to a specific fault in how individual vehicles are controlled, but due to collective flaws in the entire premise of automotive infrastructure.

So the problem is that no matter how good the AIs get, as long as they have to interact with humans in any way, they're still going to kill a fair number of people. I sympathise quite a lot with Musk's utilitarian point of view: if AIs are merely better humans, then it shouldn't matter that they still kill a lot of people; the fact that they kill meaningfully fewer people ought to be good enough to prefer them. If this is the basis for fostering a "climate of acceptance", as you say, then I don't think it would be a bad thing at all.

But I don't expect social or legal systems to adopt a pragmatic utilitarian ethos anytime soon!

One barrier is that even apart from the sensational aspect of autonomous-vehicle accidents, it's possible to do so much critiquing of them. When a human driver encounters a real-world trolley problem, they generally freeze up, overcorrect, or do something else that doesn't involve much careful calculation. So shit happens, some poor SOB is liable for it, and there's no black box to audit.

In contrast, when an autonomous vehicle kills someone, there will be a cool, calculated, auditable trail of decision-making which led to that outcome. The impulse to second-guess the AV's reasoning -- by regulators, lawyers, politicians, and competitors -- will be irresistible. To the extent that this fosters actual safety improvements, it's certainly a good thing. But it can be really hard to make even honest critiques of these things, because any suggested change needs to be tested against a near-infinite number of scenarios -- and in any case, not all of the critiques will be honest. This will be a huge barrier to adoption.

Another barrier is that people's attitudes towards AVs can change how safe they are. Tesla has real data showing that Autopilot makes driving significantly safer. This data isn't wrong. The problem is that this was from a time when Autopilot was being used by people who were relatively uncomfortable with it. This meant that it was being used correctly -- as a second pair of eyes, augmenting those of the driver. That's fine: it's analogous to an aircraft Autopilot when used like that. But the more comfortable people become with Autopilot -- to the point where they start taking naps or climbing into the back seat -- the less safe it becomes. This is the bane of Level 2 and 3 automation: a feedback loop where increasing AV safety/reliability leads to decreasing human attentiveness, leading (perhaps) to a paradoxical overall decrease in safety and reliability.

Even Level 4 and 5 automation isn't immune from this kind of feedback loop. It's just externalised: drivers in Mountain View learned that they could drive more aggressively around the Google AVs, which would always give way to avoid a collision.

So my contention is that while the AIs may be "good enough" anytime between, say, now and 20 years from now -- the above sort of problems will be real barriers to adoption. These problems can be boiled down to a single word: humans. As long as AVs share a (high-speed) domain with humans, there will be a lot of fatalities, and the AVs will take the blame for this (since humans aren't black-boxed).

Nonetheless, I think we will see AVs become very prominent. Here's how:

1. Initially, small networks of low-speed (~12mph) Level-4 AVs operating in mixed environments, generally restricted to campus environments, pedestrianised town centres, etc. At that speed, it's possible to operate safely around humans even with reasonably stupid AIs. Think Easymile, 2getthere, and others.

2. These networks will become joined-up by fully-segregated higher-speed AV-only right-of-ways, either on existing motorways or in new types of infrastructure (think the Boring Company).

3. As these AVs take a greater mode-share, cities will incrementally convert roads into either mixed low-speed or exclusive high-speed. Development patterns will adapt accordingly. It will be a slow process, but after (say) 40-50 years, the cities will be more or less fully autonomous (with most of the streets being low-speed and heavily shared with pedestrians and bicyclists).

Note that this scenario is largely insensitive to AI advances, because the real problem that needs to be solved is at the point of human interface.


The problem is that drivers rarely maintain the safe following distance they should to not endanger themselves. BUT in that case, the car should also have noticed whether there was traffic close behind. Doing nothing in that case doesn't seem like the right decision at all.

Very good write-up anyway... indeed many things will have to change - probably the infrastructure, the vehicles, the software, the way pedestrians move, and driver behavior as well.


I love your pods at T5! They're super fun.


I'm with you up until the "almost certainly" part. Is there any actual hard data on that?


That quote is the crux of it when you pair it with this other section: "In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review."

So you have a "driver" who has to be monitoring a diagnostic console, AND has to be separately watching for non-alerted emergency events to avoid a fatal crash? Why not hire two people? Good god.


Move fast and disable the brakes. They in fact began testing with two people per car, but then decided to go with just one. An Uber spokeswoman stated [1]:

> We decided to make this transition [from two to one] because after testing, we felt we could accomplish the task of the second person—annotating each intervention with information about what was happening around the car—by looking at our logs after the vehicle had returned to base, rather than in real time.

However, this seems to contradict the NTSB report which indicates that it still was the driver's responsibility to perform this event tagging task, which necessarily implies taking your eyes off the road.

[1] https://www.citylab.com/transportation/2018/03/former-uber-b...


> which necessarily implies taking your eyes off the road

Don't we have speech-to-text for this sort of thing?


Initially, they did [1].

"Uber moved from two employees in every car to one. The paired employees had been splitting duties — one ready to take over if the autonomous system failed, and another to keep an eye on what the computers were detecting. The second person was responsible for keeping track of system performance as well as labeling data on a laptop computer. Mr. Kallman, the Uber spokesman, said the second person was in the car for purely data related tasks, not safety."

[1] https://www.nytimes.com/2018/03/23/technology/uber-self-driv...


Having a second person in the car would certainly pressure the first person to not slack off, though.


Small talk usually keeps people from outright ignoring traffic, too.


"So you have a "driver" who has to be monitoring a diagnostic console, AND has to be separately watching for non-alerted emergency events to avoid a fatal crash?"

This gets your license yanked around here. Same goes for texting and driving if you're caught. Even in stop-and-go traffic.


At least beep, siren, warning blink ... do something.

It feels like a typical coder rethrowing an exception somewhere u_u


[flagged]


You can see from the logs that the OOFKiller kicked in

    kernel: [70667120.897649] Out of fucks: Kill pedestrian 29957 score 366 or sacrifice driver's convenience


Nice uptime.


That's not just one error but a whole book of errors, and that last bit combined with the reliance on the operator to take action is criminal. (And if it isn't it should be.)

I hope that whoever was responsible for this piece of crap software loses a lot of sleep over it, and that Uber will admit that they have no business building safety critical software. Idiots.

For 6 seconds the system had crucial information and failed to relay it, for 1.3 seconds the system knew an accident was going to happen and failed to act on that knowledge.

Drunk drivers suck, but this is much worse. This is the equivalent of plowing into a pedestrian with a vehicle you're in full control of, because your perception of the world is so crappy that you'd over-react to such situations so often that you judge the known risk of killing someone to be the lesser one.

Not to mention all the errors in terms of process and oversight that allowed this p.o.s. software to be deployed in traffic.


"for 1.3 seconds the system knew an accident was going to happen and failed to act on that knowledge."

This is so tragic. Even Volvo's own collision avoidance system would (could?) have mitigated the crash a fair bit. From Volvo's own spec sheet [1]: "For speeds between 45 and 70 km/h, the collision is mitigated." In this case, the NTSB report mentions that the car was traveling at 43 mph, i.e. 68.8 km/h :(.
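A crude back-of-the-envelope calculation shows what even 1.3 seconds of hard braking would have been worth (assuming roughly 7 m/s^2 of deceleration on dry pavement and ignoring brake actuation lag -- both assumptions on my part):

    v0 = 43 * 0.44704                # 43 mph in m/s (~19.2 m/s)
    a = 7.0                          # assumed hard-braking deceleration, m/s^2
    t = 1.3                          # seconds between "brake now" and impact
    v_impact = max(0.0, v0 - a * t)
    print(v_impact / 0.44704)        # ~23 mph at impact instead of 43
    print(1 - (v_impact / v0) ** 2)  # ~72% of the kinetic energy shed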

What bothers me is that these systems are on public roads, without public oversight. Sure, Uber got permission from the local authorities, but getting an independent team of technologists and ethicists to sign off on the basic parameters should have been the bare minimum ... yes, that would take time, but do we really want to give companies, especially ones like Uber with a history of ethical transgressions, the benefit of the doubt?

[1] https://tinyurl.com/y9sp2fmu (WARNING: This open/downloads a PDF that I referred to above. Page 5 has the paragraph on pedestrian collision detection specs)


Is this necessarily different from a car placed in normal cruise control (automatic throttle, no braking), where the driver is under the obligation to manage braking in an emergency? It seems like the human driver here was still under that obligation, but failed to act. (Possibly because they were distracted, but that's not unique to this situation.)


A cruise control system doesn't look out your front window and doesn't steer the car. So the driver is still actively engaged in operating the vehicle, just has one less lever to work on. And at the first tap on the brake it disengages.

If you want to compare it with a car operating on cruise control you'd have to sedate the driver.


"Looking out the window" and "steering the car" is pretty much exactly what current year cruise control systems do. Just go look at the Subaru Eyesight systems, which depend on cameras that face out the upper part of the windshield. https://www.subaru.com/engineering/eyesight.html

(Subaru's doesn't do active lanekeeping, but lots of other manufacturers like BMW and Ford do.)

The newer cruise controls have lane-keep assist and adaptive cruise control - you don't have to actively steer or brake. On an open road, there's effectively little difference from the Uber vehicle, which would also let you disengage autonomous mode by braking or otherwise interacting with the controls. (The newest mass-market cruise controls are "stop and go", which means they'll even bring the car to a full stop, then start driving again.)


That's not cruise control, that's lane assist. And note that it still says 'assist' right on the tin.


They should also beef up the brakes with a "big brake kit" to aid stopping power in that 1.3 sec window on every autonomous car.


I thought braking is limited by the tire/street contact, not by the brakes?

Or at least it should be. That's why there is a v_max that a car is not allowed to exceed, and a faster or heavier car will have better brakes. And 70 km/h as here should be far away from v_max.


It's a combination, but any modern (disc) brake can lock a wheel, so in practice this should not be an issue. The only time it might be a problem is after dragging the brakes on a long downslope (which is why you shouldn't do that; the brake fluid will boil and your brakes will stop working).


Ok, so the AI is too panicky and will brake for no apparent reason so they have to disable that bit while they work on it. Fine.

But why the hell wouldn't you have the thing beep to alert the driver that the AI thinks there is a problem and they need to pay extra attention? In fact it seems like this would be helpful when trying to fine tune the system.


> Ok, so the AI is too panicky and will brake for no apparent reason so they have to disable that bit while they work on it.

That simply means it should not be deployed. End of story.


* on public roads, feel free to test that in a safe closed course!


Deployment to me means to release into the real world, otherwise it is just a test and whether that test is in a closed course or even in a simulation makes no difference to me. That environment would not endanger random strangers.


If the system had enough false positives that they decided to disable the braking, it's possible it had enough false positives that the operator would have learned to ignore the alert beep too.


That, of course, is a reason to go back to the drawing board, not to disable the alerting.


I'm not making excuses for Uber here. If their system was that broken it should never have left the test course.


What strikes me as odd is that once an unknown object is detected on the road, the car should already be alerting the driver and slowing down.

That buys you time for the AI classifier to do its thing and isn't as dangerous as emergency braking later on, so it seems a sensible behavior all around.


The more reports on accidents involving self-driving cars I've read, the more I've become convinced that the current state of the art in this field is merely an illusion. Reading between the lines, the complete disregard for basic safety protocols like you've just described comes across as more than just a brazen continuation of the "move fast and break things" Silicon Valley culture. Viewed in this light, this entire niche of tech R&D begins to take on the appearance of a giant game of smoke and mirrors being played for the benefit of those recipients of Big Investment $$$ that wooed investors to bet big with promises of delivering a marketable self-driving car within the next decade.

The way I see it, there is only one way to make sense of a field where the most respectable R&D house (Google/Alphabet) limits their vehicle to a relative snail's pace while everyone else (including notoriously unethical shops like Uber) is taking a gung-ho, "the only limit is the speed limit" approach. That is to assume "everyone else" is cheating the game by choosing a development path that gives the appearance of being functional in the "rough but will get there with enough polish" sense while the truth is that it's merely a mountain of cheap and dirty hacks that will never achieve the goals demanded by investors.

The only reason a company would overlook such a simple safety protocol as "slow down until a potentially dangerous object is positively identified" is if their "AI" fails to positively identify objects so frequently that the car could never achieve what a human passenger would consider a consistent, normal driving pace. The same can be said for any "AI" that can't be trusted to initiate panic braking in response to a positively identified collision scenario with a positively identified object. The fact that they specifically wrote an "AI"-absolving workaround for that scenario into their software means the frequency of false positives must be so high as to make the frequency of "false alarm" panic braking incidents unacceptable for human passengers.


I'd guess that "unknown objects" happen all the time - it seems like that's the default until something is classified, so tire scrap or plastic bag would also fall into that category. If the car slowed down every time it saw one it would never get anywhere, it should only slow if the object gets classified as something you can't hit and is clearly in the path of the vehicle. Seems like that decision happened too late here, requiring emergency braking... which was disabled (!).


The missing bit here is "...unknown is detected on the road" - if there is a tire scrap or plastic bag or anything that looks suspicious a normal human driver would slow down and give it extra attention, then try to avoid it anyway. You don't drive over / through an object unless you know that it is not harmful, and even then you try to avoid it so you don't drive over a bag... of nails.


It's not that simple. Humans can also predict movement, and that is necessary because cars don't stop instantaneously. So you have a person walking toward the street and your dumb smart car is constantly hitting the brakes.

This tech simply isn't there yet and I doubt it's all that close.


> This tech simply isn't there yet and I doubt it's all that close.

In which case it's absolutely criminal by the government of Arizona to allow testing such tech on public roads.


> So you have a person walking toward the street and your dumb smart car is constantly hitting the brakes.

Doesn't sound dumb to me. The car should be going slow enough to emergency stop if the pedestrian enters the road.


People don't drive like that. People expect reasonable behavior from other people, and that includes expecting that they won't jump into the road. If Uber drives unreasonably, its passengers will prefer other taxis that drive more aggressively.


Except no one drives that way and no one would put up with it. Do you slow down every time someone on the sidewalk takes a step toward the street? I doubt it (and, if you do, please never get in front of me.)

So, yeah: dumb.


If I think they might walk on the street I of course slow down (as much as I deem necessary). What kind of question is that?

If I think I saw children running around between cars in the parking lane, are the parents probably morons? Yes. Do I slow down and prepare to slam the brakes in case a child suddenly runs in front of me? Ab-so-lute-ly.

Even if people behave idiotically on the street, it is obviously still my fault if I run them over.


>If I think they might walk on the street I of course slow down

And that's what I'm talking about. Computers aren't all that good at determining whether or not a person is going to jump into the street.


humans track gaze and understand desires, so you kinda know if someone exiting a shop will or will not proceed straight into the road.

that said, as I said above, whenever a car senses a situation it doesn't understand it should slow down; that's enough to be safe later on as the situation develops, and is different than hitting the brakes full force.

and anyway, expecting autonomous cars to drive full speed all the time is moronic; humans don't do that either, precisely because it's dangerous.


As a human, if I saw a plastic bag blow into the street at night I'd slow down until I was sure it wasn't an animal or something. Seems like basically the same process.


Sure, but if you were on a highway, would you slam on the brakes? I hope not.

There's a calculation here of balancing the perceived risk of an obstruction with the consequences of avoiding it or braking in time. Drivers have to make this decision all the time; on a highway they will generally assume it's safer to hit most things than swerve or panic brake, because it's most likely not that dangerous to collide with.

At least one stat I saw from AAA is that ~40% of the deaths from road debris result from drivers swerving to avoid them.


That means 60% result from hitting them.. still seems better to swerve.


How many plastic bags do you have crossing your streets?! Around here I doubt you'd slow down more than a couple of times per month.


Not as many since they banned them here, but it used to be quite common given the combination of it being pretty windy here in SF every afternoon and there being lots of trash / debris around. It's still pretty common to see blowing paper or other debris (tire shreds) in the road at least a few times in my 12 mile commute.

This has been a not-minor problem for autonomous cars and the Tesla-style autopilots / adaptive cruise controls that depend on vision only. You have to program it to ignore some types of things that seem like they might be an obstruction, such as road signs, debris in the road, etc. so they don't hit the brakes unnecessarily.


> If the car slowed down every time it saw one [unknown object] it would never get anywhere

well tough shit then, a car that plows through situations it doesn't understand should never be on the public road


Both adaptive cruise controls and human drivers do this by default. If you're doing 60 MPH on a highway and something pops into the periphery of your vision that you don't recognize, do you slam on the brakes? No.


of course I do slow down; there's a whole load of possibilities between slamming the brakes and getting rear-ended, and driving at the posted limit through a dangerous situation, you know? the car can slow down gently.

it's a 60 zone and rain or smoke impairs the visibility? slow down.

it's a 45mph zone and something that's not a motor vehicle is in a lane that's supposed to only have motor vehicles on it? you slow down until you make sense of the situation.

you're near a playground and a mother is walking a child on the other side of the road and you can't see if she's holding the child's hand? you slow down.

a person walks near the kerb and isn't looking in your direction? you slow down. a bike is loaded with groceries? a car acting erratically? a person being pulled by his dog? a bus stopped unloading people? you don't drive past them at 30mph.

driving safely: super simple stuff.


> super simple stuff

If people did the super simple stuff, we'd have boundless peace, prosperity, liberty.

I remember a story - stop me if you've heard this one - about a God helping out a group of desperate people, freeing them from slavery, parting seas, feeding them in the desert. They were camped at the foot of a mountain with the God right there on top - right there! And they built the golden calf anyway. And that rule seems easier than all the other 9. WTF did they even need a golden calf for?

So sadly, the criterion of simplicity is irrelevant - people will find a hard way to do it.


I'm not sure why your comment seems to be grey-ed out. Here's the relevant section from the report:

"As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path."

1. It seems like the classifier flipped state between pedestrian/unknown object/vehicle/bicycle; this seems like one of the well-known issues with machine learning. (I'm assuming the classifier is using ML simply because I have never heard of any other (semi-?) successful work on that problem.)

I suggest that the problem is that the rest of the driving system went from 100% certainty of A to 100% certainty of B, etc., with a resulting complete recalculation of what to do about the current classification. I make this hypothesis on the basis of the 4+ seconds when the car did nothing, while a response to any of the individual possibilities would possibly have averted the accident.

2. If the classifier was flipping state, I assume the system interrupted the Decide-Act phases of an OODA loop, resulting in the car continuing its given path rather than executing any actions. This seems like a reasonable thing to do, if the system contains no moment-to-moment state. Which would be strange; it seems like the planning system should have some case for having obstacles A, B, C, and D rapidly and successively appearing in the same area of its path.

3. Assuming the classifier wasn't flipping state, but presenting multiple options with probabilities, I can see no reason why the car wouldn't have taken some action in the 4+ seconds. (I note that the trajectory of the vehicle seems to move towards the right of its lane, which is a rather inadequate response and likely the wrong thing to do for several of the classification options.)
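To illustrate the kind of moment-to-moment state I'm describing in points 1-3, here's a minimal sketch (purely hypothetical, not Uber's architecture) in which a track's motion estimate persists across re-classifications, so a label flip doesn't reset the hazard assessment:

    # Hypothetical sketch: hazard assessment keyed on the track, not the label.
    class Track:
        def __init__(self, position, velocity, label="unknown"):
            self.position = position      # (x, y) metres, vehicle frame
            self.velocity = velocity      # (vx, vy) m/s, vehicle frame
            self.label = label            # may flip: unknown/vehicle/bicycle/...

        def update(self, new_position, dt, new_label):
            # The velocity estimate survives the label change.
            self.velocity = tuple((n - o) / dt
                                  for n, o in zip(new_position, self.position))
            self.position = new_position
            self.label = new_label

        def time_to_collision(self, ego_speed):
            # Crude 1-D check along the direction of travel (x axis).
            closing_speed = ego_speed - self.velocity[0]
            if closing_speed <= 0:
                return float("inf")
            return self.position[0] / closing_speed

    # e.g. start slowing whenever any track, whatever its label, gets close:
    #   if track.time_to_collision(ego_speed) < 4.0: request_deceleration()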

"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior."

That's just idiotic and would be nigh-criminally unprofessional in most engineering situations.


I have that in my 2017 Volkswagen. It brakes by itself in normal operation, gives very loud feedback if intervention is needed, but does not brake if the intervention is unexpected / probably erroneous - audio only then.

I hope but do not know if the frequency and circumstances are logged and sent to Volkswagen when at the shop. Don't expect it though since it will first see the shop after 2 years. That would be too long for an improvement cycle.


> I hope but do not know if the frequency and circumstances are logged and sent to Volkswagen when at the shop.

What makes you think they aren't sent to VW all the time?


> Ok, so the AI is too panicky and will brake for no apparent reason so they have to disable that bit while they work on it. Fine.

Not fine on a public road with real people on it.


Even in the far future when all cars are Level 5, they will still be developed by error-prone humans, and I'm still gonna teach my kids to look both ways before they cross the street.


Absolutely, the thing is humans become habituated to repeated alerts. Even if you make a flashing red hazard symbol and have it audibly screaming at the driver, with enough false alarms our response gets slower over time, and having to look at the dashboard and interpret the message burns precious seconds. Coupled with slow human reaction time, 6 seconds will be tough to react within safely. We need proper autonomous controls doing their job safely without blaming us when they fail.


It really sounds like when this change was put into place, there were two people in the car: one monitoring the road in the driver position and a second monitoring the vehicle and systems in the passenger seat (with the screen). Once they decided to eliminate the second position and make the driver do both, they should have either let the car apply brake or, as you say, provide a loud alert.


It should have alerted anyway in the beginning. My guess is that either the alert would go off all the time which is why they didn't code it up or they didn't think about it during requirements analysis.


I understand that the emergency maneuver system was disabled, so the car did not brake between t minus 1.3 and t. But why didn't it brake from t minus 6 to t minus 1.3? Looks like it detected that the car's and object's paths were converging, so why didn't it brake during that interval?


Based on a TED talk by someone from Google[1], I think having the car apply the brakes whenever there's a possible disturbance causes the car to apply the brakes too much and makes for a really uncomfortable ride.

I think a major effort in self-driving is solving the Goldilocks issue of reacting properly to impending accidents, but also not applying uncomfortable braking if it's not needed.

Seems like it was too insensitive at that distance.

[1] https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...


There's the third option of slowing down. That's what most human drivers do subconsciously when we see something that we're having trouble identify, and feel it could turn into an obstacle.


This too. An electric car would simply decelerate by letting off the gas pedal and having the regen kick in. On a gas car this would be equivalent to partial braking.

Shouldn't the brake application be not boolean but 0-100% strength based on confidence levels?
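For what it's worth, here is a minimal sketch of that idea, assuming a hypothetical planner that scales commanded deceleration with both classification confidence and time-to-collision. The 6.5 m/s² cap comes from the NTSB footnote; everything else (the 0.1 cut-off, the urgency curve) is made up for illustration and has nothing to do with Uber's actual stack.

    # Hypothetical sketch of confidence-proportional braking (not Uber's actual logic).
    # Brake command scales with collision confidence instead of being all-or-nothing.

    def brake_command(collision_prob: float, time_to_collision_s: float,
                      max_decel_mps2: float = 6.5) -> float:
        """Return a commanded deceleration in m/s^2, between 0 and max_decel_mps2.

        collision_prob: classifier confidence (0..1) that the object will be in our path.
        time_to_collision_s: predicted time to collision if nothing changes.
        """
        if collision_prob < 0.1:          # ignore very low-confidence detections
            return 0.0
        # Urgency grows as time-to-collision shrinks (clamped to [0, 1]).
        urgency = max(0.0, min(1.0, 3.0 / max(time_to_collision_s, 0.1) - 0.5))
        return max_decel_mps2 * collision_prob * urgency

    # A 60%-confidence obstacle 4 s out gets gentle braking,
    # the same obstacle 1.5 s out gets close to full braking.
    print(brake_command(0.6, 4.0))   # ~1.0 m/s^2
    print(brake_command(0.6, 1.5))   # ~3.9 m/s^2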


>I think a major effort in self-driving is solving the Goldilocks problem of reacting properly to impending accidents while also not applying uncomfortable braking when it isn't needed.

This is also an issue with current production automatic braking systems. It was largely solved with on-track testing, plus on-road testing that logged false triggers while a human drove. There's no need to risk lives unless you're just cutting corners to avoid the cost of a test track.


Because then the car would brake every time anyone approached it from an angle (which is constantly). Think every intersection ever, every time driving near a sidewalk ever. The car would be herky-jerky as crap.

They should have had it set to spike the brakes once a collision was imminent, though; that's (maybe) the biggest programming omission here.


They should have set it to slow down gradually when approached from an angle at T-6 and then speed up once past the intersection risk, so that when the scenario emerged at T-1.3 it could emergency-stop safely.


> Think every intersection ever, every time driving near a sidewalk ever. The car would be herky/jerky as crap.

I'm not sure that'd be a huge issue. The vectors have to be intersecting first of all, which most vectors emanating from sidewalks wouldn't be, and then a little hysteresis would smooth out most of the rest.
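A rough sketch of what "intersecting vectors plus a little hysteresis" could look like, under a constant-velocity assumption. The horizon, radius, and frame counts are made-up values; a real perception stack would be far more involved.

    # Illustrative only: thresholds and the 2D constant-velocity model are assumptions.
    import numpy as np

    def paths_converge(p_car, v_car, p_ped, v_ped, horizon_s=4.0, radius_m=2.0):
        """True if, under constant velocities, pedestrian and car come within
        radius_m of each other during the next horizon_s seconds."""
        rel_p = np.array(p_ped) - np.array(p_car)
        rel_v = np.array(v_ped) - np.array(v_car)
        # Time of closest approach for straight-line motion.
        denom = rel_v @ rel_v
        t_star = 0.0 if denom < 1e-9 else -(rel_p @ rel_v) / denom
        t_star = min(max(t_star, 0.0), horizon_s)
        closest = np.linalg.norm(rel_p + t_star * rel_v)
        return closest < radius_m

    class HazardFilter:
        """Hysteresis: require several consecutive 'converging' frames before
        reacting, and several clear frames before releasing."""
        def __init__(self, frames_on=3, frames_off=5):
            self.frames_on, self.frames_off = frames_on, frames_off
            self.count_on = self.count_off = 0
            self.active = False

        def update(self, converging: bool) -> bool:
            if converging:
                self.count_on += 1; self.count_off = 0
                if self.count_on >= self.frames_on: self.active = True
            else:
                self.count_off += 1; self.count_on = 0
                if self.count_off >= self.frames_off: self.active = False
            return self.active

    # Usage: feed paths_converge(...) into HazardFilter.update() once per frame;
    # a pedestrian who stops at the curb drops out after a few clear frames.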


I don't know if you've been to New York or any other places where people walk, but vectors would absolutely be intersecting on a regular basis up until a fairly short time when the pedestrian would stop. Constantly I walk toward an intersection where, if I kept going for three more seconds, I would be pasted to the street by a passing car. But I stop at the end of the sidewalk, before the road begins, so the vector changes to zero in those last three seconds. It would be super weird if cars would brake at the intersection every time this happened. Cars would be braking at every major street on every major avenue, constantly.

What's actually needed here is some notion of whether the pedestrian is paying attention and will correctly stop and not intersect the path of the car. Humans are constantly making that assessment based on sometimes very subtle cues (is the person looking at/talking on a phone, or are they paying attention, for example).


Yeah, eye contact is a very important signal. Maybe there needs to be some specialized hardware to detect eyes and determine the direction they're looking in.


Good idea, although suicidal or oblivious/impaired pedestrians might fail to make eye-contact.


Well, exactly. Those are the ones you slow down for.


I like it. This 'jumping in front of cabs' thing is huge in Russia and it'd be cool if AI could prevent that here.


We use eye contact because we can't infer what another person is thinking and we can't react quickly enough to their actual movements at car speeds. This latter isn't the case with automated vehicles, so eye contact shouldn't be necessary, as long as you get the vector algorithms right.


> Constantly I walk toward an intersection where, if I kept going for three more seconds, I would be pasted to the street by a passing car.

These autonomous systems are evaluating surrounding vectors every few milliseconds. A timescale of 3 seconds simply isn't important, as they would instantly detect you slowing down and conclude that you wouldn't intersect with their vector.


You have missed the entire context here.

> But why didn't it brake from t minus 6 to t minus 1.3? Looks like it detected that the car's and object's paths were converging, so why didn't it brake during that interval?


You're missing the context of this thread. The software in the Uber car has a clear failure condition. That has nothing to do with whether it's possible to infer such vector collisions without jerky driving, which is the point I'm addressing.


The question was why the car doesn’t brake early. “Because scanning every few milliseconds” is not an answer. Scanning frequency is irrelevant to the fact that emergency braking is not a reasonable strategy in general.

Safe driving often does require slowing down in the face of insufficient information. If a human driver sees an inattentive pedestrian about to intersect traffic, they will slow down. “Drive until collision is unavoidable” is a failing strategy.

And anyway, jerky driving is a symptom of late braking, not early braking.


> And anyway, jerky driving is a symptom of late braking, not early braking.

I see it as more than just jerkyness, I see a massive safety issue in traffic. If your autonomous car is slamming on the brakes spontaneously there's a lot more opportunities for other drivers to plow into you from behind.


> The question was why the car doesn’t brake early.

No it's not, the issue was jerky driving.

> “Drive until collision is unavoidable” is a failing strategy.

No kidding.


They can't detect me slowing down before I start slowing down. So if it's t-4 until impact and I'm still moving at full speed, they would need to start braking now if they can't stop in 4s (assuming the worst case that I continue on my current trajectory).

That being said, I'm happy to find my assumptions about stopping time are incorrect: a car traveling at 25 mph can brake to a stop in about a second and a half, over only a few meters (rough numbers below). So on busy NYC streets this wouldn't be an issue. Even at 50 mph the braking time is under 3 s, so the vehicle could probably have avoided this collision if it were running a more intelligent program.
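Rough numbers, assuming hard braking at about 0.8 g on dry pavement and ignoring reaction time:

    # Back-of-the-envelope stopping figures under hard braking (~0.8 g ≈ 7.8 m/s^2).
    # Reaction time is excluded here; add it separately for a human driver.

    MPH_TO_MPS = 0.44704

    def stopping(mph: float, decel_mps2: float = 7.8):
        v = mph * MPH_TO_MPS
        t = v / decel_mps2            # time to stop
        d = v * v / (2 * decel_mps2)  # distance to stop
        return t, d

    for mph in (25, 43, 50):
        t, d = stopping(mph)
        print(f"{mph} mph: stops in ~{t:.1f} s over ~{d:.0f} m")
    # 25 mph: ~1.4 s over ~8 m
    # 43 mph: ~2.5 s over ~24 m
    # 50 mph: ~2.9 s over ~32 m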


> They can't detect me slowing down before I start slowing down. So if it's t-4 until impact and I'm still moving at full speed, they would need to start braking now if they can't stop in 4s (assuming the worst case that I continue on my current trajectory).

Right, collision avoidance is basic physics, accounting for the stopping time and distance of pedestrians and cars. So the question is whether pedestrians on sidewalks really have so many collision vectors with traffic that autonomous vehicles would be jerky all of the time, as the initial poster suggested.

I claim reasonable defaults that would far outperform humans on average wouldn't have that property. Autonomous vehicles should be programmed to follow the rules of the road with reasonable provisions to avoid collisions when possible.


I think the situation might be all the transitions. All the time people on bikes switch from a driveway to a bike lane, during which they could continue straight into the road. Or people step out of a building and walk diagonally across a large sidewalk, they could keep going straight into the road.


Which simply means it is because the car's AI isn't good enough to classify that object as something it should slow down for versus something it can ignore (like an empty plastic bag drifting across the road.)


Well, it is good enough; it's just that it develops that confidence over a period of several seconds. In this case it took until T minus 1.3 seconds to realize "ok, this is something we should stop for".


well then, frankly, that’s not good enough.


I don’t know their internals, but from that report it looks like their recognition system is probabilistic and routinely hops back and forth between “car collision RED ALERT!” and “lol there’s no problem”. If it were to randomly slam on its brakes every other second, it would cause all kinds of other accidents.


Sensors were fine, victim was detected, software was crappy.

That's vanilla testing edge-case stuff, really, and it's known that uber are unter when it comes to this, but the removal of all the useful safety layers after that (braking, alert, second human, hardware system) is reckless and stupid.


Well, exactly. Nothing wrong with the sensors, and the classifier was getting consistent pings. This is the critical failure that led to the crash, as much as the shitty final-second emergency non-process.


Because self driving doesn't actually work.


I observe that cars and other objects often come very close to each other, so it would seem impossible to simply brake based on "converging paths". It's necessary to know what an object is and how it's going to behave. If you don't, I don't see how you can go anywhere.


It absolutely is not necessary to know how another object will behave on the road: you slow down because of the uncertainty!

"It might move out of my way" is no reason to get so close at such a high speed that you can't avoid it when it doesn't!


People slow down from 35 to 30 or whatever. Humans don't slow down to the point at which an accident is physically impossible for all unexpected movements, because that would be zero considering there are frequently objects within feet or inches.

Self driving cars can emulate humans, but that won't bring them to human level performance without the ability to model other actors. If they try to mathematically rule out the possibility of accidents without such models, they won't be able to go anywhere.


That's not how traffic works at the moment. I often have pedestrians walk towards the street and I assume they will stop so I don't slow down unless they are children or similar. Almost every day I could hit pedestrians if they kept walking.


You should slow down. Those people you are describing sound like they want to cross the street, and they probably have right of way, so yield for them.

Also, my two year old will sometimes walk towards the curb, but she is very good with streets, so I am not worried. She always stops and waits to hold someone's hand before crossing. This behavior freaks some drivers out, causing them to slow or come to a complete stop, which is the nicest outcome because then I can take her hand and cross the street. When I am walking by myself drivers rarely yield even as I am stepping into the street, even at marked crossings.

I guess my point is if my two year old exhibits the behavior you ascribe to a hypothetical non-child pedestrian then how can you be sure your hypothetical pedestrian won't just "keep walking"? What if they are blind, or drunk, or reckless? Perhaps you have been lucky before and never struck a pedestrian but I strongly urge you to assess your behavior. Stop for pedestrians, it's the nice thing to do and it's probably the law where you live.


You make me sound like a crazy driver :). Just watch the traffic along a busy road with pedestrians on the side. Nobody slows down if the pedestrians behave the usual way. You also often have pedestrians walk into the road and stop right before the zone where cars are. Nobody slows down, because they see that the pedestrian is observing traffic.


I have made this observation; I even mentioned it in my comment. I am urging you and anyone else who reads my words to assess your behavior so as to effect a change in the status quo. Sometimes pedestrians do walk into the road and they do get struck. The only foolproof way to prevent this is to change driver behavior, which is why the requirement to yield to pedestrians at intersections is codified into most laws.

When I am driving I often see people standing at the curb just staring at their smartphone. Usually these people are wasting time because they don't expect traffic to stop. When I stop for them they are usually pleased, they cross the road and get on with their life. Sometimes these people are just waiting for an Uber or something, when I stop for them they get confused and look at me funny. I don't mind, I just smile at them and resume driving. I am in a car, so I can accelerate and travel very quickly with almost no effort. It is no trouble for me to spend a few seconds stopping for a false positive.


In the context of self-driving cars though, they can't read expressions and exhibit them. They can't necessarily even say whether something is a person or not. So your driving methods are not applicable. A computer can say "given the physics of how an average person can move, it is possible for them to leap in front of me in X amount of time" and then what? I think that a self-driving vehicle that follows your principles without your analytic ability is going to have so many false positives it will be useless. And I think the fact that they aren't attempting to follow your principles is evidence they don't have the ability.


I agree, and perhaps self-driving cars are not yet ready for "prime-time". The solution for the current state of the art might also just be maintaining a lower speed with automated drivers, which may also necessitate limiting the types of roadway on which they can operate. A slower average speed shouldn't be a big problem for automated cars since they don't experience the frustration of human drivers. Given wide-enough adoption, accommodations can be made to traffic signalling apparatus, car-to-car communication, and car-to-cloud integration to develop near-seamless traffic flows, allowing shorter travel times even at slower speeds. From there the tech could be iteratively improved to provide faster speeds without compromising safety.


Thanks! That's exactly my point.


This only works if you don't have other cars run into you when you slow down unexpectedly. I am not saying that current traffic is sane, but just saying "always slow down when you see pedestrians" doesn't reflect reality. In CA you often have speed limits of 45 right next to houses with driveways. Either you play it safe, go 20 or less, and get cursed at by other drivers, or you go way too fast to respond to unexpected obstacles.


When I am driving I routinely check my rearview mirror and assess the following distance of the cars behind me, so I usually know I will not be rear-ended when I am stopping for pedestrians or for any other reason. If I am driving and I notice someone is following too close for our speed I will tap the brake lights so as to encourage them to increase their following distance. If this fails I will slow down to a speed where their following distance becomes appropriate. If they are an uncommonly aggressive driver I might even pull over or change lanes and allow them to pass, I certainly don't want to be rear-ended! That said, even if I were to fail at this, I would prefer to be rear-ended stopping for a pedestrian that would have stopped than to strike a pedestrian who failed to stop walking into the path of my vehicle.

The speed limit is a reference to the maximum allowable speed of the roadway, not the minimum, only, or even recommended speed.


You clearly never have driven a busy four lane street in LA with bicycles and pedestrians mixed in. What you are saying makes sense in theory but nobody drives that way.


I do drive this way, most often in Seattle which has no shortage of the behaviors you're referencing. You can drive safely too because you are in control of your vehicle.

I didn't always drive like this, but I was in an accident that was my fault that totally upended my life, so I made an effort to change my ways. You can do it too, before you get in an accident that sets your life back, or irreparably shatters it...

I recommend taking an advanced drivers education course if you seriously decide you want to improve your driving. A lot of this stuff is covered.


Stopping unexpectedly does not cause accidents, locking down on your brakes unexpectedly causes accidents. In fact, in the US the person who hits you from behind will be held at fault no matter how hard you hit your brakes or why, because they are expected to maintain sufficient distance and attention to stop when you do.


If a light application of the brakes causes the car behind you to slam into you, the fault is with the idiot tailgating you, while playing with their phone.

Nobody's suggesting that anyone should slam the brakes every time a moving object intersects your vector of motion.


> You make me sound like a crazy driver

No, you did that:

> I often have pedestrians walk towards the street and I assume they will stop so I don't slow down unless they are children or similar. Almost every day I could hit pedestrians if they kept walking.

With respect, I don't know what you meant to say but that sounds like a description of a bad (or at least inconsiderate) driver to me.

In any case, when I think about how I would design a self driving car, an "auto-auto", the first principle I came up with was that it should never travel so fast that it couldn't safely slow down or brake to avoid a possible collision. This is the bedrock, foundational principle.


> I often have pedestrians walk towards the street and I assume they will stop so I don't slow down

> the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path

Little different, huh? If you see something that looks like it might be in your way, and you aren't sure what it is, you just keep going?

And if I see a pedestrian in the middle of the road at a random spot, especially at night, I'm slowing down since I don't know WTF they're thinking. Or if I'm in a neighborhood with regular street crossings carved out of the sidewalk and someone's coming up to one of those - I don't know how well they're paying attention to their surroundings.


I can tell pretty quickly if something is pedestrian or a bicycle and plan accordingly. In addition I can tell where the pedestrian is looking. Sometimes I slow down, sometimes I don't depending on how I assess the situation.

I think it comes down to the fact that the classification algorithms are not ready for primetime.


You don't need to stop if you're uncertain, but you should slow down. That makes it easier to stop once you know you have to.


With a significant fuzz factor, agreed. If I'm on the sidewalk and take a step toward the road, should it make the car jerk? Probably not; it's a hard call for passenger comfort. From another angle, think of subway tracks: the algo you're describing would slow to a crawl as it passes through every station.


> From another angle, think of subway tracks: the algo you're describing would slow to a crawl as it passes through every station.

As far as trains go, they do slow down when passing through a track adjacent to a platform. There are some non-platform-adjacent tracks the train companies use to avoid slowing down; however, they will slow down or even stop if something is going on.

Similarly, high speed rail doesn't have level crossings due to safety considerations. Overall trains are very safe and they are _designed_ for safety. It is highly irresponsible and immoral to just wing it with people's life/safety.


> It is highly irresponsible and immoral to just wing it with people's life/safety

100% agree.

> As far as trains go, they do slow down when passing trough a track adjacent to a platform. There are some non-platform adjacent tracks the train companies use to avoid slowing down, however they will slow down or even stop if something is going on.

The equivalency isn't 'trains slow down through stations' (that would be cars having a lower speed limit in pedestrian areas, which they do and the Ubers honor); it would be 'train spikes the brakes if someone takes a step toward the edge' (which they don't, even though it would potentially save lives).

There's always a tradeoff between usability and absolute safety. I'm not saying the Uber did nothing wrong; at a minimum it should have spiked its brakes. The 'perfect world' solution would be the Uber knowing the mass and momentum of approaching objects, and whether it could stop in time. But honestly, would that have helped here? We'll never get rid of people walking in front of moving cars; we just have to find the happy balance (which we clearly haven't).


> 'train spikes the brakes if someone takes a step toward the edge' (which they don't, even though it would potentially save lives).

A train's deceleration under maximum braking is far, far lower than a car's. [1] suggests 1.2 m/s² (paragraph 8).

[2] says the deceleration of a low-speed train crashing into the buffers at the end of the line in a station should not be more than 2.45m/s² (paragraph 35). That caused "minor injuries" to some passengers.

Trains do slow down earlier if the platform they are approaching is very crowded, but there's not really anything else they can do.

[1] https://assets.publishing.service.gov.uk/media/547c906640f0b...

[2] https://assets.publishing.service.gov.uk/media/547c906640f0b...


With trains, nothing else is supposed to be on the tracks.

With cars, there is an expectation that you have to share the road with other vehicles, objects, obstacles, pedestrians, etc.


> If I'm on the sidewalk and take a step toward the road, should it make the car jerk?

No. That's my point. Take less drastic measures earlier, and only escalate when you have to. That's how I drive, and a self-driving car can do the same.


>At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).

>The system is not designed to alert the operator.

>The vehicle operator intervened less than a second before impact by engaging the steering wheel.

>She had been monitoring the self-driving system interface

It seems like this was really aggravated by bad UX. Had the system alerted the user, and had the user had a big red "take whatever emergency action you think is best, or stop ASAP if you don't know what to do" button to mash, this collision would have had a much better chance of being avoided.

Things coming onto the road unexpectedly aren't exactly an edge case when it comes to crash-causing situations. I don't see why they wouldn't at least alert the user if the system detects a possible collision with an object coming from the side and the object is classified as one of certain types (pedestrian, bike, other vehicle, etc.; no need to alert for things classified as plastic bags).

I don't see why they disabled the Volvo system. If they were setting up mannequins in a parking lot and teaching the AI to slalom around them I can see why that might be useful but I don't see why they would want to override the Volvo system when on the road. At the very least the cases where the systems disagree are useful for analysis.


Humans cannot react fast enough. There was just 1.3 seconds allowed. Drivers ed teaches 2 second following distance for a reason: it takes you most of that time to realize there is a problem and get your foot on the brake. In the best case a human would just hit the "okay to stop" button as the accident happened.

Of course cars cannot change speed instantly anyway - it is likely that even if the button was hit in time the accident was still unavoidable at 1.3 seconds. The car should have been slowing down hard long before it knew what the danger was. (I haven't read the report - it may or may not have been possible for the computer to avoid the accident)


At 40 mph 1.3 seconds is 76 feet, right around the threshold of stopping distance if the computer slammed on the brakes at that moment. At the very least, it's the difference between an ER visit and a fatality. Far too short a time for a human to react to a warning, though.
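A quick check of those figures, under the idealized assumptions of instant brake application and constant deceleration:

    # Distance covered in 1.3 s at 40 mph vs. a hard-braking (~0.8 g) stopping distance.
    MPH_TO_MPS = 0.44704
    M_TO_FT = 3.2808

    v0 = 40 * MPH_TO_MPS                    # ~17.9 m/s
    gap_ft = v0 * 1.3 * M_TO_FT             # distance covered in 1.3 s: ~76 ft
    stop_ft = v0**2 / (2 * 7.8) * M_TO_FT   # stopping distance at ~0.8 g: ~67 ft

    print(round(gap_ft), round(stop_ft))    # 76 67 -> braking at t-1.3 s is right at the margin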


> Far too short a time for a human to react to a warning, though.

Simply swerving left would have avoided the accident. Stopping is not the only thing the human driver could have done!


Sure. What I mean is that if they expect to correct this with some kind of "oh shit" alarm for the driver, 1.3 seconds isn't enough time to do anything helpful.

The whole thing is nuts. Imagine a human driver seeing the same thing the computer did, and responding the same way: they'd be in handcuffs.


Average human reaction time to a visual alarm is about 1/4 second, so that still leaves roughly a second of braking time. At the reported 43 mph, one second covers about 19 meters. Braking at even the 6.5 m/s² "emergency" threshold for that second would have cut the speed to roughly 28 mph, and the person on the street would have had a few tenths of a second more to jump away, alerted by the sound of the car braking.


Reaction time is only part of the picture. First you recognize the issue. Then you have to move your body. In court they generally use 1.5 seconds for all of this. Your best case is .7 seconds, but this situation was clearly not the best case - even if the human had been paying attention it would be much worse.

http://www.visualexpert.com/Resources/reactiontime.html has a good discussion (starting with why 1.5 seconds is not the answer)
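To put rough numbers on that: a sketch assuming 40 mph, hard braking at about 0.8 g, a warning issued 1.3 s before impact, and an instant brake response once the perception-reaction delay is over. The point is how completely the 1.5 s figure eats the budget.

    # How much the human reaction delay eats into a 1.3 s warning (illustrative only;
    # 0.25 s is a best-case simple reaction, 1.5 s is the figure courts commonly use).
    import math

    MPH_TO_MPS = 0.44704
    v0 = 40 * MPH_TO_MPS
    warning_s = 1.3
    gap_m = v0 * warning_s
    decel = 7.8                    # ~0.8 g hard braking

    def impact_speed_mph(reaction_s):
        braking_gap = gap_m - v0 * reaction_s   # distance left when the brakes finally bite
        if braking_gap <= 0:
            return v0 / MPH_TO_MPS              # never got to brake before impact
        v2 = v0**2 - 2 * decel * braking_gap
        return math.sqrt(v2) / MPH_TO_MPS if v2 > 0 else 0.0

    print(impact_speed_mph(0.25))  # ~12 mph: an alert *might* have mattered
    print(impact_speed_mph(1.5))   # 40 mph: with a realistic reaction time it doesn't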


Incredible. What is this software doing on real roads?


It shows that Uber is the exact same company it's been for years. Nothing has changed.

For something this safety critical, you want the software engineering quality of Boeing, NASA, etc. This type of mistake is pretty inexcusable.


Even more egregious is that the governor, Doug Ducey, was happy to issue executive orders waiving any safety oversight, allowing Uber to put the public at risk in the first place.[0]

https://www.nbcnews.com/tech/innovation/emails-show-arizona-...


AZ decided regulations were unnecessary overhead.


Or maybe people hand waved around software review saying "its fiiiiine" leveraging trust or .. $trust$


Move fast and break things (and people).


Thanks, we've changed the URL to that from https://www.wsj.com/articles/uber-self-driving-car-that-stru....


I think this will end up with an Uber exec facing manslaughter charges for gross negligence.


I hope you're right, but that's only part of the solution. Everybody in that reporting chain should be looking down the barrel of consequences. Proportional to their level of control, but harsh to be sure. Implementors need to have it made very clear to them as well that no, just-following-orders isn't enough.

I have fired clients for doing reckless and stupid things orders of magnitude less reckless and stupid than what Uber has done here, and I would hope that I would walk the hell out were I confronted with "we disabled the brake for a smoother ride and then disabled the alarms because they were too noisy". Do thou likewise, yeah?


Software engineers have definitely got to start taking their ethical responsibilities seriously.


I think this case is a bit too hard to prove much more than negligence, but establishing criminal liability would send the right message that you gotta do this right before unleashing it on public streets.


I don't think so. The police have said that 1) the pedestrian was at least partially at-fault for not crossing at a crosswalk and 2) given the circumstances, the same outcome would have occurred with a human driver.


>2) given the circumstances, the same outcome would have occurred with a human driver.

The police are in no position to assert that, nor do they know whether or not Uber is guilty of negligence. Police do not bring charges, and they're not running the investigation.


Right, an officer’s opinion is worth jack shit in an NTSB investigation. They go after facts.


If you’re gonna test something like this on public roads, there need to be better engineering failsafes in place.

The place for the product folks to override safety features is the test track. If the feature didn’t work, they should have pulled the drivers because they were not trained to properly operate the machine.

If you give the “driver” training on a car with an autonomous braking system, then give them a car without it, that’s not on the driver. Someone was negligent with safety in regards to the entire program.

I’m not saying anyone needs to go to jail over this, but there do need to be charges IMO. Personal liability needs to be involved in this or executives will continue to pressure employees to do dangerous things.


Do you have a link or quote for 2)? I was under the impression that while the video looks dark, it wasn't quite so dark in reality, and a human driver would have fared better (if they were driving instead of checking the console every 5 seconds, that is).


Yep, here's the quote and the link:

> "It's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway," Moir told the San Francisco Chronicle after viewing the footage.

https://www.usatoday.com/story/tech/2018/03/20/tempe-police-...


The dashcam footage has poor dynamic range and is not representative of what a human driver would have seen. (This has been pointed out repeatedly in previous discussions here on HN — I'm saying that not to chide you, but just to establish it as a fact.)


I'm not convinced that (2) is literally true — that the pedestrian would have been likely to be killed in this particular instance — but: she was strolling nonchalantly across a four-lane roadway with a 45mph speed limit, in the dark, with dark clothing on, and paying not the slightest attention to oncoming traffic. I'm sure that if she did that regularly, sooner or later she would have had at least a close call.


This was discussed extensively here after the event happened. It's not pitch black around there and a number of people have recorded videos driving at night through that exact area and the entire road is well lit enough to see a person with a bike on the road. The low-fidelity CCD video Uber posted in the immediate aftermath is not representative of human vision or (apparently) the sensors that Uber had on the vehicle.


Right, and I said exactly the same thing elsewhere [0], but I still think she was taking a big chance by being so oblivious. A 45mph speed limit means some people will be doing 55. Stroll casually across a roadway like that enough times, and you will eventually force a driver to swerve around you at high speed or make a panic stop or at the very least blast you with the horn.

I can't imagine doing what she did — even thoroughly stoned, as she may have been (she tested positive for methamphetamine and marijuana), I would have more sense of self-preservation than that.

[0] https://news.ycombinator.com/item?id=17147069


"the pedestrian was at least partially at-fault for not crossing at a crosswalk"

That should be irrelevant. Even if the pedestrian is jay-walking, it's still not legal to hit them. Further, having solid evidence that the car detected the pedestrian and did nothing to avoid her mitigates the pedestrian's responsibility, no?

Also, the "center median containing trees, shrubs, and brick landscaping in the shape of an X" sure looks like it should have some crosswalks, from the aerial photograph. What's it look like from the ground?


Human drivers are also charged when they kill people, so (2) doesn't seem to have any weight.

And partially-at-fault, on one hand, means there's fault on the driver side too, and on the other hand, is a judge's decision to make, not the police, no?


That is absolute fucking insanity.


So, in the case that emergency braking is needed, nothing is designed to happen and no one is informed. I guess they just hoped really hard that it wouldn't murder anyone?


That makes no sense


I suspect that Uber was optimizing for the common case, which is normal traffic conditions, and didn't want their emergency braking accidentally firing during normal driving causing rear endings.

So the question becomes why couldn't they get emergency braking solved before driving on the road? Maybe that requires collecting good data first, for training the system?


You don't make things safe by optimizing for the common case, and "it can't be safe until after we have tested it on the road" is not a valid reason to allow testing of an unsafe vehicle on public roads.


Considering that 1) you are supposed to keep sufficient distance to break even if the vehicle/generic object in front of you suddenly freezes in space, e.g. a massive wall flush with the rear end of the vehicle in front of you suddenly appears, and 2) the only things that could conceivably rear-end an emergency-breaking Uber/Volvo would be bikes and other >=4 wheel vehicles (cars/trucks/etc.), which either drive carefully anyway ((non-)motorized bikes) or have a low probability of human damage (cars/trucks), false positives should be preferred to false negatives by somewhere between 5:1 and 1000:1. The latter only if the following vehicles are civilized enough to hold their distance. The car could figure this in and compute probabilities for what damage an emergency maneuver would cause, which means that it'd break for a stray cat if it's otherwise alone on the road and the surface is dry. But it won't break for a wolf if it's tailgated by a truck, and might potentially even accelerate, to make up for the loss of momentum (but light the rear break lights, to stop as soon as it registers the truck slowing down).


b r a k e


> I suspect that Uber was optimizing for the common case, which is normal traffic conditions

That's like creating HTML forms that only work with the common use case and crash spectacularly on unexpected input. Except that this time it's fatal. That's not the kind of software quality I want to see on roads.


Why not license the software system from Volvo, which obviously works (people drive Volvos with this turned on every day without erratic braking), instead of disabling theirs because it was apparently broken?

This is beyond incompetence. There is a different level of software engineering when making a website vs making a pacemaker, rocket or flight avionics. You need the quality control of NASA, SpaceX or Boeing, not that of .. whoever they have running their self driving division.


I have a Subaru with EyeSight and it does strange things sometimes. For example, if I happen to be in the left (passing) lane going around a leftward curve and a car in the right lane is stopping or slowing to turn right, the Subaru will hit the brakes because due to the curve of the road, the right lane car is straight ahead. It's scared me a few times.

The other thing about the system that sucks is that it's all optical (AFAIK) so when visibility is poor, it shuts off. They need to add more sensors because those are the conditions I would most like an extra set of eyes.


Goddamn right: if the thing brakes erratically then, come on, it's not ready to brake!!


s/not ready to brake/not ready to drive/


> I suspect that Uber was optimizing for the common case

Yeah that's not how you design safety critical software. This isn't some web service. Either you're wrong (let's hope) or Uber is completely negligent.

Source: I write safety critical code for a living.


> didn't want their emergency braking accidentally firing during normal driving causing rear endings.

In no case would sudden braking be the cause of a rear-end collision. It's always the fault of the driver behind.


True, and I'm with you, though I'd bet every dollar I have that, if people routinely randomly slammed on their brakes we'd have a whole lot more rear end collisions. We don't expect people to do that, and when it does happen it's rare.


tl;dr: Safety wasn't a concern for Uber, and they knowingly put a vehicle with a broken self-driving system on the road.

I hope this won't stay unpunished (both at corporate and personal level) if confirmed.


Travis, although no longer at Uber, should also be held responsible, as he was the driving force behind the policies and culture of 'breaking the law' to get ahead.


I hope at least one human being in touch with him tells Travis that he is directly responsible for the death of a fellow human being. The person that died deserves at least that few seconds of remorse.


That somehow made me think of Mongo DB ..


I actually feel sick...

> 1.3 seconds before impact ... emergency braking maneuver was needed ... not enabled ... to reduce the potential for erratic vehicle behavior

This wind-up toy killed a person.

Transport is a waking nightmare anyway. Every time you get in your car, every mile you drive, you're buying a ticket in a horrifying lottery. If you lose the lottery you reach your destination. If you win... blood, pain, death.

Into this we're setting loose these badly-programmed projections of our science-fiction.

- - - -

A sane "greenfield" transportation network would begin with three separate networks, one each for pedestrians, cyclists, and motor vehicles. (As long as I'm dreaming of sane urban infrastructure, let me sing the praises of C. Alexander's "Pattern Language" et. al., and specifically the "Alternating Fingers of City and Country" pattern!)

My mom has dementia and is losing her mind. We don't trust her to take the bus across town anymore, and she hasn't driven in years. If I wanted an auto-auto[1] to take her places safely I could build that today. It would be limited to about three miles an hour with a big ol' smiley sign on the back saying "Go Around Asshole" in nicer language. Obviously, you would restrict it to routes that didn't gum up major roads. It would be approximately an electric scooter wrapped in safety mechanisms and encased in a carbon fiber monocoque hull. I can't recall the name now but there's a way to set up impact dampers so that if the hull is hit most of the kinetic energy is absorbed into flywheels (as opposed to bouncing the occupant around like a rag doll or hitting them with explosive pillows.) This machine would pick its way across the city like "an old man crossing a river in winter." Its maximum speed would at all times be set by the braking distance to any possible obstacle.

[1] I maintain that "auto-auto" is the obviously cromulent name for self-driving automobiles, and will henceforth use the term unabashedly.


"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior."

If you can't give your "self-driving software" full access to the brakes because it becomes an "erratic driver" when you do that, you do not have self-driving software. You just have some software that is controlling a car that you know is an inadequate driver. If the self-driving software is not fully capable of replacing the driver in the car you have placed it in, as shipped except for the modifications necessary to be driven by software, you do not have a safe driving system.


The irony is that this is 100% what MobilEye said in their model demo, I think 3-4 years ago. Their CEO said that regulators cannot regulate all breaking conditions and should only test for false negatives, exactly to prevent this. He stated that dealing with false positives is going to be in the best interest of every manufacturer, and the easiest thing to do is to disable the breaks completely, but that would be fairly easy to detect in any structured test or a test drive. Anything else would be a problem because you would be stacking tolerances and, worse, creating test cases with a clear conflict of interest where the interests of the car maker and the regulator align.


This comment made my brain hurt. I think you have something interesting here, that’s why I read it four times and still didn’t understand.

Please re-write this for clarity.


Mostly the message is that regulators shouldn't be investing their resources in checking whether the car breaks too often (from false positives in obstacle detection), because the car company has a strong incentive to reduce unnecessary breaking anyway, or driving would be unpleasant and slow for a large portion of drivers.

The regulators should only test for false negatives, where the car should have stopped but did not detect the obstacle, because there it is a clear threat to safety, and the car company's incentive, while definitely still present, is less pure: the number of false negatives is a direct trade-off with the quantity of false positives (because it is a threshold, a minimum confidence level above which you decide that there is indeed something in front of the car and you need to break), and false positives make driving more awkward for 99% of drivers.
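A toy illustration of that threshold trade-off, with entirely made-up score distributions (nothing to do with any real perception stack):

    # Moving the detection threshold trades missed obstacles against false brakes.
    import random

    random.seed(0)
    # Fake "collision confidence" scores: harmless clutter tends to score low,
    # real obstacles tend to score high, but the distributions overlap.
    clutter   = [random.gauss(0.3, 0.15) for _ in range(10000)]
    obstacles = [random.gauss(0.7, 0.15) for _ in range(10000)]

    for threshold in (0.4, 0.5, 0.6):
        false_pos = sum(s >= threshold for s in clutter) / len(clutter)
        false_neg = sum(s < threshold for s in obstacles) / len(obstacles)
        print(f"threshold {threshold}: false-brake rate {false_pos:.1%}, "
              f"missed-obstacle rate {false_neg:.1%}")
    # Raising the threshold gives a smoother ride (fewer false brakes) at the cost of
    # more missed obstacles, which is why the regulator's interest and the
    # manufacturer's comfort incentive pull in opposite directions.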


Does any regulator actually give a damn about a self-driving car braking too often?


The persistent use of "break" instead of "brake" is difficult for me in this specific context.


This comment page has dozens of this misspelling. I'm used to it on reddit, I thought the HN crowd is more intelligent than reddit...


The TL;DR as I read were

1. Regulators want aggressive breaking but carmakers want smoother driving

2. Manually tuning all the edge cases where the software is uncertain what's happening will lead to fragile, monolithic black boxes


Aggressive random braking is also a safety issue.


Yep, this is the thing that worries me the most on some of these systems right now. They are going to get rear-ended a lot for the next few years, IMO.


Rear-end collisions have basically no fatality rate, though. They do have material costs, but if you're optimizing for no loss of human life it sounds more appealing.

These kinds of tradeoffs were things every self-driving car software developer KNEW they were going to have to deal with - the most extreme being the one where the software has to decide who to kill and who to save:

https://www.theglobeandmail.com/globe-drive/culture/technolo...


And rules out use on snowy roads.


Why? Brakes work fine on snowy roads. Distances can be adjusted for snowy roads. It happens every winter where it snows a lot. I don't see a problem.


Not sudden random braking, no. The most important advice for driving on slippery surfaces is to avoid sudden braking, and in general to be careful when you brake.


That'll teach people to keep their damn distances!


I second the motion.


> If you can't give your "self-driving software" full access to the brakes because it becomes an "erratic driver"

You misunderstood. The Uber software had sole control of the brakes (plus the human of course). The Volvo factory system was disabled so that it didn’t have negative interaction with the Uber system.

Your mistake is understandable. The article was poorly written, perhaps due to a rush to publish, as is the norm these days. Even if the NTSB report was unclear, that doesn’t excuse clumsy reporting.

If you’ve ever done significant mileage in a car with an emergency braking system you probably have experienced seemingly random braking events. The systems favor false positives over false negatives.


Is that true? A cousin comment appears to indicate Uber had a parallel emergency braking system and that is what failed to trigger:

> At 1.3 seconds before impact, the self-driving system determined [...]

Unless this is exceptionally poorly worded, it certainly sounds like self-driving system was the one doing the determination.

https://news.ycombinator.com/item?id=17144741


I wrote it carefully and I stand by it. If it can't deal with it because your system is getting too confused for any reason, you don't have a self-driving system. Being able to function enough like a human that the safety features on the car don't produce an unacceptable result is a bare minimum requirement to have a self-driving car.

This isn't horseshoes, as the old saying goes. I unapologetically have a high bar here.


Far as I can tell, the done thing in the industry is to disable all other safety systems (or to not have any in the first place) and to delegate safety entirely to the self-driving AI.

The charitable interpretation of this is that the industry believes that self-driving AI is a safety feature of greater quality than, say, lane assist or auto-braking.

The less charitable one is that they find it too much work to integrate their AI with other safety systems. Which, to be fair, really is going to be a lot of extra work on top of developing self-driving.


>You misunderstood. The Uber software had sole control of the brakes (plus the human of course). The Volvo factory system was disabled so that it didn’t have negative interaction with the Uber system.

That's not how I read it, or how any of the journalists who are reporting the story are reading it. Uber disabled the self driving software's ability to do an emergency stop when it detected it was going to crash. The Volvo system is separate and also was disabled when the car was in self driving mode.

https://www.bloomberg.com/news/articles/2018-05-24/uber-self...

>Sensors on an Uber SUV being tested in Tempe detected the woman, who was crossing a street at night outside a crosswalk, eventually concluding “an emergency braking maneuver was needed to mitigate a collision,” the National Transportation Safety Board said in a preliminary report released Thursday.

>But the system couldn’t activate the brakes, the NTSB said.

That's the only reading that makes sense to me, otherwise why did the car fail to attempt to stop when it detected the pedestrian and knew it was going to hit them?


This is from the NTSB report:

"At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2). According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior."

I believe you are the one who has misunderstood.


I think you're wrong. The Uber software controls the brakes, thru whatever means they're controlling the car. It has to – the car has to stop somehow!

"emergency braking maneuvers" refers to an additional automated (software) system for automatically applying the brakes in an emergency (that's detected by that additional system).


No, see footnote 2 of the NTSB report:

>2 In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s^2).

So Uber's self-driving system can command normal braking, but it cannot "slam" the brakes. The other system, Volvo's, is deactivated when Uber's is working and cannot brake at all. Thus, since Volvo's is deactivated and Uber's won't brake if it judges that a deceleration of >6.5 m/s^2 is needed, it turns out that in automated mode the car actually lacks the ability to trigger emergency braking at all, hoping instead that the driver will somehow notice. But in a sadistic twist, no warning is given to the driver at any moment that they need to slam the brakes.
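One way to picture the footnote: the planner can request whatever deceleration it likes, but requests above the 6.5 m/s² threshold are simply not acted on. The clamping logic below is my guess at that behavior, not Uber's actual code; the preliminary report doesn't say whether a capped braking level is substituted or nothing happens at all.

    # Hypothetical illustration of a deceleration cap like the one in the footnote.
    # The 6.5 m/s^2 figure is from the NTSB report; the fallback behavior is assumed.
    EMERGENCY_DECEL = 6.5   # m/s^2: anything above this counts as "emergency braking"

    def commanded_decel(needed_decel_mps2: float, emergency_enabled: bool) -> float:
        """Return the deceleration the planner is allowed to command."""
        if needed_decel_mps2 <= EMERGENCY_DECEL:
            return needed_decel_mps2
        if emergency_enabled:
            return needed_decel_mps2          # slam the brakes
        # Emergency braking disabled: in this reading the request is simply not
        # honored, and the driver is not alerted either.
        return 0.0

    print(commanded_decel(8.0, emergency_enabled=False))  # 0.0 -- the gap in question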


At 1.3 seconds, and with as many false-positives as it sounds like the system has, I don't think a warning would have helped.


No, it would probably take a large part of that time to even react, especially if you are looking at a screen.

However, if the safety driver was trained to brake immediately upon warnings it could have worked quite well. But that would negate the removal of e-brake actuation.....


If the car cannot use emergency braking, it could at least decelerate. It looks like some hack where the engineers just commented out the code for the brakes and left the decision to the driver.


edit: nevermind, see makomk's response to me.

The NTSB report is pretty unclear, I had to re-read it several times and I think you're correct that the "emergency braking maneuvers" they refer to are the Volvo ones. It's strange though that they word it as

> At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision

Is the Uber self-driving system able to interact with the Volvo system? Or are they calling the Volvo safety features a second "self-driving system"? And what were the steps, Uber realizes it needs an emergency braking maneuver and sends a signal to the Volvo system which then responds with "I'm disabled" ?

I understand why the Volvo features may be disabled, but it's alarming that the self-driving system made no attempt to brake at all when it was fully "aware" it would hit someone.

The report does mention the Volvo braking features by name a few paragraphs earlier though... So I'm still not entirely sure.


They definitely seem to be referring to the Uber self-driving system. The sentence "At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2)." has this footnote attached to it: "In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s^2)".


Thanks, don't know how I missed that, I'll blame my phone screen haha.


> The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

If what you said is true, then the vehicle operator would not be relied on to intervene because the Uber self-driving software would apply the brakes. Since the vehicle operator is relied on to intervene, this indicates the Uber self-driving software has its emergency braking disabled.


> If you’ve ever done significant mileage in a car with an emergency braking system you probably have experienced seemingly random braking events. The systems favor false positives over false negatives.

This is not my experience with VW's 2016-model-year system.

Sometimes it stops a second or two before I would've. I haven't had any false positives, though.


Same here in our 2016 VW: we had a beep once for a warning in an intersection with traffic nearby, but never a false positive.


I presume the expectation was that the human driver was supposed to be fully engaged and take evasive action if and when required. What jumps out at me is the fact that the system does not alert the driver when his/her intervention is required.


I'm not sure it matters, because going from disengaged to fully engaged and actually braking to a stop in 1.3 s is not happening for an average human. By the time the system alerts the driver, the ped is already dead.


But an attentive human would have noticed the pedestrian around the same time the system was confused, ~6s before impact. Enough time to swerve and stop.


At 6 seconds before impact the driver would surely assume the system will notice and brake in time. By the time the driver realizes that the automated system will NOT brake, it's very likely too late. Automated systems which rely on humans braking in time have no place on the street, imho. This may be different for lane assist, where malfunctioning is more obvious and leaves more time for intervention, although even there the latest Tesla accidents may tell a different story.


> If the self-driving software is not fully capable of replacing the driver in the car you have placed it in, ...,you do not have a safe driving system.

Nobody disagrees with you and that is explicitly the reason why a human is on board, so I am not sure what you are arguing against.


Is that enough?

I think it's very clear that a human driving a normal vehicle is different from a human sitting at the wheel of a self-driving (semi-self-driving?) vehicle. You simply cannot expect a human to remain as engaged and attentive in such a passive situation.

It's baffling to me that they chose to deactivate emergency braking without substituting a driver alert. If the false-positives are so frequent as to render the alert useless (i.e. it's going off all the time and you ignore it) I don't think these vehicles are suitable for on-road testing.

You can lean fairly heavily on the 'still dangerous, but better' argument in the face of 40,000 US vehicle fatalities each year, but there are limits.


Except it’s patently clear that is a flawed concept. Here’s a prime example where a human, not paying attention because the car has been successfully driving for a while, is given 1-2 seconds to emergency stop the car. That is not enough time to process what’s going on and take over. Even 10 seconds is likely not enough.


Worse, under Uber's regime the one human had to deal with both emergency situations (without warnings from the system) and instrumentation feedback (without a HUD), so the intended operation of this test was that the safety driver spends half the time looking down and away from the road.


And they also ran tests at night. This is not an obstacle for the LIDAR but is an extra challenge for the human.


Yeah, and it is also why they had a driver-facing camera, to ensure that drivers were taking this responsibility seriously.

It seems that Uber wasn't actually verifying that, though.


The driver was also in charge of monitoring and classifying/tagging instrument messages appearing on monitors on the center stack.


I doubt the camera can give you a good idea of whether or not the 'driver' starts to daydream. They could be looking at the road and not paying attention. Worse, the 'driver' could be unaware they aren't paying attention, rendering their intention to stay alert moot.


Of course that is possible, but in this case the driver was clearly not looking ahead. I guess it is possible that this was the first time the driver did that, but I think that is unlikely.


Maybe he was just making a statement.


So, let the two software systems fight with each other during emergencies and hope it somehow results in something positive?

That sounds like a terrible idea.


On the contrary: the Volvo system is designed not to fight but to override any other input, including the human driver. In any other Volvo car currently on the road that detects an obstacle, the Volvo system will override the driver and bring the car to a stop even if the driver insists in hitting the obstacle: https://www.youtube.com/watch?v=oKoFalJiazQ

Independently-working override-capable systems are the base of engineering redundancy safety. See airborne collision avoidance systems (ACAS), which will automatically and forcefully steer an aircraft to avoid a collision if necessary: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...


I don't have experience with self-driving cars, Uber or Volvo safety systems, but my Jeep's backup collision prevention often slams the brakes if I'm backing up and the camera detects potholes, oil stains, or irregular painted lines in the parking lot.

If any of the systems are vulnerable to any such false positives, and equipped to enable emergency braking to avoid them, even on the highway, it's not hard to imagine why they might be disabled, especially during testing phases.

I think it's fair to say that at this early stage in product development, there's probably no 'always right' answer for how to handle a given obstacle without considering locality, driving speed, road conditions, likelihood of false positives and negatives, etc.


They shouldn't be testing on public roads and highways then.


Yeah mine does that too. I wish they had a sensitivity option like they do with the forward collision sensors.

Hey there's an idea for Uber, maybe instead of disabling the forward collision system entirely they could just decrease the sensitivity to lessen false positives (like in our Jeeps)?


TCAS systems won't automatically take over. Instead they audibly issue an order to the flight crew, describing the action that must be taken, and the flight crew must take the action.

Instructions from TCAS are near absolute in their priority. If ATC says to do something different, you ignore ATC and do what TCAS says. If the pilot in command says to do something different, you ignore the pilot in command and do what TCAS says. If somehow God Himself is on your plane and tells you to do something different, you ignore Him and do what TCAS says. Compliance with TCAS is non-negotiable, and the Überlingen disaster[1] is the bloody example of why it's that way.

Self-driving/autonomous-car systems need to have a similar absolute authority built in. If Uber disabled theirs because of false positives, it's a sign Uber shouldn't be running those cars on public roads.

[1] https://en.wikipedia.org/wiki/2002_%C3%9Cberlingen_mid-air_c...


AFAIK Airbus has a TCAS-capable AP/FD, certified at least for the A380, which can automatically fly RAs by itself: https://www.skybrary.aero/index.php/Autopilot/Flight_Directo...


If the Uber engineers shared your confidence that it would play out that way, they would not have disabled it.

Redundant safety systems are a great idea, evidently Uber needed more of them, and integrating with the Volvo system might have been a reasonable option. It's silly to suggest that the integration would necessarily have been trivial, though. That's what I'm objecting to.


A woman is dead because of some combination of overconfidence, hubris, and ignorance of Uber’s self driving engineers.

The above traits aren’t exclusive to Uber’s “engineers”, but are lethal when applied to the engineering of life safety systems.


I am not sure if I would have been able to avoid such a situation myself. That's the only thing that keeps me from excoriating Uber...


How many people are still alive because Uber had the restraint to not slap control systems together willy-nilly, like so many people here seem to think is an obviously great idea?

A lot more than one, that's for sure.


Yikes. I would not allow a business to continue to exist whose argument is “at least we’re only partially reckless”.

See how that holds up in civil court in front of a jury, or any legislative body Uber might need to convince to allow their operation in a jurisdiction in the future.

“Just think of how many more people we would’ve killed if we didn’t care at all!”


"at least we’re only partially wreckless" has been Tesla's standard operating procedure for a while now. I'm guessing you think Tesla shouldn't exist either.


Oh, they'll get crucified for it, I'm sure -- because the public doesn't understand that integration is never trivial, and that "obvious" integrations aren't always good ideas.

I had thought an audience of developers would "get it," since we deal with fallout from ill-conceived integrations every day, although admittedly in a far less spectacular form than control system engineers.

Unfortunately, the Uber hate train has already left the station, and there's no slowing it down until the investigation finds (or doesn't find) actual evidence of negligence rather than clickbait guesswork by armchair engineers. Too bad.


> I had thought an audience of developers would "get it," since we deal with fallout from ill-conceived integrations every day, although admittedly in a far less spectacular form than control system engineers.

You do a disservice to the audience by assuming it would be understanding of grossly negligent behavior.

Failures happen; that is to be expected. If you're building self-driving vehicles, you're supposed to be engineering for those failures. Disabling two life-safety systems (Volvo's AEB and Uber's own AEB) and relying on a single inattentive human driver? I don't see how that's understandable or justifiable in any scenario outside a carefully controlled test track.


I think you’re misunderstanding. According to the report, it’s not just the Volvo emergency braking system that was disabled. Uber’s self-driving system had its own emergency braking feature that was also disabled.


How many people were not killed by Uber? That's a terrible way to look at things. It is better to wonder _why_ Uber killed a person. The answer appears to be right in the NTSB report about the brake system.


You're being very charitable in your assumption that Uber 'engineers' gave it much thought at all.

(Were any Professional Engineers even involved in this? Or was it just a bunch of "software engineers"?)


However much thought they gave it, I guarantee that everyone in this comment train has given it much less. It wasn't a trivial decision, their choice is not prima facie evidence of negligence.


Again, very charitable. Your assumptions are baseless, seemingly motivated by little more than your trust that incredible incompetence and negligence is rare in the real world.


Again, not at all charitable. Your assumptions are baseless, seemingly motivated by little more than your mistrust of the competence of people who have been actively working on this problem, many of them for years.


He said he can "guarantee" Uber engineers gave it careful consideration (or at least more than people in this thread), but in truth he can do nothing of the sort. There is a plethora of examples where engineers and "engineers" didn't give a problem careful consideration and it got people killed. There is no rational basis for guaranteeing this is not one of them.

There is no "guarantee" that Uber "engineers"/engineers put more thought into this than "their system is inconvenient, tear it out" or "we don't need their system because ours will be better, tear it out." Nobody can guarantee Uber engineers were not stupendously negligent until the investigation is complete. Anybody who claims to offer such a guarantee has no rational basis for doing so.


You might be troubled to know that the guidance and control systems on modern spacecraft and many aeronautics subsystems work this way -- at least three redundant systems have access to sensor data, and each makes an independent decision about what action, if any, is needed. In short, they vote, and the majority wins.
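
Something like this simplified 2-out-of-3 vote (illustrative only, not actual flight software):

    # Simplified 2-out-of-3 majority vote -- illustrative, not actual flight software.
    # Three independent channels each look at their own sensor data and propose an
    # action; the proposal with majority support is the one that gets executed.

    from collections import Counter

    def channel_a(data): return "brake" if data["a_range_m"] < 30 else "continue"
    def channel_b(data): return "brake" if data["b_range_m"] < 30 else "continue"
    def channel_c(data): return "brake" if data["c_range_m"] < 30 else "continue"

    def vote(data):
        proposals = [channel_a(data), channel_b(data), channel_c(data)]
        action, votes = Counter(proposals).most_common(1)[0]
        # With two possible actions and three voters, a strict majority always
        # exists, so a single faulty channel is outvoted by the other two.
        return action

    print(vote({"a_range_m": 25, "b_range_m": 28, "c_range_m": 120}))  # -> brake
    print(vote({"a_range_m": 25, "b_range_m": 90, "c_range_m": 120}))  # -> continue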


There is no "fight with each other". If any system says "brake", the car brakes. If none does, the Uber system has control.
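
In code it would be little more than an OR over the brake requests (sketch with hypothetical names, not Uber's or Volvo's actual interfaces):

    # Sketch only -- hypothetical names, not Uber's or Volvo's actual interfaces.
    # There is nothing to fight over: a brake request from any source wins, and
    # the self-driving planner only has control while nobody is asking to brake.

    def control_step(planner_accel_mps2, brake_requests):
        if any(brake_requests):
            return -8.0                  # maximum braking, regardless of the planner
        return planner_accel_mps2        # otherwise the planner keeps control

    # (uber_aeb, volvo_city_safety, planner) brake requests:
    print(control_step(1.5, (False, True, False)))   # Volvo requests braking -> -8.0
    print(control_step(1.5, (False, False, False)))  # no request -> planner's 1.5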


Every time you drive a modern car, you fight for control with a computer driving system.

Somehow, it works out.


Unclear why this is being downvoted. Modern cars have all sorts of computer driving systems, e.g. ABS. If you learned to drive without it, you still think to pump the brakes, even though the computer does a better job if you just slam on them.

The point is that so far most (all?) of these computer systems don't have a failure mode where it kills pedestrians, because their scope is very limited.


Because one doesn't "fight" with ABS not wanting to brake. It will brake when you hit the brake pedal.

Same with power steering: it won't steer in a different direction. You don't have to "fight" with it.
