According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
The reason we were ultimately able to do this is because we were operating in a fully-segregated environment of our own design. We could be certain that every other vehicle in the system was something that should be fully under our control, so anything even slightly anomalous should be treated as a hazard situation.
There are a lot of limitations to this approach, but I'm confident that it could carry literally billions of passengers without a fatality. It is overwhelmingly safe.
Operating in a mixed environment is profoundly different. The control system logic is fully reversed: you must presume that it is safe to proceed unless a "STOP" signal is received. And because the interpretation of image & LIDAR data is a rather... fuzzy... process, that "STOP" signal needs to have fairly liberal thresholds, otherwise your vehicle will not move.
Uber made a critical mistake in counting on a human-in-the-loop to suddenly take control of the vehicle (note: this is why Level 3 automation is something I'm very dubious about), but it's important to understand that if you want autonomous vehicles to move through mixed-mode environments at the speeds which humans drive, then it is absolutely necessary for them to take a fuzzy, probabilistic approach to safety. This will inevitably result in fatalities -- almost certainly fewer than when humans drive, but plenty of fatalities nonetheless. The design of the overall system is inherently unsafe.
Do you find this unacceptable? If so, then ultimately the only way to address this is through changing the design of the streets and/or our rules about how they are used. These are fundamentally infrastructural issues. Merely swapping out vehicle control systems -- robot vs. human -- will be less revolutionary than many expect.
That's an E-stop chain and that's exactly how it should work.
But the software as described in the NTSB report was apparently bad enough that they essentially hardwired an override on their emergency stop. The software equivalent of putting a steel bar into a fuse receptacle. The words that come to mind are 'criminal negligence'. The vehicle would not have been able to do an E-stop even if it was 100% sure it had to do just that, nor did it warn the human luggage.
The problem here is not that the world is so unsafe that you will have to make compromises to get anywhere at all; the problem is that the software is still so buggy that there is no way to safely navigate common scenarios. A pedestrian on the road at night is one I've encountered twice on my trips, and neither led to a fatality, because when I can't see I slow down. If 6 seconds isn't enough to make a decision you have no business being on the road in the first place.
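To put rough numbers on that 6-second window (back-of-the-envelope, using only the 43 mph from the report and the 6.5 m/s² figure quoted from its footnote elsewhere in this thread; the one-second latency is my own assumption):

    # Back-of-the-envelope: how far does 6 seconds of warning get you at 43 mph?
    v = 43 * 0.44704            # 43 mph in m/s (~19.2 m/s)
    decel = 6.5                 # m/s^2, the report's threshold for "emergency" braking
    react = 1.0                 # assumed detection/decision latency, deliberately generous

    distance_available = v * 6.0                          # ~115 m covered in the 6 s window
    stopping_distance = v * react + v**2 / (2 * decel)    # ~19 m + ~28 m = ~48 m

    print(round(distance_available), round(stopping_distance))

Even with a full second of latency thrown in, the stopping distance is well under half the distance covered in those 6 seconds.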
I've seen a few people comment on the footage that they too would have run the pedestrian over, to which my only response is: I sure hope you don't have a driver's license [anymore]!
The report is a bit ambiguous about that:
The videos show that the pedestrian crossed in a section of roadway not directly illuminated by the roadway lighting.
I don't think you can look at videos and judge the level of illumination well; their videos could be more or less accurate than Uber's, and what I see depends on codecs, video drivers, my monitor, etc. Also, any video can easily be edited these days.
Is there a way to precisely measure the illumination besides a light meter? Maybe we can use astronomers' tricks and measure it in relation to objects with known levels of illumination. Much more importantly, I'm not even sure what properties of light we're talking about - brightness? saturation? frequencies? - nor which properties matter how much for vision, for computer vision, and for the sensors used by Uber's car in particular.
I'm not taking a side; I'm saying I have yet to see reliable information on the matter, or even a precise definition of the question.
Don't conflate that with Uber's screw-up here. This wasn't a situation where a fatality was unavoidable or where a very safe system had a once-in-a-blue-moon problem. It's one where they just drove around not-very-safe cars.
This doesn't mean that Uber therefore did the right thing in disabling the system; it probably means that the system shouldn't have been given control of the car in the first place. But my point is that there is no readiness level where driverless cars will ever be safe -- not in the same way that trains and planes are safe. The driving domain itself is intrinsically dangerous, and changing the vehicle control system doesn't change the nature of that domain. So if we actually care about safety, then we need to be changing the way that streets are designed and the rules by which they are used.
And that is why I am so mad at Uber. They are compromising the public trust in autonomous cars with their reckless release policy. And thereby potentially endangering even more lives, as we have to convince the public of the advantages of this technology.
My company didn't start with this zero tolerance thing in our minds, but it turns out our self-delivering electric bicycles have a huge advantage for real world safety because they weigh ~60lbs when in autonomous mode and are limited to 12mph. This equals the kinetic energy of myself walking at a brisk pace, or basically something that won't kill purely from blunt force impact. I think the future for autonomy will be unlocked by low mass and low speed vehicles, not cars converted to drive themselves.
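Rough kinetic-energy numbers, for what it's worth; the masses and the walking speed here are my assumptions, not the parent's figures:

    # Rough kinetic-energy comparison; masses and speeds are assumptions.
    def ke_joules(mass_kg, speed_ms):
        return 0.5 * mass_kg * speed_ms ** 2

    bike = ke_joules(60 * 0.4536, 12 * 0.44704)    # ~27 kg at 12 mph  -> ~0.4 kJ
    walker = ke_joules(85, 2.0)                     # ~85 kg at a brisk walk -> ~0.17 kJ
    car = ke_joules(2000, 43 * 0.44704)             # SUV-ish at 43 mph -> ~370 kJ

    print(round(bike), round(walker), round(car))

The bike and the brisk walker land within a small factor of each other, a few hundred joules, while the SUV at 43 mph is about three orders of magnitude above both.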
It hasn't shown that at all. It has documented beyond reasonable doubt that Uber should not be allowed to participate in real world tests of autonomous vehicles.
There are plenty of situations where people would fully accept a self driving vehicle killing someone but this isn't one of those.
Uber had a fatality after 3 million miles of driving.
The mean fatality rate is approximately 1 per 100 million miles of driving.
It's a sample size of one, so the error bars are big, but it drives me insane that people are acting like the Uber cars are the ideal driverless cars of the imagined future, and are super safe. The available data (which is limited, but not that limited) is that Uber driverless cars are much, much, much more dangerous than mean human drivers.
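For anyone who wants to make the "error bars are big" part concrete, here is an exact Poisson interval on one observed fatality in ~3 million miles (a sketch using scipy; the mileage and the 1-per-100-million baseline are just the figures quoted above):

    # Exact Poisson 95% interval for a rate, given k events over `miles` of exposure.
    from scipy.stats import chi2

    k, miles = 1, 3e6
    lower = chi2.ppf(0.025, 2 * k) / 2          # ~0.025 expected events
    upper = chi2.ppf(0.975, 2 * (k + 1)) / 2    # ~5.57 expected events

    def per_100m_miles(events):
        return events / miles * 1e8

    print(per_100m_miles(k), per_100m_miles(lower), per_100m_miles(upper))
    # point estimate ~33 per 100M miles; 95% interval roughly 0.8 to 186

The point estimate is tens of times the human baseline, but a single event leaves the interval enormous, which is exactly the "big error bars" caveat.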
That actually sounds like a really interesting concept, one of those ideas that seems obvious only after someone suggests it. What company is this?
Right now, in the Seattle area, we are basically seeing a new littering epidemic in the form of sharable bicycles being left to rust away, unused, at random places. If the bike could cruise to its next user autonomously, that would really be a game-changer. "Bikes on demand" would turn bikesharing from (IMHO) a stupid idea into something that just might work.
Plus, the engineering challenges involved in automating a riderless bicycle sound fun.
The biggest challenge will probably be to keep people from screwing with the bikes, of course. :( An unoccupied bicycle cruising down the street or sidewalk will fire all sorts of mischievous neurons that onlookers didn't even know they had.
What the Uber crash has shown us is mostly the willingness of people on HN to excuse Silicon Valley darlings even when they actually demonstrably kill people.
This is an analogy that cannot completely map to cycling.
A fall at any speed from a bike is literally a potentially crippling or deadly scenario.
For elderly people, I would guess that's accurate.
A friend of mine, in his 50s, very fit, cycling to work and back every day, broke both his arms while doing literally a 10-meter test ride in front of a bike store.
The bike's brakes were set up reversed compared to what he was used to, so he ended up braking with the front brake, flipping the bike over and breaking both his arms on landing. His fault? Sure, but it's still a rather scary story of how quickly even mundane things can go really wrong.
That depends, there could simply be no traffic behind you, which an experienced driver, and hopefully an automated one, would be monitoring.
Besides, there are many situations on the highway where an E-stop is far safer than any of the alternatives even if there is traffic behind you. Driving as though nothing has changed in the presence of an E-stop worthy situation is definitely not the right decision.
That should be criminal.
I'm all for chalking this one up to criminal negligence and incompetence; outright malice is - for now - off the table, unless someone leaks meeting notes from Uber where they discussed that exact scenario.
Where is the almost-certainty coming from that the fatalities would be fewer compared to humans driving? And what does "almost" mean in this case?
And "almost" is always a good idea when talking about a future that looks certain. Takes into account the unknown unknowns. And the known unknowns (cough hacking cough).
Without the ability to understand its environment and react appropriately to it, all the good the fast reaction times will do to an AI agent is to let it take the wrong decisions faster than a human being.
Just saying "computers" and waving our hands about won't magically solve the hard problems involved in full autonomy. Allegedly, the industry has some sort of plan to go from where we are now (sorta kinda level-2 autonomy) to full, level-5 autonomy where "computers" will drive more safely than humans. It would be very kind of the industry if they could share that plan with the rest of us, because for the time being it sounds just like what I describe above, saying "computers" and hand-waving everything else.
1.) Road safety -- as far as the current operating concept of cars is concerned (e.g., high speeds in mixed environments) -- is not a problem that can be "solved". At best it can only ever be approximated. The quality of approximation will correspond to the number of fatalities. Algorithm improvements will yield diminishing returns: the operating domain is fundamentally unsafe, and will always result in numerous fatalities even when driven "perfectly".
2.) With regards to factors that contribute to driving safety, there are some things that computers are indisputably better at than humans (raw reaction time). There are other things that humans are still better at than computers (synthesising sensory data into a cohesive model of the world, and then reasoning about that world). Computers are continually improving their performance, however. While we don't have all the theories worked out for how machines will eventually surpass human performance in these domains, we don't have a strong reason to believe that machines won't surpass human performance in these domains. The only question is when. (I don't have an answer to this question).
3.) So the question is not "when will autonomous driving be safe" (it won't be), but rather: "what is the minimum level of safety we will accept from autonomous driving?" I'm quite certain that the bar will be set much higher for autonomous driving than for human driving. This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is. Look at the disparities in sociopolitical responses to, say, plane crashes and Zika virus, versus car crashes and influenza. Autonomous vehicles will be treated more as the former than the latter, and therefore the scrutiny they receive will be vastly higher.
4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
5.) Personally, I think that the algorithms won't be able to pass this public-acceptability threshold on their own, because even the best-imaginable algorithm, if adopted on a global basis, would still kill hundreds of thousands of people every year. That's still probably too many. I expect that full automation eventually will become the norm, but only as enabled by new types of infrastructure / urban design which enable it to be safer than automation alone.
This is a wonderfully concise way of describing a phenomenon that I have not been able to articulate well. Thank you.
I'm too exhausted (health issues) to reply in as much detail as your comment deserves, but here's the best I can do.
>> 4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
Or at least it won't be morally justifiable for them to be a thing at all, unless they're sufficiently safer than humans - whatever "sufficiently" is going to mean (which we can't really know; as you say, that has to do with public perception and the whims of a fickle press).
I initially took your assertion to mean that self-driving AI will inevitably get to a point where it can be "sufficiently" safer than humans. Your point (2.) above confirms this. I don't think you're wrong, there's no reason to doubt that computers will, one day, be as good as humans at the things that humans are good at.
On the other hand I really don't see this happening any time soon- not in my lifetime and most likely not in the next two or three human generations. It's certainly hard to see how we can go from the AI we have now to AI with human-level intelligence. Despite the successes of statistical machine learning and deep neural nets, their models are extremely specific and the tasks they can perform too restricted to resemble anything like general intelligence. Perhaps we could somehow combine multiple models into some kind of coherent agent with a broader range of aptitudes, but there is very little research in that direction. The hype is great, but the technology is still primitive.
But of course, that's still speculative- maybe something big will happen tomorrow and we'll all watch in awe as we enter a new era of AI research. Probably not, but who knows.
So the question is- where does this leave the efforts of the industry to, well, sell self-driving tech, in the right here and the right now? When you said self-driving cars will almost certainly be safer than humans- you didn't put a date on that. Others in the industry are trying to sell their self-driving tech as safer than humans right now, or in "a few years", "by 2021" and so on. See Elon Musk's claims that Autopilot is safer than human drivers already.
So my concern is that assertions about the safety of self-driving cars by industry players are basically trying to create a climate of acceptance of the technology in the present or near future, before it is even as safe as humans, let alone safer (or "sufficiently" so). If the press and public opinion are irrational, their irrationality can just as well mean that self-driving technology is accepted when it's still far too dangerous. Rather than setting the bar too high and demanding an extreme standard of safety, things can go the other way and we can end up with a diminished standard instead.
Note I'm not saying that is what you were trying to do with your statement about almost certainty etc. Kind of just explaining where I come from, here.
I share your skepticism that AIs capable of piloting fully driverless cars are coming in the next few years. In the longer term, I'm more optimistic. There are definitely some fundamental breakthroughs which are needed (with regards to causal reasoning etc.) before "full autonomy" can happen -- but a lot of money and creativity is being thrown at these problems, and although none of us will know how hard the Hard problem is until after it's been solved, my hunch is that it will yield within this generation.
But I think that framing this as an AI problem is not really correct in the first place.
Currently car accidents kill about 1.3 million people per year. Given current driving standards, a lot of these fatalities are "inevitable". For example: many real-world car-based trolley problems involve driving around a blind curve too fast to react to what's on the other side. You suddenly encounter an array of obstacles: which one do you choose to hit? Or do you (in some cases) minimise global harm by driving yourself off the road? Faced with these kind of choices, people say "oh, that's easy -- you can instruct autonomous cars to not drive around blind curves faster than they can react". But in that case, the autonomous car just goes from being the thing that does the hitting to the thing that gets hit (by a human). Either way, people gonna die -- not due to a specific fault in how individual vehicles are controlled, but due to collective flaws in the entire premise of automotive infrastructure.
So the problem is that no matter how good the AIs get, as long as they have to interact with humans in any way, they're still going to kill a fair number of people. I sympathise quite a lot with Musk's utilitarian point of view: if AIs are merely better humans, then it shouldn't matter that they still kill a lot of people; the fact that they kill meaningfully fewer people ought to be good enough to prefer them. If this is the basis for fostering a "climate of acceptance", as you say, then I don't think it would be a bad thing at all.
But I don't expect social or legal systems to adopt a pragmatic utilitarian ethos anytime soon!
One barrier is that even apart from the sensational aspect of autonomous-vehicle accidents, it's possible to do so much critiquing of them. When a human driver encounters a real-world trolley problem, they generally freeze up, overcorrect, or do something else that doesn't involve much careful calculation. So shit happens, some poor SOB is liable for it, and there's no black box to audit.
In contrast, when an autonomous vehicle kills someone, there will be a cool, calculated, auditable trail of decision-making which led to that outcome. The impulse to second-guess the AV's reasoning -- by regulators, lawyers, politicians, and competitors -- will be irresistible. To the extent that this fosters actual safety improvements, it's certainly a good thing. But it can be really hard to make even honest critiques of these things, because any suggested change needs to be tested against a near-infinite number of scenarios -- and in any case, not all of the critiques will be honest. This will be a huge barrier to adoption.
Another barrier is that people's attitudes towards AVs can change how safe they are. Tesla has real data showing that Autopilot makes driving significantly safer. This data isn't wrong. The problem is that this was from a time when Autopilot was being used by people who were relatively uncomfortable with it. This meant that it was being used correctly -- as a second pair of eyes, augmenting those of the driver. That's fine: it's analogous to an aircraft Autopilot when used like that. But the more comfortable people become with Autopilot -- to the point where they start taking naps or climbing into the back seat -- the less safe it becomes. This is the bane of Level 2 and 3 automation: a feedback loop where increasing AV safety/reliability leads to decreasing human attentiveness, leading (perhaps) to a paradoxical overall decrease in safety and reliability.
Even Level 4 and 5 automation isn't immune from this kind of feedback loop. It's just externalised: drivers in Mountain View learned that they could drive more aggressively around the Google AVs, which would always give way to avoid a collision.
So my contention is that while the AIs may be "good enough" anytime between, say, now and 20 years from now -- the above sort of problems will be real barriers to adoption. These problems can be boiled down to a single word: humans. As long as AVs share a (high-speed) domain with humans, there will be a lot of fatalities, and the AVs will take the blame for this (since humans aren't black-boxed).
Nonetheless, I think we will see AVs become very prominent. Here's how:
1. Initially, small networks of low-speed (~12mph) Level-4 AVs operating in mixed environments, generally restricted to campus environments, pedestrianised town centres, etc. At that speed, it's possible to operate safely around humans even with reasonably stupid AIs. Think Easymile, 2getthere, and others.
2. These networks will become joined-up by fully-segregated higher-speed AV-only right-of-ways, either on existing motorways or in new types of infrastructure (think the Boring Company).
3. As these AVs take a greater mode-share, cities will incrementally convert roads into either mixed low-speed or exclusive high-speed. Development patterns will adapt accordingly. It will be a slow process, but after (say) 40-50 years, the cities will be more or less fully autonomous (with most of the streets being low-speed and heavily shared with pedestrians and bicyclists).
Note that this scenario is largely insensitive to AI advances, because the real problem that needs to be solved is at the point of human interface.
Very good write-up anyway... indeed many things will have to change - probably the infrastructure, the vehicles, the software, the way pedestrians move, and driver behavior as well.
So you have a "driver" who has to be monitoring a diagnostic console, AND has to be separately watching for non-alerted emergency events to avoid a fatal crash? Why not hire two people? Good god.
> We decided to make this transition [from two to one] because after testing, we felt we could accomplish the task of the second person—annotating each intervention with information about what was happening around the car—by looking at our logs after the vehicle had returned to base, rather than in real time.
However, this seems to contradict the NTSB report which indicates that it still was the driver's responsibility to perform this event tagging task, which necessarily implies taking your eyes off the road.
Don't we have speech-to-text for this sort of thing?
"Uber moved from two employees in every car to one. The paired employees had been splitting duties — one ready to take over if the autonomous system failed, and another to keep an eye on what the computers were detecting. The second person was responsible for keeping track of system performance as well as labeling data on a laptop computer. Mr. Kallman, the Uber spokesman, said the second person was in the car for purely data related tasks, not safety."
This gets your license yanked around here. Same goes for texting and driving if you're caught. Even in stop-and-go traffic.
It feels like a typical coder rethrowing an exception somewhere u_u
kernel: [70667120.897649] Out of fucks: Kill pedestrian 29957 score 366 or sacrifice driver's convenience
I hope that whoever was responsible for this piece of crap software loses a lot of sleep over it, and that Uber will admit that they have no business building safety critical software. Idiots.
For 6 seconds the system had crucial information and failed to relay it, for 1.3 seconds the system knew an accident was going to happen and failed to act on that knowledge.
Drunk drivers suck, but this is much worse. This is the equivalent of plowing into a pedestrian with a vehicle while you're in full control of it because you are afraid that your perception of the world is so crappy that you will over-react to such situations often enough that the risk of killing someone you know is there is perceived as the lower one.
Not to mention all the errors in terms of process and oversight that allowed this p.o.s. software to be deployed in traffic.
This is so tragic. Even Volvo's own collision avoidance system would (could?) have mitigated the crash a fair bit. From Volvo's own spec. sheet : "For speeds between 45 and 70 km/h, the collision is mitigated."
In this case, the NTSB report mentions that the car was traveling at 43 mph, i.e. about 69 km/h :(.
What bothers me is that these systems are on public roads, without public oversight. Sure, Uber got permission from the local authorities, but getting an independent team of technologists and ethicists to sign off on the basic parameters should have been the bare minimum ... yes, that would take time, but do we really want to give companies, especially ones like Uber with a history of ethical transgressions, the benefit of the doubt?
https://tinyurl.com/y9sp2fmu (WARNING: This opens/downloads a PDF that I referred to above. Page 5 has the paragraph on pedestrian collision detection specs)
If you want to compare it with a car operating on cruise control you'd have to sedate the driver.
(Subaru's doesn't do active lanekeeping, but lots of other manufacturers like BMW and Ford do.)
The newer cruise controls have lane-keep assist and adaptive cruise control - you don't have to actively steer or brake. On an open road, there's effectively little difference from the Uber vehicle, which would also let you disengage autonomous mode by braking or otherwise interacting with the controls. (The newest mass-market cruise controls are "stop and go", which means they'll even bring the car to a full stop, then start driving again.)
Or at least it should be. That's why there is a v_max that a car is not allowed to exceed, and a faster or heavier car will have better brakes.
And 70 km/h as here should be far away from v_max.
But why the hell wouldn't you have the thing beep to alert the driver that the AI thinks there is a problem and they need to pay extra attention? In fact it seems like this would be helpful when trying to fine tune the system.
That simply means it should not be deployed. End of story.
That buys you time for the AI classifier to do its thing and isn't as dangerous as emergency braking later on, so it seems a sensible behavior all around.
The way I see it, there is only one way to make sense of a field where the most respectable R&D house (Google/Alphabet) limits their vehicle to a relative snail's pace while everyone else (including notoriously unethical shops like Uber) is taking a gung-ho, "the only limit is the speed limit" approach. That is to assume "everyone else" is cheating the game by choosing a development path that gives the appearance of being functional in the "rough but will get there with enough polish" sense, while the truth is that it's merely a mountain of cheap and dirty hacks that will never achieve the goals demanded by investors.
The only reason a company would overlook such a simple safety protocol as "slow down until a potentially dangerous object is positively identified" is if their "AI" fails to positively identify objects so frequently that the car could never achieve what a human passenger would consider a consistent, normal driving pace. The same can be said for any "AI" that can't be trusted to initiate panic braking in response to a positively identified collision scenario with a positively identified object. The fact that they specifically wrote an "AI"-absolving workaround for that scenario into their software means the frequency of false positives must be so high as to make the frequency of "false alarm" panic braking incidents unacceptable for human passengers.
This tech simply isn't there yet and I doubt it's all that close.
Doesn't sound dumb to me. The car should be going slow enough to emergency stop if the pedestrian enters the road.
So, yeah: dumb.
If I think I saw children running around between cars on the parking lane, are the parents probably morons? Yes. Do I slow down and prepare to slam the brakes in case a child suddenly runs in front of me? Ab-so-lute-ly.
Even if people behave idiotically on the street, it is obviously still my fault if I run them over.
And that's what I'm talking about. Computers aren't all that good at determining whether or not a person is going to jump into the street.
that said, as I said above, whenever a car senses a situation it doesn't understand it should slow down; that's enough to be safe later on as the situation develops, and is different from hitting the brakes full force.
and anyway, expecting autonomous cars to drive full speed all the time is moronic; humans don't do that either, precisely because it's dangerous.
There's a calculation here of balancing the perceived risk of an obstruction with the consequences of avoiding it or braking in time. Drivers have to make this decision all the time; on a highway they will generally assume it's safer to hit most things than swerve or panic brake, because it's most likely not that dangerous to collide with.
At least one stat I saw from AAA is that ~40% of the deaths from road debris result from drivers swerving to avoid them.
This has been a not-minor problem for autonomous cars and the Tesla-style autopilots / adaptive cruise controls that depend on vision only. You have to program it to ignore some types of things that seem like they might be an obstruction, such as road signs, debris in the road, etc. so they don't hit the brakes unnecessarily.
well tough shit then, a car that plows through situations it doesn't understand should never be on the public road
it's a 60 zone and rain or smoke impairs the visibility? slow down.
it's a 45mph zone and something that's not a motor vehicle is in a lane that's supposed to only have motor vehicles on it? you slow down until you make sense of the situation.
you're near a playground and a mother is walking a child on the other side of the road and you can't see if she's holding the child's hand? you slow down.
a person walks near the kerb and isn't looking in your direction? you slow down. a bike is loaded with groceries? a car acting erratically? a person being pulled by his dog? a bus stopped unloading people? you don't drive past them at 30mph.
driving safely: super simple stuff.
If people did the super simple stuff, we'd have boundless peace, prosperity, liberty.
I remember a story - stop me if you've heard this one - about a God helping out a group of desperate people, freeing them from slavery, parting seas, feeding them in the desert. They were camped at the foot of a mountain with the God right there on top - right there! And they built the golden calf anyway. And that rule seems easier than all the other 9. WTF did they even need a golden calf for?
So sadly, the criteria of simplicity is irrelevant - people will find a hard way to do it.
"As the vehicle and pedestrian paths converged, the self-driving system software classified
as an unknown object, as
and then as
with varying expectations of future
1. It seems like the classifier flipped state between pedestrian/unknown object/vehicle/bicycle; this seems like one of the well-known issues with machine learning. (I'm assuming the classifier is using ML simply because I have never heard of any other (semi-?) successful work on that problem.)
I suggest that the problem is that the rest of the driving system went from 100% certainty of A to 100% certainty of B, etc., with a resulting complete recalculation of what to do about the current classification. I make this hypothesis on the basis of the 4+ seconds when the car did nothing, while a response to any of the individual possibilities would possibly have averted the accident.
2. If the classifier was flipping state, I assume the system interrupted the Decide-Act phases of an OODA loop, resulting in the car continuing its given path rather than executing any actions. This seems like a reasonable thing to do, if the system contains no moment-to-moment state. Which would be strange; it seems like the planning system should have some case for having obstacles A, B, C, and D rapidly and successively appearing in the same area of its path.
3. Assuming the classifier wasn't flipping state, but presenting multiple options with probabilities, I can see no reason why the car wouldn't have taken some action in the 4+ seconds. (I note that the trajectory of the vehicle seems to move towards the right of its lane, which is a rather inadequate response and likely the wrong thing to do for several of the classification options.)
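A minimal sketch of the kind of moment-to-moment state being described here: keep a short-lived track per obstacle and plan against the most conservative of its recent labels, so a flip between classes never resets the caution. All class names, margins, and the frame rate below are hypothetical, not Uber's actual architecture:

    from collections import deque

    # Hypothetical per-obstacle track: remembers recent labels instead of
    # trusting only the latest frame's classification.
    class ObstacleTrack:
        # assumed headway margins (seconds) per class; unknown objects get
        # the same caution as pedestrians
        CAUTION = {"pedestrian": 3.0, "bicycle": 3.0, "unknown": 3.0, "vehicle": 1.5}

        def __init__(self, history=20):
            self.labels = deque(maxlen=history)   # ~2 s of frames at an assumed 10 Hz

        def update(self, label):
            self.labels.append(label)

        def required_headway(self):
            # plan against the worst case seen recently, so flipping from
            # "bicycle" to "vehicle" and back never erases the caution
            if not self.labels:
                return max(self.CAUTION.values())
            return max(self.CAUTION.get(l, 3.0) for l in self.labels)

    track = ObstacleTrack()
    for label in ["unknown", "vehicle", "bicycle", "vehicle"]:
        track.update(label)
    print(track.required_headway())   # 3.0: the earlier "unknown" still dominates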
"According to Uber, emergency braking maneuvers are
not enabled while the vehicle is under computer control, to
reduce the potential for erratic
That's just idiotic and would be nigh-criminally unprofessional in most engineering situations.
I hope, but do not know, whether the frequency and circumstances are logged and sent to Volkswagen when the car is at the shop. I don't expect it though, since the car will first see the shop after two years; that would be too long for an improvement cycle.
What makes you think they aren't sent to VW all the time?
Not fine on a public road with real people on it.
I think a major effort in self-driving is solving the Goldilocks issue of reacting properly to impending accidents while also not applying uncomfortable braking when it's not needed.
Seems like it was too insensitive at that distance.
Shouldn't the brake application be not boolean but 0-100% strength based on confidence levels?
This is also an issue with current production automatic braking systems. One that was largely solved with on-track testing, and on-road testing logging false triggers with a driver driving. There's no need to risk lives unless you're just cutting corners to avoid the cost of a test track.
They should have had it set to spike the brakes once collision was imminent though; that's (maybe) the biggest programming omission here.
I'm not sure that'd be a huge issue. The vectors have to be intersecting first of all, which most vectors emanating from sidewalks wouldn't be, and then a little hysteresis would smooth out most of the rest.
What's actually needed here is some notion of whether the pedestrian is paying attention and will correctly stop and not intersect the path of the car. Humans are constantly making that assessment based on sometimes very subtle cues (is the person looking at/talking on a phone, or are they paying attention, for example).
These autonomous systems are evaluating surrounding vectors every few milliseconds. A timescale of 3 seconds simply isn't important, as they would instantly detect you slowing down and conclude that you wouldn't intersect with their vector.
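For what it's worth, the per-frame check being described is cheap. A toy constant-velocity time-to-collision test along the car's path (purely illustrative; the 3-second threshold is made up) looks like this, and re-running it every frame is how the pedestrian slowing down would show up immediately:

    # Toy time-to-collision check along the lane axis; constant-velocity
    # assumption, made-up threshold, re-evaluated every frame.
    def time_to_collision(gap_m, closing_speed_ms):
        """Seconds until paths meet, or None if the gap is not closing."""
        if closing_speed_ms <= 0:
            return None
        return gap_m / closing_speed_ms

    def should_brake(gap_m, closing_speed_ms, threshold_s=3.0):
        ttc = time_to_collision(gap_m, closing_speed_ms)
        return ttc is not None and ttc < threshold_s

    print(should_brake(40.0, 19.2))   # True: ~2.1 s to impact at 43 mph closing speed
    print(should_brake(40.0, 0.0))    # False: the pedestrian stopped, gap no longer closing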
> But why didn't it brake from t minus 6 to t minus 1.3? Looks like it detected that the car's and object's paths were converging, so why didn't it brake during that interval?
Safe driving often does require slowing down in the face of insufficient information. If a human driver sees an inattentive pedestrian about to intersect traffic, they will slow down. “Drive until collision is unavoidable” is a failing strategy.
And anyway, jerky driving is a symptom of late braking, not early braking.
I see it as more than just jerkyness, I see a massive safety issue in traffic. If your autonomous car is slamming on the brakes spontaneously there's a lot more opportunities for other drivers to plow into you from behind.
No it's not, the issue was jerky driving.
> “Drive until collision is unavoidable” is a failing strategy.
That being said, I'm happy to find my assumptions about stopping time are incorrect and a car traveling at 25mph can stop in less than a second. So on busy NYC streets this wouldn't be an issue. Even at 50mph it appears that stopping time is sub 3s, so the vehicle could probably have avoided this collision if it were running a more intelligent program.
Right, collision is basic physics accounting for the stopping time and distance of pedestrians and cars. So the question is whether pedestrians on sidewalks really have so many collision vectors with traffic such that autonomous vehicles would be jerky all of the time as the initial poster suggested.
I claim reasonable defaults that would far outperform humans on average wouldn't have that property. Autonomous vehicles should be programmed to follow the rules of the road with reasonable provisions to avoid collisions when possible.
That's vanilla testing edge-case stuff, really, and it's known that uber are unter when it comes to this, but the removal of all the useful safety layers after that (braking, alert, second human, hardware system) is reckless and stupid.
"It might move out of my way" is no reason to get so close at such a high speed that you can't avoid it when it doesn't!
Self driving cars can emulate humans, but that won't bring them to human level performance without the ability to model other actors. If they try to mathematically rule out the possibility of accidents without such models, they won't be able to go anywhere.
Also, my two year old will sometimes walk towards the curb, but she is very good with streets, so I am not worried. She always stops and waits to hold someone's hand before crossing. This behavior freaks some drivers out, causing them to slow or come to a complete stop, which is the nicest outcome because then I can take her hand and cross the street. When I am walking by myself drivers rarely yield even as I am stepping into the street, even at marked crossings.
I guess my point is if my two year old exhibits the behavior you ascribe to a hypothetical non-child pedestrian then how can you be sure your hypothetical pedestrian won't just "keep walking"? What if they are blind, or drunk, or reckless? Perhaps you have been lucky before and never struck a pedestrian but I strongly urge you to assess your behavior. Stop for pedestrians, it's the nice thing to do and it's probably the law where you live.
When I am driving I often see people standing at the curb just staring at their smartphone. Usually these people are wasting time because they don't expect traffic to stop. When I stop for them they are usually pleased, they cross the road and get on with their life. Sometimes these people are just waiting for an Uber or something, when I stop for them they get confused and look at me funny. I don't mind, I just smile at them and resume driving. I am in a car, so I can accelerate and travel very quickly with almost no effort. It is no trouble for me to spend a few seconds stopping for a false positive.
The speed limit is a reference to the maximum allowable speed of the roadway, not the minimum, only, or even recommended speed.
I didn't always drive like this, but I was in an accident that was my fault that totally upended my life, so I made an effort to change my ways. You can do it too, before you get in an accident that sets your life back, or irreparably shatters it...
I recommend taking an advanced drivers education course if you seriously decide you want to improve your driving. A lot of this stuff is covered.
Nobody's suggesting that anyone should slam the brakes every time a moving object intersects your vector of motion.
No, you did that:
> I often have pedestrians walk towards the street and I assume they will stop so I don't slow down unless they are children or similar. Almost every day I could hit pedestrians if they kept walking.
With respect, I don't know what you meant to say but that sounds like a description of a bad (or at least inconsiderate) driver to me.
In any case, when I think about how I would design a self-driving car, an "auto-auto", the first principle I came up with was that it should never travel so fast that it couldn't safely slow down or brake to avoid a possible collision. This is the bedrock, foundational principle.
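That principle even has a closed form: given the clear distance to the nearest thing that could enter your path, a worst-case deceleration, and a reaction-time allowance, the speed ceiling follows from d = v·t + v²/(2a). A sketch with assumed numbers:

    import math

    def max_safe_speed(clear_distance_m, decel_ms2=6.5, reaction_s=0.5):
        """Largest speed from which we can still stop within clear_distance_m.
        Solves d = v*t_react + v^2/(2a) for v; all parameters are assumptions."""
        a, t = decel_ms2, reaction_s
        # quadratic in v: v^2 + 2*a*t*v - 2*a*d = 0
        return a * (-t + math.sqrt(t * t + 2 * clear_distance_m / a))

    for d in (10, 25, 50, 100):
        print(d, "m clear ->", round(max_safe_speed(d) / 0.44704, 1), "mph")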
> the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path
Little different, huh? If you see something that looks like it might be in your way, and you aren't sure what it is, you just keep going?
And if I see a pedestrian in the middle of the road at a random spot, especially at night, I'm slowing down since I don't know WTF they're thinking. Or if I'm in a neighborhood with regular street crossings carved out of the sidewalk and someone's coming up to one of those - I don't know how well they're paying attention to their surroundings.
I think it comes down to the fact that the classification algorithms are not ready for primetime.
As far as trains go, they do slow down when passing through a track adjacent to a platform. There are some non-platform-adjacent tracks the train companies use to avoid slowing down; however, they will slow down or even stop if something is going on.
Similarly, high-speed rail doesn't have level crossings, due to safety considerations. Overall trains are very safe and they are _designed_ for safety. It is highly irresponsible and immoral to just wing it with people's lives/safety.
> As far as trains go, they do slow down when passing through a track adjacent to a platform. There are some non-platform-adjacent tracks the train companies use to avoid slowing down; however, they will slow down or even stop if something is going on.
The equivalency isn't 'trains slow down through stations' (that would be cars having a lower speed limit in pedestrian areas - they do, and the Ubers honor it), it would be 'the train spikes its brakes if someone takes a step toward the edge' (which they don't, even though it would potentially save lives).
There's always a tradeoff between usability and absolute safety. I'm not saying the Uber did nothing wrong; at a minimum it should have spiked its brakes. The 'perfect world' solution would be the Uber knowing the mass and momentum of approaching objects, and whether they could stop in time. But honestly, would that have helped here? We'll never get rid of people walking in front of moving cars; we just have to find the happy balance (which we clearly haven't).
A train's deceleration under maximum braking is far, far lower than a car's. One cited source suggests 1.2 m/s² (paragraph 8).
Another says the deceleration of a low-speed train crashing into the buffers at the end of the line in a station should not be more than 2.45 m/s² (paragraph 35). That caused "minor injuries" to some passengers.
Trains do slow down earlier if the platform they are approaching is very crowded, but there's not really anything else they can do.
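To make that gap concrete, rough stopping distances from the same speed at those decelerations (d = v²/2a, ignoring reaction time; the 80 km/h and the 7 m/s² car figure are just illustrative):

    # Illustrative stopping distances, d = v^2 / (2a), no reaction time included.
    v = 80 / 3.6                                   # 80 km/h in m/s, an arbitrary shared speed
    for label, a in (("train", 1.2), ("car, hard braking", 7.0)):
        print(label, round(v ** 2 / (2 * a)), "m")   # ~206 m vs ~35 m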
With cars, there is an expectation that you have to share the road with other vehicles, objects, obstacles, pedestrians, etc.
No. That's my point. Take less drastic measures earlier, and only escalate when you have to. That's how I drive, and a self-driving car can do the same.
>The system is not designed to alert the operator.
>The vehicle operator intervened less than a second before impact by engaging the steering wheel.
>She had been monitoring the self-driving system interface
It seems like this was really aggravated by bad UX. Had the system alerted the user, and had the user had a big red "take whatever emergency action you think is best, or stop ASAP if you don't know what to do" button to mash, this would have had a much better chance of being avoided.
Things coming onto the road unexpectedly isn't exactly an edge case when it comes to crash-causing situations. I don't see why they wouldn't at least alert the user if the system detects a possible collision with an object coming from the side and the object is classified as one of a certain type (pedestrian, bike, other vehicle, etc.; no need to alert for things classified as plastic bags).
I don't see why they disabled the Volvo system. If they were setting up mannequins in a parking lot and teaching the AI to slalom around them I can see why that might be useful but I don't see why they would want to override the Volvo system when on the road. At the very least the cases where the systems disagree are useful for analysis.
Of course cars cannot change speed instantly anyway - it is likely that even if the button was hit in time the accident was still unavoidable at 1.3 seconds. The car should have been slowing down hard long before it knew what the danger was. (I haven't read the report - it may or may not have been possible for the computer to avoid the accident)
Simply swerving left would have avoided the accident. Stopping is not the only thing the human driver could have done!
The whole thing is nuts. Imagine a human driver seeing the same thing the computer did, and responding the same way: they'd be in handcuffs.
http://www.visualexpert.com/Resources/reactiontime.html has a good discussion (starting with why 1.5 seconds is not the answer)
For something this safety critical, you want the software engineering quality of Boeing, NASA, etc. This type of mistake is pretty inexcusable.
I have fired clients for doing reckless and stupid things orders of magnitude less reckless and stupid than what Uber has done here, and I would hope that I would walk the hell out were I confronted with "we disabled the brake for a smoother ride and then disabled the alarms because they were too noisy". Do thou likewise, yeah?
The police are in no position to assert that, nor do they know whether or not Uber is guilty of negligence. Police do not bring charges and they're not running the investigation.
The place for the product folks to override safety features is the test track. If the feature didn’t work, they should have pulled the drivers because they were not trained to properly operate the machine.
If you give the “driver” training on a car with an autonomous braking system, then give them a car without it, that’s not on the driver. Someone was negligent with safety in regards to the entire program.
I’m not saying anyone needs to go to jail over this, but there do need to be charges IMO. Personal liability needs to be involved in this or executives will continue to pressure employees to do dangerous things.
> "It's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway," Moir told the San Francisco Chronicle after viewing the footage.
I can't imagine doing what she did — even thoroughly stoned, as she may have been (she tested positive for methamphetamine and marijuana), I would have more sense of self-preservation than that.
That should be irrelevant. Even if the pedestrian is jay-walking, it's still not legal to hit them. Further, having solid evidence that the car detected the pedestrian and did nothing to avoid her mitigates the pedestrian's responsibility, no?
Also, the "center median
containing trees, shrubs, and brick landscaping
in the shape of an X" sure looks like it should have some crosswalks, from the aerial photograph. What's it look like from the ground?
And partially-at-fault, on one hand, means there's fault on the driver side too, and on the other hand, is a judge's decision to make, not the police, no?
So the question becomes why couldn't they get emergency braking solved before driving on the road? Maybe that requires collecting good data first, for training the system?
That's like creating HTML forms that only work with the common use case and crash spectacularly on unexpected input. Except that this time it's fatal. That's not the kind of software quality I want to see on roads.
This is beyond incompetence. There is a different level of software engineering when making a website vs making a pacemaker, rocket or flight avionics. You need the quality control of NASA, SpaceX or Boeing, not that of .. whoever they have running their self driving division.
The other thing about the system that sucks is that it's all optical (AFAIK) so when visibility is poor, it shuts off. They need to add more sensors because those are the conditions I would most like an extra set of eyes.
Yeah that's not how you design safety critical software. This isn't some web service. Either you're wrong (let's hope) or Uber is completely negligent.
Source: I write safety critical code for a living.
In no case would sudden braking be the cause of a rear-end collision. It's always the fault of the driver behind.
I hope this won't stay unpunished (both at corporate and personal level) if confirmed.
> 1.3 seconds before impact ... emergency braking maneuver was needed ... not enabled ... to reduce the potential for erratic vehicle behavior
This wind-up toy killed a person.
Transport is a waking nightmare anyway. Every time you get in your car, every mile you drive, you're buying a ticket in a horrifying lottery. If you lose the lottery you reach your destination. If you win... blood, pain, death.
Into this we're setting loose these badly-programmed projections of our science-fiction.
- - - -
A sane "greenfield" transportation network would begin with three separate networks, one each for pedestrians, cyclists, and motor vehicles. (As long as I'm dreaming of sane urban infrastructure, let me sing the praises of C. Alexander's "Pattern Language" et. al., and specifically the "Alternating Fingers of City and Country" pattern!)
My mom has dementia and is losing her mind. We don't trust her to take the bus across town anymore, and she hasn't driven in years. If I wanted an auto-auto to take her places safely I could build that today. It would be limited to about three miles an hour with a big ol' smiley sign on the back saying "Go Around Asshole" in nicer language. Obviously, you would restrict it to routes that didn't gum up major roads. It would be approximately an electric scooter wrapped in safety mechanisms and encased in a carbon fiber monocoque hull. I can't recall the name now but there's a way to set up impact dampers so that if the hull is hit most of the kinetic energy is absorbed into flywheels (as opposed to bouncing the occupant around like a rag doll or hitting them with explosive pillows.) This machine would pick its way across the city like "an old man crossing a river in winter." Its maximum speed would at all times be set by the braking distance to any possible obstacle.
 I maintain that "auto-auto" is the obviously cromulent name for self-driving automobiles, and will henceforth use the term unabashedly.
If you can't give your "self-driving software" full access to the brakes because it becomes an "erratic driver" when you do that, you do not have self-driving software. You just have some software that is controlling a car that you know is an inadequate driver. If the self-driving software is not fully capable of replacing the driver in the car you have placed it in, as shipped except for the modifications necessary to be driven by software, you do not have a safe driving system.
Please re-write this for clarity.
The regulators should only test for false negatives, cases where the car should have stopped but did not detect the obstacle, because those are a clear threat to safety and there the car company's incentive, while definitely still present, is less pure: the number of false negatives is a direct trade-off with the number of false positives (it comes down to a threshold, a minimum confidence level above which you decide that there is indeed something in front of the car and you need to brake), and false positives make driving more awkward for 99% of drivers.
1. Regulators want aggressive braking but carmakers want smoother driving
2. Manually tuning all the edge cases where the software is uncertain what's happening will lead to fragile, monolithic black boxes
These kinds of tradeoffs were things every self-driving car software developer KNEW they were going to have to deal with - the most extreme being the one where the software has to decide who to kill and who to save:
You misunderstood. The Uber software had sole control of the brakes (plus the human of course). The Volvo factory system was disabled so that it didn’t have negative interaction with the Uber system.
Your mistake is understandable. The article was poorly written, perhaps due to a rush to publish, as is the norm these days. Even if the NTSB report was unclear, that doesn’t excuse clumsy reporting.
If you’ve ever done significant mileage in a car with an emergency braking system you probably have experienced seemingly random braking events. The systems favor false positives over false negatives.
> At 1.3 seconds before impact, the self-driving system determined [...]
Unless this is exceptionally poorly worded, it certainly sounds like self-driving system was the one doing the determination.
This isn't horseshoes, as the old saying goes. I unapologetically have a high bar here.
The charitable interpretation of this is that the industry believes that self-driving AI is a safety feature of greater quality than, say, lane assist or auto-braking.
The less charitable one is that they find it too much work to integrate their AI with other safety systems. Which, to be fair, really is going to be a lot of extra work, on top of developing self-driving.
That's not how I read it, or how any of the journalists who are reporting the story are reading it. Uber disabled the self driving software's ability to do an emergency stop when it detected it was going to crash. The Volvo system is separate and also was disabled when the car was in self driving mode.
>Sensors on an Uber SUV being tested in Tempe detected the woman, who was crossing a street at night outside a crosswalk, eventually concluding “an emergency braking maneuver was needed to mitigate a collision,” the National Transportation Safety Board said in a preliminary report released Thursday.
>But the system couldn’t activate the brakes, the NTSB said.
That's the only reading that makes sense to me, otherwise why did the car fail to attempt to stop when it detected the pedestrian and knew it was going to hit them?
At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2). According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior.
I believe you are the one who has misunderstood.
"emergency braking maneuvers" refers to an additional automated (software) system for automatically applying the brakes in an emergency (that's detected by that additional system).
>2 In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s^2).
So, Uber's self-driving system can command normal braking, but it cannot "slam" the brakes. The other system, Volvo's, is deactivated while Uber's is active and cannot brake at all. Thus, since Volvo's is deactivated and Uber's won't brake if it judges that a deceleration of >6.5 m/s^2 is needed, in automated mode the car actually lacks the ability to trigger emergency braking at all, hoping instead that the driver will somehow notice. But in a sadistic twist, no warning is given to the driver at any moment that they need to slam the brakes.
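A quick back-of-the-envelope check (my own arithmetic, using only the 43 mph and 1.3 seconds from the report, and assuming roughly constant speed) of why this detection fell into the "emergency" bucket:

    # Why 1.3 s at 43 mph trips the >6.5 m/s^2 "emergency" definition (constant speed assumed).
    v = 43 * 0.44704             # ~19.2 m/s
    t = 1.3                      # seconds to impact, per the report
    gap = v * t                  # ~25 m left to the pedestrian
    needed = v ** 2 / (2 * gap)  # ~7.4 m/s^2 to stop just short
    print(round(gap, 1), round(needed, 1))

So by Uber's own definition, stopping in the remaining distance required exactly the class of braking that was not enabled.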
However, if the safety driver had been trained to brake immediately upon warnings, it could have worked quite well. But that would negate the point of removing e-brake actuation.....
The NTSB report is pretty unclear, I had to re-read it several times and I think you're correct that the "emergency braking maneuvers" they refer to are the Volvo ones. It's strange though that they word it as
> At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision
Is the Uber self-driving system able to interact with the Volvo system? Or are they calling the Volvo safety features a second "self-driving system"? And what were the steps, Uber realizes it needs an emergency braking maneuver and sends a signal to the Volvo system which then responds with "I'm disabled" ?
I understand why the Volvo features may be disabled, but it's alarming that the self-driving system made no attempt to brake at all when it was fully "aware" it would hit someone.
The report does mention the Volvo braking features by name a few paragraphs earlier though... So I'm still not entirely sure.
If what you said is true, then the vehicle operator would not be relied on to intervene because the Uber self-driving software would apply the brakes. Since the vehicle operator is relied on to intervene, this indicates the Uber self-driving software has its emergency braking disabled.
This is not my experience with VW's 2016-model-year system.
Sometimes it stops a second or two before I would've. I haven't had any false positives, though.
Nobody disagrees with you and that is explicitly the reason why a human is on board, so I am not sure what you are arguing against.
I think it's very clear that a human driving a normal vehicle is different from a human sitting at the wheel of a self-driving (semi-self-driving?) vehicle. You simply cannot expect a human to remain as engaged and attentive in such a passive situation.
It's baffling to me that they chose to deactivate emergency braking without substituting a driver alert. If the false-positives are so frequent as to render the alert useless (i.e. it's going off all the time and you ignore it) I don't think these vehicles are suitable for on-road testing.
You can lean fairly heavily on the 'still dangerous, but better' argument in the face of 40,000 US vehicle fatalities each year, but there are limits.
It seems that Uber wasn't actually verifying that, though.
That sounds like a terrible idea.
Independently-working, override-capable systems are the basis of engineering safety redundancy. See airborne collision avoidance systems (ACAS), which will automatically command an immediate avoidance maneuver, overriding everything else, if a collision is imminent: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...
If any of the systems are vulnerable to any such false positives, and equipped to enable emergency braking to avoid them, even on the highway, it's not hard to imagine why they might be disabled, especially during testing phases.
I think it's fair to say that at this early stage in product development, there's probably no 'always right' answer for how to handle a given obstacle without considering locality, driving speed, road conditions, likelihood of false positives and negatives, etc.
Hey there's an idea for Uber, maybe instead of disabling the forward collision system entirely they could just decrease the sensitivity to lessen false positives (like in our Jeeps)?
Instructions from TCAS are near absolute in their priority. If ATC says to do something different, you ignore ATC and do what TCAS says. If the pilot in command says to do something different, you ignore the pilot in command and do what TCAS says. If somehow God Himself is on your plane and tells you to do something different, you ignore Him and do what TCAS says. Compliance with TCAS is non-negotiable, and the Überlingen disaster is the bloody example of why it's that way.
Self-driving/autonomous-car systems need to have a similar absolute authority built in. If Uber disabled theirs because of false positives, it's a sign Uber shouldn't be running those cars on public roads.
Redundant safety systems are a great idea, evidently Uber needed more of them, and integrating with the Volvo system might have been a reasonable option. It's silly to suggest that the integration would necessarily have been trivial, though. That's what I'm objecting to.
The above traits aren’t exclusive to Uber’s “engineers”, but are lethal when applied to the engineering of life safety systems.
A lot more than one, that's for sure.
See how that holds up in civil court in front of a jury, or any legislative body Uber might need to convince to allow their operation in a jurisdiction in the future.
“Just think of how many more people we would’ve killed if we didn’t care at all!”
I had thought an audience of developers would "get it," since we deal with fallout from ill-conceived integrations every day, although admittedly in a far less spectacular form than control system engineers.
Unfortunately, the uber hate train has already left the station and there's no slowing it down until the investigation finds (or doesn't) actual evidence of negligence rather than clickbait guesswork by armchair engineers. Too bad.
You do a disservice to the audience by assuming it would be understanding of grossly negligent behavior.
Failures happen, that is to be expected. If you're building self-driving vehicles, you're supposed to be engineering for those failures. Disabling two life safety systems (Volvo's AEB and Uber's own AEB) and relying on a single inattentive human driver? I don't understand how that's understandable or justifiable in any scenario besides a carefully controlled test track.
(Were any Professional Engineers even involved in this? Or was it just a bunch of "software engineers"?)
There is no "guarantee" that uber "engineers"/engineers put more thought into this than "their system is inconvienent, tear it out" or "we don't need their system because ours will be better, tear it out." Nobody can guarantee Uber engineers were not stupendously negligent until the investigation is complete. Anybody who thinks they can guarantee that has an irrational basis for thinking they can provide such a guarantee.
Somehow, it works out.
The point is that so far most (all?) of these computer systems don't have a failure mode where it kills pedestrians, because their scope is very limited.
Same with power steering: it won't steer in a different direction. You don't have to "fight" with it.