Uber’s self-driving car could not detect pedestrians outside of a crosswalk (theregister.co.uk)
457 points by notlukesky | 563 comments



> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

System can't decide what's happening.

> It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg

System is too slow to realize something serious is happening.

> That triggered what Uber called “action suppression,” in which the system held off braking for one second

A hardcoded 1 second delay during a potential emergency situation. Horrifying.

I bet they added it because the system kept randomly thinking something serious was going to happen for a few milliseconds when everything was going fine. If you ever find yourself doing that for a safety critical piece of software, you should stop and reconsider what you are doing. This is a hacky patch over a serious underlying classification issue. You need to fix the underlying problem, not hackily patch over it.
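
To make the objection concrete, here is a deliberately simplified, hypothetical sketch (invented names and thresholds, not anything from Uber's actual code) of the difference between "suppress any response for a fixed second" and "filter the flicker over a few frames, then act":

    # Hypothetical illustration only.
    SUPPRESS_SECONDS = 1.0   # the "wait and hope it goes away" hack
    PERSIST_FRAMES = 3       # alternative: act once the hazard persists briefly

    def hacky_policy(hazard_detected, seconds_since_first_hazard):
        # Hack: ignore the hazard entirely until a full second has passed.
        if hazard_detected and seconds_since_first_hazard >= SUPPRESS_SECONDS:
            return "brake"
        return "continue"

    def persistence_policy(recent_hazard_flags):
        # Filter classifier flicker by requiring a few consecutive hazard
        # frames (~0.1s at 30 Hz), then brake immediately rather than
        # idling for a whole second.
        if (len(recent_hazard_flags) >= PERSIST_FRAMES
                and all(recent_hazard_flags[-PERSIST_FRAMES:])):
            return "brake"
        return "continue"

Both approaches "fix" spurious millisecond-long detections, but the first one throws away a full second of braking distance every single time the hazard turns out to be real.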

How is this not the title of the story? This is so much worse than the "it couldn't see her as a person, only as a bicycle". At least the car would still try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.

> with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.

And then top it off with systemic issues around the backup driver not actually being ready to react.


>> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

The system should have started applying the brakes at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

> That triggered what Uber called “action suppression,” in which the system held off braking for one second

This is borderline criminal negligence.

> with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.

Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??

SMH. This is just ... I'm flabbergasted.


> The system should have started applying the brakes at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

> Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??

That sort of depends on the specifics of how their obstacle prediction and avoidance works - having a fuzzy view of what exactly something is at 5.6 seconds out is probably ok. The important bit is that it notices the obstacle is moving out into the road and stops for it. Classification is not needed to avoid objects. The key words here are actually "at each change, the object's tracking history is unavailable" and "without a tracking history, ADS predicts the bicycle's path as static" which is a horrible oversight.
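
To make that concrete, here is a minimal sketch (assumed data structures, nothing from Uber's ADS) of why the path prediction degrades to "static" the moment the tracking history is wiped, regardless of what the object is classified as:

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        positions: list = field(default_factory=list)  # (t, x, y) observations

        def velocity(self):
            # Finite difference over the last two observations.
            if len(self.positions) < 2:
                return (0.0, 0.0)  # no history -> the object looks static
            (t0, x0, y0), (t1, x1, y1) = self.positions[-2:]
            dt = t1 - t0
            return ((x1 - x0) / dt, (y1 - y0) / dt)

        def predict(self, horizon_s):
            # Assumes at least one observation (the current detection).
            _, x, y = self.positions[-1]
            vx, vy = self.velocity()
            return (x + vx * horizon_s, y + vy * horizon_s)

If a reclassification clears positions, velocity comes back as (0, 0) and the predicted path is "right where it is now" - which for someone walking across the lane is exactly wrong. Note that none of this needed a class label at all.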

>> That triggered what Uber called “action suppression,” in which the system held off braking for one second

> This is borderline criminal negligence.

Yeah, this is. Even though it was too late to swerve, there was still enough time to slow down and attempt to reduce the speed of impact. _This_ is probably where an alarm should have fired - the car is now in a really unsafe state and the safety driver should absolutely know about it.

Disclaimer - Up until a few months ago I worked at a major (non Uber) self driving car company.


Here's a quick way to get safety in your collision avoidance system up to spec: randomly choose coders of the system to be the objects avoided during tests.


You might be able to literally count on two hands the number of accidents where we have this level of transparency into the thought processes and sensory data of the 'driver'. And control over what happens next time. There is no doubt that the engineering data gathered here and in other accidents is going to contribute to a massive (truly massive, unspeakably massive, enormously massive - all such adjectives are appropriate) reduction in both actual deaths from car accidents and statistical deaths from the huge amount of time people waste driving.

The big picture is so overwhelmingly positive that even if the engineers were purposefully running people over to gain testing data it would still probably be a large net win for greater society. Thankfully there is no call for such reckless data gathering.

If anything, punishments in this case should be much more lenient than normal rather than breaking out the cruel and unusual measures.


> If anything, punishments in this case should be much more lenient than normal rather than breaking out the cruel and unusual measures.

How is it cruel or unusual to subject someone to risks posed by a system they're building? The Wright brothers didn't hit up bars to find guys willing to drunkenly try and soar into the sky on an oversized kite. The vast majority of things that were ever invented involved huge risks and often injuries to the inventor. Why do people writing code get exemptions from that?

I agree with the GP: the hardcoded 1 second wait time sounds exactly like some hack I'd put in a bit of code, the key difference being that my code makes fun little applications on an iPhone work; it does not DRIVE A 3500 POUND VEHICLE IN PUBLIC.

I bet if that engineer thought it would be him in front of that thing's bumper, he would've put in a bit more work to figure out what was going on.


> How is it cruel or unusual to subject someone to risks posed by a system they're building?

More unusual than cruel, I must admit. We just don't do that anywhere.

If I were to guess why - if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable. That sounds nice if you don't think about it too hard, but in practice it would shave a few percent off how much we produce in goods and services for no reason.

Aeroplanes are a great example. It sounds nice to say they are a super-safe form of transport, but when you actually think through the risks people face on their commute to work each day, the amount of money we spend doesn't make sense. I mean, what if the risk of being in a plane crash were only as rare as being struck by lightning? And the ticket was much cheaper? That wouldn't be so terrible. I don't know a lot of people who were struck by lightning, but I know a lot of people who don't have much money but take overseas holidays anyway.

> I bet if that engineer thought it would be him in front of that thing's bumper, he would've put in a bit more work to figure out what was going on.

Probably not in my experience. It is quite hard to get better performance out of a knowledge worker by threatening them. From what I've seen they tend to either resign or just do what they were already doing and hope. It isn't like anyone can stop writing buggy code by trying harder.


> More unusual than cruel, I must admit. We just don't do that anywhere.

Again I disagree; perhaps it would be unusual to do it via some sort of mandate after the fact, but the long history of human innovation involves tons of inventors risking life and/or limb to test safety systems of their own design. I'm reminded of the person who invented that special brake for circular saws which, when it detects human flesh near the blade, clamps down so hard it often destroys the tool. The thing is, it also prevents you from losing a finger.

> if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable.

I mean, maybe? I just think there's something lost when you have a software engineer working on the code in a large company that's then testing on the unwitting public. I'm not saying it has to be a standing threat of "we're testing this on you" but, I mean, look what happened. An ugly hack that has no business in a codebase like this went in and someone was killed. I'm not saying the engineer necessarily deserved to be hurt in their stead, but the victim surely didn't have any role in this; they were just in the wrong place at the wrong time.

> It isn't like anyone can stop writing buggy code by trying harder.

One person, no, but an organization can. Talking of airplanes, when you look into the testing and QA setups for something like Boeing, where the software is literally keeping people in the air alive, it's layer after layer of review, review, review, testing, analysis, on and on. Something like "if you think there's a pedestrian ahead, wait one second and check again" would NEVER have made it through a system like that.

You know, I'm all for innovation, but Silicon Valley's VC firms have a nasty tendency to treat everything like it's "just another codebase", forgetting that some of this stuff is doing real important shit. Elizabeth Holmes and Theranos come to mind.


> when you look into the testing and QA setups for something like Boeing

Boeing has a corporate lineage that extends back for more than a century and for most of that they did not have the levels of engineering safety excellence they can manage today. The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.

And that goes to what I'm trying to communicate in this thread - people are bringing up examples of people deviating from standard practice in mature, well-developed industries where highly safe alternatives exist.

This is a different industry. Today in 2019 mankind knows how to fly a plane safely but does not know how to drive safely. I've worked in a high-safety environment: they were notching speeds down to 30kmph and 40kmph, down from 100kmph, on public roads because the risk of moving any faster just wasn't acceptable. People were substantially more likely to die on the way to work than at it. They'd probably have brought in 20kmph if the workers would reliably follow it. Driving is the single highest risk activity we participate in. Developing car self-driving technology has an obvious and imminent potential to save a lot of lives. Now we aren't about to suspend the normal legal processes, but anyone who is contributing to the effort is probably reducing the overall body count, even if the codebase is buggy today and even if there are a few casualties along the way.

What matters is the speed with which we get safe self driving cars. Speed of improvement, and all that jazz. Making mistakes and learning from them quickly is just too effective; that is how all the high safety systems we enjoy today started out.

It is unfortunate if people don't like weighing up pros and cons, but slow-and-steady every step of the way is going to have a higher body count than a few risks and a few deaths learnt from quickly. We should minimise total deaths, not minimise deaths caused by Uber cars. Real-world experience with level 3/4 autonomous technology is probably worth more than a few tens of human lives, because it will very quickly save hundreds if not thousands of people as it is proven and deployed widely.


> The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.

But those lessons were learned. We know how to do it now. Just like...

> Today in 2019 mankind ... does not know how to drive safely

Yes we do, and by and large we do it correctly. It's easy to think nobody knows how to drive if you spend 5 minutes on r/idiotsincars, but that's selection bias. For every moron getting into an avoidable accident each day there are millions of drivers who left home and returned completely safely.

You can make the argument that people sometimes engage in too-risky behaviors while driving; that I'd agree with. But people know how. Just like people know how to develop safety systems that don't compromise safety, even when they choose not to, as I believe happened here.

> Making mistakes and learning from them quickly is just too effective; that is how all the high safety systems we enjoy today started out.

But again, we know how to do this already. And again, my issue isn't even that someone got hurt while we perfected the tech, all of our safe transit systems today are built on top of a pile of bodies, because that's how we learned- my issue is the person hurt was not an engineer, was not even a test subject. Uber jumped the gun. They have autonomous vehicles out in the wild, with human moderators who are not paying attention. That is unacceptable.

There was a whole chain of errors here:

* Ugly hacks in safety control software
* Lack of alerts passed to the moderator of the vehicle
* The moderator not paying attention

All of these are varying levels and kinds of negligence. And someone got hurt, not because the technology isn't perfect, but because Uber isn't taking it seriously. The way you hear them talk these are like a month away from being a thing, and have been for years. It's the type of talk you expect from a Move Fast Break Things company, and that kind of attitude has NO BUSINESS in the realm of safety, full stop.


> The culture that achieves near-perfect performance every flight is a different culture to the one that got planes off the ground back in the day.

This is true, but those lessons that got them to that culture are written in blood.

Like the old saying: "Experience is learning from one's own mistakes. Wisdom is learning from others' mistakes." I don't think re-learning the same mistakes (process mistakes, not technological) is something that a mature engineering organization does.

One of my worries is that SV brings a very different "move fast and break things" mindset that doesn't translate well to safety-critical systems.

As for the rest of your post, what you're talking about is assessment of risk. Expecting higher levels of risk from an experimental system is fine, but there's a difference when the person assuming that risk (in this case, the victim) doesn't have a say in whether that level of risk is acceptable.


>> If I were to guess why - if we did, people would either not work on experimental systems or would gold-plate the safety features way beyond what was reasonable. That sounds nice if you don't think about it too hard, but in practice it would shave a few percent off how much we produce in goods and services for no reason.

Certified engineers absolutely can and do go to prison for structural mistakes in their projects, especially if those mistakes lead to loss of human life, but sometimes even without it - the chief engineer of what was the tallest human-made structure on Earth (the radio mast near Warsaw) went to jail for 2 years after the mast collapsed due to incorrect design of the support structures.

Does it stop people from becoming engineers or from embarking on difficult projects? Of course not. If anything, it shows that maybe programmers working on projects like the autonomous cars should not only be certified the same way engineers are, but also held liable for what they do.


The problem here is that the engineers are likely working under an industrial exemption so there is no professional license liability assumed.

I personally have not come across a single software engineer or controls engineer who has had to certify a design.


These kinds of metrics should be mandatory for self-driving cars. Just because you can produce a painstakingly detailed timeline, say, should not free you from consequences if that timeline indicates you acted with a wanton disregard that any reasonable person could tell would result in loss of life.

If someone is so stupid that they don’t realize that a car needs to be able to avoid people in the road, even outside crosswalks, they have no business doing development of self-driving cars. They simply aren’t qualified. Ditto if that person simply doesn’t care. We’re not learning anything from this that we didn’t already know. Of course if you do nothing to avoid a person or a bicycle at 40 MPH, you’re gonna kill someone eventually.

This kind of thinking reeks of the "it works with expected input, problem solved" mentality that causes so many issues in critical software.

This is like Boeing with the 737 Max. They didn't want to deal with failure conditions, so they avoided detecting them so they could pretend the failures didn't exist. Those failure conditions then caused two airplanes to crash and killed the people aboard.


Aeroplanes have 50 years of global experience in how to fly planes safely. Self driving cars aren't even a full product yet.

If (and it is a big if) there is one engineer who was grossly negligent then yes they shouldn't be working on self driving cars.

But it is far more likely that this was totally normal corporate ineptitude and will be fixed in the due course of normal engineering processes.

A safe culture is not built by people making wild guesses about what and why on forums. It is likely that the engineer responsible for this code is also going to be responsible on net for a very large number of lives saved and judgement of who may have been culpable for what should be left to the courts. Morally I'm happy to say that I believe not only is he or she in the clear but probably deserves a pat on the back for helping to push forward the technology most likely to save young lives that I've ever seen. I've had young friends who died from car crashes and ... not much else. Maybe drugs and diseases in a few rare cases. I want technology that gets people taking their hands off the wheel and I want it ASAP. It doesn't need to be perfect; it just needs to be about as good as what we have now and consistently improving. Anyone halfway competent who is working on that as a goal has my near total support.

This is not the time to be discouraging people who work on self-driving cars. This is a time to do things properly, non-judgmentally and encouragingly, while acknowledging that we can't have cars running people over if we can possibly avoid it, and fixing whatever mistakes get found.


> Aeroplanes have 50 years of global experience in how to fly planes safely. Self driving cars aren't even a full product yet.

We have over a century of accident statistics for normal cars. Anyone pretending that pedestrians will not jaywalk on a well-lit street with low traffic should not develop self-driving cars and most certainly should not have a driver's license.

> it is likely that the engineer responsible for this code is also going to be responsible on net for a very large number of lives saved and judgement of who may have been culpable for what should be left to the courts.

Citation needed. Claims like that can be used to proclaim the most hardened serial killer a saint. I mean, if we just let them continue they might start saving people at some point; point me to any "currently applicable" evidence that says otherwise.

> I've had young friends who died from car crashes and ... not much else.

In my area people die from old age and cancer. Maybe you should move to a place with sane traffic laws that doesn't let the mental equivalent of a three year old loose on the local population.

> Anyone halfway competent who is working on that as a goal has my near total support.

As a test obstacle on the road?


"But it is far more likely that this was totally normal corporate ineptitude and will be fixed in the due course of normal engineering processes."

Normal engineering processes were built on bodies. We do not want to minimize outrage here. Public anger and pressure is the only way to keep the engineering process going.


Oh wait. So the creators are to be treated with leniency, but LITERALLY running over innocent bystanders is an unfortunate incident which should be endured for the brighter future? If you do not see why this is wrong, do I have news for you.


> There is no doubt that the engineering data gathered here and in other accidents is going to contribute to a massive

I'm going to strongly oppose this, and to support my perspective I present to you the Boeing / FAA fiasco.


Air travel is the safest form of transport we have. Perfection is impossible, but air safety is pretty close. They didn't get there by getting outraged every time an aeronautical engineer made a mistake; they got there because every time there was a crash they gathered data, thought carefully and made changes.

There is no call to get riled up because Uber as a corporate entity let a mistake slip through. The justice system will come up with something that is fair enough and we will all be better off for the engineering learnings. This is young technology, which is different from aircraft.


> Perfection is impossible, but air safety is pretty close.

Intentionally programming a system to ignore object history each time reclassification happens seems like a glaring oversight.

Sitting behind the wheel as a test driver using your phone seems like a glaring oversight.

I’ll continue with my rile until change occurs, cheers.


I think you are absolutely correct, and I'm disappointed again at the broader HN downvoters who lack any sense of perspective. It's like the hackers here all turned into fuzzy-headed Luddites.


Come on. It's one thing to be run over by a fellow human being, it's another to be run over by a system developed by a corporation in the pursuit of greater profits.

As a species, we've long accepted that living around each other poses some hazards and we've made our peace with it.

But to stretch that agreement to a multi-billion dollar corporation that only wants to make money off it? That's too much to ask for.


I think the "for-profit" "multi-billion dollar corporation" thought process is clouding your judgment.

It is such a trite and overused argument.

Why does it feel so different to you if the accident was caused by negligent human texting, versus an engineer making decisions in code?

If anything, that engineer was most likely operating in much better faith than a texting, perhaps drunk, human driver.


It's a rather complicated question. Intention matters a lot when it comes to dealing with on-road incidents. If there is no intention, we deem it an "accident"; otherwise we deem it manslaughter. The punishments - by courts and by society - are much harsher for the latter.

Can code be "accidental"? Surely not. Someone had an intent to write and implement it. If it fails, it's not "accidental"; the system was just programmed that way.

So the question is: are we okay with for-profit companies intentionally writing software that can lead to deaths?


Ah, sure: The New World Order requires sacrifices, as it's axiomatically Good, Correct, and Right. Anyone opposing it is the Enemy. (Now technooptimism-flavored: when Friend Computer runs you over, it's because you were secretly a traitor.)


I'd add the managers and VCs that probably put all the pressure in the world onto the developers, who end up doing hacks just to meet the unreasonable expectations of their employers.


I know you jest, but it's very easy to reliably not kill anybody. Just don't let the car move at all, or stop every time it detects something as large as a mosquito. What's hard is to not kill anybody and actually drive.


Damn straight. These fuckers (and their executives) should be the ones whose lives are on the line, not innocent people on random roads.


>> The important bit is that it notices the obstacle is moving out into the road and stops for it

The problem here is that if I'm doing 70mph+ on the motorway and I see a plastic bag flying in my path, the correct course of action isn't to brake hard or to swerve in a panic to avoid it - but a computer cannot tell the difference between a soft plastic bag and something more rigid. If the only classifier is "there's an object in my way, I need to stop", then that's also super dangerous. In fact, there are cases where unfortunately the correct course of action is to keep going straight and hit the object - any kind of small animal you shouldn't try to avoid, especially at higher speeds; yes, they will wreck your car, but a panicked pull to the left or right can actually kill you. As a human you should do this instinctively - another human = avoid at all cost, small animal = brake hard but keep going straight, plastic bag = don't brake, don't swerve, keep going straight. How do you teach that to a machine if the machine can't reliably tell what it's looking at?
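
One common way to frame it (toy numbers, purely illustrative - this is expected-cost minimisation over the classifier's uncertainty, not anyone's production logic) is that you never need the classifier to be certain, only to spread its belief sensibly:

    # Cost of hitting each object type vs. cost of each maneuver (invented units).
    HIT_COST = {"plastic_bag": 0, "small_animal": 10, "pedestrian": 10_000}
    ACTION_COST = {"keep_going": 0, "brake_hard": 5, "swerve": 50}

    def expected_cost(action, class_probs):
        cost = ACTION_COST[action]
        if action == "keep_going":
            # Only continuing straight risks actually hitting the object.
            cost += sum(p * HIT_COST[c] for c, p in class_probs.items())
        return cost

    def choose_action(class_probs):
        return min(ACTION_COST, key=lambda a: expected_cost(a, class_probs))

    # Even a 1% belief in "pedestrian" dominates the decision:
    print(choose_action({"plastic_bag": 0.70, "small_animal": 0.29, "pedestrian": 0.01}))
    # -> brake_hard

The hard part, of course, is getting those probabilities (and the true cost of braking at 70mph with traffic behind you) anywhere near right.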


My conclusion is that until you have hardware and software that can classify objects as well as a human, you don't even consider testing on public roads. I can't understand the optimism that, with such bad hardware and software but with tons of training, it will magically get better than a human. Also, we need some tests before we let anyone put their self-driving car on the streets - should I be able to hack my car, npm install self-driving, and then test on your street?


I don't have a great answer to that - I don't know Uber's capabilities and I can't really speak to my previous company's capabilities either. Sorry!

Edit: I will say that "size and relative speed of the object" is an excellent start, though.


My guess from the NTSB description is that their tracker had object category as part of the track information, and that detections of one category weren't allowed to match a track of another category. This is useful if, say, you are tracking a bicycle and a pedestrian next to each other. When you go to reconcile detections with tracks, you know that a bicycle isn't going to suddenly turn into a pedestrian, so this rule helps keep you from swapping the tracks of the two objects.

Unfortunately that also means that when you have a person walking a bicycle that might be recognized as either one, this situation happens, and they initialize the velocity of a newly tracked object as zero.

Even this, though, shouldn't have been enough to sink them, because the track should have had a mean-zero velocity with high uncertainty. If that uncertainty were carried forward into the planner, it would have noted a fairly high collision probability no matter what lane the vehicle was in, and the only solution would be to hit the brakes or disconnect.

Furthermore, if you've initiated a track on a bicycle, and there's no nearby occlusion, and suddenly the bicycle disappears, this should be cause for concern in the car and lead to a disconnect, because bicycles don't just suddenly cease to exist. They can go out of frame or go behind something but they don't just disappear.
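
To spell out that guess (and it is only a guess - invented structures mirroring the description above, not Uber's tracker), category-gated association plus zero-velocity initialisation looks roughly like this:

    import itertools, math

    _ids = itertools.count()

    class Track:
        def __init__(self, category, position):
            self.id = next(_ids)
            self.category = category
            self.position = position
            self.velocity = (0.0, 0.0)     # fresh track: no history, assumed static
            self.velocity_variance = 1e3   # huge uncertainty that *should* be used

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def associate(category, position, tracks, max_dist=2.0):
        # Category gate: a "bicycle" detection may only match a "bicycle" track.
        candidates = [t for t in tracks
                      if t.category == category and dist(t.position, position) < max_dist]
        if candidates:
            return min(candidates, key=lambda t: dist(t.position, position))
        # No same-category track nearby: spawn a new one, losing all history.
        new_track = Track(category, position)
        tracks.append(new_track)
        return new_track

With the classifier flipping vehicle -> other -> bicycle frame after frame, every flip spawns another zero-velocity track, so the planner keeps being told the object is static. Propagating velocity_variance into the planner, instead of trusting velocity == 0, is the "mean-zero velocity but high uncertainty" point above.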


> Classification is not needed to avoid objects.

Taking your disclaimer into account, that is a very wrong assumption. If you classify an object wrongly, your model will not be able to properly estimate what the object's capabilities are, including its potential acceleration and most likely trajectory from a standstill. And that information comes in very handy in 'object avoidance', because a stationary object could start to move at any time.


Yes, I suppose I wasn't exact when I stated that. Classification helps very much, but is no excuse to throw away other data you have. It should be an augmentation to the track the object has.


Then we are in agreement.


> Classification is not needed to avoid objects. The key words here are actually "at each change, the object's tracking history is unavailable" and "without a tracking history, ADS predicts the bicycle's path as static" which is a horrible oversight.

Was it an oversight, or were they having trouble with the algorithm(s) when keeping the object history and decided to purge it?


Pure speculation incoming: probably a completely unintended bug. Classify as static, static objects shouldn't have tracks, clear the history. Your object is reclassified as dynamic, whoops, we already cleared the history, oh well.


Static is not a classification; in this case the classification was cycling between "unknown", "vehicle", and "bicycle", and each time the classification changed the history was reset. Then each detected object can be either static or be assigned a predicted path, but unknown objects without a history are always treated as static.

In any case, the history clearing seems to have been a bug; they updated the software to keep the position history even when the classification changes.
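
A hypothetical before/after of that fix (illustrative only, not Uber's code) is just a change of key - history per (track, classification) versus history per track:

    history_v1 = {}  # before: keyed by (track_id, classification)
    history_v2 = {}  # after: keyed by track_id only

    def observe_v1(track_id, cls, pos, t):
        # A class flip lands in a fresh, empty list: the history is "unavailable".
        history_v1.setdefault((track_id, cls), []).append((t, pos))
        return history_v1[(track_id, cls)]

    def observe_v2(track_id, cls, pos, t):
        # The same flip now keeps every past position, so velocity and a
        # predicted path survive the reclassification.
        history_v2.setdefault(track_id, []).append((t, pos))
        return history_v2[track_id]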


In your example "unknown" is implicitly "static". Either way, neither of us has access to Uber's source history to say what exactly went wrong. They do claim to have fixed it, though - you're right.


Strange that the history itself is not used for the classification.


Why aren't they driving the cars with the driverless software in learn mode, scaled up across many human drivers who are training the autonomous classifiers' edge-case dataset? E.g. the human driver stops for this particular shaped object (person in darkness entering the street) in 99% of cases, so we stop as well. (As opposed to not stopping for a floating grocery bag, where it applies the opposite rule and keeps driving, based off recorded human driving behavior.)

Also, we can detect human gait already, can't we? There have to be better signifiers of human life that we can override these computer vision systems with to prevent these tragedies.

What about external safety features, such as an external airbag that deploys on impact, cushioning the object being struck if the system senses immediate collision is unavoidable. Some food for thought for anyone working in this space, would love to hear your thoughts and ideas.


> Why aren't they driving the cars with the driverless software in learn mode, scaled up across many human drivers who are training the autonomous classifiers' edge-case dataset? E.g. the human driver stops for this particular shaped object (person in darkness entering the street) in 99% of cases, so we stop as well. (As opposed to not stopping for a floating grocery bag, where it applies the opposite rule and keeps driving, based off recorded human driving behavior.)

This basically is being done already - recorded data is hand labeled as ground truth for classifiers. "human in darkness entering the street" might be a rare case, though. Humans in the dark on the sidewalk should be pretty common.

> Also, we can detect human gait already, can't we? There have to be better signifiers of human life that we can override these computer vision systems with to prevent these tragedies.

I'm not a perception expert - it looks like in this case the victim was mostly classified by LIDAR capture. I don't know if the cameras were not good enough for conditions or if their tech was not up to the task. This is sort of a red herring though - even if the victim was classified as "bicycle" or "car" she still should have been avoided.

> What about external safety features, such as an external airbag that deploys on impact, cushioning the object being struck if the system senses immediate collision is unavoidable. Some food for thought for anyone working in this space, would love to hear your thoughts and ideas.

Why not do this on all cars?


A bit off topic, but what is your overall view of the self driving sector, aside from the more egregiously irresponsible players?


Long term optimistic. Short to medium term is fuzzy for me. :)


> Disclaimer - Up until a few months ago I worked at a major (non Uber) self driving car company.

That'd be a disclosure. (When you disclose something, it's a disclosure.)


> That'd be a disclosure.

It's a disclosure that is (among other things) disclaiming neutrality, so also a disclaimer.


> It's a disclosure that is (among other things) disclaiming neutrality

That being the purpose of a disclosure, my point stands. What was yours?


My point is that your pedantry - claiming as wrong a reference to one completely accurate description because another description, which is not exclusive of the one provided, also applies - is itself wrong, and, further, that the personal insults based on it are, as well as being unnecessary, infantile, off topic, and counter to the HN guidelines, also unjustified.


[flagged]



> If you being shown to be incorrect is a "personal insult",

The whole point is that the person in question was not incorrect at all; they simply used one of two perfectly accurate descriptions when you preferred the other. Your description of them as having been incorrect was incorrect. (I note, though, that you have since edited out of your post the personal insults about the fitness of the poster you purported to correct for their prior job, which would be commendable were it not directly linked to your dishonest pretense that that insult did not exist and that a reference to an insult in your comment must be to the mere act of proposing a correction.)

Also, the person in question was not me, I'm just the one who called you out on the error in your “correction”. So for someone so critical (incorrectly) of others’ failure to pay close attention to detail, you are doing a poor job of demonstrating attention to detail.


The insult was in your later, snide reply. “What was yours?”


> This is borderline criminal negligence.

Thank you for flatly and plainly stating this. That is unacceptable, and anyone involved with approving that decision should be prosecuted for the death.


I'd also argue that the US government should shut down Uber's self-driving research division. If they want self-driving cars they can license from Waymo, or someone else who is actually taking it seriously.

It's pretty clear that Uber's techbros have some serious culture issues. They have shown they are incompetent and should be disqualified by regulators.


There's a reason Waymo is currently deployed only around Chandler, AZ, and I'm pretty sure it's consistent building codes as they relate to public highways. They picked a different optimization path to get to MVP than Uber. But you can be assured that too is a shortcut.


I'm a little biased but I don't think Waymo is shortcutting this. If you look at the data they're looking at and how it's being processed, it's several orders of magnitude more reliable than what Uber was using. I have yet to see a sensor data video from Waymo that hasn't targeted all obstacles successfully, even if they're not categorized right. If anything, I think Waymo is, in some cases, being too conservative with its driving data (although, admittedly, I don't think I really mean that, since the alternative is the loss of human life). If you've ever driven behind a Waymo vehicle, they're annoyingly strict when it comes to following posted speed limits, stopping for obstacles, and reacting/erring on the side of caution. It's an infuriating exercise in patience but I hope that it will pay off in the long term.


> If you've ever driven behind a Waymo vehicle, they're annoyingly strict when it comes to following posted speed limits, stopping for obstacles, and reacting/erring on the side of caution. It's an infuriating exercise in patience but I hope that it will pay off in the long term.

I've noticed, when I am riding with a good (human) driver who obeys pretty much all rules of the road and drives slower than the posted speed limit (at a speed they are comfortable driving), that there are always a few drivers behind us who will honk their horns, flash their lights, or even pass into the next lane just to pass back a little too close for comfort. I don't know if they are always horrible humans, but that's beside the point.

How would a self-driving car react in such a situation? Would human drivers be better behaved around a Waymo vehicle because the Waymo car has a lot of cameras and sensors and can pretty much show a clear cut case of road rage?


This is exactly where I think that Waymo wouldn't get into that situation. A Waymo vehicle would let the car in as its cameras will actually react to the turn signal. Technically, a line of cars could stop a Waymo vehicle in its tracks just by getting next to it and turning their blinkers on. The car would try to yield and zipper merge but will almost always defer to letting cars in if they get too close. Again, it's super annoying as a human but I think I prefer how conservative they are when it comes to safety.


Speed limits are generally set too low. Someone driving even slower than the posted limit can often be assumed to be intoxicated or impaired.


Sorry, any speed limit will be "too low"; people will consistently drive at approximately the speed limit + 15%, whatever the number. Shifting the speed limit to (old limit * 1.15) will just result in speeds of (new limit * 1.15).


That's not how it works, because it turns out that drivers are generally not suicidal.

The studies have been done, they've been replicated, and the statistics are very clear. Google the '85th percentile rule'.


I haven't had the experience of driving in proximity to a Waymo vehicle like you have. Your statement could be read as saying that taking a conservative approach to the automation is in itself a shortcut. That of course is speculation; I feel confident that when it comes to automated driving, by virtue of the infancy of the technology, there are things we don't know that we don't know yet.


> Your statement could be read as saying that taking a conservative approach to the automation is in itself a shortcut.

Can you expand on that? I'm not sure I follow what you're saying here.


I'm thinking the conservative approach in this context is a shortcut to solving all of the problems. For example, hitting the brakes at any unrecognized object keeps the passengers safe, and is a shortcut in that the recognition models for that particular object don't have to be ironed out. Or, selecting a test city like Chandler - approximately 60 years old, with the signal poles and crosswalks positioned at the same offset from every intersection - is in itself a shortcut. That tech will never work in the Hollywood Hills, for example.


What happens when Waymo is in an accident? Do you shut down their division too?

Human drivers in fatal accidents don't have their licenses permanently revoked and there's no outrage.


"permanently"

Humans have their licenses temporarily revoked because it's assumed that humans can change their ways (which may or may not be a reasonable assumption).

With a computer, you can't expect it will get better unless its programming is improved. So isn't that reasonable grounds for "permanent" revocation, at least in the sense of indefinite, where you wouldn't do the same thing to a human?

Criminal negligence resulting in death by motor vehicle is a minimum of six months license revocation in NY.


It isn't (just) that there was an accident and somebody died.

It's that this quality of software was allowed to operate in a capacity where that could happen.


I've interviewed an Uber engineer once, from their self-driving division. Incredible arrogance. Unforgettable. He clearly believed he is the pinnacle of creation, best software engineer around.

I've asked him a few questions, probing gently his experience and skills. Aside from his arrogance there was nothing special. No love of technology. No depth. Nothing. We didn't hire him.

Companies have different cultures. Companies hire and promote differently. And it matters.


I think it'll be hard to come up with a legally defensible reason to demand that they shut down all research efforts without looking like it's a vendetta.

Also, Uber's business plan is 100% to maintain share in the rideshare market until their cars drive themselves. That is the end game here. To shut down their self-driving research is to doom the company entirely.

Not that I'd be against that. Would be great to see Uber die. It's already caused enough headache and pain.


> I think it'll be hard to come up with a legally defensible reason to demand that they shut down all research efforts without looking like it's a vendetta.

They don't have to shut down their research. I don't care if they want to write software and play on private tracks.

As I see it, Uber should be explaining exactly why their privilege to endanger the public by operating unsafe robots on public roads should not be revoked for criminal incompetence.


Their business plan is and always was a stupid plan by alarmingly stupid leadership. Local or national leaders have no obligation to literally sacrifice more citizens to enable it.


I always thought the self-driving car was an excuse. Something like "We are losing money now, but if you invest in us we will magically be profitable in the future."


> Also, Uber's business plan is 100% to maintain share in the rideshare market until their cars drive themselves. That is the end game here. To shut down their self-driving research is to doom the company entirely.

Uber's business is clearly more important than lost human lives due to negligence of proper software safety measures. /s


A fatality should do nicely. /s


> This is borderline criminal negligence.

Yes, but on the border between “criminal negligence” and “depraved indifference” [0] not the border between “criminal negligence” and “innocent conduct”.

[0] https://en.m.wikipedia.org/wiki/Depraved-heart_murder


> If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

Well, also, any of those categories should trigger braking themselves. You can't hit vehicles or bicycles. You can't hit "other" either. If you plan to hit something, it needs to be a piece of paper or some such. As a_t48 points out, classification is not needed to avoid objects, so being confused about what the object is is totally irrelevant.


This. Whether it's a speed bump, trash, tire tread, tow hitch, bicycle, animal, or person in the road, it does significantly more damage at 35mph than at a reasonable decrease to 25mph.

Basically, absent vehicles behind you and to each side, a driver should slow or stop until the object can be identified and it can be determined whether it will do damage or can be avoided.
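
For a rough sense of how much that matters: kinetic energy goes with the square of speed, so the back-of-the-envelope numbers look like this (simple physics, no claims about any particular vehicle):

    # Energy available to do damage scales with v^2.
    v_fast, v_slow = 35.0, 25.0              # mph; the units cancel in the ratio
    ratio = (v_slow / v_fast) ** 2
    print(f"energy remaining after slowing: {ratio:.0%}")   # -> 51%

Slowing from 35mph to 25mph roughly halves the energy in the collision.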


> Well, also, any of those categories should trigger braking themselves. You can't hit vehicles or bicycles.

A bicycle on the side of the same lane will typically move in the same direction and stay on the side. Thus the car has to overtake.

That is different from a person crossing the street.

You always have to make assumptions about elements in the path. And you have to have an option to handle the worst case.


> Why were there no alarms going off at 5.6 seconds when the vehicle was confused!!!??

I wonder if it's because their system is not reliable enough yet; a continuous alarm isn't useful.


Then it can't be deployed on public roadways!


You know, I don't usually like this quote when people deploy it, because it rarely applies. Frankly I find it questionable even in the original movie. But it applies here like gangbusters: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."

There are a lot of commenters here who seem to be operating on a subconscious presumption that being "the best we could do" is some sort of defense. It's not. If the best you can do is not good enough, don't deploy it.


Considering the aforementioned one second "got a weird result, wait and see if it goes away" system, this seems highly likely. If everything's an alarm, nothing is.


It all makes sense if you realize that Uber's tech fundamentally doesn't work, and if you programmed it to react immediately or set off alarms immediately, the false positives would make it unusable.


I think it's a bit vaguely described, but I think the pedestrian wasn't in the path of the vehicle at 5.6 seconds, since they were walking perpendicularly. The system couldn't classify the person or tell that they were walking into its path. Or at least that's what I gathered from it. I guess if it classifies something as unknown it basically has no behavior for it? I am just speculating.


> If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

Don't be so sure. What if it's a bird, and you're traveling at 70 mph on a busy freeway? Slowing down or stopping could result in getting rear-ended or worse.

Clearly more testing is needed, and more work needs to be put into identifying objects early.


The first thing a driver needs to learn is how to slow down and stop safely.

If you're traveling at 70 mph (which the vehicle wasn't) and you see something up ahead you're not sure about, you can slow down a bit. If you get rear ended when you take your foot off the accelerator, that's not your fault.

If you're still confused, light braking is appropriate. At some point, you'll know to either fully brake or go on through.

This system doesn't need more testing. It needs to be designed so it can stop safely and so that it will stop safely and alert the operator in case it is unable to determine if it's safe to proceed.


If you can't tell a bird from a car or a bike you have zero business being in the self driving car arena.


It's not that I disagree with you, it's just by your standards, [almost] no one should be in the self-driving car arena.


That's quite possibly a reasonable assessment of the situation.


...yet?

At least this doesn't seem to be that hard of a line to draw in terms of competency before allowing these cars on public roads.


Not on public roads.


What do you do when your radar says it could be a person, but your vision sensor tells you it's probably a bird?


Go back to the drawing board.


Stop anyway.

And admit we're going to continue to need human safety drivers...


Uber: Block any remediation actions, wait 1 second and hope for the best?


You slow down, because even if it's a bird, cat, dog, or armadillo, it does damage to your car (including getting it bloody) if you hit it at a higher speed.


You quit and then get a job at McDonalds, because you're clearly not qualified for self-driving cars development.


So you need to wait for sensors that have 100% accurate object detection before you can work on a self-driving car?

Meanwhile, you let humans who are far less than 100% accurate (and are often distracted) keep killing people? I went into a panic stop one rainy night when a billowing piece of plastic blew across the road in front of me; it looked exactly like a jogging person to me and my passenger.

We don't need self-driving cars to be perfect before they are released, just better than people.


If you really want self driving cars to happen then you will need to take one very important factor into account that has absolutely nothing to do with technology: human psychology.

People are not going to be swayed by statistics; they are going to be swayed by their emotions. If the self-driving car attempts that 'hit the road', so to speak, are sub-par in the public perception (even if they might be a statistical win), then they will get banned. So if you are really gung ho on having self-driving happen, then this should concern you too, because otherwise you will never reach that goal.

Self driving cars need to be - right from the get go - objectively obviously better than human drivers in all everyday situations. Every time that you end up in a situation where a self driving car kills someone human drivers - and all their relatives and the press - are going to go 'well, that was stupid, that would not have happened to me' and then it will very soon be game over. And you won't get to play again for the next 50 years.

See also: AI winter.


The way it works in safety-critical software safety analysis, in my experience, is that you have a hazard analysis / failure modes and effects analysis that factors in severity x probability (and sometimes a detectability measure).

So if you identify a failure mode that contributes to a catastrophic hazard, for instance, you had better build your system to drive the probability down. The resulting severity x probability score you end up with has to fall within the risk parameters deemed acceptable by management/safety.
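
As a minimal sketch of that scoring (the scales and the acceptance threshold here are invented for illustration; real programs define their own), the usual risk priority number is just the product of the factors:

    SEVERITY      = {"negligible": 1, "marginal": 3, "critical": 7, "catastrophic": 10}
    PROBABILITY   = {"improbable": 1, "remote": 3, "occasional": 5, "frequent": 10}
    DETECTABILITY = {"certain": 1, "likely": 3, "unlikely": 7, "none": 10}

    ACCEPTABLE_RPN = 100  # whatever management/safety has signed off on

    def risk_priority_number(severity, probability, detectability="certain"):
        return SEVERITY[severity] * PROBABILITY[probability] * DETECTABILITY[detectability]

    # A catastrophic hazard is only tolerable once mitigations drive the
    # probability (and detectability) way down:
    print(risk_priority_number("catastrophic", "occasional", "unlikely"))  # 350 -> rework
    print(risk_priority_number("catastrophic", "improbable", "certain"))   # 10  -> acceptable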


Self-driving cars are far away from matching human driver skill level right now. We do not need them to be "perfect"; we need them to stop making such silly mistakes as the one described in the article - that is a very basic requirement before they can be allowed on public roads.


> Self-driving cars are far away from matching human driver skill level right now.

Well, no. Have any of the systems been tested in heavy snow on icy roads, or on a road without maps?

> silly mistakes

A person died.


> A person died.

Lots of people die in car accidents.

Nearly 40,000 people die each year in car accidents. At least 8 of the top 10 causes of accidents would be improved with self-driving cars.

1. Speeding

2. Drunk Driving

3. Speeding

4. Reckless Driving

5. Rain

6. Running Red Lights

7. Night Driving

8. Design Defects

9. Tailgating

10. Wrong-Way Driving/ Improper Turns

> Well, no. Have any of the systems been tested in heavy snow on icy roads, or on a road without maps?

Aside from figuring out where the edge of the road is, the biggest accident risk that I've seen with driving in heavy snow is speed -- no one wants to drive 15mph for 2 hours through a snowstorm to stay safe, so they drive 30 - 50mph instead.

And I'm not sure how to solve the road visibility issue with self-driving cars, but presumably the same heuristics that humans use could be emulated in software (which I suppose is primarily following the car ahead or its tracks, or looking for roadside markers that show the edge of the road).


> Lots of people die in car accidents.

I doubt anyone would refer to those as silly mistakes either.

My point with the second part is that humans have proven that driving in snowstorms and in places that aren't fully mapped is possible, something that self-driving cars have not.


> We don't need self-driving cars to be perfect before they are released, just better than people.

Who goes to jail when the self driving car kills people?


This is a real dilemma. IIRC, some car companies have already stated they will take responsibility


No one's going to jail for MCAS auto-correcting planes into the ground.


Yet. It may still come to that. If the Germans can go after Martin Winterkorn long after he thought he was in the clear then this may still happen as well.

https://www.theverge.com/2019/9/24/20881534/volkswagen-diese...


Are there lawsuits/investigations still pending, or are they over and no one from Boeing was found guilty?


The Boeing story is only getting started for real. It may take a new head of the FAA before the whole thing is tackled head-on, but eventually that will happen. The USA can't afford an FAA that isn't respected.


Depends. If it's a hardware failure, then no one should go to jail, just like today, if my wheel falls off (through no fault of my own) and I run into a child, I wouldn't expect to go to jail (heck, drivers already get pretty much free rein to run down pedestrians by saying "I didn't see him"). The car manufacturer may have some financial liability if it was a product defect, but again no jail time.

The interesting moral dilemma is what to do if the car decided it was better to run into a pedestrian and protect the driver than to run into a brick wall and protect the pedestrian.

There's no easy answer to that dilemma.

https://www.nature.com/articles/d41586-018-07135-0


The choice shouldn't be between human drivers and human supervised computer drivers. Computer supervision of human drivers is viable, effective, and allows for evolutionary progress.


But may be less safe than fully computer controlled cars unless that computer supervision is able to take control completely -- humans tend to view safety features as an excuse to push the envelope.

"I can text because my car will warn me if I run off the road or stop me before I hit the car in front of me"

"I don't need to slow down on this snowy road, ABS will help me stop safely"

"Sure, I'm driving too fast for this road, but my airbags and seatbelts will protect me if I crash"

https://www.wired.com/2011/07/active-safety-systems-could-cr...


There were articles in July 2017 about Volvo's self-driving cars not coping with kangaroos (they bounce) which make up the majority of car/animal collisions in Australia. A kangaroo is a lot bigger than a bird but couldn't be identified as being on the road.


Yes, but we can also say what a responsible self-driving car should be detecting/recording/deciding at that point; paraphrasing:

- unknown object detected ahead, collision possible to likely

- dangerous object (car) detected approaching from rear with likely trajectory intercepting this vehicle (people easily forget multitasking/sensing like this is something an autonomous car should be able to do better than a human who can only do relatively intermittent serial scans of its environment)

- initiate partial slow down and possibly change path: make some decision weighting the two detected likely collision obstacles.

You do not have to slam on the brakes and be rear-ended, but speed is a major factor in fatal crashes, so even if you can only drop 30% of your momentum by the time of impact and avoid the rear end, that's still a responsible decision.
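
Back-of-the-envelope on that 1.2-second window (assuming roughly 7 m/s^2 of deceleration on dry pavement and ignoring actuation lag, so treat it as an optimistic bound rather than anything from the report):

    MPH_PER_MS = 2.23694              # m/s -> mph

    v0 = 43 / MPH_PER_MS              # ~19.2 m/s, the reported speed at -1.2 s
    decel, window = 7.0, 1.2          # m/s^2, seconds of braking before impact
    v_impact = max(v0 - decel * window, 0.0)

    print(f"impact at ~{v_impact * MPH_PER_MS:.0f} mph, "
          f"~{(1 - v_impact / v0):.0%} of the momentum shed")
    # -> roughly 24 mph, with over 40% of the momentum gone

Even braking that starts absurdly late takes a big bite out of the impact speed, which is the whole point of not suppressing the action.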

And we can accept that sometimes cars are put in potential no-win situations (collision with two incoming objects unavoidable).

What's a negligent/borderline insane decision? Putting a one-second hard-coded delay in there, because otherwise we have to admit we don't have self-driving cars, since we can't get the software to move the vehicle reliably if it keeps trying to avoid its own predicted collisions.

(Another issue is the inability to maintain object identity/history and its interaction with trajectory prediction... personally, IMO, it is negligent to put an autonomous car on a public road that displays that behaviour, but that's just me.)


> What if it's a bird,

Conversely, there are many things that are bird-sized that can do significant damage to a car and even be fatal to the people in the car behind you. E.g. a trailer hitch caused one of the first Tesla battery fires; loose pieces of pavement have been known to be kicked up and kill people in a following car.


No, it shouldn't have applied the brakes at that point. There are lots of vehicles around in most cases. If the car braked every time it spotted another vehicle it would almost never move. It should only take corrective action if the paths look like they will intersect.


At five seconds from impact it absolutely should have. And if it couldn't tell that it was that close to the "vehicle" that's even more reason it shouldn't be anywhere near the road in this state. The system should be able to behave cautiously in situations where input is erratic. Good human drivers don't go full speed ahead when they can't see.


It took over four seconds for the system to decide swerving was not sufficient based on path prediction. The improvements made after this report would have initiated braking seconds earlier.

-5.6s, 44mph. Classification: Vehicle (by radar). Path prediction: None; not on the path of the SUV.

- Radar makes the first detection of the pedestrian and estimates its speed.

...

-1.5s, 44mph. Classification: Unknown (by lidar). Path prediction: Static; partially on the path of the SUV.

- Lidar detects an unknown object; since this is a changed classification, and an unknown object, it lacks tracking history and is not assigned a goal. ADS predicts the object's path as static.
- Although the detected object is partially in the SUV's lane of travel, the ADS generates a motion plan around the object (maneuver to the right of the object); this motion plan remains valid (avoiding the object) for the next two data points.

-1.2s, 43mph. Classification: Bicycle (by lidar). Path prediction: The travel lane of the SUV; fully on the path of the SUV.

- Lidar detects a bicycle; although this is a changed classification and without a tracking history, it was assigned a goal. ADS predicts the bicycle to be on the path of the SUV.
- The ADS motion plan for steering around the bicycle, generated 300 msec earlier, was no longer possible; as such, this situation becomes hazardous.
- Action suppression begins.

* The vehicle started decelerating due to the approaching intersection, where the pre-planned route includes a right turn at Curry Road. The deceleration plan was generated 3.6 seconds before impact.

https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf


> If the car braked every time it spotted another vehicle it would almost never move.

In this particular case I would be perfectly ok with that. If you can't operate a vehicle safely then coming to a stop or failing to get moving is fine.


When I drive there are things I see that I'm not sure what they are, but I don't care, since they are stationary over there and I'm going a different way over here. I don't stop to take a good look before continuing.


What we're talking about here is when they're in your path and you can't tell the difference. And that difference is very important, because the dynamic characteristics of a pedestrian, bicycle, car or other object should factor into your model of its future trajectory and speed.


"There's something straight ahead of me, unclear what. Let's just ram into it, full throttle!" If that's your thought process, please cease driving immediately and permanently.


If the car can't detect what's in front of it, then it most definitely should brake. It's up to the engineers to solve this issue, not for the vehicle to continue blindly into a dangerous situation.


> There are lots of vehicles around in most cases.

In this case, we know there weren't many vehicles around.

This very scenario is a great example where I'd want a car to stop if it saw a deer or even a dog or armadillo.

> If the car braked every time it spotted another vehicle it would almost never move.

In defensive driving it's often taught that you are supposed to slow whenever you're approaching another vehicle and don't know what it's doing. You're supposed to exercise caution at intersections, and definitely supposed to exercise caution when passing, being passed, there are things in the road or other people or vehicles on the side of the road.


> This is borderline criminal negligence.

Counterargument: the safety driver is supposed to be prepared to take control at any time, should the vehicle do something unsafe.

It seems to me to be clearly criminal to use this code to run a fully automated environment (i.e. no safety driver). It's not clear to me what the expectations should be of the code when there is supposed to be an attentive safety driver in the vehicle.

I think the safety driver is going to jail, because they were watching videos instead of watching the road at the time their vehicle killed a pedestrian.


> Counterargument: the safety driver is supposed to be prepared to take control at any time, should the vehicle do something unsafe.

That's impossible. This has been tested in countless studies with train drivers and pilots, and it is absolutely impossible for a human to stay alert if an 'almost good enough' computer system is in the driver's seat.

By the time you're needed your situation awareness will be nil.


Counter-counterargument: The safety driver should have something to do, and "something is moving closer to the car in a way we're not expecting" would be a great thing to show the safety driver.

Even if this was constantly happening, it would give the safety driver some sense of purpose - their job would be constantly figuring out "is this a real thing or not" - and then they wouldn't be bored out of their mind and be watching videos.


> The safety driver should have something to do

I agree with that, and the NTSB should consider adding this to their requirements when approving test programs of this sort.

But stepping back, I think there's a very significant difference in culpability between "safety driver couldn't react in time because they zoned out" and "safety driver was watching a sitcom". In the first case, the driver was trying to do their job, and the nature of the ask made it difficult/impossible. In the second case, the driver was knowingly not doing their job, and was knowingly engaging in unsafe behaviour. We don't have any examples of a fatal accident involving the first kind of error, and this case is an example of the second kind.

> "something is moving closer to the car in a way we're not expecting" would be a great thing to show the safety driver.

Isn't that what you get by looking through the windshield using your eyeballs?


"In the second case, the driver was knowingly not doing their job, and was knowingly engaging in unsafe behaviour."

Maybe, but the bigger picture is if you hire people for low wages and give them impossible tasks, you're not paying enough for them to make a good attempt or be a scapegoat. The problem is management.


There is a clear subtext in everything put out by Tesla and Uber that their cars are self-driving. That is to say, they drive themselves. That is to say, nobody else needs to drive them in order for them to get from A to B.

If they are truly trying to stick to their guns of "well, the 'driver' should have been prepared to take over at zero notice", then their (potentially criminal) negligence is in how they present and market their product and not in the software. But it's one or the other.


Not to mention, the idea that a human failsafe will be attentive enough to respond instantly is ridiculous to begin with. If the car drives itself 99% of the time, it better drive itself 100% of the time, because the human is checking out.


> because the human is checking out.

In some cases in the most literal sense of that.


Hang on, Uber is not selling their product to anybody (yet). This is completely different from what Tesla is doing.

Yes, if this code was in a Tesla, it would be criminally negligent, because in the driver's seat of a Tesla is a consumer that bought a product called "autopilot".

Uber, and Waymo, and Cruise, are all _testing_ their systems, and that's why they installed safety drivers. There's no marketing here, and there is no customer -- the person in the driver's seat was employed to sit there and pay attention to what's going on, for the explicit reason that the car's computer is NOT yet capable of driving itself.


> There is a clear subtext in everything put out

That's in the marketing materials rather than the legal ones. Which is great for making regular people think "look, it can drive itself, I'll just fiddle with my phone", but doesn't put any of the legal responsibility on the manufacturer's shoulders. I'm guessing regulators will eventually have to weigh in on this.


I don’t believe we should think of this person’s profession as a safety driver. We should think of them as a fall guy.

It’s not reasonable for a human to sit around for hours doing absolutely nothing, then suddenly be thrust into an emergency with seconds to respond without warning. Most humans aren’t capable of that level of vigilance. If they aren’t watching a video, they’ll be daydreaming. It’s very unlikely they will be able to recover from any kind of situation requiring an immediate response. At a system design level, by placing a human in that role you have already failed.

Maybe if you take the most qualified humans with the best reaction times, they would have a chance, but the qualifications for that job would be more like those of a fighter pilot. This job is not recruiting our best and brightest, it’s recruiting people who want to sit around doing nothing.

This person’s role in the system isn’t to provide safety - it’s to absorb liability. Ergo, a fall guy.


  This is borderline criminal negligence.
I don't think there is much border.


> This is borderline criminal negligence.

Uber had told the world who they are: criminals. They have flaunted many laws since inception.

Will they ever stop? Who will stop them?


They have flaunted their criminality. They have flouted the law.


> The system should have started applying brake at this point. If a 3500lb vehicle can't decide what it is about to impact, it needs to slow down (to a stop if necessary).

I suspect there's a big problem in the other direction, too. If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.

If it's (appropriately) conservative about starting to brake it will be braking/slowing all the time. If it's not conservative, people or cars will occasionally get hit. The former could make people uncomfortable or carsick or create some subtler danger by stopping short needlessly. The latter might mostly work and mostly not kill people, except for when it does.


Then your system is not ready to be on the road. We're talking about a metal box weighing at least a ton, launched at dozens of miles an hour on roads where it can hit people.

"They had to not react quickly because they keep mis-detecting things" is really not an acceptable justification. If that's the reason (and it probably is), that car should never have been on the street.


Exactly my point. I'm not apologizing for the software. I have severe doubts that the software is anywhere close to making safe and appropriate decisions in the context of real-world driving, with open-ended conditions regarding the makeup of the road, the full range of obstacles and edge cases, weather, lighting, etc.

The scariest thing is that it's so fucking familiar, this idea of technologists solving a range of simple cases and assuming the solution extrapolates to success in the real world. We cannot get into these cars if that kind of software development feels anything like everyday software development, where random web site bugs can be tolerated.


No, the scariest thing is that capital may eventually push through a launch of these crappy devices before they're ready. And they'll never be ready.


I imagine most self-driving systems will have a higher success rate than human drivers. Uber's may not have been one of those systems though.


If self-driving cars were remotely possible, it would be obvious, because Google Maps would be insanely perfect. That's the bottom line as far as I'm concerned. Every time it directs me to do something stupid, I think, what if it were telling my car directly?


I'm sure the situation will improve eventually, but I'm guessing we're still several decades away and that it'll take advances in both sensor technology and software before driverless cars work at the same level as a human driver in all conditions. In perfect weather, on perfect roads, with minimal traffic and zero surprises the tech works mostly OK now, but those are basically lab conditions and not the reality most of us drive in today.


How do you feel about Tesla releasing their fully self driving software soon? They seem to have a very low accident rate so far.


I would want better reliability than "it shouldn't be a problem...until it is." That's pretty much the same reason my 82yo mom used and let me be the first to say you don't want to be driving next to her car (she doesn't drive anymore).

As a preliminary step to any regulatory approval Tesla should release every byte of data from their tests so we can analyze the scenarios and events that the software has dealt with, so we can second-guess them. I seem to remember a common criticism of Tesla is that it's kinda shitty to work there and I don't think the best work comes out of an environment like that.

We should know for sure what they/you mean by "seem to" and "very low." Trade secret protection is insignificant when public safety is involved.


Are 2 errors better than 35,000 human errors?


Depends on the errors!


Along the same lines, if they can figure out how to drive a car, they can figure out how to give instructions to human drivers that are far, far better than the state of the art. So, when they release their revolutionary navigation system, I'll try it. Not buying a car from them though.


Why do you think they'll never be ready?


Because I don't think computer vision will ever be good enough to be safe for this purpose.


We trust human vision for this purpose; is there anything human vision is doing that would be impossible to emulate with a computer, given enough research and development?


What exactly do you mean by "enough?" "Enough to solve the problem" is a tautology.


You're the one who used the word "enough" in "ever be good enough to solve the problem." What did you mean by it? Are you claiming that human perception cannot ever be synthetically matched?


If we apply this to a human, let's say I'm getting old and my eyes aren't as good any more, I don't just keep driving like nothing changed, I need to find a way to see better, because the risk of getting into an accident is higher. If the car's system cannot tell what an object is, you don't just assume it's going to be ok, you need to either get better sensors or find some other real solution.


> I suspect there's a big problem in the other direction, too. If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.

Oh no, it doesn't work and Travis & friends have been lying for years. Wake me when there are consequences for the follies of rich white guys.


>If the system starts to brake every time it thinks it might need to, it will happen all the time. It might see false positives like this all the time and that's why it doesn't act on them right away.

If your car always sees empty space ahead as non-empty space, you probably shouldn't let the car on the road until you fix that. Once you've fixed that, if the car sees that the space ahead is non-empty, even if it can't classify it, it should slow down well in advance, warn the driver, and continue braking if the driver is asleep or watching YouTube. It's AZ; there is no rain nor snow falling which would mislead the lidar. An object ahead: slow down, and stop if you can't navigate around it.

Presenting it as the AI-hard issue of misclassification is just attention misdirection from, and whitewashing/laundering of, the foundational issue of knowingly letting a car on the road without the basic safety level of "don't hit objects in front of the car". Similar to Boeing blaming the 737 MAX crashes on a failed sensor.


>It is AZ, there is no rain nor snowflakes falling

This is so understated. This car was basically driving in ideal conditions at night for a self-driving car. If it can't avoid a hit under these conditions, it clearly wasn't ready to be on the road.


> If the system starts to brake every time it thinks it might need to, it will happen all the time.

One thing that self driving will "give back" is time to the occupant.

And the thing in human-driven accidents most correlated with damage: speed.


To rant a bit more about this one second delay thing.

This reeks of a type of thinking where you are relying on other parts of the system to compensate. You might expect to hear things like "it's okay, the safety driver will catch it". Speaking for myself personally, this type of thinking comes very naturally. I like to come up with little proofs that a problem is handled by one part of a program, and then forget about that problem. But in my experience (which does not involve writing anything safety critical) this strategy kinda sucks at getting things right. Dependencies change, assumptions become invalid, holes in your intuitive proof become apparent, etc, etc, etc, and the thing falls over.

If you are designing a safety critical system, something you really want to work, I don't think you should be thinking in the mode where each problem is handled by one part of the system. You need to be thinking in terms of defense in depth. When something goes wrong, many parts of the system should all be able to detect and correct the problem. And then when something bad does come up, and 9 out of 10 of those defensive layers each individually were sufficient to save the day so there was no disaster, you should go figure out what the hell went wrong in the tenth.


> Dependencies change, assumptions become invalid, holes in your intuitive proof become apparent, etc, etc, etc, and the thing falls over.

I apply encryption to storage. I can't tell you how often people try to push back on encrypting storage with stories like, "But we have access controls and auditing in place. And we have a deprovisioning process for our drives. Encryption is costly and redundant, so why should we do it?"

Through the years I can recount several after-the-fact incidents where encryption ended up saving their bacon because of weird and entirely unanticipated events. One notable one was where a hypervisor bug caused memory to persist to an unintended location during suspend/resume, and the only reason customer data wasn't exposed in production was because the storage target was encrypted. In another case the "streams were crossed" when assigning virtual disks to virtual machines. The (older) base disk images weren't encrypted in that case, but because the newer machines were applying encryption in the backend before the blocks were exposed to the guest OS, the "unencrypted" disk content came across as garbage (plaintext was "decrypted," which with the algorithms we were using was equivalent to encrypting), again preserving the confidentiality of the original disk images.

The concept of "belt and suspenders" is often lost on people when it comes to safety and security systems.


It's depressing how much trouble some people have in understanding the idea of defense in depth (https://en.wikipedia.org/wiki/Defense_in_depth_(computing)).

Oh, you have access controls in place? Great. What happens if they fail?

Oh, you have a deprovisioning process in place? Great. What happens when someone doesn't follow it?

Systems fail all the time. If your defense only has one layer, when that layer fails (and it will, eventually) you're SOL. Multiple layers of defense give you resiliency.


> This reeks of a type of thinking where you are relying on other parts of the system to compensate.

This is what Boeing did with Max. The airframe wasn’t stable in and of itself, and they relied on software to compensate. Terrible idea.


>And then when something bad does come up, and 9 out of 10 of those defensive layers

And there should be a definitive priority established between those layers so that, if one fails, the other 9 don't attempt to correct in different ways. It should fail from the most conservative to the least so that a false positive results in erring towards stopping the vehicle.


This. The right way to think is that each component, in parallel, has a chance of succeeding, so the chance of total system failure is exponentially small in the number of components. Not: "oh, if this layer fails, the next one will catch it"... which makes the chance of failure as high as that of the weakest link.
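
A toy illustration of that arithmetic, assuming the layers fail independently (a big assumption in practice, since failures are often correlated):

    # Probability that every defensive layer fails at once, assuming each layer
    # independently catches the problem 90% of the time (illustrative numbers only).
    def total_failure_probability(p_layer_fails: float, n_layers: int) -> float:
        """All layers must fail for the system to fail."""
        return p_layer_fails ** n_layers

    for n in range(1, 6):
        print(f"{n} layer(s): {total_failure_probability(0.1, n):.0e}")
    # 1 layer(s): 1e-01 ... 5 layer(s): 1e-05

The "next layer will catch it" mindset quietly assumes you get the benefit of the last line while only ever exercising the first.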


It seems like Uber put in this "reaction delay" to prevent the cars from driving/maneuvering erratically (think excessive braking and avoidance turning). This, along with allowing the cars to drive on public roads at all before handling obvious concerns like pedestrians outside of crosswalks, is supposed to be balanced out by having a human ready to intervene and handle these situations.

I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle. Yes, this particular driver was actively negligent by apparently watching videos when they should have been monitoring the road. But even a more earnest driver could easily space out after long hours of uneventful driving with no "events" or inputs. And that could be enough that their delay in taking over could lead to the worst.

Certainly not defending the safety driver here - or Uber. But I think there's a bit of a paradox in that the better an AV system performs, and the more the human driver trusts it, the easier it is for that human to mentally disengage. Even if only subconsciously. This seems like a difficult problem to overcome, especially if AV development is counting on tracking driver interventions to further train the models for new, unexpected, or fringe driving situations.


We will never have any way to know whether an average attentive human would have correctly parsed this situation or would also have hit the unexpected pedestrian in the middle of the street at night, but it's worth remembering that trying to make broad assessments of self-driving technology from this one accident is reasoning from a single data point.

One advantage the self-driving cars have over a human driver is that NTSB and Uber can yank the memory and replay the logs to see what went wrong, correct the problem, and push the correction to the next generation of vehicles. That's not a trick you can pull off with our current fleet of human drivers, unfortunately(1).

(1) This is not a universal problem with human operators, per se... The airline industry has a great culture of observing air accidents and learning from them as a responsibility of individual pilots. We don't have a similar process for individual drivers, and there are far, far more car crashes than air crashes, so reviewing 100% of accidents would be an impractical time commitment.


Humans can learn and transmit lessons. There is usually more objective evidence especially nowadays with cameras everywhere.


Oh, they definitely can, but I'm saying there's basically zero culture of that in the common automotive sector.


>> I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle.

Why not run these systems in shadow mode to collect data, rather than active? Have the human completely in control and compare the system's proposed response to the human's. At my last job, running a new algorithm in shadow mode against the current one was a common way to approach (financially) risky changes.
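
A minimal sketch of what that could look like (all names and thresholds here are invented for illustration, not anyone's actual system): the planner runs on live sensor data, but only the human's commands reach the actuators, and big divergences get logged for offline review.

    from dataclasses import dataclass

    @dataclass
    class Command:
        brake: float      # 0..1
        steer_deg: float  # steering angle

    def shadow_divergence(planner_cmd: Command, human_cmd: Command,
                          brake_tol: float = 0.3, steer_tol_deg: float = 10.0) -> bool:
        """True if the planner disagreed with the human beyond tolerance."""
        return (abs(planner_cmd.brake - human_cmd.brake) > brake_tol
                or abs(planner_cmd.steer_deg - human_cmd.steer_deg) > steer_tol_deg)

    # In the main loop (hypothetical helpers):
    #   cmd_ads = planner.plan(sensor_frame)   # computed but never actuated
    #   cmd_human = read_driver_inputs()
    #   actuate(cmd_human)                     # human stays in control
    #   if shadow_divergence(cmd_ads, cmd_human):
    #       log_for_review(sensor_frame, cmd_ads, cmd_human)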


> But even a more earnest driver could easily space out after long hours of uneventful driving with no "events" or inputs.

As somebody who used to regularly drive the I-5 between SF and LA, I can wholeheartedly vouch for this statement.


> > That triggered what Uber called “action suppression,” in which the system held off braking for one second

> A hardcoded 1 second delay during a potential emergency situation. Horrifying.

Also laughable, if it wasn't so horrifying. The self driving car evangelists always argue how much faster their cars could react than humans. It's basically their main selling point and the reason why these things ought to be so much safer than humans.

Sorry, but I as a human don't have a one second delay built in. That's an absurdly slow reaction time for which I would have to be seriously drunk to not beat it.


There's research on this topic, and you'd be surprised.

The average human apparently has a 2.3 second delay to unexpected, interruptive stimulus while driving (https://copradar.com/redlight/factors/IEA2000_ABS51.pdf). We almost never perceive it as such because we tend to measure our own reaction times from the point we are conscious of stimulus to the point we take willful action to respond to it, but the hard numbers appear to show that critical information can take 1+ seconds to percolate to conscious observation (remember, the brain spends a lot of time figuring out what in the soup of sensory nonsense is worthy of attention).


The critical part is that you need to compare apples to apples - in this case, the one second delay is from the point at which the car had a clear idea of there being an obstacle in its path until it would have started to apply the brakes. If you want to compare this to humans, you also need to remove the sequence of time during which the human identifies the potential obstacle as relevant and subsequently as something he would crash into.

Whether this time is shorter or longer for humans is another question entirely (though human intelligence's ability to deduce intent from behavior and forecast the actions of other humans in traffic should give robocars a good challenge in that department as well). But in terms of raw reaction time after determining "I have to brake NOW", a human is definitely faster than one second.


That study shows a ~1 second delay from the incursion being visible to a human to them releasing the accelerator and a further 2.3 seconds to have the brake fully depressed, by which time they were also steering. The study also implies the average human response time was adequate to avoid the collision...

To put things into perspective, the Uber spent 4 seconds after actually detecting the incursion trying to figure out whether it needed to respond while doing nothing, then a further second of enforced pause after concluding it did need to do something, until finally starting to reduce speed 0.2 seconds before impact.


That study doesn't purport to be an objective measurement of absolute reaction time. It's comparing relative driver behavior between a driving simulator and a test track, and it doesn't seem to have controlled how immediately drivers needed to respond to avoid a collision. It does, however, include one objective measurement of human reaction time, albeit not as a primary result of the study:

> the time from incursion start to throttle release included the reaction time of the tow vehicle driver pulling the foam car (which was consistently less than 200 milliseconds)


Sounds like they fixed that post-crash..

Handling of Emergency Situations. ATG changed the way the ADS manages emergency situations (as described in section 1.6.2) by no longer implementing action suppression. The updated system does not suppress system response after detection of an emergency situation, even when the resolution of such situation—prevention of the crash—exceeds the design specifications. In such situations, the system allows braking even when such action would not prevent a crash; emergency braking is engaged to mitigate the crash. ATG increased the jerk (the rate of deceleration) limit to 20 m/s3

https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf .


They also addressed the earlier flaw that all the path history was lost every time the object classification changed:

>Path Prediction. ATG changed the way the ADS generates possible trajectories—predicts the path—of detected objects (as described in section 1.6.1). Previous locations of a tracked object are incorporated into decision process when generating possible trajectories, even when object’s classification changes. Trajectories are generated based on both (1) the classification of the object–possible goals associated with such object, and (2) the all previous locations.

edit: This was also improved; clearly it was not ready when it rolled out before these changes:

>Volvo provided ATG with an access to its ADAS to allow seamless automatic deactivation when engaging the ADS. According to ATG and Volvo, simultaneous operation of Volvo ADAS and ATG ADS was viewed as incompatible because (1) of high likelihood of misinterpretation of signals between Volvo and ATG radars due to the use of same frequencies; (2) the vehicle’s brake module had not been designed to assign priority if it were to receive braking commands from both the Volvo AEB and ATG ADS.

... changes

>Volvo ADAS. Several Volvo ADAS remain active during the operation of the ADS; specifically, the FCW and the AEB with pedestrian-detection capabilities are engaged during both manual driving and testing with the UBER ATG autonomous mode. ATG changed the frequency at which ATG-installed radars supporting the ADS operate; at the new frequency, these radars do not interfere with the functioning of Volvo ADAS.

>ATG also worked with Volvo to assign prioritization for the ADS and Volvo AEB in situations when both systems issue command for braking. The decision of which system is assigned priority is dependent on the specific circumstance at that time.


Jerk is the rate of change of deceleration. The original settings were a pause of 1s, followed by 5m/s³ of jerk up to 7m/s². That means it takes 2.4s from the time the issue is detected until "full" braking of 7m/s² is applied.

Even 20m/s³ doesn't seem all that aggressive to me. A good car can brake with around 9m/s² (depending on the state of the road) which means it's going to take 0.45s to go from 0 to full braking.
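
A quick sanity check of those figures, assuming deceleration ramps up linearly at the jerk limit after any fixed delay:

    # time until full deceleration = delay + (max deceleration / jerk limit)
    def time_to_full_braking(delay_s: float, jerk_m_s3: float, a_max_m_s2: float) -> float:
        return delay_s + a_max_m_s2 / jerk_m_s3

    print(time_to_full_braking(1.0, 5.0, 7.0))   # original settings: 2.4 s
    print(time_to_full_braking(0.0, 20.0, 9.0))  # new jerk limit, good brakes: 0.45 s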


Too bad they didn't do that after any of the previous 33 times it crashed into a vehicle


There's lots of bad stuff in this story without making up new stuff. Those 33 times were other vehicles striking the Uber vehicle, rather than vice versa. There was one time where the Uber vehicle struck a stationary bicycle stand that was in the roadway.


Is the story wrong? This is what it says:

"In these 37 incidents, all of the robo-vehicles were driving in autonomous mode, and in 33, self-driving cars crashed into other vehicles."

This is saying the self-driving cars crashed into other vehicles.


Story is wrong. The linked report has it as "Most of these crashes involved another vehicle striking the ATG test vehicle—33 such incidents; 25 of them were rear-end crashes and in 8 crashes ATG test vehicle was side swiped by another vehicle."

Edit: I emailed the register and they fixed it immediately. Nice!


That doesn't say who's at fault. Could be that Uber got rear-ended 25 times before they disabled emergency braking, then they hit a pedestrian.


I agree. Just pointed out that it is patched now...or rather, should be.


>> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

Those still all seem to fall into the category "thing you should avoid hitting", though, right?


The table goes into more detail. Each time the classification changed, the history for that object was essentially deleted; since there's only one data point, the system predicted that it would "continue" to stay stationary, even though the pedestrian was walking at a steady pace.


That part, about deleting the history, confuses me.

Why delete the history on a classification change?

Shouldn't classifications be tiered? In this case, while the system was struggling to PERFECTLY classify the object, it was clearly thinking it was something that should be avoided (oscillating between car, bike, other).

In this case, I would expect the system to keep its motion history. IMO, this could have prevented the accident, because although it didn't determine it was a bicycle/person until "too late" ... it had determined with plenty of time that it was maybe a car, maybe a bike.
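
A rough sketch of the alternative being described here: keep the position history attached to the track no matter what the classifier currently calls it, so a velocity estimate (and therefore a non-static path prediction) survives label flips. Everything below is invented for illustration, not Uber's actual code.

    from dataclasses import dataclass, field

    @dataclass
    class Track:
        label: str                                   # "vehicle", "bicycle", "other", ...
        history: list = field(default_factory=list)  # [(t, x, y), ...]

        def update(self, t: float, x: float, y: float, new_label: str) -> None:
            self.label = new_label                   # relabel, but do NOT discard history
            self.history.append((t, x, y))

        def velocity(self):
            """Crude velocity estimate from the last two observations."""
            if len(self.history) < 2:
                return (0.0, 0.0)                    # only then fall back to "static"
            (t0, x0, y0), (t1, x1, y1) = self.history[-2], self.history[-1]
            dt = t1 - t0
            return ((x1 - x0) / dt, (y1 - y0) / dt)

    track = Track(label="vehicle")
    track.update(0.0, 0.0, 3.5, "vehicle")
    track.update(0.5, 0.7, 3.5, "other")   # label flips; the velocity estimate survives
    print(track.velocity())                # (1.4, 0.0): steadily moving across the lane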


This classification failure reminds me of the concept of "Disguised queries" I saw on LessWrong [1]. That is, there's the question you really want to answer:

A) Should I avoid driving this car into this thing?

and then there are subsidiary questions that help to answer that, but would be fully obviated if you already had a good answer for A):

B1) Is this a human?

B2) Is this a vehicle?

B3) Is this unrecognizable?

The "natural category" here is "drive into" vs "don't drive into", not "human vs vehicle vs fruit stand vs other".

[1] https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-...

In the article, the A)-type question is whether the object has vanadium, and the B)-type questions are whether the object is a blegg (blue round thing) or a rube (red square thing). The distinction becomes stark when you know it has vanadium, but it doesn't neatly fall on the blegg/rube continuum.
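
A toy version of answering the A-question directly, using nothing but position and a constant-velocity extrapolation (names, numbers and the prediction model are all made up for illustration): the check gives the same answer whether the thing ahead is labelled bicycle, vehicle or "other".

    def should_avoid(obj_pos, obj_vel, ego_pos, ego_vel,
                     horizon_s: float = 5.0, danger_radius_m: float = 2.0) -> bool:
        """Does the object ever come within danger_radius_m of us over the horizon?
        The object's class label is deliberately not an input."""
        dt = 0.1
        for i in range(int(horizon_s / dt) + 1):
            t = i * dt
            ox, oy = obj_pos[0] + obj_vel[0] * t, obj_pos[1] + obj_vel[1] * t
            ex, ey = ego_pos[0] + ego_vel[0] * t, ego_pos[1] + ego_vel[1] * t
            if ((ox - ex) ** 2 + (oy - ey) ** 2) ** 0.5 < danger_radius_m:
                return True
        return False

    # Pedestrian crossing from the left at 1.4 m/s, ego doing ~20 m/s (~44 mph):
    print(should_avoid(obj_pos=(80, 4), obj_vel=(0, -1.4),
                       ego_pos=(0, 0), ego_vel=(20, 0)))   # True

In a real system you'd still use the B-answers to refine the predicted trajectory (bicycles and cars move differently), but the avoid/don't-avoid decision shouldn't have to wait for them to stabilise.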


If it's having a hard time both identifying an object and measuring its movement, there's not really any reason it should understand that all those separate data points are the same object.

That is, it doesn't really matter if the object history is "deleted" or not; if it can't associate a new data point with a previous history (by identification or predicted position), the practical result is the same as if there is no object history.

This could be a result of using velocity based tracking, which I don't know that Uber uses, but is a fairly standard method, as it's what raw GPS measurements are based on.


> if it can't associate a new data point with a previous history (by identification or predicted position)

This sounds a bit strange, given that the entire technology around which autonomous driving is built is about identifying and recognising patterns in extremely noisy and variable data.


This seems to be the core problem in my opinion: the category swapping inhibited a timely braking decision.

If so, how come these categories were not encompassed into a more abstract "generic object" whose relative position had been getting closer since first detection? That ought to have triggered the same braking scheme as, say, the detection of an unmoving car ahead.

I'd go for: utter engineering malpractice.


I suspect overly aggressive association of unknown objects would have its own set of side effects.


Well, in my experience, if you're left balancing side effects like that, it's a sign your underlying design is flawed.

But regardless, taking sensor classification (inherently error-prone) at face value is engineering malpractice.


Right, but as the article mentions, each time it got reclassified, the tracking history gets wiped, so the car doesn't know that the object is about to enter the path of the SUV. It just sees an object that's in the other lane and assumes it's going to stay there.


>the tracking history gets wiped

This might be a little too nitpicky but it doesn't get wiped. It's simply no longer associated with that object because it's considered a different object. It's still a huge, glaring issue, obviously, but all the data is still there.

In this particular case, the "object" was identified as one type of object and so all of the data related to that was classified as "car" info, for example, and then, when it's reclassified, that additional data starts recording to the "bike" info bucket. The software should have been keeping track of certain data regardless of that classification but it's only seeing the latest bucket of data. If the tracking history got wiped, we wouldn't have the data to look back on to see how this was all happening.


Major facepalm right there. It's like all the stupidity of UI choices I see, but with the consequence of a car collision, and (in cases like this) death.


> > 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

That alone should have been grounds for an immediate cessation of operation until a driver could take over, and for the system to be declared unworthy of operation on public roads until this problem was fixed. The differences between 'pedestrian with bicycle', 'vehicle' and 'bicycle' are so large that any system that wants to steer a vehicle should be able to tell the three apart at 50 yards of distance or more.

That is the reason why regular drivers have to pass an eye test before they are allowed behind the wheel. If you can't see (or understand what you are seeing) you should not drive.


> " and Vasquez took the steering wheel, disengaging the autonomous system"

> And then top it off with systemic issues around the backup driver not actually being ready to react.

It's even worse than that! Once the human does take the wheel the computer stops doing anything.

So from when the human is alerted and grabs the wheel, until the human can react, the car isn't even slowing down!

That's like the worst of both worlds.


From a year ago [1], engineers said jaywalker detection wasn't there:

> Employees also said the car was not always able to predict the path of a pedestrian.

The brake inhibition was very intentional and was the result of in-fighting as well as engineers trying to make the Dara demo:

> Two days after the product team distributed the document discussing "rider-experience metrics" and limiting "bad experiences" to one per ride, another email went out. This one was from several ATG engineers. It said they were turning off the car's ability to make emergency decisions on its own like slamming on the brakes or swerving hard. ...

> The subtext was clear: The car's software wasn't good enough to make emergency decisions. And, one employee pointed out to us, by restricting the car to gentler responses, it might also produce a smoother ride. ...

> A few weeks later, they gave the car back more of its ability to swerve but did not return its ability to brake hard.

And then they hit Herzberg.

The UberATG leaders who made it through the Dara / Softbank demo likely vested (or are slated to vest) millions of dollars.

[1] https://www.businessinsider.com/sources-describe-questionabl...


> with 0.2 seconds left before impact, the car sounded an audio alarm

It takes about 300ms for your brain to react to unexpected stimulus, so the alarm is useless in this case. Sad.


The alarm's entire purpose is to shift the blame to the engineer in the driver's seat


It speaks to a general problem with ML systems. Real-world problems are open-ended, and a system that cannot reason about what it sees but merely applies object classification is completely clueless and won't reach the level of fidelity that is needed for safety.

I'm increasingly convinced that virtually every unstructured problem in the physical world is an AI-hard problem and we won't be seeing fully autonomous driving for decades.


We as humans possess some skills that are so profoundly important but also so subtle that we don't even recognize them as skills. And excessive optimism about AI is a lack of recognition of how fundamental those skills are to our navigation of the world (both figuratively and literally.)


I wonder why they picked such a vehicle to test this. Why not something with a smaller mass, to have less momentum on impact (e.g. the Google car)?


Actually yeah, can we not rig some kind of bicycle with the same sensors and test self-driving that way?

The steering mechanism would have to be modified obviously, but surely steering is a trivial part of the problem compared to actually figuring out where to steer to?


Part of path planning will involve vehicle dynamics, braking, acceleration and steering response, and the envelope of the vehicle. All of those will heavily impact how a car should drive.


But that's something the AI can learn fairly easily, isn't it? The difficulty in this case wasn't that the AI had issues figuring out the handling of the SUV, it's that it had issues detecting a pedestrian and a dangerous situation. You can still run into these issues on a bicycle, with a much lesser chance of killing people.


This is totally true but the issue was more with the methodology of the detection rather than the detection itself. Regardless of the type of vehicle, the software wasn't good enough for real-world testing.


cycle dynamics is an unsolved control problem


That's not entirely true, I can drive a bicycle just fine.


I can drive a car just fine, too. Are we still talking about autonomous vehicles?


My personal anecdotal belief is that it’s much, much harder for a robot to ride a bike than to drive a car. It requires human-level bodily/physical intuition that AI is nowhere near. We can barely program robots to walk normally, let alone ride a bike.


As I remember it, the driver got a lot of heat for fumbling with her phone (or 2nd computer?) right before the accident. I don't think however that 1.2s is a bad reaction time for a complex situation.

Would it have killed the developers to make the car sound its horn when it gets into this absurd 1s "action suppression" mode?


> the driver got a lot of heat for fumbling with her phone (or 2nd computer?) right before the accident

Based on news stories I found, she was glancing at a television show on her phone [1].

> make the car sound its horn when it gets into this absurd 1s "action suppression" mode?

If they added the suppression because there were too many false positives, that would just have resulted in the car honking at apparently arbitrary times. It's just converting the garbage signal from one form into another. It's still too noisy to be reliable.

1: https://www.azcentral.com/story/news/local/tempe/2019/03/17/... Vasquez looked down 166 times when the vehicle was in motion, not including times she appeared to be checking gauges [...] In all, they concluded, Vasquez traveled about 3.67 miles total while looking away from the road. [...] starting at 9:16, Vasquez played an episode of “The Voice,” The Blind Auditions, Part 5, on her phone.


> If they added the suppression because there were too many false positives, that would just have resulted in the car honking at apparently arbitrary times. It's just converting the garbage signal from one form into another. It's still too noisy to be reliable.

I love how they went from "our vision system is too unreliable to have warning signals every time it doesn't know what's in front of it" to "okay let's do it anyway but just not have warning signals". Like it didn't make them stop and think "well maybe basing a self-driving car off of this isn't a good idea".


Oh, that's not being fair; they checked in a fix after all!

  /* issue 84646282 */
  sleep(60 * 1000)


I don't know why you're downvoted, because I find your comment funny. Reminds me of a similar real fix for a race condition I found somewhere in one of the companies I worked before:

    Thread.sleep(Random.nextInt(1000));


Compared to classifying a pedestrian with a bicycle crossing the street in the dark it is easy to track the gaze of the safety driver and stop the vehicle when they are not looking at the road.


> I don't think however that 1.2s is a bad reaction time for a complex situation.

1.2 seconds after hitting a pedestrian. That's a pretty poor reaction time. Typically you want to apply the breaks before you come into contact with a person.


> you want to apply the breaks before you come into contact with a person

Usually breaks are applied shortly after the moment of contact, but the brakes should certainly be applied earlier.

(I'm genuinely sorry, but I couldn't resist.)


...darn it. I think I have to leave it that way now. Can I blame my phone? Yeah, I'm gonna blame my phone.


That has nothing to do with the reaction time. Yeah, they weren't looking at the road, but that's a separate issue that doesn't involve reaction time. Going from an unexpected buzzer to full control of the vehicle in a single second is a pretty good speed.


No it's not, and hitting the brakes isn't "full control". Honestly, how long do you think it would take you to hit the breaks if something jumped out at you? It would be almost instant.

Edit: ok, I was just making a bit of a joke at first, but I looked it up. Reaction times vary by person, but tend to be between 0.75 and 3 seconds. 2.5s is used as a standard, so I guess I have to conceded that 1.25s is pretty good... I guess, for whatever that's worth.


Just a few minutes ago I had a very similar situation, although in daytime. I was going at about 40 km/h straight ahead in my lane, past a bus standing at a bus stop on the right side of the street. At the moment I was passing the bus, a young person almost jumped out in front of me, maybe 15-25 meters away. They were hidden behind the bus, so I had no way to see this coming in advance. Fortunately they realized I was coming and backed off, and I also managed to stop completely before reaching them.

So if my reaction had been 5.7 seconds, I'd definitely have applied the brakes far too late. I conclude the total time from classifying the object moving into my way to applying the brakes was less than a second. (And btw, my car has an emergency braking / pedestrian avoidance system and it didn't trigger, so I was faster).


You had the benefit of experience, though: "hope that nobody jumps out from behind that bus, as people tend to do". Hard to formalize, IMHO.


>Edit: ok, I was just making a bit of a joke at first, but I looked it up. Reaction times vary by person, but tend to be between 0.75 and 3 seconds. 2.5s is used as a standard, so I guess I have to conceded that 1.25s is pretty good... I guess, for whatever that's worth.

We have a rare sight here: someone not being right, learning more about the situation, changing their opinion, and then making an edit about it all.

Kudos, and thanks for making this a better place for discussions :)


It is possible! Happens more on HN than it does on Reddit at least. I'm ok with being wrong.


Reminds me a bit of "Cisco Fixes RV320/RV325 Vulnerability by Banning “curl” in User-Agent".



I would like to see the failure-mode-effects-analysis (FMEA) that identified "action suppression" as a means of mitigating a nuisance fault on a safety critical system.

And I'd like to understand why the designers felt this was okay... (Assuming, of course, this was the actual reason for the delay. They may have had a legitimate reason?)

I hope it's not the case that the hazard analysis stated that the human in the loop was adequate no matter what haywire thing the software did.


> A hardcoded 1 second delay during a potential emergency situation. Horrifying.

As a controls engineer in the automotive industry, I can tell you that a 1-second delay for safety-critical systems is not atypical.

The expectation is that the normal software avoids unsafe operation. Bounding "safe operation" is difficult, so if an excursion is detected, there's essentially a debounce period (up to 1 second) to let the normal software correct itself before override measures are taken.

This helps prevent occasional random glitches or temporary edge cases from resulting in a system over-reaction, like applying the brakes or removing torque unnecessarily, that would annoy the driver and potentially cause unsafe operation themselves.

Obviously there are still gaps with that approach. But there is supposed to be a driver in charge; and the intent is to prevent run-away unsafe behavior. It essentially boils down to due-diligence during development.
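
A minimal sketch of that debounce pattern, with an invented window length; the point is that a single glitchy sample doesn't trigger the override, only a persistent fault does:

    class FaultDebouncer:
        """Escalate to an override only after the unsafe condition has persisted
        for a full confirmation window."""
        def __init__(self, window_s: float = 1.0, dt_s: float = 0.05):
            self.required_ticks = int(round(window_s / dt_s))
            self.ticks = 0

        def update(self, unsafe_condition: bool) -> bool:
            """Call once per control cycle; True means fire the override."""
            self.ticks = self.ticks + 1 if unsafe_condition else 0
            return self.ticks >= self.required_ticks

The post-crash change quoted elsewhere in the thread effectively removes that confirmation window once a collision is predicted: brake immediately, even if braking can only mitigate rather than prevent the crash.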


To counterbalance your point: the original Volvo emergency braking system in the car saw the crash coming and wanted to brake 1.3 seconds before it happened.

So Volvo's engineers didn't think at all like you do/say. Their system was 0.1 seconds faster than Uber's at detecting it, 1.1 seconds faster if you factor in Uber's action suppression, and it would have braked 2.1 seconds sooner than the Uber did.

Why didn't it? Because Uber deactivated its braking ability.


>>> 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.

This is exactly why I keep saying that autonomous vehicles are not 10 or even 20 years away. More like 50-100 years away, if that. Same reason why, famously, a group of researchers in the 60s thought that solving computer recognition of objects would take a few months at most, and yet in 2019 our best algorithms still think that a sofa covered in a zebra print is actually a zebra, with 99% confidence.

Had a human actually been paying attention to the road, I'd bet they would have started braking/swerving as soon as they saw something, even if they weren't immediately 100% certain it was a human - a computer won't until it's 99%+ certain, which is too risky an assumption considering the state of visual object recognition.


> System can't decide what's happening.

> At least the car would still try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.

"Don't know what it is, let's ram it."

Never mind not detecting a pedestrian, that in itself is terrifyingly incompetent and negligent.


This is ridiculous. I already understand that much seemingly critical software is just unsafe/insecure, but people are actually running this, without multiple layers of safety-net mechanisms, on a high-speed machine that can, and did, kill people? The backup driver is one broken safety net, and there is no other working redundancy?


>> try to avoid a bicycle, in principle, instead of blindly gliding into it while hoping for the best.

A better question than why this happened: how often do these cars "see" a bicycle and decide to glide on by? How often are they seeing things horribly incorrectly, and we are all just lucky nothing happens?


> I bet they added it because the system kept randomly thinking something serious was going to happen for a few milliseconds when everything was going fine.

Smoothing bugs out via temporal integration. The oldest trick in the book.


The car hit Elaine Herzberg.

  Elaine's Obituary

  Elaine Marie Wood-Herzberg, 49, of Tempe, AZ passed away 
  on march 18, 2018.  A graveside service took place on 
  Saturday, April 21, 2018 at 2:00pm at Resthaven/Carr-
  Tenney Memorial Gardens in Phoenix.

  Elaine was born on August 2, 1969 in Phoenix, AZ to Danny 
  Wood and Sharon Daly.

  Elaine grew up in the Mesa/Apache Junction area and 
  attended Apache Junction High School. 

  Elaine was married to Mike Herzberg until he passed away.

  She was very creative and enjoyed drawing, coloring, and 
  writing poems.  She always had a smile on her face for 
  everyone she met and loved to make people laugh.  She 
  would do anything she could to help you, and was there to 
  listen when you needed it.

  Elaine is survived by her mother Sharon, of Apache 
  Junction, AZ; her father Danny, of Mayor, AZ; her son 
  Cody, of Apache Junction, AZ; her daughter Christine, of 
  Tempe, AZ; her grandchildren Charlie, Adrian, and Madelyn; 
  her sister Sheila, of Apache Junction, AZ; and many other 
  relatives.
But a homeless person is only ten points. So let's remember the car.

https://www.sonoranskiesmortuaryaz.com/obituary/6216521

https://afscarizona.org/2018/05/14/the-untold-story-of-elain...


This is why I will never use any service from Uber, ever. I may be insignificant, but at least I know I am doing my little part not to support this disgusting movement of "Move Fast and Break Things" which includes actual casualties. Everyone involved except the lowest pawns gets away with it, so the next company can do it even worse.


It's a choice you can make, but it's worth noting that the alternative to getting SDCs working isn't "Nobody gets hit;" the alternative is "We keep having humans operate multi-ton vehicles with their known wetware attention and correction flaws and an average of 3,200 people die a day."

That doesn't imply Uber has the right chops to solve the problem, but I hope someone does.


False dichotomy, my friend.

Look at Europe (or heck, NYC) for alternatives to a car-dominated society:

* walkable, mixed-use, dense neighborhoods

* public transportation (rail-based, in particular)

* car-free streets (and cities)

The solution to traffic deaths is not self-driving cars. It's moving away from the Levittown-style suburbs that have proliferated across the US since WWII.


>The solution to traffic deaths is not self-driving cars.

Moving the world to a traffic-free society is not happening in a hurry, if ever. I'm writing this from a pedestrianised area in London, but we still have plenty of cars.

Self driving type tech however has the potential to transform things in the next decade or so. Even if we don't have actual self driving the collision avoidance tech is getting quite good eg https://news.ycombinator.com/item?id=21388790


Cool, so tear down American civic architecture to its bedrock and rebuild it from scratch.

Not impossible, but less than likely in the short horizon. We'll probably get working SDCs sooner.


True that, but not mutually exclusive.

We used to have streetcars. We tore them down. We can put them back.


And a self-driving streetcar is a more constrained problem to solve than a self-driving car.


Yup! Still nontrivial, but actively worked on[1].

[1]https://www.wired.com/story/siemens-potsdam-self-driving-str...


The interesting difference is that the “wetware” knows, without ever being trained, that it’s going to hit another person, and stops the vehicle. If you think SDCs will reduce deaths at all, I believe you’re ignoring reality. There are plenty of other options, like public transport or remote work, that are infinitely better than some cocky engineers thinking replacing a human brain is easy.


6,000 pedestrian deaths a year suggests the wetware may not know to a level of certainty we should consider exclusively acceptable.

And I haven't met anybody working on the problem who thinks it is easy. But a lot of them do think it's worth the hassle if it will save even a fraction of tens of thousands of lives a year (even if fully autonomous operation is decades out, the semi-autonomous driving enhancements that have come from the research are already saving lives). Adopting and enhancing mass transit is also an excellent idea, but I think it's unrealistic to assume that will work exclusively. America has had quite a few decades to decide mass transit is something that everyone will jump on to, and it seems to not be happening.


And you still think deaths are going to be reduced by this? Read the article again. This will not go away, more pedestrians will be killed by self driving cars. Deaths will never be reduced to zero, even with self driving cars. But you will take jobs away from people driving for a living.

Public transport doesn’t work in the US because of how garbage it is. If they spent half an effort building up a proper transportation system, adoption rates would be different. I live in CA and we have some of the better transportation systems in the country. It takes me 1hr to get to work via car or 2-3 hours via public transport. Solve that problem and you’ll see people adopt it real quick.

Further, the real problem is people not obeying traffic laws. Not that they can’t, they’re choosing not to. An easy solution is to put a device in vehicles that automatically cites a person for things like crossing solid lines, merging with no signal, speeding, etc.


Deaths will never be reduced to zero, no. But if a self-driving car can cover twice as many miles between accidents as a regular car, the number of deaths per year plummets as more SDCs are added, assuming about the same number of total miles driven per year.
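
Back-of-the-envelope, with entirely made-up round numbers rather than real statistics, just to show the scaling:

    human_fatalities_per_mile = 1.0e-8   # hypothetical baseline rate
    sdc_fatalities_per_mile = 0.5e-8     # "twice as many miles between accidents"
    total_miles = 3.0e12                 # hypothetical annual miles, held constant

    for sdc_share in (0.0, 0.25, 0.5, 1.0):
        deaths = total_miles * ((1 - sdc_share) * human_fatalities_per_mile
                                + sdc_share * sdc_fatalities_per_mile)
        print(f"SDC share {sdc_share:.0%}: ~{deaths:,.0f} expected deaths/year")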

... And SDCs can be programmed to obey traffic laws.

As for jobs, we're not going to see jobs taken away by self-driving cars in the near future. More likely, we'll see a fleet of vehicles with driver assist technology derived from attempting to solve the self-driving problem that will make the lives of career drivers, such as those in the trucking industry, that much better and safer.

I agree that improvements to mass transit would also help. But those aren't mutually exclusive problems.

We have to keep in mind, it's not an either-or story, it's a statistics and numbers story. On the same day this woman was tragically killed, nine other people were killed in traffic accidents. They just don't make the news, because that tragedy has become so ubiquitous that we are utterly desensitized to it.

We shouldn't be.


There’s a lot of ifs and predictions here.

How many deaths would be prevented by automatically citing a vehicle for breaking traffic laws? This is a much cheaper and simpler option than an SDC.

SDCs can be programmed to obey laws, I never said they couldn’t. However a software system will not reach the same level of reasoning a human brain can in our lifetimes. Compute capacity isn’t even close yet. So again, an SDC / Software System / AI is no replacement for a human brain.

We have driver assist technology now. No company in the world is going to keep a human driver on the books when they can be replaced by a machine. History is there as proof. It’s not a question of if they will be replaced, it’s how long it will take to make the technology robust enough.

They don’t make the news because death is a normal, acceptable thing for us. People die, accidents happen. Covering every person that dies would not at all be time efficient.

There’s also something to be said for darwinism here, but I’m not going to get into that.


> They don’t make the news because death is a normal, acceptable thing for us.

That's how human cognition deals with things we don't think we have control over. There was a time when smallpox deaths were just "part of God's plan."

I disagree the car crash body count is inevitable or needs to stay normal any more than some people's children just naturally succumb to smallpox.

> There’s also something to be said for darwinism here, but I’m not going to get into that.

Yes, I definitely wouldn't. It's an offensive attitude to have around people who have lost friends and loved ones this way to imply they weren't good enough to live.


Umm, have you lived in any non-car dominated city/country? Multimodal public transport is a thing.


I hope you don't ever set foot in any other car either, considering they kill a million people every year. But a million is just a big number, it's not as powerful as 1 obituary.


This accident didn't save any lives though. It's just associated in your mind with saving lives. It's one more death; there's no decrease in traffic accidents to compensate.


So you're implying that our tolerance for risk in the pursuit of massive systemic gains should be exactly zero?


Trading risks now for future gains is always tricky.

Suppose the CEO of Uber was a cannibal, and you framed letting him eat people as a necessary perk in order to keep him happy and the self-driving program on track. Would it be valid to say the number of people it's permissible for him to eat is exactly zero, even if it slows down the production of a truly self-driving car? I mean, what's one or two lives compared to 40,000 a year or whatever? There's a lot of uncertainty about the costs and benefits though, even if you strictly adhere to a utilitarian viewpoint.


I'm fine with taking risks. I just don't think we should be making a cavalry charge into the machine guns. I know the payoff would be great if we succeeded, but it's still not a good idea.

I'm deeply excited by the possibilities of self-driving cars, and I would agree that it's necessary to take risks to make them a reality. The question is always if we're taking a necessary risk or just being reckless.

Uber has taken unnecessary risks and learned relatively little from them. They didn't need a fleet on public roads to tell them that object detection was terribly broken.


I actually don’t think the payoff will be great. What’s the real benefit here? Some jobs will be lost, but now we have a large hackable software system that humans have little control over. If I have to sit in the front seat and watch the road then it really doesn’t do anything for me. And then there’s always going to be instances like the “I didn’t test that scenario” crap above.

