Uber Not Criminally Liable in Death of Woman Hit by Self-Driving Car (npr.org)
215 points by abhisuri97 13 days ago | 325 comments





> In the six seconds before impact, the self-driving system classified the pedestrian as an unknown object, then as a vehicle, and then as a bicycle, a preliminary report from the National Transportation Safety Board explained. While the system identified that an emergency braking maneuver was needed to mitigate a collision, the system was set up to not activate emergency braking when under computer control.

The prosecutor made the wrong call here. This part is absolutely criminal negligence. Putting a “self driving” car out there that doesn’t have emergency braking enabled (apparently because it creates too many false positives) is an unjustifiable risk. Working emergency braking should be the first thing perfected, before the computer gets to control the car.
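
For a rough sense of scale, here is a back-of-the-envelope sketch (the ~7 m/s^2 full-braking deceleration is my own assumption for dry pavement; the 44 mph figure comes from the police report quoted further down the thread):

  # Back-of-the-envelope stopping math -- illustrative only, not Uber's numbers.
  # Assumes 44 mph (police report quoted below) and ~7 m/s^2 full-braking deceleration.
  MPH_TO_MS = 0.44704
  v0 = 44 * MPH_TO_MS           # ~19.7 m/s
  a = 7.0                       # assumed deceleration, m/s^2
  t_stop = v0 / a               # ~2.8 s to a full stop
  d_stop = v0 ** 2 / (2 * a)    # ~28 m to a full stop
  print(f"full stop in ~{t_stop:.1f} s over ~{d_stop:.1f} m")

So even a braking decision made only a couple of seconds before impact, well inside the six-second window the NTSB describes, would have shed most of the speed. The point isn't precision, just that an enabled emergency brake had room to act.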


Arizona law is pretty friendly to motorists. Generally speaking, you're not supposed to jaywalk, and when you do, you are taking your life into your own hands and the car that hits you is not liable. It's even worse for the pedestrian -- even at an intersection, if you walk quickly into oncoming traffic you do not have the right of way and are responsible for anything that happens to you. It's just not the case that the car is assumed to be at fault in situations like these -- the pedestrian is assumed to be at fault. The pedestrian is not automatically given the right of way and is basically never given the right of way outside of an intersection. This is different from the laws in other states like California.

Situations like drunk driving can make a difference, but in this case that's a hard claim to make. Maybe if the car was swerving out of its way to hit her, you could make that case. But arguing that it wasn't doing enough to stop -- that's a tough sell given AZ law. The car had the right of way, it was not acting maliciously, and she walked right in front of it. According to AZ law, asking the car to make what might be a dangerous last second swerve into another lane or slamming on the brakes to avoid hitting her is not a legal obligation, as these are unsafe maneuvers. There might be another car behind them or other cars/pedestrians alongside them, so the law in AZ doesn't require these types of high risk actions to avoid hitting pedestrians who walk right in front of oncoming traffic.

Moreover this person was jaywalking at night in an area without street lights, without even bothering to look both ways, across a median, wearing a black hoodie, on a high speed road, right in front of a car that had proper headlights and was going a steady speed.

Anyone familiar with AZ law knew that Uber wasn't going to be charged. You may think the law itself is wrong and that cars should be legally required to take last second high risk evasive maneuvers. These are all tradeoffs on which reasonable people will disagree, but changing the law is the job of the Arizona legislature and not of the prosecutor, who made the correct decision, even if it seems like the wrong decision to you.


Minor correction: the area was actually very well lit with street lights. The Uber dashcam video was just poor quality so it gave the impression it was really dark.

https://arstechnica.com/cars/2018/03/police-chief-said-uber-...


That's not a minor correction at all, it's a proper correction of the smearing of the victim.

Furthermore, Uber had specifically tasked the "safety driver" with both "safety driving" and operation & oversight of the automation system, wilfully splitting their focus where other manufacturers (and previous Uber trials) had one person at each post such that the safety driver could actually focus on safety driving.


Even if it was broad daylight, it still would be unlikely that the driver would have been charged.

Read the section on "duty of care" here: https://www.jacksonwhitelaw.com/az-personal-injury/auto-pede...

and note that drivers only have a duty of care to yield to pedestrians who have the right of way. Pedestrians jaywalking never have the right of way. They are only allowed to cross at intersections.

Drivers do not have a duty of care to yield to pedestrians who do not have the right of way, although drivers have other duties.


The driver was watching TV on her phone. She might still be charged.

You're right that the bullet point list in your link doesn't have anything that applies to Uber, but there is no carve-out in the negligent homicide law that says running over jaywalkers is legal. The prosecutor could have looked at:

* whether Uber was warned that having one operator instead of two would be unsafe (they were)

* whether having one operator to monitor the road and the software is inherently unsafe

* whether operators were told that the system would never emergency brake and never warn about needing to brake

* whether Uber reviewed footage to ensure operators were paying attention, whether they knew that operators weren't paying attention, etc.

I hope all of this will be in the NTSB report.


> The driver was watching TV on her phone. She might still be charged.

Has that been conclusively established, rather than they were watching the central console where the automation they were also supposed to oversee is located?


Yes, it was in the official police report. The driver was watching "The Voice" on Hulu at the time, and "...looked up just 0.5 seconds before the crash, after keeping her head down for 5.3 seconds while going 44 miles an hour".

I don't know AZ law, but doesn't the driver have a duty to use a reasonable level of caution to avoid hurting others, regardless of who has the right of way? For example, looking at the road while driving might be within reasonable levels of caution.

Where I'm at (Sweden, so far away and possibly a very different legal situation), right of way is not an excuse for negligent driving. You don't get to run people over unpunished just because you have the right of way if it could have been prevented by paying proper attention.


> Even if it was broad daylight, it still would be unlikely that the driver would have been charged.

I'm not arguing that the driver should have been charged at any point. I didn't do so even back when the event happened.


Here, I'm using "driver" as a proxy for Uber as the entity controlling the car, not the person sitting in the driver's seat, who was apparently reading. Sorry for the ambiguity.

E.g. if a human was driving the car and the car did the exact same thing, as the Uber car did, but it was broad daylight, then they still most likely would not be charged. If you jaywalk and get killed, it's basically your fault, and the only exception would be if the driver was drunk, or speeding, or not obeying traffic signs, etc. Jaywalkers don't have the right of way, and cars are not obligated to yield to pedestrians who don't have the right of way.


In Germany you have to stop when you see somebody on the street, independent of their right to be there in the first place and independent of how many laws they are breaking by being there. You have to stop unless, say, it is impossible for you to do so because they jumped onto the street suddenly, or it is plausible that you didn't see them.

This is because a person's right to be alive outweighs your right of way at any time, and it is none of your business whether that other person acts lawfully or not.

Interesting to see how much these general principles differ.


I think we all agree that there is a moral imperative to stop.

But that doesn't necessarily mean that the law should try to perfectly reflect that moral imperative. Laws generally don't work well when they attempt a high resolution of morality, because the law can only approximate justice, and the more complex the approximation, the more arbitrary and ambiguous the law becomes, which is itself unjust.

The specific problem here is who decides whether the driver did "enough"? The driver can say "If I tried to slam on the brakes, I was afraid my car would spin out of control" or "If I tried to swerve away, I was afraid I might collide with someone". They might say "I thought there was a car behind me, and if I hit the brakes, I would injure that person. It was an honest mistake that the car which used to be behind was no longer there." Etc.

Then you have to decide whether you believe them. In an environment where someone is innocent and must be proven guilty, you don't get a whole lot more precision by adding more precision to the law. Or do you drop the requirement of presumption of innocence?

Also, that leaves a lot of discretion to the prosecutor, which may be abused, or imposed arbitrarily, and is ambiguous.

So you are creating a lot of ambiguity in the law in an effort to precisely match the moral outcome.

The other option is to have clear responsibilities. The pedestrian must do X. The driver must do Y. If the driver does Y, he is not charged. When both X and Y are followed, there can be no accident.

But "Must do all you can" is not clearly defined. "Not speeding, obeying traffic signs" -- this is more clearly defined. So in AZ, the law is a little more clear, but at the expense of not being as morally precise. It's a reasonable trade off. It may suffer from an abstract moral critique, but I'm not sure on balance it delivers less justice.


I thought it would have been a much simpler case of "Driving without due care and attention", of which both Uber and the human driver were guilty.

But you need to define "due care". I gave a summary of the AZ definition of the due care that applied to both drivers and pedestrians. E.g. the pedestrian has to make sure to cross at crosswalks, the driver has to obey traffic signs and yield to those with right of way, etc.

If you use a circular definition of "due care" and make it something like "due care means you must take all action that is reasonable to avoid an accident" you've again ducked the issue of giving due care a well defined meaning.


> But that doesn't necessarily mean that the law should try to perfectly reflect that moral imperative.

Indeed, which is why such laws should be and are aimed at road safety.

> the more arbitrary and ambiguous the law becomes

"When you hit a pedestrian with a car, you're in the wrong", it doesn't become more unambiguous than that.

> do you drop the requirement of presumption of innocence?

That's framing the debate. If you hit someone with your car, you're guilty of hitting someone with your car. There is no "presumption of innocence".

> So you are creating a lot of ambiguity in the law in an effort to precisely match the moral outcome.

No. No ambiguity, and no matching moral outcomes, but improving road safety.

> in AZ, the law is a little more clear, but at the expense of not being as morally precise.

But it's not more clear, and moral precision is not a goal.

> It's a reasonable trade off.

Here I should draw up the statistics of accidents involving cars and pedestrians in Arizona and Germany, but I don't think that's necessary.

> It may suffer from an abstract moral critique, but I'm not sure on balance it delivers less justice.

That's just an incredibly U.S.-centered point of view. I don't mind, but your parent was talking about the German system. Maybe AZ works in comparison to the laws in other states with a very car-centered way of life, but in comparison to Germany it's really just bat-shit insane if the goal is improving road safety, but granted, that might not be the goal.


> "When you hit a pedestrian with a car, you're in the wrong", it doesn't become more unambiguous than that.

Such a general principle would be unreasonable. I'm all for putting pedestrians' safety first, but a driver who is following all rules and safety principles can still hit a pedestrian without any fault - the extreme (but not only) example is somebody who decides to commit suicide right when you're driving by.


In Russia at least for awhile there was a thriving business of insurance fraud where pedestrians try really hard to get cars to hit them. Dashcams became essential safety equipment. Throws a lot of mud in that water.

> example is somebody who decides to commit suicide right when you're driving by.

A friend of mine had this exact issue. He was driving at the speed limit, as usual. An old lady just jumped in front of his car.

At first, he got a murder charge; then it was dropped when the prosecutors saw on the road cameras that the old lady 100% wanted to die.


I agree with your reasoning. There are plenty of situations where you can do everything in your power to avoid hitting someone, and still hit them. n of 1 anecdote: While driving down a 40mph road with tall hedges on the side of the lane I was in, at night, a homeless man darted out of the hedges directly in front of me. I jammed the brakes, but still hit him going about 10mph. He got off the ground and ran off before I could even get my hazard lights on. If I'd swerved right, I would've gone through the hedges off a steep embankment into a river; left would've taken me into an oncoming vehicle. I had maybe 25 feet to stop. He was at fault, and I did the best I could possibly do. The law proposed by craigsmansion is insane in this scenario.

Just a small correction. It's not something proposed by me. It's a description of how things are at the moment, at least in the north of Europe.

The difference lies exactly in the details you describe. If an accident happens, you're not in the clear because you had "right of way", but because you, having good control of your vehicle, exhausted all possibilities to avoid the calamity.


In the case of an unavoidable suicide, it gets mitigated up to a point where any course of action is pointless because it would not serve road safety. The outcome would be the same.

The difference here is that even if the pedestrian or cyclist was in the wrong, you're not automatically in the right. Your own behaviour as a motorist stands on itself, and you're supposed to take the difference in size and weight into account.

It may not appeal to everyone's sense of fairness and just punishment (although, in fairness, only one of the parties can realistically be mauled or even killed), but from a road safety perspective it leads to more careful driving.


> "When you hit a pedestrian with a car, you're in the wrong", it doesn't become more unambiguous than that.

Even if a pedestrian runs out onto a highway and it's physically impossible for you to avoid hitting them?


> Even if a pedestrian runs out onto a highway and it's physically impossible for you to avoid hitting them?

The rule over here is that a driver must always be in control of their vehicle. Exceptional circumstances such as those you describe (which are completely and utterly unlike those of the Uber case) would act as mitigating factor, possibly down to no fault if there was no way for a reasonable driver to avoid the accident.

But the default state of things is that if you decide to put yourself in control of a multi-ton vehicle, you had better be able to handle it or face the consequences of your recklessness.


My impression is that pedestrians don't jaywalk in Germany nearly so much as in the U.S., as there is a cultural taboo against it. People get mad at you when you do that in Germany, or at least that was my impression -- you can confirm or deny.

So it could be that with pedestrians more conscientious, you can put most of the blame on the drivers by default and not have it be an unfair system. That wouldn't be suitable in the U.S., where people are generally less responsible both as drivers and as pedestrians.

In terms of statistics, yes, Germany has ~4 vehicle deaths per billion km travelled whereas the US has ~7, but again, car ownership is much more common in the U.S. and necessary to get to work or do chores. People drive every day, when they are tired, and since almost everyone has to drive long distances you have a lot more irresponsible people doing it out of need. I also think that the streets are laid out in a less pedestrian friendly manner, which makes them less safe.


This is the most detailed and well reasoned comment I've read here. Thank you for your thoughts!

All of those ambiguities can be resolved by a court trial. The whole point of trying in court is to decide how the law applies in a particular case. The ambiguity is actually a feature because it leaves it up to the individual but clearly requires the individual to try to avoid hitting pedestrians.

As opposed to Arizona where they very clearly value human life less than your right to get to work early.


I'm sure in pretty much any country a driver is supposed to brake if there's a chance of collision.

But consider the following situation: An Average Joe drives down the road. Suddenly out of nowhere a pedestrian appears. Joe hits the brakes, but too late, the poor guy is killed.

He just didn't have time to react. It's not Joe's fault, unless it can be shown that he saw the pedestrian, could have stopped, but deliberately decided to teach him a lesson.

Now if exactly the same thing happened on a crosswalk, the driver would be at fault. A crosswalk, by design, is a place where pedestrians are crossing. In this case the driver should have expected a person appearing on the road and been prepared to stop. Unless it can be proven that the pedestrian did something nasty, like jumping onto the car from a tree.


Yeah, Arizona law seems to put drivers über alles.

> Drivers do not have a duty of care to yield to pedestrians who do not have the right of way, although drivers have other duties.

That's not what's written there at all.


It's odd to me that the video would be dark because of a poor quality camera. Why would Uber skimp on that?

My impression was that the main cameras system was too complex for the police to be able to quickly extract video at the scene. However, the car had a redundant consumer-grade dashcam that the police were able to take video from.

This proves that the area is well lit currently, but not necessarily that it was very well lit on the day and at the hour of the accident.

It was a very notorious case. I would expect the authorities to have quietly improved the lighting in the area and taken other corrective measures in all those months.


I actually used to bike that road all the time 3-4 years ago. The bridge section before it could be disturbingly dark, but the point where she was crossing was decently lit, as I recall. It was also somewhere I would never consider crossing, mostly because there was a crosswalk just up ahead.

Still, if I absolutely had to it was definitely an intersection to watch carefully for cars when crossing, but one with fairly long sightlines for that.


I wish I had a link for you, but there was a youtube video of the same section of road taken just a day (I think) after the accident. That video was well lit.

The Ars article above is from the week after the accident.

I certainly believe the Uber dashcam video was poor quality. But at the same time, the videos on the link you showed are a bit misleading because the holiday lights are not strung up all year, and in any case the collision happened after the bridge, where to the right of Mill Avenue is a big golf course which is going to be dark at night, and to the left are some desolate office buildings that would also be empty and dark. It's just not a well lit area, neither is it an area with a lot of pedestrians, so there aren't many crosswalks.

You can explore for yourself here:

https://www.google.com/maps/@33.4380916,-111.9434265,3a,75y,...


From the article:

"In this nighttime video, posted to YouTube by Brian Kaufman on Wednesday, the scene of the crash can be seen around 0:33"

"Black says in the video as he drives past the point in the road where Herzberg was hit (around 0:33)."


Ahh, OK, so those Christmas lights are gone by then. Yes, if you look, you see a park/golf course on your right and some office buildings on your left. I guess how well lit it is will depend on how many lights are on in the office buildings, plus of course the street lights. But if you poke around the area, you see that the street lights are spaced too far apart, at least that's my impression. That area north of the river is not a pedestrian area, even along Mill.

The victim was struck pretty much directly under a streetlight, not at the halfway point between two streetlights. And the lights in office buildings are pretty much irrelevant; streetlights far outshine them.

It is irrelevant how different the lighting might be in other places or at other times.

This [1] has video from the dashcam as well as commercial video from news crews immediately on the scene. Keep in mind the news video is heavily lit from all the flashing lights. It's not as dark as the Uber video makes it look, but it's also nowhere near what you'd call "very well lit" especially given that the pedestrian was wearing all dark.

Something that stands out to me on ars clips is that the street lights look like fireballs, even from a distance. I'd guess they have their brightness or some other setting pumped way up. Also looks like the drivers are cruising with brights and/or halogens.

[1] - https://www.youtube.com/watch?v=ufNNuafuU7M


> This [1] has video from the dashcam as well as commercial video from news crews immediately on the scene. Keep in mind the news video is heavily lit from all the flashing lights. It's not as dark as the Uber video makes it look, but it's also nowhere near what you'd call "very well lit"

1. It absolutely is. The specific spot (00:33 in the Kaufman video) is clear as day on a cellphone.

2. Visibility doesn't even have any relevance in the first place: self-driving systems don't rely on visible-light cameras, there is no obstacle in the middle of the road, there is no fog, there is no rain, the self-driving system had full and perfect visibility all along.

> especially given that the pedestrian was wearing all dark.

Given how well-lit the road is, that would have made them more visible against the background, not less. Again not that it has any relevance.

> Something that stands out to me on ars clips is that the street lights look like fireballs, even from a distance. I'd guess they have their brightness or some other setting pumped way up.

Yeah, sure, who'd think a camera with a finite and middling dynamic range (such as a cellphone's camera) would try to actually capture information at night and thus saturate on bright direct light sources.

> Also looks like the drivers are cruising with brights and/or halogens.

Yes of course, a rando would equip their car with an omnidirectional halogen which somehow magically brightly lights up the roadside but doesn't move with their car. You've certainly cracked the code here.


> 2. Visibility doesn't even have any relevance in the first place

It has relevance in showing that the pedestrian was failing to exercise their duty of care and was thus at fault. Jaywalking at night wearing a black shirt is pretty negligent. Jaywalking even in the day is negligent in AZ, but at night, across a major artery, it's crazy.

Unless you are going to argue that they knew it was a self-driving car, and so thought that the car would see them in the dark. But they apparently didn't even know a car was there, so that's a tough sell. They didn't even look if a car was coming.

A lot of people here are having a hard time coming to grips with the fact that a pedestrian has responsibilities and can be at fault in a collision with a car -- e.g. that the pedestrian doesn't automatically have the right of way or a right to be avoided every time they step into a street. I get that this is a tough concept for a lot of people to internalize, but if you're going to be arguing that someone should be charged, then at least the broad outlines of the law in AZ need to be understood.


I don't think people are arguing that a pedestrian has an absolute "right to be avoided every time they step into a street". But many of us (I think) do believe that a driver -- or an autonomous driving system -- does have some responsibility to take reasonable care to avoid accidents, regardless of who has the right of way.

While the concept of right of way may often have a bearing on who is held at fault for an accident, it does not -- or at least should not -- entirely absolve the driver of any responsibility to drive attentively and carefully in all circumstances.

In this case, it does sounds to me as though the pedestrian was at fault -- particularly in the context of AZ law -- but it also sounds as though both Uber and the "safety" driver were seriously negligent in their responsibilities, and their negligence was a major contributory factor in the pedestrian's death. They should be held to account.


Is AZ a significant outlier in this regard? Is AZ different enough from say, California, that Uber was incentivized to test in AZ as a way to mitigate liability exposure in the event of fatal crash?

I was always curious why they were testing there as opposed to somewhere closer to HQ.


I don't know the history too well, but remember Uber refusing to get some licenses in California and opting for AZ which doesn't require the same level of licensing. Others should correct me, but that was the buzz at the time.

I always thought AZ had these laws because historically a lot of retired people are there who have slower reaction times and aren't going to be doing any stunt driving swerves, and it's generally not a pedestrian or bike friendly place. They prioritize cars. When I saw the protestors a while back pouring onto freeways in the east bay, I remember thinking no way would anyone try that in AZ -- the cars would just run right over them.


> stunt driving swerves

You've put a lot of emphasis on how dangerous and crazy it would be to have not run over the pedestrian, but in other jurisdictions learning how to do an emergency stop is required for all drivers before being allowed on the road independently. There are many reasons a car might need to stop unexpectedly (including the possibility of cars in front behaving erratically) so stopping along with maintaining safe separation between cars are basic driving skills expected in many countries. From your description AZ might be an outlier though.


> but in other jurisdictions learning how to do an emergency stop is required for all drivers before being allowed on the road independently.

People learn to parallel park too. Those that don't do it regularly tend to be pretty terrible at it. The fact that people had to check a particular box at the time they got their licenses says little about what they can be expected to be able to do years after that.


Uber was testing in multiple locations, but they originally moved it to Arizona due to less regulation than they were encountering in California: https://venturebeat.com/2016/12/22/ubers-self-driving-cars-f...

I'm not sure about AZ laws, but it's starting to seem a lot like even Texas, where I live, has laws that favor the pedestrian more than Arizona does. I may be misinformed, but that is pretty damning if true, considering Texas' population distribution and the necessity of driving. As a permanent pedestrian, though, my day-to-day concern is more on whether or not people actually follow the local driving laws (they don't, a significant amount of the time) rather than the laws themselves.

I think you're right. As someone who doesn't own a car at all, I don't think the issue for accident avoidance is preventative effect of these types of laws, because even in states with pro-pedestrian laws, it's still very rare for drivers to be charged. What actually makes a difference is proper design of crosswalks, speed limits -- things like that.

AZ has very liberal regulation on self-driving vehicles (and cars in general), but IIRC the main factor is they have almost no oversight while California has some auditing and reporting requirements, including disengagement rates, and the CA DMV releases annual reports.

They would have a hard time testing their cars like this a lot of other places. No wonder they picked Arizona. Around here (I'm in Scandinavia) if a car hits something other than another car it is always the drivers fault, jaywalking pedestrians or not. It is the responsibility of car drivers not to hit softer things - full stop.

This and traffic in Paris are the real tests of self driving cars.


Yeah, same in the Netherlands, the car driver is at best 50% at fault, typically 100%.

I do want to see self driving cars in Amsterdam though, I think they'd just shut down and have a sob after not being able to move due to being swarmed with bikes well within any sort of safety area.


There was a recent video by a Waymo person who dealt with understanding the world. He went into great detail about how to observe things. One thing he mentioned is that the car also has the goal of getting to the destination. There was an example of a pretty packed school area. The safest thing to do might be to completely stop. The car however slowed down and tried to make its intentions clear. Unfortunately I cannot find the video.. it also went into detail of how to spot that something unexpected might happen.

> It is the responsibility of car drivers not to hit softer things - full stop.

I get the feeling a lot of people on HN have never actually driven a vehicle or at least haven't in recent memory. If you're driving through a street at a perfectly safe and reasonable speed a sufficiently stupid pedestrian is still perfectly capable of creating a situation that causes you to hit them. Nothing is foolproof. I'm sure I could get a truck driver who's doing nothing wrong to back over me if I behaved ignorantly enough.

Absolutes like "always the driver's fault" are just stupid, ignorant, poorly thought out, whatever you want to call it, but they sure aren't good.

It is every road user's ultimate responsibility to behave in a manner such that nobody else is forced to take emergency action to avoid them. The reason we have specific rules is so that people behave predictably (e.g. stopping at stop signs) making this easier.

I say this as someone who walks a couple miles through the city every day.


In countries like China, where the driver has more fault than a pedestrian and many aren't following the rules (drivers and pedestrians), what winds up happening is that overall cruise speeds are greatly reduced and everyone is constantly on edge. I joke that the cars never really stop for you, but everyone just slowly swerves out of each other's way.

It can work, it’s just less efficient.


I can of course see how they are not criminally liable under Arizona law where the automotive laws are pertinent... but what if someone turns off the safety features of a device, and then someone is killed who would have lived if those safety features had been on - would that be manslaughter under Arizona law?

I would imagine that Arizona law is at least part of the reason why these trials were being done there, and not somewhere else.

Man it must be hell to drive interstate and remember all the differences

Yeah. Not to mention the really nice dinners Uber lawyers took the prosecutor to to discuss the case.

[Citation Needed]

> The car had the right of way, it was not acting maliciously, and she walked right in front of it. According to AZ law, asking the car to make what might be a dangerous last second swerve into another lane or slamming on the brakes to avoid hitting her is not a legal obligation, as these are unsafe maneuvers

My mother has dementia and until she was put into a secure home she had a habit of wandering. On three occasions she was picked up by police after midnight crossing the road.

Your defense of laws protecting drivers is fine, but it doesn't cover outliers like my mother, who would have been knocked down at least, killed at worst. Is it ok for laws to protect the driver/AI driver from liability for running over pedestrians who have lost their personal safety faculties?


> but doesn't cover outliers like my mother who would have been knocked down at least, killed at worst

So would it be ok if that self driving car did see the pedestrian walking down the freeway, tried to avoid the collision, lost control and ended up running into a car coming the other way, killing all the passengers in both cars?

The car was traveling at high speed down a dual carriageway (i.e. freeway) in the middle of the night.

In a situation like that any sudden reaction by the on board computer is never going to end well.


So in Arizona after running over and killing a careless pedestrian, human drivers just drive on at the same speed with all care and attention and don't slam on the brakes in fear and panic and usually spin?

That's a valid point to make in the general case, for times when you have no good option, and hitting the pedestrian in the crosswalk may be the lesser evil.

But in this case, hard braking would have been plenty fine and with little downside risk (beyond the general problem of excessive unnecessary braking). The car's computer was aware of an obstacle in sufficient time, as a human driver would have been.


> hitting the pedestrian in the crosswalk may be the lesser evil.

I would not go so far as to say that, only because if that had been the situation then the car and its onboard system would have to take all the blame.

But in this case, based on the footage I saw the car was travelling down a freeway at high speed and at night.

Now some of the blame might be attributable to the car, but the real cause of the accident was the pedestrian walking down a freeway, so it seems clear most of the blame has to be attributed to the pedestrian.


> But in this case, hard braking would have been plenty fine and with little downside risk

Except to any cars that might be following behind, which then end up ploughing into the back of that hard-braking car.


The car was traveling at 44mph. A speed reduction to even 14mph would have probably made the collision survivable. If your argument is that the car was unable to drop 30mph over several seconds without causing an accident or swerving into opposing traffic, I don't know - maybe the self-driving tech isn't there yet and shouldn't be on the road at all.
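
For what it's worth, the arithmetic behind "drop 30mph over several seconds" is modest. A sketch with assumed round deceleration values (not a claim about what Uber's stack could actually do):

  # How long does shedding 30 mph take at ordinary braking rates?
  # 44 mph and 14 mph are from the comment above; decelerations are assumptions.
  MPH_TO_MS = 0.44704
  dv = (44 - 14) * MPH_TO_MS         # ~13.4 m/s to shed
  for a in (3.0, 5.0, 7.0):          # gentle, firm, hard braking in m/s^2
      print(f"at {a} m/s^2: {dv / a:.1f} s of straight-line braking")
  # prints roughly 4.5 s, 2.7 s and 1.9 s -- no swerve required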

No offense but the problem here is your mother walking right out in the street. A car should absolutely not be liable for accidentally hitting someone or having to risk their own life just because someone confused and unobservant walks out right in front of it.

> someone confused and unobservant walks out right in front of it

You can spot such situations pretty easily if you pay attention.

> or having to risk their own life

Reducing speed and not hitting someone cannot be compared to risking your own life. You're turning the argument around and adding unneeded emotions to this.


The problem is that a false positive causing emergency braking is also dangerous. So you'd be trading one instance of the car not emergency braking when it should against however many instances of it dangerously doing so when it shouldn't. That isn't an obvious call, even before you consider that the human driver can cause the car to brake when it should but can't cause it to not brake when it shouldn't.

The only real solution is to get the false positive rate down. Which is why they still have human drivers until they do.


> The problem is that a false positive causing emergency braking is also dangerous. So you'd be trading one instance of the car not emergency braking when it should against however many instances of it dangerously doing so when it shouldn't. That isn't an obvious call, even before you consider that the human driver can cause the car to brake when it should but can't cause it to not brake when it shouldn't.

Yes, building a self-driving car is hard. But if you don't even have emergency braking working, then your technology is too underdeveloped to drive on public roads. Keep trying on the test track.


False positive emergency braking seems less dangerous than running over obstacles. Wouldn't the expected accident be a car crashing into the Uber's back? If the false positive rate is so high that the number of such accidents outweighs the lesser risk, then perhaps Uber shouldn't let the cars drive autonomously with that software.

People can die from rear-end collisions as well, though. It's possible that Uber did the calculations based on the rates of each occurrence and concluded it would be more likely for a fatality to occur from a false positive emergency brake than from running over an obstacle.

Personally, I doubt that they really considered that; I believe that their software was rushed, flawed, and totally not ready for real-world driving. But I don't think it's a certainty that false positive emergency braking is always better than running over obstacles, especially if "running over obstacles" is determined to be a sufficiently rare event.


False positives are only a tradeoff when someone is following. The single most important advantage that driving computers have over humans to make up for their inherent deficiencies is that they don't have to split focus. The computer has no need to check the rear view mirror before going all in on the brakes, it should at any time know the current risk of a rear collision and tune thresholds accordingly.
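
That idea is simple enough to express even in toy form. A minimal sketch (every name and number here is made up for illustration, not anything from an actual self-driving stack):

  # Toy example: scale the braking-confidence threshold by rear-collision risk.
  def brake_threshold(rear_gap_m, rear_closing_speed_ms):
      """Brake more readily when nobody is close behind."""
      if rear_gap_m > 50 or rear_closing_speed_ms <= 0:
          return 0.3    # clear behind: brake on fairly low confidence
      time_to_rear_impact = rear_gap_m / rear_closing_speed_ms
      # The tighter the tailgater, the more confidence we require before slamming on.
      return min(0.9, 0.3 + 0.6 / max(time_to_rear_impact, 0.5))

  def should_emergency_brake(obstacle_confidence, rear_gap_m, rear_closing_speed_ms):
      return obstacle_confidence >= brake_threshold(rear_gap_m, rear_closing_speed_ms)

Unlike a human, the computer doesn't have to choose between watching the mirror and watching the road; it can price the rear-end risk into the braking decision continuously.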

You hit the nail on the head. People often jump to reactions without looking at bigger picture.

It is entirely likely that Uber engineers evaluated this scenario, and (correctly) decided that it was safer overall to turn off reactionary braking - and have that function performed by a human in the driver seat. If they hadn't, these cars might instead have been brake-checking people at 100x the rate.

It's easy to say things like "Uber should have waited until better safety features were available", such as the eye tracking suggestions mentioned elsewhere in this thread. But features like that take time - especially if they're development-only features that would have no place in the final product. Every additional safety feature pushes FSD deployment back.

Globally, 1.25 million people die from car accidents annually. That's over 3,000 people per day. For every day that you delay mass adoption of FSD, you are accruing massive amounts of fatalities that could have been avoided.

FSD does not have to be perfect, and its development will cost innocent lives - but if you're optimizing for minimal loss of life, it's the correct thing to do. Reactionary policies do the opposite of what you intend them to do - they cost more lives in the long run.


Uber was nowhere near ready for live tests. It's that simple. Relying on human intervention for split-second decisions where no actions are necessary for long stretches of time is pure insanity.

This accident alone demonstrates just how flawed the system is. This wasn't even an emergency situation to start.


>Relying on human intervention for split-second decisions where no actions are necessary for long stretches of time is pure insanity.

This is literally how all human driving works right now.

Seriously, look around at other drivers while you're on a freeway or interstate sometime.

They're not driving, they're singing along with the radio, or shaving, or putting on makeup, or sending text messages.

They're not Luftwaffe aces with eagle eyes and steely nerves monitoring the fuel mix and oil pressure while scanning the skies for the silvery glint of the sun off the wings of a P-51 that may herald their last few moments on earth. They're bored people doing their boring commute and even when they have their hands on the wheel they're not really paying attention.


They still require some attention all the time, and a sense of responsibility; for all our faults, we are at least selective about when we aren't paying attention.

Yes, humans suck. But that's still orders of magnitudes better than uber.


What's your criterion of being ready for live tests? Under what conditions is loss of innocent life okay?

A self-driving car is ready for live testing under the same conditions in which it could pass a driver's test, which is the minimum standard for anyone to be allowed to drive legally on public roads.

My road test consisted of driving out of a dead mall parking lot (careful to obey the posted 15 MPH speed limit -- this is actually the hardest part of the test!) turn right onto a street, turn left onto a residential cul-de-sac, make a three-point-turn, and return to the mall parking lot.

These cars could absolutely pass that test.


Globally 1.25 million people die from car accidents annually, but a vanishingly tiny proportion of them would have been saved by the proprietary tech of a taxi company that doesn't even operate in many of their regions, even assuming the technology is ultimately capable of delivering a net improvement on commercial driver fatal accident rates.

Uber isn't a philanthropic research endeavour, and it can't rationalise killing people based on lives it couldn't or wouldn't have saved in the event of its technology actually working. The reason they put tech that reportedly hit things every 15k miles and had a near miss every 100 miles on the road as soon as possible has nothing to do with optimizing for minimal loss of life and everything to do with optimizing for a unicorn valuation.


Which weighs different people's lives against each other. I don't see how developing a technology while factoring in unrelated people getting killed could ever get past an ethics board. Especially if the victims are unrelated to the technology to start with and didn't volunteer to be test subjects.

People getting killed by drunk drivers didn't volunteer for it either. It doesn't matter how you spin it, the metric that matters (there is nuance, but put bluntly) is "How many innocent lives are lost in traffic related accidents from now until [year X]?".

If, given two scenarios:

1. We develop FSD very carefully. 0 lives are lost during FSD development. Ten million lives are lost by the time that FSD sees 100% adoption.

2. We develop FSD less carefully. 100,000 lives are lost during FSD development due to suboptimal performance before it is perfected. Five million lives are lost due to human drivers by the time FSD sees 100% adoption.

Would you really choose the former option? If your ethics board refuses option 2 in favor of option 1, it is they that are mistaken.


> 2. We develop FSD less carefully. 100,000 lives are lost during FSD development due to suboptimal performance before it is perfected. Five million lives are lost due to human drivers by the time FSD sees 100% adoption.

How can you know the future in this case? I would not believe Tesla/Uber (or the others') predictions: "trust us, let us kill 1 million people in the next 10 years but after that the number will go down to 100; trust us, we software developers are very reliable at predicting things and our mathematical skills are so great we can be sure even when we use NNs that are unpredictable." And if they fail, then the next startup will appear, promise the same thing, and you have no choice but to also give them a check for 1 million lives so as not to discriminate between startups.


The difficulty is, you don't know in advance that the 100 thousand number is lower than the difference between 10 million and 5 million. An estimate can be made, but humans have a not unreasonable status quo bias in cases like that. Come to think of it, status quo bias on risk is the opposite of the planning fallacy.

That is, without proof the new way will be safer in future by a specific proportion, how many excess deaths can the ethics committee sign off?

(You can make a similar, better founded argument for mass train transit. It is about 10 times safer than driving per passenger mile. So, reductions in safety that moves people from car commuting to train commuting (I imagine safety is expensive and cheaper transit would be more popular) might be worth it. No-one seems to advocate them though.)


>People getting killed by drunk drivers didn't volunteer for it either.

It's not a numbers game. Those are likely two different sets of people. You are saving some people by calculating in that others get killed instead. It is different from, for example, using an experimental treatment option on terminal patients. That would be saving some of the dying patients while some might die regardless. The scenario is much more similar to the trolley problem's cousin, the transplant problem: having someone else, uninvolved, killed to save a larger group of people.

Seeing as the automatic emergency brakes were disabled, it seems the product wasn't good enough to avoid false positives and was rushed by cutting the safety measure, accepting that by putting it in live traffic the car would likely kill people unrelated to the research. It was a trade off between slower development and killing people.

I don't think you will find any ethics board that will green-light you killing uninvolved people to save however many people. Of those I can think of who did that, quite a few ended up being wanted for crimes against humanity. We have ethics boards for a reason. Without them the ends can quickly justify the means.


Not having an emergency braking system should not have been a no-no in this particular test. It was a prototype, with probably many parts not working properly; that's why there was a human to check what was happening and act in case of issues. Here the driver is clearly at fault, given that she was willfully not paying attention to the road.

There is probably some degree of responsibility on the side of Uber however:

* how was the driver recruited?

* was she trained and evaluated?

* is this kind of behavior quite common in test drives? if so, was it addressed?

* did Uber put in place at least a basic system to keep the driver engaged (like a switch you have to press every 30 seconds)?

Given the specifics I hope this ruling will not serve as a precedent, it's a bit too special.

Lastly, it raises another question: car automation is unlikely to become perfect within the next few years, and even if it does, the legal responsibilities and liabilities cause issues. So it's likely a driver will be kept in the loop, at least for a decade or two. Given that, how do we design automation that keeps the driver in the loop? The driver must be aware of the car's surroundings at all times so he can take over in case of an emergency. Consequently, the automation must be designed to keep him active and naturally alert (like a "manual" driver); an automation that is like "yeah, do nothing... humm, oops, I don't know what I put myself into, please take over, you have 2 seconds to analyze the situation and take the correct actions or someone will die, kiss, goodbye" is not the correct answer.


> Not having an emergency braking system should not have been a no-no in this particular test

The car did have a working emergency braking system that the car was born with. Uber disabled that one too.


No you don't understand, Uber didn't have self driving cars on the road, they had autonomous test vehicles. There was never any reasonable expectation that the test vehicle would perform perfectly in all situations, that's why there is a human monitor in the drivers seat. That human monitor is the final failsafe, it's the same with Tesla's Autopilot, and the same with any conventional car. When Rafaela Vasquez volunteered to drive the vehicle, she accepted responsibility for the vehicle. She was watching a television show when the accident occurred.

> " there is a human monitor in the drivers seat"

Any system that requires a human to sit there bored for hours, waiting for seconds of action at any moment, is fundamentally flawed. The bipedal ape component is being used out of spec, and the blame for that lies with the engineers who designed the system.


I'm not so sure. If the car was driverless then yes. But they put a human operator precisely to handle these sort of situations.

They could put eye tracking system in place to make sure the operator pays attention! If the driver doesn't pay attention the car should slowly stop.

Expecting people to always do the right thing is a recipe for disaster.
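
The control loop for that is not exotic. A rough sketch of the kind of watchdog being described, with purely illustrative thresholds, and assuming an eye tracker that can report "eyes on road" (which is the genuinely hard part):

  # Hypothetical driver-attention watchdog: nag first, then pull over.
  import time

  EYES_OFF_WARN_S = 2.0   # chime after 2 s of eyes off the road
  EYES_OFF_STOP_S = 5.0   # begin a controlled stop after 5 s

  def attention_watchdog(eyes_on_road, warn, begin_controlled_stop, poll_s=0.1):
      eyes_off_since = None
      while True:
          if eyes_on_road():
              eyes_off_since = None
          else:
              eyes_off_since = eyes_off_since or time.monotonic()
              off_for = time.monotonic() - eyes_off_since
              if off_for >= EYES_OFF_STOP_S:
                  begin_controlled_stop()   # hazards on, ease over, stop
                  return
              if off_for >= EYES_OFF_WARN_S:
                  warn()                    # audible/haptic nag
          time.sleep(poll_s)

Driver-monitoring systems shipping today (GM's Super Cruise is the usual example) are built around roughly this escalation pattern.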


Eye tracking or not, I suspect humans aren't cut out well for that sort of task. At least not without some pretty specialized recurring training.

Waymo is claiming things like 30,000 miles between needed interventions. Even if that's 100x higher than reality, that's hours of watching and boredom waiting for a few seconds of action.


This is exactly backwards from reasonable.

Ultimately systems will require no intervention for very long periods of time punctuated by very quick precise actions that most people are terrible at in the best possible scenario but that computers will be very very good at.

If you task the user with doing the right thing in 5 seconds time after thousands of hours of inactivity a sizable portion will react after the entire affair has come and gone and those are the lucky ones. The remainder will act 3 seconds into the crisis and screw everything up.

What you are describing is the worst possible combination of man and machine wherein you let the machine handle the part the human could easily handle and you task the human with the part they are worst at to ensure maximum carnage.


I think the main problem is one of social acceptance: we will see accidents that could easily have been prevented by a human operator, while at the same time drastically reducing the number of accidents where, from a human perspective, only pure luck or a reflex saved people before.

I would agree with you if she were ostensibly paying attention and just didn't react in time. But she was watching a TV show on her phone. This is not the situation you're describing.

As opposed to pretending to pay attention knowing attention would wander without engagement?

> They could put eye tracking system in place to make sure the operator pays attention! If the driver doesn't pay attention the car should slowly stop.

Except where normally there is a safety driver focusing on driving and an oversight passenger tasked with looking at and checking the automation's feedback, for cost reasons Uber had given both tasks to the "safety driver", forcing the driver to often stop paying attention to the road to check automation warnings & classify events.


The prosecutor has no doubt thought long and hard, and there is no evidence of a bad call here. We don't even have the benefit of hindsight yet.

The driver being criminally liable is a more powerful incentive. People will not buy & drive vehicles that have a reputation for getting their drivers criminally prosecuted. And the government can still regulate for high safety standards on self driving vehicles independent of criminal liability.


> The driver being criminally liable is a more powerful incentive.

Too powerful. The driver should not be punished for correctly using a legally purchased vehicle. We don't apply that kind of standard anywhere else, and for good reason.

Plus, people would refuse to buy self-driving cars even if they were on average safer than human-driven cars because

1. Everyone thinks they're an above average driver

2. Illusion of control

So your proposal would cost lives in the future when self-driving cars actually are safer.


> for correctly using a legally purchased vehicle

The eyes of the driver in this case were on the phone, not the road. That's hardly "correctly using".


The parent was quite clearly in favour of punishing the driver even when they were operating everything correctly, for the purpose of disincentivising unsafe code at the point of purchase. I'm saying that this is a bad proposal.

I have special insight into what the poster was in favour of. He thought that the prosecutor has a better idea than the rest of us, spent some time thinking about it, and has probably made a good decision. As smilekzs notes it is unlikely that the driver was operating everything correctly.

You may note he explicitly said "the government can still regulate for high safety standards [independently of the actions of the prosecutor]".


> The driver should not be punished for correctly using a legally purchased vehicle.

So where's the market incentive to bother making the piloting algorithms less killy? Consumers will choose cars that prioritise speed and the safety of the occupants, with no regard for the safety of bystanders.

This all seems to stem from the assumption that the computer just does whatever it wants (shrug), and isn't merely a tool being used by a human programmer who's supposed to be in control of it. That way lies killbot hellscape: https://xkcd.com/1613/


Presumably liability for the company in cases which the computer is actually the legally responsible party?

People will indeed logically and correctly choose devices which won't opt to kill them. In fact they won't buy anything that will ever opt to deliberately kill them in any circumstances.

The challenge is to ensure the machine has a survivable alternative in the broadest possible range of scenarios to avoid making bad choices.


By punishing the company, not the users.

I had the exact same thought.

>> the system was set up to not activate emergency braking when under computer control.

So the parts that mattered the most were disabled? I bet that even the human operator did not know this.


I'm wondering if anyone else responding to you clicked on your profile and knows what you do or has equivalent experience from which to comment.

I think even absent that, it's still useful to have the trial, just to get all the facts out, to have the conversation.

No one knows what would have happened had this gone to trial, it would be better to find out sooner rather than later.


I don't disagree, but I've heard these self driving cars are extremely over cautious, and tend to hold up traffic because they're waiting for a safe time to turn across a lane, for example. I imagine this level of caution would needlessly trigger the emergency brake quite often, and likely cause more accidents and injuries than it prevents.

So then shouldn't the answer be not to disable the e-brake, but to conclude that the tech is not ready for testing in public where pedestrians' lives are at stake?

Obviously yes but that wouldn't be moving fast and breaking things. If companies aren't allowed to do that, then their SV competitive advantage evaporates.

> Putting a “self driving” car out there that doesn’t have emergency braking enabled ...

That's why they put in an emergency braking device with a set of eyes, hands, legs... and a movie streaming device, apparently?

Negligence in that case is not bricking the employee's phone before putting them in the car. But nobody expects that from companies that hire drivers.


What definition would you use for “perfected” emergency braking—“As-good” as the median driver?

I suppose perfect emergency braking avoids hitting all objects that are visible to the sensors sufficiently early, and hits objects with the minimum possible speed if they become visible too late, while never having a false positive.

  if(obstacleDetected()) {
      // emergencyBrake(); //TODO: fix
  }

  /* 
   VolvoEmergencyBrake();  
     Disabled. We don't need this.
  */

Volvo would definitely enable the emergency brake, but only during regulatory tests.

The state also made the wrong call in allowing Uber to operate such a vehicle on public roads and are clearly now trying to distance themselves from that decision by letting Uber off the hook. Both are guilty of criminal negligence but it's not like the state is going to prosecute itself.

> Working emergency braking should be the first thing perfected, before the computer gets to control the car.

Agreed, but that makes it the lawmaker's criminal negligence, not Uber's.


This potentially sets a precedent that acts as a legal blocker for public adoption of self-driving cars. If I am to serve a 1 year prison sentence for faulty code or a deprecated LIDAR sensor, I don't see in what scenario I would be willing to leave my fate in the hands of a self-driving car manufacturer. Just as in the case of a chauffeur, if they are driving and commit vehicular manslaughter, I would not expect to be deemed guilty.

It seems to me that it would be the benefit of self-driving car companies to own up to liability as it is to their benefit of achieving widespread adoption. For example, Volvo is in line with this idea and has publicly stated that they would accept full liability in fully autonomous operation modes[0].

In this case, I do think that some liability lies with the driver, as they were tasked specifically with preventing situations like this. What is not clear is whether that task is even humanly feasible given reaction times, and, based on that, whether Uber has been criminally negligent. Given this, I am surprised that the prosecutor seems to have absolved Uber of any blame.

[0] https://www.media.volvocars.com/global/en-gb/media/pressrele...


I'd really like to see self-driving car liability treated the same way airline crashes are. Less "whose fault is this" and more "how can we prevent this same accident from ever happening again?" The second is the really important question and the one we need to keep answering to reduce the toll of driving deaths. This will mean the cars driving on our roads getting more and more regulated over time but I'm quite OK with that.

Airline crashes absolutely are treated as "whose fault is this". That's why airline crashes usually have lots of lawsuits, with millions of dollars of damages, filed after they happen.

Every NTSB report has a "probable cause" section determining who or what was at fault. And then the FAA hands out fines and courts award additional damages largely based on that finding.

It is true that the punishments aren't handled by the NTSB but that's more about separation of concerns & departmental remits than anything else. Is separating out "fault finding" from "punishments" a good idea? Maybe? I honestly don't know.


> Every NTSB report has a "probable cause" section determining who or what was at fault. And then the FAA hands out fines and courts award additional damages largely based on that finding.

In case everyone is not aware, NTSB reports cannot be used in court to establish fault or otherwise be used as evidence when seeking compensation for damages.

"No part of a report of the Board, related to an accident or an investigation of an accident, may be admitted into evidence or used in a civil action for damages resulting from a matter mentioned in the report." 49 USC 1154(b)


I have a question..

In criminal cases, what happens with inadmissible evidence? Like the fruit-of-the-poisonous-tree doctrine... what if you literally have video proof and tons of other graphic proof of a murder? Do you really just ignore it and let the perp go back into society?


There are exceptions to the doctrine for exactly the reasons you mention. The legal system isn't as insane as people like to make it out to be.

If you have tons of proof about a murder, it's almost certain to fall under one of the exceptions to the doctrine, e.g. if law enforcement would have discovered the evidence anyway, it is not excluded.

In principle, it’s inadmissible. So, theoretically, yes. The perp just goes free. Not very common for a lot of reasons.

The part you missed is probably that the post-mortem investigations are why air travel is the safest form of transportation.

> "how can we prevent this same accident from ever happening again?"

It would be nice if society adopted this with all crimes. It's not like we can prove we're any different from the low-level design of a machine when it comes to the "free will" ideology the justice system is currently designed around.


Well if you stop viewing it as fault and instead view it as incentives it makes sense. If you (intentionally) murder somebody, you still need to be punished (even if technically "you" are not at "fault" because it's not "you" it's just the algorithm running in your brain) so that others will be disincentivised to do the same thing in the future (i.e. the algorithm running in most people's brains will take the potential punishment into account when calculating whether the crime makes sense or not).

To a large extent we already do that. E.g. many countries have an upper limit on punishment (like 30 years in jail max), or allow criminals to have their criminal record deleted after some time has passed, not because that's "just" or "fair" or whatever, but simply because it's "effective".


I agree with you, and I think you'd also agree it's not quite effective. It's a factor in deterrence, but we need to do more to understand the people who still commit crimes and address more of the root causes. We probably won't achieve perfection, but we can do a lot better IMO.

I think we need to educate people about how free will is an illusion, because it would help in addressing the root causes.

I do like this idea, where legal action is more about preventing the event from recurring rather than revenge or payback. Unfortunately "an eye for an eye" is a very strong instinct, and it will take a lot of education for society as a whole to move beyond it.

Whether or not free will exists (I'm a compatibilist), there are certain cases where the threat of punishment will cause people to change how they act and certain cases where it won't. If it were the case that the people at Uber intended to kill someone, then the threat of punishment might have prevented that and Elaine Herzberg would still be alive. But as the people at Uber did not expect that their actions would result in her death, the threat of punishment isn't going to be very effective at preventing deaths like hers.

The distinctions our legal system makes regarding coercion, mens rea, insanity, etc are mostly trying to get at this same underlying distinction.


Society by and large does do this, just not with a grandiose show at every "incident".

"Whoa, there sure are a lot of fights at bars we have to keep responding to! Maybe those would happen less if we had fines for bartenders that had too many fights? Or maybe if they had to get a license that we could revoke if they weren't preventing fights?"

"Whoa, it seems that a lot of people don't own guns, but then go and buy one when they get pissed and shoot somebody. Maybe that would happen less if we had waiting periods?"


This. For me, believing in free will is just another type of religion. It makes us feel safer and in control. I think it's useful for that, but we're deluding ourselves.

And I just realized you're being downvoted. How surprising.


Even if free will is an illusion, it may be maladaptive to design a justice system without consequences for perceived faults.

I am not advocating a 'justice' system without consequences. You will still have consequences. Consequences are just another input into our brains that does help deter some people from some behaviours.

It's not that I don't kill people because I'd go to jail; I don't kill people because I don't want to hurt them (and that comes down to my programming, upbringing, and possibly nature to a degree as well). I think the only way I'd kill someone is by accident, or in the spur of the moment when emotionally disturbed by an incident (me or my family fearing for our lives). Society has mostly accounted for those cases, and I might not even go to jail if I did it, because people can largely relate to those situations and agree it's not 'fair' to send me to jail for them.

However, there are several sectors of the population no one seems able or willing to relate to (poor people who got few opportunities and little guidance in life, abused people, the mentally ill, psychopaths, a mix of all of that, etc.), and the people creating the rules therefore decide to 'punish' them in the name of justice for being bad, and because we have not devised (nor invested enough time in devising) a better and more humane way to deal with them. In theory we want them to be reformed; in practice, it's mostly about retribution and putting undesirables away.

It may be the downfall of humanity's progress as well, and things may change.

Of course it’s a religion. People are deluded to think otherwise.

We just can't help downvoting him.

Since everyone was socially conditioned to desire and believe in free will. It's similar to belief in God in the past, except back then I would have been hanged for saying this. It will change. (And it's "her", not "him".)

>This is potentially precedent as a legal blocker for public adoption of self-driving cars.

I don't think it is because in this case the person was hired by Uber specifically to "drive" this car. They knew that the car is still an incomplete product, they knew that they were testing it, they knew that they were literally getting paid to keep this car from doing something like this.

That's very different than a random person using a self-driving car that they expect and were told is a finished product.


> I don't think it is because in this case the person was hired by Uber specifically to "drive" this car. They knew that the car is still an incomplete product, they knew that they were testing it, they knew that they were literally getting paid to keep this car from doing something like this.

Sadly, while that is in fact the task of the safety drivers in other automated car trials it wasn't the case here for uber, because the "safety driver" was given both the job of safety driving and of checking, controlling and classifying the automated system's information, ensuring they'd have to split their focus and would need to frequently context-switch between two completely different tasks.

Worse, this decision of Uber all but ensured this would happen: the automation system is more likely to lose its shit and need checking specifically in situations where the safety driver would be most useful.


Don't get me wrong, Uber isn't fault-free here in my opinion, and I'm mostly in agreement with you. Pretty much everyone involved here was negligent in some way, if you ask me.

My point was only that I don't think this would set any kind of precedent since it's so far from a simple case of "an owner of a self driving car".


> What is not clear is whether or not this task is even humanly feasible

Not watching TV while you’re supposed to be driving is a pretty low bar.

But apparently, humans can’t even be relied on to do this, even when it’s their one single job.


If you think it’s easy, go sit in the passenger seat without talking or using your phone and see how long you can maintain focus.

There are decades of experiments showing that boredom and fatigue are huge problems for this kind of work where someone is mostly idle except for rare events. It’s why the TSA mixes fake images into the stream on X-ray scans so people see things regularly and don’t zone out. Anyone smart asking humans to do repetitive work builds layers of safeguards in to keep people engaged – or in Uber’s case they just hope they can shift blame to the person in a bad design, which worked so far.
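
A minimal sketch of that kind of safeguard, assuming a made-up event stream and thresholds (nothing here is the TSA's or any AV company's real mechanism): plant synthetic targets at a known rate and track whether the operator acknowledges them in time.

  import random

  # Toy vigilance check: mix synthetic "threats" into the stream of real events
  # and measure how many the operator misses. All rates and timeouts are assumed.
  SYNTHETIC_RATE = 0.05   # fraction of events that are planted tests
  ACK_TIMEOUT_S = 2.0     # operator must respond within this window

  def run_shift(real_events, operator_ack):
      """real_events: iterable of event payloads.
      operator_ack(event, timeout): returns True if acknowledged in time."""
      missed, planted = 0, 0
      for event in real_events:
          if random.random() < SYNTHETIC_RATE:
              planted += 1
              if not operator_ack("SYNTHETIC_TARGET", timeout=ACK_TIMEOUT_S):
                  missed += 1
          # normal handling of the real event would go here
      return missed, planted

The point isn't the code, it's that missed synthetic targets give you a measurable, per-operator signal of fading attention long before a real incident does.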


I think the digression into whether the concept of a safety driver is flawed from the start is irrelevant. Sure, maybe having a safety driver as backup is humanly impossible because they'll get road-blind and won't be able to react in time. But that didn't happen here. We have no evidence of that ever happening in self-driving tests. In the millions of self-driving test miles with safety drivers, not once has a safety driver been looking at the road yet unable to react in time because of the monotony of the job.

What we have here is someone who was watching TV on the job. I adamantly refuse to believe that humans are incapable of not watching TV for a stretch of several hours.

If the driver had nodded off, looked out with glazed over eyes, or any number of other situations consistent with your scenario, then you'd have a point. But this was not that case.


> I think the digression to whether the concept of a safety driver is flawed from the start irrelevant. … What we have here is someone who was watching TV on the job

The fact that this particular person dealt with boredom by watching a video makes it easier to blame but that's just one of many ways in which people cope with boredom and it's not the root cause. You can blame the worker if it makes you feel better about yourself but if your goal is to reduce the number of errors the system has to be redesigned not to depend on people acting like robots rather than humans.

> Of the millions of self driving car test miles with safety drivers not once has a safety driver been looking at the road, but not able to react in time because of the monotony of the job.

Do you have any evidence supporting this claim? In particular, you'd need to prove that all of the self-driving car tests have the same one-person setup (which we know to be incorrect), every company ignored decades of well-understood risks and similarly didn't have any tasks for that person to perform and thus stay engaged, and you'd need to know how frequently incidents occur which require driver action to prevent a problem.

It's far more likely to be the case that there have been many situations where someone was distracted or focused on an area other than where a potential risk was but the other driver or the self-driving system successfully avoided it turning into an incident which made the news.


Also >"The system is not designed to alert the operator," the report notes.

Aren't driving instructors supposed to do pretty much that? Supervise an imperfect driver and take control quickly if the situation requires it?

Driving instructors are always actively engaged, at least mine was: he'd always check traffic, plan the route, give me directions, etc.

It's way more active than watching a self-driving car do its thing completely autonomously.


Uber should have had people remote-monitoring the monitors at this early stage.

I saw a different company’s car today: they’re being responsible and have two people in the car.

Pretty much all of them do. In fact Uber used to do that before they moved to AZ. Having a single person responsible for driving & monitoring was a way to save cost and log more miles.

This person’s job would be more accurately described as “fall guy” than “driver”. Maintaining constant attention toward a system that works properly by itself for hours, with a response time of seconds to correct its mistakes, is not a reasonable task for humans. This guy’s role is to take a paycheck for doing virtually no work, at the exchange of taking the risk of being blamed if things go wrong.

I know it’s glib, but just on the face of it, I find the idea of Uber taking responsibility for anything a laughable proposition.

> If I am to serve a 1 year prison sentence for faulty code or a deprecated LIDAR sensor, I don't see in what scenario I would be willing to leave my fate in the hands of a self-driving car manufacturer.

What if we came up with a reasonably objective test of driver skill and the car outperforms you?

This is and remains the standard argument: the car doesn't have to be perfect, it just has to beat a human having a bad day. If you are worried about the liability, maybe keep the car serviced, sit in the driver's seat, and look out of the window instead of watching a movie. Tell the car to drive slowly, maybe.

I feel sorry for the driver who is creating this precedent, but this is hardly a road block. The average human is not a great driver.


> If I am to serve a 1 year prison sentence for faulty code

I wonder why a programmer would go and work on such code when they're always in danger of going to prison for every mistake they make. We all know that bugs in software happen and it's impossible to write bug-free code.


> We all know that bugs in software happen and it’s impossible to write bug free code

The go-to counterexample is the code that ran on the space shuttle, which took years and hundreds of people and $200 million to produce. I have been told that nobody has ever found a bug in the final production system. The development practices involved in creating it seem like they would make most software engineers want to curl up and die. One tidbit from the following link I found is that 60 people spent years working on the backup flight control system, intended to never be used!

https://history.nasa.gov/computers/Ch4-5.html


Well we could mandate that self-driving cars be programmed to the same standard. I mean it means that self-driving cars will simply never exist, but we could mandate it!

Maybe we shouldn't run safety-critical code when we don't know what it does. Hard real-time systems exist for a reason. Sure, it would also be nice if we could operate nuclear power plants from home via a smartphone app, so the most qualified expert could help in an emergency, but it's simply not a sane idea.

This cop-out of "software's gonna have bugs" as a way to evade all liability doesn't hold in any other profession. I don't see why we get special treatment here.


It does hold, in some form or other, in other professions. The EPA formally defines how much cash they're willing to burn to save a human life, which seen from one perspective is an "environment gonna kill people" cop out. Nuclear missile silos have two operators because "people gonna launch nukes". Cars have airbags because "vehicles gonna crash". Every field has risks and risk management, every field has certain steps that they could take for the purpose of safety that are judged too expensive to justify the risk, and part of managing risks in software is that you should plan for bugs and plan to mitigate their impact.

Nobody calls the emergency services out because they assume cars are going to crash and plan accordingly, so why does the software engineering industry get called out for assuming software will go wrong and planning accordingly?


> I wonder why would a programmer go and work on a code when he is always in danger of going to prison for every mistake he makes.

You don't go to prison for "mistakes", you go to prison for criminal negligence.

Given that you can go to prison for criminal negligence in any other profession (e.g. a dentist who exposes thousands of people to HIV), why would programmers be the one category of job that's exempt from criminal laws?


> You don't go to prison for "mistakes", you go to prison for criminal negligence.

The difference between the two is what a politically-motivated prosecutor can convince a jury of.


If a politically motivated prosecutor can convince a jury of twelve people that Bill killed Dave, by using his psychic brain powers, or, alternatively, by sticking pins into a voodoo doll, or some other form of non-scientific witchcraft, Bill is going to hang.

The hypothetical gullibility of juries doesn't mean that we shouldn't have laws against murder, juries, or prosecutors.


The line between mistake and negligence is a lot blurrier than the line between killed and didn’t kill.

Architects and engineers seem to go to work everyday knowing that if they are criminally negligent they could face prison.

My apologies for potentially hijacking this, but... this is exactly why the term "software engineer" bothers me. Yes, the software you write isn't likely to cause a shopping mall to collapse on a crowd of people, but there can be huge financial and societal responsibility here, and yet, almost every software license in existence completely disclaims that. Engineering comes with a tremendous amount of ethical and legal responsibility.

But innovation!

Fail fast!

...sorry for the sarcasm, but this is a message student engineers internalize, in part because it is pushed by companies they want to work for but can't explain why. They don't have strong boundaries between why it might be a justifiable philosophy for Facebook but not for Boeing.


Agreed. I'm a software developer who has a degree in mechanical engineering and changed over. I cringe at calling myself an engineer in my current role.

First - architects shouldn't be lumped in with engineers in this context. It seems like there's some confusion around how far the responsibilities of an architect extend, which makes me wonder if things are different in the US? Whenever I research it seems to be the same as Australia though. Anyway, architects know as much about structural engineering as engineers know about Le Corbusier. An architect would really have to mess up to be criminally negligent.

But in response to your comment, engineers have a very clearly defined set of rules, collectively called "the code". As long as they design to them you'll be free of criminal negligence (at least during the design & development phase, things get a little murkier in construction).

More to the point, engineering for the real-world means layer upon layer of uncertainty. e.g. civil/structural engineers use materials we can't model (we use pretty-good approximations for concrete behavior) in conditions which are unknown (soil) to resist forces we can't predict (weather, earthquakes, dynamic response to loading). How does the code deal with all this uncertainty? Slap safety-factor after safety-factor onto everything. Whoever comes up with a method for more effectively dealing with this stuff will make millions.

The most obvious example being that we design structures to resist a 1-in-100 year storm. In other words, we expect a structure to be within spitting distance of failure every hundred years. But as long as you design to that standard, you're fine.
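
To make the safety-factor point concrete, here's a toy limit-state check in the style of load-and-resistance-factor design (the specific factors are illustrative assumptions, not any particular building code's values):

  # Toy LRFD-style check: scale loads up, scale capacity down, then compare.
  def passes_check(dead_load_kn, live_load_kn, nominal_capacity_kn,
                   dead_factor=1.2, live_factor=1.6, resistance_factor=0.9):
      factored_demand = dead_factor * dead_load_kn + live_factor * live_load_kn
      factored_capacity = resistance_factor * nominal_capacity_kn
      return factored_demand <= factored_capacity

  # A member carrying 50 kN dead + 30 kN live load against 130 kN nominal capacity:
  print(passes_check(50, 30, 130))  # 1.2*50 + 1.6*30 = 108 <= 0.9*130 = 117 -> True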


If the programmer can't be bothered to put in enough verification effort to avoid being negligent, they should not be in charge of safety-critical code. Plainly: don't pretend there aren't ways to ensure correctness of systems.

But that same developer can put the same effort in at another field and not be constantly at risk of a mistake doing massive damage to them. You would have to be paid an incredible amount to make it worth it.

How do other fields where the workers are liable for damages like this work? What do architects who sign off on buildings do for example?

I don't believe they all ask for incredible amounts of compensation


I think software is a lot more brittle than physics. It takes a lot to go from a sturdy building to something that could collapse, whereas in software the difference could be a single line of code, or two transposed words.
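
As a contrived illustration of that brittleness (a made-up snippet, not anything from a real codebase): two transposed arguments are enough to invert a safety check without any crash or warning.

  # One-line bug: swapping two arguments silently inverts the logic.
  def needs_braking(obstacle_distance_m, stopping_distance_m):
      return obstacle_distance_m <= stopping_distance_m

  print(needs_braking(20, 25))  # correct call: obstacle 20 m away, 25 m to stop -> True
  print(needs_braking(25, 20))  # transposed arguments: same numbers, now False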

Also, the software world doesn't have the benefit of millennia of accumulated best practices.

Finally, the senior engineers and architects who are licensed to sign off on things they will be held criminally liable for do get incredible amounts of compensation as compared to their more junior colleagues.


Anybody can build a bridge that doesn't collapse, it takes an engineer to build bridges that barely don't collapse. This is hard work, don't downplay it.

I'm not sure how facetious you're being so I'll just play it straight. Modern buildings are not supposed to barely not collapse, they're supposed to be safe even under scenarios quite a bit more extreme than what they're expected (or even legally allowed) to handle.

They're supposed to be exactly as stable as required by law for their intended use, using the minimal amount of labor and materials to satisfy the requirements. Making them more stable using more money is easy, but getting the calculations just right requires years of training.

I see what you mean now. My point is that even if there's a minor screwup, the architect probably won't be prosecuted for anything because the building won't fail catastrophically thanks to modern standards which have substantial margins of safety built into them. It really does require criminal levels of negligence or extreme circumstances that would almost certainly save the architect from prosecution to have a building collapse on you.

On the other hand, a minor screwup in software is far more likely to cause catastrophic failure because we just don't know how to workably build large, robust systems out of code.


Point well made.

Nit: meeting the requirements of law isn't barely collapsing; that requirement has so many safety factors built-in because the building code has to approximate so much. The approach isn't that dissimilar from how that "anybody" you mention would build their bridge that doesn't fall down: by guessing safely.


It's over 20 years since I finished my degree, and I've never worked in the industry, but I spent sufficient hours poring over the ISO standard document for pressure vessels as part of my final year project that I can still remember the sorts of things it covers.

Effectively, they test things to destruction, then publish minimum requirements. So if you want to pressurise your reactor vessel to 30 atmospheres, you can pretty much look up a table that'll tell you precisely how thick the vessel walls need to be for each of the commonly used materials. If you want to use something uncommon, then you need to pay somebody to test it.

If it fails in a catastrophic fashion, you can expect to be asked to show that you did your due diligence, and there are extenuating circumstances a reasonable engineer could not have been expected to foresee and plan for. Or that you did foresee it, and somebody else chose to accept the (clearly defined) risk.
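
For a flavour of what those tables encode, here's the textbook thin-wall hoop-stress relation with an assumed safety factor (a sketch only, not the actual ISO procedure or its factors):

  # Thin-walled cylinder: hoop stress = P * r / t, so t >= P * r / allowable stress.
  def required_wall_thickness_m(pressure_pa, radius_m, yield_strength_pa,
                                safety_factor=4.0):
      allowable_stress = yield_strength_pa / safety_factor
      return pressure_pa * radius_m / allowable_stress

  # ~30 atm (3.04 MPa) in a 1 m radius vessel of steel with ~250 MPa yield strength:
  t = required_wall_thickness_m(3.04e6, 1.0, 250e6)
  print(f"{t * 1000:.0f} mm")  # roughly 49 mm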


There are non-tech professions where the lives of other people are literally in your hands. And a lot of the people doing them get paid a lot less than software engineers.

Why is that a problem? The cost of business should obviously include safety concerns.

It's "not a problem" until the next round of "why does Silicon Valley only work on trivialities?" articles pops up.

Or there will be people who gain expertise in developing software good enough that software engineering will be considered an actual engineering discipline.

The code wasn't faulty. The configuration was explicitly set to not brake, as far as I understand. If someone got in trouble here, it wouldn't be for programming.

Operators of vehicles need to be held personally responsible for the damages caused by putting their vehicles on the road, whether the vehicle is autonomous or not. Additionally there must be a responsibility for manufacturers to not create faulty products, but that can not absolve the "operator" of the product. It should not be said that the "operator" of a Volvo autonomous vehicle is not responsible for the actions of the vehicle under her control, simply because Volvo says they accept full liability. I think your chauffeur analogy is flawed. If I set cruise control in my car to 75mph in a 25mph school zone, am I not liable? It is not possible to delegate our personal responsibilities to machines, and wash our hands of it.

Volvo's acceptance of liability seems little more than a smart business decision- the cost of an insurance payout to the family of the deceased is less than the money they stand to earn in profit from selling autonomous vehicles. We can't allow this generous actuarial calculation to absolve the operator of such a vehicle of responsibility too. If you don't like it, take the train.


Why?

Imagine that your only recourse is to go after some big faceless corporation whose immensely complex algorithm failed in some highly unusual way, and no one can really say what happened, not even the engineers who wrote it (and some of whom might have long left the company...)

It's similar to the situation with companies like Google blocking people who have no easy way to get support, except more morbid.

I'm not fond of living in such a world where personal responsibility is dissolved and becomes meaningless.


So if a cab gets in an accident, then the person who hail'd the cab should be liable? What if it is an autonomous cab? If an automated elevator fails and kills someone, is the person that pushed the call button liable?

This is something that I really don't like about our society: the insistence that someone "must" be held responsible. When someone goes on a shooting spree, of course they should go to jail. But if they die in the process of committing their crime, there is almost always an outcry for the next person in line to be held responsible.


We have rules about elevators. There are elevator inspectors who regularly certify elevator safety, and they are liable if they falsely attest as to its good working order. The owner of the building is also responsible for maintaining their own elevator. There isn't anything the passenger of an elevator can do to meaningfully affect its operation or safety. I suppose if you knew there was someone working in the elevator shaft and you pushed the button to hurt them, you would be responsible for that.

We have rules about motor vehicle operation. The operator is responsible. If you are the passenger in the back seat of a cab, the driver is responsible, not the passenger. If there is nobody else and you are the pilot of an autonomous vehicle, you have to assume responsibility for your vehicle and its actions.


/Or/ we take the way less insane course of action and regulate AV safety, like we do elevators and airplanes?

The individual operator (the driver, or the person in the operator seat who presses the "engage autonomous mode" on the car) is the one who chose to buy a vehicle and put it on the road, potentially endangering other people. We need to be responsible for our own actions and the situations we create.

Is the gun shop owner liable for murder (and/or wrongful death) every time someone is killed with a firearm? What about the clerks at the gun store? Or the truck driver who delivered the firearm to the store? All of their respective actions clearly created a situation where people have died; your position is silly.

You are assigning blame further up the chain, which I am not doing. I stated in my first post that I do believe the manufacturer is liable for their faulty products, but they cannot be SOLELY responsible. I might find the gun shop owner liable for selling a faulty or damaged gun, for example if there was a recall and they ignored it.

If you bought a faulty gun with a broken safety catch, and waved it around in a crowded street, and it fired, killing someone, you should be held personally liable. You brought the gun out in public and created the scenario which caused someone to be hurt by it.


What if the ammunition was faulty, and a case of it blew up in someones car killing a pedestrian next to it? Or a gas leak in the fuel line that dripped onto a faulty underground electric cable, igniting and causing injury?

I don't know what the threshold is in other countries, but Canada has a vague but useful threshold: "knew or ought to have known". A different way it's been stated in the courts is "what a reasonably prudent person ought to have known"

Looking at the fuel line question through that lens: did the leak start while you were driving and you were completely unaware? Or has it been leaking for a while and you just haven't gotten around to fixing it.

With the ammunition, how was it stored? Was it dumped in a toolbox filled with pointy screws? (A reasonably prudent person ought to know that ammunition is fired by striking the primer with a sharp object). Was it stored in the front seat of the car on the hottest day of summer? (A reasonably prudent person would expect it to get really hot in there)

Etc. Etc. The thing about negligence is that there's a lot of room for interpretation. As another example, if you've been driving a car around with self-driving features, and you've experienced it behaving erratically multiple times, and that's followed by an accident... you knew or ought to have known that it was dangerous to be on the road in that vehicle. If there was an OTA update for the autopilot last night that installed silently, and it results in a crash, then it's probably the manufacturer who's liable.


A woman on a road was killed. By a car with no driver.

There was a person sitting in the driver's seat, but that person was in no way engaged in maintaining safe operation. That person was hired by Uber.

As the judge, therefore, I would certainly assign -most- liability to Uber for putting that screw-off behind the wheel in the first place. Uber's driverless car killed a woman.

The charge would be negligent homicide.


How do you feel about the Amtrak engineer being charged in the Philadelphia derailment? Does Amtrak deserve most liability for putting the engineer in charge of the train, or does the engineer deserve the blame for failing the most basic duties of their job?

https://www.washingtonpost.com/news/dr-gridlock/wp/2018/02/0...


I'm not familiar with the details there. IIRC, the engineer was going around a curve much faster than it was rated for.

Whatever. Failing to detect that an employee is not fit to do a job which potentially jeopardizes many lives is a very serious failure. I might -want- to ask which executive the company will expect to do the time.

Airlines and railroads seem to prefer to blame their operators, particularly when the operators are killed and unable to defend themselves. In this case, however, the vehicle had no operator to blame.


The article mentions the negligent operator watching a TV show.

> Uber, which declined to comment for this story, could still be sued in civil court and be forced to pay damages. The government could also potentially pursue criminal charges against managers or employees of Uber.

Looks like they still have a lot to deal with.


Good. For Elaine's sake, and for all other potential victims of overhyped transport options, I hope that option will be pursued vigorously.

> The charge would be negligent homicide

What even happens when a corporation is convicted of a crime like that?

A fine? Slap on the wrist? It's not like Uber can go to jail.


Let's take a look at some of the ingredients that led to this:

1) A pedestrian crossing the street likely expected an approaching driver to see them and slow down. Typical behavior on the part of the average pedestrian and the average driver, whether it's a misdemeanor or not.

2) An engineer at Uber made the decision to put tech on the road without emergency braking capabilities, likely using the justification that a safety driver would be there to intervene when needed.

3) A safety driver in an autonomous vehicle that behaves the right way 99% of the time grows complacent.

The pedestrian may have been jaywalking and the safety driver may have been abdicating their duties. The safety driver especially isn't an innocent actor here. But the party who has the most responsibility is by far the engineers and managers at Uber.

This set of circumstances was completely foreseeable, but they still decided to take the risk and put this technology out on the roads. I for one don't want to be a part of Uber, or anyone else's great experiments. Spend the money, spend the time, and figure this out in a controlled environment before subjecting the rest of us to the negative impacts of your selfish ambition. Someone at Uber deserves time in prison for this.


While I agree that disabling the emergency braking was a foolish and irresponsible decision, and the most important single technical failure leading to this tragic outcome, I don't see prison time as a reasonable response, as I doubt that there was a malicious disregard for safety. If the persons most responsible for that decision still think they did no wrong, however... I'm not sure what the ideal response would be.

Beyond that, I do not want to see an extension of the practice where low-level employees take all the blame for a corporate culture that encourages bad behavior.


On one hand, I think Uber was reckless and wantonly created the conditions for this incident.

On the other hand, I don't want this to be the death knell for autonomous driving experiments.

On the gripping hand, it's possible that a large settlement might have been the best outcome from this tragedy for the family of the woman in question. They gain nothing from Uber's criminal liability. (edit: clarity)


FTA:

>Uber, which declined to comment for this story, could still be sued in civil court and be forced to pay damages. The government could also potentially pursue criminal charges against managers or employees of Uber.


I saw a post on Reddit at the time of this incident, from someone who claimed to be a former Uber backup driver. The claim was that Uber was too strict with regards to cell phones, and that if you were caught interacting with your phone while on the job you were immediately terminated.

I'd like to know how true this is, and if there is a better source for this claim.


> The claim was that Uber was too strict with regards to cell phones, and that if you were caught interacting with your phone while on the job you were immediately terminated.

Why is that too strict? Distracted driving is rapidly becoming the biggest problem for MVCs.


Did you read the article? The "driver" was watching a TV show, while instructed to keep eyes on the road

After all these stories I strongly believe that "self-driving, except when it's not" is even worse than manually driving, because supervising a computer is not too far from a driving instructor supervising a student driver; most of the time it's OK, but you have to be extremely alert to catch the times when it makes a fatal mistake. On the other hand, if you're "manually" driving, you are fully in control of and can anticipate the situation, thinking ahead with what's next. A self-driving car, like a student driver, won't tell you what it's going to do and ask whether that's OK --- it just does and you have to be very alert to instantly take control to correct when things go wrong.

Autopilots work in planes because the operators are extremely well-trained, and the reaction times needed are measured in seconds or even minutes. In a car, it's less than a second.


I think there's actually some debate about how just the right amount of automation can be dangerous in planes. Automation works great in nominal conditions but can leave pilots unprepared to handle bad situations.

There was a really interesting article I read about this regarding the Air France plane that crashed a few years ago (I think it was this: https://www.vanityfair.com/news/business/2014/10/air-france-...). Similar things even came to play with the recent Lion Air crash, although there was also a lot of negligence and crappy maintenance.


> While the system identified that an emergency braking maneuver was needed to mitigate a collision, the system was set up to not activate emergency braking when under computer control.

> Instead, the car's system relied on the human operator to intervene as needed. "The system is not designed to alert the operator," the report notes.

How are they not negligent for this, specifically?


A civil suit for negligence is still available, even when not criminally liable. Two different standards. See O.J.

The law doesn't require automatic brakes and alerts on cars. Think about how many clunkers those laws would make illegal. You can't hold Uber to different legal standards, but we can certainly say they were negligent from a moral standpoint

> The law doesn't require automatic brakes and alerts on cars.

The law isn't written for the case where the car is under its own control. The driver is assumed to be in control of the vehicle. However, if the car is also _a_ driver, neutering its ability to stop, or even to ask for assistance, in an emergency should be tantamount to disabling the human driver's brakes.


If I had to review criminal negligence and pick a reason I would say that a standard of reasonable care has not been established that requires a car to be configured in a way that would have prevented the accident.

> The Arizona Republic has reported that the driver, 44-year-old Rafaela Vasquez, was streaming the television show The Voice in the vehicle in the minutes before the crash.

I guess we shouldn't be surprised any more, as even the drivers of non-self-driving cars do the same sometimes. It's scary how often, when I look in the rear-view mirror, I see drivers behind me looking down at their phones. This is one of the reasons I almost stopped driving.


Most driving is done by humans. Humans are terrible at it and kill people unnecessarily all the time. That's the standard to beat. Speaking as someone who rides a bike on city streets, I really don't give a shit how many people Uber kills, as long as it's less than human drivers would. This whole thread smells like Monday morning quarterbacking and people throwing aside reason under stereotype threat. (Techies are in love with technology, techies are in thrall to startup narratives and oblivious to social responsibility. Everyone's anxious to disprove that.)

In fact, the mistake Uber made here was relying on a human being to do a job that is routine and boring the vast bulk of the time but occasionally requires life-saving decisions that depend on attentive awareness of the surroundings. That's the same mistake our entire civilization makes a million times a day. The fact that the operator had fewer responsibilities than a normal driver probably magnified the problem, but it's the same problem that makes driving fundamentally dangerous. She thought she was doing a good enough job and then oops, guess not, somebody's dead.

That happens every day without robot drivers involved. The standards we hold autonomous driving technology to should reflect this insane status quo.


Those cars were nowhere near ready to be on the roads. And given how fast the program spun up there’s no way the drivers were properly trained and vetted. I personally had several run-ins with the Uber cars in San Francisco before they got their California registrations revoked. I worked on 3rd in soma and saw them out as I would walk to lunch. They didn’t even try to yield to pedestrians in crosswalks during right turn on red (when pedestrians clearly have right of way). I had more than one near miss. I shouted at the driver and he grabbed the wheel in panic. He went around the block to try again and the car did exactly the same thing. It was like they were blind to anything smaller than a car. Uber knew it too, because the failure to yield was reported to Uber in person at their garage on 3rd and Harrison. Uber (and anyone who behaves like that) has no business running a self-driving program. If they start the program back up I guarantee they will continue to kill pedestrians. If you see one coming get away quick. I speak from personal experience and I’m not joking even a little bit.

My thoughts go out to her family. And the many many people who die every day because of reckless drivers and accidents. Driverless or not. Really sucks to lose someone due to no fault of their own (other than being there).

To add, driverless or not, most who kill others using a car are not criminally liable. Often times not even liable beyond what the state minimum is. Unless the victim sues for personal assets. So in California, that's 100k. Which is one of the lowest in the USA.


I don't like that you are equating this with any other car accident.

Uber made a vehicle that can zoom around the roads unassisted. You can't just put something like that out there and disclaim responsibility by assuming that somebody will sit and be ready to brake at the right times. Even trains have a 'dead man's switch', where the train stops if the operator is unresponsive.

What they did was incredibly irresponsible, they have been running an experiment on public roads with the lives of the general public at stake. For this they need to be held responsible.


I agree. I'm not saying they shouldn't be liable.

I had two points. One, expressing condolences to family. That sucks. And two, the laws favor cars and car drivers in accidents like these.


> to lose someone due to no fault of their own

? She walked in front of a car on the expressway in the dark.


You have a reasonable point that the victim probably bears some responsibility for crossing in the middle of a block in the dark, but I'm pretty sure that it was a divided city street with a 35 MPH speed limit rather than what is typically thought of as an expressway: "the preliminary police investigation determined that the car was speeding—going 38 mph in a 35 mph zone when the crash occurred" (https://www.curbed.com/transportation/2018/3/20/17142090/ube...).

It was a well lit section of road (the dashcam video Uber released makes it look much darker than it really is, there is other video out there of the same section of road, you can find it on youtube) and it was a natural crossing point but a very poorly designed road/walk way. Almost any human driver who was paying attention would have spotted her from a big distance and avoided hitting her.

Video of the section in question: https://www.youtube.com/watch?v=1XOVxSCG8u0


I stand corrected.

Just because she was the victim and passed away doesn't mean she wasn't at fault. Where have people been getting this wrong information?

Unbelievable. I wonder how many other self-driving cars are out there designed to not brake even when they clearly detect an object in their path. It's ridiculous to rely on a human to intervene in such situations. This was a completely preventable homicide, yet Uber gets away with it.

As I've said before, the easiest way to get away with murder is to commit it under the protection of a corporation. True, this is manslaughter, but the principle still stands. I bet they will go after the driver now and try to place blame on her despite her having an impossible job that should simply not exist because it can't be carried out.

So they get away with manslaughter, and a precedent is set for other car companies that no matter how negligent your system and operations are, you can kill people and get away with it as long as you find a patsy to sit behind the wheel and take the blame. With this kind of attitude, I hope self-driving cars never make it to market. I no longer think they will be safer than human drivers, because there is no incentive for these companies to make them safe, let alone safer.

> I bet they will go after the driver now and try to place blame on her despite her having an impossible job

What's impossible about watching the street and hitting the brakes if necessary, while testing a prototype self-driving car? I would say if you can't even do that, you shouldn't be in that seat.

This isn't the case where just one person is responsible. The system shouldn't have been configured that way. The driver should've been alert. The woman shouldn't have crossed the road like that. Everyone was negligent.


Watching the road attentively for hours at a time, day in and day out, is impossible. No human can do it. You're right that she shouldn't have been in that seat. No one should have, as it's an almost impossible task. No one has that type of attention span or ability to react, especially when the situation is so boring.

Humans can drive, but even then they will make mistakes. To expect there to be no mistakes or lapses of attention with no interaction is ridiculous. We are not built that way. Maybe some top athletes can do that, but even goalkeepers in soccer, whose job is to watch the ball and stop it, sometimes can't concentrate, and they are selected based on this ability yet only play 45 minutes at a time without a break.

What tests did Uber put this woman through before hiring her to make sure she was capable of the almost superhuman feat they asked of her? I bet none. The system could have prevented this crash easily, but because the auto braking was turned off, it didn't. What kind of company tests a self-driving car with the auto braking turned off? They should have known it would cause a collision sooner or later.

The road should not have been designed like that, allowing high-speed traffic at a pedestrian crossing. Cars are a dangerous incursion on human environments, akin to letting lions wander freely in town. The whole automotive ecosystem is morally liable, jointly and severally, for this assault on human life, from collisions to pollution to the appropriation of public real estate.

The pedestrian was not crossing at a designated crossing.

There is nothing more natural in a human environment than tools. A car is a tool.


It happened because one company was greedy, and negligent, - and has the money to pay the bill. The question is, will the jury really make them pay.

It happened just as much because one driver/monitor wasn't doing their job.

Stating opinions as if they were statements of fact doesn't make them correct.

It's at least more useful to speak of legal liability, because that can become case law.


I wonder why Uber (and other self-driving car companies) don't have remote employees monitoring the road who can take over in case the appointed driver doesn't react quickly enough.

The problem with these driver-assisted self driving cars is that the driver is unlikely to be paying attention at any given time since the system is 99% reliable.

By adding remote monitoring, you could even have multiple people monitoring each car. It might be impossible to safely steer the car from a remote location, but they could surely activate the brakes and/or trigger other simple directives to drastically decrease the chance of a fatal collision.

Given that the average driver reaction time in a car collision is 2.3s [1], I doubt network latency would pose much of a problem, and cost surely isn't an issue for these companies. A remote person could also use the car's cameras to gain a superior field of vision (especially at night time) when compared to the in-car driver.

[1] https://copradar.com/redlight/factors/IEA2000_ABS51.pdf
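
A back-of-the-envelope version of that latency argument (the video-pipeline and network numbers below are assumptions, not measurements from any real deployment):

  # Rough latency budget for a remote emergency-brake command; values assumed.
  human_reaction_s = 2.3    # average surprise reaction time cited above
  video_pipeline_s = 0.25   # capture + encode + transmit + decode (assumed)
  network_rtt_s    = 0.10   # round trip to a regional monitoring centre (assumed)

  extra_delay_s = video_pipeline_s + network_rtt_s
  speed_mps = 17.0          # ~38 mph, roughly the speed reported in this crash

  # Extra distance travelled because the observer is remote rather than in the car:
  print(f"{extra_delay_s * speed_mps:.1f} m")  # about 6 m

On those assumptions, the network and video overhead adds only a few metres of travel on top of the human reaction time, so latency probably isn't the binding constraint; keeping a remote observer attentive is.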


If someone sitting in the vehicle can lose attention, there is not much hope for someone watching from afar on a TV (even if you're streaming at something like 4k 120fps --- which would be an immense amount of bandwidth to handle for each car --- it's still not much more immersive than watching someone playing a driving game.)

Maybe they can make it more like a video game, where you're constantly saving cars from collisions, and you don't know which is a simulation and which is real. Give them bonuses based on what percentage of collisions they successfully deter.

And let's have Facebook moderators be shown a small percentage of images known to be of rape or beheadings just to test if they're flagging images correctly. You see how that might be a bad idea?

From what I've seen/read, the more people are responsible for monitoring something, the less likely each one is to monitor it, sometimes to the point where more monitors can get a worse rate of catching problems.

Sort of like why the driver of a "self-driving" car is more likely to watch TV instead of paying attention to the road.


> ... since the system is 99% reliable.

Where did that number come from?


Interaction between humans and AI is far from perfect. It seems that it would be easier for people to adapt to AI rather than the opposite. I feel that someday every traffic participant (including pedestrians) will be required to carry a tracking device. These devices will communicate with each other and prohibit or allow actions, e.g. making a turn or crossing a road. Eventually they will make traffic rules as we know them today obsolete. No traffic lights, no road signs and no crossings anymore. Every action will be controlled by the device. And, of course, they will automatically report and fine law-breakers.

Is this the bright future of the humankind? Or is this a setting for a new dystopian book?


Interesting reading the article. From a risk perspective, I would have thought Uber would have assessed the risks associated with deploying their self-driving solution, particularly the decision to disable the automated controls that stop the car in the event it detects an emergency, given that it has this capability.

But I agree with the verdict, it's just strange that the vehicle has the ability to detect a potential collision but cannot apply emergency braking to prevent it.


Of course, the prosecutor is judge and jury in this case because they are not pressing charges. I'm almost certain uber would be held liable by any jury who heard that they intentionally turned off the braking that could have saved this woman's life. Then again, it is Arizona so it's not surprising that this is not even being prosecuted. Uber must be bringing in a ton of money to the state there to get off so easily.

I'm not so sure. Uber's defense here amounts to blaming the safety driver for the crash. They can argue that their actions improved the safety of the self-driving car (by reducing erratic movements due to false positive) so long as the safety driver is performing their job properly. Absent a smoking gun email that says "yep, we know this is unsafe, we don't care" (basically the Ford Pinto scenario), I don't think a guilty verdict is near-certain.
