The prosecutor made the wrong call here. This part is absolutely criminal negligence. Putting a “self driving” car out there that doesn’t have emergency braking enabled (apparently because it creates too many false positives) is an unjustifiable risk. Working emergency braking should be the first thing perfected, before the computer gets to control the car.
Situations like drunk driving can make a difference, but in this case that's a hard claim to make. Maybe if the car had swerved out of its way to hit her, you could make that case. But arguing that it wasn't doing enough to stop -- that's a tough sell given AZ law. The car had the right of way, it was not acting maliciously, and she walked right in front of it. Under AZ law, asking the car to make what might be a dangerous last-second swerve into another lane or to slam on the brakes to avoid hitting her is not a legal obligation, as these are unsafe maneuvers. There might be another car behind it or other cars/pedestrians alongside it, so AZ law doesn't require these types of high-risk actions to avoid hitting pedestrians who walk right in front of oncoming traffic.
Moreover, this person was jaywalking at night in an area without street lights, without even bothering to look both ways, across a median, wearing a black hoodie, across a high-speed road, right in front of a car that had proper headlights and was traveling at a steady speed.
Anyone familiar with AZ law knew that Uber wasn't going to be charged. You may think the law itself is wrong and that cars should be legally required to take last second high risk evasive maneuvers. These are all tradeoffs to which reasonable people will disagree, but changing the law is the job of the Arizona legislature and not of the prosecutor, who made the correct decision, even if it seems like the wrong decision to you.
Furthermore, Uber had specifically tasked the "safety driver" with both "safety driving" and operation & oversight of the automation system, wilfully splitting their focus where other manufacturers (and previous Uber trials) had one person at each post such that the safety driver could actually focus on safety driving.
Read the section on "duty of care" here:
and note that drivers only have a duty of care to yield to pedestrians who have the right of way. Pedestrians jaywalking never have the right of way. They are only allowed to cross at intersections.
Drivers do not have a duty of care to yield to pedestrians who do not have the right of way, although drivers have other duties.
You're right that the bullet point list in your link doesn't have anything that applies to Uber, but there is no carve-out in the negligent homicide law that says running over jaywalkers is legal. The prosecutor could have looked at:
* whether Uber was warned that having one operator instead of two would be unsafe (they were)
* whether having one operator to monitor the road and the software is inherently unsafe
* whether operators were told that the system would never emergency brake and never warn about needing to brake
* whether Uber reviewed footage to ensure operators were paying attention, whether they knew that operators weren't paying attention, etc.
I hope all of this will be in the NTSB report.
Has that been conclusively established, rather than that they were watching the central console, where the automation they were also supposed to oversee is located?
Where I'm at (Sweden, so far away and possibly a very different legal situation), right of way is not an excuse for negligent driving. You don't get to run people over unpunished just because you have the right of way if it could have been prevented by paying proper attention.
I'm not arguing that the driver should have been charged at any point. I didn't do so even back when the event happened.
E.g. if a human had been driving the car and the car did the exact same thing the Uber car did, but in broad daylight, then they still most likely would not be charged. If you jaywalk and get killed, it's basically your fault, and the only exception would be if the driver was drunk, or speeding, or not obeying traffic signs, etc. Jaywalkers don't have the right of way, and cars are not obligated to yield to pedestrians who don't have the right of way.
This is because a person's right to be alive outweighs your right of way at any time, and it is none of your business whether that other person acts lawfully or not.
Interesting to see how much these general principles differ.
But that doesn't necessarily mean that the law should try to perfectly reflect that moral imperative. Laws generally don't work well when they attempt a high resolution of morality, because the law can only approximate justice, and the more complex the approximation, the more arbitrary and ambiguous the law becomes, which is itself unjust.
The specific problem here is who decides whether the driver did "enough"? The driver can say "If I tried to slam on the brakes, I was afraid my car would spin out of control" or "If I tried to swerve away, I was afraid I might collide with someone". They might say "I thought there was a car behind me, and if I hit the brakes, I would injure that person. It was an honest mistake that the car which used to be behind was no longer there." Etc.
Then you have to decide whether you believe them. In an environment where someone is innocent and must be proven guilty, you don't get a whole lot more precision by adding more precision to the law. Or do you drop the requirement of presumption of innocence?
Also, that leaves a lot of discretion to the prosecutor, which may be abused, or imposed arbitrarily, and is ambiguous.
So you are creating a lot of ambiguity in the law in an effort to precisely match the moral outcome.
The other option is to have clear responsibilities. The pedestrian must do X. The driver must do Y. If the driver does Y, he is not charged. When both X and Y are followed, there can be no accident.
But "Must do all you can" is not clearly defined. "Not speeding, obeying traffic signs" -- this is more clearly defined. So in AZ, the law is a little more clear, but at the expense of not being as morally precise. It's a reasonable trade off. It may suffer from an abstract moral critique, but I'm not sure on balance it delivers less justice.
If you use a circular definition of "due care" and make it something like "due care means you must take all action that is reasonable to avoid an accident" you've again ducked the issue of giving due care a well defined meaning.
Indeed, which is why such laws should be and are aimed at road safety.
> the more arbitrary and ambiguous the law becomes
"When you hit a pedestrian with a car, you're in the wrong", it doesn't become more unambiguous than that.
> do you drop the requirement of presumption of innocence?
That's framing the debate. If you hit someone with your car, you're guilty of hitting someone with your car. There is no "presumption of innocence".
> So you are creating a lot of ambiguity in the law in an effort to precisely match the moral outcome.
No. No ambiguity, and no matching moral outcomes, but improving road safety.
> in AZ, the law is a little more clear, but at the expense of not being as morally precise.
But it's not more clear, and moral precision is not a goal.
> It's a reasonable trade off.
Here I should draw up the statistics of accidents involving cars and pedestrians in Arizona and Germany, but I don't think that's necessary.
> It may suffer from an abstract moral critique,
> but I'm not sure on balance it delivers less justice.
That's just an incredibly U.S.-centered point of view. I don't mind, but your parent was talking about the German system. Maybe AZ works in comparison to the laws in other states with a very car-centered way of life, but in comparison to Germany it's really just bat-shit insane if the goal is improving road safety -- but granted, that might not be the case.
Such a general principle would be unreasonable. I'm all for putting pedestrians' safety first, but a driver who is following all rules and safety principles can still hit a pedestrian without any fault - the extreme (but not only) example is somebody who decides to commit suicide right as you're driving by.
a friend of mine had this exact issue. he was driving at the speed limit, as usual. an old lady just jumped in front of his car.
at first, he got a murder charge; it was then dropped when the prosecutors saw on the road cameras that the old lady 100% wanted to die.
The difference lies exactly in the details you describe. If an accident happens, you're not in the clear because you had "right of way", but because you, having good control of your vehicle, exhausted all possibilities to avoid the calamity.
The difference here is that even if the pedestrian or cyclist was in the wrong, you're not automatically in the right. Your own behaviour as a motorist stands on itself, and you're supposed to take the difference in size and weight into account.
It may not appeal to everyone's sense of fairness and just punishment (although, in fairness, only one of the parties can realistically be maimed or even killed), but from a road safety perspective it leads to more careful driving.
Even if a pedestrian runs out onto a highway and it's physically impossible for you to avoid hitting them?
The rule over here is that a driver must always be in control of their vehicle. Exceptional circumstances such as those you describe (which are completely and utterly unlike those of the Uber case) would act as mitigating factor, possibly down to no fault if there was no way for a reasonable driver to avoid the accident.
But the default state of things is that if you decide to put yourself in control of a multi-ton vehicle, you had better be able to handle it, or the consequences of your recklessness.
So it could be that with pedestrians more conscientious, you can put most of the blame on the drivers by default and not have it be an unfair system. That wouldn't be suitable in the U.S. where people are generally less responsible both as drivers and as passengers.
In terms of statistics, yes, the US has ~7 vehicle deaths per billion vehicle-km travelled whereas Germany has ~4, but again, car ownership is much more common in the U.S. and necessary to get to work or do chores. People drive every day, when they are tired, and since almost everyone has to drive long distances you have a lot more irresponsible people doing it out of need. I also think that the streets are laid out in a less pedestrian-friendly manner, which makes them less safe.
As opposed to Arizona where they very clearly value human life less than your right to get to work early.
But consider the following situation: An Average Joe drives down the road. Suddenly out of nowhere a pedestrian appears. Joe hits the brakes, but too late, the poor guy is killed.
He just didn't have time to react. It's not Joe's fault, unless it can be shown that he saw the pedestrian, could have stopped, but deliberately decided to teach him a lesson.
Now if exactly the same thing happened on a crosswalk, the driver would be at fault. A crosswalk, by design, is a place where pedestrians are crossing. In that case the driver should have expected a person to appear on the road and been prepared to stop -- unless it can be proven that the pedestrian did something nasty, like jumping onto the car from a tree.
That's not what's written there at all.
It was a very notorious case. I would expect the authorities to have silently improved the lighting in the area and taken other corrective measures in all those months.
Still, if I absolutely had to cross there, it was definitely an intersection where you'd watch carefully for cars, but one with fairly long sightlines for doing so.
You can explore for yourself here:
"In this nighttime video, posted to YouTube by Brian Kaufman on Wednesday, the scene of the crash can be seen around 0:33"
"Black says in the video as he drives past the point in the road where Herzberg was hit (around 0:33)."
Something that stands out to me on ars clips is that the street lights look like fireballs, even from a distance. I'd guess they have their brightness or some other setting pumped way up. Also looks like the drivers are cruising with brights and/or halogens.
 - https://www.youtube.com/watch?v=ufNNuafuU7M
1. It absolutely is. The specific spot (00:33 in the Kaufman video) is clear as day on a cellphone.
2. Visibility doesn't even have any relevance in the first place: self-driving systems don't rely solely on visible-light cameras, there was no obstacle in the middle of the road, there was no fog, there was no rain; the self-driving system had full and perfect visibility all along.
> especially given that the pedestrian was wearing all dark.
Given how well-lit the road is, that would have made them more visible against the background, not less. Again not that it has any relevance.
> Something that stands out to me on ars clips is that the street lights look like fireballs, even from a distance. I'd guess they have their brightness or some other setting pumped way up.
Yeah, sure, who'd think a camera with a finite and middling dynamic range (such as a cellphone's camera) would try to actually capture information at night and thus saturate on bright direct light sources.
> Also looks like the drivers are cruising with brights and/or halogens.
Yes of course, a rando would equip their car with an omnidirectional halogen which somehow magically brightly lights up the roadside but doesn't move with their car. You've certainly cracked the code here.
It has relevance in showing that the pedestrian was failing to exercise their duty of care and was thus at fault. Jaywalking at night wearing a black shirt is pretty negligent. Jaywalking even in the day is negligent in AZ, but at night, across a major artery, it's crazy.
Unless you are going to argue that they knew it was a self-driving car, and so thought that the car would see them in the dark. But they apparently didn't even know a car was there, so that's a tough sell. They didn't even look if a car was coming.
A lot of people here are having a hard time coming to grips with the fact that a pedestrian has responsibilities and can be at fault in a collision with a car -- e.g. that the pedestrian doesn't automatically have the right of way or a right to be avoided every time they step into a street. I get that this is a tough concept for a lot of people to internalize, but if you're going to be arguing that someone should be charged, then at least the broad outlines of the law in AZ need to be understood.
While the concept of right of way may often have a bearing on who is held at fault for an accident, it does not -- or at least should not -- entirely absolve the driver of any responsibility to drive attentively and carefully in all circumstances.
In this case, it does sound to me as though the pedestrian was at fault -- particularly in the context of AZ law -- but it also sounds as though both Uber and the "safety" driver were seriously negligent in their responsibilities, and their negligence was a major contributory factor in the pedestrian's death. They should be held to account.
I was always curious why they were testing there as opposed to somewhere closer to HQ.
I always thought AZ had these laws because historically a lot of retired people are there who have slower reaction times and aren't going to be doing any stunt driving swerves, and it's generally not a pedestrian or bike friendly place. They prioritize cars. When I saw the protestors a while back pouring onto freeways in the east bay, I remember thinking no way would anyone try that in AZ -- the cars would just run right over them.
You've put a lot of emphasis on how dangerous and crazy it would be to have not run over the pedestrian, but in other jurisdictions learning how to do an emergency stop is required for all drivers before being allowed on the road independently. There are many reasons a car might need to stop unexpectedly (including the possibility of cars in front behaving erratically) so stopping along with maintaining safe separation between cars are basic driving skills expected in many countries. From your description AZ might be an outlier though.
People learn to parallel park too. Those that don't do it regularly tend to be pretty terrible at it. The fact that people had to check a particular box at the time they got their licenses says little about what they can be expected to be able to do years after that.
This and traffic in Paris are the real tests of self driving cars.
I do want to see self driving cars in Amsterdam though, I think they'd just shut down and have a sob after not being able to move due to being swarmed with bikes well within any sort of safety area.
I get the feeling a lot of people on HN have never actually driven a vehicle or at least haven't in recent memory. If you're driving through a street at a perfectly safe and reasonable speed a sufficiently stupid pedestrian is still perfectly capable of creating a situation that causes you to hit them. Nothing is foolproof. I'm sure I could get a truck driver who's doing nothing wrong to back over me if I behaved ignorantly enough.
Absolutes like "it's always the driver's fault" are just stupid, ignorant, poorly thought out, whatever you want to call it, but they sure aren't good.
It is every road user's ultimate responsibility to behave in a manner such that nobody else is forced to take emergency action to avoid them. The reason we have specific rules is so that people behave predictably (e.g. stopping at stop signs) making this easier.
I say this as someone who walks a couple miles through the city every day.
It can work, it’s just less efficient.
My mother has dementia and until she was put into a secure home she had a habit of wandering. On three occasions she was picked up by police after midnight crossing the road.
Your defense of laws protecting drivers is fine, but it doesn't cover outliers like my mother, who would have been knocked down at least, or killed at worst. Is it ok for laws to protect the driver/AI driver who runs over pedestrians who have lost their personal safety faculties?
So would it be ok if that self-driving car did see the pedestrian walking down the freeway, tried to avoid the collision, lost control, and ended up running into a car coming the other way, killing all the passengers in both cars?
The car was traveling at high speed down a dual carriageway (i.e. a freeway) in the middle of the night.
In a situation like that any sudden reaction by the on board computer is never going to end well.
But in this case, hard braking would have been plenty fine, with little downside risk (beyond the general problem of excessive unnecessary braking). The car's computer was aware of an obstacle in sufficient time, as a human driver would have been.
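A quick back-of-the-envelope check supports this; the speed, detection lead time, and deceleration below are illustrative assumptions, not case data:

```python
# Back-of-the-envelope check: could hard braking have avoided the impact?
# All numbers are illustrative assumptions, not figures from the case file.
speed_mph = 40                       # assumed travel speed
speed_ms = speed_mph * 0.44704       # ~17.9 m/s
detection_lead_s = 5.0               # assumed seconds between detection and impact
decel = 7.0                          # m/s^2, typical hard braking on dry asphalt

distance_available = speed_ms * detection_lead_s       # ~89 m to the obstacle
stopping_distance = speed_ms ** 2 / (2 * decel)        # ~23 m needed to stop
time_to_stop = speed_ms / decel                        # ~2.6 s

print(f"available: {distance_available:.0f} m, "
      f"needed: {stopping_distance:.0f} m, stop time: {time_to_stop:.1f} s")
```

Under those assumptions the car would have needed roughly a quarter of the available distance to stop completely.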
I would not go so far as to say that, only because if that had been the situation then the car and its onboard system would have to take all the blame.
But in this case, based on the footage I saw the car was travelling down a freeway at high speed and at night.
Now some of the blame might be attributable to the car, but the real cause of the accident was the pedestrian walking down a freeway, so it seems clear most of the blame has to be attributed to the pedestrian.
Except to any cars that might be following behind, which then end up ploughing into the back of that hard-braking car.
You can spot such situations pretty easily if you pay attention.
> or having to risk their own life
Reducing speed and not hitting someone cannot be compared to risking your own life. You're turning the argument around and adding unneeded emotions to this.
The only real solution is to get the false positive rate down. Which is why they still have human drivers until they do.
Yes, building a self-driving car is hard. But if you don't even have emergency braking working, then your technology is too underdeveloped to drive on public roads. Keep trying on the test track.
Personally, I doubt that they really considered that; I believe that their software was rushed, flawed, and totally not ready for real-world driving. But I don't think it's a certainty that false positive emergency braking is always better than running over obstacles, especially if "running over obstacles" is determined to be a sufficiently rare event.
It is entirely likely that Uber engineers evaluated this scenario, and (correctly) decided that it was safer overall to turn off reactionary braking - and have that function performed by a human in the driver's seat. If they hadn't, these cars might instead have been brake-checking people at 100x the rate.
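Whether that call was defensible comes down entirely to the rates involved. A toy comparison of the two policies (every number below is invented purely to show the shape of the calculation, not Uber data):

```python
# Purely illustrative: compare expected incidents per million miles
# under two policies. All rates below are invented assumptions.
miles = 1_000_000

# Policy A: automatic emergency braking left enabled
false_brakes_per_mile = 1 / 1_000           # assumed spurious hard stops
crash_risk_per_false_brake = 1 / 500        # assumed chance a spurious stop causes a crash
missed_obstacles_per_mile_a = 1 / 500_000   # assumed residual misses with AEB on

# Policy B: braking left entirely to the safety driver
missed_obstacles_per_mile_b = 1 / 100_000   # assumed misses when the human lapses

incidents_a = miles * (false_brakes_per_mile * crash_risk_per_false_brake
                       + missed_obstacles_per_mile_a)
incidents_b = miles * missed_obstacles_per_mile_b
print(f"expected incidents per million miles: A={incidents_a:.0f}, B={incidents_b:.0f}")
```

Of course the two kinds of incident differ wildly in severity, and nobody outside Uber knows the real rates; the point is only that "disable AEB" is a quantitative bet, not an obviously safe default.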
It's easy to say things like "Uber should have waited until better safety features were available", such as the eye tracking suggestions mentioned elsewhere in this thread. But features like that take time - especially if they're development-only features that would have no place in the final product. Every additional safety feature pushes FSD deployment back.
Globally, 1.25 million people die from car accidents annually. That's over 3,000 people per day. For every day that you delay mass adoption of FSD, you are accruing massive amounts of fatalities that could have been avoided.
FSD does not have to be perfect, and its development will cost innocent lives - but if you're optimizing for minimal loss of life, it's the correct thing to do. Reactionary policies do the opposite of what you intend them to do - they cost more lives in the long run.
This accident alone demonstrates just how flawed the system is. This wasn't even an emergency situation to start.
This is literally how all human driving works right now.
Seriously, look around at other drivers while you're on a freeway or interstate sometime.
They're not driving, they're singing along with the radio, or shaving, or putting on makeup, or sending text messages.
They're not Luftwaffe aces with eagle eyes and steely nerves monitoring the fuel mix and oil pressure while scanning the skies for the silvery glint of the sun off the wings of a P-51 that may herald their last few moments on earth. They're bored people doing their boring commute and even when they have their hands on the wheel they're not really paying attention.
Yes, humans suck. But that's still orders of magnitude better than Uber.
These cars could absolutely pass that test.
Uber isn't a philanthropic research endeavour, and it can't rationalise killing people based on lives it couldn't or wouldn't have saved in the event of its technology actually working. The reason they put tech that reportedly hit things every 15k miles and had a near miss every 100 miles on the road as soon as possible has nothing to do with optimizing for minimal loss of life and everything to do with optimizing for a unicorn valuation.
If, given two scenarios:
1. We develop FSD very carefully. 0 lives are lost during FSD development. Ten million lives are lost by the time that FSD sees 100% adoption.
2. We develop FSD less carefully. 100,000 lives are lost during FSD development due to suboptimal performance before it is perfected. Five million lives are lost due to human drivers by the time FSD sees 100% adoption.
Would you really choose the former option? If your ethics board refuses option 2 in favor of option 1, it is they that are mistaken.
How can you know the future in this case? I would not believe Tesla/Uber (or the others') predictions: "trust us, let us kill 1 million people in the next 10 years, but after that the number will go down to 100; trust us, we software developers are very reliable at predicting things and our mathematical skills are so great that we can be sure even when we use NNs that are unpredictable." And if they fail, then the next startup will appear, promise the same thing, and you have no choice but to give them a check for 1 million lives too, so as not to discriminate between startups.
That is, without proof that the new way will be safer in future by a specific proportion, how many excess deaths can the ethics committee sign off on?
(You can make a similar, better-founded argument for mass train transit. It is about 10 times safer than driving per passenger mile. So, reductions in safety that move people from car commuting to train commuting (I imagine safety is expensive and cheaper transit would be more popular) might be worth it. No one seems to advocate for them, though.)
It's not a numbers game. Those are likely two different sets of people. You are saving some people by calculating in that others get killed instead. It is different from, for example, using an experimental treatment option on terminal patients: that would be saving some of the dying patients, though some might die regardless. The scenario is much more similar to the trolley problem's cousin, the transplant problem: having someone else, uninvolved, killed to save a larger group of people.
Seeing as the automatic emergency brakes were disabled, it seems the product wasn't good enough to avoid false positives, and was rushed by cutting the safety measure, accepting that by putting it in live traffic the car would likely kill people unrelated to the research. It was a trade-off between slower development and killing people.
I don't think you will find any ethics board that will green-light killing uninvolved people to save however many others. Of those I can think of who did that, quite a few ended up wanted for crimes against humanity. We have ethics boards for a reason. Without them, the ends can quickly justify the means.
There is probably some degree of responsibility on the side of Uber however:
* how was this guy recruited?
* was he trained and evaluated?
* is this kind of behavior quite common in test drives? if so, was it addressed?
* did Uber put in place at least a basic system to keep the driver engaged (like a switch you have to press every 30 seconds)?
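For what it's worth, that last item is trivial to build. A minimal sketch of such a dead-man-style prompt (the interval, response window, and function names are all hypothetical):

```python
import time

PROMPT_INTERVAL_S = 30   # hypothetical: how often to demand a response
RESPONSE_WINDOW_S = 5    # hypothetical: how long the operator has to respond

def operator_acknowledged(window_s: float) -> bool:
    """Placeholder: return True if the operator pressed the button within window_s."""
    ...

def engagement_loop(trigger_safe_stop):
    """Prompt the operator periodically; hand off to a safe stop if they go silent."""
    while True:
        time.sleep(PROMPT_INTERVAL_S)
        if not operator_acknowledged(RESPONSE_WINDOW_S):
            # Operator unresponsive: hand control to a safe-stop routine.
            trigger_safe_stop()
            return
```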
Given the specifics I hope this ruling will not serve as a precedent, it's a bit too special.
Lastly, it raises another question: Car automation is unlikely to become perfect within the next few years, and even if it is, the legal responsibilities and liabilities cause issues. So it's likely a driver will be kept in the loop, at least for a decade or two.
Given this statement, how do we design automation that keeps the driver in the loop?
The driver must be aware of the car's surroundings at all times so he can take over in case of an emergency. Consequently the automation must be designed to keep him active and naturally alert (like a "manual" driver); an automation that goes "yeah, do nothing... hmm, oops, I don't know what I've gotten myself into, please take over, you have 2 seconds to analyze the situation and take the correct actions or someone will die, kiss, goodbye" is not the correct answer.
The car did have a working emergency braking system that the car was born with. Uber disabled that one too.
Any system that requires a human to sit there bored for hours, waiting for seconds of action at any moment, is fundamentally flawed. The bipedal ape component is being used out of spec and the blame for that lays with the engineers who designed the system.
They could put an eye-tracking system in place to make sure the operator pays attention! If the driver doesn't pay attention, the car should slowly come to a stop.
Expecting people to always do the right thing is a recipe for disaster.
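Something along these lines is standard in driver-monitoring systems today; a rough sketch of the idea (the thresholds and the car interface are assumptions, not any particular vendor's API):

```python
# Hypothetical attention watchdog: if the driver's gaze has been off the road
# for too long, warn first, then command a gradual, controlled stop.
WARN_AFTER_S = 2.0   # assumed: warn after 2 s of inattention
STOP_AFTER_S = 5.0   # assumed: begin a controlled stop after 5 s

def watchdog_step(gaze_on_road: bool, inattentive_s: float, dt: float, car) -> float:
    """Advance the watchdog by one time step of dt seconds; return updated timer."""
    inattentive_s = 0.0 if gaze_on_road else inattentive_s + dt
    if inattentive_s >= STOP_AFTER_S:
        car.begin_controlled_stop()    # hypothetical: pull over / slow to a halt
    elif inattentive_s >= WARN_AFTER_S:
        car.sound_attention_alert()    # hypothetical: audible warning
    return inattentive_s
```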
Waymo is claiming things like 30,000 miles between needed interventions. Even if that's 100x higher than reality, that's hours of watching and boredom waiting for a few seconds of action.
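To put that claim in hours (a quick illustrative calculation; the average test speed is an assumption):

```python
miles_between_interventions = 30_000   # the claimed figure quoted above
avg_speed_mph = 35                     # assumed mixed urban/suburban test speed

hours = miles_between_interventions / avg_speed_mph
print(f"~{hours:.0f} hours of monitoring per intervention")     # ~857 hours
print(f"even at 1/100th of the claim: ~{hours / 100:.1f} hours") # ~8.6 hours
```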
Ultimately systems will require no intervention for very long periods of time punctuated by very quick precise actions that most people are terrible at in the best possible scenario but that computers will be very very good at.
If you task the user with doing the right thing in 5 seconds time after thousands of hours of inactivity a sizable portion will react after the entire affair has come and gone and those are the lucky ones. The remainder will act 3 seconds into the crisis and screw everything up.
What you are describing is the worst possible combination of man and machine wherein you let the machine handle the part the human could easily handle and you task the human with the part they are worst at to ensure maximum carnage.
Except where normally there is a safety driver focusing on driving and an oversight passenger tasked with looking at and checking the automation's feedback, for cost reasons Uber had given both tasks to the "safety driver", forcing the driver to often stop paying attention to the road to check automation warnings & classify events.
The driver being criminally liable is a more powerful incentive. People will not buy & drive vehicles that have a reputation for getting their drivers criminally prosecuted. And the government can still regulate for high safety standards on self driving vehicles independent of criminal liability.
Too powerful. The driver should not be punished for correctly using a legally purchased vehicle. We don't apply that kind of standard anywhere else, and for good reason.
Plus, people would refuse to buy self-driving cars even if they were on average safer than human-driven cars because
1. Everyone thinks they're an above average driver
2. Illusion of control
So your proposal would cost lives in the future when self-driving cars actually are safer.
The eyes of the driver in this case were on the phone, not the road. That's hardly "correctly using".
You may note he explicitly said "the government can still regulate for high safety standards [independently of the actions of the prosecutor]".
So where's the market incentive to bother making the piloting algorithms less killy? Consumers will choose cars that prioritise speed and the safety of the occupants, with no regard for the safety of bystanders.
This all seems to stem from the assumption that the computer just does whatever it wants (shrug), and isn't merely a tool being used by a human programmer who's supposed to be in control of it. That way lies [killbot hellscape](https://xkcd.com/1613/).
People will indeed logically and correctly choose devices which won't opt to kill them. In fact they won't buy anything that will ever opt to deliberately kill them in any circumstances.
The challenge is to ensure the machine has a survivable alternative in the broadest possible range of scenarios to avoid making bad choices.
>> the system was set up to not activate emergency braking when under computer control.
So the parts that mattered the most were disabled? I bet that even the human operator did not know this.
No one knows what would have happened had this gone to trial; it would be better to find out sooner rather than later.
That's why they put in an emergency braking device with a set of eyes, hands, legs... and a movie streaming device, apparently?
Negligence in that case is not bricking the employee's phone before putting him in the car. But nobody expects this from companies that hire drivers.
// emergencyBrake(); //TODO: fix
Disabled. We don't need this.
Agreed, but that makes it the lawmaker's criminal negligence, not Tesla's.
It seems to me that it would be to the benefit of self-driving car companies to own up to liability, as it serves their goal of achieving widespread adoption. For example, Volvo is in line with this idea and has publicly stated that it would accept full liability in fully autonomous operation modes.
In this case, I do think that some liability lies with the driver - as they were tasked specifically with preventing situations like this. What is not clear is whether this task is even humanly feasible given reaction times, and based on this, whether or not Uber has been criminally negligent. Given this, I am surprised that the prosecutor seems to have absolved Uber of any blame.
Every NTSB report has a "probable cause" section determining who or what was at fault. And then the FAA hands out fines and courts award additional damages largely based on that finding.
It is true that the punishments aren't handled by the NTSB but that's more about separation of concerns & departmental remits than anything else. Is separating out "fault finding" from "punishments" a good idea? Maybe? I honestly don't know.
In case everyone is not aware, NTSB reports cannot be used in court to establish fault or otherwise be used as evidence when seeking compensation for damages.
"No part of a report of the Board, related to an accident or an investigation of an accident, may be admitted into evidence or used in a civil action for damages resulting from a matter mentioned in the report." 49 USC 1154(b)
In criminal cases, what happens with inadmissible evidence? Like the fruit-of-the-poisonous-tree doctrine... what if you literally have video proof and tons of other graphic proof of a murder -- do you really just ignore it and let the perp go back into society?
It would be nice if society adopted this with all crimes. It's not like we can prove we're any different from the low-level design of a machine when it comes to the "free will" ideology the justice system is currently designed around.
To a large extent we already do that; e.g. many countries have an upper limit on punishment (like 30 years of jail max), or allow criminals to have their criminal past expunged after some time has passed, not because that's "just" or "fair" or whatever, but simply because it's "effective".
The distinctions our legal system makes regarding coercion, mens rea, insanity, etc are mostly trying to get at this same underlying distinction.
"Whoa, there sure are a lot of fights at bars we have to keep responding to! Maybe those would happen less if we had fines for bartenders that had too many fights? Or maybe if they had to get a license that we could revoke if they weren't preventing fights?"
"Whoa, it seems that a lot of people don't own guns, but then go and buy one when they get pissed and shoot somebody. Maybe that would happen less if we had waiting periods?"
And I just realized you're being downvoted. How surprising.
I don't think it is, because in this case the person was hired by Uber specifically to "drive" this car. They knew that the car was still an incomplete product, they knew that they were testing it, and they knew that they were literally getting paid to keep this car from doing something like this.
That's very different than a random person using a self-driving car that they expect and were told is a finished product.
Sadly, while that is in fact the task of the safety drivers in other automated-car trials, it wasn't the case here for Uber, because the "safety driver" was given both the job of safety driving and that of checking, controlling and classifying the automated system's information, ensuring they'd have to split their focus and frequently context-switch between two completely different tasks.
Worse, this decision of Uber all but ensured this would happen: the automation system is more likely to lose its shit and need checking specifically in situations where the safety driver would be most useful.
My point was only that I don't think this would set any kind of precedent since it's so far from a simple case of "an owner of a self driving car".
Not watching TV while you’re supposed to be driving is a pretty low bar.
But apparently, humans can’t even be relied on to do this, even when it’s their one single job.
There are decades of experiments showing that boredom and fatigue are huge problems for this kind of work where someone is mostly idle except for rare events. It’s why the TSA mixes fake images into the stream on X-ray scans so people see things regularly and don’t zone out. Anyone smart asking humans to do repetitive work builds layers of safeguards in to keep people engaged – or in Uber’s case they just hope they can shift blame to the person in a bad design, which worked so far.
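The TSA trick translates directly to safety-driver programs: occasionally inject a harmless synthetic event and track whether the operator reacts. A minimal sketch, with invented names and thresholds:

```python
import random

PROBE_PROBABILITY = 0.02   # assumed: ~2% of monitoring intervals get a probe
MIN_RESPONSE_RATE = 0.9    # assumed: below this, rotate or retrain the operator

def maybe_run_probe(show_synthetic_alert, operator_responded):
    """Occasionally show a benign synthetic alert; return whether it was caught, or None."""
    if random.random() > PROBE_PROBABILITY:
        return None                 # no probe this interval
    show_synthetic_alert()          # hypothetical UI hook
    return operator_responded()     # hypothetical: did the operator react in time?

def evaluate_shift(probe_results):
    """Flag an operator whose probe response rate drops below the threshold."""
    caught = [r for r in probe_results if r is not None]
    if caught and sum(caught) / len(caught) < MIN_RESPONSE_RATE:
        print("vigilance below threshold: pull the operator for retraining")
```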
What we have here is someone who was watching TV on the job. I adamantly refuse to believe that humans are incapable of not watching TV for a stretch of several hours.
If the driver had nodded off, looked out with glazed over eyes, or any number of other situations consistent with your scenario, then you'd have a point. But this was not that case.
The fact that this particular person dealt with boredom by watching a video makes it easier to blame but that's just one of many ways in which people cope with boredom and it's not the root cause. You can blame the worker if it makes you feel better about yourself but if your goal is to reduce the number of errors the system has to be redesigned not to depend on people acting like robots rather than humans.
> Of the millions of self driving car test miles with safety drivers not once has a safety driver been looking at the road, but not able to react in time because of the monotony of the job.
Do you have any evidence supporting this claim? In particular, you'd need to prove that all of the self-driving car tests have the same one-person setup (which we know to be incorrect), every company ignored decades of well-understood risks and similarly didn't have any tasks for that person to perform and thus stay engaged, and you'd need to know how frequently incidents occur which require driver action to prevent a problem.
It's far more likely to be the case that there have been many situations where someone was distracted or focused on an area other than where a potential risk was but the other driver or the self-driving system successfully avoided it turning into an incident which made the news.
It's way more active than watching a self-driving car doing its thing completely autonomously.
What if we came up with a reasonably objective test of driver skill and the car outperforms you?
This is and remains the standard argument - the car doesn't have to be perfect, it just has to beat a human having a bad day. If you are worried about the liability, maybe keep the car serviced and sit in the driver's seat and look out of the window instead of watching a movie. Tell the car to drive slowly, maybe.
I feel sorry for the driver who is creating this precedent, but this is hardly a road block. The average human is not a great driver.
I wonder why a programmer would go and work on code when he is always in danger of going to prison for every mistake he makes. We all know that bugs in software happen and it's impossible to write bug-free code.
The go-to counterexample is the code that ran on the space shuttle, which took years and hundreds of people and $200 million to produce. I have been told that nobody has ever found a bug in the final production system. The development practices involved in creating it seem like they would make most software engineers want to curl up and die. One tidbit from the following link I found is that 60 people spent years working on the backup flight control system, intended to never be used!
This cop-out of "software's gonna have bugs" as a way to evade all liability doesn't hold in any other profession. I don't see why we get special treatment here.
Nobody calls the emergency services out because they assume cars are going to crash and plan accordingly, so why does the software engineering industry get called out for assuming software will go wrong and planning accordingly?
You don't go to prison for "mistakes", you go to prison for criminal negligence.
Given that you can go to prison for criminal negligence in any other profession (e.g. a dentist who exposes thousands of people to HIV), why would programmers be the one category of job that's exempt from criminal laws?
The difference between the two is what a politically-motivated prosecutor can convince a jury of.
The hypothetical gullibility of juries doesn't mean that we shouldn't have laws against murder, juries, or prosecutors.
...sorry for the sarcasm, but this is a message student engineers internalize in part because it is pushed by companies they want to work for but can't explain why. They don't have strong boundaries between why it might be a justifiable philosophy for Facebook but not for Boeing.
But in response to your comment, engineers have a very clearly defined set of rules, collectively called "the code". As long as they design to them you'll be free of criminal negligence (at least during the design & development phase, things get a little murkier in construction).
More to the point, engineering for the real-world means layer upon layer of uncertainty. e.g. civil/structural engineers use materials we can't model (we use pretty-good approximations for concrete behavior) in conditions which are unknown (soil) to resist forces we can't predict (weather, earthquakes, dynamic response to loading). How does the code deal with all this uncertainty? Slap safety-factor after safety-factor onto everything. Whoever comes up with a method for more effectively dealing with this stuff will make millions.
The most obvious example being that we design structures to resist a 1-in-100 year storm. In other words, we expect a structure to be within spitting distance of failure every hundred years. But as long as you design to that standard, you're fine.
I don't believe they all ask for incredible amounts of compensation
Also, the software world doesn't have the benefit of millennia of accumulated best practices.
Finally, the senior engineers and architects who are licensed to sign off on things they will be held criminally liable for do get incredible amounts of compensation as compared to their more junior colleagues.
On the other hand, a minor screwup in software is far more likely to cause catastrophic failure because we just don't know how to workably build large, robust systems out of code.
Nit: meeting the requirements of law isn't barely collapsing; that requirement has so many safety factors built-in because the building code has to approximate so much. The approach isn't that dissimilar from how that "anybody" you mention would build their bridge that doesn't fall down: by guessing safely.
Effectively, they test things to destruction, then publish minimum requirements. So if you want to pressurise your reactor vessel to 30 atmospheres, you can pretty much look up a table that'll tell you precisely how thick the reactor walls need to be for each of the commonly used materials. If you want to use something uncommon, then you need to pay somebody to test it.
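As a toy illustration of how mechanical that lookup can be, the thin-walled hoop-stress relation already gets you most of the way (the material strength and safety factor below are assumptions, and real pressure-vessel codes add further corrections):

```python
# Thin-walled pressure vessel: hoop stress = P * r / t, so t = P * r / sigma_allow.
P = 30 * 101_325          # 30 atm expressed in pascals
r = 1.0                   # assumed vessel radius, metres
yield_strength = 250e6    # assumed steel yield strength, Pa
safety_factor = 4         # assumed code-style safety factor

sigma_allow = yield_strength / safety_factor
t = P * r / sigma_allow
print(f"required wall thickness ≈ {t * 1000:.0f} mm")   # ≈ 49 mm under these assumptions
```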
If it fails in a catastrophic fashion, you can expect to be asked to show that you did your due diligence, and there are extenuating circumstances a reasonable engineer could not have been expected to foresee and plan for. Or that you did foresee it, and somebody else chose to accept the (clearly defined) risk.
Volvo's acceptance of liability seems little more than a smart business decision- the cost of an insurance payout to the family of the deceased is less than the money they stand to earn in profit from selling autonomous vehicles. We can't allow this generous actuarial calculation to absolve the operator of such a vehicle of responsibility too. If you don't like it, take the train.
It's similar to the situation with companies like Google blocking people who have no easy way to get support, except more morbid.
I'm not fond of living in such a world where personal responsibility is dissolved and becomes meaningless.
This is something that I really don't like about our society: the fact that someone "must" be held responsible. When someone goes on a shooting spree, of course they should go to jail. But if they die in the process of committing their crime, there is almost always an outcry for the next person in line to be responsible.
We have rules about motor vehicle operation. The operator is responsible. If you are the passenger in the back seat of a cab, the driver is responsible, not the passenger. If there is nobody else and you are the pilot of an autonomous vehicle, you have to assume responsibility for your vehicle and its actions.
If you bought a faulty gun with a broken safety catch, and waved it around in a crowded street, and it fired, killing someone, you should be held personally liable. You brought the gun out in public and created the scenario which caused someone to be hurt by it.
Looking at the fuel line question through that lens: did the leak start while you were driving and you were completely unaware? Or has it been leaking for a while and you just haven't gotten around to fixing it?
With the ammunition, how was it stored? Was it dumped in a toolbox filled with pointy screws? (A reasonably prudent person ought to know that ammunition is fired by striking the primer with a sharp object). Was it stored in the front seat of the car on the hottest day of summer? (A reasonably prudent person would expect it to get really hot in there)
Etc. Etc. The thing about negligence is that there's a lot of room for interpretation. As another example, if you've been driving a car around with self-driving features, and you've experienced it behaving erratically multiple times, and that's followed by an accident... you knew or ought to have known that it was dangerous to be on the road in that vehicle. If there was an OTA update for the autopilot last night that installed silently, and it results in a crash, then it's probably the manufacturer who's liable.
There was a person sitting in the driver's seat, but that person was in no way engaged in maintaining safe operation. That person was hired by Uber.
As the judge, therefore, I would certainly assign -most- liability to Uber for putting that screw-off behind the wheel in the first place. Uber's driverless car killed a woman.
The charge would be negligent homicide.
Whatever. Failing to detect that an employee is not fit to do a job which potentially jeopardizes many lives is a very serious failure. I might -want- to ask which executive the company will expect to do the time.
Airlines and railroads seem to prefer to blame their operators, particularly when the operators are killed and unable to defend themselves. In this case, however, the vehicle had no operator to blame.
Looks like they still have a lot to deal with.
What even happens when a corporation is convicted of a crime like that?
A fine? Slap on the wrist? It's not like Uber can go to jail.
1) A pedestrian crossing the street likely expected an approaching driver to see them and slow down. Typical behavior on the part of the average pedestrian and the average driver, whether it's a misdemeanor or not.
2) An engineer at Uber made the decision to put tech on the road without emergency braking capabilities, likely using the justification that a safety driver would be there to intervene when needed.
3) A safety driver in an autonomous vehicle that behaves the right way 99% of the time grows complacent.
The pedestrian may have been jaywalking and the safety driver may have been abdicating their duties. The safety driver especially isn't an innocent actor here. But the party who has the most responsibility is by far the engineers and managers at Uber.
This set of circumstances was completely foreseeable, but they still decided to take the risk and put this technology out on the roads. I for one don't want to be a part of Uber, or anyone else's great experiments. Spend the money, spend the time, and figure this out in a controlled environment before subjecting the rest of us to the negative impacts of your selfish ambition. Someone at Uber deserves time in prison for this.
Beyond that, I do not want to see an extension of the practice where low-level employees take all the blame for a corporate culture that encourages bad behavior.
On the other hand, I don't want this to be the death knell for autonomous driving experiments.
On the gripping hand, it's possible that a large settlement might have been the best outcome from this tragedy for the family of the woman in question. They gain nothing from Uber's criminal liability.
>Uber, which declined to comment for this story, could still be sued in civil court and be forced to pay damages. The government could also potentially pursue criminal charges against managers or employees of Uber.
I'd like to know how true this is, and if there is a better source for this claim.
Why is that too strict? Distracted driving is rapidly becoming the biggest problem for MVCs.
Autopilots work in planes because the operators are extremely well-trained, and the reaction times needed are measured in seconds or even minutes. In a car, it's less than a second.
There was a really interesting article I read about this regarding the Air France plane that crashed a few years ago (I think it was this: https://www.vanityfair.com/news/business/2014/10/air-france-...). Similar things even came to play with the recent Lion Air crash, although there was also a lot of negligence and crappy maintenance.
> Instead, the car's system relied on the human operator to intervene as needed. "The system is not designed to alert the operator," the report notes.
How are they not negligent for this specifically?
The law isn't written for the case when the car is under its own control. The driver is assumed to be in control of the vehicle. However, if the car is also _a_ driver, neutering its ability to stop, or even to ask for assistance, in an emergency should be tantamount to disabling the human driver's brakes.
I guess we shouldn't be surprised any more, as even the drivers of non-self-driving cars do the same sometimes. It's scary how often, when I look in the rear-view mirror, I see drivers behind me looking down at their phones. This is one of the reasons I almost stopped driving.
In fact, the mistake Uber made here was relying on a human being to do a job that is routine and boring the vast bulk of the time but occasionally requires life-saving decisions that depend on attentive awareness of the surroundings. That's the same mistake our entire civilization makes a million times a day. The fact that the operator had fewer responsibilities than a normal driver probably magnified the problem, but it's the same problem that makes driving fundamentally dangerous. She thought she was doing a good enough job and then oops, guess not, somebody's dead.
That happens every day without robot drivers involved. The standards we hold autonomous driving technology to should reflect this insane status quo.
To add: driverless or not, most who kill others using a car are not criminally liable. Oftentimes they're not even liable beyond the state minimum, unless the victim sues for personal assets. In California, that's 100k, which is one of the lowest in the USA.
Uber made a vehicle that can zoom around the roads unassisted. You can't just put something like that out there and disclaim responsibility by assuming that somebody will sit and be ready to brake at the right times. Even trains have a 'dead man's switch', where the train stops if the operator is unresponsive.
What they did was incredibly irresponsible, they have been running an experiment on public roads with the lives of the general public at stake. For this they need to be held responsible.
I had two points. One, expressing condolences to family. That sucks. And two, the laws favor cars and car drivers in accidents like these.
? She walked in front of a car on the expressway in the dark.
Video of the section in question: https://www.youtube.com/watch?v=1XOVxSCG8u0
What's impossible about watching the street and hitting the brakes if necessary, while testing a prototype self-driving car? I would say if you can't even do that, you shouldn't be in that seat.
This isn't the case where just one person is responsible. The system shouldn't have been configured that way. The driver should've been alert. The woman shouldn't have crossed the road like that. Everyone was negligent.
There is nothing more natural in a human environment than tools. A car is a tool.
It's at least more useful to speak of legal liability, because that can become case law.
The problem with these driver-assisted self driving cars is that the driver is unlikely to be paying attention at any given time since the system is 99% reliable.
By adding remote monitoring, you could even have multiple people monitoring each car. It might be impossible to safely steer the car from a remote location, but they could surely activate the brakes and/or trigger other simple directives to drastically decrease the chance of a fatal collision.
Given that the average driver reaction time in a car collision is 2.3s, I doubt network latency would pose much of a problem, and cost surely isn't an issue for these companies. A remote person could also use the car's cameras to gain a superior field of vision (especially at night time) when compared to the in-car driver.
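A rough latency budget makes the point (the remote-path figures are assumptions for illustration; only the 2.3 s reaction time comes from the comment above):

```python
# Compare a remote monitor's brake latency with a distracted in-car driver's.
speed_ms = 40 * 0.44704      # ~17.9 m/s at an assumed 40 mph
human_reaction_s = 2.3       # average reaction time quoted above
remote_s = 0.3 + 0.2 + 0.3   # assumed: remote perception + network round trip + actuation

print(f"distance covered before braking: human ~{speed_ms * human_reaction_s:.0f} m, "
      f"remote ~{speed_ms * remote_s:.0f} m")
```

Even with generous allowances for the network, the remote path reacts within a fraction of the distance a distracted in-car driver covers, under these assumed figures.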
Sort of like why the driver of a "self-driving" car is more likely to watch TV instead of paying attention to the road.
Where did that number come from?
Is this the bright future of the humankind? Or is this a setting for a new dystopian book?
But I agree with the verdict, it's just strange that the vehicle has the ability to detect a potential collision but cannot apply emergency braking to prevent it.