These processes seem to be the key difference between that job and this one. Rotations every 30 minutes, a manager actively observing for engagement, and overlapping zones of coverage all were instrumental in 0 fatalities - I can recall examples where each of those saved at least one person’s life. Redundancy, supervision, and engagement are what make a pool safe. While I think this woman failed to do her job and may be punished for it, it is important to question why your community pool has better safeguards than an experimental car.
That’s really the fundamental point: these processes exist because those failure cases are understood, acknowledged, and deliberately managed with diligence.
SOYA summer was a lot of fun, though an office job still qualifies for the acronym even if you don’t get a great tan and have a lot less fun, heh.
Blaming the individual would not solve the problem. The question that the parent comment tries to answer is "what do we do so the backup driver does not watch his phone?".
That would save lives. To blame the individual and move on will kill more people in the future.
Do self-driving cars that need a backup driver who must always be available to react very quickly even make sense?
I don't think "this is a clearly a development phase" means anything when there's no guarantee self-driving cars will actually reduce deaths. It may be that the way forward is simply more advanced driver assist technology that better accounts for human psychology, and not fully autonomous cars. All these sacrifices for a future of autonomous driving may simply be a waste.
Personally, I don't think these cars should be on public roads until they get a lot better. We're just introducing a new class of accidents; human drivers generally make the same kinds of errors and other drivers can reason about what they're thinking. Computers are more likely to fail in (to humans) surprising and bizarre ways. What we have now is the worst of both worlds.
As an aside, if we really wanted to do something, I suspect the easiest way to save thousands of lives a year right now would be to implement mandatory functionality on all smartphones to shut off the screen when the phone is moving faster than 5mph. This would inconvenience people and passengers would be annoyed, but there was, after all, a time before smartphones.
I like this idea
Maybe need to figure out how to avoid shutting down the passengers' phones?
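For what it's worth, here's a minimal sketch of what such a speed-gated lock could look like, including a possible passenger-override hook to address the question above. Everything here (the speed source, the override flow, the function names) is a hypothetical illustration in Python, not a real mobile OS API.

    # Illustrative sketch only: hypothetical hooks, not a real mobile OS API.
    SPEED_LOCK_THRESHOLD_MPH = 5.0

    def should_lock_screen(gps_speed_mph: float,
                           passenger_override_active: bool) -> bool:
        """Lock the screen while the device is moving faster than the
        threshold, unless a passenger override (e.g. a verified
        'I am not driving' flow) is active."""
        if passenger_override_active:
            return False
        return gps_speed_mph > SPEED_LOCK_THRESHOLD_MPH

    # A phone doing 30 mph with no override would be locked.
    assert should_lock_screen(30.0, passenger_override_active=False)
    # A passenger who has completed the override flow keeps their screen.
    assert not should_lock_screen(30.0, passenger_override_active=True)

The hard part, of course, is the override flow itself: any override a driver can trivially trigger defeats the whole scheme.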
Yes, but the legal entity behind the driving of the car needs to be held legally liable (i.e. the AI provider).
Handover to the driver can be required, but the driver needs to have reasonable time to adapt to the situation (multiple seconds).
Only with this can we make sure that companies providing driving AIs do their best to make sure they don't kill.
Otherwise they will only have pressure to improve it to the point where deaths happen rarely, and then stop there because it's not worth the development cost.
As ever, perfect is the enemy of good. Are there currently zero fatalities on the roads because cars are 100% safe? Or have they been developed to an acceptable standard of safety?
But we need to set incentives in a way that manufacturers will keep improving over time. They won't do it out of the goodness of their heart, and quite possibly marketing and good PR will be more impactful on sales than a 0.5% safer AI.
I have zero trust that the market by itself will solve this issue.
We just would need to make it clear from the get-go that after that grace period the law will be strict even for cars sold before ;=)
I have even less faith that politicians won't extend grace periods in the face of organized backlash (probably paid for and bought by the companies themselves), making them infinite.
Of course. Otherwise you could just always hand over a few milliseconds before any impact and blame the driver.
We should do the same thing we do to stop regular drivers watching their phones.
Despite being a huge legal and moral issue, a large minority of normal drivers can and do watch their phone while driving.
Isn't this the basic security guard job description? Fire lookouts, spotters, etc.? It's totally realistic. Sit there, pay attention, and be ready to take action if needed.
For higher-security sites this is solved by having several rovers in constant communication and coordination with the (often multiple) people sitting by the cameras. So both the rovers and the desk are constantly doing something, not just sitting.
Fire watch is the same thing. You're not just up in your tower, eyes constantly peeled on the forest, ready to react in milliseconds.
I’m not saying it is realistic to expect the driver to be attentive the whole trip, but watching a video on a phone is just gross negligence.
Watching something while doing nothing is a job that has been around for a long time. If you’re employed to do that job, you cannot watch a video on your phone, especially when a failure kills someone. The whole reason you’re hired to watch a thing is to prevent that from happening.
Comparing it to the lifeguard discussions above: you wouldn't allow phone usage as a lifeguard either.
And note that watching and ensuring are not doing something, they are precisely doing nothing. Most, if not all, human beings are not very good at paying attention while doing nothing which is what Uber is requiring of their test 'drivers'.
This makes sense if the objective is to reduce fatalities. Unfortunately it makes no sense if the objective is to merely reduce the cost of operating commercial vehicles by eliminating the need to employ a driver - which is where the money is.
The fact that we cannot countenance something like this in the current world is testament to the power our devices have on us.
Also, all of these jobs would be done in relatively short shifts.
Note that I'm not saying that the scapegoat wasn't grossly negligent in watching a video on her phone. I'm saying that Uber was also negligent in making their safety system just be one person sitting there, expected to react in milliseconds.
You don't accidentally start watching a video on your phone.
You make the human drive the car. The robot can jump in and slam the brakes when the human makes a mistake. That is the safe option.
But it just won't sell. Nobody here would buy a car that prevents the driver from making mistakes. An autodrive car that obeys speed limits while allowing the driver to Facebook on their phone: acceptable. A car that doesn't allow the human driver to speed or run red lights: market kryptonite.
> Nobody here would buy a car that prevents the driver from making mistakes.
Almost every new car today comes with adaptive cruise control, autonomous lane-keeping, augmented backup cameras, lane-change and backup alarms, etc. I don't see why a better version wouldn't sell just as well.
There was another interesting proposal: Lights/sirens that automatically turn on when the police/ambulance/firetruck breaks the speed limit. Again, no takers.
But would you buy a car that hits the brakes at the speed limit? Not an alert. An actual limiter that you cannot bypass.
There are plenty of valid reasons to speed:
* (Most common) The actual speed limit is way too slow for what the road conditions are, and everyone would suffer if they followed it
* You are trying to overtake a truck and avoid their blind spot (I have done this; the less time spent next to a truck, the less anxiety you feel)
* An emergency vehicle is behind you and you need to speed up to overtake a car in the other lane so you can lane-change and make way for the vehicle (State v. Brown)
* Everyone around you is speeding; going slower is more dangerous because people need to lane change to get around you
* You are trying to make some space between some kind of maniac driver who you believe will soon lose control and hit you (I have done this to get away from drunk drivers)
* Your loved one is experiencing a medical emergency and you need to get to the hospital
* You are being chased by some kind of criminal or terrorist
* You are avoiding a natural disaster such as a rockslide, tornado, flood, or avalanche
Realistically, will you ever be driving away from a rockslide or sedan terrorist? Probably not, but with a regular car, you at least have options to exercise if you ever are in that spot; with a limiter those opportunities are gone.
In the case of a limiter, a misconfigured or broken speedometer or sensor can give false readings that severely inhibit your driving, which could cause an accident if your vehicle slams the brakes without warning. It could also prevent you from using highways if your vehicle is not capable of accelerating past 40mph. This is the same argument against self-driving cars: humans want control to keep themselves safe, not so they can be dangerous.
Yes, those things can happen; however, the car can allow the driver to go, say, 15 km/h faster. It doesn't need to slow down at 0.01 km/h above the limit.
Some people frequently go 30 km/h too fast... on 30 km/h roads, in areas with children.
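As a rough sketch of that tolerance-band idea: the car keeps full throttle authority up to the posted limit and only tapers off within a margin above it, rather than braking at +0.01. The hooks and numbers below are assumptions for illustration, not any vehicle's real control API.

    # Illustrative soft-limiter sketch; all hooks and numbers are hypothetical.
    TOLERANCE_KMH = 15.0  # margin above the posted limit before fully intervening

    def throttle_scale(current_kmh: float, posted_limit_kmh: float) -> float:
        """Return a throttle multiplier: 1.0 at or below the limit, tapering
        to 0.0 as the car approaches the limit plus the tolerance band.
        The car never slams the brakes at +0.01 km/h; it just stops
        letting you accelerate further."""
        hard_cap = posted_limit_kmh + TOLERANCE_KMH
        if current_kmh <= posted_limit_kmh:
            return 1.0
        if current_kmh >= hard_cap:
            return 0.0
        return (hard_cap - current_kmh) / TOLERANCE_KMH  # linear taper inside the band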
This is not how labor laws work. The role description needs to be fair. An employer cannot just shift all the blame to the employee. The employer needs to take the needed actions and provide the needed equipment and procedures so the employee can do her task.
But, let's see what the result of the trial is. I am not an expert and the trial will clarify this point.
"Yavapai County Attorney Sheila Polk decided in March 2019 that Uber itself had not committed a crime in Herzberg's death."
Labor laws are irrelevant. They were grossly negligent in operating a vehicle, which is the domain of criminal, not labor, law.
Principal-agent aspects of criminal law may also provide liability for the employer, though apparently they haven't been charged. To the extent they could be, that is unfortunate, because only employer liability, civil and/or criminal, is likely to substantively affect the likelihood of recurrence. While the individual is absolutely culpable, this is not the kind of behavior where individual criminal liability is likely to be an effective deterrent without better employer supervision, which won't happen unless the employer has a reason to pay for that cost.
I see e.g. Cruise and Waymo SDVs around SF with only one person on board. What of those?
My very layperson protocol would be "watch the road while the car is in motion, only use the tablet when the car is safely parked". My understanding is that SDVs store all the data they collect, so I see no reason to require operating a tablet while the vehicle is in motion. Correct me if I'm wrong.
SF is also an easier place to be watching an SDV in the sense that there's more happening to keep your mind occupied (weird intersections, construction, people doing stupid things). It's a lot easier to lose focus when you're watching a car drive itself around deserted highways in the middle of the night.
I can't say much about what their actual protocols are supposed to be; I can only offer my personal anecdata. I do occasionally see Cruise cars around the Japantown area with only one person in the car. They often park at the Safeway across Geary. The Waymo cars I see roaming around closer to downtown, also with a single driver, though they don't seem to concentrate in any particular area that I can tell. I also occasionally see unbranded SDVs, sometimes with two people, sometimes with one. Not sure which company those are from.
> SF is also an easier place to be watching an SDV
Well, from a regulatory perspective, one shouldn't be able to get away with saying "well, I was not in a crowded area to keep me attentive, that's why the accident happened". I'd even argue that if the job involves looking at a tablet while the car is in motion, then being in a crowded area would make things _more_ dangerous.
The last PR piece seems to omit that. Next piece, maybe we will find out the driver was watching Hulu on their phone on their lap... while reclassifying objects on the AI tablet, also on their lap.
Because if the status quo is to poke at tablets in moving vehicles, how is anyone ok w/ cruise/waymo/whatever cars driving around SF?
The California DMV has shown a lot better oversight of self-driving than Arizona. They deregistered several Uber-owned vehicles in 2016, and have some reasonably specific rules. It's much more OK in SF than in AZ.
Extreme case: Your employer tells you to stay alert in a super-boring job for 8 straight hours or else a person can die or whatever. But that may be an impossibility due to how our brain works.
I'd like to know how many hours he was sitting there and how often he had to intervene that day/week/month. Uber, maybe, should have had another person checking in on the drivers now and then. Say one for every 10 drivers...
But if you mean watching a TV show to distract themselves then not in a million years.
Uber's training or setup may well prove to be negligent... but this employee took the active, conscious step of watching a TV show on her phone when her entire job was to supervise the car's driving and safety of others.
If that's not criminal negligence, I really don't know what is.
I think the TSA also keeps screeners on their toes by digitally placing contraband in the bag images at various intervals to make sure the screener is able to stay engaged (and maybe to get some metrics on hit/miss rates). Maybe these supervised self-driving cars need to project a deer jumping into the road and log the driver's response or something.
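Purely as an illustration of that idea, here's a sketch of how periodic synthetic-hazard checks could be injected and logged. The display, input monitor, and logger objects are hypothetical, not anything the TSA or any AV company actually exposes.

    # Illustrative attention-check sketch; every hook here is hypothetical.
    import random
    import time

    CHECK_PROBABILITY_PER_MINUTE = 0.02  # roughly one check every ~50 minutes
    RESPONSE_WINDOW_S = 3.0              # driver must react within this window

    def maybe_inject_attention_check(display, input_monitor, log) -> None:
        """With small probability, overlay a synthetic hazard (e.g. a simulated
        deer) on the operator display and record whether the safety driver
        reacts within the allowed window."""
        if random.random() > CHECK_PROBABILITY_PER_MINUTE:
            return
        display.show_synthetic_hazard("deer_crossing")
        start = time.monotonic()
        reacted = input_monitor.wait_for_brake_or_acknowledge(timeout=RESPONSE_WINDOW_S)
        log.record(event="attention_check",
                   reacted=reacted,
                   latency_s=time.monotonic() - start)
        if not reacted:
            log.escalate("driver_unresponsive")  # e.g. audible alert, supervisor review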
Edit: there was another post by [redacted] but it disappeared, it does not even show as deleted, weird.
It said: "This isn't some ride along in a consumer grade EV. They were gathering data to program the car with. The emergency braking systems were not active. The person that was supposed to be monitoring the vehicle knew this."
It would probably be more respectful not to copy what they posted along with their username. Actually, I think we'd better redact the username from your quote. People sometimes have important personal reasons for deleting things. The odds aren't high that it matters but the impact could be high if it did.
People sometimes have important personal reasons for deleting things, but this does not mean anything. It is not as if "x posted y on twitter and deleted it afterwards" or "the page was edited/deleted, here is an archive.org link" is uncommon on HN, nor is it as if a stalker would not be able to scrape someone's HN posts instantly as they were posted.
> it disappears.
In my experience they show as [deleted] but I guess this is only for posts that have replies.
I'm sure it wasn't that big a deal and I'm sure your intentions were good, but I'm also pretty sure most HN users would want us to protect them in this way, if only for the rare occasion on which it actually is a big deal.
Kind of. Technically, posts that have replies can't be deleted, so what sometimes happens is the poster edits their post and replaces all the text with "[deleted]".
Which should also be prohibited, because it's just as destructive. You shouldn't be able to edit a post with replies either.
Yes, ideally I would just do more revising before posting a comment the first time, but I don't seem to work that way.
I think the two-hour window is a good compromise, and if anything I really wish it was longer. Yes it has downsides, but I really think they're outweighed by the good.
Maybe this is the key.
The control is one thing, but a manager can prevent you from looking at your phone, and a manager creates an interruption from the monotony. The rotation is necessary to deal with human hardware, which requires breaks (and a moving car requires faster responses than a drowning person).
The overlapping zones deal with the fact that breaks and a manager being present are still not enough.
I really don't understand this assertion.
Having to babysit auto-driving cars in their development phase is well within the realm of what humans are capable of.
The driver booted Netflix, navigated to an episode, and watched it on their phone for the same reason anyone would do it at work: not because they were driven to a boredom so intense that it was the only way to get a grip, but because they simply thought "what harm could it do?"
These replies acting like a slightly boring job is doomed to this outcome remind me of those ridiculous articles that stereotype millennials as 30yo-children so spoiled that they're unwilling to do the simplest of jobs. "Millennials refuse to be trash collectors and don't understand how someone could do it. 'Humans aren't made to pick up trash,' John, 32 years old, told our newscaster. 'Else we would have tongs for hands! I don't even think it's possible.'"
Which is not to mention that these workers have ridiculously weak motivations. They create nothing, spending hundreds of hours merely testing out someone else's work. They exist in a field with very little upward mobility, since they are effectively the only non-skilled labor at their companies. Even if there were upward mobility, they work in a field where their job is to help eliminate their job. The only thing offsetting that is that the pay is slightly better than equivalent options.
I can understand why people lose attention driving these cars around.
The argument is that no, it is not; even if you never open Netflix you will lose attention. That does not make Netflix OK. But this driver having Netflix open does not imply that people without Netflix won't lose attention. They will.
Your second paragraph is both a strawman and nonsense. Before millennials, people were also not able to hold attention while nothing was happening, which is why train drivers have protocols to prevent loss of attention, and so do lifeguards.
The whole "if you admit a common human limitation out loud, you are weak and spoiled" attitude is either arrogance or a manipulative argument.
But they could watch the road instead of watching their phones. It would be a pretty simple fix: have the backup driver keep their phone in the back of the car. I've had jobs where we had to lock our phones in lockers at the start of our shift; it's not uncommon.
Also, disagree that moving car requires faster responses than drowning person- kids often gently slip under the water in an instant and it's generally quiet and fast, never a big splash or dog paddling "help me!" commotion like in the movies. And longer without oxygen to the brain can lead to more permanent brain death, so I'd say both activities require equal vigilance.
Sure, but that is just a band-aid for one instance of the problem, something that allows management to pretend they dealt with it while not actually trying to solve the problem. A typical human will likely start daydreaming without the phone or otherwise lose attention.
> Also, disagree that moving car requires faster responses than drowning person- kids often gently slip under the water in an instant and it's generally quiet and fast, never a big splash or dog paddling "help me!" commotion like in the movies. And longer without oxygen to the brain can lead to more permanent brain death, so I'd say both activities require equal vigilance.
That does not require split-second attention and a quick reflexive reaction. Permanent brain damage does not happen because a kid was under water for 5 seconds; a car crash can happen in under one second.
I mean, a first step is to hold the driver criminally negligent if they get into an accident. Which already happens. That's what legal punishment is for -- to deter extremely dangerous/harmful/etc. behaviors.
But because people often think "it'll never happen to me", that's why companies generally have supervision and spot checks of employees, to try to catch bad behavior and reprimand/fire them first, before it results in damage/death.
Which, if I were Uber, would probably mean installing fish-eye cameras in the corner of each vehicle that would regularly be spot-checked to see if backup drivers were, in fact, not doing their job.
This isn't a particularly difficult problem. And I don't think privacy is a particular issue here.
Punishing the culpable lapse by the individual is necessary but not sufficient. It should have been monitored and supervised by the employer, and failing to do that is also a culpable lapse which should be punished.
This is a solved problem: you fit the experimental car with an interior camera aimed at the driver. If the backup driver is observed to be distracted by the interior camera, the car can sound a warning for the driver to be alert and if they ignore the warning, the car's hazard lights would come on and it would be slowly brought to a halt.
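A minimal sketch of that escalation logic follows; the camera, vehicle, and timing hooks are assumptions made for illustration, not a description of any shipping system.

    # Illustrative escalation sketch; the sensor and vehicle hooks are hypothetical.
    WARNING_AFTER_S = 2.0  # sustained distraction before an audible warning
    STOP_AFTER_S = 5.0     # sustained distraction before a controlled stop

    def supervise(driver_camera, vehicle):
        """Watch the driver-facing camera; warn on sustained distraction and,
        if the warning is ignored, turn on the hazards and bring the car
        to a halt."""
        distracted_for = 0.0
        while vehicle.is_in_autonomous_mode():
            dt = vehicle.tick_seconds()
            distracted_for = distracted_for + dt if not driver_camera.eyes_on_road() else 0.0
            if distracted_for >= STOP_AFTER_S:
                vehicle.hazard_lights_on()
                vehicle.controlled_stop()
                return
            if distracted_for >= WARNING_AFTER_S:
                vehicle.sound_alert()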
You send this person to prison for negligent homicide, which I think she is guilty of, and remind backup drivers in the future they're still responsible for the safe operation of the vehicle.
To imply that the individual has no culpability for watching a video on her phone while the vehicle she's in charge of runs down and kills a homeless woman is insanity.
Safety is about proactively defending against human nature, not blaming people after the fact.
If you don't expect something to happen reaction times can be really slow.
Which is also how it differs from the lifeguard job: there you always expect that from time to time something will happen, but as far as I can tell many people would not expect this when they use a self-driving car. Because, you know, the car is already meant to be able to handle this. Or at least that is how many people think about self-driving cars, I think.
Thankfully, this is already happening in the self driving space as well! At least as far as research and development efforts are concerned.
I used to work right next to another self-driving tech company: Aptiva in Las Vegas. Several times a day, I would see their R&D cars launching from headquarters, and there were always two backup drivers. In addition, the backup driver in the passenger seat always had a clipboard up and was actively taking notes. A "two-man rule" is common practice in critical, high-risk activities.
I’ve known the details of Uber’s murder of Elaine Herzberg for a long time. I read the original police reports early on. And I don’t take that fatality lightly.
Just the same, even on a bicycle, I always felt safe sharing the road with Aptiva R&D cars because of their two man rule, and constantly attentive drivers.
It will of course totally ruin the illusion of being driven smoothly by robot but I think that should be canned asap.
But I agree - the company is also liable for failing to have processes to stop the inattentive behaviour. If you were on duty watching your phone, and no manager spotted this or took the phone away, they are just as liable.
People are condemning this person for being on their phone (and fair enough), but if I am sitting in a comfortable vehicle with nothing to do / read then I will fall asleep soon enough. I think this is quite common.
The lifeguards at our local pools rotate about every 10 minutes. It seems like changing position and also that you have some interaction would be helpful to staying focused.
It's hard to replicate that in a car but perhaps something that was asking for data - "What's the speed right now? Are there any fire hydrants in sight?" - that could be flagged if contradicted could weed out drivers who are watching a movie or nodding off.
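For illustration, here's a sketch of how such spot-check prompts could be scored against the car's own sensor readings. The prompt list, sensor hooks, and tolerance are made up for the example.

    # Illustrative spot-check sketch; prompts and sensor hooks are hypothetical.
    import random

    PROMPTS = {
        "What's our current speed (mph)?": lambda sensors: sensors.speed_mph(),
        "How many vehicles are directly ahead?": lambda sensors: sensors.vehicles_ahead(),
    }

    def run_spot_check(ask_driver, sensors, log, tolerance=0.2) -> bool:
        """Ask the safety driver a question about the surroundings and compare
        the answer with what the car's sensors report. A run of contradictory
        answers suggests someone watching a movie or nodding off."""
        question, ground_truth_fn = random.choice(list(PROMPTS.items()))
        answer = ask_driver(question)  # spoken or tapped-in numeric response
        truth = ground_truth_fn(sensors)
        consistent = abs(answer - truth) <= tolerance * max(abs(truth), 1.0)
        log.record(question=question, answer=answer, truth=truth, consistent=consistent)
        return consistent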
Because the people responsible for the experimental car don't give a shit and try to wash themselves of their responsibility by placing all the blame on the safety driver.
Don't get me wrong, she's responsible, but not nearly 100%.
For train conductors with a similar challenge, a solution has been invented requiring them to pass various visual attention challenges that detect if they aren’t alert. Such systems weren’t present here. The system wasn’t designed to work - it was designed to protect Uber by shifting the blame for failure.
I’d be inclined to give them a pass if they were looking at the road and just inattentive (because that’s expected, like you say), but I find it hard to sympathize with someone watching a show on their phone, explicitly ignoring their one job.
Obviously the system should improve to make errors less likely, but that doesn't absolve anyone of guilt in my view.
in this case, the backup driver should shoulder some blame, but not the majority of it, and commensurately, should not face any of the harsh penalties associated with manslaughter.
the wet bags behind the autopilot get bored, distracted, complacent, ill, or reckless due to a multitude of factors, some of which are outside their control
the human modes of failure are well known at this point, and the fact that there was basically no mitigation is a responsibility of the system designer
A better system would have spotted this person was watching their phone not the road, but it would still have held them accountable and (at minimum) fired them!
That doesn't mean that an employee being clearly negligent doesn't also need to be held accountable.
Them being held accountable is one part of a good system.
Also, many people work at really boring jobs. If a serious error happens while you watch Netflix, you'll get fired, for sure.
But again, sometimes listening to music is permissible.
I don't think it's realistic to expect people to function well under those conditions.
Which is all to say that "the system made me do it" really doesn't fly as an excuse for felonious behavior.
for instance, in the 737 MAX crashes, the pilots made errors, but the principal blame lies with Boeing and their egregious systemic design flaws.
And I think it's wrong. If my car is on cruise control, that doesn't mean I can stop worrying about my speed. You can't always just blame the company because it's the bigger entity.
That said, I don’t think this gets Uber off the hook. If I were on a jury, I’d likely say Uber was guilty of manslaughter (barring a real look at the evidence).
There is exceptional negligence on their part. The fact that the vehicle was just blithely travelling at the posted speed limit, even at night, even though that speed exceeded its assured-clear-distance-ahead ability, is a damning point. The vehicle will operate unsafely in its default configuration.
How much slower was the car supposed to go for its sensors, and was it all the guy's responsibility, even if he knew Uber had apparently disabled the auto-braking due to issues and didn't pause road tests?
Yeah, and how was the driver even supposed to know how fast the car should have been driving? I know how fast I should be driving because I know how far I can see, but the car had LIDAR / radar / night vision / etc. If the speed limit is 45 MPH and the car is going 45 MPH, what reason did the driver have to think that the car couldn't see? Either the driver can rely on the car to "see" or they can't.
If you did and someone drowned in the meantime, there would be no debate about who was liable.
I'd probably be looking at the road all the time, but just freeze at that moment where I'd need to take over.
Probably more like "absent mindedly" pull out a phone.
This person was being paid to pay attention, which is hard enough to get right. What happens when the person owns the car, and is just commuting? And there are millions of them, not just one person?
Mayhem, if it’s a variant on this system instead of something much more sophisticated.
How about starting by simply removing their phone? Isn't that easy? The driver was looking at her phone at the time when she was supposed to be looking at the road. This is a simple violation. The phone should have been banned and the driver fired after a first violation.
I am pretty sure that while what you say might be hard to do for some people, there are individuals out there for whom sitting and looking at the road for hours would not be such a big problem. If she couldn't handle the job, she shouldn't have worked there.
Those people are known as drivers and they’re doing more than just looking at the road. The driving/feedback from the controls is keeping them engaged.
Or "sacrificial part" -- designed to break first to protect the rest of the system. The NTSB report (https://www.ntsb.gov/investigations/AccidentReports/Reports/...) is an interesting read, by the way.
"Here's your self-driving car with a 20 page EULA/ToS. Oh, but if it's in self-driving mode and something goes wrong it's not our fault, you must respond (within seconds) and fix it yourself."
Would you blame an airplane's autopilot if it crashes? We still have two pilots even though 99% of the time the thing is on autopilot.
She's a test driver, she did not pay attention, and she killed someone. At best, her employer should help and compensate her, given how it was a workplace, on-the-clock accident.
If the plane crashed due to an autopilot error, yes, absolutely.
This job is a wage against a (very nasty) lottery ticket, to sit there to absorb the legal fallout for decisions made far away.
If the driver committed homicide, it sure sounds like Uber is also guilty of homicide.
The safety driver had an actual job to do, he wasn't there "instead" of automatic emergency braking - which is not certified for driverless operation btw. But he was distracted with a phone instead of looking at the road.
The halo effect here is unreal.
So yes, in an American court, disabling a proven safety feature is significantly worse than simply purchasing a vehicle without the feature.
The safety driver failed at their job, but the NTSB clearly lays significant blame for that failure on Uber, who should know well that humans are poorly suited to monitoring automated systems, and committed acts and omissions that increased the likelihood of an accident.
The scenario is notably different, but it does dig into the issues around acts vs omissions and how we perceive them.
No; an automatic emergency braking system is not required by law. A capable, attentive driver on the other hand is. Working brakes are as well, but it's eventually up to the driver to engage them.
To put it another way, the self-driving system did not alert the driver that it had detected something and did not know what it was. It wasn't an emergency, it was the car's normal operation.
If my skydive instructor doesn't deploy the backup parachute because I, the student, didn't alert them that the primary chute failed to deploy, it's entirely their fault if we hit the ground at terminal velocity.
If a lifeguard is working at a public pool watching Netflix on their phone and a kid drowns, you can't argue that the kid should have splashed more.
But how do you suppose we create FSD cars if we can't try them out before they are ready? There is just no other way to do it than vigilant drivers that watch what these cars do.
> To put it another way, the self-driving system did not alert the driver that it had detected something
Well of course not; who would expect that? If the car could positively identify the collision before it happened, it would have simply stopped, no need for a driver at all. The driver is there to prevent exactly this kind of accident, and this driver failed by getting out her phone and distracting herself with streaming videos instead of doing her one job. Plain and simple.
It's one thing to charge negligent homicide with a typical car. But the near-fraudulent claims of self-driving car hucksters at the time had a lot of people believing these vehicles were already far more capable than they will be for decades to come. And it's inevitable that the Koolaid drinking in this particular program within Uber was at its strongest. So I doubt the safety drivers were adequately informed of the actual capabilities of the cars or the risks involved.
Consider two training courses:
* instructors who are gung-ho on automation being "nearly there" and encouraging people to relax in the car and let the software do its work!
* instructors who are constantly impressing upon students that they need to be vigilant.
One can see how one party is 90% liable. And the other is 10% liable, etc.
The interesting part of this case comes down to the training. In that if it was lacking, it then makes it very clear to future companies that their training needs to be more rigorous.
There would be a different blame calculus if this were a production level 2 system like autopilot. In that case, it's not a paid test pilot, it would be the paid purchaser of a certified aircraft.
No, the last articles and discussions were quite clear that this accident wouldn't have happened with a real driver. The road conditions were good, the visibility was good. A driver would have seen the woman crossing the road well enough and slow down.
However, I don't think it's reasonable at all to expect someone to remain attentive while monitoring a self-driving car. This is the same problem train drivers face, which has been mitigated with all sorts of methods, not least of which is a dead-man switch. Some countries have their train drivers call out every signal they come across, with Japanese train drivers even pointing at signs to ensure they're paying attention.
This was a vehicle that had been modified to reduce certain safety features (because Uber couldn't get them to work properly), with someone at the wheel expected to be 100% focused on the road while being given nothing to do at the same time. You can only go through so many hours of sitting in a car doing nothing before you go crazy.
From a revenge-seeking perspective it's easy to blame the one person who could've stopped the car for her obvious disregard for safety (streaming video on the job), and I suppose a criminal justice case might be in order. However, I think Uber should be mainly responsible for the loss of life because their flawed design not only made the car less safe but also completely disregarded human psychology when they designed how their human safeguard driver should do their job. Even human-operated cars will beep and yell at you if you don't pay attention while you're driving on cruise control; if such safety features were omitted from the self-driving design then clearly the driver was set up to take the fall when something bad happened.
I strongly believe Uber only put that woman in there because local law wouldn't let them test their car without a human at the wheel, not because they wanted to ensure their car didn't kill anyone.
I think pointing out things in the environment is a great idea for safety drivers in this kind of setting. It helps keep them engaged, possibly helps the system notice when they're distracted, and possibly provides additional useful training data.
This is part of the general attitude that rights and freedoms don't matter if a machine violates them. When you walk out of a store post purchase and the security alarm goes off you have zero obligation to disclose what is now your property to the loss prevention experts. But somehow it's acceptable to assume you're a criminal because a machine said so.
Protip: you probably want to let them figure out what is beeping while you're still in the shop. It's not great to get home only to realize that they forgot to take off the anti-theft device on some of your beers.
IMO this is much more applicable to Uber's share of the responsibility for this incident.
Driving while watching a video certainly qualifies normally, but purely hypothetically, if Uber had internally said it was totally safe to drive distracted, then maybe she has a defense.
Only to a crime whose required mental state is intent, which rules out crimes of negligence or recklessness (or strict liability, for that matter).
> The state has to provide criminal negligence, which in most states requires very risky behavior.
“A motorist can be convicted of negligent homicide for causing the death of another person while driving in a criminally negligent manner. A person acts with criminal negligence by unknowingly doing or failing to do something that creates a substantial and unjustifiable risk to others. The risk must be of such nature and degree that the failure to perceive it constitutes a gross deviation from the standard of care that a reasonable person would use in like circumstances.”
I don't think that that's a hard fit for the publicly-reported facts of this case.
> Driving while watching a video certainly qualifies normally, but purely hypothetically, if Uber had internally said it was totally safe to drive distracted, then maybe she has a defense.
No, if “Uber” said that there is an even greater case than there already is under simple respondeat superior for the actions of the driver alone as agent of the firm for Uber to be charged with negligent homicide, manslaughter, or even 2nd degree murder.
But I don't think any representation by Uber could reduce the standard of care owed by a safety driver to members of the public to something below “not watching a video on the phone while supervising the safety of the operation of a self-driving vehicle”.
>I don't think that that's a hard fit for the publicly-reported facts of this case.
Agreed, unless (again 100% hypothetically) Uber misled the driver about the capabilities. Even then it would still require that a reasonable person would believe a (hypothetical) incorrect statement about the car's capability.
>No, if “Uber” said that there is an even greater case than there already is under simple respondeat superior for the actions of the driver alone as agent of the firm for Uber to be charged with negligent homicide, manslaughter, or even 2nd degree murder.
Not sure respondeat superior applies to criminal law. In either case, I think operating a fleet while (again 1000% hypothetically) misleading the drivers would be itself the act of criminal negligence. So I don't think you'd even need to resort to master/agent liability.
>But I don't think any representation by Uber could reduce the standard of care owed by a safety driver to members of the public to something below “not watching a video on the phone while supervising the safety of the operation of a self-driving vehicle”.
Not my area of expertise and I haven't looked at any case law on this point, but I think believable statements made by Uber would be considered as part of the circumstances wrt the "standard of care that a reasonable person would use in like circumstances."
If a reasonable person would believe it was safe to watch the video while driving, it wouldn't qualify as criminal negligence.
The reason why Uber had a human monitor, one required to have a valid motor vehicle license, is because of the potential lethal outcomes. You learn all of this when preparing to test for a license. It doesn't stop being one's responsibility to be a vigilant driver because it's convenient to do something else. Even considering the general stupidity levels of American drivers, ignorance of the law is no excuse.
What Uber (hypothetically might have) said is completely irrelevant; she was the driver, she is responsible.
I don't even know what license beyond a driving license exists for this job. Clearly not every driving-license holder is fit to hold this position.
Let's just acknowledge inefficiencies in the system.
Also trending on HN today: https://news.ycombinator.com/item?id=24488350
I don't even know if an expert witness was there; that's how grand juries work.
And that a corporation didn't tell the full truth and threw a "maybe employee" under the bus? Not really that much to "posit".
Ironically also trending on HN at time of writing: https://news.ycombinator.com/item?id=24488350
When I saw the dashcam footage, I just thought to myself, I probably would have avoided that, had I been driving.
I've watched my own dashcam footage before and people on the periphery come out less visible than in reality because the headlights blow out the video - the camera has less dynamic range than our eyes.
As for "how many have videos": when Amelie Le Moullac was killed by a truck, the police didn't even check videos. There had to be a grassroots effort to go get the video.
You're kidding me.
Fitting, given Uber's reputation for questionable morality...
That poor person, whose job it was to watch the road, seems to have been watching Hulu instead. That's a pretty willful disregard for their job and for others' lives. I don't have much pity for them if this is the case.
I get your point about moral crumple zones and I suppose there are cases one could point to that demonstrate it, but this is definitely not a very good example of it.
> "Had the vehicle operator been attentive, she would likely have had sufficient time to detect and react to the crossing pedestrian to avoid the crash or mitigate the impact," the November 2019 NTSB report stated.
This isn’t a case of the Uber performing some split-second error that needed immediate correction then blaming the driver when that didn’t happen, the way so many in this thread seem to want to make it out to be. This is a person whose job it was to watch the road watching Hulu instead and as a result failing to react to developing conditions that, if NTSB is to be believed (and I trust them a whole lot more than the randos commenting here), could have easily been prevented by an attentive driver.
I can't see how this is not clear cut negligence.
The vehicle was travelling 20 mph faster than its own "safe clear ahead" zone allowed for. Simply ignoring its own safety limits and travelling at a higher rate of speed is a design flaw.
Designing a large public road autonomous vehicle test and not taking into account an easily predicted human vulnerability that's been known in other automation experiments for decades is a project design flaw.
Whether or not the driver is at some fault, I think the court will be capable of determining that. Arizona's willingness to completely disregard the mountains of negligence on part of a giant corporation is disturbing.
If the car is going over its speed limit, the driver should react.
I see that some people think Uber should have added systems to control the safety driver. I think that's a fair point, but it doesn't take away the driver's responsibility. They knew the car wasn't perfect. It's their job to take over control whenever that happens.
> I can't see how this is not clear cut negligence.
1) The driver is a woman.
2) While I think the driver does bear some fault here, they don't bear all the fault. Uber designed an unsafe system that relied on an unnatural amount of vigilance from a single person while simultaneously discouraging that vigilance. They didn't design the car to shut down when one of its critical safety components (the driver) was not operating correctly, and they didn't even give that driver amphetamines or something to increase their vigilance to artificial levels.
 Basically: pay close attention to a boring process while doing absolutely nothing for hours on end. I'm pretty sure that's a classic "humans suck at this" task.
This isn't a truck driver falling asleep from exhaustion caused by aggressive scheduling. You don't "accidentally" take out your phone when your job is looking ahead. That's deliberate negligence.
If you want to require eye sensors to detect distraction, by all means, pass a law about it. Maybe include regular non-AI cars too. #1 cause of accidents over here.
Statements like this seem to indicate a failure to understand what the difficult part of the task was. In short: a boring monitoring task that was practically unnecessary 99.99000% of the time, but absolutely critical maybe 0.00001% of the time. The typical-case lack of necessity would strongly reinforce engaging with distractions over time (i.e. distraction is more rewarding, and nearly all of the time there is no negative feedback of any kind).
> This isn't a truck driver falling asleep from exhaustion caused by aggressive scheduling. You don't "accidentally" take out your phone when your job is looking ahead. That's deliberate negligence.
I'm not saying there was no negligence here on the part of the driver, just that Uber itself was at least as negligent if not more so for designing the system in the way it did. Focusing too much on the driver is an error.
Honestly, watching a show was egregious, but I wouldn't be surprised if the drivers would end up falling asleep on a regular basis if all distractions were removed from their environment. Half paying attention to something I wasn't interested in has always been the best way to get me to fall asleep.
Doing that for five minutes is easy.
Can you do that for eight hours? Day after day? Without a single lapse in attention?
This is a much harder job than the truck driver's, because the truck driver constantly has to make micro-adjustments to correct for road conditions, which keeps him engaged.
People's brains don't work the way you think they do.
I am saying that passively watching the road for hours on end is much harder than actively driving.
What makes you confident that you are actually good at it, and are not the victim of Dunning-Kruger? Do you regularly find yourself in the process of stopping your self-driving car from crashing into things?
Or has your car simply not crashed yet?
All the 3 of them? Among millions of Tesla vehicles out there and the hundreds of millions of miles driven? Is that even a considerable risk when compared to the general (non-zero!) risks of driving?
For me personally I simply know when I watch the road and when I don't. For some people this might be hard, for others - not so much. I am aware of when I pull out my phone or distract myself and when not.
But are you just watching or are you seeing the road? How many times have you taken control from your car doing something stupid and dangerous?
Unless the answer to that second question is 'I do it all the time, and I'm batting 20/20', what makes you confident that you'll catch the next instance?
 If that's really the case, you should probably short TSLA, it doesn't sound like their car can safely operate.
Yes, I think I would be able to do this job safely. Not everyone has problems with focus to such an extent that you can't help but be on the phone when you need to be paying attention.
> and they didn't even give that driver amphetamines
Yo, what? IIRC, most *amphetamines are either illegal in the US or like FDA Class 1 substances or something like that, and you just want to throw them at people operating > 1 ton machinery? That seems like it could have an even higher risk for danger.
That's true, and my comment was mostly sarcastic (but a little serious). Amphetamines are given to military pilots to increase alertness. They're also used to treat attention deficit disorder (stimulants also increase focus in healthy people; see coffee). Some years ago I read that they're also popular in some parts of Asia not for recreation, but to help workers focus on boring, repetitive tasks (and they apparently can even make that kind of work seem "fun"), but I can't find the exact article again.
The kind of job these "car monitors" are expected to do is so unnatural that it's ludicrous that someone could be expected to succeed at it perfectly, unassisted.
Also, no one said the driver has to do nothing. They can safely listen to audiobooks or talk on the phone.
I get that it’s boring but fail to see how that’s an excuse for not paying attention (or watching streaming video) to a dynamic system you’re expected to actively engage with.
Many train systems are largely autonomous and far more static and yet we still expect train operators to not kill someone on the tracks.
How do you feel about lifeguards as a job? Failure of pools and beaches to design a good system?
Train conductors do not require second-level reaction time. A train cannot stop in five seconds, no matter how quick the conductor is. People who are stopped on train tracks get killed, and the train operator is very rarely to blame.
It's the basic job description. If she couldn't do it, she shouldn't have taken the job.
It's unreasonable to hold someone accountable for a "self-driving car" that suddenly decides at a split second's notice that it can't cope with driving.
Of course this is extra bad because it's an experimental car, but it's the same in my opinion with those Teslas on the road now that do the same thing.
It is not unreasonable at all. She had one job to do - look at the road. She failed it because she felt that her entertainment was more important than doing her job. She picked up her phone and started streaming videos. She failed at her one job, plain and simple. She knew everything about the job and still chose to watch some videos and risk lives.
The experimental car shouldn't have been on the road at all if the only thing separating it from killing people is someone who is expected to maintain concentration for hours/days while simultaneously not actually doing anything.
Then, if we start with the expectation that a human in the loop is necessary for live testing, we could have two. It reduces the impact of independent failures. It also adds some social pressure to avoid negligence. It also provides overlapping coverage, as a lifeguard in this thread has pointed out.
Humans are fallible. Accept that, and design systems to be safe despite individual errors.
Driving a car is not the same as monitoring a self-driving car. There will be differences in attentiveness. Stop equating them.
In my opinion, safe FSD cars are not possible with current technology on existing public roads - so either the technology has to improve by orders of magnitude, or the roads need to be modified significantly.
The gung-ho experimentation that is going on in public is in my opinion very dangerous and should be stopped, and this case is a perfect example of why.
Both the road infrastructure and other drivers are too crappy and unpredictable for FSD cars to be viable and safe.
Also this business of holding someone who is not actually controlling the vehicle accountable is ridiculous, and is surely something which will be proven in court eventually.
If you go on a long multi-day road trip, you might even end up relying on a passenger's feedback in a moment of tiredness.
>A backseat driver may be uncomfortable with the skills of the driver, feel out of control since they are not driving the vehicle
Are you saying that the meatbag in a "self-driving car" is equivalent to a backseat driver?
Because if you are, then "self-driving cars" will never be viable because the humans inside don't actually trust them.
The claim I'm disputing is the one that says people are somehow incapable of paying attention to the road for extended periods of time unless they are in control of the vehicle.
I might agree that a reaction might be jerky and panicky, but then again, it would by definition be so regardless, due to the unexpected nature of accidents.
That sounds like a nice safe system.
For the record, I don't agree. I feel that the average person who would choose to use a self-driving car will tend to be overly relaxed and trusting, not what I'd call a backseat driver.
The kind of person who is a backseat driver wouldn't trust the car's driving and would prefer to control it themselves, and thus wouldn't be using the self-driving feature in the first place.
Humans are very bad at paying attention for a rare event and having nothing else to do 99.999% of the rest of the time. This is surely well known to the people who set up this system in the self driving car.
Don't take a boring job if you can't handle it. The required basic level of continued attention is not a superhuman skill. It's not a job for everyone but it's not exceptional at all.
Driving a truck is in no way comparable to what this lady was tasked to do. A truck driver is performing an active task (operating a truck); this lady was supposed to be performing a passive task (monitoring a car's driving). Active and passive tasks are very different beasts.
But you're on to something with your comparison to pilots. Planes are mostly automated. When that automation fails, the pilots often lack the situational awareness to avoid a crash when they have to take over (for an example, see https://99percentinvisible.org/episode/children-of-the-magen...).
How many times do you think this driver was distracted with the phone when doing their job? I bet it wasn't just this one time that they happened to crash.
Pilots crash after hours of boredom because debugging problems in the air is hard, not because they were negligent leading up to the problem (vast majority of the time, anyway). And still by and large they succeed, it just goes unreported in the news because business-as-usual. This driver on the other hand got into an unrecoverable situation because of their own behaviour.
Dude was paid to pay attention. He didn't. It wasn't because he got fatigued, it was because he was deliberately doing something else, by choice
The guy screwed up. Uber knew he almost certainly would. I won't say the guy isn't responsible at all, but pretending this is purely on him and Uber bears 0% of the responsibility is just silly. Humans are not capable of anything and everything simply because they are provided pay.
If she had been negligent, i.e., too tired to notice the pedestrian, that would have been a different matter. If she had been paying attention most of the time but inadvertently reached over to drink some coffee, that would have been a different matter.
But she wasn't too tired, and she wasn't momentarily reaching for a drink; she was simply choosing not to pay attention to her job so she could do something else instead.
I'm saying that the inattention was basically a given. Yeah, at some point the driver went and grabbed a phone. It may have been fiddling with junk in the car, or their fingers, or day dreaming to the extent they were unaware of the world around them.
Yeah, they grabbed a phone. Is that really any different from any of the above? I don't think so. Particularly considering how ingrained the reflex to grab and fiddle with a phone when bored is. I wouldn't be surprised if they never consciously decided to pick the thing up.
The task was simply unreasonable. Uber should have foreseen this problem and implemented systems to aid and monitor their drivers.
Aside from this, if the car had been driving over the speed suitable for the conditions at the time, the fall person had plenty of time to note that and take over. In that case it was more like: huh, the paint is drying but it looks like it's going to rain... a long time looking like it's going to rain... oh no, should have put the covers on!
But I'm betting the responsibility to take over and drive was not properly conveyed to the fall person either.
It is not at all fair to make the comparison you are making.
Edit: Post fixed now, disregard. :)