Uber backup driver indicted in 2018 self-driving crash that killed woman (phoenixnewtimes.com)
234 points by runnr_az 16 days ago | 394 comments



This job reminds me a lot of when I lifeguarded in high school during the summer. You sit on a chair in the sun, waiting for something to happen. The job is not stimulating, which further increases the difficulty of recognising if someone is in trouble - try watching some rescue training videos [0]. While a lot of the interventions a lifeguard does are prophylactic (“No running!” vs administering first aid to someone who tripped and split their skull open), interventions did happen a few times a month. To make sure those interventions happen when needed, pools implement processes to better manage the risk.

These processes seem to be the key difference between that job and this one. Rotations every 30 minutes, a manager actively observing for engagement, and overlapping zones of coverage were all instrumental in achieving zero fatalities - I can recall examples where each of those saved at least one person's life. Redundancy, supervision, and engagement are what make a pool safe. While I think this woman failed to do her job and may be punished for it, it is important to question why your community pool has better safeguards than an experimental car.

[0] https://youtu.be/4sFuULOY5ik


I think the key difference is the backup driver was watching a video on her phone when the accident happened. I was a summer lifeguard on the ocean for many years (SOYA summer! Sit On Your Ass Summer), and we would have been fired if we browsed our phones while on duty.


I'm in complete agreement with your assessment on that. If the processes I mentioned above weren't in place and someone drowned while the lifeguard was watching a video or reading a magazine, they should be held to account. I'm loath to completely remove individual agency from scenarios like this. The processes just help make those failure cases less accessible, because you get fired before it happens.

That’s really the fundamental point, these processes exist because those failure cases are understood, acknowledged, and chosen to be managed diligently.

SOYA summer was a lot of fun, though an office job still qualifies for the acronym even if you don’t get a great tan and have a lot less fun, heh.


> I think the key difference is the backup driver was watching a video on her phone when the accident happened.

Blaming the individual would not solve the problem. The question that the parent comment tries to answer is "what do we do so the backup driver does not watch his phone?".

That would save lives. To blame the individual and move on will kill more people in the future.


I think the real question is:

Do self-driving cars that need a backup driver who is always available to react very quickly make sense?


This is clearly a development phase. 30-40,000 people a year die on US roads, almost all from driver error.


It is entirely possible that self-driving vehicles will not be any better than human drivers, and they'll just commit different categories of errors. It may be that it really does just take general intelligence to drive a car, and that perfection is impossible in the current environment and infrastructure.

I don't think "this is a clearly a development phase" means anything when there's no guarantee self-driving cars will actually reduce deaths. It may be that the way forward is simply more advanced driver assist technology that better accounts for human psychology, and not fully autonomous cars. All these sacrifices for a future of autonomous driving may simply be a waste.

Personally, I don't think these cars should be on public roads until they get a lot better. We're just introducing a new class of accidents; human drivers generally make the same kinds of errors and other drivers can reason about what they're thinking. Computers are more likely to fail in (to humans) surprising and bizarre ways. What we have now is the worst of both worlds.

As an aside, if we really wanted to do something, I suspect the easiest way to save thousands of lives a year right now would be to implement mandatory functionality on all smartphones to shut off the screen when the phone is moving faster than 5mph. This would inconvenience people and passengers would be annoyed, but there was, after all, a time before smartphones.
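
For what it's worth, the lockout being proposed is easy to sketch; the hard part is the passenger exception. A toy version in Python, where the 5 mph threshold and the `is_passenger_verified` flag are just the comment's assumptions, not any real OS API:

```python
def screen_allowed(speed_mph: float, is_passenger_verified: bool = False,
                   threshold_mph: float = 5.0) -> bool:
    """Hypothetical policy check: allow the screen only when the phone
    is at or below the speed threshold, unless the user has somehow
    proven they are a passenger (the unsolved part of the proposal)."""
    if speed_mph <= threshold_mph:
        return True
    return is_passenger_verified
```

The whole proposal lives or dies on how that passenger verification would work, which is exactly the objection raised below.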


> shut off the screen when the phone is moving faster than 5mph

I like this idea

Maybe need to figure out how to avoid shutting down the passengers' phones?


Sure, my answer to the question I stated would be:

Yes, but the legal entity behind the driving of the car needs to be held legally liable (i.e. the AI provider).

Handover to the driver can be required, but the driver needs to have a reasonable time to adapt to the situation (multiple seconds).

Only with this can we make sure that companies providing driving AIs do their best to make sure they don't kill.

Otherwise they will just have pressure to improve it to the point where it happens rarely, and then stop there because it's not worth the development cost.


> Otherwise they will just have pressure to improve it to the point where it happens rarely, and then stop there because it's not worth the development cost.

As ever, perfect is the enemy of good. Are there currently zero fatalities on the roads because cars are 100% safe? Or have they been developed to an acceptable standard of safety?


Oh, it's never going to be perfect (imo). And I agree with you that a good solution now is better than a perfect one never.

But we need to set incentives in a way that manufacturers will keep improving over time. They won't do it out of the goodness of their heart, and quite possibly marketing and good PR will be more impactful on sales than a 0.5% safer AI.

I have zero trust that the market itself will solve this issue.


We could give a grace period of some years until we apply strict laws.

We just would need to make it clear from the get-go that after that grace period the law will be strict even for cars sold before ;=)


That could lead to a situation where cars that you bought won't be useful a couple of years after you bought them, in case the AI company doesn't deliver.

I have even less faith in politicians not extending grace periods in the face of organized backlash (probably paid for and bought by the companies themselves), making them infinite.


> Hand over to driver can be required but the driver needs to have reasonable time to adapt to the situation (multiple seconds).

Of course. Otherwise you could just always hand over a few milliseconds before any impact and blame the driver.
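
To make that concrete, here's a minimal sketch of such a liability rule, assuming some minimum warning window; the 3-second figure is an invented placeholder, not from any actual regulation:

```python
MIN_HANDOVER_SECONDS = 3.0  # invented placeholder, not a real standard

def liability_after_handover(seconds_before_impact: float) -> str:
    """If the system hands control back with less than the minimum
    warning time before a predicted impact, liability should stay
    with the AI provider rather than shifting to the human driver."""
    if seconds_before_impact >= MIN_HANDOVER_SECONDS:
        return "driver"
    return "ai_provider"
```

Under a rule like this, a handover a few milliseconds before impact would not move the blame anywhere.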


This is a solved problem in actuarial & insurance circles...


> The question that the parent comment tries to answer is "what do we do so the backup driver does not watch his phone?".

We should do the same thing we do to stop regular drivers watching their phones.


Normal drivers have to drive, so they can't watch their phone. Requiring a human to do nothing while being attentive is simply not realistic.


> Normal drivers have to drive, so they can't watch their phone.

Despite being a huge legal and moral issue, a large minority of normal drivers can and do watch their phone while driving.


> Requiring a human to do nothing while being attentive is simply not realistic.

Isn't this basic security guard job description? Fire lookout spotters, etc? It's totally realistic. Sit there, pay attention, and be ready to take action if needed.


As someone who spent years working as a security guard... No. At least not in the way you're thinking, sitting at a desk with eyes constantly on the cameras. Depending on the site, we are expected to check and log them at a given interval and generally keep them in our field of view, but no one is under the illusion that there's perfect attention there.

For higher-security sites this is solved by having several rovers in constant communication and coordination with the (often multiple) people sitting by the cameras. So both the rovers and the desk are constantly doing, not just sitting.

Fire watch is the same thing. You're not just up in your tower with eyes constantly peeled on the forest, ready to react in milliseconds.


Security guards have more than a split second of leeway to react.


You’re not supposed to do nothing. You’re supposed to watch the road and ensure the safety of yourself, the car and the other road users.

I’m not saying it is realistic to expect the driver to be attentive the whole trip, but watching a video on a phone is just gross negligence.

Watching something while doing nothing is a job that has been around for a long time. If you’re employed to do that job, you cannot watch a video on your phone, especially when a failure kills someone. The whole reason you’re hired to watch a thing is to prevent that from happening.

Comparing it to the lifeguards discussions above: you wouldn’t allow phone usages as a lifeguard either.


Agreed - watching a video on your phone is gross negligence. But so is employing one person alone in the car to react in milliseconds to possible dangers.

And note that watching and ensuring are not doing something, they are precisely doing nothing. Most, if not all, human beings are not very good at paying attention while doing nothing which is what Uber is requiring of their test 'drivers'.


This is the reason that self-driving cars that require human intervention are a flawed concept. We should be designing vehicles that we manually control, but have a computer take over when needed.


> This is the reason that self-driving cars that require human intervention are a flawed concept. We should be designing vehicles that we manually control, but have a computer take over when needed.

This makes sense if the objective is to reduce fatalities. Unfortunately it makes no sense if the objective is to merely reduce the cost of operating commercial vehicles by eliminating the need to employ a driver - which is where the money is.


People used to be employed as lookouts on ships for days on end. Patrols have been a thing for as long as humanity had to deal with dangerous wild animals.

The fact that we cannot countenance something like this in the current world is testament to the power our devices have on us.


I doubt any lookouts were good enough to spot a potential threat the second it came into one’s range of vision (which is what something like this requires)


Lookouts aren't expected to react in milliseconds. They also likely communicated pretty often with the rest of the crew. Patrols... patrol.

Also, all of these jobs would be done in relatively short shifts.

Note that I'm not saying the safety driver wasn't grossly negligent in watching a video on her phone. I'm saying that Uber was also negligent in making their safety system just be one person sitting there, expected to react in milliseconds.


Lookouts on ships typically don't stand watch for more than 4 hours at a time. Often they're rotated more frequently.


The point is that it can be done (on and off) for extended periods, unlike what the parent comment stated.


The crime here isn't failing to be attentive enough, it is actively choosing to direct your attention elsewhere.

You don't accidentally start watching a video on your phone.


>> What do we do so the backup driver does not watch his phone?

You make the human drive the car. The robot can jump in and slam the brakes when the human makes a mistake. That is the safe option.

But it just won't sell. Nobody here would buy a car that prevents the driver from making mistakes. An autodrive car that obeys speed limits while letting the driver use Facebook on their phone: acceptable. A car that doesn't allow the human driver to speed or run red lights: market kryptonite.


Do you have any evidence for this that isn't just conjecture, though? I think there are millions of parents that would love to get a car that obeys traffic laws for their children. The peace of mind would be immense, and that's on top of the existing entries in the market that allow you to view live location and speed and even set preferences for radio volume and acceleration rate.

> Nobody here would buy a car that prevents the driver from making mistakes.

Almost every new car today comes with adaptive cruise control, autonomous lane-keeping, augmented backup cameras, lane-change and backup alarms, etc. I don't see why a better version wouldn't sell just as well.


There has been market research on this going back decades. Setting aside all the latest tech: Would anyone in America buy a car that would not allow the driver to break the speed limit? That was proposed decades ago and could be done with a simple clockwork mechanism to limit the throttle once the car hit 70mph. People are fine with an optional adaptive cruise control because they can ignore it. They aren't OK with a system that actively disallows behavior.

There was another interesting proposal: Lights/sirens that automatically turn on when the police/ambulance/firetruck breaks the speed limit. Again, no takers.


"Not breaking speed limit" and "driving by the rules" are different things. If you are passing on a two lane road without breaking speed limit then you are actually breaking the rules by creating a dangerous situation. This is why propositions to set governor to the maximum speed limit won't fly. However, people do want features like ESC, which can effectively limit your speed much lower than the posted speed limit if the road conditions do not allow higher speed. Same with collision avoidance system that do not allow to drive into obstacles and pedestrians, I don't think anybody opposes them. So I think more complex safety behaviors would have their market just as long as they actually improve safety and are not dangerous knee-jerk solutions in search of a problem.


And collision avoidance. Which is a highly desirable feature. I would also love to have a dashboard device that alerts me whenever I exceed the speed limit by some threshold. Practically any ticket I've gotten as an adult was due to inadvertently exceeding the speed limit (driving in a 45 zone, then hit a 30 zone for two blocks).


Actually, a bit of a gruesome idea I had was to look at NHTSA crash records and calculate the average probability of death at various speeds in my vehicle (Nissan Xterra). As you speed up and break the speed limit, a little % icon on the dashboard rises to indicate how likely you are to die if something goes wrong. I think the emotional impact of staring your fatality chances in the face as they steadily rise might encourage safer behavior for our monkey brains.


Gruesome? I'd say possibly life saving :-)


>> that alerts me whenever I exceed the speed limit by some threshold

But would you buy a car that hits the brakes at the speed limit? Not an alert. An actual limiter that you cannot bypass.


This is an unfair premise because it implies that breaking the speed limit is always unsafe. This is flat out not true; millions of drivers break the speed limit every day and will never face the consequences. Out of various dangerous acts, speeding certainly counts as one, but it is not as fatal as, say, playing with firearms.

There are plenty of valid reasons to speed:

* (Most common) The actual speed limit is way too slow for what the road conditions are, and everyone would suffer if they followed it

* You are trying to overtake a truck and avoid their blind spot (I have done this; the less time spent next to a truck, the less anxiety you feel)

* An emergency vehicle is behind you and you need to speed up to overtake a car in the other lane so you can lane-change and make way for the vehicle (State v. Brown)

* Everyone around you is speeding; going slower is more dangerous because people need to lane change to get around you

* You are trying to make some space between some kind of maniac driver who you believe will soon lose control and hit you (I have done this to get away from drunk drivers)

* Your loved one is experiencing a medical emergency and you need to get to the hospital

* You are being chased by some kind of criminal or terrorist

* You are avoiding a natural disaster such as a rockslide, tornado, flood, or avalanche

Realistically, will you ever be driving away from a rockslide or sedan terrorist? Probably not, but with a regular car, you at least have options to exercise if you ever are in that spot; with a limiter those opportunities are gone.

In the case of a limiter, a mis-configured or broken speedometer or sensor can give false readings that severely inhibit your driving, even while driving, which could cause an accident if your vehicle slams the brakes without warning. It could also prevent you from using highways if your vehicle is not capable of accelerating past 40mph. This is the same argument against self driving cars; humans want control to keep themselves safe, not so they can be dangerous.


> ... Everyone around you is speeding; going slower is more dangerous

Yes, those things can happen; however, the car can allow the driver to go, say, 15 km/h faster - it doesn't need to slow down at 0.01 km/h above the limit.

Some people frequently go 30 km/h too fast... on 30 km/h roads in areas with children
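
A rough sketch of that kind of "soft" limiter, using the 15 km/h margin from the comment above; whether the margin should only apply during maneuvers like overtakes is an open design choice, and the parameter names here are invented:

```python
def max_allowed_speed(posted_limit_kmh: float, overtaking: bool = False,
                      margin_kmh: float = 15.0) -> float:
    """Sketch of a soft governor: the car permits a bounded margin
    above the posted limit (e.g. to finish an overtake quickly), but
    never an unlimited one, and none in normal cruising."""
    if overtaking:
        return posted_limit_kmh + margin_kmh
    return posted_limit_kmh
```

This would address the "passing a truck" objection above without permitting 30 km/h over in a school zone.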


We should absolutely be blaming the individual in this case. They were negligent in their stated job role which led directly to the death of a bystander. We should also be finding fault with the processes and finding solutions for them, but this does not absolve the driver of their responsibility.


> They were negligent in their stated job role

This is not how labor laws work. The role description needs to be fair. An employer cannot just shift all the blame to the employee. The employer needs to take the needed actions and provide the needed equipment and procedures so the employee can do her task.

But, let's see what the result of the trial is. I am not an expert and the trial will clarify this point.


Indeed, the employers are very much liable for negligence by their employees and its scope is much wider than labour laws. This is called Vicarious liability[1] in common law.

[1]: https://en.m.wikipedia.org/wiki/Vicarious_liability


I don't think anyone is arguing that Uber has no liability here, only that the driver does have liability.


Well, to be fair, the prosecutor's office is arguing that.

"Yavapai County Attorney Sheila Polk decided in March 2019 that Uber itself had not committed a crime in Herzberg's death."


I stand corrected, thank you


I'm not saying the company should be able to shift the blame to the employee, I'm saying they're both at fault. Labour Law in Australia does in fact recognise this and individuals can be held criminally liable for negligence on the job. I understand that this happened in the United States, but my intention is not to predict the outcome of the case, but to make a statement about who should be held liable in a just system.


> This is not how labor laws work

Labor laws are irrelevant. They were grossly negligent in operating a vehicle, which is the domain of criminal, not labor, law.

Principal-agent aspects of criminal law may also provide liability for the employer, though apparently they haven't been charged. To the extent they could be, that's unfortunate, because only employer liability, civil and/or criminal, is likely to substantively affect the likelihood of recurrence. While the individual is absolutely culpable, this is not the kind of behavior where individual criminal liability is likely to be an effective deterrent without better employer supervision, and that supervision won't happen unless the employer has a reason to pay for its cost.


The thing is that there are plenty of precedents for jobs that require staying alert behind the wheel of a motorized vehicle without supervision (e.g. taxi, bus, truck drivers). Whether the vehicle is equipped with safety/convenience technology is beside the point.


The taxi/bus driver isn't given a tablet with instructions to watch the tablet (monitoring the self-driving), only to be accused later of not watching the road.


Are you saying this was the case here? Are you familiar with the protocols used at the time, or are you speculating?

I see e.g. cruise and waymo SDVs around SF with only one person on board. What of those?

My very layperson protocol would be "watch the road while the car is in motion, only use the tablet when the car is safely parked". My understanding is that SDVs store all the data they collect, so I see no reason to require operating a tablet while the vehicle is in motion. Correct me if I'm wrong.


Unless Cruise has changed recently, they require a primary and backup person in the car, and the rules seemed strict. I remember they were super serious about it, iirc you weren't allowed to talk to either of the drivers after you put your seatbelt on (or maybe it was after the car was on?).

SF is also an easier place to be watching an SDV in the sense that there's more happening to keep your mind occupied (weird intersections, construction, people doing stupid things). It's a lot easier to lose focus when you're watching a car drive itself around deserted highways in the middle of the night.


> Unless Cruise has changed recently

I can't say much about what their actual protocols are supposed to be, I can only offer my personal anecdata. I do occasionally see Cruise cars around the Japantown area with only one person in the car. They often park at the Safeway across Geary. The Waymo cars I see roaming around closer to downtown, also with a single driver, though they don't seem to concentrate in any particular area that I can tell. I also occasionally see unbranded SDVs, sometimes w/ two people, sometimes w/ one. Not sure which company those are from.

> SF is also an easier place to be watching an SDV

Well, from a regulatory perspective, one shouldn't be able to get away with saying "well I was not in a crowded area to keep me attentive, that's why the accident happened". I'd even argue that if the job involves looking at a tablet while the car is in motion, then being in a crowded area would make things _more_ dangerous.


Previous articles mentioned that the driver has some tablet device(s) to watch, to monitor what's happening with the AI.

The last PR piece seems to omit that. Next piece maybe we will find out the driver was watching hulu on their phone on their lap... while reclassifying objects on the AI tablet also on their lap.


Ok, but what I'm curious is whether the monitoring/classifying/whatever was done in real time with the vehicle in motion, or after the fact. Presumably one would be able to do a far better job at whatever they were doing when they are able to play back/slow down/pause different data channels from a trip at will?

Because if the status quo is to poke at tablets in moving vehicles, how is anyone ok w/ cruise/waymo/whatever cars driving around SF?


> how is anyone ok w/ cruise/waymo/whatever cars driving around SF?

California DMV has shown a lot better oversight of self-driving than Arizona. They had deregistered several Uber owned vehicles in 2016 [1], and have some reasonably specific rules. It's much more OK in SF than AZ.

[1] https://www.theregister.com/2016/12/22/uber_selfdriving_cars...


>>We should absolutely be blaming the individual in this case.

Extreme case: Your employer tells you to stay alert in a super-boring job for 8 straight hours or else a person can die or whatever. But that may be an impossibility due to how our brain works.

I'd like to know how many hours she was sitting there and how often she had to intervene that day/week/month. Uber, maybe, should have had another person checking now and then on drivers. Say, one for every 10 drivers...


If she had fallen asleep or something I would be inclined to agree with you, but she was on her phone while she was supposed to be "driving." Boring or not it seems pretty obvious that you shouldn't be on your phone when you're in charge of a moving vehicle.


If by "how our brain works" you mean our biological response times or alertness then sure.

But if you mean watching a TV show to distract themselves then not in a million years.

Uber's training or setup may well prove to be negligent... but this employee took the active, conscious step of watching a TV show on her phone when her entire job was to supervise the car's driving and safety of others.

If that's not criminal negligence, I really don't know what is.


Recognition of human capabilities in attention in repetitive tasks is important. As an earlier commenter said, lifeguards are limited to 30 minute intervals in their experience. I think baggage screeners with TSA (this is not an endorsement or attack on 'security theater', but an acknowledgement of how they address this issue) have similar time limitations due to focus drop off.

I think the TSA also keeps scanners on their toes by digitally placing contraband in the bag pictures at various intervals to make sure the scanner is able to stay engaged (and maybe to get some metrics on hit/miss rates). Maybe these supervised self driving cars need to project a deer jumping into the road and log the driver's response or something.
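
A toy sketch of how such attention probes could be scheduled, using roughly exponential gaps so the operator can't predict when the next one comes. All the interval values and names here are invented for illustration, not any actual TSA or Uber protocol:

```python
import random

def schedule_probes(shift_minutes: float, mean_gap_minutes: float = 20.0,
                    seed: int = 0) -> list[float]:
    """Return probe times (minutes into the shift) at which to inject
    a synthetic hazard - a projected deer, a fake contraband image -
    and log whether the operator reacted. Exponential gaps make the
    timing memoryless, so vigilance can't be timed to the schedule."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_gap_minutes)
        if t >= shift_minutes:
            break
        times.append(round(t, 1))
    return times
```

Comparing reaction logs against the schedule would give exactly the hit/miss metrics mentioned above.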


If she could not do the job then she should have resigned.

Edit: there was another post by [redacted] but it disappeared, it does not even show as deleted, weird.

It said: "This isn't some ride along in a consumer grade EV. They were gathering data to program the car with. The emergency braking systems were not active. The person that was supposed to be monitoring the vehicle knew this."


When a commenter deletes their comment, it disappears.

It would probably be more respectful not to copy what they posted along with their username. Actually, I think we'd better redact the username from your quote. People sometimes have important personal reasons for deleting things. The odds aren't high that it matters but the impact could be high if it did.


It would probably be more respectful if you would not mess with my posts; now even I do not know who made said comment. Guess I should start signing my posts and keeping backups of them.

People sometimes have important personal reasons for deleting things, but this does not mean anything. It is not as if "x posted y on twitter and deleted it afterwards" or "the page was edited/deleted, here is an archive.org link" is uncommon on HN, nor is it as if a stalker would not be able to scrape someone's HN posts instantly as they were posted.

> it disappears.

In my experience they show as [deleted] but I guess this is only for posts that have replies.


Obviously it's rare for us to intervene in a comment that way (and never without letting them know), but in this case it was the lesser evil compared to compromising another user's privacy. Copying what they posted along with their username is basically overriding their deletion, and that was their choice to make, not yours.

I'm sure it wasn't that big a deal and I'm sure your intentions were good, but I'm also pretty sure most HN users would want us to protect them in this way, if only for the rare occasion on which it actually is a big deal.


> In my experience they show as [deleted] but I guess this is only for posts that have replies.

Kind of. Technically, posts that have replies can't be deleted, so what sometimes happens is the poster edits their post and replaces all the text with "[deleted]".


>Technically, posts that have replies can't be deleted, so what sometimes happens is the poster edits their post and replaces all the text with "[deleted]".

Which should also be prohibited, because it's just as destructive. You shouldn't be able to edit a post with replies either.


I don't know, if my posts were locked after the first reply, my contributions to HN would be definitively worse. I almost always find confusing typos and grammar errors, and things that just could have been stated better, after initially posting a comment, and I use edits to fix those problems.

Yes, ideally I would just do more revising before posting a comment the first time, but I don't seem to work that way.

I think the two-hour window is a good compromise, and if anything I really wish it was longer. Yes it has downsides, but I really think they're outweighed by the good.


> a manager actively observing for engagement

Maybe this is the key.


The other ones are necessary too. The issue is that people will lose attention - it is not possible for humans to not lose attention when nothing is happening for a long time. And even when things are happening, it is not possible for humans to keep attention all the time.

The control is one thing, but a manager can prevent you from looking at your phone, and a manager creates an interruption from the monotony. The rotation is necessary to deal with human hardware, which requires breaks (and a moving car requires faster responses than a drowning person).

The overlapping zones deal with the fact that breaks and a manager being present are still not enough.


> it is not possible for humans to not lose attention when nothing is happening for a long time.

I really don't understand this assertion.

Having to babysit auto-driving cars in their development phase is well within the realm of what humans are capable of.

The driver booted Netflix, navigated to an episode, and watched it on their phone for the same reason anyone would do it at work: not because they were driven to a boredom so intense that it was the only way to get a grip, but because they simply thought "what harm could it do?"

These replies acting like a slightly boring job is doomed to this outcome remind me of those ridiculous articles that stereotype millennials as 30yo-children so spoiled that they're unwilling to do the simplest of jobs. "Millennials refuse to be trash collectors and don't understand how someone could do it. 'Humans aren't made to pick up trash,' John, 32 years old, told our newscaster. 'Else we would have tongs for hands! I don't even think it's possible.'"


So you think that humans can sit there and simply watch, 8 hours a day for the rest of their lives, and not lose focus ever? I have yet to encounter a system built on that premise that wasn't woefully ineffective.

Which is not to mention that these workers have ridiculously weak motivations. They create nothing, spending hundreds of hours merely testing out someone else's work. They exist in a field with very little upward mobility, since they are effectively the only non-skilled labor at their companies. Even if there were upward mobility, they work in a field where their job is to help eliminate their job. The only thing offsetting that is that the pay is slightly better than equivalent options.

I can understand why people lose attention driving these cars around.


> Having to babysit auto-driving cars in their development phase is well within the realm of what humans are capable of.

The argument is that no, it is not; even if you never open Netflix, you will lose attention. That does not make Netflix OK. But this driver having Netflix open does not imply that people without Netflix won't lose attention. They will.

Your second paragraph is both a strawman and nonsense. Long before millennials, people were not able to hold attention while nothing was happening. Which is why train drivers have protocols to prevent loss of attention, and so do lifeguards.

The whole "if you admit a common human limitation out loud, you are weak and spoiled" line is either arrogance or a manipulative argument.


> it is not possible for humans to not lose attention when nothing is happening for a long time. And even when things are happening, it is not possible for humans to keep attention all the time.

But they could watch the road instead of watching their phones. It would be a pretty simple fix- have the backup driver keep their phone in the back of the car. I've had jobs where we had to lock our phones in lockers at the start of our shift, it's not uncommon.

Also, I disagree that a moving car requires faster responses than a drowning person: kids often gently slip under the water in an instant, and it's generally quiet and fast, never a big splash or dog-paddling "help me!" commotion like in the movies. And longer without oxygen to the brain can lead to permanent brain damage, so I'd say both activities require equal vigilance.


> But they could watch the road instead of watching their phones. It would be a pretty simple fix: have the backup driver keep their phone in the back of the car.

Sure, but that is just a band-aid for one instance of the problem. Something that allows management to pretend they dealt with it, while not actually trying to solve the problem. A typical human will likely start to daydream without the phone, or otherwise lose attention.

> Also, I disagree that a moving car requires faster responses than a drowning person: kids often gently slip under the water in an instant, and it's generally quiet and fast, never a big splash or dog-paddling "help me!" commotion like in the movies. And longer without oxygen to the brain can lead to permanent brain damage, so I'd say both activities require equal vigilance.

That does not require split-second attention and a quick reflexive reaction. Permanent brain damage does not happen because a kid was underwater for 5 seconds; a car crash can happen in under one second.


> "what do we do so the backup driver does not watch his phone?"

I mean, a first step is to hold the driver criminally negligent if they get into an accident. Which already happens. That's what legal punishment is for -- to deter extremely dangerous/harmful/etc. behaviors.

But because people often think "it'll never happen to me", that's why companies generally have supervision and spot checks of employees, to try to catch bad behavior and reprimand/fire them first, before it results in damage/death.

Which, if I were Uber, would probably mean installing fish-eye cameras in the corner of each vehicle that would regularly be spot-checked to see if backup drivers were, in fact, not doing their job.

This isn't a particularly difficult problem. And I don't think privacy is a particular issue here.


> Blaming the individual would not solve the problem.

Punishing the culpable lapse by the individual is necessary but not sufficient. The driver should have been monitored and supervised by the employer, and failing to do that is also a culpable lapse which should be punished.


To your question: The question that the parent comment tries to answer is "what do we do so the backup driver does not watch his phone?".

This is a solved problem: you fit the experimental car with an interior camera aimed at the driver. If the backup driver is observed to be distracted by the interior camera, the car can sound a warning for the driver to be alert and if they ignore the warning, the car's hazard lights would come on and it would be slowly brought to a halt.
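
As a sketch, that escalation reduces to a tiny state machine. The thresholds below are hypothetical; a real system would tune them and filter the gaze signal before acting on it:

```python
# Hypothetical thresholds for a driver-monitoring escalation.
# A production system would tune these and debounce the camera's gaze signal.
WARN_AFTER_S = 2.0   # continuous eyes-off-road time before an audible warning
STOP_AFTER_S = 5.0   # continued distraction before hazards + controlled stop

def monitor_step(eyes_off_road_s: float) -> str:
    """Map continuous gaze-off-road time to an escalating response."""
    if eyes_off_road_s >= STOP_AFTER_S:
        return "hazards_on_controlled_stop"
    if eyes_off_road_s >= WARN_AFTER_S:
        return "audible_warning"
    return "ok"
```

The point of the two-stage design is that the driver gets a chance to recover attention before the car takes itself out of service.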


So an AI to watch the driver who watches the AI drive?


Such an internal camera system doesn't need to be AI-powered. You could achieve the detection using a depth camera that uses active infrared (IR) stereo. Stated differently, a depth camera uses IR and visible light to see the driver's face.


> what do we do so the backup driver does not watch his phone?

You send this person to prison for negligent homicide, which I think she is guilty of, and remind backup drivers in the future they're still responsible for the safe operation of the vehicle.

To imply that the individual has no culpability for watching a video on her phone while the vehicle she's in charge of runs down and kills a homeless woman is insanity.


Indeed. But to me the distinction between your experience and the driver's is that you were supervised and compliance was checked. If no-one was (at least occasionally) making sure the driver was always focusing on the road ahead, it was inevitable that the driver would not think the rule was that serious, and lapses were bound to occur.


The point is to not place someone in a position where their attention would likely wander (for example not having active shifts over 30 minutes in the case of a lifeguard).

Safety is about proactively defending against human nature, not blaming people after the fact.


Again, it’s not the attention wandering away. It’s watching a video while driving a 2000kg steel vehicle


But would that really have made a difference if that person had stared straight ahead instead?

If you don't expect something to happen reaction times can be really slow.

Which is also how it differs from the lifeguard job: there, you always expect that from time to time something will happen, but as far as I can tell many people would not expect this when they use a self-driving car. Because, you know, the car is already meant to be able to handle this. Or at least that is how many people think about self-driving cars, I think.


Hopefully people hired to be the safety check on self-driving cars are being trained that the cars are not perfect and require human oversight.


>To make sure those interventions happen when needed, pools implement processes to better manage the risk.

Thankfully, this is already happening in the self driving space as well! At least as far as research and development efforts are concerned.

I used to work right next to another self-driving tech company, Aptiv, in Las Vegas. Several times a day, I would see their R&D cars launching from headquarters, and there were always two backup drivers. In addition, the backup driver in the passenger seat always had a clipboard up and was actively taking notes. A "two-man rule" is common practice in critical, high-risk activities.

I’ve known the details of Uber’s murder of Elaine Herzberg for a long time. I read the original police reports early on[1]. And I don’t take that fatality lightly.

Just the same, even on a bicycle, I always felt safe sharing the road with Aptiv R&D cars because of their two-man rule and constantly attentive drivers.

[1] https://www.ntsb.gov/investigations/AccidentReports/Reports/...


Not to be pedantic but the indictment is 'Negligent Homicide'. I believe this is involuntary manslaughter. The reason I bring this up is because I see people misuse the term 'murder' on HN often, leading their readers to surmise the possibility of intent when that is not at all the charge.


One technical "solution" to this might be to have the computer ask (voice) questions about the road - such as "are we approaching a right or left bend?" or "was that a red car we overtook?" These could measure attention to the road and reaction speed. It is also quite easy to see this being put in place for normally driven vehicles - one could imagine an insurance requirement to prove the driver's attention during the journey.

It will of course totally ruin the illusion of being driven smoothly by a robot, but I think that illusion should be canned asap.

But I agree - the company is also liable for failing to have processes to stop the inattentive behaviour. If you were on duty watching your phone, and no manager spotted this or took away the phone, they are just as liable.


I think the technical solution has merit. It doesn't need to be voice, presumably you could rig something up on a steering wheel with buttons to press to record every time they go past an intersection or something. Something to force them to watch the road and keep their hands in the right area. You're right though, that does dent the image of unattended self-driving but we've already demonstrated that isn't what this was.

People are condemning this person for being on their phone (and fair enough), but if I am sitting in a comfortable vehicle with nothing to do / read then I will fall asleep soon enough. I think this is quite common.


Not a lifeguard, but I've pulled several people from the water during summers. I once pulled someone to the shore in Tunisia, a few feet from where the lifeguard was standing. He neither noticed the person being in trouble nor the rescue. A couple of elderly Germans saw us arrive and hurried to alert him, but he had his attention directed elsewhere and didn't even notice them talking to him. They were outraged in an adorable elderly manner.


I assume this is the reason that casino croupiers are rotated frequently off and on the floor as well, although there of course it is about protecting profits rather than safety.


Never worked as a lifeguard but have 3 kids who like to swim and who have survived to be teenagers.

The lifeguards at our local pools rotate about every 10 minutes. It seems like changing position and also that you have some interaction would be helpful to staying focused.

It's hard to replicate that in a car but perhaps something that was asking for data - "What's the speed right now? Are there any fire hydrants in sight?" - that could be flagged if contradicted could weed out drivers who are watching a movie or nodding off.
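
One way that flagging could be sketched: compare the driver's answers against what the car itself logged at the moment of each prompt. The prompt names and telemetry fields here are made up for illustration:

```python
def flag_inattentive(answers, telemetry, max_wrong=1):
    """Flag a driver whose answers contradict logged vehicle telemetry.

    answers:   {prompt_key: driver's answer}
    telemetry: {prompt_key: ground truth recorded by the car}
    """
    wrong = sum(
        1 for key, given in answers.items()
        if key in telemetry and given != telemetry[key]
    )
    return wrong > max_wrong

# Hypothetical prompts: "What's the speed right now?" and
# "Are there any fire hydrants in sight?"
telemetry = {"speed_mph": 38, "hydrant_in_sight": True}
print(flag_inattentive({"speed_mph": 40, "hydrant_in_sight": False}, telemetry))  # → True
```

Allowing one wrong answer (`max_wrong=1`) gives some slack for honest misjudgements while still catching someone watching a movie or nodding off.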


But don't lifeguards often work in teams of at least two, frequently (e.g. every 15 - 30 minutes) alternating simply because of that limited attention span?


They do. That's why the OP mentioned rotations and redundancy. Uber could easily put two people in the vehicle and have them swap seats every 20-30 minutes. The person in the driver seat would be responsible for watching the road and reacting while the person in the passenger seat supervises.


Good thing self-driving cars aren't single-seaters yet.


Yup, I've had experience with safety drivers in autonomous vehicles that are extremely professional and that I feel very safe with. After seeing the Uber video, the first red flag for me was the fact that she was alone; the second was (obviously) the fact that she was watching a video on her phone, which means either she was being criminally negligent (if she had been on-shift for a short period of time) or Uber was (if she had a long, isolated shift).


> it is important to question why your community pool has better safeguards than an experimental car.

Because the people responsible for the experimental car don't give a shit and try to wash their hands of responsibility by placing all the blame on the safety driver.

Don't get me wrong, she's responsible, but not nearly 100%.


Yeah, her job was basically to do a quick-time event lasting several hours. Absolutely set up to fail.


I think it’s useful not to label this person’s job “driver”, but instead “fall guy”. Their job is to sit around all day doing absolutely nothing. But in the rare event where the car fails, they need to suddenly become alert and fix it, or take the blame for the machine’s failure. I think it’s less accurate to describe this transaction as a form of labor, but instead as an indemnity, an assumption of liability, paid for via a wage.

For train conductors with a similar challenge, a solution has been invented requiring them to pass various visual attention challenges that detect if they aren’t alert. Such systems weren’t present here. The system wasn’t designed to work - it was designed to protect Uber by shifting the blame for failure.


It is hard to imagine they weren’t aware that their job involved doing nothing (but pay attention to the road).

I’d be inclined to give them a pass if they were looking at the road and just inattentive (because that’s expected, like you say), but I find it hard to sympathize with someone watching a show on their phone, explicitly ignoring their one job.


that's the fundamental attribution error again, attributing poor judgement/negligence to the driver, who's the visible actor, rather than the inherent systemic flaws designed by other unseen actors who hold greater responsibility.


It's an error to attribute negligence to the safety driver of an experimental car who is watching a video rather than watching the road?

Obviously the system should improve to make errors less likely, but that doesn't absolve anyone of guilt in my view.


We can discuss morality endlessly. We have (collectively) agreed to use the law to assign guilt and mete out judgement -- the court case will bear this out eventually. What's left is the problem itself, abstracted from the particulars of this case. And the best approach to solving this kind of problem, as evidenced by the aviation industry's miraculous safety record, is to consider the concept of assigning blame as out of scope, because it shouldn't be the basis for our safety precautions. In this context, blame is something that distracts you from the real cause and can prevent you from looking more closely. Blame is a kind of 'bottom' argument which in practice is wielded to dismiss alternative perspectives and refuse to integrate their ideas.


that's a good point. i'd add that even in the legal system, assigning blame is highly problematic because decisions are overwhelmingly pressured into being purely binary findings, whereas blame is nearly always shared by multiple entities. lawsuits would be much better if the "win-lose" dichotomy was entirely abolished in all its forms.

in this case, the backup driver should shoulder some blame, but not the majority of it, and commensurately, should not face any of the harsh penalties associated with manslaughter.


Airlines have reached their current safety record precisely because they consider the system as a whole and don't just focus on the error.

The wet bags behind the autopilot get bored, distracted, complacent, ill, or reckless due to a multitude of factors, some of which are outside their control.

The human modes of failure are well known at this point, and that there was basically no mitigation is a responsibility of the system designer.


But they do that while holding individuals accountable for their actions.

A better system would have spotted this person was watching their phone not the road, but it would still have held them accountable and (at minimum) fired them!


This flaw is perfectly predictable, given other experience in industrial safety, so why was the better system not built?


A totally fair point, Uber needs to be held to account for their systemic failings.

That doesn't mean that an employee being clearly negligent doesn't also need to be held accountable.

Them being held accountable is one part of a good system.


I think that might have passed if she had only listened to the radio, a good enough way to relieve boredom. But streaming video? That's a punishable offense while driving.

Also, many people work at really boring jobs. If a serious error happens while you watch Netflix, you'll get fired, for sure.

But again, sometimes listening to music is permissable.


So how did this worker get through their screening/interview process? When hiring for this job, which criteria did they focus on? Why even allow the driver to carry their screen around? Massive failure; I think there is too much FSD Kool-Aid in the valley springs. I am somewhat surprised we had the power to stop/pause them after this reported unfortunate event. Anyone working on FSD today should focus entirely on mass transit vehicles mkay tnx


Would you say the same thing for a chauffeur or truck driver who killed someone because they were using their phone while driving?


That somebody driving a truck or car in the normal way is doing more work than a person supervising a self driving car is the whole point of self-driving cars. They take all of the activity out of driving, but none of the tedium.

I don't think it's realistic to expect people to function well under those conditions.


When some people learn to drive they can take courses with instructors who sit in the passenger seat and have a brake pedal. When the student driver runs over and kills a pedestrian while the instructor is on their phone, is it the instructor or the student who should face liability? Why not both?


I'm fairly sure the instructor is at least partly liable.


In England certainly the person supervising a learner driver is subject to the same laws about alcohol, mobile phone use, etc as someone actually driving- and can be convicted on a charge of aiding and abetting any crime the learner driver commits. This is the case regardless of whether they are a professional driving instructor with their own brake pedal, or a parent supervising their child driving the family car.


In Portugal there is an interesting law regarding this: The instructor/examiner is always liable for an accident during driving lessons/exams, except if the accident resulted from an action where the learner disobeyed a direct order by the instructor. In that case the learner takes responsibility.


no, because the system is not adversarial in that case, even though the actions of the driver, and the potential tragic results, might be similar.


Saying it's an attribution error is circular. It's a conclusion, not a premise.

And I think it's wrong. If my car is on cruise control, that doesn't mean I can stop worrying about my speed. You can't always just blame the company because it's the bigger entity.


I think we can all agree that many murderers end up committing their crimes partly because of a system that gives rise to poverty and the knock-on effects of poverty on humans. However, I think most folks also agree that incarceration is an important factor in discouraging the commission of more murders. Not everyone does, but even the most lenient countries tend not to let the perp off scot-free.

Which is all to say that "the system made me do it" really doesn't fly as an excuse for felonious behavior.


If deterrence didn't work, prison would be pointless cruelty. Paying sustained attention while nothing happens for hours is simply not something people are generally suited for. You can't deter them into it.


sure, but that extreme isn't helpful just as making the FAE isn't helpful, since, as others have astutely pointed out, the system was designed, whether through negligence, malevolence or other intentions, to put the driver in a (systemic) no-win situation.

for instance, in the 737 max crashes, the pilots made errors, but the principal blame lies with boeing and their egregious systemic design flaws.


Stepping back, I also find it hard to sympathize with someone watching a show on their phone. But there are two parties to this crime, and only one of them is being charged. Uber is responsible for creating this situation - I doubt this "backup driver" watches videos while driving her own car - yet has managed to escape liability. Hence why it's appropriate to describe her as a "fall gal".


Yeah, the problem isn’t that they weren’t paying attention, or that they weren’t even trying, but that they were intentionally NOT paying attention.

That said, I don’t think this gets Uber off the hook. If I were on a jury, I’d likely say Uber was guilty of manslaughter (barring a real look at the evidence).


> That said, I don’t think this gets Uber off the hook

There is exceptional negligence on their part. The fact that the vehicle was just blithely travelling at the posted speed limit, even at night, even though that speed exceeded its assured clear distance ahead, is a damning point. The vehicle will operate unsafely in its default configuration.


And for that reason you have a driver there who should have taken over.


Did this person know he should have taken over when the car was respecting the speed limit but still going faster than it should have been going? I go the speed limit at night as well. But here in Belgium, where there's so much street lighting, it might as well be day.

How much slower was the car supposed to go for its sensors, and was it all the guy's responsibility even if he knew Uber apparently disabled the auto-braking due to issues and didn't pause road tests?


> Did this person know he should have taken over when the car was respecting the speed limit but still going faster than it should have been going?

Yeah, and how was the driver even supposed to know how fast the car should have been driving? I know how fast I should be driving because I know how far I can see, but the car had LIDAR / radar / night vision / etc. If the speed limit is 45 MPH and the car is going 45 MPH, what reason did the driver have to think that the car couldn't see? Either the driver can rely on the car to "see" or they can't.


Honestly if I’d been given the job of babysitting a self driving car, the same thing could have easily happened to me. I’d get bored out of my mind and unintentionally pull out my phone & start browsing Reddit before I caught myself. It’s a shit, soul sucking job to actively do nothing.


Let's say you were working as a lifeguard at a swimming pool. Would you also unintentionally pull out your phone after a couple of hours?

If you did and someone drowned in the meanwhile, there would be no debate about who was liable.


Though, lifeguards at a swimming pool are typically not expected to be attentive for hours. They rotate and switch.


That's the difference between a system designed to keep people alive and a system designed to do the least possible work to avoid liability.


Even if they’re not, you wouldn’t allow them to watch a video on their phone.


And they're not allowed, and (as other commenters say elsewhere in the thread) there's another person on the floor whose job is to ensure lifeguards' attention doesn't drift off. That's how you design safe systems: defense in depth.


I know that cars are among the most probable causes of sudden death/injury, so I wouldn't do that. How can you 'unintentionally' pull out a phone?

I'd probably be looking at the road all the time, but just freeze at that moment where I'd need to take over.


> How can you 'unintentionally' pull out a phone?

Probably more like "absent mindedly" pull out a phone.


Absent-mindedly is closer. Whatever you call reaching for something out of habit without thinking about it.


Habitually


You can listen to audiobooks -- if your job is to be a backup driver, keep your eyes on the road.


But you did not work there. There are people who would very easily do this job. If she knew she couldn't concentrate, she shouldn't have taken the job and the risks involved.


Employees taking jobs they're unqualified for is a known problem. Reasons might include ignorance, hubris, financial need, greed. Employers need oversight systems to ensure employees are performing their jobs adequately, especially in safety critical roles.


We know that you can pay someone as much as you want but you can’t get them to be vigilant for a low probability event. Except, as you say, by giving them some continuous skin in the game, even if it’s illusory.

This person was being paid to pay attention, which is hard enough to get right. What happens when the person owns the car, and is just commuting? And there are millions of them, not just one person?

Mayhem, if it’s a variant on this system instead of something much more sophisticated.


> you can’t get them to be vigilant for a low probability event.

How about starting by simply removing their phone? Isn't that easy? The driver was looking at her phone at the time when she was supposed to be looking at the road. This is a simple violation. The phone should have been banned and the driver fired after a first violation.

I am pretty sure that while what you say might be hard to do for some people, there are individuals out there for whom sitting and looking at the road for hours would not be such a big problem. If she couldn't handle the job, she shouldn't have worked there.


> I am pretty sure that while what you say might be hard to do for some people, there are individuals out there for whom sitting and looking at the road for hours would not be such a big problem

Those people are known as drivers and they’re doing more than just looking at the road. The driving/feedback from the controls is keeping them engaged.


That's all fine, but we also have decades of experience designing systems for exactly this - they are called train drivers. Driving a train can be the most boring job in the universe, in ways that driving a truck isn't - there is very little feedback from the controls, you can be going for hours in a straight line without any need for input or change of speed - so locomotive controls are designed to require constant positive input to keep the driver engaged. This should be the same - the system should require the backup driver to keep confirming/selecting something on the dash, even if it's inconsequential to the operation of the system.
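
A locomotive-style alerter of that kind reduces to a simple rolling timer; the window lengths below are illustrative, not taken from any real railway spec:

```python
class Alerter:
    """Minimal dead-man/alerter sketch: the operator must provide positive
    input within a rolling window, or the system escalates."""

    WARN_S = 25.0     # idle seconds before the buzzer sounds
    PENALTY_S = 35.0  # idle seconds before a penalty brake application

    def __init__(self):
        self.last_input = 0.0

    def acknowledge(self, now):
        """Any positive input: button press, pedal, dash confirmation."""
        self.last_input = now

    def state(self, now):
        idle = now - self.last_input
        if idle >= self.PENALTY_S:
            return "penalty_brake"
        if idle >= self.WARN_S:
            return "warning"
        return "normal"
```

The key property is that doing nothing is never a stable state: the operator must keep acting, or the vehicle takes the safe action on its own.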


> I think it’s useful not to label this person’s job “driver”, but instead “fall guy”.

Or "sacrificial part" -- designed to break first to protect the rest of the system. The NTSB report (https://www.ntsb.gov/investigations/AccidentReports/Reports/...) is an interesting read, by the way.


Corporate equivalent of ablative armor then?


This is exactly right, and probably the way that liability will shake out with self-driving vehicles in general since it's in the manufacturer's best interest for it to be that way.

"Here's your self-driving car with a 20 page EULA/ToS. Oh, but if it's in self-driving mode and something goes wrong it's not our fault, you must respond (within seconds) and fix it yourself."


Their job is definitely not to sit around and do nothing. Self-driving cars regularly disengage from autonomy (e.g. emergency vehicle, construction zone, erratic bicyclist) and the safety driver needs to be ready at all times. Safety drivers also undergo extensive training around this.


I believe the normal behavior of self-driving cars in these situations is for the car to brake to avoid hitting the obstacle, so the safety driver has ample time to take over and navigate the situation. Didn’t Uber disable the auto-braking because it was oversensitive, choosing not to pause road tests until that issue was fixed?


Uber disabled the OEM (Volvo) auto-braking but not the Uber braking algorithms. IMO, that’s largely a red herring as they could have equally well chosen to base their platform on a car that didn’t originally have a factory auto-braking system.


No, it is not; there is no legislation in place that shifts the burden onto the technology. The technology is assistive, and the driver is ultimately responsible.

Would you blame an airplane's autopilot if it crashes? We still have two pilots even though 99% of the time the thing is on autopilot.

She's a test driver, she did not pay attention, and she killed someone. At best, her employer should help and compensate her, given how it was a workplace, on-the-clock accident.


> Would you blame an airplane's autopilot if it crashes?

If the plane crashed due to an autopilot error, yes, absolutely.


This is exactly right.

This job is a wage against a (very nasty) lottery ticket, to sit there to absorb the legal fallout for decisions made far away.


>Uber made a series of development decisions that contributed to the crash’s cause, the NTSB said...Uber deactivated the automatic emergency braking systems in the Volvo XC90 vehicle and precluded the use of immediate emergency braking, relying instead on the back-up driver.

If the driver committed homicide, it sure sounds like Uber is also guilty of homicide.


Not just the Volvo system, but Uber had tuned the object detector to ignore the sparse lidar returns that the car saw 6 seconds before impact. Meyhofer, the head of ATG, fought to tune the car that way because trees cause similar sparse returns and the car had been stopping for trees while testing. The pressure to tune was there because of an impending demo with Uber CEO Dara and Meyhofer stood to gain tens of millions of dollars from the demo. It’s not just homicide but Uber also defrauded their safety drivers.


Eerily similar to how the Challenger tragedy happened


Deactivating this optional feature is somehow worse than buying a car without such a feature in the first place? The latter is neither illegal nor immoral.

The safety driver had an actual job to do, he wasn't there "instead" of automatic emergency braking - which is not certified for driverless operation btw. But he was distracted with a phone instead of looking at the road.

The halo effect here is unreal.


The American legal system places much more emphasis on acts you may have committed than omissions, and tends to avoid compelling action.

So yes, in an American court, disabling a proven safety feature is significantly worse than simply purchasing a vehicle without the feature.

The safety driver failed at their job, but the NTSB clearly lays significant blame for that failure on Uber, who should know well that humans are poorly suited to monitoring automated systems, and committed acts and omissions that increased the likelihood of an accident.


This brings to mind the classic Trolley Problem:

https://en.wikipedia.org/wiki/Trolley_problem

The scenario is notably different, but it does dig into the issues around acts vs omissions and how we perceive them.


She.


> If the driver committed homicide, it sure sounds like Uber is also guilty of homicide.

No; an automatic emergency braking system is not required by law. A capable, attentive driver on the other hand is. Working brakes are as well, but it's eventually up to the driver to engage them.


I think an interesting variation on this is that even if a safety system is not required by law, but is available, then disabling it could constitute criminal negligence. Consider what happens if safety equipment on industrial machinery is disabled and injuries result. I'm fairly sure that criminal charges could result for whoever disabled the safety mechanisms (though there are likely differences between workplace safety criminal law and road safety).


I mean this makes sense. She was watching a video on her phone while driving. It was her literal job to know that the car might make mistakes and correct for them, so she should have known that she still has to pay attention as though she were actually driving.


It makes sense because holding users personally responsible for the inadequacies of self-driving products externalizes all the risk for companies selling self-driving systems. The Uber system detected something in the road and proceeded because it did not recognize it. That's how it is designed. Otherwise the car would never go very far without stopping because the system does not recognize most things it detects.

To put it another way, the self-driving system did not alert the driver that it had detected something and did not know what it was. It wasn't an emergency, it was the car's normal operation.


The system was, by its very nature, not ready for production. That's why it had a safety driver in the first place. It's crazy to argue that the safety driver should have been alerted...the whole point they're there is to handle failures of the system, including ones where the system fails to detect an issue.

If my skydive instructor doesn't deploy the backup parachute because I, the student, didn't alert them that the primary chute failed to deploy, it's entirely their fault if we hit the ground at terminal velocity.

If a lifeguard is working at a public pool watching Netflix on their phone and a kid drowns, you can't argue that the kid should have splashed more.


Production? It wasn't even ready for testing on the unsuspecting public.


I don't disagree. But that's a separate issue. The safety driver provably did not even make a good faith attempt to perform their function. It's not possible to know whether they would have been able to avoid the tragedy that occurred, but it's a certainty that in any universe where they were watching TV on their phone, they would not have improved the outcome.


Of course it was an "inadequate system" if you consider it full self-driving. It is inadequate in the simple fact that it was not complete. It was under development.

But how do you suppose we create FSD cars if we can't try them out before they are ready? There is just no other way to do it than vigilant drivers who watch what these cars do.

> To put it another way, the self-driving system did not alert the driver that it had detected something

Well of course not, who would expect that? If the car could positively identify the collision before it happened, it would have simply stopped, no need for a driver at all. The driver is there to prevent exactly this kind of accident, and this driver failed by getting out her phone and distracting herself with streaming videos instead of doing her one job. Plain and simple.


She wasn’t a “user”


This was a training mission. Uber wasn't offering rides to passengers who weren't employees.


As mentioned by others, I think it depends strongly on what the driver's training was. Between the overblown hype about the capabilities of self-driving cars, the state government's abdication of its regulation power, and the intentional disabling of existing safety devices on the vehicle (it's unclear if this change or the risks of the change were communicated to the driver), I would be honestly surprised if the driver believed it was actually that dangerous to allow her attention to drift.

It's one thing to charge negligent homicide with a typical car. But the near-fraudulent claims of self-driving car hucksters at the time had a lot of people believing these vehicles were already far more capable than they will be for decades to come. And it's inevitable that the Koolaid drinking in this particular program within Uber was at its strongest. So I doubt the safety drivers were adequately informed of the actual capabilities of the cars or the risks involved.


I agree. This is where torts get fun in first-year law school. If Uber's training wasn't perfectly clear about the importance of the job of monitoring outside the vehicle, then part of the liability shifts to them.

Consider two training courses:

* instructors who are gung-ho on automation being "nearly there" and encouraging people to relax in the car and let the software do its work!

* instructors who are constantly impressing upon students that they need to be vigilant.

One can see how one party is 90% liable. And the other is 10% liable, etc.

The interesting part of this case comes down to the training. In that if it was lacking, it then makes it very clear to future companies that their training needs to be more rigorous.


Thank you for this comment and the insight. I have a follow up question: even after training, is there a liability burden to ensure the training is followed? I know no hiring process is perfect, so you're always going to end up with the occasional employee that disregards safety training. Knowing that, is there a burden on Uber to actively monitor the driver's attention and communicate to the driver when it is not sufficient? Even on an audit basis, but more ideally constant?


Those are great questions. I think it comes down to negligence and whether the employer makes a good faith attempt at training and monitoring throughout the process. Sometimes that's difficult in new fields, and then it seems like the burden falls less on the employee (the company is taking on this risk, given the chance for new rewards).


It's literally in the name of the job title: "Car safety driver"


She was not brand new on the job, right? She knew how often intervention was required and chose to watch TV instead.


The fundamental attribution error suggests that the job she was asked to perform may not make any sense.


The vehicle is designed to ultimately be a level 4 system. She effectively has the job of a test pilot of an experimental airplane. The job requires careful attention and is not easy.

There would be a different blame calculus if this were a production level 2 system like autopilot. In that case, it's not a paid test pilot, it would be the paid purchaser of a certified aircraft.


And yet she was probably paid like the test pilot for a beta web application. A test pilot for experimental aircraft doesn't even bring their phone in the vehicle with them.


Perhaps so, but there are a lot of jobs that have high responsibility but aren't very well paid. School bus driver, for example. If you're not willing to take the responsibility, don't take the job. I do think that Uber bears significant responsibility for their employee's actions as well though. They hired a convicted felon, and assuming this wasn't the first time she was watching movies on the job, didn't effectively monitor her, even though they had video cameras inside the car.


I was wondering that too, was this person making $14 an hour or $140k a year? That factors in IMO, at least in some small way


But even if she wasn't watching a video, as far as I remember, there was nothing that could have been done. It was an accident that would have happened if it was a manually operated car with an alert driver, no?


>>> It was an accident that would have happened if it was a manually operated car with an alert driver, no?

No, the last articles and discussions were quite clear that this accident wouldn't have happened with a real driver. The road conditions were good, the visibility was good. A driver would have seen the woman crossing the road well enough and slowed down.


From the footage I've seen (both the footage Uber released, which portrayed the road as much darker than it actually was, and from videos of people driving around in the same area after it happened) I can only conclude that an attentive driver would've been able to prevent the crash.

However, I don't think it's reasonable at all to expect someone to remain attentive while monitoring a self-driving car. This is the same problem train drivers face, which has been mitigated with all sorts of methods, not least of which is a dead-man switch. Some countries have their train drivers call out every signal they come across, with Japanese train drivers even pointing at signs to ensure they're paying attention.

This was a vehicle that had been modified to reduce certain safety features (because Uber couldn't get them to work properly) with someone at the wheel expected to be 100% focused on the road while giving them nothing to do at the same time. You can only go through so many hours of sitting in a car doing nothing before you go crazy.

From a revenge-seeking perspective it's easy to blame the one person who could've stopped the car for her obvious disregard for safety (streaming video on the job), and I suppose a criminal justice case might be in order. However, I think Uber should be mainly responsible for the loss of life, because their flawed design not only made the car less safe but also completely disregarded human psychology in how their human safety driver was expected to do their job. Even human-operated cars will beep and yell at you if you don't pay attention while you're driving in cruise control; if such safety features were omitted in the self-driving design, then clearly the driver was set up to take the fall when something bad happened.

I strongly believe Uber only put that woman in there because local law wouldn't let them test their car without a human at the wheel, not because they wanted to ensure their car didn't kill anyone.


> Japanese train drivers even pointing at signs to ensure they're paying attention.

I think pointing out things in the environment is a great idea for safety drivers in this kind of setting. It helps keep them engaged, possibly helps the system notice when they're distracted, and possibly provides additional useful training data.


From what I remember the person that was hit was fairly visible. The video uber released was very dark however, giving that impression.


Accidents do happen, but the circumstances in this case are important; she was not paying attention to the road when it happened. If the victim came out of nowhere and she could not act, the case would pan out differently.


depends what she was told by the company I guess... and if the situation was avoidable at all


It doesn't matter what she was told. She was in charge of the vehicle. She was the licensed operator. The presence of a broken ML autosteer doesn't absolve her of responsibility.

This is part of the general attitude that rights and freedoms don't matter if a machine violates them. When you walk out of a store post purchase and the security alarm goes off you have zero obligation to disclose what is now your property to the loss prevention experts. But somehow it's acceptable to assume you're a criminal because a machine said so.


>>> When you walk out of a store post purchase and the security alarm goes off you have zero obligation to disclose what is now your property to the loss prevention experts.

protip: You probably want to let them figure out what's beeping while you're still in the shop. It's not great to get home only to realize that they forgot to take off the anti-theft device on some of your beers.


> This is part of the general attitude that rights and freedoms don't matter if a machine violates them. When you walk out of a store post purchase and the security alarm goes off you have zero obligation to disclose what is now your property to the loss prevention experts. But somehow it's acceptable to assume you're a criminal because a machine said so.

IMO this is much more applicable to Uber's share of the responsibility for this incident.


Can you explain this further? How does that example equate to uber's responsibility?


Intent matters. The state has to prove criminal negligence, which in most states requires very risky behavior.

Driving while watching a video certainly qualifies normally, but purely hypothetically, if Uber had internally said it was totally safe to drive distracted, then maybe she has a defense.


> Intent matters.

Only to a crime whose required mental state is intent, which rules out crimes of negligence or recklessness (or strict liability, for that matter.)

> The state has to prove criminal negligence, which in most states requires very risky behavior.

“A motorist can be convicted of negligent homicide for causing the death of another person while driving in a criminally negligent manner. A person acts with criminal negligence by unknowingly doing or failing to do something that creates a substantial and unjustifiable risk to others. The risk must be of such nature and degree that the failure to perceive it constitutes a gross deviation from the standard of care that a reasonable person would use in like circumstances.”

https://www.drivinglaws.org/resources/arizonas-vehicular-hom...

I don't think that that's a hard fit for the publicly-reported facts of this case.

> Driving while watching a video certainly qualifies normally, but purely hypothetically, if Uber had internally said it was totally safe to drive distracted, then maybe she has a defense.

No, if “Uber” said that, then under simple respondeat superior for the actions of the driver alone as agent of the firm, there is an even greater case than there already is for Uber to be charged with negligent homicide, manslaughter, or even 2nd degree murder.

But I don't think any representation by Uber could reduce the standard of care owed by a safety driver to members of the public to something below “not watching a video on the phone while supervising the safety of the operation of a self-driving vehicle”.


I should have said "mens rea," but it's a legal term that roughly translates to intent.

>I don't think that that's a hard fit for the publicly-reported facts of this case.

Agreed, unless (again 100% hypothetically) Uber misled the driver about the capabilities. Even then it would still require that a reasonable person would believe a (hypothetical) incorrect statement about the car's capability.

>No, if “Uber” said that there is an even greater case than there already is under simple respondeat superior for the actions of the driver alone as agent of the firm for Uber to be charged with negligent homicide, manslaughter, or even 2nd degree murder.

Not sure respondeat superior applies to criminal law. In either case, I think operating a fleet while (again 1000% hypothetically) misleading the drivers would be itself the act of criminal negligence. So I don't think you'd even need to resort to master/agent liability.

>But I don't think any representation by Uber could reduce the standard of care owed by a safety driver to members of the public to something below “not watching a video on the phone while supervising the safety of the operation of a self-driving vehicle”.

Not my area of expertise and I haven't looked at any case law on this point, but I think believable statements made by Uber would be considered as part of the circumstances wrt the "standard of care that a reasonable person would use in like circumstances."

If a reasonable person would believe it was safe to watch the video while driving, it wouldn't qualify as criminal negligence.


The only reason why motor vehicles require licensed operation is because of their extraordinary capacity to kill people. That is why it is regarded as a privilege and not a right.

The reason why Uber had a human monitor, one required to have a valid motor vehicle license, is because of the potential lethal outcomes. You learn all of this when preparing to test for a license. It doesn't stop being one's responsibility to be a vigilant driver because it's convenient to do something else. Even considering the general stupidity levels of American drivers, ignorance of the law is no excuse.


> if Uber had internally said it was totally safe to drive distracted, then maybe she has a defense.

What Uber (hypothetically might have) said is completely irrelevant; she was the driver, she is responsible.


"licensed operator" [citation needed]

I don't even know what license beyond a driving license exists for this job. Clearly not every driving-license holder is fit to hold this position.


Everybody with a drivers license is supposedly qualified to operate a motor vehicle. That usually entails not killing random people.


That is, presumably, something that the grand jury considered.


maybe she had an attorney chosen by Uber?


She can choose her own attorney.


but did she... maybe Uber said: here, take this free attorney


Its unethical for a lawyer to do things not in their client's best interests, and even if Uber is paying for the lawyer, their client is who they are representing.


And _surely_ Uber would never do or encourage anything unethical.

Oh, right...


Ethics violations are a big deal for attorneys; if a complaint is sustained against them with the bar association, they can lose their ability to practice law.


In theory, yes. In practice an attorney has to be intelligent enough to recognize the conflict of interest, and many of them aren't.


mmm yeah with their expert witness that can make up stuff on the spot

let's just acknowledge inefficiencies in the system


I acknowledge inefficiencies in the system but I also suspect that grand juries usually consider the most basic relevant facts of the case.


Grand juries consider the local laws as worded and the instructions they are given by the state which can be plain wrong, it is inherently imbalanced and is a common area of review when looking at miscarriages of justice.

Also trending on HN today: https://news.ycombinator.com/item?id=24488350


So you’re positing that the expert witness committed perjury and that Uber secretly told the driver that it was okay for her not to pay attention to what the car was doing?


Miscarriage of justice possible, perjury not necessarily.

I don't even know if an expert witness was there, thats how grand juries work.

And that a corporation didn't tell the full truth and threw a "maybe employee" under the bus? Not really that much to "posit".

Ironically also trending on HN at time of writing: https://news.ycombinator.com/item?id=24488350


I thought it was very interesting that the police chief came out immediately blaming the pedestrian and saying the crash was unavoidable.

When I saw the dashcam footage, I just thought to myself, I probably would have avoided that, had I been driving.

I've watched my own dashcam footage before and people on the periphery come out less visible than in reality because the headlights blow out the video - the camera has less dynamic range than our eyes.


How many Americans have been killed by distracted drivers, though? How many of those were indicted for homicide? Usually what happens when you run over someone, no matter what the circumstances, is the cops exonerate you on live television, with a statement about how "no criminality is suspected" which is suuuuuuuper weird because there's no other kind of killing you can do and get that treatment.


How many have videos of both the victim being clearly hit and the driver being severely negligent leading up to and including the accident? How many are pedestrians that subsequently die?


Well, 20 pedestrians die on American roads every day. It's incredibly common. The cops usually don't even bother looking for evidence. This idiot live-streamed himself killing a pedestrian in San Francisco, after months of filling up his Instagram with motorized jackassery, and the cops only charged him with manslaughter, not negligent homicide.

https://hoodline.com/2020/08/driver-who-killed-pedestrian-at...


I did a cursory search and it seems that involuntary manslaughter and negligent homicide are the same thing in some jurisdictions, but in others (most?), manslaughter ranks lower in severity than negligent homicide. One reason I can think of that they would charge one and not the other is that it's easier to convict due to not having to prove criminal intent, especially when the possible punishment is similar.


Drivers getting away with killing pedestrians and cyclists is like the most common thing in the world.

As for "how many have videos": when Amelie Le Moullac was killed by a truck, the police didn't even check videos. There had to be a grassroots effort to go get the video.

You're kidding me.


To any curious readers like me: the charge is negligent homicide (the indictment itself is linked at bottom of article)


Most of the time there isn't a video of both the driver and the incident with detailed telemetry data. Also, commercial drivers are held to a higher standard.


That poor person is Uber's "moral crumple zone"

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236

Fitting, given Uber's reputation for questionable morality...


> After the crash, police obtained search warrants for Vasquez's cellphones as well as records from the video streaming services Netflix, YouTube, and Hulu. The investigation concluded that because the data showed she was streaming The Voice over Hulu at the time of the collision, and the driver-facing camera in the Volvo showed "her face appears to react and show a smirk or laugh at various points during the time she is looking down", Vasquez may have been distracted from her primary job of monitoring road and vehicle conditions.[0]

That poor person, whose job it was to watch the road, seems to have been watching Hulu instead. That's a pretty willful disregard for their job and for others' lives. I don't have much pity for them if this is the case.

I get your point about moral crumple zones and I suppose there's cases one could to point to demonstrate it, but this is definitely not a very good example of it.

0. https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg#Distr...


Agreed.

> "Had the vehicle operator been attentive, she would likely have had sufficient time to detect and react to the crossing pedestrian to avoid the crash or mitigate the impact," the November 2019 NTSB report stated.

This isn’t a case of the Uber performing some split-second error that needed immediate correction then blaming the driver when that didn’t happen, the way so many in this thread seem to want to make it out to be. This is a person whose job it was to watch the road watching Hulu instead and as a result failing to react to developing conditions that, if NTSB is to be believed (and I trust them a whole lot more than the randos commenting here), could have easily been prevented by an attentive driver.


Guy was using his phone not paying attention to the road when testing experimental equipment on a public road. He was there specifically to mitigate potential malfunctions of said experimental equipment, but he was reading reddit or whatever instead of keeping lookout.

I can't see how this is not clear cut negligence.


The vehicle was travelling the posted speed limit at night, as designed. The NHTSA pointed out that the law may require you to travel at a "safe" speed, and that simply choosing to always move at the posted limit is potentially a moving violation and a design flaw.

The vehicle was travelling 20mph faster than its own "safe clear ahead" zone allowed for. Simply ignoring its own safety limits and travelling at a higher rate of speed is a design flaw.

Designing a large public road autonomous vehicle test and not taking into account an easily predicted human vulnerability that's been known in other automation experiments for decades is a project design flaw.

Whether or not the driver is at some fault, I think the court will be capable of determining that. Arizona's willingness to completely disregard the mountains of negligence on the part of a giant corporation is disturbing.


Isn't it the driver's job to take control whenever there is a design flaw? I thought that was the exact job description.

If the car is going over its speed limit, the driver should react.

I see that some people think Uber should have added systems to control the safety driver. I think that's a fair point, but it doesn't take away the driver's responsibility. They knew the car isn't perfect. It's their job to take over control whenever that happens.


Their job, as designed, is impossible. Automation complacency is well-known and well-studied -- any system which requires no input for an hour and then randomly requires you to act within seconds is an impossible task for humans. Volvo's self-driving (hands-on-wheel required) accounts for that. Uber's did not.


Uber posted a safety driver specifically to make the car safe at regular speeds despite the limitations of the autonomous system. That driver just wasn't doing their job.


> Guy was using his phone not paying attention to the road when testing experimental equipment on a public road. He was there specifically to mitigate potential malfunctions of said experimental equipment, but he was reading reddit or whatever instead of keeping lookout.

> I can't see how this is not clear cut negligence.

1) The driver is a woman.

2) While I think the driver does bear some fault here, they don't bear all the fault. Uber designed an unsafe system that relied on an unnatural amount of vigilance from a single person while simultaneously discouraging that vigilance [1]. They didn't design the car to shut down when one of its critical safety components (the driver) was not operating correctly, and they didn't even give that driver amphetamines or something to increase their vigilance to artificial levels.

[1] Basically: pay close attention to a boring process while doing absolutely nothing for hours on end. I'm pretty sure that's a classic "humans suck at this" task.


That "unnatural amount of vigilance" is just sitting in a comfy seat and looking at the road ahead of you go by. There is nothing unnatural about that. Lots of people do similar or more boring tasks just fine.

This isn't a truck driver falling asleep from exhaustion caused by aggressive scheduling. You don't "accidentally" take out your phone when your job is looking ahead. That's deliberate negligence.

If you want to require eye sensors to detect distraction, by all means, pass a law about it. Maybe include regular non-AI cars too. #1 cause of accidents over here.


> That "unnatural amount of vigilance" is just sitting in a comfy seat and looking at the road ahead of you go by. There is nothing unnatural about that.

Statements like this seem to indicate a failure to understand what the difficult part of the task was. In short: a boring monitoring task that was practically unnecessary 99.99000% of the time, but absolutely critical maybe 0.00001% of the time. The typical-case lack of necessity would strongly reinforce engaging with distractions over time (i.e. distraction more rewarding, and nearly all of the time no negative feedback of any kind).

> This isn't a truck driver falling asleep from exhaustion caused by aggressive scheduling. You don't "accidentally" take out your phone when your job is looking ahead. That's deliberate negligence.

I'm not saying there was no negligence here on the part of the driver, just that Uber itself was at least as negligent if not more so for designing the system in the way it did. Focusing too much on the driver is an error.

Honestly, watching a show was egregious, but I wouldn't be surprised if the drivers would end up falling asleep on a regular basis if all distractions were removed from their environment. Half paying attention to something I wasn't interested in has always been the best way to get me to fall asleep.


> That "unnatural amount of vigilance" is just sitting in a comfy seat and looking at the road ahead of you go by.

Doing that for five minutes is easy.

Can you do that for eight hours? Day after day? Without a single lapse in attention?

This is a much harder job than the truck driver has, because he constantly has to make microadjustments to correct for road conditions.

People's brains don't work the way you think they do.


The driver wouldn't be prosecuted if she just had a lapse of attention. She had taken her phone and was watching a TV show. That's a deliberate act.


I'm not saying the driver wasn't negligent in watching TV, instead of the road. In fact, I don't think anyone in this thread is saying that.

I am saying that passively watching the road for hours on end is much harder than actively driving.


Many people repeat this notion in this thread. Do you have any data for this, BTW? How can you claim this so confidently? Have you driven a self-driving car? Many people (myself included) find it quite easy to monitor the road in one for long periods of time. If the driver was not one of them, she shouldn't have taken the job and risked innocent lives.


https://www.mitre.org/sites/default/files/pdf/pr-3426-lesson... is a 2019 literature review of the phenomenon - section 2, reaction times in response to deviations in automated tasks, is of particular relevance here. (Summary: humans are bad at this)


How are you so confident that you are different from all the Tesla owners whose self-crashing autopilots drove their vehicles into stationary objects, fire trucks, semis, etc?

What makes you confident that you are actually good at it, and are not the victim of Dunning-Kruger? Do you regularly find yourself in the process of stopping your self-driving car from crashing into things?

Or has your car simply not crashed yet?


> all the Tesla owners whose self-crashing autopilots drove their vehicles into stationary objects

All three of them? Among the millions of Tesla vehicles out there and the hundreds of millions of miles driven? Is that even a considerable risk when compared to the general (non-zero!) risks of driving?

For me personally I simply know when I watch the road and when I don't. For some people this might be hard, for others - not so much. I am aware of when I pull out my phone or distract myself and when not.


I'm sure you're aware of when you've pulled out your phone, or are playing with your infotainment system. (You should also stop doing it, it's negligent and illegal.)

But are you just watching or are you seeing the road? How many times have you taken control from your car doing something stupid and dangerous?

Unless the answer to that second question is 'I do it all[1] the time, and I'm batting 20/20', what makes you confident that you'll catch the next instance?

[1] If that's really the case, you should probably short TSLA, it doesn't sound like their car can safely operate.


It is rather unlikely, statistically, that this was a single lapse. Or that a single lapse would cause an accident. It's not like the driver's finger was hovering over a nuclear launch button.

Yes, I think I would be able to do this job safely. Not everyone has problems with focus to such an extent that you can't help but be on the phone when you need to be paying attention.


I agree with you except for

> and they didn't even give that driver amphetamines

Yo, what? IIRC, most *amphetamines are either illegal in the US or like FDA Class 1 substances or something like that, and you just want to throw them at people operating > 1 ton machinery? That seems like it could have an even higher risk for danger.


> Yo, what? IIRC, most *amphetamines are either illegal in the US or like FDA Class 1 substances or something like that, and you just want to throw them at people operating > 1 ton machinery? That seems like it could have an even higher risk for danger.

That's true, and my comment was mostly sarcastic (but a little serious). Amphetamines are given to military pilots to increase alertness [1]. They're also used to treat attention deficit disorder (stimulants also increase focus in healthy people, see coffee). Some years ago I read that they're also popular in some parts of Asia not for recreation, but to help workers focus on boring, repetitive tasks (and they apparently can even make that kind of work seem "fun"), but I can't find the exact article again.

The kind of job these "car monitors" are expected to do is so unnatural that it's ludicrous that someone could be expected to succeed at it perfectly, unassisted.

[1] https://aviation.stackexchange.com/questions/8885/do-militar...


Modern cars are essentially self-driving on the highway, and it doesn't require an unnatural level of vigilance to operate them. To be fair, driver-assistance features usually come with an attention monitor, so Uber was negligent in that aspect.

Also, no one said the driver has to do nothing. They can safely listen to audiobooks or talk on the phone.


talking on the phone can be nearly as distracting as watching TV in some cases. it's also illegal in a lot of places...


How is the required amount of vigilance any different than that of normal driving?


In normal driving, the driver is continually taking actions. Making small adjustments, considering hypotheticals, checking mirrors and speed. If the user is continually engaging with the system, there's less of a risk of the user's attention lapsing. Once the user's input is no longer required, humans are very bad at paying attention to something that doesn't change, and very bad at reacting to the change when it does happen.


I fail to see what stops one of these car monitors from “considering hypotheticals, checking mirrors and speed.”

I get that it’s boring, but I fail to see how that’s an excuse for not paying attention to (or for watching streaming video instead of) a dynamic system you’re expected to actively engage with.

Many train systems are largely autonomous and far more static and yet we still expect train operators to not kill someone on the tracks.

How do you feel about lifeguards as a job? Failure of pools and beaches to design a good system?


Please see the top comment for how lifeguard systems avoid these issues - 30-minute short shifts, overlapping responsibilities, and monitored attention.

Train operators are not required to react within seconds. A train cannot stop in five seconds, no matter how quick the operator is. People who are stopped on train tracks get killed, and the train operator is very rarely to blame.


In what world is asking someone not to pick up their phone and start watching videos, and to at least look at the road outside, an "unnatural amount of vigilance"?

It's the basic job description. If she couldn't do it, she shouldn't have taken the job.


This is the first of many such cases to come.

It's unreasonable to hold someone accountable for a "self-driving car" that suddenly decides at a split-second's notice that it can't cope with driving.

Of course this is extra bad because it's an experimental car, but it's the same in my opinion with those Teslas on the road now that do the same thing.


> It's unreasonable to hold someone accountable for a "self-driving car"

It is not unreasonable at all. She had one job to do - look at the road. She failed it because she felt that her entertainment was more important than doing her job. She picked up her phone and started streaming videos. She failed at her one job, plain and simple. She knew everything about the job and still chose to watch some videos and risk lives.


I said it was slightly worse in this case and sibling commenters have addressed this issue.

The experimental car shouldn't have been on the road at all if the only thing separating it from killing people is someone who is expected to maintain concentration for hours/days while simultaneously not actually doing anything.


How do you expect humanity to create a self-driving car, then? Somehow magically develop it from 0% to 100% in a lab, and only then let it out on the street? That is never going to work; it needs testing on public roads. What is your plan, then? How do you propose FSD cars should be developed?


For starters, Uber could have just not disabled the existing failsafe mechanisms that the car had.

Then, if we start with the expectation that a human in the loop is necessary for live testing, we could have two. It reduces the impact of independent failures. It also adds some social pressure to avoid negligence. It also provides overlapping coverage, as a lifeguard in this thread has pointed out.

Humans are fallible. Accept that, and design systems to be safe despite individual errors.
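The two-monitors point is just independent-failure arithmetic. A back-of-the-envelope sketch with invented numbers (`p_lapse` is an illustrative assumption, not a measured rate, and it ignores that two bored people may not lapse independently):

```python
# Purely illustrative guess at how often a single safety driver
# is inattentive at any given moment -- not real data.
p_lapse = 0.10

# One driver: the car is effectively unmonitored whenever they lapse.
one_driver_unmonitored = p_lapse

# Two drivers lapsing independently: both must be inattentive at once.
two_drivers_unmonitored = p_lapse ** 2

print(f"one driver:  unmonitored {one_driver_unmonitored:.0%} of the time")
print(f"two drivers: unmonitored {two_drivers_unmonitored:.0%} of the time")
```

Under those assumptions the second person cuts the unmonitored fraction from 10% to 1%, a tenfold reduction, which is the usual argument for redundant human oversight.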

Driving a car is not the same as monitoring a self-driving car. There will be differences in attentiveness. Stop equating them.


Let's not act like it's a foregone conclusion that FSD cars will exist on existing public roads.

In my opinion, safe FSD cars are not possible with current technology on existing public roads - so either the technology has to improve by orders of magnitude, or the roads need to be modified significantly.

The gung-ho experimentation that is going on in public is in my opinion very dangerous and should be stopped, and this case is a perfect example of why.

Both the road infrastructure and other drivers are too crappy and unpredictable for FSD cars to be viable and safe.

Also this business of holding someone who is not actually controlling the vehicle accountable is ridiculous, and is surely something which will be proven in court eventually.


What I find puzzling is that I keep hearing people say that it's impossible for people to stay alert while not being the driver, when there's actually a pretty popular term to describe that very thing: a backseat driver[1].

If you go on a long multi-day road trip, you might even end up relying on a passenger's feedback in a moment of tiredness.

[1] https://en.wikipedia.org/wiki/Back-seat_driver


Did you actually read that page? A backseat driver doesn't trust the actual driver.

>A backseat driver may be uncomfortable with the skills of the driver, feel out of control since they are not driving the vehicle

Are you saying that the meatbag in a "self-driving car" is equivalent to a backseat driver?

Because if you are, then "self-driving cars" will never be viable because the humans inside don't actually trust them.


Trust is beside the point. The actual point is that they are able to pay attention to the road, even for prolonged amounts of time, and react to perceived danger, despite not being the ones who are actually in control of the car.

The claim I'm disputing is the one that says people are somehow incapable of paying attention to the road for extended periods of time unless they are in control of the vehicle.

I might agree that a reaction might be jerky and panicky, but then again, it would by definition be so regardless, due to the unexpected nature of accidents.


So other road users' safety is reliant on jerky and panicky responses to perceived danger as well as real emergencies.

That sounds like a nice safe system.

For the record, I don't agree. I feel that the average person who would choose to use a self-driving will tend to be overly relaxed and trusting, not what I'd call a backseat driver.

The kind of person who is a backseat driver wouldn't trust the car's driving and would prefer to control it themselves, and thus wouldn't be using the self-driving feature in the first place.


I give you a weight and ask you to hold it in front of you with your arm outstretched for 12 hours straight. If you drop your arm, a random person is shot. If you drop the weight, is the death your fault, or mine for putting you in a situation that is impossible for a human?

Humans are very bad at paying attention for a rare event and having nothing else to do 99.999% of the rest of the time. This is surely well known to the people who set up this system in the self driving car.


Looking at the road when sitting in the driver's seat without being absorbed by your phone is not, in any way, a hard task. Truck drivers, train drivers, pilots and other equipment operators exist. Millions of people do it every day. Those that don't, and end up killing someone, are held responsible if found out.

Don't take a boring job if you can't handle it. The required basic level of continued attention is not a superhuman skill. It's not a job for everyone but it's not exceptional at all.


> Looking at the road when sitting in the drivers seat without being absorbed by your phone is not, in any way, a hard task. Truck drivers, train drivers, pilots and other equipment operators exist. Millions of people do it every day. Those that don't, and end up killing someone are held responsible if found out.

Driving a truck is in no way comparable to what this lady was tasked with. A truck driver is performing an active task (operating the truck); this lady was supposed to be performing a passive task (monitoring the car's driving). Active and passive tasks are very different beasts.

But you're on to something with your comparison to pilots. Planes are mostly automated. When that automation fails, the pilots often lack the situational awareness to avoid a crash when they have to take over (for an example, see https://99percentinvisible.org/episode/children-of-the-magen...).


I don't think this distinction is morally meaningful. Do passive monitoring tasks require more discipline? Probably. Are you unable to perform them? Get another job.

How many times do you think this driver was distracted with the phone when doing their job? I bet it wasn't just this one time that they happened to crash.

Pilots crash after hours of boredom because debugging problems in the air is hard, not because they were negligent leading up to the problem (vast majority of the time, anyway). And still by and large they succeed, it just goes unreported in the news because business-as-usual. This driver on the other hand got into an unrecoverable situation because of their own behaviour.


Except they're not driving; they're just watching as the car drives itself. Driving implies you are performing actions in response to the environment. These people do not; they simply sit and watch.


Not even remotely the same thing.

Dude was paid to pay attention. He didn't. It wasn't because he got fatigued, it was because he was deliberately doing something else, by choice


Yes, but no. The point that was trying to be made was that he was hired for a task that's fundamentally extremely difficult for humans. You can pay a person all you want to stare at paint drying so they can note down the exact moment it's perfectly dry, but don't be surprised when they do a poor job.

The guy screwed up. Uber knew he almost certainly would. I won't say the guy isn't responsible at all, but pretending this is purely on him and Uber bears 0% of the responsibility is just silly. Humans are not capable of anything and everything simply because they are provided pay.


My point was that what the driver did was not negligence, it was deliberate inattention.

If she had been negligent, i.e., too tired to notice the pedestrian, that would have been a different matter. If she had been paying attention most of the time but inadvertently reached over to drink some coffee, that would have been a different matter.

But she wasn't too tired, and she wasn't momentarily reaching for a drink; she was simply choosing not to pay attention to her job so she could do something else instead.


You're still missing what I'm trying to say, I think. I'm not talking about fatigue, or brief inattention, or anything like that.

I'm saying that the inattention was basically a given. Yeah, at some point the driver went and grabbed a phone. It could just as easily have been fiddling with junk in the car, or with their fingers, or daydreaming to the extent that they were unaware of the world around them.

Yeah, they grabbed a phone. Is that really any different from any of the above? I don't think so, particularly considering how ingrained the reflex to grab and fiddle with a phone when bored is. I wouldn't be surprised if they never consciously decided to pick the thing up.

The task was simply unreasonable. Uber should have foreseen this problem and implemented systems to aid and monitor their drivers.


Although I agree Uber bears most of the responsibility, the analogy was ridiculous: holding a weight at arm's length for 12 hours isn't just fundamentally extremely difficult, it's basically impossible.

Aside from this, if the car was driving faster than was suitable for the conditions at the time, the fallback driver had plenty of time to note that and take over. In that case it was more like: huh, the paint is drying but it looks like it's going to rain ... a long time looking like it's going to rain ... oh no, should have put the covers on!

But I'm betting the need to take over and drive was not properly conveyed to the fallback driver either.


When people are bored they get distracted and do other things. This is well known human nature and very few people are suited to this sort of task. The people who set up the self-driving car should have known better than the uneducated poor person who is being sacrificed.


She could always just take a break if she felt her attention was lacking. Or you know, not take the job in the first place if this job was so hard for her personally.

It is not at all fair to make the comparison you are making.


The poor guy's a lady. Other than that, agreed.

Edit: Post fixed now, disregard. :)


Sorry! I should know better than to assume gender these days. Fixed.

