There is no good reason for self-driving cars. This is not a problem that anyone has been clamoring to solve. Human-caused death by driving is a small problem. Quoting statistics does not make it a big problem, and the people who quote those statistics never compare them against statistics on how much safer robot drivers actually are, because no reliable statistics exist.
If you think that deaths and injuries are intrinsically bad, then look at the stats for injuries while skateboarding (https://skateboardsafety.org/injury-statistics/). Surely you think something must be done about that, right? But we don't care, because the FACT of injury is not the important thing. The important thing is fairness; reasonableness; agency. People who skateboard or do other dangerous things are choosing to risk themselves, and we think that's okay.
Human-caused accidents are considered by society (that's you and me) to be acceptable because we have agreed to the bargain. This is not so with robotic drivers. I have NOT CONSENTED to share the road with robot drivers, because they are not bound by the same social bargain-- their real drivers are programmers, who risk nothing. Uber was not charged at all in the killing of that pedestrian in Arizona. Only the human safety driver was-- the one person who had rashly bought into the supposed safety of self-driving cars.
I don't want to ride in a box where no one takes responsibility in case of disaster. EVEN IF there are fewer accidents. This is the key point: AI stands for "Automated Irresponsibility" or "Agency Interrupted." Irresponsibility, and the loss of agency, is the bigger problem.
If you truly, madly believe that mere numbers are the crux of morality, then consider this. If you save 10 people from drowning in your boat, is it okay for you to pull out a gun and shoot one of them in the head before returning to shore? You will still have saved 9 people. You're a hero, right? Obviously, this is not okay-- because HOW one dies matters. Responsibility and agency matter.