
A Study on Driverless-Car Ethics
https://www.newyorker.com/science/elements/a-study-on-driverless-car-ethics-offers-a-troubling-look-into-our-values
======
b_tterc_p
“ if a car detects a sudden obstacle—say, a jackknifed truck—should it hit the
truck and kill its own driver, or should it swerve onto a crowded sidewalk and
kill pedestrians? A human driver might react randomly (if she has time to
react at all), but the response of an autonomous vehicle would have to be
programmed ahead of time. ”

I hate this logic. No, it doesn't need to be programmed ahead of time. It
should try to avoid crashing. It shouldn't try to solve a philosophical
problem. We don't want to require our cars to observe their surroundings and
count the number of things identified as humans in order to minimize expected
loss of life. That's difficult and error-prone.

If you're going down a street and something jumps in your way, the program
should try not to hit anything. Ideally by braking, because you shouldn't be
going so fast that you can't stop, given your environment.
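To make the point concrete, here is a hypothetical sketch of that policy in Python. Everything here (the function names, the 7 m/s² deceleration figure for hard braking on dry pavement, the `clear_lane_available` flag) is my own illustrative assumption, not anything from the article: no person-classification, no expected-loss arithmetic, just brake first and steer only into verified-empty space.

```python
# Hypothetical "just avoid the crash" policy: brake hard first, swerve only
# into space already verified to be empty, never weigh one life against another.

def stopping_distance_m(speed_mps: float, decel_mps2: float = 7.0) -> float:
    """Distance needed to brake to a stop (~7 m/s^2 is hard braking on dry road)."""
    return speed_mps ** 2 / (2 * decel_mps2)

def avoidance_action(obstacle_distance_m: float, speed_mps: float,
                     clear_lane_available: bool) -> str:
    if obstacle_distance_m > stopping_distance_m(speed_mps):
        return "brake"                      # braking alone avoids the obstacle
    if clear_lane_available:
        return "brake_and_steer_to_clear"   # swerve only into verified-empty space
    return "brake"                          # no good option: shed as much speed as possible

# At 20 m/s (~72 km/h) an obstacle 40 m ahead is outside the ~28.6 m stopping
# distance, so plain braking suffices:
print(avoidance_action(40.0, 20.0, clear_lane_available=False))  # "brake"
```

The point of the sketch: speed limits chosen so the stopping distance fits the sight lines make the "choice" branch nearly unreachable.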

~~~
cortic
Should a car, generally, prioritize the survival of its occupant or of
pedestrians? Not answering this is the same as answering it with ' _surprise
me_ '. And either answer is wrong.

~~~
mikeash
You could ask the same question about human drivers, yet zero attention is
paid to this question, and zero time is spent on it during driver training.

~~~
apocalypstyx
Is that perhaps why the question is so nettlesome? It reveals something we've
been living with all along and would've preferred to continue ignoring.

~~~
mikeash
I really don't think so. It's ignored because it essentially never happens.
The odds that you'll ever be presented with such a scenario are so low that
it's pointless to spend any time on it. The odds that you'll be presented with
such a scenario _and_ that the obvious best answer is something other than
"maximum effort braking" are even lower.

I think it gets traction with self-driving cars because people, stupidly,
expect computers to make perfect decisions 100% of the time. This idea runs
into a logical inconsistency when presented with a scenario in which there is
no perfect decision. Rather than confront the fact that it's unreasonable to
expect perfect decisions 100% of the time, people try to come up with a way to
declare a perfect decision in a no-win scenario.

~~~
tzs
> I think it gets traction with self-driving cars because people, stupidly,
> expect computers to make perfect decisions 100% of the time

I don't think it's that people expect perfect decisions.

You are right that it essentially never happens with humans. But _why_ doesn't
it happen with humans? I think it is because if we are driving down a highway
at high enough speed for this issue to arise, we probably aren't going to be
aware of what is in our swerve path, and even if we are we probably don't have
the time to recognize that we have to make a choice, nor the processing power
to make such a choice.

Hence, there isn't really any need to consider this issue with human drivers
because humans _cannot_ make a decision in such a situation.

Self-driving cars, on the other hand, should have the sensors and the attention and
the processing power to take into account everything to the side of the road
in addition to what is on the road. With them, unlike with human drivers, it
is actually possible for them to make a decision in these situations.

I think it is getting traction simply because with self-driving cars, unlike
with human driven cars, it is actually a meaningful question. It's meaningful
even if you assume that computers don't make perfect decisions--they at least
have the time and data to make a decision, unlike humans.

~~~
mikeash
This sort of thing can happen to humans in a way where they have time to make
a choice. A stuck throttle with brakes that can’t overcome it could get you
into a situation like that, for example. You’re right that computers would be
able to make that kind of decision in a much wider range of scenarios.
However, computers will also be much less likely to get into those scenarios
in the first place. I’m not at all convinced that the net result is computers
encountering situations where they can make a choice more often than humans
do. I think both will be so rare that any effort spent on them would be better
spent on avoiding crashes altogether.

------
csours
I think philosophy and culture are important, but this meme is very silly.

Personifying self-driving systems is misleading and does a great disservice. A
self-driving car detecting a sudden obstacle will not have time to classify
anything to the extent that these stories have in mind, and certainly not to a
greater extent than humans could (leaving aside the fact that humans also would
not have time to classify the items at a "sudden" obstacle).

---

That being said, there are very real ethical questions in the self driving
world:

1. Just because you can commute farther, should you?

2. Should more expensive cars be able to disrupt traffic for individual
benefit, or should all self-driving systems follow a consistent set of rules
of the road to maximize throughput?

2.1. Should I be able to tweak my car to be a little more dangerous, but also
quicker?

3. Should carpooling be mandated? Encouraged through tax breaks? Subsidies?

4. Should self-driving cars be allowed to cut through neighborhoods or be
required to follow major roads around them? What about parking lots?

I'm sure there are other really interesting questions as well, but the trolley
problem is not one of them.

~~~
warent
Well in the case of point 2, when you have a lot of vehicles disrupting
traffic for individual benefit, what you end up with is traffic as we have it
today--horribly inefficient and slow for everyone. When everyone follows a
consistent set of rules to maximize throughput, that's what _actually_ leads
to the maximum individual benefit.

------
vishvananda
Is it just me or does solving the moral dilemma of prioritization based on
attributes of the person seem like a worthless endeavor? In practice, I can't
come up with a case where it would matter.

For example, a more reasonable metric for a machine to use is the probability
of injury/death. If swerving is 90% likely to kill a person and staying
straight is only 89% likely, then staying straight is a better choice. I don't
see how attributes of the person would ever trump the probability of harm. The
cases where probability is roughly equal for multiple actions will be
incredibly rare.
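As a hypothetical sketch of the metric described above (the function name and the probability figures are my own illustrative assumptions, not anything from the thread), ranking candidate maneuvers purely by estimated probability of harm needs no attributes of the people involved at all:

```python
# Hypothetical decision rule: pick the maneuver with the lowest estimated
# probability of injury/death. No attribute of any person enters the decision.

def choose_maneuver(harm_estimates):
    """harm_estimates maps maneuver name -> estimated probability of harm."""
    return min(harm_estimates, key=harm_estimates.get)

# The 90%-vs-89% example from above: staying straight wins on probability alone.
print(choose_maneuver({"swerve": 0.90, "stay_straight": 0.89}))  # "stay_straight"
```

Under this rule the person-attribute question only arises on an exact tie in the estimates, which is the "incredibly rare" case.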

~~~
ordu
An 89% chance to kill a homeless person, or a 90% chance to kill a pregnant
woman?

 _> I don't see how attributes of the person would ever trump the probability
of harm._

Maybe they shouldn't; maybe they should. How do we know? What are the terminal
values? Societal effectiveness? The equality of all humans? Or minimizing
public outcry when an autonomous car kills someone?

 _> The cases where probability is roughly equal for multiple actions will be
incredibly rare._

With millions of autonomous cars on streets rare outcomes would happen all the
time.

~~~
brokenmachine
_> An 89% chance to kill a homeless person, or a 90% chance to kill a pregnant
woman?_

Just fat, or pregnant? We need to know! Get onto it, engineers!

The car's algorithm for deciding this question is a matter of life or death!

/s of course, because I think this whole philosophical argument about self-
driving cars is ridiculous when our current tech can't even reliably determine
if an object is a stationary barrier or not.

------
Pulcinella
These discussions always bring up the Trolley Problem which I find to be
pretty insidious. The trolley problem is designed to hide the real moral
issue. Here is the original Trolley Problem for comparison.

“Suppose that a judge or magistrate is faced with rioters demanding that a
culprit be found for a certain crime and threatening otherwise to take their
own bloody revenge on a particular section of the community. The real culprit
being unknown, the judge sees himself as able to prevent the bloodshed only by
framing some innocent person and having him executed. Beside this example is
placed another in which a pilot whose airplane is about to crash is deciding
whether to steer from a more to a less inhabited area. To make the parallel as
close as possible it may rather be supposed that he is the driver of a runaway
tram which he can only steer from one narrow track on to another; five men are
working on one track and one man on the other; anyone on the track he enters
is bound to be killed. In the case of the riots the mob have five hostages, so
that in both examples the exchange is supposed to be one man's life for the
lives of five.”

The whole thing starts with a pretty elitist framework, that the best way to
run society is to lie to the public because they can’t be trusted. The problem
then ends with the infamous trolley problem which is so stripped down of any
context it’s basically a dark pattern. You are expected to choose who is to
die instead of wondering why people keep getting run over by trolleys. Why
does anyone have to die? Why can’t the track workers have safe working
conditions?

------
gowld
Does this have anything to do with driverless cars? Driverless cars will not
be programmed to target victims by race, and driverless cars' main benefit is
reaction time to take simple safe actions like stopping the vehicle or turning
into an open space.

The car systems won't have elaborate neural nets for deciding when to jump
over a pylon onto a train track.

~~~
minikites
>Driverless cars will not be programmed to target victims by race

Not on purpose, but "machine learning" will have the same outcome:

[https://theoutline.com/post/7022/ai-trolley-problem-ethics](https://theoutline.com/post/7022/ai-trolley-problem-ethics)

[https://twitter.com/wef/status/1058675216027660288](https://twitter.com/wef/status/1058675216027660288)

------
travisoneill1
These problems are always brought as a drawback to driverless cars but are
completely irrelevant. How many "trolley" situations are there compared to the
thousands of times where the choice is:

A. Human error causes cars to collide and kills people

B. The cars don't collide

------
hnaccy
Can the car communicate over local network and check the pedestrians' credit
scores?

------
masonic
A concern of mine is that in a situation where an autonomous vehicle is "at
fault", its owner has a strong incentive to have it _flee the scene_.

If "caught", the liability exposure is the same, so there's no downside. You
can't incarcerate an algorithm or even take its license.

------
minikites
[https://idlewords.com/talks/sase_panel.htm](https://idlewords.com/talks/sase_panel.htm)

>Machine learning is like money laundering for bias. It's a clean,
mathematical apparatus that gives the status quo the aura of logical
inevitability.

~~~
gowld
It used to be that Ivy League colleges laundered privilege into elite degrees
for people who would become consultants who gave an aura of legitimacy to
biased decisions. Maybe the robots will put them out of a job.

~~~
b_tterc_p
Consultants are the ones legitimizing the robots

------
xiphias2
The real ethical question is not asked: should a driverless car be allowed on
the street if it waits 10 minutes to make a left turn while causing a traffic
jam?

Real autonomous cars are already causing real problems. I believe it's Waymo's
job to take over left turns when the waiting line gets too long.

------
hughes
The Trolley Problem is the most boring part of the ethics of autonomous
driving.

Would you tell a human driver to stay off the roads until they decide in
advance what they'd do in such a ludicrous situation?

~~~
dingaling
The difference with a human driver is that there is no point asking them to
decide ahead of time. Except in the most highly trained soldiers, what a
person will do in the face of an immediate fatal threat can't be
preprogrammed.

------
gnicholas
Ah, the perfect way for Tesla to segment their different product lines — make
the expensive ones more “selfish” in their accident mitigation algorithms.

------
trainingaccount
Hope they will be taking minutes in all the meetings where the business
requirements are collected.

------
jijji
Ahh, a pseudo-science article from The New Yorker. A Dilbert comic strip
would give more insightful information.

~~~
taneq
Well, he's recently started talking about self-driving cars, so it's on the
cards:
[https://dilbert.com/strip/2019-01-25](https://dilbert.com/strip/2019-01-25)

