
The Trolley Problem Problem - Hooke
https://aeon.co/essays/what-is-the-problem-with-ethical-trolley-problems
======
dereferenceddev
There just seems to be a misunderstanding of the intention of a thought
experiment here. There is a subtle implication that because the thought
experiment is "contrived," it's less valuable.

A core part of philosophy is to reduce concepts, ideas, or beliefs into
abstractions (a "spirit" or "essence" of their intention), and the thought
experiment presents itself as a perfectly useful tool for challenging those
concepts.

Ethics is not about absolutes, but really about sussing out where gray areas
exist in what some might believe are black-and-white situations. Additionally,
most of these scenarios are presented as thought experiments because
conducting a _real_ experiment with those conditions would be wildly
_un_ethical.

How are we supposed to determine the nuance of valuing human life if we were
bound to doing actual experiments? Also, who in their right mind would conduct
such an experiment?

Zimbardo faced enough flak testing the limits of obedience and authority in
the Stanford Prison Experiment.

~~~
stupidcar
Why does every comment like this, criticising an article, always claim the
author doesn't understand the concept they're talking about? You can just
disagree with something without using cheap tactics like this to imply the
author is ignorant.

I see no evidence of this claimed misunderstanding in the linked article. And
there's nothing "subtle" about it saying thought experiments lack value
because they're contrived; that is the central point it's making.

~~~
totetsu
They have Trolley problem problem problems, and you have Trolley problem
problem problem problems.

~~~
have_faith
I present: The Trolley Solution. I decide what happens to the trolley in every
scenario and everyone else is absolved of guilt.

~~~
twic
Oh, I thought the Trolley Solution was that you tie all philosophers to the
tracks and then run them over.

------
viburnum
It’s not just philosophy. So much of economics education is getting students
to narrow their vision down to something that somebody else wants them to
see. Force the assumptions on people and then tell them how smart they are
when they draw the inevitable conclusions. It’s a trap. Normal people forget
about the assumptions and walk away believing the conclusions. It’s a
technique for reshaping people’s intuitions.

~~~
6510
haha, yes! Why is everyone relevant in philosophy dead?

The answer to the trolley problem is that it doesn't matter if you pull the
lever or not. You should figure out who or what created the situation and
truly eliminate the problem at its source.

~~~
juped
Killing Philippa Foot is a common joke solution to trolley problems, but few
consider it seriously ethical.

------
castratikron
Stallman said it better:

The Trolley Problem poses this question: if a trolley is about to run over and
kill five people standing on the tracks, and you can shunt it to a different
track where it would only kill one person, should you do it? Or what if you
could save those five by throwing a person near you onto the tracks; should
you do it?

Many people feel intuitively that it would be wrong to throw that person onto
the tracks, and an argument attributes this to a supposed essential difference
between killing someone and deciding to let that person die.

I disagree. I too believe it would be wrong to throw that person onto the
tracks, in real life, but not in the hypothetical trolley problems. There is
no ethically significant difference between killing a person and letting the
person die, if (as supposed in the trolley problems) there is no doubt that
the death will occur.

The reason, in real life, why killing someone is ethically different from
letting someone die is that real life is full of surprises: the person might
not really die. If you kill him, his death is pretty certain (though not
totally; just recently a man was hanged in Iran and survived). If you merely
don't take action to save him, he might survive anyway. He might jump off the
track, for instance, or someone might pull him off. All sorts of things might
happen. Likewise, throwing the one person onto the track might not succeed in
saving the other five; how could you possibly be sure it would? You might find
that you had done nothing but cause one additional death. Thus, in real life
it is a good principle to avoid actively killing someone now, even if that
might result in other deaths later.

The trolley problems invalidate the principle because of the unlikely
certainty that they assume. Precisely for that reason, they are not a useful
moral guide for most real situations. In general, difficult artificial moral
conundrums are not very useful guides for real-life conduct. It's much more
useful to think about real situations. In the free software movement I have
often decided not to propose an answer to a general question until I had some
real cases to think about.

For real driverless cars, the trolley problem never arises: the right thing to
do, whenever there is a danger of collision, is to brake as fast as possible.

More generally, the goal is to make sure to avoid any trolley problem.

[https://stallman.org/articles/trolley-problem.html](https://stallman.org/articles/trolley-problem.html)
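
For what it's worth, the brake-first policy Stallman describes is about the
simplest control rule there is. A minimal sketch (names and the threshold are
invented for illustration, not taken from any real driving stack):

```python
# Hypothetical sketch of a "brake first" collision policy: no occupant
# counting, no swerve arithmetic, just maximum braking when a collision
# threatens. The time-to-collision threshold is made up.

TTC_THRESHOLD_S = 2.0  # brake hard below this time-to-collision


def control(distance_m: float, closing_speed_mps: float) -> float:
    """Return a brake command in [0, 1] given range to the obstacle ahead."""
    if closing_speed_mps <= 0:  # not closing on anything
        return 0.0
    time_to_collision = distance_m / closing_speed_mps
    return 1.0 if time_to_collision < TTC_THRESHOLD_S else 0.0


# e.g. 20 m ahead, closing at 15 m/s -> TTC ~1.3 s -> full brake
print(control(20.0, 15.0))  # 1.0
```

The point of the rule being this simple is exactly the one Stallman makes:
everyone around the car can anticipate what it will do.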

~~~
bigfoot675
> the right thing to do, whenever there is a danger of collision, is to brake
> as fast as possible.

This actually isn't true in every scenario. What if there is a car following
close behind? What if the car behind has more passengers than the car in front
of you? What if swerving will kill pedestrians, but fewer of them than the
passengers in the car you're about to hit?

There are a lot of variables, and it's really not as straightforward as you
make it sound.

~~~
function_seven
That's the entire point of Stallman's essay. Those what-ifs (EDIT: well, #2
and #3) aren't things the self-driving car (or a human driver, in most cases)
will compute in the milliseconds it takes to make a decision.

> _What if there is a car following close behind?_

They may be able to stop fast enough. Whereas if you intentionally don't brake
hard enough and strike the car in front of you, you (the computer) have traded
a possibility for a certainty. You control your braking and following
parameters, they control theirs. (EDIT: Yeah, if there's a lot of distance in
front of you, then the car can make use of that. It doesn't need to screech to
a halt in every case. But this isn't trolley-car territory, it's just the
standard way adaptive cruise and emergency braking already works)

> _What if the car behind has more passengers than the car in front of you?_

Irrelevant. It is not the car's job to be counting the number of occupants in
every other car on the highway.

> _What if swerving will kill pedestrians, but fewer of them than the
> passengers in the car you're about to hit?_

I want the self-driving car to not make that choice. The passengers in the
car, however numerous, are much more likely to survive impact than unprotected
pedestrians.

It really is straightforward. Don't burden self-driving cars with silly edge-
cases. Apply the most effective mitigation known in the general sense, and see
that everyone is happy because they can anticipate the car's choices a bit
better.

~~~
garmaine
> That's the entire point of Stallman's essay. Those what-ifs (EDIT: well, #2
> and #3) aren't things the self-driving car (or a human driver, in most
> cases) will compute in the milliseconds it takes to make a decision.

It could be computed in the seconds or minutes that lead up to the incident.
A smart driver, human or AI, has a ready, already-computed backup plan.

~~~
function_seven
You count the number of occupants in the car behind you, and you factor that
into your braking strategy? Or, do you count occupants in the car ahead, and
prepare to do the pedestrian math if a quick avoidance is needed?

No, of course not. If I'm about to rear-end another car, even if it's a 1973
Ford Pinto with 4 occupants, I'm not going to swerve toward sidewalk
pedestrians in lieu of slamming on the brakes. Even if I have all the
information ahead of time on that car's faults and occupancy level. Nor would
I want a computer to make that decision.

You're not wrong. These are things a computer could do. But I'd say that
they're not something we want them to be doing. We don't need programmers
debating trolley-problem scenarios. They can focus on just reducing or
eliminating impacts directly caused by the system they're building. In other
words, just hit the brakes.

~~~
garmaine
I agree. I’m just asserting that the point made by Stallman, about there not
being enough computational capacity in the milliseconds before a crash to
compute the trolley problem, is a straw man.

------
DangitBobby
I was in just such a conversation the other day, where we were discussing the
implications of the second amendment with regard to tyranny in the US. A
participant would not allow the conversation to continue without a description
of how tyranny would take hold, and no hypothetical was realistic enough. They
could not have brought the conversation to a halt faster if it were
deliberate. Ultimately, they didn't agree with the point (which was sound)
because they believe that our "democracy" is sufficient to prevent tyranny now
and forever.

The _point_ of discussions about scenarios whose parameters cannot be known is
to find a useful way to elide the unknowns and come to conclusions that we can
agree to and understand. Failing to see the forest for the trees is not a
problem with the exercise; it's a problem with the participant.

~~~
Traster
The problem is that there's nothing inherent about a hypothetical that elides
irrelevant details; it elides only whatever the author of the hypothetical
wants to elide. Hypotheticals can let us agree on one aspect of a situation,
but they can't make us agree on how relevant that aspect is to the situation
we're actually interested in. This is why the trolley problem as applied to
autonomous driving is so problematic: whilst it's an interesting thought
experiment, it's totally irrelevant to the person actually designing a self-
driving car. It's a perfectly valid position to take that a situation is
complex enough that hypotheticals aren't ever going to be helpful in reasoning
about it.

------
efitz
One challenge that I have always had with the trolley problem is that, in
general, solutions are not symmetrical. By this, I mean that as an observer, I
might conclude that several different choices made by the subject of the
experiment were ethical. For instance, if the subject threw the switch and
caused one person to be killed (instead of multiple), I would see that as
ethical. I would also see it as ethical if the subject did nothing, either out
of shock or out of refusal to actively participate in anyone’s death. So I
guess the problem is that I’m not convinced that the thought experiment
generates objectively ethical outcomes, only subjective ones.

~~~
luckylion
> So I guess the problem is that I’m not convinced that the thought experiment
> generates objectively ethical outcomes, only subjective ones.

But that's also literally what it's supposed to do. It's a device to poke
around and figure out your positions: e.g., are you into consequentialism or
do you prefer deontology? Are there circumstances that can change your
position? And so on.

I often find people (not you, people in general) rejecting thought
experiments, because they are not comfortable with their intuitive decisions
and they feel that it will be exposed when they're forced to apply it to a
hypothetical situation that does not leave them an easy out ("this wouldn't
happen, I don't walk near train tracks, ever").

------
sukilot
Absolutely bizarre to see a philosopher writing at some length that some
questions are bad because they are hard to answer cleanly. Does he know what
his profession is?

It's also poor form to sling broad accusations about what "some" philosophers
do without citing any examples.

~~~
logicchains
> Absolutely bizarre to see a philosopher writing at some length that some
> questions are bad because they are hard to answer cleanly.

It's hardly bizarre. Wittgenstein, one of the most famous philosophers of the
20th century, wrote that most philosophy was incorrect because philosophers
failed to clearly define their terms.

"The right method of philosophy would be this. To say nothing except what can
be said, i.e. the propositions of natural science, i.e. something that has
nothing to do with philosophy: and then always, when someone else wished to
say something metaphysical, to demonstrate to him that he had given no meaning
to certain signs in his propositions. This method would be unsatisfying to the
other — he would not have the feeling that we were teaching him philosophy —
but it would be the only strictly correct method".

------
imtringued
I dislike these types of thought experiments because they always involve a
preexisting problem where a random bystander has to become a hero and save
everyone. There are a lot of situations in which fate is just playing out, you
can't do anything about it, and struggling will only make everything worse. In
reality, the hero doesn't actually know which lever will save lives, and he
also doesn't know whether the people he is saving are in danger at all.

------
smitty1e
TTPP seems to be a matter of suspension of disbelief.

If the audience cannot enter into the problem, then its value as a tool is
diminished.

Case studies are also fraught with peril, but the analysis of where the
particulars end and the general principles begin seems to be the bulk of the
exercise anyway.

~~~
Natsu
More than that, they reduce too much of the problem away. It reminds me of the
two-capacitor paradox [1], which (spoiler alert) only arises because that
configuration isn't actually possible to realize in terms of ideal circuit
elements.

In particular, we spend a lot of time thinking about what would have to be,
in most formulations of the problem, a quickly decided act (otherwise, why not
simply untie the people?), and we are given absolute certainty as to the
consequences, vs. all the uncertainty in real life. The problem is used to get
rid of those elements as distractions, but they're essential features of
people's reasoning (and reasoning ability), so they're not so easily
discarded.

[1]
[https://en.wikipedia.org/wiki/Two_capacitor_paradox](https://en.wikipedia.org/wiki/Two_capacitor_paradox)
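
For the curious, the arithmetic behind the paradox is short; a sketch of the
standard idealized setup:

```latex
% Ideal capacitor C charged to voltage V, connected across an identical
% uncharged capacitor. Charge Q = CV is conserved, so V_f = Q/(2C) = V/2.
E_i = \tfrac{1}{2} C V^2,
\qquad
E_f = \tfrac{1}{2}(2C)\left(\tfrac{V}{2}\right)^2
    = \tfrac{1}{4} C V^2
    = \tfrac{1}{2} E_i
```

Half the stored energy has vanished, and with ideal wires (zero resistance,
zero inductance) the model gives it nowhere to go; that is exactly the kind of
assumed-away detail the paragraph above is pointing at.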

------
Sophistifunk
The trolley problem is a non-problem. I don't know about you guys, but nobody
I know is ever going to buy a car that might decide to kill them to save some
stranger(s).

~~~
baddox
That seems overblown. Would you ever ride in a car with a driver that might
sacrifice the lives of everyone in the car to save 1,000 people?

I suspect that lots of people have actively made decisions while driving their
own car that endanger their own life in order to potentially save the life of
someone else.

~~~
leghifla
Well, I would not want to get into the car of someone with a driving habit of
risking the lives of 1,000 people in the first place. The real solution to
this is to avoid situations where such a choice must be made, by keeping safe
distances and speeds. Good drivers do this naturally. AI must do this too.

~~~
baddox
That’s never part of the thought experiment. The trolley problem isn’t asking
whether you should risk one life or risk 100 lives. It’s asking what you would
do in a situation that you were placed into.

------
dannykwells
Could this be due to the rise of pure mathematics in influencing philosophers?
In pure math, counterexamples play a very important role: show one
counterexample, disprove an entire claim.

The violinist in the article is clearly trying to be a pure-math-like
counterexample, but to me it misses the emotional aspect of pregnancy (at the
least): a fetus is not a random person, it is (partly) you, and furthermore,
many would say society (and the species) depends on having babies, whereas
society and the species do not depend on violin players.

I'm not saying I agree with these arguments, only that the real world is
substantially messier than pure math, and thus pure-math thinking may stumble
when applied to real-world problems.

Tl;dr: humans are not rational in our beliefs, and the continued attempts
across disciplines to assume we are rational are, well, irrational.

~~~
dereferenceddev
I've rarely heard someone make the counterpoint that a fetus is technically
half "you", which really does fundamentally change the violinist example.

The violinist experiment does have a lot of holes in it, but I think that one
in particular almost turns it on its head, particularly because it circumvents
the crux of the problem around consent (which I think is at the core of the
problem).

It would seem you would have to change the thought experiment to have the
violinist actually be related to you (say a sibling). Now, would a person feel
_as_ upset that they had to allow their sibling to use their kidneys for nine
months in order to stay alive? That really changes it.

~~~
perl4ever
A fetus is half _another_ person, whose "selfish genes" would benefit by the
fetus taking an excessive share of resources. So there's an inherent tension
that isn't determined by culture or moral theories.

------
kwhitefoot
> Had the context been one in which a hitman was preparing to take a hidden
> shot at a target, and the target then died of a sudden cardiac arrest as the
> hitman remained out of sight, it’s far from clear that killing and letting
> die would be equally bad.

But the two situations are not equivalent. The hitman is almost certainly not
in a position to save his intended victim so he is not in any _meaningful_
sense letting him die.

------
brodouevencode
This must be in the gestalt. [https://brodoyouevencode.com/the-problem-with-the-trolley-problem.html](https://brodoyouevencode.com/the-problem-with-the-trolley-problem.html)

~~~
NoPicklez
I think it's a silly response to the problem. Yes, if you were near the lever
and happened to know one of the groups, then you might argue for pulling the
lever one way or the other.

But is that simply you justifying your actions?

The problem isn't absurd when posed to humans: consider the scenario where you
don't know anyone in either group and you need to make a decision in a matter
of seconds. That's the point.

If my mother was on one of the tracks, I might justify my actions to pull the
lever and have it veer into the other group, but that isn't necessarily a
morally correct decision.

The author in the article says that the problem is absurd when posed to
humans, then goes on to apply the same principles of the problem to AI and
cars, but does not say that the problem is also absurd with regard to AI,
despite coming to the same conclusion.

~~~
brodouevencode
> necessarily a morally correct decision

This is where the problem is: life is way too complex and nuanced to distill
down to such a simplistic scenario. Measuring morality is not an instantaneous
action.

------
post_below
Maybe a little off topic...

Ethics thought experiments are interesting and sometimes fun and you can't
deny the value of getting people thinking about choices and behavior.

But it's always seemed to me that they, along with ethics in general, miss the
point. I don't think absolute right and wrong, or even absolute "lesser of two
evils" is a particularly useful goal.

It's the contextual framework that matters... Values, priorities, fears,
desires, needs, the things that comprise a person's identity and worldview.
Those things are going to win over ethics every time when real world decisions
are being made.

IMO there's a lot more value in exploring those things, as opposed to ethics,
if the ultimate goal is to impact the behavior of individuals in a society.

~~~
adjkant
The irony here is you just pigeonholed "ethics" and then made several ethical
arguments yourself while saying you don't care about ethics!

To add some formal language:

> I don't think absolute right and wrong, or even absolute "lesser of two
> evils" is a particularly useful goal.

This sounds like a metaethical argument - what underpins ethics is a very
important question that a lot of people miss, but that argument is actually
the base of ethical stances. The formal metaethical belief here is
objectivism. Some other options are things like subjectivism, cultural
relativism, error theory, and non-cognitivism.[1]

[1] A video alternative:
[https://www.youtube.com/watch?v=FOoffXFpAlU](https://www.youtube.com/watch?v=FOoffXFpAlU)

> It's the contextual framework that matters... Values, priorities, fears,
> desires, needs, the things that comprise a person's identity and worldview.
> Those things are going to win over ethics every time when real world
> decisions are being made.

Ethics (once you get past metaethics) is almost always built around a
framework, and focuses exactly on everything you listed. Aristotle explicitly
focused on values to build his ethical framework. Foucault talks a lot about
fear and power. Most of consequentialism and utilitarianism focuses on needs
and desires. Rawls and egalitarianism are an example of talking about
priorities.

--------------------------------------------------

To me, it sounds like you have a gripe with the impracticality of philosophers
talking about ethics but care quite a lot about ethics itself. If so, I'd be
with you strongly on both counts.

~~~
post_below
I see that it was sloppy of me to use the term ethics so loosely.

------
yogrish
This video summarises trolley problem well for driverless cars.
[https://youtu.be/ixIoDYVfKA0](https://youtu.be/ixIoDYVfKA0)

------
msla
The problem here is reminiscent of the classic short story "The Cold
Equations": A young woman is fooling around near a space ship and ends up
accidentally stowing away on a craft needed to move serum to a colony in the
grips of disease. The mass on the ship is accounted for to the gram (the
gramme, even!) so her excess mass means the ship no longer has the fuel to
make it to the colony. The dashing space hero has to jettison the innocent
young woman in order to save untold numbers of people.

OK, what's blindingly, blisteringly _wrong_ here? First, the idea that a space
ship small enough that its fuel would be rationed out by the gram would have
enough room in the crew compartment for someone to hide in is ludicrous.
Second, launching without a checklist? Are you out of your tiny little mind?
Third, allowing unknown people to bring unknown contaminants into a space
ship? Having a "KEEP OUT" sign doesn't save the colonists from _another_
plague, now, does it?

The moral of the story is that it's hard to keep your mind on the supposed
lesson when the flaws jump up and down and yell at you.

Taking a different tack, by thinking too hard about the consequences of the
thought experiment instead of the lead-up to it, there's the Jew in the attic.
You know how it goes: A Jewish family is hidden in your attic or guest bedroom
or someplace and the Nazis come knocking. Do you lie to save the Jews? "Of
course", you say, and come up with a nice logical argument for why your
ethical system demands you lie in this instance. All functional ethical
systems can come up with such an argument with minimal fuss. However: The
Nazis were not very nice, you know. If they thought a town was holding out on
them, they'd initiate reprisals. They'd kill a whole town in a fit of fascist
pique. Saving a half-dozen Jews could doom a few thousand innocents, likely
including the original Jews. But that's out of scope for the thought
experiment.

~~~
arkades
> They'd kill a whole town in a fit of fascist pique.

That's a new one to me. Any sources I can read up on that?

~~~
dredmorbius
The story of Lidice doesn't precisely fit the scenario, but is close enough in
spirit that I'd call it a match:

 _The Lidice massacre was the complete destruction of the village of Lidice,
in the Protectorate of Bohemia and Moravia, now the Czech Republic, in June
1942 on orders from Adolf Hitler and Reichsführer-SS Heinrich Himmler._

 _In reprisal for the assassination of Reich Protector Reinhard Heydrich in
the late spring of 1942, all 173 males from the village who were over 15 years
of age were executed on 10 June 1942. A further 11 men from the village but
who were not present at the time, were later arrested and executed soon
afterwards, along with several others who were already under arrest.[2] The
184 women and 88 children were deported to concentration camps; a few children
who were considered racially suitable and thus eligible for Germanisation were
handed over to SS families and the rest were sent to the Chełmno extermination
camp where they were gassed._

[https://en.wikipedia.org/wiki/Lidice_massacre](https://en.wikipedia.org/wiki/Lidice_massacre)

Memorialised in Bohuslav Martinu's "Memorial to Lidice".

[https://youtube.com/watch?v=DHWU27TD6UU](https://youtube.com/watch?v=DHWU27TD6UU)

------
SpicyLemonZest
I'm stunned that this article was published just a few days ago, because
recent events have illustrated just why it can be important to think about
strange, unrealistic hypotheticals. How much easier would lockdown debates
have been, if we had the conceptual frameworks in place to talk frankly about
how many lives must be saved to justify suspending certain freedoms?

~~~
sukilot
How does that help?

"By my weightings, freedom is more important than public health."

"My weightings say the opposite."

That's where we are today. Putting numbers on it, numbers that can't be
determined except by our personal biases, doesn't help.

Here's a fun story about cost-benefit analysis in the real world, a bureaucrat
deciding which kinds of rape hurt enough to be worth preventing:
Criticism:
[https://gulcfac.typepad.com/georgetown_university_law/](https://gulcfac.typepad.com/georgetown_university_law/)

Defense: [https://www.theregreview.org/2013/09/13/13-sunstein-cost-benefit/](https://www.theregreview.org/2013/09/13/13-sunstein-cost-benefit/)

~~~
darawk
The value in these things is being able to consistently extrapolate from
weightings, and to understand all of the consequences of different weights.
Most people just have vague moral intuitions that lack consistency.
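
As a toy illustration (everything here is invented, not from the linked
pieces): once the weights are written down explicitly, each person's
conclusion at least follows checkably from their own numbers, even while the
numbers themselves remain disputed.

```python
# Hypothetical illustration: explicit weights make trade-offs checkable,
# even when people disagree about the weights themselves. All names and
# numbers below are made up.

def score(policy, weights):
    """Weighted sum of a policy's outcomes; both dicts are invented."""
    return sum(weights[k] * v for k, v in policy.items())

lockdown = {"lives_saved": 10_000, "freedom_cost": -1.0}
no_lockdown = {"lives_saved": 0, "freedom_cost": 0.0}

# Two commenters with different (made-up) weightings:
weightings = {
    "civil_libertarian": {"lives_saved": 1, "freedom_cost": 20_000},
    "public_health": {"lives_saved": 1, "freedom_cost": 5_000},
}

for name, w in weightings.items():
    choice = max([("lockdown", lockdown), ("no_lockdown", no_lockdown)],
                 key=lambda p: score(p[1], w))[0]
    print(f"{name} prefers: {choice}")
# civil_libertarian prefers: no_lockdown
# public_health prefers: lockdown
```

The disagreement doesn't go away, but it moves from vague intuition to a
single explicit number that can actually be argued about.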

