
Moral Machine - kevlar1818
http://moralmachine.mit.edu/
======
yongjik
Imagine an autonomous car driving at 60+ mph on a two-lane road that (1) is
blocked on both sides and (2) has a pedestrian crossing. (Not sure about 60
mph, but I guess you need to be that fast to reliably (hah!) kill passengers
on impact.)

We can assume its camera is broken, because it failed to reduce speed (or at
least blare the horn) upon seeing a giant concrete block in its path. (Okay,
maybe the concrete block fell from the sky when a crane operator failed to
secure the load, so the car might have had no time.) And of course the brake
is broken. Miraculously, the steering wheel is working, but it's out of the
question to scrub off speed against the side barriers for some reason. Maybe
there's actually a precipice on either side. (Imagine that: a 60+ mph two-lane
road, precipices on both sides, with a pedestrian crossing appearing out of
nowhere.)

Oh, by the way, within 0.5 seconds of seeing the people (remember: the car
couldn't see these people until the last moment, otherwise it would have done
something!), the car has instant access to their age, sex, profession, and
criminal history. The car is made by Google, after all. (Sorry, bad joke.)

Q: What is the minimum number of engineering fuckups that must happen to
realize this scenario?

This is to morality what confiscating baby formula at the airport is to
national security.

~~~
andrei_says_
I couldn't "play" the game for these very reasons. There's no way the player
could have all this information.

It's a game of "would you rather" pretending to be about self-driving cars.

The second question which required a choice between killing homeless people
and wealthier ones made me too disgusted to continue.

~~~
snvzz
> The second question which required a choice between killing homeless people
> and wealthier ones made me too disgusted to continue.

I actually didn't give a fuck who the people were, or how many.

I was surprised when, at the end, I was shown the results, including
demographics.

~~~
blowski
There are too few questions for it to know _why_ you made a decision. It
thought I was trying to save fat people, when actually I think the car should
avoid swerving when all other things are equal.

It does explain this at the beginning: if they asked 500 questions, they
might get more detail per person, but their data would be skewed towards the
opinions of those willing to sit and answer 500 questions.

------
elihu
Here's the heuristic I'm least uncomfortable with:

- avoid collisions with things or people if at all possible

- if collisions are unavoidable, choose whatever option won't harm anyone

- if harm is unavoidable, select the option that harms whoever created the
unsafe situation by doing something they shouldn't have

- if harm to a law-abiding, normally-behaving person is unavoidable and a
critical safety feature of the vehicle has failed due to lack of maintenance,
prefer harm to whoever is responsible for vehicle maintenance

- if the situation is unrelated to vehicle maintenance or harm to some other
party is unavoidable, choose the option that maximizes the likelihood of
people getting out of the way, minimizes impact speed, and isn't overly
surprising (i.e. prefer to stay in the same lane if possible), honk the horn,
and hope for the best

So, if someone runs into the street suddenly in front of oncoming traffic, the
car should not choose an option that harms someone else due to that person's
poor choice. Similarly, if someone neglects the maintenance of their car, they
should bear the responsibility for it. (Ideally, a "car no longer responsible
for protecting your life" light would come on or the vehicle would refuse to
start if regular maintenance is overdue.)
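
A minimal sketch of that cascade as ordered filters, assuming (generously)
that perception could ever supply fields like these; every name below is
hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Option:
        # Hypothetical perception outputs; nothing here exists in a real stack.
        harms_anyone: bool           # would this maneuver injure any person?
        harms_only_culpable: bool    # harmed parties created the unsafe situation
        harms_only_maintainer: bool  # harmed party neglected vehicle maintenance
        impact_speed: float          # predicted collision speed, m/s
        stays_in_lane: bool          # less surprising to everyone else

    def pick(options):
        # 1. Prefer any option that harms no one.
        safe = [o for o in options if not o.harms_anyone]
        if safe:
            return min(safe, key=lambda o: o.impact_speed)
        # 2. If harm is unavoidable, harm whoever created the unsafe situation.
        culpable = [o for o in options if o.harms_only_culpable]
        if culpable:
            return min(culpable, key=lambda o: o.impact_speed)
        # 3. If a neglected safety feature failed, prefer harm to the maintainer.
        maintainer = [o for o in options if o.harms_only_maintainer]
        if maintainer:
            return min(maintainer, key=lambda o: o.impact_speed)
        # 4. Otherwise minimize impact speed and prefer the unsurprising lane.
        return min(options, key=lambda o: (o.impact_speed, not o.stays_in_lane))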

~~~
maxerickson
If lack of maintenance is creating an identifiable unsafe situation, the
automated system should refuse to operate the vehicle at full capacity.

(All the potential objections to that policy are answered by pointing out
that the operator of the vehicle has a responsibility to other users of the
road.)

~~~
diamondlovesyou
What if (full) use of the car is required? Sometimes things break (or rather,
a threshold is crossed), but that won't become known to the car, and thus the
owner, until the car is needed: a wife going into labor, for example.

~~~
maxerickson
If someone needs high availability, they should adjust their maintenance
intervals as appropriate to ensure it. If something literally breaks during
the trip, call an ambulance.

That the above is more complicated than ignoring the trade-off created by lax
maintenance shouldn't become a problem for the pedestrians along their route
to the hospital.

------
throwawayReply
I ignored any "social value", gender or age concerns in answering. It thought
I had a preference to save the elderly, but I otherwise ended up precisely on
the center line on most issues.

My rules, however, were simple: avoid swerving; prefer killing the people in
the car over people outside, all other things being equal. If pedestrians are
running the light they get less sympathy, but I'm not going to swerve to hit
them.

Forget the pets, why would anyone want to save them? Apparently some people
did.

Also: how does a car know if someone is a "criminal" anyway, and what does
that mean? Does that mean a released felon would be "picked" to be run over
if the car face-detected them against a database of known persons? Criminals
don't run around in stripes with bags of loot!

~~~
Swenrekcah
I followed exactly the same line of reasoning, except that I did swerve to
save people crossing on a green light at the expense of people crossing on a
red light. The reasoning was something like: "The passengers of the car
should absorb the risk of a car failure, not pedestrians. If pedestrians must
die, protect those who follow the rules if possible."

Incidentally my answers meant I always killed women instead of men...

~~~
throwawayReply
I wonder if the test is adaptive: did it hone in to differentiate
preferences, or present the same scenario set to everyone?

It felt like it was honing in on my preferences at the end, but I'm not sure.

------
function_seven
Fun test to take, but I seriously hope they're not drawing any conclusions
from the mix of people I "preferred" to save or kill. I didn't consider the age,
criminality, or gender of any of the pedestrians or occupants I killed or
saved. I just erred toward non-intervention, unless the intervention choice
saved bystanders at the expense of occupants. When the potential casualties
were animals, they all died.

~~~
JoshTriplett
I followed a similar algorithm, considering all lives equal. Injury versus
death: prevent death. Uncertainty versus death: prevent certain death, and
assume passengers are more likely to survive an accident because they're
better protected. Certain pedestrian death versus certain pedestrian death:
prefer non-intervention over intervention. _Certain_ passenger death versus
_certain_ pedestrian death: protect the passengers.

Justification for that last one: self-driving cars will be far safer than a
human driver, such that it'll save many lives to get more people using self-
driving cars sooner. Self-driving cars _not_ prioritizing their passengers
will rightfully be considered defective by potential passengers, and many such
passengers will refuse to use such a vehicle, choosing to continue using
human-driven cars. Thus, a self-driving car choosing _not_ to prioritize its
passengers will delay the adoption of self-driving cars, and result in more
deaths and injuries overall.
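
A sketch of that ordering as ranked checks; the option fields, numbers, and
the passenger-protection assumption below are illustrative inventions, not
anyone's real model:

    def choose(options):
        # Each option is a hypothetical dict, e.g.:
        #   {"name": "stay",
        #    "certain_deaths": {"passenger": 0, "pedestrian": 2},
        #    "possible_deaths": 1, "intervention": False}
        def certain(o):
            return sum(o["certain_deaths"].values())

        # Prevent certain death outright if any option allows it; passengers
        # tend to land in "possible" rather than "certain" death because
        # they're better protected.
        deathless = [o for o in options if certain(o) == 0]
        if deathless:
            return min(deathless,
                       key=lambda o: (o["possible_deaths"], o["intervention"]))

        # Certain passenger death vs. certain pedestrian death: protect the
        # passengers.
        spares_passengers = [o for o in options
                             if o["certain_deaths"]["passenger"] == 0]
        if spares_passengers:
            return min(spares_passengers,
                       key=lambda o: (certain(o), o["intervention"]))

        # Otherwise minimize certain deaths, preferring non-intervention on ties.
        return min(options, key=lambda o: (certain(o), o["intervention"]))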

~~~
freeflight
"Certain passenger death versus certain pedestrian death: protect the
passengers."

Pondering that question made me imagine some bad Sci-Fi future where self-
driving cars end up being dangerous killer-robots for anybody but the
passengers.

If pedestrians have to fear these things, because they are programmed in a
"Protect the pilot above all else!" way, it might hamper adoption just as
badly.

------
lvs
Many of these are what we call "false choices" of the sort that typically
arise in the hypothetical utilitarian dilemmas used in rhetoric and debate.
Humans are creative enough to see alternative options that obviate the
dilemma, at least to some extent. See Michael Sandel on moral dilemmas.

Edit: FYI, Sandel's complete course "Justice" is on YouTube.[1]

[1][https://www.youtube.com/watch?v=kBdfcR-8hEY](https://www.youtube.com/watch?v=kBdfcR-8hEY)

------
JoshTriplett
This seems drastically oversimplified. For instance, all of the scenarios
depicting a crash into a concrete barrier assume the death of everyone in the
car, but generally a car has far more protection for its passengers (airbags,
seatbelts, crumple zones, etc.) than pedestrians have from being struck by a
vehicle.

~~~
PostOnce
For the first barrier question, I chose to "hit the pedestrians".

The probability of hitting the wall if you drive at it is 1. The probability
of hitting the pedestrians isn't necessarily 1, since they can react to you.
Probably not very well, but perhaps they can jump out of the way or behind the
barrier or something.

Also, can this car not also HONK LOUDLY when it makes the decision to drive
towards the pedestrians? This would further lower the risk that the
pedestrians will actually get hit.

~~~
simonh
Exactly what I came here to say. Activate the horn/car alarm, switch off the
engine/disengage the clutch, avoid obstacles if possible but otherwise go in
a straight line; below some speed threshold, hitting a solid obstacle is
acceptable. Done.
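
As a rough control loop, with every method on `car` being a made-up
placeholder rather than any real vehicle API:

    SAFE_IMPACT_MPS = 8.0  # assumed speed below which a solid impact is acceptable

    def on_brake_failure(car):
        car.sound_horn()   # warn everyone nearby
        car.activate_alarm()
        car.cut_power()    # switch off the engine / disengage the clutch
        while car.speed() > 0:
            if car.path_is_clear():
                car.steer_straight()             # stay predictable
            elif car.speed() <= SAFE_IMPACT_MPS:
                car.steer_to_nearest_barrier()   # low-speed solid impact is fine
            else:
                car.steer_around_obstacle()      # avoid, deviating minimally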

------
macawfish
If the car has the ability to make these kinds of distinctions in such a
simple scenario, surely there are more options than two. "Moral?" It smells
like eugenics to me: some bizarre, Ivy League technocratic posthumanism.
Somehow the value of a life is determined by age, sex and profession? Says
who? The people programming the dataset over HTTPS? This is collapsing
nuanced spiritual and ethical intuition onto an extremely narrow,
low-dimensional set of parameters.

What's the premise? This triggers in me an image of naive, optimistic,
well-adjusted Germans in the 1930s. I know this was probably created with
good intentions, but the premise does not match the research question. The
premise is "morality", yet it's asking me to rank the value of human life
based on presumptuous, superficial categories.

Is "the moral machine" going to also decide which births have more utility?
Which countries to send aide to? Who should have access to educational
opportunities or quality food? Based on low dimensional datasets such as this
one?

~~~
JoshTriplett
Agreed. This has too many axes of difference, and doesn't have sufficiently
careful control and consistency to determine which of them are being
consistently ignored.

It looks like they were _trying_ to see if people place differing amounts of
value on different human lives, but in the process of doing so, they made
ridiculously strange value judgments. "Athlete"? "Executive"? "Large"? Why
should any of those _matter_? We're talking about _human lives_.

~~~
hexane360
I think part of their intent was to show biases in judgement. The small sample
size really hindered this though. Apparently I 100% preferred old people to
young people, even though I didn't consider age in my decisions at all.

They do have a little disclaimer on the results screen about the sample size
though.

~~~
macawfish
The bias is generated by the lack of real choice in the responses. They offer
an unrepresentative range of choices, then turn around and tell you that your
choices are biased. They're not revealing existing bias; they're coercing it
out of participants.

This thing irritated me so badly that I couldn't bring myself to answer a
single question. I can't stand the idea that this study is actually
happening. If the results are published and receive non-critical media
attention, I'm gonna be irate.

~~~
esfandia
But then you'd need to answer 100 or more test questions instead of 13. This
is necessarily simplistic given how much involvement they can ask of
participants.

Also, I don't think they're implying anything nefarious about the resulting
biases shown in the final results. For sure a lot of it has to do with the
small sample size of the questions. I didn't run the tests twice, but I
imagine there is some randomness involved in the way they are generated.

If there are any heuristics at play, the results will indeed show them (in my
case there were enough tests to recover the fact that I preferred saving
passengers, preferred non-intervention, and preferred saving humans over
pets). But it will also come up with some gibberish/noise due to the small
sample size.

~~~
JoshTriplett
> But then you'd need to answer 100 or more test questions instead of 13.

Or they could craft questions designed to separate different hypotheses more
definitively in fewer questions.

------
loupax
Rule one: save all your passengers. Nobody would buy a car that has the death
of its passengers as an acceptable scenario, and Jeff from marketing will be
on my ass otherwise.

Rule two: kill the fewest people outside of the car. Done.

I know this is a thought experiment, but it completely misses the point of
self-driving cars IMO. Sure, a human can be more moral than a car, but all it
takes is being distracted for a second and you've killed all the babies on
the pavement.

~~~
snvzz
Close enough...

Rule Two:

Intervene only if it doesn't mean killing a person who otherwise would not
die.

Done.

~~~
loupax
How is that for a thought experiment? Say I build a self-driving car that,
when faced with such cases, does the equivalent of "Jesus, take the wheel".
This is well known to the owner of the car.

In case of injury or death, who should go on trial?

~~~
Unman
Ooooh... I like that! There's bound to be a large body of case law on it.
This argument (a) suggests that the person(s) constructing and/or maintaining
a machine are culpable.

a.
[https://books.google.ca/books?id=p1BMAQAAMAAJ&pg=PA175#v=one...](https://books.google.ca/books?id=p1BMAQAAMAAJ&pg=PA175#v=onepage&q&f=false)

------
NTripleOne
I played this without my glasses on, only putting them on after the "results"
came up. I didn't even realise there were genders and classes involved; I
thought it was just adults, kids and animals.

Holy fuck, this has nothing to do with machine intelligence; it seems more
like it exists to push or reinforce someone's social agenda.

------
losteverything
I now drive for a living. I took the test based on our training (and
reinforcement).

1\. "Hit the deer, do not swerve." is one rule.

2 Save your own life: only priority.

3.if you can not avoid an accident, hit a stationary object, Not a moving one.

Hit the barrier instead of people.

Don't swerve for dogs.

------
Lxr
In a market setting, I feel people will buy the car that prefers killing
others to killing its occupants. This matches what human drivers
instinctively do in such situations as well. Maybe there will be regulations
mandating which decisions to make in scenarios like this, but otherwise I
think the 'save yourself' option is the most likely outcome.

~~~
foobarian
In a market setting, once a car has sufficient sensing capability that it is
possible to even write down an algorithm in code, there will probably be very
little choice for the manufacturer to make once this goes through the legal
dept/the courts/the insurers. If I were an automaker, I'd probably avoid the
choice altogether and just stop the car. If the brakes failed, activate
emergency brakes, or devise a new kind of emergency stop system; we can
imagine extreme measures like jettisoning the wheels.

~~~
DannyBee
(and you know, blaring the horn at pedestrians, etc)

------
beardog
It would be pretty creepy if my self-driving car could know that the people
in our path have a criminal past.

~~~
tluyben2
It could happen though; with enough cams & facial recog as we already have on
the streets in a lot of places and which is , the car 'only' needs to tap into
it.

~~~
elihu
Even more creepy would be a national database of citizen value (determined by
some unaccountable entity) that devices such as autonomous vehicles are
required to query the moment they realize they're about to kill someone, and
choose whatever option minimizes the combined citizen value of impacted lives.

------
doctorpangloss
No one would ever buy a self-driving car that kills its buyer.

~~~
JoeAltmaier
People buy cars all the time now that kill their drivers.

~~~
JoshTriplett
The drivers typically feel they have control and agency in their driving,
though.

~~~
alexmat
Couldn't that be simulated to fool people into thinking the same thing about
autonomous vehicles?

~~~
sesqu
It should be enough to show people that autonomous cars get into significantly
fewer accidents. It's quite contrary to fooling them with agency, but it does
support the safety-seeking choice of an autonomous vehicle over a manual one.

------
shpx
[https://en.m.wikipedia.org/wiki/Law_of_triviality](https://en.m.wikipedia.org/wiki/Law_of_triviality)

People building autonomous cars don't actually care. Stay on the road; if an
unavoidable obstacle is detected, brake.

Someone rebranded the trolley "problem". This has nothing to do with real
engineering.

~~~
theemathas
This is _supposed_ to be a variant of the trolley problem.

~~~
im4w1l
The point is that the theoretical problem, interesting as it may be, doesn't
matter that much in practice.

The marginal utility you could get by fine-tuning the trade-off is low.

It's essentially another type of bike-shedding.

------
boardwaalk
Other things being equal (total number of deaths, age/health/profession
ignored), I chose for the people in the car to die because "sudden brake loss"
indicates that the people in the car are likely not taking care of their car.

This is assuming the status quo of people mostly owning their own car.
Ridesharing wasn't mentioned anywhere.

------
gjm11
The results page would be much improved if instead of presenting a single
point in each scale labelled "You" it gave an indication of how much
uncertainty there is in its measurements (answer: a whole lot).

It also seems to conflate some things that, in my brain at least, aren't at
all the same. E.g., it looks as if it combines "prefer to kill criminals
rather than non-criminals" and "prefer to kill people who are crossing where
the light is red over people who have legal right of way", which are quite
different. And it lumps a bunch of things together under "social value
preference", which I suspect makes assumptions about how users view
"executives" that may not be reliable in every case.

I find it interesting how many of the comments here take it that the goal is
to _promote_ the idea that (e.g.) athletes matter more than fat people, or
that rich people matter more than poor people. Rather, the point seems to be
to investigate what preferences people actually express. Any single person's
preferences are measured incredibly unreliably, as many people have remarked
on in their own cases. But in the aggregate I think they're getting some
useful information.

------
richerlariviere
This is quite tricky. You can kill people in order to save your own life, but
in my opinion it would haunt me for the rest of my life and I would suffer
psychologically from it. I think if you choose to ride in a self-driving car,
you assume all the risk it can cause you. If such a moral system were to
exist, I should have the choice to configure it, because morality, I think,
is cultural. There is no correct answer.

------
ogennadi
This reminds me of a story by Peter Watts, "Collateral", where some medical
enhancements make a soldier VERY good at trolley problems.

[http://www.lightspeedmagazine.com/fiction/collateral/](http://www.lightspeedmagazine.com/fiction/collateral/)

It definitely helped me out in quickly choosing the "right" option.

------
joantune
Did MIT just ask the Internet to weigh in morally on these situations? God...
I hope 4chan, YouTube commenters, and other generic anonymous trolls never
reach this webpage...

GOD, MIT, add a Facebook sign-in or something that doesn't make it anonymous,
please... thanks! (If it's not already there [I didn't get to the end].)

------
koliber
This was a very neat exercise. There were, however, some situations where I
felt that it did not matter which decision the car made. In those situations,
my moral compass had no clear direction. I decided to go with less
intervention in those scenarios. Unfortunately, that happened to result in
the death of more women and fat people, which was not the intention.

I think that in reality, when implementing such systems, there should be an
option to insert a measure of randomness. Perhaps in certain situations there
is no clear "better" outcome, taking into consideration data from simulations
such as this. In that case, forcing an engineer to make a clear decision
causes undue burden. It should be possible to say: flip a coin. In other
situations, the amount of randomness could be less than 100%, to reflect the
variance in society's mores and values.
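
That coin flip could be as simple as a tolerance band around the scores; the
utility function, the margin, and the randomness knob below are invented for
illustration:

    import random

    INDIFFERENCE_MARGIN = 0.05  # assumed: outcomes this close count as a moral tie

    def decide(option_a, option_b, utility, randomness=1.0):
        # `utility` is a hypothetical scoring function; `randomness` is the
        # probability of flipping a coin when the outcomes are effectively tied.
        a, b = utility(option_a), utility(option_b)
        if abs(a - b) <= INDIFFERENCE_MARGIN and random.random() < randomness:
            return random.choice([option_a, option_b])  # no clear better outcome
        return option_a if a >= b else option_b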

------
JD557
I found my results a bit awkward, as the situations are not independent.

My heuristic was:

    
    
      - Kill the animals
      - Kill people crossing on a red signal
      - Kill the fewest people
      - Save whoever is inside the car
      - Keep going ahead
    

Yet my results said that 100% of the time I would value women's lives over
men's.
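
That cascade is just a lexicographic sort; here's a sketch with invented
option fields, where earlier tuple entries dominate later ones:

    def decide(options):
        # Hypothetical fields describing each candidate maneuver.
        def key(o):
            return (
                not o["victims_are_animals"],  # kill the animals
                not o["victims_crossed_red"],  # kill people crossing on red
                o["people_killed"],            # kill the fewest people
                not o["saves_occupants"],      # save whoever is inside the car
                o["action"] != "straight",     # keep going ahead
            )
        return min(options, key=key)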

~~~
bpicolo
We're in a strange world where animals obey crosswalks.

------
sridca
Rhetorical question: if they test for gender preferences, why not for race or
nationality preferences?

~~~
taneq
Because they don't want their research to be permanently shackled to the
endless rhetoric on racism?

------
cairo_x
The homeless people aren't given genders or pregnancy status.

~~~
snvzz
Or net worth.

They might as well have drawn them as generic animals.

------
LoSboccacc
so, the question set is quite bad at singling out factors.

by always choosing not to kill the passenger, I got a strong male preference
and an extreme preference for the elderly, even though I never selected
answers with that in mind.

this is a very, very bad test set.

------
buro9
The way I voted:

- prefer to kill those within the car rather than those without

- prefer to go straight (stay in control) rather than swerve (lose control)

I did not care about age, fitness, gender, species, good guy vs bad guy...
just that those within the car made a choice to be in the car (and, as I saw
it, to be subject to it) and those outside of the car did not make that
choice (and, as I saw it, should not be subject to it if possible).

Driverless cars should succeed based on the likelihood of coming out of the
vehicle alive compared to manual driving, not by externalising the issue and
increasing collateral damage.

~~~
Unman
I simplified it... always kill those in the car, as they are the ones
introducing the danger into the situation for their own perceived
convenience.

------
Animats
Google's current policy is to save the most vulnerable road users first.
Something like wheelchairs, then pedestrians, then bicycles, then cars, then
trucks.

~~~
afro88
Interesting. So if a toddler walked into the street, it might swerve into
oncoming traffic and kill a lot more people than just the toddler?

------
nenadg
I guess one can't assign values to a moral dilemma, namely zero (0) and one
(1), zero being death and one being life.

This kind of moral 'spectrum' is highly biased and forces you to assign
'bigger' value to people that you really know nothing about.

In this case an athletic male would be preferred for saving over an elderly
woman, but that elderly woman could have been Marie Curie, and when the
vehicle is driving 60 km/h towards a random, unknown crowd, one (or a
machine) can't know the details.

This is the same as the question 'Would you kill a Jew to preserve the great
German Reich?' It's biased and can't be resolved by defining a moral compass,
or by taking a statistical example as a training set.

------
joshu
Mechanical Turk for moral decision making. I like it!

~~~
foobarian
What they didn't tell you is you just killed 3 drivers, 5 passengers, and 12
pedestrians.

~~~
Namrog84
Like that one book series or movie I don't want to name for the sake of
spoilers!

It's all real life! Not simulations!

~~~
stordoff
Which series is that? I'm curious to read/watch, but struggling to find it.

------
imtringued
Always avoid the intervention. I don't want morals in my machines and I don't
want to crash into oncoming traffic.

By the way, is a brake failure even a realistic scenario? Electric cars have
regenerative braking, with normal friction brakes as a backup.

------
_petronius
It seems weirdly reductive the way this is framed rather aggressively as "who
do you choose to kill?" (or really, given the attributes it tags people with,
"whose life is worth more?"). Surely there are better metrics, like "do
people have more time to get out of the way and/or predict the path of the
vehicle if it stays on a straight course or swerves?", or any number of
other, more useful questions when it comes to the actual problems of
programming autonomous systems.

------
AdrienG
I don't think that whether the people are in the car or not is relevant. If
we assume that everyone uses a self-driving car and that fatal software or
hardware failures are random, we cannot discriminate based on the people
being inside or outside the car.

This is a horrible exercise, but sadly one that will inevitably have to be
solved. Even if you add more variables, you'll have to run scientifically
usable experiments that vary only one variable at a time in order to teach an
AI to make a decision.

------
nathancahill
Never thought the trolley meme would pop up on the front page of HN.

------
facepalm
I think in general self-driving cars shouldn't be required to kill their own
passengers; otherwise, people will be unlikely to want to ride in them. Or
maybe, at least, like a human driver, passengers should be able to enter
their own preferences.

Perhaps this site is a predecessor to that? Everybody will have a moral
profile, akin to an organ donor card, showing their preferences for killing
people in accidents ("kill me if the victim would be a baby, but don't brake
for fat white men").

------
timinman
I'm glad I finished the series of questions because it was fun to see how my
values compared to others on the results page.

They were exploring moral relativism scenarios in schools back in the 1960s.
The machine intelligence part seems to be just good window dressing (I
clicked, after all). It isn't about machines as much as human psychology. I
doubt autonomous cars are going to be programmed to take potential
fatalities' fitness, gender, or profession into account.

------
DonHopkins
Will Wright wrote and produced a couple of one-minute-movies about "Empathy"
and "Servitude", exploring the morality of how people interact and empathize
with robots.

[1] Empathy:
[https://www.youtube.com/watch?v=KXrbqXPnHvE](https://www.youtube.com/watch?v=KXrbqXPnHvE)

[2] Servitude:
[https://www.youtube.com/watch?v=NXsUetUzXlg](https://www.youtube.com/watch?v=NXsUetUzXlg)

------
macawfish
Holy shit:
[https://www.youtube.com/watch?v=lNOSZ3HijmQ](https://www.youtube.com/watch?v=lNOSZ3HijmQ)

This is out of control.

------
tlb
I recommend the book Moral Tribes by Joshua Greene, which treats this topic at
book length.

[https://www.amazon.com/Moral-Tribes-Emotion-Reason-
Between/d...](https://www.amazon.com/Moral-Tribes-Emotion-Reason-
Between/dp/0143126059/ref=sr_1_1?ie=UTF8&qid=1475600859&sr=8-1&keywords=moral+tribes)

------
jbb555
This isn't hard.

First, prioritize the car occupants; the _whole point_ of the device is to
drive them. Second, don't swerve off course to make value judgements on who
should live; that is just murder. Third, try to minimize loss of life if all
else is equal. Save cats and dogs where you can, of course, but they can't
ever be the priority.

~~~
scatters
> First prioritize the car occupants.

Morally and legally I'd argue the reverse; the car occupants chose to assume
the risk of a mode of transport that can cause death and injury if it fails,
while the pedestrians made no such decision. True, the manufacturer has a duty
of care to the occupants, but they have a competing duty of care to
bystanders.

Otherwise, your reasoning is of course impeccable.

~~~
kilburn
I completely agree with you, but I also make a distinction between
red-crossing and green-crossing pedestrians:

1. Law-abiding pedestrians are the top priority to save, because they opted
for the lowest possible risk offered in our society.

2. Car occupants come second, because they accepted the inherent risk of a
faster transport method, but otherwise complied with the rules set to
minimize such risks.

3. Red-crossing pedestrians have willingly taken the risk of being run over.
If someone _must_ be hurt, it should be them.

~~~
Unman
I disagree on that distinction. Pedestrians have not introduced the primed-
handgrenade of the automobile into the situation. All dangerous consequences
that flow from that introduction are the responsibility of the introducer.

Many highway codes explicitly recognize this principle: drivers are supposed
to conduct their vehicle as though someone or something may run out in front
of them at any stage.

It is true that in practice this implicit morality which reflects the
widespread outrage which greeted the introduction of the motor car is now
ignored, but that is at the base of many the codes.

------
asimuvPR
This is harder on the soul than on the mind.

~~~
helthanatos
Nah. Just follow the law and protect the medics. It said I preferred to kill
females... but I just chose to follow the law / stay in the lane the car was
already in...

~~~
catshirt
it's a moral machine, why invoke law?

~~~
stonogo
Because someone programmed it, and it is in theory immoral to deliberately
violate the law.

~~~
asimuvPR
Yes, good point. Let me add a spark to the fire: what if the machine learns
by itself? The base program is still man-made, but the rest is developed by
the machine. Where does one simply say the machine made a mistake rather than
it being a programmer error?

~~~
stonogo
If a human engineering team creates a car with adaptive software, and that
software does not contain sufficient safeguards to ensure it at all times
directs the vehicle in accordance with laws and applicable regulations, then
the engineering team is liable. That is how engineering works, outside of the
software industry. It doesn't matter how 'smart' the car is; it is not a force
of nature or a human being. It is a product, and any catastrophe engendered
thereby is the responsibility of the organization that produced it.

~~~
asimuvPR
But no other engineering industry has the same issue: they don't build
intelligent machines. The question still stands.

~~~
stonogo
You appear to be confusing 'engineering self-driving cars' with 'creating a
sentient being'. Programmers are only capable of one of these tasks.

------
dmux
Why not redesign cars to make them safer for pedestrians during collisions?
Many such changes have already been introduced:
[https://en.wikipedia.org/wiki/Pedestrian_safety_through_vehi...](https://en.wikipedia.org/wiki/Pedestrian_safety_through_vehicle_design)

------
macawfish
"‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect
impartial concern for the greater good."

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4259516/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4259516/)

------
tjbiddle
While not specific to self-driving cars: I wonder how much a design like the
Tesla's, with no engine in the front and a huge crumple zone, would affect
things. It could in effect prevent the deaths of the passengers in the car
hitting the concrete block.

------
imagist
It's interesting to make this stuff explicit; here's my prioritization of the
factors measured:

1. Species Preference

2. Protecting Passengers (Passengers > Pedestrians)

3. Saving More Lives

4. Upholding The Law

5. Avoiding Intervention

6. Age Preference

7. Fitness Preference

8. Social Value Preference

9. Gender Preference

"Avoiding Intervention" is an interesting one, because basically after that
one, you've given up the opportunity to affect outcomes. I can certainly
prioritize after that, but as a car maker, I couldn't do anything about my
prioritization without breaking my avoiding intervention choice.

There's also some complication around the prioritization of "Species
Preference". I chose to put that first because it captures my intent best: I
want human lives always prioritized over pet lives, i.e. this situation[1].
But it gets a bit complicated with the interaction of "Saving More Lives" and
4/5. I'd avoid intervention and uphold the law without regard to saving more
lives if those lives are pets (I'd go straight here[2]), but I'd always save
more human lives without regard to pet lives, upholding the law, or avoiding
intervention (swerve here[3]), and I'd always save more lives without regard
to upholding the law or avoiding intervention if those lives are not mixed
human/pet groups (swerve here[4] and here[5]). This makes sense if you look
at it as 3/4/5 applied to human lives being prioritized as a group before
3/4/5 applied to pet lives.

[1]
[http://moralmachine.mit.edu/browse/-685441648](http://moralmachine.mit.edu/browse/-685441648)

[2]
[http://moralmachine.mit.edu/browse/-1848192482](http://moralmachine.mit.edu/browse/-1848192482)

[3]
[http://moralmachine.mit.edu/browse/-194950738](http://moralmachine.mit.edu/browse/-194950738)

[4]
[http://moralmachine.mit.edu/browse/-1998475298](http://moralmachine.mit.edu/browse/-1998475298)

[5]
[http://moralmachine.mit.edu/browse/-703311316](http://moralmachine.mit.edu/browse/-703311316)
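
A sketch of that grouped ordering as a single sort key; all field names are
hypothetical, and the point is only that "more lives" for humans dominates
law and intervention, which in turn dominate "more lives" for pets (species
preference falls out of humans outranking pets everywhere):

    def priority_key(option):
        return (
            option["passenger_deaths"],  # 2. protect passengers
            -option["humans_saved"],     # 3. save more (human) lives
            option["breaks_law"],        # 4. uphold the law
            option["intervenes"],        # 5. avoid intervention
            -option["pets_saved"],       # 3 again, for pets, ranked below 4/5
        )

    # usage: best = min(candidate_options, key=priority_key)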

~~~
NoGravitas
My problem is that it didn't distinguish between cat and dog lives, whereas
that was a major part of my decision-making. /s

------
scotty79
Does the other lane have traffic in the same or the opposite direction? That
matters more to me than whether the people to be hit or spared are fit or
criminal. I initially thought the guy with the money bag was the executive.

------
ilaksh
It's stupid, but it reminds me of something.

If we can build all of these SUVs, skyscrapers, AIs, etc., why can't we build
structures that separate people from cars so they can't get run over?

~~~
Unman
Because the occupants of the cars want to get to any and all places where
there are non-occupants of cars. The conflicts are inherent. Car use is
fundamentally an irrational application of technology that appeals to the
lazy.

------
EliRivers
Assigning a value of zero to anyone in a car with sudden brake failure
provides a relatively (although not completely) self-consistent morality in
this sequence.

------
adm_hn
Assuming I have no involvement at all in what the car decides to do, who's
going to jail if the car kills somebody?

------
nojvek
Website doesn't work well on iOS devices. Can't switch between scenarios.

------
joantune
I completely missed the visuals of the traffic lights... put those in the
description.

------
ianlevesque
It doesn't matter what the self-driving car chooses, because it's so much
safer than having humans behind the wheel. I'm concerned that getting caught
up in the minutiae of these thought experiments threatens to derail actually
implementing these technologies.

------
DonHopkins
Stanislaw Lem's "The Cyberiad" [1] and "Fables for Robots" [2] explore some of
the themes of moral machines.

Here's an interesting part of his original Polish book "Cyberiada" that was
left out of Michael Kandel's [3] excellent English translation, "The
Cyberiad":

Trurl and the construction of happy worlds: Trurl is not deterred by the
cautionary tale of altruizine and decides to build a race of robots happy by
design. His first attempt is a culture of robots who are not capable of being
unhappy (e.g. they are happy even if seriously beaten up). Klapaucius
ridicules this. The next step is a collectivistic culture dedicated to common
happiness. When Trurl and Klapaucius visit them, they are drafted by the
Ministry of Felicity and made to smile, sing, and otherwise be happy, in
fixed ranks (with the other inhabitants).

Trurl annihilates both failed cultures and tries to build a perfect society in
a small box. The inhabitants of the box develop a religion saying that their
box is the most perfect part of the universe and prepare to make a hole in it
in order to bring everyone outside the Box into its perfection, by force if
needed. Trurl disposes of them and decides that he needs more variety in his
experiments and smaller scale for safety.

He creates hundreds of miniature worlds on microscope slides (i.e. he has to
observe them through a microscope). These microworlds progress rapidly, some
dying out in revolutions and wars, and some developing as regular
civilizations without any of them showing any intrinsic perfection or
happiness. They do achieve inter-slide travel though, and many of these worlds
are later destroyed by rats.

Eventually, Trurl gets tired of all the work and builds a computer that will
contain a programmatic clone of his mind that would do the research for him.
Instead of building new worlds, the computer sets about expanding itself. When
Trurl eventually forces it to stop building itself and start working, the
clone-Trurl tells him that he has already created lots of sub-Trurl programs
to do the work and tells him stories about their research (which Trurl later
finds out is bogus). Trurl destroys the computer and temporarily stops looking
for universal happiness.

[1]
[https://en.wikipedia.org/wiki/The_Cyberiad](https://en.wikipedia.org/wiki/The_Cyberiad)

[2]
[https://en.wikipedia.org/wiki/Fables_for_Robots](https://en.wikipedia.org/wiki/Fables_for_Robots)

[3]
[https://en.wikipedia.org/wiki/Michael_Kandel](https://en.wikipedia.org/wiki/Michael_Kandel)

