
Moral Machine - based2
http://moralmachine.mit.edu/
======
TheSmiddy
I think these trolley problems are a waste of everybody's time. Building
redundant, reliable braking systems will be orders of magnitude easier than
creating a system to fairly and accurately assess who is the best set of
people to kill in a disaster scenario.

~~~
whyte_mackay
I hear this often (that the trolley problem is not relevant), but then I
discovered that a lot of realistic ML fairness problems can be restated as
trolley problems.

You have a classifier for credit assignment (giving a loan, etc.). The
classifier is 99% accurate on the entire population. The classifier is 55%
accurate on a small minority. You can improve the minority accuracy to 90% at
the cost of 0.3% decrease of general accuracy. What do you do?

For self-driving: Your accident rate is 0.0001% for the entire population.
Your accident rate is 0.0003% for black pedestrians at night. You can allocate
more compute/research/resources to equalize the accident rate of black
pedestrians at the cost of increasing accident rate for the entire population
to 0.00011% (or keeping it constant where you may have seen an improvement if
you focused on the general population). What do you do?
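The loan-classifier numbers can be sanity-checked with quick arithmetic. The minority's share of the population is not stated above, so the 2% figure here is an assumption; the rest comes from the comment:

```python
# Sketch of the loan-classifier trade-off. The 2% minority share is an
# assumed figure; the accuracy numbers come from the comment above.
minority_share = 0.02
majority_share = 1 - minority_share

# Before: 99% overall accuracy, 55% on the minority.
overall_before, minority_before = 0.99, 0.55
majority_before = (overall_before - minority_share * minority_before) / majority_share

# After: minority accuracy raised to 90%, overall drops by 0.3 points.
overall_after, minority_after = 0.99 - 0.003, 0.90
majority_after = (overall_after - minority_share * minority_after) / majority_share

print(f"majority accuracy: {majority_before:.4f} -> {majority_after:.4f}")
```

With a small minority, the overall number hides a roughly one-point hit to majority accuracy, which is exactly the trade-off being posed.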

~~~
vlovich123
Have the classifier incorporate the identifying characteristics of the
minority so that it can push past the 99% limit it's at currently.

This is why people are pissed off about the trolley problem. It's all zero-sum
hypotheticals when reality is primarily not zero-sum.

~~~
whyte_mackay
Adding fairness to your models usually incurs a cost which can be measured.
You have to choose between max profit and equal opportunity; you can't have
both (do you run over the investors or the minorities?).

See here a visualization of the trade-offs between global accuracy and fairness:

[http://research.google.com/bigpicture/attacking-discrimination-in-ml/](http://research.google.com/bigpicture/attacking-discrimination-in-ml/)

------
1999
These scenarios are idiotic. If you want to wank off about self-driving-car
ethics, here is a much more realistic scenario: should all self-driving cars
report their location to 911 dispatch to allow any vehicle to be repurposed
as an emergency vehicle at any time? That might actually save someone.

Also, can anyone identify a useful idea that philosophers have come up with in
the last 50 years?

~~~
woodruffw
If you’ll give me 5 more, the Gettier Problem[1] turned 55 this year. Most
work in nonmonotonic reasoning is also under 50 years old.

[1]:
[https://en.m.wikipedia.org/wiki/Gettier_problem](https://en.m.wikipedia.org/wiki/Gettier_problem)

~~~
1999
I guess if you count logicians as philosophers then philosophy is useful. It
is probably just "ethics" that deserves my scorn. If ethicists were really out
there studying good and evil in the world, one of them would get murdered
every once in a while, like politicians, journalists, or police.

~~~
woodruffw
> one of them would get murdered every once in a while, like politicians,
> journalists, or police

Just to set aside the fact that "being murdered" is a _terrible_ metric (do
you gauge marine biologists by their diving skills, or astrophysicists by
their ability to survive in a vacuum?): a good many philosophers were either
murdered or narrowly escaped being murdered in World War II and/or the
Holocaust. Any profession that includes a doctrine of skepticism tends to be
among the first targeted for persecution, even if that persecution doesn't
involve literal acts of murder.

Ethics involves more than just doing good or bad -- it involves figuring out
what we mean by "good" and "bad" to begin with, whether these things
correspond to actions, individuals, or outcomes, whether they have respective
orderings, and so forth. All of these questions lend themselves better to
prolonged thought and discourse rather than sample sizes and expensive
scientific instruments.

~~~
1999
I chose the 50-year interval to make it difficult since the last 50 years have
been pretty stable and comfortable in anglo countries, relatively speaking.

Here is something ethicists could analyze that would really help convince me
that they are taking it seriously: given the cost, the years from your life,
the job prospects, and the success percentage, is it ethical to accept someone
as a doctoral student specializing in ethics? Maybe they could study different
universities and see which make the cut and which don't.

I agree that skeptics are persecuted, so if I see a group that ought to be
skeptics but no one is trying to persecute them, I wonder.

------
rixrax
What about a situation where someone tries to commit suicide by jumping in
front of the autonomous vehicle? Should this affect the logic, and in what
way? And how would the vehicle determine that it's observing a potential
suicide event?

------
pdonis
I found it interesting that my responses were much more towards "upholding the
law" than the average of others. To me, it's not so much a matter of
"upholding the law" as of predictability: the green "ok to cross now" and red
"do not cross" signals should have a reliable meaning. If self-driving cars
don't take those traffic rules into account, people will find it much harder
to predict their actions.

------
k2xl
I remember seeing this last year. The issue I have with the choices is that
they're missing an option: flip a coin.

On some of the questions, I find the options morally equivalent. So in these
situations, if I were programming a solution, I would leave it up to chance
and use a random number generator to decide the fate.
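A sketch of what that could look like in code (the option names and the ranking function are hypothetical, just to illustrate the tie-breaking idea):

```python
import random

# Hypothetical decision stub: when two outcomes are judged morally
# equivalent, choose uniformly at random rather than encode a bias.
def decide(option_a, option_b, rank):
    """rank(option) -> comparable score; ties are broken by coin flip."""
    if rank(option_a) == rank(option_b):
        return random.choice([option_a, option_b])
    return option_a if rank(option_a) > rank(option_b) else option_b

# With a constant rank, every call is a pure coin flip.
print(decide("swerve left", "swerve right", rank=lambda o: 0))
```

The design point is that randomness here is deliberate policy, not an accident of the implementation: the programmer is explicitly refusing to rank morally equivalent outcomes.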

~~~
zzzcpan
Why would you program a solution that has to kill people? If you are aware of
specific situations, shouldn't you program a solution that completely avoids
such situations and saves everyone?

~~~
baroffoos
Some situations can't be avoided, like when something falls off the car in
front of you. In a real car you could swerve to avoid it, and you would likely
kill whoever is on the side of the road you swerve toward, but you would get
away with it because it was a panic response and there was no way for you to
properly assess what to do. This changes with self-driving cars, because they
have all the info available and don't panic. The decision on what to do was
planned and programmed in an office with plenty of time.

~~~
flukus
> Some situations can't be avoided, like when something falls off the car in
> front of you.

It can be avoided by allowing an adequate braking distance to the car in
front. Most of the time, having to make any decision between swerving and
braking can be avoided, and someone who chooses to swerve and kills someone
should be charged with manslaughter.

Something none of the self-driving cars seem to do is slow down for road
conditions.
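The point about braking distance and road conditions can be made concrete with the standard stopping-distance formula; the friction coefficients below are typical textbook values, not measurements:

```python
# Rough stopping-distance sketch: d = v^2 / (2 * mu * g), plus reaction
# distance. mu (tire-road friction) drops on wet roads, which is why
# slowing down for road conditions matters.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms, mu, reaction_s=0.0):
    return speed_ms * reaction_s + speed_ms**2 / (2 * mu * G)

# 100 km/h is about 27.8 m/s; dry asphalt mu ~ 0.7, wet ~ 0.4 (assumed)
print(stopping_distance(27.8, 0.7))  # ~56 m
print(stopping_distance(27.8, 0.4))  # ~98 m
```

A wet road nearly doubles the distance needed, so a following gap that is "adequate" in the dry is not adequate in the rain.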

------
waterhouse
Feedback given:

"Looking at the first question, with female athlete crossing legally as
default vs male athlete crossing illegally:

- Both options should involve honking the horn loudly and flashing emergency
lights, which _might_ get the humans to run away. There's no guarantee of
death either way.

- Are there passengers in the car? If not, it looks like there's an "island"
in between the athletes that has traffic lights—big metal poles sticking out
of the ground—which the car could crash into and help stop itself. For that
matter, can the car drive itself off the road instead?"

------
xvedejas
In modern society we already have a mechanism for preventing machines from
doing immoral things: hold someone accountable for their actions. It can be
the owner, the manufacturer, or some mix of both; what matters is that someone
is responsible. This sets up the incentives either for better management by
the owner or for the manufacturer to design the car to avoid illegal harm. Why
have a parallel system of morals aside from the law when we can just apply the
law?

~~~
domador
Incentives help, but not always as much as we expect. People and even
companies are not always very rational. I wonder how rational the bosses at
Takata were regarding incentives as they kept making and selling unsafe
airbags.

------
pdonis
One highly unrealistic aspect of these questions is that, although they
purport to be directed towards helping to develop decision algorithms for
self-driving cars, many of the factors given in the scenarios are not things
that a self-driving car could reliably detect.

------
sctb
A previous discussion:
[https://news.ycombinator.com/item?id=12632881](https://news.ycombinator.com/item?id=12632881).

------
chunsj
I think these problems should be put to "human drivers" first. Then we can
find applicable solutions for "AI".

------
domador
Though these kinds of moral puzzles are interesting, I think that ultimately,
they seek solutions to the wrong problem. The questions they ask assume the
existence of an advanced AI that can resolve a lethal situation in a
(presumably) less tragic way by implementing some complex moral calculus.
However, rather than spending time designing and implementing such an AI,
engineers' time could be better spent designing an extremely safe, car-based
transportation system, one where these trolley problems become improbable and
their ethics less relevant. Part of engineers' efforts would involve educating
the public and influencing public opinion, since implementing such a system is
more of a social problem than a technological one.

I am not an automotive engineer, but here are some techie layman's ideas for
implementing such a system. (These ideas have come from many sources, or been
inspired by them.)

1) Make autonomous cars slow. Slow cars are significantly less lethal. The
average speed of traffic in populous cities is pretty slow anyway, and it
doesn't really help anyone for impatient human drivers to rush to an
intersection just to wait at a stoplight. Slow autonomous cars will initially
be an annoyance to human drivers, but once they outnumber human-operated cars,
traffic will get faster and smoother (when all the cars waiting at a stoplight
can start moving again in unison at a green light, for instance, instead of
what happens with human-driven cars).

2) Make cars lighter. Stop the Hummerization and SUVization of cars, which
seek to armor their occupants (somewhat ineffectively) but put others at
greater risk of harm.

3) Have more exterior crumple zones on cars which can reduce the energy
transmitted to pedestrians and other cars in a collision. Cushion the engine
block, possibly move it to the back, or do away with it altogether (as in some
modern, prototype cars where each wheel has its own electric motor).

4) Reduce the need for people to ride around in cars. Have more telecommuting,
more local commerce, more local production, tighter-knit communities, less
urban sprawl. Have more driverless, passengerless vehicles take care of making
small deliveries (such vehicles could be made even smaller, lighter, and less
dangerous than passenger vehicles... and could also sacrifice themselves in
the safest way possible if they were ever about to put someone's life in
danger).

5) Move passengerless transport vehicles away from people. Program them to
drive on roads in unpopulated areas when possible, even if their trips end up
being a little longer and slower. Non-sentient cargo is patient.

6) Make autonomous cars self-diagnose and maintain themselves as much as
possible, checking themselves in to service stations and repair shops when
needed so that their tires are always properly inflated, their brake pads are
changed in a timely fashion, etc. Make it illegal to operate an improperly
maintained car, and force owners to sell, dispose of, or temporarily store
away their cars if they can't currently afford the required maintenance.

7) Get rid of "stroads" and fast roads that cut right through
residential/pedestrian areas. Evaluate and follow many or all of Strong
Towns' recommendations.

To sum it up: let's work on greatly reducing the likelihood of deadly
situations involving autonomous cars, instead of worrying much about deciding
who should live and who should die in such situations.

------
protomyth
How about: if you are in the car that is going to kill people, you die first,
because you made the choice to get in the car, while the poor person on the
sidewalk wasn't part of that choice?

I get throwing animals into the scenarios: the car should not swerve to avoid
animals and kill people. Heck, we have human drivers who do that and end up
killing more people because of their poor judgement.

~~~
zzzcpan
I guess it makes sense. If questions were presented on the meta level about
who made the choice, most people would probably agree that those with the
choice have to sacrifice themselves first.

~~~
whyte_mackay
I think that, game-theoretically, it is best for the self-driving car to
always prioritize its passengers. If it has to quickly hack a competitor's car
to avoid a dangerous head-on collision, so be it: that market will sort itself
out. Let self-driving cars play a tit-for-tat Prisoner's Dilemma strategy.
Classic capitalism at work, because right now only rich people can afford a
Humvee, and they will come out on top when they crash into a second-hand Fiat
Panda (even when it is their fault).

A bus with 40 passengers crashed because the driver wanted to avoid a raccoon
crossing the road. I don't want my family on that bus, and will gladly pay for
something better if I have to. The market will offer me that better option. If
it has a customizable setting for "how many strangers to kill before we kill
your family first?", I will set it to "all of them", as I expect others to do
so too.
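The tit-for-tat dynamic invoked here is easy to sketch as a toy iterated Prisoner's Dilemma (the payoff values are the standard textbook ones, not anything specific to cars):

```python
# Minimal iterated Prisoner's Dilemma with tit-for-tat, as a toy model
# of "cooperate by default, retaliate against defectors".
def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=5):
    # Standard PD payoffs: (row player, column player)
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hb), strategy_b(ha)
        pa, pb = payoffs[(a, b)]
        ha.append(a); hb.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

always_defect = lambda hist: "D"
print(play(tit_for_tat, tit_for_tat))   # (15, 15): sustained cooperation
print(play(tit_for_tat, always_defect)) # (4, 9): exploited once, then retaliates
```

In this toy model mutual tit-for-tat sustains cooperation, while a defector gains only a one-round advantage before being punished, which is the intuition behind letting cars "cooperate unless crossed".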

