
We should take moral advice from computers and not the other way around - Gormisdomai
https://bioethics.georgetown.edu/2016/02/oxford-uehiro-prize-in-practical-ethics-should-we-take-moral-advice-from-our-computers-written-by-mahmoud-ghanem/
======
duxup
> To make a case otherwise is to claim one of two things: either that humans
> have access to morally relevant data, which is in some way fundamentally
> inaccessible to computers, or that humans can engage in a kind of moral
> reasoning which is fundamentally uncomputable.

I remember some quotes about metrics and stats that are something like:

"What is easily counted will be deemed important, what is difficult to count
will be deemed unimportant."

Regardless of intent, the data is going to tilt an AI in certain directions
purely based on what it contains, to the exclusion of other information.

And of course Goodhart's law:

"When a measure becomes a target, it ceases to be a good measure."

I'll admit that I don't fully understand the entire paper, but it seems to
assume that computers would just magically have all the perfect information,
and/or that human behavior would not change to skew the results...

------
jawns
Many of us tend to think of morality as a metaphysical issue and believe that
a moral sense is something that we as humans have that machines do not.

But this proposal skirts around that, because it does not require the machine
to have a moral sense. It merely requires the machine to be able to enforce
logical consistency.

So the machine says: IF you as a human want to achieve Ethical Principle
A, THEN here are some actions you can take that are known to help you achieve
that principle.

Crucially, it does not require the machine to _perceive_ Ethical Principle A
as true and make a case for why it's true.

So it's very much an augmented morality device, rather than a machine that
proves moral truths.

Or to put it another way, it doesn't say, "You should do X because after
running a detailed analysis I've determined X is ethical." It says, "You've
determined X is ethical, and here are some ways to achieve X."
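
As a toy illustration of that division of labor, here's a minimal Python
sketch (all names and data are hypothetical, not from the essay): the machine
never evaluates whether a principle is true, it only filters candidate actions
by their declared consistency with the principle the user has chosen.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Principles this action is known to support. Where this knowledge
    # comes from (the user, a curated database) is an assumption this
    # sketch leaves open; the essay does not specify a data source.
    supports: frozenset

def advise(chosen_principle: str, candidates: list[Action]) -> list[Action]:
    """Return the candidate actions consistent with the chosen principle.

    Note what is absent: nothing here judges whether the principle
    itself is true. The machine only enforces consistency between the
    stated principle and the actions.
    """
    return [a for a in candidates if chosen_principle in a.supports]

actions = [
    Action("donate to effective charities", frozenset({"minimize suffering"})),
    Action("eat less meat", frozenset({"minimize suffering", "animal welfare"})),
    Action("buy a faster car", frozenset()),
]

# "You've determined X is ethical, and here are some ways to achieve X."
for action in advise("animal welfare", actions):
    print(action.name)  # -> eat less meat
```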

~~~
Supermancho
> It merely requires the machine to be able to enforce logical consistency.

“There can be no justice so long as law is absolute. Even life itself is an
exercise in exceptions.” “When has justice ever been as simple as a rule
book?”

― Riker, Star Trek: The Next Generation (season 1, episode 7: "Justice")

Because we cannot know all eventualities, we cannot construct arbitrary rules
that mimic morality, much less justice.

~~~
jkingsbery
See also: [https://memory-alpha.fandom.com/wiki/Induced_self-
destructio...](https://memory-alpha.fandom.com/wiki/Induced_self-destruction)

------
mrfredward
In the U.S. at least, most people consider it morally outrageous to eat a dog.
And yet, many of the same people will eat a hamburger. While there are obvious
cultural/emotional reasons people feel this way, I can't think of any logical
reason why eating a cow is less troubling than eating a dog.

Put simply, most popular moral frameworks people follow seem full of
contradictions like this. And yet any attempt to force a different moral
framework on people would be a tyrannical source of misery. Any resolution a
computer could offer to the contradiction above would make a lot of people
deeply unhappy, which is in and of itself a moral problem.

People are emotionally driven, and I'm not sure there exists any logical set
of rules that could define a morality people are happy with.

~~~
jawns
But the proposal does not attempt to force a different moral framework on
people.

It takes your moral framework and objectives as input, and its proposed
output is a set of actions in line with that framework and those objectives.

I have no idea whether a system like that would work in practice, but I think
there are analogies with apps that help people achieve fitness goals. Those
apps probably aren't going to change someone's mind about whether to make
fitness a priority. But if a person has already decided they want to make it a
priority, the apps can suggest routines and exercises that help them achieve
their goals.

~~~
UnFleshedOne
Humans are wonderfully adept at compartmentalizing. I think this is because
comprehensive consistency checks are expensive, which leads to the presence of
conflicting beliefs and the need for cheap ways of resolving them. There is no
cheaper solution to a problem than ignoring it.

If we offloaded consistency checks to machines, which took our goals and
values as input and then graded each decision on how it stacks up against all
of those values, much bullet-biting would ensue...
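
A minimal sketch of what that machine-side check might look like, in Python
(the values, decisions, and impact scores are all hypothetical, picked to
mirror the hamburger example below):

```python
# Hypothetical sketch: grade each decision against *all* declared values,
# surfacing the conflicts a human would otherwise compartmentalize away.

values = ["consistency", "animal welfare", "enjoying hamburgers"]

# Assumed impact of each decision on each value:
# +1 supports it, -1 violates it, absent means neutral.
impact = {
    "order the hamburger": {"animal welfare": -1, "enjoying hamburgers": +1},
    "order the veggie burger": {"animal welfare": +1, "enjoying hamburgers": -1},
}

def grade(decision: str) -> None:
    effects = impact[decision]
    violated = [v for v in values if effects.get(v, 0) < 0]
    net = sum(effects.get(v, 0) for v in values)
    print(f"{decision}: net {net:+d}, violates {violated or 'nothing'}")

for decision in impact:
    grade(decision)
# Either choice violates some declared value: the machine reveals the
# contradiction, but resolving it (or ignoring it) stays with the human.
```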

Nah, who are we kidding. Most people will keep resolving machine-revealed
inconsistencies with the same tried-and-true method. "I value consistency" /
"I'm a kind person who values animal welfare" / "I like hamburgers" is merely
another set of contradictory things to compartmentalize away. You don't even
have to pick two. Just have it all and flex the ignoring muscle.

~~~
AstralStorm
Let's also not forget that people are extremely adept at hypocrisy.

Such as valuing honesty but lying when it suits them.

And then there are ethical rules that get set aside in special situations,
such as: when would you consider killing another person justifiable? (The
vigilante problem.)

Ethics are not absolute for most people, and many don't even know that theirs
are not. The answers also change based on emotional state.

------
Jeff_Brown
Human interactions are complicated, and ethics is hard. This is why economists
have jobs.

Consider the eminent domain problem: a government is considering taking
someone's farm to build a road. Relevant facts include: how many people would
benefit from using the road, and by how much? (Enormously complex question --
one has to model the rearrangement of businesses and homes as a result of the
road's existence.) How much does the citizenry value property rights?
(Enormously complex -- how do you quantify the benefit of property rights in
the same terms as the benefit from physical goods and services? How do you
model the effect of the precedent set by this particular case on future
cases?) What is the value of the farm in question? (Slightly easier, but still
not simple: can you just compare this farm to neighboring farms? What if it's
special, because of its land, its history, its brand recognition, or simple
sentimental value?)

Outside of certain narrow, prescribed domains, I'm sure artificial ethics
would require artificial general intelligence. When I see academic economists
being replaced by computers, I'll believe it's within reach.

------
exdsq
I don't know if OP is the author, but if so, could you suggest some papers
where people actually go about implementing ethics in a system, even just as
a very basic PoC? I'd be really interested. I've looked for papers, but even a
survey completed as recently as 2018 suggested that the closest thing we have
is a graph database sitting on top of the Moral Machine (which is a survey of
ethical questions).

------
tastroder
Could we please get that changed to the actual article and add a (2016)?

[http://blog.practicalethics.ox.ac.uk/2016/02/oxford-
uehiro-p...](http://blog.practicalethics.ox.ac.uk/2016/02/oxford-uehiro-prize-
in-practical-ethics-should-we-take-moral-advice-from-our-computers-written-by-
mahmoud-ghanem/)

That version contains much more of the debatable content. But even from the
linked excerpt, I find it hard to draw the rather absolute conclusion the
title suggests. Computers are indeed very good at both points, but only after
a human has carefully crafted a problem-specific set of instructions to hold
their virtual hand.

I think the bias and modelling discussions of the last few years sufficiently
show that it's rather non-trivial to map these functions onto
algorithms/ML/..., and that the state of the art, for the foreseeable future,
is not particularly good at many aspects of the tasks mentioned in the
article.

The long form even draws analogies between asking your phone for food
recommendations and asking it for ethical considerations. Every recommendation
engine out there is being actively gamed; that was the case in 2016 just as it
is now. There's some truth to the article of course, e.g. more carefully
crafted systems that actively address human biases, but we're nowhere near a
point where we could even remotely let our computers take over ethical
considerations - even if it's just a recommendation.

------
Santosh83
So more statistical processing and comparisons in the end, and the final
decision is still left to the user? Seems like the proverbial Elves of
Tolkien's world, of whom it is said: "go not to them for advice, for they will
say both yea and nay..."

------
jkingsbery
> After all, a common complaint about practical ethics is that there are too
> many factors for any one person to consider. A similar complaint could be
> made about weather forecasting, economic modelling and space travel – and
> yet we seem to be able to do all of the above just fine with the aid of
> computers.

Space travel is very hard, but it's "just physics." Weather forecasting is
harder, and we get it wrong a lot of the time. Economic modelling is harder
still, and our inability to do it accurately was arguably a major contributor
to the subprime mortgage crisis that led to the last economic downturn. So
taking moral advice from computers seems like a bad idea.

I think it helps to distinguish which problems are "wicked" and which are
not - while it may be technically true that some function is representable by
a Turing machine, as a practical matter the amount of data would be too large,
and how one measures the "output" of the function carries so much ambiguity,
that it's not practically doable. (I don't know the original source of the
term "wicked problem," but others have written a lot about why such problems
are different.)

------
white-flame
Machine learning does nothing more than reinforce existing statistics, and
that is completely amoral.

This very day in the USA, computers are deciding whether prisoners are
eligible for parole, and the biggest statistical factor weighing against them
is their skin color. That goes completely against the ideals of a country that
holds to notions of free, equal citizens and due process.

------
kangnkodos
Anyone considering this question should read Joseph Weizenbaum's 1976 book,
Computer Power and Human Reason.

He considers whether an AI could be a good judge, and decides that it could
not. Even though all the facts, rules, and heuristics could be fed into an AI
judge, the element we would not be able to program is human compassion. He
concludes that even though there are some tasks a computer might be able to
do, there will always remain decisions that should be made by humans.

------
vectorEQ
I think that because all humans carry imperfect information, their opinions on
what is good, bad, or ethical vary, and that ensures that no system can ever
be made which 'behaves 100% ethically'. It might behave that way with respect
to the opinion of the person claiming it, but to another person it can seem
completely unreasonable or unethical...

Since you can't create an AI which takes into account all the flaws present in
every human's knowledge and consciousness, a system of such perfection is
impossible to make. (Even these flaws are often just perceived flaws, and
whether something counts as a flaw is itself a subjective matter resting upon
other subjective matters.)

It's not about an AI having all the data; it's more about understanding what a
lack of data means to humans, how it affects their decisions and
interpretations, and how the same data can be interpreted in many different
ways.

Even if all humans were exposed to exactly the same data, they would likely
still hold different opinions and interpret that data differently, leading to
completely different decision-making processes... I think this is, at the
moment, inherently impossible to create within, or account for in, current
computers or programming.

If you reversed it and had humans take all their morals and ethics from
computers, what would be left of humans? Isn't that what makes a human - the
ability, and the inability, to do this for themselves? I think no one is
looking for, or working towards, a world where only one human exists in
multitude. I think the work should focus on preserving the uniqueness of
identity while maximising its potential within that uniqueness. That also
makes me of the opinion that AI should be specialised within domains of
operation, and not implemented in a general fashion.

Perhaps an AI system could exist which comprises many specialised AI systems,
making it more generally applicable based upon the inputs from those
specialised systems - who knows. But one system and one data set will never be
able to cover the inherent uniqueness within humans.

You can argue about some rotten-apple humans with 'bad behaviour', but even
the good people you know are wildly different from you. Admit it: you are not
them, and they are not you, and that's how it should be.

------
gjm11
The link here goes to a page that provides just the very beginning of the
essay and a bit of context, along with a link to
[http://blog.practicalethics.ox.ac.uk/2016/02/oxford-
uehiro-p...](http://blog.practicalethics.ox.ac.uk/2016/02/oxford-uehiro-prize-
in-practical-ethics-should-we-take-moral-advice-from-our-computers-written-by-
mahmoud-ghanem/) where you can find the rest of it.

------
vearwhershuh
Computer: there is a small group of unpopular people causing social problems.
Liquidating them humanely would increase net happiness at the least cost.

Shall we proceed?

~~~
Digit-Al
That's a strawman. Most moral frameworks forbid killing, so that would be
built into it. Therefore murdering a bunch of people to increase happiness
would not even be on the table.

~~~
exdsq
> Most moral frameworks

Who picks the frameworks used? What about more contested topics with a closer
social split, like abortion? Who decides what is and isn't ethical, and how
does this cope with social divides, or with places that have totally different
moral frameworks?

Honestly I think this problem is probably harder than the AGI part!

~~~
AnimalMuppet
There are some who regard abortion as exactly "murdering a bunch of people in
order to increase happiness". There are even some (hopefully a smaller number)
who regard genocide the same way.

------
ropiwqefjnpoa
I read this as: "We should take moral advice from the people who program the
computers, and not the other way around."

~~~
james-imitative
Please unlearn this reading habit. ;)

------
blackbear_
Very interesting read, but I strongly disagree with it. I could mention
several technical issues, but I do not think that would lead to a fruitful
discussion.

Instead, think about this: why would people _not_ behave ethically? The point
of the essay seems to be that they are either not aware of what the best thing
to do is, or not aware of what the consequences of certain actions would be.

I would argue that no, (most) people do know, but for some reason would still
behave unethically. Who needs a reminder that driving while drunk puts lives
in danger? Nobody. It is well known that more elaborate/advanced ethical
reasoning does not lead to more ethical behavior.

And most ethical dilemmas are not that simple. Consider:

> If the output of the computer ever contradicts with the human’s own
> intuitions, the human would be provided with an example of how and why the
> moral framework they have chosen does not match up with what they consider
> to be moral.

A human could just as well come up with several arguments for why their
intuition is better than the computer's advice. Because that's how ethics is:
there is no single right answer; there are many good possibilities and many
bad ones. Humans do not need help identifying the bad ones, and only an oracle
could say which alternative is the "best".

In fact, this seems to be the main purpose of such an ethical assistant: to
"remove important epistemic limitations", i.e. to predict the future.

> Suppose every driver had access to a good estimate for how likely they were
> to be involved in a traffic accident, every time they decided to go out.

This seems utterly impossible. Most accidents are caused by random events, and
the only reasonable estimate for this probability is between zero and a number
so tiny that it would not make any practical difference.
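
To make the orders of magnitude concrete, here's a back-of-the-envelope
sketch in Python. Every input is a round, illustrative assumption, not a
sourced statistic; the only point is how tiny the per-trip number comes out.

```python
# Back-of-the-envelope: average per-trip probability of a crash.
# All inputs are illustrative round numbers, not sourced statistics.
crashes_per_year = 6_000_000       # assumed annual crashes, US
drivers = 220_000_000              # assumed licensed drivers, US
trips_per_driver_per_day = 2       # assumed

trips_per_year = drivers * trips_per_driver_per_day * 365
p_crash_per_trip = crashes_per_year / trips_per_year
print(f"{p_crash_per_trip:.1e}")   # ~3.7e-05 per trip
```

And that is only a population-wide base rate; conditioning it on any
individual trip adds essentially no usable signal.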

TLDR:

- I am not going to argue about whether computers might be able to perform
decent moral reasoning.

- But if they can, it cannot be much better than what humans can already do.

- And even if it were better, humans would likely not follow the advice.

- Simply because people sometimes do not want to behave rationally.

