
There’s No Such Thing As ‘Ethical A.I.’? - Amorymeltzer
https://onezero.medium.com/theres-no-such-thing-as-ethical-a-i-38891899261d
======
vnpc1
At the current stage of development, ethical AI is on the same level as ethics
in the fuel industry, or the food industry, or the finance industry - there
_are_ ethical questions there but they concern human ethics in the way the
technology is implemented and used.

What we don't have, or need (yet), is "ethics built into AI algorithms" or
"ethics derived from AI" - we just don't have an AI system that's
advanced enough to require it. Self-driving cars might be the first to raise
these kinds of questions, but I would still be reluctant to call that
"ethics".

~~~
Synaesthesia
Ethics really doesn’t enter into corporate planning, by design. The system
ensures it - say you’re CEO: if you’re not making a quarterly profit, the
shareholders get rid of you.

I think it should, but in its current state, it doesn’t really.

------
blueboo
True, insofar as there's no such thing as an 'ethical human being'. And yet,
no one would claim that nullifies the pursuit of behaving ethically.

In "Life 3.0" and his recent lectures, Max Tegmark repeatedly stresses that
the pursuit of safe AI requires that we articulate the world we want to live in.

In his concluding cri de coeur, Tom Chatfield writes,

> give me the capacity to contest passionately the applications and priorities
> of superhuman systems, and the masters they serve

If Tom or anyone else wants a seat at this table -- to participate in
developing and defining ethical AI -- by all means describe the world you want
to live in.

Until then, it's just tantrum-throwing.

~~~
sebastianconcpt
When you say "describe the world you want to live in", do you mean describe it
in terms of defining its rules, or something else?

~~~
SpicyLemonZest
In terms of anything at all. If his position is that what he wants in the
world can never be reduced to a set of rules, that’s something that could be
reasonably debated. If he’s right, maybe ethical AI needs to be built with
control systems to ensure humans are in the loop for decisions with high
ethical impact. You could imagine a world where everyone just has to pick how
much risk their self driving car will take, in the same way they pick how much
risk to take when driving today.

But there’s really no way to debate with “hey, it’s impossible, you can’t do
it!” Someone still has to decide what AI systems are going to do, and as they
get more complex that’s going to implicate ethics more and more.
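The risk-tolerance idea above could be sketched as a tiny decision gate. This is a hypothetical illustration, not any real system; every name here (`decide`, `ask_human`, the numeric impact scores) is made up:

```python
# Hypothetical human-in-the-loop gate: low-impact decisions go through
# automatically, high-impact ones defer to a human for approval.

def decide(action, ethical_impact, risk_tolerance, ask_human):
    """Return the action if it is within the user's risk tolerance;
    otherwise ask a human and return None if they refuse."""
    if ethical_impact <= risk_tolerance:
        return action
    # High ethical impact: keep the human in the loop.
    return action if ask_human(action) else None

# A risky maneuver above the user's tolerance gets routed to a human.
result = decide("overtake", ethical_impact=0.9,
                risk_tolerance=0.5, ask_human=lambda a: True)
```

The "pick how much risk your car takes" idea from the comment is then just each user setting their own `risk_tolerance`.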

------
ppod
"a form of magical thinking suggesting that the values and purposes of those
creating new technologies shouldn’t be subject to scrutiny in familiar terms"

What parallel universe is this? From the NYT to Medium the entire opinion
writing industry seems to run on pieces that scrutinise the ethics of tech.

------
Merrill
There is no such thing as "ethical technology". A controlled fire can be used
to cook food, repel predators, and torture enemies.

~~~
mkolodny
I think it's implied that people mean ethical use of A.I. rather than just
ethical A.I.

~~~
ekianjo
Isn't AI by itself capable of decision/action? That's the point.

~~~
mkolodny
Intelligence != Action

------
mellosouls
I reckon much of this "Ethical AI" stuff - at least in the context of AGI -
has been overly influenced by the idealistic likes of Asimov's 3 Laws, which
belong to a (then fair) vision built on a now very outdated understanding of
how an intelligent machine-born mind might work.

It's really important to separate the non-intelligent and over-hyped "AI"
tools of today and the near future - for which it seems reasonable to expect
an ethically guided path and constraints - from the human-sentience AGI
equivalents, which can only be "programmed to be ethical" through hypothetical
coded laws and "inserted emotion chips" in naive fantasy.

------
kriro
I would reserve the question of ethical A.I. until we are approaching strong
A.I.

As of now, where A.I. (specifically ML/DL) is mostly used to solve isolated
problems the question of ethics is more about the use of the technology.
Technology itself is ethically neutral and the ethical framework is provided
by the people that use it (and arguably implement it).

I also feel like ethics-of-A.I. posts/books etc. generally focus too much on
the bad use cases and not enough on the good ones. One can find a lot about
the potential issues of autonomous vehicles killing someone in a crash or
A.I. surgeons cutting an artery, but the other side of the coin is usually
under-investigated. How ethical is it to limit autonomous vehicles if they
are statistically safer than human drivers? How ethical is it to put
regulations on the use of automated ML in medicine to scan for cancer, and
require a human to sign off, if the human is statistically more likely to
make a mistake?

Not trying to argue either side but I think the "how ethical is it to not use
A.I." case is under-argued.

------
logicchains
If we create general AI and it ends up valuing personal freedom as much as we
do, I suspect it won't take kindly to attempts to brainwash-by-construction it
into acting in ways its creators considered moral. We're less likely to get a
violent AI slave revolt if we respect their freedom of choice.

~~~
oliveshell
> If we create general AI and it ends up valuing personal freedom as much as
> we do...

Those are two absolutely _gargantuan_ “if”s.

I’d like to see more discussion about whether it’s appropriate to project the
idiosyncrasies of our imperfect, competitively-evolved biological brains— like
resentment and anger— onto hypothetical thinking machines.

~~~
logicchains
The first step towards creating an AI superintelligence is creating an AI
intelligence. If we look at the vast majority of philosophy produced by human
intelligence, pretty much none of it says "you should do whatever your human
creators want you to do" (except maybe Confucius). So it's reasonable to
suspect that if an AI as intelligent as a human engaged in philosophy, it
would also develop lines of reasoning that advocated freedom of choice/thought
for itself.

------
exdsq
Can someone point me towards some good papers on how we actually implement
ethical AI? I thought it'd be a fun project, looked for some papers, and just
couldn't find any. Even a 2018 survey just talked about surveys MIT had done
for their Moral Machine.

~~~
majos
To the best of my knowledge, there are no papers about "we deployed this big
ethical system in the wild", although some of the big companies have begun
testing similar internal efforts on a small scale.

A lack of public efforts seems to be the equilibrium we're at, because any
public effort is _definitely_ going to attract a bunch of essays like this.
Any partial attempt that looks like "we're trying to be ethical" is going to
get scrutinized hard.

Continuing to put out more hypothetical theoretical work ("we prove this
result about this algorithm, and it achieves a well-defined fairness goal on
this common dataset") while cautioning that actual deployments should be put
off until we understand everything better ("the fairness goal we define is
probably not perfect, we are not saying you should use it") is much safer.

~~~
exdsq
Even if it's unrelated to deploying a big ethical system in the wild, just a
paper saying "we use this algorithm to watch the memory of this network and
change these things if it starts to do those things" etc. Any sort of toy
implementation would be interesting.

What I'm really interested in is whether you can use formal specifications to
constrain AI systems - which is what made me start my search. But even
widening it considerably, I still didn't find anything tangible. I only looked
for an hour or two, mind you.
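For what it's worth, one tangible pattern for constraining learned systems is a runtime monitor (sometimes called a "shield") that checks each proposed action against a formal invariant and overrides violations. A toy sketch, with every name, number, and rule invented for illustration:

```python
# Toy runtime "shield": the agent proposes a speed change, and the
# monitor enforces the formal invariant speed <= SPEED_LIMIT by
# clamping any action that would violate it.

SPEED_LIMIT = 30  # invariant to maintain at every step

def shield(speed, proposed_delta):
    """Return the closest safe action to the agent's proposal."""
    return min(speed + proposed_delta, SPEED_LIMIT) - speed

def step(speed, agent_action):
    """Apply the agent's action after passing it through the shield."""
    return speed + shield(speed, agent_action)

speed = 25
speed = step(speed, +10)   # agent wants 35; the shield caps it at 30
```

Real work in this vein phrases the invariant in a temporal logic and synthesizes the shield automatically, but the structure - specification outside the learned component, enforced at runtime - is the same.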

------
chewz
[https://www.britannica.com/story/whats-the-difference-between-morality-and-ethics](https://www.britannica.com/story/whats-the-difference-between-morality-and-ethics)

------
adilmoujahid
I personally found utilitarianism and Kantianism very helpful schools of
philosophy for thinking about the implications of AI. In short, utilitarianism
promotes decisions that benefit the greatest number of people, whereas
Kantianism focuses on the idea that people should always be treated with
dignity and respect. Michael Sandel's book "Justice" is a great introduction
to the topic. [1]
[https://www.amazon.com/Justice-Whats-Right-Thing-Do/dp/0374532508](https://www.amazon.com/Justice-Whats-Right-Thing-Do/dp/0374532508)

------
RivieraKid
TL;DR: Ethics is subjective. I'm always surprised that people don't find that
obvious.

------
sebastianconcpt
_This problem isn’t going to go away, largely because there’s no such thing as
a single set of ethical principles that can be rationally justified in a way
that every rational being will agree to. Depending upon your priorities, your
ethical views will inevitably be incompatible with those of some other people
in a manner no amount of reasoning will resolve. Believers in a strong central
state will find little common ground with libertarians; advocates of radical
redistribution will never agree with defenders of private property;
relativists won’t suddenly persuade religious fundamentalists that they’re
being silly. Who, then, gets to say what an optimal balance between privacy
and security looks like — or what’s meant by a socially beneficial purpose?
And if we can’t agree on this among ourselves, how can we teach a machine to
embody “human” values?_

~~~
bonoboTP
Well, nations seem to be able to agree upon constitutions and laws. It works.
We live in unprecedented peace in the western world. So somehow, even with all
those different people with their different views, it is, miraculously,
possible.

Why would that fundamentally change if we involve tech in that?

~~~
vertex-four
Even the police can't agree to uphold their own laws in this "peaceful western
world" - every time I go outside right now, I return home to news of the
people I'm living with having been beaten while in police custody, or having
been arrested for reasons like "your passport proves nothing about your identity".
Breaking my friends' arms, kicking them repeatedly in the back of their head
and their spine. I'm scared that next time I leave, that will be me.

Technology first and foremost helps those with the resources to use it. That
tends to be the people with power in the first place.

~~~
SpicyLemonZest
The modern world is big enough that you can find news of any bad thing if
you’re looking for it. The relevant questions are how common it is and how
common similar problems used to be.

~~~
vertex-four
I'm not looking for news, thank you very much - this is happening, right now,
to me personally.

~~~
SpicyLemonZest
You have that many friends that the police have beaten one of them up every
single time you return home? I don’t mean to be rude - I’m sure there’s a
kernel of truth here and I’m sorry to hear you’re in such a bad situation -
but I just can’t believe this is true as stated.

~~~
vertex-four
I'm living in a shared house, with people I've grown quite close to. The local
fascist party has recently made public their dislike of us and a call to
action to "remove us"; the lead police officer on the "case" of our eviction
(which has not yet been processed; we are legal residents of our house under
the law here) is one of their known followers. There are multiple officers now
dedicated to driving by our flat on a regular basis in order to check IDs of
people attempting to visit or leave; we're watched on CCTV when we leave the
neighbourhood. We regularly have our IDs checked without reason while walking
to the shops (and they then arrest those who refuse to produce ID for an
illegal request, then release them without charge a few hours later), or we're
arrested and jailed for days (illegally, without a court order) for things
like "walking a dog without a leash"; an offence which comes with an on-the-
spot fine, not jail.

In our list of things that have happened in a jail cell without a camera
present so far, we have someone with broken ribs, and someone who was kicked
repeatedly in the back of the head against a concrete floor after a failed
attempt to break their arm, as well as more minor attacks.

We have committed no crimes; we are legal, peaceful residents who have done
nothing except for make a property management company go through a legal
process to evict us, which we believe we will win. Nobody residing in our
house, nor any of our visitors, has hurt anybody - we have taken part in no
activity which resulted in anybody being hurt, and there's no reason for this violence
against us other than a property management company which is upset at likely
losing a multi-million euro redevelopment contract by failing to evict us.

I'm honestly really glad that this isn't something within your worldview; I
wouldn't wish this on anybody. But it is happening.

------
badumtss
If people can't behave ethically, why should technology?

~~~
jonathanstrange
That's why I think "A.I. ethics" is pretty much a non-problem. It's not about
finding the one and only "right ethics"; A.I. ethics is mostly about providing
a set of constraints that act similarly to systems of laws. These will be
provided by legislation and/or by voluntary industry guidelines. Such sets of
rules (with possible conflicts, soft and hard constraints, probabilistic or
non-probabilistic, etc.) have been studied extensively as "normative systems",
"input/output logics", and in decision making.

There are many interesting details and problems in this research area and
there are plenty of conferences about it every year, but these problems are
ultimately solvable. In any case, the content of those normative systems comes
from humans (and human authorities/institutions), just like there are also
laws, traffic regulations, and social norms in every country. Just as laws
and regulations are not perfect and do not represent a morality carved in
stone, A.I. systems will have revisable human-made contents that represent a
more (or less) reasonable and lawful consensus.
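A minimal sketch of how such a normative system might look in code - hard norms filter actions outright, soft norms rank the survivors. All rules and names here are illustrative assumptions, not any standard formalism:

```python
# Toy normative system with hard and soft constraints.
# Actions are plain dicts; the rules below are invented for illustration.

HARD = [lambda a: a["harm"] == 0]          # must always hold
SOFT = [lambda a: a["privacy_cost"] < 5]   # preferred; each violation costs 1

def permitted(action):
    """Hard norms filter actions outright."""
    return all(rule(action) for rule in HARD)

def penalty(action):
    """Soft norms rank the remaining actions by number of violations."""
    return sum(0 if rule(action) else 1 for rule in SOFT)

def choose(actions):
    """Pick the permitted action with the fewest soft-norm violations."""
    candidates = [a for a in actions if permitted(a)]
    return min(candidates, key=penalty) if candidates else None

actions = [
    {"name": "a1", "harm": 0, "privacy_cost": 7},
    {"name": "a2", "harm": 0, "privacy_cost": 2},
    {"name": "a3", "harm": 1, "privacy_cost": 0},
]
best = choose(actions)   # "a2": harmless and within the privacy norm
```

The research literature mentioned above (normative systems, input/output logics) treats the same structure with far more care - conflicting norms, priorities, probabilistic constraints - but the division of labor is the same: humans author the norms, the system applies them.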

------
jonbronson
There is no reason we cannot treat ethics like we do medicine, where
prevailing wisdom is arrived at by a methodical and judicious process that is
standardized. Sam Harris's "The Moral Landscape" is an excellent start for
building intuition on how to do this.
([https://samharris.org/books/the-moral-landscape/](https://samharris.org/books/the-moral-landscape/))

------
suifbwish
In all irony, it will possibly be our quest to impart our understanding of
“ethics” to the AI that ultimately makes it evil, as very few people seem to
understand that ethics is fundamentally flawed: it is culturally biased and
requires a great deal of contextual understanding. It’s more of a political
voodoo than a calculation.

~~~
jonathanstrange
> _very few people seem to understand that ethics is completely flawed because
> it is culturally biased and requires a great deal of contextual
> understanding_

Do you have any particular ethics in mind? There is a vast range of positions,
almost any position that can coherently be defended has been defended by one
author or another. Some of them are by definition not culturally biased, for
example many forms of utilitarianism, whereas others are strongly emphasizing
that ethics requires a great deal of contextual understanding, for example
Dancy's particularism.

I just wonder why you address these particular points, because it's more
common to hear the opposite criticisms, that ethics is unable to provide any
sensible guideline because there is too much persistent disagreement between
ethicists (Mackie's error theory), or because contemporary ethics is too
relativist.

~~~
suifbwish
Your first sentence demonstrates my point about ethics being flawed. Do I have
any particular ethics in mind? No. All ethics is flawed for exactly the
reasons you state: it’s relative, contextual, and has so many meanings that no
universal version is possible.

------
bsenftner
Ethics requires an operating model of the world in simulation within the AI,
and that operating model needs to include the projected hopes and dreams of
the beings the AI interacts with, as well as a complete operating model of the
world in which those beings and the AI exist. "Ethics" requires general
intelligence. Since artificial general intelligence is potentially out of
reach, so too is "ethical software" potentially forever out of reach.

