
There’s No Such Thing As ‘Ethical A.I.’ - wellokthen
https://onezero.medium.com/theres-no-such-thing-as-ethical-a-i-38891899261d
======
joe_the_user
There are a couple of different issues.

1) The author's point. That we don't agree on what's ethical so how can we
program it?

2) But also, today's AIs aren't mechanisms for understanding and balancing a
multitude of interests and requirements. They are effectively pattern
recognition machines, matching data with categories (or outcomes, etc.). So
even if you had some criteria formulated, an AI couldn't really apply them. I
mean, if you had true, unlimited intelligence, you might just say "decide like
the US supreme court would" and it might be able to do that. But we don't even
have something that comes remotely close to that.

------
teapourer
> And yet, when it comes to the ways in which A.I. codes of ethics are
> discussed, a troubling tendency is at work even as the world wakes up to the
> field’s significance. This is the belief that A.I. codes are recipes for
> automating ethics itself; and that once a broad consensus around such codes
> has been achieved, the problem of determining an ethically positive future
> direction for computer code will have begun to be solved.

Is anyone actually claiming that AI programs will "automate ethics"? The post
seems rather pointless.

------
Animats
Ethics for AI has roughly the same problems as ethics for corporations. That's
not encouraging.

~~~
scollet
Not to mention fuzzy ownership interpretations. If it (the "AI") does
something like incorrectly target a hospital instead of a bunker, who is
responsible?

------
ganzuul
A universally useful question: What does it take to make it work?

In this case the AI will need to convince all of us to follow the ethical code
it designs. It can do this by suppressing information or by making morally
defensible decisions.

Having a good track record of moral decisions is key, so AIs need to be public
persons and not subject to corporate secrecy.

~~~
mirimir
I love how William Gibson reveals Eunice in _Agency_.

I'd for sure love to be her friend.

~~~
ganzuul
I will need to read that. :)

------
kragen
[https://archive.fo/tbGNL](https://archive.fo/tbGNL)

But it quotes Evgeny Morozov as if he were something other than a notorious
troll, so don't bother reading it.

~~~
herbfan
Why? The article is fine.

> This is the fact that there is no such thing as ethical A.I, any more than
> there’s a single set of instructions spelling how to be good — and that our
> current fascinated focus on the “inside” of automated processes only takes
> us further away from the contested human contexts within which values and
> consequences actually exist.

This rings true. A.I. is just another tool in the computational toolbox. We
need expert practitioners just like we need expert statisticians,
mathematicians, programmers, doctors, etc. Do all those other disciplines also
have an ethics problem?

~~~
kragen
There are ways to interpret the statement you've quoted in which it's obvious
(if by "AI" we just mean "deep learning" or something), and ways to interpret
it in which it's just wrong (if by "AI" we mean what people usually mean by
"AI", namely AGI). You seem to be interpreting it in the first sense, and
additionally you have a considerably more jaundiced view of how widely
applicable deep learning is likely to turn out to be than some of its
prominent practitioners — Karpathy famously called it "Software 2.0," which to
me seems perhaps a bit optimistic.

As far as I can tell, though, the article doesn't say anything both true and
nonobvious, which to me is a necessary but not sufficient criterion for it
being worth reading. Reliance on Morozov was just a particularly glaring sign
of this.

------
yamrzou
> Depending upon your priorities, your ethical views will inevitably be
> incompatible with those of some other people in a manner no amount of
> reasoning will resolve. Believers in a strong central state will find little
> common ground with libertarians; advocates of radical redistribution will
> never agree with defenders of private property; relativists won’t suddenly
> persuade religious fundamentalists that they’re being silly. Who, then, gets
> to say what an optimal balance between privacy and security looks like — or
> what’s meant by a socially beneficial purpose? And if we can’t agree on this
> among ourselves, how can we teach a machine to embody “human” values?

Those who build the machine will embed "their" ethical values in it. Problem
solved.

------
stuntkite
There is no such thing as an "ethical hammer" either.

------
nine_k
If you need a refresher on the perils of ethical AI, Asimov's "Three laws of
robotics" stories are as fresh as they were 60 years ago [1].

I'd like to point out that humans themselves are not that good at ethics, both
in philosophy and in daily life; they are worse yet at agreeing on what ethics
to adhere to. You can imagine how well it could be automated, then.

[1]:
[https://en.wikipedia.org/wiki/Three_Laws_of_Robotics](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics)

~~~
scollet
Good reference. I think "I Have No Mouth, and I Must Scream" is also standard
material for futurist "AI" engineers.

Edit: on the contrary

~~~
mirimir
That is an extremely disturbing story.

------
whymauri
> This is the belief that A.I. codes are recipes for automating ethics itself;
> and that once a broad consensus around such codes has been achieved, the
> problem of determining an ethically positive future direction for computer
> code will have begun to be solved.

I can think of literally nobody who believes this. I don't think anyone
working in ethical AI would pretend that ethics can be "solved."

~~~
curiousgal
There always seems to be a disconnect between tech philosophers and the
technology they write about. Maybe they should try to actually work on 'AI'
before preaching about it.

To confirm your point, a simple example comes up when working on an automated
classifier for loan applications. One way to make the system more ethical,
that is, less discriminating based on certain features like race or gender, is
often to lower the threshold of acceptance for the marginalized classes and
raise it for the other ones. Anyone who has had to do that sees how it poses a
dilemma of equality vs fairness, or even fairness vs cost. So indeed, ethics
cannot be solved.
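The threshold adjustment described above can be sketched in a few lines. This is a minimal illustration, not real credit-scoring code: the scores, group labels, and thresholds are all made up, and the `approve` and `rate` helpers are hypothetical names.

```python
def approve(scores, groups, thresholds):
    """Approve an application when its score meets its group's threshold."""
    return [s >= thresholds[g] for s, g in zip(scores, groups)]

def rate(decisions, groups, g):
    """Approval rate within group g."""
    members = [d for d, gg in zip(decisions, groups) if gg == g]
    return sum(members) / len(members)

# Hypothetical model scores (higher = more creditworthy) and group labels.
scores = [0.62, 0.55, 0.71, 0.48, 0.80, 0.52]
groups = ["A", "B", "A", "B", "A", "B"]

# Equality: one threshold for everyone.
equal = approve(scores, groups, {"A": 0.6, "B": 0.6})

# The adjustment: lower the acceptance bar for the marginalized group B.
adjusted = approve(scores, groups, {"A": 0.6, "B": 0.5})

print(rate(equal, groups, "B"))     # 0.0 -- no B applicant clears 0.6
print(rate(adjusted, groups, "B"))  # ~0.667 -- 2 of 3 clear the lower bar
```

The dilemma shows up directly: under the single threshold, group B's approval rate is 0%, while lowering its threshold to 0.5 raises it to two thirds, at the cost of applying different standards (and, potentially, different default risk) across groups.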

~~~
yamrzou
What if it turns out there is a reason people discriminate using those
features (for example, that people of race X tend to fail to repay their
loans), and the classifier just learned that reason from the data? Why should
we be expected to "fix" the reality for the classifier?

~~~
stanmancan
Because now people start to get rejected for loans because of their race, and
that’s wrong for many obvious reasons.

~~~
yamrzou
I agree with you that it is _generally_ wrong, i.e. that other factors should
come first. But what if the data suggests it? For example, if it is known that
if you lend to someone from race X, there is an 80% chance he would not pay
back. Would you not incorporate that factor into your decision?

~~~
stanmancan
No. Find the other, non-racist data points that correlate.

~~~
matheusmoreira
Why should correlations with ethnicity be dismissed?

Epidemiological studies tell us some diseases disproportionally affect certain
groups. Should doctors dismiss this knowledge as racist and refrain from
screening patients in the affected groups because they might come off as
racist?

~~~
lidHanteyk
Yes. We know, more specifically, that _genes_ are to blame for diseases like
hemochromatosis [0], sickle cell anemia [1], or Tay-Sachs [2]. We also know,
from pedigree collapse [3], that humans broadly form one single race.

Therefore we know that correlations with _any_ definition of ethnicity or race
are spurious, because those definitions _must_ be socially constructed,
because the gene pool simply does not have the shape that race realists claim
that it does.

Think in terms of contraposition. Sure, _if_ race were real, then maybe it
might make sense to talk about racial demographics. However, since race
clearly is _not_ real, any demographic correlations must be bogus. There is a
much simpler explanation for why some skin colors seem socioeconomically
advantaged: Because our society itself has bigoted opinions about skin colors,
and has practices like redlining [4] which systematically oppress folks.

[0]
[https://en.wikipedia.org/wiki/Hereditary_haemochromatosis](https://en.wikipedia.org/wiki/Hereditary_haemochromatosis)

[1]
[https://en.wikipedia.org/wiki/Sickle_cell_disease](https://en.wikipedia.org/wiki/Sickle_cell_disease)

[2]
[https://en.wikipedia.org/wiki/Tay%E2%80%93Sachs_disease](https://en.wikipedia.org/wiki/Tay%E2%80%93Sachs_disease)

[3]
[https://en.wikipedia.org/wiki/Pedigree_collapse](https://en.wikipedia.org/wiki/Pedigree_collapse)

[4]
[https://en.wikipedia.org/wiki/Redlining](https://en.wikipedia.org/wiki/Redlining)

------
at_a_remove
Whenever I hear this, I ask, "Whose ethics?"

When people say that we ought to or should or make some other pronouncement
about how we _must_ make AI "ethical," I ask again, "Whose ethics?"

------
lone_haxx0r
At this point, each time I read "ethics" I think of someone proselytizing
their political ideology in the most obnoxious way possible.

I do have ethics, they're the product of my own analysis and are consistent
with my world view in other issues not related to my job. I don't go around
telling everyone else how to conduct their lives. If they have their own
ethics, good for them. If they don't, I don't care, it's their life.

------
Causality1
"Ethical AI" is just another buzzword. Companies are not ethical entities.
They exist only to create profit and they will create the maximum amount of
profit. Even laws and regulations only factor into it as numbers on a
spreadsheet, and any profitable activity will continue as long as it generates
maximum profit.

------
avmich
I think the article can serve as a good opinion and an introduction to the
subject. Certain ideas seem questionable enough - maybe the matter is
inherently more complex than something which can be put in an article.

Does, for example, author argue that something which human can do the machine
cannot?

~~~
di4na
Not really, but he would be right if he did.

He argues that deep down the goal of the machine is to make decisions in human
contexts, which by definition can only make sense for a human.

Of course, we can argue about a fictional, theoretical human-like machine to
the point it is undifferentiated from a human... but then we created a human.
Not a machine anymore.

~~~
avmich
> Not a machine anymore.

It's like failing to see the difference between a human and a machine. One can
argue then that there is no difference right now. So?

Whether a machine can do everything a human can is a time-honored discussion.
On the other hand, religion also has an associated long-lived discussion. If
we decide according to our beliefs, we should invent another mechanism -
something other than reasoning, because reasoning isn't being used here - to
reach agreement.

