
Is Ethical A.I. Even Possible? - furcyd
https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html
======
adjkant
Ethical AI is hard, but so are ethical laws, ethical taxation, and ethical
advertising, to name a few more ubiquitous things than AI. The real answer
here is that ethics is hard, and even if you know ethics and can get some
agreement, getting humans to be ethical is even harder.

AI only introduces a new danger, but at the current complexity of AI I don't
think it's any more significant in effect than the other ethical problems we
have. In fact, many misuses of AI stem from the uses, not the AI itself. I
don't think linear regression is inherently an immoral tool either if we're
going the "this tool is too dangerous to use/allow access to" route. Until we
have anything close to AGI, AI is a tool only as ethical as the user of the
tool. That's the real issue here.

We absolutely do need regulation, but I'll be damned if 10 people with power
in any government understand AI well enough to regulate it. Every day I think
we get closer and closer to needing technocrats in government. The FCC is a
great example of a place where we should have had that model decades ago.

~~~
deytempo
I think it's especially important for AI to be used ethically, because it has
an unrivaled capacity to be used unethically.

~~~
adjkant
> [AI] has an unrivaled capacity to be used unethically

Does it?

At a personal level, I'm much more scared of a bad person with a gun and
access to me than a bad person with an AI and access to me. It's more
important than the ethical use of a teacup, but certainly rivaled and beaten
at a micro level.

At a macro level I think your argument has more ground to stand on when it
comes to things like improved efficiency of mass surveillance, but how much of
that is the AI part rather than the mass surveillance part? What immoralities
are enabled/accelerated that could not be done without AI? In the end I'm just
as concerned about mass surveillance as I was before. In a practical sense,
even if we somehow passed laws limiting AI, do I really think that a government
performing mass surveillance in the shadows is going to follow those laws when
AI is a concept anyone (with the know-how) can implement? I don't think AI's
power/capacity is as high as people think, it just tends to fit well with some
very bad macro level ethical actions to enhance them.

I'm not sure I'm convinced either way yet, but the claim of "unrivaled" made
me immediately skeptical, at least considering the AI of this decade(s). In a
hundred years you're probably right with the unrivaled part, though atom bombs
and the like are probably close seconds.

~~~
AstralStorm
AI is about scale. You can achieve similarly terrible results by using, say, a
Mechanical Turk-style system with additional error checks. That scales too.

The other thing AI is about is cost, and perhaps secrecy. A Turk-style system
requires many people who need to be fed, which puts a pretty high floor on the
price. It also requires disseminating data to many agents, each of whom could
leak it.

Being more accessible means more Bad Guys (unethical actors) using it.

------
chobeat
If you're interested in reading more about the subject of AI and Ethics or AI
and Fairness, I kindly suggest this reading list I've been working on for a
while: [https://github.com/chobeat/awesome-critical-tech-reading-
lis...](https://github.com/chobeat/awesome-critical-tech-reading-list)

There are also other topics but it's intended as a primer for engineers
interested in understanding the social problems created by new technologies.

~~~
calabin
I'm kicking myself that I never thought to use GitHub to put together a
reading list. Great idea.

~~~
amelius
Yeah, great idea. The thing that's missing, though, is that you can't easily
leave a comment on each of the listed items. That makes it a sort of one-way
piece of information. Perhaps a wiki would be better (?)

~~~
chobeat
True, but you have issues and PRs. The format certainly isn't intended for
casual discussion, but at the same time there's plenty of room to discuss
improvements and raise criticism.

------
6cd6beb
Ethics is subjective. If ethics could be codified I think we'd have one
ruleset everyone agrees on.

The article mentions trying to have a human enforce ethics, but then that
person has to be an exemplar of ethical excellence, something you can't test
for. And in the end, they say every man has his price, so no, I don't think
"ethical AI" is possible. I think "ruthlessly efficient AI" is the goal. Maybe
it should only be used in situations where ethics don't matter.

~~~
darkpuma
Ruthlessly efficient AI is the next "big stick". Nuclear weapons brought us
_relatively_ close to world peace (after burning up hundreds of thousands of
people and scaring the whole world shitless); maybe ruthless AI can do the
same. I'm not particularly eager to see what an AI arms race and cold war
look like, though.

Things are decent enough right now, it could be a lot worse; do the upsides of
creating ruthless AI justify the risks? Is it an eventuality anyway? At this
stage could we conceivably prevent it?

------
kingkawn
First we’d have to make an ethical society...

------
tolstoy77
Wow, I was just looking through the Glassdoor reviews of the company in the
article, Clarifai.

[https://www.glassdoor.com/Reviews/Employee-Review-
Clarifai-R...](https://www.glassdoor.com/Reviews/Employee-Review-Clarifai-
RVW23884886.htm)

Seems like an awful place to work. They have a role I was about to apply to in
the bay area. Looks like I'll avoid this place.

~~~
ex_clarifai
Wow this one is good [https://www.glassdoor.com/Reviews/Employee-Review-
Clarifai-R...](https://www.glassdoor.com/Reviews/Employee-Review-Clarifai-
RVW23794922.htm)

Fun fact: they hired an ex-Trump Org assistant to be his new assistant
[https://www.linkedin.com/in/sharon-
benita-23703449](https://www.linkedin.com/in/sharon-benita-23703449)

------
jamessantiago
My usual thinking here is that complexity in general becomes dangerous when
our ability to apply our own value systems degrades past a certain point. In
the 2008 financial crisis we had financial products like subprime loans whose
underlying reasoning and economics were complicated enough that most people
couldn't effectively regulate them or understand the implications of their
use.

I suppose when enough things go wrong with a complex system, it's like having
runtime errors pop up that you can debug against and get a better
understanding of what you created, but that first execution is just dangerous
enough that you wouldn't want something that complex doing anything important.
Then again, we might not think something is complex enough until we start
running into the "unknown unknowns" of real-world usage.

Maybe a somewhat subjective qualifier for what's "complex" could be developed
and then the ethical question is "is due diligence being taken to reduce the
risks inherent in this complex system?"

------
cmurf
Universally ethical A.I.? No. But then neither are humans. So I reject the
idea that A.I. must be unfailingly ethical before it can be considered
useful.

Government arguably should be an expedient (this is Thoreau's argument
anyway), and it's possible A.I. could be at least a more consistent expedient
that also commits to ratting itself out anytime its ethical programming is
substantially altered. That isn't at all how humans behave, they can't be
programmed this way.

Merely having A.I. that concisely points out the competing ethical positions
on an issue would be an improvement over word-salad propaganda; propaganda is
a significant impediment to both ethical and critical thinking, so an A.I.
that scored statements on a propaganda scale would itself be useful.

[https://propagandacritic.com/](https://propagandacritic.com/)

------
thrwway19033
(Throwaway in case this is crank science.)

Neural nets are inspired by human neural structures. Training is in some ways
similar to human learning. Genetic algorithms especially in simulated worlds
are directly inspired by the biological evolution in the real world.

Is there any chance that the resulting algorithms themselves (the AI) might
have any ethical rights or significance, especially once they exceed the
human brain in complexity?

The reason I ask is that it is an ethical question about AI (and therefore on
topic), but I don't know how to think about it and hope others here might
share some insight.

~~~
sgt101
Really that's more about the effect on us and the systems of society than the
intrinsic nature of rights of the AI. Having things that seem human or capable
of human insight and yet making them slaves may demean us all.

------
bitL
We humans have a multitude of competing ethics in the form of religions; we
can't even agree on which one is the right one. Programming ethics into AI
brings back that chicken-and-egg problem, requiring us to select which
philosophical system is the right one. We can view competing religions as
optimization algorithms where the best one for a given state of the world is
winning, pressuring the others to evolve or die off. Would we need competing,
multi-GAN-style AI "religions" at each other's throats at all times as well?

~~~
devoply
There is nothing inherently ethical about any religion. They have some ethical
concepts in them that might be universal, like "don't kill", but those are
easily overridden into "don't kill fellow believers". Similarly, there is
nothing ethical about any society other than within the small, confined system
of a certain ideology that validates its own ethics. In the end, ethics is
just a game we play, and we can teach computers to play similar games with a
certain set of rules. That does not in any way mean that a computer will be
ethical, only that it can be taught to observe the norms of the ethical games
played by the society it serves, using a rule-based system applied to certain
final actions that it has computed it should carry out.

------
corysama
There is a work of fiction (the name of which I unfortunately do not recall)
that tells tales of war machines, hard-coded with the rules and conventions of
war, coming into conflict with their own angry and frustrated operators.

I love the concept of AI that could be not just super-intelligent, but
also/instead super-moral. I hope we can find the inspiration to bring some of
that concept into reality.

------
fouc
Ethics in AI makes me think of
[https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Voliti...](https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition)

------
ThomPete
Ethical yes, whether those ethics are in alignment with humans is another
question.

------
gesman
(Criminal|Unethical|Bad) for one is (hero|ethical|good) for another.

------
DoofusOfDeath
First things first. What do we mean by "ethical"?

------
dogma1138
Is an ethical hammer even possible?

~~~
coldtea
Ah, the old "it's how you use the tool" chestnut.

Only some tools are inherently abusable -- something that has been expressed
in lots of forms, from "the medium is the message" critique of
TV/internet/etc, to the gun control debate ("guns don't kill people, people
kill people" etc).

~~~
dogma1138
I think you are missing the point: the argument "is AI ethical" isn’t the
same as "can AI be ethical".

AI is a tool, and while you can say some tools are easier to abuse than
others, that has no bearing on their ethics, since tools have no ethics to
begin with.

And the leverage or force multiplication you get from a tool is directly tied
to how easily it can be abused but also to how useful it is to you in general.

Is a gun less ethical than a syringe because it can be used for violence? How
about syringes fueling the drug epidemic?

Are nukes less ethical than conventional weapons?

Were the looms that the Luddites sought to destroy unethical because they put
people out of work?

Should we consider combines and modern agriculture unethical because they
drastically changed the balance of power of various nations?

Of course not, as all of these arguments are silly and flawed once you
actually begin to deconstruct them.

This has nothing to do with gun control. I have no problem controlling guns,
because I don’t want to get shot, and if someone does break into my house I’d
prefer them not to be armed.

I also prefer the police not to be armed at all times because I think that
it’s just as important not to bring a gun to a knife fight as it is the other
way around if you don’t want to escalate things.

I have no problem with regulating the application of AI when necessary.

However, that does not mean I think this has anything to do with the AI’s
ethics; it has to do with ours.

Some uses of AI could be deemed by society to be unethical for the same reason
that society deemed that harvesting the organs of a random person to save 5
people is unethical because people wouldn’t be able to function knowing that
they might be harvested at any moment.

So, to bring this back to AI: I wouldn’t consider, say, an AI-run mass
surveillance system unethical because I’m unsure whether the AI can make
ethical decisions; I would consider it unethical if society couldn’t function
well under it.

~~~
lukifer
Though I agree in principle, I think tools can’t be considered in isolation,
but rather in the context of iterated game theory.

Is Instagram inherently evil? Obviously not; but through the lens of pre-
existing human social dynamics, including status competition, mating drives,
social signaling, etc, the capabilities introduced by that particular tool
almost inevitably lead to the perverse incentives of lifestyle facades,
“influencers”, Fyre Festival, etc. Do we have to do these things? Of course
not. But it’s naive to not think about the “realpolitik” scenarios, and what
could be done to mitigate them, rather than assuming perfectly ethical and
rational actors.

What makes AI even more complicated than previous technological changes to our
game landscape, is the potential not for new tools, but for new _players_ : at
best, these artificial players are proxies for each of our interests (though
see the side effects of “flash crashes” from high-frequency trading bots); at
worst, we may have to contend with vastly intelligent new players with
emergent interests of their own, which we can’t necessarily predict. While I
don’t think it’s inconceivable that A.I. will always be subject to human
understanding and control, we’re in such new territory that that’s
fundamentally an assumption (see the arguments from Bostrom, etc).

~~~
dogma1138
AI isn’t a player; it’s as dumb as a rock. The players are still the humans
who build, maintain, operate, and can switch it off.

~~~
adjkant
AI today is that dumb, but I think the parent comment is discussing a
theoretical AGI nearing or possessing consciousness. A bit irrelevant for
today's moral conversation on AI but a very interesting one down the line if
we ever do get to that point.

~~~
dogma1138
Let’s have this talk in 3 centuries then.

------
HNLurker2
Paywall bypass: [https://outline.com/VBgvaj](https://outline.com/VBgvaj)

------
laretluval
It's time for a Butlerian Jihad

~~~
tolstoy77
best comment of the day on hn.

------
calibas
const not_evil = true;

------
ErikAugust
Removes paywall, and reduces page load 91% - from 3.93MB to 349KB, uses zero
JavaScript:
[https://beta.trimread.com/articles/215](https://beta.trimread.com/articles/215)

~~~
rootusrootus
How apropos to post that in a thread discussing ethics.

------
pixl97
No. And if you think yes, you are probably deceiving yourself about your own
ethical abilities.

~~~
adjkant
What is unethical about a farmer using AI to figure out the best arrangement
of his crops to maximize the food he produces, maintain his land quality, and
minimize his environmental impact?

~~~
kthejoker2
I mean, the short answer is that if all of the constraints and inputs to a
problem are known, you don't need an AI at all. There is a guaranteed optimal
crop arrangement, and the model to produce it would be based purely on the
natural sciences: physics, biochemistry, engineering, etc.

AI is only required when there is some unknown, ambiguous, adversarial, or
otherwise non-existent input or constraint. AI (or indeed any intelligence) is
only useful in situations where "bias" (in the data science sense),
inference, preference, and extrapolation are being used to make decisions in
an unknown space.

And it's precisely in these areas where ethics can be part of the "weights"
given to those inferences and preferences.
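
To make that distinction concrete, here's a toy sketch (all crop names, field
indices, and yield numbers are made up for illustration): when every yield is
known up front, the "best crop arrangement" is plain exhaustive optimization
with a guaranteed optimum, and no learning is involved at all.

```python
from itertools import product

# Known yield (tons/acre) of each crop on each of three fields.
# Hypothetical numbers -- the point is only that they are *known*.
YIELDS = {
    ("wheat", 0): 3.1, ("wheat", 1): 2.4, ("wheat", 2): 2.9,
    ("corn",  0): 2.2, ("corn",  1): 3.5, ("corn",  2): 2.0,
    ("soy",   0): 1.8, ("soy",   1): 2.1, ("soy",   2): 3.3,
}
CROPS = ["wheat", "corn", "soy"]

def best_arrangement():
    """Exhaustively score every assignment of one crop per field
    and return the arrangement with the highest total yield."""
    best, best_total = None, float("-inf")
    for assignment in product(CROPS, repeat=3):
        total = sum(YIELDS[(crop, field)]
                    for field, crop in enumerate(assignment))
        if total > best_total:
            best, best_total = assignment, total
    return best, best_total

arrangement, total = best_arrangement()
print(arrangement, total)
```

With fully known inputs this brute-force search (or, at realistic scale, a
linear/integer program) already yields the guaranteed optimum.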

~~~
adjkant
Would weather patterns/trends not qualify as a need for AI in this context?
What about predicting food demand by type for the upcoming year? Those are
the big unknown inputs an AI could pretty easily, and morally, help with here.
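
A minimal sketch of the weather case (made-up rainfall figures, and plain
least squares standing in for anything fancier): an unknown input like next
season's rainfall can only be estimated by inference from past data, which is
exactly the situation the parent comment says calls for AI.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope*x + intercept (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Made-up rainfall totals (mm) for five past seasons.
seasons = [1, 2, 3, 4, 5]
rainfall = [400, 420, 390, 430, 440]

slope, intercept = fit_line(seasons, rainfall)
# Extrapolated *estimate* for season 6 -- an inference, not a known fact.
forecast = slope * 6 + intercept
print(forecast)
```

The forecast carries uncertainty the fully-known-inputs case doesn't have,
which is where the ethical weighting of decisions built on it comes in.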

