

Do we understand ethics well enough to build “Friendly artificial intelligence”? - pldpld
https://johncarlosbaez.wordpress.com/2011/03/25/this-weeks-finds-week-313/

======
aothman
As an AI researcher, I think obstacles like "not having your robot fall over
all the damn time" are a little more immediate than robots having a nuanced
understanding of ethics. I can understand why this stuff is fun to think about
and debate, but it's just not relevant at all to where AI is going to be for
the next 50 (or 100, or probably 200) years.

~~~
cryptoz
I really think the stability of your robot is a completely separate issue.
George W. Bush and Barack Obama both use flying robots with missiles to hunt
down and kill people they don't like. Don't you think that, as these
flying robots gain more and more autonomy, discussions of ethics are
actually important, and important _now_? 50 years is a long, long time in
computer science.

I'm surprised that you are so pessimistic about your research that you think
ethics won't even be relevant in the year 2205. Holy cow, you must think AI is
hard.

~~~
jordan0day
_George W. Bush and Barack Obama both use flying robots with missiles to hunt
down and kill people they don't like._

This is a very good point. It's always good to be reminded that we're already
living in the future.

That said, I _feel_ like aothman is discussing real artificial _intelligence_,
that is, an entity capable of making a _conscious_ decision that it wants
to, in this case, fire the missiles. If I had to guess, if predator drones
gain the ability to "decide" for themselves whether or not to fire their
missiles, it will be built on a system of complex rules, and not because
they're "intelligent". Pota _y_ to, Pota _h_ to? Maybe. I'm not an AI
researcher and I don't even come close to understanding _human_ intelligence,
but I feel like even if it is just a complex system of rules, it's at a much
deeper level than we'll be able to simulate soon.
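
For concreteness, here is a toy sketch of what I mean by "a system of complex
rules" rather than intelligence: explicit, auditable conditions that either
authorize an action or defer to a human operator. Every name and threshold
below is hypothetical.

    # Toy rule-based gate: no learning, no judgment, just hard-coded conditions.
    # All names and thresholds are hypothetical illustrations.
    def may_engage(target_confirmed: bool,
                   collateral_risk: float,
                   human_authorized: bool) -> bool:
        rules = [
            target_confirmed,        # positive identification required
            collateral_risk < 0.01,  # arbitrary policy threshold
            human_authorized,        # a person stays in the loop
        ]
        return all(rules)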

~~~
cabalamat
> It's always good to be reminded that we're already living in the future.

This is not a new phenomenon; the first use of autonomous killer robots was in
1943, in the form of acoustically guided torpedoes.

~~~
hugh3
Well heck, if we're going to stretch the analogy, why not a mousetrap?

~~~
krschultz
Because nobody has ever been killed by a misguided mousetrap?

~~~
hugh3
Bear trap, then.

~~~
cabalamat
They don't move around of their own accord, attempting to close with the
target. Guided weapons do.

------
vessenes
The best suggestion in the article is that we scorn AI researchers who do not have
a credible claim that their designs will maintain a basic agreed-on value
system after a billion self-managed iterations and upgrades by the AI.

This is a fascinating and broad-ranging criticism of AI, and it's interesting
to me because the author is clearly considering 'what happens if we are
successful?'.

Definitely worth a read.
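
To make that suggestion concrete, one (very toy) reading is that every
self-modification has to pass a value-preservation check before it is applied.
A minimal sketch under that assumption; the value set, the scoring function,
and the upgrade proposal are all hypothetical placeholders:

    # Hypothetical sketch: accept a self-modification only if it scores at
    # least as well as the current agent against an agreed-on value function
    # on a suite of test situations.  Nothing here is a real API.
    AGREED_VALUES = {"preserve_human_life": 1.0, "honesty": 0.5}

    def value_score(agent, situation):
        action = agent.decide(situation)
        return sum(weight * action.satisfies(value)
                   for value, weight in AGREED_VALUES.items())

    def safe_upgrade(agent, propose_upgrade, test_situations):
        candidate = propose_upgrade(agent)
        if all(value_score(candidate, s) >= value_score(agent, s)
               for s in test_situations):
            return candidate   # modification preserves values on the tests
        return agent           # otherwise reject it

Passing a finite test suite is of course nothing like a guarantee over a
billion iterations, which is exactly why the bar is so demanding.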

~~~
_delirium
That's a pretty high bar, one that would be controversial if you applied it
to other fields.

For example, should we scorn genetics researchers who do not have a credible
claim that their organisms will remain harmless after a billion generations of
evolution and recombination? That's more or less what many of the anti-GMO
arguments boil down to, that we ought to require genetically-modified
organisms to be provably safe, both as they exist now, _and_ in all possible
future ways they could evolve and interact with other organisms. (And since
that bar is very hard to reach, therefore, their arguments go, we should be
careful about funding such research to begin with, and definitely shouldn't
let any of its results out into the wild, e.g. into crops.)

If anything, the argument there is stronger, because evolving biological
organisms that can pose a threat to humans actually exist, whereas evolving
machines that can pose a threat to humans are sci-fi, and likely to remain so
for a very long time. Why regulate the latter more stringently?

~~~
yummyfajitas
The search space of biological organisms has been well explored. It's unlikely
that one will develop which is vastly more dangerous than the ones which
already exist. The same cannot be said for robots/AI.

~~~
jessriedel
> The search space of biological organisms has been well explored.

Not at all true. The space of possible biological organisms is searched in a
_highly_ nonuniform manner by evolution, and the human search strategy is
fundamentally different. It's overwhelmingly likely that there are competitive
human-constructible organisms which could never have been produced by evolution
over the past 4 billion years.

~~~
yummyfajitas
This is true if you are discussing creating organisms which are highly
different from existing ones. _delirium was discussing genetic modifications
which are simply tweaks to existing organisms.

There is no reason to believe that golden rice will evolve in any
significantly different manner than ordinary rice. In contrast, AI will evolve
via a mechanism which is unprecedented.

------
Hipchan
Haha, it's impossible.

Absolute power corrupts absolutely. The goal is to make AI more powerful than
humans, is it not? We're not going to be able to control it, no way, no how.

------
A1kmm
The thing is, fully autonomous AIs will most likely be tested in a simulated
world (maybe at a smaller scale) before they have any kind of real-world
influence.

Real-world resources are finite, and real-world processes with real-world
materials take a finite amount of time. The singularity therefore ignores the
realities of physics. It would even be possible to add artificial constraints
on total resource use and on the rate of resource use.
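
As a rough sketch, such an artificial constraint could be a simulation loop
that meters both total resource use and the per-step rate, cutting the agent
off when either cap is hit. The caps, costs, and interfaces below are invented
for illustration:

    # Hypothetical resource-metered sandbox: the simulated world enforces a
    # finite total budget and a per-step rate limit, whatever the agent "wants".
    TOTAL_BUDGET = 1_000_000   # total resource units available in the sandbox
    RATE_LIMIT = 100           # maximum units consumable per time step

    def run_sandbox(agent, world, max_steps=10_000):
        used = 0
        for _ in range(max_steps):
            action = agent.act(world.observe())
            cost = min(world.cost_of(action), RATE_LIMIT)  # per-step cap
            if used + cost > TOTAL_BUDGET:                 # total cap
                break
            used += cost
            world.apply(action)
        return world, used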

------
geuis
So I'm not a person who actively writes AI software, but I am a knowledgeable
supporter. I'm all for the Singularity, rights for future non-human
intelligences, and so on.

So I _always_ take issue with these kinds of esoteric debates about how to
engineer ethics into an intelligence that can learn and become conscious.

Haven't any of these yahoos ever had kids or owned a pet dog?

You don't "engineer ethics" into your son or daughter. You teach them through
examples of good behavior, punish them when they misbehave, and reward them
when they succeed. Over the course of a few years, given a good environment,
the end result is a new young intelligence that knows how to behave well and
get along with others. That intelligence often goes on to bootstrap itself
into adulthood and eventually to create later iterations of itself. If it was
raised well, then the new ones tend to get raised well too. We call them
"grandkids".

So let's assume that in 10-20 years something descended from IBM's Blue Brain
(simulating cat cortexes) leads to something whose intellectual range falls
somewhere between a dog's and an elephant's.

Most people will agree that dogs and elephants are pretty damn smart. Dogs are
able to perceive human emotional states, understand some language, do work for
people, and fit nicely into our social structure. Elephants aren't that close
with people, but are highly intelligent, have active internal emotional
states, and even grieve for their dead. In some societies, people and
elephants have worked together for thousands of years.

In both cases, we have thousands of years of history working with other
intelligences of varying scales. In general, if you don't mistreat them, they
turn out to be socialized pretty well. It's only when you mistreat them that
they learn to fear and hate you. The same is true for people.

So as @aothman said in another comment in this thread, AI researchers are just
trying to get their projects to not fall over. There's no thought of
"engineering ethics". This problem is going to be solved one little bit at a
time. Artificial neural architectures are going to become more and more
sophisticated. But there is a key difference between the underlying architecture
and how you go about training these new minds.
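
Concretely, "training these new minds" through examples, punishment, and
reward is essentially reward-driven learning rather than hand-coded rules. A
minimal tabular Q-learning sketch of that idea; the env object and the reward
signal are hypothetical placeholders, not anything specified above:

    # Minimal tabular Q-learning loop: "good behavior" is whatever the reward
    # signal rewards and punishes, not anything engineered into the agent.
    # The env object (reset/step/actions) is a hypothetical placeholder.
    import random
    from collections import defaultdict

    def train(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
        q = defaultdict(float)                  # (state, action) -> value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if random.random() < epsilon:
                    action = random.choice(env.actions)          # explore
                else:
                    action = max(env.actions, key=lambda a: q[(state, a)])
                next_state, reward, done = env.step(action)      # praise/punishment
                best_next = max(q[(next_state, a)] for a in env.actions)
                q[(state, action)] += alpha * (reward + gamma * best_next
                                               - q[(state, action)])
                state = next_state
        return q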

If you raise them well, then most of these angels-on-a-pin discussions are
just that, meaningless.

~~~
hugh3
Dogs are actually an interesting example because I think they _are_ engineered
(by evolution) to be "good", or rather, to be obedient. The only reason you
can train a dog, and the reason that dogs were ever successfully domesticated
in the first place, is that they have an inbuilt desire to be cooperative and
to submit to the authority of a more powerful "dog". The dog is happy when
you're happy with it, and sad when you're angry at it. Making it obedient is
thus a fairly simple matter. Try that with just about any other animal
(especially any other large carnivore which could seriously hurt you if it
wanted to) and you'll be out of luck.

Humans are actually similar -- we have _some_ sort of an innate ethical sense,
though human ethics is a lot more complicated than dog ethics. We're smart
enough to realise that our own short-term best interests may be served by
acting in a non-ethical manner, and we've evolved all sorts of defence
mechanisms to cope with this fact, including the moral outrage and desire for
revenge which we feel when we see someone behaving non-ethically.

So in conclusion, while you can't necessarily engineer ethics into a mind, you
can hard-wire in the structures necessary to _care_ about ethics. Humans and
dogs both have some sort of ethical sense wired in.

------
protomyth
I don't think we understand debugging well enough to start building "friendly
A.I.", much less in robot form.

~~~
curiousepic
A true AGI will solve robotics itself, whether it is already instantiated in a
robot or not.

------
nazgulnarsil
no. and the likelihood of someone building a friendly AI first when that is
harder than building an unfriendly AI seems minuscule. cya humanity, sucked
while it lasted anyway.

~~~
MikeCapone
If you believe that, the rational thing to do is to support Friendly-AI
research. SIAI.org

~~~
keefe
...wonders if this is an SIAI-affiliated human that I know...

I should say something clever to the fatalist OP to avoid downmod... something
about hardware requirements for AGI likely being high (imho), and that we have
a few decades to work on the really hard problem, and that we haven't yet
worked on hard problems for a few decades since maturing as an
information-processing species.

------
HockeyBiasDotCo
No. Not close yet...

