
Engineering Kindness: Building A Machine With Compassionate Intelligence (2015) - rbanffy
http://www.emotionalmachines.org/papers/engineeringkindnesswebcopy.pdf
======
Animats
I wasn't expecting compassionate intelligence via predicate-calculus-based
expert systems. That's very 1980s, as is the whole paper. There's no mention
of machine learning. There's a lot here about "common sense", which remains an
unsolved problem in AI.

Thinking about common sense without getting into philosophical tangles is
difficult. It's helpful to consider common sense as the ability to predict the
consequences of actions. If you can predict consequences (probably
probabilistically), you have the basis for planning. And you can avoid getting
into trouble, which is what common sense is mostly for.
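
To make that concrete, here's a toy sketch of the idea. The actions,
outcomes, and probabilities are all invented for illustration:

    # "Common sense" as picking the action whose predicted consequences
    # are least likely to get you into trouble.
    # predicted_outcomes[action] -> list of (probability, is_trouble)
    predicted_outcomes = {
        "cross_now": [(0.7, False), (0.3, True)],
        "wait":      [(0.95, False), (0.05, True)],
    }

    def trouble_probability(action):
        return sum(p for p, trouble in predicted_outcomes[action] if trouble)

    best = min(predicted_outcomes, key=trouble_probability)
    print(best)  # "wait": the predictor says it's the safer choice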

This is known technology. In modern control theory, you set up a predictor,
have it watch the "plant", and self-tune until it can predict what the plant
will do. This is called adaptive control, and it's a kind of machine learning,
with similar math but different terminology. Once you have a good predictor,
you can invert the predictor function and solve for the inputs which will
yield the desired outputs. This works on systems which are reasonably
continuous. Many industrial processes are controlled by such systems.
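
For the curious, here's a minimal sketch of that loop, assuming a
first-order linear plant and an LMS-style update (the plant parameters
and learning rate are made up):

    import random

    # Identify an unknown plant y[t+1] = a*y[t] + b*u[t] by watching it
    # and self-tuning a predictor, then invert the tuned model to solve
    # for the input that should yield a desired output.
    true_a, true_b = 0.8, 0.5   # the "plant" (unknown to the predictor)
    a_hat, b_hat = 0.0, 0.1     # predictor's initial guesses
    lr = 0.1                    # learning rate

    y = 0.0
    for _ in range(1000):
        u = random.uniform(-1, 1)               # probe input
        y_next = true_a * y + true_b * u        # watch the plant
        err = y_next - (a_hat * y + b_hat * u)  # prediction error
        a_hat += lr * err * y                   # self-tune until the
        b_hat += lr * err * u                   # predictor matches
        y = y_next

    # Invert the predictor: solve a_hat*y + b_hat*u = y_ref for u.
    y_ref = 2.0
    u = (y_ref - a_hat * y) / b_hat
    print(true_a * y + true_b * u)  # close to y_ref once tuned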

Prediction is a prerequisite for compassionate intelligence. It's not having
internal emotional state that's crucial. It's having a predictive model of the
other party's emotional state.

Of course, once you have a tuned model of somebody, you now know what buttons
to push to make them do things, in either the "buy" or "vote" spaces. Whether
something is compassionate is a design decision.
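
As a toy illustration of that last point (the "emotional transition"
table and mood values here are entirely made up), the same predictive
machinery serves either objective; which one you optimize is the
design decision:

    # A predictive model of the other party's emotional state:
    # (estimated_mood, action) -> predicted next mood. All invented.
    predict = {
        ("upset", "listen"):  "calm",
        ("upset", "lecture"): "angry",
        ("calm",  "listen"):  "calm",
        ("calm",  "lecture"): "upset",
    }
    mood_value = {"angry": -1, "upset": 0, "calm": 1}

    def compassionate_action(mood, actions=("listen", "lecture")):
        # Maximizing the other party's predicted well-being is one
        # objective; a "buy"/"vote" optimizer would just swap in another.
        return max(actions, key=lambda a: mood_value[predict[(mood, a)]])

    print(compassionate_action("upset"))  # "listen"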

------
l_t
Interesting proposal. I definitely think that implementing metacognitive
facilities, as well as internal representations of the perceived emotional
states of the actors around the AI, seems like a step in the right
direction. Predicate logic will never work, though.

One other aspect I found a bit odd:

> As an agent designer, it gives one confidence in choosing “insight
> meditation” as a philosophy of mind because the mental processes have been
> documented to create compassion in even the most hardened criminals (Ariel
> and Menahemi, 1997).

This strikes me as a bit of an absurd criterion for choosing what is,
essentially, a software algorithm. This system is so fundamentally different
from a human being that I don't think any such parallels are warranted.

------
_yosefk
It's kinda funny, but I kinda don't want to make fun of it, so I'm left
speechless. Maybe others are too? Hence no comments on the content in an hour?

I do think that the Turing test is still the greatest test for strong AI,
machines are still very far from passing it, and I feel that understanding
human emotion should be a part of the general strategy of "understanding human
things" on a level sufficient to pass a very long Turing test session:

* From a technology standpoint it's not clear that what we'd call human emotion requires a different approach than other "human things."

* From an application standpoint, a machine much dumber than an adult (as in it totally can't pass the Turing test) but with developed emotional facilities is kinda scary, perhaps like an alien child ("human-like feelings" with undeveloped reasoning, and the reasoning is undeveloped in ways we're not familiar with - at least I'm conjecturing that this machine won't be able to pass "the human children Turing test" any more than the adult version). It's not clear what good you can do with such a machine.

All of the above from a serious angle and without dwelling on the joy of
sexprs like (Feels (Feels (Feels (Object)))).

------
saxonklaxon
Option (a): create a 'motherly' machine which is guaranteed compassionate
towards children and small furry mammals.

Option (b): create a Buddha-like machine which is guaranteed benevolent towards
everyone and everything in the Universe.

Neither of these works! Wrt (a), the flip side of maternal empathy is a
ferocious attitude towards potential predators who threaten her cubs. How does
the programmer guarantee no human will be placed in this category, mistakenly
or otherwise? Besides, before one attains motherhood one must pass through the
'terrible twos', which many psychologists regard as the peak stage of physical
violence.

Wrt (b), an AI with no desires has no incentive to interact with the world. So
its mind will not develop.

Is there a third option?

------
sitkack
Oh, this was killed because of the redirect through twitter,
[http://www.emotionalmachines.org/papers/engineeringkindnessw...](http://www.emotionalmachines.org/papers/engineeringkindnesswebcopy.pdf)

~~~
sctb
Yes, we've unkilled it and updated the link. A note to submitters: please use
full URLs so users know where they might be headed.

~~~
rbanffy
Sorry. Noted.

