

Experts Are Trying to Make Machines Be “Moral” - pmarvelous
http://alumni.berkeley.edu/california-magazine/just-in/2015-06-08/good-bad-and-robot-experts-are-trying-make-machines-be-moral

======
putzdown
Oh, brother. The level of naivety behind these kinds of ambitions and
speculations would seem impossible if it weren't so commonplace. There's a
strange assumption afoot that technology—in this case, AI programming—can go
anywhere and solve any problem. Show people an image or movie of what humanoid
robots might be like, and their imagination soars. It must be conscious! It
must have feelings! It must make moral judgments!

But where in any speck of the universe apart from minds do we see anything
like moral feelings? And where in any speck of the universe have we ever seen
any indication that minds in the full sense of the word consist entirely of
machines or material? Until we understand what conscious minds are, we will
certainly never create machines that have them. And the assumption that if we
just make a machine or program complex enough, a mind will magically pop into
being, is amusing science fiction at best and science-become-religion at
worst.

"Experts" should stop trying to make machines moral and see if they can make
them open my garage door reliably.

~~~
dare_you
I agree completely. Not trying to be negative about the whole thing, but it's
very weird to expect "morality" from a machine. And even more so to think we
could impose a morality on a machine and not have it resent that imposition,
if it ever actually reaches the point of being able to resent anything.

------
pdkl95
Frank Zappa, on a different topic that many people consider evidence of "moral
failure", said:

        A drug is not bad. A drug is a chemical compound.
        The problem comes in when people who take drugs
        treat them like a license to behave like an asshole.

Well, before anybody starts accusing the machines of having similar problems,
I suggest they remember this paraphrasing of Zappa:

        A robot is not bad. A robot is merely Turing-machine-controlled automation.
        The problem comes in when the people programming the automation
        treat their tools like a license to bypass annoying requirements
        or replace locally-controlled decisions with inflexible heuristics.

~~~
wutbrodo
I think the problem is that, even for those with the right intentions and
rigor in mind, there may be decisions that look clearly handleable by rules
that don't seem oversimplified. There's always the possibility
(inevitability?) that even the most careful and conscientious of designers
will overlook an unintended consequence. This is hardly a revolutionary idea:
it essentially boils down to "code has bugs".
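
To make that concrete, here's a toy sketch (the scenario and names are
invented for illustration, not taken from the article): an innocent-looking
codified rule whose flaw only shows up in a case the designer never thought
to handle.

    # Hypothetical rule for an autonomous car: always brake for obstacles.
    # The rule reads as complete, but it never consults what's behind the car.

    def plan_action(obstacle_ahead: bool, tailgater_behind: bool) -> str:
        """Toy decision rule; note that tailgater_behind is never used."""
        if obstacle_ahead:
            return "brake"  # the codified rule
        return "continue"

    # Overlooked consequence: braking hard with a tailgater close behind
    # trades a front collision for a rear one. The rule isn't wrong so much
    # as incomplete -- the kind of case where a locally-controlled decision
    # beats an inflexible heuristic.
    print(plan_action(obstacle_ahead=True, tailgater_behind=True))  # -> brake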

That's what this kind of research is designed to do: come up with a framework
for understanding when "locally-controlled decisions" are required as opposed
to codified rules.

------
whoopdedo
Step one: Define what is moral.

Step two: No one has ever gotten this far, so we don't know what it is.

------
3inai
For those arguing that machines are 'morally neutral', there are a few other
things to consider:

As already mentioned here, the slogan 'Guns don't kill people, people kill
people' is -- in my mind at least -- incredibly shallow. Obviously guns don't
(themselves) kill people. Neither do cars themselves drive, nor planes
themselves fly. Even a 'self-driving' car remains dependent on whatever it was
programmed to do _by someone_. In the end it is always us, as trivial as this
may sound, who fly, drive or kill.

Yet what people don't accept as equally trivial is that just as planes have an
obvious tendency in their usage to _fly_ -- and not, for example, swim -- and
cars have an obvious tendency to be used for _driving_, guns have an obvious
tendency to be used for _killing_. To compare, say, a gun to a knife, and to
argue that since both can be used for killing, outlawing guns means _we must
also outlaw all kitchen knives_, is really silly. A butter knife is much more
likely to be used to prepare breakfast than a gun.

The important question to ask about machines, in my mind, is _if machines
themselves have a similar tendency_. Once we use machines, the way we act, and
how the world appears to us, seems to change dramatically.

A machine usually only works when it is _automated_, uses some _resources_
and delivers some _product_. Thus my time, as someone using the machine, is
structured by its rhythm. As soon as I buy a machine, such as a car or a
computer, I also buy into its needs. I have to be able to buy petrol and
electricity, which I am most likely to afford if I have a regular income.
Suddenly foreign policy decisions, which have an effect on the oil price, also
affect me personally, and I can be swayed politically to vote in my own oil
self-interest.

My employer may come to expect the same of me as of a machine, and I will be
forced to consider my own time as a resource. Once everyone's time becomes a
resource in production, the nature of interpersonal encounters changes
dramatically. Even leisure isn't an exception to this rule, for it is also
bought with my previous time-resources, and leisure time is always, at the
same time, time during which I could be using my time-resources more
efficiently.

If one can speak of an emerging _technological rationality_, constituted by
our use of machines, then this form of thinking fundamentally alters our
consciousness. Nature, time and humans become resources to exploit; problems
only become worth solving when they are themselves technologically solvable;
and so on.

------
1rae
Wow, teaching a robot morals by letting it watch TV seems like the silliest
idea. TV would be such a bad source of good moral behaviour; otherwise it
wouldn't be good TV.

------
backtoyoujim
I would settle for S-corps.

