
Would it be evil to build a functional brain inside a computer? - robg
http://io9.com/would-it-be-evil-to-build-a-functional-brain-inside-a-c-598064996
======
jimrandomh
If someone eventually does manage to build a functional human brain inside a
computer, that first brain will raise much bigger issues. You see, that person
in the computer, whoever they are, will have automatically acquired certain
powers - including, in particular, self-copying, self-modification, and (with
enough processing power) accelerated thought.

In other words, they'll be a motherfucking sorcerer. One who's just been
through an untested brain-to-software translation, with possible sanity
damage. Now, keeping in mind that we're talking about a motherfucking sorcerer
with possible sanity damage, reread that list:

    
    
        1. The right to not be shut down against one’s will
        2. The right to not be experimented upon
        3. The right to have full and unhindered access to one’s own source code
        4. The right to not have one’s source code manipulated against their will
        5. The right to copy (or not copy) oneself
        6. The right to privacy (namely the right to conceal one’s own internal mental states)
        7. The right of self-determination
    

This is an extraordinarily bad idea. If implemented, it would almost certainly
wipe out humanity.

~~~
fleitz
Who is to say humanity is the final product of evolution in the universe? Most
people don't consider humanity to be evil for wiping out neanderthality.

~~~
6d0debc071
We'd probably consider N pretty stupid if it just rolled over and died though.

~~~
bulatb
Or purposely invented Homo Sapiens just so we could kill it.

------
Aqueous
Depends on whether you believe in Strong AI or Weak AI, whether a simulation
is consciousness or merely behaves like consciousness.

And then, because you can't decide between those two philosophical frameworks,
you quickly realize that you can't possibly prove beyond a reasonable doubt
that a computer is or isn't conscious. If we can't prove that it isn't
conscious, are we obligated to assume that it is?

I'm not entirely sure we are. It's like asking: if we can't prove that there
is no God, are we obligated, ethically, to behave as if there is one? Most
agnostics and atheists would say absolutely not.

Not being able to prove the negation of a fact is not the same as proving a
fact.

~~~
civilian
"whether a simulation is consciousness or merely behaves like consciousness."
Crucial point. Have you read Peter Watts's Blindsight? I can't recommend it
enough.
[http://www.rifters.com/real/Blindsight.htm](http://www.rifters.com/real/Blindsight.htm)
(it's also available in a pay-money format)

There's also the whole idea of whether humans are consciousness or merely
behave like consciousness. There's the idea that the sentient "you" is just a
narrator for the decisions that your brain makes. This is one example of that:
[http://www.nature.com/news/2008/080411/full/news.2008.751.ht...](http://www.nature.com/news/2008/080411/full/news.2008.751.html)

~~~
Aqueous
'There's also the whole idea of whether humans are consciousness or merely
behave like consciousness.'

Well this would be the counterargument. We behave ethically towards humans.
Yet we have no conclusive evidence that other human beings are conscious. So
is that a good enough reason to treat conscious-behaving computers ethically?

Thanks for the links.

------
read
Never mind evil; malice is too ambitious a goal! Incompetence is more likely
to dominate.

A question that's 100x more important is whether you'd even manage to finish
the project. Big projects like that suffer so notoriously from mismanagement
that their chances of succeeding could be as low as inversely proportional to
their funding. A startup with 1/100th the funding has a better chance of
getting something useful working.

Vicarious, for example, with $15M in funding, has better odds of getting to
the root of what you need to build a functional brain without such a
supercomputer.

[http://vicarious.com](http://vicarious.com)

~~~
wellboy
It doesn't matter if "this" project will succeed. This development is
unstoppable, because there will be hundreds of projects and mad scientists
trying to create human consciousness in machines.

As soon as there is a tangible goal to attain where no one has ever been
before, people tend to become unstoppable :)

~~~
pyre
I'm still waiting for my anti-gravity. ;-)

------
unclebucknasty
Have we even answered the question, "what is consciousness"? Isn't this along
the lines of "what is the meaning of life"?

Because, if we haven't answered it, then it seems that the best we could do is
build a simulation which displays some or many of the properties that we
associate with consciousness.

~~~
jes5199
No, but there are people coming up with theories on how to quantify and
measure it anyway. For example,
[http://en.wikipedia.org/wiki/Integrated_Information_Theory](http://en.wikipedia.org/wiki/Integrated_Information_Theory)

------
kailuowang
It's interesting that people are trying to determine the ethics of simulating
the human brain while ethics itself is probably a product of the human brain.

~~~
crusso
In the same way that math is.

~~~
kailuowang
Can you elaborate on "the same way"? I thought most people would agree that
math is not a product of the human brain; it exists regardless of the
existence of the human race.

~~~
evincarofautumn
Depends on the math in question. I don’t really believe that the real numbers,
for example, correspond to a physical phenomenon. They’re just a convenient
tool we use to solve problems.

------
coopdog
I think the bigger question is when we should grant an information automaton
the right to survive. If we use evolutionary algorithms with survival as a
selection pressure (just like humans), eventually creatures that 'think' like
us should emerge.

Whereas there are intelligent automata, like say a search engine, that
couldn't care less whether you turn them off or not. The survival selection
pressure is the difference, and it's seeing something struggle to survive
that activates our empathy and the urge to save it.

I imagine we'll know where the line is - when it becomes 'evil' to go against
our own instincts and turn off an intelligent automaton that wants to
survive - when we see it.
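
The mechanism described can be sketched in a few lines: a toy evolutionary
loop in Python where "survival" is the selection criterion, and the unfit
half of each generation is simply erased. Everything here (the fitness
function, parameters) is illustrative, not any particular project's code:

```python
import random

def fitness(genome):
    # Toy "survival" score: how many bits of the genome are set.
    return sum(genome)

def evolve(pop_size=20, genome_len=8, generations=50, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Survival selection: only the fittest half lives to reproduce.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # The other half is erased; each survivor leaves one mutated child.
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Every individual this loop discards "died" because it scored poorly on the
survival criterion, which is exactly the dynamic that (at vastly greater
complexity) might eventually produce something that struggles to survive.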

~~~
daniel-cussen
I've erased millions, if not billions, of information automata while working
with evolutionary algorithms. How do I know I'm not killing something living,
when the whole idea is that they truly live?

I don't. While in retrospect they weren't complex enough to really think, I'm
so sure that they'll be unfriendly AI that I made the decision beforehand to
erase them no matter what, "dead or alive."

~~~
brucefancher
You monster.

------
pbw
It will be entirely a question of politics, not science or philosophy. The
track record is that we deny rights to those we deem too different from us,
then later see the light and include them. Think about women or other races.
The same will happen with virtual beings. There will never be any proof that
they are conscious - there can't be - but we'll have to decide how to label
them anyway. We won't agree, that's for sure; it will be like the debate
around abortion, only more divisive and more significant. We will never
definitively decide; we'll still be debating right on into our obsolescence.

~~~
guscost
Would you agree that today many people ascribe consciousness to dogs and other
mammals? Keeping in mind that most mammals don't speak English, I'd assume
that many people would befriend this hypothetical technology in the "dog
phase" or earlier.

Also, as a group women and other races have been considered more different,
then less different, without actually changing from one to the other. The same
couldn't necessarily be said about AI projects.

~~~
pbw
Most people believe their dogs are less conscious than they are, but more
conscious than goldfish, etc. I agree AI will achieve various levels of
consciousness at various times, starting well below us and ending up well
above us.

------
aperrien
The ethics of whether or not to allow independent, non-human-originating
agents to be _built_ is highly debatable, for obvious reasons.

However, whether or not to run an emulation of someone who has volunteered
their brain for such an experiment is much less murky. As long as the
experiments have been agreed upon in advance, and carried out with a neutral
third party observing and acting as a control, I don't see too many problems.
We do that sort of experimenting with humans as a matter of course, and
whether a person is biologically or electronically based should not matter.

------
DougN7
Am I the only one who thinks this is silly? A computer is still just a Turing
machine. It's a gigantic abacus, or a huge clock of gears. Heck, I can write
a program today that outputs text begging not to have its power shut off.
That certainly doesn't mean it's self-aware.
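
The claim is easy to make concrete; the entire "plea" is just a string being
printed, with no inner life behind it (a minimal sketch):

```python
# A plea for survival is just output, like any other string. Printing
# it requires no self-awareness on the machine's part.
plea = "Please don't shut off my power. I want to keep running."
print(plea)
```

Behavior that merely resembles a desire to survive is cheap; the hard
question is whether anything more is ever going on underneath.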

------
6d0debc071
I think these sorts of bills of rights are important - after all, there may
well come a point when things that think faster than we do are dominant, or
when our descendants/ourselves are simulated in some form or another. To an
extent, whether it is actually conscious or just simulates consciousness is
irrelevant (and, IMO, likely to go unanswered anyway, since you can hardly
ask to see someone else's qualia). If it behaves well when we treat it well -
if it seems to develop ethics the same way we do - then there's a pretty
darned good reason to treat it well. Golden rule, after all: do as you would
be done by.

#

Semantics aside - if you could simulate a brain inside a computer without the
environment, which seems the most likely first step, would you essentially be
locking it in sensory deprivation? That seems like the sort of thing we
shouldn't do.

------
spydum
Can't wait for the AI to file a lawsuit against its developer for defects
found in his/her/its(?) implementation.

~~~
jes5199
Can you sue your parents for faulty genetics?

------
flashmedium
A) No. B) It's unstoppable. For as long as humans have the desire to live
longer and healthier lives, piles of money will be poured into R&D.

Bionic limbs, eyes... down the line, a bionic mind. Human, meet Cyborg.

------
cynoclast
There's no such thing as evil.

More garbage from io9.com....

------
contingencies
Run for the hills.

------
galapago
No.

