
So I’m fine with AI, because I don’t believe in it - joeyespo
http://ayjay.tumblr.com/post/105866644478/so-im-fine-with-ai-because-i-dont-believe-in-it
======
BrainInAJar
The problem here is that we don't have a fundamentally good theory of mind. AI
will come from philosophy departments, not CS.

------
codr4life
We're already seeing certain aspects of consciousness simulated using said
algorithms. Nobody is claiming to be able to do the whole shebang, as far as I
know. It's black-box reverse engineering at its finest. I'm pretty sure that
we'll never reach individualization though, as that implies something that not
everyone agrees has anything to do with the brain. And a computer is all about
the mental level, logic; trying to model feelings will never work out. Close
enough to the uncanny valley to make us uncomfortable, sure. But not the real
thing, as it has nothing to do with logic.

~~~
codr4life
I sometimes feel like we're wasting time trying to create the next generation
of Clippy, that the quest to reduce human consciousness to a series of yes-
and no-questions is misguided. If there's anything we've been primed for over
millions of years, it's complex social relations: being able to tell someone
who wants to kill you from someone who doesn't, someone who hates you from
someone who loves you, telling the truth from lying, pure logic from a human
response. Maybe we should focus more on the things that a computer excels at
and accept that it will never be our secret friend but rather a tool? Get a
dog, for God's sake :-)

~~~
codr4life
I guess what I'm saying is that the quest for a general-purpose AI is probably
a dead end. Specific AIs for specific tasks have been by far the most
successful approach so far.

~~~
donkeyd
If you start putting multiple specific AIs together, as is common in CS, you
will eventually get a more general-purpose AI. The only constraint is time:
with enough time, things that we can't even imagine right now will become
possible.

With the Internet of Things, I can imagine that distributed AI will become a
thing. Interconnectivity will eventually connect all the specific-purpose AIs
into a general-purpose AI.
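The stitching-specialists-together idea can be sketched as a toy dispatcher. The "specialists" below are trivial stand-ins invented for illustration, not real AI models; the point is only the composition pattern:

```python
# Toy sketch: a "general" system composed of specialist solvers.
# Each specialist handles one narrow task; a dispatcher routes to them.

def spam_specialist(text):
    """Pretend spam detector: flags messages containing 'free money'."""
    return "spam" if "free money" in text.lower() else "ham"

def sentiment_specialist(text):
    """Pretend sentiment model: counts a few hard-coded cue words."""
    lowered = text.lower()
    positives = sum(w in lowered for w in ("good", "great", "love"))
    negatives = sum(w in lowered for w in ("bad", "awful", "hate"))
    return "positive" if positives >= negatives else "negative"

SPECIALISTS = {
    "spam": spam_specialist,
    "sentiment": sentiment_specialist,
}

def dispatch(task, text):
    """Route a task to its specialist; the union looks 'general' from outside."""
    return SPECIALISTS[task](text)

print(dispatch("spam", "Claim your FREE MONEY now"))  # spam
print(dispatch("sentiment", "I love this"))           # positive
```

Whether gluing narrow systems together ever amounts to general intelligence is, of course, exactly the open question being debated here.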

I could keep thinking about this for hours, but that won't be good for my
productivity...

------
PhasmaFelis
Kim Stanley Robinson is, FYI, an _amazing_ sci-fi author, one of the best
we've got going right now, and if you haven't read his stuff already you
should.

But I can't agree with him on this particular thing. It seems like a rehashing
of the same old religious argument, that there's some magical, ineffable,
indefinable property of savannah-grown meatbrains that no manufactured device
can ever, ever have. I can't buy that. Human brains are just an arrangement of
matter, and any arrangement of matter can be replicated. Whether we _will_
make conscious machines, I can't say, but there's no rational way to assert
that it _can't_ be done.

------
kolev
Not believing in something does not disprove its existence.

------
wyager
What an idiotic sentiment.

We have no reason to believe that the entire universe isn't computable on a
(Quantum) Turing Machine, or that any finite subset of the universe isn't
computable on a (Quantum) Finite State Machine.

The most brute-force approach is simply to make a very detailed scan of a
brain and run it in a chemical simulator. This is extremely difficult, but not
computationally or physically intractable. A project applying this method to a
simple creature (C. elegans) is already underway.

Recommended reading:

[http://en.wikipedia.org/wiki/OpenWorm](http://en.wikipedia.org/wiki/OpenWorm)

[http://en.wikipedia.org/wiki/Turing_completeness](http://en.wikipedia.org/wiki/Turing_completeness)

[http://en.wikipedia.org/wiki/Church–Turing_thesis](http://en.wikipedia.org/wiki/Church–Turing_thesis)

[http://www.cs.berkeley.edu/~christos/classics/Deutsch_quantu...](http://www.cs.berkeley.edu/~christos/classics/Deutsch_quantum_theory.pdf)

------
danieltillett
Let's change AI to HIV: "I'm fine with HIV because I don't believe in it."

~~~
onion2k
This is a fun game. I'll play... I'm fine with unicorns because I don't
believe in them.

HIV and AI aren't analogous. HIV exists. We have lots of evidence for its
existence. It's not speculative. AI (in the "sentient machine" singularity
sense) doesn't exist, but it might in the future. Consequently, right now,
stating that you don't believe AI will ever happen is fine. It's stupid,
because pretty much every open-ended prediction is stupid, but it's not
"wrong".

~~~
danieltillett
Not believing in something is not evidence for or against it. Saying you
don't believe AI is possible for reasons X, Y, or Z is of value, but saying
you're fine with something just because you don't believe in it is
meaningless.

~~~
drdeca
I'm not sure I follow.

It may be useful in some cases to describe why you believe or disbelieve
something (as you say).

I do not see why it would never be useful to state that because you believe or
disbelieve something, you have a certain emotional reaction to something.

It seems like a potentially useful description of why one has or chooses a
given emotional response.

Even if it so happens that the belief leading to the emotional response is
wrong, it could still be useful to describe the relationship between the
belief and the emotions or feelings or opinions on policy.

For example, suppose I believe that all insects have a significant chance of
poisoning food they land on. In this case, might it not be useful to know that
the reason I am not OK with flies is this belief?

So even if the person is wrong about their beliefs about AI, I don't think
their explanation of why they are OK with AI as a result is "meaningless".

~~~
danieltillett
The reason is the same as I originally stated: reality does not care what you
believe. AI is either possible or not, and whether you believe in it makes no
difference.

To put it more bluntly: what you believe is meaningless; the only thing that
matters is why you believe what you believe.

~~~
drdeca
Maybe I do not understand what you mean by meaningless?

Beliefs can have consequences that do not depend on the cause of the belief.
In many cases, the actions one takes depend on one's beliefs more directly
than on one's reasons for those beliefs.

(For the insect+food example, consider the options of taking it as axiomatic
on a whim, or being made to believe it via propaganda + magic (magic here just
for the sake of argument). What causes the person to believe the thing does
not affect whether the person intends to allow a fly to land on their food.)

(If I believe my hand is on fire, whether I attempt to remedy this does not
depend on whether I see it or feel it, nor on whether it is on fire. If I
attempt to remedy it, while it is not on fire, there may be an unpleasant
result. The truth does of course matter, but beliefs are a useful predictor of
future actions.)

