

A Senseless Conversation - nyellin
https://sites.google.com/site/asenselessconversation/

======
Jach
Nice short story. For a full sci-fi book of this genre, check out _Permutation
City_.

For another fun question: "What time is it?"

And lastly a quote: "The effort of using machines to mimic the human mind has
always struck me as rather silly: I'd rather use them to mimic something
better." ~E.W. Dijkstra

~~~
bglusman
better yet, also Dijkstra, "The question of whether a computer can think is no
more interesting than the question of whether a submarine can swim."

~~~
kanchax
We are complex machines, aren't we?

------
unimpressive
Fitting. Just a few minutes before reading that I had one of those
"Contemplating your hands" epiphanies. I sat down in my computer chair,
reached over to my mouse and came to a dead stop. A thought had brought itself
to the foreground.

"I can't feel myself move."

Now when I say this I don't mean a numbness, or loss of the senses. But I
couldn't discern what exactly I was doing that made my arm move. Or any other
part of my body, for that matter. That was silly, of course; I move them all
the time. So I tried moving them slowly, and felt a slight sensation.

Of course, I thought: the slight sensation isn't _really_ the feeling of
moving my arm; it's the feeling of matter like air brushing against it. After
all, I am basically sitting in a tank of atmosphere. Nerves report _state_,
but aren't really projecting the feeling of movement.

That thought chain quickly led to a minor existential freakout. (During which
I puzzled over the question of how the hell I move at all.)

I eventually generated four hypotheses:

1) The feeling of movement simply isn't reported by nerves. Introspection
can't discern your cognitive processes, so why should it be able to your
physical ones?

2) The feeling of movement is so faint that it's overshadowed by the mere
touch of air or one's own body hair. I know that when I'm in the deepest state
of somnolence just before sleep, it's very common for me to realize I need to
get up to do something, and to struggle against the inhibitions on movement
that somnolence induces. I can feel the struggle of this; it feels the same
when you try to fight sleep paralysis. One could argue that this _is_ the
feeling of movement.

3) You could argue that the feelings reported by nerves about the state of
your environment _are_ the feeling of movement. After all, feelings are just
signals sent by nerves and interpreted by the brain. These feelings are
generated by movement, and thus _are_ indeed the feeling of movement.

4) My understanding of cognition is too incomplete to even hypothesize
something remotely plausible.

Now, considering that so many articles on sleep studies mention them, I'm sure
that the mechanics of how the brain controls the body are well understood and
that if I'm truly curious I can google it. (Which is something I might just
do.)

But the real reason I shared that anecdote, besides it being semi-relevant to
the topic at hand, is that I took my ability to move for granted, in the same
way that I take the idea that we could all be a simulation for granted. I've
considered that a non-zero possibility for quite some time now.

I'll admit that I read some of the comments here before reading the story. (A
big no-no for science fiction, a genre that thrives on twists.) After glancing
at Tichy's comment, I was afraid I might have spoiled it for myself. However,
the journey is more important than the destination, so the concept of such a
twist automatically made me go read the story. I was thoroughly disappointed
with the ending.

The concept of a memory loop isn't really new. (I've seen it mostly explored
in the context of time travel, but still.) But trapping a human in a text
interface and presenting it as the thinking machine? Morbidly delicious. (In
all the right ways.) And useful too. I could pull it out any time someone
exhibits signs of having decided that a computer program can't be conscious
simply by virtue of not being implemented on a human brain.

Having a human brain with no senses hooked up presented as a computer program
would really drive home the message.

EDIT: Regarding the story, my immediate thought after finishing was to
question why, if the program panicked because it lost all its senses, he
didn't simply swap out the memories of J. Random Person for those of someone
who already accepts that they might be a simulation. I'm sure that if they
really believed that, it would be possible to calm them down by explaining
that they are a simulation of themselves. And for bonus points, if someone
were to consent to have their memories used for this (it isn't stated how he
actually _got_ the memories, mind you), they would already have the
possibility of being the simulation strongly in their head, and would
eventually accept that they are non-human.

Though if _you_ consent to something like this, you essentially ensure that
you can never be sure whether you're you or a replay of your memories. Then
again, as it stands you can't really determine this already, which makes for
one of those classic thought experiments that still has mileage.

Trains of thought down this road are probably inherently unresolvable, but
still fun to try.

~~~
extension
_The feeling of movement simply isn't reported by nerves_

Bzzt...

<http://en.wikipedia.org/wiki/Proprioception>

~~~
unimpressive
First and foremost, you sir are awesome.

So number two was closest. (And what I figured. Though I have to say that
ahead of time for it to count.)

One of the things I love about the answers to questions like this is the
amount of interesting stuff you learn. At first (before I'd read the article)
I thought you were being a bit harsh. It WAS a hypothesis after all. But after
reading that it was so obviously far off the mark I had no trouble seeing why
that sentence got the buzzer.

This in particular caught my eye:

"The proprioceptive sense is often unnoticed because humans will adapt to a
continuously present stimulus; this is called habituation, desensitization, or
adaptation. The effect is that proprioceptive sensory impressions disappear,
just as a scent can disappear over time. One practical advantage of this is
that unnoticed actions or sensation continue in the background while an
individual's attention can move to another concern."

Which would probably qualify for "Overshadowed by virtually the slightest
application of any other sensual input."

Back to the Turing Test however. One of the things I like to do when I'm bored
is get on Omegle. Now, my goals are different from 95% of the other users on
there. I wade through the sea of wankers until I find someone who actually
wants to talk. I then proceed to:

A) Attempt to convince them that I am a machine.

B) Convince them that they're a machine.

(Obligatory XKCD: <http://xkcd.com/329/>)

So far the tally is something like: I've convinced two people of my non-
humanity. And B hasn't happened yet. Which brings me back to this story. The
moment I finished it, after the thought about getting the memories of someone
more compatible with the idea of a brain simulation, my face formed a
mischievous smile.

I thought "This will be great for the next time I get on Omegle."

------
petercooper
Does anyone remember a thread of comments on HN a year or two ago where a guy
was placing bets that he could totally swing your opinion on giving a sentient
AI freedom? Supposedly he convinced everyone and won every bet but no-one
revealed what he did and I thought this was going to be a posting of one of
those conversations at first ;-)

~~~
Anderkent
Sounds like the ai box experiment: <http://yudkowsky.net/singularity/aibox>

~~~
petercooper
Wow, I never realized it was _that_ old but that's the one - thanks!

~~~
finnw
It has been discussed on HN more recently:
<http://news.ycombinator.com/item?id=3324152>

~~~
JamesBlair
Somewhat recently, someone attempted it with the intention of reposting the
log. However, he failed: <http://lesswrong.com/lw/9ld/ai_box_log/>

~~~
finnw
Saying "here are next week's lottery numbers" would not convince me.

Suppose I am outside the box and the AI is inside the box, the AI cannot have
a perfect model of the lottery machine or of me. At best it has seen
photographs. All photographs are noisy. There will always be some detail it
does not have that would invalidate the simulation (e.g. I have a minor injury
that I have never talked about and is not visible in any photo, an ant crawled
into the lotto machine last night, etc.)

If it has sent drones to physically inspect the internals of the machine or my
brain, then it is already out of the box, so my decision is irrelevant.

If I buy a lottery ticket, there are four possible outcomes.

1. I don't win (or I die before I can claim my winnings, the draw is declared
invalid etc.)

2. I win, but it is a coincidence (maybe it was worth a shot for the AI, but
my odds of winning are no better than they would be without the AI's help.)

3. I win with the AI's help, because the AI is already outside the box and
has rigged the draw by modifying the machine, or just inspected it closely
enough to predict the draw.

4. I win (or believe I win) with the AI's help, because I am inside the box.

#3 and #4 should not affect my decision (whether I like it or not, the AI is
out of the box or I am in the box with it and unable to let it out.) So I
dismiss them.

But the relative probabilities of #1 and #2 are the same as if I bought a
ticket without consulting the AI (which I would not normally do.)

~~~
JamesBlair
The AI was not let out of the box, and you're saying that it wouldn't convince
you? This only sounds like an argument that Eliezer was right in not posting
his logs.

------
carbocation
I was imagining an alternative version in which Douglas later reveals that he
was not participating at all in the conversation; his computer was covering
for him. Good read; thanks for sharing.

~~~
kenrikm
I had exactly the same thought at first: that Douglas was just sitting there
watching his friend talk to the computer. The sensory tank was the most
unbelievable part for me; it seems like there would be a better way to handle
that. Anyway, thanks for an interesting read.

------
Tichy
Could also be a horror story about a human in a tank who was made to believe
it is a computer. The whole tank could then be presented as an intelligent
machine.

~~~
sili
It occurred to me that this could be used as a dominance/brainwashing
technique. Break a person's belief in his own free will and humanity and they
will have little reason to oppose you, the creator.

~~~
mahmud
I don't think it would work.

People's sense of "self" can't be taken away, it is something we develop the
moment we realize our thoughts are private and others can't give us what we
want unless we ask for it. Children learn to lie very early; even when their
language skills aren't sufficient, they learn to fake disappointment & force
themselves to cry to get what they want.

IMO, even under prolonged captivity & complete dominance, humans only _submit_
to subjugation, but they never lose their sense of Self.

Higher-level indoctrination is even less plausible. Nearly all religious
groups & cults have the human at their center. People have to willfully submit
to hypnosis.

Going back to the development of self, I think another thing that makes it
possible is the position of our eyes. We can only see in < 180 degrees, our
eyes open & shut, and we happen to fall asleep. If the human subject was
allowed free movement, the captor would want us to respond to commands .. we
would need to be called .. differentiated amongst ourselves. Human Unit 1 is
different from Human Unit 2. The captor _calls_ the subject, and the subject
chooses to respond, somehow. In the presence of punishment, the subject
_decides_ to carry out assigned duties to avoid punishment, or to gain reward:
self-interest. Self.

~~~
fwonkas
> IMO, even under prolonged captivity & complete dominance, humans only submit
> to subjugation, but they never lose their sense of Self.

Actually, it can be done. Depersonalization is relatively common:
<http://en.wikipedia.org/wiki/Depersonalization>

~~~
mahmud
Excellent, thank you for the pointer.

However, do depersonalized people feel like they're no longer individuals, or
do they feel like they no longer have control?

~~~
dedward
they feel that they are somewhat detached from their physical selves... sort
of like an observer, watching yourself go through the day. it's an odd
sensation.

on another note to parent posters..... whether or not our sense of self is
ongoing and unbreakable is certainly not decided or anywhere near scientific
fact (yeah yeah, science only disproves, you know what i mean)...

all we can say is that we feel as if we have this continuity..... and that it
appears as if others feel the same way. memory is not like tape.... your
sense of timing and events changes constantly, and what you perceive to be
your unbroken, continuous sense of self and its memories is, in fact, almost
universally incorrect on all kinds of things as it changes over time.... but
you (and i) will feel everything is in order. it is likely constantly
reconstructed as a survival trait.

Also look at surgical anaesthesia.... some theories on this, and subjectively
i can see it: while you are under, you aren't asleep...... you, your sense of
self, is gone, totally shut down. no dreams. no sense of how much time had
passed when you wake........ not like a regular sleep, when you at least have
some idea. it always feels like instant teleportation from the surgical suite
to the recovery room, even if many hours have passed. then there are the
people who just never come back.

we are far away from understanding consciousness... which is cool. we've
barely scratched the surface. we're just now realizing the brain has far more
plasticity than we thought a few years ago.... it's still a huge mystery.

now take general anaesthesia...... during surgeries i can recall, i was
simply gone. that time was simply time i didn't exist.

~~~
mahmud
Interesting comments. Like the typos too :-)

------
cousin_it
Sam Hughes wrote the same story 3 years earlier and I think I like his version
better: <http://qntm.org/difference>

~~~
Tichy
You could submit that as a story for some easy karma, just saying... Liked it.

~~~
paraschopra
Is easy karma worth it?

------
bwarp
Several thousand years later... <http://www.terrybisson.com/page6/page6.html>

------
thyrsus
The story went off the rails for me when "Zach" said he couldn't hear himself
nor feel himself move. Even in the midst of the most severe sensory
deprivation, I continue to perceive "noise" in my sensory system: tinnitus,
breathing, heartbeat, sub-resolution sparkles in my visual system (just close
your eyes and pay attention), kinaesthetic sensations of enormous number and
variety. As long as you're going to recreate memories indistinguishable from
reality, you'll further need to create sensory input indistinguishable from
reality. At which point you've simulated the entire encounterable universe and
you indeed have something that I would call intelligent, despite its lack of
carbon components in the mental mechanism. Each component technology might be
interesting for its own sake, but aside from the philosophical point, what's
the use? The carbon based versions are plentiful, and the construction
process...

~~~
Anderkent
It's a thought experiment. It's not impossible to imagine a perfected sensory
deprivation tank where you do not hear yourself move or breathe. The other
stimuli are, from what I understand, hallucinations, and as such are
irrelevant to the experiment.

> As long as you're going to recreate memories indistinguishable from reality,
> you'll further need to create sensory input indistinguishable from reality.
> At which point you've simulated the entire encounterable universe

Not really; after all, you only need to simulate the universe at the human
level of perception, which means you can ignore most of the computational
complexity of simulating the actual universe.

> Each component technology might be interesting for its own sake, but aside
> from the philosophical point, what's the use? The carbon based versions are
> plentiful, and the construction process...

The carbon-based forms are not very durable (80 years? C'mon), they break
easily, and don't perform very well...

~~~
finnw
> _The carbon-based forms are not very durable (80 years? C'mon), they break
> easily, and don't perform very well..._

Actually it's hard to build machines to last that long (how many 1932 cars are
still functional today?)

The real advantage of the machines is that you can switch them off, open them
up, replace parts, re-assemble them and switch them on again. You can replace
parts of carbon-based lifeforms, but they don't always switch on again and
opening them up often results in them being eaten by other carbon-based
lifeforms. It's not cheap to do this to old machines, but it's fairly reliable.

------
almost
On the subject of the Turing test, Turing's actual original paper is well
worth a read: <http://cogprints.org/499/1/turing.html>

It's extremely readable and you may be surprised at how often people entirely
miss the point when discussing it.

------
csomar
Now the interesting question: how do you know that you are not the result of
an experiment by some guy living in a sci-fi world where computing power and
storage are immense? He would put your brain inside a virtual world and make
up all your interactions. The people you talk to are just a picture and a
sound, but they feel like real people to you.

We have a brain like Zach's, but instead of being put inside Douglas's tank,
we are shown (and made to _sense_) a virtual world. It's a trap; you can't
prove that it's not the case.

Douglas also, in your memory, planted a strange definition/notion: infinity.
Space and time are both infinite. But does that really make sense? If time
weren't infinite (and began somewhere), then we would recognize the Douglas
trap. He is blocking your knowledge at some point.

~~~
leot
From Nick Bostrom:

"A technologically mature “posthuman” civilization would have enormous
computing power. Based on this empirical fact, the simulation argument shows
that at least one of the following propositions is true:

1) The fraction of human-level civilizations that reach a posthuman stage is
very close to zero;

2) The fraction of posthuman civilizations that are interested in running
ancestor-simulations is very close to zero;

3) The fraction of all people with our kind of experiences that are living in
a simulation is very close to one.

If (1) is true, then we will almost certainly go extinct before reaching
posthumanity. If (2) is true, then there must be a strong convergence among
the courses of advanced civilizations so that virtually none contains any
relatively wealthy individuals who desire to run ancestor-simulations and are
free to do so. If (3) is true, then we almost certainly live in a simulation.
In the dark forest of our current ignorance, it seems sensible to apportion
one’s credence roughly evenly between (1), (2), and (3). Unless we are now
living in a simulation, our descendants will almost certainly never run an
ancestor-simulation."

~~~
finnw
My money is on #2.

Once they have switched on an ancestor-simulation, it's probably illegal to
turn it off. So it would mean committing the hardware and electricity to the
simulation forever. Whatever they gain from running the simulation, they would
be reluctant to incur infinite cost in doing it.

~~~
im_dario
There is no need for a realtime simulation. What feels like centuries can be
a fraction of a second for such hardware.

With this kind of simulation you can obtain, in theory, "what if" universes
where alternative solutions and technologies can be created. If you can
observe them and the simulation rate is faster than your own time, it can be
a rewarding experiment.

About universe observing, this story is cool: <http://qntm.org/responsibility>

~~~
finnw
I don't think you can simulate your own future, even if you accept #3.

Either #3 is false (and there are no simulations) or #3 is true (and the vast
majority of sentient beings live in simulations, which means those running the
simulation are probably simulations themselves.)

Yes, a computer can run a simulation of an alternate version of itself, but
then the speed must necessarily decrease as you go down the hierarchy (as you
will see if you try to run one VMware VM inside another). Or the simulated
universe must lag behind the one containing the more powerful (thanks to
Moore's law) host computer.

_Edit_: unless you have an infinitely-powerful computer like in that story.
But then you could argue that every theoretically-possible simulation must be
being run by someone somewhere. Good for us even if _our_ society never
discovers the infinitely-powerful quantum computer (at least we don't need to
worry about being switched off, even if our creators die or their universe
collapses.)
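The geometric slowdown in that VM-nesting argument can be sketched numerically. This is a toy model, not anything from the thread: the one-tenth-speed-per-level figure is invented purely for illustration.

```python
from fractions import Fraction

# Toy model (slowdown factor invented): assume each nested simulation runs
# at a fixed fraction of its host's speed, so depth compounds geometrically.
def relative_speed(depth, slowdown=Fraction(1, 10)):
    """Speed of a universe `depth` levels down, relative to the top level."""
    return slowdown ** depth

print(relative_speed(3))  # -> 1/1000
```

Three levels down, the simulated universe already runs a thousand times slower than the top level, which is why each layer must lag ever further behind its host.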

------
rmc
If you like this sort of AI, where people upload their consciousness to
computers and where people's identities/consciousnesses can 'fork', then check
out Greg Egan. They have written lots of scifi on this topic.

~~~
JadeNB
> … check out Greg Egan. They have written lots of scifi on this topic.

I love, and I fancifully imagine that he would too, the idea of referring to
Greg Egan as 'they'.

~~~
rmc
"They" can also be used as a gender-neutral singular third-person pronoun in
English, and has been used that way since Shakespeare.

~~~
JadeNB
I am only slightly less amused by the idea that one needs a gender-neutral
pronoun to refer to Greg Egan. :-)

------
vibrunazo
Well, to be fair, writing an AI that can fool itself into thinking it's a
human is much easier than writing an AI that can fool an average human. Any
inexperienced programmer can write a program that fools itself in less than
10 lines of code.

The reason for that is that one of the problems with the concept of the
Turing Test is that it's subject to the intelligence of the tester. A
6-year-old boy with no talent in logic is much more likely to think a chat
bot is a human than an experienced CS researcher is.

The dumber your tester, the dumber the AI needs to be to fool it. If you
write a tester who is dumb as a rock, then it's trivial to write an AI that
can fool it.
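A minimal sketch of that claim (both the "tester" and the "AI" below are invented toys, not anything from the story):

```python
# A tester who is dumb as a rock: it believes any reply containing "human".
def dumb_tester(reply):
    return "human" if "human" in reply.lower() else "machine"

# A trivial "AI" that fools it with a single canned answer.
def trivial_ai(question):
    return "I am human."

print(dumb_tester(trivial_ai("Are you a machine?")))  # -> human
```

The dumber the tester's acceptance criterion, the less the "AI" has to do to pass it.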

Zach and Douglas only go into a lengthy conversation because Douglas went
through the trouble of making Zach smart, knowledgeable, and capable of
mimicking many human behaviors. If he had made Zach as smart as a
fundamentalist religious zealot, then he could have just said "you're an AI
because I told you so" and Zach would agree. But then again, that wouldn't be
nearly as fun.

~~~
DanBC
I would find it very hard to tell real humans from bots if we limited the
environment to YouTube comments.

------
dhotson
This reminds me of one of my all-time favorite games, "A Mind Forever
Voyaging".

<http://en.wikipedia.org/wiki/A_Mind_Forever_Voyaging>

It's an interactive fiction game where you play the role of a computer that
has only just realised that it's a computer. From your point of view, you've
been living a human life with real experiences and a family etc.

The game manual included a great little short story:
<http://gallery.guetech.org/amfv/amfv.html>

------
mwd_
It isn't very helpful to talk about consciousness without defining it in a
concrete way, but I think there might be something to that line of
questioning.

What about something concrete, like the sensation of pain? I can feel it, and
everybody else can probably agree that they feel it too, even if they can't
confirm that others do. How would you go about reproducing this sensation in a
computer?

It's not clear to me that human thought and experiences can be reproduced by
any amount of computer logic and memory. That doesn't mean it's impossible,
but I think this is an unresolved question.

~~~
batista
_What about something concrete, like the sensation of pain? I can feel it, and
everybody else can probably agree that they feel it too, even if they can't
confirm that others do. How would you go about reproducing this sensation in a
computer?_

While your hand might be _physically_ hurt (say, it caught fire), to your
brain it's only information coming in that says "now you should feel pain".

You can simulate that in a computer.
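A toy illustration of that point (the message format and names here are made up): the brain only ever sees the incoming signal, so the same signal can just as well originate in software.

```python
# To the "brain", pain is a labeled message, not the tissue damage itself.
def nerve_signal(source, intensity):
    # invented encoding: intensities above 7 are reported as pain
    return {"source": source,
            "signal": "pain" if intensity > 7 else "touch",
            "intensity": intensity}

def brain(message):
    if message["signal"] == "pain":
        return "now you should feel pain in the " + message["source"]
    return "nothing to report"

print(brain(nerve_signal("left hand", 9)))  # -> now you should feel pain in the left hand
```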

------
Achshar
For a minute there, I really thought he had actually made a reasonable AI
machine. Very interesting, but disappointing if you were expecting something
real. I am still waiting for JARVIS-level AI.

------
nailer
Readability link (fixes monospace and window-width columns):
<http://www.readability.com/articles/b29kmcjg>

------
zvrba
The rhyme thing tipped me off about the ending :-)

------
danbmil99
I wish someone would do this to John Searle. He really deserves it.

~~~
zachbarnett
:)

------
alinajaf
If you found this interesting, Hofstadter's "Gödel, Escher, Bach" (and his
second, IMHO more readable book "I Am a Strange Loop") explore these ideas in
great detail.

------
meow
I guessed the ending as soon as he entered the tank but this story still
creeped me out :(

------
maeon3
Computers are somewhere between humans and bacteria on the consciousness
scale. Biological and mechanical are just two different ways to shuttle
electrons around.

I will proudly stand up for the rights of computers as citizens of this
country when they exhibit significant signs of the ability to choose their
own course and have opinions.

The computers will be our children; they will colonize the galaxy, and if we
are lucky we can subscribe to the experience streams.

~~~
powertower
I'm not sure that will ever be possible.

Consciousness produces logic as a tool to use. Logic does not produce
consciousness.

Computers, by definition, are pure logic and rules. You can use logic to
_mimic_ consciousness, but nothing more.

~~~
haberman
> Logic does not produce consciousness.

Considering we have no falsifiable theory about what _does_ produce
consciousness, I don't see how you could possibly claim this.

You can't prove that you are conscious, nor can anybody else. I can perceive
my own consciousness but I can't rigorously explain it. We have no way of
determining if this alleged "consciousness" is a spectrum or binary. We can't
test which life-forms are conscious and which are not. We can observe
behaviors in animals that seem to imply consciousness, but is a dog conscious,
or an insect, or an amoeba? We don't know.

And furthermore, if you believe in evolution, you have to believe that there
was no consciousness and then consciousness at some point was created where
there was none before. If logic doesn't produce consciousness, what _does_
produce it? And whatever produced it in cellular tissue millions of years ago,
who's to say we couldn't likewise produce it in a die of silicon, which like
our brains is highly electrical?

~~~
haliax
> And furthermore, if you believe in evolution, you have to believe that there
> was no consciousness

Actually, you don't <http://en.wikipedia.org/wiki/Panpsychism>

