
Consciousness is a recurrent neural network - xcodevn
One of the problems with consciousness is that you know that you are conscious, but you can't know whether others are. Consciousness is like seeing a dog in front of you, except that only you can see it.

Let us begin with the example of seeing a dog: when you see a dog, photons from the dog reach your eyes and are converted into neural signals; then a neural network _recognizes_ that what you're seeing is a dog.

How do you _know_ that you're seeing a dog?

I believe we can explain this in the same way as seeing the dog. But in this case, what you "see" is not photons but signals from within the brain itself. In other words, the brain takes its own current state (signals) as input, which is possibly no different from the brain taking the signals created by your eyes as input. And as you may already know, this is similar to a recurrent neural network.

In this view, consciousness is a concept the brain learns about the inner processes of the brain itself, just as "dog" is a concept we learn by seeing many dogs!

The stream of consciousness is the result of the brain trying to model itself while continuously receiving signals from itself!

We can push this further and ask: is a brain of any kind (dog, cat, whale) conscious?

My idea is that there is a threshold at which a brain becomes complicated enough to model its own signals!
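
The loop described here can be sketched in a few lines of NumPy. This is only an illustration of the idea, not a model of a brain: a recurrent network whose next state depends both on an external "sensory" input and on a readout of its own previous state. The layer sizes and random weights are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_input = 8, 4

W_in = rng.normal(scale=0.5, size=(n_hidden, n_input))    # senses -> state
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # state -> state (self-observation)
h = np.zeros(n_hidden)                                    # the network's current state

for t in range(5):
    sensory = rng.normal(size=n_input)       # e.g. the signal from the eyes
    # The previous state h is fed back in exactly like a sensory input:
    h = np.tanh(W_in @ sensory + W_rec @ h)

print(h.shape)  # (8,)
```

The point of the sketch is the update line: `h` appears on both sides, so the network continuously "observes" its own previous signals in the same way it observes the senses.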
======
hprotagonist
You need to study some actual neuroscience.

some things to remember:

- neurons are not linear.

- neurons are not time-invariant.

- neurons are not causal.

- neurons and actual anatomical connectivity do not map well to electronics-inspired wiring diagrams. (see: dendritic arbor)

- anatomical structure does not imply functional connectivity.

- functional connectivity does not imply anatomy!

- synaptic junctions respond more or less well to different neurotransmitters, all of which are continuously present, at different times. We don't know why.

- So far, what we have observed about the anatomy and physiology of the brain does not look like an RNN.

\- The relationship between the meat architecture and the phenomena it hosts
may not be as 1:1 as you'd like. Opinions vary.

My favorite analogy about relating brains to consciousness is this: "if you
believe that brains are like computers, (which you shouldn't, but just for the
sake of argument, let's), then even if you really and truly produce a full map
of the brain, what you've _got_ is the spec sheet for an x86 processor. What
you _want_ is the user manual for Mac OS."

~~~
naasking
> \- neurons are not causal.

What does that mean?

~~~
hprotagonist
the same input does not always produce the same output. In fact it's
guaranteed not to, for at least three reasons:

- repeatedly presenting the same input to a neuron has a response that depends on neurotransmitter reuptake rates. If the synapse isn't "recovered", you get a different spike rate out.

- neurons fire stochastically in the absence of stimulus. Responses to small stimuli are indistinguishable from noise.

- The above two phenomena propagate through connected series of neurons in a nonlinear way.
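
A toy sketch of the first two effects (all constants are invented for illustration, not physiology): a synapse whose resource is depleted by each spike and recovers slowly, plus occasional spontaneous firing. Presenting the identical stimulus train on different trials yields non-identical spike trains.

```python
import random

def run_trial(stimulus, seed):
    """Return the spike train a toy synapse produces for a stimulus."""
    rnd = random.Random(seed)
    resource = 1.0            # available transmitter, 0..1 ("recovered" = 1.0)
    train = []
    for s in stimulus:
        drive = s * resource                        # depleted synapse -> weaker drive
        fired = drive > 0.5 or rnd.random() < 0.05  # threshold, plus spontaneous noise
        train.append(fired)
        if fired:
            resource *= 0.6                         # each spike depletes the synapse
        resource = min(1.0, resource + 0.1)         # slow recovery
    return tuple(train)

stim = (1.0,) * 20                                  # the same input, repeated
trains = {run_trial(stim, seed) for seed in range(100)}
print(len(trains) > 1)  # True: identical input, non-identical outputs
```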

~~~
naasking
Right, but these properties don't make neurons acausal; they make them stateful
and sensitive to noise, which is not explicitly modelled as an input in your
view. You can model noise as a constant input variable, though, which preserves
causality, just not determinism.

~~~
hprotagonist
see also,
[https://books.google.com/books?id=BUghCgAAQBAJ&pg=PA45&lpg=P...](https://books.google.com/books?id=BUghCgAAQBAJ&pg=PA45&lpg=PA45&dq=acausal+neurons&source=bl&ots=RrTAEq-H7d&sig=19JLLC0hhgyWZaadZylDtg5geRE&hl=en&sa=X&ved=0ahUKEwiLvt6S9MHQAhVC1CYKHUoZCScQ6AEIKTAC#v=onepage&q=acausal%20neurons&f=false)

------
hasenj
The problem with this line of thinking is that any data model for anything has
absolutely no intrinsic meaning. It's just an encoding.

To give the simplest example: you can model a color in many different
encodings, 'red', 'hsl(0, 50%, 50%)', 'rgb(255, 0, 0)', etc.

Any "thing" can be encoded in an infinite number of different ways. The same
applies to the state of your mind. Why should one encoding give rise to
consciousness?
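
For instance, with the standard library's colorsys module (which uses HLS ordering, not HSL), the same pure red comes out as entirely different data under two encodings:

```python
import colorsys

# One referent, two encodings: the numbers differ, the color doesn't.
rgb = (1.0, 0.0, 0.0)                  # 'rgb(255, 0, 0)' scaled to 0..1
h, l, s = colorsys.rgb_to_hls(*rgb)    # stdlib order is HLS, not HSL

print(rgb)              # (1.0, 0.0, 0.0)
print((h * 360, s, l))  # (0.0, 1.0, 0.5) -> 'hsl(0, 100%, 50%)'
```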

Arguably, the manner in which water is flowing within a sewage pipe network
could be interpreted to be encoding some kind of information. Would you argue
that some particular arrangement of water flow within a pipe network could
give rise to consciousness?

I think there's a difference between a system modelling itself and becoming
conscious of itself.

Any system can model itself. We can do it easily with computers. Arguably the
linux kernel has a model of itself, its hardware, its inputs, etc. That
doesn't make it conscious.

~~~
dwaltrip
Computers don't model much at all, let alone themselves. They run static, one-
off code that was fed to them by external sources, and contain no coherent,
dynamic models of real world systems. My dog models more of the world than any
computer we've yet built.

Sure, some computer programs model specific things quite well. But these
models are very limited, fixed, and useless without human interpretation.

~~~
hasenj
> these models are very limited, fixed, and useless without human
> interpretation.

What model _isn't_ useless without human interpretation?

~~~
dwaltrip
The models of the world inside the brains of many different animals. They
sustain themselves.

Even the models of the world in our own brains are not fully known to us, and
we largely make use of them without interpretation.

Basically, any model that wasn't created by humans does not need human
interpretation.

~~~
hasenj
That's a circular argument. You're just assuming that these models have
intrinsic meaning.

------
wbhart
Can't the same argument be used to show that consciousness is a whole bunch of
things that it clearly is not? Just arguing that consciousness has certain
attributes, and that it shares those attributes with something else, doesn't
make consciousness an example of that something else.

I think there is a general trend towards thinking of consciousness as an
emergent phenomenon which exists when a system is complex enough and has
certain attributes, such as memory, information processing capability, etc.

But none of these definitions enlighten me as to when or why such a system
might become self-aware.

~~~
hacker_9
Well, we know the brain is incredibly advanced, as evolution has had a lot of
time to work on it. It could be that making something conscious just requires
a very complex physical procedure or set of interconnected components. Once
humans became conscious, it then turned out to be evolutionarily beneficial to
remain conscious in future generations.

To me, to be 'conscious' is to be able to make decisions. Presented with all
our sensory data, memories, knowledge etc., the brain creates possible models
of the future every millisecond and passes them to the conscious component,
which is where we choose how to proceed.

After a time, it doesn't even have to ask us anymore, as it just replays past
decisions: for example walking, driving a car, reading words, etc.

~~~
wbhart
It seems to me that a meteorological expert system is conscious by that
definition. I find that dubious.

~~~
hacker_9
An expert system doesn't actually make decisions; the programmers made the
decisions and then hard-coded them in. Something along the lines of 'if
(wind_pressure > 0 && rain_density < 0.5) do_something() else etc'. It's
probably a bit more dynamic than that, but however you do it, it is always the
programmers who make the decisions and write them in, to be replayed by the
software later with different variables.
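
A runnable version of that sketch; the variable names and thresholds are the comment's own illustrative inventions, not any real forecasting system:

```python
# Every branch below was authored by a programmer in advance; the
# program replays those decisions with new variable values, it does
# not make decisions of its own.
def forecast_action(wind_pressure: float, rain_density: float) -> str:
    if wind_pressure > 0 and rain_density < 0.5:
        return "issue_wind_advisory"
    elif rain_density >= 0.5:
        return "issue_rain_warning"
    else:
        return "no_action"

print(forecast_action(1.2, 0.1))  # issue_wind_advisory
print(forecast_action(0.0, 0.9))  # issue_rain_warning
```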

------
andybak
I have a tongue-in-cheek theory:

Those who think there's not a "hard problem of consciousness" or hand-wave it
away with purely materialist explanations probably aren't conscious.

Folks. We have p-zombies in our midst...

~~~
yarrel
But surely the idea of p-zombies is so inimical to the idea of a self that
only p-zombies could believe in them?

(Sips tea.)

------
lproven
My thinking is similar. Long ago, I outlined it on a mailing list as follows:

I'd like to posit a progression of animal awareness. (In the full knowledge
that there is no "tree" or "hierarchy" of evolution; the progression is merely
a convenient way of presenting some data.)

1. Single-celled animals, such as amoebae and /Paramecium/. Many of these
display simple taxic responses: they move towards light, away from heat, and
towards or away from certain chemicals - they pursue concentration gradients.
In other words, a single cell can display what could be called "voluntary"
movement; it does not follow programmed paths but responds to its environment.
You can watch a Paramecium in a microscope, swimming through a world of bits
of plant and mineral matter in water. If they bumble into something, they
recoil, and set off in another direction. If they catch a scent of something
that might be food, they change direction and set off in pursuit of it. It's
much like watching a much bigger animal, like a mouse, explore an unfamiliar
environment. Surprisingly like.

Similar behaviours can be observed in all sorts of small animals, like
collembolans and nematodes.

Small animals - even single-celled ones - interact with their environment,
responding to stimuli in ways that are more than a simple, determinate
pattern. They are not like a clockwork mouse or toy that always follows the
same path.

~~~
lproven
2. If you put a woodlouse in a T-shaped maze - one junction, choice of left
or right - and teach it that food and a damp place lie in one direction and
bright light and dryness lie in another, you can teach an individual woodlouse
to turn left or right consistently. In other words, a woodlouse has a memory.
It can learn new behaviours.

Woodlice have enough "brain" to at some level form a model of their
surroundings. They can learn a very simple map.

3. Many small invertebrates form social colonies. In land-living ones, like
ants and termites, animals leave a chemical trail as they move around,
depositing markers indicating desirable discoveries. This is how an ant colony
"finds" sugar in our kitchens, say. One randomly-exploring ant finds sugar,
and retraces its steps home leaving a marker indicating that it's found food.
Other ants follow the trail, reinforcing it, until a path is marked for the
colony to move large amounts of food back to the nest.

In other words, ants make and read signals for other ants.

~~~
lproven
4. Flying insects can't do this; there's no direct way to mark a trail in the
air. So bees, as is well known, have evolved a form of symbolic communication:
the "waggle dance". By dancing in a certain fashion, a worker can "tell" other
workers the direction and distance to a food source. The dance indicates
direction relative to the sun - bees can see the polarisation of light and
thus see the sun even when it is covered by clouds - and distance, and it does
so independent of the orientation of that bee while it is dancing relative to
the sun.

In other words, bees have symbolic communication. Their signals are not
direct, one-to-one, follow this to the food; they are abstract and require
interpretation. Obviously, this is innate, it is not learned behaviour, but
individual insects learn the way to food and communicate this symbolically to
other individuals.

5. Small mammals can be taught complex behaviours in the lab. Rats can be
taught to press levers to obtain treats; mice can be taught to run mazes;
"experiments" with wild squirrels have shown that the animals will perform an
amazing array of actions to obtain a food reward. Many of these patterns of
behaviour show that the animal is able to learn that unrelated actions can
elicit a reward, showing some kind of understanding of cause and effect.

6. Non-mammalian vertebrates perform all sorts of ritualistic behaviour that
is not directly related to finding food, a mate, predator avoidance and so on.
Many birds have mating rituals; male bower birds construct huge and elaborate
structures to tempt females into mating, which are presumably derived from
nest-building behaviour, but actually the structures serve no purpose other
than to be on some level "pleasing" to the females. Many migratory birds such
as albatrosses and swans form long-term pairs which persist over years,
sometimes lifelong. They can recognise their mate and when they rendezvous at
breeding time indulge in long rituals which appear to stimulate and reinforce
the pair bond. Obviously mate recognition does not involve some simple
biochemical cue of kinship which might be used in a mother finding her
offspring in a large, mobile breeding colony independent of environmental
cues, such as penguins. Maybe penguins can "smell" or "taste" their own
offspring by some form of genetic resemblance; I'm not aware of any research
into this. However, even if they can, and don't somehow just learn what their
offspring look or sound like, the same cannot apply to individuals recognising
their mates, who are, by choice, usually not directly related.

In other words, these birds "know" their non-related partners, can recognise
them and distinguish them from strangers, and behave in entirely different
manners around their partners to around other individuals. When encountering
their partners for the first time after a protracted absence, they indulge in
displays and other complex behaviours which are not directly survival-related.
It is hard to watch such a pair being reunited without thinking that they
"feel happy" to see one another. It is also a reasonably common experience to
see one which is isolated and has lost its partner, witness its lack of
animation, listless behaviour and so on, and the human reaction is to
interpret this as the animal "feeling sad".

~~~
lproven
7. Anyone who has owned a pet dog has witnessed canine behaviour which at
times closely mimics human responses: being hopeful, being excited, being sad,
being afraid and so on. The null hypothesis here, it seems to me, is that the
animal actually has these states of "mind", rather than that it is going
through some completely different, separate, unrelated process which merely
closely resembles human emotional states. To my surprise, as a fairly recent
and reluctant cat owner, I have seen my cats exhibit what appears to resemble
such complex reactions as a rapid survey of the surroundings to see if anyone
witnessed an awkward fall.

8. Outside of the remit of pet animals in human company, though, mammals
which live in social groups form a complex of often hierarchical
relationships: dominant and submissive members of the group and so on. They
perform patterned behaviours where different members have different roles;
hunting animals such as female lions and wolves, for instance, cooperate, so
that hunting parties contain scouts, flushers, chasers and so on. Many group
predators display patterns of behaviour such as setting up ambushes. These
roles are not fixed but are interchangeable between members of the pack. This
demonstrates that the animals not only know of each others' existence, but of
their relationships, since for instance in felines often only relatives
cooperate. Elephants, wolf packs and so on may comprise unrelated individuals,
though. Particular members of the group have expectations of the ways that
others will act; cooperation must be learned, and members that do not
cooperate may be "punished" by biting or by withdrawal of food. They also
perform planned activities; ambushes or driving prey towards a pre-placed
fellow pack-member indicates some form of awareness of the future. One cannot
plan if one does not remember past events and strive to re-create things that
have worked before; this indicates an awareness of time and of modelling the
behaviours of other animals, so that there are expected, desired behaviours
and unexpected, undesired ones, both in fellow pack members and in the prey
animals. This implies a considerable degree of ability to form and maintain
mental models of the behaviours of unrelated individuals.

9. Many animals have been shown to make and use tools: not only chimpanzees
in the wild, or the Galápagos woodpecker finch, which uses thorns as probes
and levers to get at otherwise-inaccessible food items. Crows have been
experimentally demonstrated to be able to improvise tools from available
objects to get at food items. In other words, tool use is not always inherited
behaviour or mimicry of others; the invention of novel tools has been
demonstrated outside of the mammals.

10. Again in the Aves, recently, a well-known experimental African Grey
parrot named Alex died. Alex had been taught human speech; he was able to
identify a wide range of objects and colours by name in English, to count up
to five or more, to understand simple questions in English and formulate novel
answers in the same spoken language. He was able to spell simple words out
phonetically - "nuh, uh, tuh, NUT". He was able to express desires in spoken
words: "wanna nut, now", even when this did not form part of the experimental
dialogue. Many instances have shown that his utterances were not simply
mimicry of his trainers'; he was able to construct sentences of his own, as
well as parse those of his trainers. This shows considerable proficiency in
manipulating a form of symbolic communication alien to him, very considerably
exceeding the abilities of chimps and gorillas in the use of sign language.
Video clips are available of conversations with Alex; they compel most
observers to completely reassess the presumed level of intelligence of a bird
with a brain the size of a walnut, orders of magnitude smaller than a human's.

------
rayalez
Great explanation, makes perfect sense to me! My own theory is pretty much the
same. Just as you can experience other internal sensations in your body when
nerves send signals that are recognized by the brain, the brain can recognize
its own signals.

I highly recommend reading Gödel, Escher, Bach (if you don't have time, just
read the introduction to get the general idea). In this book the author
explains how meaning arises when things (like language, or math equations, or
neurons in the brain) "mirror" things in the real world, when their structure
can be "mapped" onto some other structure (so-called "isomorphism").

He says that the brain "mirrors" the world around it as it builds a world
model. But the brain itself is part of the world, so it builds a model of
itself as well: neurons recognizing/observing/experiencing other neurons.

Just like you can see a dog you can "see" your own brain state.

I've also heard a cool quote somewhere - "Consciousness is simply just what it
feels like to have a brain". You can close your eyes and feel the position of
your body, you can feel your stomach being full, and you can feel your brain
thinking.

~~~
dghf
> I highly recommend reading Gödel, Escher, Bach (if you don't have time,
> just read the introduction to get the general idea).

I don't think you'll get any real sense of what GEB is about just by reading
the introduction.

~~~
rayalez
Well, duh doi. What I mean is that the introduction gave me the epiphany
related to the concept I'm talking about, and if you're not ready to commit a
ton of time to reading the whole book, you should still read the intro,
because it explains a lot of awesome things related to the OP's post.

But you're right, the whole book does contain more information than its first
few pages. No shit.

------
unlikelymordant
When I saw the title, I expected this post to be somebody's 'stoner
philosophy' of what consciousness is. But I really like this idea. If true, it
means consciousness is naturally emergent as soon as networks can be
efficiently trained to represent complex enough functions. And we might not be
too far away.

How do you define consciousness? Like the voice in your head? Are you saying
that voice is essentially the predicted copy of you, i.e. what you predict you
would do in the current situation? I think this could explain why we have a
conscious and an unconscious mind: the unconscious is the actual brain, the
conscious is just our prediction of what we would do in the current situation.

~~~
catshirt
not that I'd personally choose to call the OP's post a "stoner philosophy"...
or even use that term in general... but...

if this is not a "stoner philosophy" of consciousness, then what is?

------
SuperPaintMan
I can't tell if this is ironic or not.

------
MrQuincle
On a train, so summarized:

+ You have forward-inverse models, e.g. by Wolpert.

+ You have a sequential winner-take-all process, e.g. see Baars.

+ You have homing in on the on/off switch. Search for claustrum and Francis
Crick.

The challenge, of course, is to know what the brain knows and learns about
itself. Oscillations at the alpha, beta and gamma levels have, as far as I
have seen, no place in current networks. I find it suspicious that we don't
reproduce this behaviour. Are we sure that it is nonfunctional?

------
amelius
This explanation feels a little tautological to me. We knew this already,
because physics doesn't care what is a brain and what is a dog. All physics
knows is that there are atoms, and whether they belong to a brain or a dog
doesn't really matter. It's all the same.

And this theory fails to explain how we can feel pleasure or pain, and, it
fails to predict whether an artificial neural network can feel pleasure or
pain.

~~~
visarga
It's just survival: when a species gets good enough at survival, it learns to
adapt to its environment. At first it's just simple reflex, trigger and
response, but through evolution species have become more capable of adapting.
We humans need to adapt not just to nature, but also to human society and
technology, and human consciousness is just that function. Without it we would
be dead of hunger and thirst in three days. What is special about conscious
systems is that they adapt and preserve themselves in the face of adversity.

Pleasure and pain are related to reward signalling in reinforcement learning.
Humans have inborn reward signals which have been selected by evolution.
Artificial agents have reward signals too, but they are designed.
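
That claim can be illustrated with a toy tabular learner in which "pain" is nothing but a negative reward that reshapes behaviour; every number here is an arbitrary assumption:

```python
import random

random.seed(0)
q = [0.0, 0.0]   # learned value estimate for each of two actions
alpha = 0.2      # learning rate

for _ in range(200):
    # Mostly greedy, with a little exploration.
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = max(range(2), key=lambda i: q[i])
    reward = -1.0 if a == 0 else 1.0   # action 0 "hurts", action 1 is rewarding
    q[a] += alpha * (reward - q[a])    # update estimate toward observed reward

print(q[0] < q[1])  # True: the agent learns to avoid the "painful" action
```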

So I think the path to consciousness is to create artificial agents that learn
to survive and adapt. When such a complex behavioral system is designed, it
naturally learns about its relation to the world. It has consciousness by
extension of being capable to protect its life.

~~~
amelius
Ok, so would an algorithm that uses genetic programming have a consciousness
like ours? Would it be capable of feeling pain?

This is what I'm interested in, and what the theory presented here fails to
explain.

~~~
visarga
It would be able to feel pain if it could learn to avoid it. In humans, pain,
hunger and other emotions are just reward signals that guide behaviour so that
we don't die.

More generally, it would also need to be embodied and responsible for its own
survival; otherwise it's just a computer running software, lacking any
external context.

------
Kinnard
You'll really appreciate this research on the evolution of self-awareness and
mirror neurons: [https://www.edge.org/conversation/the-neurology-of-self-
awar...](https://www.edge.org/conversation/the-neurology-of-self-awareness)

------
maliniakh
I have the very same theory about this and can't imagine it being anything
else. Yet I hadn't come across such an explanation until now.

------
vorotato
The map is not the territory.

------
lproven
11. Such symbolic communication is not unprecedented among wild animals.
Social mammals such as meerkats have vocal calls which can indicate the type
of threat that a scout has perceived. Many primates do similar things.
Different groups use different sounds; these are not inherited actions, they
are learned, or else genetically-similar groups in varying locations would use
the same noises. Wild animals use symbolic communication to manipulate the
behaviours of others, sometimes even in an altruistic fashion - favouring kin
over themselves.

12. As previously discussed, chimps have been observed to lie, meaning that
chimps are not only able to model the behaviours of other chimps in their
troop, they also model the mental states of those others. This is not to say
that a butterfly with eyespots on its wings is consciously "lying" to
predators, but when a more complex animal such as a chimp gives false
information to other chimps, I think that what it's doing is certainly trying
to manipulate another's mind, implying that it knows it has a mind.

What I'm trying to demonstrate here is that there is a fairly simple, steady,
observable and demonstrable increase in the sophistication of animal awareness
of the world. Few aspects of human cognition are unique to humans; just about
everything we do except writing - a recent human innovation, not an
evolutionary one - various animals do too. Animals can be shown to possess and
perform just about every mental trick that we do, from symbolic manipulation
to abstract thought. Cognition is not a uniquely human behaviour and neither
is self-awareness. We're just better at it. It's a difference of degree, not
of kind.

Now, this being so - and I think it is unarguable, but I welcome attempts -
and the basic aspects of stimulus/response being readily demonstrable right
down to single cells, what I want to ask is this:

Where is the step from simple reflex action to perception/thought/response?

Even in humans, functional MRI has shown that the cerebral impulses governing
physical actions arise before the conscious mind is aware of them. While we
undoubtedly do reason things out and act on our conclusions, in much of the
basic operation of the human brain the conscious mind is merely a spectator,
watching what's going on "beneath" it and then rationalising after the event
that it "decided" to do that.

Thinking is not, I submit, some special event in the brain. It's merely a
slightly more sophisticated version of the very simple environmental modelling
that even small crustaceans like woodlice do. Right down at the level of
animals that have no brain, merely a small loop of nerve tissue around the
mouth with more ganglia than elsewhere, animals take a step back from simple
direct-wired stimulus->response, filter the incoming signals, form a model of
what's going on, and act upon it. This, I submit, is the simplest kind of
"mind", and the difference between it and us is that we have an awful lot more
neurons and much more complex neural networks in between "in" and "out". It is
a difference of degree, not of kind. Purely quantitative, not qualitative.

A woodlouse "sees" in exactly the same way as we do. There's no deep
difference. Many insects and birds and fish see colour better than we
primates; they can see more colours, more differences over a greater range.
The bigger the brain, the more complex the pattern-analysis; the bigger the
patterns that can be identified. What happens, though, is still the same: a
sensor detects a stimulus, sends an action potential down an axon to a
ganglion, where it triggers a cascade of other action potentials that
propagate across a network of neurons until they either elicit a response or
not.

The difference is that in humans, the cascades are bigger than they are in
other animals, except whales, dolphins, elephants and the like. In at least
some of the great apes - chimps and orangs - some of the impulses originate in
some circuits whose job is to monitor the activity of the rest of the brain;
there are circuits given over to modelling the activities of the rest of the
brain, and there are circuits given over to modelling the model. The senses
include awareness of brain activity: a feedback loop. The brain model includes
a model of the brain model.

Where, in this model, do "qualia" occur? Where is the great marvellous miracle
over which so much paper and so many innocent electrons are expended?

To me, it all seems fairly simple and clear. I don't understand why there is
so much debate.

