
Integrated Information Theory of Consciousness - lainon
http://www.iep.utm.edu/int-info/
======
monktastic1
I'm a fan of Scott Aaronson's take on IIT:
[http://www.scottaaronson.com/blog/?p=1799](http://www.scottaaronson.com/blog/?p=1799)

~~~
jes5199
Relatedly, Max Tegmark estimated the Phi values of some neural networks in
his paper "Consciousness as a State of Matter":
[https://arxiv.org/pdf/1401.1219v2.pdf](https://arxiv.org/pdf/1401.1219v2.pdf)

And while he seems to take IIT pretty seriously, his conclusion sure seems
like a refutation of the idea that IIT's definition of Phi means anything:

> Information stored in Hopfield neural networks is naturally error-corrected,
> but 10^11 neurons support only about 37 bits of integrated information. This
> leaves us with an integration paradox: why does the information content of
> our conscious experience appear to be vastly larger than 37 bits?
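
The error correction Tegmark refers to can be sketched with a tiny Hopfield
net (a hypothetical minimal example, not taken from the paper): store one
pattern with the Hebbian rule, flip a bit, and watch the update rule repair
it.

```python
def train(patterns, n):
    # Hebbian outer-product rule with a zero diagonal.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    # Synchronous sign-threshold updates.
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([pattern], len(pattern))

probe = list(pattern)
probe[0] = -probe[0]          # corrupt one bit
recovered = recall(w, probe)
print(recovered == pattern)   # the flipped bit is corrected
```

The point of the quote is that this robustness is bought with redundancy, so
the *integrated* information stays tiny relative to the raw bit count.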

~~~
rwnspace
Amateur here, though I have read Tononi's book and some of his work: my
suspicion is that the answer has to do with how those bits are situated.
Environmental complexity (or how it co-varies with one's percepts) seems to
lend a lot to the richness of conscious experience.

------
visarga
Ten years ago I was a big fan of IIT and Giulio Tononi. But today, I prefer
the reinforcement learning paradigm. It's much more powerful. Instead of
consciousness, we have agents that perceive the environment and act in order
to maximize rewards. Agents are also endowed with the power to simulate /
imagine possible futures so they can plan and reason. An agent is something
concrete; consciousness doesn't even have a definition. That is why I
appreciate the RL paradigm: it brings concreteness to an almost metaphysical
research topic.
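
The perceive/act/maximize-reward loop can be sketched minimally (a
hypothetical two-armed bandit with an epsilon-greedy agent; the numbers and
names are illustrative, not from any specific RL framework):

```python
import random

random.seed(0)

true_means = [0.3, 0.7]   # hidden reward probabilities of two actions
estimates = [0.0, 0.0]    # the agent's learned value estimates
counts = [0, 0]

def act(eps=0.1):
    # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
    if random.random() < eps:
        return random.randrange(2)
    return max(range(2), key=lambda a: estimates[a])

for _ in range(2000):
    a = act()                                            # act
    r = 1.0 if random.random() < true_means[a] else 0.0  # observe reward
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]       # running mean

print(max(range(2), key=lambda a: estimates[a]))  # learned best action
```

Everything here is operational (states, actions, rewards), which is the
concreteness being argued for.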

~~~
KingMob
Um, "agents that perceive the environment" sounds a lot like homunculi, which
effectively just punts the Hard Problem downstream, but doesn't address it.

This is no different in essence than Searle's Chinese Room problem, which at
its core asks "If the parts aren't conscious, how can the gestalt be?"

We don't have an answer, but it must be possible, because the brain does it:
individual neurons are unconscious electrochemical devices, yet they still
add up to experiencing the redness of red.

~~~
dragonwriter
> This is no different in essence than Searle's Chinese Room problem, which at
> its core asks "If the parts aren't conscious, how can the gestalt be?"

The answer to that question is “consciousness is a property of the interaction
between the parts, not of the individual parts.” Or, alternatively,
“consciousness is not a well-defined objective property, just a vague
incoherent concept that has lots of emotional attachment, but which you can't
analytically say is or is not present in any entity or aggregate.”

The Chinese Room is useless as anything other than an overly elaborate
illustration that there isn't a useful, clear understanding of what
“consciousness” means.

~~~
visarga
My take on the Chinese Room: it is a failed thought experiment. The CR
differs from humans by embodiment; namely, humans are agents in an external
world, with certain limitations, such as the need for food and shelter and
the avoidance of pain and injury, which a room doesn't have. Thus the CR
can't learn the same value system as a human. The CR has nothing on the
line; humans have to protect their lives.

By removing the world itself from the CR, it is limited in its growth. The
world allows for exploration and the testing of hypotheses.

The CR can't self-reproduce; humans can, and reproduction brings a whole
list of new constraints for humans that guide evolution. Genetic evolution
is also a meta-learning algorithm that the CR lacks. Humans are born with a
set of instinctive values which guide the development of the brain, like a
program. The CR has no such initial values (reward channels), and more
generally, the problem of learning in the CR is glossed over.

Searle should have compared humans with a frail robot that has to earn its
electricity and the raw materials to produce spare parts through its own
efforts, and that can learn from other robots and teach them its knowledge.
Such a robot might have a perspective on the world closer to a human's,
being embodied and subject to limitations that force it to learn intelligent
action.

~~~
red75prime
The problem is not the differences between the Chinese room and a human. The
problem is the different ways we perceive them. One is intuitively perceived
as conscious, the other not so much. If you can't perceive something as
conscious because you see all its moving parts, it surely isn't, right?

I see this as "what we can program is not a mind" taken to the extreme.

------
_-__---
Here's the web page of the main research group developing this theory at the
moment:

[http://integratedinformationtheory.org/](http://integratedinformationtheory.org/)

Giulio Tononi's work is very interesting. I suggest anyone interested in
sleep/consciousness research take a peek at what his group is doing.

------
m15i
I find it hard to believe that "consciousness" can exist in a non-neuronal
(or at least non-biological) system, i.e., that phi can be greater than 0
outside of a nervous system. But IIT suggests it can, albeit in a small
amount, I guess because of back propagation. "If IIT is correct in placing
such constraints upon artificial consciousness, deep convolutional networks
such as GoogLeNet and advanced projects like Blue Brain may be unable to
realize high levels of consciousness."

[http://www.iep.utm.edu/int-info/#SH4c](http://www.iep.utm.edu/int-info/#SH4c)
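
For intuition about nonzero phi in simple non-biological systems, here is a
toy effective-information calculation in the spirit of IIT (not Tononi's
full Phi 3.0 measure; the two-node system and the partition are
hypothetical): the whole system's past determines its present, while each
part alone predicts nothing about its own past, so integration is positive.

```python
import itertools
from collections import Counter
from math import log2

# Toy system: two binary nodes that copy each other each tick
# (A' = B, B' = A), with a uniform distribution over past states.
def step(state):
    a, b = state
    return (b, a)

states = list(itertools.product([0, 1], repeat=2))

def mutual_info(pairs):
    # Mutual information (bits) between past and present samples.
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(p for p, _ in pairs)
    py = Counter(q for _, q in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# The whole system's past fully determines its present: 2 bits.
whole = mutual_info([(s, step(s)) for s in states])

# Cut into parts {A} and {B}: each part's present came from the
# *other* node, so a part predicts nothing about its own past.
part_a = mutual_info([(s[0], step(s)[0]) for s in states])
part_b = mutual_info([(s[1], step(s)[1]) for s in states])

phi_like = whole - (part_a + part_b)
print(whole, part_a, part_b, phi_like)  # 2.0 0.0 0.0 2.0
```

So even two wires that swap bits show "integration" under this kind of
measure, which is exactly why IIT assigns small nonzero phi to non-neuronal
systems.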

