

Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander) - altro
http://www.scottaaronson.com/blog/?p=1799

======
apl
I appreciate the notion of a "pretty-hard" problem of consciousness; it nicely
captures what people in the field, philosophers and scientists alike, are
_actually_ looking for.

His counterarguments appear sound, especially the technical concerns. (That's
not particularly exciting, though: formalisations of philosophical arguments
often crumble under the mathematician's lens. Even the best ones!) They're not
exactly new, I'd say, but their rigour is refreshing. My key problem with the
overall approach remains the shitty test set we have for any theory of
consciousness. It ultimately consists of two elements, or three if we're
being generous: almost everyone except the philosophical extremist agrees
that we are conscious and that a stone isn't. IIT works quite well for
these cases. Some people would include, say, a cat as a conscious
being while maintaining that its "level" of consciousness is reduced. This
boils down to

    C_stone < [C_cat <] C_human

which isn't much. There's no real hope for extending it. Any other cases like
the ones Aaronson discusses are ones for which we don't even have strong
intuitions. Sure, we'd like to think that none of the entities he describes
are in fact conscious. But if that's the bullet I have to bite in order to get
a decent theory of consciousness, then I might be OK with that.

~~~
Udo
There is little doubt that the cat is conscious, and the idea that
consciousness is a gradient with more than one dimension is not really in
dispute. In fact, you can build a very simple gradient model by observing
nothing but humans: a human baby is scarcely conscious at all, a toddler is
pretty much on par with the cat, and a grown human has a higher level still.
We think this is true based on at least two things: awareness and
meta-awareness (which includes the ability to reason about one's own
existence as well as about the world in general).

The cat and the toddler are clearly conscious in the sense that they do have a
recognizable life experience. They don't just react to the environment, they
make models of it, and they have a limited ability to abstract observations.
They experience life, and they have complex dreams and emotions. However, they
still lack a deeper understanding of causalities, and they can grasp the
subjective realities of other beings only on an instinctive level. Adult
humans are less limited in this regard, but it's easy to imagine hypothetical
beings who perform better still.

That's bad news for the validity of consciousness as a concept, since it seems
to be interlinked with intelligence, which itself is a fuzzy idea at best.

So these observations amount to the position that "consciousness" isn't a
fundamental property at all, it's the result of other processes and it can
come in different flavors. The concept of "intelligence" is similarly
flawed. With both, we tend to make the mistake of treating them as simple
scalar values. They're not; each is a name we give to a collection of
different capabilities. As such, I believe a mathematical model of
consciousness is actually pretty unscientific, since consciousness is not an
objective property. It's an artificial label we're obsessed with.

~~~
apl
That's not the notion of consciousness we're dealing with when talking about
the "hard" problem of consciousness. You're describing its psycho-functional
interpretation: the ability to see yourself as a "self".

Qualia are a largely orthogonal issue. Specifically, it's imaginable that an
entity we'd consider unintelligent has a rich and detailed subjective internal
life.

~~~
dragonwriter
> Specifically, it's imaginable that an entity we'd consider unintelligent has
> a rich and detailed subjective internal life.

Yes, just as it's possible that an entity we'd consider intelligent
_doesn't_, but since it's subjective, it's not subject to empirical -- and,
hence, scientific -- verification. Anything we are going to say objectively
on issues _related to_ consciousness isn't going to address that, because
that's a question outside the scope of science.

~~~
KC8ZKF
It's ontologically subjective but epistemologically objective, in the sense
that we can ask questions and observe behavior. Pain is a good example.

------
byerley
I'd argue that the author's views are too concerned with intuition. If we
treat consciousness as a scale rather than a binary property, then of course
a thermostat is somewhere on that scale; it simply has such a small value
that it's not worth considering, which is what produces our philosophical
intuition. There's nothing magical about our level of consciousness, as
unintuitive as that may be. Aaronson argues that the model must yield to our
intuition, but if the model is consistent and explains our observations, our
intuition should yield to it (the obvious analogue here being quantum
mechanics).

In regards to saying "both that the Hard Problem is meaningless, and that
progress in neuroscience will soon solve the problem if it hasn’t already,"
neuroscientists and mathematicians too often overlook the Turing test here.
It's consistent (if not accurate) to say that the "Hard Problem" is
ill-defined while maintaining that we've clearly solved it once we can
comprehensively beat the Turing test. That's what the Turing test was
designed for: knowing that we've created a conscious machine, because it's
indistinguishable from a human, even if we can't agree on what consciousness
is.

------
alokm
I have had the fortune of working on this IIT right after my IIT :). I worked
on implementing the research software used by Giulio Tononi and his research
team. I added a visualizer in OpenGL and optimized the computation of
integrated information.

------
tunesmith
Pretty dense article for what seems to be a really daffy definition of
consciousness, so maybe someone can summarize? It seems to just be the
difference between systems thinking and reductionism, but why would any
irreducible concept be evidence of consciousness? Why on earth would that idea
make any more sense than, say, a math problem that is too hard for a 4th-
grader being evidence of consciousness? Or a locked machine, or a patented
process, etc? There's nothing intrinsically special about an irreducible
process other than it being irreducible - it's not like it's mystical or
anything.

~~~
gone35
Close. Aaronson's most devastating argument (he offers two more) is that
Tononi's \Phi or "integrated information" of an input-output system
(function), under a reasonable operationalization as a measure of how
correlated subsets of inputs are with subsets of outputs, is just not a good
measure of "consciousness": as it happens, large families of rather mundane
systems that one wouldn't think of as "conscious" in fact have _provably
large_ "integrated information" or \Phi by design -- such as Reed-Solomon
codes, for instance.

Put another way, if Tononi were right then your (say) portable DVD player
ought to be "conscious" because, surprisingly, the amount of "integrated
information" achieved by the scratch- and skip-tolerant error-correcting code
it uses internally has to be _huge_ in order for it to work. But that is
_prima facie_ ridiculous; so either we are wrong and the DVD player is in fact
"conscious" or, more likely, Tononi's proposed definition of consciousness is
lacking.

Again, as I said, Aaronson makes two other points, but I think this one
alone is the most conclusive.
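To make the Reed-Solomon point concrete, here is a minimal sketch. It is not Tononi's actual \Phi computation and it's a toy code, not a production codec: the field GF(101), the 4-symbol message, and the evaluation points 1..8 are all illustrative assumptions. The point it shows is that each codeword symbol is a polynomial evaluation mixing _every_ message symbol, so perturbing any single input changes every output -- exactly the kind of global input-output correlation that makes \Phi large by design.

```python
# Toy Reed-Solomon-style encoder over GF(101): the k message symbols are
# treated as polynomial coefficients and the polynomial is evaluated at
# n distinct nonzero points. (Real RS codes typically work over GF(2^8).)
P = 101  # a small prime, so arithmetic mod P is a field

def encode(msg, n):
    """Evaluate the message polynomial at the points 1..n, mod P."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

msg = [17, 42, 5, 99]          # k = 4 message symbols (arbitrary)
codeword = encode(msg, 8)      # n = 8 codeword symbols

# Perturb a single input symbol: every output symbol changes, because each
# evaluation point mixes all coefficients. In IIT terms, the code is highly
# "integrated" by construction.
tweaked = list(msg)
tweaked[2] = (tweaked[2] + 1) % P
changed = sum(a != b for a, b in zip(codeword, encode(tweaked, 8)))
print(changed)  # 8 -- all outputs differ
```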

~~~
logicallee
It's not _prima facie_ ridiculous that the DVD player might be conscious.
Suppose it _were_ prima facie ridiculous: then it would have been prima
facie ridiculous in 1969 with a tape player, in 1979 with a VCR, in 1989
with a CD-ROM drive, in 1999 with a DVD-ROM drive, in 2009 with a Blu-ray
drive, in 2019 with x, in 2029 with y, and there is no reason it shouldn't
still be prima facie ridiculous even if at some point along the way the
thing happens to become conscious as a side effect, without this necessarily
being visible in its outputs. So you can't just call it prima facie
ridiculous and be done with it; you need some other argument.

------
andyjohnson0
I wanted to like this, but IIT just seems like a lot of hand-waving.

 _"to hypothesize that a physical system is “conscious” if and only if it
has a large value of Φ"_

By this measure, would a long mathematical proof or an orchestral symphony be
conscious? If so, how does this actually help us understand how subjectivity
relates (or doesn't) to all this?

[1]
[http://en.wikipedia.org/wiki/List_of_long_proofs](http://en.wikipedia.org/wiki/List_of_long_proofs)

