
A Neuroscientist’s Theory of How Networks Become Conscious (2013) - tegeek
http://www.wired.com/2013/11/christof-koch-panpsychism-consciousness/
======
xnull
Integrated Information Theory, I think, is a fad, and a bad one at that.

One could think of integration of information as a necessary but not
sufficient condition for consciousness, a la Scott Aaronson [1], who gives
more rigorous mathematical examples of integrated systems - in fact ones close
to maximal possible integrated information - that are very unlikely to have
any measure of consciousness (like expander graphs and error-correcting
codes). What about ciphers, where a small change to a single bit of the key,
ciphertext, or plaintext radically alters the evolution of the internals and
output of the system?
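
That avalanche behavior is easy to demonstrate. A rough sketch, using SHA-256 as a stand-in for a cipher (the specific primitive doesn't matter here):

```python
import hashlib

def bits(b: bytes) -> str:
    """Render a byte string as a string of '0'/'1' characters."""
    return "".join(f"{byte:08b}" for byte in b)

msg = b"integrated information"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]  # flip a single bit of the input

h1 = bits(hashlib.sha256(msg).digest())
h2 = bits(hashlib.sha256(flipped).digest())

# Count how many of the 256 output bits differ; on average about half do.
diff = sum(a != b for a, b in zip(h1, h2))
print(diff)
```

One flipped input bit scrambles roughly half the output bits, yet nobody suspects SHA-256 of having experiences.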

But even that might be too strong a statement. Aren't minimally interacting
systems such as Conway's Game of Life Turing Complete? There are tiny, tiny
Universal Turing Machines, and UTMs are essentially free to encode/decode
their tape inputs in any way they see fit.
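
For reference, the GOL update rule being invoked is tiny; a minimal sketch over a sparse set of live cells:

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life generation over a sparse set of live (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Alive next tick: exactly 3 neighbours, or 2 neighbours and already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # the blinker is a period-2 oscillator
```

A dozen lines of strictly local rules, yet the system is Turing-complete; that's the tension with any "integration implies consciousness" claim.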

Ultimately while IIT has a 'mathematical description' it mostly only serves to
obfuscate a naive and trivial notion that 'the system has to be complicated
and has to integrate and process local information in some holistic global
way'.

No shit.

[1]
[http://www.scottaaronson.com/blog/?p=1799](http://www.scottaaronson.com/blog/?p=1799)

~~~
SapphireSun
Not quite. I attended a talk by Dr. Koch at MIT. I didn't get all of it, but
it seemed to me that his definition tries to figure out what's included or
excluded in conscious systems and how conscious they are. It may or may not be
right, but at least it's working towards something with predictive value.

The examples you gave are mostly abstract and not physical. If consciousness
is a physical phenomenon, these algorithms must be manifest before they can
qualify. As for the game of life, the pieces on a physical board aren't
actually interacting with each other via feedback loops. In a computer, the
circuits are mostly feed forward and the algorithm is simulated rather than
physically realized.

~~~
xnull
The examples though _do_ have physical manifestations - everything that exists
in the universe has a physical manifestation (by tautology). How is a
feed-forward circuit carrying information (to be integrated) not a physical
realization? It's not clear why neurons carrying potentials and mixing them
count but not an FPGA (no longer an ALU simulation) doing cryptographic
operations for a bitcoin mining rig. Furthermore, since an FPGA more directly
implements the information-integration system than an ALU fetch-execute cycle,
is it 'more conscious'? To me that sounds absurd.

Presume the Game of Life _did_ have a direct physical implementation, let's
say neurons fire according to GOL rules. Would you then be forced to call it
conscious or not? Wouldn't it depend on the cell configuration? An all-blank
cell configuration certainly isn't conscious, but the system would have the
same low IIT score as a manifestation running some UTM-equivalent setup.

Finally, what about physical systems that do highly integrate information?
N-body problems exhibit highly integrated behavior manifest in the so-called
butterfly effect. Why do the air molecules in a balloon not count as
conscious? This physical example escapes the "America" criticism - every
molecule applies a force on every other and due to n-body mechanics the system
is highly integrated.
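
That kind of sensitivity doesn't even need n bodies; a toy illustration with the one-dimensional logistic map (not the balloon example, just the same chaotic flavor):

```python
def logistic(x: float, steps: int, r: float = 4.0) -> float:
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.300000000, 50)
b = logistic(0.300000001, 50)  # perturb the ninth decimal place

# After a few dozen iterations the two trajectories have decorrelated.
print(abs(a - b))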

My worry, which I think is mostly corroborated, is that the apparent clarity
of the IIT definition will serve to obscure the fact that the underlying
definition has no real clarity or derivation.

Ultimately the IIT formula was imagined to capture some heuristic notions -
but it was not derived. For this reason, added to those above, I am very
skeptical.

Or maybe it's my CS background that makes me biased toward thinking that _how_
information is integrated matters far more than some measure of how much
mixing there is (e.g. by one of the many versions [5 now?] of the IIT formula).
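
To make the "measure of mixing" point concrete: this is not any official version of phi, but even a crude integration proxy like mutual information across a bipartition shows the issue - it scores how much the two halves co-vary, not _how_ the information is used:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from observed (x, y) state pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Two halves that always copy each other: maximally 'mixed' (1 bit)...
print(mutual_information([(0, 0), (1, 1)] * 50))                   # 1.0
# ...versus two statistically independent halves: zero integration.
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))   # 0.0
```

A wire that blindly copies a bit scores the maximum here, which is exactly the kind of result that makes "how much mixing" feel like the wrong question.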

~~~
SapphireSun
I'm not exactly sure how an FPGA is wired, so I won't comment on that.
However, I feel, but cannot prove that the GOL neurons might have some kind of
internal experience.

Your air example is a good one; however, the kind of interactions the
molecules have depends on temperature. Also, the information air molecules
transmit has little explicit encoding beyond a few attributes, so I imagine
that it wouldn't feel like much to be air. Feedback loops are probably
frequently formed and broken, so I'm not sure what to make of that.

I think the IIT formula is interesting. I'm not sure if it's right (it
probably isn't and will need modifications at least. Dr. Koch talked about
this version of it being invariant across time, which sounded off to me). I'm
planning on learning more about it though, because it sounds like it might
have at least part of the answer and might be on the path to finding something
more experimentally amenable, which is more than I can say of most theories I hear
about.

I'm also attracted to the physicality requirements of IIT because it's clear
to me that abstract computation alone is not enough to reify physical
phenomena. For instance, mathematics often returns imaginary solutions to
physical problems. Some math represents reality, but not all of it. It's
important to stick close to physical processes in physics.

I haven't yet heard that there were five versions of the formula, but that
doesn't surprise me. It's actively under development and it hasn't hit the
point where we can conduct experiments to narrow it down. As far as I'm
concerned, it's an interesting idea at this point, and I hope I can take away
some useful tools from it.

------
pedrosorio
"WIRED: Getting back to the theory, is your version of panpsychism truly
scientific rather than metaphysical? How can it be tested?

Koch: In principle, in all sorts of ways. One implication is that you can
build two systems, each with the same input and output — but one, because of
its internal structure, has integrated information. One system would be
conscious, and the other not. It’s not the input-output behavior that makes a
system conscious, but rather the internal wiring."

Isn't this the definition of a non-testable assertion? The observable input-
output behavior is the same, yet he claims the property of consciousness is
different. So where is the test?

~~~
efnx
Maybe one has a higher (non-zero) phi rating, which apparently means it is
conscious. I agree with you, I don't see the test. I'd also like to agree with
Koch; intuitively what he says makes sense. It's possibly a lack of
understanding on the writer's part that this key point is lost.

------
Animats
_According to Koch, consciousness arises within any sufficiently complex,
information-processing system. All animals, from humans on down to earthworms,
are conscious; even the internet could be. That’s just the way the universe
works._

Clearly one can build a hugely complex system that performs some special-
purpose function and isn't anywhere near "conscious". Consider a big
supercomputer doing finite-element analysis. Or a big network of packet
routers.

Koch would probably argue that those aren't "sufficiently complex"? So what's
the definition of "sufficiently complex"? Something that is "conscious"? This
is not helpful.

Perhaps we should focus on "common sense", rather than "consciousness". Common
sense can usefully be defined as the ability to predict (at least the near-
term) consequences of actions. Given that, planning and evaluation of
alternatives is possible. Most animals above the insect level have some
capability in that area. They're not purely reactive. We really need to get
this figured out so we can build robots that can handle new situations without
screwing up too badly.
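
That definition of common sense is concrete enough to sketch: a one-step lookahead planner (toy state space and scoring function, purely illustrative):

```python
def plan(state, actions, predict, score):
    """Pick the action whose predicted next state scores best:
    a one-step-lookahead 'common sense' planner."""
    return max(actions, key=lambda a: score(predict(state, a)))

# Toy world: a position on a line, with a goal at 10.
actions = (-1, 0, +1)
predict = lambda s, a: s + a      # model of consequences
score = lambda s: -abs(10 - s)    # closer to the goal is better
print(plan(3, actions, predict, score))  # 1: step toward the goal
```

The hard part in robotics is of course learning `predict` and `score` for messy real environments, not the lookahead loop itself.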

Neuroscience has work to do at the bottom. Check out
[http://www.openworm.org/](http://www.openworm.org/), where some people are
trying to simulate the simplest nematode known at the neuron level. Until that
works, neuroscience doesn't really have neurons figured out.

Classic quote: "Philosophy is the kicking up of a lot of dust and then
complaining about what you can't see".

~~~
jcfrei
> Consider a big supercomputer doing finite-element analysis. Or a big network
> of packet routers.

There's another perspective here. We always argue from a very anthropocentric
point of view and measure experience based on actions and reactions that a
human would perform, e.g. asking questions, seeing something and saying
something based on what we see, etc. The supercomputer you described can
hardly be considered conscious with regard to such actions. However, it might
be very conscious with regard to a different set of actions, e.g. turning off
some switches which allow for intercommunication between the different CPUs
(the computer might react by rerouting some packets), it might react to
raising the room temperature (by spinning up the fans), it might react to
removing some RAM (by allocating more data on the hard disk), etc.

~~~
Animats
That line of philosophy goes back to the steam engine governor, which some
people at the time considered intelligent. That concept didn't yield much
value then, and it doesn't now. (Unlike Maxwell's 1868 paper "On Governors",
which established the mathematical basis for stable feedback control, and is
still the basis of basic control theory.)

------
millstone
Many drugs induce unconsciousness, but these don't work by decreasing the
complexity or organization of our brains. Doesn't this neatly refute the idea
that consciousness is an "immanent property of highly organized pieces of
matter, such as brains?"

~~~
kazinator
Not really, because complexity can have an on-off switch: a global one, or at
least a partial one.

If you implement a software system that is conscious, but then suspend it to a
storage device, and resume it one week later (so that it is surprised: where
did the time go?) its complexity hasn't gone away. It just wasn't conscious
for one week because it wasn't running.

Another thing to consider is this: there may be more than one consciousness in
your brain! The one which is _you_ , the one typing on Hackernews, is just the
one which has access to the "console" so to speak: the fingers, the eyes, ...
There could be other consciousnesses hidden in your brain that are suppressed.
Like "background daemons". Maybe that part of the brain that regulates your
body while you sleep (or don't sleep) is also conscious!

The consciousness-process which is "you" is deactivated during sleep, but the
other background "daemons" remain conscious. (Clearly, sleep is not a
complete, global shutdown of brain activity, in other words.)

(Also, I'm here reminded of dolphins which put half their brain to sleep at a
time.)

~~~
millstone
I like your computer analogy because it's easy to reason about. Surely a
powered-down computer is not conscious. So if consciousness has an off switch,
then it's not immanent (inherent) to organization.

Koch compares consciousness to the electric charge of an electron. But the
electric charge has no off switch!

~~~
sebastianconcpt
"But the electric charge has no off switch!"

Exactly. And "consciousness is computation" is a hypothesis that does _not_
really work.

~~~
kazinator
> * And "consciousness is computation" is an hypothesis that does not really
> work.*

Why not?

All of reality might be a computation.

------
Xcelerate
> Why we should live in such a universe is a good question, but I don’t see
> how that can be answered now.

His last point here is salient. Much like the various interpretations of
quantum mechanics, any theories of consciousness are untestable. But my
problem with "consciousness" is that it's not even a well-defined concept.
Suppose I replace all occurrences of "consciousness" in that article with the
word "qualgma". It makes just as much sense.

Gravity, electrons, and energy are all concepts that are _defined_ to exist as
the manifestation of certain physical phenomena that can be modeled and
predicted with mathematical equations. These words are just English
simplifications of the equations.

But consciousness has no such analogue. It's just a term that people throw
around when they talk about the brain. I believe that over time, it has
started to acquire a more well-defined meaning -- something along the lines of
"the emergent properties of the brain's activity resulting from lower-level
processes". This is similar to how a human body is just the emergent result of
many atomic interactions. This version of consciousness can be modeled,
theorized about, and tested experimentally. There's nothing magical about it. As
the brain's neural circuitry becomes better understood, you could say
consciousness is a simplified model that still has good predictive capability.

However, I would also like to focus on a much more interesting topic in the
article:

> My consciousness is an undeniable fact. One can only infer facts about the
> universe, such as physics, indirectly, but the one thing I’m utterly certain
> of is that I’m conscious.

I believe Koch is conflating two distinct concepts. If we take the definition
of consciousness as the predictive modeling of the brain, then this is
something _totally_ different than what he's talking about here. Let me alter
his quote:

> That I _experience my existence_ is an undeniable fact. One can only infer
> facts about the universe, such as physics, indirectly, but the one thing I’m
> utterly certain of is that I _am experiencing existence_.

It's really astonishing to me that he words it this way, because independently
I have thought almost exactly the same thing for a long time.

I've often struggled to put this notion into words, but let me try it in a new
way with an analogy. Suppose you see an apple floating in front of you. You
tell other people "Look at this apple!" But they just go "What apple?" So, you
try to get crafty. You take a picture of the apple with a digital camera. But
when you show people the photo on the computer, they still see no apple. So
you zoom in on the pixels and start copying the RGB values for each one by
hand onto paper. "Look, this value is 237!" you say. And you sit down with a
friend, and start calling each and every pixel's values out as he puts them in
manually into his image editing program. When he's done, you say "Ha! Look,
there's my apple, right on your screen! And you put it in there yourself!" But
he stares at you quizzically, and says, "I still see no apple; just an empty
table." And no matter what kind of tricky experimental method you try to come
up with to get everyone else to understand the concept of this apple that's
always floating in front of you, every attempt to catch it produces a
representation of the apple for you and nothing for everyone else.

It's frustrating because the people you tell about the apple say "Well, you
have provided no testable predictions, and no data that can be independently
verified by everyone else in society. Clearly your apple doesn't exist." But
it does! You know it does! In fact, out of everything that constitutes
reality, the floating apple is the one thing you're _most_ sure of.
Frustrated, you walk around thinking you're crazy until one day someone says,
"Well, I don't see an apple floating in front of you, but do you see the
orange floating in front of me?" Which, of course, you don't.

Now replace "apple" with "the fact that I am experiencing my own existence".
It's an element of reality that is only apparent to you on a personal level,
and since it isn't testable by other people, it is _not_ a part of their
reality, and thus does not exist as far as they are concerned. And thus it's
not science. Yet even though it's not science, it is the one thing in life I
am most certain of. My five senses could all be faked with advanced enough
neural circuitry. I could be in an entirely simulated environment, with
entirely simulated physics. So when I order things in probability of how
likely they are to be an illusion, _experiencing existing_ falls at the lowest
probability. In fact, you could strip me of all my senses and put my brain in
a vat, and as long as it's running, I'm still experiencing existence.

If the idea bothers you that things might exist that are incapable of being
verified consensually by society, look at this image of all the particle
interactions (so far discovered):
[http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Ele...](http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Elementary_particle_interactions.svg/2000px-
Elementary_particle_interactions.svg.png)

If you'll notice, gluons "hang on" to the rest of the particles by merely one
interaction. If this interaction were not present then, as far as positivists
are concerned, gluons don't exist. But that seems entirely unreasonable to
me. I could imagine plenty of particles that could "exist" that simply don't
have an interaction with the ones in the standard model that constitute our
reality.

Anyway, this is probably the most bizarre post I've ever written on HN, but
hopefully what I'm trying to convey "clicks" for some people.

~~~
TheOtherHobbes
>I believe Koch is conflating two distinct concepts.

I believe Koch is conflating _many_ distinct concepts - as most of the
consciousness research field seems to.

How can you say anything useful until you define your terms?

It seems to me humans have many different layers of experience:

Visceral pain/pleasure
Instinctive drives
Emotional/limbic experiences
Locational (physical mapping)
Memory-based/learned
Abstract/intellectual/symbolic

Consciousness is another kind of experience which adds a symbol that works a
bit like JavaScript's 'this', and seems to create a level of experiential
indirection - so instead of sensing something directly, you experience how it
affects the 'this' symbol. It seems to do this by abstracting the experience
and simultaneously comparing it with an associative database of previous
experiences, using all of the circuits in the list. (And probably others too -
I won't pretend this is a complete model.)

You need some basic ability to abstract to have a 'this.' The better you are
at it, the more bandwidth passes through the
experience/memory/abstraction/future prediction/new memories system.

Simpler animals have basic input circuitry, and they learn instinctively
without abstraction.

But... I'd guess surprisingly many animals of all kinds have a 'this' network.
The human advantages are a 'this' network with more bandwidth, a much improved
ability to learn from experiential invariants, an ability to make useful
predictions, and an ability to store and communicate 'this' experiences using
external technologies.

If this sketch is accurate, consciousness doesn't have to be mysterious. But
it also doesn't depend on a simple connection count.

The Internet won't be conscious because it has no 'this' network. It has no
pain/pleasure circuits, no emotional circuits, and no instinctive drives. But
most of all it has no ability to compare memories with current experiences and
to abstract learning from that. There are systems - like Google search - which
have incredibly basic precursors to abstraction/learning/prediction. But they
still mostly run on a batch basis with input->processing->output, and no
explicit self-concept state memory.

My guess is that if you built a system from these elements and included the
current state of the art - or better - machine learning, machine vision, and
natural language elements, with useful analogs of instinctive drives and the
other animal fundamentals, you'd potentially have something that at least
acted and sounded conscious, and would likely do some unexpected things.
(Which is one of the practical ways we think we recognise consciousness.
Conscious entities are _hard to model_. In fact we're probably easier to model
than we think we are, but face to face, consciousness in other people usually
implies surprises.)

Whether or not it would be truly self-aware or just pretending is a question
that's impossible to answer without resorting to metaphysics.

~~~
Vraxx
I really like the idea you fleshed out in this post; it seems to relate to the
more common concepts of "consciousness" that are usually referenced. I do
think that your description and Koch's don't necessarily have to be disjoint:
it is possible that the "simple connection count" is highly correlated with
this pattern of self that you have identified. The specific subsystems that
you listed may just be very effective (and thus common) ways that this theme
of consciousness usually emerges, though not necessarily the only ways.
Disclaimer: I tend to think of this subject as more speculation
than science anyways, at least currently, thus the opinions I have voiced are
just that.

------
jcfrei
For me questions on the nature of consciousness have always been deeply
connected to the question of identity. Specifically the following thought
experiment: What if there was a machine that would read every atom of your
body and recreate your body at a different location at a different time whilst
destroying the original. Would the arising consciousness still be your own
original consciousness or would it be a copy? What if we keep both, the
original and the copy? Are there now two separate instances of your
consciousness running?

I guess the only answer in line with Koch is yes. Consciousness emerges out of
a complex system. And I might add that consciousness is not a continuously
working "state of mind". It fades in and out during our daily lives. The
consciousness we had yesterday is lost today and replaced by a new one, which
runs on a different chemical setting in your brain (albeit one sharing most of
the memories of the previous one).

I think that while I can "experience my own existence", this "existence"
doesn't always refer to the same consciousness - one moment to the next the
underlying system might have changed marginally or even substantially (in the
case of being based on an entirely new set of atoms, which just happens to be
in the same configuration as before).

------
maebert
TL;DR:

Christof Koch says: 1) Make a really complex system (we can even measure how
complex systems are!) 2) ??? 3) Consciousness!

By the way, that's not panpsychism, that's emergentism, and for much better
accounts of it read C. D. Broad or Jaegwon Kim.

------
nl
The biggest neural network yet built was a 100 billion neuron simulation of
the human brain[1]. That didn't exhibit consciousness, but is considerably
smaller than the 1 quadrillion (1 million billion) synapses a human brain has.

I'm unsure what the relationship between a synapse and a neuron is. There are
claims[2] that a human brain has "only" 100 billion neurons itself.

Of course, it's true that consciousness may emerge from the complexity of the
connections rather than the raw count.

I'm unconvinced. It seems possible that whales (500 million neurons) have more
complex brain networks than humans, and yet by some definitions they aren't
conscious.

Conversely, some argue that birds are conscious, and yet they have
considerably smaller brains than some neural networks.

[1]
[http://www.izhikevich.org/human_brain_simulation/Blue_Brain....](http://www.izhikevich.org/human_brain_simulation/Blue_Brain.htm)

[2] [http://www.quora.com/How-many-neurons-are-needed-to-
create-a...](http://www.quora.com/How-many-neurons-are-needed-to-create-a-
conscious-entity)

~~~
sebastianconcpt
"Of course, it's true that consciousness may emerge from the complexity of the
connections rather than the raw count."

Your "of course" takes as proved something that is not proved (the
consciousness-is-computation hypothesis).

To your credit you said _may_, so of course you're not convinced. Which is
good, because people shouldn't believe in things that aren't testable. :)

And you talk about birds, but there are also unicellular organisms reacting to
anesthetics, so that doubt should be even broader.

------
Taek
When we think of consciousness, we normally associate it with the brain. But
according to this complexity theory, shouldn't there also be consciousness in
most of our organs? The liver is a massive network of cells that are in at
least a moderate amount of communication, and yet I am not aware of its
consciousness.

But perhaps it is conscious, just at a level that's inaccessible to my brain,
which is a nearly distinct structure with very minimal relative communication
to the liver.

I also found it interesting that he denied the idea that America might be one
larger consciousness. I've had multiple experiences on LSD which have
suggested to me that as an aggregate humans are one larger organism with a
single collective consciousness, that operates both at a slower speed and a
higher overall awareness. (Most accessible in the 150-200ug range)

When you think about how individual neurons work, spraying neurotransmitters
at each other to trigger responses from the next neuron, essentially passing
information in a complex yet highly organized web, are humans that much
different? The set of humans who design cities is different from the set of
humans who manage government policies are different from the set of humans
that try to go to space. Each human passes information to others in highly
complex ways that form super organized macrostructures. It doesn't seem to be
that much of a stretch then to say that consciousness also arises from the
interactions of the macrostructures in the same way consciousness arises from
the interactions of all the parts of our brain.

And the internet would seem to be a massive facilitator of this. Because of
the internet, the amount of communication I do per day is enormous, and my
interactions happen at a global (though mostly English speaking) scale. I
doubt communications of this scope and magnitude were available to humans even
20 years ago.

~~~
john2x
The internet, as a neural network, "dreams" of cat videos when they go viral.
Huh.

In fact, it "decides" for itself which ones should go viral.

Interesting thought.

~~~
ganzuul
I blame my deep integration with the internet for my belief that I'm actually
a cat.

------
brittonrt
This brings me to an interesting thought experiment I struggle with:

Most likely most people here would agree that if you make an exact copy of a
person's brain, whilst leaving the original intact, it would be a new person,
identical but divergent from the original. A new thread of consciousness by
such definition.

But then, what if you destroy the original at the moment of copying? It would
appear to be the same.

But then, what if you replace each neuron one at a time over a period,
maintaining the original network? This question is troubling because it brings
into obvious doubt the integrity of our notion of consciousness. Since we in
fact shed most of the atomic matter that constitutes us in a given year, we
are clearly immaterial: patterns.

So put plainly: if you copy your brain all at once, killing the original, are
you a new person? But if so: does transitioning slowly, piece by piece over
time, which is what we observe in nature, maintain the conscious strain? How
are these different?

It's obvious to me there is something fundamental here we are missing. I
welcome any insights you all might have had in similar thought experiments.

~~~
dghf
You seem to be assuming the actual existence of a "thread of consciousness" or
a "conscious strain", when it could be an illusion.

When you wake up after a dreamless sleep, are you the same person, the same
conscious entity, as went to bed the night before? Or is that entity now
"dead", and "you" are a new entity that has just inherited its memories (most
of which it in turn inherited from its predecessors)? How could you ever tell?
In fact, are you the same conscious entity from moment to moment, or at least
from thought to thought?

More "making a copy" thought experiments, none terribly original:

\- If you're disintegrated and _immediately_ reassembled, are you still you?

\- Does using different atoms make a difference?

\- Does leaving a gap between disintegration and reassembly make a difference?
If so, how long a gap? What if you're resurrected at the Omega Point by
sufficiently advanced aliens/post-humans?

\- If you're split in two (sagitally, coronally, or however), and each half is
immediately reconstructed into a whole human, each identical to you before the
split, which is you? Which pair of eyes would you find yourself looking out
of? Both? Neither?

\- If the two "yous" exchange atoms, such that one ends up with the entire
complement of atoms that made up you before the split and the other ends up
with none, does that affect the claims of either to be the "real" you?

~~~
brittonrt
Excellent points, and actually in line with my line of questioning as well: I
think my assumption of a thread of consciousness was a semantic
miscommunication, as bringing that idea into question was indeed where my
questions were leading.

------
jostmey
I failed to find anything concrete in Koch's argument. It's funny because at
some level he is right - there is this thing called consciousness - but even
when the most rational people try to describe it they inevitably sound
unscientific. I imagine that before Darwin everyone must have sounded crazy
when they talked about biology. I guess the field of neuroscience is still
waiting for a revelation.

------
RevRal
Also worth a read: [http://kk.org/thetechnium/2008/10/evidence-of-
a-g/](http://kk.org/thetechnium/2008/10/evidence-of-a-g/)

Too much emphasis is usually placed on the "consciousness" part of general
intelligence.

~~~
sadkingbilly
_" But a malformed packet could also be an emergent signal. A self-created
packet."_

If a program sent out a malformed packet, this would indicate an error in the
sender's program. Let's assume that a receiving program reads the malformed
packet and is able to process it. It results in something favorable (no clue
how this would be determined, but let's go with it) and so the receiving
program repeats whatever it originally did to get this malformed packet sent
to it, so that it gets another malformed packet. Multiply this by thousands of
other programs doing similar things to "hack" other programs into sending
malformed packets to perform useful functions, and maybe you have something
happening here.

The above process basically describes a genetic algorithm, which I don't think
the Internet is. It also assumes that most programs are flexible enough to
produce bugs like this - creating malformed packets. Does this mean we should
stop unit testing code to give programs more freedom? Plus, the
above also assumes there is some way of ranking an outcome from a malformed
packet as "favorable". Maybe that would be humans?
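
The mutate-and-select loop being described is roughly this (toy fitness function, hypothetical setup; nothing to do with actual packets):

```python
import random

def evolve(pop_size=30, genome_len=16, generations=60, seed=1):
    """Minimal genetic algorithm; toy fitness = number of 1-bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)   # selection: keep the fitter half
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:          # reproduction with one point mutation
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # the 'malformed packet'
            children.append(child)
        pop = survivors + children
    return max(sum(g) for g in pop)

print(evolve())  # best fitness climbs toward the maximum of 16
```

The missing ingredient on the real Internet is exactly the one noted above: there is no shared fitness function ranking a mutation as "favorable".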

------
Synaesthesia
Having read and listened to Alan Watts I have become convinced that the self
or ego or "I" as we commonly call it is a delusion, and a dangerous one at
that. It leads to all kinds of confusions and contradictions.

It is this delusion that is the source of our pain and jealousy. We should
realize that we are part of a greater whole, that our own existence is
inseparable from the rest of the universe. We are all one. Cheesy.

Rather than say "I think, therefore I exist", I prefer the simpler "thought
exists".

------
BasDirks
This requires a leap of faith, usually not an indicator of good science, but
it's not unimaginable. One could argue that his terms are ambiguous or
otherwise ill-defined, but what if this kind of research could be the starting
point for such definitions?

------
ccozan
He has also a book about it: [http://www.amazon.de/The-Quest-Consciousness-
Neurobiological...](http://www.amazon.de/The-Quest-Consciousness-
Neurobiological-Approach/dp/1936221047)

I have read it halfway; there is a lot of information there.

------
officialjunk
The initial premise that electrons just intrinsically have charge isn't proven.
In fact there is a proof of why electrons have quantized charge (by Dirac), but
it requires that magnetic monopoles exist, which haven't been observed.
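For reference, the standard textbook form of Dirac's argument: consistency of the quantum phase of a charged particle moving around a monopole forces the product of electric and magnetic charge to be quantized,

```latex
% Dirac quantization condition (Gaussian units):
\[
  e\,g = \frac{n \hbar c}{2}, \qquad n \in \mathbb{Z},
\]
% so if even one monopole of charge g exists anywhere,
% every electric charge must be an integer multiple of
\[
  e_{\min} = \frac{\hbar c}{2 g}.
\]
```

which is why a single observed monopole would explain charge quantization, and why its absence leaves the question open.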

~~~
wuliwong
I also thought it was really interesting that they chose that to start with. I
personally find the electron to be one of the more confounding and least
talked about pieces of the current state of physics understanding.

That being said, it is hard for me to imagine what a proof that charge is
intrinsic to an electron would even look like. Proving that something is the
result of something else seems much more straightforward than proving that
something is not the result of anything.

Also, only talking about an electron's charge being intrinsic is ignoring a
lot of the mystery of it. The fact that it is a point particle is a pesky
issue as well. An electron is just an electron with no underlying "pieces" or
mechanisms. Obviously that could be wrong, but no one has been able to prove
it yet.

I think using the electron as an example for our understanding of how the
universe works is pretty ignorant. It's one of the weirdest parts of physics
I've encountered during my career. It is a topic that few people actively work
on, but it feels like there is clearly something amiss with our current
understanding.

------
sebastianconcpt
That theory is not even close to pointing in the right direction for research.

Consciousness is _not_ computation.

And it does _not_ just emerge from complexity. If it did, then the amount of
quantum information and complexity in any tropical thunderstorm would make it
a supergenius entity. And it is not (or prove me wrong).

Consciousness is also _not_ mind. Mind is more like the sum of intelligence.
Consciousness is something else, more fundamental. Only quantum biology could
be interesting for researching this.

~~~
stefanu
I don't think we can equate the complexity of a tropical thunderstorm with the
complexity described in the article. The discussed complexity arises from
local connections (be it persistent bonds or repeated interactions) between
entities. While interactions in the thunderstorm might seem "complex" to our
mind, they are not "complex" from the point of view of "complex systems"; they
are rather "complicated".

Moreover, the thunderstorm network (if we consider the interactions of
particles in the thunderstorm) is very transient. No feedback loops emerge in
the system (to my understanding of thunderstorms). The system also does not
adapt to the surrounding environment through reconfiguration of its internal
connections. A thunderstorm lacks many properties of the complex systems
discussed in the article.

~~~
sebastianconcpt
A storm doesn't have feedback loops in brain-like interconnections. But at a
quantum level there is thermodynamic activity re-adapting and re-shaping the
system all the time. In any case, as Giulio Tononi would say, "it has zero
integration." And it doesn't have consciousness, only energetic physico-
chemical activity.

I agree with that view.

The important part is (a) that it does _not_ have computation and (b)
regardless of complexity, computation is not consciousness. Saying it and
expecting it to "magically emerge from it" does not make that hypothesis any
truer.

They are throwing darts in the dark and making it sound cool.

But, okay, I'll be nice and not troll the effort of making this subject cool,
and let's not put theory against theory, because it is not productive.

What about seeing something testable?

Take a Paramecium.

It does _not_ have any neurons, so it doesn't have synapses, but it still
learns where the food is and reacts to anesthetics as we do.

What about that?

That _radical_ theory published there does not predict the Paramecium, much
less anything about the one in Homo sapiens sapiens.

Bring me news on how quantum biology is behaving down there and we might
actually get somewhere.

------
3beard
The idea that you can magically "generate" consciousness by running an algorithm is
really silly superstition.

~~~
sebastianconcpt
Completely agree.

Also, as a theory it is essentially useless, as it does not predict anything.

They are throwing darts in the dark and making it sound cool.

------
dominotw
Are we discussing _consciousness_ every day?

[https://news.ycombinator.com/item?id=8515361](https://news.ycombinator.com/item?id=8515361)

