How does consciousness even make sense? (niklasbuehler.com)
30 points by algoholix 5 months ago | 37 comments



>I can’t really grasp that. Do my thoughts make sense? Where’s the flaw?

This is the "Argument from Incredulity by Implausible Consciousness Substrate"[0]. Other examples include Searle's Chinese Room, and the China Brain thought experiment (what is it with China?). To your credit, you've immediately spotted the problem - how is the actual human brain any different? It's also some sort of broadly mechanistic physical system, unless you're into divine spirits (but that just moves your problems).

The flaw is that there is no flaw; the only actual argument is your personal incredulity. And the reason your intuition is giving you bad results is that scale matters. The implausible substrates are far, far too small to realistically encompass the computation that a brain does, and this makes them poor "intuition pumps" (a useful phrase coined by Daniel Dennett). Your field of rocks looks more like a football field of sand, ankle deep. And of course, manipulating them one by one according to a tiny set of rules is absurdly distorting in time: to properly capture the sheer amount of parallel information processing, perhaps you should instead imagine them buzzing and vibrating and bouncing around and exchanging information and state with the grains around them in highly nonlinear ways. Does this start to sound a bit more plausible?

[0] A subtype of the "argument from bad analogy", which goes: 1) make absurd analogy, 2) point out that analogy is absurd, 3) wave hands


First of all, thanks for mentioning those other examples; I hadn't heard of them before and will definitely engage with them further.

You're right that I can't just dismiss the possibility of the stones or the paper developing a consciousness. However, this fallacy didn't go unnoticed by me; I just don't see a way to approach this question more scientifically than by using my intuition.

So you're proposing that there is actually no difference between human brains and computers with respect to the ability to be conscious, did I get that right?

I don't understand your argument about the stones "buzzing and vibrating and bouncing around [...] in highly nonlinear ways" though. That doesn't resemble the computations made by a discrete computer anymore, does it? Or are you hinting at precisely those differences between computers and brains (i.e. mathematically discrete vs. mathematically continuous)?

Also, why do you think parallel information processing matters? Does it really matter in a computer?

Could you elaborate on your argument that scale does indeed matter in terms of calculation speed (in terms of memory it surely matters)? I don't think it's any more plausible for a huge pool of fast-moving "magic sand" to be conscious (excuse me for once again committing a scientific fallacy here; I'm open to suggestions on methodology).


I fail to see how you're not just waving hands. You assume that consciousness is an emergent property of matter, you think it's plausible, but you can't explain it or prove it.


I assume physicalism as a premise, yes, although I allude to other possibilities (and hint that they have problems too). The post that I am replying to also seems to assume it, so it's not really in scope for my reply.

Feel free to expand on how you think it solves the dilemma outlined in the post though!


Anything that happened, can.


There is another, more nuanced argument, by Colin McGinn, that states that consciousness is a broadly mechanistic system, just not one that human minds are equipped to understand. That could of course be another type of personal incredulity, but given the general lack of consensus on the subject so far, it seems possible.


It's a really deep problem, which bothers a lot of philosophers and cognitive scientists. I don't pretend to know the answer, but here are some speculations.

We can speculate that consciousness is some kind of representational process, using non-verbal, non-symbolic representations. We can suppose that what is being represented is in part the state of my own body, and my current interactions with the external world. So this is not a static type of representation; it is a continuously changing representation, which is representing a real-time 'dance' between my body and the world. So, returning to your (excellent) analogy of the field of rocks, the field of rocks would not be conscious unless and until it becomes an emergent representation of itself and its boundary with the rest of the world, all happening automatically and in real time. The field of rocks has no sensation of the sun's warmth because it has no representation of warmth, no way of moving or changing its own body in response to that representation of warmth, and no attentional mechanism to prioritise how it responds to representations of warmth, hunger, thirst, etc.

Once you start to think of consciousness as a representational process you start to see how consciousness might become an emergent phenomenon inside a machine 'designed' for creating representations. A machine like a living animal.

Of course, that puts you in the position of having to explain what a non-symbolic, or non-verbal, type of representation is. But that's doable, I think.


Thank you for your thought-provoking answer!

You're arguing that consciousness is to be understood as a consciousness of the self (and its environment). That implies that there can be intelligence that is not conscious, right?

About the missing sensation of the field of rocks: how about adding some sensors to the system, so that depending on measurements like temperature, one could move a certain designated stone, which could then be taken as input by the rules moving all the other stones? Would that change alone yield consciousness? Also, how many different types of such sensations are needed to produce consciousness? Humans sense quite a few different types of such input data, but we can't sense everything there is to be sensed.

In your last two paragraphs, I believe you state that consciousness is just the (mathematical) reflexivity of "thoughts" (i.e. representational processes concerning objects). That's an interesting thought I've pondered as well, but doesn't that just move the problem to whether machines can produce such thoughts? Certainly they can produce representations of thoughts, but do they really think them?


Thought experiment.

An entity is conscious if:

- it has external inputs and a facility for creating a symbolized account of them,

- it has a facility that stores these accounts,

- it can order those accounts roughly by time,

- it can place or relate a symbol representing itself in these accounts.

The quality of consciousness varies widely: a program that has an object that references itself and can do the above things is, very technically, conscious, but likely nowhere near the depth of a human being.
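As a toy illustration only (every name here is made up, and satisfying these criteria in a few lines of code is exactly the "very technically conscious, nowhere near human depth" end of the spectrum), a minimal Python sketch of the four conditions might look like:

    import time

    class MinimalEntity:
        """Toy sketch of the four criteria above; not a claim about real consciousness."""

        def __init__(self, name):
            self.name = name       # a symbol representing the entity itself
            self.accounts = []     # stored symbolized accounts of inputs

        def sense(self, external_input):
            # 1. external input -> symbolized account, 4. related to a self-symbol
            account = {
                "time": time.time(),              # 3. roughly orderable by time
                "observer": self.name,
                "content": repr(external_input),  # 1. symbolized form of the input
            }
            self.accounts.append(account)         # 2. the account is stored

        def history(self):
            # 3. accounts ordered roughly by time
            return sorted(self.accounts, key=lambda a: a["time"])

    e = MinimalEntity("self")
    e.sense("warm sunlight")
    e.sense({"temperature_c": 31})
    print(e.history())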


Consciousness is a property we ascribe to our own awareness of our awareness. As we are part of the universe, if consciousness is a thing, it is then a property of the universe, not just the human mind.

The human mind is a plastic, adaptive system, which is shaped by senses that consume the universe and which attempt to model the universe. It is past and present interlocked - causality, bound up in tight knots of matter.

This is the universe, being aware of itself. It is perhaps not capable of being understood, just reveled in.


Defining consciousness will come down to doing so simply. Consciousness is emergent, given that lack of awareness equals lack of consciousness, especially when the lack of awareness extends to seeing and hearing. Consciousness emerges from awareness of objects: vision, hearing, the thinking mind.

Who here can visualize an apple in their mind, literally seeing it as if it existed and was being perceived by the eye (a sense organ)? Who can't? The answer will vary from person to person. I know what some have said, given I've asked hundreds of people this question, because I don't form internal imagery.

Even though some don't have that mind's eye view, it would seem most of us do build an internal model, similar to the apple, of the world around us, which is built from sensed information. Some of the people I've talked to can put "extra" things in that model. Dots on the wall. Boxes on tables.

If it can be updated, it must exist in mind and it is likely to exist in mind in a similar way for many, at least for us to all agree we seem to have similar experiences and see the world in similar ways.

Similar to an internal biological version of Unity, our eyes, ears, nose, taste and touch/feeling come together to create a sort of reality rendering "camera obscura" in the mind, allowing us to judge and interact with our perceived environment, through the copy we create of it.

Close your eyes, and the model fades (at least for most). Leave them closed long enough and you will lose what we are seeking to define here.


> Who here can visualize an apple in their mind, literally seeing it as if it existed and was being perceived by the eye (a sense organ)?

Anybody who dreams when sleeping. Is that not everybody? The ability to do it when awake (lucid dreaming) is a different thing.

> lack of awareness equals lack of consciousness

No it does not. I'd argue that even deep sleep or getting knocked out does not equal loss of consciousness. We can't tell, as consciousness is only perceived through its contents, so we can't distinguish between absence of contents and absence of container.

> Leave them closed long enough and you will lose what we are seeking to define here.

Consciousness is not the model. It's where the model is built. You only mention perceptions; what about thoughts?


> Who here can visualize an apple in their mind, literally seeing it as if it existed and was being perceived by the eye (a sense organ)?

Everyone. What you think you are seeing as a whole is your hallucination based on fragmentary sensory input. Same for everything else you think you see around you.

Doing it without corresponding sensory input requires practice, drugs, sleep, unusual brain development, or other techniques, because hallucinating things not there is suppressed for obvious reasons. When that suppression fails, we see things not there, such as Elvis on a piece of toast.


I read it. How does this article even make sense?


There seems to be a flurry of these articles posted to HN, which are vague, loquacious, superficial, obtuse, poorly punctuated, occasionally excessively poly-syllabic, repetitive, colloquial and plausible, yet ultimately nonsensical.

We need a new adjective for what it is to be like GPT-3 ...

disgenerative?

misformational?

cyberblathering?

babble streaming?

thesaurus gargling?


Then the real question is, why does GPT-3 think consciousness is mysterious?


Arguably this is saying that machine learning does not easily and directly yield consciousness as one might expect. If consciousness instead is focused on time and sequences then the machine learning needs to be used more to construct and compare narratives of sequences of observations and actions. That would enable machine learning to derive representations of causality and potentially reason that itself exists. Most machine learning focuses more on raw data and the environment rather than the nature of change over time. Just some thoughts.


To enable machine learning to "become consciousness" it would need to start by having an internal model of reality running that was being built like we build ours, not some camera pointed at an object like robots do today. Well, I suppose DJI drones use SLAM-like 3D environments, but they aren't detailed like what we see yet; maybe in a few more generations...


Consciousness is fundamental, not emergent.


Only your own consciousness is fundamental, i.e. directly experienced. All others are deduced to exist, from clues. But your own consciousness's origin is not fundamental; that is also deduced from clues (i.e., "apparently stuff that exists once didn't").

But any model of our own consciousness that fails to encompass others' as similar is sterile. We are all found to have brains; and damage to brains, or even to the stuff that feeds brains, alters or snuffs out consciousness. Brains turn out to be made of nerve cells, that are like other cells but specialized for processing information. Stimulating an individual nerve cell can trigger a thought, memory or sensation, repeatably. QED, consciousness is a phenomenon of nerve cells processing information.

Anything else is woo.

How you would build a thing that is conscious is not known. But we have existence proof that it's possible, so the rest is a matter of engineering. Philosophers and deists can do whatever the hell they want, but will have nothing meaningful to contribute.


Roger Penrose says it's emergent because quantum mechanics is emergent. You can also find stuff by David Bohm saying something similar regarding Bohmian mechanics or de Broglie-Bohm theory.

Here is Hameroff and Penrose's paper on Orch OR: https://www.sciencedirect.com/science/article/pii/S157106451....


I don’t get the point of the argument anymore?

If it emerges from the fundamental features that themselves emerge from “lower” features, isn’t anything “higher up” going to happen? It did for us, so it always would?

These guys are really circling for semantics, not theories of reality and consciousness. Has anyone considered English is just a terrible system for this sort of reasoning?

I’ll give virologists and other applied-math types props for data-driven and concrete outcomes.

These post-modernists are just deconstructing/reconstructing in circles for book sales, IMO.

If they can output something concrete humanity can use, give them all the gold in the world. Otherwise it’s just a book club to me at this point.


What specifically do you mean by fundamental? Not made out of other things?


Consciousness: we can approach it from different angles, viz. religion, philosophy, cognitive science, and evolutionary biology.

The most satisfactory explanations I got were from cognitive science (https://advances.sciencemag.org/content/6/11/eaaz0087), i.e. consciousness as a side effect of the brain's electrical and magnetic fields, and from evolutionary biology (Attention Schema Theory).

AST: I am going to write a small timeline for the development of our brain (correct me if I am wrong).

Claim: this happened due to too much information processing! As complex organisms started observing too much information, there was competition among neurons, which led to:

(a) Selective signal enhancement (e.g. Hydra) ---> (b) a centralized controller for coordination (the tectum): this handles overt/default attention, controls the eyes and head, and is found in both mammals and reptiles ---> (c) the wulst (in reptiles) and cortex (in mammals): this handles covert attention, meaning you don't have to attend to a stimulus to process it; you can think about a sound coming from behind you without looking at it!

This is the main difference: the tectum is still in both of us, and our response to most stimuli is controlled by it, but deep processing and thinking are done by the cortex, which models them on a SCHEMA (a workflow) [neural pathways/consciousness? nobody knows], and we tend to focus on this schema to do the deep processing [covert attention].

We make a sense of self using this; other animals do as well! We also relate ourselves to others using this; language was a direct consequence of it, as we extended our brain's communication system to others!

Now, your question of whether a set of stones can have consciousness? Yes, it can: over time, if this system of stones is observed, with some meaning associated with its states [arranged in a circle or a square], someone could argue that it is aware!

You can look at the universe as a set of rocks with periodic movements; is it aware!? Maybe :)


Consciousness is mysterious, but I find it even more mysterious that the universe had the capacity for consciousness to exist in the first place. How does something start with the capacity to have first-person awareness of itself? It seems almost magical.


Good luck defining consciousness..


> So that’d mean if we arranged a bunch of stones on a large field in a certain pattern and then used some fancy (but deterministic) rules to move them around, we’d create consciousness?! I can’t really believe that’s true.

Illustration: https://www.explainxkcd.com/wiki/index.php/505:_A_Bunch_of_R...
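For what it's worth, the "stones moved around by deterministic rules" setup in the comic is essentially a cellular automaton, and some such rules (e.g. Rule 110) are known to be Turing-complete, so in principle a field of stones like this could carry out any computation a digital computer can. A minimal Python sketch, purely illustrative; it says nothing about whether running it would amount to consciousness:

    # Rule 110: each stone's next state depends only on itself and its two neighbours.
    RULE = 110

    def step(cells):
        # One deterministic update of the whole row (cells are 0/1, fixed 0 boundary).
        padded = [0] + cells + [0]
        return [
            (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]

    row = [0] * 40 + [1]   # a single "stone" at the right edge
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)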


>I can’t really grasp that. Do my thoughts make sense? Where’s the flaw?

That is called an argument from disbelief. It is a common fallacious argument form.

The only mystery about consciousness is why anyone insists there is one.


what?

Obviously there is one; it's the force driving the typing of these words. It isn't the machinery typing them but the fundamental quality of the machinery that feels like it does.

Of course I can't prove my consciousness to you, but I operate on the assumption that p-zombies are nonsense, so, it should be entirely possible to prove your consciousness to yourself, albeit difficult due to lack of meaningful language labels for subjective experiences.


From a scientific point of view, there is exactly one objective observation that needs explanation, which is that from time to time humans like to claim they're something called "conscious". This is not nothing! It is quite a peculiar phenomenon. But it needn't imply any great philosophical dilemma.

(We do have meaningful language for subjective experiences - we call eggplant "purple" and sugar "sweet" and music "groovy" - these correspond not to any physical property, but how they influence our minds.)


I have no difficulty accepting my own consciousness, or even yours, to the degree the word has a definition at all.

I have subjective experience, including the subjective experience of observing my own behavior. So do you. The redness of a tomato has to look like something, and furthermore has had to evolve to catch the attention of animals like yourself that would be nourished by it. Meanwhile, animals like yourself have needed to evolve to find red things appealing. No mysteries there.

Anything that can perceive itself and can perceive itself perceiving will exhibit all the hallmarks of what we call consciousness. It's a recognizable behavior pattern with a name. Imagining there is more to it than that is just tying yourself in knots to no purpose.


> Anything that can perceive itself and can perceive itself perceiving will exhibit all the hallmarks of what we call consciousness. It's a recognizable behavior pattern with a name.

In my opinion this just delays the question. The question now becomes whether machines can perceive themselves (or really anything at all) or whether they can just mechanically and symbolically represent the perception of objects, including themselves. If the answer is yes, then that implies that any sufficiently large representation of computation has a consciousness and therefore that there is nothing special about it. If it is no, then where's the relevant difference to a human brain?


You have made no distinction between "perceive themselves" and "mechanically and symbolically represent the perception of objects". That is the same thing, less the woo. And, events, including their own actions, which also need to be perceived.

I have no reason to think that you are not mechanically and symbolically representing the perception of objects and events. A pattern of activation of neurons is a symbol.


I think the grandparent meant that the only mystery is that consciousness is supposedly a mystery, not that consciousness doesn't exist.


Correct.

You might as well talk about the mystery of where the execution comes from, when the program starts, and of where it has gone when the program finishes.


It doesn't make any sense because we don't know the what, how or the why of it yet.

Consciousness makes as much sense as a solar eclipse made to the primitive man who had no conception of the solar system. A program on your computer wouldn't make much sense unless you understood the inner workings - processes, kernel, ABI, CPU, etc.

Maybe one day we will find out as we learn more about the brain. Until then we just have to live with this magical mystery. The real question is how we deal with the ethical issues. We could learn a lot more about the brain if we could "break it", test it, experiment with it, etc. But obviously we can't do that with the human brain. Maybe start with simpler animal brains and then work our way up?

If we could triangulate the parts of the brain where consciousness resides, then perhaps we can disable that portion and create brains to experiment on? But then again, if we disable the part of the brain responsible for consciousness, how can we study consciousness?


> Maybe start with simpler animal brains and then work our way up?

Biologists have been doing this for decades. Eric Kandel won the Nobel Prize just by studying a sea slug biting seaweed for years. http://www.laskerfoundation.org/new-noteworthy/articles/eric...



