This is the "Argument from Incredulity by Implausible Consciousness Substrate". Other examples include Searle's Chinese Room, and the China Brain thought experiment (what is it with China?). To your credit, you've immediately spotted the problem - how is the actual human brain any different? It's also some sort of broadly mechanistic physical system, unless you're into divine spirits (but that just moves the problem).
The flaw is that there is no flaw; the only actual argument is your personal incredulity. And the reason your intuition is giving you bad results is that scale matters. The implausible substrates are far, far too small to realistically encompass the computation that a brain does, and this makes them poor "intuition pumps" (a useful phrase coined by Daniel Dennett). Your field of rocks looks more like a football field of sand, ankle deep. And of course, manipulating them one by one according to a tiny set of rules is absurdly distorting in time - to properly capture the sheer amount of parallel information processing, perhaps you should instead imagine them buzzing and vibrating and bouncing around and exchanging information and state with the grains around them in highly nonlinear ways. Does this start to sound a bit more plausible?
 A subtype of the "argument from bad analogy", which goes: 1) make absurd analogy, 2) point out that analogy is absurd, 3) wave hands
You're right that I can't just dismiss the possibility of the stones or the paper developing a consciousness.
However, this fallacy didn't go unnoticed by me; I just don't see a way to approach this question more scientifically than by using my intuition.
So you're proposing that there is actually no difference between human brains and computers in respect to the ability to be conscious, did I get that right?
I don't understand your argument about the stones "buzzing and vibrating and bouncing around [...] in highly nonlinear ways" though.
That doesn't resemble the computations made by a discrete computer anymore, does it?
Or are you hinting at precisely those differences between computers and brains (i.e. mathematically discrete vs. mathematically continuous)?
Also, why do you think parallel information processing matters? Does it really matter in a computer?
Could you elaborate on your argument that scale does indeed matter? (In terms of memory it surely does; I mean scale in terms of calculation speed.)
I don't think it's any more plausible for a huge pool of fast-moving "magic sand" to be conscious (excuse me for once again committing a scientific fallacy here, I'm open to suggestions on methodology).
Feel free to expand on how you think it solves the dilemma outlined in the post though!
We can speculate that consciousness is some kind of a representational process, using non-verbal, non-symbolic representations. We can suppose that what is being represented is in part the state of my own body, and my current interactions with the external world. So this is not a static type of representation; it is a continuously changing representation, which is representing a real-time 'dance' between my body and the world. So, returning to your (excellent) analogy of the field of rocks, the field of rocks would not be conscious unless and until it becomes an emergent representation of itself and its boundary with the rest of the world, all happening automatically and in real time. The field of rocks has no sensation of the sun's warmth because it has no representation of warmth, no way of moving or changing its own body as a response to that representation of warmth, and no attentional mechanism to prioritise how it responds to representations of warmth, hunger, thirst etc.
Once you start to think of consciousness as a representational process you start to see how consciousness might become an emergent phenomenon inside a machine 'designed' for creating representations. A machine like a living animal.
Of course, that puts you in the position of having to explain what a non-symbolic, or non-verbal, type of representation is. But that's doable, I think.
You're arguing that consciousness is to be understood as a consciousness of the self (and its environment).
That implies that there can be intelligence that is not conscious, right?
About the missing sensation of the field of rocks: how about adding some sensors to the system, so that depending on measurements like temperature, one could move a certain designated stone, which could then be taken as input by the rules moving all the other stones?
Would that change alone yield consciousness?
Also, how many different types of such sensations are needed to produce consciousness? Humans sense quite a few different types of such input data, but we can't sense everything there is to be sensed.
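The proposed mechanism (sensor measurement sets a designated stone, the rules for the other stones read it as input) can be made concrete. Here is a deliberately toy sketch, not an argument either way; the threshold, the XOR rule, and all the names are my own invention, standing in for "the rules moving all the other stones":

```python
def read_sensor(temperature_c, threshold=25.0):
    """Map a measurement onto the designated stone: 1 if warm, else 0.
    (Threshold is an arbitrary choice for illustration.)"""
    return 1 if temperature_c > threshold else 0

def step(stones, sensor_bit):
    """One update of the rock field. Stone 0 is the designated sensor stone;
    every other stone flips based on its left neighbour XOR the sensor bit -
    an arbitrary rule, but one that lets the sensed input propagate through
    the whole field over successive steps."""
    new = stones[:]
    new[0] = sensor_bit
    for i in range(1, len(stones)):
        new[i] = stones[i - 1] ^ sensor_bit
    return new

stones = [0] * 8
for temp in [20.0, 30.0, 30.0, 22.0]:  # a stream of measurements
    stones = step(stones, read_sensor(temp))
print(stones)
```

Whatever one thinks of the metaphysics, the sketch shows the uncontroversial part: the field's state now depends on the world, so it "senses" in at least the thin, behavioural sense.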
In your last two paragraphs, I believe you state that consciousness is just the (mathematical) reflexivity of "thoughts" (i.e. representational processes concerning objects).
That's an interesting thought I've pondered as well, but doesn't that just move the problem to whether machines can produce such thoughts?
Certainly they can produce representations of thoughts, but do they really think them?
An entity is conscious if:
- it has external inputs and a facility for creating a symbolized account of them,
- it has a facility that stores these accounts,
- it can order those accounts roughly by time,
- it can place or relate a symbol representing itself in these accounts.
The quality of consciousness varies widely; a program that has an object that references itself and can do the above things is very technically conscious, but likely nowhere near the depth of a human being.
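To make the four bullet points above concrete, and to show how low the bar of "very technically conscious" is set, here is a minimal sketch; every name and structure is my own invention, not a claim about how minds actually work:

```python
import time

class Entity:
    SELF = "SELF"  # the symbol this entity uses to represent itself

    def __init__(self):
        self.accounts = []  # condition 2: a facility that stores accounts

    def perceive(self, external_input):
        # condition 1: external input -> a symbolized account of it
        # condition 4: a self-symbol placed in / related to the account
        account = {"t": time.monotonic(),
                   "symbol": repr(external_input),
                   "observer": Entity.SELF}
        self.accounts.append(account)

    def history(self):
        # condition 3: order the accounts roughly by time
        return sorted(self.accounts, key=lambda a: a["t"])

e = Entity()
e.perceive("warm sunlight")
e.perceive("cold rain")
print([a["symbol"] for a in e.history()])
```

A few dozen lines satisfy the definition literally, which is exactly the point being made: the definition names a structure, while the depth of the resulting consciousness is a separate, graded question.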
The human mind is a plastic, adaptive system, which is shaped by senses that consume the universe and which attempt to model the universe. It is past and present interlocked - causality, bound up in tight knots of matter.
This is the universe, being aware of itself. It is perhaps not capable of being understood, just reveled in.
Who here can visualize an apple in their mind, literally seeing it as if it existed and was being perceived by the eye (a sense organ)? Who can't? The answer will vary from person to person. I know what some have said, given I've asked 100s of people this question because I don't form internal imagery.
Even though some don't have that mind's eye view, it would seem most of us do build an internal model, similar to the apple, of the world around us, which is built from sensed information. Some of the people I've talked to can put "extra" things in that model. Dots on the wall. Boxes on tables.
If it can be updated, it must exist in mind and it is likely to exist in mind in a similar way for many, at least for us to all agree we seem to have similar experiences and see the world in similar ways.
Similar to an internal biological version of Unity, our eyes, ears, nose, taste and touch/feeling come together to create a sort of reality rendering "camera obscura" in the mind, allowing us to judge and interact with our perceived environment, through the copy we create of it.
Close your eyes, and the model fades (at least for most). Leave them closed long enough and you will lose what we are seeking to define here.
Anybody who dreams when sleeping. Is that not everybody? The ability to do it when awake (lucid dreaming) is a different thing.
> lack of awareness equals lack of consciousness
No it does not. I'd argue that even deep sleep or getting knocked out does not equal loss of consciousness. We can't tell, as consciousness is only perceived through its contents, so we can't distinguish between absence of contents and absence of container.
> Leave them closed long enough and you will lose what we are seeking to define here.
Consciousness is not the model. It's where the model is built. You only mention perceptions; what about thoughts?
Everyone. What you think you are seeing as a whole is your hallucination based on fragmentary sensory input. Same for everything else you think you see around you.
Doing it without corresponding sensory input requires practice, drugs, sleep, unusual brain development, or other techniques, because hallucinating things not there is suppressed for obvious reasons. When that suppression fails, we see things not there, such as Elvis on a piece of toast.
We need a new adjective for what it is to be like GPT-3 ...
But any model of our own consciousness that fails to encompass others' as similar is sterile. We are all found to have brains; and damage to brains, or even to the stuff that feeds brains, alters or snuffs out consciousness. Brains turn out to be made of nerve cells, that are like other cells but specialized for processing information. Stimulating an individual nerve cell can trigger a thought, memory or sensation, repeatably. QED, consciousness is a phenomenon of nerve cells processing information.
Anything else is woo.
How you would build a thing that is conscious is not known. But we have existence proof that it's possible, so the rest is a matter of engineering. Philosophers and deists can do whatever the hell they want, but will have nothing meaningful to contribute.
Here is Hameroff and Penrose's paper on Orch OR: https://www.sciencedirect.com/science/article/pii/S157106451....
If it emerges from the fundamental features that themselves emerge from "lower" features, isn't anything "higher up" going to happen? It did for us, so it always would?
These guys are really circling for semantics not theories of reality and consciousness. Has anyone considered English is just a terrible system for this sort of reasoning?
I’ll give a virologist and other applied math types props for data driven and concrete outcomes.
These post-modernists are just deconstructing/reconstructing in circles for book sales, IMO.
If they can output something concrete humanity can use, give them all the gold in the world. Otherwise it's just a book club to me at this point.
The most satisfactory explanations that I got were from cognitive science (https://advances.sciencemag.org/content/6/11/eaaz0087) - consciousness as a side effect of the brain's electrical and magnetic fields - and from evolutionary biology (Attention Schema Theory).
AST: I am going to write a small timeline for the development of our brain (correct me if I am wrong).
Claim: This happened due to too much information processing!
As complex organisms started processing too much information, there was competition among neurons, which led to:
(a) Selective signal enhancement (e.g. Hydra) ---> (b) a centralized controller for coordination (the tectum): this handles overt/default attention, controls the eyes and head, and is found in both mammals and reptiles ---> (c) the wulst (in reptiles) and the cortex (in mammals): this handles covert attention, meaning you don't have to attend to a stimulus to process it; you can think about a sound coming from behind you without looking at it!
This is the main difference: the tectum is still in both of us, and our responses to most stimuli are controlled by it, but deep processing and thinking are done by the cortex, which models attention on a SCHEMA (a workflow) [neural pathways/consciousness]. Nobody knows exactly how, but we tend to focus on this schema to do the deep processing [covert attention].
We make a sense of self using this; other animals as well!
We also associate ourselves with others using this; language was a direct consequence of this, as we extended a communication system of our brain to others!
Now, your question of whether a set of stones can have consciousness? Yes it can: if over time this system of stones is observed, with some meaning associated with its states [arranged in a circle or square], someone can argue that it is aware!
You can look at the universe as a set of rocks with periodic movements; it's aware!? Maybe :)
That is called argument from disbelief. It is a common fallacious argument form.
The only mystery about consciousness is why anyone insists there is one.
Obviously there is one; it's the force driving the typing of these words. It isn't the machinery typing them but the fundamental quality of the machinery that feels like it does.
Of course I can't prove my consciousness to you, but I operate on the assumption that p-zombies are nonsense, so, it should be entirely possible to prove your consciousness to yourself, albeit difficult due to lack of meaningful language labels for subjective experiences.
(We do have meaningful language for subjective experiences - we call eggplant "purple" and sugar "sweet" and music "groovy" - these correspond not to any physical property, but how they influence our minds.)
I have subjective experience, including the subjective experience of observing my own behavior. So do you. The redness of a tomato has to look like something, and furthermore has had to evolve to catch the attention of animals like yourself that would be nourished by it. Meanwhile, animals like yourself have needed to evolve to find red things appealing. No mysteries there.
Anything that can perceive itself and can perceive itself perceiving will exhibit all the hallmarks of what we call consciousness. It's a recognizable behavior pattern with a name. Imagining there is more to it than that is just tying yourself in knots to no purpose.
In my opinion this just delays the question.
The question now becomes whether machines can perceive themselves (or really anything at all) or whether they can just mechanically and symbolically represent the perception of objects, including themselves.
If the answer is yes, then that implies that any sufficiently large representation of computation has a consciousness and therefore that there is nothing special about it.
If it is no, then where's the relevant difference to a human brain?
I have no reason to think that you are not mechanically and symbolically representing the perception of objects and events. A pattern of activation of neurons is a symbol.
You might as well talk about the mystery of where the execution comes from, when the program starts, and of where it has gone when the program finishes.
Consciousness makes as much sense as a solar eclipse made to the primitive man who had no conception of the solar system. A program on your computer wouldn't make much sense unless you understood the inner workings - processes, kernel, ABI, CPU, etc.
Maybe one day we will find out as we learn more about the brain. Until then we just have to live with this magical mystery. The real question is how we deal with the ethical issues. We could learn a lot more about the brain if we could "break it", test it, experiment with it, etc. But obviously we can't do that with the human brain. Maybe start with simpler animal brains and then work our way up?
If we could triangulate the parts of the brain where consciousness resides, then perhaps we can disable that portion and create brains to experiment on? But then again, if we disable the part of the brain responsible for consciousness, how can we study consciousness?
Biologists have been doing this for decades. Eric Kandel won the Nobel Prize just studying a sea slug biting seaweed.