Of course there is something intuitively bothersome about that: somehow, we don't believe that there is something it is like to be the United States, but we do believe there is something it is like to be Xenophon. That's why the core question isn't absurd at all -- the author is trying to understand what truly differentiates a conscious system.
If you accept that many-celled organisms can be conscious, and that eusocial colonies (ants, bees) behave more like intelligent organisms than any one member, it's not much of a stretch to say "hm, a country belongs on that scale".
It's moderately surprising that a consistent system would put the US at, e.g., 25% of a human by consciousness, but it's not the kind of thing that makes you say "oops, I made a wrong choice somewhere".
The lens of homeostasis (or autopoiesis more generally) is useful: it lets you put things on a meaningful, insight-yielding scale: there are humans, and rocks, and then fire and hurricanes somewhere in between. So what does it take to place something past humans on that scale?
Why do we assume enough intelligence = consciousness? There's a great sci-fi book exploring that, "Blindsight"; I highly recommend it.
Just like organs do not work like large cells and people aren't large-scale organs, societies aren't large-scale people. But there are social elements that bind us into a society. Dogmatic individualism is wrong, and treating society as an individual is also wrong.
In any event the US acts like a person because it is basically the President plus Congress plus the leaders of the top 100 public and private American institutions. Any group that small is going to act coherently.
Your assertion is sorta like saying that the human body is like an organ because it is basically the heart, brain, and eyes.
We obviously do not live in anarchy, but I don't think our world can be so simplified accurately either.
Somewhat like how an atom-by-atom model of an airplane might be more accurate, even if it would be accurate enough to model it as a smaller number of tension- and shear-bearing elements.
I could see taking it a step further: instead of a Gaia theory, by which I assume you mean that the Earth is a sentient system, why not have all things that are causally connected be part of a universal consciousness? At what distance apart in space or time or both do we reject that individual components of a complex system could possibly combine to be conscious? Also, do all components need to be of the same type (individual human beings, in the United States example)? Could not other mammals and birds and fish and insects and forests and fields contribute?
The example in the article is a nation state, but is there any reason for rejecting a larger system? If the United States is conscious, does an American who goes to live abroad cease to be a bit of United-States-consciousness and become, for example, a bit of France-consciousness? What of astronauts, or those who will inevitably live on Mars someday?
Is the number of bits of nation-state-consciousness significant? Would China be "more conscious" by virtue of having four times the population? Is Canada only dimly conscious, having only a ninth of the population?
Finally, it seems strange to me that we'd use geographical borders to delineate separate nation-state-consciousnesses in an age when individuals so routinely communicate and travel outside of them. Nation states don't exist in vacuums, but interact and cooperate in many ways that affect each other mutually, for both good and bad. Perhaps all the people who speak a given language could be a conscious entity, regardless of where they live.
This is all fun to think about, but I lean towards the so-called neurochauvinism mentioned in the article and have a difficult time wrapping my head around the concept that something outside of brains could be conscious.
Or, if this still does bother you, maybe you should question your faith in materialism.
But while materialists may reject consciousness as a fruitful term, there's a problem with finding a functional definition of consciousness -- or even one that agrees enough with what we colloquially understand consciousness to be -- as consciousness, as we understand it, plays absolutely no observable function. Dismissing offhand as an illusion something that is not just perceived to be real but perceived to be the realest thing there is, is just waving away a really hard problem. I can accept that consciousness may be an illusion, but that doesn't make it any less elusive, as illusion itself presupposes consciousness.
And suppose that you come up with a theory of when consciousness -- or its illusion -- emerges. How can you possibly test that?
There is a huge difference between, "we can't talk about it in a scientifically meaningful way" and "there is no it to talk about". The former most certainly does not imply the latter.
In that situation the brain halves still insist on there being a self, and insist on knowing the motivations for actions taken by the other brain half, despite not having any direct connection, and often being provably wrong (e.g. researchers have manipulated data, presented them to one brain half as decided by the other, and gotten the brain half to explain its motivations for making choices that were actually made by the researchers).
Similarly, you can sever the connection between the brain and the gut (which also contains a mass of neurons) and the gut will continue to operate independently, yet the gut can affect your emotional state and other parts of what we tend to consider as "self".
We are governed by a range of independently operating systems that combine to create "self" in some form.
Note that this does not mean that consciousness has to be an illusion, but the illusion of a single, coherent, unified self is created and actively perpetuated by our brains, papering over all kinds of holes.
The most that shows is that coherence of self is sometimes an illusion. You could modify the source code of a distributed database so that the nodes in a cluster were no longer guaranteed to be consistent. That wouldn't show that consistency is an illusion; it would just show that you can break stuff by modifying it. It's hardly surprising that people with damaged brains behave in odd ways.
Whether or not they're right in any given claim is beside the point, as anything past the split includes an element of chance and depends on the extent of outside interference. E.g. just talking to a split-brain patient (severed corpus callosum) from one side in a sufficiently lowered voice is enough to provide different information to each brain half and cause them to diverge - coherence is lost pretty much from the first moment, though the extent of the divergence can remain quite low for some types of information for quite some time.
> You could modify the source code of a distributed database so that the nodes in a cluster were no longer guaranteed to be consistent. That wouldn't show that consistency is an illusion; it would just show that you can break stuff by modifying it.
This is a poor analogy. In this case the corpus callosum is the replication mechanism, not the computation.
Take a multi-master database and sever the replication, but let the nodes most of the time see mostly the same data. Now change the database so that it tries to infer what the results actually should be based on patterns seen in the past, so that it actively lies to you about what the basis for the query responses are (the database tells you it is derived from inserted data, but half the time it's computed based on imperfect assumptions of what the other node will have seen).
This is what is actually observed in experiments on split-brain patients (severed corpus callosum): you know you've fed bullshit data in, yet the system responds by providing a result with a confidence it has no justification for. You can argue we can't prove that the motivation is to maintain the illusion of self, but the effect certainly is to maintain an illusion of a coherent self - or at least try to. Unlike the database, you can ask each brain half what it based its decision on, and the deceived brain half will usually insist it made the decision based on x, y, z, even when researchers made the decision without its knowledge.
The only change is blocking communication.
Yet when one brain half tells you it made a decision and explains to you why, you can't trust a word it's saying, as it will present itself with equivalent certainty whether or not it actually made the decision, as long as it thinks some part of the collective actually did make the decision.
Think of any part of your brain as some government spokesperson who is trying to speak for the whole, and who needs to answer questions about statements they personally never made without letting slip that it's total chaos behind the scenes.
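To make the analogy concrete, here is a toy sketch (the class and method names are my own invention, not a real database API): two nodes whose replication link has been cut, where a node asked to explain a write it never made confabulates a confident rationale anyway.

```python
class Node:
    """A toy multi-master node with no replication to its peer."""

    def __init__(self, name):
        self.name = name
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

    def explain(self, key):
        # Like the deceived brain half: report a confident first-person
        # rationale whether or not this node actually made the write.
        if key in self.data:
            return f"{self.name}: I chose {self.data[key]!r} deliberately."
        return f"{self.name}: I chose that value deliberately."  # confabulated

left, right = Node("left"), Node("right")
left.write("decision", "go to war")   # only the left node makes the decision
print(right.explain("decision"))      # the right node "explains" it anyway
```

The point of the sketch: the query interface never exposes which node actually originated the write, just as each brain half presents its answer with the same certainty either way.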
Some certainly do, more specifically eliminative materialists: http://plato.stanford.edu/entries/materialism-eliminative/
But my read is that this article is intending to throw a challenge in the direction of philosophers who advocate a materialist understanding of consciousness, but who aren't eliminative about the concept of consciousness. I.e. they believe it's meaningful to call humans "conscious", and they also believe that the phenomenon has a materialist explanation. This article aims to force them to either drop the concept, or accept that it can be applied very broadly.
Consciousness (atman/soul) is self-existent, ever-blissful, non-material energy. It has no beginning and no end; it is never born, so it never dies. It is the absolute truth because it is not subject to change, death, and decay. Material reality is only a relative truth because it is always changing. The whole of our body's cells recycle every year, but we are still aware of the body as before. That awareness is the non-material atman, which associates itself with this body as long as it is bound.
Yogis for ages have been able to go to the source – atman – by silencing the mind and its modifications (vrittis) and have been able to observe the causal, subtle and gross planes of existence along with the wheel of time. Hence the Upanishad declares: "That which cannot be observed through the mind, but rather through which the mind gets the ability to observe, is atman."
It hardly takes a year or two under proper guidance and full commitment to get glimpses of this subtlest thing - do you dare to invest?
If you fanatically stick to a viewpoint without even evaluating the other view with an open mind, you risk repeating what religious fanaticism did to pre-modern scientists in the west.
And regarding downvotes, do you think I care? I could have posted a more popular comment in favor of materialism to gain more material points :p
Edit: To learn more, you can read autobiographies of two living yogis  and . A succinct yet most authoritative practical guide is Yoga Sutras .
Where does the idea of materialism fall?
Is the idea of materialism itself part of materialism?
If you think it's hard to define what a soul is, try defining what it is for something to be "material".
And saying that consciousness is a fruitless concept is a bit hasty, when this very concept is unavoidable when trying to understand what humanity is. It's so fruitless that hundreds of books have been written trying to grasp a part of it. It is also central to psychoanalysis, and to many major novels of the last century. Fruitless, you said?
To say this is quite ignorant of the past few (24?) centuries of philosophy, including those centuries and thinkers when philosophy was part of religious thinking.
Where are you getting this from?
If you are using consciousness to mean being aware of stuff in a way that you can act on it, then the questions are fairly simple - humans are conscious when in a normal state, not so when knocked unconscious. Likewise rabbits. The United States can show collective consciousness in that its citizens in aggregate can be aware of things and react. If you look at a different meaning of consciousness in terms of subjective experience, then probably other people have similar experience, and rabbits a simpler version, but it's hard to tell.
I don't get why philosophical writing tends to be so vague and waffly. Maybe because they don't achieve much in the real world unlike say neuroscientists studying consciousness and so need to hand wave and be vague to cover up the lack of real content?
Secondly, he clearly refers to a definition of consciousness called phenomenology. And one must assume that "materialist" is a term in philosophy whose meaning people familiar with the field know and people unfamiliar with it don't. This paper is not an introduction to either of those terms.
Your complaint is that you don't know what these words mean, and that is what is apparently wrong with philosophy.
If your method of answering questions yields contradictory answers to the same question (eg: "Do individual humans have moral worth? Nazism says no, liberalism says yes -- philosophy!"), then it's not very good.
My reading is that he's actually debunking this argument!
"The correct method in philosophy would really be the following: to say nothing except what can be said, i.e. propositions of natural science--i.e. something that has nothing to do with philosophy -- and then, whenever someone else wanted to say something metaphysical, to demonstrate to him that he had failed to give a meaning to certain signs in his propositions. Although it would not be satisfying to the other person--he would not have the feeling that we were teaching him philosophy--this method would be the only strictly correct one."
It's a bad idea to treat philosophy as a menu from which one can choose the most appealing items and be left with a satisfactory understanding of the problems at hand.
Why's that? All moral propositions ultimately rest on either circular reasoning or unjustified assumptions (https://en.wikipedia.org/wiki/Regress_argument, also see the argument in Wittgenstein's aforementioned Tractatus), so searching for absolute philosophical truth is doomed to failure. There's even a school of philosophy based around this approach: https://en.wikipedia.org/wiki/Pragmatism.
If you define the success condition for the search to be arriving at the final truth, then yes, you are doomed to failure. That's definitely not my success condition for philosophical inquiry, but now we are getting into why I think philosophy is valuable, which is a bit out of scope here I think, because we both agree that philosophy is valuable. I was just saying that stopping at the Tractatus as if it would satisfy the previous commenter is, well, not good. At the very least one should put the Tractatus in context with Wittgenstein's Philosophical Investigations. The Tractatus is very assailable.
I didn't mean to imply this, rather I meant it as a starting point, to give the commenter an example of a philosophical text that is more formal in its definitions, and hence might frustrate them less. I should probably have made it clearer though that I was just presenting the quote from Tractatus, not necessarily endorsing it.
In particular, he relies on this notion of "positive properties". It is not at all clear that there is any notion of "positive properties" that satisfies his axioms, still less one for which "positive" is actually a good name.
(The axioms are certainly false if, e.g., we take "positive" to mean something like "regarded as good by some particular person" or "regarded as good by a majority of intelligent and thoughtful people". Gödel wants the conjunction of all positive properties to be positive, which in particular implies that it isn't outright impossible. But it's easy to find properties generally regarded as good that are not mutually compatible.)
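For readers who haven't seen the proof, the core of the system can be sketched roughly as follows (this is the standard textbook presentation of Gödel's axioms, paraphrased, not a quotation from Gödel):

```latex
% P = "is a positive property", in third-order modal logic
\begin{align*}
\textbf{A1:}&\quad P(\lnot\varphi) \leftrightarrow \lnot P(\varphi)
  && \text{exactly one of a property and its negation is positive}\\
\textbf{A2:}&\quad \big(P(\varphi) \land \Box\forall x\,(\varphi(x)\rightarrow\psi(x))\big) \rightarrow P(\psi)
  && \text{positivity is closed under necessary entailment}\\
\textbf{T1:}&\quad P(\varphi) \rightarrow \Diamond\exists x\,\varphi(x)
  && \text{positive properties are possibly exemplified}\\
\textbf{D1:}&\quad G(x) \leftrightarrow \forall\varphi\,\big(P(\varphi)\rightarrow\varphi(x)\big)
  && \text{``God-like'': has every positive property}\\
\textbf{A3:}&\quad P(G)
  && \text{being God-like is itself positive}
\end{align*}
```

The objection above bites at A2 together with A3: via T1, the positivity of G implies that the conjunction of all positive properties is possibly exemplified, which fails for any everyday reading of "positive".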
I define the "God" I'm looking for as an invisible green humanoid. Because something cannot be invisible and green at the same time, this "God" cannot exist.
It's naturally much harder to do this with Gods that correspond to particular religions, but from time to time it is possible. E.g. a jealous God that will destroy civilization if we don't offer it human sacrifices has not done this, so it can't exist.
Do you mean "which reflects or emits light of these particular wavelengths", or do you mean something about human perception?
Also, similar questions apply wrt "invisible".
If "green" is based on human perception, why would an entity not qualify which, when a person looks in its direction, is not seen as having any shape, but rather produces a perception of greenness in that general direction?
Why would this entity not be considered both green and invisible?
Alternatively, does "invisible" mean invisible to the naked human eye?
What if the entity is microscopic?
Being vague is simply sloppy. If you can't write concise thoughts which clearly communicate your intention, then I for one am not going to bother reading it.
Precise philosophy already has a name: it's called physics.
(In the end I transferred and finished off my degree doing mathematics and computer science.)
E.g., Newton's "Mathematical Principles of Natural Philosophy" (Philosophiæ Naturalis Principia Mathematica)
Describes in greater detail the exact problem with philosophy you are lamenting!
Similarly, I feel trying to parse conscious intentionality by any known meaning of the term "conscious" out of something like "all the individuals of the United States" is a situation where the mapping is doing all the work.
Clearly, there is something to the "United States" as well as other groupings of human beings. But I daresay in a lot of ways these things are less mysterious than they seem... we deal with these groupings all the time, and if we thereby fail to ascribe consciousness to them, I'd say our experience should probably be listened to. Sure, groups of humans routinely do great things no individual could do, but at the same time, and with no contradiction, groups of humans fail miserably and stupidly too. We've all seen committees that fail to accomplish a goal that any individual on the committee could have achieved alone, or where the committee remains fuzzy on its purpose (pardon English's anthropomorphization, there) even as the individual members are all quite clear (but divergent). Rather than trying to throw this in the "consciousness" bucket I think it's better understood as, well, the way we already collectively conceive of these things: as human organizations, with their own foibles, characteristics, and properties.
It's not consciousness. It's something else. "Greater" in some ways, and yet, profoundly lesser in others. Trying to view it through the lens of "consciousness" is just anthropomorphism striking again, and I'd say actively harmful in terms of trying to understand the phenomena.
This is why panpsychism appeals to me. On some level it makes more sense for me to believe that sentience is an intrinsic property of physical reality than to believe that sentience can emerge from someone moving sticks and stones around according to some algorithm.
There is no mathematical rigor whatsoever in that portion of the paper. He just conjectures that there would be no efficient algorithm for encoding (e.g.) chess positions as states of a waterfall. This actually seems fairly unlikely. You only need to be able to specify a set of physical states that compose in such a way that you can, e.g., incrementally push items onto a stack, and you're basically done. How can we possibly be confident that there is no state description of a waterfall that makes that possible? Note that we have to allow the state descriptions to be very complex, since the physical states corresponding to the computational states of a microprocessor are also extremely complex. This complexity happens to be easier for us to deal with because microprocessors have been designed to make it easy for us to put them into particular computational states. But the relevant states of a waterfall, while much more difficult for us to manipulate, will not obviously be any more complex. And to make the key point, we need only find one suitable inanimate physical system. It really doesn't seem so unlikely that there are a few of them out there.
On top of all this, why should it matter if there's no efficient encoding algorithm? It would obviously lead to an infinite regress if we say that a physical state has to have been "encoded" in order to count as a bona fide computational state (since then the input to the encoding algorithm would itself need to have been encoded).
I think you're answering a different question than Scott. You seem to be answering "Can I construct a water-based computer that looks like a waterfall that could solve a chess problem, set up the initial parameters, and read the answer off the bottom?" This is conceivable. It is likely it would involve a highly implausible object of highly implausible accuracy and reliability, but if such an object existed it would still be a polynomial calculation to produce it. (Or, at least, a polynomial calculation could produce an object, if not the optimal one. The lack of solution to the Navier-Stokes equations might prevent you from finding an optimal one. And you might find yourself having some trouble with quantum effects. But such are the problems in the real world.) And our polynomial approximation is still not going to look like a waterfall as it will inevitably need looping constructs where state flows "back" uphill... if you want it to be one-way like a waterfall that is going to be exponentially large.
Scott's question is a different one. Given a real world waterfall that you have not constructed, but simply come across, can it be said to be computing the solution to a chess problem? Is a random rock that is sitting on the ground, jiggling away with atoms in constant motion that all the computers in the world could not possibly provide enough computational power to fully simulate, able to be said to be computing something with all that power? Could a boulder be sitting there simulating a human mind? I don't mean in the pan-psychic sense, I mean, literally, is it simulating a human mind? It has the computational capacity, when considered in the raw.
This is where Scott's argument kicks in, where the mapping is doing all the work.
Further evidence of this is that even if you want to sit here and argue that this boulder that is sitting in front of us (metaphorically) really and truly is calculating the state of your mind as well, well, with another equally sensible (and exponential and impossible) mapping, it's also calculating mine, it's also calculating Attila the Hun's, and it's also calculating the brain states of the final human being to ever live, and also deer's brains and the solutions to world hunger and pretty every other interesting thing ever, really. The contortions required to create the mapping of boulder state to human brain state are such that you can fit literally almost anything into it, and therefore, it is reasonable to point out that it is meaningless. It's an interesting argument that provides a surprisingly rigorous line that allows us to say that, no, that waterfall is not solving a chess problem or anything else... it really is, well, a waterfall.
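To see concretely how a post-hoc mapping can smuggle in the entire computation, here is a toy sketch (everything here is illustrative; `solve_chess_position` is a hypothetical stand-in for a real engine, not anything from the paper): given any sequence of physical states, one can always construct, after the fact, a "mapping" under which those states "compute" the answer - because the answer was built into the mapping itself.

```python
import random

def solve_chess_position(position):
    """Hypothetical stand-in for a real chess engine."""
    return "Qh5#" if position == "scholars-mate-setup" else "resign"

def build_interpretation(waterfall_states, position):
    # The "encoding": pair each physical state with the answer. All the
    # intelligence lives here, in the mapping, not in the waterfall.
    answer = solve_chess_position(position)
    return {state: answer for state in waterfall_states}

# Arbitrary physical states, standing in for waterfall configurations.
waterfall = [random.random() for _ in range(5)]
mapping = build_interpretation(waterfall, "scholars-mate-setup")

# Under this mapping the "waterfall" yields the right move for every
# state - but only because the engine was smuggled into the mapping.
assert all(mapping[state] == "Qh5#" for state in waterfall)
```

The triviality worry is exactly this: since such a mapping can be constructed for any physical system and any computation, ascribing the computation to the system rather than to the mapping does no explanatory work.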
Yes, that's the question I was addressing. I really can't figure out why you thought otherwise. There's nothing in my original comment about constructing artificial water-based computers.
>This is where Scott's argument kicks in, where the mapping is doing all the work.
It is his conjecture that there is no computationally simple mapping, but I see no reason to believe that this conjecture is correct. He certainly doesn't give any reason to think so in the paper. And to echo jameshart's point, he gives no reason to think that the mapping from chess positions to brain states would be any simpler than the mapping from chess positions to waterfalls.
If you find his conjecture plausible for some reason, that's fine, but you're quite wrong if you think that the paper presents any rigorous mathematical argument. The discussion is much less rigorous and sophisticated than the existing philosophical literature on the topic. (After all, Putnam's original paper contained a proof.)
>Further evidence of this is that even if you want to sit here and argue that this boulder that is sitting in front of us (metaphorically) really and truly is calculating the state of your mind as well, well, with another equally sensible (and exponential and impossible) mapping, it's also calculating mine, it's also calculating Attila the Hun's, and it's also calculating the brain states of the final human being to ever live, and also deer's brains and the solutions to world hunger and pretty every other interesting thing ever, really.
Yes, that's the problem. That's why the computational theory of mind seems to be a bit of a non-starter. Since pretty much every physical system computes pretty much every function, it's difficult to see how thought can arise merely as a consequence of the brain computing a particular function.
I feel justified in referring to "the" human state because in the grand possibility space of complex systems, human brains are but a point at scale, in much the same way that our little planet is just a dot.
Any definition of "consciousness" that covers all possible complex systems is just another word for "complex system", and therefore useless.
It isn't for me to have to prove that the quadrillion-dimensional system of the United States is different from the billion-dimensional system of the human consciousness, with wildly differing characteristics between the relationships of those dimensions... I'd say it's the other way around: those who think it's all the same ought to give a compelling reason as to how it could possibly be the same thing, because on the face of it it's actually even more mathematically absurd than common sense would suggest. Common sense still has that anthropomorphization built into it, deceiving you, and leading to the ideas of "panpsychism" where the only thing we can conceive is consciousness-as-we-know-it, so the only question is how much of it something can have.
My selection of the dimensionality of the human is arbitrary; my selection of the United States is obtained by simply multiplying people x human states. This is a brutal underestimate because the humans are interacting too and that creates further dimensionality, but, meh. It's really just "big numbers".
We ascribe consciousness to humans by observing their behavior, not the structure of their brains. And it's true that countries do respond to stimuli and act with purpose, but (echoing Chalmers) I think a lot of that can be ascribed to individual people controlling a hierarchy, and not to the collective. If there's anything about a country that arises from the distributed connections between humans, it would be more likely to manifest as broad social trends, not specific actions like going to war. But those general trends seem to ebb and flow for their own inscrutable reasons; they certainly don't show obvious evidence of intelligent purpose.
Nevertheless, the concept is fascinating. And I think the author makes an excellent point that even if it's wrong, the argument is worth considering if only to help us come up with better criteria for what it means to say an entity is conscious.
"One might think that for an entity to have real, intrinsic representational content, meaningful utterances, and intentionality, it must be richly historically embedded in the right kind of environment. Lightning strikes a swamp and “Swampman” congeals randomly by freak quantum chance. Swampman might utter sounds that we would be disposed to interpret as meaning “Wow, this swamp is humid!”, but if he has no learning history or evolutionary history, some have argued, this utterance would have no more meaning than a freak occurrence of the same sounds by a random perturbance of air. But I see no grounds for objection here. The United States is no Swampman. The United States has long been embedded in a natural and social environment, richly causally connected to the world beyond – connected in a way that would seem to give meaning to its representations and functions to its parts.
I am asking you to think of the United States as a planet-sized alien might, that is, to evaluate the behaviors and capacities of the United States as a concrete, spatially distributed entity with people as some or all of its parts, an entity within which individual people play roles somewhat analogous to the role that individual cells play in your body. If you are willing to jettison contiguism and other morphological prejudices, this is not, I think, an intolerably weird perspective. As a house for consciousness, a rabbit brain is not clearly more sophisticated. I leave it open whether we include objects like roads and computers as part of the body of the U.S. or instead as part of its environment."
I'm quite sure that from the perspective of a cell in my liver, my whole life would seem to be ebb and flow - assuming the cell were able to observe and comprehend processes much broader and longer than itself.
It's the same in the case of a possible consciousness of a large group - as parts of it, we would face many difficulties just noticing and understanding such a phenomenon.
Especially taking into account that it won't be exactly the same type of consciousness as the human one.
It's conceivable that any kind of system whatsoever is conscious in ways that we can't understand or comprehend. But if the author's position is that the US should be provisionally considered conscious because it takes purposeful, intentional actions, then a lack of apparent purposefulness seems like a reasonable criticism of the argument.
Clearly, large collectives of people can seek goals. Large collections of transistors can also seek goals, up to a point.
But it's impossible to communicate directly with the United States as a self-aware entity, or to have a direct conversation with a collective.
You can communicate with a representative of the US, but there's no way anyone can talk to, or email, or Skype, or send a paper letter to, or have a telepathic conversation with, any entity that would consciously describe itself as "The United States of America."
This matters because you could ask ten representatives of the US for their opinion on something and get ten conflicting answers - without essentially damaging the concept of "US-ness."
This clearly doesn't match the definition of a unified consciousness. It's not the same as a single consciousness changing its mind, because there is no recognisable single mind that changes.
What about insect colonies, animal herds, bird flocks, and corporations? They simply amplify the goals and personalities of their leaders. I'm not aware of any instances where - for example - a separate corporate mind made its wishes known to override board decisions.
(You could possibly argue this is what happened with Reddit. I'd say no - that was a conflict between factions with different goals, not evidence that there's a metaphysical Reddit-mind independently placing conference calls and tweeting to steer Reddit's future.)
Would you expect one of your cells to be capable of carrying a conversation with you? ("No.") Then why would you expect a "cell" (citizen) of the United States to be able to communicate with it?
> This matters because you could ask ten representatives of the US for their opinion on something and get ten conflicting answers - without essentially damaging the concept of "US-ness."... It's not the same as a single consciousness changing its mind, because there is no recognisable single mind that changes.
You could also stimulate ten neurons separately and receive 10 differing responses. And when a person changes their mind, there is also no recognizable single neuron that has changed.
Do countries communicate with each other in similar ways? They can appear to. But in fact there's no communication independent of the individuals who represent the countries. The entities called "Russia" and "United States" are wholly defined by the contents of the embodied minds of their representatives.
There is no way the "United States" can change its mind during an international negotiation independently of its representatives. If it did, they would suddenly stop pushing one line and start pushing a different one for no obvious reason.
I'm not aware of this ever happening.
Compare that with human and animal communication. So far as I know, my self awareness isn't defined by a shared description and belief in "Me" across all my neurons. If you pull an individual neuron out of my brain it won't have any concept of anything, never mind of "Me."
So the two processes are completely different. One is the flocking behaviour of (semi)intelligent agents.
The other is an emergent property of units that have almost exactly zero intelligence and awareness individually, but somehow combine to produce something that has much more.
This is provably false unless you are adding hidden assumptions. It is easily shown in the case of some bureaucracies that a person will have a hard time getting something out of them, even though no single person will claim to be the one stalling the request. From the outside, the bureaucracy behaves as if it has a different goal than the (stated) goal of each individual.
On that basis you could impute consciousness to any cybernetic system that takes a majority decision - such as the navigation systems on the old Shuttle.
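That Shuttle-style majority voting is easy to sketch. A minimal illustration (the readings and error behavior here are invented for the example, not the actual flight software):

```python
from collections import Counter

def majority_vote(readings):
    """Return the value most redundant channels agree on, masking a faulty unit."""
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: the fault cannot be masked")
    return value

# Three redundant channels; one has failed and reports garbage.
print(majority_vote([42.0, 42.0, 17.5]))  # -> 42.0
```

The system "decides" coherently, yet nobody would call the voter conscious - which is the point of the objection.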
And just because communications are attributed to a nation, doesn't mean you're dealing with anything more than a diplomatic convention. In practice you're still dealing with leaders and representatives, and the leaders will set policy.
Without the leaders there is no entity - and in fact it's also known in diplomacy that you can decapitate a country simply by killing its leadership, or define a new country by changing its leadership.
A change of leadership creates a change of identity and intention, even though the rest of the things it does remain broadly unchanged.
Compare that with the human brain, where there are no "leadership neurons". There are broad areas of the brain that integrate experience and are involved in making decisions, but you can't point to one neuron and say "That's the president", or to one group and say "That's the ruling party."
Moreover, there's no self-awareness. These arguments are kind of pointless without a final definition of consciousness, but it seems likely to me that entities that act in conscious ways have some internal representation of a persistent self which is perceived - somehow, in a magical way we don't understand - as a unified self-identity.
So you need two things for that. One is a persistent representation of self. And the other is the ability to experience that representation as a singular self-definition.
Humans are embodied, so we know what experience is. Corporations, countries, and bird flocks aren't.
If you invade a large land area, remove a source of profit, or trap half a flock of starlings in a net, no singular embodied experience corresponds to that loss. You can predict goal-seeking behaviours, and you can find individual responses to individual circumstances. But all of those fail the requirement for a single unified self-aware change in state.
Otherwise you'd have to argue that countries somehow feel pain when they're invaded. Citizens may feel pain, especially if they're killed, injured, or made homeless. But countries?
It's also worth considering that the US could be sliced along arbitrary boundaries and it probably wouldn't change too much. That alone seems like such a structural/functional difference that arguing for the consciousness of a system like the US on similarity-to-brains grounds seems difficult.
Still, the strangest thing I've considered is: suppose that the US, or some other group we are a part of, like the galaxy, is conscious, and we get good evidence that it's true. Then disturbing its function, or destroying it, could kill that consciousness. It would be like killing God.
I absolutely believe in universal consciousness (as instances of networks across many different substrates running at different speeds and with varying topologies and so on). But whatever you do will disturb its function, and it will be relatively trivial in the overall scheme of things - to quote Peter Greenaway 'A great many things are dying and being born all the time.' If you can conceive of an entity like 'the United States' or 'humanity in general' being conscious on a different scale than oneself, then one can consider oneself as the superposition of a variety of different ideas of varying intensity, as a multiplexed signal whose sample rate is a function of the underlying cellular processes. 'You' may not be the sum of your networked parts, but rather the algorithm that defines the traversal of said network.
Obviously the most fun thing to think about. It's worth noting that "consciousness" is on a sliding scale from dim awareness to focused lucidity. Is my dog conscious? Probably not, but there's something there. How about a gorilla? The lines start to blur. Is it outlandish to ascribe a fish-level of awareness to a large, distributed group?
That's assuming all consciousness resembles that of humans. I'm guessing our tiny brains are capable of only a thin sliver of what is possible when it comes to apprehending and interacting with reality.
I would say absolutely. But maybe we have a different definition of consciousness.
I too thought of G.E.B. Wasn't there a story in there where someone was talking to an ant colony? Not the ants, but the colony as a whole. I've long considered the similarity between cells:people and people:society. A society can have behaviors and a sort of personality. Consciousness seems a stretch, but not impossible. The thing I tend to think more about is cancer, or the fact that a person can pick a scab, or that people with various psychological issues can harm themselves or parts of themselves (see cutting, eating disorders, etc.).
However the same restrictions don't apply to corporations, and indeed they are heavily competing over their means of existence (money). Which gives an interesting perspective to the current tendency of large corporations to improve their own chances of survival by dodging taxes, influencing politics, etc. In some sense it does seem like corporations have become 'intelligent' and are attempting to change their environment to suit them. I'm not too sure what the end result will be, but I fear it's unlikely to be beneficial to humans.
To stop this we may have to, as you say, kill "God".
A more plausible way for the US to take this type of question seriously is if it were indeed asked by a planet-sized alien. Then I'd expect the authorities in the US to come up with some traditional explanation of countries and how the US is one, and to reject the claim that it's conscious. If an entity doesn't believe that it's conscious, then can it be conscious?
I find it strange that the author didn't entertain such a line of thinking since, to me, it makes the most sense. The first thing you would do when you want to know something about a language-capable entity is probably ask it. He mentions the Turing Test with regard to testing the consciousness of the supersquids, but why not the United States?
But, then, do you consider that the entity may actually lie about it? It may be psychopathic enough to say, "I am not conscious, that's ridiculous", and yet be conscious.
Another possibility is that it wouldn't understand words in the way we understand them. Asking a nation state "are you conscious?" may be like asking a child "do you have a knee-jerk reflex?". How would you know without prior experience?
However, asking in a different manner, say like in a referendum in Greece, you can get a very emphatic response which hints at consciousness at national level.
On the flip side, if an entity asks itself that question, it probably is conscious.
I'm sure there are brain-damaged people who have a single bit stuck flipped causing an aversion to talking about that concept and fully claim to not be conscious but are otherwise high-functioning and really do have the same sort of internal experiences as the rest of us. Classifying them as "not conscious" wouldn't be useful at anything besides making them stop arguing with you. At that point, "conscious" is just a word for people who meet some minimum intelligence and happen to also call themselves "conscious". We might as well use any other word.
I don't think the word "conscious" has enough of an agreed-upon definition for this question to be even as productive as the classic question-that's-really-about-definitions "does a tree falling with no one around make a noise?". I think the question "is X conscious" is mixing many different questions together. Does the US have a subjective experience? (Who knows. I think we only assume other humans even do because we realize we're probably not the one special person to have one, but it's hard to generalize that solution to things very physically different than ourselves.) Does the US think like us? (It clearly takes a stupidly long time to come to any decisions and then changes its mind every 4 or so years on certain things.)
Just for fun as a little tangent, here's an absurd example possibly closer to the US: imagine you have a person that acts just the same as any other, except somehow their neurons in addition to carrying simple signals each have human intelligence and awareness and rich inner lives that they mostly keep private, but when you ask the person if they're conscious, their neurons stir from their own thoughts at hearing a familiar word and break protocol by answering individually instead of following their proper neural behaviors to allow the person to answer the question themselves. (If that seems a little too disconnected from reality to even picture, then it might help to imagine a natural born human who undergoes a surgical process involving nano-machines that replaces their original neurons with the described self-aware neurons one at a time.)
This is actually my other objection to the paper. It seems that centrality is important for the concept of consciousness (I agree that it means a hundred different things, but it's convenient to talk about as a single thing). Most people would identify as the person behind the eyes (or generally in the head). And the fact that all experiences of the body are experienced in that central place makes it quite hard to imagine that the individual parts can also experience things on their own, let alone be able to disagree with the central thing.
It seems people are postulating properties of collections and patterns of groups, then checking things off one by one, satisfied by the similarities. It is the person checking these things off who is noticing similarities and then taking the leap of anthropomorphizing awareness into that idea.
Individual brain cells are not self-aware, or intelligent in any capacity. A whole brain is both.
Individual humans in The United States are aware of both themselves and of the United States. The United States is arguably not aware of itself in a greater-than-the-sum-of-its-parts capacity, as a human brain is.
(So far I've only read the abstract of the linked article)
> do you think that brain cells are (would be) aware of the fact that we are conscious?
No, a neuron is too simple a system to be capable of awareness by itself. I get what you're getting at, but if we're talking about a higher-level phenomenon, let's not also call it consciousness. In fact, we essentially use "consciousness" as a post-facto description of our own illusion of self, so it essentially makes no sense to ascribe it to a system that didn't arise more or less the same way as we did.
Really, I think the major issue is to determine how to talk to the U.S. What's the sovereign state equivalent of "how's it going?" It seems that the only obvious behaviors of the state are acquiring resources (usually by cooperating in a semi-symbiotic fashion with similar states) and responding to basic pain-like stimuli. The U.S. seems to act more like a very simple worm, and less like a higher animal. Perhaps it has primitive organization? (I'm thinking of the larger jellyfish which, while massive, are cognitively simple.)
This is really disappointing, because consciousness is neat.
I also find it questionable to posit fake scenarios then try to draw conclusions from them. I can say "what about a fire that boils water, but it's actually frozen?", but it doesn't really mean such a thing is possible. See also p-zombies, an exercise in imagining nonsensical things in order to draw questionable conclusions.
So either consciousness is based in physics, in which case p-zombies are a non-starter, or we don't know what consciousness is and it's a pointless conversation.
But that would, apparently, be far too much of a downer for most people, finding out that questions can be answered with evidence.
In the same vein, I see emotion associated with AI so often, and that is so frustrating. Emotion is the opposite of intelligence; it's a series of global variables the leftover parts of our brain use to influence the parts in control. It's a bad system and need not be copied.
If there is a god, this is how I imagine it. Free from the flawed systems we live and deal with, free from emotion, free from compassion, a horrifying being of pure logic who you could never begin to comprehend
I think a lot of philosophical hurdles will disappear once we respect boundaries more. In this case not so much physiological or neurological boundaries, but "(machine) learning" boundaries. A system is separated from its environment by a Markovian boundary. If not, it would not be able to build up a representation of "that out there". It would "be it", but not "represent it". I think many of the questions around "self" and "consciousness" can be solved by postulating mechanisms that use such Markovian boundaries within the (artificial) brain.
Information requires a transmitter and a receiver. These can be the same entity separated through time: for example, by writing something down that you read the next day, the two of you (your old and new versions) communicate with each other. But separation, physical or temporal, makes a system more than some kind of chemical soup.
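The "Markovian boundary" here is usually formalized as a Markov blanket: in a directed graphical model, a node is conditionally independent of everything outside its blanket (its parents, its children, and its children's other parents). A toy sketch, with node names invented for illustration:

```python
def markov_blanket(node, parents):
    """parents maps each node to the set of its parents in a DAG.
    Blanket = parents + children + co-parents of children."""
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]}
    return (parents.get(node, set()) | children | co_parents) - {node}

# Hypothetical system: S the "self", B the boundary, E the environment,
# A an action node driven by S.
dag = {"S": {"B"}, "B": {"E"}, "E": set(), "A": {"S"}}
print(markov_blanket("S", dag))  # -> {'A', 'B'}
```

Everything S "knows" about E is mediated by B, which is the sense in which a boundary lets a system represent its environment rather than simply be part of it.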
Definitely possible, however. And fun to consider whole galaxies exhibiting consciousness.
Yeah, but a slow consciousness it would be, unless information can travel faster than light. If not, then there is a limit to how large a computer can be, on account of the speed of light and a few other physical limitations.
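The scale penalty is easy to put numbers on. A back-of-the-envelope sketch (the distances are ballpark assumptions: ~0.15 m across a brain, ~4,500 km across the continental US):

```python
C = 299_792_458  # speed of light, m/s

def min_round_trip(distance_m):
    """Lower bound on signal round-trip time across a system of this size."""
    return 2 * distance_m / C

brain = min_round_trip(0.15)   # human-brain scale
usa = min_round_trip(4.5e6)    # continent scale
print(f"brain: {brain * 1e9:.1f} ns, USA: {usa * 1e3:.1f} ms, "
      f"ratio: {usa / brain:.0f}x")
```

So even with signals at exactly c, a continent-sized mind would run tens of millions of times slower per "thought loop" than a brain - and real social signals (speech, email, policy) are far slower still.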
I propose a ridiculously expensive study with a huge grant wherein I give the United States an aptitude and personality test. Based on the results, we could attempt to find this 239 year-old citizen some rehabilitative help to reduce some of its more antisocial and criminal tendencies.
You added your opinion to the title, rather than leaving it alone.
In fact, your title less accurately reflects the material of the paper, which specifically deals with:
> Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings.
Your title completely omits the key focus of the paper, and its examination, which the original title included.
You did this because you disagreed with the paper, and used the pretense of the nebulous notion of "clickbait" to edit in your disagreement.
The original title was more accurate as to the article contents.
I think samclemens was genuinely trying to follow the guidelines, though, which is good.
Perhaps ironically, the post is at #2 anyway, which is way high for this sort of thing.
As pointed out by the article, societies as distributed group entities demonstrate signs of consciousness, but that's not even the key. The key is that society defines every single consciousness of the members in it: how we think, what we want, etc. A single person's intelligence is simply a tiny subcomponent of the ultimate intelligent being - the society. How intelligent we are is mostly determined by the society (which doesn't mean that everyone thinks alike), and the exciting thing is that some of us get to contribute improvements to that ultimate intelligent being.
Society may know how to create fire, but it was one man who created it for the first time, and it was the choice of just that one man to pass it on to another (and yet another). A man on an island may certainly discover fire.
I guess my point is that how we have learned to attribute and locate the source of knowledge doesn't necessarily describe how it is actually created, moves, and is altered.
That's fair, I'm just turned off by the language of "ultimate intelligent being." Perhaps that's just my human bias, but I have trouble seeing society as anything greater than its parts, primarily people and knowledge.
Humans are the ones that decide what intelligence is to them. We've seen many times in history that this is in fact, very unintelligent to do. But sometimes it seems to work splendidly. The individual can define him or herself as the most intelligent while everyone agrees. That really doesn't necessarily mean anything beyond everyone agreeing that they are intelligent. And some people may as a result, choose to entertain their consciousness in other ways, in order to direct it on a different course. And this may wind up being more intelligent. And then we pretend that nothing weird happened and we knew all along what real intelligence was.
I tend to view things as absurd before I view them as intelligent, but my life is fairly boring.
Ecosystems could exhibit what appears (to a hypothetical external observer) to be conscious, "intelligent" behavior due to obeying underlying "forces" or "laws": physical, environmental (herd behavior, etc.), economic (energy expenditure, diminishing returns), evolutionary (which affects populations).
Forest or town formations (which appear to be "optimal" or "designed", while they just grew causally out of individual "processes") are obvious examples.
Financial markets could be [falsely] viewed as "intelligent", while they are just stochastic individual actions and "herd behavior".
Ant or bee colonies, or a big city at night viewed from an aircraft - they all appear to have a consciousness of their own, but no, it is merely an appearance. Nevertheless we cannot assert that these formations are purely chaotic - they are shaped by chance, but in accordance with the underlying forces and laws which govern (or limit) the behavior of individual "agents" within the system.
Just like all these atoms - they have their positions due to multiple causes (stochastic processes), but in accordance with fundamental laws of what we call "gravity", "magnetism and electricity", "conservation of energy", etc. There is no "intelligence" or "consciousness" apart from that, or That, as a Hindu would call it.
The author, judging from the abstract (it's 7AM here), has it upside-down. One has to first prove that spatially distributed complex entities (like an atom, a molecule, the brain, or a pen) have phenomenal experience, that is, that "there is something that it's like to be a pen". Then she can prove materialism is true. But to start with materialism being true and then... well, who am I to judge.
Some say that materialism refutes itself, in the sense that proving it true dissolves any kind of truth into nonsense.
I understand you're a dualist, but from the materialist standpoint, "spatially distributed group entities" describes neurons.
This means that anyone calling themselves a materialist must also have a concept of this 'rest'. But what the heck is it if you fully believe in a materialist world?
But like many flights of scientific fantasy (and like the article) it also isn't very predictive, it isn't verifiable, and it isn't ultimately very useful as a model.
I abandoned the idea when I recognised it as ontological onanism.
In a sense the United States is conscious, but the "experience" of being the United States, or Exxon Mobil, or Google, is so far removed from the experience of being a person or even a rabbit that it doesn't matter. i.e. It's a metaphor more than anything else.
We take in information through eyes, ears, etc made up of cells. We think in a brain, experience emotions, inhabit a body.
The United States takes in information through organizations, people, computers, etc. It thinks and makes decisions via all sorts of different systems and processes. It doesn't have a physical body, it has different parts and material all over the place made of many different elements.
Perhaps it's true we only value forms of consciousness similar to our own.
I don't know, I don't philosophy well. This paper was thought provoking.
I don't get the impression that Schwitzgebel takes analytic philosophy very seriously, which is something I find very refreshing about him (compared to other philosophers of consciousness such as Chalmers). His early interest was on ancient Chinese philosophy, and in http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/ZZ.htm, he promotes what he regards as an ancient Chinese outlook about not taking language too seriously, which contrasts with traditional Anglo-American philosophy.
So, groups and even America often act like an individual might, even in terms of feelings. The group's memory is spread out among its members. New members, like the author's aliens, even get some of that memory through information exchange, stories, shared feelings, and indoctrination. Their minds can transition to a cross between the individual mind and the group's mind.
So, the brain stores its knowledge in neural connections and states updated by certain processes. Consciousness emerges. Details of it have varying levels of internal strength, conflict, and so on. Human groups store their knowledge in individual brains and in the connections between them via the senses. The group's consciousness similarly has varying levels of commitment to specific ideas or actions, with conflict. The question is, "What's the real difference here and how far can it go?"
If it can go far enough, then the U.S. might be conscious, have a collective memory, have the ability to feel pain collectively, and have an intent formed of members' consensus. That's stronger than the author's own claim. Yet, I think I've cited examples to back some of it up. The U.S. itself doesn't need a brain: its identity can be made of pieces of other brains, both storing knowledge & having feelings, plus their interactions with each other.
Note: I'll end with a possibly controversial view that I don't think all of the U.S.'s members make up its consciousness. I think it filters out a lot of them. It has its own self-organization and learning principles that are quite a bit different from the brain's, aside from basic concepts of sensory processing, reflection, conflict, and consensus.
One of CMU's AI gurus (I forget his name), back in the '60s, brilliantly described the principle that the "incredibly complex" observable behavior of an ant is due not to its "intelligence" but to obstacles in its environment. This is the clue.
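That ant principle is easy to simulate. A toy sketch (the grid, rule, and obstacle layout are invented for illustration): the agent's only rule is "step toward the goal; if blocked, sidestep without retracing your last step," yet the path's shape mirrors the terrain, not the rule.

```python
def ant_path(start, goal, obstacles):
    """Trivial rule: step toward the goal; when blocked, sidestep.
    Any apparent complexity in the path comes from the obstacles."""
    path = [start]
    prev = None
    x, y = start
    while (x, y) != goal and len(path) < 200:
        dx = (goal[0] > x) - (goal[0] < x)
        dy = (goal[1] > y) - (goal[1] < y)
        # Preferred moves, in order; never step back onto the previous cell.
        for nxt in [(x + dx, y + dy), (x + dx, y), (x, y + dy),
                    (x, y + 1), (x, y - 1)]:
            if nxt != (x, y) and nxt != prev and nxt not in obstacles:
                prev, (x, y) = (x, y), nxt
                break
        path.append((x, y))
    return path

wall = {(2, -1), (2, 0), (2, 1)}  # a small barrier forces a detour
print(len(ant_path((0, 0), (5, 0), set())))  # -> 6 (straight line)
print(len(ant_path((0, 0), (5, 0), wall)))   # -> 8 (winding detour)
```

Same agent, same rule; only the environment changed - exactly the point being made about the ant.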
There are "meditation" techniques to observe "discrete" (non-continuous) nature of what we call consciousness.
Just a comment that this is called an "emergent phenomenon" in some circles, and an "illusion" in Buddhist thought.
How is this any different? If you relax the definition of "conscious", then why isn't the United States conscious?
The thing is, externally, one can talk about a nation in the same ways we talk about a person, a group, or even a couple. You can consider it responding to input and even being "self-aware" as an entity, even deliberating. A materialist doesn't believe in the immaterial; s/he can only judge an entity by observing its reaction to stimuli. So as far as s/he can judge, the materialist can observe the US from the outside and apply the duck rule to conclude it's conscious.
However, I'm not talking about qualia. Qualia, a materialist would say, are an illusion. But, again, going by the author's definition, I'm not sure we can say whether rabbits or flies experience qualia, and qualia are a hard thing to reason about from outside the head. For me, the question of qualia is a much more interesting argument against materialism.
Disclaimer: I'm not sure I'm a materialist, but I'm a scientist by trade, so I have to work in that mindset. Still, I never felt comfortable with it.
Nobody says qualia are an illusion, in the sense of not really existing. People just say, "Qualia are not metaphysically spooky and immaterial; they're things your brain makes."
Also: A materialist view of consciousness does not need to examine from the "outside". After we have figured out what consciousness is, then it's a matter of examining whatever bits of matter there are and seeing if they have the right properties. Of course that requires some serious breakthroughs, ones some folks are convinced aren't possible.
But consider this: you are aware of yourself, but how do you know that you are the only thing occupying your brain? I mean, what if your brain spawns 5 different conscious processes, each of them completely separate and independent from the others? And what you feel as yourself is just one of these 5 processes.
How could you ever know if that's true or not?
Well, if I never see myself saying or doing something that MY process hasn't decided itself, it's as if those extra processes don't exist for all practical purposes.
Not to mention there's a lot of "subconscious" stuff that happens inside one's mind.
I've read that too. But philosophically I think people draw the wrong conclusions from this -- like we're puppets and the brain is some autonomous third agent.
My brain and my consciousness are part of the same thing, so making the decision (in the background) and having it come to awareness a little later is still the same entity "thinking".
I don't have to think "out aloud" in my brain (consciously) for myself to make a decision: my brain can make it drawing directly and subconsciously from the same memories, sensory inputs, biases etc that I have available when I think.
You could skip most of this paper and ponder the deeper question posed by Sam Harris: how do we know anything is conscious? What if stars are conscious? Could we ever tell? Fundamentally we don't know the mechanisms that give rise to consciousness, so in theory anything could be conscious with a complex enough physical system. A country could be conscious and we'd likely never know. Fun to think about!
Luckily, there are some people who honestly try to find out. Dan Dennett to name one.
Also this talk about consciousness by Susan Blackmore is both funny and enlightening: https://www.youtube.com/watch?v=sdMA8RVu1sk
Other possible but similar measures could be the number of components (much larger in the brain than in the US or an ant colony), or the average number of connections each element has. In any case the result may be the same: a quantitative difference may lead to a qualitatively different result.
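For a rough sense of that quantitative gap (all figures are order-of-magnitude public estimates, not exact counts):

```python
# Order-of-magnitude comparison: brain components vs. US "components".
brain_neurons = 8.6e10  # ~86 billion neurons (common estimate)
synapses_each = 7e3     # ~thousands of synapses per neuron (rough figure)
us_population = 3.3e8   # ~330 million people

print(f"components, brain vs. US: {brain_neurons / us_population:.0f}x")
print(f"total brain connections: ~{brain_neurons * synapses_each:.1e}")
```

On component count alone the brain outscales the country by a couple of orders of magnitude, and by far more once connections per element are factored in.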
As we develop AI, I see it naturally augmenting the Corporation's ability to make decisions, eventually supplanting humans in the high-order strategic planning.
Once that happens (and it will!).... hope for the best?