I consider the Fermi paradox nothing but a fun game.
There are so many unknowns that we can literally say nothing about it. Nothing. We have no idea. We have a sample size of one and virtually nothing to even calibrate any of the scales for any of the parameters in any model. For all we know abiogenesis (life from non-living matter) is phenomenally rare to the point that there is an average of less than one example per galaxy per billion years. Or it could be common but complex life could be rare. Or, or, or, or... the or's become infinite because we have no information to constrain them. It's even worse than an endless theology argument since at least there you're arguing within a set of theoretical philosophical boundaries. This is literally unbounded.
We have only barely started to explore a tiny fraction of the vastness of the universe. It's far too early to make statements like these.
I do think exploring it is a fun way to enumerate all the possibilities for what we might or might not find out there, but it can at times get problematic. You have "longtermists" out there who actually cite the Fermi paradox in political, social, or ideological arguments, arguing for real world policies on the basis of an unconstrained untestable hypothesis.
> There are so many unknowns that we can literally say nothing about it. Nothing. We have no idea. We have a sample size of one and virtually nothing to even calibrate any of the scales for any of the parameters in any model.
You can't even prove to me that you exist.
Yet we all seem to be able to agree on a lot of things. Re Fermi, we're pretty sure that gravity, physics, chemistry are the same throughout the universe. Biological forms did evolve on our planet, and our planet doesn't seem super-unusual. Given those things, we're pretty sure that there are planets with biological forms. Given that, I think we can say that there is a significant chance that other intelligent biological forms are out there somewhere. And then we get the Paradox.
Right, ok fair. The handling of intermediate distributions when combining factors is straightforward math but I have found it almost impossible to have a coherent conversation about that with many people, especially tech leaders. They want to talk about outcomes so they collapse all the intermediates into if/else decision trees and then get unhappy when the final probability doesn’t match their intuition.
And these are smart people… I just think most people generally struggle with conditional probability.
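To make the failure mode concrete, here's a toy sketch (the factors and their distributions are entirely invented): for independent factors, multiplying "best guess" point estimates does give the right mean, but it hides that the product distribution is skewed, so the typical outcome sits well below the number intuition expects.

```python
import random

# Toy illustration of combining uncertain factors (all numbers invented):
# each factor is Uniform(0.1, 1.0), and the outcome is their product.
random.seed(0)
N = 100_000

mean_factor = 0.55                 # mean of Uniform(0.1, 1.0)
point_estimate = mean_factor ** 3  # "collapse each factor, then multiply"

# Keep the full distributions instead and look at the spread.
samples = [
    random.uniform(0.1, 1.0) * random.uniform(0.1, 1.0) * random.uniform(0.1, 1.0)
    for _ in range(N)
]
mc_mean = sum(samples) / N
frac_below = sum(s < point_estimate for s in samples) / N

# For independent factors the means agree, but the product distribution is
# right-skewed: the typical (median) outcome is well below the point estimate.
print(f"point estimate: {point_estimate:.3f}")
print(f"Monte Carlo mean: {mc_mean:.3f}")
print(f"fraction of outcomes below the point estimate: {frac_below:.2f}")
```

The mismatch between the mean and the typical outcome is exactly where if/else collapsing leads intuition astray.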
This reminds me of a short scifi story I recently read, "Touring with the Alien" by Carolyn Ives Gilman. I hope it's not too much of a spoiler to say it explores the idea of a visitation of Earth by aliens who are tremendously advanced but unconscious. Personally I can't quite come to terms with such a thing, but it's a neat story. Freely available from Clarke's World: https://clarkesworldmagazine.com/gilman_04_16/
I haven't read that yet, but sounds like a similar theme as "Blindsight" and "Echopraxia" by Peter Watts. An excellent hard sci-fi duology by a Ph.D. biologist!
Thanks! I actually came across "Touring with the Alien" in a Goodreads group (Evolution of Sci Fi) which has monthly long and short reads. Watts came up in that month's discussion, with Blindsight mentioned but also "The Island". I hope to check him out some time soon.
AGI would look like that, given enough time. It would be somewhat plausible for an AGI civilization to go off on their own, becoming completely separate from humanity. They'd have a much better chance at becoming a spacefaring civ.
>Conversely, the main tenet of illusionism is to deny the existence of phenomenally conscious states. For an illusionist there is “nothing it is like” to be in any mental state. Proponents of the theory only accept “quasi-phenomenal” states and properties, which are purely physical, but misrepresented as phenomenal by our introspective mechanisms.
>While it is no longer a question of understanding the magic itself, we still have to explain how it happens. The “hard problem” has been replaced by the “illusion problem”. It consists in explaining why it seems to us that we are phenomenally conscious
Lol. "why it seems to us that we are phenomenally conscious". Maybe because we are phenomenally conscious? The very subjective experience of pondering whether we are phenomenally conscious, or for that matter the subjective experience of anything else whatsoever, is already proof sufficient that we are phenomenally conscious because it is an example of phenomenal consciousness. Granted, maybe there are some good defenses of illusionism out there, but based on this text it seems to me that illusionism is just a sort of hail mary pass attempt to save reductive physicalism by deploying a bunch of hand-waving and deceptive semantics.
There are no other phenomena which we accept as real - where "accept as real" means an ability to correctly predict and manipulate, i.e., create technology - based purely on qualia. This is the essence of physicalism. The hard problem of consciousness is therefore broadly acknowledged.
“Reductive physicalism” doesn’t need to be saved; there’s nothing else; physicalism is the fundamental philosophy of science, the prerequisite for a functional, predictive physics.
"I don't like the word materialism because we don't know what the material is."
It's all a quandary, no? Wonderfully useful predictive models aside, we have almost as little metaphysical/ontological purchase as we have w.r.t. qualia.
And though it is difficult to argue with physicalism (broadly conceived), there are compelling critiques of reductive physicalism.
Or as the Wikipedia article on Eliminative Materialism succinctly puts it:
>Another view is that eliminativism assumes the existence of the beliefs and other entities it seeks to "eliminate" and is thus self-refuting.
That said, Illusionism should not be seen as a "hail mary pass attempt to save reductive physicalism". From the same Wiki page:
>In the context of materialist understandings of psychology, eliminativism is the opposite of reductive materialism, [which argues] that mental states as conventionally understood do exist, and directly correspond to the physical state of the nervous system.
The last line is a link to the page for "neural correlates of consciousness". The evidence for which, to me, sure suggests that there is a causal relationship between physical and mental events.
> The evidence for which, to me, sure suggests that there is a causal relationship between physical and mental events.
I think it's a bit funny to call this a causal relationship. People do it all the time and what is meant depends on the meaning of "cause", but it is a bit ambiguous here if the physical events are other events that cause the mental events.
My mental model of reductive physicalism is that the physical events are the mental events, i.e. the mental event occurs if and only if the physical event occurs.
For me, it's the other way around. I don't feel like there's anything special about "what it's like" to experience a certain mental state that cannot be explained physically/algorithmically. It feels like the concept of qualia is a last-ditch attempt by humans to make themselves feel special.
The hard problem of consciousness is hard for a reason. There's no obvious way to link the experience all of us (solipsism aside) are having at each and every moment with the mechanism that generates it. It's a category error to confuse the two, misidentifying the map and the territory. Stating, as above, that qualia can be reduced to an illusion of qualia merely restates the problem in an effort to elide it. 'I'm imagining that I have a feeling' isn't qualitatively different to 'I'm feeling the feeling'. Given that it's fairly trivial to link affect and self-awareness to concrete adaptive advantages, and that each can be diminished through neurological damage - we know that phenomenology is both useful and organic. What we don't understand is precisely how it comes into being and 'where' it resides - besides using words like 'emergent' in nonspecific and hand-wavy ways.
> There's no obvious way to link the experience with the mechanism that generates it
I agree that it's not obvious. I suppose where I disagree with anti-physicalists is that I think it can conceivably be resolved by reference to physical facts. I can imagine reading a computer program that simulates the human brain and thinking "ah yes, that captures everything about what I feel". Whereas I don't think the anti-physicalists will ever accept such a possibility.
I am guessing that you think sperm and eggs have no consciousness. But, humans do.
When do you imagine it started? Does your mental model explain this physically/algorithmically? (Honest question, I consider this often and really have no clue)
Asking if a single-celled organism is conscious kind of feels like asking whether quantum theory is statically typed. Like applying a descriptor from one category to objects in another.
Are you referring to pansychism? I don't get the impression that it's very popular these days.
> There are few explicit defenders of panpsychism at the present time. The most prominent are David Griffin, Gregg Rosenberg, David Skrbina and Timothy Sprigge.
My hunch is that consciousness will come to be described in terms of information processing systems, rather than the biological substrate on which they run. Systems with certain properties - possibly including a sufficiently complex model of the environment, model of the self, and a continuously updating internal narrative - may be said to possess a level of consciousness. I'm not a biologist and I appreciate that the above is somewhat vague, but I wouldn't say that sperm or egg cells have the above.
Nowadays it seems rather easy to have a computer that talks as if it were conscious. Do you think it really does perceive itself the same way we do? I woke up a human today, but someone else woke up an algorithm?
According to the theory, an attention model. Here's artificial consciousness in three steps:
1. Have a robot build perception models of its environment and itself
2. Have the robot allocate computational resources and sensory bandwidth to the models using attention
3. Have the robot control attention using model predictive control
Because the attention model is less detailed than its actual attention, by virtue of being a model, it doesn't represent the mechanisms of attention or modeling accurately. Instead, it uses non-physical concepts such as "mental possession" to model itself or other agents paying attention to things, or "qualia" to denote the recursion that occurs when percepts we attend to are summarized by the attention model (which in turn can be attended to, and so on).
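Here is a toy sketch of those three steps, with every name and dynamic invented for illustration (this is not the theory's actual model, just one way to make the loop concrete): a "robot" smooths noisy sensor channels, gives extra update bandwidth to the attended channel, picks the next focus with a one-step lookahead, and keeps a deliberately lossy model of its own attention (the schema).

```python
import random

random.seed(1)
N_CHANNELS = 3

def sense():
    # Step 1 stand-in: noisy percepts, one reading per sensory channel.
    return [random.gauss(0.0, 1.0) for _ in range(N_CHANNELS)]

class Robot:
    def __init__(self):
        self.world_model = [0.0] * N_CHANNELS  # smoothed estimate per channel
        self.attention = 0                     # channel currently attended
        self.attention_schema = 0              # the robot's coarse model of that

    def step(self, percepts):
        # Step 2: the attended channel gets high update bandwidth,
        # the unattended ones get low bandwidth.
        for i, p in enumerate(percepts):
            rate = 0.9 if i == self.attention else 0.1
            self.world_model[i] += rate * (p - self.world_model[i])

        # Step 3: a degenerate one-step "model predictive control" --
        # shift attention to the channel the model currently explains worst.
        surprise = [abs(p - m) for p, m in zip(percepts, self.world_model)]
        self.attention = max(range(N_CHANNELS), key=surprise.__getitem__)

        # The schema only sometimes catches up with actual attention: it is
        # a less detailed model of attention, as described above.
        if random.random() < 0.5:
            self.attention_schema = self.attention

robot = Robot()
for _ in range(50):
    robot.step(sense())
print("attending to channel:", robot.attention)
print("schema believes:", robot.attention_schema)
```

The point of the lossy schema update is that the robot's self-report about its attention can disagree with its actual attention, which is the gap the theory leans on.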
I don't think there's anything in principle preventing computers from becoming conscious, if that's what you're asking. I'm not convinced LLMs are there yet, although sometimes they do sound like it.
> It feels like the concept of qualia is a last-ditch attempt by humans to make themselves feel special.
I don't think so, I think fundamentally it's just a concept used to express a more generalized form of perception (i.e. not divided into discrete senses, either internal or external). I'm happy to leave it at that, but YMMV.
The problem is the idea that no amount of physical information can capture or encode qualia. I think this puts qualia into a different category than other kinds of perception.
The usefulness of AST is not limited to biological systems. A hypothetical conscious electronic system might also need AST to: (1) stabilize internal states more quickly during learning, (2) respond more quickly to input stimuli, (3) use internal predictive flows to 'simulate' inputs in order to learn from a reduced number of sensory-linked events.
It may not magically appear. It may, by design, be woven into the learning process.
Isn't SETI those guys listening with radio telescopes for signs of life? Did they not get the memo that, by definition, every solar system contains an astronomically large, ultra-powerful wide-band radio noise generator, you know, the sun? Any radio signal leaving our solar system, or entering one at the other end, say wherever E.T. calls home, is essentially scrambled by that noise. It strikes me as a little problematic to overpower, or filter out, by far the largest and most powerful radio noise generators around: our sun is so much bigger and more powerful than anything we could begin to compete with, so listening for radio signals that bear the hallmarks of life seems guaranteed to fail with that method.
One question one could ask is whether these huge radio signal generators called suns are themselves some kind of intelligence. While this was a popular concept in the past, it would blow people's minds today. Fortunately for the average mind, there is currently no indication of this being the case, so either SETI succeeded in ruling it out or SETI failed in finding the intelligence signal.
Here's how I would summarize this (interesting) paper (please correct as appropriate).
1. Assume that consciousness is an illusion of the brain (Illusionism). That is, when we see the color red, the brain creates qualia (the feeling of seeing), but it's just an illusion.[a]
2. Why is the brain like that? The Attention Schema Theory (AST) suggests that this helps animals to focus on important inputs. The brain turns certain inputs into qualia (sensations), while ignoring others, to help us focus. Consciousness is kind of like a blackboard on which certain inputs are placed, which helps higher planning functions make decisions.
3. Once we have this sense of self, we can infer that other beings also have a sense of self and are able to form social bonds. Moreover, we evolve a morality based on having a sense of self. We believe inflicting pain is morally bad because we don't like feeling pain and can empathize.[b]
4. If Illusionism and AST are true, then it is likely that alien intelligence also evolved in the same way (because of convergent evolution). Therefore, alien intelligence might also have consciousness and might have also developed a moral framework based on consciousness.
5. But what if Artificial General Intelligence is possible? If AGI is possible, then it is likely that alien species quickly transition to a post-biological state. Even if it takes us another 1,000 years to develop AGI, that's nothing in the lifetime of a species. Any alien intelligence we encounter is likely to be millions of years ahead of us (evolutionarily) and therefore has already developed AGI.[c]
6. Moreover, just as AST is useful in us to efficiently use brain resources, it should be useful in AGI. Thus, AGI might be programmed with (or eventually evolve) consciousness.
7. But such consciousness might be very different from biologically created consciousness and that would lead to misalignment. For example, an AGI might be nihilistic and not think consciousness is special. From their perspective, killing a human would be no different than disassembling a laptop.
---------------
[a]: This is in contrast to theories that say that qualia/consciousness is some unique (and unknown) physical process. For example, Roger Penrose thinks qualia arise from some kind of quantum mechanical effect (I personally think that's bordering on pseudoscience, but what do I know).
[b]: Consciousness is intertwined with free will here. You can't have free will without consciousness (cars don't have free will) and if you have consciousness, you have free will (at least as far as humans are concerned). And free will is the basis of all morality. You can't hold someone responsible unless they have free will. And thus, you can't hold someone responsible unless they have consciousness.
[c]: Alien civilizations can't be millions of years behind us because then they wouldn't have technology. And they can't be equally advanced because in a universe that is nearly 14 billion years old, having two civilizations the same age would be an awful coincidence.
Sure you can. Being consciously aware of having made a decision does not mean that consciousness was a prerequisite for having made the decision. And then there are all the decisions that we make without being consciously aware of them at all.
We're just arguing definitions at this point, and I grant that my definition of free will requires consciousness. You're right to ask whether that is the only possible definition of free will.
For my purposes, though, I use free will in the context of morality and culpability. And you definitely can't have moral culpability without free will, which is why neither animals nor the insane can be culpable of crimes.
And you can't have moral culpability without consciousness. If I kill someone while sleepwalking, I would likely be found not guilty.
Agreed! But if aliens visited Earth in the past, how far back could they visit and still conclude that we're intelligent?
2000 years? Absolutely, that's Roman Empire times and we would absolutely count as an intelligent civilization.
20,000 years? Maybe. This is before agriculture, but we'd still be animals making tools and even boats.
200,000 years? I'm not sure we would be distinguishable from modern apes.
So the window is around 200,000 years, which is about 0.004% of the age of the Earth, and 0.04% of the time Earth has had multicellular life. If we visit a planet that has multicellular life, what's the chance that we will visit exactly in the 200,000 year window when they have technology similar to ours?
Most likely we'll either visit too early (before civilization and even intelligence) or millions of years after the first civilization.
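The window arithmetic above checks out with rough round numbers (roughly 4.5 billion years for Earth's age, and roughly 500 million years since complex multicellular life):

```python
window = 200_000        # years during which visitors would see technology
earth_age = 4.5e9       # years, roughly
multicellular = 500e6   # years since complex multicellular life, roughly

print(f"{window / earth_age:.4%} of Earth's age")                # 0.0044%
print(f"{window / multicellular:.2%} of the multicellular era")  # 0.04%
```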
> Illusionism is an eliminativist position about qualia stating that phenomenal consciousness is nothing more than an introspective illusion. The attention schema theory (AST) relates this philosophical stance to a large body of experimental data and states that phenomenal consciousness arises from an internal model of attention control.
A sufficiently advanced jargon is indistinguishable from magic.
There are some specialized terms here that you're unlikely to encounter outside of an academic philosophy paper, but there's nothing complex about the meanings of any of the individual terms. Once you know what the words mean, it all makes sense.
>eliminativist
Eliminativist claims in philosophy are claims that deny the existence of some class of entities. You can be eliminativist about all sorts of things - numbers, objective morals, countries, tables and chairs, etc.
>qualia
First-person conscious experiences. Pain is a qualia. The way the color blue looks, as opposed to say the color red or green, is a qualia. The sensation of hot or cold is a qualia.
When someone stubs their toe and says "ow", you can infer that they're in pain based on their behavior and your knowledge of how pain works, but you can't actually feel or directly observe their pain. That's the "first-person" part.
>phenomenal consciousness
A synonym for "qualia", because some philosophers started to feel like the word "qualia" had too much historical baggage, so they needed to come up with a new term.
>introspective illusion
Exactly what it says on the tin. An illusion (meaning, an impression that something is real, when it is in fact not) generated by introspection.
So, putting it all together:
>illusionism
Illusionism about consciousness is the thesis that phenomenal consciousness is not real. So, to give a specific example, an illusionist would be committed to the thesis that pain is not real. As a corollary, no one has ever felt pain before, because there is no such thing as pain. People have been under the illusion that they feel pain, but they actually don't.
> First-person conscious experiences. Pain is a qualia. The way the color blue looks, as opposed to say the color red or green, is a qualia. The sensation of hot or cold is a qualia.
When someone stubs their toe and says "ow", you can infer that they're in pain based on their behavior and your knowledge of how pain works, but you can't actually feel or directly observe their pain. That's the "first-person" part.
So cool! I’ve always felt there was something really interesting about the idea that someone might internalize the color blue as I see the color red. I know we can define the colors mathematically, but I never knew the term for that subjective interpretive difference—qualia.
And if two people agree a wall is blocking their path does that elevate the sensation of wall from a quale into a reality?
I know that some members of this community (the “we live in a simulation”-ists) would posit that one person sensing the presence of another is as fabricated as the color “red”!
The modus ponens of one side is the modus tollens of the other side.
Meaning that when one side in philosophy says: from A (their body of arguments) follows B, and A holds, thus B must hold. From A → B and A, inferring B is called modus ponens.
Then the other side will say: from A follows B, and B clearly does not hold, thus A does not (or cannot) hold. From A → B and ¬B, inferring ¬A is called modus tollens.
Just wanted to add this here because that's how in my experience the discussion of such topics close to one's self tend to unfold.
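A brute-force check of the two rules above (trivial, but it makes the asymmetry concrete: both sides accept A → B and disagree only about which other premise to keep):

```python
def implies(a, b):
    # material implication: A -> B is false only when A holds and B doesn't
    return (not a) or b

for A in (False, True):
    for B in (False, True):
        if implies(A, B) and A:        # modus ponens premises...
            assert B                   # ...entail B
        if implies(A, B) and not B:    # modus tollens premises...
            assert not A               # ...entail not-A

print("both rules are valid")
```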
These are all very basic concepts from philosophy of mind; this is like complaining about words such as "linked list" being used when talking about CS topics.
Translated, the quote says:
"Illusionism is the position that nothing is happening in the brain except physical processes as known to modern-day physics (or a future physics very similar to it). On this view, the other things the mind is claimed to do, such as vivid experiences of colours or pain (qualia), are in fact nothing more than an illusion that arises when the brain queries its own systems. The attention schema theory (AST) relates this philosophical stance to a large body of experimental data and states that conscious experience (qualia again: colours, pain, sound, etc.) arises from a model the brain generates of how it controls what it attends to."
Apparently there's an inverse correlation between the amount of jargon in a paper and the prestige of the author's university. I.e., people at less prestigious universities tend to write in more complicated language. This paper seems to be in line with that finding.
Have you ever actually written a research paper? Some things are impossible to flesh out without jargon.
E.g., a musicologist once wrote a paper disproving a previous paper that claimed to show similarities between the proportions of sections in a musical piece and the proportions of architectural sections of a basilica.
How the hell are you going to discuss those measurements and proportions without using the jargon from both Renaissance music and architecture?
Furthermore, what does the necessity of that jargon have to do with the "prestige" of the university of either scholar?
Edit: took the temperature down by changing the word "fuck" to "hell" :)
Excessive use of opaque jargon is typical of wannabes trying to climb a ladder of authority and credibility. But jargon is also a useful tool for competent people within a discipline. Very hard to tell the difference between the two from outside.
I read the comment as linking non-prestigious universities with the first case.
I personally do not have the same faith in prestigious institutions that OP has.
> Have you ever actually written a research paper? Some things are impossible to flesh out without jargon.
I have, yes. I have gotten some compliments from reviewers for clarity as well, if that's worth anything.
As pointed out already by others, some jargon is necessary, and using difficult language can be seen as a way to prop up credibility.
As a personal anecdote, I've known someone (from an unknown institute) who deliberately didn't want to make his results sound as simple and accessible as they could have been. However, most papers I've read by top people are written with clarity and without unnecessary jargon.
I'm not a philosophy major, so the jargon is tough, but I can kind of read it. It's not nonsense.
What helps me make sense of it is that I recognise the first sentence to be similar to my own position on consciousness:
Consciousness is those parts of the workings of the brain that are available to introspection.
I dislike the use of the word "illusion", though. Just because consciousness doesn't have a separate material presence, doesn't mean it's not real.
The article seems to boil down to this: If you take this view on consciousness, then other kinds of intelligences, such as a post-biological AI, will also be conscious, because why not. So whatever ETI is out there is also conscious.
And then, weirdly, the author decides that this post-biological AI is going to have some other kind of ethics, because reasons, and that means something for SETI and the Fermi paradox. That part doesn't seem very convincing. I wonder what it means to "surpass someone in moral philosophy". I'm not sure it means anything.
Maybe it ties back to the word "illusion". If you believe that consciousness is an illusion, then it follows that someone could be capable of looking past the illusion and seeing only the reality behind it, and that would have consequences for their moral posture. Since I don't believe the word "illusion" is used properly, I don't buy this conclusion either.
I don't think consciousness is an illusion, but our way of experiencing it is. It appears unified because of a consistent semantic space across experiences and the serial action bottleneck. But the brain is massively parallel.
I think this illusion of unity in consciousness led to so many people postulating some kind of essence. The way we apprehend ourselves is seriously biased, we have a blind spot.
Every field has jargon and specialized terminology. Do you expect to be able to read any random physics or math paper outside your area of expertise and understand every word?
This has nothing to do with academia. Just grab a comment about anything tech-related in your own personal comment history and send it to a friend with zero tech background. They will be equally confused.
Jargon exists to simplify communication between people with specialized knowledge in a field. People without that specialized knowledge aren’t going to be able to follow as easily. News at 11.
If you think that, then you’re impugning the entire field of philosophy (and likely psychology, too). CS/EE/Math are not the only fields with “hard” papers in them.