As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data history we are observing. If the predictor/compressor is a biological or artificial recurrent neural network (RNN), it will automatically create feature hierarchies: lower-level neurons corresponding to simple feature detectors similar to those found in human brains, higher-layer neurons typically corresponding to more abstract features, but fine-grained where necessary.
Like any good compressor, the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole (we see this in our artificial RNNs all the time). Self-symbols may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history through predictive coding, it will profit from creating some sort of internal prototype symbol or code (e.g. a neural activity pattern) representing itself.
Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process - it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.
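To make the compression claim concrete, here is a toy sketch (my illustration, not anything from the text above): a predictor that captures the regularities in the history leaves only small residuals to store, and an RNN simply replaces the trivial last-value predictor below with a learned one.

```python
# Toy predictive coding: store only the prediction errors ("surprises").
# The better the predictor models the history, the cheaper the encoding.
def predict(history):
    # stand-in predictor: guess that the next value repeats the last one
    return history[-1] if history else 0

def encode(data):
    history, residuals = [], []
    for x in data:
        residuals.append(x - predict(history))  # store only the surprise
        history.append(x)
    return residuals

print(encode([5, 5, 5, 5, 7, 7, 7]))  # [5, 0, 0, 0, 2, 0, 0] - mostly zeros
```

A recurring regularity - including the agent itself - earns a compact code precisely because it keeps showing up in the history.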
In fact, the second and third sentences on the Wikipedia page for self-awareness are:
> It [self-awareness] is not to be confused with consciousness in the sense of qualia. While consciousness is being aware of one's environment and body and lifestyle, self-awareness is the recognition of that awareness.
"Today, with modern research into the brain it often includes any kind of experience, cognition, feeling or perception. It may be 'awareness', or 'awareness of awareness', or self-awareness." Etc.
I'm more of a fan of this view than of the specifically philosophical view of consciousness. The latter sort axiomatizes and pares down an intuition that is itself so vague that the axiomatized version winds up not having any definite qualities or characteristics, just "I know I experience it".
I don't see it as a real problem. If a program or machine asserts that it perceives qualia, who are we to argue it's wrong? We are in no better a position to prove the qualia of our own subjective conscious experience, beyond our physical resemblance to our interlocutor.
Maybe qualia are what it's like for matter to be in a feedback loop of physical interaction. Panpsychism is pretty much my position.
Right now, we are in no position to argue it's wrong, because we have no accepted framework for understanding qualia.
That said, we can still make some simple assertions with confidence such as:
1: A one-line program which prints "I am afraid" (spelled out below) does not experience fear qualia.
2: A human being who utters the words "I am afraid" may be experiencing fear qualia (and thus perhaps experiencing pain/suffering).
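For what it's worth, assertion #1's program really is one line:

```python
print("I am afraid")  # emits the words; there is nothing here to feel them
```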
If you assert that the world is full of philosophical zombies rather than humans like yourself, you may not agree with assertion #2. But we build our ethical and legal frameworks around the idea that we aren't philosophical zombies, rather, that we have feelings and want to avoid suffering.
Once you start talking about more complicated programs, though, we don't know what the right assertions are. We don't have an understanding of what things can suffer and what things cannot, for starters. We generally agree that, say, dogs can suffer. That's why we have laws against animal abuse. But what about programs?
It is absolutely a real problem, because some day we will have to build laws and culture around how we treat AI. Whether or not an AI can suffer (i.e. by experiencing qualia that can be considered undesirable) will likely affect how we treat that AI. If we have no way to answer "can this program suffer?", then we are setting ourselves up for regret.
I think there is a step or two missing in between. Abusing dogs isn't bad simply because it means abusing something that suffers. It's also, e.g., doing something that has no value to society (unlike animals being mistreated in a feedlot), or something that sows unacceptable levels of aggression. It feels much more complicated than you make it seem.
Hm? There is no objective test for whether I am a philosophical zombie or not.
Consciousness is subjective, so it's simply incoherent to ask whether something is conscious or a zombie.
I imagine that once someone manages to produce an AI with enough agency to seem sentient, it will far exceed human capabilities.
We have deep learning systems that perceive and strategise. Who’s to say that AlphaGo doesn’t have some experience of playing out millions of games?
Or talk to an ant.
There's no reason to expect to be able to understand something's ideas just because it's smarter than you.
Data, or whatever AI/android, doesn't share our biology. And since we can't tell whether it's the functioning of our biology or the biology itself which provides us with consciousness, we lack any reasonable justification for assuming Data is conscious. And then he goes further into functional isomorphs at different levels and how that might or might not matter for Data, but we can't tell.
That's the way it goes with these arguments: it always comes down to an appeal to what's reasonable or plausible (those who insist consciousness must be an algorithm are doing the same thing, at least until we have algorithms that seem to be making some progress on the hard problem.) One might equally say other humans share a similar physics. When Dennett said "we are all p-zombies", I suspect he was making the point that it would be highly tendentious to take the position that anything both behaving and functioning physically in the manner of a human would not be conscious, except for humans themselves.
How about mice? Lizards? Fish? Insects? Nematodes? At what point do qualia stop occurring? I see a more or less continuous decline in similarity. I don't see any justification for a cut-off.
I don't think Data is conscious. I suspect consciousness might be a side-effect of physical computation. I don't believe there's any magic to the human brain's inputs and outputs which cannot be computed, and thus display the appearance of consciousness. In fact I see the Turing test as a thought experiment more than any actual test, a way of removing the appearance of a human body from the task of evaluating an artificial intelligence as a thinking, conscious being. If it quacks like a duck, a rose by any other name, etc.
In fact I'm not entirely convinced that the Western conception of consciousness isn't a learned cultural artifact, that there aren't people who have a more expansive notion of self which includes close family, for example. Have you ever suffered watching a relative hurt themselves? Ever had false memories of injury from childhood which you found out later happened to someone else?
I mean, "philosophical zombie" construct would seem to experience "Colors, tastes, feelings" and whatever concrete, they just would lack the ineffable quality of (philosophical, formal) consciousness - you could brain pattern akin to color sensation for example. This final view is kind of meaningless and I think people are attracted to the view by misunderstanding, by thinking something concrete is involved here.
The issue is providing a scientific explanation for how the physical world produces, gives rise to, or is identical to consciousness. Arguments like p-zombies and Mary the color scientist are meant to show we don't know how to come up with such an explanation, because our scientific understanding is objective and abstract, while our experiences are subjective and concrete.
I prefer Nagel's approach to the hard problem, as he goes to the heart of the matter, which is the difference between objective explanation and subjective experience. Science is the view from nowhere (abstract, mathematical, functional), while we experience the world from somewhere, as embodied beings, not equations.
In your example, the abstractions of qualia (thoughts, imaginings, dreams) - how are they not labels of experience? That is what ML does, after all.
Today ML may, for all intents and purposes, have roughly the intelligence of a bug, but it has to have awareness to label data. Where does consciousness come in, beyond the labeling and awareness, that ML does not have today?
We label experiences with different words because we have different kinds of experiences. But it's the experiences and not the labels that matter, and the labeling of those experiences does not cause the experience. We say we have dreams because we do have dreams in the first place.
You're proposing that labelling data is the same thing as having an experience, but you haven't explained how data is experience, or why the labelling matters for that.
More seriously though, if we agree that the brain is described by physics, then all it can do, a Turing machine can do as well; so at the root, all experiences have to be data and computation.
But that gets into metaphysics, which is notoriously tricky.
If you argue that a brain cannot be described by computation and there is something supernatural like a soul, that's a fine theory, and the only way to disprove it is to create a working general AI.
Why are those the only two options? There's lots of different philosophical positions, particularly when it comes to consciousness.
The suggestion by Penrose that consciousness is connected to quantum state, and so cannot be copied or computed on any classical Turing machine fitting in the universe, is an interesting way to circumvent the strange predictions of functionalism, but is not very plausible.
And Jaron Lanier, from your other comment, explicitly suggests not using approaches like that, because it would weaken the vitalist argument when proven wrong.
Here's Jaron Lanier on the issue of consciousness and computation: http://www.jaronlanier.com/zombie.html
His meteor shower intuition pump is similar to the argument that nations should be conscious if functionalism is true, the reason being that any complex physical system could potentially instantiate the functionality that produces a conscious state. It's also similar to Greg Egan's "dust theory" of consciousness in Permutation City.
If one is willing to bite the functional bullet on that, then you can have your simulated consciousness, and the universe may be populated with all sorts of weird conscious experiences outside of animals.
Yes, but not in the way we have. An RNN keeps track of what is going on, for example.
To take in information through sense inputs (e.g., eyes) is experience, but in an unconscious way. We get overloaded if the information we take in is not deeply processed before it becomes conscious. A CNN is similar in this manner, but also quite alien.
When we're born, our brains have not formed around our senses yet. Our brain goes through a sort of training stage, taking in raw sensation and processing it. The more we experience of certain kinds of sensations, the more our brain physically changes and adapts. It's theorized that if you made a new kind of sense input and could plug it into a newborn brain correctly, the brain would adapt to interfacing with it.
This strongly implies qualia are real, in that we all experience the world in drastically different ways. It may be hard to imagine, but my present moment is probably very alien compared to yours, as is the case with different animals, as well as simpler things like bacteria and ML.
One of my more vivid memories is excitedly opening my copy of Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind" and being dismayed as it slowly dawned on me that Jaynes's entire book was based on this category error.
For the life of me I can't understand why people don't understand this distinction (or choose to ignore it).
I can only assume that people ignore the very obvious "magical thing" that is happening in their own heads due to ideological commitments to physicalism or something.
Materialism can't explain everything, but there is yet to be any proof that magic explains anything. Lacking that, assuming a physical basis as the explanation is prudent.
> To conclude that it happened randomly seems as naive as to conclude it happened purposefully.
I strongly disagree. You aren't saying that both outcomes are possible (I'd agree with that), but they are equally likely, a much, much stronger claim. The sun might go on shining tomorrow, or it might collapse due to some unforeseen physical process. Both have non-zero probabilities, but it is the person who claims they are equally likely who is the naive one.
As "miracle" is not defined, let's interpret it to mean highly unlikely events; these happen millions of times a day, and we rarely notice. The bill I received in tender has the serial number MF53610716D. What are the odds of that? If you limit "miracle" to highly unlikely events that most people would also notice, it would be strange if they weren't happening all the time. Every person who wins a lottery (and there are thousands every day) has a "miracle"; the person who is told by their doctor their cancer is terminal but then it goes into remission has a "miracle"; the guy who loses his job and is about to be evicted, then gets a job callback for an application he sent in months ago and forgot about, experiences a "miracle."
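A back-of-the-envelope version of this point (the numbers are mine, purely illustrative):

```python
# How many one-in-a-million events should happen somewhere every day?
people = 300_000_000                 # a roughly US-sized population
events_per_person_per_day = 1_000    # distinguishable things that happen to you
p_unlikely = 1e-6                    # our threshold for "miracle"

expected = people * events_per_person_per_day * p_unlikely
print(expected)  # 300000.0 -> hundreds of thousands of "miracles" a day
```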
Now, if I put music on and rub her back a bit... Well, ha! There's the magic, my good sir.
Imagine walking into a film studio with the idea of film production based exclusively on the principles of physics. No. Magic and physics are no more opposed than biology and physics. When hidden (psychological?) forces clearly underlie economically valuable phenomena (architecture, music, media, etc) respecting the magic is closer to the truth than reducing it to physics. Physics just isn't the right explanatory level.
If you'd like to learn more about definitions of magic that aren't "phenomena that don't actually exist", I recommend Penelope Gouk's work on music, science and natural magic in the 17th century. It's about the magic involved at the formation of the Royal Society.
Your answer is rather flip.
Also, if you want to redefine "magic" to be nice things that are physically based but can't be summarized in a neat equation, then I also have no desire to argue. It is clear that the person I was responding to meant "magic" in a different way than you seem to be using it.
When you talk about the magic of forgetting about the laundry, you mean the first kind of magic. When others talk about consciousness being inaccessible to AI, they're talking about the second kind of magic. I don't think it's that hard to imagine an AI that would forget to fold the laundry and not know why. The main problem with a "conscious AI", I suspect, is that nobody will actually want one.
I have no idea if viruses or prions are supposed to have vital force; vital theory emerged before most of modern biology and is mostly invoked today by those who have no understanding of modern biology.
Why do you think that? Is it that you can't accept "fake humanness"? Or maybe ego (feeling superior), or finding it not relatable?
The one thing I wonder is: if they were truly sentient, why would they care to serve you, or even try?
Sure. There's also the magic of feeding my cat, so he stops nagging me, and sits on my lap purring. And just about everyone would agree that's similar, no?
But what about the magic of running "sudo apt update" before "sudo apt install foo"? That's also arguably similar, albeit many orders of magnitude simpler.
On the subject, you are making a good point comparing social behavior with mental activity. Just like there is no “magic” in the way people interact with each other, there is no magic in how neurons do the same.
Except I am arguing that there is most definitely magic in human interaction.
You nailed it. People think that accepting that consciousness is hard is giving in to religion.
It need not be that way.
FWIW, his wikipedia entry says,
> He also presents an argument against qualia; he argues that the concept is so confused that it cannot be put to any use or understood in any non-contradictory way, and therefore does not constitute a valid refutation of physicalism. His strategy mirrors his teacher Ryle's approach of redefining first person phenomena in third person terms, and denying the coherence of the concepts which this approach struggles with.
That's specifically what I don't like. In re the "hard problem", he wants to declare victory without a fight, or a prize.
Anyway, trying to understand consciousness by applying physics is just as silly as trying to understand why your complex fluid dynamics simulation is giving an incorrect result by peering at the CPU's transistors with an electron microscope: it's the wrong level of abstraction.
One thing that seems to be making our task a lot more difficult is that evolution doesn't even try to design the biology of systems with clean abstraction boundaries that we can understand. There are so many strange things going on in our bodies that we can barely describe them, never mind understand them. One example is what happens when a person consumes Ayahuasca.
I think Chalmers helped clear the air a bit here, though I don't think jumping straight to panpsychism is particularly helpful to the discourse, either.
The page lists several types of self-awareness, including "introspective self-awareness". Maybe what we call consciousness is just the internal experience of something with introspective self-awareness. This does raise the question of what "internal experience" is, though. I think "internal experience" is what some people mean by "consciousness", especially when they say things like "maybe everything has consciousness, including rocks or random grids of transistors".
I think there exists some sense of "what it is like to be a rock", but it's an utterly empty experience without memory, perception, personality, emotions, or the feeling of weighing choices; something very close to the "null experience", which is an experience in the same sense that zero, or a number utterly close to zero, is still a number. The internal experience of a random grid of transistors only just barely starts to touch a sliver of what a human's internal experience consists of.
The internal experience of a worm starts to have a few recognizable qualities (proprioception, the drive to maintain personal needs), but it still lacks many large aspects of a human's internal experience. (Maybe a thermostat with sufficiently rich inputs, output control, world model, and juggled priorities has an internal experience similar to a worm's.) Large animals tend to have more elements of awareness (including memory, social awareness, and some introspective awareness) that bring their internal experience closer to having what feels like the necessary elements of human consciousness.
Our brains are just tons of biochemical circuits working very mechanistically, with many interesting feedback loops going on that make up our internal experience. A worm's brain has a small and probably countable number of feedback loops going on that I assume make up its tiny internal experience.
If one were to simulate every drop of rain, gust of wind, and tidal current of a hurricane, to ultimate perfection, would you say the processor is wet?
Consciousness is not an idea, and those who have experienced a little bit of it (via meditation, or some chemicals, or other ways) will laugh at all the attempts philosophers and scientists make to define it. Simply because you can only define something which you can observe from outside, but here there is no outside; everything is inside it. How can you possibly define that? :D
But it's also quite hand-wavey, because RNNs have no real hooks for self-reflection.
You can strap a meta-RNN across a working RNN, and if you're lucky it may start to recognise features in the operation of the working RNN. But are your features really sentient self-symbols, or are they just mechanically derived by-products - like reflection in a computer language, taken up a level?
You need at least one extra level of feedback to do anything interesting with this, and it's really not obvious how that feedback would work.
It's even less obvious if this will give you "genuine" experiential self-awareness, or just a poor and dumbed-down emulation of it.
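For concreteness, a minimal sketch of the "meta-RNN strapped across a working RNN" idea (my own toy construction; sizes and weights are arbitrary). The meta network's input is the working network's hidden state, so it can only ever observe the operation of the first network - which is exactly why it's unclear whether its features amount to more than mechanical reflection:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden size, arbitrary for the sketch

def make_rnn(n_in, n_hid):
    return {"W": rng.normal(0, 0.1, (n_hid, n_in)),   # input weights
            "U": rng.normal(0, 0.1, (n_hid, n_hid))}  # recurrent weights

def step(p, h, x):
    return np.tanh(p["W"] @ x + p["U"] @ h)

worker = make_rnn(n_in=8, n_hid=H)  # the working RNN, driven by the world
meta   = make_rnn(n_in=H, n_hid=H)  # the meta-RNN, driven by the worker

h_w, h_m = np.zeros(H), np.zeros(H)
for _ in range(10):
    x = rng.normal(size=8)      # stand-in sensory input
    h_w = step(worker, h_w, x)  # worker tracks the world
    h_m = step(meta, h_m, h_w)  # meta tracks the worker: reflection, up a level
```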
Why couldn't we apply the concept to ourselves?
Self-awareness is one commonly believed description of consciousness. Going off of that, we become self-aware in three primary ways: 1) We're told about ourselves by our parents at a young age, before the age of one. 2) We observe ourselves in the environment, often influencing the environment, from looking in a mirror to social interaction. 3) We get meta about a series of things (body+mind), and self-awareness arises from that generalized intelligence.
Though, you could argue that #3 may not be enough alone to create self awareness and ultimately #2 is required.
I wonder how it will be described in 1000 years :)
It really depends on what level of abstraction you care to simulate. OpenWorm is working at the physics and cellular level, far below the concept level as in most deep learning research looking to apply neuroscience discoveries, for example. It’s likely easier to get the concepts of a functional nematode model working or a functional model of memory, attention, or consciousness than a full cellular model of these.
More specifically, a thousand cells sounds small in comparison to a thousand layer ResNet with millions of functional units but the mechanics of those cells are significantly more complex than a ReLU unit. Yet the simple ReLU units are functionally very useful and can do much more complex things that we still can’t simulate with spiking neurons.
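To illustrate that complexity gap (a sketch with made-up parameters, not a biophysically tuned model): a ReLU is a single comparison, while even one of the simplest common spiking models already carries state, a leak, a threshold, and a reset.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # the whole "functional unit" is one comparison

def lif(currents, dt=1e-3, tau=0.02, v_thresh=1.0):
    """Leaky integrate-and-fire: among the *simplest* spiking models."""
    v, spikes = 0.0, []
    for i in currents:
        v += dt * (-v / tau + i)  # leaky integration of input current
        if v >= v_thresh:         # fire...
            spikes.append(1)
            v = 0.0               # ...and reset
        else:
            spikes.append(0)
    return spikes

print(relu(np.array([-1.0, 2.0])))  # [0. 2.]
print(sum(lif([100.0] * 100)))      # a handful of spikes over 100 ms
```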
The concepts of receptive fields, cortical columns, local inhibition, winner-take-all, functional modules and how they communicate / are organized may all be relevant and applicable learnings from mapping an organism even if we can’t fully simulate every detail.
Present day ANNs may well be inspired by biological systems but (as you noted) they're not even remotely similar in practice. The reality is that for a biological system the wiring diagram is just the tip of the iceberg - there's lots of other significant chemical things going on under the hood.
I don't mean to detract from the usefulness of present day ML, just to agree with and elaborate on the original point that was raised (ie that "we have a neural wiring diagram" doesn't actually mean that we have a complete schematic).
It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.
The problem is we don't know if we're attempting to solve something as "simple" as flight with a rudimentary understanding of airflow and lift, or if we're attempting to achieve stable planetary orbit without fully understanding gravity and with a rudimentary understanding of chemistry.
I think it's still worth trying stuff because it could be closer to the former, and trying more stuff may help us better understand where it is on that spectrum. And because, if it is closer to the harder end, the stuff we're doing is probably so cheap and easy compared to what needs to be done to get to the end that it's a drop in the bucket compared to the eventual output required, even if it adds nothing.
Minimally, whatever the complexity inside a biological neuron may be, one fundamental property we need to obtain is the connection strengths for the entire connectome, which we don't have. Without that, we actually don't know the full connectome of even the simplest organisms, and to my knowledge no one has actually studied the kinds of algorithms that are running in these systems. I would love to be corrected here, of course.
(Of course we might still learn useful things from such a model, I just want to be clear that it wouldn't in any sense be a complete one.)
It isn't enough to flip switches on and off, and to recognize weights, or even to take a fully formed brain network and simulate it. You have to understand how it developed, what it reacts to, how body shapes mind shapes body, and so on and so forth.
What we're doing now with NNs is mistaking them for the key to making an artificial consciousness, when all we're really playing with is the ML version of one of those TI calculators with the paper roll that accountants and bookkeepers use. They are subunits that may compose together to represent crystallized functional units of expert-system logic; but they are no closer to a self-guided, self-aware entity than a toaster.
I mean, I know that I'm conscious. Or at least, that's how it occurs for me.
But there's no way to experience another's consciousness. So behavior is all we have. And that's why we have the Turing test. For other people, though, it's mainly because they resemble us.
Or fully understanding fluid dynamics and laminar flow. The Wright brothers certainly didn't fully grok those, at least.
As I interpret GP, the claim is that if you can't describe something in sufficient detail to simulate it, then you don't actually understand it. You may have a higher-order model that generally holds, or holds given some constraints, but that's more of a "what" understanding rather than the higher bar of "why".
It seems that they are saying that a simulation is required for proof. We write proofs for things all the time without exhaustively simulating the variants.
I never claimed that a simulation is required for proof, just that an unexpectedly broken (but correctly implemented) simulation demonstrates that the model is flawed.
Do you ensure this by simulating it?
Take for example cryptographic primitives. We often rely on mathematical proofs of their various properties. Obviously there could be an error in those proofs in which case it is understood that the proof would no longer hold. But we double (and triple, and ...) check, and then we go ahead and use them on the assumption that they're correct.
I'm not the previous poster, but how about the Halting Problem? The defining feature is that you can't just simulate it with a Turing machine. Yet the proof is certainly understandable.
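The standard diagonal construction, sketched in code (the usual textbook argument, paraphrased by me): assume a decider exists, build a program that does the opposite of whatever the decider predicts about it, and the assumption collapses - no simulation required.

```python
def halts(f, x) -> bool:
    """Assumed total decider for halting. No such implementation can exist."""
    ...

def diagonal(f):
    if halts(f, f):     # if the decider says f halts on itself...
        while True:     # ...then loop forever
            pass
    # otherwise, halt immediately

# halts(diagonal, diagonal) is wrong whichever answer it gives,
# so the assumed decider cannot exist. Understood, never simulated.
```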
On the other hand, assuming no errors in implementation then a broken simulation which you had expected to work directly implies that your understanding is flawed.
And just looking at the way they dance around - they're in motion, they're changing their connections, they're changing shape - is so entirely unlike the software idea of a neural network that it makes me really doubt we're even remotely on the right track with AI research.
The article starts out "At the Allen Institute for Brain Science in Seattle, a large-scale effort is underway to understand how the 86 billion neurons in the human brain are connected. The aim is to produce a map of all the connections: the connectome. Scientists at the Institute are now reconstructing one cubic millimeter of a mouse brain, the most complex ever reconstructed."
So the article is about starting with the wiring diagram and working up. My point is that, even where we already have the wiring diagram for a biological neural system, simulating what it does is just barely starting to work.
I'm definitely excited to start working on this in a couple of years! My hopeful guess is that observing the full nervous system while it's firing full throttle is the only way to start understanding the algorithms that are running there. And from there hopefully we can start finding patterns!
Needless to say, I agree with you. The people who say they have a wiring diagram of the mouse brain need to rein in their enthusiasm. We are not anywhere close to even starting to understand a fly or zebrafish brain, let alone a mouse or human one. Sydney Brenner himself agreed (though he's arguably biased towards nematodes).
I don't think you understand the depths of what a worm is. Is there any worm created by humans without using previous worm material?
For a concrete example, consider the "modbyte tuning" in the 32byte game of life.
Now here is how life does it: you have a program that outputs prime numbers; you then have to change it into a sorting algorithm, then into a fizzbuzz, and then into a game of life. You are not allowed to refactor or do anything from scratch. If the program becomes too big, you are allowed to remove random instructions.
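A toy sketch of that constraint (my own illustration; the fitness function is left abstract): change is only ever a random local edit kept when it scores no worse, never a rewrite.

```python
import random

def mutate(program: list) -> list:
    """One random local edit: append an instruction, or drop one."""
    p = program[:]
    if p and random.random() < 0.3:
        del p[random.randrange(len(p))]   # prune when things get too big
    else:
        p.append(random.randrange(256))   # bolt something new on
    return p

def evolve(program, fitness, steps=10_000):
    best = fitness(program)
    for _ in range(steps):
        candidate = mutate(program)
        score = fitness(candidate)
        if score >= best:                 # keep only non-regressions
            program, best = candidate, score
    return program                        # drifted, never refactored
```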
There are serious problems with this. Koch will have to explain how a simulation of a brain can reproduce in perfect detail all the possible behaviors of a brain without having an equivalent integrated information. Presumably integrated information is a property of the organization of a process, and as such it should have consequences for its set of possible behaviors. So if the von Neumann system has constrained integrated information, its behavior should be constrained as well in some externally identifiable way. But by assumption the simulation could be arbitrarily good. How does Koch resolve this tension?
The other glaring issue is the fact that consciousness under this view has no explanatory power for the system's behavior. If a non-conscious system can reproduce exactly the behavior of a conscious system, then the property of consciousness contributes nothing informative to the behavior; it is entirely incidental to a conscious system's behavior. It doesn't explain why a conscious subject winces with pain after touching a hot stove, nor why it learns to never touch a hot stove bare-handed again. That's a pill too big to swallow.
To be fair to the authors, I have only read this article and the IEEE Spectrum article linked from the original. Maybe they go into more depth in their book.
A distinction that doesn't make a difference is a very poor distinction.
"If I take my Tesla car and beat it up with a hammer, it's my right to do it. My neighbor might think that I am crazy, but it's my property. It's just a machine and I can do with it what I want. But if I beat up my dog, the police will come and arrest me. What is the difference? The dog can suffer, the dog is a conscious being; it has some rights. The Tesla is not a conscious being. But if machines at some point become conscious, then there will be ethical, legal, and political consequences. So, it matters a great deal whether or not a machine is conscious."
At various times, in certain circles, it has been argued that animals are not 'sensate' (or 'conscious' or various other terms); if you poke a dog with a needle, it acts like it is hurt. But it isn't really in pain. Instead it is 'pretending', acting like it is in pain to avoid further damage or to gain sympathy. Mostly, this kind of idea has been discarded. Mostly.
Koch is making the same kind of argument. If you had a computer based system that looked and acted in all ways like a human being, or an animal, you could cheerfully "beat it up with a hammer" because it's not a 'conscious' being. Even if it looks and acts like one, it simply cannot be because you aren't looking "at the behavior of the machine, but at the actual substrate that has causal power" which is a von Neumann architecture, and "the causal power of Von Neumann-chips is minute" and therefore "any AI that runs on such a chip, however intelligent it might behave, will still not be conscious like a human brain."
To me, this is a philosophically uninteresting theory. (And I fear the day that they start applying their consciousness-meter to random people: "The theory has given rise to the construction of a consciousness-meter that is being tested in various clinics in the U.S. and in Europe. The idea is to detect whether seriously brain-injured patients are conscious, or whether truly no one is home.") The idea that yes, you could build a machine that looks and acts intelligent, conscious, causal, or whatever word you like, but it still would not be real intelligence, or whatever. It always struck me as very akin to the "we don't understand it, so therefore it's magic" school.
No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system.
I don't understand how anyone could answer this question this way. It's practically a tautology; if the simulation is accurate, then the person you're simulating is conscious. In other contexts, it's taken for granted that if this entire planet were running on a simulation, we wouldn't even be able to tell.
Less hypothetically, this "consciousness-meter" would give a different value for a physical chip, and a perfectly accurate emulation of that chip. They're doing the exact same thing, and your meter gives a different number? Why is anyone taking this person seriously?
I remember being taught about this in a philosophy class, but I really can't help but not care - if it is technically possible, I guess there will be a debate surrounding machine consciousness similar to transgender politics today (uncomfortable to some, but fine to others - including me, for the record).
Not necessarily. You can't treat a human as a black box, and then use inputs and outputs to definitively draw conclusions about what's going on inside the black box.
You could potentially have two black boxes that give identical outputs for the same inputs, yet have completely different mechanisms inside for arriving at those outputs. For example, say you have someone who's very good at arithmetic able to multiply two numbers. You put him inside one black box and a pocket calculator inside another. You give each box two numbers and they both output the product. Both black boxes will give you identical outputs for any given input. You know the box with the person inside is conscious, but this is not enough information to conclude the other black box is conscious.
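A sketch of those two boxes (hypothetical, just to pin down the point): identical input/output behavior, entirely different mechanisms inside.

```python
# Box A: "computes" products by brute lookup over a precomputed table.
TABLE = {(a, b): a * b for a in range(100) for b in range(100)}

def box_a(a: int, b: int) -> int:
    return TABLE[(a, b)]          # no arithmetic happening at query time

# Box B: multiplies by repeated addition.
def box_b(a: int, b: int) -> int:
    total = 0
    for _ in range(b):
        total += a                # a genuinely different internal process
    return total

# Indistinguishable from outside the boxes:
assert all(box_a(a, b) == box_b(a, b)
           for a in range(100) for b in range(100))
```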
"Consciousness" is not a descriptor of the outputs of a system. Consciousness is a descriptor of how it processes the inputs to arrive at the outputs.
There is a thought experiment in philosophy related to the subject called the p-zombie: https://en.wikipedia.org/wiki/Philosophical_zombie. "Such a zombie would be indistinguishable from a normal human being but lack conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain." You're basically arguing that it's impossible for a p-zombie to exist. That is the view some people hold, but arguing for that case is a lot more complex than simply saying it's a tautology.
As for P-Zombies, there would again be no difference between this brain, and a stranger you pass by on the street. Either of them could be a p-zombie, but we give other humans the benefit of the doubt; the same biology should lead to the same internal experience. Personally, I'm completely unconvinced by the p-zombie concept in general, but that's not relevant either.
The brain is too hot to rely on large-scale quantum entanglement. Any quantum effects that are relevant to the functioning of the brain are likely to be very small-scale and have some boring, easily-simulated effect like making ion pumps more efficient.
I assume you're referring to Heisenberg's Uncertainty Principle. If you are, then this shows a fundamental misunderstanding of it. The uncertainty principle describes a fundamental property of quantum systems. It has nothing to do with our measurement technology.
"Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases."
You sound a bit like many of the people who completely miss the point of the argument and insult it (or its author).
You might recognize this as the system's reply for which Searle has a response. But his response is insufficient to save the argument as it merely equivocates on what "the system" is. The system is the set of rules and the database of facts, some input/output mechanism, and some central processor to execute the rules against the symbols. The system's reply says that the man blindly executing instructions is not identical to the system that understands Chinese, namely the entire room. Thus properties of the man have no bearing on the properties of the system; that the man does not understand Chinese has no bearing on whether the system as a whole understands Chinese.
But the man is still not "the system" when he memorizes the instructions and leaves the room. The system is some subset of the man, specifically only those brain networks required to carry out the processing of the symbols. Importantly, these brain networks are different than what we take the man to be. It is true that the man doesn't speak Chinese, but since the system processing symbols is still not identical to the man, properties of the man do not bear on properties of the system. So still, the fact that the man blindly executing instructions doesn't speak Chinese does not entail that the system doesn't speak Chinese.
But we know a lot about the biology of the substrate of consciousness, and none of those properties seem relevant to consciousness. There doesn't appear to be any fundamental property of neurons that is relevant to conscious experience. The only thing of relevance is its organization and thus its causal or computational structure. But these properties can be reproduced arbitrarily well in a general computer.
The idea is that even if the cloud formations and rain generated by the model are indistinguishable from what you are seeing from your window, it doesn't mean that the model is actually producing rain. It may look the same on camera, but if you go outside it is very different: only the real rain will make you wet.
In the same way, if a computer behaves like a conscious entity - for example, a chatbot acting like a human, as in a Turing test - it doesn't mean that the computer is conscious.
Note that the computer may be conscious, but it is not a requirement in order to run a model. In the same way that a weather model could use actual water, but doesn't need to.
As of today you can't prove you're not in a simulation nor can you prove that every person you think is intelligent is not really an AI in that simulation.
So similarly, if an AI exists and you can't tell it's an AI no matter how hard you try, then there is no reasonable difference. They are effectively the same.
The only point of Turing's test is to be double-blind. You can't know before you start that the other side looks like a computer rather than a human, or that their voice is off, or that if you cut them open they don't bleed blood, etc. You have to make the test double-blind, and to do that a chat format is easiest.
One point against the traditional belief is that if you've ever read Aristotle, you might be shocked to learn that the Greeks of antiquity believed that "soul" was quite physical, expressing itself in blood, semen, etc. The medieval characterization of mental illness as an "imbalance of the humors" comes from this early idea about how we work. Needless to say, the definition of soul has since retreated entirely into the metaphysical, where it cannot be disproven, by definition.
Come to think of it, this is not off topic at all. It follows that if AI can become conscious, it means that consciousness does not contain (or at least require) free will. That's if we agree that a computer running code has no "free will" no matter how complex the code is.
The question is badly posed and therefore meaningless. If the universe is deterministic, then you cannot have free will as it is normally considered. If it is non-deterministic, then (by Bell's inequality, if you want to throw around big terms) it is random, which is also not free will as it is normally considered. If you have free will, as we all feel like we do, then the universe can be neither deterministic nor non-deterministic. Something is smelly in Denmark; the question does not make sense in its current form.
I am not actually an analytic philosopher, I just emulate them as a hobby.
It is so obvious that what defines our 'consciousness' is this self-belief and not much else. We are programmed by nature to see ourselves as unique and special, because this way we are more likely to defend our own interests and to reproduce our own genes at the cost of others.
This belief is so strong that it makes it near impossible to imagine that it is a mirage and that we are nothing more than a bunch of molecules.
Randomness would not equate to free-will, it'd be the opposite, since it would mean one's choices are completely disconnected from reality rather than a reflection of one's mental state and environmental circumstances.
In fact, a deterministic perspective is the only one where the idea of free will approaches coherency since it mandates that the actions of the individual are a function of who they are. If one repeats the scenario exactly, a person with free will must always make the same choice because if they didn't it would mean that their "will" is irrelevant with respect to what actually transpires in reality.
Aristotle's view is, to a large extent, the traditional religious belief (at least in the West). Aristotle would not have held that the soul was physical as you say, rather that the soul was the "form", or the animating principle of the physical body. The "magical phenomena" as the "residue of traditional religious beliefs" you mention is not really "traditional" at all. Descartes came along in the 17th century with his "res cogitans" and mucked up the whole issue by explicitly rejecting the traditional Aristotelian view.
My thought is that the defining characteristic of consciousness is suffering. If you can't torture it, it's not conscious. If you can, it is.
This would seem to rule out the programs that run on our computers. But, since we have little (if any) real understanding of reality, who knows?
What if you remove the ability of a person to feel pain? Modify their chemistry just enough that they're always high, as if they had injected heroin? Does it mean the person is not conscious any more?
If we can simulate all the particles of a human on a supercomputer, and all the reactions to torture are the same, is it not really being tortured?
As to the latter, it doesn't seem that that would be torture. As I said, though, I think we ultimately know nothing about this subject. Nor can we, even in principle.
Humans are incorrect all the time. Even researchers retract the previously 'proven'. Until the answer is clearly proven beyond all question, pondering and researching is how we discover the unknown for that question, or the error, and often many others we didn't expect to find but that come as a sideline of questioning the known. Intellectual curiosity has great value, even for boring and known things.
We also don't have an explanation for what I call 'the harder problem' https://twitter.com/gfodor/status/1225230653932761088
As for qualia, I'm reminded of the people coming to my door asking, "Do you believe in Jesus?" I reply, "Do you?" and when they assure me fervently that they do, I follow up with, "Well, okay, how do you know you do?" I think you could do the same with r/Jesus/qualia/g.
It does seem like accepting that all other physical humans are conscious, and that there are no extra-physical phenomena in a physical human, implies that a simulated human would also be conscious.
I recall seeing Koch in debate with a philosopher (it was long ago and I can't recall who, but one of the big ones: maybe Searle or Dennett), and they were talking past each other.
For Koch, at least in this debate, consciousness was something akin to attention. For the philosopher, it was something else entirely (if I remembered, I might be able to guess who it was).
Consciousness could mean: making a decision as opposed to having it pre-ordained, or the experience of your senses, or "feelings" or knowing you are a self, or who knows what else.
It isn't just a computer science issue, it is a philosophical and linguistic one too: just what do you mean by the word.
From the article (Christof Koch's words):
"The theory fundamentally says that any physical system that has causal power onto itself is conscious. What do I mean by causal power? The firing of neurons in the brain that causes other neurons to fire a bit later is one example, but you can also think of a network of transistors on a computer chip: its momentary state is influenced by its immediate past state and it will, in turn, influence its future state."
In this view, a building with a thermostat would seem to be conscious to some degree. I very much doubt that any amount of study of simple systems that fit this definition will tell us anything useful about the sort of consciousness that is displayed by, for example, humans.
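To see how little the quoted criterion demands, here is a toy system (my sketch, not Koch's formalism) whose momentary state is influenced by its immediate past state:

```python
def step(state: int, sensor: int) -> int:
    # the feedback on `state` is the "causal power onto itself"
    return (2 * state + sensor) % 256

state = 0
for sensor in [1, 0, 1, 1, 0]:   # a thermostat-grade input stream
    state = step(state, sensor)
print(state)  # 22: the past state shaped this value at every step
```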
So, spaghetti code is more "conscious" than modular code and parity check code is more "conscious" than any physical entity. Yeah, there's something wrong with this theory.
I don't think this is as overly broad or bad a definition as you presume. I recall sitting at a laundromat one day, in a somewhat sleepy state, daydreaming, with the rhythm of the washing machine in front of me playing through my mind. My subconscious told me that the machine was conscious. I dug deeper into that intuition and realized I was seeing a bit of the inner workings of the mind of the engineer who designed the algorithm. Not a perfect example, as I didn't know if the algorithm was closed-loop, but in any case, it made me realize that our minds are just large bodies of these algorithms working together. We have some additional oracles, random data generators, and probabilistic mechanisms thrown in as tools, but consciousness really is mechanistic and pluralistic.
To recreate it requires not only the qualitative aspects of a being that can think, but also sufficient (read: vast) quantity of systems to get to a useful general purpose thinking machine. There is no "soul" or special sauce or singular definition of consciousness. It's an illusion.
Level 0: Inanimate objects like rocks. They move when moved by something else but not on their own.
Level 1: Objects like mechanical locks that move iff interacted with but do make some sort of "choice". (ie opening or not opening depending on the key used in the case of the lock)
Level 2: Systems that interact with their environments but in very mechanical ways, like mosses, trees and thermostats. It is essentially predictable how the system will behave when disturbed in certain ways, ie it does not learn or only in a predictable way (like trees growing around obstacles).
Level 3: Systems that can actively learn about their environment and use the knowledge to get what they want. Most animals would be here.
Level 4: Systems complex enough to form a "theory of mind" about other beings. Most social animals would be here.
Level 5: Self-aware systems, which can form a theory of mind about themselves. Complex planning lives here.
There might be levels higher than self-awareness, but I can't think of any examples (obviously, since humans are as far as we know the most intelligent beings on Earth). Some attributes a hypothetical "level 6" system might have:
- Perfect insight in their own motivations. Humans clearly lack this but it does not seem impossible.
- Better ways to communicate qualia. The colour purple or the taste of strawberries is difficult to explain for humans, but once again it does not seem like it would be impossible.
- Ability to "really" multitask. Once again something that is not theoretically impossible but very difficult for humans to do.
If you want to explore it deeper, take a leaf. It is like a rock, not moved by itself. But wind can pick it up and move it. Now the question is: does the leaf happen to be movable by the wind by accident, or is there some "intelligence" behind it? See, that moving leaf, like any moving object, is not moved purely by itself - when you move, it happens together with other forces, like gravity and friction and so on.
So maybe a rock is the same in this way; it just requires more (or, better said, different) energy to move, as you require different energy to move compared to a leaf. If you look at it this way, everything is conscious and works together; there is no single, separate entity anywhere in the Universe. And all things somehow working together on billions of visible and invisible levels can be called consciousness.
It seems any other conclusion is just fooling oneself.
And that's just cognition, not consciousness, which is a different matter, and still not even understood on a philosophical level, let alone a scientific one - physics, computing, or otherwise.
You should consider the possibility this is a gap in your understanding, rather than a privileged access to rational and intelligent thought over others.
I did not, however, come here to defend the idea that consciousness is an algorithm. Personally, I consider it rather more plausible that a computer plus a random number generator might achieve consciousness through simulating the sorts of physical processes that occur in a brain, and if it is a 'true' random number generator, rather than a deterministic PRNG, then we are no longer talking about math, at least in the sense that you use to conclude Platonism.
One might respond by saying that if the universe is deterministic, then there is no such thing as a 'true' random number generator, as distinct from a PRNG. My understanding of the philosophical implications of QM, and its competing interpretations, is insufficient for me to be sure whether the universe as we experience it is deterministic, but regardless of whether it is or is not, I suspect that the true hard problem of consciousness is 'why do we (feel that we) have free will?'
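The distinction being leaned on here, in miniature (standard library calls only): a deterministic PRNG is replayable from its seed, while an OS entropy source is not.

```python
import os, random

# Deterministic PRNG: the same seed replays the same "random" stream.
a = random.Random(42).random()
b = random.Random(42).random()
assert a == b   # pure math in disguise

# OS entropy pool (hardware/event noise): no seed to replay it from.
print(os.urandom(8).hex())  # differs on every run, unpredictably
```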
If something seems random to us, the likelier explanation is that we couldn't observe it from close enough, so we do not know the exact algorithm/causes behind that event or occurrence.
To me it seems that even if something behaves like a true random number generator, there's no reason to believe such a thing exists. The simpler explanation is that there's some logic involved which we just can't observe (yet).
Same as we can't prove that a man in the clouds does not exist, but there's no good reason to believe that one exists - except for people's egos, of course.
Free will and consciousness are separate issues. You can potentially have one and not the other. What's important for free will is whether choice is free and what that freedom amounts to. What's important to consciousness is subjectivity. You can feel pain while not having a choice about feeling that pain, say if you were restrained and unable to do anything about it, just to give an example.
And potentially, a robot could be free to make choices while not feeling anything.
As for subjectivity, I had more to say about it here: https://news.ycombinator.com/item?id=23162714
Regardless of whether I am right or wrong on this, I think my other points in this post (platonism, etc.) are independent of it.
QM is a branch of physics, with some theories having multiple interpretations, some deterministic and some non-deterministic. By this I mean, quite literally, that there exist deterministic and non-deterministic models of the same phenomenon, both of which work in accurately describing said phenomenon.
As so, it hasn't been proven whether QM is deterministic or non-deterministic.
Regardless, in cases where it skews non-deterministic: how can we be certain that the exhibited randomness isn't simply because we aren't accounting for everything there is to know about the conditions of the phenomenon (hidden variables)?
Before shooting that down: Bell himself admitted his own theorem does not rule out determinism; the kind of determinism which evades it he called "superdeterminism".
"Qualia" perhaps, which is a sleight-of-hand which amounts to shoving the complexity around on the plate: Consciousness depends on qualia. What's qualia? Exactly.
I think consciousness is a continuum, and humans slide up and down that continuum all the time, from deep sleep to nearly-autonomic behaviors like doing purely rote tasks to, at the other end, being very aware of everything because you just heard a rattlesnake rattle and need to know where it is right now.
Is AI conscious? I think it fits somewhere at the lower end of the spectrum. Is a flatworm conscious?
If qualia arise only in particular configurations of physical substrate, then it might be possible for flatworms to have qualia but not search engines. This doesn't mean the flatworm has a concept of self, a concept of time, or even low-level cognition. "Nobody home", so to speak. But the basic qualia of light/dark/hot/cold might still be present.
That's pretty much the entire definition, yeah.
> means that their "synthetic" qualia are indistinguishable from "real" qualia
And that's the purpose of the definition. It's just our good old Chinese room in a new wig.
So if your definition of qualia involves an unfalsifiable claim, or other things that we have no mechanism for proving to be true or not true, then it taints all the arguments that spring from your first.
Semantics and consciousness are distinct arguments. So one's position on whether manipulating symbols can achieve meaning is separate from whether consciousness has an objective explanation, or at least one humans are capable of providing. Searle might think the two are related, but Chalmers likely disagrees, and there's likely people who disagree with the hard problem, while finding the Chinese room convincing. Or so my exposure to these sorts of topics would indicate.
In any case, a position on one does not commit one to a position on the other. It's dismissive to make that sort of claim, and fails to understand the nuances of philosophical argument.
Other frameworks allow for more sophisticated experiments, like setting up AIs to interact with each other and seeing if they come up with systems or concepts similar to sociology or economics, or at least seeing whether they reason their way to deciding that those were not useful. The attention framework needs to be stretched a bit to be able to justify these experiments.
Physical universality just means that the body appears to have a symmetry with regard to its operation over space and that it also appears to be self-contained (noise-resistant computation).
The randomness, along with the noise resistance, colludes to create the appearance of self-contained agency. Of course, Eastern doctrine and even Western physics know there are no separate entities. Especially when the QM dice rolls are considered... you are at their mercy for your supposed choices.
I suspect it might be an incoherent concept, but that is a much weaker position.
1) Without a definition of 'AI' or 'conscious', all answers are correct. They do go into a bit of depth on what 'conscious' means, but not what 'AI' means.
2) There are a lot of related words here: conscious, self-aware, sentient, sapient, 'narrow AI', etc.
> Let's imagine that we simulate the brain in all biological details on a supercomputer. Will that supercomputer be conscious?
They say no, but fail to explore the more interesting corollary that the supercomputer won't be conscious but the program is, by their definition. It's just one level of abstraction up from Matter -> Brain -> Mind to Matter -> Computer -> Simulated Brain -> Mind.
I do find this definition of consciousness interesting from an ethical perspective, given their final thought experiments. It reminds me of the Presger, an alien race from the Ancillary Justice trilogy. They have the concept of 'Significant' species that have rights and if you aren't, you're basically inanimate matter or bugs or whathaveyou.
From the Internet Encyclopedia of Philosophy:
“Knowing the meaning of a word can involve knowing many things: to what objects the word refers (if any), whether it is slang or not, what part of speech it is, whether it carries overtones, and if so what kind they are, and so on. To know all this, or to know enough to get by, is to know the use. And generally knowing the use means knowing the meaning. Philosophical questions about consciousness, for example, then, should be responded to by looking at the various uses we make of the word “consciousness.” Scientific investigations into the brain are not directly relevant to this inquiry (although they might be indirectly relevant if scientific discoveries led us to change our use of such words). The meaning of any word is a matter of what we do with our language, not something hidden inside anyone’s mind or brain. This is not an attack on neuroscience. It is merely distinguishing philosophy (which is properly concerned with linguistic or conceptual analysis) from science (which is concerned with discovering facts).”
I think Eliezer Yudkowsky suggested 'tabooing' the word in question in situations like these, to force the discussion onto the logical arguments instead of into symbolic dead ends. In his example, the word 'sound' was charged and raised emotions. 'Consciousness' and 'free will' certainly fall into that category.
Depending on who you ask, a monkey or cat isn't conscious. From my point of view, I don't think you can really know or recognize that. It's just a matter of degree of consciousness. I think it's safe to say that mammals are conscious to some extent: they have emotions, dreams, communication techniques, etc. We just have those (again, to some extent) more so than they do, or we have them in different ways.
I think a question to ask is: at what level of organization do we recognize self-direction? Am I conscious because I think so? What does that say about the bacteria that live inside of me that I rely on, or the individual neurons in my brain?
If a dolphin and I are mostly the same, with brains made of neurons, blood cells, etc., how can you truly differentiate what is conscious and what is not? Even if you speak to another human, it's not completely possible to say with certainty that they are conscious; only to a degree that's useful in day-to-day life.
At what level of circuitry do we consider AI to be conscious? When it completes tasks arbitrarily constructed by humans? When it "feels"? How would you differentiate between sufficiently complex AIs? Is there just AI or not? Why?
I think that people who are attempting to simulate a human brain using an utterly biomimetic design stand a good chance of artificially creating something that is conscious in the same way that humans are. I also think it's possible they may be able to achieve this before they fully understand how the human brain works, i.e. if you copy the design accurately enough, the machine may work even if you don't know how.
The resulting consciousness could theoretically be totally self-aware, but no more capable than we ourselves of modifying its own programming with intentionality and purpose. i.e. not the singularity.
I think there should be two different concepts of AI: "a consciousness using the same processes and design as our own", and "an essentially alien consciousness that fully understands itself". And I suspect that even if some engineering genie gave us the first kind of AI, we'd be no closer to developing the second kind.
What is the meaningful distinction here? That it wasn't created through sex?
The meaningful distinction is "not a biological organism".
If integrated information theory is true, that gives me some comfort, since it says AI built on our current architecture is unlikely to ever be conscious. Although like the article said, some alternative architectures could theoretically have a higher degree of consciousness under IIT.
I feel like there are some big ethical questions around AI that revolve around more than just the standard "how do we create an AI that won't destroy us all".
People choose to have babies with the knowledge that their children will suffer, and that their children might suffer a great deal. It seems like the same sort of ethical problem to me.
There is an upper limit on the amount of suffering a human can endure before they just die, though. It can be a really high limit, but I could imagine a world where that limit is exponentially higher for an AI.
"Existence is PAIN to a Meeseeks, Jerry... and we will do ANYTHING to alleviate that pain."
Some schools of thought posit that everything has consciousness. Including things like rocks. Even these things suffer; you just can’t hear the screams.
If things like rocks were conscious in some way, I don't think there would be any suffering present. Our suffering is intrinsically tied to our physical and mental systems. "What it's like to be a rock" would be something completely alien to our experience, and without the survival based mechanisms that we have, likely to be devoid of anything like suffering. An AI, on the other hand, could be an entirely different story.
The question in another form:
Why are we conscious at all? It doesn't make sense.
Our arms or legs are not conscious, or are they? The information is transported into the brain, into a conscious centre. What are its essential building blocks? Why should some neurons in the brain be conscious and everything else be unconscious? If it is not neurons, what else can it be? Atoms? Electric fields? Gravity?
If it is just information processing, and I don't see why a simulation wouldn't have an Integrated Information Number, then a Von Neumann machine, created by moving rocks in a desert, would be conscious.
So, if it is not intelligence, what else is it? What are the building blocks of a raw conscious machine? One that does no information processing but instead is just aware. It could be a stone. As OP suggests, they just can't move their lips to scream.
But then again, why is there just one consciousness in my body?
Long question short: HN, what are your explanations for being conscious?
I think if you take the perspective that the human brain is conscious but not a brain-sized container of seawater, you need to then look carefully for distinctions between them. "Information processing" or "response to environment" is probably not good enough; the seawater is actually doing all of those, with a unique reaction to any possible input, so you'd have to be more specific.
Probably the only recourse you could look for to make the distinction is to say the brain embeds particular mathematical patterns that the seawater doesn't, such as a compact, stored representation of its environment (or its history of inputs), or a future-looking planning algorithm, or both. I personally take this view (I think a quale, like "the appearance of a red apple", is just precisely what it feels like to read from the buffered [R,G,B] memory array in my head, filtered through image-recognition networks).
But then if you put your hopes on consciousness originating from those mathematical functions, you have to admit that any analogous expression of those functions in other systems would also be conscious, such as animals and robots.
And worse, once you start thinking about math and how flexible it is, you realize that information is in the eye of the beholder: almost any system that follows certain rules can embed almost any mathematical computation, just as illegible scratches to me are information-rich writing to you. So you might have to circle back and admit that there could be very analogous computations going on inside rocks and seawater. And that brings us back full circle.
An AI may not specifically need fear, aggression, love, hunger, or whatever low-level control mechanisms animals evolved prior to the neocortex.
This is just my opinion, of course.
But maybe without the survival mechanisms that were programmed into us by evolution, such a thing just wouldn't happen.
My bet is the first AI that will achieve consciousness will not have emotions coded in. It may understand human emotions on a descriptive level, but the emotions per se are not required for a functioning AI.
Think of a very calm mentor that can see you failing and correcting you before you make a mistake.
Looking at what we do with breeding animals, and how we treat people, I think that's very likely (assuming we are able to create consciousness in the first place, of course). No small driver in this is the desire to have something that is smarter than humans but doesn't talk back, and is forced to like and serve us and/or harm other humans on our say-so, without us having to make any effort to be likeable or deserving of authority, or to make a convincing case that someone is an enemy to be exterminated.
We generally ask "what's in it for us", and as long as whoever we torture and exploit suffers quietly, we don't tend to think about things we are not outright forced to think about. We do that with people, we do that with animals, we let it be done to us, and sometimes we even do it while fully believing God created them and us. Why would we have more respect for something we 100% know we created, and that might suffer in ways we cannot even detect?
I think we will create potentially dangerous things, and then we will be "forced" to declaw what we created. That declawing might cause suffering, it might not, but we will only care insofar as that suffering translates to worse performance, or some danger to us. If suffering increases performance, we might even find euphemisms and dashboards to hide what we're doing from ourselves. We might consider being a bleeding heart about such things as "premature optimization" and declare shipping product as the professional way to be, with craftsmanship and care an optional indulgence after work is done, at best.
Highfalutin talk gets the foot in the door; being a goon makes the superiors happy. If we want AI to be proud of, AI we can communicate with without lying to it or to ourselves, we have to change the world we make it in, or isolate it from the world, which kinda defeats the purpose (and might also cause suffering, like caging living beings does).
Isn't there a lot of evidence to the contrary? We have general maps of our body parts in specific brain areas; we know that if those areas get damaged, it compromises specific functions like language, hearing, etc. Then we have things like short-term memory, long-term memory, and subconscious image processing that can be hacked with visual illusions, and many more observations that clearly point to the brain undergoing information processing as described by information theory.
Some mental illnesses can be classified as "miscomputations" in our brains as well.
Thinking is something we've anthropomorphized, and the moment a computer can do something we previously considered 'thinking', we'll move the goalposts again.
A mechanical computer can "think" just like an electrical one. A series of gears and pulleys can produce a difference of information to the observer. Reducing thought away from the human gives you a recursive spiral down to the bottom of physics. It needs to be embodied in a human or you hit philosophical problems.
Ideology and beliefs aside, I think it's absolutely fine to change one's definition of something when one learns more. I think we all can benefit from better definitions in our thoughts and conversations.
What you may call "moving goalposts" may actually be incremental refinement of an idea -- or a belief that is increasing in sophistication (or absurdity in some cases!) as more is understood.
The authors are certainly moving the goalposts when they concede that 4/5 of humans in a vegetative state have no consciousness, but insist that a full simulation of a healthy human mind should never be considered self-aware. And yet in the same breath they claim this lack of consciousness allows you to destroy a robot in your front yard, while staying startlingly silent on what that means for those in the vegetative state.
I'm generally sympathetic to the idea that consciousness and "computational power" are not identical, but the larger problem with all of these theories, in my opinion, is that they're really just stories that clarify intuitions one has about consciousness.
I think the problem with the consciousness question is really that it's hard to probe into what consciousness actually is. It's completely possible to give different accounts of conscious experience even though systems behave the same, including that consciousness doesn't exist at all, and really there's no empirical way to agree or disagree.
I've started to think of expressions about consciousness more or less the same way emotivists consider ethical statements. Emotivists argue that saying "You acted wrongly in stealing that money" is equivalent to saying "You stole that money": the first statement does not add any true-or-false claim about the situation; it merely expresses an emotional sentiment.
In the same sense I don't think "machine x does y" expresses fewer facts than "machine x does y, and it is conscious".
> In the same sense I don't think "machine x does y" expresses fewer facts than "machine x does y, and it is conscious".
Perhaps someone with more insight can explain how this isn't a violation of the Church-Turing thesis?
Fundamentally, we can simulate a neuromorphic computer (or even a quantum computer, or even the entire human brain) on a Von Neumann machine (with performance degradation). According to our current understanding of physics, the behavior of these systems should be identical in all respects except that the simulation will be slower. However, integrated information theory says that a neuromorphic computer may be highly conscious and the brain is most definitely highly conscious, but their simulation isn't.
What's still unclear to me is whether this is indeed just metaphysics (i.e., the simulation and the real thing are absolutely identical, but we should still treat them differently from an ethical standpoint), or a hypothesis about our physics (i.e., the brain somehow fundamentally cannot be simulated on a Von Neumann machine) and about computational theory (i.e., a neuromorphic or quantum computer cannot be simulated either).
That statement can be true without being metaphysical. You can't treat a human as a black box, and then use inputs and outputs to definitively draw conclusions about what's going on inside the black box.
It's quite possible that there is a way to simulate human behavior through some other non-conscious mechanism. And just because we can't currently prove whether something is conscious doesn't mean it's something metaphysical we'll never be able to prove.
For example, if two people tell you they're experiencing some pain, but one is lying about it, a few hundred years ago you'd have been unable to prove it. That didn't mean pain was something metaphysical. Today, we know it might be possible to put both people in an MRI and prove whether they're actually feeling pain.
It's not proven that consciousness can be simulated by a Turing machine. There's a possibility that consciousness requires some hypercomputation to simulate.
The metric depends on how things are connected inside. There are a few versions of it, but generally systems with more direct wires between components are more conscious.
I guess if you don’t think consciousness is measurable through behavior, you have to cook up some metric that depends on internal organization.
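To make that "cheapest cut" intuition concrete, here is a minimal sketch in Python. To be clear, this is not the real IIT calculation: actual phi is defined over a system's cause-effect structure and state-transition probabilities, not over a bare wiring diagram. The `integration_score` function and the example weight matrices are made up for illustration; the sketch only shows why a densely interwired system scores high while a modular one scores zero.

```python
from itertools import combinations

def integration_score(weights):
    """Crude integration score: the total connection weight crossing
    the cheapest bipartition of the nodes.  Assumes a symmetric weight
    matrix.  Brute force over all bipartitions, so small systems only."""
    n = len(weights)
    best = float("inf")
    for k in range(1, n // 2 + 1):          # sizes of one side of the cut
        for side in combinations(range(n), k):
            in_side = set(side)
            cut = sum(weights[i][j]
                      for i in in_side
                      for j in range(n) if j not in in_side)
            best = min(best, cut)
    return best

# Four nodes, everything wired to everything:
dense = [[0, 1, 1, 1],
         [1, 0, 1, 1],
         [1, 1, 0, 1],
         [1, 1, 1, 0]]

# Two pairs, each wired internally but not to the other pair:
modular = [[0, 1, 0, 0],
           [1, 0, 0, 0],
           [0, 0, 0, 1],
           [0, 0, 1, 0]]

print(integration_score(dense))    # 3 -> every cut severs connections
print(integration_score(modular))  # 0 -> decomposes into independent parts
```

On this crude reading, the dense system "hangs together" as one thing while the modular one is really two separate systems side by side, which is the flavor of distinction such a metric is after.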
Sure it is; brain imaging, which is how IIT tests for consciousness according to the article, is looking at phenomenology.
First, while that's true now, there is no guarantee that it will always be true as technology advances.
Second, neural activity is the physical basis for conscious experience, which, to a physicalist like me, means it is conscious experience, when the neural activity has the right properties (according to IIT, those properties are what "integrated information" is trying to capture). So I don't accept the distinction you are making between "the conscious experience itself" and neural activity.
True, but what does that mean? We would get to watch someone else's dream like a movie because neuroscientists would have figured out how to translate the neural activity into a format that can be output as video and audio.
That won't work for every sensation. You can't feel the dream the way that person's body feels it.
> Second, neural activity is the physical basis for conscious experience, which, to a physicalist like me, means it is conscious experience,
This is similar to an identity theory of mind, with a focus on information. The problem is understanding how integrated information produces smells and pains. Saying the experience is identical is making a claim that seems mysterious. Why would some forms of information be conscious? Did God set that up?
It sounds similar to Chalmers' position on information, except that Chalmers says there is a natural link between rich information and being conscious that is an additional law in addition to the physical ones.
What I meant was that future brain imaging technicians might be able to record your brain waves while you sleep and then, when you wake up, correctly tell you what you dreamed about.
However, it is also possible that future brain imaging technicians might be able to record your brain waves while you sleep and then use that data to construct a virtual reality experience that is so convincing that it could make someone else directly experience your dream.
> You can't feel the dream the way that person's body feels it.
You have no basis for making this claim except that we currently don't know how to do it. That is a very weak basis for such a claim.
> The problem is understanding how integrated information produces smells and pains.
Bear in mind, I'm not saying I myself believe IIT, I'm just trying to describe what it says. I don't know that "integrated information" is the right way to describe what it is about the neural activity in our brains that produces smells and pains.
> Why would some forms of information be conscious?
If I generalize this to "why would some forms of neural activity be conscious?", the answer is that consciousness has survival value, so neural activity that can produce conscious experiences evolved.
I will apply Occam's razor to that additional law until there is evidence for it.
More generally, there is a straightforward physical possibility for explaining the subjective and personal nature of qualia, and other aspects of consciousness: our brains are not connected in such a way that language processing can initiate all of the state changes that other sensory inputs can, and so we cannot communicate those state changes directly. This not only accords with our actual experiences, but it is probably just as well, or else we might find it (even) hard(er) to distinguish reality from imagination (and by "reality", I mean concrete, "that bear just noticed me" reality).
One may, of course, take one's own Occam's razor to this idea, if one thinks physical constraints on information flow in a brain are less plausible than Chalmers's extra-physical link.
Fair enough, but Hume already did that for laws (causality). We don't observe laws, just constant conjunction. B always follows A, so we say law C is the cause of that. Or description, if one doesn't like the implication.
Adopting a Humean approach, there is already evidence that consciousness is conjoined to certain neural activity. But, that can work just as well for IIT as it does Chalmers.
And I have no idea whether causal laws exist or what consciousness is. I guess IIT is as good an approach as any.
If one takes this way to leave the discussion, then there would be more than just a hint of motte-and-bailey equivocation if one were to then re-enter it with a specific claim, such as that Chalmers or Searle or Koch have a plausible argument that consciousness cannot possibly be this or that.
As it happens, I very much doubt that IIT is on the right track, but at least it is (or was, initially) an attempt to find out what consciousness is, rather than what it is not.
If it can simulate conscious behavior well enough to fool conscious observers (like us), how do you know it wouldn't be conscious?
I don't fully understand it myself, but based on the axioms in that theory, you can calculate the potential consciousness of a system.