> Where do people get off saying no one has any idea what consciousness is?
I'm not "getting off" saying that, but I do say it often.
For me, it's important to know:
If we think an artificial neural network can have consciousness and we're wrong, then there is a risk that all the people who want their minds uploaded end up with a continued existence no better than that of TV stars reproduced on VHS tape. There is also a risk of this being done as a standard treatment for lesser injuries, especially if it's cheaper.
If we think an artificial neural network can't have consciousness and we're wrong, then there is a risk of creating a new slave class, one that makes real the fears of the Haitian slaves embodied in the Vodou concept of the zombie: not even death will free them from eternal slavery.
Well, that is an entirely different question. My personal view is that nothing prevents an artificial neural network from having a consciousness (I don't think it makes sense to believe there is anything magical about human brains).
What I am saying is that we emphatically know things about the physical processes that (almost certainly) generate consciousness and that we should take that knowledge seriously when examining artificial neural networks. People eager to attribute more to these networks than is plausible love to dismiss all this knowledge so as to muddy the waters of comparison.
> What I am saying is that we emphatically know things about the physical processes that (almost certainly) generate consciousness
I'm prepared to believe that people who aren't me know such things, but last time I asked a PhD in brain research about this (a while ago now), they seemed to disagree.
At least, assuming we're talking about the same usage of the word "consciousness" here: when it's defined as "opposite of unconscious", then sure, we have drugs to turn that off, and the same goes, separately, for the non-overlapping definition "opposite of autonomous or reflexive"…
…but the weird thing where I have an experience rather than just producing responses to stimuli? If anyone knows about that, my search engine bubble hides it from me.
Came across an intriguing paper recently. It postulated that consciousness might emerge when an organism can form a predictive simulation of both its body and its surroundings, an approach akin to model-based reinforcement learning (RL). That is distinct from merely reacting to the environment, which is characteristic of model-free RL (a toy sketch of the distinction follows below).
> What insects can tell us about the origins of consciousness
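For anyone who hasn't met the distinction: here's a minimal, purely illustrative Python sketch. The chain world, the `step` function, and all the constants are my own toy assumptions, not anything from the paper; the point is just that the model-free agent reacts from cached values while the model-based one rolls out an internal simulation before acting.

```python
# Toy sketch: model-free vs. model-based action selection on a 1-D chain.
# Everything here (the chain world, `step`, the constants) is an
# illustrative assumption, not anything taken from the paper.
import random

N_STATES = 5          # states 0..4; the reward sits at state 4
ACTIONS = (-1, +1)    # step left or step right
GAMMA = 0.9

def step(state, action):
    """Environment dynamics: move, clamp to the chain, reward at the right end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

# --- Model-free: learn cached action values from raw experience, then react ---
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(5000):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    # One-step Q-learning update: no internal model, just stimulus -> value.
    Q[(s, a)] += 0.1 * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

def act_model_free(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])  # pure lookup, no simulation

# --- Model-based: simulate an internal model of the world before acting ---
def act_model_based(s, depth=4):
    """Plan by rolling out a predictive model of body + world. Here the
    model is `step` itself; a real agent would have to learn it."""
    def value(state, d):
        if d == 0:
            return 0.0
        return max(r + GAMMA * value(s2, d - 1)
                   for s2, r in (step(state, a) for a in ACTIONS))
    def score(a):
        s2, r = step(s, a)
        return r + GAMMA * value(s2, depth - 1)
    return max(ACTIONS, key=score)

if __name__ == "__main__":
    # Both agents end up walking right toward the reward,
    # but only the second one got there by simulating "what would happen if".
    print([act_model_free(s) for s in range(N_STATES)])
    print([act_model_based(s) for s in range(N_STATES)])
```

Both agents behave identically here; the suggestive part, on the paper's framing, is that only the second carries around an inner model of counterfactual futures rather than a table of cached reactions.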
I'm not "getting off" saying that, but I do say it often.
For me, it's important to know:
If we think an artificial neutral network can have consciousness and we're wrong, then there is a risk of all the people who want to have their minds uploaded having a continued existence no better than the one of TV stars reproduced on VHS tape. There is also a risk of this being done as a standard treatment for lesser injuries, especially if it's cheaper.
If we think an artificial neutral network can't have consciousness and we're wrong, then there is a risk of creating a new slave class that makes real the fears of the Haitian slaves in the form of the Vodou concept of a zombie — not even death will free them from eternal slavey.