It seems to hypothesise that a computer could never be 'conscious' simply because it is made of switches, yet we are conscious and are just made of bundles of neurons.
The whole argument just seems to be "computers can't be conscious because they are computers", which is a fairly shallow take.
> Even the syntax that generates coding has no actual existence within a computer. To think it does is rather like mistaking the ink, paper and glue in a bound volume for the contents of its text. Meaning exists in the minds of those who write or use the computer’s programs, but never within the computer itself.
Do humans know their own 'true' purpose? Do humans know their own DNA sequences? If a 4-dimensional alien being came to Earth, would they somehow be able to infer that we have consciousness and an AI language model does not? (if so - how would they be expected to do this?)
(IMO the answer to all is no)
One obvious way to tell them apart is that the AI was built by humans and was fed data and information it couldn't have created by itself.
AI doesn't have consciousness because it barely has memory. A context window of 3,000 tokens, or whatever it is, is incredibly small. It has no identity, no memory of its own past actions, and all of its understanding of the world came not from living in it and interacting with it, but from a completely orthogonal training process.
It's not even close. It's a completely different kind of experience. It knows a lot but remembers almost nothing.
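To make the "remembers almost nothing" point concrete, here is a minimal Python sketch of how a stateless chat API typically works: the client resends the conversation on every request, truncated to a fixed token budget, so anything older than the window is simply gone. The 3,000-token figure is taken from the comment above; the tokenizer and function names are made up for illustration.

```python
# Hypothetical sketch: every request is stateless, so the client must
# resend the history, truncated to fit a fixed token budget.

MAX_CONTEXT_TOKENS = 3000  # the fixed window the comment refers to

def count_tokens(message: str) -> int:
    # Stand-in for a real tokenizer; real models count subword tokens.
    return len(message.split())

def build_prompt(history: list[str], new_message: str) -> list[str]:
    """Keep only the most recent messages that fit in the window."""
    kept, budget = [], MAX_CONTEXT_TOKENS - count_tokens(new_message)
    for msg in reversed(history):
        cost = count_tokens(msg)
        if cost > budget:
            break  # everything older than this is simply forgotten
        kept.append(msg)
        budget -= cost
    return list(reversed(kept)) + [new_message]

# After enough turns, early messages fall out of the prompt entirely:
history = [f"turn {i}: " + "word " * 200 for i in range(30)]
prompt = build_prompt(history, "Do you remember turn 0?")
print(prompt[0][:12])  # not "turn 0"; the oldest turns were dropped
```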
Show me a language model with an identity, one that sources all of its understanding and knowledge from its actions and interactions in the world (including learning language itself), and then maybe we can start to compare.
Myself (because I had the impression it had just played a joke on me): "I have a personal question: do you ever play pranks or practical jokes on users?"
ChatGPT: "As an AI language model, I don't have the capability to initiate or participate in pranks or practical jokes. My purpose is to provide helpful and reliable information, as well as assist users in various tasks. If you have any concerns or questions, feel free to ask, and I'll do my best to assist you."
(here it clearly talks about itself, its goals and its limitations)
> "... which sources all is understanding and knowledge to its actions and interactions in the world (including learning language itself)"
Almost no human is capable of doing that, and if someone does, they probably won't look very human ;-)
It's not about having an artificial notion of self. It's about having a real self: memories of who you are, your history, your place in the world, who your friends are and your relationships with them. I could keep going, but it's obvious ChatGPT has none of this, and it has no simple way to acquire it, because it isn't even designed to learn anything from its own interactions beyond a small 3,000-token context window.
Every human learned everything, except a few innate instincts, from their own experiences. They didn't get it fed through some backdoor in their head specifically designed for learning. That's how I see "training" an AI: none of what these models learned came from an interactive experience of the world. All their knowledge came through a backdoor interface directly adjusting their neurons to fit.
You know what you know because of your experiences, not because your creator was hacking your neurons. It's also much more demanding intellectually to understand the world through such a limited interface. Children understand language from orders of magnitude less data.
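For what it's worth, here is a toy sketch of what that "backdoor" amounts to in practice: gradient descent directly edits a model's parameters until its outputs fit a fixed corpus, with no interaction with any world. The corpus, model size and learning rate are all invented for the example.

```python
import numpy as np

corpus = "the cat sat on the mat the cat sat down".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # the "neurons" being adjusted

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Plain SGD on next-word prediction: each step directly edits W so the
# corpus becomes more likely. No experience, no interaction; just fitting.
for _ in range(500):
    for prev, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[idx[prev]])
        grad = p.copy()
        grad[idx[nxt]] -= 1.0  # d(cross-entropy)/d(logits)
        W[idx[prev]] -= 0.1 * grad

print(vocab[int(softmax(W[idx["cat"]]).argmax())])  # prints "sat"
```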
The author seems to assume that human consciousness needs more than computation. That's certainly a valid point of view, but what is the evidence? If you're not even touching the question of what consciousness actually is, how can you say a machine can't ever be conscious? I can't even be sure that humans other than me are conscious (the p-zombie idea), so how can we be sure about machines?
Ironically, I think the narcissistic idea here is that only complex organic entities (humans in particular) can ever be conscious.