It's not unmeasurable. If you ask a friend, "Did you like that movie?", would you be happy if they hadn't seen it, didn't know anything about it, and simply generated a response based on some review data they'd read?
Is that what you want from people? You want them just to report a summary of the textbooks, of other people's reviews? You don't want them to think for a moment, about anything, and have something to say?
This is a radically bleak picture, and it omits, of course, everything important.
We aren't reporting the reports of others. We are thinking about things. That isn't unmeasurable; it is trivial to measure.
Show someone the film, ask them questions about it, and so on -- establish their taste.
NLP systems aren't simulations of anything. They're a parlour trick. If you want a perfect simulation of intelligence, go and show me one -- I will ask it what it likes, and I doubt it'll have anything sincere to say.
There is no sincerity possible here. These systems are just libraries run through shredders; they haven't been anywhere; they aren't anywhere. They have nothing to say. They aren't talking about anything.
You and I are not the libraries of the world cut up. We are actually responsive to the environments we are in. If someone falls over, we speak to help them. We don't, as if lobotomized, rehearse something. When we use words, we use them to speak about the world we are in; this isn't unmeasurable -- it's the whole point.
Why do you think a model of intelligence needs to have tastes, values, likes/dislikes, etc. for it to be something more than statistics or pattern matching? Why are you associating these qualities of consciousness with AGI?
To use a language is just to talk about things. You cannot answer the question, "Do you like what I'm wearing?", if you don't have the capacity for taste.
Likewise, this applies to all language. If we ask, "Do you know what 2+2 is?", *we* might be happy with "4", in the sense that a calculator answers this question. But we haven't actually used language here. To use language is to understand what "2" means.
In other words, the capacity for language is just the capacity to make a public, communicable description of the non-linguistic capacities that we have. A statistical analysis of what we have already said does not have this contact with the world, or the relevant capacities. It's just a record of their past use.
None of these systems are language users; none have language. They have the symbols of words set in an order, but they aren't talking about anything, because they have nothing to talk about.
This is, I think, really obvious when you ask "Did you like that film?", but it applies to every question. We are just easily satisfied when Alexa turns the lights off when we say "Alexa, lights off". This mechanical satisfaction leads some to the frankly schizophrenic conclusion that Alexa understands what turning the lights off means.
She doesn't. She will never say back, "But you know, it'll be very dark if you do that!" or "Would you like the TV on instead?". Alexa isn't having a conversation with you based on a shared understanding of your environment, i.e., using language.
Alexa, like all NLP systems, is an illusion. You aren't speaking to anything. You aren't asking anything a question. Nothing is answering you. You are the only thing in the room that understands what's going on, and the output of the system is meaningful only because you read it.
The system itself attaches no meaning to what it's doing. The lights go off, but not because the system understood your desire. It could not, if it failed to understand, ask about your desire.
You're just reiterating that you think tastes, opinions, likes/dislikes are intrinsic to the issues here. I'm asking why you think these things are intrinsic to language understanding or intelligence.
>To use language is to understand what "2" means.
I've never held a "2", yet I know what 2 is as much as anyone. It is a position in a larger arithmetical structure, and it has a correspondence to collections of a certain size. I have no reason to think a sufficiently advanced model trained on language cannot have the same grasp of the number 2.
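To make that concrete, here's a small sketch of my own (just an illustration, nothing from the thread): "2" pinned down purely by its position in a successor chain, plus its correspondence to two-element collections -- no holding a "2" required.

```python
# A structuralist "2": a position in a chain of successors, nothing more.
from dataclasses import dataclass

@dataclass(frozen=True)
class Zero:
    pass

@dataclass(frozen=True)
class Succ:
    pred: "Zero | Succ"

TWO = Succ(Succ(Zero()))  # the position two steps above Zero

def size(n) -> int:
    """Count the Succ layers in n, i.e. the size of collection it corresponds to."""
    return 0 if isinstance(n, Zero) else 1 + size(n.pred)

# "2" corresponds to any two-element collection.
assert size(TWO) == len(["apple", "orange"])
```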
>A statistical analysis of what we have already said does not have this contact with the world, or the relevant capacities. It's just a record of their past use.
Let's be clear: there is nothing inherently statistical about language models. Our analysis of how they learn and how they construct their responses is statistical; the models themselves are entirely deterministic. So for a language model to respond in contextually appropriate ways, its internal structure must be organized around analyzing context and selecting an appropriate response. That is, its "capacities" are organized around analyzing context and selecting appropriate responses. This, to me, is the stuff of "understanding". The fact that the language model has never felt a cold breeze when it suggests that I close the window if the breeze is making me cold is irrelevant.
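To put the determinism point in concrete terms, a minimal sketch of my own (a Hugging Face-style causal LM, with "gpt2" purely as a stand-in): with greedy decoding there is no sampling step at all, so the same weights and the same context always yield the same continuation. The statistics are in how the weights were fitted, not in how the model runs.

```python
# Deterministic forward pass: identical context in, identical continuation out.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The breeze is making me cold, so perhaps I should"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # do_sample=False means greedy decoding: no randomness anywhere.
    a = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    b = model.generate(**inputs, max_new_tokens=20, do_sample=False)

assert torch.equal(a, b)  # same context, same weights, same output every time
print(tokenizer.decode(a[0], skip_special_tokens=True))
```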
>You aren't speaking to anything. You aren't asking anything a question. Nothing is answering you.
It seems that your hidden assumption is that understanding/intelligence requires sentience. And since language models aren't sentient, they are not intelligent. But why do the issues here reduce to the issue of sentience?
Language is an empirical phenomenon. It's something happening between some animals -- namely, at least, us. It is how we coordinate in a shared environment. It does things.
Language isn't symbols on a page; if it were, a shredder could speak. Is there something we are doing that the shredder is not?
Yes, we are talking about things. We have something to say. We are coordinating with respect to a shared environment, using our capacities to do so.
NLP models are fancy ways of shredding libraries of text, taking the fragments which fall out, and calling them "language". This isn't language. It isn't about anything; the shredder had no intention to say anything.
Mere words are just shadows of the thoughts of their speakers. The words themselves are just meaningless shapes. To use language isn't to set these shapes in order; it's to understand something, to want to say something about it, and to formulate some way of saying it.
If I asked a 5yo child "What is an electron?" and they read a definition from some script, we would not conclude the CHILD had answered the question. They have provided an answer, on the occasion it was asked, but someone else answered the question -- someone who actually understood it.
An NLP model, in modelling only the surface shapes of language *and not its use*, is little more than a tape recorder playing back past conversations, stitched together in an illusory way.
We cannot ask it any questions, because it has no capacity to understand what we are talking about. The only questions it can "answer" are, like the child's, those which occur in the script.
I don't. I believe a perfect simulation of intelligence is intelligence.