I don't agree with most of the comments in here. I also consider the human mind a gigantic pattern-matching loop, and I do not consider myself a layman...
I think Transformer models (like ChatGPT) can encode knowledge of the world into their representations as well as work with that encoded world knowledge when predicting.
Consider the example of the apple that falls: I am sure the embedding (internal representation of words in ChatGPT) for apple contains some form of "physical objectness" that will distinguish it from a word like "vacation". It can also put this "physical objectness" into context and infer what happens and what cannot happen when you let the apple go on Earth vs in outer space. Maybe it would be good for the sceptics to try ChatGPT and ask "What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning." And fill in X with any object or concept that you can think of.
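To make that concrete, here is a toy sketch - hand-made 4-dimensional vectors, not ChatGPT's actual embeddings, which are learned and much higher-dimensional - of how cosine similarity over embeddings can separate "physical object" words from abstract ones:

```python
import numpy as np

# Toy illustration only: hand-made 4-d vectors, NOT ChatGPT's real embeddings.
# The first dimension loosely stands in for "physical objectness".
embeddings = {
    "apple":    np.array([0.9, 0.1, 0.3, 0.2]),
    "rock":     np.array([0.8, 0.0, 0.4, 0.1]),
    "vacation": np.array([0.1, 0.9, 0.2, 0.6]),
    "justice":  np.array([0.0, 0.8, 0.5, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 when the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "apple" lands closer to "rock" than to "vacation" or "justice" - the kind
# of geometric structure real learned embeddings exhibit at a larger scale.
for word in ("rock", "vacation", "justice"):
    print(f"apple vs {word}: {cosine(embeddings['apple'], embeddings[word]):.2f}")
```

Whether that kind of geometric structure counts as knowing what an apple is, of course, is exactly what's in dispute here.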
What difference does the embedding make? It has no meaning to ChatGPT - it only has meaning to humans, who must discern meaning from the embedding. This is Chomsky's exact point.
>"What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning." And fill in X with any object or concept that you can think of.
What would a skeptic achieve by asking its reasoning when ChatGPT cannot provide you with its reasoning? Seems like maybe you're the one who's confused here, not Chomsky (let alone everyone else in the thread).
> What would a skeptic achieve by asking its reasoning when ChatGPT cannot provide you with its reasoning?
"If you let go of an apple from your hand on Earth, it will fall downwards due to the force of gravity. This is because the Earth exerts a gravitational force on all objects near its surface, and this force causes objects to accelerate downwards at a rate of approximately 9.8 meters per second squared.
In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.
However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."
Looks like reasoning to me. In seriousness, the reason it's able to generate this output is because it does look for explanations. Those explanations are in the form of weights and biases rather than organic neurons, and the inputs are words instead of visuals, but the function is the same, and neither is a perfect representation of our world. Recognizing patterns is the same thing as an explanation.
>Looks like reasoning to me. In seriousness, the reason it's able to generate this output is because it does look for explanations.
Yeah, it looks like reasoning, but it isn't, because it's not the reasoning that ChatGPT used - it's just, once again, fitting whatever would be the most likely next word for the situation. It's not using logic or reasoning to do that, it's using statistics.
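For a concrete (and extremely simplified) picture of what "most likely next word from statistics" means, here's a toy bigram sketch - made-up corpus, nothing like the actual transformer architecture, which conditions on far more context:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# made-up corpus, then always emit the most frequent follower. No logic,
# no world model - just counts.
corpus = "the apple falls to the ground because the apple is heavy".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_likely_next(word):
    # Pick whichever word most often followed `word` in the training text.
    return next_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))    # 'apple' (followed "the" twice vs "ground" once)
print(most_likely_next("apple"))  # 'falls' (ties broken by first occurrence)
```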
It's as if you flat out do not understand how ChatGPT works. ChatGPT cannot provide you with reasoning because it does not reason. So asking it to provide reasoning just indicates that you do not understand how ChatGPT works and that you also misunderstood the Op-Ed.
>In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.
>However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."
Who fucking cares? The point isn't about zero gravity in space - the point is w/r/t what is happening inside of ChatGPT...
It’s as if you don’t know how ChatGPT or the human brain works. The correlations are built into a prediction model. Sometimes those predictions can be near certain, which is indistinguishable from human understanding.
You can see this quite clearly when the same neuron lights up for any prompt related to a certain topic. It’s because there’s actual abstraction being done.
>The correlations are built into a prediction model. Sometimes those predictions can be near certain, which is indistinguishable from human understanding.
This is quite literally not what the word understanding means, and trying to use my words against me in this way just makes you seem smarmy and butthurt. And if you are going to converse with me like that, I'm not going to engage when your material is a) pointed and aggressive, and b) completely non-responsive to what I wrote.
>You can see this quite clearly when the same neuron lights up for any prompt related to a certain topic. It’s because there’s actual abstraction being done.
When you ask a question to a human that has to do with a concept - in the above article it's Halle Berry because it's a funny discovery, but it could be as broad as science - you can often map those concepts to specific neurons or groups of neurons. Even if the question you ask them doesn't contain the word "science", it still lights up that neuron if it's about science. The same is true of neural networks. They eventually develop neurons that mean something, conceptually.
It's not always true that the neurons that neural networks develop are the same ones that humans have developed, but it is true that they aren't thinking purely in words: they have a map of how concepts relate and interact with one another. That's a type of meaning, and it's a real model of the world - not the same one we have, and not even close to perfect, but neither is ours.
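As a cartoon of what such a "concept neuron" looks like - hand-picked vocabulary and weights, not a probe of any real model:

```python
import numpy as np

# Toy sketch: a single "hidden unit" whose weights happen to load on
# science-related tokens. Vocabulary, prompts and weights are all made up.
vocab = ["the", "physics", "experiment", "holiday", "beach", "gravity", "cat"]
science_unit_weights = np.array([0.0, 1.0, 0.9, 0.0, 0.0, 0.8, 0.0])

def bag_of_words(prompt):
    # Crude bag-of-words encoding over the toy vocabulary.
    tokens = prompt.lower().split()
    return np.array([tokens.count(w) for w in vocab], dtype=float)

def concept_activation(prompt):
    # The unit "lights up" for science-flavoured prompts even though the
    # word "science" itself never appears.
    return float(science_unit_weights @ bag_of_words(prompt))

print(concept_activation("the experiment measured gravity"))  # 1.7 (high)
print(concept_activation("the cat went to the beach"))        # 0.0
```

In a real network those weights are learned rather than hand-picked; interpretability work on concept neurons is about finding units like this after training.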
> they have a map of how concepts relate and interact with one another
Yeah, but not one that operates the way Chomsky described. It can't tell you if the earth is flat or not. Humans figured it out. ChatGPT can only tell you what other humans have already said. It doesn't matter that it does so based on a neural net. You completely missed the point.
ChatGPT can tell you the earth is round. You can ask it yourself.
If you’re saying ChatGPT can’t look at the cosmos and deduce it, well it doesn’t have access to visual input, so that’s not the dunk you think it is.
If you’re saying ChatGPT can’t learn from what you tell it, that’s a design decision by openAI, not inherent to machine learning.
There are absolutely models that can do primitive versions of deducing the earth's roundness, and ChatGPT can deduce things based on text (e.g. you can ask it to play a chess position that's not in its training set and it will give reasonably good answers most of the time).
Lol you can't just end-run the conversation by calling what ChatGPT does learning and then just leaving it at that. It's not an argument, it's a tautology.
> ChatGPT can deduce things
No, it can't, because it doesn't understand anything about chess; it's just determining the best response based on the information fed into it. It's not discerning rules.
You are just fundamentally ignoring Chomsky's point as a means of trying to rebut him. It doesn't work like that. He gave a fairly explicit example of what intelligence is and why ChatGPT does not express it, and your response is basically "but ChatGPT is intelligent because it gave me an answer to something I asked it". These conversations would be so much better if they didn't constantly have to revolve around this sort of basic failure to launch in thinking through these problems.
>If you’re saying ChatGPT can’t learn from what you tell it, that’s a design decision by openAI, not inherent to machine learning.
Well, if "machine learning" = "just throw more data at an ever-larger network of statistical weights" in the hope that it can better approximate actually knowing things, then the inability to learn things is inherent to machine learning.
The argument is that these networks don't understand the concepts that we use to generate our intelligence. It doesn't understand what the world is. It doesn't understand what an object is. It only understands statistical associations with words.
The symptoms of this manifest as hallucinations that it can't correct based on its own fact-checking. It can happily tell you the world is flat at some point even if it told you it was round earlier because there's no concept of "world" upstairs. It's just math on strings.
When it's computing that chess position, it's not picturing a board and a game and objectives about capturing pieces and defeating an opponent; the argument is that it's just doing stats on text (chess notation text). It's a black box stuffed to the brim with brute force "intelligence" from its training data, and it's using that to seem intelligent in the sense that we're intelligent -- actually able to learn and reason with concepts.
The reason we don't need as much training data and can actually learn concepts may be that our brains use a similar-yet-different (we have no idea) mechanism in which concepts can actually be represented, while language models are boiling the ocean trying to encode abstract concepts in the extremely inexpressive terms of a statistical circuit. And if that's all language models can do, they'll never be able to encode concepts the way our brains can. You'd need a galaxy-sized computer to store the concepts a quarter of a human brain can, say, unless you use a better method. Think of trying to write code directly in machine binary: it'll take you forever compared to doing it in Python.
I agree with you, but I'm not sure it matters, and we could say the same thing about a person. We cannot prove that a human reasons, only that they output text that looks like it could be reasoning.
No, you can't say the same thing about a person, because a person can express reasoning. ChatGPT can't, because it can't reason. You may ask yourself, "wow, is there perhaps a magical algorithm in humans capable of reasoning that is the ultimate source of what emanated from this other person, given that I'm not actually sure anyone else is real?" - but that's totally different from what's going on with ChatGPT when it just puts out more dreck that gets arbitrarily labeled "this is reasoning". Like, try to read the article - he deals with this point EXACTLY.
> Maybe it would be good for the sceptics to try ChatGPT and ask "What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning."
And this will show the sceptics exactly what? That ChatGPT language models have sufficient info about the ideas of space to be reasonably correct, for some definition of correct.
It definitely cannot predict something outside its area of knowledge, or construct plausible theories, as evidenced by numerous examples where it's plain wrong even in the simplest of cases.
To add here: for a local minimum to occur, the loss has to increase along every one of those dimensions (or features) as you move away from the point. That is highly unlikely for modern NNs, where you have millions of dimensions. If the loss goes down along even one dimension while going up along the rest, you have a saddle point instead. Since you can only descend along that one (or a few) dimension(s), it takes longer.
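A minimal numerical sketch of that test - toy 2-d functions, nothing to do with a real NN loss surface: a local minimum needs all Hessian eigenvalues positive at the critical point, while a single negative eigenvalue makes it a saddle:

```python
import numpy as np

def hessian_eigenvalues(f, x, eps=1e-4):
    # Numerical Hessian of f at x via central differences, then its eigenvalues.
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x_pp = x.copy(); x_pp[i] += eps; x_pp[j] += eps
            x_pm = x.copy(); x_pm[i] += eps; x_pm[j] -= eps
            x_mp = x.copy(); x_mp[i] -= eps; x_mp[j] += eps
            x_mm = x.copy(); x_mm[i] -= eps; x_mm[j] -= eps
            H[i, j] = (f(x_pp) - f(x_pm) - f(x_mp) + f(x_mm)) / (4 * eps**2)
    return np.linalg.eigvalsh(H)

minimum = lambda x: x[0]**2 + x[1]**2   # bowl: curves up in every direction
saddle  = lambda x: x[0]**2 - x[1]**2   # saddle: curves down along one axis

origin = np.zeros(2)
print("bowl:  ", hessian_eigenvalues(minimum, origin))  # ~[2, 2]  -> local minimum
print("saddle:", hessian_eigenvalues(saddle, origin))   # ~[-2, 2] -> saddle point
```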