Can a thing which doesn't understand actual concepts actually lie? Lying implies knowing that what is being said is false or misleading.
An LLM can only make predictions of word sequences and suggest what those sequences may be. I'm beginning to think our appreciation of their capabilities comes down to humans being very good at anthropomorphizing our tools.
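For what it's worth, the "predicts word sequences" framing can be made concrete. Here's a toy sketch of autoregressive generation - plain Python with an invented vocabulary and made-up probabilities, nothing like a real model, just the shape of the loop:

```python
import random

# Toy "language model": maps a context (the last word) to a
# probability distribution over possible next words.
# All words and numbers here are invented for illustration.
TOY_MODEL = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.3, "bug": 0.2},
    "a":       {"cat": 0.5, "bug": 0.5},
    "cat":     {"sat": 0.7, "ran": 0.3},
    "dog":     {"ran": 1.0},
    "bug":     {"crashed": 1.0},
    "sat":     {"<end>": 1.0},
    "ran":     {"<end>": 1.0},
    "crashed": {"<end>": 1.0},
}

def sample_next(context: str) -> str:
    """Sample one next word from the model's distribution."""
    dist = TOY_MODEL[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate() -> str:
    """Generate a sentence one word at a time, each word
    conditioned on what came before."""
    word, out = "<start>", []
    while True:
        word = sample_next(word)
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())  # e.g. "the cat sat"
```

A real LLM conditions on the whole context through a transformer rather than just the previous word, but the generation loop - sample a token, append it, repeat - is the same shape.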
It's really hard to say just how clever AI is getting, IMO (as a non-expert in the field).
On one hand people say transformer models are just sophisticated autocomplete engines. You look at how they work, and yes this seems to be true.
But then you can give an LLM a completely new problem, not similar to anything it has been trained on - for example, a snippet of code with a request to find the bug.
And it can do this. It can explain what the bug is and give you a solution. It gives all appearances of completely understanding the problem you have given it: it can pick the problem apart, explain it, and solve it. I have done this when stuck on various things with great success.
It really does make me wonder about the nature of our own intelligence, if a program can emulate so much of it but with such curious limitations - such as the difficulty an LLM has telling the difference between a correct answer and an incorrect answer. Nearly all answers are given with 100% confidence.
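That last point can be seen in miniature: a model's raw token probabilities do carry uncertainty, but decoding to text throws most of it away, so a nearly flat distribution and a sharply peaked one both come out as equally fluent, "confident" prose. A toy illustration with made-up logits (not from any real model):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["correct", "wrong", "maybe"]

peaked = softmax([5.0, 0.0, 0.0])   # model is fairly sure
flat   = softmax([0.1, 0.0, 0.0])   # model is almost guessing

# Greedy decoding picks the argmax either way; the reader of the
# final text never sees how close the call was.
for name, probs in [("peaked", peaked), ("flat", flat)]:
    best = max(range(len(vocab)), key=lambda i: probs[i])
    print(name, vocab[best], round(probs[best], 2))
```

Both distributions decode to the same word, even though in one case the model assigned it over 90% probability and in the other barely more than a third.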
>It's really hard to say just how clever AI is getting, IMO (as a non-expert in the field).
>But then you can give an LLM a completely new problem, not similar to anything it has been trained on - for example, a snippet of code with a request to find the bug. And it can do this. [...] I have done this when stuck on various things with great success.
I'm afraid you follow the same way of thinking about AI as the authors of the article: you accept the anthropomorphization of AI programs. You also use an unconfirmed assumption in your anecdotal example ("completely new problem, not similar to anything they have been trained on") to support your unjustified delight in AI capabilities.
Both are - in my opinion - bad for AI development, as they support a misunderstanding and a false image of LLMs and their application in the real world, just as "I, Robot" created a false understanding of robotics (and AI...).
Since frontier models evolved beyond the very basic stuff from maybe 2020, "an LLM can only make predictions of word sequences" describes only a small fraction of the inner processes that frontier systems use to get from a prompt to a written answer.
e.g. output filtering (grammar, probably), several layers of censoring, and in some cases limited second-hand internet access to enrich answers with newer data (à la Grok with X live data), etc.
Just as you said "predicts the next word", you could invent and/or define a new verb to specifically describe what an LLM does when it "understands" something, or when it "lies" about something.
Most probably, the actual process of "lying" for an LLM is far removed from the way humans understand lying. It is more precisely described as passing through several layers of mathematical operations, translating the result into text, then having that text filtered, censored, enriched, and so on - at the end you read the output and conclude the thing is "lying" to you.
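The layered view described above can be sketched as a pipeline. To be clear, everything below is hypothetical: the stage names and behaviors are invented stand-ins, since what any given provider actually runs between the model and the user is not public:

```python
from typing import Callable, List

# Hypothetical post-processing stages. Real systems' internals are
# not public; these are illustrative stand-ins only.
def base_model(prompt: str) -> str:
    """Stand-in for the raw next-token-prediction model."""
    return f"raw completion for: {prompt}"

def grammar_filter(text: str) -> str:
    """e.g. constrain output to a valid grammar or format."""
    return text.capitalize()

def moderation(text: str) -> str:
    """e.g. block or rewrite disallowed content."""
    return text.replace("forbidden", "[removed]")

def enrichment(text: str) -> str:
    """e.g. splice in fresher data from a live source."""
    return text + " (with recent data)"

PIPELINE: List[Callable[[str], str]] = [grammar_filter, moderation, enrichment]

def answer(prompt: str) -> str:
    """Run the raw completion through every stage in order."""
    text = base_model(prompt)
    for stage in PIPELINE:
        text = stage(text)
    return text  # this is what the user finally reads

print(answer("why is the sky blue?"))
```

The point of the sketch is just that "the LLM said X" conflates several transformations: what you read is the output of the whole pipeline, not the raw model.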
>An LLM can only make predictions of word sequences and suggest what those sequences may be. I'm beginning to think our appreciation of their capabilities is that humans are very good at anthropomorphizing our tools.
Is this the right way of looking at things?