LeCun misses the point by a mile, which is weird at his level. LLMs absolutely do perform problem solving, every time you feed them a prompt. The problem-solving doesn't happen on the output side of the model, it happens on the input side.
Someone objected that a cat can't write a Python program, and LeCun points out that "Regurgitating Python code does not require any understanding of a complex world." No, but a) interpreting the prompt does require understanding, and good luck finding a dog or cat who will offer any response at all to a request for a Python program; and b) it's hardly "regurgitating" if the output never existed anywhere in the training data.
I haven't used GPT-3 to generate code for me, but I use Copilot all the time. Sometimes it freaks me out with its prescience, but most of the time it generates either nice one-liners or a lot of plausible-sounding rubbish that would never build, much less run on its own. It invents a plausible API that is similar to the one in my app, but not the same; it doesn't integrate any actual structural knowledge of the codebase, it's just bullshitting.
“Write a Python script that returns a comma separated list of arns of all AWS roles that contain policies I specify with the “-p” parameter using argparse”
Then I noticed there was a bug: AWS API calls are paginated, and it would only return the first 50 results.
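For reference, here's a minimal sketch of what a corrected version might look like, using boto3's paginators so results beyond the first page aren't dropped. The argument names and the assumption that "policies" means attached managed policies matched by name are mine, not from the original prompt.

```python
# Sketch only: assumes "-p" takes policy names and matches them against
# each role's attached managed policies.
import argparse

import boto3


def roles_with_policies(policy_names):
    iam = boto3.client("iam")
    matching_arns = []

    # Paginate over all roles instead of taking only the first page.
    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            attached = []
            # Attached policies are paginated too.
            for p_page in iam.get_paginator("list_attached_role_policies").paginate(
                RoleName=role["RoleName"]
            ):
                attached.extend(p["PolicyName"] for p in p_page["AttachedPolicies"])
            if any(name in attached for name in policy_names):
                matching_arns.append(role["Arn"])

    return matching_arns


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--policies", nargs="+", required=True,
                        help="Policy names to match against each role")
    args = parser.parse_args()
    print(",".join(roles_with_policies(args.policies)))
```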
Well, I'd hope that's what's going on in Copilot. It definitely does seem to be trained on my code to some extent, but it doesn't have anything I'd call a semantic understanding of it.
Everyone is so quick to say how unimpressed they are by the thing; meanwhile, I'm sitting here amazed that it understands what I say to it every single time.
I can speak to it like I would speak to a colleague, or a friend, or a child and it parses my meaning without fail. This is the one feature that keeps me coming back to it.
The difference here is that a cat or dog hasn't been trained to write a Python program, and it probably isn't possible: the weights and activation functions of a cat brain simply won't allow it.
TL;DR: his FOMO is showing.