LeCun misses the point by a mile, which is weird at his level. LLMs absolutely do perform problem solving, every time you feed them a prompt. The problem-solving doesn't happen on the output side of the model, it happens on the input side.

Someone objected that a cat can't write a Python program, and LeCun points out that "Regurgitating Python code does not require any understanding of a complex world." No, but a) interpreting the prompt does require understanding, and good luck finding a dog or cat who will offer any response at all to a request for a Python program; and b) it's hardly "regurgitating" if the output never existed anywhere in the training data.

TL;DR: his FOMO is showing.



I haven't used GPT-3 to generate code for me, but I use Copilot all the time. Sometimes it freaks me out with its prescience, but most of the time it generates either nice one-liners or a lot of plausible-sounding rubbish that would never build, much less run on its own. It creates a plausible API that is similar to the one in my app, but not the same; it doesn't integrate any actual structural knowledge of the codebase, it's just bullshitting.


This is a script I told ChatGPT to write:

“Write a Python script that returns a comma separated list of arns of all AWS roles that contain policies I specify with the “-p” parameter using argparse”

Then I noticed there was a bug: AWS API calls are paginated, and it would only return the first 50 results.

“that won’t work with more than 50 roles”

Then it modified the code to use “paginators”.

Yes, you can find similar code on Stack Overflow:

https://stackoverflow.com/questions/66127551/list-of-all-rol...

But ChatGPT met my specifications exactly.

ChatGPT “knows” the AWS SDK for Python pretty well. I’ve used it to write a dozen or so similar scripts, some more complicated than others.
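For reference, here's a minimal sketch of the kind of script described above (my own reconstruction, not ChatGPT's actual output; it assumes boto3 and interprets "-p" as the names of attached managed policies, which may not match the exact spec):

    # Sketch only; assumes "-p" matches attached managed policy names.
    import argparse
    import boto3

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("-p", "--policies", nargs="+", required=True,
                            help="policy names to match against each role")
        wanted = set(parser.parse_args().policies)

        iam = boto3.client("iam")
        arns = []
        # IAM list calls are paginated; paginators loop over every page
        # instead of stopping at the first truncated response.
        for page in iam.get_paginator("list_roles").paginate():
            for role in page["Roles"]:
                attached = iam.get_paginator("list_attached_role_policies")
                names = {p["PolicyName"]
                         for pg in attached.paginate(RoleName=role["RoleName"])
                         for p in pg["AttachedPolicies"]}
                if wanted & names:
                    arns.append(role["Arn"])
        print(",".join(arns))

    if __name__ == "__main__":
        main()

The fix for the 50-result bug is exactly the paginator part: paginate() handles the Marker/IsTruncated loop for you, so you see every page rather than just the first.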


OK, that actually sounds hugely useful. It makes sense that for very well-known APIs it will get them quite accurately.


I wonder how well it would work if you seeded it with the inputs and outputs of custom APIs and then told it to write code based on your API.


Well, I'd hope that's what is going on in Copilot. It definitely does seem to be trained on my code to some extent, but it doesn't have anything I'd call a semantic understanding of it.


Again, the interesting part is what happens on the input side.

I can't believe I'm the only person who sees it that way. Likely the legacy of a misspent youth writing Zork parsers...


It's strange, isn't it?

Everyone is so quick to say how unimpressed they are by the thing; meanwhile, I'm sitting here amazed that it understands what I say to it every single time.

I can speak to it like I would speak to a colleague, or a friend, or a child, and it parses my meaning without fail. This is the one feature that keeps me coming back to it.


The difference here is that a cat or dog hasn't been trained to write a Python program, and it probably isn't possible: the weights and activation functions of a cat brain simply won't allow it.



