The other day I was working with ChatGPT and I asked it to extract entities from a text passage and "...generate a unique ID for each extracted entity by MD5 hashing the entity value and type." And it did just that!
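For anyone curious what such an ID scheme might look like: the post doesn't say exactly how ChatGPT combined the value and type before hashing, so this is just a minimal sketch, assuming the two fields are concatenated with a separator (the `"|"` delimiter and the function name are my own invention):

```python
import hashlib

def entity_id(value: str, entity_type: str) -> str:
    """Hypothetical ID scheme: MD5 of 'value|type', returned as a hex digest."""
    combined = f"{value}|{entity_type}".encode("utf-8")
    return hashlib.md5(combined).hexdigest()

# Deterministic: the same entity always maps to the same 32-char hex ID.
print(entity_id("Acme Corp", "ORG"))
```

The point is that this is a purely mechanical computation — exactly the kind of thing a next-token model can't reliably do "in its head," which is what makes the question interesting.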
I get the next-token probability model concept in LLMs, but how the heck can it write and run code while forming the response? Is that actually what's going on behind the curtain?
Thanks smart HN people.
However, although GPT-4o does have access to a code-execution tool, it's possible that no code was actually run and the LLM simply generated output that looks correct (but isn't).