Yes. I have been using ChatGPT quite a bit for programming tasks. One of the things I've been trying is chain-of-thought prompting: asking the model to review its own code, line by line, evaluating each line for the presence of a certain type of bug or some other criterion.
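To make that concrete, here's a minimal sketch of the kind of prompt I mean, using the openai Python client. The model name, the sample function, and the exact prompt wording are illustrative, not what I actually used:

```python
# Minimal sketch of a line-by-line review prompt (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical function under review.
code = '''\
def mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)
'''

prompt = (
    "Review the following function one line at a time. For each line, "
    "quote the line, explain what it does, and state whether it can "
    "raise an exception on valid input. Then give an overall verdict.\n\n"
    + code
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```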
This has been illuminating: as ChatGPT steps through the lines of code, its "analysis" often discusses material that _is not present in the line of code_. It then reaches a "conclusion" that may happen to be correct or incorrect, but that bears no real relationship to the actual code.