
Trying to make things up to cover for a lack of knowledge is something distinctly different, though. This is a situation where ChatGPT can perfectly describe the mistake it made and exactly what it needs to do differently, and then keeps making the same mistake anyway, even on simple tasks. That’s because there’s no underlying model that the words are being connected to.

The equivalent would be telling someone, “Put this on the red plate, not the blue one.” They say sure, then put it on the blue one. You tell them they made a mistake and ask if they know what it was, and they reply, “I put it on the blue plate instead of the red one. I should have put it on the red one.” Then you ask them to do it again, and they put it on the blue plate again. You tell them no, you made the same mistake again: put it on the red plate, not the blue one. They reply, “Sorry, I shouldn’t have put it on the blue plate again; this time I’ll put it on the red one,” and then they put it on the blue plate yet again.

Do humans make mistakes? Sure. But that kind of performance on a test wouldn’t be considered a normal mistake; it would be a sign of serious cognitive impairment.



Even though it was trained on a lot of text, some tasks and some skill combinations appear too rarely, and the model simply didn’t get enough exposure to them. It might be easy enough to collect or generate a dataset for those cases, or the model could act as an agent and create its own dataset.
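
For the self-generation route, here is a minimal sketch of what that loop could look like (call_model, the prompt wording, and the JSON example format are hypothetical stand-ins, not any particular API): prompt the model to author examples of the under-represented skill, filter out malformed or wrong ones, and keep the rest as candidate fine-tuning data.

    import json

    def call_model(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM inference call."""
        # In practice this would hit an inference API; here it returns
        # a canned example so the sketch actually runs.
        return json.dumps({"instruction": "Put the cup on the red plate, not the blue one.",
                           "answer": "red"})

    def passes_check(example: dict) -> bool:
        """Cheap automatic filter: keep only well-formed examples.
        Real pipelines use stronger checks (verifier models, exact-match
        answers, unit tests) to avoid training on the model's own errors."""
        return ({"instruction", "answer"} <= example.keys()
                and example["answer"] in example["instruction"].lower())

    def build_dataset(skill: str, n: int) -> list[dict]:
        """Ask the model to author its own training examples for a rare skill."""
        dataset = []
        for _ in range(n):
            raw = call_model(f"Write one training example as JSON with "
                             f"'instruction' and 'answer' keys, exercising: {skill}")
            try:
                example = json.loads(raw)
            except json.JSONDecodeError:
                continue  # discard malformed generations
            if passes_check(example):
                dataset.append(example)
        return dataset

    data = build_dataset("follow color-based placement instructions", n=5)
    print(f"kept {len(data)} candidate fine-tuning examples")

The filter is the important part: without some independent check on the generated examples, the loop just reinforces whatever failure mode the model already has.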


But the question is: are people with cognitive impairments less conscious than others?



