
That's because it's still an early version, and it's trained on data that's over two years old. OpenAI needs to figure out how to refresh its training regularly. Newer models that are better at programming should also help.



Old training data is an excuse for not knowing about recent things. It's not an excuse for hallucinating answers to questions it doesn't know the answer to (or for which no answer exists).

I have seen nothing to suggest that future models will fix the hallucination problem. I wouldn't be surprised if it's just an inherent, unfixable issue with the concept of neural network language models: they don't have a model of the world from which they produce logically coherent text, they simply have a giant statistical model of which words tend to occur in which order. So I don't see why we should expect a language model not to hallucinate. But I'm not an expert on the current state of the art, so if you have some links to papers which indicate that the language model hallucination problem is solvable, I'm open to admitting I may be wrong.
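To make the "giant statistical model" point concrete, here's a toy sketch. It's a bigram model, vastly simpler than any real LLM and purely illustrative: it generates fluent-looking continuations from word co-occurrence counts alone, with no notion of whether the resulting claim is true.

  import random
  from collections import defaultdict

  # Tiny illustrative corpus; a real model trains on far more text.
  corpus = (
      "the function returns a list of results . "
      "the function returns a dictionary of values . "
      "call the function with a list of arguments . "
  ).split()

  # Count which word tends to follow which (bigram statistics).
  follows = defaultdict(list)
  for cur, nxt in zip(corpus, corpus[1:]):
      follows[cur].append(nxt)

  def generate(start, length=8):
      """Sample a continuation purely from co-occurrence frequency."""
      word, out = start, [start]
      for _ in range(length):
          candidates = follows.get(word)
          if not candidates:
              break
          word = random.choice(candidates)  # frequency, not truth
          out.append(word)
      return " ".join(out)

  print(generate("the"))
  # e.g. "the function returns a list of arguments ." -- reads fine,
  # but the claim is assembled from statistics, not checked against
  # any ground truth.

The output looks plausible because plausibility is exactly what the statistics encode; whether a statement is correct never enters into it, which is the hallucination concern in miniature.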



