Your claim then seems to be that galkk is just plain lying? That seems uncharitable.

Lying? No, I never said that. Their version of ChatGPT (3.5, which is older) or Copilot (the Codex model it uses is much older) might very well be hallucinating wrong answers, sure. But my claim was that the newer models, such as GPT-4, work well for certain classes of problems. So no, it's not "obviously false" that they work, which is what you claimed. Does GPT-4 get stuff wrong sometimes too? Sure, I never claimed it was perfect. But unlike for others in this thread, it has generally worked well for me. If you'd rather discount my experience and only listen to people who confirm your beliefs, you do you.
