
Unless you are living on technology's bleeding edge, ChatGPT gets it right plenty of the time and isn't a rude brick about it.

If it gets it wrong, that's usually apparent pretty quickly, and it normally gives enough hints about avenues to explore.

At least, my Stack Overflow use (and programming-related googling) has fallen dramatically...




When I've tried using it, I've generally found that it leads me in circles between incompatible versions of a framework or tool -- it'll give me syntax that's correct for one version but then use it to call a function that only exists in a different version, that sort of thing. These aren't even hallucinations; each step is technically correct on its own, but the steps can't be used together successfully.


Another thing I've found is that it can make "obvious omissions". As in: the answer is correct, and it works, but it's a bad solution, and there's a much more obvious and better way to solve the problem.

"Technically correct", but also not good.



