Hacker News

I think v3.5 is already a lot worse than earlier this year and v4 is basically the only option if you want a reliable answer.

This coincided with the update that made v4 the default option for new chats for Plus subscribers (of which I am one), drastically increased its speed, and removed the hourly message limits.




I hang around the OpenAI forum every day. The common sentiment this month is the opposite: ChatGPT v4 is worse, and v3.5 is actually better for many use cases. 3.5 handles roughly double the input tokens, so it's a lot better for programming and seems to be better at holding long conversations too.

There are still message limits too, they've just been doubled.


So I'm not crazy: 3.5 seems much dumber than before. It used to be able to solve coding problems and produce a roughly 75% working solution; now it spits out literal nonsense or doesn't even try.


While I agree that 3.5 seems to be getting worse, I haven't had it spit out "literal nonsense". It's still pretty decent at spitting out code for me.

What’s an example prompt that will induce nonsense?



