
For now, I wouldn't rank any model from OpenAI in coding benchmarks. Despite all the false messaging they put out, almost every model OpenAI has launched, even the expensive high-end o3 models, is monumentally bad at coding tasks. So this is expected.

If it's decent at other tasks, which I do often find OpenAI better than others at, then I think it's a win, especially for the open-source community: even the AI labs that pioneered the Gen AI hype and never wanted to launch open models are now being forced to release them. That is definitely a win, and not something that was certain before.



It is absolutely awful at writing and general knowledge. IMO coding is its greatest strength by far.


Sure sounds like they're not good at anything in particular, then.


welcome to the 3DTV hype, LLMs are useless...


not really, Claude is amazing, which is why I pay for Claude Max; it's insanely useful. It's just OpenAI's one that isn't.


NVIDIA will probably give us nice, coding-focused fine-tunes of these models at some point, and those might compare more favorably against the smaller Qwen3 Coder.


What is the best local coder model that can be used with ollama?

Maybe too open-ended a question? I can run the DeepSeek model locally really nicely.


Probably Qwen3-Coder 30B, unless you have a titanic enough machine to handle a serious 480B model.
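
For reference, a minimal sketch of hitting it through the ollama Python client, assuming the 30B model is published under the qwen3-coder:30b tag (check `ollama list` or the model library for the exact name on your install):

    # pip install ollama; needs a local ollama server running (ollama serve)
    import ollama

    # Ask the locally served model for a small coding task.
    response = ollama.chat(
        model="qwen3-coder:30b",  # assumed tag; substitute whatever your install lists
        messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    )
    print(response["message"]["content"])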


Is the DeepSeek model you're running a distill, or is it the 671B parameter model?



