
Because it needs to be hallucinated.



I don't understand; a Stack Overflow post could tell you the same thing as the LLM. Or are you just being sarcastic? Recent LLMs are fine for smaller things like cron or regex (I use them daily), but yes, for larger pieces of code they're more likely to be incorrect.


I mean, yes, an SO answer could also be wrong, or you could ask a human pathological liar, but those are much less likely to be wrong to begin with, and far less likely to "explain" a wrong answer convincingly.

> Recent LLMs are fine in terms of smaller things like cron

This is obviously false given the contents of this thread.

Indeed, SO (well, SU) got this right: https://superuser.com/questions/428807/run-a-cron-job-on-the...

And crontab(5) explicitly calls out why this is impossible:

       Note: The day of a command's execution can be specified by two fields —
       day  of  month,  and  day  of week.  If both fields are restricted (ie,
       aren't *), the command will be run when either field matches the current
       time.
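
For what it's worth, one common workaround (which I believe is what that Super User thread suggests) is to schedule the job on days 28-31 and have the command itself check whether tomorrow is the 1st. A minimal sketch, assuming GNU date and a hypothetical /path/to/monthly-job.sh:

    # Fire at 23:55 on the 28th-31st, but only run the job when tomorrow is the
    # 1st, i.e. today is the last day of the month. % must be escaped in crontab.
    55 23 28-31 * * [ "$(date -d tomorrow +\%d)" = "01" ] && /path/to/monthly-job.sh

The day-of-week field is left as *, so the either-field-matches rule quoted above doesn't get in the way.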


> This is obviously false given the contents of this thread.

Not in my experience. GPT-4 works a lot better than previous LLMs. Have you used GPT-4 yet? Typically, in my experience, the people who talk about LLMs not being useful (or being "pathological liars") haven't used them recently, or at all, so their opinion isn't really worth considering.


Sorry, I'm talking about the actual claims in this thread, not your religion.


I have an "actual claim" in this thread, but because mine doesn't agree with your beliefs about LLMs being bad, you ironically claim it's my religion instead, lol.


Your claim then seems to be that galkk is just plain lying? That seems uncharitable.


Lying? No, I never said that. Their version of ChatGPT (3.5, which is older) or Copilot (the Codex model it uses is much older) might very well be hallucinating wrong answers, sure. But my claim was that the newer models such as GPT-4 work well for certain classes of problems, so no, it's not "obviously false" that they work, which is what you claimed. Does GPT-4 get stuff wrong sometimes too? Sure, I never claimed it's perfect either. But unlike for the people in this thread, it has generally worked well in my experience, and if you want to discount that experience and only listen to people who confirm your beliefs, you do you.



