What's interesting to me is how ChatGPT with GPT-4 can randomly fail to follow your instructions (for the same instructions, in the same session). It's these kinds of behaviors that show it is, in fact, not intelligent; the probabilities just line up well enough for the emergent behavior to look like it is.


By that measure most humans are not intelligent … go to a restaurant 10 times and tell me the failure rate of getting precisely what you asked for :)


If that restaurant is McDonald's or KFC, then about 10–15%?



