
I disagree that those questions are good examples of GPT-4 pitfalls.

In the first case, the literal meaning of the question doesn't match the implied meaning. "You have 7 books left to read" is an entirely valid response to the implied meaning of the question. I could imagine a human giving the same response.

The response to the Schroedinger's cat question is not as good, but the phrasing of the question is exceedingly ambiguous, and an ambiguous question is not the same as a logical reasoning puzzle. Try asking that question to humans: I suspect well under 50% would answer "alive" (as opposed to "What do you mean?" or some other attempt to disambiguate the question).

Agree


