
> "Can't" you say. "Does", I say:

Have you seriously not seen them make these kinds of grave mistakes? You're drinking too much kool-aid.




I literally gave you a link to a ChatGPT session where it did what you said it can't do.

And rather than use that as a basis for claiming that it's reasoning, I'm also saying that the test you proposed, and which I falsified, wasn't actually about reasoning.

Not sure what that would even be in a kool-aid-themed metaphor in this case… "You said that drink was poisoned with something that would make our heads explode, Dave drank some and he's fine, but also poison doesn't do that, and if the real poison is α-Amanitin we wouldn't even notice problems for about a day"?




