Hacker News
Ask HN: Functional Fixedness and the Game No ChatBot Can Play
1 point by abrax3141 11 months ago
Since the outset of this brouhaha I have been testing these supposed genius AIs on a game that I use in my very first AI classes to teach decision trees, and that I have direct evidence can be learned and played by a seven-year-old with no trouble. Yet none of them (including this morning's testing with the new Bard and the latest GPT4-whatever) is able to play it.

The game is a version of the classic animals game, but modified to teach (as above) about decision trees, and in particular decision tree learning. It appears to confuse the AIs in various ways because it is a modification of a game that is extremely well understood -- the classical animals guessing game -- but instead of just running a fixed tree, in my version I learn the tree progressively, by asking the chooser (the person who has thought of the animal) for a question that distinguishes my guess (which is the last node I come to, of course) from their choice. Then, when they choose a new animal, I start at the top again, run the tree asking the questions (including the new one, if I end up down that branch), and so on. If you're into very old AI, you'll recognize this as a basic version of Feigenbaum and Simon's EPAM algorithm.

No AI-based chatbot seems able to grok this game. They make many, many mistakes, including picking the same animal over and over (which seems like a silly error, but even the most recent Gemini-enabled Bard and OpenAI GPT fixate on the same animal repeatedly), "lying" (answering the really simple questions wrongly, even though they have given me all the questions!), and trying to tell me how to play the game, as though it were the classical animals game. And many other confusions.

I hypothesize that this game is close enough to the very common classical animals game that the GPTs are unable to distance themselves from it. I've tried many sorts of prompts and contexts, even down to telling the bot the rules in detail in the prompt, and yelling at it over and over not to do things it persists in doing. I am, of course, unsurprised at this behavior; I chose this game precisely because it's so close to the original as to be confusing, and expected this sort of functional fixedness.

The question is: can someone get a (general) bot to play it correctly, including switching roles -- that is, once I've demoed it, the bot becomes the guesser, we reset the tree and start again, and it builds the tree? I've never even gotten to that point, because I can't get beyond the demo stage with them! (This whole "the prompt is the program" thing is, IMHO, deep insanity!)
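
For concreteness, here is a minimal sketch in Python of the progressive tree-learning loop I mean. It is only an illustration of the idea, not the exact version I use in class; the names and prompts are made up:

    class Node:
        """One node of the discrimination tree: either a question or a guess."""
        def __init__(self, question=None, yes=None, no=None, animal=None):
            self.question = question   # internal node: a yes/no question
            self.yes = yes             # subtree taken on a "yes" answer
            self.no = no               # subtree taken on a "no" answer
            self.animal = animal       # leaf node: the animal to guess

    def ask(prompt):
        return input(prompt + " (y/n) ").strip().lower().startswith("y")

    def play_round(node, parent=None, branch=None):
        """Walk the tree asking questions; at a leaf, guess and maybe learn."""
        if node.animal is None:                      # internal node: ask and descend
            if ask(node.question):
                play_round(node.yes, parent=node, branch="yes")
            else:
                play_round(node.no, parent=node, branch="no")
            return

        if ask("Is it a " + node.animal + "?"):      # leaf: make the guess
            print("Got it!")
            return

        # Wrong guess: ask the chooser for a new distinguishing question.
        new_animal = input("I give up. What were you thinking of? ").strip()
        question = input("Give me a yes/no question that distinguishes a "
                         + new_animal + " from a " + node.animal + ": ").strip()
        yes_means_new = ask("For a " + new_animal + ", is the answer yes?")

        # Splice a new question node in where the old leaf was.
        new_leaf, old_leaf = Node(animal=new_animal), Node(animal=node.animal)
        replacement = Node(question=question,
                           yes=new_leaf if yes_means_new else old_leaf,
                           no=old_leaf if yes_means_new else new_leaf)
        if parent is None:                           # the root itself was a leaf
            node.question, node.yes, node.no = (replacement.question,
                                                replacement.yes, replacement.no)
            node.animal = None
        elif branch == "yes":
            parent.yes = replacement
        else:
            parent.no = replacement

    if __name__ == "__main__":
        root = Node(animal="elephant")               # start with one arbitrary guess
        while True:
            print("Think of an animal; I'll try to guess it.")
            play_round(root)
            if not ask("Play again?"):
                break

The point is that the guesser has to carry this growing tree across rounds, not just run a fixed, memorized tree -- which is exactly what the chatbots keep falling back to.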


