Hmm, isn't there a mistake allowance (four wrong guesses) before the game ends? So instead of requiring a "perfect run" there should be multiple chances, with feedback such as "one away", like a real human player gets?
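Scoring it that way could look roughly like this (the solver callback and answer format are just stand-ins on my part, not anything from the post):

    # Sketch of game-style scoring: up to 4 mistakes and "one away" feedback,
    # instead of requiring a perfect one-shot solve.
    def play(solver, answer_groups, max_mistakes=4):
        """solver(history) -> a guessed set of 4 words; stand-in for the LLM call."""
        remaining = [set(g) for g in answer_groups]
        mistakes, history = 0, []
        while remaining and mistakes < max_mistakes:
            guess = set(solver(history))
            overlaps = [len(guess & g) for g in remaining]
            if 4 in overlaps:
                history.append((guess, "correct"))
                remaining.pop(overlaps.index(4))
            elif 3 in overlaps:
                history.append((guess, "one away"))   # the feedback a human gets
                mistakes += 1
            else:
                history.append((guess, "wrong"))
                mistakes += 1
        return not remaining, mistakes                # (solved?, mistakes used)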
This is nicely presented. I would like to see the prompts to the respective services, however. Did I miss them? The "side peek" would be a natural place for them.
These kinds of tests (or maybe all tests) should show a *success rate* instead of a single pass/fail.
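Something like reruns per puzzle would do it (solve_puzzle here is just a stand-in for whatever the harness already calls):

    # Rerun each puzzle N times and report a success rate instead of pass/fail.
    def success_rate(puzzle, solve_puzzle, n_trials=10):
        wins = sum(1 for _ in range(n_trials) if solve_puzzle(puzzle))
        return wins / n_trials

    # e.g. "o1: 8/10" says a lot more than "o1: pass"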
I believe Claude or even Gemini can succeed if the system prompt is improved, e.g. tell it to re-evaluate its answer before finalising; you can even tell it to do "thinking" within <thinking> tags. I use Claude like that and it often goes over its answer and corrects itself within the same reply. On the other hand, it can also incorrectly assume it made a mistake and sometimes un-corrects itself.
Edit: Using o1's step-by-step problem-solving example from the OpenAI blog post made Claude go step by step in similar depth too. You could even do that here to get a better success rate out of non-o1 models.
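Something like this, using the Anthropic Python client (the model name and prompt wording are just examples, not what the post actually used):

    # Sketch of a system prompt that asks for <thinking> plus a self-check
    # before the final answer.
    import anthropic

    client = anthropic.Anthropic()

    SYSTEM = (
        "Solve the NYT Connections puzzle step by step. First reason inside "
        "<thinking> tags: propose candidate groups, then re-evaluate each one "
        "and watch for words that fit more than one category before finalising. "
        "Only then output the four groups as your final answer."
    )

    puzzle_text = "BASS, FLOUNDER, ..."  # the 16 words (placeholder)
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=2000,
        system=SYSTEM,
        messages=[{"role": "user", "content": puzzle_text}],
    )
    print(response.content[0].text)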
This is very cool. It seems like the prompt is asking the LLM to one-shot the answer. Have you tried asking it to pick one group, confirm whether it's correct, and then repeat with the remaining words (like a human would)?
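Roughly, as a sketch with the OpenAI Python client (check_group, the parsing, and the model name are all assumptions on my part):

    # Iterative solving: ask for one confident group, verify it, remove it,
    # and repeat with the remaining words.
    from openai import OpenAI

    client = OpenAI()

    def solve_iteratively(words, check_group, model="gpt-4o", max_tries=8):
        remaining, groups = list(words), []
        for _ in range(max_tries):
            if len(remaining) == 4:
                break
            reply = client.chat.completions.create(
                model=model,
                messages=[{
                    "role": "user",
                    "content": "From these Connections words, give only the ONE "
                               "group of four you are most confident about, "
                               f"comma-separated: {', '.join(remaining)}",
                }],
            ).choices[0].message.content
            guess = [w.strip() for w in reply.split(",")][:4]
            if check_group(guess):  # confirmed, like a human locking in a group
                groups.append(guess)
                remaining = [w for w in remaining if w not in guess]
            # on a miss, a fuller harness would feed the mistake back as context
        groups.append(remaining)    # the last four words are forced
        return groups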
Connections is a great game to test AI. It really relies on the ambiguity and loosely connected aspects of culture and language. I am shocked at how well o1-pro does.
Beyond being able to solve Connections, can an LLM generate (good/challenging/solvable) Connections puzzles? It would be pretty cool to be able to generate a test set.
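A generate-then-verify loop might work, something like this (the prompt wording, JSON schema, and model names are my guesses, not anything from the post):

    # Ask one model to invent a puzzle, then keep it only if another model
    # (or a human spot check) can actually solve it.
    import json
    from openai import OpenAI

    client = OpenAI()

    def generate_puzzle(model="gpt-4o"):
        reply = client.chat.completions.create(
            model=model,
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": 'Invent a Connections-style puzzle: four groups of '
                           'four words, with a few words that plausibly fit '
                           'more than one group. Return JSON like '
                           '{"groups": [{"theme": "...", "words": ["..."]}]}.',
            }],
        ).choices[0].message.content
        return json.loads(reply)

    # Solvability check: shuffle the 16 words, hand them to a different model,
    # and admit the puzzle to the test set only if the groups are recovered.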
QwQ gets it wrong, Gemini gets it wrong. o1 gets it right. R1 gets a pretty good, though not originally intended, set of 4... I'm tempted to give it partial credit. 4o gets it wrong. Will update with Claude once my usage limits reset, lol.
This seems highly subjective. We shouldn't care about this: the game is to group the words correctly, not to name the connection. For human players, it doesn't matter whether you identify the intended theme or not.