I'm absolutely not bullish on LLMs, but I think this is kinda judging a fish on its ability to climb a tree.
LLMs work from typical constructions of text, not an understanding of what the text means. If you ask one what color the sky is, it'll find what text usually follows a sentence like that and try to construct a response from it.
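To give a flavor of what I mean by "typical constructions", here's a toy bigram sketch in Python. It's nothing like a real transformer internally, just the statistical idea of continuing text with whatever usually comes next:

```python
from collections import Counter, defaultdict
import random

# Toy "language model": predict the next word purely from which words
# typically follow the current one in the training text.
corpus = "the sky is blue . the sky is clear . the grass is green .".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def continue_text(start_word, length=4):
    out = [start_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample proportionally to how often each continuation was seen.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("sky"))  # e.g. "sky is blue . the"
```

It never "knows" the sky is blue; it just reproduces what tends to follow "sky is" in its data.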
If you ask it the answer to a math question, the only way it could reliably get it right is if an exact copy of that question happens to be in its training data. Asking it to choose things from a list is kinda like that, but one could imagine the designers supplementing the pure LLM with a different technique for those cases.
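What I'm imagining by "supplementing with a different technique" is something like the sketch below (pure speculation on my part, not how any particular product actually works): detect prompts that look like arithmetic and route them to a deterministic calculator instead of the text predictor.

```python
import ast
import operator

# Hypothetical supplement layer: if the prompt parses as simple arithmetic,
# bypass the language model and compute the answer deterministically.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr):
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def fake_llm_generate(prompt):
    # Stand-in for the text predictor.
    return "(whatever text usually follows a prompt like this)"

def answer(prompt):
    try:
        return str(eval_arithmetic(prompt))  # calculator path
    except (ValueError, SyntaxError):
        return fake_llm_generate(prompt)     # fall back to the LLM

print(answer("12 * (3 + 4)"))           # 84, computed rather than retrieved
print(answer("what color is the sky?"))
```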