I’m not sure they can “tell” they need more things without one or more additional layers or components that may not function much like current LLMs at all. This is part of what I’ve meant in other threads when I’ve said they can’t even “understand” in the way a human does. They “understand” things, but those things aren’t exactly about meaning; they just happen to correspond to it… much of the time.