A key difference is that the way LMMs (Large Multimodal Models) generate output is far from random. These models can imitate and blend existing information, and they can likewise imitate, and probably blend, known reasoning methods present in their training data. The latter is a key distinguishing feature of the new OpenAI o1 models.
Thus, the signal-to-noise ratio of their output is generally far better than that of infinite monkeys.
Arguably, humans rely on similar modes of "thinking" most of the time as well.
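To make the gap concrete, here is a rough back-of-envelope sketch. All numbers are illustrative assumptions (phrase length, alphabet size, token count, and the average per-token probability a model might assign), not measurements of any particular model; the point is only the difference in orders of magnitude between uniform random typing and model-weighted sampling.

```python
# Back-of-envelope comparison: a "monkey" typing uniformly at random
# vs. a model that (hypothetically) assigns an average probability
# of 0.1 to each correct token of a short target phrase.

phrase_len_chars = 20   # assumed length of the target phrase in characters
alphabet_size = 27      # 26 letters + space, typed uniformly at random
tokens = 5              # assumed token count for the same phrase
avg_token_prob = 0.1    # assumed average per-token probability from the model

p_monkey = (1 / alphabet_size) ** phrase_len_chars  # probability under uniform typing
p_model = avg_token_prob ** tokens                   # probability under model sampling

print(f"uniform random typing:   {p_monkey:.1e}")   # ~2.4e-29
print(f"model-weighted sampling: {p_model:.1e}")    # 1.0e-05
```

Even with these deliberately modest assumptions, the model is more than twenty orders of magnitude more likely to produce the phrase than the monkey, which is what "way better signal-to-noise" cashes out to here.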