
A key difference is that the way LMMs (Large Multimodal Models) generate output is far from random. These models can imitate or blend existing information, and can likewise imitate (and probably blend) known reasoning methods present in their training data. The latter is a key distinguishing feature of the new OpenAI o1 models.

Thus, the signal-to-noise ratio of their output is generally far better than that of infinite monkeys.
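
To make the contrast concrete, here is a rough Python sketch (the token probabilities are invented for illustration and are not any real model's distribution): a monkey picks characters uniformly at random, whereas a model samples from a distribution heavily concentrated on plausible continuations.

    # Illustrative sketch only: contrast uniform random characters
    # ("infinite monkeys") with sampling from a concentrated,
    # learned-looking distribution. Probabilities are made up.
    import random

    alphabet = list("abcdefghijklmnopqrstuvwxyz ")
    monkey_text = "".join(random.choice(alphabet) for _ in range(40))

    # Hypothetical next-token distribution a model might assign
    # after some prompt; most mass sits on sensible tokens.
    next_token_probs = {"the": 0.45, "cat": 0.25, "sat": 0.20, "zqx": 0.001}
    model_text = " ".join(
        random.choices(list(next_token_probs),
                       weights=list(next_token_probs.values()),
                       k=10)
    )

    print("monkey:", monkey_text)
    print("model :", model_text)

Sampling from the skewed distribution yields mostly sensible tokens, which is the "far from random" property described above.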

Arguably, humans rely on similar modes of "thinking" most of the time as well.



