
It's hard to observe your own mind, let alone anyone else's: thinking is intangible and generally difficult to observe.

Because of this limitation, LLMs are actually a decent model for this sort of process, since we can observe how they operate. I'm not claiming they're actually intelligent like we are, but rather that they model the process of drawing connections and making associations closely enough to how we think that it's a useful analogy.





