It is discomforting to read, in the first paragraph, that "dynamical adjustment of weights" is justified as "adaptation". A future where «AI models are no longer static» is clearly a sought milestone, but the chief reason for it remains that intelligent systems reprocess their body of knowledge and change it to improve it. That comes before "adaptation to the environment": it is "maintenance of the body of knowledge (of the world model)", the continuous practice of "thinking about things", "pondering", "reflecting", "using judgement"...

So it is not just a matter of «lifelong learning»: the whole of past experience remains productive, still requiring analysis, not "solved".

Anyway: the directions seem good.

Edit: equally interesting, in another direction, is the automated analysis of the internal subagents, «break[ing] down the vast, complex knowledge stored in the LLM into smaller, meaningful, and independent pieces (e.g., the different pathways or components for math, language understanding, etc)». Should there not be a general study of the dissection of systems with seemingly emergent intelligence, doing to LLMs what we do to C. elegans?
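
For concreteness, a minimal sketch of one such dissection (my own illustration, not from the article): knock out one attention head at a time, the way one laser-ablates a single neuron in C. elegans, and see which knockouts degrade behaviour on a probe input. Only the Hugging Face "gpt2" names are real; the probe sentence and the 0.1 threshold are arbitrary choices.

  # toy dissection: ablate one GPT-2 attention head at a time and report
  # which ablations raise the loss on a probe sentence
  import torch
  from transformers import GPT2LMHeadModel, GPT2Tokenizer

  tok = GPT2Tokenizer.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
  ids = tok("Two plus two equals four.", return_tensors="pt").input_ids
  head_dim = model.config.n_embd // model.config.n_head

  def ablated_loss(layer, head):
      def zero_head(module, inputs):
          # inputs[0]: concatenated head outputs entering the output
          # projection, shape (batch, seq, n_head * head_dim);
          # zero out one head's slice before it is mixed back in
          x = inputs[0].clone()
          x[..., head * head_dim:(head + 1) * head_dim] = 0.0
          return (x,)
      handle = model.transformer.h[layer].attn.c_proj \
          .register_forward_pre_hook(zero_head)
      with torch.no_grad():
          loss = model(ids, labels=ids).loss.item()
      handle.remove()
      return loss

  with torch.no_grad():
      base = model(ids, labels=ids).loss.item()
  for layer in range(model.config.n_layer):
      for head in range(model.config.n_head):
          delta = ablated_loss(layer, head) - base
          if delta > 0.1:  # heads this particular input depends on
              print(f"layer {layer:2d} head {head:2d}: loss +{delta:.3f}")

Crude, of course (single-component knockouts miss redundancy and interactions), but it is exactly the kind of systematic lesion study that the quoted decomposition programme would generalize.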



