
> LLMs aren't really capable of "learning" anything outside their training data.

ChatGPT has had a memory feature for some time now, storing facts from its conversations with users. And you can use function calling to make this more generic.

I think drawing the boundary at “model + scaffolding” is more interesting.
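For instance, here is a rough sketch of that scaffolding using OpenAI-style function calling. The save_memory helper, the local file store, and the model name are my own illustrative assumptions, not how ChatGPT actually implements it:

    from openai import OpenAI
    import json

    client = OpenAI()

    def save_memory(text: str) -> str:
        """Append one remembered fact to a local file (stand-in for a real store)."""
        with open("memories.jsonl", "a") as f:
            f.write(json.dumps({"memory": text}) + "\n")
        return "saved"

    tools = [{
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Store a fact about the user for future conversations.",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Remember that I prefer metric units."}],
        tools=tools,
    )

    # If the model chose to call the tool, persist the memory outside the model.
    for call in resp.choices[0].message.tool_calls or []:
        if call.function.name == "save_memory":
            save_memory(json.loads(call.function.arguments)["text"])

The model weights never change; the "learning" lives entirely in the scaffolding around them, which is why I think that is the more interesting boundary to argue about.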






Calling the sentence or two it arbitrarily saves when you state your preferences and profile info "memories" is a stretch.

A true equivalent to human memories would require something like a multimodal trillion-token context window.

RAG is just not going to cut it, and if anything it will exacerbate problems with hallucinations.


That's the whole point of LlamaIndex? I can connect my LLM to any node or context I want. Sync it to a real-time data flow like an API and it can learn...? How is that different from a human?
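As a rough sketch of that pattern (the endpoint URL and polling call are hypothetical, and the import paths assume a recent llama-index release):

    import requests
    from llama_index.core import VectorStoreIndex, Document

    # Start from an empty index; uses the configured default embedding model / LLM.
    index = VectorStoreIndex.from_documents([])

    def sync_from_api(url: str) -> None:
        """Pull fresh records from a live feed and insert them as new nodes."""
        for record in requests.get(url, timeout=10).json():
            index.insert(Document(text=str(record)))

    # Hypothetical real-time feed the model was never trained on.
    sync_from_api("https://example.com/api/latest-events")

    # Retrieval now covers data that postdates the model's training cutoff.
    print(index.as_query_engine().query("What happened in the last hour?"))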

Once Optimus is up and working in the 100k+ range, the spatial problems will be solved. We just don't have enough spatial-awareness data yet, nor a way for the LLM to learn about the physical world.


Well, now you’ve moved the goalposts from “learn anything” to “learn at human level”. Sure, they don’t have that yet.


