
Not the parent, but there are a couple of things current AI lacks:

- learning from a single article/book with lasting effect (accumulation of knowledge)

- arithmetic without unexpected errors

- gauging the reliability of the information it outputs

BTW, I doubt that you’ll get a satisfactory definition of “able to reason” (or “conscious” or “alive” or “chair”), since these terms describe the end or direction of a spectrum rather than an exact cut-off point.

Current LLMs are impressive and useful, but given how often they spout nonsense, it is hard to put them in the “able to reason” category.




> learning from a single article/book with lasting effect (accumulation of knowledge)

If you mean without retraining the model, it can be done with RAG: let the LLM decide what to keep in mind as learnings and come back to them later. There are various techniques for RAG-based memory/learning. It's a combination of querying the memory relevant to the current goal, keeping the most recent info in memory, and progressively compressing or discarding old info while assigning importance levels to different "memories". Kind of like humans, honestly.
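To make that concrete, here's a minimal sketch of such a memory store (my own toy illustration, not any particular library's API): entries carry an importance score, retrieval is scored by relevance plus a recency decay, and the least important entries are evicted when the store is over capacity. A real system would use embeddings and a vector index instead of the keyword overlap used here.

```python
import math
import time

class MemoryStore:
    """Toy RAG-style memory: keep snippets with importance scores,
    retrieve by keyword overlap weighted by importance and recency,
    and evict the least valuable entry when over capacity."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.entries = []  # each: {"text", "importance", "t"}

    def add(self, text, importance=1.0):
        self.entries.append({"text": text, "importance": importance,
                             "t": time.monotonic()})
        if len(self.entries) > self.capacity:
            # throw out the lowest-importance (oldest on ties) memory
            self.entries.sort(key=lambda e: (e["importance"], e["t"]))
            self.entries.pop(0)

    def retrieve(self, query, k=3):
        now = time.monotonic()
        q = set(query.lower().split())

        def score(e):
            words = set(e["text"].lower().split())
            overlap = len(q & words) / (len(q) or 1)      # crude relevance
            recency = math.exp(-(now - e["t"]) / 3600.0)  # decays over ~1h
            return overlap * e["importance"] + 0.1 * recency

        ranked = sorted(self.entries, key=score, reverse=True)
        return [e["text"] for e in ranked[:k]]

mem = MemoryStore(capacity=2)
mem.add("user prefers metric units", importance=2.0)
mem.add("user lives in Berlin", importance=1.0)
mem.add("user likes coffee", importance=1.5)  # evicts the Berlin entry
```

The retrieved snippets would then be prepended to the model's context on the next turn, which is what gives the "lasting effect" without touching the weights.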

> arithmetic without unexpected errors

That's a bit handwavy, because humans make plenty of unexpected errors when doing arithmetic.

> gauging the reliability of the information it outputs

Arguably most people are not very good at gauging the reliability of what they output either. And you can actually make an LLM do this with proper prompting: have it debate itself, then let it pick the winning position and state a confidence level.
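As a sketch of that debate-then-judge flow (the `llm` callable and the prompt wording are placeholders for whatever model API you use, not a real library):

```python
import re

def debate_and_judge(question, llm):
    """Self-debate sketch: `llm` is any callable prompt -> str.
    The model argues both sides, then a judge prompt picks a
    winner and states a confidence in a parseable format."""
    pro = llm(f"Give the strongest argument FOR this claim: {question}")
    con = llm(f"Give the strongest argument AGAINST this claim: {question}")
    verdict = llm(
        "Act as an impartial judge. Decide which argument wins and state "
        "your confidence.\n"
        f"FOR: {pro}\nAGAINST: {con}\n"
        "Answer exactly as: winner=<for|against> confidence=<0.0-1.0>"
    )
    m = re.search(r"winner=(for|against)\s+confidence=([01](?:\.\d+)?)", verdict)
    if not m:
        return None  # model ignored the requested format
    return {"winner": m.group(1), "confidence": float(m.group(2))}

# Stand-in model so the sketch runs without an API key:
def stub_llm(prompt):
    if prompt.startswith("Act as an impartial judge"):
        return "winner=for confidence=0.8"
    return "some argument"

result = debate_and_judge("2+2=4", stub_llm)
```

The reported confidence is of course only as calibrated as the underlying model, but in practice forcing an explicit debate tends to surface weaknesses a single-pass answer glosses over.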



