
Can this system explain its reasoning, and so explain its answer?



Yes, the explanation and reasons for relevance can be included in the search and reflected in the answer.
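As a rough illustration of what "included in the search and reflected in the answer" could mean (all names here are hypothetical, not taken from the repo): each retrieved hit carries a relevance explanation, and that explanation is passed along verbatim so the final answer can surface it.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    text: str
    reason: str  # why the retriever considered this hit relevant

def search(query: str) -> list[Hit]:
    # Stand-in for a real retriever: each hit carries an explanation.
    corpus = {
        "rust borrow checker": "matched query terms 'borrow' and 'checker'",
        "python gc internals": "semantic similarity to 'memory management'",
    }
    return [Hit(text=t, reason=r) for t, r in corpus.items()]

def build_prompt(query: str, hits: list[Hit]) -> str:
    # The reasons ride along with the sources, so a downstream LLM
    # (or a plain template, as here) can reflect them in its answer.
    lines = [f"Question: {query}", "Sources:"]
    for i, h in enumerate(hits, 1):
        lines.append(f"  [{i}] {h.text} (relevant because: {h.reason})")
    return "\n".join(lines)

prompt = build_prompt("how does borrowing work?", search("how does borrowing work?"))
print(prompt)
```

Whether the repo actually threads explanations through this way is an assumption; the point is only that relevance reasons can be carried as data rather than regenerated after the fact.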


Looking through the repo and reading the docs, an LLM appears to be part of the implementation. LLMs cannot explain their reasoning, so if an LLM is involved, can the system as a whole explain its reasoning, given that part of it is a black box? Perhaps the reasoning can be explained up to the point where the LLM comes into play, and again afterwards, for whatever is done with the LLM's output?


Can you explain your reasoning?




