The unprompted part, that's the important bit. If you find out you were wrong with a previous recommendation, piece of advice, or just basic info, let me know. Email, text, whatever. I don't think there's enough of an AI agent ecosystem to support that yet.



It seems like the product for this could be a separate model that evaluates statements for truthiness (possibly some other NLP model that checks a statement against a known source, as well as checking that it doesn't contradict another known source). This may not even have to be an LLM (it could be a completely independent architecture). LLM output could be run through this model as a final filter, with strictness perhaps defined by user parameters or by the use case.

It certainly isn't an easy problem, but it does feel structurally solvable. Defining trusted sources, as distinct from standard training data, seems like a first step, along with identifying a way to segment the domains in which those sources would carry authority.
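
To make the shape of that concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the names (TrustedSource, entailment_score, filter_output, strictness) are made up, and the word-overlap scorer is just a stand-in for whatever real NLI / fact-checking model you'd actually use. The point is only the structure: claims go in, each is checked against passages from trusted sources, and a user-set strictness threshold decides what gets flagged.

    # Hypothetical post-hoc "truthiness filter" for LLM output (sketch only).
    # entailment_score is a crude word-overlap placeholder standing in for a
    # real NLI or retrieval-based verification model.

    from dataclasses import dataclass

    @dataclass
    class TrustedSource:
        domain: str          # e.g. "physics", "tax law"
        passages: list[str]  # reference text this source is authoritative for

    def entailment_score(claim: str, passage: str) -> float:
        """Return a support score in [0, 1]. Placeholder: word overlap."""
        claim_words = set(claim.lower().split())
        passage_words = set(passage.lower().split())
        if not claim_words:
            return 0.0
        return len(claim_words & passage_words) / len(claim_words)

    def filter_output(claims: list[str], sources: list[TrustedSource],
                      strictness: float = 0.5) -> list[tuple[str, bool]]:
        """Mark each claim as supported or not, per a user-set strictness."""
        results = []
        for claim in claims:
            best = max(
                (entailment_score(claim, p)
                 for s in sources for p in s.passages),
                default=0.0,
            )
            results.append((claim, best >= strictness))
        return results

    if __name__ == "__main__":
        sources = [TrustedSource(
            "physics",
            ["Water boils at 100 degrees Celsius at sea level."],
        )]
        claims = [
            "Water boils at 100 degrees Celsius at sea level.",
            "The Moon is made of cheese.",
        ]
        for claim, ok in filter_output(claims, sources, strictness=0.6):
            print(("PASS " if ok else "FLAG ") + claim)

The interesting design questions are all hidden in entailment_score and in how sources map to domains, which is exactly the "segmenting domains of authority" problem above.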



