I couldn't agree more. Neither LLMs nor search engines yield reliable answers: LLMs because of the nature of the underlying ML models, search engines because of misaligned incentives. Combining the two is a shortsighted solution that doesn't fix anything at a fundamental level. On top of that, there's a feedback loop: LLMs are used both to search the web and to write the web, so over time they'll just end up reading their own (unreliable) output.

What we need is an easier way to verify sources and their trustworthiness. I don't want an answer synthesized from SEO spam. I want to form my own opinion based on a range of trustworthy sources, or the opinions of people I trust.
