
> Where would the AI get the data necessary to generate correct answers for novel problems or current events?

In a certain sense, it doesn't really need it. I like to think of the Library of Babel as a grounding thought experiment: technically, every truth and every lie could have already been written. Auguring the truth from randomness is possible, even if only fleetingly and by chance. LLMs and tokenized text do a remarkably good job of turning statistics soup into readable text.
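To make the "statistics soup" point concrete, here's a minimal sketch in Python (the bigram table and its probabilities are entirely hypothetical, a toy stand-in for a trained model): an autoregressive sampler draws each next token from a probability distribution conditioned on the previous one, and readable text falls out of weighted randomness.

    import random

    # Toy bigram "model": for each token, the possible next tokens and their
    # weights. A real LLM learns such distributions over much longer contexts.
    BIGRAMS = {
        "<s>":        [("the", 0.6), ("a", 0.4)],
        "the":        [("library", 0.5), ("truth", 0.5)],
        "a":          [("library", 0.7), ("lie", 0.3)],
        "library":    [("contains", 1.0)],
        "truth":      [("exists", 1.0)],
        "lie":        [("exists", 1.0)],
        "contains":   [("everything", 1.0)],
        "everything": [("</s>", 1.0)],
        "exists":     [("</s>", 1.0)],
    }

    def sample(max_len=10):
        # Walk the chain, sampling each next token by its weight,
        # until the end-of-sequence marker or the length cap.
        token, out = "<s>", []
        for _ in range(max_len):
            choices, weights = zip(*BIGRAMS[token])
            token = random.choices(choices, weights=weights)[0]
            if token == "</s>":
                break
            out.append(token)
        return " ".join(out)

    print(sample())  # e.g. "the library contains everything"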

That's not to say AI will always be correct, or even that it's capable of consistent performance. But if an AI-generated explanation of a particular topic is exemplary beyond all human attempts, I don't think it's fair to down-rank it, as long as the text is correct.




Are you suggesting that LLMs can predict the future in order to address the lack of current-event data in their training set? Or is it just implicit in your answer that only the past matters?


The explosion in AI over the last decade has really brought to light how incredibly self-aggrandizing humans naturally are.



