Hacker News

Is this paper wrong? - https://arxiv.org/abs/2311.09807



It shows that if you deliberately train LLMs on their own output in a loop, you get problems. That's not what synthetic-data training does.


I understand and appreciate your clarification. However, wouldn't some synthetic-data strategies, if misapplied, resemble that feedback-loop scenario and thus risk model collapse?
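The feedback loop being discussed can be shown with a toy simulation (my own sketch, not the paper's actual setup): treat a single Gaussian as the "model", and at each generation refit it to data sampled from the previous generation's fit. Because each fit is estimated from a finite sample, estimation error compounds and the variance drifts toward zero, which is the collapse effect in miniature.

```python
# Toy illustration of recursive self-training collapse.
# The "model" is a Gaussian; each generation trains only on
# the previous generation's synthetic output.
import random
import statistics

random.seed(0)

def fit(data):
    # "Train": estimate mean and std from the data.
    return statistics.fmean(data), statistics.stdev(data)

def sample(mu, sigma, n):
    # "Generate": draw synthetic data from the fitted model.
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 20
data = sample(0.0, 1.0, n)       # generation 0: "real" data
stds = []
for gen in range(500):
    mu, sigma = fit(data)        # fit to current data
    stds.append(sigma)
    data = sample(mu, sigma, n)  # next generation sees only model output

print(f"estimated std, gen 1: {stds[0]:.3f}  gen 500: {stds[-1]:.3f}")
```

Run it and the estimated std shrinks by orders of magnitude: the tails disappear first, then the whole distribution narrows. The contrast with production synthetic-data pipelines is that those typically mix synthetic samples with held-out real data and filter the generations, which breaks the pure closed loop this toy models.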



