
> new models will start getting trained with mostly the output of other LLMs

Done naively, that is a flawed way to do it. You need to filter and verify the synthetic examples. How? First you empower the LLM, then you judge its output: humans in the loop (LLM chat interfaces), more tokens at inference time (CoT), tool usage (code execution, search, RAG), and other models acting as judges and filters.
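
A rough sketch of that filtering step, just to make it concrete. The helper functions here (generate_candidate, judge_score, passes_tools) are hypothetical placeholders for whatever generator, judge model, and tool checks you actually have, not any particular API:

    def generate_candidate(prompt: str) -> str:
        """Hypothetical call to the generator LLM; swap in your own client."""
        raise NotImplementedError

    def judge_score(prompt: str, answer: str) -> float:
        """Hypothetical call to a separate judge LLM, returning a 0-1 quality score."""
        raise NotImplementedError

    def passes_tools(answer: str) -> bool:
        """Cheap deterministic checks: run the code, verify the citations resolve, etc."""
        return True  # placeholder

    def build_synthetic_set(prompts, threshold=0.8):
        kept = []
        for p in prompts:
            candidate = generate_candidate(p)
            if not passes_tools(candidate):
                continue  # reject: failed tool-based verification
            if judge_score(p, candidate) < threshold:
                continue  # reject: judge model rated it too low
            kept.append({"prompt": p, "response": candidate})
        return kept  # only the surviving pairs go into the next training run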

This problem is similar to scientific publication. Many papers get written, but they have to pass peer review, and a lot of them get rejected. Just because someone wrote it in a paper doesn't automatically make it right. Sometimes we have to wait a year to see whether adoption supports the initial claims. For medical applications, testing is even harder. For startups it's a bloodbath in the first few years.

There are many ways to separate the good from the bad. In the case of AI-generated text, validation can be done against the real world, but it's a slow process. It's much easier to scrape decades' worth of already-written content than to iterate slowly and validate everything. AlphaZero played millions of self-play games to find strategies better than any human's.
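
For code, "validation against the real world" can be as literal as executing the output against tests; a sketch of that slow generate-and-validate loop, with generate_solution again being a hypothetical model call:

    import os
    import subprocess
    import tempfile

    def generate_solution(task: str) -> str:
        """Hypothetical model call returning candidate Python source for the task."""
        raise NotImplementedError

    def validate(source: str, test_code: str) -> bool:
        """Ground-truth check: the candidate only counts if its tests actually pass."""
        with tempfile.TemporaryDirectory() as d:
            path = os.path.join(d, "candidate.py")
            with open(path, "w") as f:
                f.write(source + "\n\n" + test_code)
            result = subprocess.run(["python", path], capture_output=True, timeout=30)
            return result.returncode == 0

    def collect_verified_examples(tasks_with_tests, attempts=4):
        dataset = []
        for task, tests in tasks_with_tests:
            for _ in range(attempts):  # slow: many samples per kept example
                candidate = generate_solution(task)
                if validate(candidate, tests):
                    dataset.append({"task": task, "solution": candidate})
                    break
        return dataset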

In the end, the whole ideation-validation process is a search for trustworthy ideas. In search you interact with the search space and work your way towards the goal; search validates ideas eventually. AI can search too, as evidenced by the many Alpha models (AlphaTensor, AlphaFold, AlphaGeometry...). There was a recent paper about prover-verifier systems trained adversarially, like GANs, which might be one possible approach: https://arxiv.org/abs/2407.13692v1
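
Very loosely, that prover-verifier idea looks something like the loop below. This is only a conceptual sketch in the spirit of the linked paper, not its actual training recipe, and every helper (prove, verify, is_correct) is a hypothetical stand-in:

    import random

    def prove(problem, sneaky: bool) -> str:
        """Hypothetical prover LLM: 'helpful' tries to be correct,
        'sneaky' tries to produce convincing but wrong solutions."""
        raise NotImplementedError

    def verify(problem, solution) -> float:
        """Hypothetical verifier model returning an acceptance probability."""
        raise NotImplementedError

    def is_correct(problem, solution) -> bool:
        """Ground-truth check (known answer, unit test, symbolic checker...)."""
        raise NotImplementedError

    def training_round(problems):
        prover_reward, verifier_loss = 0.0, 0.0
        for problem in problems:
            sneaky = random.random() < 0.5
            solution = prove(problem, sneaky)
            score = verify(problem, solution)
            correct = is_correct(problem, solution)
            # Verifier is pushed to accept correct solutions and reject incorrect ones.
            verifier_loss += (1.0 - score) if correct else score
            # Helpful prover is rewarded for correct solutions the verifier accepts;
            # sneaky prover for incorrect solutions that still get accepted.
            prover_reward += score if (correct != sneaky) else 0.0
        return prover_reward, verifier_loss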
