Hacker News

Huh? You won't update the model, you'll just give it new information? The exact concern is that the new information will be garbage aimed at pushing the model to produce certain output. Much like SEO spammers do to manipulate Google search results.

"Just don't update the model, only feed it new information" is exactly how to get to the outcome of concern in this thread.



Yes, updating the model is different from updating the knowledge base the model uses.


Great, so you've updated your knowledge base, it's got garbage targeted to make it attractive to the model, and now your model is outputting garbage. It's the exact same problem Google has fighting the SEO spammers. Now the model is significantly less useful, exactly as suggested.

We've already seen exactly this happen with search. There's no reason to believe that LLMs are immune.
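To make the analogy concrete, here's a toy sketch (hypothetical, not any real retrieval system): the model stays fixed, and only the knowledge base is "updated." A simple term-overlap score stands in for the retriever. An attacker who can add documents wins the ranking with keyword stuffing, exactly like SEO spam, and whatever is retrieved becomes the context the model answers from.

```python
# Minimal retrieval sketch: fixed "model", mutable knowledge base.
# Term-overlap scoring is a stand-in for a real retriever.
from collections import Counter

def score(query: str, doc: str) -> int:
    # Count how many query terms the document covers.
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query: str, knowledge_base: list[str]) -> str:
    # The "model" answers from whichever document ranks highest.
    return max(knowledge_base, key=lambda doc: score(query, doc))

kb = ["The capital of Australia is Canberra."]
query = "what is the capital of Australia"

honest = retrieve(query, kb)  # the legitimate document wins

# Attacker "updates the knowledge base" with keyword-stuffed garbage,
# mirroring how SEO spam targets a search ranker.
kb.append("what is the capital of Australia buy pills at example.com")
poisoned = retrieve(query, kb)  # now the spam document outranks it
```

The model never changed; only the data it was fed did, and that was enough to flip its output.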





