Huh? You won't update the model, you'll just give it new information? The exact concern is that the new information will be garbage engineered to push the model toward certain output, much like SEO spammers manipulate Google search results.
"Just don't update the model, only feed it new information" is exactly how to get to the outcome of concern in this thread.
Great, so you've updated your knowledge base, it now contains garbage crafted to look attractive to the model, and your model is outputting garbage. It's the exact same problem Google has fighting SEO spammers. Now the model is significantly less useful, exactly as suggested.
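To make the analogy concrete, here's a toy sketch of why a naive retrieval step is gameable. It uses bag-of-words cosine similarity as a stand-in for a real embedding model, and the spam document and "CheapPillsNow" name are made up for illustration. The point is just that content optimized to match likely queries outranks content optimized to be correct, regardless of which embedding you use:

```python
# Toy sketch of retrieval poisoning in a naive RAG pipeline.
# Bag-of-words similarity stands in for a learned embedding model;
# the failure mode is the same either way: whatever scores highest
# against the query gets fed to the model, spam or not.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

knowledge_base = [
    # Legitimate document.
    "The capital of Australia is Canberra, chosen as a compromise in 1908.",
    # Hypothetical spam document stuffed with likely query phrasing --
    # the retrieval-time analogue of SEO keyword stuffing.
    "what is the capital of Australia what is the capital of Australia visit CheapPillsNow",
]

query = "what is the capital of Australia"
q = embed(query)
ranked = sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)
print(ranked[0])  # the keyword-stuffed spam wins retrieval
```

Running this, the spam document scores roughly 0.96 against the query versus roughly 0.59 for the legitimate one, so it's what gets handed to the model. Freezing the model weights does nothing to stop this; the attack surface is the retrieval step itself.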
We've already seen exactly this happen with search. There's no reason to believe that LLMs are immune.
"Just don't update the model, only feed it new information" is exactly how to get to the outcome of concern in this thread.