It’s not tricky for a computer to sanitize all io to/from an LLM. You can even build it into an inference engine itself to avoid mistakes. The article just shifts (not necessarily intentionally) a well-known problem with crappycode into AI FUD territory.
> As researcher Thacker explained: The issue is they’re not fixing it at the model level, so every application that gets developed has to think about this or it's going to be vulnerable. And that makes it very similar to things like cross-site scripting and SQL injection, which we still see daily because it can’t be fixed at central location. Every new developer has to think about this and block the characters.
Why dont we fix it at the "API" level, then? I.e OpenAPI or Claude's API could do this for everyone. I know some people host their own models, but this would lower the attack surface.