That's a feature; why would you want to prevent prompt injection? This sort of "safety" is only useful if you want to nerf the usability of your models.
I'm currently working on a UI that allows for programmatic prompt modification; if you've never offered an un-nerfed LLM, you're missing out.
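For anyone curious what "programmatic prompt modification" could look like in practice, here's a minimal sketch: user-defined transform functions are applied to a prompt before it goes to the model. This is an illustrative Python example under my own assumptions, not the commenter's actual UI; all names and functions are hypothetical.

```python
# Hypothetical sketch of programmatic prompt modification:
# a pipeline of transforms is applied to the prompt before sending it
# to whatever model API the UI is wired up to.

from typing import Callable, List

PromptTransform = Callable[[str], str]

def apply_transforms(prompt: str, transforms: List[PromptTransform]) -> str:
    """Run each registered transform over the prompt, in order."""
    for transform in transforms:
        prompt = transform(prompt)
    return prompt

# Example transforms a UI might expose as toggles or editable fields.
def add_system_preamble(prompt: str) -> str:
    return "You are a helpful assistant.\n\n" + prompt

def append_formatting_hint(prompt: str) -> str:
    return prompt + "\n\nAnswer in plain prose, no bullet points."

if __name__ == "__main__":
    user_prompt = "Summarize the plot of Dune."
    final_prompt = apply_transforms(
        user_prompt,
        [add_system_preamble, append_formatting_hint],
    )
    # The resulting string is what would actually be sent to the model.
    print(final_prompt)
```

The point of structuring it as a transform pipeline is that the UI can let users add, remove, or reorder modifications without touching the code that talks to the model.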