
That's a feature. Why would you want to prevent prompt injection? This sort of "safety" is only useful if you want to nerf the usability of your models.

I'm currently working on a UI that allows for programmatic prompt modification. If you've never offered an un-nerfed LLM, you're missing out.
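
The comment doesn't say how that UI works, but here is a minimal sketch of what "programmatic prompt modification" could look like, assuming an OpenAI-style chat completions API. The build_messages helper, the persona placeholder, and the model name are illustrative assumptions, not the commenter's actual design.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def build_messages(system_template: str, user_prompt: str, overrides: dict | None = None):
        """Assemble chat messages, applying programmatic overrides to the system prompt.

        (Hypothetical helper: the placeholders filled in here would come from the UI.)
        """
        system_prompt = system_template.format(**(overrides or {}))
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]

    messages = build_messages(
        "You are a {persona} assistant.",            # template edited in the UI
        "Summarize this thread.",
        overrides={"persona": "blunt, unfiltered"},  # swapped in per request
    )
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)
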





