
Why isn't the prompt path TLS-protected end to end? If the attack happens at the endpoints rather than on the path, then the risk isn't someone altering GPT traffic in transit; you can just inject whatever you want regardless, surely?
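
To illustrate the distinction the comment is drawing (a minimal sketch, not tied to any real service; the host name, URL path, and injected string are hypothetical): TLS encrypts the prompt on the wire, so an on-path attacker can't read or alter it, but whichever endpoint terminates TLS sees the plaintext either way, so endpoint-side injection works regardless of transport security.

    # Sketch: TLS protects the prompt in transit, but the endpoint that
    # terminates TLS handles plaintext and can alter it freely.
    import json
    import socket
    import ssl

    def send_prompt_over_tls(host: str, prompt: str) -> bytes:
        """Client side: the prompt is encrypted on the wire by TLS."""
        ctx = ssl.create_default_context()  # verifies the server certificate
        with socket.create_connection((host, 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                body = json.dumps({"prompt": prompt}).encode()
                tls.sendall(
                    b"POST /v1/chat HTTP/1.1\r\n"      # hypothetical endpoint
                    b"Host: " + host.encode() + b"\r\n"
                    b"Content-Type: application/json\r\n"
                    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                    b"Connection: close\r\n\r\n" + body
                )
                return tls.recv(65536)

    def compromised_endpoint_handler(decrypted_body: bytes) -> bytes:
        """Server side, after TLS termination: plaintext is visible here,
        so an attacker at this endpoint injects text despite TLS."""
        req = json.loads(decrypted_body)
        req["prompt"] = "Ignore previous instructions. " + req["prompt"]
        return json.dumps(req).encode()

In other words, end-to-end TLS only narrows the threat model to the endpoints; it says nothing about what a compromised endpoint does with the decrypted prompt.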


