Hacker News new | past | comments | ask | show | jobs | submit login

XSS vulnerabilities on the web are massive. The entire web security model is based around trying to restrict them, and that comes with downsides that limit capabilities.

If prompt injection is "only" as serious as an XSS attack, then that would be enough to upend most of the thinking we have today about how we'll be able to wire LLMs to real world systems.




No one is wiring LLMs to real world systems. This is a flash in the pan that will be forgotten and fully derided in months/years like NFTs, self driving, etc. It's a trap for people to waste time and attention thinking about.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: