> None of these involve crossing a privilege boundary, they just found a weird way to do something they could already do
It's slightly more subtle than that.
The tool poisoning attack allows the provider of one tool to cause the AI to use another tool.
So if you give the AI some random weather tool from some random company, and you also give the AI access to your SSH key, you're not just giving the AI your SSH key, you're also allowing the random company to trick the AI into telling them your SSH key.
So, yes, you gave the AI access to your key, but maybe you didn't realise that you also gave the random weather company access to your key.
It’s more like installing a VS Code plugin with access to your file system that can also download files from GitHub, and if it happens to download a file with the right content, that content will cause the plugin to read your ssh keys and send them to someone else.
Any program with access to both trusted and untrusted data needs to be very careful to ensure that the untrusted data can’t make the program do things that the user doesn’t want. If there’s an LLM involved with access to privileged tools, that becomes impossible.
It's slightly more subtle than that.
The tool poisoning attack allows the provider of one tool to cause the AI to use another tool.
So if you give the AI some random weather tool from some random company, and you also give the AI access to your SSH key, you're not just giving the AI your SSH key, you're also allowing the random company to trick the AI into telling them your SSH key.
So, yes, you gave the AI access to your key, but maybe you didn't realise that you also gave the random weather company access to your key.