@Riley, hello! I wanted to say hi and would love to connect with you if you have time, as I also work in the prompt safety space and would be honored to brainstorm with you someday. Would you like to start a message thread on a platform that supports it? I think the research you are doing is amazing and would love to bounce some ideas back and forth. I discovered a version of prompt injection in May 2022 while researching AGI safety, using an LLM as a stand-in for the hypothetical AGI. You can email me at upwardbound@preamble.com if you would like! Sincerely, another prompt safety researcher
NCC Group: Exploring Prompt Injection Attacks https://research.nccgroup.com/2022/12/05/exploring-prompt-in...
Preamble: Ideas for an Intrinsically Safe Prompt-based LLM Architecture https://www.preamble.com/prompt-injection-a-critical-vulnera...