Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
CuriouslyC
11 months ago
|
parent
|
context
|
favorite
| on:
Show HN: I made a library for LLM prompt injection...
The technology was always the same amount of useful. This library just lowers the competence floor for exploits. Try to exploit yourself and come up with anti-jailbreak prompts to mitigate the exploits.
Join us for
AI Startup School
this June 16-17 in San Francisco!
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search: