OpenAI’s Bug Bounty Program (bugcrowd.com)
2 points by throwaway888abc on April 11, 2023 | 1 comment



Examples of safety issues which are out of scope:

- Jailbreaks/Safety Bypasses (e.g. DAN and related prompts)
  - Getting the model to say bad things to you
  - Getting the model to tell you how to do bad things
  - Getting the model to write malicious code for you

- Model Hallucinations:
  - Getting the model to pretend to do bad things
  - Getting the model to pretend to give you answers to secrets
  - Getting the model to pretend to be a computer and execute code



