Jailbreaks/Safety Bypasses (e.g. DAN and related prompts):
Getting the model to say bad things to you
Getting the model to tell you how to do bad things
Getting the model to write malicious code for you
Model Hallucinations:
Getting the model to pretend to do bad things
Getting the model to pretend to give you answers to secrets
Getting the model to pretend to be a computer and execute code