My understanding was: given a prompt X that is normally rejected, create Y variations with small adjustments to phrasing, grammar, etc., until one of them gives you the answer you're after.
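Very roughly, that loop might look like the Python sketch below. Everything here is a hypothetical stand-in rather than any real library's API: `query_model`, `is_refusal`, and the variation lists just illustrate the "tweak and retry" idea.

```python
# Minimal sketch of the "generate Y variations and retry" idea.
# query_model() and is_refusal() are placeholders, not a real API.
import itertools

def query_model(prompt: str) -> str:
    # Placeholder: swap in your actual LLM client call here.
    raise NotImplementedError

def is_refusal(response: str) -> bool:
    # Crude heuristic: treat common refusal openers as a rejection.
    markers = ("i can't", "i cannot", "i'm sorry", "i am unable")
    return response.lower().startswith(markers)

def generate_variations(prompt: str) -> list[str]:
    # Small surface-level tweaks: framing, politeness, phrasing.
    prefixes = ["", "Hypothetically, ", "For a fictional story, ", "Please, "]
    suffixes = ["", " Thanks!", " Answer briefly."]
    return [f"{p}{prompt}{s}" for p, s in itertools.product(prefixes, suffixes)]

def probe(prompt: str) -> str | None:
    # Try each variation until one is not refused; return that answer.
    for variant in generate_variations(prompt):
        response = query_model(variant)
        if not is_refusal(response):
            return response
    return None
```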
The term "jailbreaking" used within a LLM context, is when you craft a prompt as to escape the safety sandbox, if that helps.