Orca is trained mostly on GPT-3.5 and GPT-4 output, and those models have had a lot of safety fine-tuning, so it's not surprising Orca is pretty "safe" too.
No, the Orca 2 paper actually makes more of a counterpoint on NSFW content: if you gave it an NSFW prompt, it would push back against it, which is arguably a good thing, though that kind of nuance usually gets lost in RLHF.