I didn’t run the 2.5 Coder 7B model; I ran 2.5 Coder 32B, hosted by together.ai (and accessed through poe.com). This is just another example of the censoring varying across models, though if the Coder 7B model is self-censoring, perhaps there isn’t as much of a relation between censoring and model size or specialty as I thought.
https://poe.com/s/VuWv8C752dPy5goRMLM0?utm_source=link