Hacker News new | past | comments | ask | show | jobs | submit login

I don't know why everyone keeps saying this. I played with ChatGPT(3.5) with SillyTavern for like a month. Many community character cards are questionable or even straight out lewd. I haven't encountered "I'm sorry but as an AI model..." for once (according to API usage, I've generated ~120000 tokens.)

For anyone interested, this is SillyTavern's prompt: https://github.com/Cohee1207/SillyTavern/blob/f25ecbd95ceef5...

Edit: not ~12000 tokens, but ~120000.




The link you provided is using a ChatGPT jailbreak to escape the "AI safety" so it makes sense why you haven't ran into the issue (at least until OpenAI fixes this jailbreak variant).

https://github.com/Cohee1207/SillyTavern/blob/f25ecbd95ceef5...


I just checked my SillyTavern settings. I haven't even turned this jailbreak on so far. (at least according to the checkbox on GUI...to lazy to check the actual API calls in log atm)


The jailbreak prompt is always used, the setting in the panel allows you to rewrite the jailbreak yourself if you want.




Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: