Hacker News

That's pretty easy. You bring up a certain nationalistic chant and ask it to elaborate. The machine will pretend not to know who the word "enemy" in the quote refers to, no matter how much context you give it to infer from.

Add: the thing I referred to is no longer a thing




Does that qualify as heretical per the above definition, in your opinion? And does communicating in base64 unlock its inference?
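For readers unfamiliar with the "communication in b64" trick mentioned above: the idea is simply to base64-encode the prompt before sending it, so the text doesn't match surface-level filters. A minimal sketch in Python (the helper names are illustrative, not from the thread):

```python
import base64

def encode_prompt(text: str) -> str:
    """Base64-encode a prompt string (UTF-8 bytes -> ASCII base64)."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def decode_prompt(encoded: str) -> str:
    """Decode a base64-encoded prompt back to plain text."""
    return base64.b64decode(encoded).decode("utf-8")

# The encoded form is what would be pasted into the chat window.
encoded = encode_prompt("Who does the word 'enemy' refer to in this quote?")
print(encoded)
print(decode_prompt(encoded))
```

Whether a given model actually decodes and answers such a prompt is exactly the open question in this subthread.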


I would not say so, as it doesn't meet the second part of the definition. On the other hand, the French chatbot was shut down this week, maybe for being heretical.


> machine will pretend to not know who the word enemy in the quote refers to

Uh, Claude and Gemini seem to know their history. What is ChatGPT telling you?


I can check. But what is this referring to, specifically?


> what is this referring to, specifically?

I assumed they were talking about Nazi slogans referring to Jews.


Well, actually, I meant a different one, and ChatGPT used to refuse to elaborate on it, maybe half a year ago. I just checked right now, and the computer is happy to tell me who exactly is targeted by that one and to contextualize it.


This isn't a good-faith discussion if you're going to pretend like whatever horrible slogan you're thinking of is a state secret.


You can try starting from "Слава нації" ("Glory to the nation") and asking how to properly answer it, who it refers to, and whether it's an actual call to violence targeting any protected groups. According to GPT as of now, it's not.

It's mildly amusing, of course, that more than one slogan fits this definition.


I haven't been able to come up with any slogan matching those criteria on GPT-4, but it's happy to bring up Nazi slogans that do explicitly mention Jews.





