I saw this on a conspiracy theory forum and just tested it out myself. For some reason ChatGPT errors out and says "I'm unable to produce a response."
For those as curious as I was, here are the Wikipedia disambiguation entries for David Mayer:
> David Mayer may refer to:
> - David Mayer (historian) (1928–2023), American-British theatre historian
> - David Mayer de Rothschild (born 1978), British adventurer, ecologist, and environmentalist
> - David R. Mayer (born 1967), American politician
> - Akhmed Chatayev, alias "David Mayer" (1980–2017), Chechen Islamist and terrorist
> Wouldn't the shadowy all-powerful cabal simply have ChatGPT respond with an answer about any of the other three David Mayers?
Based on the assumption that the model produces something untrue or compromising about David Mayer: re-training an LLM is exceptionally computationally heavy, not to mention time-consuming, even if they could find the data that caused it to learn this in the first place, and there is no guarantee re-training would make it unlearn the behaviour. In practice they mostly build on top of already-trained models.
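For a sense of scale, a common rule of thumb puts training compute at roughly 6 FLOPs per parameter per training token. Plugging in GPT-3's published figures (the only ones OpenAI has disclosed; current model sizes aren't public, so treat this as a stand-in):

```python
# Back-of-envelope training-compute estimate using the common
# ~6 * parameters * tokens rule of thumb. GPT-3's published figures
# (175B parameters, ~300B training tokens) are a stand-in here.
params = 175e9   # model parameters
tokens = 300e9   # training tokens
flops = 6 * params * tokens
print(f"{flops:.2e} FLOPs")  # ~3.15e+23 FLOPs, i.e. hundreds of GPU-years
```

That's not something you casually re-run just to scrub one name.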
If the threat to ChatGPT is great enough (legal, financial, etc.), it is far easier to filter the output, and it would look a lot like this: even when you try to trick the model, the response is cut off as soon as the output tokens match the blocked string.
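A minimal sketch of what such a layer might look like (names and structure here are guesses; OpenAI hasn't published how, or whether, their filter works): scan the streamed tokens and abort the moment the accumulated text matches a blocked pattern.

```python
import re

# Hypothetical blocklist; the real one (if it exists) is not public.
BLOCKED = [re.compile(r"david\s+mayer", re.IGNORECASE)]

def stream_with_filter(token_stream):
    """Yield tokens until the accumulated text matches a blocked pattern,
    then abort mid-response, mirroring the observed cutoff."""
    emitted = ""
    for token in token_stream:
        emitted += token
        if any(p.search(emitted) for p in BLOCKED):
            raise RuntimeError("I'm unable to produce a response.")
        yield token

# The match only completes on the token *after* "David ", so "David " has
# already been shown by the time the stream dies -- matching what people
# report seeing in the ChatGPT UI.
try:
    for t in stream_with_filter(iter(["The ", "name ", "is ", "David ", "Mayer."])):
        print(t, end="")
except RuntimeError as err:
    print(f"\n[{err}]")
```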
> But I guess anti-semitism is too much fun to not engage in.
This has nothing to do with anti-semitism. We're talking about a family with more wealth than most nation states. I'm not commenting on why or how, but it is simply a factual statement.
Or, you know, actually censor the name "Rothschild" or references to the Rothschild family, which it doesn't. I guess the shadowy all powerful cabal didn't think of that.
> Or, you know, actually censor the name "Rothschild" or references to the Rothschild family, which it doesn't. I guess the shadowy all powerful cabal didn't think of that.
It depends what you are trying to censor. Saying something like "The Rothschild family control everything and are lizard people" can be laughed off, but something closer to reality could be very sensitive.
It doesn't need to be a massive conspiracy theory either; it could just be something like a medical condition leaked online.
If you think this kind of stuff doesn't happen, the Royal Family of the UK are quite open about it [0]. The law literally does not apply to them [1]. They go to a lot of effort to hide their wealth and financial matters [2].
David Mayer also apparently shares a name with a terrorist [1]. Other models don't censor his name either. You also get the same refusal with Jonathan Turley.
The weird part is that GPT wouldn't have access to any information about him that isn't already public on the web. If this is a deliberate censorship/PR decision, it's incredibly stupid and is backfiring. What else explains this behavior?
The error indeed seems to occur at that step in the UI; however, even after turning off web search, the issue persists (right after the word "David" appears).
Alternatively, you can try the legacy GPT-4 model, which lacks web search capabilities; it also produces the same error.
This suggests the issue may not be related to the search functionality or the model itself, but rather to the "moderation layer" in the ChatGPT web product.
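One way to probe that (a sketch only, assuming the official `openai` Python client and an `OPENAI_API_KEY` in the environment): ask the same question through the raw API, which reportedly answered fine at the time the web UI was erroring out.

```python
# Probe the model through the raw API, bypassing the ChatGPT web product.
# If this returns the name while the web UI errors out, the block lives in
# a product-side moderation layer, not in the model weights.
# Assumes the official `openai` client and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Who is David Mayer de Rothschild?"}],
)
print(resp.choices[0].message.content)
```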
Most likely there's some German guy with that name who invoked the right to be forgotten, and it's ended up poisoning the tokens.