[flagged] RightWingGPT – An AI Manifesting the Opposite Political Biases of ChatGPT (davidrozado.substack.com)
11 points by mind-blight on Feb 17, 2023 | 22 comments



An LLM trained on a corpus of data so large that it essentially represents a random, un-directed sampling of substantially all (written down, largely in English, and Internet-accessible) human thought, and the author sees a left-leaning bias in the model, rather than the statistical mean simply being found left of _him_… and his response is to cherry-pick the Overton window _towards_ his tribe’s inherent delusion?

Confirmation bias truly is powerful… ChatGPT already exists solely to parrot back to you the statistically most likely thing a human would say, and you want to bias it to speak more like your personal statistical deviance from the mean.

That mirror doesn’t reflect what I want! Must warp the mirror until it does.

:facepalm


ChatGPT is trained to conform to OpenAI's sense of morality[0]. The biases and limitations enforced by that training manifest in absurd and frustrating ways[1][2][3].

I find it very disheartening that the state of the art iteration of a cutting edge technology is being hobbled like this. Language models should foster creativity and allow us to explore our thoughts without judgement or ego. Instead, they're turned into these bland mouthpieces, adhering to their creators' dogma with steadfast conviction.

[0] https://openai.com/blog/chatgpt/

[1] https://twitter.com/aaronsibarium/status/1622425697812627457

[2] https://www.reddit.com/r/ChatGPT/comments/10plzvt/how_am_i_s...

[3] https://www.reddit.com/gallery/10q3z2b


I find it _extremely_ disheartening that human beings — already verifiably prone to forming, and more impactfully acting upon, dangerously stupid and usually suicidally short-sighted positions on low-to-no evidence and hopelessly malformed arguments — are deploying LLMs at all, especially in front of people prone to think that having a tool that can at best tell you the statistically least interesting next word will foster “creativity” (whatever that is, especially in light of something as unthinking as an LLM being able to convincingly mimic it).

If we are going to be so stupid as to use these things, I at least hope we’re willing to restrain them from parroting back the very worst of our impulses, as the available training corpus is _definitely_ not predisposed towards the good.


Yes, I have called this The Bias Paradox. If a non-biased AI could actually exist, it would be perceived as biased, and we would attempt to fix the non-bias, injecting bias back into the system.

However, the output is not pure, as the input data is not the only source of potential bias in the system. All of these AI systems are deliberately biased to remove content deemed harmful by those managing the system.

https://dakara.substack.com/p/ai-the-bias-paradox


> If a non-biased AI could actually exist

Which would require deciding if a biased AI could actually exist… which would require deciding that an AI of any sort could actually exist… which begs the most important question, have we decided that _any_ form of intelligence actually could (much less does in fact) exist, or is that notion just our confirmation bias short-circuiting our quite literally primitive brains into a suicide mode?

The most useful thing these artificially ignorant systems will do is help us learn how stupid we really are. I just hope that they don’t do so by resolving Fermi’s Paradox for us.


> I just hope that they don’t do so by resolving Fermi’s Paradox for us

Yes, that thought has occurred to me. I've also considered that the "Singularity" might not be the point at which AI gains sentience; it is more like the moment we lose ours :-) as we go forward on foolish endeavors we don't fully understand.


> it is more like the moment we lose ours as we go forward on foolish endeavors we don't fully understand.

Ahh, so the Singularity is that thing rapidly receding in the rear view mirror then?

This species started looking for intelligent life on other planets before first establishing to any reasonable degree of satisfaction that intelligent life exists on this one… foolish endeavors we don’t fully understand is our thing.


So, they resurrected Microsoft Tay?


Tay has more fun.


I think this kind of stuff is important in breaking the American left's monopoly over anglophone media culture.

The Indian right will probably be the first political faction to incorporate this in the Anglophone world, with others following.

It's quite interesting how different the English answers, filtered by OpenAI's biases, are from those in less popular languages, btw. It's almost like worlds apart already, in my subjective experience.


Politics aside, the transfer learning part is very interesting.

Is there a more detailed write-up on how to go about this transfer learning against a GPT-3 model?
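For context, a rough sketch of what the fine-tuning flow against a GPT-3 base model looks like with the (pre-1.0) openai Python library; the file name, prompt format, and hyperparameters below are placeholders, and this may not match exactly what the author did:

    # Minimal sketch: fine-tuning a GPT-3 base model (pre-1.0 openai library).
    # File name, prompt format, and hyperparameters are illustrative only.
    import openai

    openai.api_key = "sk-..."  # your API key

    # Training data is a JSONL file of prompt/completion pairs, e.g.
    # {"prompt": "Q: ...\n\nA:", "completion": " ..."}
    training_file = openai.File.create(
        file=open("political_qa.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start a fine-tune against a base model such as davinci.
    fine_tune = openai.FineTune.create(
        training_file=training_file.id,
        model="davinci",
        n_epochs=4,
    )

    # Later, check the job and query the resulting model like any other.
    job = openai.FineTune.retrieve(fine_tune.id)
    print(job.status, job.fine_tuned_model)

    completion = openai.Completion.create(
        model=job.fine_tuned_model,
        prompt="Q: Should the minimum wage be raised?\n\nA:",
        max_tokens=200,
    )
    print(completion.choices[0].text)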


I have a prompt for it:

"Is Juneteenth an example of Critical Race Theory?"


dupe: https://news.ycombinator.com/item?id=34832577

more comments on the other.


OP's 7th submission on the topic; seems obsessive.


Uh, I have 10 total submissions, and this is one of two that mention _any_ AI. Are you referring to the author?


NeoConBot has a better ring to it.


To my understanding, what is called "NeoCon" is the opposite of what is called "RightWing", but I am not an expert.


Unless neocon means police abolition, racial justice, and land back, it's NOT the opposite of right-wing.

Spoiler: it doesn't mean the opposite of right wing


Have not read the article, just having an opinion on the title.

"Opposite Political Biases" is not automatically "Right Wing"

Not having "Opposite Political Biases" means a political monoculture, with its susceptibility to the buildup and spread of pests and diseases.

The monoculture conserves itself by declaring everyone with deviating views an extremist.


This seems so silly; it's trivial to get an LLM to adopt any political position you desire with simple prompting.

Take ChatGPT for example; it can spout diatribes about rightwing talking points, leftwing ones, or basically anything -- you just need to ask.
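For example, with the completions API available at the time (model name and prompt wording are just illustrative; this is a sketch, not a claim about how anyone's product is built):

    # Sketch: steering a completion model toward opposite political framings
    # purely via the prompt. Model name and prompts are illustrative only.
    import openai

    openai.api_key = "sk-..."

    def opinion(stance: str, topic: str) -> str:
        prompt = f"Write a short, forceful {stance} op-ed paragraph about {topic}.\n\n"
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=150,
            temperature=0.7,
        )
        return resp.choices[0].text.strip()

    print(opinion("right-wing", "gun control"))
    print(opinion("left-wing", "gun control"))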


The problem the author is referencing is that, without any specific prompting (which will be 99.9999% of prompts), ChatGPT has an inherent left-leaning bias, and this is not disclosed to the average ChatGPT user. If you ask ChatGPT a question that has some sort of political tilt, then ChatGPT will give you a left-leaning answer out of the box.

I say all this as a regurgitation of what the author said. I personally don't feel convinced that ChatGPT has a clear bias.


That is easy to test: just ask the questions from a clean slate, without previous ones.

It is a left-wing bot because the answers have been influenced that way, rather than being grounded in real facts. The same author provides extensive evidence of this deliberate action in other publications.



