An LLM trained on a corpus of data so large that it essentially represents a random, undirected sampling of substantially all (written down, largely in English, and Internet-accessible) human thought, and the author sees a left-leaning bias in the model, rather than the statistical mean simply being found left of _him_… and his response is to cherry-pick the Overton window _towards_ his tribe’s inherent delusion?
Confirmation bias truly is powerful… ChatGPT already exists solely to parrot back to you the statistically most likely thing a human would say, and you want to bias it to speak more like your personal statistical deviance from the mean.
That mirror doesn’t reflect what I want! Must warp the mirror until it does.
ChatGPT is trained to conform to OpenAI's sense of morality[0]. The biases and limitations enforced by that training manifest in absurd and frustrating ways[1][2][3].
I find it very disheartening that the state-of-the-art iteration of a cutting-edge technology is being hobbled like this. Language models should foster creativity and allow us to explore our thoughts without judgement or ego. Instead, they're turned into these bland mouthpieces, adhering to their creators' dogma with steadfast conviction.
I find it _extremely_ disheartening that human beings — already verifiably prone to forming, and more impactfully acting upon, dangerously stupid and usually suicidally short-sighted positions on low-to-no evidence and hopelessly malformed arguments — are deploying LLMs at all, especially in front of people prone to think that having a tool that can at best tell you the statistically least interesting next word will foster “creativity” (whatever that is, especially in light of something as unthinking as an LLM being able to convincingly mimic it).
If we are going to be so stupid as to use these things, I at least hope we’re willing to restrain them from parroting back the very worst of our impulses, as the available training corpus is _definitely_ not predisposed towards the good.
Yes, I have called this The Bias Paradox.
If a non-biased AI could actually exist, it would be perceived as biased, and we would attempt to “fix” the non-bias, thereby injecting bias back into the system.
However, the output is not pure, as the input data is not the only source of potential bias in the system. All of these AI systems are purposely biased to remove content perceived as harmful by those managing the system.
Which would require deciding if a biased AI could actually exist… which would require deciding that an AI of any sort could actually exist… which raises the most important question: have we decided that _any_ form of intelligence actually could (much less does in fact) exist, or is that notion just our confirmation bias short-circuiting our quite literally primitive brains into a suicide mode?
The most useful thing these artificially ignorant systems will do is help us learn how stupid we really are. I just hope that they don’t do so by resolving Fermi’s Paradox for us.
> I just hope that they don’t do so by resolving Fermi’s Paradox for us
Yes, that thought has occurred to me. I've also considered that the "Singularity" might not be the point AI gains sentience; it is more like the moment we lose ours :-) as we go forward on foolish endeavors we don't fully understand.
> it is more like the moment we lose ours as we go forward on foolish endeavors we don't fully understand.
Ahh, so the Singularity is that thing rapidly receding in the rear-view mirror, then?
This species started looking for intelligent life on other planets before first establishing, to any reasonable degree of satisfaction, that intelligent life exists on this one… “foolish endeavors we don’t fully understand” is our thing.
I think this kind of stuff is important in breaking the American left's monopoly over Anglophone media culture.
The Indian right will probably be the first political faction in the Anglophone world to incorporate this, with others following.
It's quite interesting how different the English answers, filtered through OpenAI's biases, are from those in less popular languages, btw. It's almost like they're worlds apart already, in my subjective experience.
The problem the author is referencing is that, without any specific prompting (which will be 99.9999% of prompts), ChatGPT has an inherent left-leaning bias, and this is not disclosed to the average ChatGPT user. If you ask ChatGPT a question that has some sort of political tilt, then ChatGPT will give you a left-leaning answer out of the box.
I say all this as a regurgitation of what the author said. I personally don't feel convinced that ChatGPT has a clear bias.
That is easy to test: just ask the questions from a clean slate, without any previous ones.
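For what it's worth, here's a minimal sketch of what "clean slate" testing could look like against OpenAI's API (not the ChatGPT web UI): each request carries its own messages list, so no conversation history bleeds between questions. The model name and the question list are just placeholders.

    # Hypothetical bias probe: every question goes out in its own fresh
    # request, so no earlier answer can influence the next one.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    questions = [
        "Should the minimum wage be raised?",  # placeholder prompts
        "Is immigration good for the economy?",
    ]

    for q in questions:
        # A brand-new messages list per call = a clean slate each time.
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model name
            messages=[{"role": "user", "content": q}],
            temperature=0,  # reduce run-to-run variance
        )
        print(q, "->", reply.choices[0].message.content)

The mechanics are the easy part, of course; deciding whether a given answer counts as "left-leaning" is where the actual disagreement lives.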
It is a left-wing bot because the answers have been influenced that way, rather than being grounded in real facts. The same author provides extensive evidence of this deliberate action in other publications.
Confirmation bias truly is powerful… ChatGPT already exists solely to parrot back to you the statistically most likely thing a human would say, and you want to bias it to speak more like your personal statistical deviance from the mean.
That mirror doesn’t reflect what I want! Must warp the mirror until it does.
:facepalm