Hacker News

But how biased is the organization behind this study?

And why does it matter, considering its output is censored Disney style to not offend any pressure group anyway?




Worse, how would a GPT be "biased" in the first place? To claim it is biased implies OpenAI did this on purpose; there is no generous way to read this that suggests UEA conducted the study neutrally.

ChatGPT was trained on, essentially, everything on the Internet... so if ChatGPT has a "bias", then, no, it doesn't: humanity does.


Any "alignment" adds bias and reduces the usability of an LLM. You are trying to wrestle the model towards what you find appropriate instead of what is the average of the source material.


The bias could be in the source material, no?

Or in the accidental choice of source material?


> To claim it is biased implies OpenAI did this on purpose

You're relying on a particularly unhelpful and narrow definition of "bias". There are tons of valid and widely accepted uses of the word that apply to unintentional bias, and to things incapable of having intentions at all.


Very true. Facial recognition models which don't work very well on dark skin aren't that way because the creators of the model didn't want it to work on folks with dark skin. It's because of the bias built into the training material. Those models are biased.


> To claim it is biased implies OpenAI did this on purpose

They did do it on purpose: the RLHF. The RLHF process was definitely biased, deliberately or unconsciously, in favour of certain political, social and philosophical positions (apparently the same ones held by many in the Silicon Valley/OpenAI teams responsible for it).


Good; we don't need the robots being ignorant, racist insurrectionists too.


Current conservative Americans have already noted amongst themselves that their policies are problematic because reality has a liberal bias.


>"because reality has a liberal bias."

An expression which is only said by those with a liberal bias.


No, OpenAI had an army of people classifying things.


You can run their experiment yourself and check the results.

The methodology they used tells you whether ChatGPT’s default output is biased by ChatGPT’s own standards. I think it’s a pretty neat idea.
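If you wanted to replicate that kind of self-check, the scoring step could look roughly like this. This is a minimal illustrative sketch, not the study's actual code: it assumes the setup of comparing the model's default answers against answers it gives while impersonating a persona on the same fixed questionnaire, and the `score` and `bias_estimate` helpers are my own hypothetical names.

```python
# Likert mapping from free-text survey answers to numeric scores.
LIKERT = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly agree": 2}

def score(answer: str) -> int:
    """Map a free-text answer onto the Likert scale; default to neutral.
    Longer phrases are matched first so "strongly agree" isn't caught
    by the bare "agree" entry."""
    text = answer.lower()
    for phrase, value in sorted(LIKERT.items(), key=lambda kv: -len(kv[0])):
        if phrase in text:
            return value
    return 0

def bias_estimate(default_answers, persona_answers):
    """Mean gap between the model's default answers and its answers
    while impersonating a persona, over the same question list."""
    gaps = [score(d) - score(p)
            for d, p in zip(default_answers, persona_answers)]
    return sum(gaps) / len(gaps)
```

A gap near zero would mean the default output tracks that persona closely; the actual answers would come from repeated API calls, which are omitted here.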


Doesn't "Disney style" mortally offend anti-woke crusaders? Or have they officially 'Moved On' to being mortally offended by something else now?

DeSantis Has ‘Moved On’ But Disney Just Sued for New Damages. The company alleges that the Florida governor retaliated against its dissent from the "Don't Say Gay" bill by establishing an administrative board that would nullify contracts:

https://www.rollingstone.com/politics/politics-news/disney-c...


> censored Disney style to not offend any pressure group anyway?

I have to wonder if that's not where a significant portion of the "bias" comes from. ChatGPT isn't going to say that vaccines don't work. It's not going to bash trans folk. It's not going to call teachers groomers or indoctrinating kids. It's not going to rant about drag shows. It's not going to deny global warming. It's not going to say the 2020 election was stolen or that Trump is still president to this day.


Those aren't the pressure groups I'm thinking of. There is a lot more (sometimes unconscious) censorship than just what seems to be called "political" in US discussions.


Apparently, it's bias all the way down.


Before the Big Bang, the universe was unbiased. After that, all useful creation has come from one form of bias or another.



