I define bias as not treating all political and philosophical opinions equally. If one is picked over the others, that is bias. I saw this bias and recorded many examples of it in the case of ChatGPT.
I think the point is that when taking information from some source (like the web), you should just represent that source fairly, which is a reasonable expectation of a search engine, for example.
"Fair" isn't well-defined. Is it "fair" if Amazon results are seen as "more trustworthy" than a random new startup web store? Even ignoring SEO manipulation of any rules publicly believed to exist, that's the default outcome for things like PageRank.
Going beyond sources to conclusions, given LLMs aren't search engines and do synthesise results:
Politically, low-tax advocates see it as "fair" for people to take home as much as possible of what they earn; high-tax advocates see it as "fair" for the broadest shoulders to carry the most, and also for them to contribute the most back to the societies that enabled them to succeed.
Is the current status of Americans whose ancestors were literally slaves made "fair" by the fact that slavery has ended and all humans are equal in law? Or are there still systematic injustices, created in that era, whose echoes still make things unfair today?
Who is most to blame for climate change: the nations with the largest cumulative historical emissions, even where most of the people who did the emitting have died of old age, or the largest emitters today?
Well, I think you're going beyond the parameters of the discussion... LLMs synthesize datasets and that is all they do. They are not reasoning agents and they don't have opinions about anything. All we can say is that they reflect the biases inherent in the dataset, and to say anything else would be dishonest at best. It's only because most people have no idea how these things work that we get all this magical thinking.
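That claim about reflecting dataset biases is easy to demonstrate at toy scale. The sketch below is just a bigram counter, nothing like a real transformer, and the skewed "corpus" is invented for the example; the point is only that a pure next-token sampler reproduces whatever imbalance its data contains.

```python
import random
from collections import Counter, defaultdict

# Invented toy "training corpus": one opinion appears three times as often
# as the other, standing in for a skewed web crawl.
corpus = "taxes are bad . taxes are bad . taxes are bad . taxes are good .".split()

# Build a bigram next-token model: counts of what follows each token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Sample continuations of "taxes are": the model has no opinion, it just
# reproduces the 3:1 skew baked into its data.
samples = Counter(
    random.choices(list(follows["are"]), weights=follows["are"].values())[0]
    for _ in range(10_000)
)
print(samples)  # roughly Counter({'bad': 7500, 'good': 2500})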