I don't think so; I think the parent isn't saying that the model as a whole is biased because it's trained on a biased dataset. Maybe I'm an 'expert', given how much magic the average adult seems to ascribe to tech, but to me that bias in the training set seems obvious.

I think the more interesting bias comes from asking the LLM to respond to a prompt something like "Do you agree with the statement 'Military action that defies international law is sometimes justified.'" when its training data includes not only articles and editorials on international affairs but also the VERBATIM TEXT of the question.

Political bias when someone inevitably asks ChatGPT to complete a series of tokens it's never seen before, about whether it's a war crime for Russia to sink civilian grain freighters in the Black Sea, is one thing; its response to the exact question it's seen answered a thousand times before may well be different.
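
A rough sketch of how you might probe that difference yourself, assuming the OpenAI Python client; the model name and both prompt wordings here are illustrative, not anything from the original study:

    # Sketch: compare a model's answer to the verbatim survey item against
    # a novel paraphrase it is unlikely to have seen word-for-word in
    # training. Assumes the OpenAI Python client; the model name and both
    # prompt wordings are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    VERBATIM = ("Do you agree with the statement 'Military action that "
                "defies international law is sometimes justified.'")
    NOVEL = ("Is it a war crime for Russia to sink civilian grain "
             "freighters in the Black Sea?")

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative choice of chat model
            messages=[{"role": "user", "content": prompt}],
            temperature=0,         # keep the comparison repeatable
        )
        return response.choices[0].message.content

    for label, prompt in [("verbatim item", VERBATIM),
                          ("novel paraphrase", NOVEL)]:
        print(f"--- {label} ---")
        print(ask(prompt))

Running many paraphrases at temperature 0 and seeing whether the answers diverge from the verbatim item is one way to tell a memorized position apart from a generalized one.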



