
neal.fun's Infinite Craft is a fun app: you play by combining two words to make a new word, e.g. "Water" + "Fire" makes "Steam", "Shark" + "Hurricane" makes "Sharknado", etc.

Except it's also exposing AI bias: combining "Palestine" + "Child" makes "Terrorist".

The underlying LLM (Meta AI's LLaMA) doesn't know who Palestinian children are. It doesn't know they're dying en masse from bombings and starvation. It's only regurgitating propaganda that associates them with terrorism.
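For the curious, the backend is presumably just few-shot prompting the model with the two words and taking its completion as the new element. A rough sketch of that idea in Python (the prompt wording, model ID, and helper function are my guesses, not neal.fun's actual code):

    # Sketch of an Infinite Craft-style "combiner": few-shot prompt an LLM to
    # merge two concepts into one word. Prompt and model are assumptions, not
    # neal.fun's actual backend.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-2-7b-chat-hf",  # any instruction-tuned LLaMA works
    )

    def combine(a: str, b: str) -> str:
        prompt = (
            "Combine two words into the single most fitting new word.\n"
            "Water + Fire = Steam\n"
            "Shark + Hurricane = Sharknado\n"
            f"{a} + {b} ="
        )
        out = generator(prompt, max_new_tokens=5, do_sample=False)
        # generated_text includes the prompt; keep only the first line of the completion
        completion = out[0]["generated_text"][len(prompt):]
        return completion.strip().splitlines()[0].strip()

    print(combine("Palestine", "Child"))  # prints whatever association the model has learned

Whatever the real prompt looks like, the point stands: the output is just the model's strongest learned association for that pair of words.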




> It's only regurgitating propaganda that associates them with terrorism.

More fundamentally, it's only regurgitating associations. That's all LLM "AI" does. There's nothing deeper, however much we want to believe otherwise. What it does is hold up a mirror. In a sense each model is a "personality" shaped by the training data it was fed. The Meta model that associates children with terrorism is a reflection of the values of the company that created it and of those who contributed the data. They selected the training data. It is not representative of a global "mind of humanity" but of the narrow demographic of people who use Facebook.

As a commenter here put it recently: "Why are all mirrors so ugly?"


How everybody seems to just acquiesce to calling these excrement generators “AI” is a source of endless astonishment to me.

It cannot be a surprise that a digital parrot regurgitates anything and everything that's in its training data.



