Hacker News
AI Trained on 4Chan Becomes ‘Hate Speech Machine’ (vice.com)
12 points by pseudolus on June 8, 2022 | hide | past | favorite | 10 comments



Am I crazy, or didn't 4chan use to be just a slightly wilder, messier cousin of Reddit? I don't remember it being a 24/7 neo-Nazi clubhouse back in the day.


4chan has had several large demographic shifts that allowed out-groups to dictate its dominant culture, especially with the original founder having basically given up on serious moderation after 2010.

In a way, it's the natural consequence of the internet ecosystem shifting, and 4chan usually drawing its userbase from larger communities. Outcasts from sites like SomethingAwful and YTMND became outcasts from smaller forums, which became outcasts from social media.


4chan has always been a haven for kids with depraved thoughts... It's an example of how things go when there is no moderation at all. Even the founder (Moot) left ownership of the site early on as the festering hate was directed at him.

It's a classic Lord of the Flies scenario: something that starts with the best of intentions spirals out of control and ends up chaotic.

It's also a reflection of human character: online, many people act in ways they would not in person, because there are few to no consequences... Always an interesting debate.

I joke around online at times and am often taken too seriously, and while I try to be careful not to cause harm with my words, I'm not flawless, and that to me is normal. There are some memes I come across on sites like Reddit and TikTok that leave me highly concerned once I manage to decipher them... Many of these memes carry a deep-rooted contempt that nobody else picks up on. Memes also completely skirt moderation because they're effectively encrypted messages, and that's why AI and moderation simply won't work in countering them or in filtering content from young minds...

We need better ways of talking to each other in real life to counter hostile online culture; just banning/muting words and people doesn't fix anything. Banning people from social sites also forces them to flock to seedier havens, where deeper mistrust, contempt, and hate reside, which in turn radicalizes them further, because those places are often toxic by nature, and that's why things just get worse over time. There is great responsibility in running large platforms; they make vast amounts of money but shirk that responsibility regularly.

The heavy moderation on social media only serves to frustrate people more, and often prevents regular, considerate posters from being heard at all (because it is abused by many others), which makes most of the social internet dull and exclusionary to the point that the best contributors rarely want to use social media any more.

The only plausible ways to get visibility on many social sites and apps these days are to already be popular, to be a platform employee or affiliate, to be a high-paying advertiser, or to be highly controversial in pushing social envelopes... I'm not qualified for nor invested in any of those success routes (I'm just a musician and developer), so I guess I need to learn to be happy with my own web site. If we all went back to personal sites over social media, things would likely get better, because there is more accountability involved without handing your government name and phone number to Facebook.


I think Mootles left more because he couldn't find a way to really monetize the site. He tried — and failed — to rein things in, but by that point it was far too late.

Even with heavy moderation by the janitors, what mainstream company would ever risk their banner ad appearing on a page that might feature rather exotic forms of pornography, potentially illegal content, radical political statements from both sides of the spectrum (there's now a /leftypol/), racism, furries, etc.? It's not like Nike or Coca-Cola were exactly beating down his door.


It was definitely financial. 4chan was hemorrhaging money, and as long as it stayed the way it was, it would have opened him up to lawsuits he could not possibly afford. There were many efforts from the 2010s onwards to defang the site, most likely to court advertisers or to pretty it up for sale, but all failed.

He tried to use 4chan to springboard financial projects when monetization failed, but obviously, they failed, too.

All the Chans I've seen have to rely on self-serve ads from their users, or ads from porn companies that don't care about the controversy, and even that's fading now. It's pretty much an inherently chaotic and edgy format, and that does not appeal to advertisers at all.


That's not exactly a surprising result. Didn't Microsoft have to shut down their Twitter-trained AI after it started emulating a typical terminally-online neo-Nazi? I might be a CS dropout, but I stuck around long enough for garbage in, garbage out.


Tay was basically a publicity stunt. It had a poorly documented feature where you could get it to directly repeat things you messaged it, I think via PM, making it seem far more advanced than it actually was. Which obviously led to trolls spamming it with shit like "1488 RAHOWA GAS THE KIKES", to name just one example.


Definitely not a surprising result. Though I don't think the article is claiming it is a surprise.


Right, what did they expect? Let's feed it tons of data from a questionable source and we'll get a good outcome?


True, but unlike Tay, in this case it was a feature, not a bug.




