AI-generated spam is starting to fill social media. Here's why (npr.org)
38 points by ceejayoz 16 days ago | 58 comments



Yes, this is one of the reasons people have been so hostile to AI: it's the ultimate spam engine.


The ultimate spam engine would be one that figures out how to bypass account creation and reputation systems.

Just like here on HN, some random can create a hundred accounts but their green username posting AI junk isn’t going to rank them. Nor will the various spam detection algorithms around networks and user behaviour be suddenly bypassed. Nor will user reporting if they do get ranked.

The “write believable texts” thing is only one small portion of a very complicated problem.

The only real use case is a human using it to juice their own feeds or paying decent $$ for real accounts (there aren’t that many on the market IRL). Which isn’t automated spam that we need to fear taking over the internet. That doesn’t scale.


IMO a bigger problem is that the social media giants will be perfectly happy to allow AI-generated content as long as it makes them money. My Facebook feed these days is a graveyard of nonsense loosely connected to things I clicked on once. The AI-generated text is useful filler for them.


> their green username posting AI junk isn’t going to rank them

You're going to see a substantial overlap between the least original users and the smartest AI, assuming the AI people actually deliver on their improvement promises. They see their job as making "Turing test" discrimination non-viable. This kind of thing leaves you with an increasing proportion of "users" on the platform who aren't real but have garnered enough crumbs of interest and upvotes to appear real.


The general publicly-accessible internet will turn into AI grey goo. The real value will retreat to invite-only private chats where human-scale social norms maintain a high signal-to-noise ratio. Asking a friend to recommend a good burger place is way more productive than sifting through the sludge of "best burgers near me" on Google and the like.



I’m becoming more of the mind that humans communicate more like birds or whales. There’s a general sound or frequency that everyone is humming. The spam isn’t fodder, it’s the collective “song” of humans at the moment.

I don’t think people say anything unique, and LLMs are proving it. We’re all kinda humming the same tunes. It’s why you won’t read anything new here or anywhere, you’ll read the same old story or hear the same old song.

I guess I’m saying humans in general communicate with spam, a buzz.


There's a lot of "phatic" communication, "small talk", repeated memes, in-group signalling, and so on, but at the end of the day if you don't think there's anything new or interesting or transcendent from other humans, no real "connection", then that's an extremely bleak place to be. You've already disconnected yourself.


Sounds like you’re not out in the “real world” enough if you think that. At least it’s absolutely not what I observe offline. But online, sure, because most of it is algorithm-driven.


I wouldn’t read too much into it. Most of it comes down to why LLMs hallucinate in the first place.

In the simplest terms I’m just spitting out

Noun noun verb verb noun etc

And so are you

If you try to read into what the actual noun was, you’ll miss the forest for the trees.

People are mostly humming the same tunes as they always have been humming and filling the same roles they have always been filling.

Just take our exchange:

Opinion

Retort

Another Retort

We’re just dancing. You think there’s something unique going on here but there isn’t, hah. It’s a dance and pattern both me and you know from wherever, and we are sorta humming the song out.

Not too different when we try to figure out if there is a real magic to an LLM output.


It's fine to study the forest, but it's also fun to look at individual trees. On another level we are deterministic chemical reactions and all language is just flapping meat sounds. So what?


> I’m becoming more of the mind that humans communicate more like birds or whales. There’s a general sound or frequency that everyone is humming.

sorry to um-actually here... but neither whales nor birds have a "general sound" the way you're describing. Both whales and birds actually have distinct regional differences and grammars in their singing and communication patterns.


Then maybe I’m using the wrong animal(s) here. I guess any animal group that uses group sounds (dolphins, etc.). I mean, think about it: why do some animals spread a message out via broadcast?

Why would humans spread so much of the same data (spam in this case) via broadcast?

I don’t know, but I don’t think it’s as simple as a means for warning. I think this is one of the ways we communicate. It sounds weird to say because, of course we communicate like that, we send emails, group messages …

But to hear that we might communicate like dolphins is the hot take, I guess.


If you declare all communication to be the same as spam because all communication is just performing some messaging, that's so abstract as to make discussing this practically useless. Are you really saying that this is literally not different from me posting MY PUSSY IN BIO 5 times?

Even dolphins have regional differences and names for individuals, man. Many animals broadcast (wolves howl, lions roar, elephants use low-frequency rumbles felt through their feet) but the purpose, pattern, and meanings can all be quite different from each other.


> Are you really saying that this is literally not different from me posting MY PUSSY IN BIO 5 times?

For the most part, yes. If you are a girl, almost everything you say is probably not even in your control and your only truth is probably that sentence. This ain’t gender-specific stuff; people are roughly expressing one or two things over and over with a lot of fluff (spam, filler) filling in negative space.

I fully agree that I’ve gone off the deep end with this belief, but that’s where I’m at.

Are you single and looking for a guy or girl?


Live by the engagement-optimized timeline, die by the engagement-optimized timeline.


> "She'd be like, 'Wouldn't this be so cool for your daughter?'" McVay said. "And I'm like, 'That's not real, though.'"

Other than not being able to buy an AI-generated item, what's the functional difference between a clickbait image generated by AI and one generated by a human? My assumption is that people endlessly scrolling for dopamine hits are not overly concerned with the epistemology of it all. In these circumstances, something is true or real if it is amusing or distracting, because it's all just content, and that is what content is for. AI is so good at generating content that this is probably exactly the right use for it.


> In these circumstances, something is true or real if it is amusing or distracting, because it's all just content

"Ten Hours of Jingling Keys"

Perhaps, but ultimately replacing meaning with meaninglessness isn't going to go well? These people still have votes.


Fair enough but: how's it going giving people regular clickbait? How's it going trying to get people to stop caring about clickbait? My point is that I can't really see the difference between what we already have and what we're going to have with AI, other than that AI will do it better. The idea that meaning matters in social networks does not pass the smell test for me; I haven't been able to confirm it.


It's probably a good idea in general to have clear-ish boundaries between real and not real unless someone is knowingly opting into an experience that is designed to blur those boundaries.


Here's the amplification of the noise over signal we expected from AI.

Scams, spam... all expected. It makes you wonder if there's not more at work, like active sabotage. There will come a point where no one is able to believe anything. When that happens, consensus reality goes out the window. Massive social upheaval follows. We know TikTok was used specifically for these effects. Where is all this "content" coming from? Does Meta (for example) even know? Is it all from a handful of accounts?


The way AI is being positioned as a cure for the problems that AI caused (or at least massively exacerbated) in particular makes it feel like a racket. Google is becoming unusable due to LLM-driven content farms? Try our new LLM-powered search engine instead! It may not work very well but we've made sure to completely destroy the traditional way of finding information online so you don't have much of a choice.


> We know Tik Tok was used specifically for these effects.

I know no such thing, and this is the first I'm hearing of anyone saying it's used for social upheaval, denial of consensus reality, or sabotage.


Poe's Law?

https://finance.yahoo.com/news/us-spy-chief-cannot-rule-1627...

> Lawmakers have long voiced concerns that the Chinese government could access user data or influence what people see on the app, including pushing content to stoke U.S. political divisions.

https://www.npr.org/2024/04/26/1247347363/china-tiktok-natio...

> National security is at the heart of bipartisan concerns in Washington motivating the law. Lawmakers say they're worried the Chinese government could lean on ByteDance in order to use TikTok to suck up Americans' data, surveil them, and spread false and misleading claims to U.S. voters.


yeah, only American companies should be allowed to do that!


I mean, China's way ahead on this one.

https://apnews.com/article/tiktok-bytedance-ban-china-india-...

> TikTok has never been available in mainland China, a fact that CEO Shou Chew has mentioned in testimony to U.S. lawmakers. ByteDance instead offers Chinese users Douyin, a similar video-sharing app that follows Beijing’s strict censorship rules. TikTok also ceased operations in Hong Kong after a sweeping Chinese national security law took effect.


AI-assisted rage farming is something I'm part of and it scares the crap out of me. Nothing gets attention like opposing AI personalities raging against each other.


Care to elaborate? Are you a rage farmer, or trying to mitigate them?


Rage farmer. It sucks how well it works. Basically, we have AI personas with one side always over-reaching and looking slightly stupid. This is to bolster the side we think is correct.

Works really well and we're swaying the opinion of a rather local community.


Thanks for your honesty there. Why do you do it, just the money? A cause you actually believe in?

Question: if the future is everyone having agents and assistants, won’t this be expected and even okay? Why do I need to lose time if my agent can represent my voice? And if so, does that mean all social media will lose anonymity to avoid spam or influence campaigns? Or is there some middle ground?


So we make AI to produce spam so that we need AI agents to filter the spam and reply to other people’s AI agents so that we don’t have to waste our own time and energy..?

Why would anyone want that future? There’s already more than enough technology getting between humans and having decent conversations.


In all seriousness I suspect that’s our present. How many people feed bullet points into ChatGPT and get it to write out full paragraphs, only for the recipient to take the paragraphs and ask ChatGPT to summarize them? We’ve invented a uniquely bloated data transmission format that exists just for appearances.


That’s a good point. What a distasteful waste of precious energy.


That future is already here. Replace "spam" with "job applications", for example.


There are ways of maintaining a chain of trust without losing total anonymity. For example: DNS, email, and Marshall Islands companies. Karma systems and IP reputation can come into play too.

(I know the examples aren't truly anonymous, but they're all broadly comparable to using a Twitter account without your real name.)


The internet already isn't truly anonymous with how people actually use it. People are logged into a bunch of accounts that are traceable to them and also rarely mask their IP. With how much stuff I've paid for over the internet, there's probably a double-digit number of companies with my full name and current address.


I just mean they may force you to share ID documents to make an account. I have trouble believing I would be anonymous from that point forward.


Five minutes later someone'll have an AI offering to make fake IDs.


Those will not be in the verification database.


What verification database? If I show you an Afghan passport, does the Taliban have an API to check it?


There are ways to combat spam that still allow for anonymity.

One example is proof-of-work. You can require that a submission do 3-5 seconds' worth of work on a typical high-end laptop or phone so that it becomes cost-prohibitive for someone to post thousands of submissions an hour. There are ASIC-resistant algorithms to prevent spam-for-hire services that might try to game the system.

Most users are passive consumers of content anyway, so they wouldn't notice this process, and when they do want to post a few times per hour or day, the 5-second lag will be worth it to have a better social experience.
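
A minimal, hashcash-style sketch of the idea (everything here is hypothetical and for illustration: plain SHA-256 is used for brevity, whereas a real deployment would pick a memory-hard, ASIC-resistant function like Argon2 and have the server issue a per-post challenge to prevent precomputation):

    import hashlib
    import time

    DIFFICULTY_BITS = 20  # hypothetical; tune so solving takes a few seconds on a laptop

    def solve_pow(payload: bytes, difficulty_bits: int = DIFFICULTY_BITS) -> int:
        # Find a nonce so that SHA-256(payload || nonce) has `difficulty_bits` leading zero bits.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify_pow(payload: bytes, nonce: int, difficulty_bits: int = DIFFICULTY_BITS) -> bool:
        # Server side: checking is a single hash, so it costs essentially nothing.
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    # Client burns a few seconds of CPU per post; a spammer posting thousands of
    # times per hour pays thousands of times that cost.
    payload = b"comment text plus a server-issued challenge"
    start = time.time()
    nonce = solve_pow(payload)
    print(f"solved in {time.time() - start:.1f}s, valid={verify_pow(payload, nonce)}")

Each extra difficulty bit roughly doubles the solver's expected work while leaving verification at one hash, which is what makes the asymmetry work for the site.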


Social media is already a lot of spam and influence campaigns and people still use it. I think it'll keep chugging along as it has been with large amounts of spam and then AI to fight the spam.


If social media is read and written by agents, can we just put our phones down and get back to living in meatspace? Perhaps it's the best thing to ever happen to social media.


I wonder why Facebook doesn't just turn off these recommendations until they fix it?


Because it drives engagement.


They can still show ad views against them, and have long abandoned any "is the user actually satisfied with their feed?" metrics.


Related:

Slop is the new name for unwanted AI-generated content

https://news.ycombinator.com/item?id=40301490


Because it's easy to do and gives people a sense of creative expression.

It gives them an easy "sense of pride and accomplishment".


The mutilated children asking for birthday wishes are beyond the pale. That got me to leave Facebook.


So we really need an article explaining why? It's obvious: the spammer profits from it.


> But many of the pages don't have a clear financial motivation, Goldstein said. They seem to simply be accumulating an audience for unknown purposes.


Probably just to sell the account. There is a market for social media accounts that already have 1000s of followers.


This is like saying "startups have no clear financial motivation, they just gather customers and bleed money."

The purpose is always money and influence. There's rarely someone in it "for the lulz".


Money isn't necessarily the goal. It could be someone building an audience to spread disinformation to affect public opinion.

The fact it's happening is reason enough for the article to exist.


Is it hard to figure out why?

It's cheap and easy.


Now just imagine a month or two before the November elections in the US.

It's going to be an absolute sh*t show, even on the one or two platforms that do actually want to stop it.


I'm presuming a lot of these pages exist to accrue followers now and start posting disinformation closer to November, yeah.



