AI Propaganda Will Be Effective and Easily Accessible (techpolicy.press)
31 points by giuliomagnifico on April 12, 2023 | 36 comments



I don't think AI brings anything new to the basic structure of propaganda. If 10% of what you see on the internet is propaganda, it doesn't matter much whether that 10% is spread across tens of news items or hundreds. The means of production are evolving fast, I'll agree with that. But there is always one secret sauce, something we usually don't produce under competition, that gives propaganda an edge. Think about the feeling of sincerity between traditional and social media: nobody was sure 360p videos, 140-character tweets, etc. would replace traditional media, but people found sincere feelings combined with quick dopamine in them. Same thing here. It's not the means to an end that brings the real change, although it is certainly catalyzing a substantial one.


The difference is now there is the ability to do it at scale, and convincingly.

Bots on social media that can argue with you, share experiences, other forms of astroturfing, etc. Bots that can directly respond to your questions.

The difference IMO is that previously, making interactive propaganda in a convincing manner was very difficult and expensive. And now literally anyone can ask for "a CDC article describing the dangers of [literally anything]" and get an article so convincing that even diligent people won't immediately recognize it as fake.

Currently, unless you know what you're looking for, GPT-4 is very convincing. I think even the HN crowd is already seeing bots on HN/Reddit/etc. and not realizing it.


This capability has been available for several years now.

In 2019, OpenAI said they couldn't fully release GPT-2 because it would be too dangerous. Now anyone can run a more powerful model.

In what ways has the predicted danger manifested in reality?


AFAIK the previous models couldn't convincingly continue a conversation chain; they were mostly limited to individual posts/comments, since they were rather obvious in an actual back-and-forth. It's fully interactive now.

And it's not just mega-corporations/nation states; soon everyone will have access to something vastly better than GPT-2.

---

I've been in the GPT-3 beta, and we've had community-trained GPT-2 models for a while as well. Those seriously don't compare to GPT-3.5/4.

Those severely underperform compared to even LLaMA + finetuning now, GPT-3.5 is a mile ahead of that, and GPT-4 leaves everything in the dust. Yes, we've had it, and yes, they're often able to pass the "casual glance Turing test", but they suuuuuck in comparison to GPT-3.5.

What I'm trying to say is that propaganda/astroturfing is no longer limited to simply being a post/comment. It can now be an entire chain.

----

For example, "I was attacked by [person]'s dog"

You ask "Oh man did you get hurt?",

They respond " Luckily, I didn't get seriously injured, but it was a pretty scary experience. The dog managed to bite me on my left arm, but I was wearing a thick jacket which protected me from any deep punctures. I still got some bruises and scratches, but nothing too severe. I'm just really grateful it didn't escalate further.

I talked to the owner, and they apologized profusely, assuring me that they would be more careful in the future. It turns out that the dog has had some behavioral issues in the past, but they've been working with a trainer to help correct them. "

GPT-2 could never, ever compare to that. This response IMO would pass as human to 99% of people who weren't specifically looking for it.


I'm not sure how effective such bots will be. People have tried having masses of actual humans defend their country in the comments of news articles and the like. That often seems to backfire instead of help.


They’re generally not very good at it, though. AI can be much more effectively tailored.


AI can craft, at scale and for pennies on the dollar, the kind of highly persuasive rhetoric that has literally changed the world over the past 20 years or so.


A more complex natural-language computer plugged into all social media can learn and adapt at a very high pace. If we also get it to hallucinate at 25+ fps, then the whole thing becomes murky: it could impersonate people on video calls and things along those lines. This tech looks like it will bring at least some changes to many aspects of our lives.


On one hand, that may be true. On the other hand, we may now be able to provide LLM-powered fact checkers, able to reply to every post or label them in order to slow down the dissemination of that kind of info.

Of course, authoritarian regimes will be able to do so as well, which is both a curse (much more reactive censorship) and a blessing (unlikely to understand new euphemisms or coded messages by itself).


Yes—it seems likely that the effect will be to concentrate power/wealth in the hands of those who already have it, since they will have greater access to the more powerful AIs.

In the early days of the internet, it was easy to imagine many positive outcomes for the technology (in addition to the negatives). So far with AI, it seems like the potential negatives are much more salient in people’s imagination.


There was a time when PCs seemed to democratize computing away from centralized computing services.

But the centralized services eventually came back in new forms.

Maybe Personal AI systems will change things up a bit eventually.


And how exactly are we going to 100% remove hallucinations from the LLM powered fact checkers?


Who checks the fact-checkers? More LLMs (semi-seriously).

I think there is a pretty clear trend of not leveraging "trained"/"built-in" knowledge if you value accuracy, but rather using LLMs as a way to extract meaning and compare it to a known-good reference.

One way would be to use LLMs to flag suspicious talking points, compile a list of propaganda topics and their rebuttals, and give that in the context window of the LLMs that formulate a response.

For more general fact-checking, I guess one could ask LLMs to extract the various points, run a quick web check on them (maybe we need a "facts" database?), and examine the results. This wouldn't protect against logical fallacies, though; I'm not sure that can be avoided short of injecting a proof assistant somehow (or providing rebuttals to known talking points, as suggested above).
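
Something like this rough Python sketch of the talking-point approach: llm() is a stand-in for whatever chat-completion call you'd use, and the topic IDs and rebuttal text are made up for illustration.

    # Toy sketch: flag known talking points, then ground the reply in a
    # vetted reference instead of the model's trained-in "knowledge".
    REBUTTALS = {
        "hiv_origin": "The claim that the US created HIV traces back to a "
                      "documented Soviet disinformation campaign.",
    }

    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your chat-completion API here")

    def check_post(post: str) -> str | None:
        # Step 1: ask the model only to classify, not to recall facts.
        answer = llm(
            "Which of these topic IDs does the post argue for? "
            f"Reply comma-separated, or NONE: {', '.join(REBUTTALS)}\n\nPost: {post}"
        )
        hits = [t.strip() for t in answer.split(",") if t.strip() in REBUTTALS]
        if not hits:
            return None  # nothing suspicious flagged
        # Step 2: put the vetted rebuttals in the context window so the
        # response is grounded in the reference text, not model memory.
        context = "\n".join(REBUTTALS[t] for t in hits)
        return llm("Using only this reference, write a short factual reply:\n"
                   f"{context}\n\nPost: {post}")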


Great, a turtle ouroboros.


AI will just be like fake download buttons. Kids will instinctively learn to spot it while older people will believe anything.


That's a false generalization.

You have people of all ages who fall for the oldest tricks in the book and you have people of all ages who know the medium they are using and can see behind the fake.

Just look at all these MLM videos on TikTok and YouTube and their target audience.


Or there's so much bullshit that no one believes anything any more.

https://en.wikipedia.org/wiki/Firehose_of_falsehood


Why would it be the case that LLMs will never achieve exactly-human quality writing?

There’s a reason that no purveyor of fake download buttons is worth $20B+, which is that the technology is capable of completely different things.

If this results in young people disbelieving everything, which we already see early signs of, then that’s also a loss. Probably a bigger loss than them believing the same wrong things as everyone around them.


It doesn't need to be perfect, it just needs to be good enough that it can fool most people. Just like a fake download button.


There will probably be a market for anti-ai checkers, just like anti-virus checkers.

(All you have to do is let them snoop on everything you say or do)
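
FWIW, one building block for such checkers already exists: perplexity scoring, where text a language model finds unusually predictable is more likely machine-generated. A rough sketch with the Hugging Face transformers library; the threshold is an arbitrary placeholder, and detectors like this are easy to defeat with light editing.

    # Rough sketch of a perplexity-based "AI text" check: generated text
    # tends to score as unusually predictable under a language model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # labels are shifted internally; loss is mean NLL per token
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    def looks_generated(text: str, threshold: float = 40.0) -> bool:
        # threshold is an untuned placeholder, not a recommended value
        return perplexity(text) < threshold  # suspiciously predictable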


> There is still progress to be made in boosting public awareness about the unique harm derived from AI-generated propaganda. For example, several mainstream news outlets circulated the fake photos of Trump without overlaying a watermark of any kind to mitigate the potential harm that arises from the images being amplified without context – the threats these technologies pose are orders of magnitude greater in areas with less robust information spaces. In different information environments, such as those devoid of confidence in institutions, false evidence could have lasting repercussions before being disproven or even questioned.

How is confidence in institutions going to be helpful? There is nobody above the temptation to abuse this power. The lesson from all "trusted" institutions over recent years should be that there are no trusted institutions.

AI is going to lead to unexpected societal consequences, many of which I've elaborated on in more detail here - https://dakara.substack.com/p/ai-and-the-end-to-all-things


That sounds like hyper-cynical hyperbole. For example, we trust the NY Times more than a random blog because the NYT has a long-established reputation for rarely printing inaccurate info and printing corrections when they do. And obviously it's not about trust vs no trust, it's always a matter of degrees.


The NYT is in the class of elite, professional-level bias, and they certainly are not without their issues. This isn't to say they are more biased than other notable publications, but that they all have this quality: they use professional language to sound authoritative on subject matters while still having the effect of changing opinion or manipulating information.

https://en.wikipedia.org/wiki/List_of_controversies_involvin...

https://www.allsides.com/news-source/new-york-times

https://www.bariweiss.com/resignation-letter

https://www.theguardian.com/media/2004/may/26/pressandpublis...

All of the "trusted" sources have led us into wars and condoned the actions of bombings across the globe.


Again, hyper-cynical words but not a lot of meaning. Gun to your head, would you choose the facts from the NYT or from Steve's Dipshit Blog? It is and always has been and always will be a matter of degrees.


False choice. Your implication would be that it is ok to lead us into foreign wars under false pretenses because statistically NYT might be better by some measure than Steve's Blog.

I reject that notion in its entirety. It is precisely the reason we are in such a sad state of the world.


Like it or not, you do make judgments on the accuracy and trustworthiness of different sources of information. A reasonably reliable source of information can be subject to scrutiny and doubt, can have clear biases, can sometimes even be diametrically opposed to your own moral and ethical values. But we still make judgments on accuracy and trustworthiness, and act accordingly.


> A reasonably reliable source of information can be subject to scrutiny and doubt, can have clear biases, can sometimes even be diametrically opposed to your own moral and ethical values.

I think we have arrived back very close to my original premise, which is to say that, in principle, there are no "trusted" sources, as they are all capable of the same fallacies. Furthermore, those that you do trust will lead you astray more than those you don't: if you don't apply reasonable scrutiny and doubt, you are more susceptible to being misled or influenced.

AI will give more power to all in this arena. However, in some aspects, it will also give Steve's Blog the power to appear as professional as NYT in presentation.

However, arguments over the degree of bias or misinformation are not really the point; the point is that the power to create them is now increased in the hands of all.


"it will also give Steve's Blog the power to appear as professional as NYT in presentation."

I think that's exactly why history and reputation are going to matter more in the future, not less. And I do agree there are no purely "trusted" choices; it is always a matter of degrees, and our assessments are always imperfect.

Though ironically (and I'll give myself away as someone who has not thought too much about these issues) I would imagine the best tool to assess accuracy and trustworthiness in the future could end up being an AI, which means we first need to assess the accuracy and trustworthiness of said AI, which... is enough for me for one day!


In the event you are curious later, one of the problems with this is what I call the AI Bias Paradox, which essentially asserts that the AI will always be biased in the same ways we are. Just some thoughts here you might find interesting.

https://dakara.substack.com/p/ai-the-bias-paradox


Yeah, I don't expect institutions to get their reputations back anytime soon. But I could imagine distributed social networks, coupled with cryptographic signatures and blockchains, being used to reach consensus on some ground truth. I think this already basically happens today, albeit informally, but deep fakes make it possible to disrupt these networks. Cryptographic signatures could at least validate sources, and by minting an assertion or story on-chain you get a verified timestamp for it.
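
The signing half of that is straightforward today. A minimal sketch with Ed25519 from Python's cryptography package; the on-chain anchoring step is only gestured at via the digest.

    # Minimal sketch: a source signs a statement so anyone holding the
    # public key can verify authorship; the digest could then be anchored
    # on-chain (or in any append-only log) for a verifiable timestamp.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # the source's identity key
    public_key = private_key.public_key()       # published widely

    statement = b"Photo X was taken by our reporter on 2023-04-12."
    signature = private_key.sign(statement)
    digest = hashlib.sha256(statement).hexdigest()  # what you'd mint on-chain

    try:
        public_key.verify(signature, statement)  # raises if tampered with
        print(f"verified; anchor digest {digest[:16]}... for the timestamp")
    except InvalidSignature:
        print("signature check failed")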


The most terrifying aspect of this is how these models might be trained. If the source material is very effectively crafted, you might imagine an army of agents with cult-leader eloquence indoctrinating the weak of mind. Combine this with politically subversive motivations and you have a serious threat to societies where governments do not control people by force.

In the past, it was difficult to disseminate disinformation, since it was often planted in newspapers and the like in other nations rather than in the US, and might make its way into our information ecosystem only years later (the false narrative that the US created HIV, for example). The internet and social media have already changed that. Now disinformation flows instantaneously across borders, even finding unethical organizations which eagerly act as clearing houses for this toxic content. We still have not successfully adapted to this crisis.

The reality is we have less than a few years to excise the existing disinformation-laundering networks, reinforce our information ecosystems, and educate the public. All this needs to be accomplished before addressing the imminent threat that malign language models pose. This is not only a threat from abroad by hostile states but also from powerful self-serving organizations within our own nations, which might supplant democracy with a domestic brand of authoritarianism.

With people like Peter Thiel and Elon Musk bankrolling OpenAI, it's not looking good.


Let's move fast and break things/society instead.


I guess the days of boutique propaganda are numbered. sigh


I kind of hope we start forming more personal bonds with people, to get more at-a-glance trustworthy views of topics, aggregated from those we've come to know. And I hope we can share our social connections: not just posts, but starting to radiate out trust and highlight each other's better (and worser) aspects more broadly across time.

The counter to propaganda and misinformation/disinformation is context: having better stories of who the story-tellers are and what the stories are. Ultimately, I think we've been ignoring the backstory of who we read online for too long, and this new era greatly emphasizes how much we need some kind of trust interrelationship protocol... some kind of web... of trust. ;)
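
As a toy illustration of what "radiating out" trust could mean computationally: direct trust edges, with scores decaying per hop so a friend-of-a-friend counts for less. The decay factor and the example graph are made up.

    # Toy web-of-trust: trust decays as it propagates through introductions.
    TRUST = {  # direct trust edges: who -> {whom: score in [0, 1]}
        "me": {"alice": 0.9, "bob": 0.6},
        "alice": {"carol": 0.8},
        "bob": {"carol": 0.3, "dave": 0.7},
    }

    def trust_in(target: str, source: str = "me", decay: float = 0.5) -> float:
        """Best trust path from source to target, discounted per extra hop."""
        best = {source: 1.0}
        frontier = [source]
        while frontier:
            node = frontier.pop()
            for neighbor, score in TRUST.get(node, {}).items():
                hop_decay = 1.0 if node == source else decay
                candidate = best[node] * score * hop_decay
                if candidate > best.get(neighbor, 0.0):
                    best[neighbor] = candidate
                    frontier.append(neighbor)
        return best.get(target, 0.0)

    print(trust_in("carol"))  # best path: me->alice->carol = 0.9*0.8*0.5 = 0.36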


Can we use AI to hunt down bad forms of propaganda?


Hopefully non AI propaganda will be squashed too through efforts to fight it.



