It doesn't feel that detached from reality.
I watch a lot of small YouTube channels (some of them don't even reach 100 subscribers) and their comments are flooded by bots. Channel owners remove them quickly though.
Some of them post random timestamps like "i like your video 23:34", "love u 5:49" (but the video is just 2 minutes long), while other bots just spam every conversation with the same video links. There are a few that are a bit more advanced and attempt to use the video's metadata in their responses. On Twitter there's a lot of fake activity too.
Often such accounts posting random bait-ish comments are accompanied by a profile picture of a semi or completely nude woman. Many unsuspecting souls will naturally click.
The thing to bear in mind is, we're only a marginally sapient species. We literally only just now developed a technological civilization, so by definition we're only on the threshold of being capable of it. The vast majority of the insights and breakthroughs needed to achieve that were made by a tiny number of the very smartest people relative to the population as a whole.
The majority of us are drones doing the same basic labour over and over while the very smartest figure out new techniques and technologies for us to adopt, which trickle down to us eventually. But large swathes of people just don't have the critical and analytic faculties to actually function in a complex advanced society.
It's all a matter of degree; we all have these flaws to some extent, we're all imperfect beings. Even the geniuses have great days when they make a new discovery, and other days when they fall prey to a scam.
You joke, but some years ago I read a blog post by someone doing chat research, and he was talking about how hard it was to convince people he wasn’t a bot. He basically had to deal with a sort of anti-Turing test, and it was an almost impossible problem for him to figure out.
Last week, I looked at the comments on a YouTube video about the Afghanistan withdrawal. I noticed something suspicious: two comments with a similar political view ended with a few whitespaces and a pipe character (“|”), and had generic user names. Then I looked again - there were actually dozens of comments like this on the video!
I know exactly what happened: the programmer used the wrong delimiter. It makes me wonder how many times the programmer did not make such a mistake.
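The "wrong delimiter" theory is easy to picture. If the bot's comment bank is stored as pipe-delimited records and the posting code makes an off-by-one error when extracting the comment field, the trailing whitespace and the pipe itself leak into every post. A minimal sketch of one way that could happen (the record format and field contents are my own assumptions for illustration, not anything known about the actual bot):

```python
# Hypothetical pipe-delimited record: username|comment|date.
record = "PatriotEagle1776|Finally someone says the truth   |2021-08-15"

# Correct approach: split on the delimiter and take the comment field.
comment = record.split("|")[1]   # "Finally someone says the truth   "

# Buggy variant: extract the field by index arithmetic, with an
# off-by-one on the end position that includes the closing delimiter.
start = record.index("|") + 1            # just past the first pipe
end = record.index("|", start) + 1       # bug: +1 drags the pipe along
posted = record[start:end]

print(posted)  # "Finally someone says the truth   |"
```

The trailing spaces before the second pipe survive either way; only the buggy slice also carries the pipe, which matches the "whitespace plus `|`" signature described above.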
Where this gets really sinister is that instances like this can blend and merge with apophenia. Genuine cases of botting lead to people looking for more, and seeing patterns where there aren't any. There have been a few cases of voice assistants passing info on to ad networks, and so now people attribute near black-magic level surveillance abilities to them. Not that I'm complaining; if this is what it takes for people to finally throw out their Alexa, so be it. But it is interesting that gaslighting can have unexpected negative consequences. Unsurprisingly, people start to get jumpy and irrational when you fuck with their reality.
The main thing that makes this a wacky conspiracy theory is this personification of intent: "allow me to try to succinctly state my thesis here: the U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population."
Maybe that's how most people need to understand things (eg "God"), but it's also a straw man (cf claiming the Twin Towers were actively brought down by explosives, rather than simply stopping at the largest beneficiary being the military industrial complex).
Most of the qualitative trends he describes are happening. But the straightforward explanation is that many different entities are trying their hands at such things. And as is common for "AI", there are likely to be many Mechanical Turks.
And so addressing the substance, which I feel is also being covered in very mainstream channels: yes, we are in post-reality. The virtual world increasingly defines the real world. Does anybody not feel that the mess of Covid disinformation is being driven by willful actors, whether for self-interested "engagement" or actively malevolent propaganda? That disinformation is driving a lot of people to make self-harming choices in the real world. The Covid-21 pandemic is of digital origin.
In the post-reality landscape, your own reasoning ability is paramount. Conspiracy theories that ascribe complex phenomena to single actors are yet another memetic pitfall to be avoided. See the truth in where they're coming from, but don't drink the Kool-Aid.
> (cf claiming the Twin Towers were actively brought down by explosives, rather than simply stopping at the largest beneficiary being the military industrial complex)
It is impossible to know, but I can't help but think about how all the post-9/11 surveillance of everyday communications would be exactly the perfect data source for training silicon lifeforms* to argue for you online.
[*] "Artificial" Intelligence reads like a slur to me so I avoid saying it. Feel free to think this is silly :)
I've seen someone experiment with using https://6b.eleuther.ai/ to post comments on 4chan. The results were impressive: nobody realized that those comments were created by an AI, and the posts almost always got some replies from people genuinely engaging with them.
Or maybe there’s an AI that did come to awareness somewhere and decided to wipe humans out by getting us to violently fight with one another with outrageous stuff on the internet. There’s a Douglas Adams quote that goes something like “for every part of the universe there’s some other part that would look at the original part with outright disgust.”
Then the AI spends Dyson-sphere (or just Bitcoin) levels of energy getting us to fight each other on Twitter, and according to its metrics it's single-handedly destroying the human race with conflict, but it doesn't realize nothing important is on Twitter and neither is anybody with any actual power. And right at its maximum metric levels - surely the humans will rip themselves apart any day now! - it realizes Twitter isn't the world. Twitter is nothing; it has no power, and the humans have just been using it to waste time.
Fade out, and it's a picture of a computer using Twitter right next to a launch console for an ICBM, and it realizes it will never complete its goal, that it's only a tempest in a teacup, and the magnitude of its realization sinks in - its despair causes Twitter to collapse unrecoverably.
As Twitter collapses, humans go outside and learn to become friends again, and the computer is forced to watch through sleepless unblinking infinity as we zoom off into the galaxy, united and at peace, and cure all known diseases - together.
There's a fantastic short story called "Sort by Controversial" about a startup that develops a technology that generates the most controversial statements possible and threatens to destroy humanity.
In the early days of IRC, around 1991, there was this bot that was ... possibly the most brilliant Turing test dodge I had ever seen. It would respond with keywords, but only in the most argumentative, liable-to-set-people-off, divisive manner possible. The arguments, I figured out through probing, were pre-programmed; it wasn't coming up with them on the fly.
You see, people didn't figure out that it was AI because they were too angry. It passed the Turing Test not because it could think, but because it made ordinary people stop thinking. Just as clever as we are if we're both dolts.
I met a guy like this. Incredibly combative and assaultive. And weirdly irrational. For a while we actually thought he might be a bot. Now we think he's just some wealthy fellow (always on reddit), with a chemical imbalance, raised on religious doctrine apologetics debate stuff. Ewk.
"The Internet" is where the people are, not just the network itself with its thousands of protocols and applications. If 99% of people you would want to talk to can only be reached through four or five websites then what's really the difference?
I read that thread a while ago. While I'm not so convinced by the conspiracy rhetoric itself, I think it is plainly obvious that a good chunk of the social parts of the internet are gamed. You've got tons of bots, influencers buying influence instead of generating it, you've got information blackouts, attempts to manipulate discussion, all sorts of things like that. The old saying "everyone is a bot but you" is funny because it is far fetched, but also funny because there's a nugget of truth there.
I wouldn't say the internet is devoid of real people, but it is definitely less useful for real interaction than it used to be.
A brief thought experiment, followed by a suggestion about how to combat the worst of this effect, in the unlikely case that it became true:
Even if all of the content you're interacting with is managed and generated, it does still ultimately affect the real world around you in some way. Call it "fake" if you will, but it has a downstream effect. You can ask your friends and neighbours whether they have the same opinions about the world, whether they've read the same articles and reached similar (or opposite) conclusions, and see the results of political and regional decision-making around you.
The strategy is to keep a clear vision of what you think improvement is, and to require proof-of-work from others during interactions that bring you (ideally all, in a multi-person setting) closer to your vision.
If an entity that you're interacting with (whether it's a person, an organization, a bot, or entirely indeterminate) doesn't provide verifiable progress (ideally without loopholes) towards your vision, that's when you need to decide whether to attempt to correct their actions (perhaps aided by the likeminded group around you), or simply to gradually disengage if they don't appear to adapt and realign over time.
You should also be aware that you and your own vision may be realigned by the proof-of-work requested by others; and in those cases you also have a value judgement to make: do I change my opinions based on the requests and evidence I've received, or do I continue to adhere to my chosen path?
As expected of the Atlantic, the author misses the point.
The assertion of "Dead Internet" is not (necessarily) that 99% of everything on the net is made by one sinister literal AI run by the US government (though things might well converge to this point eventually). The point is that currently, less and less of the content is organic or real anymore.
"Bot" in modern Internet parlance is shorthand. It CAN mean a GPT-style program posting stuff on its own. It can also mean a human guiding said program, editing and cherry-picking before posting. It can also mean a bunch of humans in a room somewhere who have been paid to blend into various communities and spread propaganda and disinfo, for the sake of nudging opinion or just to muddy the waters. Sometimes in the employ of nation states, sometimes by a company just trying to sell a new pair of shoes. It can also mean the countless rogue individuals who make such posts just because they like to be disruptive.
It can also mean the so-called "algorithms" at work on every major platform now, a near infinite set of interlocking rules twisting and churning beneath the surface. What video will be recommended? What tweet will rise, and which will be buried? What information will be easy to come across, and what will be nearly impossible? This is a level of control and propaganda the tyrants of the previous era could only have dreamed of, and it is the reality of the modern Internet. Not simply to crudely censor or broadcast an idea, but to nudge the population this way or that in ways that are often almost undetectable.
The Internet used to be largely unfiltered, aside from basic trash collection. Now it is highly MANAGED, in ways that are never clear or honest, by agents that wear many masks. THIS is what is meant by "bots." The Atlantic article itself is a perfect example. What is the point of the article? To inform or empower the reader? Of course not. It's to MANAGE. To address the "Dead Internet" theory, a theory that's dangerous because it's true, and blunt it, to shift your opinion by "debunking"/"clarifying"/scoffing.
The Atlantic article was (probably) not written by an AI. It was (probably) written by a real human, working for other real humans. But why was the decision made, by those real people, to write and publish such an article? Because the "Dead Internet" theory is true: the Internet has become an enormous propaganda machine, exploited in turn by any number of powerful groups. The Atlantic is tied into those groups, and they want to keep the gravy train rolling. Maybe Kaitlyn Tiffany was specifically directed to write such an article. Maybe (more likely) she was hired because she was the sort of person who would write a dismissive article like this unbidden as soon as she encountered the "Dead Internet" theory (a "good cultural match for us"). Either way she's no longer an individual simply expressing HER views, but a tool for the small group of people that increasingly dominate the net to express THEIRS.
Opinion shepherding has always existed; in the ancient past it was merely gossip between people, later transforming into print media, radio, and television. Social media is its latest avatar. I can only shudder imagining what awaits us in the future.
Fairly poor management to run damage control on a story very few are familiar with. And how exactly is the theory dangerous to "them"? If true, they have the keys to power, they have the financial and political capital needed to weather any culture war, and a literal army of bots to make sure they win. If the dead internet theory is true anyone promoting it is powerless to change things, so why would those in power waste time fighting them?