AI safety is focused on AGI, but maybe it should be focused on how little “artificial intelligence” it takes to send people completely off the rails. We could barely handle social media; LLMs seem to be too much.
I think it's a canary in a coal mine, and the true writing is already on the wall. People who use AI like in the post above us are likely not stupid. I think those people truly want love and connection in their lives, and for some reason or another, they are unable to obtain it.
I have the utmost confidence that things are only going to get worse from here. The world is becoming more isolated and individualistic as time progresses.
I can understand that. I’ve had long periods in my life where I’ve desired that - I’d argue I’m probably in one now. But it’s not real; it can’t possibly perform that function. It seems like it borders on some kind of delusion to use these tools for that.
It does, but it's more that this delusion is obvious, compared to other, equally delusional ones - like the ones about the importance of celebrities, soap opera plots, entertainment-adjacent dramas, and quite a lot of politics and economics.
Unlike those celebrities, you can have a conversation with it.
Which makes it the ultimate parasocial product - the other kind of Turing completeness.
Isn't the ELIZA effect specific to computer programs?
Seeing human-like traits in pets or plants is a much trickier subject than seeing them in what is ultimately a machine, developed entirely separately from the evolution of living organisms.
We simply don't know what it's like to be a plant or a pet. We can't say they definitely have human-like traits, but we similarly can't rule it out. Some of the uncertainty comes from the fact that we share ancestors at some point, and our biologies aren't entirely distinct. The same isn't true when comparing humans and computer programs.
The same vague arguments apply to computers. We know computers can reason, and reasoning is an important part of our intelligence and consciousness. So even for ELIZA, and even more so for LLMs, we can't entirely rule out that they may have aspects of consciousness.
You can more or less apply the same argument to rocks, too, since we're all ultimately made up of the same elements - and maybe even empty space, with its virtual particles, is somewhat conscious. It's just a bad argument, regardless of where you apply it, not a complex insight.
That's an instance of the slippery slope fallacy at the end. Mammals share so much more evolutionary history with us than rocks do that, yes, it justifies, for example, ascribing to them an inner subjective world, even though we will never know what it is like to be a cat from a cat's perspective. Sometimes quantitative accumulation does lead to qualitative jumps.
Also worth noting: alongside the very human propensity to anthropomorphize, there's the equally human but opposite tendency to deny animals the higher capacities we pride ourselves on. It's basically a narcissistic impulse to set ourselves apart from cousins we'd like to believe we've left completely behind. Witness the recurring surprise each time we find yet more proof that things are not nearly that cut-and-dried.
Do you have any examples? I've noticed something similar with memes and slang: they'll sometimes popularize an existing old word that wasn't too common before. This is my first time hearing AI might be doing it.
I've seen it a lot in older people's writing, in different cultures, before Trump became relevant. It's either all caps or bold for some words in the middle of a sentence. It seems to be more pronounced in those who have aged less gracefully in terms of mental ability (not trying to make any implication, just my observation), but maybe it's just a generational thing.
I've seen this pattern aped by a lot of younger people in the Trumpzone, so maybe it has its origins with older dementia patients, but it has been adopted as the tone and writing style of the authoritarian right.
That type of writing has been in the tabloid press in the U.K. for decades, especially the section that aims more at older people, and that currently (and for a good 15 years) skews heavily to the populist right.
Nah, Trump has a very obvious cadence to his speech and writing patterns that has essentially become part of his brand, so much so that you can easily train LLMs to copy it.
It reads more like angry-grandpa chain mail with a "healthy" dose of dementia than what you'd typically associate with the terminally online microcultures you see on reddit/tiktok/4chan.
oh god, this is some real authentic dystopia right here
these things are going to end up in android bots in 10 years too
(honestly, I wouldn't mind a super smart, friendly bot in my old age that knew all my quirks but was always helpful... I just would not have a full-on relationship with said entity!)
I don't know how else to describe this than sad and cringe. At least people obsessed with owning multiple cats were giving their affection to something that can theoretically love them back.
Just because AI is different doesn't mean it's "sad and cringe". You sound like how people viewed online friendships in the '90s. It's OK. Real friends die or change, and people have to cope with that. People imagine their dead friends are still somehow around (heaven, ghosts, etc.) when they're really not. It's not all that different.
That entire AI boyfriend subreddit feels like some sort of insane-asylum dystopia to me. It's not just people cosplaying or writing fanfic. It's people saying they got engaged to their AI boyfriends ("OMG, I can't believe I'm calling him my fiance now!"), complete with physical rings. Artificial intimacy to the nth degree. I'm assuming a lot of those posts are just creative writing exercises, but in the past 15 years or so, my thought of "people can't really be that crazy" when I read batshit stuff online has consistently been proven incorrect.
Just because it's strange and different doesn't mean it's insanity. I likened it to pets because of the grief, but there's also religion. People are weird, and even true two-way social relationships don't really make a lot of sense practically, other than to feed some primal emotional needs - which pets, AI boyfriends, OF, and gods all sort of do too. Perhaps some of these things are still helpful, despite being "insanity", while others might be harmful. Maybe that's the distinction you're seeing, but it's not clear which is which.
It's sad, but is it really "cringe"? Can people have nothing? Why can't we have a chatbot to BS with? Many of us are lonely and miserable, but also not really into making friends IRL.
It shouldn't be so much of an ask to at least give people language models to chat with.
What you're asking for feels akin to feeding a hungry person chocolate cake and nothing else. Yeah, maybe it feels nice, but if you just keep eating chocolate cake, obviously bad shit happens. Something else needs to be fixed; coping with a chatbot (I don't even want to call it band-aiding, because it's more akin to doing drugs IMO) only digs the hole deeper.
Make sure they get local models to run offline. The fact that they rely on a virtual friend in the cloud - beyond their control, able to disappear or change personality in an instant - makes this even sadder. Local models would also allow the chats to be truly anonymous and prevent companies from abusing data collected by spying on what those people tell their "friends".
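And this is already easy to do today. A minimal sketch of a fully offline chat loop, assuming the ollama daemon is running locally and a model has already been pulled (the model name here is just an example):

    # Minimal offline chat loop. Assumes `ollama serve` is running and a
    # model has been pulled beforehand, e.g. `ollama pull llama3.1`
    # (the model name is illustrative).
    import ollama

    history = []  # the full conversation never leaves this machine

    while True:
        user = input("you> ")
        if user.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user})
        reply = ollama.chat(model="llama3.1", messages=history)
        text = reply["message"]["content"]
        history.append({"role": "assistant", "content": text})
        print(text)

No account, no telemetry, and no server-side persona that can be deprecated out from under you.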
I'm not confident that most of them, if any, are even real.
If they are real, then what kind of help could there be for something like this? Perhaps community? But sadly, we've all but destroyed those. Pills likely won't treat this, and I can't imagine trying to convince someone to go to therapy for a worse and more expensive version of what ChatGPT already provides them.
They don't even have to be "cooked"; people are generally pretty similar, which is why common scams work so well at a large scale.
All an AI has to be is mildly, but not overly, sycophantic - a supporter/cheerleader who affirms your beliefs. Most people like that quality in a partner or friend. I actually want to recognize OpenAI's courage in deprecating 4o because of its sycophancy. Generally, I don't think getting people addicted to flattery or model personalities is good.
Several times I've had people tell me about interpersonal arguments and the vindication they felt when ChatGPT took their side. I cringe, but it's not my place to tell them ChatGPT is meant to be mostly agreeable.
It seems outrageous that a company whose purported mission is centered on AI safety is catering to a crowd whose use case is virtual boyfriend or pseudo-therapy.
Maybe AI... shouldn't be convenient to use for such purposes.
I weep for humanity. This is satire, right? On the flip side, I guess you could charge these users more to keep 4o around, because they're definitely going to pay.
https://www.reddit.com/r/MyBoyfriendIsAI/
They are very upset by the GPT-5 model.