It may not be a concern now, but it comes down to how well they maintain critical thinking. Epistemic drift, when you have a system designed (or reinforced) to empathize with you, can create long-term effects that aren't noticeable in any single interaction.
I don't disagree that AI psychosis is real. I've met people who believed they were going to publish at NeurIPS because of the nonsense ChatGPT told them, people who believed the UI mockups Claude gave them were actually producing insights into its inner workings rather than just being blinking SVGs, and I even encountered someone at a startup event pitching an idea that I'm 100% sure is AI slop.
My point was just that the interaction I had with someone from r/myboyfriendisai wasn't one of those delusional ones.
For that I would take r/artificialsentience as a much better example. That place is absolutely nuts.
Not necessarily: transactional, impersonal directions to a machine to complete a task don't automatically imply, in my mind, the sorts of feedback loops necessary to induce AI psychosis.
All CASE tools, however, displace human skills, and all unused skills atrophy. I struggle to read code without syntax highlighting after decades of using it to replace my own ability to parse syntactic elements.
Perhaps the risk of a slow shift here is one of poor comprehension. Using LLMs for language comprehension tasks - summarising, producing boilerplate (text or code), and the like - shifts one's mindset, I think, towards avoiding such tasks, eventually eroding the skills needed to do them. Not something one would notice per interaction, but it might add up to a major change in behaviour.
I think this is true, but I don't feel like atrophied assembler skills are a detriment to software development; it's just that almost everyone has moved to a higher level of abstraction, leaving a small but prosperous niche for those willing to specialize in that particular bit of plumbing.
As LLM-style prose becomes the new Esperanto, we all transcend the language barriers (human and code) that unnecessarily limited collaboration between people and projects.
Won't you be able to understand a greater amount of code, and do something bigger than you would have if your time were going into comprehension and parsing?
I broadly agree, in the sense of providing the vision, direction, and design choices for the LLM to do a lot of the grunt work of implementation.
The comprehension problem isn't really so much about software per se, though it can apply there too. LLMs do not think; they compute statistically likely tokens from their training corpus and context window. So if I can't understand the thing any more, and I'm just asking the LLM to figure it out, produce a solution, and tell me I did a good job while I sit there doomscrolling, I'm adding zero value to the situation and may as well not even be there.
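To make "statistically likely tokens" concrete, here's a minimal sketch of the sampling loop at the heart of basically every decoder-style LLM. The vocabulary, logit values, and temperature below are made up for illustration; real models do this over tens of thousands of tokens, with logits produced by the network:

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Lower temperature sharpens the distribution; higher flattens it.
        scaled = [x / temperature for x in logits]
        # Softmax: convert raw scores into probabilities.
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one token index in proportion to its probability.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Toy example: four candidate next tokens with made-up scores.
    vocab = ["cat", "dog", "the", "runs"]
    logits = [2.1, 1.9, 0.3, -1.0]
    print(vocab[sample_next_token(logits)])

That's the whole trick, repeated once per token: score candidates, normalise, sample, append, go again. Whatever "understanding" there is lives in how the logits get computed, not in any deliberate reasoning step.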
If I lose the ability to comprehend a project, I lose the ability to contribute to it.
Is it harmful to me if I ask an LLM to explain a function whose workings are a bit opaque to me? Maybe not. It doesn't really feel harmful. But that's the parallel to the ChatGPT social thing: it doesn't really feel harmful in each small step; it's only harmful when you look back and realise you lost something important.
I think comprehension might just be that something important I don't want to lose.
I don't think, by the way, that LLM-style prose is the new Esperanto. Having one AI write some slop that another AI reads and coarsely translates back into something closer to the original prompt like some kind of telephone game feels like a step backwards in collaboration to me.
Accepting vibe-coded prompt-response answers from chatbots without understanding the underlying mechanisms strikes me as akin to accepting the advice of a chatbot therapist without thinking critically about the response.
Related: "Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it)" ( https://doi.org/10.31234/osf.io/cmy7n_v5 )