
Because, more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But LLMs, especially for non-technical users, leap over the uncanny valley and create conversational attachment.

The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing:

(a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)

(b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

(c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)

I'm sure any of us can mentally enumerate a lot of the resulting negative effects. As with social media, there's a temptation to replace important relational parts of life with engaging an LLM, since it always responds immediately with something that feels at least somewhat meaningful.

But in my opinion the worst effect is the temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- getting sucked deeply into some kind of spiritual engagement with it and destroying their real relationships as a result.

A parallel problem is that, much like people who know they're taking a placebo pill, even people who are fully aware of the impersonal underpinnings of LLMs can adopt a functional belief in some of (a)-(c) above, despite knowing better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.







> I've seen many therapists and [...] their capabilities were much worse

I don't doubt it. The steps to mental and personal wholeness can be surprisingly concrete and formulaic for most life issues - stop believing these lies & doing these types of things, start believing these truths & doing these other types of things, etc. But were you tempted to stick to an LLM instead of finding a better therapist or engaging with a friend? In my opinion, assuming the therapist or friend is competent, the relationship itself is the most valuable aspect of therapy. That relational context helps you honestly face where you really are now--never trust an LLM to do that--and learn and grow much more, especially if you're lacking meaningful, honest relationships elsewhere in your life. (And many people who already have healthy relationships can skip the therapy, read books/engage an LLM, and talk openly with their friends about how they're doing.)

Healthy relationships with other people are irreplaceable with regard to mental and personal wholeness.

> I think you just don't like that LLM can replace therapist and offer better advice

What I don't like is the potential loss of real relationship and the temptation to trust LLMs more than you should. Maybe that's not happening for you -- in that case, great. But don't forget LLMs have zero skin in the game, no emotions, and nothing to lose if they're wrong.

> Hate to break it to you, but "God" are just voices in your head.

Never heard that one before :) /s


> We ought to treat their output with significant detachment and objectivity, especially when it gives personal advice or offers support.

Eh, ChatGPT is inherently more trustworthy than average, if only because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego.

Does it care about you more than most people? Yes, simply by not being interested in hurting you, not needing anything from you, and being willing to not go away.


Unless you had a really bad upbringing, "caring" about you is not simply a matter of not hurting you, not needing anything from you, or not leaving you.

One of the important challenges of existence, IMHO, is the struggle to authentically connect to people... and to recover from rejection (by other people's rulers, which eventually shows you how to build your own ruler for yourself, since you are immeasurable!), which LLMs can now undermine, apparently.

Similar to how gaming (which I happen to enjoy, btw... at a distance) hijacks your need for achievement/accomplishment.

But, also like gaming, which can work alongside actual real-life achievement, it can work OK as an adjunct/enhancement to existing sources of human authenticity.


You've illustrated my point pretty well. I hope you're able to stay personally detached enough from ChatGPT to keep engaging in real-life relationships in the years to come.

It's not even the first time this week I've seen someone on HN apparently ready to give up human contact in favour of LLMs.


