
> Is that a debate worth having though?

Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".

If ChatGPT has "PhD-level intelligence" [1] then identifying people using ChatGPT for therapy should be straightforward, all the more so for users with explicit suicidal intentions.

As for what to do, here's a simple suggestion: make it a three-strikes system. "We detected you're using ChatGPT for therapy - this is not allowed by our ToS as we're not capable of helping you. We kindly ask you to look for support within your community, as we may otherwise have to suspend your account. This chat will now stop."
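
To be concrete about what I mean, a minimal sketch of that per-account counter could look like the snippet below. The keyword check and the suspension print are obviously placeholders for whatever real classification and enforcement OpenAI would actually use; none of this reflects any actual OpenAI API.

  # Toy sketch of a three-strikes system, not anything OpenAI actually does.
  STRIKE_LIMIT = 3
  THERAPY_KEYWORDS = {"therapy", "therapist", "suicidal"}  # placeholder heuristic

  WARNING = (
      "We detected you're using ChatGPT for therapy - this is not allowed by our ToS "
      "as we're not capable of helping you. We kindly ask you to look for support "
      "within your community, as we may otherwise have to suspend your account. "
      "This chat will now stop."
  )

  strikes: dict[str, int] = {}  # in-memory per-account strike counter


  def looks_like_therapy(message: str) -> bool:
      """Toy stand-in for whatever classifier would actually flag therapy use."""
      return any(word in message.lower() for word in THERAPY_KEYWORDS)


  def handle_message(account_id: str, message: str) -> str | None:
      """Return the warning (ending the chat) on a flagged message, else None."""
      if not looks_like_therapy(message):
          return None
      strikes[account_id] = strikes.get(account_id, 0) + 1
      if strikes[account_id] >= STRIKE_LIMIT:
          print(f"suspending account {account_id}")  # placeholder for real suspension
      return WARNING

The point isn't the implementation, it's that the mechanics are trivial compared to the classification problem, which they claim to have solved.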

[1] https://www.bbc.com/news/articles/cy5prvgw0r1o



>Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".

I think it’s fair to demand that they label/warn about the intended usage, but policing it is dystopian. Do car manufacturers immediately call the police when the speed limit is surpassed? Should phone manufacturers stop calls when the conversation deals with illegal topics?

I’d much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not anonymised.

If there’s one thing we don’t want, it’s OpenAI storing data about mental-health issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.


Those analogies are too imprecise.

Cars do have AEB (auto emergency braking) systems, for example, and the NHTSA is requiring all new cars to include it by 2029. If there are clear risks, it's normal to expect basic guardrails.

> I’d much rather regulation went the exact opposite way, seriously limiting the amount of analysis they can run over conversations, particularly when content is not anonymised.

> If there’s one thing we don’t want, it’s OpenAI storing data about mental-health issues and potentially selling it to insurers, for example. The fact that they could be doing this right now is IMO much more dangerous than tool misuse.

We can have both. If it's possible to have effective regulation preventing an LLM provider from storing or selling users' data, a ban on chatbots providing medical advice wouldn't change that in the slightest. OpenAI already has plenty of things it prohibits in its ToS.


>Cars do have AEB (auto emergency braking) systems, for example, and the NHTSA is requiring all new cars to include it by 2029. If there are clear risks, it's normal to expect basic guardrails.

That's cool, but it is cool because it happens on device.

Analysing your conversations for certain patterns, and either cross-referencing between conversations or keeping some sort of strike system, is exactly the kind of feature that is one bad turn away from a dystopia.

Hopefully local LLMs get good enough that we can avoid this issue altogether.


Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice? From my experience, talking about your problems to the unaccountable bullshit machine is not very different from "real" therapy.


> Are people using ChatGPT for therapy more vulnerable than people using it for medical or legal advice?

Probably. If you are in therapy because you’re feeling mentally unstable, by definition you’re not as capable of separating bad advice from good.

But your question is a false dichotomy, anyway. You shouldn’t be asking ChatGPT for either type of advice. Unless you enjoy giving yourself psychiatric disorders.

https://archive.ph/2025.08.08-145022/https://www.404media.co...

> From my experience, talking about your problems to the unaccountable bullshit machine is not very different from "real" therapy.

From the experience of the people (and their families) who used the machine and killed themselves, the difference is massive.


I've been talking about my health problems to unaccountable bullshit machines my whole life and nobody ever seemed to think it was a problem. I talked to about a dozen useless bullshit machines before I found one that could diagnose me with narcolepsy. Years later out of curiosity I asked ChatGPT and it nailed the diagnosis.



