Show HN: ThoughtCoach: Helping to improve mental health with AI (thoughtcoach.app)
18 points by mtharrison on April 9, 2023 | 17 comments



Do you have literally any experience with mental health? According to the domain registration, the site is registered in the UK, which means you'd have to qualify under these terms for a EuroPsy license (which are mirrored by the UK agencies as well): https://www.europsy.eu/national-requirements-uk. I'm going to guess you don't. You're exposing yourself to serious legal liability by deploying something like this. Not to mention your disclaimer saying, more or less, "we don't save anything but we send it to a company without any sort of business agreement, go check out what they do for yourself."

If I were you I'd take this down immediately before you get in trouble or, worse, someone gets seriously hurt.


I have taken it down, thanks for the advice. Definitely not worth taking risks. As someone who has struggled with mental health this is something that I built mainly for myself. I will keep it as a personal assistant.


Thanks for doing the right thing. If you're struggling (I do, and have, as well), definitely seek real professional help. I know it can be a real struggle in the UK especially to find a therapist, but the NHS tries its best and you can get started on that here: https://www.nhs.uk/service-search/mental-health/find-an-nhs-.... If you can't find someone through there, BetterHelp provides online therapy in the UK: https://www.betterhelp.com/advice/counseling/online-counseli....

Try not to put your mental health in the hands of a language model, you deserve real help.


It can be hard to find a therapist who you gel with. I've been through a few in my time. The real work is done by oneself ultimately. Be it with CBT homework or mindfulness between sessions. Obviously it's very individualised though.

David Burns (of CBT fame) is currently involved in an app which incorporates AI, so I'm interested to see how they've dealt with the sensitive issues.


I'm curious to see how it plays out with trained and ethically aware psychologists. However, just because they have a PhD or MD or DO doesn't mean they aren't charlatans, or that they're definitely good people.


Just to add to this: I see that you took it down for some valid reasons, but don't be dismayed. This is a cool idea; it's probably not best implemented as a product to improve mental health, but there's something here. If you find that this has been helping you personally, then keep working on it. You may find that what you end up building is not the product you originally envisioned, but rather something that solves a broader 'technical issue' around language models, perhaps in predicting emotion or understanding the sentiment behind the words.

Overall, I'm just saying that these personal projects often don't translate well to the public at first. But eventually you fine-tune it for yourself enough and find the true value in it. Then you figure out how to translate that to the public.


Thanks for this! It means a lot. It can be a little discouraging to hear criticisms when you've put a lot of thought and time into something that you hope can help people.

But I fully understand and appreciate the comments made by other posters.

I have put it back online with a disclaimer that must be acknowledged, and pointed out that this is a research app, not a substitute for therapy.

I hope that does enough to mitigate some of the risks, whilst allowing me to continue to work at making this a safer and more valuable resource. Whenever that may be, or not :)


That’s great to hear you found a positive path forward for the project. Keep it up!


As someone with mental health issues who also sees GPT-4 as a profound step towards (and perhaps even the first instance of) general AI:

No. No. No no no no no no no no. Soooo much no. You haven't thought this through enough, and while I want nothing more than to use my skills in technology to help all of those in pain, this is non-trivial and you have not done so here.

I hope that isn't too harsh. I just think it can't be overstated how important it is to get this sort of an idea right on the first try. There should be _no_ tolerance for hacker mentality/move-fast-and-break-things, because the things you are breaking are people.


I've taken it down. It was meant as a quick demo of how one might leverage GPT-4 in this area, but there's a risk that, by posting it here, people might think it's a serious product.

From your comment, however, it sounds like you don't rule out applying AI in this field entirely?


Not at all! In fact, after my own personal tinkering, I suspect GPT-4 is often better than the average doctor at diagnosing common mental and physical health ailments.

The obvious issue is that it can confidently output the wrong answer. This will be less of a problem though as model accuracy continues to improve.

Really, the bigger problems are the ones that doctors and therapists already have: patients are not always reliable narrators and may mis-describe symptoms, insert their own prejudices, and outright conceal important information. In particular, many patients may not even possess the reading and writing skills necessary to respond properly to even the most accurate of prompts.

This is now the muddy domain of "sorry, you just need a human available". There's too much risk online, too many edge cases to accommodate, and since the tech itself actually provides a _convenience_ over doctor visits, you could be causing more harm than you might think if people decide to ignore their doctors for the "cheaper" option. This is particularly relevant to me as an American, where health care is a mess and mental health care is far worse.

A better approach might be to assume that this type of tool is available for patients who are currently receiving some kind of treatment, whether that is a doctor's appointment or in-patient care at a behavioral health facility. There's probably a chance to reduce friction and maybe improve patient outcomes if such a tool could be provided as an early survey, for example. Or, as you have done here, as a way to teach coping skills that are highly individualized to each person.

Really though, yeah, I think a qualified professional should be in the room making sure things go well.


Thanks so much for your thoughts. I tried my app with a friend and it took a good amount of digging deeper to actually get to their root thoughts about their difficult event.

So a good level of self-awareness is required to be successful with such a tool.

Yes, a tool as an adjunct to regular therapy is a good opportunity. I actually discussed this with my own therapist and he thought it was a good idea; with the NHS being so overwhelmed here in the UK, having a first-line, resource-cheap option is appealing.

I suppose the thing to figure out is how this is best done in a safe and useful manner.


No problem! It's an area I'm recently quite interested in but don't really have the background necessary to approach it.

I am also quite biased against any system that reduces the accountability of the care provider - and so much of modern application development is about being able to scale essentially everything _but_ accountability.

I like the idea of "ThoughtCoach" because you are framing it as empowering the user to improve their own abilities to manage their mental health. In this regard, it's kind of in the genre of mindfulness meditation (and the various apps/integrations that have shown up to help people with that). You could go with a gamified option like Noom does.

This requires you to scope out a progression for the user to go through, one that is (at least hopefully) informed by modern psychology and proven to work statistically before being implemented. It also requires you to significantly narrow the LLM's possible outputs/modes at each level, which alleviates many issues with overconfident outputs and unreliable patients (you can even force the user to choose from a list of words rather than type anything at all; see the sketch below).
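
To make that concrete, here's a very rough Python sketch of the forced-choice, narrowed-prompt idea. The feeling words, exercise list, and prompt wording are all made up for illustration; none of this is from the actual app.

    # Rough sketch of constraining both the user's input and the model's
    # allowed output modes at a single "level" of the progression.

    # Hypothetical fixed vocabulary the user picks from instead of free text.
    FEELING_WORDS = ["anxious", "sad", "angry", "ashamed", "hopeless", "overwhelmed"]

    # Hypothetical fixed set of exercises the model is allowed to offer.
    ALLOWED_EXERCISES = {
        "evidence_check": "List the evidence for and against the thought.",
        "reframe": "Rewrite the thought in a more balanced way.",
        "self_compassion": "Ask what you would say to a friend who had this thought.",
    }

    def build_prompt(feeling: str, exercise_key: str) -> str:
        """Validate the forced choices and build a narrowly scoped prompt."""
        if feeling not in FEELING_WORDS:
            raise ValueError(f"{feeling!r} is not one of the allowed feeling words")
        if exercise_key not in ALLOWED_EXERCISES:
            raise ValueError(f"{exercise_key!r} is not an allowed exercise")
        return (
            "You are a CBT-style coaching assistant. The user says they feel "
            f"{feeling!r}. Offer only the following exercise, in two or three "
            f"sentences, and nothing else: {ALLOWED_EXERCISES[exercise_key]} "
            "Do not diagnose, give medical advice, or discuss self-harm; if those "
            "topics come up, direct the user to a qualified professional."
        )

    # The UI would pass the user's button selections straight through:
    print(build_prompt("anxious", "reframe"))

Because the user only ever taps buttons, there's no free-text input to go off the rails on, and the prompt itself pins the model to one pre-vetted exercise per step.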

I haven't checked this space out enough yet, but it's rapidly evolving. A lot of ChatGPT-based start-ups are in this learning-with-a-smart-tutor-available regime. I think it could be very powerful, but a lot of clever engineering is still required. On the other hand, maybe the limits there prevent it from being nearly as useful as what you're imagining.

Food for thought.

edit: I guess my final thought is that, from personal experience, a good part of why therapy can be successful is that some amount of positive human interaction alone can be enough to improve one's general outlook. The context of "using an app" requires initiative and perseverance. The comfort of consistent human contact can inspire that, but apps just feel like "thumbing through your phone". Reducing screen time in general is good for mental health. Thus, the app shouldn't really be gamified, because the only targets you can set in an app are LLM-based and will require the user to type or tap. Anything other than that is just what's already out there: Apple's Screen Time, step tracking, calorie counters, mindfulness guiding. Adding an LLM doesn't obviously help there.


What’s the plan if someone were driven to an adverse psychiatric episode by this app? How do you plan on handling liability?


How do you prevent cases where the AI might say something harmful (e.g. suggesting the user kill themselves)?


Can you subtly integrate this into a game?


Would you mind elaborating a bit?




