Woebot is a project out of Stanford that teaches you how to recognize and correct unhealthy thinking. It is also an interesting example of a chatbot-based service. https://woebot.io/
I'm quite disturbed by efforts like this.
Perhaps because I cannot fathom how a chatbot could replace human compassion and emotion, the kind we convey through simple human cues like facial expressions.
A major part of the solution will be to get together and try to understand each other, not to delegate this to a bot that will spit out some random LSTM-generated "words".
Our bots can help solve some problems in our communication, and can help us communicate when we cannot physically do so. Hawking could perhaps testify to this.
They should not communicate in our place when we are able to do so ourselves.
For example, here is an article about Ellie, a virtual therapist developed at the University of Southern California and used in the treatment of PTSD. It notes some explicit advantages that at least complement human therapists:
“One advantage of using Ellie to gather behaviour evidences is that people seem to open up quite easily to Ellie, given that she is a computer and is not designed to judge the person”, Morency explains to news.com.au
...
Morency stresses she is not a substitute for a human therapist. Rather, she is used in tandem with a doctor as a data-gatherer, able to break down walls which may exist due to a patient’s unwillingness to disclose sensitive information to a human.
As Morency explains, “The behavioural indicators that Ellie identifies will be summarised to the doctor, who will integrate it as part of the treatment or therapy. Our vision is that Ellie will be a decision support tool that will help human doctors and clinicians during treatment and therapy.”