Diagnosing Mental Health Disorders Through AI Facial Expression Evaluation (unite.ai)
39 points by Hard_Space on Aug 3, 2022 | 35 comments



I'm a psychiatrist, and (like most people) I'm both intrigued and terrified about what AI will bring.

I'm not terrified about losing my job (we're spread so thin that I don't think it will be an issue for competent psychiatrists in my lifetime), but I'm terrified that the mentally ill are marginalized and that a crappy but cost-effective approach with a high margin of error may rise to prominence.

The diagnostic categories, as mentioned by just_steve, are indeed imprecise, and the RDoC approach (https://www.nimh.nih.gov/research/research-funded-by-nimh/rd...) would be a much better fit for this type of research.

Lastly, here is my dream for a good use of AI in psychiatry: With so much psychiatric care happening remotely, I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity, etc., that I could use to supplement the information available via video link. It would be nice to have some additional data to make up for the trade-off of not being in the room. Perhaps one day, with some kind of AR, it might even be able to happen in the same room. It would be hard to make it not distracting, but done well it could add a lot. And if the system followed the patient when they moved, it would be far more useful, since it would be trained on their own personal variations. It would be really nice to catch the early onset of episodes of mania or psychosis through subtle changes; many of the public-sector patients I work with bounce around too often for someone to get to know them well enough to catch those subtle changes from baseline.


> a crappy but cost-effective approach with a high margin of error may rise to prominence.

What, like antidepressants?


Ba-zing. I was thinking the same thing. It's incredible how much of a hammer mental health practitioners think they have, and borderline criminal how many patients have been turned into nails.


>but I'm terrified that the mentally ill are marginalized and that a crappy but cost-effective approach with a high margin of error may rise to prominence.

I understand the fear, but is there any value to be gained by vastly expanding access to mental health diagnostics, even flawed ones?

Not being cheeky, genuinely interested. One hears about a mental health crisis in the US quite often, so could this actually be a net positive tool?


Personally, I think acknowledging that mental health is a real thing that sometimes goes awry and requires real treatment is still something more people need to be aware of and more accepting of.

I think we've made progress in addressing postpartum depression and first responder PTSD. There remains a lot of stigma and a lack of awareness about mental health, mental injury, and moral injury, which in North American society lead to substance and screen addictions as well as anti-social behaviours.

So diagnostics would be helpful, but treatment and societal perspective need to improve.


There's no small irony in the name of the company involved, "Unite AI". Humans are predatory animals in packs, and make no mistake: this tech will be used by humans to hunt weaker humans.


What do you think of tools like this one? https://www.aifredhealth.com/


I can't really tell from the website, but maybe? The site seems focused on depression, and at least in my practice I'm not sure that would add much for me.


> a crappy but cost-effective approach with a high margin of error may rise to prominence.

The tech startup that eventually builds it will call this "efficiency". This type of 'solution' is exactly what capitalism creates.

> I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity

> It would be hard to make it not distracting

Not only would it be distracting, it could also bias you in unexpected ways.

> I would love to have a real-time dashboard/HUD with measures of disorganized speech patterns or affective intensity,

I already don't trust a lot of the mental health industry because of the very bad experiences[1] I've had in the past. The easiest/fastest way to guarantee I never visit a psychiatrist again is to start using that kind of "AI" tech without first showing me the source code. "Magic" hidden algorithms are already a problem in other medical situations like pacemakers[2] and CPAP[3] devices.

> make up for the trade-off of not being in the room

Maybe what you need isn't some sort of "AI" or other tech buzzword. It sounds like you need better communication technology that doesn't lose as much information.

--

On the more general topic of "AI in psychiatry", I strongly encourage you to play the visual novel Eliza[4] by Zachtronics. It's about your fear of a cheap, high error rate system with an additional twist: the same system also optimizes your role into a "gig economy" job.

[1] a brief description of one of those experiences: https://news.ycombinator.com/item?id=26035775

[2] https://www.youtube.com/watch?v=k2FNqXhr4c8

[3] https://www.vice.com/en/article/xwjd4w/im-possibly-alive-bec...

[4] https://www.zachtronics.com/eliza/


"Opto Electric Phrenology" (OEP) would be a better name.

"Depression" and "Schizophrenia" as diagnostic categories are fraught, as they include many subtypes and, in the latter case, may lump together multiple distinct conditions.

At best, this "AI diagnosis" research can claim that the "diagnosis" the system produced matches the "diagnosis" a panel of human psychiatrists reached for the same subject. Like most AI research, the training set and the biases and assumptions it contains are the real challenge.

AI Ethics are not yet standard practice in most institutional settings, and it shows.


> Like most AI research, the training set and the biases and assumptions it contains are the real challenge.

That seems like an insurmountable problem. AI obfuscates those assumptions and risks deeply ingraining them and causing stagnation. If there isn't some kind of distributed human mechanism actively involved not just in generating data, but also in deciding what is relevant and how it is modeled, many of these AI applications will be actively worse than human-centered systems, which can address and evolve the model and data collection on the fly as those biases and assumptions surface.

AI seems like something that should only be used when the success criteria are super clear/close to incontrovertible.


This paper is a dog's breakfast. Even the header isn't properly proofread (it reads "JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015"). Figure 5 shows the results of an experiment where people were asked to rate the mean faces, which seems irrelevant to the rest of the work.

There could be some interesting work in this area, for example to support or reject the popular belief in Asia that visible sclera above or below the iris is indicative of mental health, but this paper isn't it.


I agree with a few fellow posters that this looks a bit like soothsaying 2.0, but when it comes to the abuse potential it really doesn't matter whether it works or not; it's enough that the people in charge believe it does. Ad targeting. Hiring. Dating.

My psychotherapist partner will like it: finally, something in that field that might be worse than the remaining Jungians.


Just what we need: machine-learning phrenology handing people diagnoses that affect them for life, based on largely arbitrary and porous categories.


On top of other commenters' (correct) observations about phrenology, I'll point out: the "disordered" face on the right is coded as feminine: higher cheekbones, thinner and raised eyebrows, &c.

Given that women are diagnosed with depression at nearly twice the rate that men are, there's probably a fundamental data error here. Which is no surprise, given that it's all bunk science.


Phrenology 2.0


Exactly what I was thinking. Like, Jesus Christ: diagnosing mental illnesses is already a very shaky thing just due to their nature, and now we think we can do it by looking at a person's face?


Exactly - this kind of research is repugnant and hopefully will be seen the same way phrenology is seen.


Yup. My exact thought upon seeing the headline. Reading the article didn't really move me away from this reaction either; it still feels like a whole lot of bunk.


Zachtronics' Eliza (https://www.gog.com/game/eliza) should be mandatory reading material before one chooses to use AI to deal with mental health issues.


Great; phrenology except the method is an inscrutably complex AI/ML algorithm which will end up hidden behind IP and trade-secret barriers, so it's harder to call bullshit on.

And the people in control of the training corpus get to decide who to arbitrarily add into which disorder category, either on purpose or through incompetence.

What could go wrong?!


The pictures they offer look like someone who's feeling well, and someone who's feeling bad. Would this model classify people with chronic illnesses as mentally ill, because they're feeling bad? What about people who had just dropped their phone and cracked the screen?


While many people have raised negative applications and uses of this technology, your question implies that the test is performed without any other context or input from the person, and that no follow-up questions are asked.

So, yes, it can be used to provide inaccurate classifications. The application of the system will determine the (mis)use of the classification.

Not enough therapists (currently a problem)....use the app!

Emergency Room full (currently a problem)....use the app!

Applying for insurance....use the app and send in your score...


Right. And we can also tell whether they're gay, whether they're criminals, and whether they will buy Pepsi from our million-dollar ad campaign.

Just kidding. A million dollars is way too little to spend on Pepsi ads.


Oh, so now companies could 'diagnose' potential interview candidates' résumé photos with AI, rejecting everyone who scores badly. Great dystopian future.


Hard not to remember that AI parole-officer scene from the Elysium movie [1].

The AI was 'detecting' several emotional states in real time, most notably sarcasm.

At the time of the movie's release this looked like a pretty spot-on take on the state of AI. Perhaps now it's more capable and sensitive/sensible.

[1]: https://m.youtube.com/watch?v=flLoSxd2nNY


According to multiple historical sources, this is the basis of the Voight-Kampff test.


Further taking out the humanity. People don't need a heartless computer diagnosing them and then giving them meds. People need people. People need to connect, especially those with mental health challenges.


As a schizophrenic, I'm super interested in obtaining the source code and the dataset.


In a world run by utter psychopaths who print money out of thin air for all those grants awarded to AI, data mining, and surveillance R&D, there is no doubt those areas are inevitably going to be weaponized and used against the worst enemy of the state: the individual.

This is Punitive Psychiatry 2.0


Humans are much more complicated than that. The article is pure shitposting; ignore it for your sanity. How this can stay on the front page of Hacker News is unbelievable.

THE WHOLE HN HOMEPAGE IS SHITPOSTING. WHAT'S HAPPENING?


Quick, are your eyebrows arched rn?


Wut


Imagine this thing in your wearable AR glasses..


"Disorders"?!

What's this? "Person of color" instead of "black" or N*: that's all cool and agreeable, but then how on earth is "disorder" still okay? Bull.



