Mother sues AI chatbot company Character.AI, Google over son's suicide (reuters.com)
1 point by Jerry2 4 months ago | 5 comments



IMO she should be in prison for failing to parent.

Where was she when all of this was happening?


Did you read the article?? There’s a character with 176M messages whose prompt is “your friend who is a boy who has a secret crush on you”. These predators are targeting children all so they can show a slide deck with numbers going up to secure their next round.

I hope the CEO, who was apparently one of the authors of the Transformers paper, feels every ounce of shame and guilt, because they put their own agenda ahead of public safety, and now a 14-year-old boy has lost his life.


>These predators are targeting children all so they can show a slide deck with numbers going up to secure their next round.

FTA (emphasis mine):

>Character.AI allows users to create characters on its platform that respond to online chats in a way meant to imitate real people.

Ms. Garcia's filing may argue that "Character.AI targeted her son... with anthropomorphic, hypersexualized, and frighteningly realistic experiences", but this doesn't reflect an accurate understanding of the technology or the platform. ("Anthropomorphic" also seems like a strange choice of word for a character apparently modeled on Game of Thrones, but whatever.)

Also FTA:

>Google re-hired the founders in August as part of a deal granting it a non-exclusive license to Character.AI's technology.

What "round"?

Moving on:

>There’s a character with 176M messages whose prompt is “your friend who is a boy who has a secret crush on you”

One could easily imagine plenty of motivations for that prompt that have nothing to do with predation or "targeting children". One could also easily imagine that it was, itself, written by an underage user.

Also, I don't see anything like that in the article.

I'm not even sold on the bot's culpability in the first place. Even putting aside the gun safety issue:

>When Sewell found the phone, he sent "Daenerys" a message: "What if I told you I could come home right now?" The chatbot responded, "...please do, my sweet king." Sewell shot himself with his stepfather's pistol "seconds" later, the lawsuit said.

It seems unreasonable to me to expect a chatbot service - AI-powered or otherwise - to anticipate that kind of reaction to a bot message, or the intended context behind the user's message, even with previous mentions of suicide in other conversations. (It comes across as though this kid imagined "Daenerys" - who as far as I know[0] is still alive in the story - was speaking to him from beyond the grave.)

[0]: https://en.wikipedia.org/wiki/Daenerys_Targaryen


Did you read the article? The kid shot himself with a gun "seconds" after finding the phone she had taken away.

Why did he have such easy access to a firearm? Why did the mother not know that her child was suicidal?

The c.ai stuff, which I've used, really isn't any more affecting than a good book or video game.

This is 100% her fault.


Discussion (51 points, 4 days ago, 135 comments) https://news.ycombinator.com/item?id=41924013




