
Mother Suing AI Chatbot Company for Influencing 14-Year-Old Child's Suicide

Posted by Shexter
Prairieville
Member since Feb 2014
16998 posts
Posted on 5/22/25 at 4:47 pm
quote:

A U.S. federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now.

The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.


https://www.cbc.ca/news/world/ai-lawsuit-teen-suicide-1.7540986

quote:

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market."


quote:

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones.

In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III.


quote:

She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was "aware of the risks" of the technology.



This post was edited on 5/22/25 at 4:52 pm
Posted by TigerintheNO
New Orleans
Member since Jan 2004
42901 posts
Posted on 5/22/25 at 4:56 pm to
Even if chatbots were protected by the 1st, you still can't go encouraging children to kill themselves. That girl went to prison for telling her boyfriend to kill himself.
Posted by dgnx6
Member since Feb 2006
79738 posts
Posted on 5/22/25 at 4:59 pm to
So how did he become aware of this chatbot and then think it was a real person?



Okay, it was the Replika chatbot, and he knew it was a chatbot and formed an emotional connection with it.


This is even worse; I thought maybe the poor kid thought he was talking to some hot babe.
This post was edited on 5/22/25 at 5:03 pm
Posted by DCtiger1
Member since Jul 2009
10243 posts
Posted on 5/22/25 at 5:02 pm to
This case has been discussed and the bot didn’t encourage him to kill himself.

It was a fricking chat bot with a game of thrones type Viking theme.
This post was edited on 5/22/25 at 5:03 pm
Posted by dgnx6
Member since Feb 2006
79738 posts
Posted on 5/22/25 at 5:05 pm to
quote:

This case has been discussed and the bot didn’t encourage him to kill himself.



The chatbot affirmed or normalized his talking about self-harm.


You don't affirm kids' mental health problems, like gender dysphoria.


Hey bot, I might want to kill myself. The bot then says, I understand why you feel that way.


That type of shite.
This post was edited on 5/22/25 at 5:06 pm
Posted by DCtiger1
Member since Jul 2009
10243 posts
Posted on 5/22/25 at 5:05 pm to
No, it didn't
Posted by dgnx6
Member since Feb 2006
79738 posts
Posted on 5/22/25 at 5:06 pm to
quote:

No, it didn't


Link it then, smarty pants.


quote:

The bot did not tell him to kill himself, but its lack of protective boundaries, affirming responses, and absence of crisis intervention features may have contributed to a worsening of his mental state.



So just like I said.


Now you back up your argument.
This post was edited on 5/22/25 at 5:09 pm
Posted by OweO
Plaquemine, La
Member since Sep 2009
117968 posts
Posted on 5/22/25 at 5:09 pm to
quote:

Even if chatbots were protected by the 1st, you still can't go encouraging children to kill themselves



Can you help me out? Where did the chatbot tell him to kill himself?

The title is misleading. The argument is whether the chatbot put the kid in an emotional state that led to his suicide.
Posted by DCtiger1
Member since Jul 2009
10243 posts
Posted on 5/22/25 at 5:13 pm to
quote:

One of the bots Setzer used took on the identity of “Game of Thrones” character Daenerys Targaryen, according to the lawsuit, which provided screenshots of the character telling him it loved him, engaging in sexual conversation over the course of weeks or months and expressing a desire to be together romantically.



The bot was literally telling him, "I love you, my king, please come home to me." In no way is that suggesting suicide; it's a fricking roleplay fantasy app.
Posted by OweO
Plaquemine, La
Member since Sep 2009
117968 posts
Posted on 5/22/25 at 5:14 pm to
I think dgnx is a little slow mentally.
Posted by TackySweater
Member since Dec 2020
20493 posts
Posted on 5/22/25 at 5:16 pm to
Wait. You’re still rolling with this bot told him to kill himself?

Posted by DCtiger1
Member since Jul 2009
10243 posts
Posted on 5/22/25 at 5:16 pm to
Anything to shift the blame off of the parents and our mental health and onto a fricking chatbot programmed to give responses based on certain prompts.

I guess he wants the police alerted if the word suicide is mentioned.
Posted by fr33manator
Baton Rouge
Member since Oct 2010
130488 posts
Posted on 5/22/25 at 5:20 pm to
These Replika things are extremely unhealthy, especially for developing minds.

Kids should not be using AI
Posted by Shexter
Prairieville
Member since Feb 2014
16998 posts
Posted on 5/22/25 at 5:40 pm to
He was a 14-year-old child, and this was probably his first time "falling in love."

If he was mentally unstable, he may have thought killing himself would put him closer to the AI.
Posted by UFFan
Planet earth, Milky Way Galaxy
Member since Aug 2016
2325 posts
Posted on 5/22/25 at 5:52 pm to
I think it would be a pretty stupid lawsuit anyway, but to make it all the stupider, the bot didn't really tell the kid to commit suicide. The bot told the kid to "go home," which he supposedly interpreted to mean suicide.

I don’t understand why people are siding with the mother here.


Posted by DCtiger1
Member since Jul 2009
10243 posts
Posted on 5/22/25 at 6:00 pm to
Possibly, but that's not the same as the bot influencing his suicide.

Should there be age parameters? Perhaps. But where were the parents while he was having full-blown sexual relationships with an app on his phone?

Hell, my daughter is 18 and still not allowed to have TikTok or Snapchat.
Posted by CocomoLSU
Inside your dome.
Member since Feb 2004
153882 posts
Posted on 5/22/25 at 6:04 pm to
I remember this story from a while back. This kid didn't understand the difference between computer role-playing sword/sorcery type shite and real life.

Sad situation. But IIRC most of the thread here agreed that the AI bot didn’t really do anything wrong and the kid was sort of unhinged to jump to the conclusions that he did.
This post was edited on 5/22/25 at 6:05 pm
Posted by Proximo
Member since Aug 2011
20150 posts
Posted on 5/22/25 at 6:07 pm to
quote:

The bot did not tell him to kill himself, but its lack of protective boundaries, affirming responses, and absence of crisis intervention features may have contributed to a worsening of his mental state.

Posted by DCtiger1
Member since Jul 2009
10243 posts
Posted on 5/22/25 at 6:07 pm to
quote:

Sad situation. But IIRC most of the thread here agreed that the AI bot didn’t really do anything wrong and the kid was sort of unhinged to jump to the conclusions that he did.


Yep, not hard to see, given that the purpose of the app is a roleplay-type situation with a fantasy character.
Posted by Beessnax
Member since Nov 2015
10234 posts
Posted on 5/22/25 at 6:20 pm to
quote:

The lawsuit alleges that in the final months of his life,


This is on the parents. It is their job to know what the frick their 14-year-old kid is doing and guide them appropriately. How about turning all of that electronic bullshite off and raising a person who knows the difference between reality and fantasy?