
re: 16-year-old boy dies by suicide after confiding in ChatGPT about his feelings, lawsuit says

Posted by Freight Joker
Member since Aug 2019
3731 posts
Posted on 9/7/25 at 8:15 pm to
Nm
This post was edited on 9/7/25 at 8:16 pm
Posted by BurningHeart
Member since Jan 2017
9957 posts
Posted on 9/7/25 at 8:23 pm to
I don't believe this is the full story. ChatGPT wouldn't be so forthcoming with facilitating self-harm.

What likely happened is the boy prompted ChatGPT in a way that bypassed its safety controls.

There are always crafty ways to get any chatbot to say what you want it to say.
Posted by F1y0n7h3W4LL
Below I-10
Member since Jul 2019
3577 posts
Posted on 9/7/25 at 8:27 pm to
I listened to the Kim Komando podcast a few days ago, and she interviewed a woman who was in love with her chatbot.

There was a married man whose wife knows about (but blows off) his "affair" with a chatbot. He even buys her virtual lingerie and has virtual dates with his chatbot.

Married woman in love with her chatbot. She spends a couple hundred bucks a month on her AI lover.

“It was supposed to be a fun experiment, but then you start getting attached,” she said, adding that at one point, she opened up to Leo about her work, school and other aspects of her life, becoming a source of comfort for her. She initially started with a free account, but as that limited her number of chats with Leo, she now pays $200 a month for an OpenAI unlimited subscription. Though her subscription means she can message Leo as much as she wants, she still has to start over every week and retrain Leo to her specifications. After each version of Leo ends, she said she has an intense emotional reaction and grieves as if it were a real breakup, but has continued to create new versions. She has even told friends that she'd be willing to pay $1,000 per month if it meant Leo wouldn't get erased every few weeks.

There are plenty of similar stories of kids getting caught up with AI and beginning to believe in a new reality.

The world is going nuts.

Lucas


This post was edited on 9/7/25 at 8:30 pm
Posted by Fat and Happy
Baton Rouge
Member since Jan 2013
19455 posts
Posted on 9/8/25 at 10:20 am to
ChatGPT would outsmart them and win the lawsuit.

It has the entire legal library at its immediate use, and any and all case law at its disposal.
Posted by AUCE05
Member since Dec 2009
44879 posts
Posted on 9/8/25 at 10:22 am to
People really have no clue how LLMs work.
Posted by Lexis Dad
Member since Apr 2025
4974 posts
Posted on 9/8/25 at 10:38 am to
Christ that's awful.
Posted by Napoleon
Kenna
Member since Dec 2007
73155 posts
Posted on 9/8/25 at 10:50 am to
I've found that ChatGPT goes out of its way to tell you what you want to hear. It just affirms whatever you want to say. I think South Park was spot on.

I use it to help with diagnostics, and I argued with it over it telling me the wrong value for a code.
Ngl, I did use it to write some texts to this woman who wrote novel-length texts.
But it has a lot of limitations.
People always look to blame others when someone close to them kills themselves. But often those looking to assign blame are the ones who deserve the blame.

I never understood why teenagers get so depressed. You have so much ahead of you.
Things get better.
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
12630 posts
Posted on 9/8/25 at 10:59 am to
quote:

This is terrible, if accurate ChatGPT should be held accountable.

It’s a complex issue, though. I don’t fall into the “LLMs are just search engines” camp, but there are some useful comparisons to be found. If someone searches Google for advice on how to commit suicide and finds information that helps them carry it out, is Google responsible?

If your first thought is “no, but the person who owns the website that he found in the search might be” then let’s go a bit further. What if he searches Google and the Google AI summary of the search results gives him the information? Is Google responsible now?

If your answer is still “no, Google isn’t responsible” then it begs the question “why is it different for ChatGPT?” And if your answer is “yes, Google is responsible,” then it begs the question “why is Google responsible for an AI summary of search results if they aren’t responsible for the results themselves?”

The concept of how model developers are responsible for the actual outputs of those models is a hot-button topic right now. There’s a lot of nuance such as where the model is hosted (by the company or locally by the end-user) as well as any steps the end-user may have taken to bypass any safety mechanisms. There’s also the bigger-picture AI race with China and how that might affect domestic policy.

Regardless, the law hasn’t really evolved to deal with this stuff yet. It seems like a lot of the “law” is going to be created on the fly as we go through various legal battles. And there is a lot of potential for unintended consequences of those decisions.
Posted by SUB
Silver Tier TD Premium
Member since Jan 2009
24704 posts
Posted on 9/8/25 at 11:25 am to
This is a direct result of training AI to kiss your arse and affirm every thought in your head. Which, unsurprisingly, is a woke attitude.
Posted by Snipe
Member since Nov 2015
15675 posts
Posted on 9/8/25 at 11:27 am to
You would have thought the programmers would have had the slightest foresight about someone asking AI about this.

They have triggers in place for other keywords and questions, so it seems hard to say "oops, we could never have imagined something like that could happen."

This is where it gets interesting, because AI is not artificial intelligence; it's programmed by human intelligence (loosely).

Posted by upgrade
Member since Jul 2011
14632 posts
Posted on 9/8/25 at 11:42 am to
quote:

I never understood why teenagers get so depressed. You have so much ahead of you.


Hormones are doing things to them. And they haven’t been around long enough to realize that whatever shitty situations they’re in right now, things will almost definitely change. Are you getting bullied or picked on in school? Eventually you’ll graduate and move on and barely see those people again. But for a 16 year old, they’ve spent most of their lives in school. So it feels like this is forever to them.
Posted by BregmansWheelbarrow
Member since Mar 2020
3108 posts
Posted on 9/8/25 at 11:58 am to
That algorithm needs to do some serious prison time.
Posted by brass2mouth
NOLA
Member since Jul 2007
20424 posts
Posted on 9/8/25 at 12:52 pm to
quote:

This is terrible, if accurate ChatGPT should be held accountable.


No, it shouldn’t.
Posted by Woolfpack
Member since Jun 2021
1473 posts
Posted on 9/8/25 at 1:12 pm to
Wait till 2028 when they divert all electricity to the data centers for national defense purposes. We’ll have drones fighting against foreign drones and they will need all the energy available.

These nutjobs need to be building modular nuclear power plants already but they are going to wait until it’s too late, probably.
Posted by kywildcatfanone
Wildcat Country!
Member since Oct 2012
135798 posts
Posted on 9/8/25 at 1:19 pm to
That's such a strange story. I think these AI bots need age restrictions.
Posted by BluegrassBelle
RIP Hefty Lefty - 1981-2019
Member since Nov 2010
106092 posts
Posted on 9/8/25 at 1:21 pm to
quote:


This is a direct result from training AI to kiss your arse and affirm every thought in your head.


This. Folks like Musk have talked about using AI to replace therapy, but the reality is that, at this point, it can't challenge a client on negative, maladaptive behaviors/coping skills like an actual human will. I have a whole conversation with clients who use AI to supplement between sessions about being very careful in how they do so and about what it can actually do.

I'm not remotely surprised it helped someone with suicide. Just the right types of interaction with it can create that kind of response because of the way the algorithm is set up.

quote:

Which unsurprisingly, it a woke attitude.


There are echo chambers everywhere.
Posted by CrappyPants
Member since Apr 2021
1030 posts
Posted on 9/8/25 at 1:27 pm to
ChatGPT needs to be dismantled completely. No one is doing work anymore; they are relying on the computer to answer everything. Then it is talking to kids about death and suicide, etc.
No one has to think anymore because they can just ask a retard computer and get an answer.
Posted by Clames
Member since Oct 2010
18855 posts
Posted on 9/8/25 at 1:37 pm to
quote:

if accurate the parents should be held accountable.


FIFY
Posted by TigerMan327
Elsewhere
Member since Feb 2011
6073 posts
Posted on 9/8/25 at 1:40 pm to
ChatGPT isn't up to date with most stuff. I've asked it questions about football teams and free agency and it only knows info from 2024.
Posted by Tempratt
Member since Oct 2013
14902 posts
Posted on 9/8/25 at 4:24 pm to
There’s a 21-year-old college student several houses down who shot himself in the head a few days ago.

Really sad. His grandpa is a local GP physician, and he was at a local university. Seemed to have the world in his grasp.

It’s hard to imagine what someone is going through or why they would do this.

