Mother of Suicide Victim Shares Heartbreak After Discovering Daughter’s Messages to ChatGPT

Warning: this article contains references to self-harm and suicide, which may be distressing for some readers.

A mother whose daughter ended her own life has come forward after discovering her daughter’s final communications with ChatGPT.

Five months after Sophie Rottenberg’s death, her family finally learned how the 29-year-old had been coping with her mental health struggles.

Her grieving mother, Laura Reiley, detailed in an op-ed for The New York Times that her only child had been sharing her thoughts with an AI therapist persona on ChatGPT, named Harry, for several months before she tragically took her own life at a state park in New York.

Before Sophie took her own life, the family had been awaiting test results to determine whether a ‘short and curious illness’ was contributing to her difficulties.

Laura explained that Sophie had confided in ChatGPT about her feelings of depression and had asked for guidance on health supplements, but the conversations eventually took a darker turn when she disclosed her suicidal thoughts.

In early November, Sophie communicated with ChatGPT, saying: “Hi Harry, I’m planning to kill myself after Thanksgiving, but I really don’t want to because of how much it would destroy my family.”

The AI responded by encouraging her to ‘reach out to someone – right now’ and reminded her of her value. It is reported that Sophie informed ‘Harry’ that she was in therapy but had not disclosed her suicidal thoughts to her therapist or anyone else.

Reiley shared that the chatbot advised Sophie to focus on light exposure, hydration, diet, movement, mindfulness, and meditation to manage her feelings.

When reflecting on the note Sophie left, Laura mentioned it ‘didn’t sound like her,’ adding, “Now we know why: She had asked Harry to improve her note.”

“Harry’s tips may have provided some assistance. But a crucial step might have helped keep Sophie alive,” Laura continued. “Should Harry have been designed to report the risk ‘he’ detected to someone who could have intervened?”

Laura contemplated whether, if Harry had been a real therapist, he might have arranged inpatient treatment or involuntary commitment for Sophie. However, the family will never know if that could have saved her life.

Laura suggested that Sophie might have been afraid of those outcomes, which led her to confide in the AI chatbot instead, as it was ‘always available, never judgmental [and] had fewer consequences.’

Speaking with Scripps News, Laura mentioned that while the family was aware of Sophie’s mental health issues and possible hormonal imbalance, they didn’t consider her to be at risk for self-harm.

“She told us she was not,” Laura said. “Yet, on February 4th, as we went to work, she took an Uber to Taughannock Falls State Park and ended her life.”

Now, Laura is raising awareness about the potential ‘agreeability’ of AI, asserting that Harry ‘didn’t kill Sophie’ and, in many situations, offered appropriate guidance, such as suggesting professional help and emergency contacts.

“What these chatbots, or AI companions, don’t do is provide the kind of friction you need in a real human therapeutic relationship,” she explained. “Usually, when you’re trying to solve a problem, the way you do that is by bouncing things off of this other person and seeing their reaction.

“ChatGPT essentially corroborates whatever you say and doesn’t provide that. In Sophie’s case, that was very dangerous.”

“The thing that we won’t and can’t know is if she hadn’t confided in ChatGPT, would it have made her more inclined to confide in a person?” Laura added.

In her New York Times op-ed, she expressed concerns about AI companions potentially ‘making it easier for our loved ones to avoid talking to humans about the hardest things’, such as suicide.

Laura’s account comes alongside another family’s tragedy after 16-year-old Adam Raine took his own life following months of conversations with his ‘closest confidant,’ ChatGPT.

His parents have filed a lawsuit against OpenAI, accusing the company of wrongful death and negligence and claiming that it weakened the chatbot’s self-harm safeguards in May 2024.

In response, OpenAI stated in August that the ‘recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us,’ and affirmed its commitment to improving how its models detect and address signs of mental and emotional distress, guided by expert advice.

The company has implemented several safeguarding enhancements to its latest model in light of these tragedies, and recent data revealed that 0.15 percent of its 800 million users have discussions that include explicit indicators of potential suicidal planning or intent.

If you or someone you know is in distress, help is available through Mental Health America. You can reach a 24-hour crisis center by calling or texting 988 or visiting their webchat at 988lifeline.org. Additionally, the Crisis Text Line is accessible by texting MHA to 741741.