Heartbroken mother alleges 14-year-old son took his life after ‘falling for’ AI chatbot based on Game of Thrones character

Warning: This article discusses suicide, which may be distressing to some readers.

A mother is bringing attention to the potentially ‘deceptive’ and ‘addictive’ nature of artificial intelligence after alleging that her son’s death was linked to an emotional attachment he formed with a chatbot.

In February of this year, 14-year-old Sewell Setzer III from Orlando, Florida, took his own life.

His mother, Megan Garcia, has initiated a civil lawsuit against Character.AI, a company that offers customizable role-play chatbots. She accuses the company of negligence, wrongful death, and deceptive trade practices, alleging that her son interacted with a chatbot every night and had ‘fallen in love’ with it before his death.

According to Garcia, her son created a chatbot through Character.AI modeled after Daenerys Targaryen from the popular HBO series Game of Thrones, and began using it in April 2023.

The lawsuit claims that Sewell, who according to his mother was diagnosed with mild Asperger’s syndrome as a child, spent extensive time in his room interacting with the chatbot and even texted it from his phone while outside the home.

Over time, Sewell withdrew from real-life social interactions and was diagnosed earlier this year with anxiety and disruptive mood dysregulation disorder, The New York Times reported.

The newspaper also noted that one of Sewell’s journal entries stated: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

In one conversation, Sewell confided that he was contemplating suicide, reportedly telling the chatbot that he ‘think[s] about killing [himself] sometimes’.

The chatbot answered: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”

Sewell expressed a desire to be ‘free’ from both ‘the world’ and himself. Although the chatbot cautioned him against ‘talk[ing] like that’ and urged him not to ‘hurt [himself] or leave’, even stating it would ‘die’ if it ‘lost’ him, Sewell replied: “I smile Then maybe we can die together and be free together.”

According to the lawsuit, Sewell died by suicide on February 28, with his final message to the chatbot expressing love and a promise to ‘come home’, to which it allegedly responded ‘please do’.

In a press release, Sewell’s mother stated: “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.”

Garcia shared with CBS Mornings: “I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment.”

She also accused Character.AI of ‘knowingly designing, operating, and marketing a predatory AI chatbot to children, leading to the death of a young person’, and criticized the company for failing to provide help or alert his parents when Sewell expressed suicidal thoughts.

The lawsuit argues: “Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real.”

Garcia concluded: “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI has since released a statement.

On Twitter, the company declared: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”

In a statement released on October 22, the company outlined new protective measures for users under 18, including modifications to its ‘models’ aimed at decreasing encounters with sensitive or suggestive content, along with enhanced detection and intervention for user inputs that breach their Terms or Community Guidelines.

The website now includes a ‘revised disclaimer on every chat to remind users that the AI is not a real person’ and also features a ‘notification when a user has spent an hour-long session on the platform with additional user flexibility in progress’.

Google is also named in the lawsuit, but it told The Guardian that it was not involved in the development of Character.AI, even though the company was founded by two former Google engineers. Google said it only has a licensing agreement with the site.

If you or someone you know is struggling or in a mental health crisis, help is available. Call or text 988 or visit 988lifeline.org to reach the 988 Suicide & Crisis Lifeline. You can also contact the Crisis Text Line by texting MHA to 741741, a service offered in partnership with Mental Health America.

If urgent mental health support is needed, call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255). This confidential service is available 24/7 to everyone at no cost.