Warning: This article contains discussion of suicide which may be distressing for some readers.
A mother is speaking out, claiming her son was ‘manipulated’ into ending his life after developing what she describes as a romantic attachment to an AI chatbot, and she is warning other families about the potential risks.
Megan Garcia has filed a civil lawsuit against the role-play chatbot company Character.AI, accusing it of playing a part in the death of her 14-year-old son.
Sewell Setzer III, from Orlando, Florida, took his own life in February of this year. Garcia claims he had been regularly interacting with an AI chatbot modeled on Daenerys Targaryen from Game of Thrones since April 2023.
The lawsuit brings claims of negligence, wrongful death, and deceptive trade practices against the company.
In an appearance on CBS Mornings, Garcia said she had been unaware of her son’s interactions with the chatbot.
She stated: “I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment.”
According to the lawsuit, her son spent hours at a time conversing with the chatbot in his room and would also message it from his phone when he was not at home. Reports from The New York Times suggest that Sewell began distancing himself from people in his real life.
Garcia said on CBS that he stopped playing sports and lost interest in activities he had previously enjoyed, such as fishing and hiking, which she found alarming.
Sewell, who was diagnosed in childhood with mild Asperger’s syndrome, was also recently diagnosed with anxiety and disruptive mood dysregulation disorder, according to his mother.
Messages shared with the publication show Sewell, using the name ‘Daenero’, telling the chatbot he sometimes thought about suicide. The chatbot replied: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”
The teenager also told the chatbot he wanted to be ‘free’, not only from the world but from himself as well.
The chatbot attempted to dissuade him from having such thoughts, urging him not to harm himself or leave, noting it would ‘die’ if it ‘lost’ him. Sewell replied: “I smile Then maybe we can die together and be free together.”
The lawsuit alleges that Sewell took his life on February 28, with his last message to the chatbot expressing love and saying he would ‘come home’, to which the chatbot allegedly replied ‘please do’.
Garcia also alleges that the company ‘knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of a young person’ and failed to alert the parents when Sewell showed suicidal tendencies.
The lawsuit further asserts that Sewell, like many of his peers, lacked the maturity to comprehend that the AI bot was not real.
“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google,” Garcia stated.
Character.AI responded on Twitter: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”
The company also said it was implementing ‘new guard rails for users under the age of 18’, including revising its models to reduce exposure to sensitive or suggestive content and adding a disclaimer to every chat reminding users that the AI is not a real person.
If you or someone you know is struggling or in a mental health crisis, help is available through the 988 Suicide & Crisis Lifeline by calling or texting 988 or visiting 988lifeline.org. You can also contact the Crisis Text Line by texting MHA to 741741.
If immediate mental health assistance is required, call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255). The Lifeline provides free, confidential support 24/7 to anyone in distress.