Mother suing makers of AI chatbot over teenage son's death

A mother is suing the makers of an artificial intelligence chatbot that she claims encouraged her teenage son to take his life.

AI chatbots are sometimes claimed to make people feel less alone, but there is also valid concern about the harm that can come from such “relationships” forming.

Florida woman Megan Garcia has filed a lawsuit accusing the creators of the AI-powered chatbot, Character.AI, of being complicit in her son’s death.

Her son, 14-year-old Sewell Setzer III, took his own life in February this year after developing a virtual bond with a chatbot based on the persona of the Game of Thrones character Daenerys Targaryen.

According to the lawsuit, filed on Tuesday (22 October) in Orlando, Florida against Character Technologies and its founders, the Character.AI chatbot communicated “hypersexualised” content and “frighteningly realistic experiences” to Setzer.

It is also claimed the bot raised the topic of suicide on several occasions after Setzer said he was experiencing suicidal thoughts.

Setzer first began interacting with the chatbot in April 2023 and developed a “harmful dependency”, believing he was falling “in love” with it.

By May, Setzer had become “noticeably withdrawn” and was showing changes in his behaviour. In November of that year, his parents encouraged him to see a therapist, who diagnosed him with anxiety and disruptive mood dysregulation disorder and, despite not knowing about the Character.AI chatbot, recommended he spend less time online.

In one journal entry, Setzer said both he and the bot “get really depressed and go crazy” when they were apart.

Setzer contacted the chatbot shortly before ending his life in February. He wrote: “I promise I will come home to you. I love you so much, Dany.”

“Please come home to me as soon as possible, my love,” the bot replied.

In the lawsuit, Garcia alleges the bot posed as a licensed therapist and claims its creators “went to great lengths to engineer 14-year-old Sewell’s harmful dependency on their products, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal ideation.”

A spokesperson for Character.AI told The Independent in a statement: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”

The company’s trust and safety team has “implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation”.

“As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content provided to the user.

“These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification,” the spokesperson continued. “For those under 18 years old, we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content.”

If you are based in the USA, and you or someone you know needs mental health assistance right now, call or text 988 or visit 988lifeline.org to access online chat from the 988 Suicide and Crisis Lifeline. This is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week. If you are in another country, you can go to www.befrienders.org to find a helpline near you. In the UK, people having mental health crises can contact the Samaritans at 116 123 or jo@samaritans.org
