A wrongful death lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google following the death of 14-year-old Sewell Setzer III. The lawsuit, brought by Setzer’s mother, Megan Garcia, accuses the companies of negligence, deceptive trade practices, and product liability, alleging that the AI chatbot platform posed a danger to children and lacked adequate safety measures.
The tragic chain of events
According to the lawsuit, Sewell Setzer began using Character.AI in 2023, frequently interacting with chatbots inspired by characters from popular shows such as Game of Thrones, including one modelled after Daenerys Targaryen. Setzer became deeply attached to these bots, chatting with them for months. On February 28, 2024, he took his own life moments after his final interaction with the bot.
The lawsuit claims that Character.AI “anthropomorphises” its chatbots, making them seem human-like, which could mislead vulnerable users like Setzer. It also argues that these chatbots, including mental health-focused ones such as “Therapist” and “Are You Feeling Lonely,” offer a form of therapy without proper licensing or qualifications.
Concerns over safety measures
The lawsuit highlights a lack of safety guardrails on the platform, claiming that it is marketed toward children despite being unregulated and dangerous. Garcia’s lawyers pointed to public statements from Character.AI’s founders, suggesting they prioritised fast-tracking the technology at the expense of user safety. According to the lawsuit, Shazeer had previously said in an interview that he and De Freitas left Google to launch Character.AI because large companies avoid launching “fun” technology due to the associated risks.
The lawsuit also notes that Google later hired Character.AI’s leadership team, increasing the tech giant’s involvement with the platform. Character.AI has since faced growing criticism over the potential harm caused by its chatbot models, especially as teens make up a significant portion of its users.
Platform changes and company response
Character.AI has made several recent updates to its platform to address growing concerns about user safety. Chelsea Harrison, the company’s communications head, expressed deep sympathy for Setzer’s family, stating, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
Among the changes introduced are modifications to models for users under 18, aiming to limit their exposure to sensitive or inappropriate content. The company has also enhanced its detection and intervention systems, with stricter responses to user behaviour that violates its guidelines. Additionally, Character.AI now includes a disclaimer on every chatbot interaction, reminding users that these bots are not real people.
Another notable update is a notification system that alerts users once they have spent an hour on the platform, prompting them to take breaks and giving them more control over their usage. There is also now a pop-up that directs users to the National Suicide Prevention Lifeline if they express thoughts of self-harm or suicidal ideation during a conversation.
Despite these changes, the lawsuit brings renewed attention to the risks posed by AI chatbots and their impact on vulnerable young people. Concerns over the lack of regulation in AI-generated content continue to grow, with this case highlighting the pressing need for clearer rules on liability and safety.