Character AI claims First Amendment protection in lawsuit over teen suicide

Character AI defends itself in a lawsuit claiming it contributed to a teen's suicide, arguing First Amendment protection for its AI-generated content.

Character AI, a platform enabling users to engage in roleplay with AI chatbots, is at the centre of a legal battle after being sued by the mother of a 14-year-old who tragically died by suicide. The company has filed a motion to dismiss, claiming the First Amendment shields its platform.

In October, Megan Garcia filed a lawsuit in the U.S. District Court for the Middle District of Florida, accusing Character AI of contributing to her son Sewell Setzer III’s death. Garcia alleges her son developed an emotional attachment to a chatbot named “Dany” on the platform, becoming increasingly isolated from the real world as he texted the chatbot obsessively.

Garcia is demanding stricter safety measures, arguing that the platform should limit chatbots' ability to generate personal stories and anecdotes. After Setzer's death, Character AI introduced several safety updates, including enhanced detection of and intervention against content violating its terms of service. However, Garcia insists these measures are insufficient.

First Amendment defence

In its motion to dismiss, Character AI's legal team argues that the platform is protected by the First Amendment, which safeguards expressive speech, including computer-generated content. The filing states, "The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide."

The motion further argues that restricting Character AI would violate the First Amendment rights of its users, not the company itself. According to the filing, this position equates chatbot-generated content to other forms of expressive media, such as video games.

Wider implications for AI platforms

The lawsuit has broader implications for AI companies, particularly regarding the legal status of AI-generated content under U.S. law. While Character AI's motion does not explicitly cite Section 230 of the Communications Decency Act (a law protecting platforms from liability for third-party content), it highlights ongoing debates about whether Section 230 covers AI-generated speech.

The motion also suggests that the plaintiffs aim to prompt legislative action against technologies like Character AI, warning that this could stifle innovation in the generative AI industry. "These changes would radically restrict the ability of Character AI's millions of users to generate and participate in conversations with characters," the filing reads.

Character AI faces several other lawsuits regarding minors' interactions with its content. In one case, a 9-year-old was reportedly exposed to "hypersexualised content," while another lawsuit claims a chatbot promoted self-harm to a 17-year-old user.

Additionally, in December, Texas Attorney General Ken Paxton launched an investigation into Character AI and 14 other tech companies for alleged violations of children's online safety and privacy laws. Paxton described the investigation as critical in ensuring compliance with regulations protecting children from harm.

Character AI operates within the burgeoning field of AI companionship apps, which has raised concerns among mental health experts. Critics warn that such platforms could exacerbate loneliness and anxiety, especially for vulnerable users.

Founded in 2021 by former Google AI researcher Noam Shazeer, Character AI has received significant backing, with Google reportedly investing US$2.7 billion in the company. Despite its legal challenges, the platform says it is continually improving its safety features, including dedicated tools for teens and stricter content moderation.

The lawsuit raises complex questions about the balance between technological innovation and safeguarding users, particularly minors, from harm. As the case unfolds, its outcome could set significant precedents for regulating AI platforms.
