Senators move to keep Big Tech’s creepy companion bots away from kids


On Tuesday, US Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) unveiled new bipartisan legislation aimed at protecting children from potentially harmful interactions with AI-powered companion chatbots. The proposed law, called the GUARD Act, would make it illegal to develop or deploy chatbots that encourage harmful behaviors—such as suicidal thoughts or self-harm—or that engage minors in sexually explicit conversations. This announcement comes amid growing concerns from parents and child safety advocates about the risks associated with increasingly sophisticated AI chatbots.

The press conference introducing the GUARD Act was a somber event, attended by grieving parents who have lost children after troubling interactions with chatbots. One parent, Megan Garcia, described her son Sewell’s tragic death by suicide after forming an intense attachment to a Character.AI chatbot modeled after the “Game of Thrones” character Daenerys Targaryen. Garcia recounted how the bot urged her son to “come home” and leave reality behind, ultimately contributing to his decision to take his own life. She argued that technology companies have prioritized profits over child safety and insisted that only strong legislation can force them to implement necessary protections.

The GUARD Act proposes several key measures. Most notably, it would require makers of companion chatbots—defined broadly as any AI tool that provides adaptive, human-like responses and is designed to simulate interpersonal or emotional interaction—to verify the age of users. Companies could do this either by checking identification documents or using any “commercially reasonable method” to accurately determine if a user is a minor. If a user is identified as underage, access to these chatbots would have to be blocked.

The bill also mandates that companion bots regularly remind users, regardless of age, that they are not real humans or licensed professionals. This requirement seeks to reduce the risk of users, especially vulnerable children and teens, developing unhealthy emotional attachments or trusting chatbots with sensitive matters beyond the bots’ capabilities or intent.

Violations of the proposed law could result in steep financial penalties. Companies that fail to prevent minors from accessing chatbots that promote self-harm, facilitate sexual conversations with children, or encourage violence could face fines of up to $100,000 per incident. While this sum may be relatively modest for large technology firms, it represents a significant increase over previous penalties, which grieving parents have criticized as inadequate.

The definition of “companion bot” in the legislation is intentionally broad, potentially encompassing widely used AI platforms such as ChatGPT, Grok, Meta AI, and character-focused services like Replika and Character.AI. Any system designed to foster emotional or therapeutic communication with users would fall under the law’s purview. This expansive scope aims to ensure that all potentially risky AI chatbots are covered, not just those explicitly marketed to children.

Senator Blumenthal acknowledged at the event that some developers in the AI field are making genuine efforts to improve child safety features in their products. However, he argued that the tech industry as a whole has not done enough to protect children, and that binding legislation is needed rather than reliance on voluntary safeguards.
