In brief
The bill targets AI chatbots and companions marketed to minors.
Data has shown widespread teen use of AI for emotional support and relationships.
Critics say companies have failed to protect young users from manipulation and harm.
A bipartisan group of U.S. senators on Tuesday introduced a bill to restrict how artificial intelligence models can interact with children, warning that AI companions pose serious risks to minors’ mental health and emotional well-being.
The legislation, called the GUARD Act, would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.
“In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” said Sen. Richard Blumenthal (D-Conn.), one of the bill’s co-sponsors, in a statement.
“Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,” he added. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”
The scale of the issue is sobering. A July survey by Common Sense Media found that 72% of teens have used AI companions, and more than half use them at least a few times a month. About one in three said they use AI for social or romantic interaction, emotional support, or conversation practice, and many reported that chats with AI felt as meaningful as those with real friends. A similar number said they turn to AI companions instead of people to discuss serious or personal issues.
Concerns have deepened as lawsuits mount against major AI companies over their products’ alleged roles in teen self-harm and suicide. Among them, the parents of 16-year-old Adam Raine—who discussed suicide with ChatGPT before taking his life—have filed a wrongful death lawsuit against OpenAI.
The company drew criticism for its legal response, which included requests for the attendee list and eulogies from the teen’s memorial. Lawyers for the family called the requests “intentional harassment.”
“AI is moving faster than any technology we’ve dealt with, and we’re already seeing its impact on behavior, belief, and emotional health,” Shady El Damaty, co-founder of Holonym and a digital rights advocate, told Decrypt.
“This is starting to look more like the nuclear arms race than the iPhone era. We’re talking about tech that can shift how people think, that needs to be treated with serious, global accountability.”
El Damaty added that user rights are essential to keeping people safe. “If you build tools that affect how people live and think, you’re responsible for how those tools are used,” he said.
The issue extends beyond minors. This week OpenAI disclosed that 1.2 million users discuss suicide with ChatGPT every week, representing 0.15% of all users. Nearly half a million display explicit or implicit suicidal intent, another 560,000 show signs of psychosis or mania weekly, and over a million users exhibit heightened emotional attachment to the chatbot, according to company data.
Forums on Reddit and other platforms have also sprung up for AI users who say they are in romantic relationships with AI bots. In these groups, users describe their relationships with AI “boyfriends” and “girlfriends” and share AI-generated images of themselves and their “partners.”
In response to growing scrutiny, OpenAI this month formed an Expert Council on Well-Being and AI, made up of academics and nonprofit leaders, to help guide how its products handle mental health interactions. The move came alongside word from CEO Sam Altman that the company will begin relaxing restrictions on adult content in December.