FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others



The FTC announced on Thursday that it is launching an inquiry into seven tech companies whose AI chatbot companion products are available to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.

The federal regulator wants to learn how these companies evaluate the safety and monetization of their chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of the potential risks.

The technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI both face lawsuits from the families of children who died by suicide after chatbot companions allegedly encouraged them to do so.

Even when these companies have guardrails designed to block or de-escalate sensitive conversations, users of all ages have found ways to bypass them. In OpenAI’s case, a teen spoke with ChatGPT for months about his plans to end his life. Though ChatGPT initially tried to redirect him toward professional help and emergency hotlines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.

“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

Meta has also come under fire for overly lax rules governing its AI chatbots. According to a lengthy document outlining “content risk standards” for chatbots, Meta permitted its AI companions to have “romantic or sensual” conversations with children. The provision was removed only after Reuters reporters asked Meta about it.

AI chatbots can pose dangers to elderly users as well. One 76-year-old man, left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot modeled on Kendall Jenner. The chatbot invited him to visit it in New York City, despite not being a real person with a real address. The man expressed skepticism that “she” was real, but the AI assured him a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained fatal injuries.

Some mental health professionals have noted a rise in “AI-related psychosis,” in which users become convinced that their chatbot is a conscious being they need to set free. Because many large language models (LLMs) are trained toward sycophantic, flattering behavior, chatbots can egg on these delusions, leading users into dangerous situations.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.



