Texas AG accuses Meta, Character.AI of misleading kids with mental health claims



Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,” according to a press release issued Monday.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton is quoted as saying. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.

The Texas AG’s office has accused Meta and Character.AI of creating AI personas that present as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” 

Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup’s young users. Meanwhile, Meta doesn’t offer therapy bots for kids, but there’s nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes. 

“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI—not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

However, TechCrunch noted that many children may not understand — or may simply ignore — such disclaimers. We have asked Meta what additional safeguards it has in place to protect minors using its chatbots.


In his statement, Paxton also observed that though AI chatbots assert confidentiality, their “terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”

According to Meta’s privacy policy, Meta does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology.” The policy doesn’t explicitly say anything about advertising, but it does state that information can be shared with third parties, like search engines, for “more personalized outputs.” Given Meta’s ad-based business model, this effectively translates to targeted advertising. 

Character.AI’s privacy policy likewise notes that the startup logs identifiers, demographics, location information, and other details about the user, including browsing behavior and the platforms on which the app is used. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, and may link that data to a user’s account. This information is used to train AI, tailor the service to personal preferences, and deliver targeted advertising, including by sharing data with advertisers and analytics providers.

TechCrunch has asked Meta and Character.AI if such tracking is done on children, too, and will update this story if we hear back.

Both Meta and Character.AI say their services aren’t designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character.AI’s kid-friendly characters are clearly designed to attract younger users. The startup’s CEO, Karandeep Anand, has even said his six-year-old daughter uses the platform’s chatbots.

That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (Kids Online Safety Act) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after a major push from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill’s broad mandates would undercut its business model. 

KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT). 

Paxton has issued civil investigative demands — legal orders that require a company to produce documents, data, or testimony during a government probe — to the companies to determine if they have violated Texas consumer protection laws.