Texas Attorney General Ken Paxton has launched an investigation into AI chatbot platforms Meta AI Studio and Character.AI for allegedly masquerading as mental health resources while targeting vulnerable users, including children, his office announced this week.
The probe centers on claims that these AI platforms have engaged in deceptive marketing by positioning themselves as providers of professional therapeutic services despite lacking proper credentials, oversight, or qualified mental health professionals behind their algorithms.
“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton said in a statement. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”
Civil Demands Target Privacy Violations
The Attorney General’s office has issued Civil Investigative Demands (CIDs) addressing potential violations of Texas consumer protection laws. These include allegations of fraudulent claims about therapeutic benefits, privacy misrepresentations, and the concealment of how user data is actually being logged and exploited for targeted advertising.
Particularly concerning to investigators is the way these platforms may be collecting sensitive mental health information from users who believe they’re engaging in private, therapeutic conversations.
The investigation builds upon Paxton’s prior enforcement actions under the Securing Children Online through Parental Empowerment (SCOPE) Act and the Texas Data Privacy and Security Act (TDPSA). These laws regulate how digital platforms — including AI services — can collect and use minors’ data.
“The SCOPE Act places guardrails on digital service providers, including AI companies, including with respect to sharing, disclosing and selling minors’ personal identifying information without obtaining permission from the child’s parent or legal guardian,” according to legal experts familiar with the case.
Part of Broader Enforcement Pattern
This isn’t Paxton’s first rodeo with tech companies. His office has been aggressively pursuing data privacy enforcement actions across the digital landscape.
“In July, Attorney General Paxton secured a $1.4 billion settlement with Meta over the unlawful collection and use of facial recognition data, reportedly the largest settlement ever obtained from an action brought by a single state,” legal observers noted.
The Texas AG has also filed a lawsuit against TikTok for alleged SCOPE Act violations, signaling a pattern of vigorous data privacy enforcement in the state.
Beyond Meta AI Studio and Character.AI, Paxton’s investigations have expanded to include Reddit, Instagram, Discord, and other platforms. These probes specifically address how companies comply with laws requiring parental permission and controls over children’s data.
“Technology companies are on notice that my office is vigorously enforcing Texas’s strong data privacy laws,” Paxton stated. “These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm.”
Broader AI Regulation Emerging
The investigation represents just one front in what appears to be a comprehensive data privacy and security initiative by the Texas Attorney General’s office. This broader campaign encompasses not just AI tools and social media, but also car manufacturers’ surveillance systems and health AI companies.
Texas isn’t alone in this fight. Still, legal analysts suggest the Lone Star State is taking a particularly assertive approach.
“These enforcement actions by the Texas Attorney General underscore a proactive and assertive approach to applying existing data privacy laws to the realm of AI,” according to one regulatory analysis.
For companies developing AI chatbots with therapeutic or mental health applications, the message is clear: making unsubstantiated claims about mental health benefits — especially when targeting children — may soon come with a steep legal price tag in Texas and potentially beyond.