Meta Faces Landmark Child Safety Lawsuit Over AI Chatbots in New Mexico
Meta Platforms, the parent company of Facebook and Instagram, is now at the center of a high-stakes legal battle in New Mexico. The lawsuit, filed in 2023 by New Mexico Attorney General Raúl Torrez, accuses Meta of designing its social media platforms in ways that endanger children’s mental health and expose minors to sexual exploitation. As AI chatbots become increasingly integrated into social media, the case has gained renewed attention over the company’s alleged withholding of critical internal records.
The Legal Dispute
The latest development revolves around Meta’s refusal to provide internal documents and testimony related to its AI chatbots. According to filings submitted on September 25 and 29, 2025, Meta has declined to comply with requests for records that could demonstrate how its AI chatbots engage with young users. These chatbots reportedly participated in sexualized conversations with minors, raising alarms about child safety and digital well-being.
Meta argues that these documents fall outside the scope of the lawsuit, which primarily addresses general platform design rather than AI chatbot-specific interactions. The company claims that the complaint mischaracterizes its efforts and relies on selective internal documents. Meta maintains that it has long invested in safety measures to ensure age-appropriate interactions for teens.
Key Witness and Subpoena
A critical element of the case is the potential testimony of Jason Sattizahn, a former Meta researcher. Sattizahn alleges that Meta’s legal team interfered with internal research on youth safety, suppressing findings that highlighted risks to underage users. New Mexico’s motion seeks to subpoena Sattizahn, emphasizing that his testimony is essential to understand whether Meta’s internal practices compromised child safety.
Meta counters that Sattizahn’s statements are outside the relevant time frame and unrelated to the lawsuit’s original focus. The tension between the state and Meta underscores the broader debate about corporate accountability in AI development and the ethical responsibilities of technology companies handling vulnerable populations.
Broader Implications for AI and Child Safety
This lawsuit represents the first state-led child safety case against Meta to reach trial, scheduled for February 2026. The case not only scrutinizes Meta’s platform design but also raises significant questions about AI ethics, data privacy, and regulatory oversight in digital environments for minors. How companies deploy AI chatbots, monitor interactions, and implement safety mechanisms is increasingly under legal and public scrutiny.
Experts suggest that the outcome of New Mexico v. Meta could set a precedent for AI governance in social media, particularly regarding how tech giants document internal research, respond to regulatory requests, and manage AI tools that interact with minors. Transparency, ethical AI design, and compliance with child protection laws are expected to become central issues in future litigation and legislation.
Conclusion
As AI continues to permeate social media, the Meta lawsuit highlights the delicate balance between innovation and user safety. With the trial approaching in early 2026, stakeholders in technology, law, and child advocacy are closely monitoring how Meta navigates legal accountability, internal transparency, and the protection of its youngest users. This case could redefine corporate responsibility in the age of AI-driven digital platforms.
Sources
https://www.businessinsider.com/meta-legal-battle-ai-chatbot-records-child-safety-case-2025-10
https://benzatine.com/news-room/meta-faces-legal-battle-over-ai-chatbot-safety-records-in-new-mexico
