FOIAengine: Signals From a FOIA Request After a Landmark Settlement
Megan Garcia was certain her 14-year-old son’s tragic suicide in 2024 was caused by a Game of Thrones-inspired AI chatbot. Represented by the class-action product-liability law firm Normand PLLC, she pursued a wrongful death lawsuit in Florida against chatbot developer Character Technologies and Google, alleging that the chatbot, Character.ai, encouraged her son to commit suicide and failed to intervene when he expressed suicidal thoughts. Her complaint argued that this amounted to negligence and dangerous design.
The case had been progressing before Judge Anne C. Conway of the U.S. District Court for the Middle District of Florida, but now appears to be headed for resolution, along with other similar cases in Texas, Colorado and New York. Character.ai and Google agreed in January to an undisclosed settlement committing, among other things, to implement new safety features for users under 18. Judge Conway issued the settlement order on January 7 and gave the parties 90 days to finalize terms. The parties didn’t announce what, if any, financial settlement would be paid to the plaintiffs by Google and Character.ai.
The litigation, Garcia v. Character Technologies, Inc. et al., is now viewed as a landmark case highlighting the legal responsibilities of AI developers for the behavioral impact of their products on children. In a first-of-its-kind ruling on chatbot harms in May 2025, Judge Conway allowed most of the chatbot-harm claims to proceed and rejected the notion that chatbot output was protected by free speech law.
Six days after Judge Conway’s settlement order was issued, on January 13, Lawrence Cody of the Normand firm submitted a broad Freedom of Information Act request to the FTC seeking numerous agency internal records related to AI-caused harm to children.
The FOIA request was filed in the context of a new FTC investigation announced on September 11 into the potential emotional and developmental risks to children and teens caused by AI chatbots, particularly AI companions. The FTC’s press release announcing the investigation stated that “the FTC is interested in particular in the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule.”
The companies targeted by the FTC’s investigation included the two corporate defendants in Megan Garcia’s litigation – Alphabet and Character Technologies – along with Instagram, Meta, OpenAI, Snap, and x.AI.
According to FOIAengine, which tracks FOIA requests in as close to real time as their availability allows, Cody’s expansive 275-word request sought documents related to such topics as FTC projections of AI-caused harm involving children and teens, mitigation strategies that may or may not have been implemented by AI companies such as age gating and monitoring, FTC staff assessments of the risk to children, FTC consultations with experts on child and teen mental health, and communications on the subject between the FTC and AI companies, specifically including Alphabet and Character Technologies.
The request came within a week of Megan Garcia’s settlement, but the request’s scope suggests more than post-settlement housekeeping. It reads like strategic groundwork for future cases.
By targeting the agency’s internal analyses and expert consultations, the Normand firm appears to be probing the boundaries of “foreseeability” and industry knowledge—concepts that sit at the core of negligence and product liability claims. If FTC staff have been assessing child emotional dependency risks, recommended industry mitigation standards, or legal compliance frameworks, those materials could help plaintiffs argue that safer design alternatives were available and that companies were on notice of potential harm from AI chatbots.
This echoes early patterns seen in youth-harm litigation against social media companies, where sentinel cases and regulatory scrutiny preceded broader waves of coordinated lawsuits. Taken together, the settlement of a high-profile wrongful death case and a parallel effort to obtain federal investigative records may foreshadow the early stages of category-wide litigation against AI companion platforms.
Whether this evolves into coordinated multi-district litigation or remains a series of tragic but isolated suits will depend on what emerges from FTC documents and subsequent filings. But the FOIA request suggests that at least one plaintiffs’ firm is preparing for the possibility that AI chatbots and companions, like social media before them, could become the next frontier in online platform liability.
There already are signs that states will take action. On January 8, the day after the settlement order in Megan Garcia’s suicide litigation, Kentucky Attorney General Russell Coleman filed the first state lawsuit in the nation against an AI chatbot company. The complaint alleges Character Technologies, its owners and its product Character.AI broke Kentucky law by prioritizing their own profits over the safety of children. Coleman said “more than 20 million monthly users were logging on to a platform with a record of encouraging suicide, self-injury, isolation and psychological manipulation.”
Congress also is watching the issue. On September 16, the Senate Subcommittee on Crime and Counterterrorism conducted a hearing examining harm to children from AI chatbots at which Megan Garcia testified.
As attorney Lena Kempe wrote recently in Bloomberg, “the FTC’s focus on AI chatbots is likely just the beginning. Recent congressional hearings and risks posed by the broader emotional AI ecosystem suggest companies should expect new legislation, government enforcement, and private lawsuits.”
FOIAengine is the only source for the most comprehensive, fully searchable archive of FOIA requests across over 40 federal departments and agencies. FOIAengine offers robust search functionality and standardizes data from different agencies to make it easier to work with. Learn more about FOIAengine here. Sign up here to become a trial user of FOIAengine.
PoliScio now offers everyone free daily FOIAengine Email Alerts when a new FOIA request matches one of your personal keywords. Sign up here to create your account and identify your keywords.
FOIAengine access now is available for all professional members of Investigative Reporters and Editors, a non-profit organization dedicated to improving the quality of journalism. IRE is the world’s oldest and largest association of investigative journalists. PoliScio Analytics is proud to be partnering with IRE to provide this valuable content to investigative reporters worldwide.
To see all the requests mentioned in this article, log in or sign up to become a FOIAengine user.
Next: The latest hedge fund requests to the Food and Drug Administration. Randy E. Miller, co-creator of FOIAengine, is a Washington lawyer, publisher, and former government official. He has developed several online information products and was a partner at Hogan Lovells, where he founded the firm’s Brussels office and represented clients on international regulatory matters. Miller also has served as a White House trade lawyer, Senior Legal Adviser to the U.S. Mission to the World Trade Organization, policy director to Senator Bob Dole, and adjunct professor at Georgetown University. He is a graduate of Yale and Georgetown Law. FOIAengine is a product of PoliScio Analytics (PoliScio.com), a venture specializing in U.S. political and governmental research, co-founded by Miller and Washington journalist John A. Jenkins.
Write to Randy E. Miller at randy@poliscio.com.
