Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI and its ChatGPT platform on April 9, 2026, citing concerns that the AI chatbot may have assisted in last year’s mass shooting at Florida State University. The investigation was disclosed in a post on X (formerly Twitter), where Uthmeier stated that ChatGPT “may likely have been used to assist in the murder in the recent mass school shooting at Florida State University.” He also linked the platform to criminal behavior including child sexual abuse material, use by child predators, and encouragement of suicide and self-harm.
The announcement follows heightened scrutiny of generative AI technologies in the wake of the April 2025 shooting at Florida State University, which resulted in multiple fatalities and injuries. While the attorney general did not provide specific evidence linking ChatGPT directly to the attack, he emphasized public safety and national security concerns as the basis for launching the probe. Subpoenas are being issued as part of the investigation, though no timeline for its completion has been announced.
Legal experts note that Florida currently lacks AI-specific regulations, which may limit the attorney general’s ability to pursue charges under laws tailored to the technology. However, authorities could potentially rely on existing legal frameworks, such as public nuisance law, to address alleged harms. Tim Kaye, a professor of law at Stetson University in Gulfport, observed that in the absence of dedicated AI legislation, officials often fall back on established legal doctrines to address emerging technological risks.
The investigation comes amid broader national and international debates about the accountability of AI developers for how their tools are used. OpenAI has not publicly responded to the Florida attorney general’s announcement as of this reporting. The company maintains usage policies that prohibit illegal activities, including planning violence or exploiting minors, and states that it employs automated and human review systems to detect and prevent misuse of its services.
This case marks one of the first instances in which a U.S. state attorney general has initiated a criminal investigation into an AI company over allegations that its model facilitated a mass shooting. Similar concerns have been raised in other jurisdictions regarding AI-generated content and its potential role in harmful behaviors, though direct legal actions against AI developers remain rare.
The Florida Attorney General’s Office has not released further details about the scope of the investigation, including whether it will examine OpenAI’s data training processes, model safeguards, or specific user interactions with ChatGPT prior to the shooting. Officials have indicated that updates will be provided as the probe progresses.
For ongoing developments, members of the public are encouraged to monitor official communications from the Florida Attorney General’s Office and verified news outlets for confirmed information.