Florida AG Investigates Whether ChatGPT Influenced USF Student Killings

Florida Attorney General Ashley Moody has opened an inquiry into whether the AI chatbot ChatGPT played a role in the fatal stabbing of two University of South Florida students, examining whether the suspect used the tool to plan or justify the violence as authorities reassess AI accountability in criminal cases.

Florida Attorney General Ashley Moody confirmed her office is investigating whether OpenAI's ChatGPT contributed to the April 2026 killings of University of South Florida students Zamil Limon and Nahida Bristy, marking one of the first state-level examinations of generative AI's potential influence in violent crime.

The double homicide occurred near USF's Tampa campus when suspect Sean Michael Riley allegedly stabbed the couple multiple times before fleeing. Riley, who has pleaded not guilty to first-degree murder charges, reportedly referenced AI-generated content during police interviews, according to affidavits filed in Hillsborough County Court. Moody's investigation focuses on whether Riley used ChatGPT to research methods, seek validation for violent acts, or construct a narrative justifying the attack, a line of inquiry that could set precedents for how law enforcement treats AI interactions in criminal proceedings.

OpenAI's usage policies explicitly prohibit generating content that encourages or depicts violence, and the company states it has implemented safeguards to refuse such requests. However, investigators are examining whether Riley circumvented these protections through prompt engineering techniques or used modified versions of the model. The AG's office has issued subpoenas to OpenAI for Riley's chat logs and usage data, though the company has not yet confirmed compliance.

Legal experts note this case tests existing frameworks around intermediary liability. Unlike social media platforms protected by Section 230, generative AI operators face less clear legal terrain regarding user-generated harmful outputs. A finding that ChatGPT substantially contributed to the crime could prompt new regulatory approaches targeting AI developers' duty of care, though First Amendment concerns about restricting lawful speech remain significant.

For USF, the killings have intensified campus safety debates already heightened by recent incidents. University officials have expanded mental health resources and increased police patrols while cooperating fully with the investigation. The outcome may influence how educational institutions assess AI risks in student conduct policies, particularly as generative tools become more accessible for academic and personal use.

As AI capabilities advance, this probe highlights the growing tension between technological innovation and public safety, a balance policymakers are increasingly called to strike without stifling beneficial applications. The investigation's findings, expected later this year, could shape Florida's approach to AI governance and inspire similar inquiries in other jurisdictions grappling with the real-world impacts of increasingly sophisticated language models.
