
Florida's Attorney General has opened a formal investigation into OpenAI following allegations that ChatGPT was used to help plan a deadly shooting at Florida State University last April. The incident left two people dead and five others injured, and is now drawing legal and regulatory scrutiny toward one of the most widely used AI platforms in the world.
The Florida AG's office confirmed the investigation is focused on OpenAI's practices and whether the company took adequate steps to prevent its technology from being used to facilitate violence.
This is one of the most significant law enforcement actions taken against an AI company in connection with real-world violence, and it sets a precedent for how state governments may hold AI developers accountable going forward.
MSPs and telecom resellers deploying AI tools, including voice agents and conversational AI platforms, are watching this case closely, and for good reason. If regulators begin holding AI providers liable for downstream misuse, that liability exposure could extend to resellers and channel partners who white-label or distribute those tools.
This case signals that state attorneys general, not just federal agencies, are willing to move aggressively on AI-related incidents. Service providers need to understand the indemnification terms and acceptable use policies in their vendor agreements now, before an incident forces the issue.
The most actionable step: review your contracts with any AI vendor whose technology you resell or bundle, and confirm who carries liability if an end user misuses it.
Watch for OpenAI's formal response to the investigation and whether other state AGs follow Florida's lead, which could trigger a wave of similar probes across the country. Service providers should also keep an eye on any new compliance requirements that emerge from this, particularly around AI usage disclosures and monitoring obligations.
For the full story, read the original article on TechCrunch AI.