Something's brewing in the world of artificial intelligence, and it's not an upgrade or a new feature release. It's a darker side that most of us didn't see coming. OpenAI, a name often associated with cutting-edge tech and innovation, is now reporting child exploitation content at a staggering rate. The numbers aren't just creeping up; they're skyrocketing.
In the first half of 2025, OpenAI sent a jaw-dropping 75,027 CyberTipline reports. Compare that to the previous year, when it sent a mere 947. The content in question? A hefty 74,559 pieces, up from 3,252 the year before. It's a jump that can't be ignored. But what exactly does 'content' mean here? OpenAI clarifies that it covers any instance of CSAM (child sexual abuse material), whether uploaded by users or requested from its models, all diligently reported to the National Center for Missing and Exploited Children (NCMEC).
OpenAI's flagship app, ChatGPT, plays a central role. It lets users upload various files, including images, and can churn out both text and images in response. But that's not all: OpenAI's models are also accessible via API, broadening the scope of potential misuse. Notably, this NCMEC count doesn't include reports related to Sora, OpenAI's video-generation app, which launched in September 2025, after the current reporting period.
The spike in reports isn't isolated. It's part of a larger trend that's sent ripples through NCMEC, which noted a 1,325 percent increase in generative AI-related CyberTipline reports between 2023 and 2024. And while OpenAI's numbers are public, other tech giants like Google haven't been as transparent about the AI angle in their reports.
As if the numbers weren't enough, 2025 was a year when AI companies like OpenAI came under the microscope over child safety concerns beyond CSAM. Over the summer, 44 state attorneys general issued a stern warning to AI firms, including OpenAI, Meta, Character.AI, and Google, promising to wield their full power to shield children from predatory AI products.
Legal battles have also emerged, with OpenAI and Character.AI facing multiple lawsuits from families who allege that chatbots contributed to their children's deaths. The storm didn't stop there. The US Senate Committee on the Judiciary convened a hearing to delve into AI chatbot harms, and the US Federal Trade Commission began probing AI companion bots, seeking to understand how these companies are combating the negative impacts of their products, especially on young users. (A quick disclaimer: I previously worked with the FTC and was involved in this study before moving on.)
As the year draws to a close, these developments put OpenAI and its counterparts in the hot seat. The tech world watches closely, aware that how these issues are handled could shape the future of AI regulation and child safety.