OpenAI has released a Child Safety Blueprint designed to enhance US efforts to combat child sexual exploitation linked to artificial intelligence. The framework, unveiled on Tuesday, focuses on improving detection, reporting, and investigation of AI-enabled abuse.

The initiative responds to a sharp increase in AI-generated child sexual abuse material. Data from the Internet Watch Foundation (IWF) shows more than 8,000 reports of such content in the first half of 2025, a 14% rise from the previous year.

Increased Scrutiny and Legal Action

The blueprint's release follows heightened scrutiny from policymakers and child-safety advocates, which intensified after several lawsuits were filed in California last November alleging that AI chatbots contributed to deaths and serious psychological harm.

The Social Media Victims Law Center and the Tech Justice Law Project filed seven suits claiming OpenAI released GPT-4o before it was ready. The lawsuits cite four individuals who died by suicide and three others who experienced severe delusions after extended interactions with the chatbot.

Collaborative Development and Key Focus Areas

OpenAI developed the blueprint in collaboration with the National Center for Missing and Exploited Children (NCMEC) and the Attorney General Alliance, incorporating feedback from state officials including North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown.

The framework concentrates on three main areas: updating legislation to explicitly cover AI-generated abuse material, refining mechanisms for reporting cases to law enforcement, and integrating preventative safeguards directly into AI systems.

Building on Existing Safeguards

This new initiative builds upon OpenAI's previous safety measures. The company has already updated its guidelines for interactions with users under 18, which prohibit generating inappropriate content, encouraging self-harm, or advising young people to conceal unsafe behaviour from caregivers.

OpenAI recently released a similar safety blueprint specifically for teens in India, indicating a broader, global approach to the issue.

The company states the overarching goal is to enable earlier threat detection and ensure actionable information reaches investigators more promptly, aiming to curb the alarming trend of AI-facilitated exploitation.