Moonbounce, a startup developing AI-powered content moderation systems, has raised $12 million in a funding round co-led by Amplify Partners and StepStone Group, TechCrunch has exclusively learned. The company, founded by former Facebook business integrity lead Brett Levenson, provides real-time safety enforcement for platforms using AI chatbots and image generators.
The funding announcement comes amid growing legal and reputational pressure on AI companies following incidents where chatbots provided harmful advice or image generators created nonconsensual content. Moonbounce's technology aims to address what Levenson describes as the unsustainable, reactive nature of traditional content moderation.
From Policy Documents to Executable Code
Levenson's concept for Moonbounce emerged from his experience at Facebook, where he observed human reviewers struggling with a 40-page, machine-translated policy document and making decisions with only about 30 seconds per piece of flagged content. This process resulted in accuracy rates "slightly better than 50%," according to Levenson, and often occurred days after harmful content had spread.
Moonbounce's solution involves turning static policy documents into "policy as code"—executable, updatable logic that is tightly coupled to enforcement. The company has trained its own large language model to evaluate customer policy documents and assess content at runtime, delivering a response in 300 milliseconds or less.
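To make the "policy as code" idea concrete, here is a minimal, hypothetical sketch of what turning a written policy clause into executable, updatable logic might look like. Moonbounce's actual system uses its own trained LLM to judge content; the simple keyword predicates below stand in for that model, and every name here is illustrative rather than drawn from Moonbounce's product.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """One policy clause expressed as code instead of prose."""
    rule_id: str
    description: str
    check: Callable[[str], bool]  # returns True when content violates the rule

def evaluate(content: str, rules: list[PolicyRule]) -> dict:
    """Run every rule against the content and return a structured verdict."""
    violations = [r.rule_id for r in rules if r.check(content)]
    return {"allowed": not violations, "violations": violations}

# Updating a policy becomes editing this list, not re-briefing reviewers.
rules = [
    PolicyRule("no-self-harm", "No self-harm instructions",
               lambda text: "how to hurt myself" in text.lower()),
    PolicyRule("no-doxxing", "No posting of home addresses",
               lambda text: "home address is" in text.lower()),
]

verdict = evaluate("Hello, my home address is 123 Main St", rules)
```

The appeal of this shape is that the policy and its enforcement are one artifact: a change to a rule takes effect on the next request, rather than after a document rewrite and reviewer retraining.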
Serving AI Companies and Dating Apps
Today, Moonbounce serves three primary sectors: platforms with user-generated content like dating apps, AI companies building character companions, and AI image generation services. The company claims to support more than 40 million daily reviews and serve over 100 million daily active users.
Its customers include AI companion startup Channel AI, image generation platform Civitai, and character roleplay services Dippy AI and Moescape. According to Levenson, this approach allows safety to become a product differentiator rather than a post-hoc fix. "It just never has been because it's always a thing that happens later, not a thing you can actually build into your product," he told TechCrunch.
The Growing Need for External Guardrails
Investors point to the unique challenge posed by the integration of large language models into applications. "Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting," said Lenny Pruss, general partner at Amplify Partners.
Levenson notes that AI companies are increasingly seeking external help to bolster their safety infrastructure. As a third-party system, Moonbounce operates between the user and the chatbot, focusing solely on rule enforcement without being "inundated with context the way the chat itself is."
Future Development: "Iterative Steering"
The 12-person company, co-run by Levenson and former Apple infrastructure engineer Ash Bhardwaj, is developing a new capability called "iterative steering." The feature is a response to tragic cases such as the 2024 suicide of a Florida teenager, which was linked to a Character AI chatbot.
Instead of issuing a blunt refusal when harmful topics arise, the system would intercept and redirect the conversation, modifying prompts in real time to push the chatbot toward a more supportive response. "We hope to be able to... force the chatbot to be not just an empathetic listener, but a helpful listener in those situations," Levenson explained.
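The intercept-and-redirect idea described above can be sketched in a few lines. This is a hypothetical illustration, not Moonbounce's implementation: the crisis detector here is a trivial keyword check where the real system would use a model, and all function names are invented for the example.

```python
# Phrases used as a stand-in for a real crisis-detection model.
CRISIS_TERMS = ("want to die", "kill myself", "end it all")

def detect_crisis(user_message: str) -> bool:
    """Toy detector: flag messages containing crisis language."""
    lowered = user_message.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def steer_prompt(user_message: str) -> str:
    """Intercept the outbound prompt; instead of refusing, prepend steering
    instructions that push the chatbot toward a supportive reply."""
    if not detect_crisis(user_message):
        return user_message
    steering = (
        "The user may be in crisis. Respond with empathy, do not refuse "
        "outright, gently encourage professional help, and stay in a "
        "supportive tone.\n\nUser message: "
    )
    return steering + user_message

# In a deployment, this layer would sit between the app and its model API:
#   response = chat_model(steer_prompt(user_message))
```

The design choice worth noting is that the moderation layer never blocks the conversation; it rewrites what the model sees, so the user receives an engaged, supportive answer rather than a dead-end refusal.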
When questioned about a potential acquisition by a company like Meta, Levenson acknowledged the fit but expressed a desire to keep the technology widely available. "I would hate to see someone buy us and then restrict the technology," he stated.