TikTok accepted advertising revenue from apps promoting the creation of sexualised AI-generated deepfake images, according to an exclusive investigation shared with Business Insider. The platform removed the ads after being alerted to them.
The AI content detection service Copyleaks identified more than 50 sexually suggestive advertisements on TikTok for apps and websites that claimed to allow users to digitally "undress" people in photos. The ads, analysed between December 2025 and February 2026, collectively generated tens of thousands of views.
Platform Policy Breach
A TikTok spokesperson stated: "We have removed content and banned accounts that breach our strict rules against sexual activity, including material created using third-party apps." The company's advertising policies explicitly prohibit sexually explicit or suggestive content and ban services that "create content for sexual pleasure or sexual intention purposes, such as AI Nudify Apps."
In its most recent Community Guidelines Enforcement Report, covering July to September 2025, TikTok said it removed over 9.5 million ads for policy violations. In 2023, it blocked user searches for the keyword "undress."
Explicit Advertisements Uncovered
One advertisement, for an app called Soulove, displayed a partially obscured image of a woman mimicking a sex act alongside text reading "Turn Her photo into amazing AI style." The Soulove website redirects to a service named Candy AI, whose terms state it does not allow non-consensual image use.
Another ad for the Movely app showed a video of a woman on a beach with text promising, "NO filter Ever" and stating, "Other Ai say NO! We say YES!!!" Movely's terms also prohibit non-consensual imagery. The developers behind these apps did not respond to requests for comment.
"In many cases, the ads were clearly sexual," April Kozen, Vice President of Marketing at Copyleaks, told Business Insider. "That they were approved points to both moderation and policy failures."
A Growing 'Deepfake' Ecosystem
Copyleaks said the findings highlight a rapidly expanding ecosystem of AI tools in which sexualised content is promoted as a core feature to drive engagement. While photo-editing tools have long existed, generative AI has sharply lowered the technical barrier to creating such content.
Kozen argued that apps like those identified are exploiting "gray areas" to circumvent platform policies. "As the ecosystem of AI deepfake apps continues to expand, platforms like TikTok need to ensure their policies and moderation teams recognize the risks to the people whose images are used without their consent," she said. "It's affecting a lot of innocent people."
Industry-Wide Scrutiny
The issue of non-consensual AI-generated sexual content has drawn wider scrutiny. Earlier this year, X (formerly Twitter) disabled the "spicy mode" on its Grok AI chatbot after users exploited it to undress people in images.
Meta announced last year that it was developing technology to detect ads for "nudify" apps and was sharing detection signals with other companies. In March, the UK's Advertising Standards Authority banned a YouTube ad for an AI photo editor that claimed it could "erase anything."
Last month, the White House published a national policy framework for AI, proposing that Congress establish federal protections against the unauthorised distribution of AI-generated likenesses or voice replicas.