Characterizing the MrDeepFakes Sexual Deepfake Marketplace
Paper: Characterizing the MrDeepFakes Sexual Deepfake Marketplace
What Was MrDeepFakes?
MrDeepFakes was the world’s largest marketplace for non-consensual sexual deepfakes: pornographic videos made by digitally swapping faces into explicit content. It hosted over 43,000 videos, targeting more than 3,800 individuals, and drew 1.5 billion views before shutting down in 2025.
While the site claimed to allow only celebrity content, the research found that hundreds of videos featured non-famous, everyday people. A robust community of creators, buyers, and “deepfake artists” operated through a forum and video platform, commissioning, trading, and distributing sexual fakes at scale.
Why It’s More Than Just a Tech Problem
Sexual deepfakes are not harmless entertainment: they are image-based sexual abuse. Victims often suffer reputational damage, trauma, and loss of control over their own likeness. Since 95% of targets are women, the phenomenon reinforces existing gendered power dynamics.
Even worse, some creators justify their actions as “art” or “community contributions,” normalizing this abuse within online subcultures.
Rules That Didn’t Work
Although MrDeepFakes had formal rules, such as bans on underage targets, rape scenes, and non-celebrity content, they were poorly enforced:
14% of targets were not public figures
Over 1,000 videos included rape or humiliation themes
Requests for underage deepfakes were not immediately removed
This shows that self-regulation is ineffective when profit and anonymity dominate.
Why Tech Alone Can’t Solve It
Community members skillfully bypassed technical barriers:
Used open-source tools such as DeepFaceLab
Trained models on free cloud GPU services such as Google Colab
Evaded detection by renaming files or moving to alternate platforms
When tech companies banned deepfake content or tools, users simply found workarounds. These adversarial dynamics mean creators stay a step ahead.
MrDeepFakes is gone, but the technology and the harm remain. Stopping its successors will require collaboration across tech, policy, and society. The lesson? Progress in AI must be matched by ethics, safeguards, and accountability.