Tag Archives: #Deepfakes

Tech Bros, Big Platforms, and Poor Regulation: Who Enables Deepfake Porn?

Recently, I delivered the keynote Techno-patriarchy: How deepfakes are misogyny’s new clothes and what we can do about it at the Manchester Tech Festival. Putting together the presentation prompted me to reflect on my advocacy journey on what is popularly referred to as “deepfake porn.”

In 2023, I had had enough of hearing tech bros blame unconscious bias for all the ways in which AI was weaponised against women. Determined to demonstrate intent, I wrote Techno-Patriarchy: How AI is Misogyny’s New Clothes, originally published in The Mint.

In the article, I detailed 12 ways this technology is used against women, from reinforcing stereotypes to pregnancy surveillance. One shocked me to my core: Non-consensual sexual synthetic imagery (aka “deepfake porn”).

Why? Because, whilst the media warned us about the dangers of deepfakes as scam and political unrest tools, the reality is that non-consensual sexual synthetic imagery constitutes 96% of all deepfakes found online, with 99.9% depicting women. And their effects are devastating.

Judge for yourself:

It was completely horrifying, dehumanizing, degrading, violating to just see yourself being misrepresented and being misappropriated in that way.

It robs you of opportunities, and it robs you of your career, and your hopes and your dreams.

Noelle Martin, “deepfake porn” victim, award-winning activist, and law reform campaigner.

So I continued to write about the dire consequences of this technology for victims and about the legal vacuum surrounding it, and to denounce the powerful ecosystem (tech platforms, payment processors, marketplaces) that fosters and profits from it.

I also made a point to bring awareness about how this technology is harming women and girls in spaces where the topic of “deepfakes” was explored broadly. I organised events, appeared on podcasts, and participated in panels, such as “The Rise of Deepfake AI” at the University of Oxford; all opportunities were fair game to bring “deepfake porn” to the forefront.

This week, I had 30 minutes to convince over 80 women in tech – and allies – to become advocates against non-consensual sexual synthetic imagery. The feedback I received from the keynote was very positive, so I’m sharing my talking points with you below.

I hope that by the end of the article, (a) you are convinced that we need to act now, and (b) you have decided how you will help to advocate against this pandemic.

The State of Play

All that’s wrong with using the term “deepfake porn”

I had an aha moment when I realised the disservice the term “deepfake porn” was doing to addressing this issue.

“Deepfake” honours the name of the Reddit user who shared the first synthetic intimate media of actresses on the platform. When paired with the label “porn”, it may wrongly convey the idea that the content is consensual. Overall, the term lacks gravitas and downplays the harms.

From a legal perspective, the use of the term “deepfake” may also hinder the pursuit of justice. There have been cases where filing a lawsuit using the term “deepfake” when referring to a “cheapfake” — a fake piece of media created with conventional methods of doctoring images rather than AI — has blocked prosecution.


Inside the Digital Underbelly: The Lucrative World of Deepfake Porn

Two weeks ago, deepfake pornographic images of Taylor Swift spread like wildfire across X. It took the platform 19 hours to suspend the account that posted the content, by which time the images had amassed over 27 million views and more than 260,000 likes.

That gave me pause. 260,000 people liked the content, knew it was fake, and felt no shame in sharing their delight publicly. Wow…

I’ve written before about our misconceptions regarding deepfake technology. For example, we’re told that most deepfakes target politicians, but the reality is that 96% of deepfakes are of a non-consensual sexual nature and 99% of them are of women. I’ve also talked about the legal vacuum regulating the use of this technology.

However, until now I hadn’t delved into the ecosystem underpinning the porn deepfakes: the industry and the viewers themselves. 

Let’s rectify this gap and get to know the key players.

Why is it so easy to access porn deepfakes?

We may be led to believe that porn deepfakes are hard to create or find.

False and false.

  • It takes less than 25 minutes and costs $0 to create a 60-second deepfake pornographic video. You only need one clear face image.
  • I can confirm that when searching on Google for “deepfakes porn,” the first hit was MrDeepFakes’ website — one of the most famous websites in the world of deepfake porn.

Moreover, the risk of hosting the content is minimal.

Section 230, enacted in 1996 as part of the US Communications Decency Act, was meant to protect platforms’ “Good Samaritan” blocking and screening of offensive material.

However, it has become an ally of porn deepfakes because it grants online platforms immunity from civil liability for third-party content: they are not responsible for the content they host, and they may voluntarily remove material in certain circumstances, e.g. material that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.

So whilst Section 230 does not protect platforms that create illegal or harmful content themselves, it exempts them from any responsibility for third-party content.

Who’s making money from porn deepfakes?

Many are profiting from this nascent industry: Creators, deepfake porn websites, software manufacturers, infrastructure providers, marketplaces, and payment processors.

Creators

They get revenue from two main sources:

Deepfake porn websites

Let’s have a look at three deepfake porn websites, each with a different business model.

MrDeepFakes

Some highlights of how this platform operates:

  • Videos are a few minutes long.
  • Generates revenue through advertisement.
  • Relies on the large audience that has been boosted by its positioning in Google search results.
  • Its forums act as a marketplace for creators, where clients can make requests.

Fan-Topia

Their business model:

  • It bills itself on Instagram as “the highest paying adult content creator platform.”
  • Paywalled.
  • Clients may be redirected from sites such as MrDeepFakes after clicking on the deepfake creators’ profiles. Once on Fan-Topia, they can pay for access to libraries of deepfake videos with their credit cards.

Pornhub

In 2018, the internet pornography giant Pornhub banned deepfake porn from their site. However, that’s not the whole truth:

  • When Pornhub removes deepfake porn videos from their site, they leave the inactive links as breadcrumbs that act as clickbait to drive traffic to the site.
  • Users can advertise the creation and monetisation of porn deepfakes on the site.
  • They advertise deepfakes through TrafficJunky, the advertising portal through which Pornhub makes all their ad revenue.
  • Pornhub provides a database of abusive content that facilitates the creation of porn deepfakes.

Software manufacturers

A couple of examples:

  • Stability AI has made their model Stable Diffusion — a deep learning, text-to-image model — open-source, so any developer can modify it for purposes such as creating porn deepfakes. And there are plenty of tips about how to use the models in forums where deepfake porn creators swarm.
  • Taylor Swift’s porn deepfake was created using Microsoft Designer, Microsoft’s graphic design app that leverages DALL-E 3 — another text-to-image model — to generate realistic images. Users found loopholes around the guardrails, which only blocked prompts that explicitly mentioned nudity or public figures.

Infrastructure providers

Repositories

GitHub is a Microsoft-owned developer platform that allows developers to create, store, manage, and share their code. It’s also:

  • One of the top 10 referral sites for MrDeepFakes.
  • A host of guides and hyperlinks to (a) sexual deepfake community forums dedicated to the creation, collaboration, and commodification of synthetic media technologies, and (b) AI-leveraged ‘nudifying’ websites and applications that take women’s images and “strip them” of clothing.
  • A repository of the source code of DeepFaceLab, the software used to create 95% of deepfakes, as well as of similar tools such as DeepNude and Unstable Diffusion.
  • A gateway for minors to deepfake source code and related content, given GitHub’s worldwide partnership program with schools and universities and its terms of service stating that users can be as young as 13.

Web hosting

According to a Bloomberg review, 13 of the top 20 deepfake websites are currently using web hosting services from Cloudflare Inc. Amazon.com Inc. provides web hosting services for three popular deepfaking tools listed on several websites, including Deepswap.ai.

Marketplaces

Etsy

As of December 2023, AI-generated pornographic images of at least 55 well-known celebrities were available for purchase on Etsy, an American e-commerce company focused on handmade or vintage items and craft supplies.

Moreover, a search for “deepfake porn” on the website returned about 1,500 results. Some of these results were porn, and others offered non-explicit services to “make your own deepfake video.”

App stores

Apple’s App Store and Google Play host apps that can be used to create deepfake porn. Some of them are available to anyone over 12.

Payment processors

  • On the Fan-Topia payment page, the logos for Visa and Mastercard appear alongside the fields where users can enter credit card information. The purchases are made through an internet payment service provider called Verotel, which is based in the Netherlands and advertises to what it calls “high-risk” webmasters running adult services.
  • The MakeNude.ai web app — which lets users “view any girl without clothing” in “just a single click” — has partnered with Ukraine-based Monobank and Dublin’s Beta Transfer Kassa which operates in “high-risk markets”.
  • Deepfake creators also use PayPal and crypto wallets to accept payments. Until Bloomberg reached out to Patreon last August, the platform supported payments for one of the largest nudifying tools, which was taking in over $12,500 per month.

Other enablers

Search engines

Between 50 and 80 percent of visitors to porn deepfake websites and creation tools arrive via search. For example, in July 2023, around 44% of visits to Mrdeepfakes.com came via Google.

NBC News searched the combination of a name and the word “deepfakes” with 36 popular female celebrities on Google and Bing. A review of the results found nonconsensual deepfake images and links to deepfake videos in the top Google results for 34 of those searches and the top Bing results for 35 of them. 

As for the victims, both Google’s and Microsoft’s content removal processes require people to manually submit the URLs.

Social media

More than 230 sexual deepfake ads using Emma Watson’s and Scarlett Johansson’s faces ran on Facebook and Instagram in March 2023. It took Meta two days to remove the ads once NBC News contacted them.

Users of X, formerly known as Twitter, regularly circulate deepfaked content. Whilst the platform has policies that prohibit manipulated media, between the first and second quarter of 2023, the number of tweets from eight hashtags associated with this content increased by 25% to 31,400 tweets.

Who’s watching porn deepfakes?

In their report “2023 State of Deepfakes”, Home Security Heroes state:

  • There were a total of 95,820 deepfake videos online in 2023.
  • The ten-leading dedicated deepfake porn sites had monthly traffic of 35 million in 2023.

What about the deepfake porn consumers?

They surveyed 1,522 American males who had viewed pornography at least once in the past six months. Some highlights:

  • 48% of respondents reported having viewed deepfake pornography at least once.
  • 74% of deepfake pornography users didn’t feel guilty about it. Top reasons they didn’t feel remorse? 36% didn’t know the person, 30% didn’t think it hurt anybody, 29% thought of it as a realistic version of imagination, and 28% thought that it’s not much different than regular porn.

That may lead us to believe that those “watchers” indeed felt porn deepfakes were innocuous. That’s until we learn that:

  • 73% of survey participants said they would want to report to the authorities if someone close to them became a victim of deepfake porn.
  • 68% indicated that they would feel shocked and outraged by the violation of someone’s privacy and consent in the creation of deepfake pornographic content.

In summary, non-consensual deepfakes are harmless until your mother or daughter is starring in them.

What’s next?

As with other forms of misogynistic behaviour — rape, gender violence, sexual discrimination — when we talk about deepfake pornography, we focus on the aftermath: the victims and the punishment.

What if we instead focused on the bottom of the pyramid —  the consumers?

  • Can we imagine a society where the deepfake porn videos from Taylor Swift would have had 0 views and no likes?
  • What will it take to raise boys who feel outrage — rather than unhealthy curiosity, lust, and desire for revenge — at the opportunity to watch and purchase deepfake porn?
  • How about believing that porn deepfakes are harmful even if they don’t portray your sister, mum, or wife?

As with physical goods, consumers have the power to transform the offer. Can we collectively lead the way towards a responsible digital future?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue, but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI, but you keep procrastinating on learning about it because you think you’re not “smart enough”?

I’ve got you covered.

Navigating the Digital Battlefield: Women and Deepfake Survival

The annual campaign 16 Days of Activism against Gender-Based Violence begins on 25 November, the International Day for the Elimination of Violence against Women, and runs through International Human Rights Day on 10 December.

Gender violence campaigns traditionally focus on physical violence: sexual harassment, rape, femicide, child marriage, or sex trafficking. The perpetrators? Partners, family members, human traffickers, soldiers, terrorists.

But that’s not all. You may be a victim of digital violence right now — in the comfort of your home.

Let’s talk about deepfakes.

The myth of political deepfakes

Deepfakes are images, audio, or video generated or manipulated using artificial intelligence technology to convincingly replace one person’s likeness with that of another.

When talking about deepfakes, most media refer to the threats they may pose to democracy. That was exemplified in the famous deepfake video of Obama in 2018, where he called Donald Trump a “total and complete dipshit”. Although that video was clearly false, it did show the potential of the technology to meddle in elections and spread disinformation.

Capitalism and deepfakes

In addition to the threat to political stability, the benefits and threats posed by deepfakes are often framed in a capitalistic context:

  • Art —  Artists use deepfake technology to generate new content from existing media created by them or by other artists.
  • Caller response services — Provide tailored answers to caller requests that involve simplified tasks (e.g. triaging and call forwarding to a human).
  • Customer support —  These services use deepfake audio to provide basic information such as an account balance.
  • Entertainment — Movies and video games clone actors’ voices and faces for convenience or even for humorous purposes.
  • Deception — Fabricating false evidence to inculpate — or exculpate — people in a lawsuit.
  • Fraud — Impersonating people to gain access to confidential information (e.g. credit cards) or to prompt people to act (e.g. impersonating a CEO and requesting a money transfer).
  • Stock manipulation — Deepfake content such as videos from CEOs announcing untrue news such as massive layoffs, new patents, or an imminent merger can have a massive impact on a company’s stock.

As a result of that financial focus, tech companies and governments have concentrated their efforts on assessing whether digital content is a deepfake or not. Hence the proliferation of tools aimed at “certifying” content provenance, as well as legal requirements in some countries to label deepfakes.

And many people share the same viewpoint. It’s not uncommon, when discussing deepfakes, for my interlocutors to dismiss their impact with remarks such as “It’s easy now to spot whether they’re fake”.

But the reality is that women bear the brunt of this technology.

Women and deepfakes

Deepfakes themselves were born in 2017 when a Reddit user of the same name posted manipulated porn clips on the site. The videos swapped the faces of female celebrities — Gal Gadot, Taylor Swift, Scarlett Johansson — onto porn performers. And from there they took off.

A 2019 study found that 96% of deepfakes are of a non-consensual sexual nature, and 99% of those depict women. As I mentioned in the article Misogyny’s New Clothes, they are a well-oiled misogyny tool:

  • They aim to silence and shame women, including women politicians: 42% of women parliamentarians worldwide have experienced extremely humiliating or sexually charged images of themselves spread through social media.
  • They objectify women by dismembering their bodies — faces, heads, bodies, arms, legs — without their permission and reassembling them as virtual Frankensteins. 
  • They may be the cheapest way to create pornography — you don’t need to pay actors and there are plenty of free tools available. Not willing to put the effort into learning how to create them yourself? You can order one for as low as $300.
  • They are the newest iteration of revenge porn — hate your colleagues? Tired of the women in your cohort ignoring you? Create deepfake videos of them from their LinkedIn profile photos and university face books, and plaster the internet with them.
  • They disempower victims — Unlike “older” misogyny tools, women cannot control the origin of deepfakes, how they spread, or how to eliminate them. Once they are created, women’s only recourse is to reach out directly to the platforms and websites hosting them and ask for removal.
  • As with all types of gendered violence, women are also shamed for being the target of deepfakes — they are blamed for sharing their photos on social media. I encourage you to read Adrija Bose’s excellent article that summarises her research work on the effect of deepfakes on female content creators.

What do we get wrong about deepfakes?

If 96% are non-consensual porn, why don’t we do anything about it?

  • We think they are not as harmful as “real” porn because the victim didn’t participate in them. What we miss is that we “see” the world with our minds, not with our eyes. If you want a taste of how that feels, watch the chilling 18-minute documentary My Blonde GF by The Guardian, in which the writer Helen Mort details her experience of being deepfaked for pornography.
  • Knowing that it’s fake is of little relief when you know that your family, friends, and colleagues have watched or could eventually watch them. Moreover, there is research proving that deepfake videos create false memories.
  • As we believe that “it’s not the real you, it’s fake”, victims receive little support from the justice system and governments in general. You can watch this 5-minute video from Youtuber and ASMR (Autonomous Sensory Meridian Response) artist Gibi who has been repeatedly targeted by deepfakes and who shares the very real consequences of this practice that is perfectly legal in most countries.

Speaking of governments, let’s check how countries regulate deepfakes.

Deepfakes and the law

In 2020, China made it a criminal offense to publish deepfakes or fake news without disclosure. Since January 2023:

“Companies have to get consent from individuals before making a deepfake of them, and they must authenticate users’ real identities.

The service providers must establish and improve rumor refutation mechanisms.

The deepfakes created can’t be used to engage in activities prohibited by laws and administrative regulations.

 Providers of deep synthesis services must add a signature or watermark to show the work is a synthetic one to avoid public confusion or misidentification.”

On Friday 8th December 2023, the European Parliament and the Council reached a political agreement on the Artificial Intelligence Act (AI Act), proposed by the Commission in April 2021. Although the full text is not available yet, the Commission published an announcement where deepfakes are categorised as specific transparency risks:

“Deep fakes and other AI generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.”
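In practice, that requirement for machine-readable marking is typically met by attaching provenance metadata to the file. As a purely illustrative sketch (the AI Act does not prescribe a format, and the field names below are hypothetical, loosely inspired by content-credential schemes such as C2PA), a minimal label might look like this:

```python
import hashlib
import json


def provenance_manifest(content: bytes, generator: str) -> str:
    """Build a minimal machine-readable provenance label for a media file.

    Illustrative only: these field names are hypothetical, not an
    official schema mandated by the AI Act or any standard.
    """
    manifest = {
        # The disclosure the AI Act requires: the content is synthetic.
        "ai_generated": True,
        # The tool that produced the content.
        "generator": generator,
        # A hash binds the label to the exact file it describes.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, indent=2)


print(provenance_manifest(b"<image bytes would go here>", generator="example-model"))
```

A verifier could then recompute the file’s hash and compare it against `content_sha256` to detect a label that has been swapped onto different content; real schemes such as C2PA additionally sign the manifest so it cannot be stripped or forged silently.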

In the US, there are no federal regulations on deepfakes. Some states like California, New York, and Virginia have passed laws targeting deepfake pornography.

What about the UK? In September 2023, the Online Safety Bill, which criminalises sharing deepfake porn, was signed off by the Houses of Parliament. The offence is punishable by up to six months in prison, rising to two years if intent to cause distress, alarm, or humiliation, or to obtain sexual gratification, can be proved. Note that the bill doesn’t criminalise the creation of deepfakes, only sharing them.

For further details about global deepfake regulation approaches, including countries such as Canada and South Korea, check this article from the Artificial Intelligence Institute.

Call to action

The remedy of our patriarchal society against physical violence towards women has been to encourage them to self-suppress their rights so that the perpetrators can roam free. 

For example, we tell women that to avoid becoming victims of violence they should stay at home at night, avoid dark places, and not wear miniskirts. Those who fail to do so and are harmed are met with remarks such as “She was asking for it”.

I hope you’re not expecting me to exhort women to disappear from Instagram, get rid of their profile photos on LinkedIn, or stop publishing videos on TikTok. Quite the opposite. It’s not for us to hide from deepfake predators; it’s for platforms and regulators to do their job.

My call to action to you is threefold:

1.- Take space: Let’s not allow this technology to make us invisible on social media — hiding has never challenged the status quo; it’s merely a survival mechanism. If we hide now because we’re afraid of deepfakes, we’ll never be safe on the internet again.

2.- Amplify: Educate others about the risks and challenges of deepfakes, as well as how to get support when deepfaked for pornography.

3.- Demand action: Lobby to make platforms, software development companies, and governments accountable for making us safe from non-consensual sexual deepfakes.

BACK TO YOU: What’s your take on deepfakes? Should they be fully banned? Do you believe the benefits outweigh the risks?
