Tech Bros, Big Platforms, and Poor Regulation: Who Enables Deepfake Porn?

Recently, I delivered the keynote Techno-patriarchy: How deepfakes are misogyny’s new clothes and what we can do about it at the Manchester Tech Festival. Putting together the presentation prompted me to reflect on my advocacy journey on what is popularly referred to as “deepfake porn.”

In 2023, I had had enough of hearing tech bros blaming unconscious bias for all the ways in which AI was weaponised against women. Determined to demonstrate intent, I wrote Techno-Patriarchy: How AI is Misogyny’s New Clothes, originally published in The Mint.

In the article, I detailed 12 ways this technology is used against women, from reinforcing stereotypes to pregnancy surveillance. One shocked me to my core: Non-consensual sexual synthetic imagery (aka “deepfake porn”).

Why? Because, whilst the media warned us about the dangers of deepfakes as scam and political unrest tools, the reality is that non-consensual sexual synthetic imagery constitutes 96% of all deepfakes found online, with 99.9% depicting women. And their effects are devastating.

Judge for yourself:

It was completely horrifying, dehumanizing, degrading, violating to just see yourself being misrepresented and being misappropriated in that way.

It robs you of opportunities, and it robs you of your career, and your hopes and your dreams.

Noelle Martin, “deepfake porn” victim, award-winning activist, and law reform campaigner.

So I continued to write about the dire consequences of this technology for victims and about the legal vacuum, and to denounce the powerful ecosystem (tech, payment processors, marketplaces) that fosters and profits from it.

I also made a point of raising awareness of how this technology is harming women and girls in spaces where the topic of “deepfakes” was explored broadly. I organised events, appeared on podcasts, and participated in panels, such as “The Rise of Deepfake AI” at the University of Oxford; all opportunities were fair game to bring “deepfake porn” to the forefront.

This week, I had 30 minutes to convince over 80 women in tech – and allies – to become advocates against non-consensual sexual synthetic imagery. The feedback I received from the keynote was very positive, so I’m sharing my talking points with you below.

I hope that by the end of the article, (a) you are convinced that we need to act now, and (b) you have decided how you will help to advocate against this pandemic.



The State of Play

All that’s wrong with using the term “deepfake porn”

I had an aha moment when I realised the disservice the term “deepfake porn” was doing to addressing this issue.

“Deepfake” honours the username of the Reddit user who first shared synthetic intimate media of actresses on the platform. When paired with the label “porn”, it may wrongly convey the idea that the material is consensual. Overall, the term lacks gravitas and downplays the harm done to victims.

From a legal perspective, the use of the term “deepfake” may also hinder the pursuit of justice. There have been cases where filing a lawsuit using the term deepfakes when referring to a “cheapfake” — which consists of a fake piece of media created with conventional methods of doctoring images rather than AI — has blocked prosecution.

What’s more, it leads to confusion with other misogynistic digital crimes like “revenge porn”, which is legally very different, as it refers to the disclosure of unedited non-consensual sexual media of another person.

What’s better? Leading scholars in the field prefer the term non-consensual synthetic intimate imagery, abbreviated as “NCSII.”


It’s not the technology, stupid — it’s the people

The technology may be new, but the crime of taking parts of women’s anatomy from pictures and using them to create new images of naked women is very old.

In 1888, one of New York’s professional photographers, 30-year-old Le Grange Brown, was accused of showing and selling photographs of “undraped women” in local saloons. Brown had taken portraits of hundreds of young high society ladies during various social events and then cut and pasted their heads to images of naked women.

As a result, in 1888, a bill to “protect ladies” was introduced in the US House of Representatives to prohibit the use of portraits of “females” for advertising and trade purposes without their consent in writing.

Unsurprisingly, the bill didn’t pass at the time. However, as in many other instances, laws and regulations created “for women” end up benefiting everybody.

In 1903, as a result of another case involving the unauthorised use of a woman’s photograph, the New York Legislature enacted the first right to privacy in the US and across the common law world, including Australia and the United Kingdom.

In summary, NCSII is only the latest iteration of an old crime against women.

A young woman puts a hand in front of the camera to avoid her picture being taken.
Photo by Steven Rector on Unsplash

It’s NOT a lack of empathy

Some argue that creators and consumers of NCSII lack empathy. It’s a cop-out.

In their report “2023 State of Deepfakes”, Home Security Heroes surveyed 1,522 American males who had viewed pornography at least once in the past six months.

74% of deepfake pornography users didn’t feel guilty about it. Top reasons they didn’t feel remorse?

  • 36% didn’t know the person
  • 30% didn’t think it hurt anybody
  • 29% thought of it as a realistic version of imagination
  • 28% thought that it’s not much different from regular porn

That may lead us to believe that those “watchers” did indeed consider deepfake porn innocuous. That is, until we learn that:

  • 73% of survey participants would want to report to the authorities if someone close to them became a victim of deepfake porn.
  • 68% indicated that they would feel shocked and outraged by the violation of someone’s privacy and consent in the creation of deepfake pornographic content.

Moreover, consumers and creators do have empathy for one another. There are forums dedicated to reporting other users for stealing NCSII video content and re-posting it without credit. A couple of comments from one of those “communities”:

“I have been a long-time subscriber to many creators, and it disgusts me to have their hard work been stolen and re-sold like this.”

“Obviously they don’t have consent for the distribution, they are just MOFOs who want to earn with the work of others.”

In summary, NCSII are fun until your mother and daughter are starring in them. Or somebody “steals” the credit from you.


It’s NOT harmless

The damage done to women victims of non-consensual sexual AI imagery has been consistently ignored.

Society wants to believe that they are not as harmful as “real” images because the victim didn’t participate in them. What we miss is that we “see” the world with our minds, not with our eyes. If you want to have a taste of how that feels, you can watch the chilling 18-minute documentary My Blonde GF by The Guardian, where the writer Helen Mort details her experience of being deepfaked for pornography.

As we believe that “it’s not the real you, it’s fake”, victims receive little support from the justice system and governments in general. You can watch this 5-minute video from YouTuber and ASMR (Autonomous Sensory Meridian Response) artist Gibi, who has been repeatedly targeted by deepfakes and shares the very real consequences of this practice, which is perfectly legal in most countries.

Even female politicians who have experienced a myriad of online abuse describe how different NCSII are. The US congresswoman Alexandria Ocasio-Cortez has shared the uncanny experience:

“There’s a shock to seeing images of yourself that someone could think are real. […] As a survivor of physical sexual assault, it adds a level of dysregulation. […] It resurfaces trauma.”

“And once you’ve seen it, you’ve seen it. […] It parallels the same exact intention of physical rape and sexual assault, [which] is about power, domination, and humiliation. Deepfakes are absolutely a way of digitizing violent humiliation against other people.”

And then, there are instances where NCSII videos have been paired with doxing, exposing the victims to demands that they prostitute themselves or even to a lynch mob, as in the case of Indian journalist Rana Ayyub. Her experience completely changed her behaviour:

From the day the video was published, I have not been the same person. I used to be very opinionated, now I’m much more cautious about what I post online. I’ve self-censored quite a bit out of necessity.

[…] I’m constantly thinking what if someone does something to me again. I’m someone who is very outspoken so to go from that to this person has been a big change.

However, the police couldn’t care less about her safety:

There were about six men in the police station, they started watching the video in front of me. You could see the smirks on their faces.

They asked me where I was when I had first seen it. When I told them I had seen it at a cafe, they told me to go to the police station nearest to the cafe and file the complaint from there.

[…] Finally, after my lawyer told them we would go to the media, they filed the report. That was in April. More than six months later, I haven’t heard a thing from the police.

A pixelated selfie of a person with long brown hair and a fringe, tongue out and smiling. Red and yellow squares highlight facial features with recognition labels and percentages, such as dark hair (87%), right eye (91%), left eye (94%), ear (27%), and mouth-smiling (90%).
Elise Racine / Emotion: Joy / Licensed by CC-BY 4.0

It’s about patriarchy

Many people want us to consider NCSII a sort of “democratisation of porn”. It’s not.

It’s techno-patriarchy.

It’s about objectifying women by dismembering their bodies — faces, heads, torsos, arms, legs — without their permission and reassembling them as virtual Frankensteins for the creators’ and watchers’ satisfaction.

It’s about silencing and shaming women by using nudifying apps and websites that digitally remove clothes from their images. For example, 42% of women parliamentarians worldwide have had extremely humiliating or sexually charged images of themselves spread through social media.

And it’s about imposing creators’ “standards” on women’s bodies by using generative AI apps to cover them up — as if they were paper dolls — under the pretence that they are “dignifying” them.

Quoting Alexandria Ocasio-Cortez again:

“Because this technology threatens to do it at scale — this is about class subjugation. It’s a subjugation of entire people.

And then when you do intersect that with abortion, when you do intersect that with debates over bodily autonomy, when you are able to actively subjugate all women in society on a scale of millions, at once digitally, it’s a direct connection [with] taking their rights away.”

In summary, NCSII is another misogynistic tool used by the patriarchy to control women.


Disclosure is NOT the fix

Lawmakers, regulators, and tech want us to believe that the solution is about detecting “deepfakes” and forcing disclosure about the authenticity of images.

For example, Chapter IV (article 50) of the European Union Artificial Intelligence Act (EU AI Act) only imposes disclosure of “deep fakes”. Of course, provided that it doesn’t hinder “enjoyment” of the work (WTF?!)

4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence.

Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

But let’s be honest, knowing that it’s fake is of little relief when you know that your family, friends, and work colleagues have watched NCSII made from your image or could eventually watch them, especially since research has shown that “deepfake” videos create false memories.

Recently, I was a panellist at the University of Oxford on the topic of “deepfakes”. One of the attendees — a young male academic — told me that we “just” have to get used to others creating NCSII from us and then they’ll become harmless.

I replied to him with a thought experiment.

Imagine that somebody creates NCSII with your likeness and every time anyone types your name in Google, the first hits are those videos.

Let’s assume that the University of Oxford believes you when you assure them that the videos are “fakes.” Even better, there is a small mark on them that says “fake.” Do you still believe that they’ll offer you a job?

His face told me that I’d made my point.

And this is not only theoretical.

Unfortunately, I know young women in the UK who have been victims of NCSII for whom the only way to try to rebuild their lives has been to change their legal name.

Because it’s not “just” about disclosure.


Tech wants us to believe NCSII is “solved”

Big Tech wants to convince us that NCSII is no longer a problem.

In 2024, I attended TEDxManchester. The main reason I bought the ticket was that one of the speakers was Blaise Agüera y Arcas, VP and Fellow at Google Research, who is the co-author of an excellent article about the misuse of AI against minorities, Physiognomy’s New Clothes.

Whilst Blaise’s TEDx talk focused on his new book, the highlight of the day for me was a 30-minute lunchtime fireside chat offered to attendees.

During that session, I asked him:

How can society effectively stop the epidemic of deepfake porn videos, which constitute 96% of deepfakes, 99% of which target women?

He answered that it was already regulated and called it “revenge porn.” He also shared his concerns about the timing of regulation and implied that it may be ineffective by reminding us that the internet was too heavily regulated in the 90s and still nobody could have predicted the evolution of social media.

In summary, we have a recognised expert in technology and, more specifically, the weaponisation of AI, brushing off concerns about NCSII. Who do you think the audience believed?

And Big Tech is great at making NCSII somebody else’s problem, too. For example, in 2024, Microsoft published a report calling on lawmakers and policymakers to (1) promote content authenticity, (2) detect and respond to abusive deepfakes, and (3) give the public the tools to learn about synthetic AI harms.

Unfortunately, that hasn’t prevented Big Tech from pushing back against directives that would limit their ability to develop AI tools as they see fit (invoking concerns about free speech and the ability to innovate), from shaping AI standards that suit their agenda, or from lobbying government organisations to water down regulations.


Authorities bury their heads in the sand

Unfortunately, key stakeholders still believe that “ignorance is bliss” when talking about NCSII.

For example, despite highly publicised instances of boys as young as twelve creating and distributing NCSII of girls, authorities tend to absolve children, as in this statement from an Australian government website:

Some young people use these tools as a prank or experiment without fully understanding the impact. They may not realise that creating or sharing fake nudes, even synthetic ones, can be a serious criminal offence in some states and territories.

There is a general lack of knowledge about this technology in schools and, as a consequence, a lack of preparation for NCSII-related incidents. The result is that schools mostly try to hide those events rather than collaborating to uncover the perpetrators and adequately support the victims.

As for the police, it’s very disappointing how little interest law enforcement bodies appear to have in tackling NCSII.

For example, the Europol report Law enforcement and the challenge of deepfakes, which is 22 pages long, only mentions NCSII twice:

In a previous September 2019 study, Sensity discovered that 96 % of the fake videos involved non-consensual pornography. To create this, one will overlay a victim’s face onto the body of a pornography actor, making it appear that the victim is engaging in the act. In many situations, the victims of pornographic deepfakes are celebrities or high-profile individuals. These videos are popular, having received approximately 134 million views at the time, and there are several pornographic sites that specifically produce pornographic celebrity deepfakes. Perpetrators often act anonymously, making crime attribution more difficult.

and

There are special marketplaces on which users or potential buyers can post requests for deepfake videos (for example, requests for non-consensual pornography).

Their solution? Unsurprisingly, as per the report, Europol appears to be happy — and confident — to see its role confined to uncovering whether content is fake:

Law enforcement has always had to deal with fake evidence and therefore is in a good position to adapt to the presence of deepfakes.

How can we tackle NCSII when the pillars that are supposed to lead the charge on the battlefield don’t take ownership of their role in stopping this pandemic against women and girls?

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.
Kathryn Conrad / Datafication / Licensed by CC-BY 4.0

Targeting creators is not nearly enough

Most laws and regulations regarding NCSII primarily target individual creators and/or the people who share them. For example, the UK Online Safety Act 2023 states:

(3) A person (A) commits an offence if —

(a) A intentionally shares a photograph or film which shows, or appears to show, another person (B) in an intimate state,

(b) A does so for the purpose of A or another person obtaining sexual gratification,

(c) B does not consent to the sharing of the photograph or film, and

(d) A does not reasonably believe that B consents.

However, the power of the damage caused by NCSII lies not only in the speed and simplicity of their creation (it can take as little as 60 seconds) but, above all, in the virality of distribution supported by a well-oiled tech ecosystem that profits from AI-generated intimate imagery:

  • Software manufacturers like Microsoft and OpenAI have apps that enable the creation of this kind of synthetic imagery.
  • Providers such as GitHub and AWS host code to produce NCSII.
  • Payment processors like Mastercard, Visa, and PayPal take payments from websites dedicated to the commercialisation of NCSII.
  • Marketplaces like Etsy and app stores make it easy to buy and sell NCSII and access the apps to create them.
  • Search engines and social media facilitate finding and sharing them. In fact, between 50 and 80 percent of people looking for NCSII find their way to the websites and tools to create the videos or images via search engines.
  • Some nudifying websites utilise sign-in infrastructure from tech companies such as Google, Apple, Discord, Twitter, Patreon, and Line to enable people to quickly sign up, while also providing a veneer of credibility.

So why are those big tech and payment processor companies not targeted by regulators?

Section 230, which passed in 1996, is a part of the US Communications Decency Act. It was meant to serve as protection for private blocking and screening of offensive material.

However, this piece of legislation has become an ally of NCSII, as it provides online platforms with immunity from civil liability for third-party content — they are not responsible for the content they host. They can remove it in certain circumstances, e.g. material that the provider or user considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.

So whilst Section 230 does not protect platforms that create illegal or harmful content, it exempts them from any responsibility for third-party content.

So, in practice, what do you do when you Google your name and the first hits are NCSII? Of course, you can try to convince Google to remove them if they fit the category “involuntary fake pornography”, as one can only report “sexual abuse imagery” if a child is the victim. But it’s one video, one picture at a time.

It’s an individual victim fighting against the power of a misogynist ecosystem profiting from those using AI to automate and scale harm to women and girls.


What Can We Do Now?

Shame should change sides

Gisèle Pelicot, the French woman who became a feminist icon for requesting that the rape trial of her ex-husband and 50 other men be held in public, said:

“I wanted all woman victims of rape — not just when they have been drugged, rape exists at all levels — I want those women to say: Mrs Pelicot did it, we can do it too. When you’re raped there is shame, and it’s not for us to have shame, it’s for them.”

NCSII is digital rape. Our bodies are used without our permission for somebody else’s enjoyment and profit.

And as with rape, we need to stop putting the onus on victims and instead shame perpetrators, watchers, and the ecosystem that sustains them.

Because when we tell women and girls to avoid sharing their images so they cannot be used for NCSII or we state that it’s “up to them” to request the removal of NCSII created from their likeness, we’re effectively leveraging their shame to effect change.

So how do we make shame change sides?

  • Stop exonerating NCSII’s perpetrators and consumers on the basis of “lack of empathy” or “ignorance about consent.”
  • Stop exculpating boys who create NCSII of girls under the excuse that they don’t understand the extent of the harm, after they have proactively shared the images on the internet.
  • Stop absolving Big Tech, digital marketplaces, and payment processors of their crucial role in making NCSII the lethal weapon they are.

Own your responsibility

After reading this article, you have the civic responsibility to do something about it:

Educate yourself: Google the laws and regulations governing NCSII in your region. Check the impact of NCSII locally and the ongoing initiatives to make the problem visible and to mitigate it.

Educate your network: It can be as simple as sharing this article with them.

Also, when the topic comes up in conversation, you can

  • Clarify why the term “deepfake porn” doesn’t do justice to the problem and offer alternatives such as NCSII.
  • Demystify the trope that the most urgent “deepfake” issues are scams or fake news, highlighting that 96 percent of them are of a non-consensual sexual nature.
  • Push back on people claiming that “disclosure” is a fix for NCSII. The damage caused by this technology is profound and lifelong. Once one NCSII of you is on the internet, you can never be sure it won’t resurface.

Bring the discussion on NCSII into your sphere of influence: A US survey of students, teachers, and parents showed that 40 percent of students were aware of a deepfake associated with someone they know at school, compared with 29 percent of teachers and 17 percent of parents.

Ask your kids’ schools about how they educate children and teachers about NCSII and what their NCSII incident response plan is.

If at a loss, the website https://enddeepfakes.org/ compiles tools and resources to help US schools mitigate and respond to NCSII incidents. In the UK, the charity SWGfL, which works towards ensuring everyone can benefit from technology free from harm, offers free resources on support and advice for schools about synthetic media.

Also check the guidelines for NCSII incidents at your university, workplace, and place of worship.

The image features a grid of four depictions of a baby, each overlaid with digital distortions and glitches. These distortions symbolise the fragility of data and privacy in the context of nonconsensual data breaches. The glitch effects include fragmented pixels, colour shifts, and digital artefacts, emphasising the disruption and loss of control over personal information.
Zeina Saleem & Archival Images of AI + AIxDESIGN / Distortion Series / Licensed by CC-BY 4.0

Advocate for regulating the entire NCSII ecosystem

It may sound contradictory, given that I mentioned above that supra-national organisations, such as the European Union, have failed to address NCSII in their regulations. However, there is hope.

As I mentioned above, many US states and countries now have laws covering NCSII. They are mostly imperfect — like the UK Online Safety Act 2023 mentioned above, which doesn’t cover the creation of NCSII — but as “done is better than perfect”, they create the basis for further refinements.

For example, there is currently a bill in the UK House of Commons to create offences relating to the creation of, or solicitation to create, a non-consensual digitally produced sexually explicit photograph or film, mitigating the shortcomings of the Online Safety Act 2023 mentioned above.

Sadly, it has taken NCSII hitting children on a massive scale for governments to target the whole ecosystem and not only creators. A case in point is the TAKE IT DOWN Act (US), which entered into force last May.

The bill was born out of the suffering — and then activism — of a handful of teenage victims of NCSII who, when they sought to remove the images and seek punishment for those who had created them, found that both social media platforms and their school boards reacted with silence or indifference.

In addition to making it illegal to share non-consensual, explicit images online — real or computer-generated — the Act also requires tech platforms to remove such images within 48 hours of being notified about them.

“This legislation finally compels social media bros to do their jobs and protect women from highly intimate and invasive breaches of their rights. […] While no legislation is a silver bullet, the status quo — where young women face horrific harms online — is unacceptable.”

Imran Ahmed, CEO of the non-profit Center for Countering Digital Hate

And the results are starting to show.


Victory is Possible

Patriarchy has been trying to wear women down for millennia.

No rights have been given to us by default. We have had to fight country by country for the right to vote, to divorce, to use contraception, to abort, to open a bank account, to have an education, and for equal pay.

In these current times, when those hard-earned rights appear to be in danger and we realise that we still have to continue defending them in many parts of the world, it’s understandable that we think that, whilst important, NCSII is not as urgent as our right to abortion or to education, and it can wait.

I disagree. Non-consensual synthetic intimate imagery is both important and urgent.

First, because the root cause of the problem is that society and governments have given tech bros carte blanche to develop, release, and monetise applications as they see fit in the name of innovation, progress, and profit, with complete disregard for the body count. We have regulations for drugs, foods, and transportation. NCSII is the perfect example of why we must regulate tech without delay.

Second, because NCSII is gender violence aiming to silence, control, and shame women with no expiration date. Failure to act now is mortgaging our future and that of our mothers, sisters, daughters, and granddaughters.

If you have any doubt left about the urgency of acting against NCSII, consider that last week Sam Altman and OpenAI launched Sora 2, their “latest video generation model”, which, they say, “is more physically accurate, realistic, and controllable than prior systems. It also features synchronized dialogue and sound effects.”

When we go to the webpage “Launching Sora responsibly”, we are reassured with the same platitudes: watermarks, some guardrails the company sets for itself, usage policies forbidding its use for non-consensual intimate content, and, of course, the mantra that “we’re all learning”, no matter at whose expense.

Distinguishing AI content. Every video generated with Sora includes both visible and invisible provenance signals. At launch, all outputs carry a visible watermark.

Filtering harmful content. Sora uses layered defenses to keep the feed safe while leaving room for creativity. At creation, guardrails seek to block unsafe content before it’s made — including sexual material, terrorist propaganda, and self-harm promotion — by checking both prompts and outputs across multiple video frames and audio transcripts. We’ve red teamed to explore novel risks, and we’ve tightened policies relative to image generation given Sora’s greater realism and the addition of motion and audio. Beyond generation, automated systems scan all feed content against our Global Usage Policies⁠ and filter out unsafe or age-inappropriate material. These systems are continuously updated as we learn about new risks and are complemented by human review focused on the highest-impact harms.

What could go wrong?


We have won tougher wars

I refuse to end this article either by claiming that NCSII is yet another hardship we should bear or by offering the false comfort that things “will get better”.

Instead, I want us to be hopeful that we can put an end to the scourge of non-consensual intimate imagery because we can work towards defeating it.

We have won tougher wars with far fewer resources.

We can win this one, too.

“Hope is often misunderstood. People tend to think that it is simply passive wishful thinking: I hope something will happen but I’m not going to do anything about it.

This is indeed the opposite of real hope, which requires action and engagement.”

Jane Goodall, The Book of Hope: A Survival Guide for Trying Times



How does this article resonate with you?