Navigating the Digital Battlefield: Women and Deepfake Survival

The annual campaign 16 Days of Activism against Gender-Based Violence begins on 25 November, the International Day for the Elimination of Violence against Women, and runs through International Human Rights Day on 10 December.

Gender violence campaigns traditionally focus on physical violence: sexual harassment, rape, femicide, child marriage, or sex trafficking. The perpetrators? Partners, family members, human traffickers, soldiers, terrorists.

But that’s not all. You may be a victim of digital violence right now — in the comfort of your home.

Let’s talk about deepfakes.

The myth of political deepfakes

Deepfakes are images, audio, or video generated or manipulated using artificial intelligence technology to convincingly replace one person’s likeness with that of another.

When talking about deepfakes, most media refer to the threats they may pose to democracy. That was exemplified by the famous 2018 deepfake video of Obama, in which he appears to call Donald Trump a “total and complete dipshit”. Although that video was clearly fake, it showed the technology’s potential to meddle in elections and spread disinformation.

Capitalism and deepfakes

In addition to the threat to political stability, the benefits and threats posed by deepfakes are often framed in a capitalist context:

  • Art — Artists use deepfake technology to generate new content from existing media, whether their own or other artists’.
  • Caller response services — Synthetic voices provide tailored answers to caller requests involving simple tasks (e.g. triaging and forwarding a call to a human).
  • Customer support — These services use deepfake audio to provide basic information such as an account balance.
  • Entertainment — Movies and video games clone actors’ voices and faces for convenience or even for humorous purposes.
  • Deception — Fabricating false evidence to inculpate — or exculpate — people in a lawsuit.
  • Fraud — Impersonating people to gain access to confidential information (e.g. credit card details) or to prompt them to act (e.g. impersonating a CEO to request a money transfer).
  • Stock manipulation — Deepfake content, such as videos of CEOs announcing untrue news like massive layoffs, new patents, or an imminent merger, can have a massive impact on a company’s stock price.

As a result of that financial focus, tech companies and governments have concentrated their efforts on determining whether digital content is a deepfake or not. Hence the proliferation of tools aimed at “certifying” content provenance, as well as legal requirements in some countries to label deepfakes.
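To make the idea of provenance certification more concrete, here is a minimal sketch of how a publisher could cryptographically sign a piece of content and how anyone could later check that it hasn’t been altered. This is an illustration only, using the Python cryptography package and a hypothetical file name; it is not the workflow of any particular certification tool:

```python
# Minimal sketch of content-provenance signing, for illustration only.
# Real provenance schemes (e.g. "content credentials") embed signed manifests
# in the file itself; here we simply sign the raw bytes of a hypothetical image.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair and signs the content they release.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

with open("photo.jpg", "rb") as f:  # hypothetical file name
    content = f.read()

signature = publisher_key.sign(content)

# Anyone holding the public key can later check whether the bytes changed.
try:
    public_key.verify(signature, content)
    print("Provenance check passed: content matches what the publisher signed.")
except InvalidSignature:
    print("Provenance check failed: content was modified after signing.")
```

Note that a signature like this only proves the file hasn’t changed since it was signed; it says nothing about whether the content was synthetically generated in the first place, which is one reason labelling requirements exist alongside provenance tools.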

And many people share the same viewpoint. It’s not uncommon that, when discussing deepfakes, my interlocutors dismiss their impact with remarks such as “It’s easy now to spot if they’re fake or not”.

But the reality is that women bear the brunt of this technology.

Women and deepfakes

Deepfakes themselves were born in 2017, when a Reddit user going by that name posted manipulated porn clips on the site. The videos swapped the faces of female celebrities — Gal Gadot, Taylor Swift, Scarlett Johansson — onto porn performers. And from there they took off.

A 2019 study found that 96% of deepfakes are of a non-consensual sexual nature, and that 99% of those depict women. As I mentioned in the article Misogyny’s New Clothes, they are a well-oiled misogyny tool:

  • They aim to silence and shame women, including women politicians: 42% of women parliamentarians worldwide have had extremely humiliating or sexually charged images of themselves spread through social media.
  • They objectify women by dismembering their bodies — faces, heads, bodies, arms, legs — without their permission and reassembling them as virtual Frankensteins. 
  • They may be the cheapest way to create pornography — you don’t need to pay actors, and there are plenty of free tools available. Not willing to put in the effort to learn how to create them yourself? You can order one for as little as $300.
  • They are the newest iteration of revenge porn — hate your colleagues? Tired of the women in your cohort ignoring you? Create deepfake videos of them from their LinkedIn profile photos and university facebooks, and plaster the internet with them.
  • They disempower victims — Unlike “older” misogyny tools, women cannot control the origin of deepfakes, how they spread, or how to eliminate them. Once they are created, women’s only recourse is to reach out directly to the platforms and websites hosting them and ask for removal.
  • As with all types of gendered violence, women are also shamed for being the target of deepfakes — they are blamed for sharing their photos on social media. I encourage you to read Adrija Bose’s excellent article that summarises her research work on the effect of deepfakes on female content creators.

What do we get wrong about deepfakes?

If 96% are non-consensual porn, why don’t we do anything about it?

  • We think they are not as harmful as “real” porn because the victim didn’t participate in them. What we miss is that we “see” the world with our minds, not with our eyes. If you want a taste of how that feels, watch the chilling 18-minute documentary My Blonde GF by The Guardian, in which the writer Helen Mort details her experience of being deepfaked for pornography.
  • Knowing that it’s fake is of little comfort when you know that your family, friends, and colleagues have watched, or could eventually watch, the videos. Moreover, there is research showing that deepfake videos create false memories.
  • Because we believe “it’s not the real you, it’s fake”, victims receive little support from the justice system and governments in general. You can watch this 5-minute video by YouTuber and ASMR (autonomous sensory meridian response) artist Gibi, who has been repeatedly targeted by deepfakes and shares the very real consequences of a practice that is perfectly legal in most countries.

Speaking of governments, let’s look at how countries regulate deepfakes.

Deepfakes and the law

In 2020, China made it a criminal offence to publish deepfakes or fake news without disclosure. Since January 2023, additional rules apply:

“Companies have to get consent from individuals before making a deepfake of them, and they must authenticate users’ real identities.

The service providers must establish and improve rumor refutation mechanisms.

The deepfakes created can’t be used to engage in activities prohibited by laws and administrative regulations.

Providers of deep synthesis services must add a signature or watermark to show the work is a synthetic one to avoid public confusion or misidentification.”

On Friday 8 December 2023, the European Parliament and the Council reached a political agreement on the Artificial Intelligence Act (AI Act), proposed by the Commission in April 2021. Although the full text is not yet available, the Commission published an announcement in which deepfakes are categorised as a specific transparency risk:

“Deep fakes and other AI generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.”
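To give a sense of what “marked in a machine-readable format” could look like in practice, here is a minimal sketch that writes and reads a synthetic-media label in a PNG file’s metadata using the Pillow library. The key names and file are hypothetical; the AI Act does not prescribe this specific mechanism:

```python
# Minimal sketch: embedding and reading a machine-readable "synthetic" label
# in PNG metadata with Pillow. An illustrative assumption, not the marking
# scheme mandated by the AI Act or used by any specific provider.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A generator could attach a label when saving its output image.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")         # hypothetical key
metadata.add_text("generator", "example-model")   # hypothetical key

image = Image.new("RGB", (256, 256))              # placeholder synthetic image
image.save("synthetic.png", pnginfo=metadata)

# A platform or checker could then read the label back.
with Image.open("synthetic.png") as img:
    if img.text.get("ai_generated") == "true":
        print("This image declares itself as AI-generated.")
```

Metadata like this can be stripped trivially, which is why the announcement also requires synthetic content to be detectable as artificially generated, and why watermarking approaches aim to embed the mark in the content itself.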

In the US, there are no federal regulations on deepfakes. Some states like California, New York, and Virginia have passed laws targeting deepfake pornography.

What about the UK? In September 2023, the Online Safety Bill was signed off by the Houses of Parliament; it criminalises sharing deepfake porn. The offence is punishable by up to six months in prison, rising to two years if intent to cause distress, alarm, or humiliation, or to obtain sexual gratification, can be proved. Note that the bill doesn’t criminalise the creation of deepfakes, only sharing them.

For further details about global approaches to deepfake regulation, including countries such as Canada and South Korea, check this article from the Artificial Intelligence Institute.

Call to action

Our patriarchal society’s remedy for physical violence against women has been to encourage them to curtail their own rights so that the perpetrators can roam free.

For example, we tell women that to avoid becoming victims of violence they should stay at home at night, avoid dark places, or not wear miniskirts. If they fail to do so and are harmed, they are met with remarks such as “She was asking for it”.

I hope you’re not expecting me to exhort women to disappear from Instagram, get rid of their profile photos on LinkedIn, or stop publishing videos on TikTok. Quite the opposite. It’s not for us to hide from deepfake predators; it’s for platforms and regulators to do their job.

My call to action to you is threefold:

1.- Take space: Let’s not allow this technology to make us invisible on social media. Hiding is a survival mechanism, but it has never challenged the status quo. If we hide now because we’re afraid of deepfakes, we’ll never be safe on the internet again.

2.- Amplify: Educate others about the risks and challenges of deepfakes, as well as how to get support when deepfaked for pornography.

3.- Demand action: Lobby to make platforms, software development companies, and governments accountable for keeping us safe from non-consensual sexual deepfakes.

BACK TO YOU: What’s your take on deepfakes? Should they be fully banned? Do you believe the benefits outweigh the risks?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?

I’ve got you covered.
