Techno-Patriarchy: How AI is Misogyny’s New Clothes

In discussions around gender bias in artificial intelligence (AI), intentionality is left out of the conversation.

We talk about discriminatory datasets and algorithms but avoid mentioning that humans — software developers — select those datasets and code those algorithms. Any attempts to demand accountability are crushed under exculpating narratives such as programmers’ “unconscious bias” or the “unavoidable” opacity of AI tools, often referred to as “black boxes”.

Moreover, the media has played a vital role in infantilising tech bros as a means of absolving them of any harm. They are often portrayed as naughty young prodigies unaware of the unintended consequences of the tools they develop rather than as astute executives who have had notorious run-ins with the law over data breaches, antitrust violations, or discrimination at work. There is, however, nothing unintentional or fortuitous about any of it.

Patriarchy is much older than capitalism; hence, it has shaped our beliefs about who has purchasing power and how they use it. Patriarchy wants us to believe that women don’t have money or power, and that if they do, they’ll spend it on make-up and babies and put up with services and products designed for men. Moreover, that women are expendable in the name of profits. All this despite women controlling $20 trillion in annual consumer spending as far back as 2009 and owning 42% of all US businesses in 2023.

Tech, where testosterone runs rampant, has completely bought into this mantra and is using artificial intelligence to implement it at scale and help others do the same. That’s why the sector disregards women’s needs and experiences when developing AI solutions, deflects accountability for automating and amplifying online harassment, purposely reinforces gender stereotypes, operationalises menstrual surveillance, and sabotages women’s businesses and activism.

Techno-optimism

Tech solutionism is predicated on the conviction that there is no problem too tough for digital technology to solve and, when you plan to save the world, AI is the ultimate godsend.

It’s only by understanding the pervasiveness of patriarchy, meritocracy, and exceptionalism in tech that we can explain how the sector dares to brag about its limitless ability to tackle complex issues at a planetary scale with an extremely homogeneous workforce, mainly comprising white, able-bodied, wealthy, heterosexual, cisgender men.

For instance, AI recruiting tools have regularly been portrayed as the end of biased human hiring. The results say otherwise. Notably, Amazon had to scrap its AI recruiting tool because it consistently ranked male candidates above women. The application had been trained on the company’s 10-year hiring history, which reflected the male prevalence across the tech sector.
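
The mechanism is worth making concrete. Below is a minimal, hypothetical sketch (synthetic data, not Amazon’s actual system) of how a classifier trained on historically biased hiring decisions learns to treat gender as if it were a qualification:

```python
# Toy illustration with synthetic data -- NOT Amazon's system.
# Past recruiters hired on skill AND (negatively) on being female,
# so the model learns the gender penalty as a legitimate "signal".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)              # genuine qualification signal
is_female = rng.integers(0, 2, size=n)  # 0 = male, 1 = female
hired = (skill - 1.0 * is_female + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

print(model.coef_)  # strongly negative weight on the gender column
# Two equally skilled candidates, very different predicted odds of hiring:
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
```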

Another example is the assumption by manufacturers of smart, internet-connected devices that danger typically comes from the outside; hence the need for cameras, VPNs, and passwords to preserve the integrity of the household. But if you’re a woman, the enemy may be indoors.

One in four women experiences domestic violence in her lifetime; tech companies, however, are oblivious to it. One way perpetrators control, harass, and intimidate their victims is by using artificial intelligence to manipulate the victims’ wearable and smart home devices. Faced with this design blind spot, women have no option but to become their own cybersecurity experts.

Deflecting accountability

Tech is also a master at deflecting responsibility for how AI enables bullying and aggression towards women. For example, we’re told we must worry about deepfakes threatening democracies around the world because of their ability to reproduce the voices and images of politicians and world leaders. The reality is that women bear the brunt of this technology.

A 2019 study found that 96% of deepfakes are of a non-consensual sexual nature and that, of those, 99% depict women. This is content designed to silence, shame, and objectify women. And tech leaves it to the victims to uncover and report the material. For example, it’s on women to proactively request the removal of harmful pages from Google Search.

Then, we have the online harassment of female journalists, activists, and politicians, fostered by algorithms that promote misogynistic content to users prone to engage with it; Black women are 84% more likely than white women to be targeted. Research by the Inter-Parliamentary Union on the online abuse of women parliamentarians worldwide found that 42% had had extremely humiliating or sexually charged images of themselves spread through social media.

When tech bros are asked to take responsibility for online harassment, they hide behind freedom of speech or their supposed powerlessness to police their creations, whilst financially benefiting from the online abuse of women.

Reinforcing gender stereotypes

How do machines know what a woman looks like? The Gender Shades study showed that commercial facial recognition algorithms used to classify gender were biased against darker-skinned women, with error rates of up to 35% compared with 1% for lighter-skinned men. Whilst Microsoft and IBM acknowledged the problem and subsequently improved their algorithms, Amazon blamed the auditor’s methodology.
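
The auditing methodology behind that finding is straightforward to sketch: rather than reporting one headline accuracy, you disaggregate the error rate by subgroup. A toy version, with made-up records rather than the Gender Shades data, might look like this:

```python
# Sketch of a disaggregated (per-subgroup) error-rate audit.
# Records are invented for illustration only.
from collections import defaultdict

records = [
    # (subgroup, true_gender, predicted_gender)
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    # ... a real audit uses thousands of labelled faces per subgroup
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += truth != pred

for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate")
```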

Tech has a long tradition of capitalising on women and gender stereotypes to anthropomorphise its chatbots. The first chatbot, created in 1966, played the role of a psychotherapist. It was named not after a famous psychotherapist such as Sigmund Freud or Carl Jung, but Eliza, after Eliza Doolittle in the play Pygmalion. The rationale: by changing how she spoke, the fictional character created the illusion that she was a duchess.

Following suit, tech companies have intentionally designed their virtual home assistants to perpetuate societal gender biases around feminine obedience and the “good housewife”. Their default female voices, feminine names — Alexa, Siri, and Cortana — and subservient manners are calculated to make users bond with those technologies by reproducing patriarchal stereotypes. Historically, this has included a submissive attitude towards verbal sexual harassment, flirting with aggressors, and thanking offenders for their abusive comments.

Surveillance

Tech has also profited from helping to automate and scale control and influence over women’s reproductive decisions. Whilst society depriving women of their bodily autonomy is nothing new — there are myriad examples of government-sanctioned initiatives forcing women’s sterilisation and reproduction — what’s frightening is that the use of AI brings us closer to a future where Minority Report meets The Handmaid’s Tale.

Microsoft has developed applications used across Argentina, Brazil, Colombia, and Chile that promise to forecast the likelihood of teenage pregnancy from data such as age, ethnicity, and disability.

AI is an ally of “pro-life” groups too. An analysis of the results shown to women searching online for guidance about abortion revealed that a substantial number of the hits produced by the algorithm were adverts styled as advice services but run by anti-abortion campaigners. Google’s defence? The adverts had an “ad” tag.

Censorship

Tech actively sabotages women in areas such as self-expression, healthcare, business, finances, and activism.

AI tools developed by Google, Amazon, and Microsoft rate images of women’s bodies as more sexually suggestive than those of men. Medical pictures of women, photos of pregnant bellies, and images depicting breastfeeding are all at high risk of being classified as representing “explicit nudity” and removed from social media platforms.

It can escalate too. It’s not uncommon for women’s businesses that rely on portraying women’s bodies to report being shadow-banned — their content hidden or made less prominent by social media platforms without their knowledge. This practice decimates female-led businesses and promotes self-censorship to avoid demotion on the platforms.

Algorithms also flag women as higher-risk borrowers. In 2019, Apple co-founder Steve Wozniak and tech entrepreneur David Heinemeier Hansson disclosed in a viral Twitter thread that the Apple Card had offered them credit limits ten and twenty times higher, respectively, than their wives’, in spite of the couples sharing their assets.
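
We don’t know what the Apple Card’s underwriting model actually looked like, and its issuer stated that gender was not an input. A hedged sketch with synthetic data shows why that defence is weak: when a model is trained on historical limits through a feature that merely correlates with gender (a hypothetical proxy, assumed here purely for illustration), the disparity passes straight through even though no gender column exists:

```python
# Hypothetical proxy-variable sketch with synthetic data -- not the
# actual Apple Card model. "Gender" is never given to the model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000
is_female = rng.integers(0, 2, size=n)
# Assumed proxy: e.g. length of individual (vs joint) credit history,
# skewed by gender for historical reasons.
proxy = rng.normal(loc=10 - 4 * is_female, scale=2, size=n)
# Historical limits were themselves set from the proxy.
past_limit = 1000 * proxy + rng.normal(scale=500, size=n)

model = LinearRegression().fit(proxy.reshape(-1, 1), past_limit)
pred = model.predict(proxy.reshape(-1, 1))
# Ratio > 1: men receive systematically higher predicted limits.
print(pred[is_female == 0].mean() / pred[is_female == 1].mean())
```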

Tech doesn’t appear to think that female activism is good for business either. For years, digital campaigns have highlighted that Meta’s hate speech policies result in the removal of posts calling attention to gender-based violence and harassment. The company continues to deem those posts against its policies — despite its Oversight Board overturning its decisions — and to suspend the accounts of Black women activists who report racial abuse.

The other women in tech

Whilst AI is naturally associated with the virtual world, it is rooted in material objects. Moreover, most tech software and platform giants — Apple, Google, Amazon, Microsoft, and Meta (aka Facebook) — are hardware providers as well. Datacentres, smartphones, laptops, and batteries rely heavily on metals such as cobalt, and women often play a key role in their extraction and recycling.

For example, the Democratic Republic of the Congo supplies 60% of the world’s cobalt. The mineral is extracted via artisanal and industrial mines. Some sectors welcome the integration of women into the artisanal mines as a means to empower them financially and as a substitute for child labour.

However, the specific activities women perform in the mines are the most toxic, as they involve direct contact with the minerals, leading to cancer, respiratory conditions, miscarriage, and menstrual disruption. Women working in some of those artisanal mining sites report daily violence and blackmail. Still, adult women earn half of what adult men make (an average of $2.04 per day).

What has tech done about this? Software-only companies continue to look the other way, while those manufacturing hardware have avoided their responsibility as much as they could.

Most companies have taken moderate or minimal action, and in some cases they have denied knowledge of human rights breaches. Still, it’s clear that the bulk of the action is directed towards eradicating child labour and that the particular challenges women miners face are left unaddressed.

There is also a gendered division of labour in electronic waste, a €55 billion business. Women frequently hold the lowest-tier jobs in the e-waste sector. They are exposed to harmful materials, chemicals, and acids as they pick apart and separate electronic equipment into its components, which in turn negatively affects their morbidity, mortality, and fertility.

Again, efforts focus on reducing child labour, and women’s working conditions are lumped in with those of “adult” workers. An additional challenge compared with mining is that hardware manufacturers control the narrative, highlighting their commitment to recycling materials across their products for PR purposes.

AI-powered misogyny beyond tech

Last but not least, tech companies are not the only ones using AI as a misogyny tool. Organisations and individuals around the world are quickly ramping up.

For example, Iran has announced the use of facial recognition algorithms to identify women breaking hijab laws.

The baby-on-board market is a goldmine, and technology is instrumental in helping vendors exploit it. It has become habitual for retailers to use AI algorithms to uncover and target pregnant girls and women.

Then, there is sexual exploitation. According to the United Nations, for every 10 victims of human trafficking detected globally, five are adult women and two are girls. Overall, 50% of victims are trafficked for sexual exploitation (72% in the case of girls). Traffickers use online advertisements, social media platforms, and dating apps — all powered by AI — to facilitate the recruitment and exploitation of victims and the exertion of control and pressure over them.

And thanks to generative AI, it has never been easier for individuals to create misogynistic content, even accidentally.

Tech leaders’ answer to their responsibility for generative AI fostering bias has been to issue open letters focusing on a dystopian future rather than addressing present harms. Even better, they have perfected the skill of putting the onus on governments to regulate AI whilst lobbying in parallel to shape those same regulations.

What’s the fix? 

Tech has embraced the patriarchal playbook in its adoption and deployment of artificial intelligence tools. Hoping to reap massive financial returns, the sector is unapologetically fostering gender inequity and stereotypes.

As Black feminist Audre Lorde wrote, “The master’s tools will never dismantle the master’s house.” Whilst tech continues to be run by wealthy white men who see themselves as the next Messiah, misogyny and patriarchy will be a feature and not a bug of artificial intelligence applications.

We need diverse leadership in tech that sees women as an underserved market with growing purchasing and executive power. Tech also needs investors who understand that outdated patriarchal beliefs about women being a “niche” don’t serve them well.

On the bright side, it’s encouraging to see categories such as Femtech, which focuses on female healthcare innovation, reach $16 billion in investment, with projections of $1.2 trillion by 2027.

Finally, tech needs to assume responsibility for the tools it creates, and that goes beyond monitoring app performance. It starts at the ideation stage, by asking uncomfortable ethical questions such as “Should we build that?”

Because not all speed is progress.

NOTE: This article is based on a piece that I wrote previously for The Mint.


PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet but you feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?

I’ve got you covered.

