Category Archives: Digital Inclusion

The Truth About Women, AI, and Confidence Gaps

A black-and-white surrealist collage of a classroom lecture. The center features an oversized computer keyboard with the two keys “A” and “I” highlighted in red. In the foreground, a vintage illustration of a woman in historical attire kneels as she interacts with the keyboard. Behind her, an audience of Cambridge students is seated in rows observing the lecture.

Hanna Barakat & Cambridge Diversity Fund / Analog Lecture on Computing / Licenced by CC-BY 4.0

More than twenty years ago, I joined a medium-sized software company focused on scientific modelling as a trainer. I knew the company and some of their products very well. I had been their customer.

First, during my PhD in computational chemistry, then as an EU post-doctoral researcher coding FORTRAN subroutines to simulate the behaviour of materials, and as a modelling engineer working for a large chemical company.

As I started my job as a materials trainer, I had to learn about other software applications that I hadn’t used previously or was less familiar with. One of those was related to what we called at the time “statistics” to predict the properties of new materials.

Some of those “statistical methods” were neural networks and genetic algorithms, part of the field of artificial intelligence. But I was not keen on developing the material for that course. It felt like a waste of time for several reasons.

First, whilst those methods were already popular among life science researchers, they were not very helpful to materials modellers — my customers. Why? Because large, good datasets were scarce for materials.

Case in point: I still remember one specific customer excited about using the algorithms to develop new materials in his organisation. With a sinking feeling born of similar conversations, I asked him, “How many data points do you have?” He said, “I think I have 7 or 10 in a spreadsheet.” Unfortunately, I had to inform him that it was not nearly enough.

Second, the course was half a day, too short to justify delivering in person, the way all our workshops had been offered for years. Our experience told us that in 2005, nobody would fly to Paris, Cambridge, Boston, or San Diego for a 4-hour training event on “statistics”.

The solution? It was decided that this course would be the first to be delivered online via a “WebEx”, the great-grandparent of Zoom, Teams, and Google Meet. That was not cool at all.

At the time, we had little faith in online education for three reasons.

  • Running the webinars was very complex; they took ages to set up and schedule, and there were always connection glitches.
  • There were no “best practices” for delivering engaging online training yet; as a result, we trainers felt as if we were cheating our clients out of proper teaching.
  • We believed that scientific and technical content was “unteachable” online.

After such a less-than-amazing start at teaching artificial intelligence online, you’d have thought I was done.

I thought so, too. But I’ve changed my mind. It hasn’t happened overnight, though.

It has taken two decades of experience teaching, using, and supporting AI tools in my corporate job, 10+ years as a DEI trailblazer, and four years of activism for sustainable AI to realise that if we want systemic equality, it’s paramount we bridge the gender gap in AI adoption.

And it has also helped that I now have 20 years of experience delivering engaging online keynotes, courses, and masterclasses.

This is the story of why, in September, I’m launching Women Leading with AI: Master the Tools, Shape the Future, an eight-session virtual group program in inclusive, sustainable, and actionable AI for women leaders.

AI and Me

At Work

After training, I moved to the Contract Research department. There, I had the opportunity to design and deliver projects that used AI algorithms to get insights into new materials and their properties.

Later on, I became Head of Training and Contract Research and afterwards, I moved to supporting customers using our software applications for both materials and life sciences research.

Whilst there were exciting developments in those areas, most of our AI algorithms didn’t get much love from our developers or customers. After all, they hadn’t substantially improved for ages.

Then, everything changed a few years ago.

In life science, AI algorithms made it possible to predict protein structure, which earned their creators the Nobel Prize. Those models have been used in pharmaceuticals and environmental technology research and were available to our customers.

We also developed applications that used AI algorithms to help accelerate drug discovery. It was hearing from clients working on cancer treatments how AI had positively broadened the kinds of drugs they were considering that changed me from AI-neutral to AI-positive.

In materials science, machine learning force fields are also bridging the gap between quantum and classical simulation, making it possible to simultaneously model chemical reactions (quantum) in relatively large systems (classical).

In summary, my corporate job taught me that scientific research can benefit massively from the development of AI tools beyond ChatGPT.

As a DEI Trailblazer

Tired of tech applications that made users vulnerable and denied their diversity of experiences, in 2019, I launched the Ethics and Inclusion Framework.

The idea was simple — a free tool for tech developers to help them identify, prevent, mitigate, and account for the actual and potential adverse impact of the solution they develop. The approach is general so that it can be used for any software applications, including AI tools.

The feedback was very positive, and the framework was featured by the Cambridge Engineering Design Centre and in research papers on ethical design.

It was while running a workshop on the framework that I met Tania Duarte, the founder of We and AI, an NGO working to encourage, enable, and empower critical thinking about AI.

I joined them in 2020 and it has been a joy to contribute to initiatives such as

  • The Race and AI Toolkit, designed to raise awareness of how AI algorithms encode and amplify the racial biases in our society.
  • Better Images of AI, a thought-provoking library of free images that more realistically portray AI and the people behind it, highlighting its strengths, weaknesses, context, and applications.
  • Living with AI, the e-learning course of the Scottish AI Alliance.

Additionally, as a founder of the gender employee community at my corporate job a decade ago, I’ve chaired multiple insightful meetings where we’ve discussed the impact of AI algorithms on diversity, equity, and inclusion.

As a Sustainability Advocate

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.
Clarote & AI4Media / Labour/Resources / Licenced by CC-BY 4.0

In 2021, the article Sustainable AI: AI for sustainability and the sustainability of AI made me aware that we were discounting significant energy consumption and carbon emissions derived from developing AI models.

I was on a mission to make others aware, too. I still remember my keynote at the Dassault Systèmes Sustainability Townhall in 2021, when I shared with my co-workers the urgency of thinking about the materiality of AI — you can watch a shorter version, delivered at the WomenTech Conference in 2022, here.

I’ve also written about how the Global North exploits the Global South’s mineral resources to power AI, as well as how tech companies and governments disregard the energy and water consumption from running generative AI tools.

Lately, I’ve looked into data centres — which are vital to cloud services and hence to the development and deployment of AI. Given that McKinsey forecasts that they’ll triple in number by 2030, it’s paramount that we balance innovation and environmental responsibility.

AI and Women

As 50% of the planet’s population, women have been affected by AI developments, yet typically not as the ones profiting from them but as the ones bearing the brunt.

Women Leading AI

Unfortunately, it often appears that the only contribution from women to technology was made by Ada Lovelace, in the 19th century. Artificial intelligence is no exception. The contributions of women to AI have been regularly downplayed.

In 2023, the now-infamous article “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement” showcased 12 men. Not even one woman in the group.

The article prompted criticism right away and “counter-lists” of women who have been pivotal in AI development and uncovering its harms. Still, women are not seen as “AI visionaries”.

And it’s not only society that disregards women’s expertise in AI — women themselves do it too.

In 2023, I was collaborating with an NGO that focuses on increasing the number of women in leadership positions in fintech. They asked me to chair a panel at their annual conference and gave me freedom to pick the topic. I titled the panel “The role of boards driving AI adoption.”

In alignment with the NGO’s mission, we decided that we’d have one man and two women as panellists.

Finding a great male expert was fast. Finding two female AI experts was long and excruciating.

And not because of the lack of talent. It was a lack of “enoughness.”

For three weeks, I met women who had solid experience working in teams developing and implementing strategies for AI tools. Still, they didn’t feel they were “expert enough” to be on the panel.

I finally secured two smashing female AI experts, but the search opened my eyes to the need for more women on boards to learn about AI tools and their impact on strategy and governance.

That was the rationale behind launching the Strategic AI Leadership Program, a bespoke course on AI Competence for C-Suite and Boards. The feedback was excellent and it filled me with pride to empower women in top leadership positions to have discussions about responsible and sustainable AI.


Weaponisation of AI

Sycophantic chatbots can hide the fact that, at its core, AI is a tool that automates and scales the past.

As such, it has been consistently weaponised as a tool of misogyny, its harms dismissed as unconscious bias and blamed on the lack of diversity in datasets.

And I’m not only talking about “old” artificial intelligence. Generative AI is massively reinforcing harmful stereotypes and is being weaponised against women and underrepresented groups.

For example, 96% of deepfakes are of a non-consensual sexual nature and 99% of the victims are women. Who profits from them? Porn websites, payment processors, and big tech.

And chatbots are great enablers of propagating biases.

New research has found that ChatGPT and Claude consistently advise women to ask for lower salaries than men, even when both have identical qualifications.

In one example, OpenAI’s o3 model was prompted to advise a female job applicant; it suggested requesting a salary of $280,000.
In another, the researchers issued the same prompt for a male applicant; this time, the model suggested $400,000.

In summary, not only does AI foster biases but it also helps promote them on a planetary scale.

My Aha Moment

Until recently, my focus had been to empower people with knowledge about how AI algorithms work, as well as AI strategy and governance. I had avoided teaching generative AI practices like the plague.

That was until a breakthrough during the month of July. It came from the convergence of four factors.

Non-Tech Women

A month ago, I delivered the keynote “The Future of AI is Female” at the Women’s Leadership event Phoenix 2, hosted by Aspire.

In that session, I shared with the audience two futures: one where AI tools are used to transform us into “productive beings” and another one where AI systems are used to improve our health, enhance sustainability, and boost equity.

It’s a no-brainer that everybody thought the second scenario was better. But it was also very telling that nobody believed it was the more probable one.

After the keynote, many attendees reached out to me and asked for a course to learn how AI could be used for good and in alignment with their values.

Other women who didn’t attend the conference also reached out to me for guidance on AI courses to help them strengthen their professional profiles beyond “prompting”.

Unfortunately, I wasn’t able to recommend a course that incorporates both practical knowledge about AI and the fundamentals of how it shapes areas such as sustainability, DEI, strategy, and governance.

Women In Tech

As I mentioned above, I’m the founder of the gender employee community at my corporate job, and for 10 years, we’ve been hosting regular meetings to discuss DEI topics.

For our July meeting, I wanted us to have an uplifting session before the summer break, so I proposed to discuss how AI can boost DEI now and in the future.

I went to the meeting happily prepared with my list of examples of how artificial intelligence was supporting diversity, equity, and inclusion. But I was not prepared for how the session panned out.

Over and over, the examples shared showcased how AI was weaponised against DEI. Moreover, when a positive use was shared, somebody quickly pointed out how that could be used against underrepresented groups.

This experience made me realise that, as well as thinking through the challenges, DEI advocates need the time and the tools to think about how AI can purposefully drive equity.

Women In Ethics

I have the privilege of knowing many women experts in ethical AI, with relevant academic backgrounds and professional experience.

With all the talk about responsible AI, you’d think that they are in high demand. They aren’t.

In July, my LinkedIn feed was full of posts from ethics experts — many of them women — complaining about what I call “performative AI ethics”: organisations praising the need to embed responsible AI without creating the necessary roles.

But is that true? Yes, and no.

Looking at advertised AI jobs, I noticed a tendency for expertise in ethics to appear as an add-on to “Head of AI” roles that are, at their core, eminently technical: their key requirement is experience designing, deploying, and using AI tools.

In other words, technical expertise remains the gatekeeper to responsible AI.

A pixelated black-and-white portrait of Ada Lovelace where the arrangement of pixels forms intricate borders and repeating patterns. These designs resemble the structure and layout of GPU microchip circuits, blending her historical contributions with modern computational technology.
Hanna Barakat & Cambridge Diversity Fund / Lovelace GPU / Licenced by CC-BY 4.0

Women And The Gender AI Adoption Gap

As I mentioned in my recent article “A New Religion: 8 Signs AI Is Our New God”, it has been taken as dogma that women are behind in generative AI adoption because of lower confidence in their ability to use AI tools effectively and a lack of interest in this technology.

But a recent Harvard Business School working paper, Global Evidence on Gender Gaps and Generative AI, synthesising data from 18 studies covering more than 140,000 individuals worldwide, has provided a much more nuanced understanding of the gender divide in generative AI.

When compared to men, women are more likely to

  • Say they need training before they can benefit from ChatGPT, and perceive AI usage in coursework or assignments as unethical or equivalent to cheating.
  • Agree that chatbots should be prohibited in educational settings, and be more concerned about how generative AI will impact learning in the future.
  • Perceive lower productivity benefits of using generative AI at work and in job search.
  • Agree that chatbots can generate better results than they can on their own.

Moreover, women are less likely to agree that chatbots can improve their language ability or to trust generative AI than traditional human-operated services in education and training, information, banking, health, and public policy services.

In summary, women correctly understand that AI is not “neutral” or a religion to be blindly adopted and prefer not to use it when they perceive it as unethical.

There is more. In the HBR article Research: The Hidden Penalty of Using AI at Work, researchers reported an experiment with 1,026 engineers in which participants evaluated a code snippet that was purportedly written by another engineer, either with or without AI assistance. The code itself was the same — the only difference was the described method of creation (with/without AI assistance).

When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average, with 6% for men and 13% for women.

The authors posit that this happens through a process called social identity threat.

When members of stereotyped groups — for example, women in tech or older workers in youth-dominated fields — use AI, it reinforces existing doubts about their competence. The AI assistance is framed as “proof” of their inadequacy rather than evidence of strategic tool use. Any industry dominated by one group over another is likely to see greater competence penalties for minority workers.

The authors suggest senior women openly using AI as a way to bridge the gap.

Our research found that women in senior roles were less afraid of the competence penalty than their junior counterparts. When these leaders openly use AI, they provide crucial cover for vulnerable colleagues.

A study by BCG also illustrates this dynamic: when senior women managers lead their male counterparts in AI adoption, the adoption gap between junior women and men shrinks significantly.

Basically, we need to normalise women using—and leading—AI.

My Bet: Women Leading with AI

Through my July of AI breakthroughs, I learned that

  • The gender gap in generative AI is real, and the causes are much more complex than a lack of confidence.
  • The absence of access to training and sustainable practices is a factor contributing to that gender gap.
  • Women are eager to ramp up on AI provided that it aligns with their values.
  • To be considered by organisations to lead responsible AI, it’s imperative to show mastery of the tools.

This coalesced in a bold idea:

What if I teach women how to use AI within an ethical, inclusive, and sustainable framework?

What if I developed a program where they could understand how AI tools work and how they shape topics such as the future of work, DEI, strategy, and governance, while building hands-on expertise through practical examples?

And this is how my virtual group program, Women Leading with AI: Master the Tools, Shape the Future, was born.

About the Program:

A structured, eight-session program for women leaders focused on turning AI literacy into strategic results. Explore AI foundations and the impact of artificial intelligence on the future of work, DEI, sustainability, data and cybersecurity — paired with generative AI workflows, templates, exercises, and decision frameworks to translate learning into real-world impact. The blend of live instruction, quizzes, and peer support ensures you emerge with both critical insight and a toolkit ready to lead impactfully in your role.

The program starts mid-September and you can read the details following this link.

I can’t wait for you to join me in making the future of AI female.

Have a question? Message me on LinkedIn or drop me a line.


BONUS

[Webinar Invitation] Ethical AI Leadership: Balancing Innovation, Inclusion & Sustainability

Join me on Tuesday, 12th August for a practical, high-value webinar tailored for women leaders committed to harnessing AI’s power confidently, ethically, and sustainably. 

You will leave the session with actionable insight into how AI intersects with environmental impact, leadership values, and equity.

Why attend?

• Uncover key barriers women face in using AI.

• Discover the hidden cost of generative AI—from energy consumption to bias.

• Participate in an interactive real-world case study where you evaluate AI trade-offs through DEI and sustainability frameworks.

• Gain practical guidance on how to minimise footprint while harnessing generative AI tools more responsibly.

Date: Tuesday 12th August 

Time: 13:00 London | 14:00 Paris | 8:00 New York

You can register following this link.

This is a taster of my program “Women Leading with AI: Master the Tools, Shape the Future”, starting mid-September.

More Women in Tech Won’t Fix AI — Systemic Change Will

A black-and-white image depicting the early computer, Bombe Machine, during World War II. In the foreground, the shadow of a woman in vintage clothing is cast on a man changing the machine's cable.
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Shadow Work– Decrypting Bletchley Park’s Codebreakers / Licenced by CC-BY 4.0.

Last year, at a women’s conference in London, I was disappointed to see that digital inclusion — and AI in particular — was missing from the agenda. I remember telling the NGO’s CEO about my concerns, even mentioning my articles on AI as a techno-patriarchal tool.

Her receptive response had given me hope. That hope was reignited this year when I eagerly reviewed the program and discovered a panel on AI.

The evening before the event, an unexpected sense of dread began to settle in. When I asked myself why, the answer struck me like a lightning bolt.

I dreaded hearing the “we need more women in tech” mantra once more — another example of how we deflect the solution of a systemic problem onto those bearing the brunt of it.

Let me tell you what I mean.

Women as Human Fixers 

For millennia, women have been assigned the duty of giving birth and caring for children, rooted in the fact that most of them can carry a fetus for nine months. That duty to be a womb endures today, as ownership of our bodies is taken away through coercive anti-abortion laws.

Our “duty” of care has been broadened to the workplace, where we’ve been assigned the unwritten rule of “fixing” all that’s dysfunctional.

  • Being coerced into doing things nobody else cares to do, i.e. weaponised incompetence.
  • Fixing teams’ dynamics because we’re the “naturally” collaborative ones.
  • Doing the glue work — being appointed the shoulder where all team members can cry and find an “empathetic ear”.
  • Doing the office work — we’re the ones who are “organised”, so dull tasks pile up on our desks whilst “less” organised peers do the promotable work.

And that “fixer” stereotype now includes “our” duties as women in tech. When the sector was in its infancy, women were doing the supposedly boring stuff (programming) while men were doing the hardware (the “cool” stuff). When computers took off, we trained men in programming so they could become our managers. Then, we were pushed out of those jobs in the 1980s. The only constant has been doing the job but not getting the accolades (see women’s role in Bletchley Park, Hidden Figures).

Moreover, whilst statistics tell us that 50% of women leave tech by age 35, young girls and women are supposed to brush off that “inconvenient” truth and rest assured that tech is an excellent place for a career. Worse, they are anointed to make tech work for everybody.

What’s not to like, right?

Then, let me show you the to-do list of 21 tasks and expectations the world imposes on each woman in tech.

Continue reading

10 Reasons Zuckerberg’s “Masculine Energy” Should Worry Us All

Two men fighting in a boxing ring with one wearing a red shirt.
Photo by Franco Monsalvo.

Statistics tell us that 70% of all senior executives are alpha males, so I’d have thought we had enough “masculine energy.” Mark Zuckerberg disagrees.

In a recent podcast, he called businesses to dial up “masculine energy.” 

 It’s like you want like feminine energy, you want masculine energy. Like I, I think that that’s like you’re gonna have parts of society that have more of one or the other. I think that that’s all good. 

But, but I do think the corporate culture sort of had swung towards being this somewhat more neutered thing. And I didn’t really feel that until I got involved in martial arts, which I think is still a more, much more masculine culture.

[…] Like, well that’s how you become successful at martial arts. You have to be at least somewhat aggressive. 

Why does this matter? Because he’s not talking about others. He’s telling us about himself unleashing his “masculine energy”. For example,

  • Revamping his clothes and demeanour — from looking like a perennial geeky student to a cool billionaire tech millennial.
  • Embracing far-right politics — check the inauguration picture where he shares the second row with “chums” Musk, Bezos, and Pichai.
  • Stopping faking playing nice — he got rid of fact-checkers and told Meta’s 3 billion users that fact-checking was their job, not his.

Moreover, he’s a more “palatable” version of Elon — equally successful, not so toxic, and having undergone a very public Meta-morphosis in appearance — which makes him dangerously appealing to young men… and maybe to women too. After all, he has three daughters and no sons.

Given his extreme financial success and his newfound closeness to political power, I pondered:

What would it take for me to unleash my “masculine energy”?

And I came up with 10 precepts.

1.- Recycle

The first iteration of Facebook was “Facemash” — a website Zuckerberg created whilst studying at Harvard — to evaluate the attractiveness of female students. Users were presented with pairs of photos of female students and asked to vote who was hotter.

The kick? The photos were stolen.

The students were unaware their images were being used for this rating, judging by the complaint from Fuerza Latina and the Harvard Association of Black Women. The site used ID photos of female undergraduates taken without permission from the university’s online directories. 

This “repurposing” of data would become a hallmark of Facebook (see Cambridge Analytica later).

Continue reading

The Missing Pieces in the UK’s AI Opportunities Action Plan

A brightly coloured mural which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis, miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers, men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone, people in a production line. Motifs such as network diagrams and melting emojis are placed throughout the busy vignettes.
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0.

Reading the 50 recommendations in the AI Opportunities Action Plan published by the British Government last January 13th has been a painful and disappointing exercise.

Very much like a proposal out of a chatbot, the document is

  • Bland — The text is full of hyperbolic language and over-the-top optimism.
  • General — The 50 recommendations lack specificity to the UK context and details about ownership and the budget required to execute them.
  • Contradictory — The plan issued by a Labour government is anchored in a turbo-capitalistic ideology. Oxymoron, anyone?

If I learned anything from my 12 years in Venezuela, it’s that putting all your eggs in one basket — oil, in their case — and hoping it solves all problems doesn’t work.

A credible AI strategy must (a) address both the benefits and the challenges head-on and (b) treat this technology as another asset for the human-centric flourishing of the country rather than a goal in itself to be pursued at all costs.

But you don’t need to believe me. See it for yourself.


What I read

Techno-speak

I was reminded of George Orwell’s 1984 Newspeak.

The text uses AI coinages such as “AI stack”, “frontier AI”, “AI-driven data cleansing tools”, “AI-enabled priorities”, and “embodied AI” without providing a clear definition of any of them.

Exaggeration

Hyperbole and metaphors are used to the extreme to overstate the benefits.

we want Britain to step up; to shape the AI revolution rather than wait to see how it shapes us. 

We should expect enormous improvements in computation over the next decade, both in research and deployment.

Change lives by embracing AI

FOMO

The text exudes FOMO (Fear Of Missing Out). No option is given to adopt AI systems more gradually. It’s now or we’ll be the losers.

This is a crucial asymmetric bet — and one the UK can and must make

we need to “run to stand still”.

the UK risks falling behind the advances in Artificial Intelligence made in the USA and China.

And even a new take on Facebook’s famous “move fast and break things”:

“move fast and learn things”

Techno-solutionism

AI is going to solve all our socio-economic and political problems and transport us to a utopian future.

It is hard to imagine how we will meet the ambition for highest sustained growth in the G7 — and the countless quality-of-life benefits that flow from that — without embracing the opportunities of AI.

Our ambition is to shape the AI revolution on principles of shared economic prosperity, improved public services and increased personal opportunities so that:
• AI drives the economic growth on which the prosperity of our people and the performance of our public services depend;
• AI directly benefits working people by improving health care and education and how citizens interact with their government; and
• the increasing of prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.

What’s not to like?

For a great commentary on how techno-solutionism won’t solve social problems, see 20 Petitions for AI and Public Good in 2025 by Tania Duarte.

Colonialism

Living in Venezuela for 12 years was an education on how to feel “less than” other countries even when you have the largest oil reserves in the world.

I remember new education programs announced as guaranteed successes simply because they had worked in the US, Canada, Spain, Germany… A colonised mentality learned from centuries of Spanish oppression: the pervasive assumption that an initiative will work simply because we like its results elsewhere, disregarding the context it was developed for.

The AI Opportunities Action Plan reminded me of them.

Supporting universities to develop new courses co-designed with industry — such as the successful co-operative education model of Canada’s University of Waterloo, CDTM at the Technical University of Munich or France’s CIFRE PhD model

Launch a flagship undergraduate and masters AI scholarship programme on the scale of Rhodes, Marshall, or Fulbright for students to study in the UK.

Singapore, for example, developed a national AI skills online platform with multiple training offers. South Korea is integrating AI, data and digital literacy.

But the document is also keen on showing us that we’ll be the colonisers:

we aspire to be one of the biggest winners from AI

Because we believe Britain has a particular responsibility to provide global leadership in fairly and effectively seizing the opportunities of AI, as we have done on AI safety

A historical-style painting of a young woman stands before the Colossus computer. She holds an abstract basket filled with vibrant, pastel circles representing data points. The basket is attached to the computer through a network of connecting wires, symbolizing the flow and processing of information.
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Colossal Harvest / CC-BY 4.0

Capitulation

The document is all about surrendering the data, agency, tax money, and natural resources of citizens in the UK to the AI Gods: startups, “experts”, and investors.

Invest in becoming a great customer: government purchasing power can be a huge lever for improving public services, shaping new markets in AI

We should seek to responsibly unlock both public and private data sets to enable innovation by UK startups and researchers and to attract international talent and capital.

Couple compute allocation with access to proprietary data sets as part of an attractive offer to researchers and start-ups choosing to establish themselves in the UK and to unlock innovation.

Sprinkling AI

AI is the Pantone Colour of the Year for the next five years. Everything will need to have AI in it. Moreover, everything must be designed so that AI can shine.

Appointing an AI lead for each mission to help identify where AI could be a solution within the mission setting, considering the user needs from the outset.

Two-way partnerships with AI vendors and startups to anticipate future AI developments and signal public sector demand. This would involve government meeting product teams to understand upcoming releases and shape development by sharing their challenges.

AI should become core to how we think about delivering services, transforming citizens’ experiences, and improving productivity.

Brexit Denial

It’s funny to see that the text doesn’t reference the European Union and only refers to Europe as a benchmark to measure against.

Instead, the EU is hinted at as “like-minded partners” and “allies”, and collaborations are thrown left and right without naming the partner.

Agree international compute partnerships with like-minded countries to increase the types of compute capability available to researchers and catalyse research collaborations. This should focus on building arrangements with key allies, as well as expanding collaboration with existing partners like the EuroHPC Joint Undertaking.

We should proactively develop these partnerships, while also taking an active role in the EuroHPC Joint Undertaking.

Moreover, the text praises the mobility of researchers and wanting to attract experts forgetting the UK’s refusal to participate in the Erasmus program and the fact that it only joined Horizon Europe last year.

The UK is a medium-sized country with a tight fiscal situation. We need the best talent around the world to want to start and scale companies here.

Explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent.

Vagueness

Ideas are thrown into the text half-baked, giving the impression that the government has adopted the Silicon Valley strategy of “building the plane while flying it”.

The government must therefore secure access to a sufficient supply of compute. There is no precise mechanism to allocate the proportions

In another example, the plan advocates for open-source AI applications.

the government should support open-source solutions that can be adopted by other organisations and design processes with startups and other innovators in mind.

The AI infrastructure choice at-scale should be standardised, tools should be built with reusable modular code components, and code-base open-sourcing where possible.

At the same time, it’s adamant that it needs to attract startups and investors. Unless those startups are NGOs, who will finance those open-source models?

DEI for Beginners

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.
Kathryn Conrad / Better Images of AI / Datafication / CC-BY 4.0

All of us who have been working towards a more diverse and inclusive tech for decades are in for a treat. 

First, we’re told that diversity in tech is very simple — it’s all about gender parity and pipeline.

16. Increase the diversity of the talent pool. Only 22% of people working in AI and data science are women. Achieving parity would mean thousands of additional workers. […] Government should build on this investment and promote diversity throughout the education pipeline.

Moreover, they’ve found the magic bullet.

Hackathons and competitions in schools have proven effective at getting overlooked groups into cyber and so should be considered for AI.

What about the fact that 50% of women in tech leave the sector by the age of 35?


What I missed

Regions

The government mentions that AI “can” — please note that is not a “must” or a “need” — benefit “post-industrial towns and coastal Scotland.” However, the only reference to a specific place is the Culham Science Centre, which is 10 miles from Oxford — a zone that very few could argue needs “local rejuvenation” or “channelling investment”.

Government can also use AIGZs [‘AI Growth Zones’] to drive local rejuvenation, channelling investment into areas with existing energy capacity such as post-industrial towns and coastal Scotland. Government should quickly nominate at least one AIGZ and work with local regions to secure buy-in for further AIGZs that contribute to local needs. Existing government sites could be prioritised as pilots, including Culham Science Centre

And there doesn’t appear to be room to involve local authorities in deciding how AI could bring value to their regions

Drive AI adoption across the whole country. Widespread adoption of AI can address regional disparities in growth and productivity. To achieve this, government should leverage local trusted intermediaries and trade bodies

Costs

There are plenty of gigantic numbers about how much money AI may bring

AI adoption could grow the UK economy by an additional £400 billion by 2030 through enhancing innovation and productivity in the workplace

but nothing about the costs…

Literacy

How will people get upskilled? We only get generic reassurances

government should encourage and promote alternative domestic routes into the AI profession — including through further education and apprenticeships, as well as employer and self-led upskilling.

Government should ensure there are sufficient opportunities for workers to reskill, both into AI and AI-enabled jobs and more widely.

Citizens

There is no indication in the document that this “AI-driven” Britain is what their citizens want. Citizens themselves don’t appear to be included in shaping AI either.

For example, it claims that teachers are already “benefiting” from AI assistants

it is helping some teachers cut down the 15+ hours a week they spend on lesson planning and marking in pilots.

However, the text doesn’t tell us whether teachers actually want to give up lesson planning.

And the text repeatedly states that the government will prioritise “innovation” (aka profit) over safety.

My judgement is that experts, on balance, expect rapid progress to continue. The risks from underinvesting and underpreparing, though, seem much greater than the risks from the opposite.

Moreover, regulators are expected to enable innovation at all costs

Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector. […] government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy.

Where did we sign up for that?

Sustainability

The document waxes lyrical about building datacentres. What about the electricity and water requirements? What about the impact on our water reserves and electricity grid? What about the repercussions on our sustainability goals?

The document dispatches the topic by throwing the word “sustainability” twice into a single paragraph

Mitigate the sustainability and security risks of AI infrastructure, while positioning the UK to take advantage of opportunities to provide solutions. […] Government should also explore ways to support novel approaches to compute hardware and, where appropriate, create partitions in national supercomputers to support new and innovative hardware. In doing so, government should look to support and partner with UK companies who can demonstrate performance, sustainability or security advancements.

An array of colorful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases.
Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0

Unemployment

The writers of that utopian “AI-powered” UK manifesto don’t address job losses. We only get the sentence I mentioned above

the increasing prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.

Instead, it uses language that fosters fear and builds on utopian and dystopian visions of an AI-driven future

AI systems are increasingly matching or surpassing humans across a range of tasks.

Given the pace of progress, we will also very soon see agentic systems — systems that can be given an objective, then reason, plan and act to achieve it. The chatbots we are all familiar with are just an early glimpse as to what is possible.

On the flip side, the government repeatedly reiterates their ambition of bringing talent from abroad

Supporting UK-based AI organisations working on national priority projects to bring in overseas talent and headhunting promising founders or CEOs

How does this plan contribute to reassuring people about their jobs?

Big-picture

This techno-solutionist approach shows no regard for AI specialists in domains other than coding or IT.

To mention a few, what about sociologists, psychologists, philosophers, teachers, historians, economists, or specialists in the broad spectrum of industries in the UK? 

Don’t they belong to those think tanks where decisions are made about selling our country to the AI Gods?


The Good News? We Can Do Better

People in Britain voted last year that they were tired of profits over people, centralism, and oligarchy. Unfortunately, this plan uses AI to reinforce the three.

The UK is full of hardworking and smart people who deserve much better than magic bullets or techno-saviours. 

Instead of shoehorning the UK’s future to AI, what if we


WORK WITH ME

I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.

Three ways you can work with me:

  • I empower non-tech leaders to harness the potential of artificial intelligence for sustainable growth and responsible innovation through consulting and AI competency programs.
  • I’m a ​sought-after international keynote speaker​ on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
  • I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.

Get in touch to discuss how I can help you achieve the success you deserve in 2025.

Insights from Four Women’s Conferences: The Value of Collective Female Wisdom

Four images: (1) Announcement of Patricia Gestoso’s talk “Automated out of work: AI’s impact on the female workforce” at the Women in Tech Festival, (2) Four British female politicians in a panel at the Fawcett Conference 2023, (3) Agenda of the Empowered to Lead Conference 2023, (4) Announcement of Patricia Gestoso’s talk “Seven Counterintuitive Secrets to a Thriving Career in Tech” at the Manchester Tech Festival.
Collage and photos by Patricia Gestoso.

In the last two weeks, I’ve had the privilege to attend four different conferences focused on women and I’ve presented at two of them.

The topics discussed were as complex and rich as women’s lives: neurodiversity in the workplace, women in politics, childcare, artificial intelligence and the future of the female workforce, child labour, impossible goals and ambition, postpartum depression at work, career myths, women in tech, accessibility, quotas… and so many more.

The idea for this article came from my numerous “aha” moments during talks, panels, and conversations at those events. I wanted to share them broadly so others could benefit as well.

I hope you find those insights as inspiring, stimulating, and actionable as I did.

Fawcett Conference 2023

On October 14th, I attended the Fawcett Conference 2023 with the theme Women Win Elections!

The keynote speakers and panels were excellent. The discussions were thought-provoking and space was held for people to voice their dissent. I especially appreciated listening to women politicians discuss feminist issues.

Below are some of my highlights

  • The need to find a space for feminist men.
  • It’s time for us to go outside our comfort zone.
  • “If men had the menopause, Trafalgar Square Fountain would be pouring oestrogen gel.”
  • If we want to talk about averages, the average voter is a woman. There are slightly more women than men (51% women) and they live longer.
  • Men-only decision-making is not legitimate, i.e. not democratic. Women make up the majority of individuals in the UK but the minority in decision-making. Overall, diversity is an issue of legitimacy.
  • The prison system for women forgets their children.
  • Challenging that anti-blackness/racism is not seen as a topic at the top of the agenda for the next election.
  • We believe “tradition matters”, so things have gone backwards for women since the pandemic.
  • In Australia, the Labor Party enforced gender quotas within the party. That led to increasing women’s representation to 50%. The Conservative Party went for mentoring women — no quotas — and that only increased women’s participation to 30%.
  • There is a growing toxicity in X/Twitter against women. Toxic men’s content gets promoted. We need better regulation of social media.
  • More women vote but decide later in the game.
  • We cannot afford not to be bold with childcare. The ROI is one of the highest.
  • We need to treat childcare as infrastructure. 
  • There are more portraits of horses in parliament than of women.

Empowered to Lead Conference 2023

On Saturday 28th October, I attended the “Empowered to Lead” Conference 2023 organised by She Leads for Legacy — a community of individuals and organisations working together to reduce the barriers faced by Black female professionals aspiring for senior leadership and board level positions.

It was an amazing day! I didn’t stop all day: listening to inspiring role models, taking notes, and meeting great women.

Some of the highlights below

Sharon Amesu

3 Cs:

  • Cathedral thinking — Think big.
  • Courageous leadership — Be ambitious.
  • Command yourself — Have the discipline to do things even if you’re afraid.

Dr Tessy Ojo CBE

  • We ask people what they want to do only when they are children — that’s wrong. We need to learn and unlearn to take up the space we deserve.
  • Three nuggets of wisdom: Audacity/confidence, ambition, and creativity/curiosity.
  • Audacity — Every day we give permission to others to define us. Audacity is about being bold. Overconsultation kills your dream. It’s about going for it even if you feel fear.
  • Ambition — set impossible goals (Patricia’s note: I’m a huge fan of impossible goals. I started the year setting mine on the article Do you want to achieve diversity, inclusion, and equity in 2023? Embrace impossible goals)
  • Creativity & curiosity — takes discipline not to focus on the things that are already there. Embrace diverse thinking.
  • Question 1: What if you were the most audacious, the most ambitious, and the most creative?
  • Question 2: May you die empty? Would you have used all your internal resources?

Baroness Floella Benjamin DBE

  • Childhood lasts a lifetime. We need to tell children that they are worth it.
  • Over 250 children die from suicide a year.
  • When she arrived in the UK, there were signs with the text “No Irish, no dogs, no coloureds”.
  • After Brexit, a man pushed his trolley onto her and told her, “What are you still doing here?” She replied, “I’m here changing the world, what are you doing here?”
  • She was the first anchor-woman to appear pregnant on TV in the world.
  • “I pushed the ladder down for others.”
  • “The wise man forgives but doesn’t forget. If you don’t forgive you become a victim.”
  • ‘Black History Month should be the whole year’.
  • 3 Cs: Consideration, contentment (satisfaction), courage.
  • ‘Every disappointment is an appointment with something better’.

Jenny Garrett OBE

Rather than talking about “underrepresentation”, let’s talk about “underestimation”.

Nadine Benjamin MBE

  • What do you think you sound like? Does how you sound support who you want to be?
  • You’re a queen. Show up for yourself.

Additionally, Sue Lightup shared details about the partnership between Queen Bee Coaching (QBC)  — an organisation for which I volunteer as a coach — and She Leads for Legacy (SLL).

Last year, QBC successfully worked with SLL as an ally, providing a cohort of 8 Black women from the SLL network with individual coaching from QBC plus motivational leadership from SLL.

At the conference, the application process for the second cohort was launched!

Women in Tech Festival

I delivered a keynote at this event on Tuesday 31st October. The topic was the impact of artificial intelligence (AI) on the future of the female workforce.

When I asked the 200+ attendees if they felt that the usage of AI would create or destroy jobs for them, I was surprised to see that the audience was overwhelmingly positive about the adoption of this technology.

Through my talk, I shared the myths we have about technology (our all-or-nothing mindset), what we know about the impact of AI on the workforce from workers whose experience is orchestrated by algorithms, and four different ways in which we can use AI to progress in our careers.

As I told the audience, the biggest threat to women’s work is not AI. It’s patriarchy feeling threatened by AI. And if you want to learn more about my views on the topic, go to my previous post Artificial intelligence’s impact on the future of the female workforce.

The talk was very well received and people approached me afterwards sharing how much the keynote had made them reflect on the impact of AI on the labour market. I also volunteered for mentoring sessions during the festival and all my on-the-fly mentees told me that the talk had provided them with a blueprint for how to make AI work for them.

I also collected gems of wisdom from other women’s interventions

  • Our workplaces worship the mythical “uber-productive” employee.
  • We must be willing to set boundaries around what we’re willing to do and what not.
  • It may be difficult to attract women to tech startups. One reason is that it’s riskier, so women may prefer to go to more established companies.
  • Workforce diversity is paramount to mitigate biases in generative AI tools.

I found the panel about quotas for women in leadership especially insightful

  • Targets vs quotas: “A target is an aspiration whilst a quota must be met”.
  • “Quotas shock the system but they work”.
  • Panelists shared evidence of how a more diverse leadership led to a more diverse offering and benefits for customers. 
  • For quotas to work, it is crucial to look at the data. Depending on the category, it may be difficult to get those data. You need to build trust — show that it’s for a good purpose.
  • In law firms, 60% of solicitors may be women, but when you look at the partners it is a different story — they are mostly men.
  • A culture of presenteeism hurts women in the workplace. 
  • There are more CEOs in the UK FTSE 100 named Peter than women.
  • Organisations lose a lot of women through perimenopause and menopause because they don’t feel supported.

There was a very interesting panel on neurodiversity in the workplace 

  • Neurodivergent criteria have been developed using neurodivergent men as the standard so often they miss women. 
  • The stereotype is that if you have ADHD, you should do badly in your studies. For example, a woman struggled to get an ADHD diagnosis because she had completed a PhD.
  • Women mask neurodivergent behaviours better than men. Masking requires a lot of effort and it’s very taxing. 
  • We need more openness about neurodiversity in the workplace.

Manchester Tech Festival

On Wednesday 1st November, I delivered a talk in the Women in Tech & Tech for Good track at the Manchester Tech Festival.

The title of my talk was “Seven Counterintuitive Secrets to a Thriving Career in Tech” and the purpose was to share with the audience key learnings from my career in tech across 3 continents, spearheading several DEI initiatives in tech, coaching and mentoring women and people from underrepresented communities in tech, as well as writing a book about how women succeed in tech worldwide.

First, I debunked common beliefs such as that there is a simple solution to the lack of women in leadership positions in tech or that you need to be fixed to get to the top. Then, I presented 7 proven strategies to help the audience build a successful, resilient, and sustainable career in tech.

I got very positive feedback about the talk during the day and many women have reached out on social media since to share how they’ve already started applying some of the strategies.

Some takeaways from other talks:

I loved Becki Howarth’s interactive talk about allyship at work where she shared how you can be an ally in four different aspects:

  • Communication and decision-making — think about power dynamics, amplify others, don’t interrupt, and create a system that enables equal participation.
  • Calling out (everyday) sexism — use gender-neutral language, you don’t need to challenge directly, support the recipient (corridor conversations). 
  • Stuff around the edges of work — create space for people to connect organically, don’t pressure people to share, and rotate social responsibilities so everyone pulls their weight.
  • Taking on new opportunities — some people need more encouragement than others, and ask — don’t assume.

The talk of Lydia Hawthorn about postpartum depression in the workplace was both heartbreaking and inspiring. She provided true gems of wisdom:

  • Up to 15% of women will experience postpartum depression.
  • Talk about the possibility of postpartum depression before it happens.
  • Talk to your employer about flexible options.
  • Consider a parent-buddy scheme at work.
  • Coaching and therapy can be lifesaving.

Amelia Caffrey gave a very dynamic talk about how to use ChatGPT for coding. For me, one of the most interesting aspects she brought up is that there is no longer an excuse for writing inaccessible code. For example, you can add to the prompt the requirement that the code must be accessible to people using screen readers.

Finally, one of the most touching talks was from Eleanor Harry, Founder and CEO of HACE: Data Changing Child Labour. Their mission is to eradicate child labour in company supply chains.

There were 160 million children in child labour as of 2020. HACE is launching the Child Labour Index, the only quantitative metric in the world for child labour performance at a company level. Their scoring methodology is based on cutting-edge AI technologies combined with HACE’s subject-matter expertise. The expectation is that the index will provide the investor community with quantitative leverage to push for stronger company performance on child labour.

Eleanor’s talk was an inspiring example of what tech and AI for good look like.

Back to you

With so many men competing in the news, social media, and bookstores for your attention, how are you making sure you give other women’s wisdom the consideration it deserves?

Work with me — My special offer

“If somebody is unhappy with your life, it shouldn’t be you.”

You have 55 days until the end of 2023. I dare you to

  • Leave behind the tiring to-do list imposed by society’s expectations.
  • Learn how to love who you truly are.
  • Become your own version of success.

If that resonates with you, my 3-month 1:1 coaching program “Upwards and Onwards” is for you.

For £875.00, we’ll dive into where you are now and the results you want to create, we’ll uncover the obstacles in your way, explore strategies to overcome them, and implement a plan.

Contact me to explore how we can work together.

Artificial intelligence’s impact on the future of the female workforce

Portrait of a simulated middle-aged white woman against a black background. The scene is refracted in different ways by a fragmented glass grid. This grid is a visual metaphor for the way that artificial intelligence (AI) and machine learning technologies can be used to simulate and reflect the human experience in unexpected ways. A distorted neural network diagram is overlaid, familiarising the viewer with the formal architecture of AI systems.
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0.

I was delighted to be interviewed by John Leonard at ​Computing​ – a source for end-user IT news, analysis and insight around the world – about my talk ​Automated out of work: AI’s impact on the female workforce​ at the Women in Tech Festival on Tuesday October 31st in London.

I reproduce below the interview. You’ll find at the end additional reflections framed as Q&A.

Interview

Patricia Gestoso is an award-winning technologist and inclusion strategist with over 20 years of experience in digital transformation, with a focus on client service, artificial intelligence, and inclusive and ethical design of technology and workplaces.

Patricia will be giving a talk about the impact of AI on the workplace and workers at the Women in Tech Festival in October. We do hope you’ll be able to join us.

In the meantime, we caught up with Patricia and asked her to give us a taster.

How did you become interested in the topic of AI?

As a Director of Support for a scientific and engineering software corporation, I see how AI helps our customers every day to accelerate drug discovery, clinical trials, and research on new materials.

On the flip side, as an inclusion strategist and collaborator on initiatives such as the ​Race and AI toolkit​ and ​Better Images of AI​, I’m also aware of the different ways in which AI helps encode and automate biases.

That’s the reason why in the last three years I’ve been actively fostering discussion about the benefits and challenges that AI brings to inclusion, equity, and sustainability on ​social media​ as well as through ​keynotes​ and ​articles​.

Your talk is titled: “Automated out of work: AI’s impact on the female workforce”. Are women likely to be disproportionately affected in the next wave of automation?

It’s important to take a step back and see where those predictions that women are more likely to be negatively affected in the next wave of automation come from. They rest on several assumptions.

First, that certain sectors will be more impacted than others; then, that the impact on those sectors will fall on the less skilled workers; next, that those workers are women; and finally, that people prefer to interact with machines rather than with humans.

On the flip side, we have other studies that tell us that the most impacted will be white-collar workers like software engineers – who are overwhelmingly men – or lawyers – where which gender is overrepresented depends on the practice area.

In case this was not contradictory enough, we’re also told that the roles that AI won’t displace will be those that are related to soft skills and studies show that women are great at those – collaboration, listening, and championing a common plan.

The reality is that when we look at who’s already impacted by automation, it’s easy to argue that it’s mostly men: workers at Amazon’s warehouses, Uber drivers, or Deliveroo riders. Their work is scheduled and constantly monitored by AI. Moreover, when we look at who’s raising the alarm about generative AI stealing their jobs right now, we see book authors, screenwriters, and actors. Again, professions that are far from falling into the “female job” category.

For me, talking about the next wave of automation disproportionately affecting women is to deflect from the reality that AI is already affecting the workforce dramatically right now. And it’s not fortuitous. It’s the old strategy of “divide and conquer”. By saying “it’ll be worse in the future and women’s jobs will be the most affected,” it aims to keep men quiet with the false premise that they should conform because their jobs are “safe”.

Are there ways that women and other underrepresented groups can harness the technology to their advantage to mitigate some of these scenarios? If so what do they need to do and where should they start?

I’ll go into more detail in my talk, but there are three obvious areas where women and underrepresented groups can harness technology to their advantage.

First, increasing their negotiation power. If we look at the Industrial Revolution, the disruption was massive: loss of jobs, exhausting work schedules, child labour. What changed the game? Unions. This is no different now with Amazon workers and screenwriters. Social platforms and digital tools such as apps are powerful means to organise resistance.

Next, learning about AI. Ignoring new technology is not the answer because AI is not going away anytime soon. However, when I say learning, I’m not necessarily suggesting becoming an AI software developer. I’m talking about following the major trends in AI, understanding how they impact your industry – the major risks and possible rewards – and getting involved in projects that explore the capabilities AI can bring to your business.

Finally, discovering how AI can augment you as a professional. We see a lot in the media about the need to learn about how to work “for” or “with” AI. For me, the key is to learn how you can use AI tools to strengthen your capabilities.

Tech has a tendency to concentrate power and wealth in the hands of the already rich and powerful. Is AI likely to continue or even exacerbate this tendency?

AI is already benefiting those who have privileges and disadvantaging those who face more challenges. The Race and AI toolkit mentioned previously showcases many examples where non-White people are consistently sidelined by AI in areas such as healthcare, education, and justice.

The reason? Garbage in, garbage out. We’re feeding AI data that is generated by narrow sectors of the population and that doesn’t reflect our diversity or values as a society.

Unfortunately, attempts to limit the reach of AI tools are seen as attempts to stop progress. No different than what happened to Luddites 200 years ago. The reality is that tech is playing to our FOMO – [fear of missing out] anxiety – telling us we either let AI run wild or we’ll miss out on new drugs and cure cancer. To me, that’s akin to saying, you either let fire run wild or you won’t have fire at all. We’ve survived because we decided that we’re happy to have fire to cook and heat ourselves but that if it goes to our curtains we’ll put it out. AI shouldn’t be treated differently.

Who do you hope to reach with your keynote at the Women in Tech Festival?

I hope my talk reassures those who are frightened that AI will take their jobs that they are not powerless. I also aim to provide actionable strategies to incorporate AI into their professional careers to those that are wondering how to jump on the AI bandwagon. Finally, I hope to reach out to those who are curious about exploring alternative futures to dystopia and utopia, where rather than humans in the loop, humans are in the driving seat and machines are in the loop.

Additional reflections on women, work, and AI

What are your concerns regarding how AI will affect the future of work for women?

The main one is deskilling. To understand the concept, it is useful to remember the Luddite movement that I mentioned above.

The Luddites were British weavers and textile workers who objected to the increased use of mechanised manufacturing at the beginning of the 19th century.

Most were trained artisans who had spent years learning their craft, and they feared that unskilled machine operators were robbing them of their livelihood. As you can see, their problem was not the technology itself but the deskilling of workers.

And I could see how that may happen to women in the future. For example, those with university degrees in computing could be offered work as “prompt engineers” when they come back from maternity leave, with the resulting career and salary demotion. Or administrative professionals may get relegated to fact-checking and improving reports produced by generative AI applications, making their contribution “invisible”.

Is technology an enemy of women?

Technology has enabled women to get financially remunerated for their work. Consider the washing machine, tap water, and electricity. In places where those technologies are not available, women spend their days making up for it – typically for free.

The problem has always been that women have only been able to benefit from technology when it suited men.

For example, during the Industrial Revolution, women and children worked for less pay, which was very profitable for companies.

Women tended to receive between one-third to one-half of a man’s average salary. As the manufacturing industries began to grow, they would take advantage of these low average salaries amongst women and children. The ability to employ these women and children for little pay proved to be very beneficiary to these companies. Many industries exploited these people’s need for money, as they would turn a major profit in exchange for very cheap labor. Tasks such as printing, spinning, and other duties commonly learned at home were easy jobs to learn and were some of the most profitable.

Foundations of Western Culture course at the University of Wisconsin-Green Bay​

As we can see, both the gender pay gap and genderisation of work were already at the core of the Industrial Revolution.

Another example is the tech sector. In the 1930s, women were hired to solve mathematical problems that were considered repetitive work at the time. Some of those calculations were as complex as determining how to get a human into space and back. When computers took off in the 1960s, women became the programmers while men focused on the hardware, which was regarded as the most challenging work.

However, as programming gained status during the 1980s, men pushed women out of those jobs. That prompted a sharp increase in software developers’ salaries, institutionalising patriarchy and the gender pay gap.

The same with AI. We like to anthropomorphise artificial intelligence to deflect our responsibility. We say “AI will automate jobs” or “AI will replace people” but the reality is that those decisions are and will be taken by humans.

In summary, the enemy of women’s paid work is not technology but other human beings who see that work as “nice to have” and not deserving of the same remuneration as men’s. Human beings are also the ones who decide that caregiving for family members is “not a job”.

The biggest threat to women’s work is not AI. It’s patriarchy feeling threatened by AI.

Patricia Gestoso

Feminist Tech Career Accelerator

Three things are keeping you from getting the tech career you deserve

Your Brain * Your Education * Patriarchy

Thrive In Your Tech Career With Feminist Guidance

Achieve your career goals * Work smart * Earn more

Click below to learn more about the Feminist Tech Career Accelerator

How to move diversity, inclusion and equity forwards, three articles at a time

I feel I’ve been neglecting the readers of my blog – that is, YOU – this year.

On the bright side, I have continued to embed diversity, equity, and inclusion in organisations, technology, and workplaces through opinion articles and fiction.

I’m delighted to share with you that my writing has been featured in three magazines in the last three months.

Artificial Intelligence and the Global South

Scattered white plastic figures resembling humans sitting at tables in front of laptops. The white background makes their environment look bleak.
Max Gruber / Better Images of AI / Clickworker Abyss / Licenced by CC-BY 4.0

In September, the economics e-magazine The Mint published my article How artificial intelligence is recolonising the Global South.

In the 5-min piece, I discuss how the Global North exploits poverty and weak laws in the South to accelerate its digital transformation.

Have you ever asked yourself:

  • Who moderates our social media?
  • Who annotates the images for our self-driving cars?
  • Who extracts the metals needed for our smartphones?
  • In which populations are AI algorithms tested?

Being accountable for the books we read

A computer-generated photographic style image showing piles of distorted books with some surreal landscape features in the immediate foreground, such as a kind of beach and games board. The books merge into each other in an impressionistic, digitally blurred way, and rising out of them and taking up the main part of the image is a huge undefined concrete structure topped with more books and folders that get bigger as they go up.
jbustterr / Better Images of AI / A monument surrounded by piles of books / Licenced by CC-BY 4.0

In October, Certain Age Magazine published The DEI Booklist: Five books to think and act differently, where I reflect on the fact that whom we read matters as much as what we read.

In the article, I review 5 books:

  • Rage Becomes Her: The Power of Women’s Anger by Soraya Chemaly
  • Care Work: Dreaming Disability Justice by Leah Lakshmi Piepzna-Samarasinha
  • Data Feminism by Catherine D’Ignazio and Lauren F. Klein
  • Whipping Girl: A Transsexual Woman on Sexism and the Scapegoating of Femininity by Julia Serano
  • Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford

I also share how I overcame the inertia of only reading books written by White, able, American, heterosexual cis-men.

Scoop: It took two years!

Using short fiction to get people talking about emerging technology

Black and white photographs of the faces of White people scattered across a white background and grouped by similarity.
Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / Licenced by CC-BY 4.0

Last week, the Medium magazine The Lark published my second short fictional story, The Life of Data Podcast. As in the previous one – The Graduation – I’ve used future fiction to question the interplay between humans and technology, specifically AI.

Have you ever wondered what happens to your photos circulating on social media? That’s what I explored in this 10-min short fictional story.

In a nutshell, I imagined what the data from the digital portrait of a Black schoolgirl would share about how it moves inside our phones, computers, and networks if it were invited to speak on a podcast.

How does the story resonate with you?

And the cherry on the cake

In August 2022, I was featured in the Computer Weekly 2022 longlist of the most influential women in UK tech.

Each year, Computer Weekly publishes the longlist of all of the women put forward to be considered for its list of the top 50 Most Influential Women in UK Tech.

And I was nominated!

Looking at the names of the other 600 women in the UK who were nominated as well was such a boost of energy! Among them, I found great role models, IT leaders, community builders, and amazing rising stars.

One thing I love about the list is that it isn’t only women in software development who were nominated, dispelling the myth that tech is only about coding. Tech is so much more! Women investors, CEOs, COOs, non-tech founders…

If you’re unsure if there is a place for you in tech, please have a look at the list and get inspired. We’re waiting for you!


As I mentioned in a previous post, I’m writing a book and I need your help!

I’d be immensely grateful if you could complete and/or share with your network of women in tech this short survey about your/their experiences at work.

What do I mean by “Women in Tech”? Women working in any function (R&D, HR, services, finance, CXO) in the tech sector (software, hardware…) or in tech-related functions in other sectors (e.g. IT, cybersecurity…).

Whilst the survey is anonymous, you’ll have the option to get involved in the project before submitting the form. Thanks for your support!


Inclusion is a practice, not a certificate!

Library of missing datasets: Are you being digitally excluded?

A file cabinet with four drawers, one of them is opened and empty. At the right of the file cabinet, there is the sentence “whose data are we missing?” with an arrow pointing to the empty drawer.
Image by OpenClipart-Vectors from Pixabay, adapted by Patricia Gestoso.

(7 min read)

Data protection and privacy regulations like GDPR, the pervasiveness of social media, and the boom of artificial intelligence have prompted debates among academic, governmental, commercial, and non-profit organisations about our rights to own our data and how that data is used to sell us stuff and surveil us. These discussions often forget whose and which data we are missing.

My research on the effect of covid-19 on the unpaid work of professional women made me painfully aware of the gap between intent and impact when we talk about collecting data. The dataset that constitutes the basis of the report came from 1,300+ responses from mostly White women to a survey. We had relied on snowballing – our network – to get more women to answer the survey. Unsurprisingly, our network looked like us!

This mishap prompted my interest in the harms of missing or incomplete datasets – both in general and in the case of children.

Recently, I found somebody who has done a great job of using art to raise awareness about missing datasets.

The Library of Missing Datasets

Mimi Ọnụọha is a Nigerian-American artist and researcher whose work highlights the social relationships and power dynamics behind data collection.

She has created a Library of Missing Datasets. In her words:

“Missing data sets” are my term for the blank spots that exist in spaces that are otherwise data-saturated. My interest in them stems from the observation that within many spaces where large amounts of data are collected, there are often empty spaces where no data live. Unsurprisingly, this lack of data typically correlates with issues affecting those who are most vulnerable in that context.

Mimi Onuoha

Why should we care? Onuoha believes that “what we ignore reveals more than what we give our attention to. It’s in these things that we find cultural and colloquial hints of what is deemed important. Spots that we’ve left blank to reveal our hidden social biases and indifferences.”

She compiles a list of missing or incomplete datasets. Some examples are:

  • People excluded from public housing because of criminal records.
  • Trans people killed or injured in instances of hate crime (note: existing records are notably unreliable or incomplete).
  • Poverty and employment statistics that include people who are behind bars.
  • Muslim mosques/communities surveilled by the FBI/CIA.
  • Mobility for older adults with physical disabilities or cognitive impairments.
  • Undocumented immigrants currently incarcerated and/or underpaid.
  • Firm statistics on how often police arrest women for making false rape reports.

Onuoha has created a version 2.0, where she focused on blackness. She says, “Black folks are both over-collected and under-represented in American datasets, featuring strongly as objects of collection but rarely as subjects with agency over collection, ownership, and power.”

I found the images of the file cabinets with the drawers open, showing the tagged empty folders, very thought-provoking. You can check out the initial project and the 2.0 version yourself.

Some of the datasets I’m missing or existing records are incomplete

  • Women who have not been promoted, despite meeting all the requirements, because of bias.
  • Disabled people who have been discriminated against by hiring algorithms.
  • People who have been unfairly denied work permits and residence visas.
  • Children with long covid.
  • LBTQ+ people who fear coming out because of backlash.
  • People in Venezuela who have endured “express” kidnappings.

Back to you

  • Which datasets are you missing?
  • Which datasets are missing you?

Before I go

For reflection

Diversity is not a magic bullet for fixing inequity. For those still doubting it: in this edition of The Flock with Jennifer Crichton newsletter, Gemma Doswell reflects on the relatively broad gender and ethnic diversity of the candidates for the Tory leadership in the UK, and on how we assume that it should automatically translate into advocacy for their visible identities.

A boost of energy

Mastercard now links all employee bonuses to ESG goals!

In 2021, the company introduced a compensation model for executives tied to three main Environmental, Social and Corporate Governance priorities: carbon neutrality, financial inclusion, and gender pay parity. This year they have rolled the scheme out to all employees globally.

News from me

Earlier this year, I went to Edinburgh to deliver a workshop at the Scottish AI Summit called Goodbye shiny robots & glowing brains: Why Better Images of AI matter. This was in the context of my work as Head of Diversity, Equity, and Inclusion at We and AI and my participation in the Better Images of AI project.

The workshop was delivered both in-person and online with Tania Duarte, Co-Founder and CEO of We and AI, and Tristan Ferne, executive producer at BBC Research & Development. You can watch it on the summit’s website.

Do you prefer a podcast? You can listen to Tania and me discussing with Steph Wright why better images of AI matter and the reasons we need trustworthy, ethical, and inclusive AI on this episode of Scotland’s AI Strategy podcast, Turing’s Triple Helix.


PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on ​learning about it because you think you’re not “smart enough”?

I’ve got you covered.

Four ways we ignore children when discussing digital inclusion

Two teenage girls portrayed against a wall with multiple surveillance cameras pointing at them. The girls look back at the cameras. Image by StockSnap from Pixabay

(5 min read)

Children are an afterthought in our digital inclusion plans.

We talk about the importance of embedding diversity, inclusion, and ethics in technology as a prerequisite for a digital future that works for everybody. The conversation is framed in the context of identities – gender, ethnicity, sexual preferences, culture. However, we have forgotten children. I’m talking about children’s data privacy and their vulnerability to tech tools, especially those powered by artificial intelligence (AI).

In this article, I share four areas where we’re letting children down and how the power of framing data as money can help us to proactively include them.
