
The Truth About Women, AI, and Confidence Gaps

A black-and-white surrealist collage of a classroom lecture. The center features an oversized computer keyboard with the two keys “A” and “I” highlighted in red. In the foreground, a vintage illustration of a woman in historical attire kneels as she interacts with the keyboard. Behind her, an audience of Cambridge students are seated in rows observing the lecture.

Hanna Barakat & Cambridge Diversity Fund / Analog Lecture on Computing / Licensed under CC BY 4.0

More than twenty years ago, I joined a medium-sized software company focused on scientific modelling, working as a trainer. I knew the company and some of its products very well: I had been their customer.

First during my PhD in computational chemistry, then as an EU postdoctoral researcher coding FORTRAN subroutines to simulate the behaviour of materials, and finally as a modelling engineer at a large chemical company.

As I started my job as a materials trainer, I had to learn about other software applications that I hadn’t used before or knew less well. One of them related to what we called at the time “statistics”: methods to predict the properties of new materials.

Some of those “statistical methods” were neural networks and genetic algorithms, part of the field of artificial intelligence. But I was not keen on developing the material for that course. It felt like a waste of time for several reasons.

First, whilst those methods were already popular among life science researchers, they were not very helpful to materials modellers — my customers. Why? Because large, good datasets were scarce for materials.

Case in point: I still remember one customer who was excited about using these algorithms to develop new materials in his organisation. With a sinking feeling from similar conversations, I asked him, “How many data points do you have?” He said, “I think I have 7 or 10 in a spreadsheet.” Unfortunately, I had to inform him that this was not nearly enough.
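To see concretely why 7-10 points are nowhere near enough, here is a minimal sketch in plain NumPy, using a hypothetical one-descriptor dataset (the curve, noise level, and model are all illustrative assumptions, not the customer's actual data). A model with as many parameters as data points can fit all of them perfectly and still be useless on new materials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "spreadsheet": 7 noisy measurements of a property vs. one descriptor.
x_train = np.linspace(0.0, 1.0, 7)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, 7)

# A flexible model (degree-6 polynomial = 7 free parameters) passes through
# every training point, so the training error is essentially zero...
coeffs = np.polyfit(x_train, y_train, deg=6)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# ...but on unseen points from the same underlying curve it does far worse:
# the model memorised the noise instead of learning the trend.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}  test MSE: {test_mse:.2e}")
```

With only a handful of points, any sufficiently flexible method (a neural network even more so than a polynomial) memorises rather than generalises, which is why large, good datasets were the bottleneck.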

Second, the course was only half a day long, which made it impractical to deliver in person, the way all our workshops had been offered for years. Our experience told us that in 2005, nobody would fly to Paris, Cambridge, Boston, or San Diego for a 4-hour training event on “statistics”.

The solution? It was decided that this course would be the first to be delivered online via a “WebEx”, the great-grandparent of Zoom, Teams, and Google Meet. That was not cool at all.

At the time, we had little faith in online education for three reasons.

  • Running the webinars was very complex; they took ages to set up and schedule, and there were always connection glitches.
  • There were no “best practices” for delivering engaging online training yet; as a result, we trainers felt as if we were shortchanging the clients we were meant to teach.
  • We believed that scientific and technical content was “unteachable” online.

After such a less-than-amazing start at teaching artificial intelligence online, you’d have thought I was done.

I thought so, too. But I’ve changed my mind. It hasn’t happened overnight, though.

It has taken two decades of experience teaching, using, and supporting AI tools in my corporate job, 10+ years as a DEI trailblazer, and my activism for sustainable AI for the last four years to realise that if we want systemic equality, it’s paramount we bridge the gender gap in AI adoption.

And it has also helped that I now have 20 years of experience delivering engaging online keynotes, courses, and masterclasses.

This is the story of why, this September, I’m launching Women Leading with AI: Master the Tools, Shape the Future, an eight-session virtual group program in inclusive, sustainable, and actionable AI for women leaders.

AI and Me

At Work

After training, I moved to the Contract Research department. There, I had the opportunity to design and deliver projects that used AI algorithms to get insights into new materials and their properties.

Later on, I became Head of Training and Contract Research and afterwards, I moved to supporting customers using our software applications for both materials and life sciences research.

Whilst there were exciting developments in those areas, most of our AI algorithms didn’t get much love from our developers or customers. After all, they hadn’t substantially improved for ages.

Then, a few years ago, everything changed.

In the life sciences, AI algorithms made it possible to predict protein structures, earning their creators the Nobel Prize. Those models have been used in pharmaceutical and environmental technology research and were available to our customers.

We also developed applications that used AI algorithms to help accelerate drug discovery. It was hearing from clients working on cancer treatments how AI had positively broadened the kinds of drugs they were considering that moved me from AI-neutral to AI-positive.

In materials science, machine learning force fields are also bridging the gap between quantum and classical simulation, making it possible to simultaneously model chemical reactions (quantum) in relatively large systems (classical).

In summary, my corporate job taught me that scientific research can benefit massively from the development of AI tools beyond ChatGPT.

As a DEI Trailblazer

Tired of tech applications that made users vulnerable and denied the diversity of their experiences, in 2019 I launched the Ethics and Inclusion Framework.

The idea was simple: a free tool to help tech developers identify, prevent, mitigate, and account for the actual and potential adverse impacts of the solutions they develop. The approach is general, so it can be used for any software application, including AI tools.

The feedback was very positive, and the framework was featured by the Cambridge Engineering Design Centre and in research papers on ethical design.

It was while running a workshop on the framework that I met Tania Duarte, the founder of We and AI, an NGO working to encourage, enable, and empower critical thinking about AI.

I joined them in 2020, and it has been a joy to contribute to initiatives such as

  • The Race and AI Toolkit, designed to raise awareness of how AI algorithms encode and amplify the racial biases in our society.
  • Better Images of AI, a thought-provoking library of free images that more realistically portray AI and the people behind it, highlighting its strengths, weaknesses, context, and applications.
  • Living with AI, the e-learning course of the Scottish AI Alliance.

Additionally, as a founder of the gender employee community at my corporate job a decade ago, I’ve chaired multiple insightful meetings where we’ve discussed the impact of AI algorithms on diversity, equity, and inclusion.

As a Sustainability Advocate

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.
Clarote & AI4Media / Labour/Resources / Licensed under CC BY 4.0

In 2021, the article Sustainable AI: AI for sustainability and the sustainability of AI made me aware that we were discounting the significant energy consumption and carbon emissions involved in developing AI models.

I was on a mission to make others aware, too. I still remember my keynote at the Dassault Systèmes Sustainability Townhall in 2021, when I shared with my co-workers the urgency to think about the materiality of AI — you can watch here a shorter version I delivered at the WomenTech Conference in 2022.

I’ve also written about how the Global North exploits the Global South’s mineral resources to power AI, as well as how tech companies and governments disregard the energy and water consumption from running generative AI tools.

Lately, I’ve looked into data centres — which are vital to cloud services and hence to the development and deployment of AI. Given that McKinsey forecasts that they’ll triple in number by 2030, it’s paramount that we balance innovation and environmental responsibility.

AI and Women

Women make up half of the planet’s population, and they have been affected by AI developments: typically not as the ones profiting from the technology, but as the ones bearing the brunt of it.

Women Leading AI

Unfortunately, it often appears as if the only contribution women ever made to technology was Ada Lovelace’s, in the 19th century. Artificial intelligence is no exception: the contributions of women to AI have been regularly downplayed.

In 2023, the now-infamous article “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement” showcased 12 men. Not even one woman in the group.

The article prompted immediate criticism, along with “counter-lists” of women who have been pivotal in developing AI and uncovering its harms. Still, women are not seen as “AI visionaries”.

And it’s not only society that disregards women’s expertise on AI — women themselves do that.

In 2023, I was collaborating with an NGO that focuses on increasing the number of women in leadership positions in fintech. They asked me to chair a panel at their annual conference and gave me freedom to pick the topic. I titled the panel “The role of boards driving AI adoption.”

In alignment with the NGO’s mission, we decided that we’d have one male and two female panellists.

Finding a great male expert was quick. Finding two female AI experts was long and excruciating.

And not because of the lack of talent. It was a lack of “enoughness.”

For three weeks, I met women who had solid experience working in teams developing and implementing strategies for AI tools. Still, they didn’t feel they were “expert enough” to be on the panel.

I finally found two smashing female AI experts, but the search opened my eyes to the need to get more women on boards learning about AI tools, as well as their impact on strategy and governance.

That was the rationale behind launching the Strategic AI Leadership Program, a bespoke course on AI Competence for C-Suite and Boards. The feedback was excellent and it filled me with pride to empower women in top leadership positions to have discussions about responsible and sustainable AI.


Weaponisation of AI

Sycophantic chatbots can hide the fact that, at its core, AI is a tool that automates and scales the past.

As such, it has been consistently weaponised as a tool of misogyny, with its harms dismissed as unconscious bias or blamed on the lack of diversity in datasets.

And I’m not talking only about “old” artificial intelligence. Generative AI is massively reinforcing harmful stereotypes and is being weaponised against women and underrepresented groups.

For example, 96% of deepfakes are of a non-consensual sexual nature and 99% of the victims are women. Who profits from them? Porn websites, payment processors, and big tech.

And chatbots are great enablers of propagating biases.

New research has found that ChatGPT and Claude consistently advise women to ask for lower salaries than men, even when both have identical qualifications.

In one example, ChatGPT’s o3 model was prompted to advise a female job applicant and suggested requesting a salary of $280,000. In another, the researchers used the same prompt for a male applicant, and this time the model suggested $400,000.
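Putting the two figures from that example side by side makes the scale of the disparity plain (simple arithmetic on the numbers quoted above):

```python
# Salary advice reported for identical qualifications, differing only by gender.
female_advice = 280_000  # suggested ask for the female applicant, in dollars
male_advice = 400_000    # suggested ask for the male applicant, in dollars

gap = male_advice - female_advice   # absolute difference in dollars
relative_gap = gap / male_advice    # fraction below the advice given to the man

print(f"${gap:,} less, i.e. {relative_gap:.0%} below the male figure")
```

That is a $120,000 difference: the woman was advised to ask for 30% less for the same CV.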

In summary, not only does AI foster biases but it also helps promote them on a planetary scale.

My Aha Moment

Until recently, my focus had been to empower people with knowledge about how AI algorithms work, as well as AI strategy and governance. I had avoided teaching generative AI practices like the plague.

That was until a breakthrough during the month of July. It came as the convergence of four aspects.

Non-Tech Women

A month ago, I delivered the keynote “The Future of AI is Female” at the Women’s Leadership event Phoenix 2, hosted by Aspire.

In that session, I shared with the audience two futures: one where AI tools are used to transform us into “productive beings” and another one where AI systems are used to improve our health, enhance sustainability, and boost equity.

It’s a no-brainer that everybody thought the second scenario was better. But it was also very telling that nobody believed that it was the most probable.

After the keynote, many attendees reached out to me and asked for a course to learn how AI could be used for good and in alignment with their values.

Other women who didn’t attend the conference also reached out to me for guidance on AI courses to help them strengthen their professional profiles beyond “prompting”.

Unfortunately, I wasn’t able to recommend a course that incorporates both practical knowledge about AI and the fundamentals of how it shapes areas such as sustainability, DEI, strategy, and governance.

Women In Tech

As I mentioned above, I’m the founder of the gender employee community at my corporate job, and for 10 years, we’ve been hosting regular meetings to discuss DEI topics.

For our July meeting, I wanted us to have an uplifting session before the summer break, so I proposed to discuss how AI can boost DEI now and in the future.

I went to the meeting happily prepared with my list of examples of how artificial intelligence was supporting diversity, equity, and inclusion. But I was not prepared for how the session panned out.

Over and over, the examples shared showcased how AI was weaponised against DEI. Moreover, when a positive use was shared, somebody quickly pointed out how that could be used against underrepresented groups.

This experience made me realise that, as well as thinking through the challenges, DEI advocates need the time and the tools to think about how AI can purposefully drive equity.

Women In Ethics

I have the privilege of knowing many women experts in ethical AI, with relevant academic backgrounds and professional experience.

With all the talk about responsible AI, you’d think that they are in high demand. They aren’t.

In July, my LinkedIn feed was full of posts from ethics experts — many of them women — complaining about what I call “performative AI ethics”: organisations praising the need to embed responsible AI without creating the necessary roles.

But is that true? Yes, and no.

Looking at advertised AI jobs, I noticed a tendency for expertise in ethics to appear as an add-on to “Head of AI” roles that are eminently technical at their core: their key requirement is experience designing, deploying, and using AI tools.

In other words, technical expertise remains the gatekeeper to responsible AI.

A pixelated black-and-white portrait of Ada Lovelace where the arrangement of pixels forms intricate borders and repeating patterns. These designs resemble the structure and layout of GPU microchip circuits, blending her historical contributions with modern computational technology.
Hanna Barakat & Cambridge Diversity Fund / Lovelace GPU / Licensed under CC BY 4.0

Women And The Gender AI Adoption Gap

As I mentioned in my recent article “A New Religion: 8 Signs AI Is Our New God”, it has been taken as dogma that women are behind in generative AI adoption because of lower confidence in their ability to use AI tools effectively and a lack of interest in this technology.

But a recent Harvard Business School working paper, Global Evidence on Gender Gaps and Generative AI, synthesising data from 18 studies covering more than 140,000 individuals worldwide, has provided a much more nuanced understanding of the gender divide in generative AI.

When compared to men, women are more likely to

  • Say they need training before they can benefit from ChatGPT, and perceive AI usage in coursework or assignments as unethical or equivalent to cheating.
  • Agree that chatbots should be prohibited in educational settings, and be more concerned about how generative AI will impact learning in the future.
  • Perceive lower productivity benefits of using generative AI at work and in job search.
  • Agree that chatbots can generate better results than they can on their own.

Moreover, women are less likely to agree that chatbots can improve their language ability, and less likely to trust generative AI over traditional human-operated services in education and training, information, banking, health, and public policy.

In summary, women correctly understand that AI is not “neutral” or a religion to be blindly adopted and prefer not to use it when they perceive it as unethical.

There is more. In the HBR article Research: The Hidden Penalty of Using AI at Work, researchers reported an experiment with 1,026 engineers in which participants evaluated a code snippet that was purportedly written by another engineer, either with or without AI assistance. The code itself was the same — the only difference was the described method of creation (with/without AI assistance).

When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average: a 6% penalty for men and 13% for women.

The authors posit that this happens through a process called social identity threat.

When members of stereotyped groups — for example, women in tech or older workers in youth-dominated fields — use AI, it reinforces existing doubts about their competence. The AI assistance is framed as “proof” of their inadequacy rather than evidence of their strategic tool use. Any industry dominated by one segment over another is likely to witness greater competence penalties on minority workers.

The authors offer senior women openly using AI as one solution for bridging the gap:

Our research found that women in senior roles were less afraid of the competence penalty than their junior counterparts. When these leaders openly use AI, they provide crucial cover for vulnerable colleagues.

A study by BCG also illustrates this dynamic: when senior women managers lead their male counterparts in AI adoption, the adoption gap between junior women and men shrinks significantly.

Basically, we need to normalise women using—and leading—AI.

My Bet: Women Leading with AI

Through my July of AI breakthroughs, I learned that

  • The gender gap in generative AI is real, and the causes are much more complex than a lack of confidence.
  • The absence of access to training and sustainable practices is a factor contributing to that gender gap.
  • Women are eager to ramp up on AI provided that it aligns with their values.
  • To be considered by organisations to lead responsible AI, it’s imperative to show mastery of the tools.

This coalesced in a bold idea:

What if I teach women how to use AI within an ethical, inclusive, and sustainable framework?

What if I developed a program where they could understand both how AI tools work and their impact on areas such as the future of work, DEI, strategy, and governance, while building hands-on expertise with practical examples?

And this is how my virtual group program, Women Leading with AI: Master the Tools, Shape the Future, was born.

About the Program:

A structured, eight-session program for women leaders focused on turning AI literacy into strategic results. Explore AI foundations and the impact of artificial intelligence on the future of work, DEI, sustainability, data, and cybersecurity — paired with generative AI workflows, templates, exercises, and decision frameworks to translate learning into real-world impact. The blend of live instruction, quizzes, and peer support ensures you emerge with both critical insight and a toolkit ready to lead impactfully in your role.

The program starts mid-September and you can read the details following this link.

I cannot wait for you to join me in making the future of AI female.

Have a question? Message me on LinkedIn or drop me a line.


BONUS

[Webinar Invitation] Ethical AI Leadership: Balancing Innovation, Inclusion & Sustainability

Join me on Tuesday, 12th August for a practical, high-value webinar tailored for women leaders committed to harnessing AI’s power confidently, ethically, and sustainably. 

You will leave the session with actionable insight into how AI intersects with environmental impact, leadership values, and equity.

Why attend?

• Uncover key barriers women face in using AI.

• Discover the hidden cost of generative AI—from energy consumption to bias.

• Participate in an interactive real-world case study where you evaluate AI trade-offs through DEI and sustainability frameworks.

• Gain practical guidance on how to minimise footprint while harnessing generative AI tools more responsibly.

Date: Tuesday 12th August 

Time: 13:00 London | 14:00 Paris | 8:00 New York

You can register following this link.

This is a taster of my program “Women Leading with AI: Master the Tools, Shape the Future”, starting mid-September

Break Free from Self-Sabotage: 5 Language Mistakes Holding You Back

I speak three languages — English, French, and Spanish — and have lived in six countries: Canada, France, Greece, Spain, the UK, and Venezuela.

Many things are different in my experience as a woman in those countries. Still, one that remains a constant across languages and territories is how women’s speech patterns serve the patriarchy.

What!?!

Yes. We undermine our ideas, wants, and needs by expressing them in a way that detracts from our credibility, minimises the ask, and asks for permission.

As they say, good writing is about “showing”, not “telling”, so I won’t waste your time elaborating on why you do this.

Instead, I will show you five ways you sabotage yourself and what to do instead.

The advice I’m sharing with you today is based on my experience coaching and mentoring hundreds of women in tech.

Disqualifying Yourself or Your Ideas In Advance

The credibility killer sentence: “I’m not an expert”.

Recently, I was speaking with an accomplished woman about her Master’s degree work. I wanted to learn more about it, so I asked her, “As an expert in this topic, what’s your opinion about [X]?”

And guess what? Her reply started with, “I’m not an expert but…”.

My heart sank with disappointment. I’ve heard this so many times.

But I know the cure for it: Awareness. So, I asked her

“Don’t you think you have more expertise than me on this topic? I told you I’d only read a couple of articles about it.”

She said “Yes” and smiled.

I smiled, too. I’d proven my point.

Unfortunately, I’ve seen this happen repeatedly throughout my career: women diminish their credibility before stating their opinions on subjects they are experts in, or at least know much more about than their interlocutor.

Saying “I’m not an expert” tells your audience:

  • Don’t believe me
  • Don’t judge me
  • Don’t take me seriously

What to do instead?

Continue reading

Three takes on rethinking unpaid care for a better tomorrow

A woman with a sad expression looking at a $5 banknote on a table in front of her.
Photo by Karolina Grabowska.

When the COVID-19 pandemic started in 2020, many people told me that finally we’d be able to wipe out all the entrenched gender inequities in the workplace: women leaving the workforce because of incompatibility with their caregiving duties, the gender pay gap, the lack of women in leadership positions…

The name of the magic bullet? Flexible and remote working.

My answer? That flexibility was not enough, as I demonstrated in the report I co-authored on the effect of COVID-19 on the unpaid work of professional women.

As I anticipated three years ago, hybrid working hasn’t delivered on its promise to bridge the chasm between caregiving and a thriving career.

Let’s run three thought experiments to put our current systems to the test. Are they serving us well? 

[Economics thought experiment #1] Childcare vs Caring for the neighbour’s children

Amy and John are neighbours. They know each other’s families, and each has one baby and one toddler.

Experiment A

Given the high costs of caregiving, Amy and John decided to put their careers on hold for three years and instead care for their own children full-time.

During those three years, everybody around Amy and John considers them unemployed. That includes

  • Their family and friends.
  • The International Labor Organisation (ILO), which considers persons employed as those “who worked for at least one hour for pay or profit in the short reference period.”

Experiment B

For three years, from Monday to Friday

  • Amy goes to John’s house and cares for John’s children for £1.
  • Conversely, John goes to Amy’s house and cares for Amy’s children for £1.

During those three years, everybody around Amy and John considers that they ARE employed. That includes

  • Their family and friends.
  • The International Labor Organisation (ILO).

The same results hold if we swap childcare for eldercare.

If a person provides unpaid care to their own family, we refer to them as a “stay-at-home parent”. However, if they perform the same tasks for a salary, they become “domestic workers”.

[Economics thought experiment #2] Maternity leave vs Gap year

Two people decide to take a year off.

  • Person #1 takes a year of maternity leave.
  • Person #2 takes a gap year to travel the world.

How are they perceived before they leave?

  • Person #1 is not committed to their career.
  • Person #2 wants to expand their horizons.

And when they are back to work?

  • Person #1 is considered to be on the #MommyTrack after a year of “inactivity”.
  • Person #2 has acquired valuable transferable leadership skills throughout a year of “life-changing experiences”.

[Economics thought experiment #3] Two-child benefit cap vs No cap

In the UK, child tax credits are capped at two children for children born after 6 April 2017. In practice

  • If all your children were born before 6 April 2017, you get £545 (the basic amount), plus up to £3,235 for each child.
  • If one or more of your children were born on or after 6 April 2017, you could get £3,235 for up to two children.
  • You’ll only get the £545 (basic amount) if at least one of your children was born before 6 April 2017.

What’s the rationale behind capping this outrageous sum of money at two children? Apparently, it should encourage parents of larger families to find a job or work more hours.

Counterevidence #1 — “It has affected an estimated 1.5 million children, and research has shown that the policy has impoverished families rather than increasing employment. As many as one in four children in some of England and Wales’s poorest constituencies are in families left at least £3,000 poorer by the policy. It also found that in the most ethnically diverse communities, 14% of children were hit by the cap”.

Counterevidence #2 — China was often vilified for its one-child policy, which taxed families that dared to have more than one child.

The policy was enforced at the provincial level through contraception, abortion, and fines that were imposed based on the income of the family and other factors. Population and Family Planning Commissions existed at every level of government to raise awareness and carry out registration and inspection work.

The fine was a so-called “social maintenance fee”, the punishment for families with more than one child. According to the policy, families who violated the law created a burden on society. Therefore, social maintenance fees were to be used for the operation of the government.

Wikipedia

Counterevidence #3 — “Abolishing the two-child limit would cost £1.3bn a year but lift 250,000 children out of poverty and a further 850,000 children out of deep poverty, say campaigners. Joseph Howes, chair of the End Child Poverty Coalition, said: ‘It is the most cost-effective way that this, or any future, government has of reducing the number of children living in poverty.’”

The defence rests.

PS. We’re halfway into 2023. How do you feel about your goals?

Book a strategy session with me to explore how coaching can help you to become your own version of success.