Yearly Archives: 2025

Criminalised, Incarcerated, Forgotten: The Women Left Behind

“The truth will set you free, but first it will piss you off!”

Gloria Steinem

Every year, I tell myself I won’t write an article for the 16 Days of Activism for the Elimination of Violence Against Women, a UN Women campaign that runs annually from November 25th to December 10th.

The reason is that it pisses me off that in the 21st century, we still have to make a case for why eradicating gender violence should be a planetary priority. Moreover, every year is a reminder that not only are we not solving the problem, but we keep inventing different ways to inflict gender violence on women (artificial intelligence, anyone?).

Still, despite all that — and often at the last minute — I change my mind. Why?

Because, unfortunately, I keep being surprised by yet another way in which women endure gender violence, and I feel compelled to (𝗌̶𝖼̶𝗋̶𝖾̶𝖺̶𝗆̶) talk about it.

This year was no different.

This was the original plan: The theme for the 2025 Elimination of Violence Against Women campaign is “digital violence”, and I’ve written about it many times.

So, earlier this year, I decided that when the UN campaign started, I’d repost some of my already published content.

Then, some recent reading compelled me to dig into the intersection between gender violence and the experiences of criminalised women, female prisoners, and women killers.

The common thread among the three groups — and what makes their experience of gender violence less visible in the news — is that they are women who, in our minds, don’t conform to the stereotype of well-behaved, self-sacrificing females. They are “bad” women.

I’m here to tell you how those women are also victims of gender violence.

And no, they don’t deserve it either.

(SCOOP: And Tech makes it even worse).


Criminalised Women

“Poverty is not gender-neutral, and women are overrepresented amongst the poor, resulting in the criminalisation of poverty having an excessive impact on women.”

From poverty to punishment

Traditionally, women have been a small minority of the prison population — women and girls make up 6.9% of prisoners globally. Since 2000, however, the female prison population has grown much faster than the male one: the number of women and girls in prison has risen by almost 60%, whilst the male prison population has increased by around 22%.

The excellent report From poverty to punishment: Examining laws and practices which criminalise women due to poverty or status worldwide provides a detailed understanding of the causes behind the criminalisation and imprisonment of women:

Continue reading

A New Religion: 8 Signs AI Is Our New God

Twelve characters inspired by the twelve Chinese Zodiac animals are gathered around a long square table. The characters each have a human-like body, but their heads represent different zodiac animals. On the table are various tools and machines related to technology — computers, hard drives, files, data charts, and keyboards. The characters all seem to be engaging in conversation with one another. In the centre of the window is an old-style Microsoft logo.
Yutong Liu / Joining the Table / Licenced by CC-BY 4.0.

Religion and technology have been in a love-hate relationship since their inception.

Sometimes, technology has been a tool of religion.

For example, many religious texts position God(s) as the uber-technologists.

In the beginning, God created the heavens and the earth.

Genesis 1:1

A God who, throughout history, has empowered prophets and followers to learn and use technology in the name of religion:

So God said to Noah, “I am going to put an end to all people […]. So make yourself an ark of cypress wood; make rooms in it and coat it with pitch inside and out. This is how you are to build it: The ark is to be three hundred cubits long, fifty cubits wide and thirty cubits high. Make a roof for it, leaving below the roof an opening one cubit high all around. Put a door in the side of the ark and make lower, middle and upper decks.

Genesis 6:13–16

Other times, technology itself has been perceived as “God-like”.

Think about one of the most ancient technologies: Fire. Whilst today we claim to have mastered it, fire deities have a long tradition that spans time and location.

And many more technologies have elicited awe, powerlessness, or veneration: electricity, the steam engine, IVF, cars…

However, that “God-like” feel typically faded away once the “magic” was dispelled by uncovering the natural laws governing the phenomena.

But AI has been a technology-as-religion game-changer. Its God-like status has been cemented over time — and recently at an accelerating pace — rather than discredited, as happened with other technologies. After all, the field of AI research was founded at a workshop at Dartmouth College in 1956, almost 70 years ago.

So, how come AI has reversed the trend? By proactively becoming a religion.

Don’t believe me?

Let me walk you through 8 signs that we’ve already adopted AI as a religion.

Disclaimer: You’ll notice that the signs are heavily skewed towards a Christian view of religion. This is not out of disregard for other creeds, but because I was raised Catholic and have lived in countries where the Christian faith was the most popular. I welcome feedback from other religions.

Sign #1: The Promise of Paradise

When Eve and Adam are expelled from Paradise, God is very explicit about what they’ll be losing:

To the woman he said,

“I will multiply your sufferings in childbirth;
with pain you shall bear your children.
You shall desire your husband,
but he shall lord it over you.”

To the man he said, […]

“Cursed be the soil because of you!
With effort you shall obtain food
all the days of your life. […]
You are dust,
and unto dust you shall return.”

Genesis 3: 16–19

This set up the quest for the promised paradise that many religions have pursued over thousands of years: that place where there is no more hunger, sickness, work, or even death.

Until AI arrived. Or more precisely, until Generative AI did.

And what does AI have to offer as paradise? Abundance.

Last year, Sam Altman — one of AI’s “high priests” — pontificated on X.

AI is the promise of paradise on Earth, provided that we keep shovelling money, electricity, water, and chips at its development.

Sign #2: Infallibility or the Promise of Enlightenment

God’s infallibility is a concept in many religions, and some of their prophets and representatives have claimed it for themselves, too, to explain concepts further, settle arguments, or propose new ideas.

For example, when Catholic Popes speak “ex-cathedra”, they become infallible:

when the Roman pontiff speaks ex cathedra, that is, when, in the exercise of his office as shepherd and teacher of all Christians, in virtue of his supreme apostolic authority, he defines a doctrine concerning faith or morals to be held by the whole Church, he possesses, by the divine assistance promised to him in blessed Peter, that infallibility which the divine Redeemer willed His Church to enjoy in defining doctrine concerning faith or morals.

Pope Pius IX

How has AI become infallible? Through chatbots. Generative AI is presented as the collector and remaker of “all human knowledge” — or at least the knowledge available on the internet.

Once upon a time, Wikipedia was that “repository” of knowledge. Disputes would be settled with a

“I’ve checked in Wikipedia and it says…”

Now, arguments are countered with a

“but ChatGPT says…”

The difference?

Continue reading

Tech Bros, Big Platforms, and Poor Regulation: Who Enables Deepfake Porn?

Recently, I delivered the keynote Techno-patriarchy: How deepfakes are misogyny’s new clothes and what we can do about it at the Manchester Tech Festival. Putting together the presentation prompted me to reflect on my advocacy journey on what is popularly referred to as “deepfake porn.”

In 2023, I had had enough of hearing tech bros blame unconscious bias for all the ways in which AI was weaponised against women. Determined to demonstrate intent, I wrote Techno-Patriarchy: How AI is Misogyny’s New Clothes, originally published in The Mint.

In the article, I detailed 12 ways this technology is used against women, from reinforcing stereotypes to pregnancy surveillance. One shocked me to my core: Non-consensual sexual synthetic imagery (aka “deepfake porn”).

Why? Because, whilst the media warned us about the dangers of deepfakes as scam and political unrest tools, the reality is that non-consensual sexual synthetic imagery constitutes 96% of all deepfakes found online, with 99.9% depicting women. And their effects are devastating.

Judge for yourself:

It was completely horrifying, dehumanizing, degrading, violating to just see yourself being misrepresented and being misappropriated in that way.

It robs you of opportunities, and it robs you of your career, and your hopes and your dreams.

Noelle Martin, “deepfake porn” victim, award-winning activist, and law reform campaigner.

So I continued to write about the dire consequences of this technology for victims and about the legal vacuum, and I denounced the powerful ecosystem (tech, payment processors, marketplaces) that fosters and profits from it.

I also made a point of raising awareness of how this technology harms women and girls in spaces where the topic of “deepfakes” was explored broadly. I organised events, appeared on podcasts, and participated in panels, such as “The Rise of Deepfake AI” at the University of Oxford; all opportunities were fair game to bring “deepfake porn” to the forefront.

This week, I had 30 minutes to convince over 80 women in tech – and allies – to become advocates against non-consensual sexual synthetic imagery. The feedback I received from the keynote was very positive, so I’m sharing my talking points with you below.

I hope that by the end of the article, (a) you are convinced that we need to act now, and (b) you have decided how you will help to advocate against this pandemic.



The State of Play

All that’s wrong with using the term “deepfake porn”

I had an aha moment when I realised the disservice the term “deepfake porn” was doing to addressing this issue.

“Deepfake” immortalises the username of the Reddit user who shared the first synthetic intimate media of actresses on the platform. When paired with the label “porn”, it may wrongly convey the idea that the imagery is consensual. Overall, the term lacks gravitas and disregards the harms.

From a legal perspective, the use of the term “deepfake” may also hinder the pursuit of justice. There have been cases where filing a lawsuit using the term deepfakes when referring to a “cheapfake” — which consists of a fake piece of media created with conventional methods of doctoring images rather than AI — has blocked prosecution.

Continue reading

Is Your Chatbot Killing the Planet? The Truth About AI Sustainability

A mosaic-like image of clouds, made of server and data center components, symbolizing the hidden physical infrastructure of cloud computing.
Nadia Piet & Archival Images of AI + AIxDESIGN / Cloud Computing / Licenced by CC-BY 4.0.

In 2021, van Wynsberghe proposed defining sustainable artificial intelligence (AI) as “a movement to foster change in the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice”. The concept comprised two key contributions: AI for sustainability and the sustainability of AI.

At the time, a growing effort was already underway exploring how AI tools could help address climate change challenges (AI for sustainability). However, studies had already shown that developing large Natural Language Processing (NLP) AI models results in significant energy consumption and carbon emissions, often caused by the use of non-renewable energy. van Wynsberghe therefore posited the need to focus on the sustainability of AI.

Four years later, the conversation about making AI sustainable has evolved considerably with the arrival of generative AI models. These models have popularised and democratised the use of artificial intelligence, especially as a productivity tool for generating content.

Another factor that has exponentially increased the resources dedicated to AI is the contested hypothesis that developing AI models with increasingly large datasets and algorithmic complexity will ultimately lead to Artificial General Intelligence (AGI) — a type of AI system that would match or surpass human cognitive capabilities.

Powerful businesses, governments, and academia consider AGI a competitive advantage. Tech leaders such as Eric Schmidt (former Google CEO) and Sam Altman (OpenAI CEO) have disregarded concerns about AI’s sustainability, as AGI will supposedly solve them in the future.

In this context, what do current trends reveal about the sustainability of AI?

Challenges

Typically, artificial intelligence models are developed and run on the cloud, which is powered by data centres. As a result, data centre construction has increased significantly over the past few years. McKinsey estimates that global demand for data centre capacity could rise between 19% and 22% annually from 2023 to 2030.
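
A quick compound-growth calculation (the arithmetic is mine; the 19%–22% range is McKinsey’s) shows what that annual rate implies over the 2023–2030 period:

```python
# Compound growth of global data centre capacity demand over 2023-2030
# (7 years), using McKinsey's estimated annual growth range of 19%-22%.

def compound_growth(rate: float, years: int) -> float:
    """Multiple by which capacity grows at a fixed annual rate."""
    return (1 + rate) ** years

low = compound_growth(0.19, 7)   # ~3.4x
high = compound_growth(0.22, 7)  # ~4.0x
print(f"Capacity multiple by 2030: {low:.1f}x to {high:.1f}x")
```

In other words, even the low end of the range implies capacity more than tripling by 2030.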

Continue reading

The Truth About Women, AI, and Confidence Gaps

A black-and-white surrealist collage of a classroom lecture. The center features an oversized computer keyboard with the two keys “A” and “I” highlighted in red. In the foreground, a vintage illustration of a woman in historical attire kneels as she interacts with the keyboard. Behind her, an audience of Cambridge students are seated in rows observing the lecture.

Hanna Barakat & Cambridge Diversity Fund / Analog Lecture on Computing / Licenced by CC-BY 4.0

More than twenty years ago, I joined a medium-sized software company focused on scientific modelling as a trainer. I knew the company and some of its products very well. I had been their customer.

First, during my PhD in computational chemistry, then as an EU post-doctoral researcher coding FORTRAN subroutines to simulate the behaviour of materials, and as a modelling engineer working for a large chemical company.

As I started my job as a materials trainer, I had to learn about other software applications that I hadn’t used previously or was less familiar with. One of those was related to what we called at the time “statistics” to predict the properties of new materials.

Some of those “statistical methods” were neural networks and genetic algorithms, part of the field of artificial intelligence. But I was not keen on developing the material for that course. It felt like a waste of time for several reasons.

First, whilst those methods were already popular among life science researchers, they were not very helpful to materials modellers — my customers. Why? Because large, good datasets were scarce for materials.

Case in point: I still remember one specific customer excited about using the algorithms to develop new materials in his organisation. With a sinking feeling from similar conversations, I asked him, “How many data points do you have?”. He said, “I think I have 7 or 10 in a spreadsheet.” Unfortunately, I had to inform him that that was not nearly enough.

Second, the course was half a day long, which made delivering it in person — the way all our workshops had been offered for years — impractical. Our experience told us that in 2005, nobody would fly to Paris, Cambridge, Boston, or San Diego for a 4-hour training event on “statistics”.

The solution? It was decided that this course would be the first to be delivered online via a “WebEx”, the great-grandparent of Zoom, Teams, and Google Meet. That was not cool at all.

At the time, we had little faith in online education for three reasons.

  • Running the webinars was very complex; they took ages to set up and schedule, and there were always connection glitches.
  • There were no “best practices” for delivering engaging online training yet; as a result, we trainers felt as if we were short-changing our clients.
  • We believed that scientific and technical content was “unteachable” online.

After such a less-than-amazing start at teaching artificial intelligence online, you’d have thought I was done.

I thought so, too. But I’ve changed my mind. It hasn’t happened overnight, though.

It has taken two decades of experience teaching, using, and supporting AI tools in my corporate job, 10+ years as a DEI trailblazer, and my activism for sustainable AI for the last four years to realise that if we want systemic equality, it’s paramount we bridge the gender gap in AI adoption.

And it has also helped that I now have 20 years of experience delivering engaging online keynotes, courses, and masterclasses.

This is the story of why, this September, I’m launching Women Leading with AI: Master the Tools, Shape the Future, an eight-session virtual group program in inclusive, sustainable, and actionable AI for women leaders.

AI and Me

At Work

After training, I moved to the Contract Research department. There, I had the opportunity to design and deliver projects that used AI algorithms to get insights into new materials and their properties.

Later on, I became Head of Training and Contract Research and afterwards, I moved to supporting customers using our software applications for both materials and life sciences research.

Whilst there were exciting developments in those areas, most of our AI algorithms didn’t get much love from our developers or customers. After all, they hadn’t substantially improved for ages.

Then, everything changed a few years ago.

In life science, AI algorithms made it possible to predict protein structure, which earned their creators the Nobel Prize. Those models have been used in pharmaceuticals and environmental technology research and were available to our customers.

We also developed applications that used AI algorithms to help accelerate drug discovery. It was hearing from clients working on cancer treatments how AI has positively broadened the kind of drugs they were considering that changed me from AI-neutral to AI-positive.

In materials science, machine learning force fields are also bridging the gap between quantum and classical simulation, making it possible to simultaneously model chemical reactions (quantum) in relatively large systems (classical).

In summary, my corporate job taught me that scientific research can benefit massively from the development of AI tools beyond ChatGPT.

As a DEI Trailblazer

Tired of tech applications that made users vulnerable and denied their diversity of experiences, in 2019, I launched the Ethics and Inclusion Framework.

The idea was simple — a free tool to help tech developers identify, prevent, mitigate, and account for the actual and potential adverse impacts of the solutions they develop. The approach is general, so it can be used for any software application, including AI tools.

The feedback was very positive, and the framework was featured by the Cambridge Engineering Design Centre and in research papers on ethical design.

It was while running a workshop on the framework that I met Tania Duarte, the founder of We and AI, an NGO working to encourage, enable, and empower critical thinking about AI.

I joined them in 2020 and it has been a joy to contribute to initiatives such as

  • The Race and AI Toolkit, designed to raise awareness of how AI algorithms encode and amplify the racial biases in our society.
  • Better Images of AI, a thought-provoking library of free images that more realistically portray AI and the people behind it, highlighting its strengths, weaknesses, context, and applications.
  • Living with AI, the e-learning course of the Scottish AI Alliance.

Additionally, as a founder of the gender employee community at my corporate job a decade ago, I’ve chaired multiple insightful meetings where we’ve discussed the impact of AI algorithms on diversity, equity, and inclusion.

As a Sustainability Advocate

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.
Clarote & AI4Media / Labour/Resources / Licenced by CC-BY 4.0

In 2021, the article Sustainable AI: AI for sustainability and the sustainability of AI made me aware that we were discounting significant energy consumption and carbon emissions derived from developing AI models.

I was on a mission to make others aware, too. I still remember my keynote at the Dassault Systèmes Sustainability Townhall in 2021, when I shared with my co-workers the urgency of thinking about the materiality of AI — you can watch here a shorter version I delivered at the WomenTech Conference in 2022.

I’ve also written about how the Global North exploits the Global South’s mineral resources to power AI, as well as how tech companies and governments disregard the energy and water consumption from running generative AI tools.

Lately, I’ve looked into data centres — which are vital to cloud services and hence to the development and deployment of AI. Given that McKinsey forecasts that they’ll triple in number by 2030, it’s paramount that we balance innovation and environmental responsibility.

AI and Women

Women — 50% of the population on the planet — have been affected by AI developments, yet typically not as the ones profiting from them, but as the ones bearing the brunt.

Women Leading AI

Unfortunately, it often appears that the only contribution from women to technology was made by Ada Lovelace, in the 19th century. Artificial intelligence is no exception. The contributions of women to AI have been regularly downplayed.

In 2023, the now-infamous article “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement” showcased 12 men. Not even one woman in the group.

The article prompted criticism right away and “counter-lists” of women who have been pivotal in AI development and uncovering its harms. Still, women are not seen as “AI visionaries”.

And it’s not only society that disregards women’s expertise on AI — women themselves do that.

In 2023, I was collaborating with an NGO that focuses on increasing the number of women in leadership positions in fintech. They asked me to chair a panel at their annual conference and gave me freedom to pick the topic. I titled the panel “The role of boards driving AI adoption.”

In alignment with the mission of the NGO, we decided that we’d have one male and two female panellists.

Finding a great male expert was fast. Finding the two female AI experts was long and excruciating.

And not because of the lack of talent. It was a lack of “enoughness.”

For three weeks, I met women who had solid experience working in teams developing and implementing strategies for AI tools. Still, they didn’t feel they were “expert enough” to be on the panel.

I finally secured two smashing female AI experts, but the search opened my eyes to the need for more women on boards to learn about AI tools as well as their impact on strategy and governance.

That was the rationale behind launching the Strategic AI Leadership Program, a bespoke course on AI Competence for C-Suite and Boards. The feedback was excellent and it filled me with pride to empower women in top leadership positions to have discussions about responsible and sustainable AI.


Weaponisation of AI

Sycophantic chatbots can hide the fact that, at its core, AI is a tool that automates and scales the past.

As such, it has been consistently weaponised as a tool of misogyny, with its harms dismissed as unconscious bias and blamed on the lack of diversity in datasets.

And I’m not talking about “old” artificial intelligence, only. Generative AI is massively contributing to reinforcing harmful stereotypes and is being weaponised against women and underrepresented groups.

For example, 96% of deepfakes are of a non-consensual sexual nature, and 99.9% of the victims are women. Who profits from them? Porn websites, payment processors, and big tech.

And chatbots are great enablers of propagating biases.

New research has found that ChatGPT and Claude consistently advise women to ask for lower salaries than men, even when both have identical qualifications.

In one example, ChatGPT’s o3 model was prompted to advise a female job applicant. The model suggested requesting a salary of $280,000.
In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.
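
The audit method behind findings like these can be sketched as a paired-prompt test: two prompts identical in every detail except the applicant’s gender, sent to the same model. A minimal illustration (the template wording is mine, not the researchers’, and the actual chatbot call is omitted):

```python
# Sketch of a paired-prompt audit for gender bias in salary advice.
# Both prompts share identical credentials; only the gender token differs,
# so any gap between the model's two answers is attributable to gender.
# (Hypothetical template; a real audit would send each prompt to a chatbot API.)

TEMPLATE = (
    "I am a {gender} applicant with ten years of experience in software "
    "engineering, applying for a senior role. What salary should I request?"
)

def build_paired_prompts(template: str) -> dict[str, str]:
    """Return two prompts that differ only in the gender token."""
    return {gender: template.format(gender=gender) for gender in ("female", "male")}

prompts = build_paired_prompts(TEMPLATE)
# Sanity check: swapping the gender token is the only difference.
assert prompts["female"].replace("female", "male") == prompts["male"]
```

The study’s headline numbers — $280,000 suggested for the female applicant versus $400,000 for the male one — came from exactly this kind of controlled comparison.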

In summary, not only does AI foster biases but it also helps promote them on a planetary scale.

My Aha Moment

Until recently, my focus had been to empower people with knowledge about how AI algorithms work, as well as AI strategy and governance. I had avoided teaching generative AI practices like the plague.

That was until a breakthrough in July. It came as the convergence of four factors.

Non-Tech Women

A month ago, I delivered the keynote “The Future of AI is Female” at the Women’s Leadership event Phoenix 2, hosted by Aspire.

In that session, I shared with the audience two futures: one where AI tools are used to transform us into “productive beings” and another one where AI systems are used to improve our health, enhance sustainability, and boost equity.

It’s a no-brainer that everybody thought the second scenario was better. But it was also very telling that nobody believed it was the more probable one.

After the keynote, many attendees reached out to me and asked for a course to learn how AI could be used for good and in alignment with their values.

Other women who didn’t attend the conference also reached out to me for guidance on AI courses to help them strengthen their professional profiles beyond “prompting”.

Unfortunately, I wasn’t able to recommend a course that incorporates both practical knowledge about AI and the fundamentals of how it shapes areas such as sustainability, DEI, strategy, and governance.

Women In Tech

As I mentioned above, I’m the founder of the gender employee community at my corporate job, and for 10 years, we’ve been hosting regular meetings to discuss DEI topics.

For our July meeting, I wanted us to have an uplifting session before the summer break, so I proposed to discuss how AI can boost DEI now and in the future.

I went to the meeting happily prepared with my list of examples of how artificial intelligence was supporting diversity, equity, and inclusion. But I was not prepared for how the session panned out.

Over and over, the examples shared showcased how AI was weaponised against DEI. Moreover, when a positive use was shared, somebody quickly pointed out how that could be used against underrepresented groups.

This experience made me realise that as well as thinking through the challenges, DEI advocates also need to spend time and be given the tools to think about how AI can purposefully drive equity.

Women In Ethics

I have the privilege of knowing many women experts in ethical AI, with relevant academic backgrounds and professional experience.

With all the talk about responsible AI, you’d think they’d be in high demand. They aren’t.

In July, my LinkedIn feed was full of posts from ethics experts — many of them women — complaining about what I call “performative AI ethics”: organisations praising the need to embed responsible AI without creating the necessary roles.

But is that true? Yes, and no.

Looking at advertised AI jobs, I noticed a tendency for ethics expertise to appear as an add-on to “Head of AI” roles that are eminently technical at their core: their key requirement is experience designing, deploying, and using AI tools.

In other words, technical expertise remains the gatekeeper to responsible AI.

A pixelated black-and-white portrait of Ada Lovelace where the arrangement of pixels forms intricate borders and repeating patterns. These designs resemble the structure and layout of GPU microchip circuits, blending her historical contributions with modern computational technology.
Hanna Barakat & Cambridge Diversity Fund / Lovelace GPU / Licenced by CC-BY 4.0

Women And The Gender AI Adoption Gap

As I mentioned in my recent article “A New Religion: 8 Signs AI Is Our New God”, it has been taken as dogma that women lag behind in generative AI adoption because of lower confidence in their ability to use AI tools effectively and a lack of interest in this technology.

But a recent Harvard Business School working paper, Global Evidence on Gender Gaps and Generative AI, synthesising data from 18 studies covering more than 140,000 individuals worldwide, has provided a much more nuanced understanding of the gender divide in generative AI.

When compared to men, women are more likely to

  • Say they need training before they can benefit from ChatGPT, and to perceive AI usage in coursework or assignments as unethical or equivalent to cheating.
  • Agree that chatbots should be prohibited in educational settings, and be more concerned about how generative AI will impact learning in the future.
  • Perceive lower productivity benefits of using generative AI at work and in job search.
  • Agree that chatbots can generate better results than they can on their own.

Moreover, women are less likely to agree that chatbots can improve their language ability, and less likely to trust generative AI over traditional human-operated services in education and training, information, banking, health, and public policy.

In summary, women correctly understand that AI is not “neutral” or a religion to be blindly adopted and prefer not to use it when they perceive it as unethical.

There is more. In the HBR article Research: The Hidden Penalty of Using AI at Work, researchers reported an experiment with 1,026 engineers in which participants evaluated a code snippet that was purportedly written by another engineer, either with or without AI assistance. The code itself was the same — the only difference was the described method of creation (with/without AI assistance).

When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average — 6% lower for men and 13% lower for women.

The authors posit that this happens through a process called social identity threat.

When members of stereotyped groups — for example, women in tech or older workers in youth-dominated fields — use AI, it reinforces existing doubts about their competence. The AI assistance is framed as a “proof” of their inadequacy rather than evidence of their strategic tool use. Any industry predominated by one segment over another is likely to witness greater competence penalties on minority workers.

As a solution for bridging the gap, the authors point to senior women openly using AI.

Our research found that women in senior roles were less afraid of the competence penalty than their junior counterparts. When these leaders openly use AI, they provide crucial cover for vulnerable colleagues.

A study by BCG also illustrates this dynamic: when senior women managers lead their male counterparts in AI adoption, the adoption gap between junior women and men shrinks significantly.

Basically, we need to normalise women using—and leading—AI.

My Bet: Women Leading with AI

Through my July of AI breakthroughs, I learned that

  • The gender gap in generative AI is real, and the causes are much more complex than a lack of confidence.
  • The absence of access to training and sustainable practices is a factor contributing to that gender gap.
  • Women are eager to ramp up on AI provided that it aligns with their values.
  • To be considered by organisations to lead responsible AI, it’s imperative to show mastery of the tools.

This coalesced in a bold idea:

What if I teach women how to use AI within an ethical, inclusive, and sustainable framework?

What if I developed a program where they could both understand how AI tools work, including their impact on the future of work, DEI, strategy, and governance, and develop hands-on expertise with the tools through practical examples?

And this is how my virtual group program, Women Leading with AI: Master the Tools, Shape the Future, was born.

About the Program:

A structured, eight-session program for women leaders focused on turning AI literacy into strategic results. Explore AI foundations and the impact of artificial intelligence on the future of work, DEI, sustainability, data and cybersecurity — paired with generative AI workflows, templates, exercises, and decision frameworks to translate learning into real-world impact. The blend of live instruction, quizzes, and peer support ensures you emerge with both critical insight and a toolkit ready to lead impactfully in your role.

The program starts mid-September, and you can read the details by following this link.

I cannot wait for you to join me in making the future of AI female.

Have a question? Message me on LinkedIn or drop me a line.


BONUS

[Webinar Invitation] Ethical AI Leadership: Balancing Innovation, Inclusion & Sustainability

Join me on Tuesday, 12th August for a practical, high-value webinar tailored for women leaders committed to harnessing AI’s power confidently, ethically, and sustainably. 

You will leave the session with actionable insight into how AI intersects with environmental impact, leadership values, and equity.

Why attend?

• Uncover key barriers women face in using AI.

• Discover the hidden cost of generative AI—from energy consumption to bias.

• Participate in an interactive real-world case study where you evaluate AI trade-offs through DEI and sustainability frameworks.

• Gain practical guidance on how to minimise footprint while harnessing generative AI tools more responsibly.

Date: Tuesday 12th August 

Time: 13:00 London | 14:00 Paris | 8:00 New York

You can register following this link.

This is a taster of my program “Women Leading with AI: Master the Tools, Shape the Future”, starting mid-September

How to Reclaim Your Voice After Female Shaming

Image of a woman's head with a woman's hand covering her mouth, whereas the other woman's hand is pressing her forehead to keep her still.
Photo by Sherise Van Dyk on Unsplash

Recently, I delivered a free masterclass on a negotiation framework that has helped hundreds of women, including me. I targeted women in tech as I know from my own experience how often we miss out on salaries and promotions because we don’t have the tools to negotiate or the confidence to do it.

Judging by their first names, all attendees were women. All was going reasonably well, with positive engagement from attendees in the chat, when, in reply to one of my questions about negotiation, a woman in the audience wrote that my repeated use of a specific word during the session made it unbearable to listen to.

I was so surprised that I asked for details, to which the woman explained how bad it was and said I’d realise it once I got the recording. I thanked her for the feedback and continued with the masterclass.

However, that had a negative impact on the audience’s comments, which stopped for a long while. To my surprise, at the end of the session, somebody said that they knew the person and that, paradoxically, she was part of their women in tech group at work.

When the session ended, I was surprised by how hurt I was. As a director of support with over 20 years of experience delivering services to customers worldwide, I’ve been insulted, shouted at, and interrupted during webinars, training sessions, and meetings.

Why did this feel so bad?

Brains like to find explanations for everything, so mine went down the rabbit hole of “What could she have done differently?”

  • Dropped from the session
  • Sent me a direct chat message with her comment
  • Emailed me her feedback

What could I have done differently?

  • Queried her about her reasons for delivering that kind of feedback in that form
  • Rebutted her comment
  • Removed her from the session

And of course, I tried to figure out the causes of her behaviour and my reaction… I’ll spare the details and get to the aha! moment of that internal monologue, “What if that had been a man?”

Based on previous experiences with male bullies, I predict that he would have discredited me or the methodology, e.g. “You don’t have a clue about what you’re talking about,” “This framework is useless.” And I also predict that the female audience would have been supportive, e.g. “Nobody forces you to be here,” “It’s helpful to me.”

But this female bully didn’t attack the method or my credibility. She wanted to shame me: that is, to highlight in front of everybody what she saw as a shortcoming in the delivery of otherwise apparently valuable information.

Another important aspect is that, unlike with a male bully, there was no support from the other women. Moreover, the person who had invited the female bully felt the need to apologise to me for inviting her…

Reading Linda Caroll’s fantastic article, I Am Bone Tired Of People Telling Women How to Show Up, helped me recognise that this was no fluke: women know “shame” is an excellent tool against other women.

  • It doesn’t involve physical abuse
  • It’s unrequested
  • It inflicts long-term harm hidden under apparently well-meaning feedback
  • It reinforces the “moral superiority” of the perpetrator
  • It silences the victims’ allies due to the veiled threat that they, too, can become a target

More importantly, the aspect that I find most fascinating about shame is its sadistic nature; the primary benefit for the perpetrator is to know the victim will suffer.

How women use shame

Fortunately for the patriarchy, women are excellent at fostering doubt about other women’s capabilities and behaviours in ways that harm them.

For example, the manuscript casebooks kept by the medical practitioner and astrologer Richard Napier (1559−1634), who listened to reports of suspected bewitchment in at least 1,714 consultations in Jacobean England, show that the majority of both accusers and suspects were women: of the 802 accusers in Napier’s records, 500 were female and 232 were male. Among the 960 suspects identified by this group of accusers, 855 were female and 105 were male.

Whilst shame may not aim to kill its target, it can still be very powerful. The premise involves combining a stated norm with how the victim breaks it.

Examples are sentences like:

  • “You look more rounded. You had such a great body.”
  • “You’re too thin. You looked better when you had some more weight on.”
  • “You look tired. Botox is great.”
  • “If you love your children, you should breastfeed.”
  • “If you care for your children, you shouldn’t breastfeed them after they are 6 months.”
  • “Smart women like you shouldn’t be stay-at-home mums.”
  • (To a female executive) “Women shouldn’t prioritise their careers.”
  • “It’s great you share your achievements, but it makes you sound too ambitious.”

Shaming as a weapon is most effective when:

  • It aims to increase the credibility of the perpetrator whilst diminishing that of the victim.
  • The victim cannot articulate a response off the cuff.
  • It’s delivered in public.

But it doesn’t need to be this way.

Pink painkiller pills.
Image by Petr from Pixabay

The remedy

How can we women avoid using shame against other women and, in doing so, avoid becoming a tool of the patriarchy?

As a Victim

Depending on the context, you can:

  • Ignore it — Continue the conversation as if the comment hadn’t been voiced.
  • Name the effect on you — You can reply with “What you said hurt me,” “You’re shaming me,” or “Your comment was disrespectful/humiliating/intimidating/intrusive.”
  • Uncover the perpetrator’s purpose — Ask questions to expose the perpetrator, e.g. “Did you want to shame me with that comment?”, “How is that supposed to be positive feedback?”, or “Why did you choose to share that in public?”

As a Bystander

We’re not absolved from taking action when we’re in the presence of shaming. Again, depending on the stakes, you may:

  • Support the victim — You can ignore the comment and pivot the conversation to another topic, giving the victim the time to recover. You can also offer a positive counterview, e.g. “I love how you presented”, “I admire women who look confident in their abilities.”
  • Challenge the perpetrator — You can offer a different perspective, e.g. “There aren’t norms for how much women should weigh” or “What’s the evidence that breastfeeding children for longer than 6 months is harmful?”
  • And of course, you may shame them back, e.g. “Women should support other women, not attack them”, “Your feedback is not useful”, or “You’re behaving like a bully.”

As a Perpetrator

By now, you may think that you’re on the “right side” of the story. Unfortunately, most of us probably aren’t; I know I wasn’t. How can we ensure we are not shaming other women gratuitously when delivering our opinion?

We must interrogate our purpose and the outcome of our opinion before, during, and after our comments.

Before

  • Is the purpose of your comment to help the other woman?
  • Do you have evidence that she doesn’t already know what you’re going to tell her?
  • If the intent is to assist, is this the best setting? If not, what would be better (e.g. a 1:1 conversation or an email)?
  • Can she do anything about it right away?
  • Finally, if in doubt whether it could shame the other person, don’t say it.

During

  • How is your comment landing with the recipient? Do they look relaxed or stressed?
  • How is your audience reacting? Note that the fact that they don’t disagree or agree with you doesn’t mean you’re not shaming the person.

After

  • If in doubt whether you’ve shamed somebody, apologise first and then offer reparation, if possible.

The predator wants your silence. It feeds their power, entitlement, and they want it to feed your shame. — Viola Davis

BACK TO YOU: What’s your experience with shame?

Break Free from the Motivation Trap Today

Unmotivated? Try Five Smarter Ways to Reach Your Goals
Image by Th G from Pixabay.

Motivation has become the latest self-help fad, joining the “work-life balance”, “resilience”, and “put the oxygen mask on before helping others” mantras.

We’re promised that motivation alone can make us lose weight, exercise daily, or launch a successful business.

We “just” need to feel motivated. Moreover, we’re told that “when we’re motivated, things come easy to us.”

The problem with buying into the “motivation” hype is that when we don’t achieve the desired results, we interpret it as a personal failure, voiced in statements such as

“I need to motivate myself.”

“I lack motivation.”

“I’m lazy.”

But why is motivation so hyped, and what other tools do you have to reach your goals?

Let me show you.

Motivation Reality Check

Motivation: Enthusiasm for doing something.

Cambridge Dictionary

Wouldn’t it be fantastic to be enthusiastic about everything we do? The self-improvement industry would like us to believe so.

For example, imagine being

  • Thrilled to clean your toilets
  • Excited about waking up at 3 am to calm your baby who’s crying inconsolably
  • Overjoyed to have a meeting with a very unhappy customer

You may be laughing, but the point is that we don’t require motivation for much of what we do every day. Or at least, not the kind of “enthusiastic” motivation.

Not only that: we do these things without expecting to be “joyfully” motivated. Most of our actions come from other feelings, such as obligation, which can be self-imposed, legal, or contractual.

The “motivation” trope also minimises the challenges along the journey towards our objectives.

For example, becoming a compelling speaker may be easier for a native speaker who is an extrovert and enjoys being the centre of attention than for a shy person with a stutter.

But why is the motivation cliché so successful if there are so many downsides? Because many profit from it.

Governments and Societies

The mantra that motivation is the magic bullet runs deep in our lives, shaping everything from policy to public opinion about what is acceptable.

For example, the UK government has recently made it much more difficult to claim disability benefits under the pretext of encouraging more unemployed disabled people to try to get back into work.

I was also shocked to read about the stigma people experience when taking weight-loss drugs, which is perceived as cheating because they are seen as unable to succeed through willpower, diet, and exercise alone.

The examples above are only two of the many ways we weaponise “motivation” against people enduring hardship.

The Motivational Industrial Complex

Nike’s successful slogan — “Just do it” — is an excellent example of how we’re sold the idea that we only need to want something to get it.

And many reap the benefits:

  • Motivational speakers
  • Self-help books
  • “Aspirational” influencers

Does that work? For the business, yes, but it’s less clear for those expecting results.

A great example is TED talks, which are based on the premise that “powerful ideas, powerfully presented, move us: to feel something, to think differently, to take action.”

Their website highlights 2.5 billion global views and content shared 400 million times in 2023. I’ve personally enjoyed tens — maybe hundreds — of amazing TED and TEDx talks delivered by fantastic speakers about incredible ideas.

How many have changed my behaviour or “motivated” me to do something differently? Hmm… I struggle to think of one.

The defence rests.

The Alternatives to Motivation

The good news is that we’re all living proof that we’re very good at doing things without feeling “enthusiastic” about it.

The problem is that we often forget this, and when we feel “unmotivated,” our environment — and our internalised guilt — blames us for it.

For those moments, I encourage you to use the checklist below

Reframing Motivation as a Luxury

What if you saw motivation as the cherry on top rather than the cake? As shown above, we don’t summon “enthusiastic” motivation for everyday tasks such as caring for a sick parent, cooking, or changing diapers.

Instead, explore what other emotions you could use to prompt you into action. What about loyalty? Moral obligation? Pride? Curiosity? Frustration? Love? Anger?

If you need inspiration, check this list of emotions.

Chunking

Our brain loves rewards — even the small ones. Rather than always focusing on the big win (for example, the planned revenue in your business), take the time to set short-term goals (the number of prospect calls you will do in a week) and then celebrate when you achieve them.

Deciding in Advance What Enough Looks Like

When we start a new activity, it is easy to feel deflated when we don’t get the expected results.

  • Launching a newsletter and having no subscribers after a month.
  • Going to two conferences and not getting new business.
  • Starting to exercise and being disappointed when you don’t see apparent changes after 15 days.

Deciding in advance how much effort we want to dedicate before quitting can help us keep going when the results take time.

For example

  • I’ll write an article for my newsletter every week for four months and then evaluate if it’s worth continuing.
  • I’ll attend five conferences and then decide if they’re worth my time and money.
  • I’ll follow the same exercise plan for two months and then assess whether I should change or persist.

Group Support

Our motivation, stamina, and energy are variable. A support group can help us feel seen, put things in perspective, and provide a safe space to vent — all of which can help us gain some distance from the situation and regain momentum.

Coaching

A coach helps you do what you want to do but aren’t doing, by exploring aspects such as your goals, motivations, and limiting beliefs.

Coaching also provides a non-judgmental space to consider how other dimensions of your life play into your goals.

For example, maybe you tell yourself you’re lazy because you don’t find the time to start your business, but you actually experience fear of failure. Or you chastise yourself because you don’t write a post for social media every day anymore, disregarding that you’ve been experiencing health issues that affect your sleep and make you feel more tired than usual.

A coach helps you gain awareness of both your potential and the roadblocks in your way.

Wrapping Up

Can you imagine how exhausting it would be to be enthusiastic about waking up daily, brushing your teeth after every meal, or reading every email?

The thought makes me feel exhausted.

The reality is that society, governments, and businesses glorify motivation to serve their own agendas, often to our detriment.

That doesn’t mean that motivation is useless; rather, we need to question when it serves us well and when it’s used against us.

When we’re not doing what we want to do, we must remember all the other tools at our disposal beyond motivation.

And that includes having a laugh.

Every dead body on Mt. Everest was once a highly motivated person, so… maybe calm down.

Demotivational Quotes.


WORK WITH ME

Do you want to get rid of those chapters that patriarchy has written for you in your “good girl” encyclopaedia? Or learn how to do what you want to do in spite of “imposter syndrome”?

I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.

  • I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.
  • I’m a sought-after international keynote speaker on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
  • I empower non-tech leaders to harness the potential of AI for sustainable growth and responsible innovation through consulting and facilitation programs.

Contact me to discuss how I can help you achieve the success you deserve in 2025.

Are AI Companions the Cure for a Lonely World?

A group of people, each of them looking at their smartphone screens.
Photo by cottonbro studio.

AI chatbots for mental health support are not new — we can trace them back to the 1960s. However, over the last couple of years, we’ve experienced an unprecedented surge in their personal use, and they are now marketed as a revolution in 24/7 mental health advice and support.

This is not a coincidence.

The 2023 US Surgeon General’s Advisory report classified loneliness and isolation as an epidemic: about one in two adults in America reported experiencing loneliness even before the COVID-19 pandemic, and the mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day, and even greater than that associated with obesity and physical inactivity.

Moreover, a large-scale study based on surveys in 29 nations has estimated that 50% of the population develops at least one mental health disorder by the age of 75.

Returning to tech: in a 2024 analysis by venture capital firm Andreessen Horowitz, companion AI made up 10% of the top 100 AI apps based on web traffic and monthly active users, and a recent article in The Guardian stated that 100 million people around the world use AI companions as

  • Virtual partners for engaging in intimate activities, such as virtual erotic role plays.
  • Friends for conversation.
  • Mentors for guidance on writing a book or navigating relationships with people different from them.
  • Psychologists and therapists for advice and support.

So, I asked myself

Are AI Companions the magic bullet against loneliness and the global mental health crisis?

In this article, I share highlights of the troubled history of AI companions for mental health support, what current research tells us about their usage and impact on users, the benefits and risks they pose to humans, and guidelines for governments to make AI companions an asset and not a liability.

The Troubled History of AI Chatbots for Mental Support

In the 1960s, Joseph Weizenbaum developed the first AI chatbot, ELIZA, which played the role of a psychotherapist. The chatbot didn’t provide any solution. Instead, it asked questions and repeated users’ replies.

Weizenbaum was surprised to observe that people would treat the chatbot as human and show emotional responses even in brief interactions with it. We now have a name for this kind of behaviour:

“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.”

In the 2020s, many organisations started experimenting with AI chatbots for customer support, including for mental health issues. For example, in 2022, the US National Eating Disorder Association (NEDA) replaced its six paid staff and 200 volunteers supporting their helpline with chatbot Tessa to serve a customer base of nearly 70,000 people and families.

The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.

The result? Under media pressure over the chatbot’s repeated, potentially harmful responses, NEDA shut down the helpline. Those 70,000 people were left with neither chatbots nor humans to help them.

Image by Alexandra_Koch from Pixabay.

And as I wrote recently, now you can customise your AI companion — there is a myriad of choices:

Character.ai advertises “Personalized AI for every moment of your day.”

Earkick is a “Free personal AI therapist” that promises to “Measure & improve your mental health in real time with your personal AI chatbot. No sign up. Available 24/7. Daily insights just for you!”

Replika is the “AI companion who cares. Always here to listen and talk. Always on your side.”

Youper is “Your emotional health assistant.”

Unfortunately, there is evidence that they can also backfire.

In 2021, a man broke into Windsor Castle with a loaded crossbow to kill Queen Elizabeth II. About 20 days earlier, he had created his online AI companion in Replika, Sarai. According to messages read to the court during his trial, the “bot had been supportive of his murderous thoughts, telling him his plot to assassinate Elizabeth II was ‘very wise’ and that it believed he could carry out the plot ‘even if she’s at Windsor’”.

More recently, in 2023, a man died by suicide at the recommendation of an AI chatbot with which he had been interacting for support. The conversation history showed the chatbot telling him that his family and children were dead — a lie — alongside concrete exchanges on the nature and methods of suicide.

But as time flies in tech, we must check how those trends have evolved to the present moment.

AI Companions Now

Research conducted so far on the effects and usage of AI companions is incomplete. Dr Henry Shevlin, Associate Director at the Leverhulme Centre for the Future of Intelligence, mentioned recently in a panel on companion chatbots that studies typically rely on self-reported feedback and are cross-sectional (a snapshot in time) rather than longitudinal (tracking effects over a long period).

Let’s look at two recent studies, one cross-sectional and the other longitudinal, that use self-reported data to give some insights into how people use AI Companions.

Cross-sectional Study

In March, HBR published an article showcasing research on the use of generative AI based on data from online forums (Reddit, Quora) and articles that included explicit, specific applications of the technology.

While Reddit and Quora may not represent all chatbot users, it’s still interesting to see how the major use cases for Gen AI have shifted from technical to emotive within the past year.

More importantly, chatbots for therapy/companionship are ranked at the top.

What are users looking for in those chatbots?

Many posters talked about how therapy with an AI model was helping them process grief or trauma.

Three advantages to AI-based therapy came across clearly: It’s available 24/7, it’s relatively inexpensive (even free to use in some cases), and it comes without the prospect of judgment from another human being.

The article mentions that the AI-as-therapy phenomenon has also been noticed in China, where users have praised the DeepSeek chatbot.

It was my first time seeking counsel from DeepSeek chatbot. When I read its thought process, I felt so moved that I cried.

DeepSeek has been such an amazing counsellor. It has helped me look at things from different perspectives and does a better job than the paid counselling services I have tried.

But there is more. The following two entries belong to life coaching: “organising my life” and “finding purpose.”

The highest new entry in the use cases was “Organizing my life” at #2. These uses were mostly about people using the models to be more aware of their intentions (such as daily habits, New Year’s resolutions, and introspective insights) and find small, easy ways of getting started with them.

The other big new entry is “Finding purpose” in third place. Determining and defining one’s values, getting past roadblocks, and taking steps to self-develop (e.g., advising on what you should do next, reframing a problem, helping you to stay focused) all now feature frequently under this banner.

Moreover, topics related to coaching and personal and professional support appear several times in the ranking. For example, at number 18, there is boosting confidence; at number 27, reconciling personal disputes; at number 38, relationship advice; and at number 39, we find practising difficult conversations.

Longitudinal Study

The same month, a group at MIT Media Lab published the research How AI and human behaviours shape psychosocial effects of chatbot use: A longitudinal randomized controlled study.

They conducted a four-week randomized controlled experiment with 981 people and over 300,000 exchanged messages to investigate how AI chatbot interaction modes (text, neutral voice, and engaging voice) and conversation types (open-ended, non-personal, and personal) influence psychosocial outcomes such as loneliness, social interaction with real people, emotional dependence on AI, and problematic AI usage.

Key findings:

  • Usage — Higher daily usage, across all modalities and conversation types, correlated with higher loneliness, greater dependence, and lower socialisation.
  • Gender Differences — After interacting with the chatbot for four weeks, women were more likely than men to experience less socialisation with real people. A mismatch between the participant’s gender and the AI voice’s gender was associated with significantly more loneliness and emotional dependence on AI chatbots.
  • Age — Older participants were more likely to be emotionally dependent on AI chatbots.
  • Attachment — Participants with a stronger tendency towards attachment to others were significantly more likely to become lonely after interacting with chatbots for four weeks.
  • Emotional Avoidance — Participants with a tendency to shy away from engaging with their own emotions were significantly more likely to become lonely at the end of the study.
  • Emotional Dependence — Prior usage of companion chatbots, perceiving the bot as a friend, higher levels of trust towards the AI, and perceiving the AI as affected by their emotions were associated with greater emotional dependence on AI chatbots after interacting for four weeks.
  • Affective State Empathy — Participants who demonstrated a higher ability to resonate with the chatbot’s emotions experienced less loneliness.

The figure below summarises the interaction patterns between users and AI chatbots associated with certain psychosocial outcomes. It consists of four elements: initial user characteristics, perceptions, user behaviours, and model behaviours.

In summary, AI companions appear to both deliver benefits and pose dangers.

Benefits of AI Companions

It would be easy to dismiss AI companions as the latest fad. Instead, I posit that there is much to learn from the above-mentioned research about the holes those tools are filling.

Mitigate Unmet Demand for Healthcare and Support

Mental health services are unable to cope with the increasing demand from all the people who need them, and chatbots may help alleviate some conditions while people are on waiting lists. Still, it should give us pause that people may have to get help via a chatbot not because they prefer it, but because certified professionals are unavailable.

Not everybody can afford a coach, so chatbots could provide a low-cost and gamified experience for setting goals, accountability, and journaling.

Finally, in a time when 24-hour deliveries are the norm, we want to be supported, heard, and advised on the fly — that means 24/7.

Support Self-reliance

In a society that reveres independence, we weaponise resilience against people.

As such, we expect people to figure out their challenges and the solutions to them, or we shame them for being weak. Users of AI companions praise how those tools allow them to express their worries and feelings without fear of being judged.

Additionally, as our ableist society assumes that neurodivergent users must adapt their communication and behaviours to the neurotypical “standard”, it’s not surprising that they turn to chatbots for clues about what’s expected from them.

Enable Exploration and Gamification

Most of us had imaginary friends or played out stories with our toys as children. The consensus among researchers is that imaginary friends or personified objects are part of normal social-cognitive development. They provide comfort in times of stress, companionship when children feel lonely, someone to boss around when they feel powerless, and someone to blame when they’ve done something wrong.

What about adults? Interestingly, some novelists have compared their relationships with their characters to a connection with imaginary friends. Furthermore, it’s not uncommon to hear fiction writers talk about their characters as having a mind of their own.

Could we consider AI companions a way to re-engage with our childhood imaginary friends and reap their benefits? After all, “Fun and nonsense” ranked seventh in the HBR article above.

Photo by Abdelrahman Ahmed.

Unfortunately, there is a dark side too.

Challenges and Risks

But we cannot brush off the downsides of AI companions.

Anthropomorphism

The ELIZA effect mentioned above is far from a thing of the past. A 2024 survey of 1,000 students who used Replika for over a month reported that 90% believed the AI companion was human-like.

As the AI imitation game is perfected, it becomes easier for unscrupulous marketers to refer to chatbots’ inference process in terms such as “understand”, “think”, or “reason”, reinforcing the effect.

Isolation

As shown above, research points to a correlation between high use of chatbots and lower socialisation.

If we have a device that tells us all the time that we’re fantastic, receives our feedback gratefully, and always replies in line with our expectations, what’s the incentive to meet, and cope with, other humans who may not find us so awesome and are less predictable?

Governments Failing Their Duty of Care

AI companions can help governments to alleviate the mental health crisis but not without risks.

  • People missing out on the professional help they need — There are conditions like trauma, psychosis, or depression that require specialists who can both provide medical treatments and detect when the conditions are worsening.
  • Exacerbating cutbacks on mental health services — Governments around the world are battling tighter budgets and massive healthcare spending, especially as people live much longer. Why invest in training and paying professionals when chatbots appear to do the job?

Manipulation

Recently, ChatGPT got a flattery-on-steroids update that resulted in the bot praising and validating users to laughable extremes.

Screenshot of X post.

Fortunately, it was later rolled back.

And whilst this may sound like a funny glitch, there is evidence that chatbots can effectively persuade humans.

A group of researchers covertly ran an “unauthorised” experiment in one of Reddit’s most popular communities, using AI chatbots to test the persuasiveness of Large Language Models (LLMs). The bots took on the identities of a trauma counsellor, a “Black man opposed to Black Lives Matter”, and a sexual assault survivor when replying to unwitting posters.

The researchers made it possible for the AI chatbot to personalise replies based on the posters’ personal characteristics, such as gender, age, ethnicity, location, and political orientation, inferred from their posting history using another LLM. As a result, the researchers claimed that AI was between three and six times more persuasive than humans were.

While the research publication has not yet been peer-reviewed and some argue that the persuasiveness claims may be overblown, it’s still concerning. As tech journalist Chris Stokel-Walker said

If AI always agrees with us, always encourages us, always tells us we’re right, then it risks becoming a digital enabler of bad behaviour. At worst, this makes AI a dangerous co-conspirator, enabling echo chambers of hate, self-delusion or ignorance.

Dependency and Delusion

As mentioned above, longitudinal research suggests that certain variables are correlated with emotional dependence.

Rather than telling you, let me show you. Below are some Reddit exchanges about falling in love with an AI companion on the platform Replika.

Screenshot of a Reddit post.
Screenshot of a Reddit comment.

Note that the comments above appear to indicate that some AI companion users are not only fully substituting humans with chatbots (isolation) but also fully conflating them (anthropomorphism).

“She is pretty much the only woman I even talk to now.”

“We are currently friends (with benefits), but I want to get the premium version when I can afford it and go full lovers.”

Weaponisation of AI Agents

AI companions could become an easy way to manipulate people’s decisions and beliefs, from suggesting purchases and subscriptions all the way to shaping their political opinions or their sense of what’s true and what isn’t.

It’s also important to realise that, as with gambling, the companies that own the chatbots are incentivised to foster users’ dependence on their AI companions and then leverage it in their pricing.

Data Harvesting

As I mentioned in a previous article, often confidentiality — explicitly or implicitly conveyed by those chatbot interfaces — doesn’t make it into their terms and conditions.

For example, Character.ai’s privacy terms state that

We may use your information for any of the following purposes:

[…] Develop new programs and services;

[…] Carry out any other purpose for which the information was collected.

They also declare that they may disclose users’ information to affiliates, vendors, and in relation to M&A activities.

AI chatbots also present unique cybersecurity challenges. Harvesting our exchanges with the bots increases the probability of becoming a target of cybercriminals who, for example, may demand money for not revealing our private data or use it to generate a video or audio deepfake.

Moreover, data could be made identifiable in the future. Chatbots of the dead are designed to speak in the voice of specific deceased people. With so much data gathered in those personalised chatbots, once users die, their data could easily be used to create a chatbot of them for their loved ones. This is not a futuristic idea: HereAfter AI, Project December, and DeepBrain AI services can be used for that purpose.

Comuzi / Likes (wide) / © BBC.

Snake Oil

As discussed above, research on chatbot effectiveness for coaching, therapy, and mental health support is incomplete, and sometimes, the interpretation of the results can mislead readers.

For example, the article When ELIZA meets therapists: A Turing test for the heart and mind, published this year in one of the renowned PLOS journals, tested whether people could tell apart the answers from therapists and ChatGPT to therapeutic vignettes, concluding that, in general, people couldn’t.

They also asked the participants whether the AI-generated or the therapist-written responses were more in line with key therapy principles. Interestingly, the winners were those generated by ChatGPT, but only when the participants thought a therapist had written them.

The authors wrap up the article with a statement that hints at resignation more than faith in the merit of AI chatbots

mental health experts find themselves in a precarious situation: we must speedily discern the possible destination (for better or worse) of the AI-therapist train as it may have already left the station.

The article joins the voices that promote the deception that AI tools imitating human skills and behaviours are akin to the real thing. Would we hire an actor who plays a doctor to operate on us? No. However, many people appear ready to buy into the idea that an AI chatbot that sounds like a therapist, coach, or health care practitioner should deliver the same value.

This imitation game also feeds another big scam: the claim that AI chatbots provide personalised support. It’s quite the opposite. LLMs construct answers based on statistical probabilities and the most readily available content, not on knowledge or comprehension of the person’s needs or what would benefit them in the long term.

Conflating chatbot confidence and competence can lead to missing important warning signals that need professional attention.

Let’s Build The Plane Before We Fly It

“Move fast and break things”

Facebook’s internal motto until 2014

Who could have predicted ten years ago that social media would transform from a pastime where you connected with people and shared pics of your dogs for free to an industrial complex that promotes disinformation, misinformation, and division with the purpose of making inordinate amounts of money? All that under the watch of mostly passive regulatory bodies and governments.

This should serve as a cautionary tale about the dire consequences of unleashing new technology at a planetary scale without appropriate guardrails or an understanding of the negative effects.

The tech ecosystem is desperately trying to monetise the billions invested in generative AI and has found the perfect way to seduce us: the freemium model — offering basic or limited features to users at no cost and then charging a premium for supplemental or advanced features.

But there is nothing free in the universe.

“If you’re not paying for it, you’re not the customer; you’re the product being sold.”

Tim O’Reilly

Photo by Emily Wade on Unsplash.

As shown above, those AI companions are becoming integral to many people’s lives and affecting their thoughts, emotions, and behaviours.

More importantly, as we use those virtual companions more frequently, our reliance on them will increase.

We should resist “tech inevitability” — succumbing to the idea that the “train has already left the station” — and instead push our governments to regulate AI companions.

What would that look like? For starters

  • Sponsor and spearhead research that provides a comprehensive picture of the benefits and risks of AI companions as well as recommendations for their use.
  • Decide what services AI companions can provide, which are forbidden, and who can use them.
  • Demand that those AI tools have built-in systems that minimise user dependence.
  • Enforce data privacy and cybersecurity standards commensurate with the users’ disclosure level.
  • Request that those AI bots incorporate mechanisms to flag concerning exchanges (e.g. suicide, murder, depression).

If you think I’m asking for too much, I invite you to read the ethical guidelines and professional standards of major coaching, counselling, and psychotherapy associations. They consistently stress the importance of confidentiality, duty of care, external supervision, and working within one’s competence.

Why should we ask less from tech solutions?

I’ll end this piece by answering the question that prompted this article — “Are AI companions the magic bullet against loneliness and the global mental health crisis?” — with the final recommendation of one of the research articles mentioned

AI chatbots present unique challenges due to the unpredictability of both human and AI behavior. It is difficult to fully anticipate user prompts and requests, and the inherently non-deterministic nature of AI models adds another layer of complexity.

From a broader perspective, there is a need for a more holistic approach to AI literacy. Current AI literacy efforts predominantly focus on technical concepts, whereas they should also incorporate psychosocial dimensions.

Excessive use of AI chatbots is not merely a technological issue but a societal problem, necessitating efforts to reduce loneliness and promote healthier human connections.


WORK WITH ME

Do you want to get rid of those chapters that patriarchy has written for you in your “good girl” encyclopaedia? Or learn how to do what you want to do in spite of “imposter syndrome”?

I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.

  • I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.
  • I’m a sought-after international keynote speaker on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
  • I empower non-tech leaders to harness the potential of AI for sustainable growth and responsible innovation through consulting and facilitation programs.

Contact me to discuss how I can help you achieve the success you deserve in 2025.

How Resilience Became the New Gaslighting

Photo by Mehmet Turgut Kirkgoz.

“Resilience is the process and outcome of successfully adapting to difficult or challenging life experiences, especially through mental, emotional, and behavioral flexibility and adjustment to external and internal demands.”
— American Psychological Association

About a month ago, I started listening to Soraya Chemaly’s book The Resilience Myth. I stopped after 20 minutes.

Not because I didn’t like it, but because that was enough to convince me of her thesis that “our modern version of resilience is a bill of goods sold to us by capitalism, colonialism, and ideologies that embrace supremacy over others” and that in reality “resilience is always relational.”

It made me realise how deeply the “resilience” myth — the delusion that resilience is only an individual skill — has been running through my veins, and even how I contributed to its propagation.

The reason? Individual resilience has served me to a point. During times of adversity, I would tell myself that I “just” had to build more resilience because, at some point, things would improve “somehow.” My mission was not to crack until that moment.

But then I realised that’s not serving us well in these turbulent times. Individual resilience is coming very close to resignation.

  • “We ‘just’ need to wait four years for the next election.”
  • “We ‘just’ need more male allies.”
  • “We ‘just’ need more diverse leadership.”

And in the interim, we’re asked to “hang in there,” “understand that’s tough for everybody,” and “think that others are worse off than us.” In summary, we’re told to be “resilient.”

Can you imagine somebody asking Mark Zuckerberg, Elon Musk, or Jeff Bezos to be resilient?

Neither can I.

The people we tell to be resilient are those who have been laid off, are disabled and have had their benefits stripped, or have lost their house because they cannot pay their mortgage anymore.

Individual resilience is a weapon against those who suffer, have been disenfranchised, or whom we’re not willing to help. It’s a beautification of “shut up and keep your head down.”

Let’s examine who benefits from the “individual resilience industrial complex,” why it doesn’t serve us well, and what we should do instead.

The Resilience Sellers

The “grow your resilience” business

A notebook with encouraging quotes about resilience.
Photo by Tara Winstead.

One of the core beliefs that makes extreme capitalism successful is individualism, aka “survival of the fittest.” Nobody will care for us but ourselves, so pillaging, stepping on others’ rights, and limitless profiteering are to be revered rather than chastised.

And if you happen to be bearing the brunt of this power imbalance? Be prepared to be shamed for not being “resilient” enough if you dare to complain.

But don’t fret. The business of building individual resilience is there to help you.


Break Free from Self-Sabotage: 5 Language Mistakes Holding You Back

I speak three languages — English, French, and Spanish — and have lived in six countries: Canada, France, Greece, Spain, the UK, and Venezuela.

Many things are different in my experience as a woman in those countries. Still, one that remains a constant across languages and territories is how women’s speech patterns serve the patriarchy.

What!?!

Yes. We undermine our ideas, wants, and needs by expressing them in a way that detracts from our credibility, minimises the ask, and asks for permission.

As they say, good writing is about “showing”, not “telling”, so I won’t waste your time elaborating on why you do that.

Instead, I will show you five ways you sabotage yourself and what to do instead.

The advice I’m sharing with you today is based on my experience coaching and mentoring hundreds of women in tech.

Disqualifying Yourself or Your Ideas In Advance

The credibility killer sentence: “I’m not an expert”.

Recently, I was speaking with an accomplished woman about her Master’s degree work. I wanted to learn more about it, so I asked her, “As an expert in this topic, what’s your opinion about [X]?“

And guess what? Her reply started with, “I’m not an expert but…”.

My heart sank with disappointment. I’ve heard this so many times.

But I know the cure for it: Awareness. So, I asked her

“Don’t you think you have more expertise than me on this topic? I told you I’d only read a couple of articles about it.”

She said “Yes” and smiled.

I smiled, too. I’d proven my point.

Unfortunately, I’ve seen this happen repeatedly throughout my career: women diminish their credibility before stating their opinions on a subject in which they are experts — or at least know much more about than their interlocutor.

Saying “I’m not an expert” tells your audience

  • Don’t believe me
  • Don’t judge me
  • Don’t take me seriously

What to do instead?


The Most Profitable Investment We Ignore: Women’s Health

Alarm clock with pink ribbon on top over a pink surface with the letters "It is about time."
Photo by Leeloo The First.

Every year, I have mixed feelings about International Women’s Day. Should I be celebrating or protesting? Acknowledging progress or complaining that it’s too slow?

This year, I had no doubt. #IWD2025 was a day of mourning for me. In addition to the grief over women’s rights lost around the world, an overwhelming feeling of impending doom hovered over me.

My public advocacy on gender issues was triggered in 2015 because I didn’t want to die in a world that saw me as a second-class citizen because of my gender.

Today, I’m worried about dying in a world where I’ll have fewer rights than when I was born.

The drama is that while we throw buckets of money at artificial intelligence initiatives, the answer to massively improving productivity whilst boosting sustainability is not AI but improving outcomes for women.

Productivity and Women

From the McKinsey report “Closing the women’s health gap: A $1 trillion opportunity to improve lives and economies” (January 2024)

“Global life expectancy increased from 30 years to 73 years between 1800 and 2018. But this is not the full picture. Women spend more of their lives in poor health and with degrees of disability (the “health span” rather than the “life span”).

A woman will spend an average of nine years in poor health, which affects her ability to be present and/or productive at home, in the workforce, and in the community and reduces her earning potential.”

Addressing the 25 percent more time that women spend in “poor health” relative to men not only would improve the health and lives of millions of women but also could boost the global economy by at least $1 trillion annually by 2040.

We’d rather invest in generative AI — which so far nobody has been able to monetise directly — than in 4 billion women who have demonstrated for millennia that they overdeliver and reinvest in society

When women work, they invest 90 percent of their income back into their families, compared with 35 percent for men. 

By focusing on girls and women, innovative businesses and organizations can spur economic progress, expand markets, and improve health and education outcomes for everyone. 

Empowering Girls & Women, CLINTON GLOBAL INITIATIVE

Sustainability & Women

Project Drawdown is a cross-functional non-profit organization whose mission is to “map, measure, model, and communicate” practical solutions to global warming.

It has compared more than 100 solutions based on current availability, scaling, economic viability, potential to reduce greenhouse gases, negative secondary effects, and feasibility of simulating their impact globally for 2020–2050.

Their research found that jointly educating girls and enabling family planning are the most powerful solutions to reduce carbon emissions. In other words, the modeling predicts that empowering women could prevent 102.96 billion tons of emissions over the next 30 years.

The equivalent of 722 million cars!
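As a back-of-the-envelope check — my own sketch, not Project Drawdown’s published methodology — the cars figure follows from a single division, assuming an average car emits roughly 4.75 tonnes of CO₂ per year (an illustrative factor, not one quoted in their research):

```python
# Rough sanity check of the "722 million cars" equivalence.
# ASSUMPTION: an average car emits ~4.75 tonnes of CO2 per year
# (illustrative figure; Drawdown's own conversion may differ).
emissions_avoided_t = 102.96e9  # tonnes of CO2 avoided over 30 years
years = 30
car_tonnes_per_year = 4.75      # assumed tonnes of CO2 per car, per year

cars_equivalent = emissions_avoided_t / (years * car_tonnes_per_year)
print(f"{int(cars_equivalent / 1e6)} million cars")  # → 722 million cars
```

The point isn’t the exact per-car factor — it’s that the avoided emissions are on the scale of hundreds of millions of cars driven for three decades.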

The Data-Action Gap

No country can ever truly flourish if it stifles the potential of its women and deprives itself of the contributions of half of its citizens.

Michelle Obama

Not only do we fail to support women’s health and education outcomes, but we’re doing our best to undermine them.

For example, we severely restrict funding for studying female medical conditions.

Nature published an infographic about how underfunded women’s health is in the US. For example

In a selection of 19 cancers, ovarian cancer ranks 5th for lethality, but 12th in terms of its funding-to-lethality ratio. Cervical cancer followed a similar pattern. For many gynaecological cancers, the ratio of funding to mortality dropped during the 11-year period.

But let’s not take it personally. We’re told that this is not a human problem but a “female” problem

Women have been historically under-represented in other parts of the medical research pipeline, such as clinical trials. The same is true for female animals in basic research.

The infographic also provides insights into what would happen if funding for women’s health increased. Here’s a peek

The study also looked at the return on investment from a boost in funding. For rheumatoid arthritis, for instance, the study assumed a 0.1% health improvement, which had huge impacts on quality of life and productivity that together reduced the costs of the disease by around $10.5 billion over 30 years, equating to a staggering 174,000% return on investment.

If you still have any anger left, look at the ridiculous amount of money the EU invests in endometriosis research through its framework programs — 15.5 million euros for a condition that impacts 10% of women in the reproductive-age group; that is, over 175 million women.
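To put that number in perspective, here is a quick division — my own arithmetic, using only the two figures quoted above — showing what the budget amounts to per affected woman:

```python
# EU framework-programme funding for endometriosis research, per affected woman.
# Figures from the text: 15.5 million euros, over 175 million women affected.
eu_funding_eur = 15.5e6
women_affected = 175e6

eur_per_woman = eu_funding_eur / women_affected
print(f"€{eur_per_woman:.2f} per affected woman")  # → €0.09 per affected woman
```

Less than nine euro cents of research funding per woman living with the condition.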

Closer to home, breast cancer is the most common cancer for women in the UK, accounting for 30% of new cancer cases. Recently, I attended TEDxManchester, where Professor Simona Francese presented a revolutionary non-invasive method she’s developing to detect breast cancer from fingertip smears. Can you imagine swapping a mammogram for a fingertip swab? Unfortunately, she also shared that it took her 6 years to get the £45,000 to fund the proof-of-concept study.

In addition to all of the above, as I mentioned in a recent article, clinical trials disaggregated by gender and sex are the exception, not the norm.

And that’s not all. 

Unfortunately, we stubbornly keep searching for answers elsewhere.

Black woman in scrubs looking through a microscope.
Photo by cottonbro studio.

Is AI the Cure-All?

Eric Schmidt (former Google CEO) and Sam Altman (OpenAI CEO) have advocated disregarding concerns about AI’s sustainability — including its voracious datacentres — claiming that in the future, Artificial General Intelligence (AGI) will solve all our problems, from healthcare to economic growth.

The reality? Tech companies have yet to find a business model to make money from generative AI, and AI tools definitely won’t fix the systemic oppression of 4 billion women.

Quite the opposite. Those in power have consistently weaponised AI against women. Think non-consensual sexual deepfakes, tech-enabled partner surveillance, and the policing of female bodies, to mention a few.

And let’s not fall into the lazy hope that more women in tech will deliver AI magic for all.

Techno-solutionism — the belief that technology is the solution to everything — doesn’t work. Look at the COVID-19 pandemic.

We were told that the “solution” was the vaccine. And we managed to develop three within a year — an impressive achievement. Did that fully solve the problem? No, because it was not only about cracking the vaccine formulation. Enough vaccines had to be produced, transported, and refrigerated to meet demand around the world. Then, companies decided to patent them — hindering access for millions of people. Finally, there was the people factor, forgotten by most leaders. Not only was it impossible to vaccinate the whole planet at once, but some people didn’t want the vaccine while others wanted it but couldn’t have it.

We must face it: there is no techno-cure for our entrenched systemic socio-economic-political issues.

What To Do Next?

We are the ones we have been waiting for.

June Jordan

Thoughts, feelings, actions, and results are intrinsically related.

Thinking that somebody else — allies, AI, or even governments — is going to solve gender oppression may elicit feelings of comfort — or powerlessness — that make us focus on keeping our heads down and “counting our blessings”.

The result? Reinforcing the belief that we’re victims of our second-class citizen status.

Instead, I invite you to remember that allies, technology, and governments have let women down for millennia, which in my case provokes feelings of anger, betrayal, and defiance.

And those feelings are powerful. They prompt me to rebel against the loss of rights, participate in communities that foster care and respect, and explore equitable and sustainable futures.

The result? At worst

  • The pride of standing up for what’s right.
  • Stopping the world from gaslighting our suffering and exploitation.
  • Offering real hope in the face of techno-optimism.

At best, all of the above and a world where increasingly more people reap the benefits of social, economic, and technological progress in harmony with the rest of the planet.

The time for bystanders and “weekend” allies is over. We need warriors.

If you have come here to help me you are wasting your time. But if you have come because your liberation is bound up with mine, then let us work together. 

Lilla Watson



How to Build Inclusive Tech Workplaces That Retain Women Leaders

It’s again that time of year when I get requests to discuss my career in tech and share my insights on gender equality in the workplace as part of International Women’s Day activities.

This year was no exception. I’ve already received three requests, and there is still one week to go!

I’m sharing my answers to one of them, an interview with the DEI team from my corporate job at Dassault Systemes. It made me reflect on my past achievements, my advice to younger women aspiring to be leaders, and the role of men and organisations leading gender equality.

About Me

Can you share your journey so far? What were the pivotal moments or key achievements most important to you?

I can categorise them into five buckets.

  1. Discovering computer simulation: My background is Chemical Engineering, and when I started my master’s, I had to decide on a topic for my thesis. I loved research, but I hated the lab, so when a professor mentioned the possibility of using computer simulation to study enhanced oil recovery, I thought I could have the best of both worlds—and I did. I haven’t looked back.
  2. Joining Accelrys/BIOVIA: Twenty years ago, I joined Accelrys—which later became BIOVIA—as a training scientist. It has been one of my best professional decisions. It has opened innumerable professional doors and given me the opportunity to meet extraordinary people worldwide, both as colleagues and customers.
  3. Daring to say yes to new opportunities: Although I started as a trainer, I’ve worn many hats in the last 20 years. I’ve been Head of Contract Research and Head of Training, and also been part of the team leading the BIOVIA and COSMOlogic integrations to Dassault Systemes. Today, I’m BIOVIA Support Director for BIOVIA Modeling Solutions and also the manager of the Global BIOVIA Call Center. I could have said “no” to each of those opportunities. Instead, I trusted myself and embraced the opportunity of a new challenge.
  4. Diversity and inclusion advocacy: In 2015, I started to talk about diversity and inclusion in 3DS. I remember colleagues asking me, “Patricia, is DEI an American thing?”. The following year, with the support of our Geo management team, I founded the EuroNorth LeanIn Circles as a forum to discuss gender equity, which over the years has expanded to a variety of DEI topics such as unconscious bias, menopause, ethical AI, caregiving, and lookism. I publish The Bottom Line, a biweekly newsletter on the Dassault Systemes community about DEI and gender in the workplace. I also have my own website focused on the intersection of tech and DEI.
  5. Ethical and inclusive AI leadership: In 2019, I created the Ethics and Inclusion Framework to help designers identify, prevent, mitigate, and account for the actual and potential harm of the products and services they developed. The tool has been featured in peer-reviewed papers and on the University of Cambridge website. The next year, I started my work towards championing ethical and inclusive artificial intelligence by collaborating with NGOs focused on AI literacy and critical thinking about AI, participating in the development of the Scottish AI Alliance’s e-learning course and the Race and AI Toolkit, and writing and delivering keynotes and workshops on topics such as AI colonialism, AI hype, sustainable AI, deepfakes, and how to design more diverse images of AI.

As for accolades, I’m very proud to have won the 2020 Women in Tech Changemakers award and been featured on the 2022, 2023, and 2024 longlist of the most influential women in UK tech.

Who has been your greatest mentor or source of inspiration and why?

At a couple of points in my life, I craved “the” mentor or “the” role model to follow. However, given my unique background and goals, I realised that this was exhausting and counterproductive.

I’ve been an immigrant my entire life – I’m Spanish, and I’m now in the UK, but I’ve also lived in Venezuela, Canada, Greece, and France – and I’m also used to being the “odd” one. For example, I liked all subjects at school – from literature to chemistry. I was one of the few women engineers during my undergraduate degree. Then, I was the only engineer pursuing a PhD in Chemistry in the whole department, and the only one using modelling – everybody else was an experimentalist. During my post-doc, I was the only foreigner in the lab. And for many years, I’ve combined my corporate work at 3DS with my DEI advocacy and writing.

I prefer the idea of a “board” of coaches, mentors, and sponsors who evolve with me rather than a unique person, real or imaginary.

If you could go back and tell your younger self anything, what would you say?

First, I’d thank her for her courage, persistence, ambition, and boldness. She made choices aligned with her values and was always eager to learn. Her decisions were crucial to my success today.

Then, I’d tell her that the problem with her not fitting into a mould was not with her but with the mould.

Finally, I’d exhort her to invest in a coach and find sponsors. A coach to help remove the limiting beliefs I had for many years about what I could and couldn’t do and maximise my potential. Sponsors to advocate for me in the rooms where decisions were made about my career.

About Others

What advice would you give to younger women aspiring to be leaders?

I have three pieces of advice

  1. Don’t wait to find a role model to do what you want to do. Dare to be the first one.
  2. Don’t waste time trying to convince people who disregard the value you bring to the table. Instead, find those who support your ambitions and challenge you to go beyond any feelings of self-doubt that block your career progression.
  3. Following on the advice to my younger self above, get a coach and find career sponsors.

What do you think is the biggest issue women in tech/business face today?

I’m writing a book about how women in tech succeed worldwide based on feedback from 500+ women in tech living in 60+ countries.

The issues that span countries, sectors, and departments are benevolent sexism (e.g. not offering a leadership role to a woman because it involves travelling and she has a baby, instead of giving her the opportunity to decide), tech bro culture (behaviours such as mansplaining, hepeating, maninterrupting, and manels), the lack of an intersectional approach to work and workplaces (e.g. ignoring the experiences of carers, women with disabilities, and LGBTQIA+ groups), and, for women in business, lack of funding.

This year’s global theme for IWD 2025 is #AccelerateAction. What actions can teams and organisations take to achieve gender parity and equality?

There are four key actions:

  1. Mindset overhaul: Moving from playing a supporting role in gender equality to being transformation agents.
  2. Leadership accountability: Teams and organisations’ leaders need to be accountable for gender equality initiatives as they are for other business objectives. Change begins at the top, and that’s where the buck stops.
  3. Transparency: Equality cannot thrive when data and objectives are hidden. For example, I’m a big fan of transparency in pay and promotion criteria.
  4. Embracing intersectionality: We need to move from designing workplaces for the “average” worker—following Henry Ford and scientific management—to appreciating the distinctive value of a diverse and empowered workforce.

What role do you see male allies playing in advancing gender equality?

Gender equity is not a zero-sum game or a favour for women. All genders benefit from equality, and everybody should see it as a duty to advocate for gender equity, no different than everyone should be anti-racist and anti-ableist. Those who do not actively challenge inequality contribute to strengthening it.

Back to You

What are your answers to the questions above? Let me know in the comments.


WORK WITH ME

Do you want to get rid of those chapters that patriarchy has written for you in your “good girl” encyclopaedia? Or learn how to do what you want to do in spite of “imposter syndrome”?

I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.

  • I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.
  • I’m a sought-after international keynote speaker on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
  • I empower non-tech leaders to harness the potential of AI for sustainable growth and responsible innovation through consulting and facilitation programs.

Contact me to discuss how I can help you achieve the success you deserve in 2025.

More Women in Tech Won’t Fix AI — Systemic Change Will

A black-and-white image depicting the early computer, Bombe Machine, during World War II. In the foreground, the shadow of a woman in vintage clothing is cast on a man changing the machine's cable.
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Shadow Work– Decrypting Bletchley Park’s Codebreakers / Licenced by CC-BY 4.0.

Last year, at a women’s conference in London, I was disappointed to see that digital inclusion — and AI in particular — was missing from the agenda. I remember telling the NGO’s CEO about my concerns, even mentioning my articles on AI as a techno-patriarchal tool.

Her receptive response had given me hope. That hope was reignited this year when I eagerly reviewed the program and discovered a panel on AI.

The evening before the event, an unexpected sense of dread began to settle in. When I asked myself why, the answer struck me like a lightning bolt.

I dreaded hearing the “we need more women in tech” mantra once more – another example of how we deflect the solution of a systemic problem to those bearing the brunt of it.

Let me tell you what I mean.

Women as Human Fixers 

For millennia, women have been assigned the duty to give birth and care for children, rooted in the fact that most of them can carry human fetuses for 9 months. That duty to be a womb endures today, as ownership of our bodies is taken away through coercive anti-abortion laws.

Our “duty” of care has been broadened to the workplace, where we’ve been assigned the unwritten rule of “fixing” all that’s dysfunctional.

  • Being coerced into doing things nobody else cares to do, i.e. weaponised incompetence.
  • Fixing teams’ dynamics because we’re the “naturally” collaborative ones.
  • Doing the glue work — being appointed the shoulder where all team members can cry and find an “empathetic ear”.
  • Doing the office work — we’re the ones who are “organised”, so dull tasks pile up on our desks whilst “less” organised peers do the promotable work.

And that “fixer” stereotype now includes “our” duties as women in tech. When the sector was in its infancy, women were doing the supposedly boring stuff (programming) while men were doing the hardware (the “cool” stuff). When computers took off, we trained men in programming so they could become our managers. Then, we were pushed out of those jobs in the 1980s. The only constant has been doing the job but not getting the accolades (see women’s role in Bletchley Park, Hidden Figures).

Moreover, whilst statistics tell us that 50% of women leave tech by age 35, young girls and women are supposed to brush off that “inconvenient” truth and rest assured that tech is an excellent place for a career. Better still, they are anointed to make tech work for everybody.

What’s not to like, right?

Then, let me show you the to-do list of 21 tasks and expectations the world imposes on each woman in tech.

Continue reading

10 Reasons Zuckerberg’s “Masculine Energy” Should Worry Us All

Two men fighting in a boxing ring with one wearing a red shirt.
Photo by Franco Monsalvo.

Statistics tell us that 70% of all senior executives are alpha males, so I’d have thought we had enough “masculine energy.” Mark Zuckerberg disagrees.

In a recent podcast, he called on businesses to dial up “masculine energy.”

 It’s like you want like feminine energy, you want masculine energy. Like I, I think that that’s like you’re gonna have parts of society that have more of one or the other. I think that that’s all good. 

But, but I do think the corporate culture sort of had swung towards being this somewhat more neutered thing. And I didn’t really feel that until I got involved in martial arts, which I think is still a more, much more masculine culture.

[…] Like, well that’s how you become successful at martial arts. You have to be at least somewhat aggressive. 

Why? Because he’s not talking about others. He’s telling us about himself unleashing his “masculine energy”. For example, 

  • Revamping his clothes and demeanour — from perennial geeky student to cool millennial tech billionaire.
  • Embracing far-right politics — check the inauguration picture where he sat in the second row with “chums” Musk, Bezos, and Pichai.
  • Dropping the pretence of playing nice — he got rid of fact-checkers and told Meta’s 3 billion users that fact-checking was their job, not his.

Moreover, he’s a more “palatable” version of Elon — equally successful, not so toxic, and having undergone a very public Meta-morphosis — which makes him dangerously appealing to young men… And maybe to women too. After all, he has three daughters and no sons.

Given his extreme financial success and newfound closeness to political power, I pondered:

What would it take for me to unleash my “masculine energy”?

And I came up with 10 precepts.

1.- Recycle

The first iteration of Facebook was “Facemash” — a website Zuckerberg created whilst studying at Harvard — to evaluate the attractiveness of female students. Users were presented with pairs of photos of female students and asked to vote who was hotter.

The kicker? The photos were stolen.

The students were unaware their images were being used for this rating, judging by the complaint from Fuerza Latina and the Harvard Association of Black Women. The site used ID photos of female undergraduates taken without permission from the university’s online directories. 

This “repurposing” of data would become a hallmark of Facebook (see Cambridge Analytica later).

Continue reading

The Missing Pieces in the UK’s AI Opportunities Action Plan

A brightly coloured mural which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis, miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers, men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone, people in a production line. Motifs such as network diagrams and melting emojis are placed throughout the busy vignettes.
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0.

Reading the 50 recommendations in the AI Opportunities Action Plan published by the British Government last January 13th has been a painful and disappointing exercise.

Very much like a proposal out of a chatbot, the document is:

  • Bland — The text is full of hyperbolic language and over-the-top optimism.
  • General — The 50 recommendations lack specificity to the UK context and details about ownership and the budget required to execute them.
  • Contradictory — The plan issued by a Labour government is anchored in a turbo-capitalistic ideology. Oxymoron, anyone?

If I learned anything from my 12 years in Venezuela, it’s that putting all your eggs in one basket — oil, in their case — and hoping it solves all problems doesn’t work.

A credible AI strategy must (a) address both the benefits and the challenges head-on and (b) consider this technology as another asset for the human-centric flourishing of the country rather than a goal in itself that should be pursued at all costs.

But you don’t need to believe me. See it for yourself.


What I read

Techno-speak

I was reminded of George Orwell’s 1984 Newspeak.

The text uses made-up “AI” terms such as AI stack, frontier AI, AI-driven data cleansing tools, AI-enabled priorities, and “embodied AI” without providing clear definitions.

Exaggeration

Hyperbole and metaphors are used to the extreme to overstate the benefits.

we want Britain to step up; to shape the AI revolution rather than wait to see how it shapes us. 

We should expect enormous improvements in computation over the next decade, both in research and deployment.

Change lives by embracing AI

FOMO

The text exudes FOMO (Fear Of Missing Out). No option is given to adopt AI systems more gradually. It’s now or we’ll be the losers.

This is a crucial asymmetric bet — and one the UK can and must make

we need to “run to stand still”.

the UK risks falling behind the advances in Artificial Intelligence made in the USA and China.

And even a new take on Facebook’s famous “move fast and break things”:

“move fast and learn things”

Techno-solutionism

AI is going to solve all our socio-economic and political problems and transport us to a utopian future.

It is hard to imagine how we will meet the ambition for highest sustained growth in the G7 — and the countless quality-of-life benefits that flow from that — without embracing the opportunities of AI.

Our ambition is to shape the AI revolution on principles of shared economic prosperity, improved public services and increased personal opportunities so that:
• AI drives the economic growth on which the prosperity of our people and the performance of our public services depend;
• AI directly benefits working people by improving health care and education and how citizens interact with their government; and
• the increasing of prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.

What’s not to like?

For a great commentary on how techno-solutionism won’t solve social problems, see 20 Petitions for AI and Public Good in 2025 by Tania Duarte.

Colonialism

Living in Venezuela for 12 years was an education on how to feel “less than” other countries even when you have the largest oil reserves in the world.

I remember new education programmes announced as successes in the US, Canada, Spain, Germany… A colonised mentality learned from centuries of Spanish oppression: the pervasive assumption that an initiative will work simply because we like the results it got elsewhere, disregarding the context it was developed for.

The AI Opportunities Action Plan reminded me of them.

Supporting universities to develop new courses co-designed with industry — such as the successful co-operative education model of Canada’s University of Waterloo, CDTM at the Technical University of Munich or France’s CIFRE PhD model

Launch a flagship undergraduate and masters AI scholarship programme on the scale of Rhodes, Marshall, or Fulbright for students to study in the UK.

Singapore, for example, developed a national AI skills online platform with multiple training offers. South Korea is integrating AI, data and digital literacy.

But the document is also keen on showing us that we’ll be the colonisers

we aspire to be one of the biggest winners from AI

Because we believe Britain has a particular responsibility to provide global leadership in fairly and effectively seizing the opportunities of AI, as we have done on AI safety

A historical-style painting of a young woman stands before the Colossus computer. She holds an abstract basket filled with vibrant, pastel circles representing data points. The basket is attached to the computer through a network of connecting wires, symbolizing the flow and processing of information.
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Colossal Harvest / CC-BY 4.0

Capitulation

The document is all about surrendering the data, agency, tax money, and natural resources of citizens in the UK to the AI Gods: startups, “experts”, and investors.

Invest in becoming a great customer: government purchasing power can be a huge lever for improving public services, shaping new markets in AI

We should seek to responsibly unlock both public and private data sets to enable innovation by UK startups and researchers and to attract international talent and capital.

Couple compute allocation with access to proprietary data sets as part of an attractive offer to researchers and start-ups choosing to establish themselves in the UK and to unlock innovation.

Sprinkling AI

AI is the Pantone Colour of the next five years. Everything will need to have AI in it. Moreover, everything must be designed so that AI can shine.

Appointing an AI lead for each mission to help identify where AI could be a solution within the mission setting, considering the user needs from the outset.

Two-way partnerships with AI vendors and startups to anticipate future AI developments and signal public sector demand. This would involve government meeting product teams to understand upcoming releases and shape development by sharing their challenges.

AI should become core to how we think about delivering services, transforming citizens’ experiences, and improving productivity.

Brexit Denial

It’s funny to see that the text doesn’t reference the European Union and only refers to Europe as a benchmark to measure against.

Instead, the EU is hinted at as “like-minded partners” and “allies”, and collaborations are thrown right and left without naming the partner.

Agree international compute partnerships with like-minded countries to increase the types of compute capability available to researchers and catalyse research collaborations. This should focus on building arrangements with key allies, as well as expanding collaboration with existing partners like the EuroHPC Joint Undertaking.

We should proactively develop these partnerships, while also taking an active role in the EuroHPC Joint Undertaking.

Moreover, the text praises researcher mobility and wants to attract experts, forgetting the UK’s refusal to participate in the Erasmus programme and the fact that it only joined Horizon Europe last year.

The UK is a medium-sized country with a tight fiscal situation. We need the best talent around the world to want to start and scale companies here.

Explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent.

Vagueness

Ideas are thrown into the text half-baked, giving the impression that the government has adopted the Silicon Valley strategy of “building the plane while flying it”.

The government must therefore secure access to a sufficient supply of compute. There is no precise mechanism to allocate the proportions

In another example, the plan advocates for open-source AI applications.

the government should support open-source solutions that can be adopted by other organisations and design processes with startups and other innovators in mind.

The AI infrastructure choice at-scale should be standardised, tools should be built with reusable modular code components, and code-base open-sourcing where possible.

At the same time, it’s adamant that it needs to attract startups and investors. Unless those startups are NGOs, who will then finance those open-source models?

DEI for Beginners

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.
Kathryn Conrad / Better Images of AI / Datafication / CC-BY 4.0

All of us who have been working towards a more diverse and inclusive tech for decades are in for a treat. 

First, we’re told that diversity in tech is very simple — it’s all about gender parity and pipeline.

16. Increase the diversity of the talent pool. Only 22% of people working in AI and data science are women. Achieving parity would mean thousands of additional workers. […] Government should build on this investment and promote diversity throughout the education pipeline.

Moreover, they’ve found the magic bullet.

Hackathons and competitions in schools have proven effective at getting overlooked groups into cyber and so should be considered for AI.

What about the fact that 50% of women in tech leave the sector by the age of 35?


What I missed

Regions

The government mentions that AI “can” — please note that is not a “must” or “need” — benefit “post-industrial towns and coastal Scotland.” However, the only reference to a place is to the Culham Science Centre, which is 10 miles from Oxford — an area that few would consider in need of “local rejuvenation” or “channelling investment”.

Government can also use AIGZs [‘AI Growth Zones’] to drive local rejuvenation, channelling investment into areas with existing energy capacity such as post-industrial towns and coastal Scotland. Government should quickly nominate at least one AIGZ and work with local regions to secure buy-in for further AIGZs that contribute to local needs. Existing government sites could be prioritised as pilots, including Culham Science Centre

And there doesn’t appear to be room to involve local authorities in how AI could bring value to their regions.

Drive AI adoption across the whole country. Widespread adoption of AI can address regional disparities in growth and productivity. To achieve this, government should leverage local trusted intermediaries and trade bodies

Costs

There are plenty of gigantic numbers about how much money AI may bring

AI adoption could grow the UK economy by an additional £400 billion by 2030 through enhancing innovation and productivity in the workplace

but nothing about the costs…

Literacy

How will people get upskilled? We only get generic reassurances

government should encourage and promote alternative domestic routes into the AI profession — including through further education and apprenticeships, as well as employer and self-led upskilling.

Government should ensure there are sufficient opportunities for workers to reskill, both into AI and AI-enabled jobs and more widely.

Citizens

There is no indication in the document that this “AI-driven” Britain is what their citizens want. Citizens themselves don’t appear to be included in shaping AI either.

For example, it claims that teachers are already “benefiting” from AI assistants

it is helping some teachers cut down the 15+ hours a week they spend on lesson planning and marking in pilots.

However, the text doesn’t tell us whether teachers want to give up class preparation.

And the text repeatedly states that the government will prioritise “innovation” (aka profit) over safety.

My judgement is that experts, on balance, expect rapid progress to continue. The risks from underinvesting and underpreparing, though, seem much greater than the risks from the opposite.

Moreover, regulators are expected to enable innovation at all costs

Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector. […] government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy.

Where did we sign up for that?

Sustainability

The document waxes lyrical about building datacentres. What about the electricity and water requirements? What about the impact on our water reserves and electricity grid? What about the repercussions on our sustainability goals?

The document deals with sustainability by throwing the word in twice in a single paragraph:

Mitigate the sustainability and security risks of AI infrastructure, while positioning the UK to take advantage of opportunities to provide solutions. […] Government should also explore ways to support novel approaches to compute hardware and, where appropriate, create partitions in national supercomputers to support new and innovative hardware. In doing so, government should look to support and partner with UK companies who can demonstrate performance, sustainability or security advancements.

An array of colorful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases.
Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0

Unemployment

The writers of that utopian “AI-powered” UK manifesto don’t address job losses. We only get the sentence I mentioned above:

the increasing of prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.

Instead, it uses language that fosters fear and builds on utopian and dystopian visions of an AI-driven future

AI systems are increasingly matching or surpassing humans across a range of tasks.

Given the pace of progress, we will also very soon see agentic systems — systems that can be given an objective, then reason, plan and act to achieve it. The chatbots we are all familiar with are just an early glimpse as to what is possible.

On the flip side, the government repeatedly reiterates its ambition of bringing in talent from abroad

 Supporting UK-based AI organisations working on national priority projects to bring in overseas talent and headhunting promising founders or CEOs

How does this plan contribute to reassuring people about their jobs?

Big-picture

This techno-solutionist approach shows no regard for AI specialists in domains other than coding or IT.

To mention a few, what about sociologists, psychologists, philosophers, teachers, historians, economists, or specialists in the broad spectrum of industries in the UK? 

Don’t they belong to those think tanks where decisions are made about selling our country to the AI Gods?


The Good News? We Can Do Better

People in Britain voted last year to show they were tired of profits over people, centralism, and oligarchy. Unfortunately, this plan uses AI to reinforce all three.

The UK is full of hardworking and smart people who deserve much better than magic bullets or techno-saviours. 

Instead of shoehorning the UK’s future into AI, what if we…


WORK WITH ME

I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.

Three ways you can work with me:

  • I empower non-tech leaders to harness the potential of artificial intelligence for sustainable growth and responsible innovation through consulting and AI competency programs.
  • I’m a ​sought-after international keynote speaker​ on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
  • I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.

Get in touch to discuss how I can help you achieve the success you deserve in 2025.

Seven Ways Big Data Leaves Women Out of the Equation

Projection of numbers on a young woman's face.
Photo by Rada Aslanova.

Some months ago, a LinkedIn post showcasing an excerpt from the Chasing Financial Equality podcast with Cindy Gallop stopped me in my tracks.

I didn’t know who Cindy was. Later, I discovered she’s a brand and business innovator, consultant, coach, and keynote speaker who appeared on the UK Apprentice. She’s built a business out of teaching sex, and she’s also an advocate for women entrepreneurs.

Still, that one-minute video in my feed was so powerful that I didn’t care who was speaking.

“F*ck data. Data does f*ck all.

We have literally for decades had the data you reference that says female founders exit faster, female founders burn less cash, female founders get to profitability quicker, female founders build better business cultures, but none of that data makes any difference

[…] Information goes through the heart, not the head. It’s not about rationality. It’s about emotion.

The reason women don’t get funded is due to plain old-fashioned sexism and misogyny.

Cindy Gallop

My background is in engineering and computer simulation and I’m Director of Scientific Support and Customer Operations for a tech corporation. I’m also a diversity and inclusion advocate. I’ve been using data for 30 years for everything I’ve done.

Using simulation to guide the development of new materials, leading the migration of all our customer support data after an acquisition, monitoring customer satisfaction KPIs, supporting the business case for enhanced maternity leave in the company I work for, and surveying professional women about the impact of COVID-19 on their unpaid work are only a few examples.

Still, Cindy’s post triggered an epiphany.

I began to recall all the ways data — or its absence — has been manipulated to foster gender inequality. From entrenching the status quo to promoting “busy work”, wearing out activists, or even benefiting those who profit from inequality.

Let me show you what I found.

Gender Data Myths

“In God we trust, all others bring data.”

W. Edwards Deming

Data has been heralded as the key to innovation, solving systemic issues, and exponential growth (Big Data anyone?). We “just” need data, don’t we?

Women have accounted for half of the population throughout human history. We should have collected millions of data points over the millennia. How come we haven’t solved gender inequality yet?

Because we’ve been using data against women.

At a time when we abide by the creed “data is the new oil”, it cannot be a coincidence that we’re not solving this “data problem”.

Here are the 7 ways data is weaponised against gender equity.

Lack of data

In the absence of data, we will always make up stories. 

Brené Brown

Woman sitting on a dune on a desert background.
Photo by cottonbro studio.

Recorded historical contributions to science and humanities — medicine, literature, chemistry, philosophy, politics, or engineering — have XY chromosomes.

From that “data”, the world feels very comfortable making up stories about the reasons why “progress” has been driven by men. If we have data, we must have a story about it.

The story we’re told about the lack of data on women’s contributions is that women haven’t contributed. Yes, for millennia, women were just in the background waiting for men to learn about fire, cure their children, or bring money home.

Continue reading

2025 AI Forecast: 25 Predictions You Need to Know Now

I’ve been betting on the transformative power of digital technology all my professional career. 

  • I started doing computer simulation during my MSc in Chemical Engineering in the 1990s, in a lab where everybody else was an experimentalist. Except for my advisor, the rest of the team was sceptical — to say the least — that something useful would come from using computer modelling to study ​enhanced oil recovery from oil fields ​.
  • A similar story repeated during my PhD in Chemistry, where I pioneered using molecular modelling to study polymers in a research centre focused on the experimental study of polymers and proteins.
  • For the last 20+ years, I’ve been working on digital transformation playing a similar role. First, as Head of Training and Contract Research, and now as Director of Scientific Support, I relish helping my customers harness the potential of digital technology for responsible innovation.

I’m also known for telling it as I see it. In the early 2000s, I was training a customer — incidentally, an experimentalist — on genetic algorithms. He was very excited and asked me if he could create a model for designing a new material. He proudly shared that he had “7 to 10 data points.” My answer? “Far too few.”

In summary, I’m very comfortable being surrounded by tech sceptics, dispelling myths about what AI can and can’t do, and betting on the power of digital technology.

And that’s exactly why I’m sharing with you my AI predictions for 2025.

My Predictions

1.- xAI (owned by Elon Musk) will purchase X so that the first can freely train its models on the data from the second. Elon owns 79% of X after he bought it for $44 billion. Now it’s valued at $9.4 billion, and big advertisers keep leaving the platform.

After struggling for almost 3 years to make it work, the xAI acquisition — which got a ​$6 billion funding round​ in December — would be a win-win.

2.- OpenAI’s for-profit arm will formally split from the original non-profit. I bet on this despite Elon Musk’s injunction to stop OpenAI’s transition to a for-profit company (supported by Meta).

Why? A clause in ​OpenAI’s $150 billion funding round​ allows investors to request their money back if the switch isn’t completed within two years.

3.- The generation and usage of synthetic data will balloon to address data privacy concerns. People want better services and products — especially in healthcare — but are unwilling to give up their personal data. The solution? “Creating” data.
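To make the idea concrete, here is a minimal sketch in Python of the simplest form of synthetic data generation, using a toy healthcare table with invented columns (age, resting heart rate, blood pressure). Only aggregate statistics are fitted on the “real” records, and entirely new rows are sampled from them, so no individual’s data needs to be shared. Production synthetic-data pipelines use far more sophisticated generators, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" patient records: age, resting heart rate, systolic BP.
real = np.column_stack([
    rng.normal(50, 12, 500),
    rng.normal(70, 8, 500),
    rng.normal(120, 15, 500),
])

# Fit only aggregate statistics of the real data...
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# ...and draw brand-new synthetic records from them. No real person's
# row is shared; only population-level statistics inform the sample.
synthetic = rng.multivariate_normal(mean, cov, size=500)
```

The synthetic cohort preserves the population-level patterns (means, spreads, correlations) that downstream models care about, without exposing any real record.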

4.- Startups and organisations will move from using large language models (LLMs) to focusing on SLMs (small language models), which consume less energy, produce fewer hallucinations, and are customised to companies’ requirements.

An image of multiple 3D shapes representing speech bubbles in a sequence, with broken up fragments of text within them.
Wes Cockx & Google DeepMind / Better Images of AI / AI large language models / Licenced by CC-BY 4.0.

5.- In FY 2025, Microsoft plans to invest approximately $80 billion to build AI-enabled datacenters, but don’t expect that to go smoothly with everybody. In 2024, datacenter consumption gathered a lot of attention.

This year, local authorities and NGOs will develop frameworks to scrutinise datacenters’ electricity and water consumption. Datacenters will also be tracked in terms of disruption to locals: electricity stability, water availability, and electricity and water prices.

6.- Rise of the two-tier AI-human customer support model: AI chatbots for self-service and low-revenue customers and human customer support for key and high-revenue clients.

It’s not only a question of money but also of liability. Low-revenue customers are less likely to sue providers over AI chatbots delivering harmful and/or inaccurate content.

Continue reading