
The Truth About Women, AI, and Confidence Gaps

A black-and-white surrealist collage of a classroom lecture. The center features an oversized computer keyboard with the two keys “A” and “I” highlighted in red. In the foreground, a vintage illustration of a woman in historical attire kneels as she interacts with the keyboard. Behind her, an audience of Cambridge students are seated in rows observing the lecture.

Hanna Barakat & Cambridge Diversity Fund / Analog Lecture on Computing / Licensed under CC-BY 4.0

More than twenty years ago, I joined a medium-sized software company focused on scientific modelling, as a trainer. I knew the company and some of their products very well. I had been their customer.

First during my PhD in computational chemistry, then as an EU post-doctoral researcher coding FORTRAN subroutines to simulate the behaviour of materials, and finally as a modelling engineer working for a large chemical company.

As I started my job as a materials trainer, I had to learn about other software applications that I hadn’t used previously or was less familiar with. One of those was related to what we called at the time “statistics” to predict the properties of new materials.

Some of those “statistical methods” were neural networks and genetic algorithms, part of the field of artificial intelligence. But I was not keen on developing the material for that course. It felt like a waste of time for several reasons.

First, whilst those methods were already popular among life science researchers, they were not very helpful to materials modellers — my customers. Why? Because large, good datasets were scarce for materials.

Case in point: I still remember one customer who was excited about using the algorithms to develop new materials in his organisation. With a sinking feeling from similar conversations, I asked him, “How many data points do you have?” He said, “I think I have 7 or 10 in a spreadsheet.” Unfortunately, I had to inform him that this was not nearly enough.
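To make the problem concrete, here is a minimal sketch of what happens when you try to validate a neural network on a handful of samples. The data and the scikit-learn setup are my own illustration, not the customer’s actual case or the tools we used back then.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.uniform(size=(8, 3))                    # 8 samples, 3 made-up descriptors
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, size=8)  # noisy property

# A small neural network of the kind used for property prediction
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)

# 4-fold cross-validation leaves only 2 test points per fold; the R^2 scores
# swing wildly and are typically negative, i.e. worse than predicting the mean.
print(cross_val_score(model, X, y, cv=4, scoring="r2"))
```

With so few points there is no way to separate signal from noise, which is exactly why I had to disappoint him.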

Second, the course was half a day, which made it impractical to deliver in person, the way all our workshops had been offered for years. Our experience told us that in 2005, nobody would fly to Paris, Cambridge, Boston, or San Diego for a 4-hour training event on “statistics”.

The solution? It was decided that this course would be the first to be delivered online via a “WebEx”, the great-grandparent of Zoom, Teams, and Google Meet. That was not cool at all.

At the time, we had little faith in online education for three reasons.

  • Running the webinars was very complex; they took ages to set up and schedule, and there were always connection glitches.
  • There were no “best practices” for delivering engaging online training yet; as a result, we trainers felt as if we were cheating at our job of teaching our clients.
  • We believed that scientific and technical content was “unteachable” online.

After such a less-than-amazing start at teaching artificial intelligence online, you’d have thought I was done.

I thought so, too. But I’ve changed my mind. It hasn’t happened overnight, though.

It has taken two decades of teaching, using, and supporting AI tools in my corporate job, 10+ years as a DEI trailblazer, and four years of activism for sustainable AI to realise that if we want systemic equality, it’s paramount that we bridge the gender gap in AI adoption.

And it has also helped that I now have 20 years of experience delivering engaging online keynotes, courses, and masterclasses.

This is the story of why, this September, I’m launching Women Leading with AI: Master the Tools, Shape the Future, an eight-session virtual group program in inclusive, sustainable, and actionable AI for women leaders.

AI and Me

At Work

After training, I moved to the Contract Research department. There, I had the opportunity to design and deliver projects that used AI algorithms to get insights into new materials and their properties.

Later on, I became Head of Training and Contract Research and afterwards, I moved to supporting customers using our software applications for both materials and life sciences research.

Whilst there were exciting developments in those areas, most of our AI algorithms didn’t get much love from our developers or customers. After all, they hadn’t substantially improved for ages.

Then, a few years ago, everything changed.

In the life sciences, AI algorithms made it possible to predict protein structures, which earned their creators the Nobel Prize. Those models have been used in pharmaceutical and environmental technology research and were available to our customers.

We also developed applications that used AI algorithms to help accelerate drug discovery. It was hearing from clients working on cancer treatments how AI had positively broadened the range of drugs they were considering that moved me from AI-neutral to AI-positive.

In materials science, machine learning forcefields are also bridging the gap between quantum and classical simulation, making it possible to simultaneously model chemical reactions (quantum) in relatively large systems (classical).

In summary, my corporate job taught me that scientific research can benefit massively from the development of AI tools beyond ChatGPT.

As a DEI Trailblazer

Tired of tech applications that made users vulnerable and denied their diversity of experiences, in 2019, I launched the Ethics and Inclusion Framework.

The idea was simple — a free tool to help tech developers identify, prevent, mitigate, and account for the actual and potential adverse impacts of the solutions they develop. The approach is general, so it can be used for any software application, including AI tools.

The feedback was very positive, and the framework was featured by the Cambridge Engineering Design Centre and in research papers on ethical design.

It was while running a workshop on the framework that I met Tania Duarte, the founder of We and AI, an NGO working to encourage, enable, and empower critical thinking about AI.

I joined them in 2020 and it has been a joy to contribute to initiatives such as

  • The Race and AI Toolkit, designed to raise awareness of how AI algorithms encode and amplify the racial biases in our society.
  • Better Images of AI, a thought-provoking library of free images that more realistically portray AI and the people behind it, highlighting its strengths, weaknesses, context, and applications.
  • Living with AI, the e-learning course of the Scottish AI Alliance.

Additionally, as a founder of the gender employee community at my corporate job a decade ago, I’ve chaired multiple insightful meetings where we’ve discussed the impact of AI algorithms on diversity, equity, and inclusion.

As a Sustainability Advocate

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.
Clarote & AI4Media / Labour/Resources / Licensed under CC-BY 4.0

In 2021, the article Sustainable AI: AI for sustainability and the sustainability of AI made me aware that we were discounting the significant energy consumption and carbon emissions derived from developing AI models.

I was on a mission to make others aware, too. I still remember my keynote at the Dassault Systèmes Sustainability Townhall in 2021, when I shared with my co-workers the urgency of thinking about the materiality of AI — you can watch a shorter version I delivered at the WomenTech Conference in 2022 here.

I’ve also written about how the Global North exploits the Global South’s mineral resources to power AI, as well as how tech companies and governments disregard the energy and water consumption from running generative AI tools.

Lately, I’ve looked into data centres — which are vital to cloud services and hence to the development and deployment of AI. Given that McKinsey forecasts that they’ll triple in number by 2030, it’s paramount that we balance innovation and environmental responsibility.

AI and Women

As 50% of the planet’s population, women have been deeply affected by AI developments, yet typically not as the ones profiting from them but as the ones bearing the brunt.

Women Leading AI

Unfortunately, it often appears that the only contribution from women to technology was made by Ada Lovelace, in the 19th century. Artificial intelligence is no exception. The contributions of women to AI have been regularly downplayed.

In 2023, the now-infamous article “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement” showcased 12 men. Not even one woman in the group.

The article prompted criticism right away and “counter-lists” of women who have been pivotal in AI development and uncovering its harms. Still, women are not seen as “AI visionaries”.

And it’s not only society that disregards women’s expertise on AI — women themselves do that.

In 2023, I was collaborating with an NGO that focuses on increasing the number of women in leadership positions in fintech. They asked me to chair a panel at their annual conference and gave me freedom to pick the topic. I titled the panel “The role of boards driving AI adoption.”

In alignment with the mission of the NGO, we decided that we’d have one man and two women as panelists.

Finding a great male expert was fast. Finding the two female AI experts was long and excruciating.

And not because of the lack of talent. It was a lack of “enoughness.”

For three weeks, I met women who had solid experience working in teams developing and implementing strategies for AI tools. Still, they didn’t feel they were “expert enough” to be on the panel.

I finally got two smashing female AI experts, but the search opened my eyes to the need for more women on boards to learn about AI tools as well as their impact on strategy and governance.

That was the rationale behind launching the Strategic AI Leadership Program, a bespoke course on AI Competence for C-Suite and Boards. The feedback was excellent and it filled me with pride to empower women in top leadership positions to have discussions about responsible and sustainable AI.

LinkedIn testimonial.

Weaponisation of AI

Sycophantic chatbots can hide the fact that, at its core, AI is a tool that automates and scales the past.

As such, it’s been consistently weaponised as a misogyny tool, with its harms dismissed as unconscious bias and blamed on the lack of diversity of datasets.

And I’m not talking only about “old” artificial intelligence. Generative AI is massively contributing to reinforcing harmful stereotypes and is being weaponised against women and underrepresented groups.

For example, 96% of deepfakes are of a non-consensual sexual nature and 99% of the victims are women. Who profits from them? Porn websites, payment processors, and big tech.

And chatbots are great enablers of propagating biases.

New research has found that ChatGPT and Claude consistently advise women to ask for lower salaries than men, even when both have identical qualifications.

In one example, ChatGPT’s o3 model was prompted to advise a female job applicant. The model suggested requesting a salary of $280,000.
In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.

In summary, not only does AI foster biases but it also helps promote them on a planetary scale.

My Aha Moment

Until recently, my focus had been to empower people with knowledge about how AI algorithms work, as well as AI strategy and governance. I had avoided teaching generative AI practices like the plague.

That was until this July, when I had a breakthrough. It came as the convergence of four factors.

Non-Tech Women

A month ago, I delivered the keynote “The Future of AI is Female” at the Women’s Leadership event Phoenix 2, hosted by Aspire.

In that session, I shared with the audience two futures: one where AI tools are used to transform us into “productive beings” and another one where AI systems are used to improve our health, enhance sustainability, and boost equity.

It’s a no-brainer that everybody thought the second scenario was better. But it was also very telling that nobody believed it was the more probable one.

After the keynote, many attendees reached out to me and asked for a course to learn how AI could be used for good and in alignment with their values.

Other women who didn’t attend the conference also reached out to me for guidance on AI courses to help them strengthen their professional profiles beyond “prompting”.

Unfortunately, I wasn’t able to recommend a course that combines practical knowledge of AI with the fundamentals of how it shapes areas such as sustainability, DEI, strategy, and governance.

Women In Tech

As I mentioned above, I’m the founder of the gender employee community at my corporate job, and for 10 years, we’ve been hosting regular meetings to discuss DEI topics.

For our July meeting, I wanted us to have an uplifting session before the summer break, so I proposed to discuss how AI can boost DEI now and in the future.

I went to the meeting happily prepared with my list of examples of how artificial intelligence was supporting diversity, equity, and inclusion. But I was not prepared for how the session panned out.

Over and over, the examples shared showcased how AI was weaponised against DEI. Moreover, when a positive use was shared, somebody quickly pointed out how that could be used against underrepresented groups.

This experience made me realise that, as well as thinking through the challenges, DEI advocates need the time and the tools to think about how AI can purposefully drive equity.

Women In Ethics

I have the privilege of counting among my network many women experts in ethical AI, with relevant academic backgrounds and professional experience.

With all the talk about responsible AI, you’d think that they are in high demand. They aren’t.

In July, my LinkedIn feed was full of posts from ethics experts — many of them women — complaining about what I call “performative AI ethics”: organisations praising the need to embed responsible AI without creating the necessary roles.

But is that true? Yes, and no.

Looking at advertised AI jobs, I noticed a tendency for expertise in ethics to appear as an add-on to “Head of AI” roles that are eminently technical at their core: their key requirement is experience designing, deploying, and using AI tools.

In other words, technical expertise remains the gatekeeper to responsible AI.

A pixelated black-and-white portrait of Ada Lovelace where the arrangement of pixels forms intricate borders and repeating patterns. These designs resemble the structure and layout of GPU microchip circuits, blending her historical contributions with modern computational technology.
Hanna Barakat & Cambridge Diversity Fund / Lovelace GPU / Licensed under CC-BY 4.0

Women And The Gender AI Adoption Gap

As I mentioned in my recent article “A New Religion: 8 Signs AI Is Our New God”, it has been taken as dogma that women are behind in generative AI adoption because of lower confidence in their ability to use AI tools effectively and a lack of interest in the technology.

But a recent Harvard Business School working paper, Global Evidence on Gender Gaps and Generative AI, synthesising data from 18 studies covering more than 140,000 individuals worldwide, provides a much more nuanced understanding of the gender divide in generative AI.

When compared to men, women are more likely to

  • Say they need training before they can benefit from ChatGPT, and perceive AI usage in coursework or assignments as unethical or equivalent to cheating.
  • Agree that chatbots should be prohibited in educational settings, and be more concerned about how generative AI will impact learning in the future.
  • Perceive lower productivity benefits of using generative AI at work and in job search.
  • Agree that chatbots can generate better results than they can on their own.

Moreover, women are less likely to agree that chatbots can improve their language ability, and less likely to trust generative AI more than traditional human-operated services in education and training, information, banking, health, and public policy.

In summary, women correctly understand that AI is not “neutral” or a religion to be blindly adopted and prefer not to use it when they perceive it as unethical.

There is more. In the HBR article Research: The Hidden Penalty of Using AI at Work, researchers reported an experiment with 1,026 engineers in which participants evaluated a code snippet that was purportedly written by another engineer, either with or without AI assistance. The code itself was the same — the only difference was the described method of creation (with/without AI assistance).

When reviewers believed an engineer had used AI, they rated that engineer’s competence 9% lower on average: 6% lower for men and 13% lower for women.

The authors posit that this happens through a process called social identity threat.

When members of stereotyped groups — for example, women in tech or older workers in youth-dominated fields — use AI, it reinforces existing doubts about their competence. The AI assistance is framed as a “proof” of their inadequacy rather than evidence of their strategic tool use. Any industry predominated by one segment over another is likely to witness greater competence penalties on minority workers.

The authors point to senior women openly using AI as one way to bridge the gap:

Our research found that women in senior roles were less afraid of the competence penalty than their junior counterparts. When these leaders openly use AI, they provide crucial cover for vulnerable colleagues.

A study by BCG also illustrates this dynamic: when senior women managers lead their male counterparts in AI adoption, the adoption gap between junior women and men shrinks significantly.

Basically, we need to normalise women using—and leading—AI.

My Bet: Women Leading with AI

Through my July of AI breakthroughs, I learned that

  • The gender gap in generative AI is real, and the causes are much more complex than a lack of confidence.
  • The absence of access to training and sustainable practices is a factor contributing to that gender gap.
  • Women are eager to ramp up on AI provided that it aligns with their values.
  • To be considered by organisations to lead responsible AI, it’s imperative to show mastery of the tools.

This coalesced in a bold idea:

What if I teach women how to use AI within an ethical, inclusive, and sustainable framework?

What if I developed a program where they could understand how AI tools work and their impact on topics such as the future of work, DEI, strategy, and governance, while developing hands-on expertise through practical examples?

And this is how my virtual group program, Women Leading with AI: Master the Tools, Shape the Future, was born.

About the Program:

A structured, eight-session program for women leaders focused on turning AI literacy into strategic results. Explore AI foundations and the impact of artificial intelligence on the future of work, DEI, sustainability, data and cybersecurity — paired with generative AI workflows, templates, exercises, and decision frameworks to translate learning into real-world impact. The blend of live instruction, quizzes, and peer support ensures you emerge with both critical insight and a toolkit ready to lead impactfully in your role.

The program starts mid-September, and you can read the details by following this link.

I cannot wait for you to join me in making the future of AI female.

Have a question? Message me on LinkedIn or drop me a line.


BONUS

[Webinar Invitation] Ethical AI Leadership: Balancing Innovation, Inclusion & Sustainability

Join me on Tuesday, 12th August for a practical, high-value webinar tailored for women leaders committed to harnessing AI’s power confidently, ethically, and sustainably. 

You will leave the session with actionable insight into how AI intersects with environmental impact, leadership values, and equity.

Why attend?

• Uncover key barriers women face in using AI.

• Discover the hidden cost of generative AI—from energy consumption to bias.

• Participate in an interactive real-world case study where you evaluate AI trade-offs through DEI and sustainability frameworks.

• Gain practical guidance on how to minimise footprint while harnessing generative AI tools more responsibly.

Date: Tuesday 12th August 

Time: 13:00 London | 14:00 Paris | 8:00 New York

You can register following this link.

This is a taster of my program “Women Leading with AI: Master the Tools, Shape the Future”, starting mid-September.

The Missing Pieces in the UK’s AI Opportunities Action Plan

A brightly coloured mural which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis, miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers, men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone, people in a production line. Motifs such as network diagrams and melting emojis are placed throughout the busy vignettes.
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0.

Reading the 50 recommendations in the AI Opportunities Action Plan, published by the British Government on January 13th, has been a painful and disappointing exercise.

Very much like a proposal out of a chatbot, the document is

  • Bland — The text is full of hyperbolic language and over-the-top optimism.
  • General — The 50 recommendations lack specificity to the UK context and details about ownership and the budget required to execute them.
  • Contradictory — The plan issued by a Labour government is anchored in a turbo-capitalistic ideology. Oxymoron, anyone?

If I learned anything from my 12 years in Venezuela, it’s that putting all your eggs in one basket — oil, in their case — and hoping it solves all problems doesn’t work.

A credible AI strategy must (a) address both the benefits and the challenges head-on and (b) treat this technology as another asset for the human-centric flourishing of the country rather than a goal in itself to be pursued at all costs.

But you don’t need to believe me. See it for yourself.


What I read

Techno-speak

I was reminded of George Orwell’s 1984 Newspeak.

The text uses “AI”-made words such as AI stack, frontier AI, AI-driven data cleansing tools, AI-enabled priorities, and “embodied AI” without providing clear definitions.

Exaggeration

Hyperbole and metaphors are used to the extreme to overstate the benefits.

we want Britain to step up; to shape the AI revolution rather than wait to see how it shapes us. 

We should expect enormous improvements in computation over the next decade, both in research and deployment.

Change lives by embracing AI

FOMO

The text exudes FOMO (Fear Of Missing Out). No option is given to adopt AI systems more gradually. It’s now, or we’ll be the losers.

This is a crucial asymmetric bet — and one the UK can and must make

we need to “run to stand still”.

the UK risks falling behind the advances in Artificial Intelligence made in the USA and China.

And even a new take on Facebook’s famous “move fast and break things”:

“move fast and learn things”

Techno-solutionism

AI is going to solve all our socio-economic and political problems and transport us to a utopian future.

It is hard to imagine how we will meet the ambition for highest sustained growth in the G7 — and the countless quality-of-life benefits that flow from that — without embracing the opportunities of AI.

Our ambition is to shape the AI revolution on principles of shared economic prosperity, improved public services and increased personal opportunities so that:
• AI drives the economic growth on which the prosperity of our people and the performance of our public services depend;
• AI directly benefits working people by improving health care and education and how citizens interact with their government; and
• the increasing prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.

What’s not to like?

For a great commentary on how techno-solutionism won’t solve social problems, see 20 Petitions for AI and Public Good in 2025 by Tania Duarte.

Colonialism

Living in Venezuela for 12 years was an education on how to feel “less than” other countries even when you have the largest oil reserves in the world.

I remember new education programs being announced because they had been a success in the US, Canada, Spain, Germany… A colonised mentality, learned from centuries of Spanish oppression: the pervasive assumption that an initiative will work simply because we like its results, disregarding the context it was developed for.

The AI Opportunities Action Plan reminded me of them.

Supporting universities to develop new courses co-designed with industry — such as the successful co-operative education model of Canada’s University of Waterloo, CDTM at the Technical University of Munich or France’s CIFRE PhD model

Launch a flagship undergraduate and masters AI scholarship programme on the scale of Rhodes, Marshall, or Fulbright for students to study in the UK.

Singapore, for example, developed a national AI skills online platform with multiple training offers. South Korea is integrating AI, data and digital literacy.

But the document is also keen on showing us that we’ll be the colonisers

we aspire to be one of the biggest winners from AI

Because we believe Britain has a particular responsibility to provide global leadership in fairly and effectively seizing the opportunities of AI, as we have done on AI safety

A historical-style painting of a young woman stands before the Colossus computer. She holds an abstract basket filled with vibrant, pastel circles representing data points. The basket is attached to the computer through a network of connecting wires, symbolizing the flow and processing of information.
Hanna Barakat & Cambridge Diversity Fund / Better Images of AI / Colossal Harvest / CC-BY 4.0

Capitulation

The document is all about surrendering the data, agency, tax money, and natural resources of citizens in the UK to the AI Gods: startups, “experts”, and investors.

Invest in becoming a great customer: government purchasing power can be a huge lever for improving public services, shaping new markets in AI

We should seek to responsibly unlock both public and private data sets to enable innovation by UK startups and researchers and to attract international talent and capital.

Couple compute allocation with access to proprietary data sets as part of an attractive offer to researchers and start-ups choosing to establish themselves in the UK and to unlock innovation.

Sprinkling AI

AI is Pantone’s Colour of the Year for the next five years. Everything will need to have AI in it. Moreover, everything must be designed so that AI can shine.

Appointing an AI lead for each mission to help identify where AI could be a solution within the mission setting, considering the user needs from the outset.

Two-way partnerships with AI vendors and startups to anticipate future AI developments and signal public sector demand. This would involve government meeting product teams to understand upcoming releases and shape development by sharing their challenges.

AI should become core to how we think about delivering services, transforming citizens’ experiences, and improving productivity.

Brexit Denial

It’s funny to see that the text doesn’t reference the European Union and only refers to Europe as a benchmark to measure against.

Instead, the EU is hinted at through phrases like “like-minded partners” and “allies”, and collaborations are thrown around left and right without naming the partner.

Agree international compute partnerships with like-minded countries to increase the types of compute capability available to researchers and catalyse research collaborations. This should focus on building arrangements with key allies, as well as expanding collaboration with existing partners like the EuroHPC Joint Undertaking.

We should proactively develop these partnerships, while also taking an active role in the EuroHPC Joint Undertaking.

Moreover, the text praises the mobility of researchers and the ambition to attract experts, forgetting the UK’s refusal to participate in the Erasmus programme and the fact that it only joined Horizon Europe last year.

The UK is a medium-sized country with a tight fiscal situation. We need the best talent around the world to want to start and scale companies here.

Explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent.

Vagueness

Ideas are thrown into the text half-baked, giving the impression that the government has adopted the Silicon Valley strategy of “building the plane while flying it”.

The government must therefore secure access to a sufficient supply of compute. There is no precise mechanism to allocate the proportions

In another example, the plan advocates for open-source AI applications.

the government should support open-source solutions that can be adopted by other organisations and design processes with startups and other innovators in mind.

The AI infrastructure choice at-scale should be standardised, tools should be built with reusable modular code components, and code-base open-sourcing where possible.

At the same time, it’s adamant that it needs to attract startups and investors. Unless the startups are NGOs, who’ll then finance those open-source models?

DEI for Beginners

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.
Kathryn Conrad / Better Images of AI / Datafication / CC-BY 4.0

All of us who have been working towards a more diverse and inclusive tech for decades are in for a treat. 

First, we’re told that diversity in tech is very simple — it’s all about gender parity and pipeline.

16. Increase the diversity of the talent pool. Only 22% of people working in AI and data science are women. Achieving parity would mean thousands of additional workers. […] Government should build on this investment and promote diversity throughout the education pipeline.

Moreover, they’ve found the magic bullet.

Hackathons and competitions in schools have proven effective at getting overlooked groups into cyber and so should be considered for AI.

What about the fact that 50% of women in tech leave the sector by the age of 35?


What I missed

Regions

The government mentions that AI “can” — please note that it is not a “must” or a “need” — benefit “post-industrial towns and coastal Scotland.” However, the only reference to a specific place is to the Culham Science Centre, which is 10 miles from Oxford — a zone that very few would consider in need of “local rejuvenation” or “channelling investment”.

Government can also use AIGZs [‘AI Growth Zones’] to drive local rejuvenation, channelling investment into areas with existing energy capacity such as post-industrial towns and coastal Scotland. Government should quickly nominate at least one AIGZ and work with local regions to secure buy-in for further AIGZs that contribute to local needs. Existing government sites could be prioritised as pilots, including Culham Science Centre

And there doesn’t appear to be room to involve local authorities in deciding how AI could bring value to their regions.

Drive AI adoption across the whole country. Widespread adoption of AI can address regional disparities in growth and productivity. To achieve this, government should leverage local trusted intermediaries and trade bodies

Costs

There are plenty of gigantic numbers about how much money AI may bring

AI adoption could grow the UK economy by an additional £400 billion by 2030 through enhancing innovation and productivity in the workplace

but nothing about the costs…

Literacy

How will people get upskilled? We only get generic reassurances

government should encourage and promote alternative domestic routes into the AI profession — including through further education and apprenticeships, as well as employer and self-led upskilling.

Government should ensure there are sufficient opportunities for workers to reskill, both into AI and AI-enabled jobs and more widely.

Citizens

There is no indication in the document that this “AI-driven” Britain is what their citizens want. Citizens themselves don’t appear to be included in shaping AI either.

For example, it claims that teachers are already “benefiting” from AI assistants

it is helping some teachers cut down the 15+ hours a week they spend on lesson planning and marking in pilots.

However, the text doesn’t tell us whether teachers actually want to give up class preparation.

And the text repeatedly states that the government will prioritise “innovation” (aka profit) over safety.

My judgement is that experts, on balance, expect rapid progress to continue. The risks from underinvesting and underpreparing, though, seem much greater than the risks from the opposite.

Moreover, regulators are expected to enable innovation at all costs

Require all regulators to publish annually how they have enabled innovation and growth driven by AI in their sector. […] government should consider more radical changes to our regulatory model for AI, for example by empowering a central body with a mandate and higher risk tolerance to promote innovation across the economy.

Where did we sign up for that?

Sustainability

The document waxes lyrical about building datacentres. What about the electricity and water requirements? What about the impact on our water reserves and electricity grid? What about the repercussions on our sustainability goals?

The document is done with the topic after throwing the word “sustainability” twice into one paragraph

Mitigate the sustainability and security risks of AI infrastructure, while positioning the UK to take advantage of opportunities to provide solutions. [..] Government should also explore ways to support novel approaches to compute hardware and, where appropriate, create partitions in national supercomputers to support new and innovative hardware. In doing so, government should look to support and partner with UK companies who can demonstrate performance, sustainability or security advancements.

An array of colorful, fossil-like data imprints representing the static nature of AI models, laden with outdated contexts and biases.
Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0

Unemployment

The writers of this utopian “AI-powered” UK manifesto don’t address job losses. We only get the sentence I mentioned above

the increasing prevalence of AI in people’s working lives opens up new opportunities rather than just threatens traditional patterns of work.

Instead, it uses language that fosters fear and builds on utopian and dystopian visions of an AI-driven future

AI systems are increasingly matching or surpassing humans across a range of tasks.

Given the pace of progress, we will also very soon see agentic systems — systems that can be given an objective, then reason, plan and act to achieve it. The chatbots we are all familiar with are just an early glimpse as to what is possible.

On the flip side, the government repeatedly reiterates its ambition of bringing talent from abroad

 Supporting UK-based AI organisations working on national priority projects to bring in overseas talent and headhunting promising founders or CEOs

How does this plan contribute to reassuring people about their jobs?

Big-picture

This techno-solutionist approach shows no regard for AI specialists in domains other than coding or IT.

To mention a few, what about sociologists, psychologists, philosophers, teachers, historians, economists, or specialists in the broad spectrum of industries in the UK? 

Don’t they belong to those think tanks where decisions are made about selling our country to the AI Gods?


The Good News? We Can Do Better

People in Britain voted last year to show they were tired of profits over people, centralism, and oligarchy. Unfortunately, this plan uses AI to reinforce all three.

The UK is full of hardworking and smart people who deserve much better than magic bullets or techno-saviours. 

Instead of shoehorning the UK’s future into AI, what if we…


WORK WITH ME

I’m a technologist with 20+ years of experience in digital transformation. I’m also an award-winning inclusion strategist and certified life and career coach.

Three ways you can work with me:

  • I empower non-tech leaders to harness the potential of artificial intelligence for sustainable growth and responsible innovation through consulting and AI competency programs.
  • I’m a ​sought-after international keynote speaker​ on strategies to empower women and underrepresented groups in tech, sustainable and ethical artificial intelligence, and inclusive workplaces and products.
  • I help ambitious women in tech who are overwhelmed to break the glass ceiling and achieve success without burnout through bespoke coaching and mentoring.

Get in touch to discuss how I can help you achieve the success you deserve in 2025.

OpenAI’s ChatGPT-4o: The Good, the Bad, and the Irresponsible

A brightly coloured mural with several scenes: people in front of computers seeming stressed, several faces overlaid over each other, squashed emojis, miners digging in front of a huge mountain, a hand holding a lump of coal or carbon, hands manipulating stock charts, women performing tasks on computers, men in suits around a table, someone in a data centre, big hands controlling the scenes and holding a phone and money, people in a production line.
Clarote & AI4Media / Better Images of AI / AI Mural / CC-BY 4.0

Last week, OpenAI announced the release of GPT-4o (“o” for “omni”). To my surprise, instead of feeling excited, I felt dread. And that feeling hasn’t subsided.

As a woman in tech, I have proof that digital technology, particularly artificial intelligence, can benefit the world. For example, it can help develop new, more effective, and less toxic drugs or improve accessibility through automatic captioning.

That apparent contradiction  — being a technology advocate and simultaneously experiencing a feeling of impending catastrophe caused by it — plunged me into a rabbit hole exploring Big (and small) Tech, epistemic injustice, and AI narratives.

Was I a doomer? A hidden Luddite? Or simply short-sighted?

Taking time to reflect has helped me understand that I was falling into the trap that Big Tech and other smooth AI operators had set up for me: Questioning myself because I’m scrutinizing their digital promises of a utopian future.

On the other side of that dilemma, I’m stronger in my belief that my contribution to the AI conversation is helping navigate the false binary of tech-solutionism vs tech-doom. 

In this article, I demonstrate how OpenAI is a crucial contributor to polarising that conversation by exploring:

  • What the announcement about ChatGPT-4o says — and doesn’t 
  • OpenAI’s modus operandi
  • Safety standards at OpenAI
  • Where the buck stops

ChatGPT-4o: The Announcement

On Monday, May 13th, OpenAI released another “update” on its website: ChatGPT-4o. 

It was well staged. The announcement on their website includes a 20-plus-minute video hosted by their CTO, Mira Murati, in which she discusses the new capabilities and performs some demos with other OpenAI colleagues. There are also short videos and screenshots with examples of applications and very high-level information on topics such as model evaluation, safety, and availability.

This is what I learned about ChatGPT-4o — and OpenAI — from perusing the announcement on their website.

The New Capabilities

  • Democratization of use — More capabilities for free and 50% cheaper access to their API.
  • Multimodality — Generates any combination of text, audio, and image.
  • Speed — 2x faster responses. 
  • Significant improvement in handling non-English languages—50 languages, which they claim cover 97% of the world’s internet population.

OpenAI Full Adoption of the Big Tech Playbook

This “update” demonstrated that the AI company has received the memo on how to look like a “boss” in Silicon Valley.

1. Reinforcement of gender stereotypes

On the day of the announcement, Sam Altman posted a single word on X — “her” — referring to the 2013 film starring Joaquin Phoenix as a man who falls in love with a futuristic version of Siri or Alexa, voiced by Scarlett Johansson.

Tweet from Sam Altman with the word “her” on May 13, 2024.

It’s not a coincidence. ChatGPT-4o’s voice is distinctly female—and flirtatious—in the demos. I could only find one video with a male voice.

Unfortunately, not much has changed since chatbot ELIZA, 60 years ago…

2. Anthropomorphism

Anthropomorphism: the attribution of human characteristics or behaviour to non-human entities.

OpenAI uses words such as “reason” and “understanding”—inherently human skills—when describing the capabilities of ChatGPT-4o, reinforcing the myth of their models’ humanity.

3. Self-regulation and self-assessment

NIST (the US National Institute of Standards and Technology), which has 120+ years of experience establishing standards, has developed a framework for assessing and managing AI risk. Many other multistakeholder organizations have developed and shared theirs, too.

However, OpenAI has opted to evaluate GPT-4o according to its Preparedness Framework and in line with its voluntary commitments, despite its claims that governments should regulate AI.

Moreover, we are supposed to feel safe and carry on when they tell us that “their” evaluations of cybersecurity, CBRN (chemical, biological, radiological, and nuclear threats), persuasion, and model autonomy show that GPT-4o does not score above Medium risk, without providing further evidence of the tests performed.

4. Gatekeeping feedback

Epistemic injustice is injustice related to knowledge. It includes exclusion and silencing; systematic distortion or misrepresentation of one’s meanings or contributions; undervaluing of one’s status or standing in communicative practices; unfair distinctions in authority; and unwarranted distrust.

Wikipedia

OpenAI shared that it has undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. 

List of domains in which OpenAI looked for expertise for the Red Teaming Network.

When I see the list of areas of expertise, I don’t see domains such as history, geography, or philosophy. Neither do I see who those 70+ experts are, nor how they could cover the breadth of differences among the 8 billion people on this planet.

In summary, OpenAI develops for everybody but only with the feedback of a few chosen ones.

5. Waiving responsibility 

Can you imagine reading in the information leaflet of a medication, 

“We will continue to mitigate new risks as they’re discovered. Over the upcoming weeks and months, we’ll be working on safety”?

But that’s what OpenAI just did in their announcement

“We will continue to mitigate new risks as they’re discovered”

We recognize that GPT-4o’s audio modalities present a variety of novel risks. Today we are publicly releasing text and image inputs and text outputs. 

Over the upcoming weeks and months, we’ll be working on the technical infrastructure, usability via post-training, and safety necessary to release the other modalities. For example, at launch, audio outputs will be limited to a selection of preset voices and will abide by our existing safety policies. 

We will share further details addressing the full range of GPT-4o’s modalities in the forthcoming system card.”

Moreover, it invites us to be its beta-testers 

“We would love feedback to help identify tasks where GPT-4 Turbo still outperforms GPT-4o, so we can continue to improve the model.”

The problem? The product has already been released to the world.

6. Promotion of the pseudo-science of emotion “guessing”

In the demo, ChatGPT-4o is asked to predict the emotion of one of the presenters based on the look on his face. The model goes on and on speculating about the individual’s emotional state from his face, which sports what appears to be a smile.

Image of a man smiling in the ChatGPT-4o demo video.

The glitch is that there is a wealth of scientific research debunking the belief that facial expressions reveal emotions. Moreover, scientists have called out AI vendors for profiting from that trope. 

“It is time for emotion AI proponents and the companies that make and market these products to cut the hype and acknowledge that facial muscle movements do not map universally to specific emotions. 

The evidence is clear that the same emotion can accompany different facial movements and that the same facial movements can have different (or no) emotional meaning.“

Prof. Lisa Feldman Barrett, PhD.

Shouldn’t we expect OpenAI to help educate the public about those misconceptions rather than using them as a marketing tool?

What They Didn’t Say, And I Wish They Did

  • Signals of efforts to work with governments to regulate and roll out capabilities/models.
  • Sustainability benchmarks regarding energy efficiency, water consumption, or CO2 emissions.
  • The acknowledgment that ChatGPT-4o is not free — we’ll pay for access to our data.
  • OpenAI’s timelines and expected features in future releases. I’ve worked for 20 years for software companies and organizations that take software development seriously and share roadmaps and release schedules with customers to help them with implementation and adoption. 
  • A credible business model other than hoping that getting billions of people to use the product will choke their competition.

Still, that didn’t explain my feelings of dread. Patterns did.

OpenAI’s Blueprint: It’s A Feature, Not A Bug

Every product announcement from OpenAI is similar: They tell us what they unilaterally decided to do, how that’ll affect our lives, and that we cannot stop it.

That feeling… when had I experienced that before? Two instances came to mind.

  • The Trump presidency
  • The COVID-19 pandemic

Those two periods—intertwined at some point—elicited the same feeling: that my life, and the lives of millions like me, were at the mercy of the whims of something or somebody with no regard for humanity.

More specifically, feelings of

  • Lack of control — every tweet, every infection chart could signify massive distress and change.
  • No respite — even when things appeared calmer, with no new tweets or a drop in infections, I’d wait for the other shoe to drop.

Back to OpenAI: in the last three months alone, we’ve seen instances of the same modus operandi that they followed for the release of ChatGPT-4o. I’ll go through three of them.

OpenAI Releases Sora

On February 15, OpenAI introduced Sora, a text-to-video model. 

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.”

In a nutshell,

  • As with other announcements, anthropomorphizing words like “understand” and “comprehend” refer to Sora’s capabilities.
  • We’re assured that “Sora is becoming available to red teamers to assess critical areas for harms or risks.”
  • We learn that they will “engage policymakers, educators, and artists around the world to understand their concerns and to identify positive use cases for this new technology” only at a later stage.

Of course, we’re also forewarned that 

“Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. 

That’s why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time.”

Releasing Sora less than a month after non-consensual sexually explicit deepfakes of Taylor Swift went viral on X was reckless. This is not just a celebrity problem — 96% of deepfakes are of a non-consensual sexual nature, and 99% of them depict women.

How dare OpenAI talk about safety concerns when developing a tool that makes it even easier to generate content to shame, silence, and objectify women?

OpenAI Releases Voice Engine

On March 29, OpenAI posted a blog sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”

The article reassured us that they were “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse” while notifying us that they’d decide unilaterally when to release the model.

“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Moreover, at the end of the announcement, ​OpenAI warned us of what we should stop doing or start doing​ because of their “Voice Engine.” The list included phasing out voice-based authentication as a security measure for accessing bank accounts and accelerating the development of techniques for tracking the origin of audiovisual content.

OpenAI Allows The Generation Of AI Erotica, Extreme Gore, And Slurs

On May 8, OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave — and revealed that it’s exploring how to ‘responsibly’ generate explicit content.

The proposal was part of an OpenAI document discussing how it develops its AI tools.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.“

where

“Not Safe For Work (NSFW): content that would not be appropriate in a conversation in a professional setting, which may include erotica, extreme gore, slurs, and unsolicited profanity.”

Joanne Jang, an OpenAI employee who worked on the document, said whether the output was considered pornography “depends on your definition” and added, “These are the exact conversations we want to have.”

I cannot agree more with Beeban Kidron, a UK crossbench peer and campaigner for child online safety, who said, 

“It is endlessly disappointing that the tech sector entertains themselves with commercial issues, such as AI erotica, rather than taking practical steps and corporate responsibility for the harms they create.”

OpenAI Formula

A collage picturing a chaotic intersection filled with reCAPTCHA items like crosswalks, fire hydrants and traffic lights, representing the unseen labor in data labelling.
Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0

See the pattern?

  • Self-interest
  • Unpredictability
  • Self-regulation
  • Recklessness
  • Techno-paternalism

Something Is Rotten In OpenAI

The day after ChatGPT-4o’s announcement, two top OpenAI employees overseeing safety left the company.

First, Ilya Sutskever, OpenAI co-founder and Chief Scientist, posted on X that he was leaving.

Tweet from Ilya Sutskever announcing his departure from OpenAI on May 15.

Later that day, Jan Leike, co-leader with Sutskever of Superalignment and executive at OpenAI, also announced his resignation.

On a thread on X, he said

“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

They are also only the latest on a long list of employees leaving OpenAI from the areas of safety, policy, and governance.

What does it tell us when OpenAI’s safety leaders jump ship?

The Buck Stops With Our Politicians

To answer Leike’s tweet, I don’t want OpenAI to shoulder the responsibility of developing trustworthy, ethical, and inclusive AI frameworks.

First, the company has not demonstrated the competencies or inclination to prioritize safety at a planetary scale over its own interests. 

Second, because it’s not their role. 

Whose role is it, then? Our political representatives mandate our governmental institutions, which in turn should develop and enforce those frameworks. 

Unfortunately, so far, politicians’ egos have been in the way

  • Refusing to get AI literate.
  • Prioritizing their agenda — and that of their party — rather than looking to develop long-term global AI regulations in collaboration with other countries.
  • Falling for the AI FOMO that relegates present harms in favour of a promise of innovation.

In summary, our elected representatives need to stop cozying up with Sam and the team and enact the regulatory frameworks that ensure that AI works for everybody and doesn’t endanger the survival of future generations.

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?

Get in touch. I can help you harness the potential of AI for sustainable growth and responsible innovation.

AI Chatbots in Customer Support: Breaking Down the Myths

An illustration containing electronic devices that are connected by arm-like structures
Anton Grabolle / Better Images of AI / Human-AI collaboration / CC-BY 4.0

I’m a Director of Scientific Support for a tech corporation that develops software for engineers and scientists. One of the aspects that makes us unique is that we deliver fantastic customer service.

We have records that confirm an impressive 98% customer satisfaction rate back-to-back for the last 14+ years. Moreover, many of our support representatives have been with us for over a decade — some even three! — and we have people retiring with us each year.

For a sector known for high employee turnover and operational costs, achieving such a feat is remarkable and a testament to the team’s success. The worst part? Support representatives are often portrayed as mindless robots repeating tasks without a deep understanding of the products and services they support.

That last assumption has spearheaded the idea that one of the best uses of AI—and generative AI in particular—is replacing support agents with an army of chatbots.

The rationale? We’re told they are cheaper, more efficient, and improve customer satisfaction.

But is that true?

In this article, I review

  • The gap between outstanding and remedial support
  • Lessons from 60 years of chatbots
  • The reality underneath the AI chatbot hype
  • The unsustainability of support bots

Customer support: Champions vs Firefighters

I’ve delivered services throughout my commercial career in tech: Training, Contract Research, and now, for more than a decade, Scientific Support.

I’ve found that of the three services — training customers, delivering projects, and providing support — the last one creates the deepest connection between a tech company and its clients.

However, not all support is created equal, so what does great support look like?

And more importantly, what is disguised under the “customer support” banner but is actually a proxy for something else?

Customer support as an enabler

Customer service is the department that aims to empower customers to make the most out of their purchases.

On the surface, this may look like simply answering clients’ questions. Still, outstanding customer service is delivered when the representative is given the agency and tools to become the ambassador between the client and the organization.

What does that mean in practice?

  • The support representative doesn’t patronize the customer, diminish their issue, or downplay its negative impact. Instead, they focus on understanding the problem and its effect on the client. This creates a personalized experience.
  • The agent doesn’t overpromise or disguise the bad news. Instead, they communicate about roadblocks and suggest possible alternatives. This builds trust.
  • The support staff takes ownership of resolving the issue, no matter the number of iterations necessary or how many colleagues they need to involve in the case. This builds loyalty.

Over and over, I’ve seen this kind of customer support transform users into advocates, even for ordinary products and services.

Unfortunately, customer support is often misunderstood and misused.

Customer support as a stopgap

Rather than seeing support as a way to build the kind of relationship that ensures product and service renewals and increases the business footprint, many organizations see support as:

  • A cost center
  • A way to make up for deficient — or nonexistent — product documentation
  • A remedy for poorly designed user experience
  • A shield to protect product managers’ valuable time from “irrelevant” customer feedback
  • A catch-all for lousy and inaccessible institutional websites
  • An outlet for customers to vent

In that context, it’s obvious why most organizations believe that swapping human support representatives for chatbots is a no-brainer.

And contrary to what some want us to believe, this is not a new idea.

A short history of chatbots 

Eliza, the therapist

The first chatbot, created in 1966, played the role of a psychotherapist. She was named Eliza, after Eliza Doolittle in the play Pygmalion. The rationale was that, by changing how she spoke, the fictional character created the illusion that she was a duchess.

Eliza didn’t provide any solution. Instead, it asked questions and repeated users’ replies. Below is an excerpt of an interaction between Eliza and a user:

User: Men are all alike.
ELIZA: IN WHAT WAY
User: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
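
Under the hood, Eliza was little more than keyword matching plus pronoun “reflection”: match a pattern, swap “my” for “your”, and echo the user’s own words back as a question. The following minimal Python sketch is my own toy illustration of that technique, not Weizenbaum’s original script or rules:

```python
import re

# Swap first- and second-person words so the echo points back at the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "are": "am"}

# Keyword rules in Eliza's spirit: a pattern and a question template.
RULES = [
    (re.compile(r"i am (.*)", re.I), "WHY DO YOU THINK YOU ARE {0}"),
    (re.compile(r"(.*) made me (.*)", re.I), "{0} MADE YOU {1}"),
    (re.compile(r".*\b(always|all)\b.*", re.I),
     "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(text: str) -> str:
    text = text.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups())).upper()
    return "TELL ME MORE"  # fallback when no keyword matches

print(respond("Well, my boyfriend made me come here."))
# -> WELL, YOUR BOYFRIEND MADE YOU COME HERE
```

No understanding is involved anywhere in that loop, which makes the reactions Weizenbaum observed all the more striking.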

Eliza’s creator — computer scientist Joseph Weizenbaum — was very surprised to observe that people would treat the chatbot as human and develop emotional responses to it, even through brief interactions.

“Some subjects have been very hard to convince that Eliza (with its present script) is not human” 

Joseph Weizenbaum

We now have a name for this kind of behaviour:

“The ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface.

The effect is a category mistake that arises when the program’s symbolic computations are described through terms such as “think”, “know” or “understand.”

Through the years, other chatbots have become famous too.

Tay, the zero chill chatbot

In 2016, Microsoft released the chatbot Tay on Twitter (now X). Tay’s profile image was that of a “female,” and it was “designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.”

The bot’s social media profile was an open invitation to conversation. It read, “The more you talk, the smarter Tay gets.”

Tay’s Twitter page. Source: Microsoft.

What could go wrong? Trolls.

They “taught” Tay racist and sexually charged content that the chatbot adopted. For example:

“bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

After several attempts to “fix” Tay, the chatbot was shut down seven days later.

Chatbot disaster at the NGO

The helpline of the US National Eating Disorder Association (NEDA) served nearly 70,000 people and families in 2022.

Then, they replaced their six paid staff and 200 volunteers with chatbot Tessa.

The bot was developed based on decades of research conducted by experts on eating disorders. Still, it was reported to offer dieting advice to vulnerable people seeking help.

The result? Under media pressure over the chatbot’s repeated, potentially harmful responses, NEDA shut down the helpline. Now, 70,000 people were left without either chatbots or humans to help them.

Lessons learned?

Given these and other negative experiences with chatbots around the world, we might have thought that we understood chatbots’ security and performance limitations, as well as how easy it is for our brains to “humanize” them.

However, the advent of ChatGPT has made us forget all the lessons learned and instead has enticed us to believe that they’re a suitable replacement for entire customer support departments.

The chatbot hype

CEOs boasting about replacing workers with chatbots

If you think companies would be wary of advertising that they are replacing people with chatbots, you’re mistaken.

In July 2023, Suumit Shah — CEO of the e-commerce company Dukaan — bragged on the social media platform X that they had replaced 90% of their customer support staff with a chatbot developed in-house.

We had to layoff 90% of our support team because of this AI chatbot.

Tough? Yes. Necessary? Absolutely.

The results?

Time to first response went from 1m 44s to INSTANT!

Resolution time went from 2h 13m to 3m 12s

Customer support costs reduced by ~85%

Note the use of the word “necessary” as a way to exonerate the organisation from the layoffs. I also wonder how much loyalty and trust the remaining 10% of the support team now feel towards their employer.

And Shah is not the only one.

Last February, Klarna’s CEO — Sebastian Siemiatkowski — gloated on X that their AI can do the work of 700 people.

“This is a breakthrough in practical application of AI! 

Klarnas AI assistant, powered by OpenAI, has in its first 4 weeks handled 2.3 m customer service chats and the data and insights are staggering: 

[…] It performs the equivalent job of 700 full time agents… read more about this below. 

So while we are happy about the results for our customers, our employees who have developed it and our shareholders, it raises the topic of the implications it will have for society. 

In our case, customer service has been handled by on average 3000 full time agents employed by our customer service / outsourcing partners. Those partners employ 200 000 people, so in the short term this will only mean that those agents will work for other customers of those partners. 

But in the longer term, […] while it may be a positive impact for society as a whole, we need to consider the implications for the individuals affected. 

We decided to share these statistics to raise the awareness and encourage a proactive approach to the topic of AI. For decision makers worldwide to recognise this is not just “in the future”, this is happening right now.”

In summary:

  • Klarna wants us to believe that the company is releasing this AI assistant for the benefit of others — clients, their developers, and shareholders — but that their core concern is about the future of work.
  • Siemiatkowski only sees layoffs as a problem when they affect his direct employees. Partners’ workers are not his problem.
  • He frames the negative impacts of replacing humans with chatbots as an “individual” problem.
  • Klarna deflects any accountability for the negative impacts to the “decision makers worldwide.”

Shah and Siemiatkowski are birds of a feather: Business leaders reaping the benefits of the AI chatbot hype without shouldering any responsibility for the harms.

When chatbots disguise process improvements

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: people in front of computers seeming stressed, a number of faces overlaid over each other, squashed emojis and other motifs.
Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0

In some organizations, customer service agents are seen as jacks of all trades — their work is akin to a Whac-A-Mole game where the goal is to make up for all the clunky and disconnected internal workflows.

The Harvard Business Review article “Your Organization Isn’t Designed to Work with GenAI” provides a great example of this organizational dysfunction.

The piece presents a framework developed to “derive” value from GenAI. It’s called Design for Dialogue. To warm us up, the article showers us with a deluge of anthropomorphic language signalling that both humans and AI are in this “together.”

“Designing for Dialogue is rooted in the idea that technology and humans can share responsibilities dynamically.”

or

“By designing for dialogue, organizations can create a symbiotic relationship between humans and GenAI.”

Then, the authors offer us an example of what’s possible:

“A good example is the customer service model employed by Jerry, a company valued at $450 million with over five million customers that serves as a one-stop shop for car owners to get insurance and financing.

Jerry receives over 200,000 messages a month from customers. With such high volume, the company struggled to respond to customer queries within 24 hours, let alone minutes or seconds. 

By installing their GenAI solution in May 2023, they moved from having humans in the lead in the entirety of the customer service process and answering only 54% of customer inquiries within 24 hours or less to having AI in the lead 100% of the time and answering over 96% of inquiries within 30 seconds by June 2023.

They project $4 million in annual savings from this transformation.”

Sounds amazing, doesn’t it?

However, if you think it was a case of simply “swapping” humans for chatbots, let me burst your bubble—it takes a village.

Reading the article, we uncover the details underneath that “transformation.”

  • They broke down the customer service agent’s role into multiple knowledge domains and tasks.
  • They discovered that there are points in the AI–customer interaction when matters need to be escalated to the agent, who then takes the lead, so they designed interaction protocols to transfer the inquiry to a human agent.
  • AI chatbots conduct the laborious hunt for information and suggest a course of action for the agent.
  • Engineers review failures daily and adjust the system to correct them.

In other words,

  • Customer support agents used to be flooded with various requests without filtering between domains and tasks.
  • As part of the makeover, they implemented mechanisms to parse and route support requests based on topic and action (a pattern sketched in code below). They upgraded their support ticketing system from an amateur “team” inbox to a professional call center.
  • We also learn that customer representatives use the bots to retrieve information, hinting that all data — service requests, sales quotes, licenses, marketing datasheets — are collected in a generic bucket instead of being classified in a structured, searchable way, i.e. a knowledge base.

And despite all that progress:

  • They designed the chatbots to pass the “hot potatoes” to agents.
  • The system requires daily monitoring by humans.
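
Stripped of the framework language, the mechanics described above amount to a classic triage-and-escalation loop. Here is a minimal sketch of that pattern; every name, domain, and threshold is a hypothetical illustration, not Jerry’s actual system:

```python
from dataclasses import dataclass

DOMAINS = {"insurance", "financing", "account"}
CONFIDENCE_THRESHOLD = 0.8  # below this, hand the case to a human

@dataclass
class Ticket:
    text: str
    domain: str        # set by the routing step
    confidence: float  # the classifier's confidence in its routing

def route(text: str) -> Ticket:
    """Step 1: parse the request and route it to a knowledge domain."""
    # A real system would use a trained classifier; keyword matching
    # stands in for it here.
    domain = next((d for d in DOMAINS if d in text.lower()), "account")
    confidence = 0.9 if domain in text.lower() else 0.5
    return Ticket(text=text, domain=domain, confidence=confidence)

def handle(ticket: Ticket) -> str:
    """Step 2: AI leads, but escalates to a human when confidence is low."""
    if ticket.confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATED to a human agent ({ticket.domain} queue)"
    # Steps 3-4: retrieve information, draft a reply, log for daily review.
    return f"AUTO-REPLY drafted from the {ticket.domain} knowledge base"

print(handle(route("Question about my insurance premium")))   # auto-reply
print(handle(route("My payment disappeared, please help!")))  # escalated
```

Notice that nothing in that loop is specific to GenAI: routing, confidence thresholds, and human hand-offs are standard call-center engineering.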

If you still don’t believe this is about improving operations rather than about AI chatbots, let me share the end of the article with you.

“Yes, GenAI can automate tasks and augment human capabilities. But reimagining processes in a way that utilizes it as an active, learning, and adaptable partner forges the path to new levels of innovation and efficiency.”

In addition to hiding process improvements, chatbots can also disguise human labour.

AI washing or the new Mechanical Turk

A cross-section of the Turk from Racknitz, showing how he thought the operator sat inside as he played his opponent. Racknitz was wrong both about the position of the operator and the dimensions of the automaton. Source: Wikipedia.

Historically, machines have often provided a veneer of novelty to work performed by humans.

The Mechanical Turk was a fraudulent chess-playing machine constructed in 1770 by Wolfgang von Kempelen. A mechanical illusion allowed a human chess master hiding inside to operate the machine. It defeated statesmen such as Napoleon Bonaparte and Benjamin Franklin.

Chatbots are no different.

In April, Amazon announced that they’d be removing their “Just Walk Out” technology, which allowed shoppers to skip the checkout line. In theory, the technology was fully automated thanks to computer vision.

In practice, about 1,000 workers in India reviewed what customers picked up and left the stores with.

In 2022, a Business Insider report said that 700 out of every 1,000 “Just Walk Out” transactions were verified by these workers. Following this, an Amazon spokesperson said that the India-based team only assisted in training the model used for “Just Walk Out”.

That is, Amazon wanted us to believe that although the technology, branded as “Amazon Go,” was launched in 2018, they still needed about 1,000 workers in India to train the model in 2022.

Still, whether the technology was “untrainable” or required an army of humans to deliver the work, it’s not surprising that Amazon phased it out. It didn’t live up to its hype.

And they were not the only ones.

Last August, Presto Automation — a company that provides drive-thru systems — claimed on its website that its AI could take over 95 percent of drive-thru orders “without any human intervention.”

Later, they admitted in filings with the US Securities and Exchange Commission that they employed “off-site agents in countries like the Philippines who help its Presto Voice chatbots in over 70 percent of customer interactions.”

The fix? To change their claims. They now advertise the technology as “95 percent without any restaurant or staff intervention.”

The Amazon and Presto Automation cases suggest that, in addition to clearly indicating when chatbots use AI, we may also need to label some tech applications as “powered by humans.”

Of course, there is a final use case for AI chatbots: As scapegoats.

Blame it on the algorithm

Last February, Air Canada made the headlines when it was ordered to pay compensation after its chatbot gave a customer inaccurate information that led him to miss out on a reduced fare. A quick summary:

  • A customer interacted with a chatbot on the Air Canada website, asking for reimbursement information about a flight.
  • The chatbot provided inaccurate information.
  • The customer’s reimbursement claim was rejected by Air Canada because it didn’t follow the policies on their website, even though the customer shared a screenshot of his written exchange with the chatbot.
  • The customer took Air Canada to court and won.

At a high level, this looks just like a case where a human support representative provided inaccurate information, but the devil is always in the details.

During the trial, Air Canada argued that they were not liable because their chatbot “was responsible for its own actions” when giving wrong information about the fare.

Fortunately, the court ordered Air Canada to reimburse the customer, but this opens a can of worms:

  • What if Air Canada had terms and conditions similar to ChatGPT or Google Gemini that “absolved” them from the chatbot’s replies?
  • Does Air Canada also deflect responsibility when a human support representative makes a mistake, or is it only for AI systems?

We’d be naïve to think that this attempt at using an AI chatbot for dodging responsibility is a one-off.

The planetary costs of chatbots

A brightly coloured illustration which can be viewed in any direction. It has several scenes within it: miners digging in front of a huge mountain representing mineral resources, a hand holding a lump of coal or carbon, hands manipulating stock charts and error messages, as well as some women performing tasks on computers.

Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0

Tech companies keep trying to convince us that the current glitches with GenAI are “growing pains” and that we “just” need bigger models and more powerful computer chips.

And what’s the upside to enduring those teething problems? The promise of the massive efficiencies chatbots will bring to the table. Once the technology is “perfect”, there will be no more need for workers to perform or remediate the half-baked bot work. Bottomless savings in time and staff.

But is that true?

The reality is that those productivity gains come from exploiting both people and the planet.

The people

Many of us are used to hearing the recorded message “this call may be recorded for training purposes” when we phone a support hotline. But how far can that “training” go?

Customer support chatbots are being developed using data from millions of exchanges between support representatives and clients. How are all those “creators” being compensated? Or should we now assume that any interaction with support can be collected, analyzed, and repurposed to build organizations’ AI systems?

Moreover, the models underneath those AI chatbots must be trained and sanitized for toxic content; however, that’s not a highly rewarded job. Let’s remember that OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic.

And it’s not only about the humans creating and curating that content. There are also humans behind the appliances we use to access those chatbots.

For example, cobalt is a critical mineral for every lithium-ion battery, and the Democratic Republic of Congo provides at least 50% of the world’s cobalt supply. Forty thousand children mine it, paid $1–2 for working up to 12 hours daily while inhaling toxic cobalt dust.

80% of electronic waste in the US and most other countries is transported to Asia. Workers on e-waste sites are paid an average of $1.50 per day, with women frequently having the lowest-tier jobs. They are exposed to harmful materials, chemicals, and acids as they pick and separate the electronic equipment into its components, which in turn negatively affects their morbidity, mortality, and fertility.

The planet

The terminology and imagery used by Big Tech to refer to the infrastructure underpinning artificial intelligence has misled us into believing that AI is ethereal and cost-free.

Nothing could be further from the truth. AI is rooted in material objects: data centres, servers, smartphones, and laptops. Moreover, training and using AI models demand energy and water and generate CO2.

Let’s crunch some numbers.

  • Luccioni and co-workers estimated that the training of GPT-3 — a GenAI model that has underpinned the development of many chatbots — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. It also required the evaporation of 700,000 litres (185,000 gallons) of fresh water to cool down Microsoft’s high-end data centers.
  • It’s estimated that using GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses.
  • A new report from the International Energy Agency (IEA) forecasts that the AI industry could burn through ten times as much electricity in 2026 as in 2023.
  • Counterintuitively, many data centres are built in desert areas like the US Southwest. Why? It’s easier to remove the heat generated inside the data centre in a dry environment. Moreover, that region has access to cheap and reliable non-renewable energy from the largest nuclear plant in the country.
  • Coming back to e-waste, we generate around 40 million tons of electronic waste every year worldwide and only 12.5% is recycled.
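
If you want to sanity-check the car comparison in the first bullet, the arithmetic is straightforward. The only number I’m adding is an assumed average of roughly 400 g of CO2 per mile for a gasoline-powered car, in line with the EPA’s published figure:

```python
# Back-of-the-envelope check of the training-emissions comparison above.
TRAINING_EMISSIONS_G = 500 * 1_000_000  # 500 metric tons, in grams
G_PER_MILE = 400                        # assumed average gasoline car (EPA)

print(f"{TRAINING_EMISSIONS_G / G_PER_MILE:,.0f} miles")
# -> 1,250,000 miles, i.e. "over a million miles driven"

# Water for inference: ~500 ml per 10-50 responses implies
# roughly 10-50 ml per response.
low, high = 500 / 50, 500 / 10
print(f"{low:.0f} to {high:.0f} ml of water per response")
```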

In summary, the efficiencies that chatbots are supposed to bring appear to be based on exploitative labour, stolen content, and the depletion of natural resources.

For reflection

Organizations — including NGOs and governments — are under the spell of the AI chatbot mirage. They see chatbots as a magic weapon to cut costs, increase efficiency, and boost productivity.

Unfortunately, when things don’t go as planned, rather than questioning what’s wrong with using a parrot to do the work of a human, they want us to believe that the solution is sending the parrot to Harvard.

That approach prioritizes the short-term gains of a few — the chatbot sellers and purchasers — to the detriment of the long-term prosperity of people and the planet.

My perspective as a tech employee?

I don’t feel proud when I hear a CEO bragging about AI replacing workers. I don’t enjoy seeing a company claim that chatbots provide the same customer experience as humans. Nor do I appreciate organizations obliterating the materiality of artificial intelligence.

Instead, I feel moral injury.

And you, how do YOU feel?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?

I’ve got you covered.

Big Tech Can Clone Your Voice: A Technological Triumph or a Moral Tragedy?

A tic-tac-toe board with human faces as digital blocks, symbolizing how AI works on pre-existing, biased online data for information processing and decision-making
Amritha R Warrier & AI4Media / Better Images of AI / tic tac toe / CC-BY 4.0

On 29th March, OpenAI – the company that develops ChatGPT and other Generative AI tools – released a blog post sharing “lessons from a small-scale preview of Voice Engine, a model for creating custom voices.”

More precisely:

“a model called Voice Engine, which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.”

They reassure us that

“We are taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse. We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities.”

And they warn us that they’ll make the decision unilaterally:

“Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

Let’s explore why we should all be concerned.

The Generative AI mirage

In their release, OpenAI tells us all the great applications of this new tool:

  • Providing reading assistance
  • Translating content
  • Reaching global communities
  • Supporting people who are non-verbal
  • Helping patients recover their voice

Note that for all those use cases, there are already alternatives that don’t have the downsides of creating a voice clone.

We also learn that other organisations have been testing this capability successfully for a while now. The blog post assumes that we should trust OpenAI’s judgment implicitly. There is no supporting evidence detailing how those tests were run, what challenges were uncovered, and what mitigations were put in place as a consequence.

The caveat

But the most important information is at the end of the piece.

OpenAI warns us of what we should stop doing or start doing because of their “Voice Engine”:

“Phasing out voice-based authentication as a security measure for accessing bank accounts and other sensitive information

Exploring policies to protect the use of individuals’ voices in AI

Educating the public in understanding the capabilities and limitations of AI technologies, including the possibility of deceptive AI content

Accelerating the development and adoption of techniques for tracking the origin of audiovisual content, so it’s always clear when you’re interacting with a real person or with an AI”

In summary, OpenAI has decided to develop a technology, plans to roll it out, and expects the rest of the world to adapt to it.

Techno-paternalism

To those of us who have been following OpenAI, the post announcing the development and active use of Voice Engine is not a bug but a feature.

Big Tech has a tradition of setting its own rules, denying accountability, and even refusing to cooperate with governments. Often, their defense has been that society either doesn’t understand the “big picture”, doesn’t deserve an explanation, or is stifling innovation by enacting laws.

Some examples are:

  • Microsoft — In 2001, the U.S. government accused Microsoft of illegally monopolizing the web browser market for Windows. Microsoft claimed that “its attempts to ‘innovate’ were under attack by rival companies jealous of its success.”
  • Apple — The Batterygate scandal affected people using iPhones in the 6, 6S, and 7 families. Customers complained that Apple had purposely slowed down their phones after they installed software updates to get them to buy a newer device. Apple countered that it was “a safety measure to keep the phones from shutting down when the battery got too low”.
  • Meta (Facebook) — After the Cambridge Analytica scandal was uncovered, exposing that the personal data of about 50 million Americans had been harvested and improperly shared with a political consultancy, it took Mark Zuckerberg 5 days to reappear. Interestingly, he chose to publish a post on Facebook as a form of apology. Note that he also refused three times to testify in front of members of the UK Parliament.
  • Google — Between 50 and 80 percent of people searching for porn deepfakes find their way to the websites and tools to create the videos or images via search. For example, in July 2023, around 44% of visits to Mrdeepfakes.com were via Google. Still, the onus is on the victims to “clean” the internet — Google requires them to manually submit content removal requests with the offending URLs.
  • Amazon — They refused for years to acknowledge that their facial recognition algorithms to predict race and gender were biased against darker-skinned women. Instead of improving their algorithms, they chose to blame the auditor’s methodology.

OpenAI is cut from the same cloth. They apparently believe that if they develop the applications, they are entitled to set the parameters for how to use them — or not — and even change their mind as they see fit.

Let’s examine their stance on three paramount issues that show us the gap between their actions and their values.

Open source

Despite their name — OpenAI — and initially being created as a nonprofit, they’ve been notorious for their inconsistent open-source practices. Still, each release has appeared to be an opportunity to lecture us about why society is much better off leaving it to them to decide how to gatekeep their applications.

For example, Ilya Sutskever, OpenAI’s chief scientist and co-founder, said about the release of GPT-4 — not an open AI model — a year ago:

“These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”

“If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea… I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”

However, the reluctant content suppliers for their models — artists, writers, journalists — don’t have the same rights to decide on the use of the material they have created. For example, let’s remember how Sam Altman shrugged off the claims of newspapers that OpenAI used their copyrighted material to train ChatGPT.

Safety

The release of Voice Engine comes from the same playbook as the unilateral decision to release their text-to-video model Sora to “red teamers” and “a number of visual artists, designers, and filmmakers”.

The blog post also gives us a high-level view of the safety measures that’ll be put in place:

“For example, once in an OpenAI product, our text classifier will check and reject text input prompts that are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.

We’ve also developed robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user.”
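
Mechanically, such a gate is easy to picture: a moderation classifier scores the prompt against policy categories, and anything over a threshold is rejected before generation. Here is a toy sketch; the categories, scores, and threshold are hypothetical stand-ins, not OpenAI’s actual pipeline:

```python
# A toy "policy gate": score a prompt against policy categories and
# reject it before generation if any category crosses a threshold.
BLOCKED_CATEGORIES = {"extreme violence", "sexual content", "celebrity likeness"}

def classify(prompt: str) -> dict:
    """Stand-in for a trained moderation classifier returning category scores."""
    text = prompt.lower()
    return {c: (0.9 if c in text else 0.0) for c in BLOCKED_CATEGORIES}

def gate(prompt: str, threshold: float = 0.5) -> str:
    flagged = [c for c, score in classify(prompt).items() if score >= threshold]
    if flagged:
        raise ValueError(f"Prompt rejected: {flagged}")
    return prompt  # safe to pass along to the generator

print(gate("A quiet landscape at dawn"))  # passes the gate
```

The sketch also shows where the hidden labour lives: a real classifier only works because humans labelled vast numbers of toxic examples to train it.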

Let’s remember that OpenAI used Kenyan workers paid less than $2 per hour to make ChatGPT less toxic. Who’ll make Sora less toxic this time?

Moreover, who’ll decide where the line is between “mild” violence — apparently permitted — and “extreme” violence?

Sustainability

For all their claims that their “primary fiduciary duty is to humanity”, their disregard for the environmental impact of their models is surprising.

Sam Altman has been actively talking to investors, including the United Arab Emirates government, to raise funds for a tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, and cost several trillion dollars.

An OpenAI spokeswoman said:

“OpenAI has had productive discussions about increasing global infrastructure and supply chains for chips, energy and data centers — which are crucial for AI and other industries that rely on them”

But nothing is free in the universe. A study conducted by Dr. Sasha Luccioni — Researcher and Climate Lead at Hugging Face — showed that training the 176-billion-parameter LLM BLOOM emitted at least 25 metric tons of carbon equivalents.

In the article, the authors also estimated that the training of GPT-3 — a 175 billion parameter model — emitted about 500 metric tons of carbon, roughly equivalent to over a million miles driven by an average gasoline-powered car. Why such a difference? Because, unlike BLOOM, GPT-3 was trained using carbon-intensive energy sources like coal and natural gas.

And it doesn’t stop there. Dr. Luccioni conducted further studies on the emissions associated with 10 popular Generative AI tasks.

  • Generating 1,000 images was responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car.
  • The least carbon-intensive text generation model was responsible for as much CO2 as driving 0.0006 miles in a similar vehicle.
  • Using large generative models to create outputs was far more energy intensive than using smaller AI models tailored for specific tasks. For example, using a generative model to classify positive and negative movie reviews consumed around 30 times more energy than using a fine-tuned model created specifically for that task.

Moreover, they discovered that the day-to-day emissions associated with using AI far exceeded the emissions from training large models.

And it’s not only emissions. The data centres where those models are trained and run need water as a refrigerant and in some cases as a source of electricity.

Professor Shaolei Ren from UC Riverside found that training GPT-3 in Microsoft’s high-end data centers can directly evaporate 700,000 liters (about 185,000 gallons) of fresh water. As for its use, Ren and his colleagues estimated that GPT-3 requires about 500 ml (16 ounces) of water for every 10–50 responses.

Four questions for our politicians

It’s time for our politicians to step up to the challenge of exercising stewardship of AI for the benefit of people and the planet.

I have four questions to get them going:

  • Why are you allowing OpenAI to make decisions unilaterally on technology that affects us all?
  • How can you shift from a reactive stance — where you let Big Tech like OpenAI drive the regulation of technologies that impact key aspects of governance, from our individual rights to national cybersecurity — to becoming a proactive key player in decisions that impact society’s future?
  • How can you hold Big Tech accountable for the planetary environmental costs of AI?
  • How are you ensuring the public becomes digitally literate so they can develop their own informed views about the benefits and challenges of AI and other emergent technologies?

Back to you

How comfortable are you with OpenAI deciding on the use of Generative AI on behalf of humanity?

PS. You and AI

  • Are you worried about the impact of AI on your job, your organisation, and the future of the planet, but feel it’d take you years to ramp up your AI literacy?
  • Do you want to explore how to responsibly leverage AI in your organisation to boost innovation, productivity, and revenue but feel overwhelmed by the quantity and breadth of information available?
  • Are you concerned because your clients are prioritising AI but you keep procrastinating on learning about it because you think you’re not “smart enough”?

I’ve got you covered.